As computer software has become more complex, the need for specialized testing approaches has also grown. The white-box and black-box testing methods are applicable across all environments, architectures, and applications, but unique guidelines and approaches to testing are sometimes warranted.
Testing GUIs
Graphical user interfaces (GUIs) present interesting challenges for software engineers. Because of reusable components provided as part of GUI development environments, the creation of the user interface has become less time consuming and more precise. But, at the same time, the complexity of GUIs has grown, leading to more difficulty in the design and execution of test cases.
Because many modern GUIs have the same look and feel, a series of standard tests can be derived. Finite state modeling graphs may be used to derive a series of tests that address specific data and program objects that are relevant to the GUI.
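For example, a minimal sketch of this idea in Python, assuming a hypothetical login dialog whose states and events are captured in a transition table (in practice each derived sequence would be replayed through a GUI automation tool and the resulting screen compared with the model):

```python
# A minimal sketch of deriving GUI test cases from a finite state model.
# The states and events below describe a hypothetical login dialog.

# State model: (current_state, event) -> next_state
TRANSITIONS = {
    ("closed", "open_dialog"): "empty_form",
    ("empty_form", "enter_valid_data"): "ready_to_submit",
    ("empty_form", "cancel"): "closed",
    ("ready_to_submit", "submit"): "confirmation",
    ("ready_to_submit", "cancel"): "closed",
    ("confirmation", "dismiss"): "closed",
}

def derive_test_sequences(start="closed", max_events=3):
    """Enumerate all event sequences of up to max_events from the model.

    Each sequence is a candidate test case: replay the events against
    the real GUI and check that it ends in the state the model predicts.
    """
    frontier = [([], start)]
    cases = []
    for _ in range(max_events):
        next_frontier = []
        for events, state in frontier:
            for (src, event), dst in TRANSITIONS.items():
                if src == state:
                    case = (events + [event], dst)
                    cases.append(case)
                    next_frontier.append(case)
        frontier = next_frontier
    return cases

for events, expected_state in derive_test_sequences():
    print(" -> ".join(events), "| expected final state:", expected_state)
```

Enumerating sequences from the model in this way addresses the permutation problem mechanically, which is precisely why automated tools dominate GUI testing.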
Due to the large number of permutations associated with GUI operations, testing should be approached using automated tools. A wide array of GUI testing tools has appeared on the market over the past few years.
Testing of Client/Server Architectures
Client/server (C/S) architectures represent a significant challenge for software testers. The distributed nature of client/server environments, the performance issues associated with transaction processing, the potential presence of a number of different hardware platforms, the complexities of network communication, the need to service multiple clients from a centralized (or in some cases, distributed) database, and the coordination requirements imposed on the server all combine to make testing of C/S architectures and the software that resides within them considerably more difficult than testing stand-alone applications. In fact, recent industry studies indicate a significant increase in testing time and cost when C/S environments are developed.
Testing Documentation and Help Facilities
The term software testing conjures images of large numbers of test cases prepared to exercise computer programs and the data that they manipulate. It is important to note that testing must also extend to the third element of the software configuration—documentation.
Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code. Nothing is more frustrating than following a user guide or an on-line help facility exactly and getting results or behaviors that do not coincide with those predicted by the documentation. It is for this reason that documentation testing should be a meaningful part of every software test plan.
Documentation testing can be approached in two phases. The first phase, review and inspection, examines the document for editorial clarity. The second phase, live test, uses the documentation in conjunction with the actual program.
Surprisingly, a live test for documentation can be approached using techniques that are analogous to many of the black-box testing methods. Graph-based testing can be used to describe the use of the program; equivalence partitioning and boundary value analysis can be used to define various classes of input and associated interactions. Program usage is then tracked through the documentation.
The following questions should be answered during both phases:
• Does the documentation accurately describe how to accomplish each mode of use?
• Is the description of each interaction sequence accurate?
• Are examples accurate?
• Are terminology, menu descriptions, and system responses consistent with the actual program?
• Is it relatively easy to locate guidance within the documentation?
• Can troubleshooting be accomplished easily with the documentation?
• Are the document table of contents and index accurate and complete?
• Is the design of the document (layout, typefaces, indentation, graphics) conducive to understanding and quick assimilation of information?
• Are all software error messages displayed for the user described in more detail in the document? Are actions to be taken as a consequence of an error message clearly delineated?
• If hypertext links are used, are they accurate and complete?
• If hypertext is used, is the navigation design appropriate for the information required?
The only viable way to answer these questions is to have an independent third party (e.g., selected users) test the documentation in the context of program usage. All discrepancies are noted and areas of document ambiguity or weakness are defined for potential rewrite.
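As an illustration of the live-test phase, the sketch below pairs equivalence classes of input with the results a user guide promises for them; the `copytool` command, its `--copies` flag, and the documented messages are all invented for the example:

```python
# A sketch of a documentation live test, assuming a hypothetical
# command-line tool ("copytool") and the results its user guide documents
# for each input class. The classes come from equivalence partitioning
# and boundary value analysis applied to the documented usage.

import subprocess

# (input class, representative value, result the user guide promises)
DOCUMENTED_CASES = [
    ("valid copy count (1-999)", "250", "printing 250 copies"),
    ("boundary: minimum", "1", "printing 1 copy"),
    ("boundary: maximum", "999", "printing 999 copies"),
    ("invalid: zero", "0", "error: copy count must be 1-999"),
    ("invalid: non-numeric", "abc", "error: copy count must be 1-999"),
]

for label, value, documented in DOCUMENTED_CASES:
    # Drive the actual program exactly as the manual instructs.
    result = subprocess.run(
        ["copytool", "--copies", value],   # hypothetical tool and flag
        capture_output=True, text=True,
    )
    actual = (result.stdout + result.stderr).strip()
    status = "OK" if documented in actual else "DISCREPANCY"
    print(f"[{status}] {label}: documented={documented!r} actual={actual!r}")
```

Every DISCREPANCY line is exactly the kind of finding the third-party testers would note for potential rewrite of the document.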
Testing for Real-Time Systems
The time-dependent, asynchronous nature of many real-time applications adds a new and potentially difficult element to the testing mix—time. Not only does the test case designer have to consider white- and black-box test cases but also event handling (i.e., interrupt processing), the timing of the data, and the parallelism of the tasks (processes) that handle the data. In many situations, test data provided when a real-time system is in one state will result in proper processing, while the same data provided when the system is in a different state may lead to error.
For example, the real-time software that controls a new photocopier accepts operator interrupts (i.e., the machine operator hits control keys such as RESET or DARKEN) with no error when the machine is making copies (in the "copying" state). These same operator interrupts, if input when the machine is in the "jammed" state, cause a display of the diagnostic code indicating the location of the jam to be lost (an error).
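The sketch below models this scenario with a greatly simplified (and hypothetical) controller; the point is that the same test input must be applied in each relevant state:

```python
# A sketch of the photocopier scenario, assuming a simplified controller
# model. The same operator interrupt is benign in one state and
# destructive in another, so inputs are tested state by state.

class CopierController:
    def __init__(self):
        self.state = "copying"
        self.display = ""                  # diagnostic code shown to operator

    def jam(self, location_code):
        self.state = "jammed"
        self.display = location_code

    def operator_interrupt(self, key):     # e.g., RESET or DARKEN
        if self.state == "copying":
            pass                           # accepted with no error
        elif self.state == "jammed":
            self.display = ""              # seeded error: diagnostic code lost

ctrl = CopierController()
ctrl.operator_interrupt("RESET")           # in "copying": no error expected
assert ctrl.state == "copying"

ctrl.jam("E-302")
ctrl.operator_interrupt("RESET")           # in "jammed": code should survive
if ctrl.display != "E-302":
    print("FAIL: diagnostic code lost after interrupt in 'jammed' state")
```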
In addition, the intimate relationship that exists between real-time software and its hardware environment can also cause testing problems. Software tests must consider the impact of hardware faults on software processing. Such faults can be extremely difficult to simulate realistically.
Comprehensive test case design methods for real-time systems have yet to evolve. However, an overall four-step strategy can be proposed:
Task testing. The first step in the testing of real-time software is to test each task independently. That is, white-box and black-box tests are designed and executed for each task. Each task is executed independently during these tests. Task testing uncovers errors in logic and function but not timing or behavior.
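A minimal sketch of task testing, assuming a hypothetical task that scales raw sensor readings: the task's functional contract is exercised in isolation, with boundary values chosen by black-box analysis.

```python
# Task testing sketch: a hypothetical real-time task that converts a
# 10-bit ADC reading to degrees Celsius is tested independently of the
# other tasks (and of any timing behavior) it normally runs beside.

def scale_reading(raw):
    """Hypothetical task body: convert a 10-bit ADC value to degrees C."""
    if not 0 <= raw <= 1023:
        raise ValueError("raw reading out of range")
    return raw * 0.25 - 40.0

def test_scale_reading():
    assert scale_reading(0) == -40.0        # lower boundary
    assert scale_reading(1023) == 215.75    # upper boundary
    assert abs(scale_reading(512) - 88.0) < 1e-9
    try:
        scale_reading(1024)                 # out-of-range input
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range input was not rejected")

test_scale_reading()
```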
Behavioral testing. Using system models created with CASE tools, it is possible to simulate a real-time system and examine its behavior as a consequence of external events. These analysis activities can serve as the basis for the design of test cases that are conducted when the real-time software has been built. Using a technique that is similar to equivalence partitioning, events (e.g., interrupts, control signals) are categorized for testing. For example, events for the photocopier might be user interrupts (e.g., reset counter), mechanical interrupts (e.g., paper jammed), system interrupts (e.g., toner low), and failure modes (e.g., roller overheated). Each of these events is tested individually and the behavior of the executable system is examined to detect errors that occur as a consequence of processing associated with these events. The behavior of the system model (developed during the analysis activity) and the executable software can be compared for conformance. Once each class of events has been tested, events are presented to the system in random order and with random frequency. The behavior of the software is examined to detect behavior errors.
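The sketch below illustrates this two-phase approach for the photocopier, with hypothetical event names; a stubbed oracle and a stand-in test harness take the place of the analysis model and the executable system.

```python
# Behavioral testing sketch: events are grouped into classes, tested
# individually, and then injected in random order and at random frequency.

import random

EVENT_CLASSES = {
    "user_interrupt": ["reset_counter", "darken"],
    "mechanical":     ["paper_jammed"],
    "system":         ["toner_low"],
    "failure_mode":   ["roller_overheated"],
}

def expected_response(event):
    """Oracle derived from the analysis model (stubbed for the sketch)."""
    return {"paper_jammed": "display_jam_code",
            "toner_low": "display_toner_warning",
            "roller_overheated": "shutdown"}.get(event, "continue_copying")

def run_system(event):
    """Stand-in for driving the executable system; a real harness goes here."""
    return expected_response(event)        # sketch: echoes the oracle

# Phase 1: test each event class individually.
for cls, events in EVENT_CLASSES.items():
    for event in events:
        assert run_system(event) == expected_response(event), (cls, event)

# Phase 2: random order and random frequency.
random.seed(42)
stream = random.choices(
    [e for evs in EVENT_CLASSES.values() for e in evs], k=100)
for event in stream:
    assert run_system(event) == expected_response(event), event
```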
Intertask testing. Once errors in individual tasks and in system behavior have been isolated, testing shifts to time-related errors. Asynchronous tasks that are known to communicate with one another are tested with different data rates and processing loads to determine if intertask synchronization errors will occur. In addition, tasks that communicate via a message queue or data store are tested to uncover errors in the sizing of these data storage areas.
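A sketch of one such intertask test, assuming two hypothetical tasks joined by a bounded message queue: the producer's data rate is varied against a slower consumer to probe for queue-sizing and synchronization errors.

```python
# Intertask testing sketch: a producer task is run at several data rates
# against a slower consumer; messages lost to a full queue flag a sizing error.

import queue
import threading
import time

def run_trial(producer_delay, consumer_delay, queue_size, n_messages=50):
    q = queue.Queue(maxsize=queue_size)
    dropped = []

    def producer():
        for i in range(n_messages):
            try:
                q.put(i, timeout=0.001)    # fails if the queue is sized too small
            except queue.Full:
                dropped.append(i)
            time.sleep(producer_delay)

    def consumer():
        for _ in range(n_messages):
            try:
                q.get(timeout=1.0)
            except queue.Empty:
                break
            time.sleep(consumer_delay)

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(dropped)

# Vary the relative data rates; a nonzero drop count flags a sizing error.
for rate in (0.001, 0.0005, 0.0001):
    lost = run_trial(producer_delay=rate, consumer_delay=0.002, queue_size=8)
    print(f"producer delay {rate}s: {lost} messages lost")
```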
System testing. Software and hardware are integrated, and a full range of system tests is conducted in an attempt to uncover errors at the software/hardware interface. Most real-time systems process interrupts. Therefore, testing the handling of these Boolean events is essential. Using the state transition diagram and the control specification, the tester develops a list of all possible interrupts and the processing that occurs as a consequence of the interrupts. Tests are then designed to assess the following system characteristics (one such test is sketched at the end of this section):
• Are interrupt priorities properly assigned and properly handled?
• Is processing for each interrupt handled correctly?
• Does the performance (e.g., processing time) of each interrupt-handling procedure conform to requirements?
• Does a high volume of interrupts arriving at critical times create problems in function or performance?
In addition, global data areas that are used to transfer information as part of interrupt processing should be tested to assess the potential for the generation of side effects.
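The interrupt-priority test mentioned above might be sketched as follows, assuming a hypothetical priority table and dispatcher model:

```python
# A minimal sketch of an interrupt-priority test. The priority table and
# dispatcher are hypothetical; the test checks that a burst of simultaneous
# interrupts is serviced in priority order and that none is lost.

import heapq

# Hypothetical priority table: lower number = higher priority.
PRIORITY = {"roller_overheated": 0, "paper_jammed": 1,
            "toner_low": 2, "reset": 3}

def dispatch(pending):
    """Service pending interrupts in priority order; return the service log."""
    heap = [(PRIORITY[irq], seq, irq) for seq, irq in enumerate(pending)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# A burst of interrupts arriving together at a critical time.
burst = ["reset", "toner_low", "roller_overheated", "paper_jammed"]
log = dispatch(burst)

assert log[0] == "roller_overheated", "highest-priority interrupt not serviced first"
assert sorted(log) == sorted(burst), "an interrupt was dropped"
print("service order:", log)
```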