There has been much discussion of the relative advantages and disadvantages of top-down versus bottom-up integration testing. In general, the advantages of one strategy tend to result in disadvantages for the other strategy. The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them. Problems associated with stubs may be offset by the advantage of testing major control functions early. The major disadvantage of bottom-up integration is that "the program as an entity does not exist until the last module is added." This drawback is tempered by easier test case design and a lack of stubs.
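The role a stub plays in top-down integration can be sketched as follows. This is a minimal, hypothetical example: the `generate_report` control module and the `fetch_data_stub` stand-in are illustrative names, not from the text. The stub returns canned data so the superordinate control logic can be exercised before its real subordinate module is integrated.

```python
def fetch_data_stub(query):
    # Stand-in for the not-yet-integrated data-access module: returns a
    # fixed record regardless of the query, just enough to drive the caller.
    return [{"id": 1, "value": 42}]

def generate_report(query, fetch=fetch_data_stub):
    # Superordinate control module under test; the data source is injected
    # so the stub can replace the real module during top-down integration.
    rows = fetch(query)
    total = sum(row["value"] for row in rows)
    return {"rows": len(rows), "total": total}

# Exercising the control path early, with the stub in place:
result = generate_report("SELECT *")
```

When the real data-access module passes unit testing, it replaces the stub and the same test is rerun, which is the regression step the top-down strategy relies on.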
Selection of an integration strategy depends upon software characteristics and, sometimes, project schedule. In general, a combined approach (sometimes called sandwich testing) that uses top-down tests for upper levels of the program structure, coupled with bottom-up tests for subordinate levels may be the best compromise.
As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) addresses several software requirements, (2) has a high level of control (resides relatively high in the program structure), (3) is complex or error prone (cyclomatic complexity may be used as an indicator), or (4) has definite performance requirements. Critical modules should be tested as early as is possible. In addition, regression tests should focus on critical module function.
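The four characteristics above can be expressed as a simple screening heuristic. The sketch below is an illustrative assumption, not a method from the text: the module records, the complexity threshold (cyclomatic complexity above 10), and the control-level cutoff are all hypothetical values chosen for the example.

```python
def is_critical(module):
    # A module is critical if it meets ANY of the four characteristics.
    return (
        module["requirements_addressed"] >= 2    # (1) several requirements
        or module["control_level"] <= 1          # (2) high in the structure (0 = root)
        or module["cyclomatic_complexity"] > 10  # (3) complex or error prone
        or module["has_performance_reqs"]        # (4) definite performance requirements
    )

modules = [
    {"name": "ui_dispatch", "requirements_addressed": 3, "control_level": 0,
     "cyclomatic_complexity": 6, "has_performance_reqs": False},
    {"name": "string_utils", "requirements_addressed": 1, "control_level": 4,
     "cyclomatic_complexity": 3, "has_performance_reqs": False},
]

# Modules flagged here would be integrated and tested as early as possible.
critical = [m["name"] for m in modules if is_critical(m)]
```

A scheduler for the integration order could then place the flagged modules into the earliest builds and weight regression suites toward their functions.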
Integration Test Documentation
An overall plan for integration of the software and a description of specific tests are documented in a Test Specification. This document contains a test plan and a test procedure, is a work product of the software process, and becomes part of the software configuration.
The test plan describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. For example, integration testing for a CAD system might be divided into the following test phases:
• User interaction (command selection, drawing creation, display representation, error processing and representation).
• Data manipulation and analysis (symbol creation, dimensioning, rotation, computation of physical properties).
• Display processing and generation (two-dimensional displays, three-dimensional displays, graphs and charts).
• Database management (access, update, integrity, performance).
Each of these phases and subphases (denoted in parentheses) delineates a broad functional category within the software and can generally be related to a specific domain of the program structure. Therefore, program builds (groups of modules) are created to correspond to each phase. The following criteria and corresponding tests are applied for all test phases:
Interface integrity. Internal and external interfaces are tested as each module (or cluster) is incorporated into the structure.
Functional validity. Tests designed to uncover functional errors are conducted.
Information content. Tests designed to uncover errors associated with local or global data structures are conducted.
Performance. Tests designed to verify performance bounds established during software design are conducted.
A schedule for integration, the development of overhead software, and related topics are also discussed as part of the test plan. Start and end dates for each phase are established, and "availability windows" for unit-tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, the test environment and resources are described.
The detailed testing procedure that is required to accomplish the test plan is described next. The order of integration and the corresponding tests at each integration step are described. A listing of all test cases (annotated for subsequent reference) and expected results is also included.
A history of actual test results, problems, or peculiarities is recorded in the Test Specification. Information contained in this section can be vital during software maintenance. Appropriate references and appendixes are also presented.
Like all other elements of a software configuration, the Test Specification format may be tailored to the local needs of a software engineering organization. It is important to note, however, that an integration strategy (contained in a test plan) and testing details (described in a test procedure) are essential ingredients and must appear.