The distributed nature of client/server systems poses a set of unique problems for software testers. Binder suggests the following areas of focus:
• Client GUI considerations.
• Target environment and platform diversity considerations.
• Distributed database considerations (including replicated data).
• Distributed processing considerations (including replicated processes).
• Nonrobust target environment.
• Nonlinear performance relationships.
The strategy and tactics associated with c/s testing must be designed in a manner that allows each of these issues to be addressed.
Overall c/s Testing Strategy
In general, the testing of client/server software occurs at three different levels: (1) individual client applications are tested in a “disconnected” mode, in which the operation of the server and the underlying network are not considered; (2) the client software and associated server applications are tested in concert, but network operations are not explicitly exercised; (3) the complete c/s architecture, including network operation and performance, is tested.
Although many different types of tests are conducted at each of these levels of detail, the following testing approaches are commonly encountered for c/s applications:
Application function tests. The functionality of client applications is tested using conventional software testing methods. In essence, the application is tested in stand-alone fashion in an attempt to uncover errors in its operation.
Server tests. The coordination and data management functions of the server are tested. Server performance (overall response time and data throughput) is also considered.
Database tests. The accuracy and integrity of data stored by the server are tested. Transactions posted by client applications are examined to ensure that data are properly stored, updated, and retrieved. Archiving is also tested.
Transaction tests. A series of tests are created to ensure that each class of transactions is processed according to requirements. Tests focus on the correctness of processing and also on performance issues (e.g., transaction processing times and transaction volume).
Network communication tests. These tests verify that communication among the nodes of the network occurs correctly and that message passing, transactions, and related network traffic occur without error. Network security tests may also be conducted as part of these tests.
To accomplish these testing approaches, Musa recommends the development of operational profiles derived from client/server usage scenarios. An operational profile indicates how different types of users interact with the c/s system. That is, the profiles provide a “pattern of usage” that can be applied when tests are designed and executed. For example, for a particular type of user, what percentage of transactions will be inquiries? updates? orders?
To develop the operational profile, it is necessary to derive a set of user scenarios that are similar to use-cases discussed earlier in this book. Each scenario addresses who, where, what, and why. That is, who the user is, where (in the physical c/s architecture) the system interaction occurs, what the transaction is, and why it has occurred. Scenarios can be derived using requirements elicitation techniques or through less formal discussions with end-users. The result, however, should be the same. Each scenario should provide an indication of the system functions that will be required to service a particular user, the order in which those functions are required, the timing and response that is expected, and the frequency with which each function is used. These data are then combined (for all users) to create the operational profile.
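The combination step described above can be sketched as a small computation: scenario data for each user type are tallied and converted into the percentage-of-usage figures that make up the operational profile. The user types and transaction types below are illustrative, not taken from the text:

```python
from collections import Counter

# Hypothetical scenario data: (user_type, transaction_type) pairs derived
# from user scenarios gathered during requirements elicitation.
scenarios = [
    ("clerk", "inquiry"), ("clerk", "update"), ("clerk", "inquiry"),
    ("manager", "inquiry"), ("clerk", "order"), ("manager", "update"),
]

def operational_profile(scenarios):
    """Combine scenario data into a pattern of usage: for each user type,
    the percentage of transactions of each type."""
    by_user = {}
    for user, txn in scenarios:
        by_user.setdefault(user, Counter())[txn] += 1
    profile = {}
    for user, counts in by_user.items():
        total = sum(counts.values())
        profile[user] = {txn: round(100 * n / total, 1)
                         for txn, n in counts.items()}
    return profile

profile = operational_profile(scenarios)
# For the sample data: clerks issue 50% inquiries, 25% updates, 25% orders.
```

Test designers can then allocate test cases in proportion to these percentages, so that testing effort mirrors expected usage.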
The strategy for testing c/s architectures is analogous to the testing strategy for conventional software-based systems. Testing begins with testing in the small. That is, a single client application is tested. Integration of the clients, the server, and the network is tested progressively. Finally, the entire system is tested as an operational entity.
Traditional testing views module/subsystem/system integration and testing as top down, bottom up, or some variation of the two. Module integration in c/s development may have some top-down or bottom-up aspects, but integration in c/s projects tends more toward parallel development and integration of modules across all design levels. Therefore, integration testing in c/s projects is sometimes best accomplished using a nonincremental or "big bang" approach.
The fact that the system is not being built to use prespecified hardware and software affects system testing. The networked cross-platform nature of c/s systems requires that we pay considerably more attention to configuration testing and compatibility testing.
Configuration testing doctrine requires that the system be tested in every known hardware and software environment in which it will operate. Compatibility testing ensures a functionally consistent interface across hardware and software platforms. For example, a Windows-type interface may be visually different depending on the implementation environment, but the same basic user behaviors should produce the same results regardless of the client interface standard.
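The configuration/compatibility idea above can be sketched as a matrix test: the same user behavior is exercised in every combination of platform elements, and the observable result must match across all of them. The platform lists and the `check_ui_behavior` function are hypothetical placeholders for real test drivers:

```python
import itertools

# Hypothetical configuration matrix: every known hardware/software
# environment in which the system will operate.
operating_systems = ["Windows", "macOS", "Linux"]
databases = ["PostgreSQL", "Oracle"]

def check_ui_behavior(os_name, db_name):
    # Stand-in: drive the same basic user behavior on this configuration
    # and return the observable result.
    return "order-confirmed"

# Compatibility requirement: the same user behavior produces the same
# result regardless of the client configuration.
baseline = None
for os_name, db_name in itertools.product(operating_systems, databases):
    outcome = check_ui_behavior(os_name, db_name)
    if baseline is None:
        baseline = outcome
    assert outcome == baseline, f"inconsistent behavior on {os_name}/{db_name}"
```

Each cell of the matrix is a configuration test; the cross-cell assertion is the compatibility test.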
c/s Testing Tactics
Even if the c/s system has not been implemented using object technology, object-oriented testing techniques make good sense because the replicated data and processes can be organized into classes of objects that share the same set of properties. Once test cases have been derived for a class of objects (or their equivalent in a conventionally developed system), those test cases should be broadly applicable for all instances of the class.
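The reuse idea above can be illustrated concretely: test cases are derived once for a class and then applied unchanged to every instance, such as replicated processes or data objects that share the same properties. The `Account` class and its tests are purely illustrative:

```python
# Hypothetical class representing replicated data/process objects that
# share one set of properties.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

def run_class_tests(instance):
    """Test cases derived once for the class; applicable to any instance."""
    before = instance.balance
    instance.deposit(10)
    assert instance.balance == before + 10      # normal behavior
    try:
        instance.deposit(-5)                    # error behavior
    except ValueError:
        pass
    else:
        raise AssertionError("negative deposit should be rejected")

# The same suite exercises every replica, whatever its state.
for replica in (Account(0), Account(100), Account(2500)):
    run_class_tests(replica)
```

Deriving the tests at the class level rather than per instance is what makes them "broadly applicable" in the sense the text describes.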
The OO point of view is particularly valuable when the graphical user interface of modern c/s systems is considered. The GUI is inherently object oriented and departs from traditional interfaces because it must operate on many platforms. In addition, testing must explore a large number of logic paths because the GUI creates, manipulates, and modifies a broad range of graphical objects. Testing is further complicated because the objects can be present or absent, they may exist for a length of time, and they can appear anywhere on the desktop.
What this means is that the traditional capture/playback approach for testing conventional character-based interfaces must be modified in order to handle the complexities of the GUI environment. A functional variation of the capture/playback paradigm called structured capture/playback has evolved for GUI testing.
Traditional capture/playback records input as keystrokes and output as screen images, which are saved and compared against inputs and output images of subsequent tests. Structured capture/playback is based on an internal (logical) view of external activities. The application program's interactions with the GUI are recorded as internal events, which can be saved as "scripts" written in Microsoft's Visual Basic, one of the C variants, or in the vendor's proprietary language.