The questions posed in the preceding section provide a preliminary assessment of the architectural style chosen for a given system. However, a more complete method for evaluating the quality of an architecture is essential if design is to be accomplished effectively. In the sections that follow, we consider two different approaches for the analysis of alternative architectural designs. The first approach uses an iterative method to assess design trade-offs. The second applies a pseudo-quantitative technique for assessing design quality.
An Architecture Trade-off Analysis Method
The Software Engineering Institute (SEI) has developed an architecture trade-off analysis method that establishes an iterative evaluation process for software architectures. The design analysis activities that follow are performed iteratively:
1. Collect scenarios. A set of use-cases is developed to represent the system from the user’s point of view.
2. Elicit requirements, constraints, and environment description. This information is required as part of requirements engineering and is used to be certain that all customer, user, and stakeholder concerns have been addressed.
3. Describe the architectural styles/patterns that have been chosen to address the scenarios and requirements. The style(s) should be described using architectural views such as
• Module view for analysis of work assignments with components and the degree to which information hiding has been achieved.
• Process view for analysis of system performance.
• Data flow view for analysis of the degree to which the architecture meets functional requirements.
4. Evaluate quality attributes by considering each attribute in isolation. The number of quality attributes chosen for analysis is a function of the time available for review and the degree to which quality attributes are relevant to the system at hand. Quality attributes for architectural design assessment include reliability, performance, security, maintainability, flexibility, testability, portability, reusability, and interoperability.
5. Identify the sensitivity of quality attributes to various architectural attributes for a specific architectural style. This can be accomplished by making small changes in the architecture and determining how sensitive a quality attribute, say performance, is to the change. Any attributes that are significantly affected by variation in the architecture are termed sensitivity points.
6. Critique candidate architectures (developed in step 3) using the sensitivity analysis conducted in step 5. The SEI describes this approach in the following manner:
Once the architectural sensitivity points have been determined, finding trade-off points is simply the identification of architectural elements to which multiple attributes are sensitive. For example, the performance of a client-server architecture might be highly sensitive to the number of servers (performance increases, within some range, by increasing the number of servers). The availability of that architecture might also vary directly with the number of servers. However, the security of the system might vary inversely with the number of servers (because the system contains more potential points of attack). The number of servers, then, is a trade-off point with respect to this architecture. It is an element, potentially one of many, where architectural trade-offs will be made, consciously or unconsciously.

These six steps represent the first ATAM iteration. Based on the results of steps 5 and 6, some architecture alternatives may be eliminated, one or more of the remaining architectures may be modified and represented in more detail, and then the ATAM steps are reapplied.
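The bookkeeping behind steps 5 and 6 can be sketched in a few lines of code. The sketch below is illustrative only: the attribute and element names are invented, and a real ATAM evaluation would derive the sensitivity sets from experiments on the architecture, not from a hand-written table.

```python
from collections import defaultdict

def find_tradeoff_points(sensitivities):
    """Given a mapping of quality attribute -> set of architectural
    elements it is sensitive to, return the elements that are
    sensitivity points for more than one attribute (trade-off points)."""
    element_attrs = defaultdict(set)
    for attribute, elements in sensitivities.items():
        for element in elements:
            element_attrs[element].add(attribute)
    return {elem: attrs for elem, attrs in element_attrs.items()
            if len(attrs) > 1}

# Mirrors the SEI example: the number of servers in a client-server
# architecture affects performance, availability, and security.
sensitivities = {
    "performance":  {"number of servers", "network bandwidth"},
    "availability": {"number of servers"},
    "security":     {"number of servers"},
}

tradeoffs = find_tradeoff_points(sensitivities)
# "number of servers" emerges as a trade-off point; "network bandwidth"
# does not, since only one attribute is sensitive to it.
```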
Quantitative Guidance for Architectural Design
One of the many problems faced by software engineers during the design process is a general lack of quantitative methods for assessing the quality of proposed designs. The ATAM approach is representative of a useful but undeniably qualitative approach to design analysis.
Work in the area of quantitative analysis of architectural design is still in its formative stages. Asada and his colleagues suggest a number of pseudo-quantitative techniques that can be used to complement the ATAM approach as a method for the analysis of architectural design quality. Asada proposes a number of simple models that assist a designer in determining the degree to which a particular architecture meets predefined “goodness” criteria. These criteria, sometimes called design dimensions, often encompass the quality attributes defined in the last section: reliability, performance, security, maintainability, flexibility, testability, portability, reusability, and interoperability, among others.
The first model, called spectrum analysis, assesses an architectural design on a “goodness” spectrum from the best to worst possible designs. Once the software architecture has been proposed, it is assessed by assigning a “score” to each of its design dimensions. These dimension scores are summed to determine the total score, S, of the design as a whole. Worst-case scores are then assigned to a hypothetical design, and a total score, Sw, for the worst-case architecture is computed. A best-case score, Sb, is computed for an optimal design. We then calculate a spectrum index, Is, using the equation
Is = [(S − Sw) / (Sb − Sw)] × 100
The spectrum index indicates the degree to which a proposed architecture approaches an optimal system within the spectrum of reasonable choices for a design.
If modifications are made to the proposed design or if an entirely new design is proposed, the spectrum indices for both may be compared and an improvement index, Imp, may be computed:
Imp = Is1 − Is2
This provides a designer with a relative indication of the improvement associated with architectural changes or a new proposed architecture. If Imp is positive, then we can conclude that system 1 has been improved relative to system 2.
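The spectrum and improvement indices are simple enough to sketch directly. In the sketch below, the per-dimension scores are invented for illustration; in practice they would come from the scoring scheme the design team defines for its chosen dimensions.

```python
def spectrum_index(scores, worst, best):
    """Is = ((S - Sw) / (Sb - Sw)) * 100, where S, Sw, and Sb are the
    summed dimension scores for the proposed, worst-case, and
    best-case designs."""
    s, sw, sb = sum(scores), sum(worst), sum(best)
    return (s - sw) / (sb - sw) * 100

# Hypothetical scores for three dimensions (e.g., reliability,
# performance, security), each on a 1-10 scale.
worst = [1, 1, 1]      # worst-case hypothetical design
best = [10, 10, 10]    # optimal design

is1 = spectrum_index([8, 7, 9], worst, best)  # system 1 (modified design)
is2 = spectrum_index([7, 6, 8], worst, best)  # system 2 (original design)

imp = is1 - is2  # improvement index; positive means system 1 is better
```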
Design selection analysis is another model that requires a set of design dimensions to be defined. The proposed architecture is then assessed to determine the number of design dimensions that it achieves when compared to an ideal (best-case) system. For example, if a proposed architecture would achieve excellent component reuse, and this dimension is required for an ideal system, the reusability dimension has been achieved. If the proposed architecture has weak security and strong security is required, that design dimension has not been achieved. We calculate a design selection index, d, as
d = (Ns / Na) × 100
where Ns is the number of design dimensions achieved by a proposed architecture and Na is the total number of dimensions in the design space. The higher the design selection index, the more closely the proposed architecture approaches an ideal system.
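The design selection index can be sketched as follows. The dimension names and the achieved set are illustrative assumptions, not part of the model itself.

```python
def design_selection_index(achieved, dimensions):
    """d = (Ns / Na) * 100, where Ns is the number of design dimensions
    achieved by the proposed architecture and Na is the total number of
    dimensions in the design space."""
    ns = sum(1 for dim in dimensions if dim in achieved)
    return ns / len(dimensions) * 100

# Hypothetical design space of four dimensions, three of which the
# proposed architecture achieves (security falls short).
dimensions = ["reliability", "performance", "security", "reusability"]
achieved = {"reliability", "performance", "reusability"}

d = design_selection_index(achieved, dimensions)  # 3 of 4 dimensions
```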
Contribution analysis “identifies the reasons that one set of design choices gets a lower score than another”. Value analysis is conducted to determine the relative priority of requirements determined during function deployment, information deployment, and task deployment. A set of “realization mechanisms” (features of the architecture) is identified. All customer requirements (determined
using QFD) are listed and a cross-reference matrix is created. The cells of the matrix indicate the relative strength of the relationship (on a numeric scale of 1 to 10) between a realization mechanism and a requirement for each alternative architecture. This is sometimes called a quantified design space (QDS). The QDS is relatively easy to implement as a spreadsheet model and can be used to isolate why one set of design choices gets a lower score than another.
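As the text notes, the QDS is easy to build as a spreadsheet; the same cross-reference matrix can be sketched as a nested dictionary. The requirement names, mechanism names, priorities, and cell strengths below are all invented for illustration.

```python
# Hypothetical quantified design space (QDS) for one candidate
# architecture: rows are customer requirements, columns are realization
# mechanisms, and cell values (1-10) are relationship strengths.
qds = {
    "fast response":  {"caching layer": 9, "message bus": 4},
    "easy extension": {"caching layer": 2, "message bus": 8},
}

# QFD-derived requirement priorities (illustrative weights).
priorities = {"fast response": 3, "easy extension": 2}

# Weight each cell by its requirement's priority and total the matrix
# to obtain a score for this candidate; comparing such scores across
# candidates shows why one set of design choices scores lower.
score = sum(priorities[req] * strength
            for req, row in qds.items()
            for strength in row.values())
```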
A useful technique for assessing the overall complexity of a proposed architecture is to consider dependencies between components within the architecture. These dependencies are driven by information/control flow within the system.
Zhao [ZHA98] suggests three types of dependencies:
Sharing dependencies represent dependence relationships among consumers who use the same resource or producers who produce for the same consumers. For example, for two components u and v, if u and v refer to the same global data, then there exists a shared dependence relationship between u and v.
Flow dependencies represent dependence relationships between producers and consumers of resources. For example, for two components u and v, if u must complete before control flows into v (prerequisite), or if u communicates with v by parameters, then there exists a flow dependence relationship between u and v.
Constrained dependencies represent constraints on the relative flow of control among a set of activities. For example, for two components u and v, if u and v cannot execute at the same time (mutual exclusion), then there exists a constrained dependence relationship between u and v.
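Zhao's three dependency types lend themselves to a simple tabulation, which in turn gives a rough count-based complexity measure for an architecture. The component names below are illustrative, and counting dependencies is only one crude proxy for complexity.

```python
from collections import Counter

# Each entry records (source component, target component, dependency
# type) for a hypothetical architecture.
dependencies = [
    ("u1", "v1", "sharing"),      # both refer to the same global data
    ("u2", "v2", "flow"),         # u2 must complete before v2 starts
    ("u3", "v3", "constrained"),  # u3 and v3 are mutually exclusive
]

by_type = Counter(kind for _, _, kind in dependencies)
total = sum(by_type.values())  # overall dependency count as a crude
                               # indicator of architectural complexity
```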