It is inconceivable that the design of a new aircraft, a new computer chip, or a new office building would be conducted without defining design measures, determining metrics for various aspects of design quality, and using them to guide the manner in which the design evolves. And yet, the design of complex software-based systems often proceeds with virtually no measurement. The irony of this is that design metrics for software are available, but the vast majority of software engineers continue to be unaware of their existence.

Design metrics for computer software, like all other software metrics, are not perfect. Debate continues over their efficacy and the manner in which they should be applied. Many experts argue that further experimentation is required before design measures can be used. And yet, design without measurement is an unacceptable alternative.

We can examine some of the more common design metrics for computer software. Each can provide the designer with improved insight and all can help the design to evolve to a higher level of quality.

**Architectural Design Metrics**

Architectural design metrics focus on characteristics of the program architecture with an emphasis on the architectural structure and the effectiveness of modules. These metrics are black box in the sense that they do not require any knowledge of the inner workings of a particular software component.

Card and Glass define three software design complexity measures: structural complexity, data complexity, and system complexity.

Structural complexity of a module i is defined in the following manner:

**S(i) = fout(i)²**

where fout(i) is the fan-out of module i.

Data complexity provides an indication of the complexity in the internal interface for a module i and is defined as

**D(i) = v(i)/[fout(i) + 1]**

where v(i) is the number of input and output variables that are passed to and from module i.

Finally, system complexity is defined as the sum of structural and data complexity, specified as

**C(i) = S(i) + D(i)**

As each of these complexity values increases, the overall architectural complexity of the system also increases. This leads to a greater likelihood that integration and testing effort will also increase.
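As a concrete illustration, the three Card and Glass measures can be computed directly from a module's fan-out and variable count. This is a minimal sketch; the function and parameter names are our own, not from the original formulation:

```python
def structural_complexity(fan_out: int) -> float:
    """S(i) = fout(i)^2"""
    return fan_out ** 2

def data_complexity(v: int, fan_out: int) -> float:
    """D(i) = v(i) / [fout(i) + 1], where v is the number of
    input and output variables passed to and from the module."""
    return v / (fan_out + 1)

def system_complexity(v: int, fan_out: int) -> float:
    """C(i) = S(i) + D(i)"""
    return structural_complexity(fan_out) + data_complexity(v, fan_out)

# Example: a module with fan-out 3 that passes 8 variables
print(system_complexity(v=8, fan_out=3))  # 3**2 + 8/4 = 11.0
```

Note how fan-out dominates: because it is squared, a module that calls many subordinates raises C(i) far faster than one that merely passes many variables.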

An earlier high-level architectural design metric proposed by Henry and Kafura also makes use of fan-in and fan-out. The authors define a complexity metric (applicable to call and return architectures) of the form

**HKM = length(i) × [fin(i) + fout(i)]²**

where length(i) is the number of programming language statements in module i and fin(i) is the fan-in of module i. Henry and Kafura extend the definitions of fan-in and fan-out presented in this book to include not only the number of module control connections (module calls) but also the number of data structures from which a module i retrieves (fan-in) or updates (fan-out) data. To compute HKM during design, the procedural design may be used to estimate the number of programming language statements for module i. Like the Card and Glass metrics noted previously, an increase in the Henry-Kafura metric leads to a greater likelihood that integration and testing effort will also increase for a module.
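The Henry-Kafura computation is equally direct once the three counts are in hand. A minimal sketch (the function name and example values are illustrative assumptions):

```python
def henry_kafura(length: int, fan_in: int, fan_out: int) -> int:
    """HKM = length(i) * [fin(i) + fout(i)]^2, where length(i) is the
    statement count and fan-in/fan-out include data-structure
    retrievals and updates as well as module calls."""
    return length * (fan_in + fan_out) ** 2

# Example: a 100-statement module with fan-in 2 and fan-out 3
print(henry_kafura(length=100, fan_in=2, fan_out=3))  # 100 * 5**2 = 2500
```

Because the fan term is squared, trimming one connection from a heavily coupled module reduces HKM more than removing many statements would.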

Fenton suggests a number of simple morphology (i.e., shape) metrics that enable different program architectures to be compared using a set of straightforward dimensions. Referring to the figure, the following metrics can be defined:

**size = n + a**

where n is the number of nodes and a is the number of arcs. For the architecture shown in the figure,

*size = 17 + 18 = 35*

depth = the longest path from the root (top) node to a leaf node. For the architecture shown in the figure, depth = 4.

width = maximum number of nodes at any one level of the architecture. For the architecture shown in the figure, width = 6.

arc-to-node ratio, r = a/n,

which measures the connectivity density of the architecture and may provide a simple indication of the coupling of the architecture. For the architecture shown in the figure, r = 18/17 = 1.06.
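All four morphology metrics fall out of one level-order traversal. A sketch assuming the architecture is represented as an adjacency list of module calls; note that depth is counted here as the number of levels including the root, which is an assumption about the figure's convention:

```python
def morphology(tree: dict, root) -> dict:
    """Fenton-style morphology metrics for a call-and-return
    architecture given as {module: [subordinate modules]}."""
    nodes = set(tree) | {c for children in tree.values() for c in children}
    n = len(nodes)                            # number of nodes
    a = sum(len(c) for c in tree.values())    # number of arcs
    depth, width, level = 0, 0, [root]
    while level:                              # level-order walk
        depth += 1
        width = max(width, len(level))
        level = [c for m in level for c in tree.get(m, [])]
    return {"size": n + a, "depth": depth, "width": width, "r": a / n}

# Hypothetical 6-module architecture: a calls b and c; b calls d; c calls e, f
arch = {"a": ["b", "c"], "b": ["d"], "c": ["e", "f"]}
print(morphology(arch, "a"))  # size 11, depth 3, width 3, r = 5/6
```

For a strict tree a = n − 1, so r approaches 1 only as extra (coupling) arcs are added, which is why r serves as a coarse coupling indicator.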

The U.S. Air Force Systems Command has developed a number of software quality indicators that are based on measurable design characteristics of a computer program. Using concepts similar to those proposed in IEEE Std. 982.1-1988, the Air Force uses information obtained from data and architectural design to derive a design structure quality index (DSQI) that ranges from 0 to 1. The following values must be ascertained to compute the DSQI:

**S1** = the total number of modules defined in the program architecture.

**S2** = the number of modules whose correct function depends on the source of data input or that produce data to be used elsewhere (in general, control modules, among others, would not be counted as part of S2).

**S3** = the number of modules whose correct function depends on prior processing.

**S4** = the number of database items (includes data objects and all attributes that define objects).

**S5** = the total number of unique database items.

**S6** = the number of database segments (different records or individual objects).

**S7** = the number of modules with a single entry and exit (exception processing is not considered to be a multiple exit).

Once values S1 through S7 are determined for a computer program, the following intermediate values can be computed:

**Program structure:** D1, where D1 is defined as follows: If the architectural design was developed using a distinct method (e.g., data flow-oriented design or object-oriented design), then D1 = 1; otherwise D1 = 0.

**Module independence:** D2 = 1 − (S2/S1)

**Modules not dependent on prior processing:** D3 = 1 − (S3/S1)

**Database size:** D4 = 1 − (S5/S4)

**Database compartmentalization:** D5 = 1 − (S6/S4)

**Module entrance/exit characteristic:** D6 = 1 − (S7/S1)

With these intermediate values determined, the DSQI is computed in the following manner:

**DSQI = ∑wiDi**

where i = 1 to 6, wi is the relative weighting of the importance of each of the intermediate values, and ∑wi = 1 (if all Di are weighted equally, then wi = 0.167).

The value of DSQI for past designs can be determined and compared to a design that is currently under development. If the DSQI is significantly lower than average, further design work and review are indicated. Similarly, if major changes are to be made to an existing design, the effect of those changes on DSQI can be calculated.