Throughout the software process, a project team creates work products (e.g., requirements specifications or prototypes, design documents, source code). But the team also creates (and hopefully corrects) errors associated with each work product. If error-related measures and resultant metrics are collected over many software projects, a project manager can use these data as a baseline for comparison against error data collected in real time. Error tracking can therefore be used as one means of assessing the status of a current project.
The software team performs formal technical reviews (and, later, testing) to find and correct errors, E, in work products produced during software engineering tasks. Errors that are not uncovered during a given task but are found during later tasks are considered defects, D. Defect removal efficiency (Chapter 4) has been defined as
DRE = E/(E + D)
DRE is a process metric that provides a strong indication of the effectiveness of quality assurance activities. But DRE, together with the error and defect counts associated with it, can also help a project manager assess the progress being made as a software project moves through its scheduled work tasks.
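As a quick numerical illustration, suppose reviews within an activity uncover 45 errors while 5 related problems escape and surface in later tasks. A minimal Python sketch of the DRE calculation (the counts here are invented for illustration):

```python
def dre(errors, defects):
    """Defect removal efficiency: the fraction of problems caught
    within the current activity rather than escaping downstream."""
    return errors / (errors + defects)

# Hypothetical counts: reviews caught 45 errors (E); 5 defects (D)
# slipped through and were found during later tasks.
print(dre(45, 5))  # 0.9 -- 90% of problems were removed in-phase
```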
Let us assume that a software organization has collected error and defect data over the past 24 months and has developed averages for the following metrics:
• Errors per requirements specification page, Ereq
• Errors per component—design level, Edesign
• Errors per component—code level, Ecode
• DRE—requirements analysis
• DRE—architectural design
• DRE—component level design
• DRE—coding
As the project progresses through each software engineering step, the software team records and reports the number of errors found during requirements, design, and code reviews. The project manager calculates current values for Ereq, Edesign, and Ecode. These are then compared to averages for past projects. If current results vary by more than 20% from the average, there may be cause for concern and there is certainly cause for investigation.
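One way a project manager might automate this comparison, sketched in Python under the assumption that organizational baselines are kept as simple per-metric averages (the Ereq average of 3.6 is taken from the example that follows; the Edesign and Ecode baselines are invented for illustration):

```python
# Organizational baseline averages (Ereq comes from the example below;
# the other two values are invented for illustration).
BASELINE = {"Ereq": 3.6, "Edesign": 2.4, "Ecode": 1.8}
THRESHOLD = 0.20  # flag anything more than 20% from the average

def flag_variances(current, baseline=BASELINE, threshold=THRESHOLD):
    """Return metrics whose current value deviates from the
    organizational average by more than the threshold."""
    flagged = {}
    for metric, avg in baseline.items():
        deviation = abs(current[metric] - avg) / avg
        if deviation > threshold:
            flagged[metric] = (current[metric], avg, deviation)
    return flagged

# Project X: Ereq = 2.1 against an average of 3.6 -- a 42% variance.
print(flag_variances({"Ereq": 2.1, "Edesign": 2.5, "Ecode": 1.9}))
# {'Ereq': (2.1, 3.6, 0.4166...)} -- worth investigating
```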
For example, if Ereq = 2.1 for project X, yet the organizational average is 3.6, one of two scenarios is possible: (1) the software team has done an outstanding job of developing the requirements specification, or (2) the team has been lax in its review approach. If the second scenario appears likely, the project manager should take immediate steps to build additional design time into the schedule to accommodate the requirements defects that have likely been propagated into the design activity.
These error tracking metrics can also be used to better target review and/or testing resources. For example, if a system is composed of 120 components, but 32 of these components exhibit Edesign values that vary substantially from the average, the project manager might elect to dedicate code review resources to those 32 components and allow the others to pass into testing without code review. Although all components should undergo code review in an ideal setting, a selective approach (reviewing only those modules whose quality is suspect based on the Edesign value) might be an effective means of recouping lost time and/or saving costs for a project that has gone over budget.
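A sketch of that triage under the same assumptions (the per-component Edesign data, the component names, and the 20% variance rule are all illustrative):

```python
AVG_EDESIGN = 2.4    # illustrative organizational average
THRESHOLD = 0.20     # "substantial variance" per the 20% guideline

def select_for_code_review(edesign_by_component):
    """Pick components whose design-level error rate varies from the
    organizational average by more than the threshold; only these
    would receive dedicated code-review resources."""
    return [name for name, e in edesign_by_component.items()
            if abs(e - AVG_EDESIGN) / AVG_EDESIGN > THRESHOLD]

# With 120 components, a call like this might flag 32 of them.
print(select_for_code_review({"comp_001": 3.9, "comp_002": 2.3}))
# ['comp_001'] -- comp_002 passes into testing without code review
```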