Software Engineering: Empirical Estimation Tools

An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. Rather than relying on lookup tables, the estimated values for LOC or FP are plugged directly into the estimation model.

The empirical data that support most estimation models are derived from a limited sample of projects. For this reason, no estimation model is appropriate for all classes of software and in all development environments. Therefore, the results obtained from such models must be used judiciously.

The Structure of Estimation Models

A typical estimation model is derived using regression analysis on data collected from past software projects. The overall structure of such models takes the form
                                                       E = A + B x (ev)^C
where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP). 
The majority of estimation models have some form of project adjustment component that enables E to be adjusted by other project characteristics (e.g., problem complexity, staff experience, development environment). Among the many LOC-oriented estimation models proposed in the literature are

                          E = 5.2 x (KLOC)^0.91           Walston-Felix model
                          E = 5.5 + 0.73 x (KLOC)^1.16    Bailey-Basili model
                          E = 3.2 x (KLOC)^1.05           Boehm simple model
                          E = 5.288 x (KLOC)^1.047        Doty model for KLOC > 9

             FP-oriented models have also been proposed. These include
                          E = 13.39 + 0.0545 FP               Albrecht and Gaffney model
                          E = 60.62 x 7.728 x 10^-8 FP^3      Kemerer model
                          E = 585.7 + 15.12 FP                Matson, Barnett, and Mellichamp model

A quick examination of these models indicates that each will yield a different result for the same values of LOC or FP. The implication is clear. Estimation models must be calibrated for local needs.
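The divergence is easy to see by coding the models directly. A minimal Python sketch follows; the 33.2 KLOC and 380 FP inputs are illustrative values, not figures from any real project:

```python
# Each published model coded as given above; effort E is in person-months.

def walston_felix(kloc):
    return 5.2 * kloc ** 0.91

def bailey_basili(kloc):
    return 5.5 + 0.73 * kloc ** 1.16

def boehm_simple(kloc):
    return 3.2 * kloc ** 1.05

def doty(kloc):
    # The Doty model is stated only for KLOC > 9.
    return 5.288 * kloc ** 1.047

def albrecht_gaffney(fp):
    return 13.39 + 0.0545 * fp

def kemerer(fp):
    return 60.62 * 7.728e-8 * fp ** 3   # as printed above

def matson_barnett_mellichamp(fp):
    return 585.7 + 15.12 * fp

kloc, fp = 33.2, 380   # illustrative project size
for name, effort in [
    ("Walston-Felix", walston_felix(kloc)),
    ("Bailey-Basili", bailey_basili(kloc)),
    ("Boehm simple", boehm_simple(kloc)),
    ("Doty (KLOC > 9)", doty(kloc)),
    ("Albrecht-Gaffney", albrecht_gaffney(fp)),
    ("Kemerer", kemerer(fp)),
    ("Matson-Barnett-Mellichamp", matson_barnett_mellichamp(fp)),
]:
    print(f"{name:28s} {effort:9.1f} person-months")
```

Running this makes the spread among the models obvious for a single project, which is precisely why local calibration is necessary.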

The COCOMO Model

In his classic book Software Engineering Economics, Barry Boehm introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel. The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry. It has since evolved into a more comprehensive estimation model, called COCOMO II. Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the following areas:

Application composition model. Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.

Early design stage model. Used once requirements have been stabilized and basic software architecture has been established.

Post-architecture stage model. Used during the construction of the software.

Like all estimation models for software, the COCOMO II models require sizing information. Three different sizing options are available as part of the model hierarchy: object points, function points, and lines of source code.

The COCOMO II application composition model uses object points. It should be noted that other, more sophisticated estimation models (using FP and KLOC) are also available as part of COCOMO II.

Like function points, the object point is an indirect software measure that is computed using counts of the number of (1) screens (at the user interface), (2) reports, and (3) components likely to be required to build the application. Each object instance (e.g., a screen or report) is classified into one of three complexity levels (i.e., simple, medium, or difficult) using criteria suggested by Boehm. In essence, complexity is a function of the number and source of the client and server data tables that are required to generate the screen or report and the number of views or sections presented as part of the screen or report.
Once complexity is determined, the numbers of screens, reports, and components are weighted. The object point count is then determined by multiplying the original number of object instances by the weighting factor and summing to obtain a total object point count. When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted:

                        NOP = (object points) x [(100 - %reuse)/100]

where NOP is defined as new object points.
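The weighting-and-summing procedure can be sketched in Python. The complexity weights used here (screens 1/2/3, reports 2/5/8, and a flat 10 per 3GL component) are the values Boehm published for COCOMO II; treat them as an assumption and verify against your reference:

```python
# Object-point count and reuse adjustment, following the procedure above.
# Complexity weights are assumed from Boehm's COCOMO II definition:
# screens 1/2/3, reports 2/5/8, and 10 for each 3GL component.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
}
COMPONENT_WEIGHT = 10  # 3GL component

def object_points(screens, reports, components):
    """screens/reports: lists of complexity labels; components: a count."""
    total = sum(WEIGHTS["screen"][c] for c in screens)
    total += sum(WEIGHTS["report"][c] for c in reports)
    total += components * COMPONENT_WEIGHT
    return total

def new_object_points(op, pct_reuse):
    # NOP = (object points) x [(100 - %reuse) / 100]
    return op * (100 - pct_reuse) / 100

op = object_points(screens=["simple", "difficult"], reports=["medium"], components=1)
nop = new_object_points(op, pct_reuse=20)  # 20% reuse scales the count by 0.8
```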

To derive an estimate of effort based on the computed NOP value, a “productivity rate” must be derived. Table 5.2 presents the productivity rate
                         PROD = NOP/person-month

for different levels of developer experience and development environment maturity. Once the productivity rate has been determined, an estimate of project effort can be derived as

                       estimated effort = NOP/PROD
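The last two steps combine into a few lines of Python. The productivity rates used here (4, 7, 13, 25, and 50 NOP per person-month, from very low to very high experience/maturity) are the values commonly published for COCOMO II; check them against Table 5.2 in your text before relying on them:

```python
# Effort estimation from NOP, using PROD = NOP/person-month.
# Rate values are assumed from COCOMO II's published productivity table
# (very low .. very high developer experience / environment maturity).
PROD = {"very_low": 4, "low": 7, "nominal": 13, "high": 25, "very_high": 50}

def estimated_effort(nop, level):
    """Return estimated effort in person-months: NOP / PROD."""
    return nop / PROD[level]

print(estimated_effort(nop=15.2, level="nominal"))  # effort for a nominal team
```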