
Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size" each element of the software and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections.

LOC and FP estimation are distinct estimation techniques. Yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing such as classes or objects, changes, or business processes affected.

Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the appropriate estimation variable, and cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project.
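As a hypothetical illustration of this step (all figures are invented, not real project data), effort and cost can be derived by applying a baseline productivity metric to the sized functions:

```python
# Hypothetical example: deriving effort and cost from sized functions.
# The per-function LOC estimates, productivity baseline, and labor rate
# below are assumed values for illustration only.
functions_loc = {"user interface": 2300, "database": 3400, "reporting": 1200}
loc_per_pm = 620           # baseline productivity: LOC per person-month
cost_per_pm = 8000         # labor cost per person-month (assumed)

total_loc = sum(functions_loc.values())   # combine function estimates
effort_pm = total_loc / loc_per_pm        # person-months of effort
cost = effort_pm * cost_per_pm
```

Each function could equally be costed individually and the partial estimates summed, as the pseudocode later in this section does.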

It is important to note, however, that there is often substantial scatter in productivity metrics for an organization, making the use of a single baseline productivity metric suspect. In general, LOC/pm or FP/pm averages should be computed by project domain. That is, projects should be grouped by team size, application area, complexity, and other relevant parameters. Local domain averages should then be computed. When a new project is estimated, it should first be allocated to a domain, and then the appropriate domain average for productivity should be used in generating the estimate.
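A minimal sketch of domain-grouped baselines, assuming a small invented set of past projects, might look like this:

```python
# Sketch: grouping past projects by domain before computing baseline
# productivity (LOC per person-month). All figures are invented.
from collections import defaultdict
from statistics import mean

past_projects = [
    {"domain": "embedded",    "loc": 12000, "effort_pm": 40},
    {"domain": "embedded",    "loc": 9000,  "effort_pm": 36},
    {"domain": "information", "loc": 30000, "effort_pm": 50},
    {"domain": "information", "loc": 24000, "effort_pm": 38},
]

by_domain = defaultdict(list)
for p in past_projects:
    by_domain[p["domain"]].append(p["loc"] / p["effort_pm"])

domain_avg = {d: mean(rates) for d, rates in by_domain.items()}
# A new embedded project would be estimated with the "embedded" average,
# not the organization-wide average.
```

The point of the grouping is visible in the numbers: the two domains produce very different LOC/pm averages, and collapsing them into one organization-wide figure would misestimate both.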

The LOC and FP estimation techniques differ in the level of detail required for decomposition and the target of the partitioning. When LOC is used as the estimation variable, decomposition is absolutely essential and is often taken to considerable levels of detail. The following decomposition approach has been adapted from Phillips:

```
define product scope;
identify functions by decomposing scope;
do while functions remain
  select a function_j
  assign all functions to subfunctions list;
  do while subfunctions remain
    select subfunction_k
    if subfunction_k resembles subfunction_d described in a historical data base
      then note historical cost, effort, size (LOC or FP) data for subfunction_d;
        adjust historical cost, effort, size data based on any differences;
        use adjusted cost, effort, size data to derive partial estimate, Ep;
        project estimate = sum of {Ep};
      else if cost, effort, size (LOC or FP) for subfunction_k can be estimated
        then derive partial estimate, Ep;
          project estimate = sum of {Ep};
        else subdivide subfunction_k into smaller subfunctions;
          add these to subfunctions list;
      endif
    endif
  enddo
enddo
```

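The decomposition loop above can be sketched in Python. The historical "data base" here is a plain dictionary keyed by name, and the adjustment step is omitted; a real tool would match subfunctions on richer attributes and adjust the historical data for differences.

```python
# Sketch of the decomposition estimation loop. Names, sizes, and the
# lookup-by-name "resemblance" test are illustrative assumptions.
historical = {"parse input": 1200, "format report": 800}  # name -> LOC


def estimate(functions, subdivide):
    """functions: subfunction names; subdivide: name -> smaller parts."""
    partials = {}                 # subfunction -> partial estimate Ep (LOC)
    worklist = list(functions)
    while worklist:               # do while subfunctions remain
        sub = worklist.pop()      # select subfunction_k
        if sub in historical:     # resembles a historical entry
            partials[sub] = historical[sub]   # (adjustment step omitted)
        elif sub in subdivide:    # cannot size directly: subdivide it
            worklist.extend(subdivide[sub])   # add parts to subfunctions list
        else:
            raise ValueError(f"cannot size {sub!r}")
    return sum(partials.values())             # project estimate = sum of {Ep}


total = estimate(["reporting"],
                 {"reporting": ["parse input", "format report"]})
```

Here "reporting" has no historical match, so it is subdivided; both of its parts do match, and their partial estimates are summed.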

This decomposition approach assumes that all functions can be decomposed into subfunctions that will resemble entries in a historical data base. If this is not the case, then another sizing approach must be applied. The greater the degree of partitioning, the more likely it is that reasonably accurate LOC estimates can be developed.

For FP estimates, decomposition works differently. Rather than focusing on function, each of the information domain characteristics—inputs, outputs, data files, inquiries, and external interfaces—as well as the 14 complexity adjustment values are estimated. The resultant estimates can then be used to derive an FP value that can be tied to past data and used to generate an estimate.
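A minimal sketch of the FP computation, using the standard average weighting factors for the information domain counts and the adjustment FP = count total × (0.65 + 0.01 × ΣFi); the counts and adjustment values below are invented:

```python
# Sketch of the FP computation. Domain counts and the 14 complexity
# adjustment values (each 0-5) are assumed for illustration; the
# weights are the standard "average" weighting factors.
counts = {"inputs": 24, "outputs": 16, "inquiries": 22,
          "files": 4, "interfaces": 2}
avg_weight = {"inputs": 4, "outputs": 5, "inquiries": 4,
              "files": 10, "interfaces": 7}

count_total = sum(counts[k] * avg_weight[k] for k in counts)
f_values = [3] * 14                       # 14 complexity adjustment values
fp = count_total * (0.65 + 0.01 * sum(f_values))
```

The resulting FP value would then be combined with a historical FP/pm baseline, exactly as LOC is above.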

Regardless of the estimation variable that is used, the project planner begins by estimating a range of values for each function or information domain value. Using historical data or (when all else fails) intuition, the planner estimates an optimistic, most likely, and pessimistic size value for each function or count for each information domain value. An implicit indication of the degree of uncertainty is provided when a range of values is specified.

A three-point or expected value can then be computed. The expected value for the estimation variable (size), S, can be computed as a weighted average of the optimistic (s_opt), most likely (s_m), and pessimistic (s_pess) estimates. For example,

S = (s_opt + 4s_m + s_pess) / 6

gives heaviest credence to the “most likely” estimate and follows a beta probability distribution. We assume that there is a very small probability the actual size result will fall outside the optimistic or pessimistic values.

Once the expected value for the estimation variable has been determined, historical LOC or FP productivity data are applied.
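Putting the last two steps together with invented numbers: the expected size follows directly from the three estimates, and an assumed productivity baseline converts it to effort.

```python
# Expected size from optimistic, most likely, and pessimistic estimates,
# S = (s_opt + 4*s_m + s_pess) / 6, then effort from an assumed baseline.
# All input values are invented for illustration.
s_opt, s_m, s_pess = 4600, 6900, 8600     # LOC estimates
s_expected = (s_opt + 4 * s_m + s_pess) / 6
loc_per_pm = 620                          # historical productivity (assumed)
effort_pm = s_expected / loc_per_pm       # projected person-months
```

Note how the weighting pulls the expected value toward the most likely estimate rather than the midpoint of the range.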