Software Engineering-Metrics for Analysis model

Technical work in software engineering begins with the creation of the analysis model. It is at this stage that requirements are derived and that a foundation for design is established. Therefore, technical metrics that provide insight into the quality of the analysis model are desirable.

Although relatively few analysis and specification metrics have appeared in the literature, it is possible to adapt metrics derived for project application for use in this context. These metrics examine the analysis model with the intent of predicting the "size" of the resultant system. It is likely that size and design complexity will be directly correlated.

Function-Based Metrics

The function point metric can be used effectively as a means for predicting the size of a system that will be derived from the analysis model. To illustrate the use of the FP metric in this context, we consider a simple analysis model representation, illustrated in the figure: a data flow diagram for a function within the SafeHome software. The function manages user interaction, accepting a user password to activate or deactivate the system, and allows inquiries on the status of security zones and various security sensors. The function displays a series of prompting messages and sends appropriate control signals to various components of the security system.

The data flow diagram is evaluated to determine the key measures required for computation of the function point metric:
number of user inputs
number of user outputs
number of user inquiries
number of files
number of external interfaces

Three user inputs (password, panic button, and activate/deactivate) are shown in the figure along with two inquiries (zone inquiry and sensor inquiry). One file (system configuration file) is shown. Two user outputs (messages and sensor status) and four external interfaces (test sensor, zone setting, activate/deactivate, and alarm alert) are also present. These data, along with the appropriate complexity weights, are shown in the figure.
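
The count total used in the adjustment below can be sketched directly. Assuming the simple complexity weighting factors of the standard function point counting table (3, 4, 3, 7, and 5 for inputs, outputs, inquiries, files, and external interfaces, respectively), the counts above yield a count total of 50.

# Unadjusted function point count for the SafeHome user interaction function,
# assuming the "simple" complexity weights of the standard FP counting table.

SIMPLE_WEIGHTS = {
    "user inputs": 3,
    "user outputs": 4,
    "user inquiries": 3,
    "files": 7,
    "external interfaces": 5,
}

counts = {
    "user inputs": 3,         # password, panic button, activate/deactivate
    "user outputs": 2,        # messages, sensor status
    "user inquiries": 2,      # zone inquiry, sensor inquiry
    "files": 1,               # system configuration file
    "external interfaces": 4, # test sensor, zone setting, activate/deactivate, alarm alert
}

count_total = sum(counts[k] * SIMPLE_WEIGHTS[k] for k in counts)
print(count_total)  # 50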

The count total shown in the figure must be adjusted using the following equation:

                 FP = count total × [0.65 + 0.01 × Σ(Fi)]

where count total is the sum of all FP entries obtained from the figure and Fi (i = 1 to 14) are "complexity adjustment values." For the purposes of this example, we assume that Σ(Fi) is 46 (a moderately complex product). Therefore,

                 FP = 50 × [0.65 + (0.01 × 46)] = 55.5 ≈ 56

Based on the projected FP value derived from the analysis model, the project team can estimate the overall implemented size of the SafeHome user interaction function. Assume that past data indicate that one FP translates into 60 lines of code (an object-oriented language is to be used) and that 12 FPs are produced for each person-month of effort. These historical data provide the project manager with important planning information that is based on the analysis model rather than preliminary estimates. Assume further that past projects have found an average of three errors per function point during analysis and design reviews and four errors per function point during unit and integration testing. These data can help software engineers assess the completeness of their review and testing activities.
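
The adjustment and the planning estimates just described can be sketched directly; the historical figures (60 LOC per FP, 12 FP per person-month, three review errors and four testing errors per FP) are the illustrative values assumed above.

# FP adjustment and derived planning estimates for the SafeHome example.

count_total = 50
sum_fi = 46   # sum of the 14 complexity adjustment values (moderately complex)

fp = count_total * (0.65 + 0.01 * sum_fi)   # 55.5, quoted as 56 in the text

loc_estimate   = fp * 60    # assumed 60 LOC per FP for an object-oriented language
effort_pm      = fp / 12    # assumed 12 FP per person-month
review_errors  = fp * 3     # errors expected during analysis and design reviews
testing_errors = fp * 4     # errors expected during unit and integration testing

print(f"{fp:.1f} FP -> {loc_estimate:.0f} LOC, {effort_pm:.1f} person-months, "
      f"{review_errors:.0f} review errors, {testing_errors:.0f} testing errors")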

The Bang Metric

Like the function point metric, the bang metric can be used to develop an indication of the size of the software to be implemented as a consequence of the analysis model. Developed by DeMarco, the bang metric is "an implementation independent indication of system size." To compute the bang metric, the software engineer must first evaluate a set of primitives—elements of the analysis model that are not further subdivided at the analysis level. Primitives are determined by evaluating the analysis model and developing counts for the following forms:

Functional primitives (FuP). The number of transformations (bubbles) that appear at the lowest level of a data flow diagram.

Data elements (DE). The number of attributes of a data object; data elements are not composite data and appear within the data dictionary.

Objects (OB). The number of data objects.

Relationships (RE). The number of connections between data objects.

States (ST). The number of user-observable states in the state transition diagram.

Transitions (TR). The number of state transitions in the state transition diagram.

In addition to these six primitives, additional counts are determined for

Modified manual function primitives (FuPM). Functions that lie outside the system boundary but must be modified to accommodate the new system.

Input data elements (DEI). Those data elements that are input to the system.

Output data elements (DEO). Those data elements that are output from the system.

Retained data elements (DER). Those data elements that are retained (stored) by the system.

Data tokens (TCi). The data tokens (data items that are not subdivided within a functional primitive) that exist at the boundary of the ith functional primitive (evaluated for each primitive).

Relationship connections (REi). The relationships that connect the ith object in the data model to other objects.

DeMarco suggests that most software can be allocated to one of two domains: function strong or data strong, depending upon the ratio RE/FuP. Function-strong applications (often encountered in engineering and scientific applications) emphasize the transformation of data and do not generally have complex data structures. Data-strong applications (often encountered in information systems applications) tend to have complex data models. The thresholds are as follows (see the sketch after this list):

RE/FuP < 0.7 implies a function-strong application.
0.8 < RE/FuP < 1.4 implies a hybrid application.
RE/FuP > 1.5 implies a data-strong application.
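
A minimal sketch of this classification follows. Treating ratios that fall in the gaps between the published thresholds (0.7 to 0.8 and 1.4 to 1.5) as hybrid is an assumption, not part of DeMarco's definition.

# A minimal classification sketch based on the RE/FuP ratio.
def classify_application(re_count: int, fup_count: int) -> str:
    """Classify an application as function strong, hybrid, or data strong."""
    ratio = re_count / fup_count
    if ratio < 0.7:
        return "function strong"   # transformation-oriented
    if ratio > 1.5:
        return "data strong"       # complex data model
    return "hybrid"                # includes the gap ranges (an assumption)

print(classify_application(re_count=4, fup_count=10))   # function strong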

Because different analysis models will partition the model to greater or lesser degrees of refinement, DeMarco suggests that an average token count per primitive,

            TCavg = Σ(TCi) / FuP

be used to control uniformity of partitioning across many different models within an application domain.

To compute the bang metric for function-strong applications, the following algorithm is used:
set initial value of bang = 0;
do while functional primitives remain to be evaluated
          Compute token-count around the boundary of primitive i
          Compute corrected FuP increment (CFuPI)
          Allocate primitive to class
          Assess class and note assessed weight
          Multiply CFuPI by the assessed weight
          bang = bang + weighted CFuPI
enddo

The token-count is computed by determining how many separate tokens are "visible" within the primitive. It is possible that the number of tokens and the number of data elements will differ if data elements can be moved from input to output without any internal transformation. The CFuPI is determined from a table published by DeMarco. A much abbreviated version follows:

TCi        CFuPI
 2           1.0
 5           5.8
10          16.6
15          29.3
20          43.2

The assessed weight noted in the algorithm is determined from 16 different classes of functional primitives defined by DeMarco. A weight ranging from 0.6 (simple data routing) to 2.5 (data management functions) is assigned, depending on the class of the primitive.
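
A sketch of the function-strong computation is given below. The CFuPI values are linearly interpolated from the abbreviated table above, and the class weights passed in the example are hypothetical stand-ins for DeMarco's 16-class table.

# Function-strong bang: weight each primitive's corrected FuP increment (CFuPI)
# by its assessed class weight and accumulate. CFuPI is interpolated from the
# abbreviated table; real use would rely on DeMarco's full (or a calibrated) table.

CFUPI_TABLE = [(2, 1.0), (5, 5.8), (10, 16.6), (15, 29.3), (20, 43.2)]  # (TCi, CFuPI)

def corrected_fup_increment(token_count: int) -> float:
    """Linearly interpolate CFuPI for a primitive's boundary token count."""
    for (t_lo, c_lo), (t_hi, c_hi) in zip(CFUPI_TABLE, CFUPI_TABLE[1:]):
        if t_lo <= token_count <= t_hi:
            return c_lo + (token_count - t_lo) / (t_hi - t_lo) * (c_hi - c_lo)
    raise ValueError("token count outside the abbreviated table")

def function_strong_bang(primitives):
    """primitives: iterable of (token_count, assessed_weight), one per functional primitive."""
    bang = 0.0
    for token_count, weight in primitives:
        bang += corrected_fup_increment(token_count) * weight
    return bang

# Hypothetical primitives: (tokens at the boundary, assessed class weight in 0.6-2.5)
print(round(function_strong_bang([(5, 0.6), (10, 1.0), (12, 2.5)]), 1))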

For data-strong applications, the bang metric is computed using the following algorithm:

set initial value of bang = 0;
do while objects remain to be evaluated in the data model
          Compute count of relationships for object i
          Compute corrected OB increment (COBI)
          bang = bang + COBI
enddo

The COBI is determined from a table published by DeMarco. An abbreviated version follows:
REi        COBI
 1           1.0
 3           4.0
 6           9.0

Once the bang metric has been computed, past history can be used to associate it with size and effort. DeMarco suggests that an organization build its own versions of the CFuPI and COBI tables using calibration information from completed software projects.
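
A corresponding sketch for data-strong applications follows. The COBI values are interpolated from the abbreviated table; as noted above, an organization would normally substitute its own calibrated table.

# Data-strong bang: sum the corrected OB increment (COBI) over the objects in
# the data model, where COBI depends on each object's relationship count.

COBI_TABLE = [(1, 1.0), (3, 4.0), (6, 9.0)]  # (REi, COBI)

def corrected_ob_increment(relationship_count: int) -> float:
    """Linearly interpolate COBI for an object's relationship count."""
    for (r_lo, c_lo), (r_hi, c_hi) in zip(COBI_TABLE, COBI_TABLE[1:]):
        if r_lo <= relationship_count <= r_hi:
            return c_lo + (relationship_count - r_lo) / (r_hi - r_lo) * (c_hi - c_lo)
    raise ValueError("relationship count outside the abbreviated table")

def data_strong_bang(relationship_counts):
    """relationship_counts: relationships connecting each object to other objects."""
    return sum(corrected_ob_increment(re_i) for re_i in relationship_counts)

# Hypothetical data model: three objects with 1, 3, and 5 relationships each
print(round(data_strong_bang([1, 3, 5]), 1))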

Metrics for Specification Quality

Davis and his colleagues propose a list of characteristics that can be used to assess the quality of the analysis model and the corresponding requirements specification: specificity (lack of ambiguity), completeness, correctness, understandability, verifiability, internal and external consistency, achievability, concision, traceability, modifiability, precision, and reusability. In addition, the authors note that high-quality specifications are electronically stored, executable or at least interpretable, annotated by relative importance and stable, versioned, organized, cross-referenced, and specified at the right level of detail.

Although many of these characteristics appear to be qualitative in nature, Davis et al. suggest that each can be represented using one or more metrics. For example, we assume that there are nr requirements in a specification, such that

            nr = nf + nnf

where nf is the number of functional requirements and nnf is the number of nonfunctional (e.g., performance) requirements.

To determine the specificity (lack of ambiguity) of requirements, Davis et al. suggest a metric that is based on the consistency of the reviewers’ interpretation of each requirement:

           Q1 = nui/nr

where nui is the number of requirements for which all reviewers had identical interpretations. The closer the value of Q1 is to 1, the lower is the ambiguity of the specification.

The completeness of functional requirements can be determined by computing the ratio

          Q2 = nu/[ni x ns]

where nu is the number of unique functional requirements, ni is the number of inputs (stimuli) defined or implied by the specification, and ns is the number of states specified. The Q2 ratio measures the percentage of necessary functions that have been specified for a system. However, it does not address nonfunctional requirements. To incorporate these into an overall metric for completeness, we must consider the degree to which requirements have been validated:

         Q3 = nc/[nc + nnv]

where nc is the number of requirements that have been validated as correct and nnv is the number of requirements that have not yet been validated.
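
These three metrics can be computed mechanically once the requirement counts are known. A minimal sketch follows; the counts in the example are hypothetical.

# Specification quality metrics Q1 (specificity), Q2 (completeness of functional
# requirements), and Q3 (validation completeness), as defined above.

def specificity(n_ui: int, n_r: int) -> float:
    """Q1: requirements interpreted identically by all reviewers / total requirements."""
    return n_ui / n_r

def completeness(n_u: int, n_i: int, n_s: int) -> float:
    """Q2: unique functional requirements / (inputs defined or implied x states specified)."""
    return n_u / (n_i * n_s)

def validation_completeness(n_c: int, n_nv: int) -> float:
    """Q3: requirements validated as correct / (validated + not yet validated)."""
    return n_c / (n_c + n_nv)

# Hypothetical specification: 120 requirements, 96 interpreted unambiguously,
# 80 unique functional requirements, 10 inputs, 12 states, 70 validated, 30 pending
print(specificity(96, 120), completeness(80, 10, 12), validation_completeness(70, 30))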