SWE-091 - Measurement Selection

1. Requirements

4.4.2 The project shall select and record the selection of specific measures in the following areas:

a. Software progress tracking.
b. Software functionality.
c. Software quality.
d. Software requirements volatility.
e. Software characteristics.

1.1 Notes

The requirement for a Software Metrics Report is defined in NPR 7150.2, NASA Software Engineering Requirements, Chapter 5, Section 5.3.1 (see also SWE-117).

1.2 Applicability Across Classes

  • Classes C through E, Safety-Critical, are labeled "P (Center) + SO." This means that the requirement applies to the safety-critical aspects of the software and that an approved Center-defined process which meets a non-empty subset of the full requirement can be used to meet its intent.
  • Class C, Not Safety-Critical, and Class G are labeled "P (Center)." This means that a Center-defined process which meets a non-empty subset of the full requirement can be used to meet its intent.
  • Class F is labeled "X (not OTS)." This means that the requirement does not apply to off-the-shelf (OTS) software for this class.

| Class       | A_SC | A_NSC | B_SC | B_NSC | C_SC | C_NSC | D_SC | D_NSC | E_SC | E_NSC | F | G    | H |
| Applicable? |      |       |      |       | X    | P(C)  | X    |       | X    |       | X | P(C) |   |

Key: A_SC = Class A Software, Safety-Critical | A_NSC = Class A Software, Not Safety-Critical | ... | X - Applicable with details, read above for more | P(C) - P(Center), follow center requirements or procedures

2. Rationale

Numerous years of experience on many NASA projects show three key reasons for software measurement activities 329:

  1. To understand and model software engineering processes and products.
  2. To aid in assessing the status of software projects.
  3. To guide improvements in software engineering processes.

Measures are established with these reasons in mind and are tailored to achieve specific project and/or Mission Directorate goals. In general, measures are designed to achieve specific, time-based goals, usually derived from the following general statements:

  • To provide realistic data for progress tracking.
  • To assess the software's functionality when compared to the requirements and the user's needs.
  • To provide indicators of software quality that provide confidence in the final product.
  • To assess the volatility of the requirements throughout the life cycle.
  • To provide indicators of the software's characteristics and performance when compared to the requirements and the user's needs.
  • To improve future planning and cost estimation.
  • To provide baseline information for future process improvement activities.

The measurement areas chosen to provide specific measures are closely tied to the NASA measurement objectives listed above and in NPR 7150.2. The specific example measures listed in the notes of SWE-117 were chosen based on information needs and corresponding questions identified during several NASA software measurement workshops.

3. Guidance

Many resources exist to help a Center develop a software measurement program. The NASA "Software Measurement Guidebook" 329 is aimed at helping organizations begin or improve a measurement program. The Software Engineering Institute 327 at Carnegie Mellon University details specific practices for measurement and analysis within its CMMI-DEV 157, Version 1.3 model. NASA resources that help with the selection of measures for a project include the "Software Metrics Selection Presentation" 316 and the "Project-Type/Goal/Metric Matrix" 089. The Software Technology Support Center (STSC) at Hill Air Force Base offers its "Software Metrics Capability Evaluation Guide" 336. Other resources are suggested in the Resources section of this SWE.

Each organization or Mission Directorate develops its own measurement program, tailored to its needs and objectives and based on an understanding of its unique development environment (see SWE-095 and SWE-096). Once a manager can track actual project measures against planning estimates, any observed differences are used to evaluate the status of the project and to support decisions to take corrective actions. The Software Engineering Process Group (SEPG) can also use these data to improve the software development processes. The manager may also compare actual measures to established norms or benchmarks, either from the Center measurement program or from industry, for other possible insights.

When choosing project measures, check to see if your Center has a pre-defined set of measurements that meets the project's objectives. If so, then the specification of measures for the project begins there. Review the measures specified and initially choose those required by your Center. Make sure they are all tied to project objectives or are measures that are required to meet your organization's objectives.

To determine whether additional measures are required, or if your Center does not have a pre-defined set of measures, think about the questions that must be asked to satisfy project objectives. For example, if an objective is to complete on schedule, the following might need to be asked:

  • How long is the schedule?
  • How much of the schedule has been used and how much is left?
  • How much work has been done? How much work remains to be done?
  • How long will it take to do the remaining work?

From these questions, determine what needs to be measured to get the answers to key questions. One possible set of measures for the above set of questions is "earned value" (the sum of the budgeted costs for tasks and products that have actually been produced, completed or in progress, at a given point in the schedule). Similarly, think about the questions that need to be answered for each objective and see what measures will provide the answers. If several different measures will provide the answers, choose the measures that are already being collected or those that are easily obtained from tools.
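
As a concrete illustration of the earned-value idea, the short sketch below sums budgeted cost over completed and in-progress tasks and compares the result to planned and actual cost. The task names and dollar figures are invented for illustration; they are not drawn from any NASA project or tool.

```python
# Minimal earned-value sketch: sum the budgeted cost of work performed
# (BCWP) over tasks, then compare against planned and actual costs.
# Task data and numbers are illustrative only.

tasks = [
    # (name, budgeted_cost, fraction_complete)
    ("design", 400.0, 1.00),   # complete
    ("code",   600.0, 0.50),   # in progress
    ("test",   300.0, 0.00),   # not started
]

earned_value = sum(budget * done for _, budget, done in tasks)  # BCWP
planned_value = 400.0 + 600.0   # BCWS: work scheduled to date (assumed)
actual_cost = 750.0             # ACWP: cost actually incurred to date (assumed)

schedule_variance = earned_value - planned_value  # negative => behind schedule
cost_variance = earned_value - actual_cost        # negative => over budget

print(f"EV={earned_value:.0f}  SV={schedule_variance:.0f}  CV={cost_variance:.0f}")
```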

The "Software Metrics Selection Presentation" 316 gives a method for choosing project measures and provides a number of examples of measurement charts, with information showing how the charts might be useful to the project. The "Project-Type/Goal/Metric Matrix" 089, developed following the series of NASA software workshops at Headquarters, may also be helpful in choosing the project's measures. This matrix specifies the types of measures a project might want to collect to meet a particular goal, based on project characteristics such as size.

The measurements need to be defined so that project personnel collect data items consistently. The measurement definitions are documented, along with the measurement objectives, in the project Software Management Plan (see SWE-102) or Software Metrics Report (see SWE-117). Items to be included as part of a project's measurement collection and storage procedure (illustrated in the sketch after this list) are:

  • A clear description of all data to be provided.
  • A clear and precise definition of terms.
  • Who is responsible for providing which data.
  • When and to whom the data are to be provided.
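
One lightweight way to keep these definitions consistent is to record each measure in a structured form. The sketch below shows one possible shape for such a record; the field names and example values are assumptions for illustration, not a NASA-prescribed schema.

```python
# Sketch of a measurement definition record capturing the four items above.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MeasurementDefinition:
    name: str        # clear description of the data to be provided
    terms: dict      # precise definitions of the terms used
    provider: str    # who is responsible for providing the data
    schedule: str    # when the data are to be provided
    recipient: str   # to whom the data are to be provided

requirements_volatility = MeasurementDefinition(
    name="Requirements volatility",
    terms={"change": "any added, deleted, or modified baselined requirement"},
    provider="Requirements lead",
    schedule="Monthly, at the status review",
    recipient="Software project manager",
)
```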

Data collection by the software development team works better when the time the team spends collecting data is minimized. If the software developers see collection as a non-value-added task, data collection will become sporadic, affecting data quality and usefulness. Some suggestions for specifying measures:

  • Don't collect too many measures. Be sure the project is going to use the measures.
  • Think about how the project will use them. Visualize how the charts should look to best communicate the information.
  • Make sure measures apply to project objectives (or are being provided to meet sponsor or institutional objectives).
  • Consider whether suitable measures already exist or whether they can be collected easily. The use of tools that automatically collect needed measures helps ensure consistent, accurate collection.

Issue tracking tools (e.g., JIRA) can be used to track and report measures and to provide status reports on all open and closed issues, including their position in the issue tracking life cycle. This can be used as a measure of progress.
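
For example, issue counts can be pulled automatically from a JIRA server through its standard REST search endpoint, as in the sketch below. The instance URL, project key ("FSW"), and credentials are placeholders, not real values.

```python
# Sketch of pulling open/closed issue counts from a JIRA server as a
# progress measure. The REST search endpoint and its "total" field are
# part of the standard JIRA REST API; everything else is a placeholder.
import requests

JIRA = "https://jira.example.com"   # placeholder instance
AUTH = ("user", "api-token")        # placeholder credentials

def issue_count(jql: str) -> int:
    """Return the number of issues matching a JQL query."""
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # counts only, no issue bodies
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

open_issues = issue_count("project = FSW AND statusCategory != Done")
closed_issues = issue_count("project = FSW AND statusCategory = Done")
print(f"open={open_issues} closed={closed_issues}")
```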

Static analysis tools (e.g., Coverity, CodeSonar) can provide measures of software quality and identify software characteristics at the source code level.
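
Commercial analyzers report rich defect and complexity data; as a minimal stand-in, the sketch below computes two simple source-level measures (logical lines and comment density) using only the Python standard library. The "src" directory and the '#' comment convention are assumptions for illustration.

```python
# Illustrative source-level measures: logical lines and comment density.
# Real static analysis tools report far richer quality measures.
from pathlib import Path

def source_measures(path: Path) -> dict:
    code = comments = blank = 0
    for line in path.read_text(errors="replace").splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):  # assumes '#'-style comments
            comments += 1
        else:
            code += 1
    return {
        "file": str(path),
        "sloc": code,
        "comment_density": comments / max(code + comments, 1),
        "total_lines": code + comments + blank,
    }

for f in Path("src").rglob("*.py"):  # 'src' tree is a placeholder
    print(source_measures(f))
```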

Measures such as requirements volatility can be tracked with general-purpose requirements development and management tools (e.g., DOORS), which can also report on software functionality and software verification progress.
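
A simple volatility measure is the number of added, deleted, or modified requirements in a period divided by the baseline size. The sketch below computes it from a change-log export; since export formats vary by tool, the CSV file name, its columns ("period", "change_type"), and the baseline size are assumptions for illustration.

```python
# Sketch of a requirements volatility measure computed from a change log
# exported from a requirements management tool. Column names, file name,
# and the baseline size are illustrative assumptions.
import csv
from collections import Counter

BASELINE_SIZE = 250  # requirements in the current baseline (illustrative)

changes = Counter()
with open("req_changes.csv", newline="") as fh:  # placeholder export file
    for row in csv.DictReader(fh):
        if row["change_type"] in ("added", "deleted", "modified"):
            changes[row["period"]] += 1

for period in sorted(changes):
    volatility = changes[period] / BASELINE_SIZE
    print(f"{period}: {changes[period]} changes, volatility {volatility:.1%}")
```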

Links to the aforementioned tools are found in section "5.1: Tools" on the Resources tab of this SWE.

4. Small Projects

See the small project information in SWE-090. A few key measures to monitor the project's status and meet sponsor and institutional objectives may be sufficient. Data collection may be limited in frequency. The use of tools that collect measures automatically helps considerably. In some cases, an organization will provide staff support to help with measurement collection, storage, and analysis for a group of small projects.

5. Resources

  • (SWEREF-089) "Project-Type/Goal/Metric Matrix," NASA Software Working Group Metrics Subgroup, 2004. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
  • (SWEREF-157) CMMI Development Team (2010). "CMMI for Development, Version 1.3" (CMU/SEI-2010-TR-033). Software Engineering Institute.
  • (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN. SPAN is a compendium of processes, procedures, job aids, examples, and other recommended best practices.
  • (SWEREF-252) Mills, Everald E. (1988). "Software Metrics," SEI Curriculum Module SEI-CM-12-1.1. Carnegie Mellon University, Software Engineering Institute. Retrieved December 2017 from http://www.sei.cmu.edu/reports/88cm012.pdf.
  • (SWEREF-316) Seaman, Carolyn (2005). "Software Metrics Selection Presentation," a tutorial prepared for NASA personnel by the NASA Software Working Group and the Fraunhofer Center for Empirical Software Engineering, College Park, MD. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook. Retrieved February 27, 2012 from https://nen.nasa.gov/web/software/nasa-software-process-asset-library-pal?.
  • (SWEREF-329) "Software Measurement Guidebook," NASA-GB-001-94, NASA Software Engineering Program. Doc ID: 19980228474 (acquired Nov 14, 1998).
  • (SWEREF-336) "Software Metrics Capability Evaluation Guide," Software Technology Support Center (STSC) (1995), Hill Air Force Base. Accessed June 25, 2019.
  • (SWEREF-355) Westfall, Linda, The Westfall Team (2005). "12 Steps to Useful Software Metrics." Retrieved November 3, 2014 from http://www.win.tue.nl/~wstomv/edu/2ip30/references/Metrics_in_12_steps_paper.pdf.
  • (SWEREF-572) Public Lessons Learned Entry 2218, "Flight Software Engineering Lessons."
  • (SWEREF-577) Public Lessons Learned Entry 3556, "Selection and use of Software Metrics for Software Development Projects."

5.1 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN).

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to Measurement Selection:

1. Selection and use of Software Metrics for Software Development Projects. Lesson Learned Number 3556:  "The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provide the driving event behind this lesson.

"Metrics or measurements provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project." The Recommendation states that: "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will used to identify the 'health' or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics:

  • "The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing).
  • "The number of errors found during software verification/validation.
  • "The number of errors found in delivered software (a.k.a., 'process escapes').
  • "Projected versus actual labor hours expended.
  • "Projected versus actual lines of code, and the number of function points in delivered software." 577

2. Flight Software Engineering Lessons. Lesson Learned Number 2218: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality."

Recommendation No. 8 of this lesson learned provides this step, among others, to mitigate the risk of defects in the FSW development process:

"Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and test bed environments. These measures also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and test bed environments, and where they have successfully passed all stress tests." 572

3. Also, see Lessons Learned listed in SWE-090.