

SWE-089 - Software Peer Reviews and Inspections - Basic Measurements

1. Requirements

5.3.4 The project manager shall, for each planned software peer review or software inspection, record necessary measurements. 

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-089 - Last used in rev NPR 7150.2D

Rev A

4.3.4 The project shall, for each planned software peer review/inspection, record basic measurements.

Difference between A and B: No change

Rev B

5.3.4 The project manager shall, for each planned software peer review or software inspection, record basic measurements.

Difference between B and C: Changed "basic" to "necessary" measurements

Rev C

5.3.4 The project manager shall, for each planned software peer review or software inspection, record necessary measurements.

Difference between C and D: No change

Rev D

5.3.4 The project manager shall, for each planned software peer review or software inspection, record necessary measurements.



1.3 Applicability Across Classes

Class:         A      B      C      D      E      F

Applicable?

Key: ✓ - Applicable | ✗ - Not Applicable


2. Rationale

As with other engineering practices, it is important to monitor defects, pass/fail results, and effort. This is necessary to ensure that peer reviews and software inspections are being used appropriately as part of the overall software development life cycle, and to be able to improve the process itself over time. Moreover, key measurements are required to interpret inspection results correctly. For example, if very little effort is expended on an inspection, or if key phases (such as individual preparation) are skipped altogether, it is very unlikely that the inspection will have found a majority of the existing defects.

3. Guidance

3.1 Best Practices

NASA-STD-8739.9, Software Formal Inspections Standard, includes lessons that have been learned by practitioners over the last decade.

The Software Formal Inspections Standard suggests several best practices related to the collection and the use of inspection data. 277

This requirement calls for recording effort, the number of participants, the number and types of defects found, pass/fail results, and identification information to ensure the effectiveness of the inspection. Where peer reviews and software inspections yield poorer results than expected, some questions to address may include:

  • How are peer reviews and software inspections being applied relative to other verification and validation (V&V) activities? It may be worth considering whether this process is being applied only after other approaches to quality assurance (e.g., unit testing) that are already finding defects, perhaps less cost-effectively.
  • Are peer review and software inspection practices being followed appropriately? Tailoring away key parts of the inspection process (e.g., planning or preparation), or undertaking inspections with key expertise missing from the team, will not produce the best results.

See also 5.03 - Inspect - Software Inspection, Peer Reviews, Inspections

3.2 Collection and Analysis of Data

As with other forms of software measurement, best practices for ensuring that the collection and analysis of peer review and software inspection metrics are done well include:

  • Clear triggers indicating when the metrics are gathered and analyzed (e.g., after every inspection; once per month).
  • Clear assignment of responsibility for this task.
  • Consistent recording of the units of measure (e.g., one inspection does not record effort in person-hours and another in calendar days).
  • Consistency checking of collected measures, including investigation of outliers to verify whether the data was entered correctly and the correct definitions were applied (a minimal sketch of such a check follows this list).
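
As a concrete illustration of the last point, a minimal consistency check might flag effort values that fall far outside the typical range so they can be investigated for unit or data-entry errors. The sketch below assumes effort is recorded in person-hours and is illustrative only.

```python
# Minimal sketch of a consistency check on recorded inspection effort.
# Assumes effort is recorded in person-hours; values far from the median are
# flagged for investigation (e.g., effort accidentally entered in minutes or days).
from statistics import median

def flag_effort_outliers(effort_hours: list[float], factor: float = 5.0) -> list[int]:
    """Return indices of values more than `factor` times above or below the median."""
    if not effort_hours:
        return []
    med = median(effort_hours)
    if med <= 0:
        return list(range(len(effort_hours)))  # zero or negative effort is itself suspect
    return [i for i, e in enumerate(effort_hours)
            if e > factor * med or e < med / factor]

# Example: the 200-hour entry is almost certainly a unit or data-entry error.
print(flag_effort_outliers([4.5, 6.0, 5.0, 200.0, 3.5]))   # -> [3]
```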

Best practices related to the collection and analysis of inspection data include:

  • The moderator is responsible for compiling and reporting the inspection data.
  • The project manager explicitly specifies the location and the format of the recorded data.
  • Inspections are checked for process compliance using the collected inspection data, for example, to verify that:
    • Any inspection team consists of at least three persons.
    • Any inspection meeting is limited to approximately 2 hours, and if the discussion looks likely to extend far longer, the remainder of the meeting is rescheduled for another time when inspectors can be fresh and re-focused.
    • The rate of inspection adheres to the recommended or specified rate for different inspection types.
  • A set of analyses is performed periodically on the recorded data to monitor progress (e.g., the number of inspections planned versus completed) and to understand the costs and benefits of inspection (a minimal sketch of such checks follows this list).
  • The outcome of the analyses is leveraged to support the continuous improvement of the inspection process.
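
The compliance and progress checks above lend themselves to simple automation. The sketch below is a minimal illustration in Python; the record fields and the numeric thresholds (team size, meeting length, inspection rate) are assumptions standing in for whatever the project's inspection procedure specifies.

```python
# Minimal sketch of process-compliance and progress checks on recorded inspection data.
# Field names and thresholds are illustrative assumptions, not a NASA-prescribed schema.
from dataclasses import dataclass

MIN_TEAM_SIZE = 3          # inspections need at least three participants
MAX_MEETING_HOURS = 2.0    # "approximately 2 hours" guideline
MAX_PAGES_PER_HOUR = 20    # assumed rate limit; use the rate specified for the inspection type

@dataclass
class InspectionRecord:
    work_product: str
    team_size: int         # inspectors, not counting observers
    meeting_hours: float
    pages_inspected: int   # or lines of code, depending on the work product
    completed: bool

def compliance_findings(rec: InspectionRecord) -> list[str]:
    """Return process-compliance findings for a single inspection record."""
    findings = []
    if rec.team_size < MIN_TEAM_SIZE:
        findings.append(f"{rec.work_product}: only {rec.team_size} inspectors (minimum {MIN_TEAM_SIZE})")
    if rec.meeting_hours > MAX_MEETING_HOURS:
        findings.append(f"{rec.work_product}: meeting ran {rec.meeting_hours} h; reschedule the remainder")
    if rec.meeting_hours > 0 and rec.pages_inspected / rec.meeting_hours > MAX_PAGES_PER_HOUR:
        findings.append(f"{rec.work_product}: inspection rate above {MAX_PAGES_PER_HOUR} pages/hour")
    return findings

def progress_summary(records: list[InspectionRecord], planned: int) -> str:
    """Planned-versus-completed progress, one of the periodic analyses mentioned above."""
    done = sum(1 for r in records if r.completed)
    return f"{done} of {planned} planned inspections completed"
```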

3.3 Metrics for Projects Using Acquisition

In an acquisition context, there are several important considerations for assuring proper inspection usage by software provider(s):

  • The metrics to be furnished by the software provider(s) must be specified in the contract.
  • It must be clear and agreed upon ahead of time whether or not software providers can define their own defect taxonomies. If providers may use their own taxonomies, request that they furnish the definition or data dictionary for each taxonomy. It is also important (especially when the provider team contains subcontractors) to ensure that consistent definitions are used for: defect types; defect severity levels; and effort reporting (i.e., which activities are counted as part of the actual inspection). A sketch of such a data dictionary follows this list.
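
Where providers define their own taxonomies, a shared data dictionary that maps each provider's labels onto common defect types and severities keeps reporting consistent across the team, including subcontractors. The sketch below is a hypothetical illustration; the severity names and provider labels are assumptions, not taken from any NASA or contractor taxonomy.

```python
# Hypothetical sketch of a shared data dictionary for defect reporting across providers.
# Severity names and provider labels are assumptions used only for illustration.
from enum import Enum

class Severity(Enum):
    MAJOR = "major"   # e.g., would cause a failure or require a requirement/design change
    MINOR = "minor"   # e.g., editorial or style issue with no functional impact

# Each provider (including subcontractors) supplies a mapping from its own labels
# to the common definitions as part of its data dictionary.
PROVIDER_A_SEVERITY = {
    "critical": Severity.MAJOR,
    "significant": Severity.MAJOR,
    "cosmetic": Severity.MINOR,
}

def normalize_severity(provider_label: str) -> Severity:
    """Translate a provider-specific severity label into the common taxonomy."""
    try:
        return PROVIDER_A_SEVERITY[provider_label.strip().lower()]
    except KeyError:
        raise ValueError(f"Unmapped severity label {provider_label!r}; "
                         "update the provider's data dictionary") from None
```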

3.4 Base Metrics

Examples of Software Peer Review Base Metrics

Category: Size
  • Size Planned: Lines of code or document pages that were planned to be inspected
  • Size Actual: Lines of code or document pages that were inspected or peer-reviewed

Category: Time
  • Time Meeting: Time required to complete the inspection; if the inspection is done over several meetings, add up the total time required

Category: Effort
  • Planning: Total number of hours spent planning and preparing for the review
  • Meeting Time: Total number of hours spent in the inspection meeting (multiply the meeting time by the number of participants)
  • Rework: Total number of hours spent by the author making improvements based on the findings

Category: Defects
  • Major Defects Found: Number of major defects found during the review
  • Minor Defects Found: Number of minor defects found during the review
  • Major Defects Corrected: Number of major defects corrected during rework
  • Minor Defects Corrected: Number of minor defects corrected during rework

Category: Other
  • Number of Inspectors: Number of people, not counting observers, who participated in the review
  • Product Appraisal: Review team's assessment of the work product (accepted, accepted conditionally, review again following rework, review not complete, etc.)

Category: Derived Data
  • Peer Review Defects: The average number of defects found per peer review, used to determine defect density over time.

Number of defects found per Peer Review = [Total number of defects] / [Total number of Peer Reviews]
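
A minimal sketch of the derived calculation follows, assuming one pair of major/minor defect counts has been recorded for each peer review:

```python
# Minimal sketch of the derived "defects per peer review" metric.
# Assumes one (major, minor) defect-count pair is recorded per peer review.

def defects_per_peer_review(defect_counts: list[tuple[int, int]]) -> float:
    """Average number of defects (major + minor) found per peer review."""
    if not defect_counts:
        return 0.0
    total_defects = sum(major + minor for major, minor in defect_counts)
    return total_defects / len(defect_counts)

# Example: three reviews that found (2 major, 5 minor), (0, 3), and (1, 4) defects.
print(defects_per_peer_review([(2, 5), (0, 3), (1, 4)]))   # -> 5.0
```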

3.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.6 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center-managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

SPAN Links

4. Small Projects

Projects with small budgets or a limited number of personnel need not use complex or user-intensive data collection logistics.

Given the amount of data typically collected, well-known and easy-to-use tools such as Excel sheets or small databases (e.g., implemented in MS Access) are usually sufficient to store and analyze the data from the inspections performed on a project.
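
For example, a single table in SQLite (which ships with Python's standard library) can hold the base metrics from section 3.4; the schema below is a sketch under that assumption, not a required format:

```python
# Minimal sketch of storing peer review base metrics in a small SQLite database.
# Column names mirror the base metrics in section 3.4; the schema is illustrative only.
import sqlite3

conn = sqlite3.connect("peer_reviews.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS inspections (
        work_product        TEXT,
        size_planned        INTEGER,   -- lines of code or document pages planned
        size_actual         INTEGER,   -- lines of code or document pages inspected
        meeting_hours       REAL,
        planning_hours      REAL,
        rework_hours        REAL,
        major_defects_found INTEGER,
        minor_defects_found INTEGER,
        num_inspectors      INTEGER,
        appraisal           TEXT       -- accepted, conditional, re-review, etc.
    )
""")
conn.execute(
    "INSERT INTO inspections VALUES (?,?,?,?,?,?,?,?,?,?)",
    ("SRS section 3", 40, 38, 1.5, 6.0, 4.0, 2, 7, 4, "accepted conditionally"),
)
conn.commit()

# Simple periodic analysis: average defects found per inspection.
row = conn.execute(
    "SELECT AVG(major_defects_found + minor_defects_found) FROM inspections"
).fetchone()
print(f"Average defects per inspection: {row[0]:.1f}")
conn.close()
```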

5. Resources

5.1 References

5.2 Tools


Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

6.2 Other Lessons Learned

  • Through hundreds of inspections and analyses of their results, the Jet Propulsion Laboratory (JPL) has identified key lessons learned that lead to more effective inspections 235, including:
    • Capturing statistics on the number of defects, the types of defects, and the time expended by engineers on the inspections.


7. Software Assurance

SWE-089 - Software Peer Reviews and Inspections - Basic Measurements
5.3.4 The project manager shall, for each planned software peer review or software inspection, record necessary measurements. 

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Confirm that the project records the software peer reviews and results of software inspection measurements.

7.2 Software Assurance Products

  • None at this time.


    Objective Evidence

    • Peer review metrics, reports, data, or findings
    • List of participants in the software peer reviews
    • Defect or problem reporting tracking data
    • Software assurance audit reports on the peer-review process 

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
    • Meeting minutes with attendance lists, or SA meeting notes or assessments of the activities, recorded in the project repository.
    • Status report, email, or memo containing dated statements that confirmation has been performed (a checklist of confirmations could be used to record when each confirmation has been done).
    • Signatures on SA-reviewed or witnessed products or activities, or
    • Status report, email, or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that “IV&V Program Execution exists”, the summary might be: The IV&V Plan is in draft state and is expected to be complete by (some date).
      • To confirm that “Traceability between software requirements and hazards with SW contributions exists”, the summary might be: x% of the hazards with software contributions are traced to the requirements.
    • The specific products listed in the Introduction of 8.16 are also objective evidence, in addition to the examples listed above.

7.3 Metrics

  • # of Non-Conformances from reviews (Open vs. Closed; # of days Open)
  • Preparation time each review participant spent preparing for the review
  • Time required to close peer review audit Non-Conformances
  • Trends on non-conformances from audits (Open, Closed, Life cycle Phase)
  • # of peer review Non-Conformances per work product vs. # of peer reviewers
  • # of peer review participants vs. total # invited
  • Total # of peer review Non-Conformances (Open, Closed)
  • # of Non-Conformances identified by software assurance during each peer review
  • # of Non-Conformances identified in each peer review
  • # of peer reviews performed vs. # of peer reviews planned
  • Time required to close review Non-Conformances
  • Preparation time each audit participant spent preparing for audit
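
A minimal sketch of how a few of these metrics might be computed from non-conformance tracking data is shown below; the record fields are assumptions for illustration, not a prescribed tracking-system schema.

```python
# Minimal sketch of computing open/closed counts and days-open from tracking data.
# Field names are illustrative assumptions, not a prescribed tracking-system schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class NonConformance:
    opened: date
    closed: Optional[date] = None   # None while the non-conformance is still open

def open_vs_closed(items: list[NonConformance]) -> tuple[int, int]:
    """Return (# open, # closed) non-conformances."""
    open_count = sum(1 for i in items if i.closed is None)
    return open_count, len(items) - open_count

def days_open(item: NonConformance, as_of: date) -> int:
    """Number of days a non-conformance has been (or was) open."""
    return ((item.closed or as_of) - item.opened).days

records = [
    NonConformance(opened=date(2024, 3, 1), closed=date(2024, 3, 10)),
    NonConformance(opened=date(2024, 4, 2)),
]
print(open_vs_closed(records))                          # -> (1, 1)
print(days_open(records[1], as_of=date(2024, 4, 12)))   # -> 10
```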

See also Topic 8.18 - SA Suggested Metrics

7.4 Guidance

Software Assurance needs to confirm that all the measurements from any peer reviews are collected and stored after the review. Centers may have a standard set of metrics to be collected for peer reviews contained in their process asset library. If so, software assurance should verify that the appropriate set was recorded. There is also a set of recommended metrics in the software guidance portion of this requirement. An additional useful piece of information to record is the type of product that is being inspected. 

Software assurance may want to collect some measures on its involvement in peer reviews, with two objectives in mind:

  • To improve estimates of the amount of software assurance effort that is usually needed for various types of peer reviews. A sample set of assurance metrics to collect for each type of peer review is:
    • The number of software assurance personnel who participated in the peer review
    • The number of hours spent in assuring that peer review planning and close-out activities were performed as required by NPR 7150.2
    • The number of hours the software assurance personnel spent reviewing the peer review material before the meeting
    • The number of hours spent in the peer review meeting
    • The number of hours software assurance spent tracking the peer review findings to closure
  • To demonstrate the value of software assurance participation in peer reviews, collect:
    • The number of issues and defects found by software assurance
    • A list of the issues and defects found, and their severity

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

