- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
5.3.3 The project manager shall, for each planned software peer review or software inspection:
a. Use a checklist or formal reading technique (e.g., perspective-based reading) to evaluate the work products.
b. Use established readiness and completion criteria.
c. Track actions identified in the reviews until they are resolved.
d. Identify the required participants.
1.1 Notes
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 History
1.3 Applicability Across Classes
2. Rationale
Checklists, criteria, and tracking of actions and participants are needed to conduct an effective peer review or inspection. Peer reviews and inspections contribute to product and process quality, risk reduction, confirmation of approach, defect identification, and product improvements.
3. Guidance
This requirement calls out four important best practices that are associated with effective inspections:
3.1 Peer Review Checklist
Using a checklist supports the software peer review or software inspection team members by giving them a memory aid regarding what quality aspects they are responsible for in the document under review. The checklists provide a concrete way for the inspection to improve over time. Defect types that are seen to continually slip through peer reviews or software inspections are added to the checklist so that future teams are aware that they are important to look for. Checklist items that no longer lead to defects being found are candidates for deletion. If kept up to date in this way, checklists provide a timely and efficient list of the types of issues on which review time should be spent.
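The checklist-evolution practice described above can be expressed as a simple data-driven procedure. The following is an illustrative Python sketch only, not part of any NASA standard; the threshold and the example defect-type names are assumptions.

```python
from collections import Counter

def update_checklist(checklist, found_defects, escaped_defects, min_hits=1):
    """Evolve a peer-review checklist from defect data.

    checklist       -- list of checklist item names
    found_defects   -- defect types caught in recent reviews
    escaped_defects -- defect types that slipped past reviews
    min_hits        -- items catching fewer defects than this are dropped
    """
    hits = Counter(found_defects)
    # Drop items that no longer lead to defects being found.
    kept = [item for item in checklist if hits[item] >= min_hits]
    # Add items for defect types that continue to slip through reviews.
    for defect_type in escaped_defects:
        if defect_type not in kept:
            kept.append(defect_type)
    return kept

# Hypothetical example data:
current = ["missing requirement trace", "uninitialized variable", "style"]
updated = update_checklist(
    current,
    found_defects=["missing requirement trace", "uninitialized variable"],
    escaped_defects=["off-nominal path not handled"],
)
# "style" caught nothing and is dropped; the escaped type is added.
```

Kept up to date this way, the checklist itself becomes the record of where review time pays off.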
Using a formal reading technique such as perspective-based reading helps ensure that the viewpoints of the various customers and stakeholders of the product under review are represented. Peer review or inspection team members take on the roles to represent the different points of view.
The goal of PBR is to provide operational scenarios in which members of a review team read a document from a particular perspective, e.g., tester, developer, user. The assumption is that the combination of different perspectives provides better coverage of the document, i.e., uncovers a wider range of defects, than the same number of readers using their usual technique. 474
3.2 Readiness and Completion Criteria
Readiness and completion criteria ensure that peer review or software inspection time is spent effectively and that confidence can be placed in the outcome. Readiness criteria are satisfied before an inspection can begin. They represent the minimal set of quality characteristics to be met before it is worthwhile for a team of subject matter experts to spend significant time understanding, assessing, and discussing the product under review or inspection. Readiness criteria also indicate the preparedness of the peer review or software inspection team to conduct the review or inspection. They may specify standards and guidelines to be adhered to, set project-specific criteria such as the level of detail or a particular policy to be followed, and may require the use of automated tools (such as static analysis or traceability tools). Completion criteria represent a set of measurable activities to be completed at the end of the inspection so that statements can be made with confidence regarding the outcome. For example, completion criteria may require that all process steps have been completed and documented, that metrics have been collected, or that all major defect fixes have been completed and approved.
Table G-19 below is from the NASA System Engineering Processes and Requirements, NPR 7123.1, and shows the entrance criteria and success criteria for a peer review activity.
Table G-19 - Peer Review Entrance and Success Criteria
Entrance Criteria | Success Criteria |
---|---|
*Required per NPD 2570.5.
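One lightweight way to make readiness and completion criteria operational is to record each criterion as a named status flag and refuse to start (or close) an inspection while any flag is unmet. The sketch below is illustrative only; the criterion names are invented for the example and are not drawn from NPR 7123.1.

```python
def unmet(criteria, status):
    """Return the names of criteria whose status flag is not satisfied."""
    return [name for name in criteria if not status.get(name, False)]

# Hypothetical readiness and completion criteria for one inspection:
readiness = ["material distributed", "checklists prepared", "inspectors assigned"]
completion = ["process steps documented", "metrics collected", "major defect fixes approved"]

status = {
    "material distributed": True,
    "checklists prepared": True,
    "inspectors assigned": False,  # not yet satisfied
}

blockers = unmet(readiness, status)
ready_to_start = not blockers  # inspection may begin only with no blockers
```

The same `unmet` check applied to the `completion` list at the end of the inspection gives the moderator an objective close-out gate.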
3.3 Action Items
Action items are required to be tracked through completion to ensure that the inspection has a positive impact on software quality. Due to time pressures, teams that identify significant numbers of defects in an inspection and then do not take the time to resolve them are wasting effort. Tracking the action items ensures that such an outcome is avoided. In addition to the impact on software quality, this best practice also aims at keeping the morale of inspection teams high. Nothing is more demoralizing for a team than investing significant time in identifying and reporting software defects that are never fixed afterward.
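The tracking obligation amounts to a simple invariant: every action item opened during a review must reach a resolved state, and any item still open blocks close-out. A minimal Python sketch, with field names that are illustrative rather than taken from the NPR:

```python
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    description: str
    owner: str
    resolved: bool = False

@dataclass
class PeerReview:
    work_product: str
    actions: list = field(default_factory=list)

    def open_actions(self):
        """Action items not yet resolved -- these block review close-out."""
        return [a for a in self.actions if not a.resolved]

    def can_close(self):
        return not self.open_actions()

# Hypothetical usage:
review = PeerReview("design document")
item = ActionItem("clarify interface timing", "author")
review.actions.append(item)
blocked = not review.can_close()  # True while the action stays open
item.resolved = True              # verified fix closes the action
```

In practice a defect-tracking tool plays the role of this structure; the point is only that close-out is a computed property of the open-action list, not a judgment call.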
3.4 Planning Phase
Effective peer reviews or software inspections begin with a planning phase in which plans are made regarding the scope of the document under review, the time available, and other key parameters. One of the most important issues to address in this step is to analyze which perspectives of stakeholders are needed to ensure that all quality aspects can be adequately addressed in an inspection. Taking the time to apply a rigorous inspection process will not automatically yield an effective outcome if the actual engineering knowledge and expertise are never brought to bear on analyzing the document.
3.5 Use Of Checklists
NASA-STD-8739.9, Software Formal Inspections Standard, suggests several best practices related to the use of checklists. It recommends that:
- Each team member uses a checklist or similar work aid, with items relevant to the perspective each is representing.
- Checklists are included as input to any inspection.
- Inspectors use the given checklists during their preparation. 277
See also SWE-087 - Software Peer Reviews and Inspections for Requirements, Plans, Design, Code, and Test Procedures, 7.10 - Peer Review and Inspections Including Checklists
3.6 Inspecting Quality
The Standard offers detailed suggestions as to what types of quality aspects need to be covered by checklists in a variety of different circumstances.
3.7 Selecting Stakeholder Representation
The Standard also suggests that the perspectives of key stakeholders be represented on the inspection team. Recommended practices include:
- Choose inspectors in consultation with the author.
- Moderator ensures that objectivity in the selection is maintained.
- One individual may represent multiple perspectives.
- Inspectors representing key perspectives must be present and prepared for each relevant stage of the inspection process. 277
3.8 Readiness and Completion Criteria
Best practices related to the establishment of readiness and completion criteria include:
- Entrance and exit criteria are specified as part of the inspection procedure; the Standard provides several examples of criteria that NASA teams have found useful.
- During the inspection planning, the work product under inspection is evaluated against the entrance criteria before the inspection can begin.
- The project manager defines the criteria to be used to determine if an inspection ends by passing the document under review, or requiring a re-inspection.
- To ensure that close-out activities are undertaken, at the end of any inspection meeting, the moderator:
- Determines based on the outcome of inspections, using the criteria previously defined by the project manager, if a re-inspection will be needed.
- Compiles, as the outcome of an inspection meeting:
- A list of classified anomalies or defects identified from the inspections.
- A list of change requests or discrepancy reports for defects found in work products of the previous development phase that have been put under configuration management (CM).
- The inspected work product, marked up with clerical defects.
- Ensures that authors of the work product inspected receive the list of classified anomalies or defects.
3.9 Tracking Actions To Resolution
Best practices related to tracking actions identified in the reviews until they are resolved include:
- Action items and defects discussed during the inspection meeting are compiled and tracked starting at that time.
- The author's fixes to defects discovered during an inspection are verified before the end of that inspection.
- Each project defines a method for documenting, tracking, and measuring such action items.
See also 5.03 - Inspect - Software Inspection, Peer Reviews, Inspections.
3.10 Identification of Participants
Best practices related to the identification of required participants include:
- The team consists of a minimum of three inspectors, reflecting the diverse viewpoints and objectivity that must be brought to bear during an inspection.
- The inspection team members are selected based on an analysis of the key stakeholders in the document under inspection.
3.11 Other Resources
The Fraunhofer Center 421 in Maryland maintains a public website that collects checklists from NASA and other contexts, which can be applied to the types of work products mentioned in these requirements.
See also SWE-089 - Software Peer Reviews and Inspections - Basic Measurements.
3.12 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook:
3.13 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki 197
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
SPAN Links |
---|
4. Small Projects
Checklists for various types of inspections can be found at the Fraunhofer Center website 421. Various inspection tools can be used to reduce the effort of tracking the information associated with inspections. See the "Tools" section of the Resources tab for a list of tools.
5. Resources
5.1 References
- (SWEREF-179) A variety of articles from Dr. Forrest Shull are available on this site.
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-235) John C. Kelly, Joseph S. Sherif, Jonathan Hops, NASA Goddard Space Flight Center, Proceedings of the 15th Annual Software Engineering Workshop; 35 p.
- (SWEREF-277) NASA-STD-8739.9, NASA Office of Safety and Mission Assurance, 2013. Change Date: 2016-10-07, Change Number: 1
- (SWEREF-421) "A collection of checklists for different purposes," Fraunhofer USA, The University of Maryland. This web page contains several resources for performing software inspections.
- (SWEREF-474) Basili, V., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Soerumgaard, S., and Zelkowitz, M. Empirical Software Engineering: An International Journal, 1(2): 133-164, 1996.
5.2 Tools
NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
6. Lessons Learned
6.1 NASA Lessons Learned
No Lessons Learned have currently been identified for this requirement.
6.2 Other Lessons Learned
- Throughout hundreds of inspections and analyses of their results, the Jet Propulsion Laboratory (JPL) has identified key lessons learned that lead to more effective inspections 235, including:
- Inspections are carried out by peers representing the areas of the life cycle affected by the material being inspected. Everyone participating should have a vested interest in the work product.
- Management is not present during inspections.
- Checklists of questions are used to define the task and to stimulate defect findings.
7. Software Assurance
a. Use a checklist or formal reading technique (e.g., perspective-based reading) to evaluate the work products.
b. Use established readiness and completion criteria.
c. Track actions identified in the reviews until they are resolved.
d. Identify the required participants.
7.1 Tasking for Software Assurance
1. Confirm that the project meets the NPR 7150.2 criteria in "a" through "d" for each software peer review.
2. Confirm that the project resolves the actions identified from the software peer reviews.
3. Perform audits on the peer-review process.
7.2 Software Assurance Products
Peer Review Process Audit Report (SA audit results and findings on software peer-review process).
Objective Evidence
- Peer review metrics, reports, data, or findings
- List of participants in the software peer reviews
- Defect or problem reporting tracking data
- Software assurance audit reports on the peer-review process
7.3 Metrics
- # of software process Non-Conformances by life cycle phase over time
- Preparation time each audit participant spent preparing for the audit
- Time required to close peer review audit Non-Conformances
- Time required to close review Non-Conformances
- Trends on non-conformances from audits (Open, Closed, Life Cycle Phase)
- # of Peer Review Audits planned vs. # of Peer Review Audits performed
- # of audit Non-Conformances per peer review audit
- # of peer review Non-Conformances per work product vs. # of peer reviewers
- # of peer review participants vs. total # invited
- Preparation time each review participant spent preparing for the review
- Total # of peer review Non-Conformances (Open, Closed)
- # of Non-Conformances identified by software assurance during each peer review
- # of Non-Conformances from reviews (Open vs. Closed; # of days Open)
- # of process Non-Conformances (e.g., activities not performed) identified by SA vs. # accepted by the project
- Trends of # Open vs. # Closed over time
- # of Non-Conformances per audit (including findings from process and compliance audits, process maturity)
- # of Open vs. Closed Audit Non-Conformances over time
- Trends of # of Non-Conformances from audits over time (Include counts from process and standards audits and work product audits.)
- # of Compliance Audits planned vs. # of Compliance Audits performed
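Several of the metrics above reduce to simple counts over a non-conformance log: open vs. closed totals, and days open for the time-to-close trend. As an illustrative sketch (the record fields are assumptions for the example, not an Agency schema):

```python
from datetime import date

# Hypothetical non-conformance records from peer reviews:
ncs = [
    {"id": 1, "opened": date(2024, 3, 1), "closed": date(2024, 3, 10)},
    {"id": 2, "opened": date(2024, 3, 5), "closed": None},
    {"id": 3, "opened": date(2024, 3, 8), "closed": date(2024, 3, 9)},
]

# Open vs. closed counts feed the "# Open vs. # Closed over time" trend.
open_ncs = [nc for nc in ncs if nc["closed"] is None]
closed_ncs = [nc for nc in ncs if nc["closed"] is not None]

# Days open for closed items feeds the "time required to close" metric.
days_to_close = [(nc["closed"] - nc["opened"]).days for nc in closed_ncs]
```

Snapshotting these counts at each life cycle phase produces the trend data the list above calls for.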
See also Topic 8.18 - SA Suggested Metrics.
7.4 Guidance
Task 1:
Confirm that the project has met the four conditions in the activities specified in the software requirement:
- Confirm that the project has prepared one or more checklists to evaluate the work product before the peer review.
The selection and preparation of the checklists to use during the review is part of the preparation for the review that needs to be completed, along with other practical considerations such as determining who should attend the review, when and where the review should be held, reserving a room, and distributing the review announcement and material to be reviewed.
Each review should have specific checklists, depending on the type of asset being reviewed, so different checklists would be used for reviewing software development plans, test plans and procedures, configuration management plans, a design document, or a portion of the code. A checklist may also be written to capture the particular perspective of a role on the team. For example, there may be checklists written from the perspective of a person on the requirements development team, the design team, the coding team, the test team, the operations team, or software assurance.
Checklists should be used during the review for guidance on typical types of defects to be found in the type of product being inspected. In addition, the product being inspected is checked against higher-level work products, standards, and interface documents to assure compliance and correctness.
There are many inspection checklists available that can be used as a starting point for different kinds of reviews. Check with your Center process asset team or look in SPAN for examples.
- The next item to confirm is that the entrance and exit criteria have been established. Before the peer review, software assurance should confirm that the entrance criteria have been met; before the close-out of the review, software assurance should confirm that the exit criteria have been met. For the entrance criteria, check the following:
- Has the leader selected the material to be peer-reviewed at this review, and sent it out to the participants, along with any necessary background material?
- Has the leader reserved a room, prepared checklists, and selected participants with varying roles in the project?
- Have the participants reviewed the materials in advance of the review?
- For the exit criteria, consider the following:
- Have all the identified issues and defects been recorded and assigned to further investigation and resolution?
- Has a priority been assigned to the issues and defects?
- Have the selected metrics been collected and recorded?
- Has the need for a re-inspection been determined?
- Have all the issues and problems been closed out? Software assurance should be tracking these issues and verifying they are closed out.
- The third activity software assurance needs to confirm is that the peer review items have been resolved before the closure of the review. Software assurance does this by tracking the items independently and verifying that they are closed.
- The final activity that software assurance will confirm is that the selection of the participants has been made by the leader. Several factors should be considered when selecting review team members.
- Often peer reviews are more effective if the team size is limited to 5 to 9 people.
- Team members must have technical knowledge about the project and be familiar with the asset being reviewed. If this is not the case, it is recommended that the leader give the team an overview before the review.
- Team members should be selected to provide different perspectives on the product being reviewed. So, team members might be selected from the following (depending on the product being reviewed): requirements developers, designers, coders, test team members, software assurance, or knowledgeable peers from a similar project.
- Usually, it is best to avoid including the manager of the product being reviewed as a team member. The primary purpose of the peer reviews is to find errors and there might be some hesitation to mention all the defects for fear of a poor performance rating.
- Specific roles may be assigned to the participants to make sure the review is efficient. For example, someone may be assigned to “read” or verbally walk through the product, while someone else is assigned to record the issues and defects.
Task 2:
The other SA assigned task is to confirm that all the actions from the review have been closed out. The software assurance personnel will do this by tracking the action items and verifying they have been closed. Before the final close-out of the review, the software assurance personnel should meet with the review lead to verify no outstanding issues remain.
Task 3:
Audit the peer review process at least once every year. Any task that involves performing an audit should also ensure that all audit findings are promptly shared with the project and that identified improvements to the peer review process are addressed.
See also Topic 8.12 - Basics of Software Auditing
7.5 Additional Guidance
Additional guidance related to this requirement may be found in the following materials in this Handbook: