SWE-065 - Test Plan, Procedures, Reports

1. Requirements

4.5.2 The project manager shall establish and maintain:

    a. Software test plan(s).
    b. Software test procedure(s).
    c. Software test report(s).

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-065 - Last used in rev NPR 7150.2D

Rev A

3.4.1 The project shall establish and maintain:
        a.    Software Test Plan(s).
        b.    Software Test Procedure(s).
        c.    Software Test Report(s).

Difference between A and B: No change

Rev B

4.5.2 The project manager shall establish and maintain:
        a.    Software test plan(s).
        b.    Software test procedure(s).
        c.    Software test report(s).

Difference between B and C: No change

Rev C

4.5.2 The project manager shall establish and maintain:
        a.    Software test plan(s).
        b.    Software test procedure(s).
        c.    Software test report(s).

Difference between C and D: Requirement updated to add tests and code (item c).

Rev D

4.5.2 The project manager shall establish and maintain:
        a.    Software test plan(s).
        b.    Software test procedure(s).
        c.    Software test(s), including any code specifically written to perform test procedures.
        d.    Software test report(s).

1.3 Applicability Across Classes

[Applicability table (Applicable / Not Applicable by software class) not recoverable here; see the SWE-065 entry in NPR 7150.2 for applicability across classes.]

2. Rationale

Having plans and procedures in place ensures that all necessary and required tasks are performed and performed consistently. The development of plans and procedures provides the opportunity for stakeholders to give input and assist with the documentation and tailoring of the planned testing activities to ensure the outcome will meet the expectations and goals of the project. Test reports ensure that results of verification activities are documented and stored in the configuration management system for use in acceptance reviews or readiness reviews.

Following templates for test plans, procedures, and reports ensures consistency of documents across projects, supports proper planning, captures the appropriate activities and results, and prevents repeating problems of the past.

3. Guidance

Projects create test plans, procedures, and reports following the content recommendations in topic 7.18 - Documentation Guidance.

The objective of software test procedures is to perform software testing according to the following guidelines:

  1. Software testing is performed to demonstrate to the project that the software requirements have been met, including all interface requirements.
  2. If a software item is developed in multiple builds, its software testing will not be completed until the final build for the software item, or possibly until later builds involving items with which the software item is required to interface.  Software testing in each build is interpreted to mean planning and performing the test of the current build of each software item to ensure that the software item requirements to be implemented in that build have been met. 478

Independence in software item testing

For Class A, B, and safety-critical Class C software, the person(s) responsible for software testing of a given software item should not be the persons who performed detailed design, implementation, or unit testing of the software item.  This does not preclude persons who performed detailed design, implementation, or unit testing of the software item from contributing to the process, for example, by contributing test cases that rely on knowledge of the software item's internal implementation. 478

Software Test Procedure Development Guidelines

The project should “establish test cases (in terms of inputs, expected results, and evaluation criteria), test procedures, and test data for testing the software.” 478 The test cases and test procedures should cover the software requirements and design, including, at a minimum:

  • Correct execution of all interfaces (including between software units), statements, and branches.
  • All error and exception handling.
  • All software unit interfaces, including limits and boundary conditions.
  • End-to-end functional capabilities.
  • Performance testing, operational input and output data rates, and timing and accuracy requirements.
  • Stress testing and worst-case scenario(s).
  • Fault detection, isolation, and recovery handling.
  • Resource utilization.
  • Hazard mitigations.
  • Start-up, termination, and restart (when applicable).
  • All algorithms.

Legacy reuse software should be tested for all modified reuse software, for all reuse software units where the track record indicates potential problems, and for all critical reuse software components, even if the reuse software component has not been modified. 478
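The guidance above calls for test cases defined in terms of inputs, expected results, and evaluation criteria. A minimal sketch of such a test case record follows; the field names, IDs, and the thruster-limiter example are hypothetical illustrations, not part of the NPR guidance:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case: inputs, expected results, and evaluation criteria."""
    case_id: str
    requirement_ids: list   # requirements this case verifies (trace links)
    inputs: dict            # stimulus applied to the software under test
    expected: dict          # expected outputs / end state
    tolerance: float = 0.0  # evaluation criterion for numeric outputs

    def evaluate(self, actual: dict) -> bool:
        """Pass/fail: every expected value must match within tolerance."""
        for key, want in self.expected.items():
            got = actual.get(key)
            if isinstance(want, float):
                if got is None or abs(got - want) > self.tolerance:
                    return False
            elif got != want:
                return False
        return True

# Example: a boundary-condition case for a hypothetical thruster command limiter.
tc = TestCase(
    case_id="TC-042",
    requirement_ids=["SRS-101"],
    inputs={"commanded_thrust": 105.0},  # just above the 100.0 limit
    expected={"actual_thrust": 100.0, "limit_flag": True},
    tolerance=0.01,
)
print(tc.evaluate({"actual_thrust": 100.0, "limit_flag": True}))  # True
```

Recording the pass/fail criterion with the case itself, rather than leaving it to the tester's judgment at run time, is one way to satisfy the "evaluation criteria" element.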

All software testing should follow the defined test cases and procedures.

“Based on the results of the software testing, the developer [should] make all necessary revisions to the software, perform all necessary retesting, update the SDFs, and other software products as needed...  Regression testing ... [should] be performed after any modification to previously tested software.” 478

Ensure that the test rig configuration is sufficient for the planned testing (e.g., that sufficient real hardware is included).

Testing on the target computer system

Software testing should be performed using the target hardware.  The target hardware used for software qualification testing should be as close as possible to the operational target hardware and should be in a configuration as close as possible to the operational configuration. 478 (See SWE-073.)  A high-fidelity simulation typically has the same processor, processor performance, timing, memory size, and interfaces as the target system.

Software Assurance Witnessing

The software test procedure developer should “dry run the software item test cases and procedures to ensure that they are complete and accurate and that the software is ready for witnessed testing.  The developer should record the results of this activity in the appropriate SDFs and should update the software test cases and procedures as appropriate.” 478

Formal and acceptance software testing is witnessed by software assurance personnel to verify satisfactory completion and outcome.  Software assurance is required to witness or review/audit the results of software testing and demonstration.

Software Test Report Guidance

The software tester is required to analyze the results of the software testing and record the test and analysis results in the appropriate test report.

Ensure that the data captured is analyzed.

Software Test Documentation Maintenance

Once these documents are created, they need to be maintained to reflect the current project status, progress, and plans, which will change over the life of the project. When requirements change (SWE-071), test plans, procedures, and the resulting test reports may also need to be updated or revised to reflect the changes. Changes to test plans and procedures may result from:

  • Inspections/peer reviews of documentation.
  • Inspections/peer reviews of code.
  • Design changes.
  • Code maturation and changes (e.g., code changes to correct bugs or problems found during testing, interfaces revised during development).
  • Availability of relevant test tools that were not originally part of the test plan (e.g., tools freed up from another project, funding becomes available to purchase new tools).
  • Updated software hazards and mitigations (e.g., new hazards identified, hazards eliminated, mitigations are added or revised).
  • Execution of the tests (e.g., issues found in test procedures).
  • Test report/results analysis (e.g., incomplete, insufficient requirements coverage).
  • Changes in test objectives or scope.
  • Changes to schedule, milestones, or budget changes.
  • Changes in test resource numbers or availability (e.g., personnel, tools, facilities).
  • Changes to software classification or safety criticality (e.g., a research project not intended for flight becomes destined for use on the ISS (International Space Station)).
  • Process improvements relevant to test activities.
  • Changes in the project affect the software testing effort.

Just as the initial test plans, procedures, and reports require review and approval before use, the project team ensures that updates are also reviewed and approved following project procedures.

Maintaining accurate and current test plans, procedures, and reports continues into the operation and maintenance phases of a project.

NASA users should consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources related to the test plan, test procedures, and test reports, including templates and examples. 

NASA-specific test documentation information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook. 

Additional guidance related to the test plan, test procedures, and test reports may be found in related requirements in this Handbook.

4. Small Projects

No additional guidance is available for small projects.

5. Resources

5.1 References

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to insufficiencies in software test plans:

  • Aquarius Reflector Over-Test Incident (Procedures should be complete.) Lesson Number 2419 573:  Lessons Learned No. 1 states: "The Aquarius Reflector test procedure lacked complete instructions for configuring the controller software before the test."  Lesson Learned No. 4 states: "The roles and responsibilities of the various personnel involved in the Aquarius acoustic test operations were not documented.  This could lead to confusion during test operations."
  • Planning and Conduct of Hazardous Tests Require Extra Precautions (2000-2001) (Special measures needed for potentially hazardous tests.) Lesson Number 0991 579: "When planning tests that are potentially hazardous to personnel, flight hardware or facilities (e.g., high/low temperatures or pressure, stored energy, deployables), special measures should be taken to ensure that:
    1. "Test procedures are especially well written, well organized, and easy to understand by both engineering and quality assurance personnel.
    2. "Known test anomalies that history has shown to be inherent to the test equipment or conditions (including their likely causes, effects, and remedies) are documented and included in pre-test training. 
    3. "Readouts of safety-critical test control data are provided in an easily understood form (e.g., audible, visible, or graphic format).
    4. "Test readiness reviews are held, and test procedures require confirmation that GSE test equipment and sensors have been properly maintained.
    5. "Quality assurance personnel are present and involved throughout the test to ensure procedures are properly followed, including prescribed responses to pre-identified potential anomalies."
  • Test plans should reflect proper configurations 581: "Testing of the software changes was inadequate at the Unit, Integrated and Formal test level. In reviewing test plans...neither had test steps where BAT06 and BAT04 were running concurrently in a launch configuration scenario. Thus no test runs were done with the ... program that would reflect the new fully loaded console configuration. Had the launch configuration scenarios been included in integrated and acceptance testing, this might have revealed the code timing problems."
  • Ensure Test Monitoring Software Imposes Limits to Prevent Overtest (2003) (Include test monitoring software safety steps.) Lesson Number 1529 561:   Recommendation No. 2 states: "Before the test, under the test principle of 'First, Do No Harm' to flight equipment, assure that test monitoring and control software is programmed or a limiting hardware device is inserted to prevent over-test under all conditions..."

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-065 - Test Plan, Procedures, Reports
4.5.2 The project manager shall establish and maintain:
    a. Software test plan(s).
    b. Software test procedure(s).
    c. Software test report(s).

7.1 Tasking for Software Assurance

For requirement a:

  • Confirm that software test plans have been established, contain correct content, and are maintained.
  • Confirm that the software test plan addresses the verification of safety-critical software, specifically the off-nominal scenarios.

For requirement b:

  • Confirm that test procedures have been established and are updated when changes to tests or requirements occur.

  • Analyze the software test procedures for:
    a. Coverage of the software requirements.
    b. Acceptance or pass/fail criteria.
    c. The inclusion of operational and off-nominal conditions, including boundary conditions.
    d. Requirements coverage and hazards per SWE-066 and SWE-192, respectively.

For requirement c:

  • Confirm that the project creates and maintains the test reports throughout software integration and test.

  • Confirm that the project records the test report data and that the data contains the as-run test data, the test results, and required approvals.

  • Confirm that the project records all issues and discrepancies found during each test.

  • Confirm that the project tracks closure errors, defects, etc. found during testing.

7.2 Software Assurance Products

For 65a:

  • Confirmations that test plans have correct content, including verification of safety-critical software, and are updated, as needed.
  • Results of any peer reviews on the test plans, including any issues and corrective actions.
  • Evidence that Software Assurance has approved or signed off on the software test plans.

For 65b:

  • Evidence of confirmation that test procedures are established and maintained as tests or requirements change. 
  • Issues and corrective actions identified with the test procedures or during any test procedure peer reviews.
  • Software Assurance analysis of test procedure attributes listed in a through d.

For 65c:

  • Software assurance assessment of project test status.
  • SA approval for test reports, where required (e.g. safety-critical software).

  • List of types of issues and discrepancies found during testing.

Objective Evidence

  • Software test plan
  • Software test procedures
  • Software test reports

Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

  • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
  • Meeting minutes with attendance lists, or SA meeting notes or assessments of the activities, recorded in the project repository.
  • Status report, email, or memo containing statements that confirmation has been performed, with date (a checklist of confirmations could be used to record when each confirmation has been done).
  • Signatures on SA-reviewed or witnessed products or activities.
  • Status report, email, or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
    • To confirm that “IV&V Program Execution exists,” the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
    • To confirm that “Traceability between software requirements and hazards with SW contributions exists,” the summary might be: x% of the hazards with software contributions are traced to the requirements.
  • The specific products listed in the Introduction of 8.16 are also objective evidence, as well as the examples listed above.

7.3 Metrics

For 65a:

  • # of safety-related non-conformances identified by life-cycle phase over time

For 65b:

  • # of Software Requirements (e.g. Project, Application, Subsystem, System, etc.)
  • # of software requirements with completed test procedures over time
  • # of Software Requirements being met via satisfactory testing vs. total # of Software Requirements
  • # of Software Requirements without associated test cases
  • # of software work product Non-Conformances identified by life-cycle phase over time
  • # of safety-related requirement issues (Open, Closed) over time 
  • # of safety-related non-conformances identified by life-cycle phase over time
  • # of Non-Conformances and risks open vs. # of Non-Conformances, risks identified with test procedures
  • # of hazards with completed test procedures/cases vs. total number of hazards over time
  • # of software requirements with completed test procedures/cases over time
  • # of Non-Conformances identified when the approved, updated requirements are not reflected in test procedures 
  • # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
  • # of Requirements tested successfully vs. total # of Requirements
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • # of issues and risks/corrective actions open versus total # of issues and risks/corrective actions identified with test procedures. 

For 65c:

  • Total # of Non-Conformances over time (Open, Closed, # of days Open, and Severity of Open)
  • # of Non-Conformances in the current reporting period (Open, Closed, Severity)
  • # of Closed action items vs. # of Open action items 
  • # of software work product Non-Conformances identified by life-cycle phase over time
  • Total # of tests completed vs. number of test results evaluated and signed off 
  • # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
  • # of tests executed vs. # of tests successfully completed
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of Requirements tested successfully vs. total # of Requirements
  • # of tests successfully completed vs. total # of tests
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • Trends of open versus closed problem/change reports over time.
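Several of the metrics above are simple ratios over per-requirement test records. A sketch of how a project might derive a few of them; all data, IDs, and field layouts are made up for illustration:

```python
# Per-test records: (test_id, requirement_id, executed?, passed?)
test_records = [
    ("T-1", "SRS-101", True,  True),
    ("T-2", "SRS-102", True,  False),
    ("T-3", "SRS-103", False, False),
    ("T-4", "SRS-101", True,  True),
]
all_requirements = {"SRS-101", "SRS-102", "SRS-103", "SRS-104"}

# Requirements tested successfully vs. total requirements
passed_reqs = {req for _, req, _, passed in test_records if passed}
print(f"Requirements tested successfully: {len(passed_reqs)}/{len(all_requirements)}")

# Tests executed vs. tests successfully completed
executed = sum(1 for _, _, ex, _ in test_records if ex)
succeeded = sum(1 for _, _, _, ok in test_records if ok)
print(f"Tests executed vs. successful: {executed} vs. {succeeded}")

# Requirements without associated test cases
untested = all_requirements - {req for _, req, _, _ in test_records}
print(f"Requirements without associated test cases: {sorted(untested)}")
```

Tracking these counts over time (per reporting period or life-cycle phase) gives the trend metrics listed above.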

7.4 Guidance

Guidance for part a and b:

Software assurance will confirm that software test plans are started during the preliminary design period and baselined at the end of CDR. Whenever changes occur that affect the test plan information, confirm that the test plan has been updated. Review the expected test plan contents in 7.18 - Documentation Guidance and assess that the expected contents have been included. Confirm that the test plan specifically addresses the coverage of hazard controls, particularly the off-nominal scenarios.

Software assurance will confirm that the software test procedures are started around the end of the CDR and are refined and updated to reflect changes in the requirements, design, or software through implementation. Any updates to requirements, design, safety, or software changes may cause changes in the test procedures. Software assurance should confirm that the expected content for test procedures is included in the project test procedures, using the guidance for test procedure content in 7.18 - Documentation Guidance.

Software assurance will assess the test procedures for the following:

  • Coverage of the software requirements (see the chart of recommended coverage in the software guidance for SWE-189)
  • Acceptance criteria for the test procedures; pass/fail criteria for each test
  • Operational conditions, off-nominal conditions as well as boundary conditions are tested
  • Requirements coverage as per SWE-066 and SWE-192

Software assurance personnel will want to use the traceability matrices to help determine whether the tests are defined to cover the requirements and whether the safety aspects are adequately covered. As explained below, some of the safety-related software can only be tested at a unit or component level, so software assurance will want to check whether that has been considered in the testing. Also, often the safety requirements are found in a hazard report or safety plan and need to be included in the test planning.

Traceability is a link or definable relationship between two or more entities.  Requirements are linked from their more general form (e.g., the system specification) to their more explicit form (e.g., subsystem specifications).  They are also linked forward to the design, source code, and test cases.  Many software safety-related hazard events, conditions, causes, controls, or mitigations are derived from multiple sources (the system safety analysis, risk assessments, or organizational, facility, vehicle, or system-specific generic hazards).  The hazard reports (HRs) need to be updated as those sources change and mature.  The software requirements linked to those HRs likewise need to be maintained and updated as needed.  Changes to the software related to HRs also need to be fed back into the HRs, ensuring changes flow in both directions.

Tracing requirements is a vital part of system verification and validation, especially in safety verifications.  Full requirements test coverage is virtually impossible without some form of requirements traceability.  Tracing also provides a way to understand and communicate the impact on the system of changing requirements or modification of software elements.  A tracing system can be as simple as a spreadsheet or as complicated as an automatic tracing tool.
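As noted, a tracing system can be as simple as a spreadsheet. A minimal sketch of such a check follows: a CSV-style trace is read and scanned for requirements with no linked test case. The column names and IDs are hypothetical:

```python
import csv
import io

# Spreadsheet-style trace mapping requirements to test cases and hazard reports.
# In practice this would be read from a file; a literal string keeps the sketch
# self-contained.
trace_csv = """requirement,test_case,hazard_report
SRS-101,TC-042,HR-7
SRS-102,TC-043,
SRS-103,,HR-9
"""

untraced_to_test = []
for row in csv.DictReader(io.StringIO(trace_csv)):
    if not row["test_case"]:  # empty cell: no test case traces to this requirement
        untraced_to_test.append(row["requirement"])

# Any requirement without a linked test case is a coverage gap to resolve
# before test readiness review.
print("Requirements with no test case:", untraced_to_test)  # ['SRS-103']
```

The same scan, run against the hazard_report column, flags hazards whose software contributions lack traced requirements.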

The relationship of software requirements to hazards, controls, conditions, and events is usually kept in the hazard or safety report, as well as in the requirements traceability document, where the requirement(s) associated with safety-critical functions are traceable to the Hazard Report (HR), Risk Analyses, or Critical Items List (CIL).  Enough detail is flowed down with the resulting safety-implicated requirement(s) to capture needed conditions, triggers, contingencies, etc.  Tests need to be established for all of these safety features.

Plans for unit and component testing also need to take into account the testing of safety features, controls, inhibits, mitigations, data and command exchanges, and execution at the unit or component level. Unit-level testing is often the only place where the software paths can be completely checked for both the full range of expected inputs and its response to wrong, out of sequence, or garbled input.  The stubs and drivers, test suites, test data, models, simulations, and simulators used for unit testing are very important to capture and maintain for future regression testing and the proof of thorough safety testing.  The reports of unit-level testing of safety-critical software components need to be thoroughly documented as well.  
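To illustrate unit-level checking of the full input range described above — nominal, out-of-range, and garbled input — here is a small sketch; the command parser and its limits are hypothetical stand-ins for a real software unit:

```python
import unittest

def parse_mode_command(raw: str) -> int:
    """Accept 'MODE <0-3>'; reject anything else.

    Hypothetical unit under test: a command parser that must respond safely
    to wrong, out-of-sequence, or garbled input.
    """
    parts = raw.split()
    if len(parts) != 2 or parts[0] != "MODE" or not parts[1].isdigit():
        raise ValueError(f"garbled command: {raw!r}")
    mode = int(parts[1])
    if not 0 <= mode <= 3:
        raise ValueError(f"mode out of range: {mode}")
    return mode

class TestParseModeCommand(unittest.TestCase):
    def test_nominal(self):
        self.assertEqual(parse_mode_command("MODE 2"), 2)

    def test_out_of_range(self):  # boundary / off-nominal input
        with self.assertRaises(ValueError):
            parse_mode_command("MODE 7")

    def test_garbled(self):       # malformed input must be rejected, not crash
        with self.assertRaises(ValueError):
            parse_mode_command("M0DE two")
```

Run with `python -m unittest` against the containing module. Capturing such cases, along with their stubs, drivers, and test data, is what makes future regression testing of the safety features possible.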

Software safety testing will include verification and validation that the implemented fault and failure mode detection and recovery work as derived from the safety analyses, such as preliminary hazard analyses (PHAs), subsystem hazard analyses, failure modes and effects analyses, and fault tree analyses.  This can include software failures, hardware failures, interface failures, or multiple concurrent hardware failures.  The term FDIR (fault detection, isolation, and recovery) is often used in place of failure detection, since software can detect and react to faults before they become failures.  Refer to NPR 7150.2 (requirement SWE-134 in Revision C).

For part c: 

Software assurance will review the test results and develop a list of the types of software issues and discrepancies discovered during software and system testing. The types of issues should be reported to management and the project as information for future improvement. 
