SWE-066 - Perform Testing

1. Requirements

4.5.3 The project manager shall test the software against its requirements.

1.1 Notes

A best practice for Class A, B, and C software projects is to have formal software testing planned, conducted, witnessed, and approved by an independent organization outside of the development team.

1.2 History

SWE-066 - Last used in rev NPR 7150.2D

Rev A — 3.4.2 The project shall perform software testing as defined in the Software Test Plan.

Difference between A and B: Removed reference to the Software Test Plan, which is defined in SWE-104.

Rev B — 4.5.3 The project manager shall perform software testing.

Difference between B and C: Instead of just performing testing, specifies that testing shall be done against the requirements.

Rev C — 4.5.3 The project manager shall test the software against its requirements.

Difference between C and D: No change.

Rev D — 4.5.3 The project manager shall test the software against its requirements.

1.3 Applicability Across Classes

Key: ✓ - Applicable | ✗ - Not Applicable

2. Rationale

Software testing is required to ensure that the software meets the agreed requirements and design, that the application works as expected and does not contain serious bugs, and that the software meets its intended use as per user expectations.

3. Guidance

Per section 3.2 of the IEEE 730-2014 IEEE Standard for Software Quality Assurance Processes 469, “software testing is an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.” Per the ISO/IEC TR 19759:2005 Software Engineering -- Guide to the Software Engineering Body of Knowledge (SWEBOK), software testing is “the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior.”

The developer performs software testing to demonstrate to the acquirer that the software item requirements have been met, including the interface requirements.  If the software item is developed in multiple builds, its software item qualification testing will not be completed until the final build for that software item.  The persons responsible for qualification testing of a given software item should not be the persons who performed detailed design or implementation of the software item.  This does not preclude persons who performed detailed design or implementation of the software item from contributing to the process.

Software testing is essential for the following reasons:

  1. To identify defects and errors in the software.
  2. To ensure the reliability of the application.
  3. To verify the quality of the product.
  4. To confirm the effective performance of the software application or product.

Code Coverage

One intent of software testing is to test all paths through the code—every decision and nominal and off-nominal path—by executing test cases.  Code coverage metrics identify additional tests that need to be added to the test run. Code coverage tools monitor the path the software executes and can be used during test runs to identify code paths that were not executed by any test.  By analyzing these missed areas, tests can be identified and implemented to execute the missed path.  It is challenging to get 100% coverage due to off-nominal and hardware issues not possible or not advisable to execute during a test run (e.g., radiation effects, hardware failures).  Code coverage metrics can also identify sections of orphaned or unused code (dead code).
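The bookkeeping a coverage tool performs can be illustrated with a minimal sketch. The `record` helper, the `BRANCHES` set, and the `check_limit` function below are illustrative assumptions, not part of any real coverage tool; real tools instrument the code automatically rather than requiring explicit calls.

```python
# Minimal sketch of branch-coverage bookkeeping: each decision path is
# given an identifier, executed paths are recorded during test runs, and
# the difference reveals paths no test exercised.

BRANCHES = {"check_limit:high", "check_limit:nominal", "check_limit:low"}
executed = set()

def record(branch_id):
    executed.add(branch_id)

def check_limit(value, low=0, high=100):
    if value > high:
        record("check_limit:high")
        return "over-limit"
    if value < low:
        record("check_limit:low")
        return "under-limit"
    record("check_limit:nominal")
    return "nominal"

# Run the existing test cases; note that no test drives the under-limit path.
for v in (50, 150):
    check_limit(v)

missed = BRANCHES - executed
print(sorted(missed))   # → ['check_limit:low']
```

Analyzing the `missed` set tells the tester exactly which test case to add (here, one with a value below the lower limit).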

Code coverage can require the code to be compiled and instrumented in a specific manner and then executed in the test environments to indicate the code coverage of the tests.  Because this instrumentation modifies the delivered code, it invalidates any functional acceptance testing, so acceptance testing requires separate test runs without the instrumented build.  Code coverage should be verified as part of the in-line development test schedule.  Code coverage of software-only unit tests would minimally impact the in-line development test schedule and be recorded as part of the code coverage metrics. Newer hardware-based coverage tools can provide the metrics without intrusive instrumentation of the code.

Consider using code coverage as a part of a project’s software testing metrics.  Code coverage (also referred to as structural coverage analysis) is an important verification tool for establishing the completeness and adequacy of testing.  Traceability between code, requirements, and tests is complemented by measuring the structural coverage of the code when the tests are executed. Where coverage is less than 100%, this points to:

  • Code that is not traceable to requirements.
  • Inadequate tests.
  • Incomplete requirements.
  • A combination of the above.

When using requirements-based testing, 100% code coverage means that, subject to the coverage criteria used, no code exists which cannot be traced to a requirement. For example, every function is traceable to a requirement (but individual statements within the coverage may not be). What 100% code coverage does not mean is:

  • The code is correct. The test cases, when aggregated, exercise every line of code. This is not sufficient to show there are no bugs. As long ago as 1969, Edsger Dijkstra noted, “testing shows the presence of bugs, not their absence” – in other words, just because testing doesn’t show any errors, it doesn’t mean they are not present.
  • The software requirements are correct. This is determined through the validation of the requirements with the customer.
  • 100% of the requirements have been tested. Merely achieving 100% code coverage isn’t enough. This is only true if the project achieves 100% code coverage AND the project has a test for 100% of the requirements, and every test passes.
  • The compiler translated the code correctly. The compiler might be inserting errors that cause incorrect results in some situations (ones the project hasn’t tested for).
  • 100% of the object code is covered. Even when all statements and conditions of the source code are being executed, the compiler can introduce additional structures into the object code.

Consider requiring code-coverage tools for determining testing completeness; automated identification of untested code helps assess testing quality and can provide metrics for determining test completeness.

Additional Software Test Guidance

Per NASA-GB-8719.13, NASA Software Safety Guidebook 276, "Testing serves several purposes: to find defects; to validate the system or an element of the system; and to verify functionality, performance, and safety requirements. The focus of testing is often on the verification and validation aspects. However, defect detection is probably the most important aspect of testing. While you cannot test quality into the software, you can certainly work to remove as many defects as possible."

Software testing has many levels, including unit testing, integration testing, and system testing, including functionality, performance, load, stress, safety, and acceptance testing. While the development team typically performs unit testing, some testing, such as integration, system, or regression testing, may be performed by a separate and/or independent test group.

Keep in mind that formal testing, such as acceptance testing, is witnessed by an external organization, such as software assurance (see NASA-STD-8739.8, Software Assurance and Software Safety Standard 278).

"Scheduling testing phases is always an art and depends on the expected quality of the software product. Relatively defect-free software passes through testing within a minimal time frame. An inordinate amount of resources can be expended testing buggy software. Previous history, either of the development team or similar projects, can help determine how long testing will take. Some methods (such as error seeding and Halstead's defect metric) exist for estimating defect density (number of defects per unit of code) when historical information is not available." (NASA-GB-8719.13, NASA Software Safety Guidebook 276)

The following basic principles of testing come from NASA-GB-8719.13, NASA Software Safety Guidebook 276:

  • All tests need to be traceable to the requirements, and all requirements need to be verified by one or more methods (e.g., test, demonstration, inspection, analysis).
  • Tests need to be planned before testing begins. Test planning can occur as soon as the relevant stage has been completed. System test planning can start when the requirements document is complete.
  • The "80/20" principle applies to software testing. In general, 80 percent of errors can be traced back to 20 percent of the components. Anything you can do ahead of time to identify components likely to fall in that 20 percent (e.g., high risk, complex, many interfaces, demanding timing constraints) will help focus the testing effort for better results.
  • Start small and then integrate into the larger system. Finding defects deep in the code is difficult to do at the system level. Such defects are easier to uncover at the unit level.
  • You can't test everything. However, a well-planned testing effort can test all parts of the system. Missing logic paths or branches may mean missing important defects, so test coverage needs to be determined.
  • Testing by an independent party is most effective. It is hard for developers to see their bugs. While unit tests are usually written and run by the developer, it is good to have a fellow team member review the tests. A separate testing group will usually perform the other tests. An independent viewpoint helps find defects, which is the goal of testing.
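The first principle above (all tests traceable to requirements, all requirements verified) can be sketched as a simple verification-matrix check. The requirement IDs and test-case names below are hypothetical, purely for illustration.

```python
# Sketch of a requirements verification matrix check: given a trace from
# test cases to the requirements they verify, find requirements with no
# test and tests with no requirement.

requirements = {"SRS-001", "SRS-002", "SRS-003", "SRS-004"}
trace = {
    "TC-01": {"SRS-001"},
    "TC-02": {"SRS-001", "SRS-002"},
    "TC-03": {"SRS-004"},
}

# Requirements verified by at least one test.
verified = set().union(*trace.values())
untested = sorted(requirements - verified)

# Tests that trace to no requirement indicate orphan tests (or code
# not traceable to requirements).
orphans = sorted(t for t, reqs in trace.items() if not reqs)

print(untested)   # → ['SRS-003']
print(orphans)    # → []
```

In practice the trace data would come from the project's requirements management tool; the gap list feeds directly into test planning.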

NASA-GB-8719.13, NASA Software Safety Guidebook 276, includes a chapter on testing with a focus on safety testing. Some general testing highlights of that chapter include:

  • Software testing beyond the unit level (integration and system testing) is usually performed by someone other than the developer, except in the smallest teams.
  • Normally, software testing ensures that the software performs all required functions correctly and can exhibit graceful behavior under anomalous conditions.
  • Integration testing is often done in a simulated environment, and system testing is usually done on the actual hardware. However, hazardous commands or operations need to be tested in a simulated environment first.
  • During testing, any problems discovered need to be analyzed and documented in discrepancy reports and summarized in test reports.
  • Create and follow written test procedures for integration and system testing.
  • Perform regression testing after each change to the system.
  • Prepare Test Report upon completion of a test.
  • Verify that commercial-off-the-shelf (COTS) software operates as expected.
  • Follow problem reporting and corrective action procedures when defects are detected.
  • Perform testing in a controlled environment using a structured test procedure and monitoring results or a demonstration environment where the software is exercised without interference.
  • Analyze tests before use to ensure adequate test coverage.
  • Analyze test results to verify that requirements have been satisfied and that all identified hazards are eliminated or controlled to an acceptable level of risk.

Other useful practices include:

  • Plan and document testing activities to ensure all required testing is performed.
  • Have test plans, procedures, and test cases inspected and approved before use.
  • Use a test verification matrix to ensure coverage of all requirements.
  • Consider dry running test procedures in offline labs with simulations before actual hardware/software integration tests.
  • Consider various types of testing to achieve more comprehensive coverage. (See Software QA and Testing Frequently-Asked-Questions 207 or NASA-GB-8719.13, NASA Software Safety Guidebook 276, for a list with descriptions.)
  • When time and resources are limited, identify areas of highest risk and set priorities to focus effort to achieve the greatest benefit with the available resources. (See Software QA and Testing Frequently-Asked-Questions 207  or NASA-GB-8719.13, NASA Software Safety Guidebook 276,  for suggested risk analysis considerations.)
  • As necessary and appropriate, include support from the software development and/or test team when performing formal testing of the final system. Support could include:
    • Identifying system test requirements unique to software.
    • Providing input for software to system test procedures.
    • Providing software design documentation.
    • Providing software test plans and procedures.
  • Predefine verification/validation needed for all configuration data loads (CDLs)
  • Predeclare configuration data load (CDL) values which are expected/allowed to change with associated nominal verification activities
  • Any tests (formal or informal) which fail should be rerun and verified before software change tickets are closed, in the original environment, or as close to it as possible. Preferably this would be done with the original author of the software change ticket but with appropriate control board approval.

While NASA Centers typically have their own procedures and guidance, NASA-GB-8719.13, NASA Software Safety Guidebook 276, lists and describes the following types of testing, which need to be considered when planning any software test effort:

  • Functional system testing.
  • Stress testing.
  • Stability tests.
  • Resistance to failure testing.
  • Compatibility tests.
  • Performance testing.

The following chart shows a basic flow for software testing activities from planning through maintenance. Several elements of this flow are addressed in related requirements in this Handbook (listed in the table at the end of this section). 

Tools that may be useful when performing software testing include the following non-exhaustive list. Each project needs to evaluate and choose the appropriate tools for the testing for that project.

  • Software analysis tools.
  • Reverse engineering, code navigation, metrics, and cross-reference tools.
  • Debuggers.
  • Compilers.
  • Coding standards checkers.
  • Memory management tools.
  • Screen capture utilities.
  • Serial interface utilities.
  • Telemetry display utilities.
  • Automated scripts.
  • Etc.

NASA users should consult Center Process Asset Libraries (PALs) for Center-specific guidance and resources related to software testing.

NASA-specific planning information and resources for software testing are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.  

Additional guidance related to software testing, including specifics of the plan, procedure, and report contents, may be found in the following related requirements in this Handbook:

4. Small Projects

Software testing is required regardless of project size. 

5. Resources

5.1 References

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users can find this in the Tools Library on the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to the importance of and potential issues related to software testing:

  • International Space Station Program/Hardware-Software/Qualification Testing-Verification and Validation (Issues related to using software before completion of testing.) Lesson Number 1104 537: "Some hardware is being used in MEIT before it has completed qualification testing. Software is also often used before its verification and validation are complete. In both cases, modification to the hardware or software may be required before certification is completed, thereby potentially invalidating the results of the initial MEIT testing."
  • International Space Station Program/Hardware-Software/Integration Testing (The importance of end-user involvement in the testing process.) Lesson Number 1106 538: "Astronaut crew participation in testing improves the fidelity of the test and better familiarizes the crew with systems and procedures."
  • MPL Uplink Loss Timer Software/Test Errors (1998) (The importance of recognizing and testing high-risk aspects of the software.) Lesson Number 0939 530: 1) "Recognize that the transition to another mission phase (e.g., from EDL to the landed phase) is a high-risk sequence. Devote extra effort to planning and performing tests of these transitions.  2) Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
  • Deep Space 2 Telecom Hardware-Software Interaction (1999) (Considerations for performance testing.) Lesson Number 1197 545: The Recommendation states: "To fully validate performance, test integrated software and hardware over the flight operational temperature range ('test as you fly, and fly as you test...')."
  • Probable Scenario for Mars Polar Lander Mission Loss (1998) (Testing failures.) Lesson Number 0938 529: "1) Project test policy and procedures should specify actions to be taken when a failure occurs during the test. When tests are aborted or known to have had flawed procedures, they must be rerun after the test deficiencies are corrected. When test article hardware or software is changed, the test should be rerun unless there is a clear rationale for omitting the rerun.  2) All known hardware operational characteristics, including transients and spurious signals, must be reflected in the software requirements documents and verified by test."
  • Ariane 5 - The Inquiry Board's Recommendations: 685

    • Prepare a test facility including as much real equipment as technically feasible, inject realistic input data, and perform complete, closed-loop system testing. Complete simulations must take place before any mission. High test coverage has to be obtained.

    • Include trajectory data in specifications and test requirements.

    • Review the test coverage of existing equipment and extend it where deemed necessary.

    • Give the justification documents the same attention as code. Improve the technique for keeping code and its justifications consistent.

    • Set up a team that will prepare the procedure for qualifying software, propose stringent rules for confirming such qualification, and ascertain that specification, verification, and testing of software are of consistently high quality in the Ariane-5 Programme. Inclusion of external RAMS (Reliability, Availability, Maintainability, Safety) experts is to be considered.

6.2 Other Lessons Learned

  • All software requirements with multiple logic conditions require Formal Qualification Testing (FQT) to exercise all logic conditions.
    • If unable to provide this via FQT, the FQT test designer must confirm coverage via other means, e.g., by leveraging lower-level unit tests.  Exceptions to be documented and approved by the software control board.
    • Guidance for test case coverage must be documented.
  • Be judicious in identifying software flaws during testing
    • Review all test scripts and test procedures for occurrences of non-flight-like actions. All occurrences must be approved (program to decide appropriate approval level).
    • Apply the ‘Test Like You Fly’ exception process to test scripts and procedures applied to the scenario and validation testing.
  • Hardware/software integration testing campaign
    • Avoid reliance on lower-level software tests to verify system-level requirements.
    • Hardware/Software Integration test campaign must be used to verify critical vehicle and system functions (system spec) and utilize a high fidelity test environment (e.g., flight-like hardware in the loop), especially for external and internal interfaces.
    • Ensure test rig configuration sufficiency for planned testing (e.g., sufficient real hardware included) and the data captured is analyzed.
  • Perform end-to-end mission scenario testing
    • Programs must establish an end-to-end “run for record” test before each flight to include all applicable dynamic/critical phases of flight using a maximum available suite of flight hardware.
  • Simulation validation
    • Simulations/emulations must be validated by the system provider and with real hardware data signatures.
    • Simulations/emulations must be kept in sync with hardware/software updates.
  • Increase involvement of SE&I in the development lifecycle
    • Quality spacecraft software development and test require deep and persistent partnership and joint accountability between SE&I, subsystem designers, software community, and external suppliers.
  • Software Change Tickets
    • Any tests (formal or informal) that fail should be rerun and verified before software change tickets are closed, in the original environment, or as close to it as possible. Preferably this would be done with the original author of the software change ticket but with appropriate control board approval.
    • Be cautious about closing overlapping software change tickets to ensure that the full scope of all associated software change tickets is addressed via retest. Be careful about missing any unique elements to the individual software change ticket.
  • Modifications to board decisions
    • Any aspects of control board decisions that are modified must be re-approved by the board (e.g., impacted artifacts that need updating).

7. Software Assurance

SWE-066 - Perform Testing
4.5.3 The project manager shall test the software against its requirements.

7.1 Tasking for Software Assurance

  1. Confirm test coverage of the requirements through the execution of the test procedures.

  2. Perform test witnessing for safety-criticality software.

  3. Ensure that any newly identified software contributions to hazards, events, or conditions found during testing are in the system safety data package.

7.2 Software Assurance Products

  • Test Witnessing Signatures 

Objective Evidence

  • Test coverage metric data.
  • Confirmation that the system safety data package contains newly identified software contributions to hazards, events, or conditions found during testing.
  • Software test plan(s).
  • Software test procedure(s).
  • Software test report(s).

Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

  • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
  • Meeting minutes with attendance lists, or SA meeting notes or assessments of the activities, recorded in the project repository.
  • Status report, email, or memo containing statements that confirmation has been performed, with the date (a checklist of confirmations could be used to record when each confirmation has been done).
  • Signatures on SA-reviewed or witnessed products or activities.
  • Status report, email, or memo containing a short summary of information gained by performing the activity. Some examples of using a "short summary" as objective evidence of a confirmation are:
    • To confirm that "IV&V Program Execution exists," the summary might be: IV&V Plan is in draft state and is expected to be complete by (some date).
    • To confirm that "Traceability between software requirements and hazards with SW contributions exists," the summary might be: x% of the hazards with software contributions are traced to the requirements.
  • The specific products listed in the Introduction of topic 8.16, as well as the examples listed above, are also objective evidence.

7.3 Metrics

  • # of Software Requirements (e.g., Project, Application, Subsystem, System, etc.)
  • # of software requirements with completed test procedures over time
  • # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
  • # of Open issues vs. # of Closed over time
  • # of Source Lines of Code (SLOC) tested vs. total # of SLOC
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • # of tests successfully completed vs. total # of tests
  • Software code/test coverage percentages for all identified safety-critical components (e.g., # of paths tested vs. total # of possible paths)
  • # of Hazards containing software that has been successfully tested vs. total # of Hazards containing software
  • # of Requirements tested successfully vs. total # of Requirements
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of tests executed vs. # of tests successfully completed
  • # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
  • # of safety-related non-conformances identified by life-cycle phase over time
  • # of safety-related requirement issues (Open, Closed) over time
  • # of TBD/TBC/TBR requirements trended over time
  • # of Software Requirements without associated test cases
  • # of Software Requirements being met via satisfactory testing vs. total # of Software Requirements
  • # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
  • Total # of tests completed vs. number of test results evaluated and signed off

          Note: Metrics in bold type are required by all projects
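Several of the metrics above are simple ratios tracked over time. As one illustration, the "detailed software requirements tested to date vs. total" metric can be computed as a trend; the counts below are made-up example values, not data from any project.

```python
# Sketch of one SA test metric: cumulative requirements tested per
# reporting period, as a percentage of the total requirement count.

total_requirements = 120                  # illustrative value
tested_by_week = [10, 35, 60, 88, 104]    # cumulative counts, illustrative

for week, tested in enumerate(tested_by_week, start=1):
    pct = 100.0 * tested / total_requirements
    print(f"week {week}: {tested}/{total_requirements} ({pct:.1f}%) tested")
```

Plotting such a trend against the planned test completion curve gives early warning when testing is falling behind schedule.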

7.4 Guidance

Software assurance will review the test procedures and either review test results or witness the tests being run to confirm the test coverage of the requirements. This assumes that the bidirectional tracing of the test procedures and test requirements has been done previously and shows that all requirements have been traced to one or more tests. See SWE-052 for requirements traceability requirements and guidance and SWE-190 for code coverage.

In projects with safety-critical code, software assurance will perform extra rigor to ensure that all safety-related features are thoroughly tested. This may involve witnessing the tests or doing a more thorough review of the test results to check that all safety features have been tested successfully. In many cases, the requirements for the specific safety features are captured in the hazard reports, so it is important to ensure all of these safety features have been included in the trace to tests. Tests for safety features should include testing in operational scenarios, nominal scenarios, off-nominal conditions, stress conditions, and error conditions that require bringing the system to a safe mode.

Projects should perform regression testing for any changes made to the software during the test process, following the project’s change management process. Tests covering any safety features should be part of the regression test set. See SWE-080 for tracking and evaluating changes and SWE-191 for regression testing.
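Selecting a regression test set along those lines can be sketched as follows. The test names, tags, and module names are hypothetical; a real project would draw them from its test management and configuration management systems.

```python
# Illustrative sketch: after a change, rerun every test that touches a
# changed module, plus all safety-feature tests regardless of what changed.

tests = {
    "TC-01": {"tags": {"safety"}, "touches": {"thruster_ctl"}},
    "TC-02": {"tags": set(),      "touches": {"telemetry"}},
    "TC-03": {"tags": {"safety"}, "touches": {"telemetry"}},
    "TC-04": {"tags": set(),      "touches": {"guidance"}},
}
changed_modules = {"telemetry"}

regression = sorted(
    name for name, t in tests.items()
    if t["touches"] & changed_modules or "safety" in t["tags"]
)
print(regression)   # → ['TC-01', 'TC-02', 'TC-03']
```

Always including the safety-tagged tests reflects the guidance above that safety features stay in the regression set even when the change appears unrelated.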

All software requirements with multiple logic conditions require Formal Qualification Testing (FQT) to exercise all logic conditions.

  • If unable to provide this via FQT, the FQT test designer must confirm coverage via other means, e.g., by leveraging lower-level unit tests.  Exceptions to be documented and approved by the software control board.
  • Guidance for test case coverage must be documented.
