- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. The Requirement
4.5.11 The project manager shall plan and conduct software regression testing to demonstrate that defects have not been introduced into previously integrated or tested software and have not produced a security vulnerability.
1.1 Notes
NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.
1.2 History
1.3 Applicability Across Classes
2. Rationale
The purpose of regression testing is to ensure that changes made to the software have not introduced new defects. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software. To ensure that no new defects are injected into previously integrated or tested software, the project manager should both plan and conduct regression testing to demonstrate that the newly integrated software still performs as intended and has not produced a security vulnerability.
3. Guidance
Regression testing ensures that previously developed and tested software still performs the same way after it is changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. During regression testing, new software bugs may be uncovered. Sometimes a software change-impact analysis is performed to determine which areas could be affected by the proposed changes.

For changes made to the system after it is baselined (first release), a regression test must be run to verify that previous functionality is not affected and that no new errors have been added. This is vitally important! “Fixed” code may well add its own set of errors to the code. If the system is close to some capacity limit, the corrected code may push it over the edge. Performance issues, race conditions, or other problems that were not evident before may be there now.

A regression test is a test method that is applicable at all levels of testing. Performed after any modifications to code, design, or requirements, it assures that the implemented changes have not introduced new defects or errors. Regression testing may be completed by re-execution of existing tests that are traceable to the modified code or requirement. If time permits, the entire suite of system and safety tests would be rerun as the complete regression test. More information on software test levels is found in Topic 7.06 - Software Test Estimation and Testing Levels.
However, for even moderately complex systems, such testing is likely to be too costly in time and money. Usually, a subset of the system tests makes up the regression test suite. Picking the proper subset, however, is an art and not a science.
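One practical input to choosing that subset is the change-impact analysis mentioned above. The sketch below, which assumes an invented module dependency map and module names, walks the dependencies in reverse to find every module that could be affected by a change; a real project would derive the map from its build system or traceability data and then select the tests that exercise the impacted modules.

```python
from collections import deque

# Hypothetical "depends on" relationships: each key depends on the modules in its set.
DEPENDS_ON = {
    "guidance": {"nav_filter", "telemetry"},
    "nav_filter": {"imu_driver"},
    "telemetry": {"imu_driver"},
    "display": {"telemetry"},
    "imu_driver": set(),
}

def impacted_modules(changed_module):
    """Return the changed module plus every module that transitively depends on it."""
    # Invert the map: for each module, which modules depend on it?
    dependents = {module: set() for module in DEPENDS_ON}
    for module, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(module)

    impacted = {changed_module}
    queue = deque([changed_module])
    while queue:
        current = queue.popleft()
        for dependent in dependents.get(current, set()):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

if __name__ == "__main__":
    # A change to the IMU driver ripples up to everything that consumes its data.
    print(sorted(impacted_modules("imu_driver")))
    # ['display', 'guidance', 'imu_driver', 'nav_filter', 'telemetry']
```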
Regression Test approaches:
Minimization is one approach to regression test selection. The goal is to create a regression test suite with a minimal number of tests that will cover the code change and modified blocks. The criterion for this approach is coverage – what statements are executed by the test. In particular, every statement in the changed code must be executed, and every modified block must have at least one test. A sketch of this kind of coverage-driven selection appears after these approach descriptions.
Coverage approaches are based on coverage criteria, like the minimization approach, but they are not concerned about minimizing the number of tests. Instead, all system tests that exercise the changed or affected program component(s) are used.
Safe approaches place less emphasis on coverage criteria and attempt instead to select every test that will cause the modified program to produce different output than the original program. Safe regression test selection techniques select subsets that, under certain well-defined conditions, exclude no tests (from the original test suite) that if executed would reveal faults in the modified software.
Program slicing can be a helpful technique for determining what tests to run. Slicing finds all the statements that can affect a variable or all statements that a variable is involved with. Depending on the changes, slicing may be able to show what components may be affected by the modification.
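As a concrete, hedged illustration of the minimization and coverage approaches, the sketch below greedily picks tests until every changed code block is exercised at least once. The test names, block identifiers, and coverage map are invented for the example; real data would come from the project's coverage tooling.

```python
def select_regression_tests(coverage, changed_blocks):
    """Greedy set cover: choose tests until every changed block is exercised."""
    uncovered = set(changed_blocks)
    selected = []
    while uncovered:
        # Pick the test that covers the most still-uncovered changed blocks.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            # No existing test touches the leftover blocks: new tests must be written.
            raise ValueError(f"No existing test covers: {sorted(uncovered)}")
        selected.append(best)
        uncovered -= gained
    return selected

# Hypothetical coverage data (test id -> code blocks it executes) and a change set.
coverage = {
    "test_ascent_profile": {"B1", "B2", "B7"},
    "test_abort_sequence": {"B2", "B3"},
    "test_telemetry_rates": {"B7", "B9"},
}
print(select_regression_tests(coverage, changed_blocks={"B2", "B7", "B9"}))
# ['test_ascent_profile', 'test_telemetry_rates']
```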
A requirements management program is a useful tool in determining which tests need to be run. When changes impact a specific requirement, especially a safety requirement, all test cases that test that requirement should be run. Knowing which test cases exercise which requirements is one aspect of requirements traceability.
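With a traceability matrix in hand, that selection becomes nearly mechanical. The sketch below assumes invented requirement IDs, test names, and a safety flag; consistent with the guidance above, tests traced to safety requirements are always kept in the regression set.

```python
# Hypothetical traceability matrix: requirement ID -> (is_safety, test cases that verify it).
TRACE = {
    "SRS-101": (False, ["test_cmd_parser", "test_cmd_timeout"]),
    "SRS-205": (True, ["test_overtemp_shutdown", "test_heater_interlock"]),
    "SRS-310": (False, ["test_downlink_format"]),
}

def tests_for_change(changed_reqs):
    """Select every test traced to a changed requirement; safety requirements are always included."""
    selected = set()
    for req, (is_safety, tests) in TRACE.items():
        if req in changed_reqs or is_safety:
            selected.update(tests)
    return sorted(selected)

print(tests_for_change({"SRS-101"}))
# ['test_cmd_parser', 'test_cmd_timeout', 'test_heater_interlock', 'test_overtemp_shutdown']
```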
Whatever strategy is used to select the regression tests, it should be a well-thought-out process. Balance the risks of missing an error with the time and money spent on regression testing. Very minor code changes usually require less regression testing, unless they are in a very critical area of the software. Also consider including in the regression suite tests that previously found errors, tests that stress the system, and performance tests. You want the system to run at least as well after the change as it did before the change! For safety-critical code, or software that resides on the same platform as safety-critical code, the software safety tests must be repeated, even for minor changes.
Additional guidance related to software testing may be found in the other testing-related requirements and topics in this Handbook.
4. Small Projects
No additional guidance is available for small projects.
5. Resources
5.1 References
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN. SPAN is a compendium of processes, procedures, job aids, examples, and other recommended best practices.
- (SWEREF-337) Souppaya, Murugiah, and Scarfone, Karen. NIST Special Publication SP 800-40 Rev. 4, April 2022.
- (SWEREF-367) IEEE Computer Society, sponsored by the Software Engineering Standards Committee. IEEE Std 982.1™-2005 (Revision of IEEE Std 982.1-1988).
5.2 Tools
NASA users can find tools to aid in compliance with this requirement in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
6. Lessons Learned
6.1 NASA Lessons Learned
No Lessons Learned have currently been identified for this requirement.
6.2 Other Lessons Learned
No other Lessons Learned have currently been identified for this requirement.
7. Software Assurance
7.1 Tasking for Software Assurance
- Confirm that the project plans regression testing and that the regression testing is adequate and includes retesting of all safety-critical code components.
- Confirm that the project performs the planned regression testing.
- Identify any risks and issues associated with the regression test set selection and execution.
- Confirm that the regression test procedures are updated to incorporate tests that validate the correction of critical anomalies.
7.2 Software Assurance Products
- Verification Activities Analysis
- Software assessment of regression test set, including any risks or issues associated with the regression test set selection and execution. Include risks and issues in the project tracking system; track to closure.
Objective Evidence
- Software regression test procedures
- Software regression test reports
- Software test plan
7.3 Metrics
- # of software work product Non-Conformances identified by life-cycle phase over time
- # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
- # of hazards with completed test procedures/cases vs. total number of hazards over time
- # of Open issues vs. # of Closed over time
- # of detailed software requirements tested to date vs. total # of detailed software requirements
- # of tests successfully completed vs. total # of tests
- # of Cybersecurity mitigation implementations identified with associated test procedures vs. # of Cybersecurity mitigation implementations identified
- # of Cybersecurity mitigation implementations identified from the security vulnerabilities and security weaknesses
- # of Regression test set Non-Conformances/Risks over time (Open, Closed, Severity)
- # of Requirements tested successfully vs. total # of Requirements
- # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
- # of Non-Conformances identified while confirming hazard controls are verified through test plans/procedures/cases
- # of Non-Conformances identified when the approved, updated requirements are not reflected in test procedures
- # of Non-Conformances and risks open vs. # of Non-Conformances, risks identified with test procedures
- # of safety-related non-conformances identified by life-cycle phase over time
- # of Risks trending up over time
- # of Risks trending down over time
- # of Risks with mitigation plans vs. total # of Risks
- # of Risks by Severity (e.g., red, yellow, green) over time
- # of software requirements with completed test procedures over time
- # of tests executed vs. # of tests successfully completed
7.4 Guidance
Software assurance will review the project’s software test plans and procedures to confirm that regression testing has been planned and the planned regression testing is adequate.
Regression testing is the testing of software to confirm that functions that previously performed correctly continue to perform correctly after a change has been made.
Typically, a subset of previously executed tests is chosen for the regression set to retest the functionality where changes were made. A few additional tests should be chosen to ensure that the changes do not affect functionality in other areas of the system. There are many considerations in choosing the right set of regression tests, including the balance between the time and effort required to run the regression tests and the number of regression tests necessary to provide confidence that the changes made have not affected anything else in the system.
Some potential guidelines for choosing an adequate regression test set are listed below; a sketch of one way to weigh these criteria follows the list:
- Regression testing should include functional, performance, stress, and safety testing of the altered code and all modules it interacts with
- All safety-critical code components should be retested
- In addition to the above, select the following test cases:
  - Test cases in areas where frequent errors have been found
  - Test cases in areas of high complexity code
  - Test cases where the code produces primary functions of the system
  - Test cases that cover the essential operational activities
  - Test cases for boundary conditions
  - Integration tests
  - Test cases in areas where there have been many changes
  - Test cases in areas where security vulnerabilities are more likely
  - A sample of test cases with previous successful runs
  - A sample of test cases with previous failures
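One way to weigh these guidelines, sketched below with invented field names, weights, and a threshold, is to score each candidate test for the risk factors it touches and keep every safety-critical test plus anything that scores above the threshold. The numbers are illustrative assumptions, not handbook values.

```python
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    safety_critical: bool = False
    previously_failed: bool = False
    high_complexity_area: bool = False
    frequently_changed_area: bool = False
    essential_operation: bool = False
    security_sensitive: bool = False

# Illustrative weights; a project would tune these to its own risk posture.
WEIGHTS = {
    "previously_failed": 3,
    "high_complexity_area": 2,
    "frequently_changed_area": 2,
    "essential_operation": 3,
    "security_sensitive": 3,
}

def build_regression_set(candidates, threshold=3):
    """Keep all safety-critical tests, plus any test whose risk score meets the threshold."""
    chosen = []
    for test in candidates:
        score = sum(weight for field, weight in WEIGHTS.items() if getattr(test, field))
        if test.safety_critical or score >= threshold:
            chosen.append(test.name)
    return chosen

candidates = [
    CandidateTest("test_thruster_shutoff", safety_critical=True),
    CandidateTest("test_orbit_propagation", high_complexity_area=True, essential_operation=True),
    CandidateTest("test_log_rotation"),  # low risk: dropped at this threshold
]
print(build_regression_set(candidates))
# ['test_thruster_shutoff', 'test_orbit_propagation']
```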
Software assurance personnel should confirm that a regression set has been planned that is adequate for the software system being tested, taking into consideration the above guidelines. In the case of safety-critical software, all safety-critical functions should be retested. For more information on safety-critical software, including the list of software safety-critical functions, see SWE-205 - Determination of Safety-Critical Software.
Regression tests for safety-critical software should do the following (a hedged example of one such test appears after the list):
- Demonstrate the correct execution of critical software elements.
- Demonstrate that critical computer software units still execute together as specified.
- Include computer software configuration item testing that demonstrates the execution of one or more system components.
- Demonstrate the software’s performance within the overall system.
- Demonstrate the software will not cause hazards under abnormal circumstances, such as unexpected input values or overload conditions.
- Demonstrate that changes made to the software did not introduce conditions for new hazards.
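As one hedged illustration of the "abnormal circumstances" item above, a pytest-style regression test can feed out-of-range sensor readings to a control routine and assert that it commands a safe state. The module, function, markers, and limits below are hypothetical; they are not part of any NASA codebase.

```python
import pytest

# Hypothetical flight-software interface, used only for illustration.
from thermal_control import compute_heater_command, SAFE_MODE

# "regression" and "safety_critical" are assumed custom markers registered in the
# project's pytest configuration.
@pytest.mark.regression
@pytest.mark.safety_critical
@pytest.mark.parametrize("bad_temp_c", [-500.0, 5000.0, float("nan")])
def test_heater_command_rejects_out_of_range_temperature(bad_temp_c):
    """Out-of-range or invalid sensor readings must drive the heater to its safe state."""
    command = compute_heater_command(sensor_temp_c=bad_temp_c)
    assert command == SAFE_MODE
```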
Once software assurance confirms that regression testing is planned and the regression test sets planned are adequate, SA will need to confirm that the regression plans are being followed and the regression test set is being run as planned, including the retesting of any safety-critical code components, as part of the change process and during the testing phases.
In addition to confirming the regression test sets are adequate and are being run, software assurance will analyze the regression test sets (a simple coverage-check sketch follows the list below) to verify that the test sets will:
- Address defects in the software,
- Provide coverage in testing the essential operational activities, and
- Test security vulnerabilities.
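A lightweight way to perform this check, sketched below with invented tags, is to label each regression test with the defect fixes, essential operational activities, and security vulnerabilities it exercises and then report anything left uncovered as a candidate issue or risk.

```python
# Hypothetical tags: regression test -> items it exercises
# (defect fixes "DR-", essential operational activities "OPS-", security vulnerabilities "VULN-").
REGRESSION_TAGS = {
    "test_uplink_retry": {"DR-214", "OPS-pass-scheduling"},
    "test_safe_mode_entry": {"DR-198", "OPS-safe-mode"},
    "test_cmd_auth": {"VULN-2023-01"},
}

def uncovered_items(required):
    """Return required defect fixes, activities, or vulnerabilities that no regression test touches."""
    covered = set().union(*REGRESSION_TAGS.values())
    return required - covered

required = {"DR-214", "DR-198", "OPS-safe-mode", "OPS-day-in-the-life", "VULN-2023-01"}
print(sorted(uncovered_items(required)))
# ['OPS-day-in-the-life']  -> raise with the test team and track to closure
```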
Essential operational activities can be obtained by reviewing the current operational concepts for the mission. Often, the specific tests that focus on key essential operational activities are classified as "day in the life of" tests in the test procedures. Security vulnerabilities can be found in the latest security risk assessments. For more information on security vulnerabilities, see the following SWEs in this Handbook:
- SWE-154 - Identify Security Risks
- SWE-156 - Evaluate Systems for Security Risks
- SWE-158 - Evaluate Software for Security Vulnerabilities
Finally, software assurance will analyze the test results from the regression tests and compare them with the results from previous regression test runs or the results from previously tested or integrated software. Any discrepancies need to be brought to the attention of the development and test teams and resolved before the tests can be considered successfully completed.
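That comparison can be as simple as diffing the pass/fail verdicts of the current regression run against the previous run, as in the sketch below; the test names and verdicts are invented.

```python
def regression_discrepancies(previous, current):
    """List tests whose verdict changed, or that were dropped from or added to the run."""
    findings = []
    for test in sorted(previous.keys() | current.keys()):
        before = previous.get(test, "NOT RUN")
        after = current.get(test, "NOT RUN")
        if before != after:
            findings.append(f"{test}: {before} -> {after}")
    return findings

previous = {"test_ascent_profile": "PASS", "test_abort_sequence": "PASS"}
current = {"test_ascent_profile": "PASS", "test_abort_sequence": "FAIL",
           "test_telemetry_rates": "PASS"}
for finding in regression_discrepancies(previous, current):
    print(finding)
# test_abort_sequence: PASS -> FAIL
# test_telemetry_rates: NOT RUN -> PASS
```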
While regression testing is being performed, the software assurance team will identify, record, and maintain any issues or risks they find with the regression test selection, the execution of the regression tests, or the regression test process. These issues and risks will help improve regression selection, test execution, and regression procedures.
Note: If using an Agile or incremental development process, then code verification and verification of Sprint and daily tasks need to be assured within the Sprint time frame. Daily testing is done on the new capability, and regression tests must be run on previously tested features, so the regression test set builds up as the Sprint progresses. Generally, an automated process is used for testing. At the end of the Sprint, the code and any supporting products, such as documentation and test suites, needed to meet the Project's definition of “Done” are saved. Daily regression testing needs to be updated with new features and functions.
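For the Agile case described in the note, the regression set can simply accumulate as the Sprint progresses, as in the minimal sketch below; the per-day test groupings are invented, and in practice the accumulation would live in the project's automated test pipeline rather than a standalone script.

```python
# Hypothetical record of the tests added for each day's new capability in the Sprint.
NEW_TESTS_BY_DAY = {
    1: ["test_parse_new_command"],
    2: ["test_command_rate_limit"],
    3: ["test_rate_limit_telemetry"],
}

def regression_suite_for_day(day):
    """The regression set for a given day is every test added on an earlier day."""
    return [test for d in sorted(NEW_TESTS_BY_DAY) if d < day
            for test in NEW_TESTS_BY_DAY[d]]

print(regression_suite_for_day(3))
# ['test_parse_new_command', 'test_command_rate_limit']  -- run before testing day 3's new work
```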