

SWE-192 - Software Hazardous Requirements

1. Requirements

4.5.12 The project manager shall verify through test the software requirements that trace to a hazardous event, cause, or mitigation technique.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-192 - Last used in rev NPR 7150.2D

Rev   SWE Statement
A
Difference between A and B:  N/A
B
Difference between B and C:  NEW
C     4.5.12 The project manager shall verify through test the software requirements that trace to a hazardous event, cause, or mitigation technique.
Difference between C and D:  No change
D     4.5.12 The project manager shall verify through test the software requirements that trace to a hazardous event, cause, or mitigation technique.


1.3 Applicability Across Classes

Class          A      B      C      D      E      F
Applicable?

Key:    - Applicable |    - Not Applicable


2. Rationale

Verify by test that any safety features related to system hazards, fault trees, or FMEA events are reliable and work as planned.

3. Guidance

Software testing is required to verify that software functions work correctly and that safety features related to system hazards, fault trees, or FMEA events are reliable and work as planned, especially when those hazards affect humans or critical missions. Software requirements that trace to a hazardous event, cause, or mitigation technique are considered safety-critical software requirements; they need to be tested to ensure that the safety features they describe perform as expected under all conditions, to keep people safe and support mission success.

See also Topic 8.05 - SW Failure Modes and Effects Analysis

There are typically four ways to show proof of compliance with requirements/specifications: test, analysis, demonstration, inspection, or a combination thereof. Due to the criticality of safety-critical software, the only acceptable way to verify safety-critical software requirements is through test.

Safety-critical software testing should use information from the system safety analysis to identify the controls, inhibits, and mitigations that will be verified to work as needed. The safety-critical software testing should be officially recorded, and the test procedures, test software, test hardware, results, and artifacts should be placed under configuration management. To assure that the risk has been reduced, all safety-critical software testing should aggressively explore the potential for a hazard using realistic conditions and scenarios, including abnormal and off-nominal conditions. All safety controls and mitigations should be tested and verified to work as expected, as should the software's ability to take the system to a safe state in the presence of unexpected or irresolvable hazardous conditions. See also Topic 8.01 - Off Nominal Testing. Modified condition/decision coverage (MC/DC) is one criterion for determining how thoroughly the decision branches in the code have been exercised.
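To make this concrete, the following is a minimal sketch, using Python's standard unittest framework, of how one safety control might be exercised under both nominal and off-nominal conditions, including verification that the software commands a safe state. The PressureController class, its pressure limit, and the vent-to-safe-state behavior are hypothetical stand-ins for whatever controls, inhibits, and mitigations the system safety analysis actually identifies.

    # Hypothetical example: verify that a software hazard mitigation drives the
    # system to a safe state under off-nominal inputs. Names and thresholds are
    # illustrative only, not taken from any NASA project.
    import math
    import unittest


    class PressureController:
        """Toy controller: opens the vent (safe state) if pressure is out of range."""
        MAX_SAFE_KPA = 500.0

        def __init__(self):
            self.vent_open = False  # safe-state indicator

        def update(self, pressure_kpa: float) -> None:
            # Mitigation: any invalid or over-limit reading commands the safe state.
            if math.isnan(pressure_kpa) or pressure_kpa < 0 or pressure_kpa > self.MAX_SAFE_KPA:
                self.vent_open = True


    class TestHazardMitigation(unittest.TestCase):
        def test_nominal_pressure_does_not_vent(self):
            ctrl = PressureController()
            ctrl.update(350.0)          # nominal condition
            self.assertFalse(ctrl.vent_open)

        def test_overpressure_commands_safe_state(self):
            ctrl = PressureController()
            ctrl.update(650.0)          # off-nominal: exceeds limit
            self.assertTrue(ctrl.vent_open)

        def test_invalid_sensor_reading_commands_safe_state(self):
            ctrl = PressureController()
            ctrl.update(float("nan"))   # off-nominal: failed sensor
            self.assertTrue(ctrl.vent_open)


    if __name__ == "__main__":
        unittest.main()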

Safety-critical software testing should include verification and validation that the implemented fault and failure detection and recovery mechanisms work as derived from the safety analyses, such as preliminary hazard analyses (PHAs), subsystem hazard analyses, failure modes and effects analyses, and fault tree analyses. This testing can include software failures, hardware failures, interface failures, and multiple concurrent hardware failures. Fault detection, isolation, and recovery (FDIR) is often used in place of failure detection in software, because software can detect and react to faults before they become failures.

Code Coverage should be 100% for all safety-critical software functions or components. See also SWE-190 - Verify Code Coverage.

One intent of software testing is to test all paths through the code, every decision and every nominal and off-nominal path, by executing test cases. Code coverage metrics identify additional tests that need to be added to the test run. Code coverage tools monitor which paths the software executes and can be used during test runs to identify code paths that were not executed by any test. By analyzing these missed areas, tests can be identified and implemented to execute the missed paths. It may be challenging to reach 100% coverage because some off-nominal and hardware conditions (e.g., radiation effects, hardware failures) are not possible or not advisable to execute during a test run; but if the project cannot show 100% test coverage for all safety-critical functions and components, it may not be able to ensure that safety features related to system hazards, fault trees, or FMEA events are reliable and work as planned. The code coverage metric can also identify sections of orphaned or unused code (dead code) within a safety-critical component.
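As one illustration of how a coverage tool can flag untested paths in a safety-critical component, the sketch below uses the open-source coverage.py package to measure statement coverage while the unit tests run and to flag any gap below 100%. The module path flight_sw/pressure_controller.py, the tests directory, and the 100% threshold are assumptions for the example; measuring MC/DC would require a tool that supports that criterion.

    # Minimal sketch (not project-specific): measure statement coverage of a
    # safety-critical module with the open-source coverage.py package.
    # Paths and the 100% threshold are illustrative assumptions.
    import unittest

    import coverage

    cov = coverage.Coverage(include=["flight_sw/pressure_controller.py"])
    cov.start()

    # Run the existing unit tests while execution is being recorded.
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner(verbosity=2).run(suite)

    cov.stop()
    cov.save()

    # report() returns the total coverage percentage; show_missing lists the
    # statements that no test executed.
    percent = cov.report(show_missing=True)
    if percent < 100.0:
        raise SystemExit(f"Coverage gap in safety-critical module: {percent:.1f}%")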

Consider using code coverage as a part of a project’s software testing metrics. Code coverage (also referred to as structural coverage analysis) is an important verification tool for establishing the completeness and adequacy of testing. Traceability between code, requirements, and tests is complemented by measuring the structural coverage of the code when the tests are executed. Where coverage is less than 100%, this points to:

  • Code that is not traceable to requirements.
  • Inadequate tests.
  • Incomplete requirements.
  • A combination of the above.

When using requirements-based testing, 100% code coverage means that, subject to the coverage criteria used, no code exists that cannot be traced to a requirement. For example, using function coverage, every function is traceable to a requirement (but individual statements within each function may not be). What 100% code coverage does not mean is:

  • The code is correct. 100% coverage means only that the test cases, when aggregated, exercise every line of code; exercising every line of code is not sufficient to show there are no bugs. As long ago as 1969, Edsger Dijkstra noted that "testing shows the presence of bugs, not their absence": just because testing doesn't reveal any errors does not mean none are present.
  • The software requirements are correct. Whether the requirements are correct is determined through validation of the requirements with the customer.
  • 100% of the requirements have been tested. Merely achieving 100% code coverage is not enough; the requirements are fully verified only if the project achieves 100% code coverage AND has a test for 100% of the requirements AND every test passes.
  • The compiler translated the code correctly. The compiler might be inserting errors that cause incorrect results in some situations (ones the project has not tested for).
  • 100% of the object code is covered. Even when all statements and conditions of the source code are being executed, the compiler can introduce additional structures into the object code.

See also SWE-189 - Code Coverage Measurements

Per section 3.2 of IEEE 730-2014, IEEE Standard for Software Quality Assurance Processes, "software testing is an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component." Per ISO/IEC TR 19759:2005, Software Engineering -- Guide to the Software Engineering Body of Knowledge (SWEBOK), software testing is "the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior."

The developer performs software testing to demonstrate to the acquirer that the software item requirements have been met, including all safety-critical requirements. If the software item is developed in multiple builds, its software item qualification testing will not be completed until the final build for that software item. See also SWE-193 - Acceptance Testing for Affected System and Software Behavior

Software testing is critical for the following reasons:

  1. Software testing points out the defects and errors that were made during the development phases.
  2. Software testing helps ensure reliability in a safety-critical application.
  3. Software testing verifies the quality of the product and provides proof that a safety feature works as designed; this evidence may be necessary for the safety of operations and for certification of the system, and it builds confidence that the safety-critical software is robust and can be trusted.
  4. Software testing is required for the effective performance of a software application or product.

See also SWE-065 - Test Plan, Procedures, Reports, SWE-066 - Perform Testing, SWE-068 - Evaluate Test Results, SWE-071 - Update Test Plans and Procedures

3.1 Additional Software Test Guidance

Testing serves several purposes: to find defects; to validate the system or an element of the system; and to verify functionality, performance, and safety requirements. The focus of testing is often on the verification and validation aspects. However, defect detection is probably the most important aspect of testing. While you cannot test quality into the software, you can certainly work to remove as many defects as possible.

The following are basic principles of testing:

  • All tests need to be traceable to the requirements, and all requirements need to be verified by one or more methods (e.g., test, demonstration, inspection, analysis). See also SWE-052 - Bidirectional Traceability
  • Tests need to be planned before testing begins. Test planning can occur as soon as the relevant stage has been completed. System test planning can start when the requirements document is complete.  
  • The "80/20" principle applies to software testing. In general, 80 percent of errors can be traced back to 20 percent of the components. Anything you can do ahead of time to identify components likely to fall in that 20 percent (e.g., high risk, complex, many interfaces, demanding timing constraints) will help focus the testing effort for better results.
  • Start small and then integrate into the larger system. Finding defects deep in the code is difficult to do at the system level. Such defects are easier to uncover at the unit level.
  • You cannot test everything. Exhaustive testing cannot be done except for the most trivial of systems. However, a well-planned testing effort can test all parts of the system. Missing logic paths or branches may mean missing vital defects, so test coverage needs to be determined.
  • Testing by an independent party is most effective. It is hard for developers to see their bugs. While unit tests are usually written and run by the developer, it is a good idea to have a fellow team member review the tests. A separate testing group will usually perform the other tests. An independent viewpoint helps find defects, which is the goal of testing.

Other principles to consider when focusing on safety testing:

  • Software testing beyond the unit level (integration and system testing) is usually performed by someone other than the developer, except in the smallest of teams.
  • Normally, software testing ensures that the software performs all required functions correctly and exhibits graceful behavior under anomalous conditions.
  • Integration testing is often done in a simulated environment, and system testing is usually done on the actual hardware. However, hazardous commands or operations need to be tested in a simulated environment first.
  • Any problems discovered during testing need to be analyzed and documented in discrepancy reports and summarized in test reports.
  • Create and follow written test procedures for integration and system testing.
  • Perform regression testing after each change to the system.
  • Prepare a Test Report upon completion of a test.
  • Verify that commercial off-the-shelf (COTS) software operates as expected.
  • Follow problem reporting and corrective action procedures when defects are detected.
  • Perform testing either in a controlled environment, using a structured test procedure and monitoring of results, or in a demonstration environment where the software is exercised without interference.
  • Analyze tests before use to ensure adequate test coverage.
  • Analyze test results to verify that requirements have been satisfied and that all identified hazards are eliminated or controlled to an acceptable level of risk.

See also Topic 8.08 - COTS Software Safety Considerations

Other useful practices include:

  • Plan and document testing activities to ensure all required testing is performed.
  • Have test plans, procedures, and test cases inspected and approved before use.
  • Use a test verification matrix to ensure coverage of all requirements (a minimal sketch of such a check appears after this list).
  • Consider dry running test procedures in offline labs with simulations before actual hardware/software integration tests.
  • Consider various types of testing to achieve more comprehensive coverage. (See Software QA and Testing Frequently-Asked-Questions or NASA-GB-8719.13, NASA Software Safety Guidebook, for a list with descriptions.)
  • When time and resources are limited, identify areas of highest risk and set priorities to focus effort where the greatest benefit will be achieved with the available resources. (See Software QA and Testing Frequently-Asked-Questions for suggested risk analysis considerations.)
  • As necessary and appropriate, include support from the software development and test team when performing formal testing of the final system. Support could include:
    • Identifying system test requirements unique to software.
    • Providing input for software to system test procedures.
    • Providing software design documentation.
    • Providing software test plans and procedures.
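A test verification matrix can be as simple as a mapping from each requirement identifier to the tests that verify it. The following is a minimal sketch, assuming hypothetical requirement IDs and test names, of how such a matrix could be checked automatically for requirements that have no verifying test.

    # Minimal sketch of a requirements-to-test verification matrix check.
    # Requirement IDs and test names are hypothetical examples.
    from typing import Dict, List

    # Each requirement maps to the list of tests that verify it.
    verification_matrix: Dict[str, List[str]] = {
        "SRS-101": ["test_overpressure_commands_safe_state"],
        "SRS-102": ["test_invalid_sensor_reading_commands_safe_state"],
        "SRS-103": [],  # no verifying test yet -> coverage gap
    }

    def untested_requirements(matrix: Dict[str, List[str]]) -> List[str]:
        """Return requirement IDs that have no verifying test."""
        return [req_id for req_id, tests in matrix.items() if not tests]

    gaps = untested_requirements(verification_matrix)
    if gaps:
        print("Requirements without a verifying test:", ", ".join(gaps))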

While NASA Centers typically have their own procedures and guidance, the following types of testing need to be considered when planning any software test effort:

  • Functional system testing.
  • Stress testing.
  • Stability tests.
  • Resistance to failure testing.
  • Compatibility tests.
  • Performance testing.

Tools that may be useful when performing software testing include the following, non-exhaustive list. Each project needs to evaluate and choose the appropriate tools for the testing to be performed for that project.

  • Software analysis tools.
  • Reverse engineering, code navigation, metrics, and cross-reference tools.
  • Debuggers.
  • Compilers.
  • Coding standards checkers.
  • Memory management tools.
  • Screen capture utilities.
  • Serial interface utilities.
  • Telemetry display utilities.
  • Automated scripts.
  • Etc.

3.2 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.7 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

4. Small Projects

No additional guidance is available for small projects.

5. Resources

5.1 References


5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

 

6. Lessons Learned

6.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-192 - Software Hazardous Requirements
4.5.12 The project manager shall verify through test the software requirements that trace to a hazardous event, cause, or mitigation technique.

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Through testing, confirm that the project verifies the software requirements which trace to a hazardous event, cause, or mitigation techniques.

7.2 Software Assurance Products

  • None at this time


Objective Evidence

  • Software test reports
  • Software traceability data
  • Evidence that SA has approved or signed off on the software test procedures and test results that trace to safety-critical software component(s).

Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

  • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry into a tracking system (e.g., Risk Log).
  • Meeting minutes with attendance lists, or SA meeting notes or assessments of the activities, recorded in the project repository.
  • Status report, email, or memo containing statements that confirmation has been performed, with the date (a checklist of confirmations could be used to record when each confirmation has been done).
  • Signatures on SA-reviewed or witnessed products or activities, or
  • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
    • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
    • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
  • The specific products listed in the Introduction of Topic 8.16 are also objective evidence, in addition to the examples listed above.

7.3 Metrics

  • # of safety-critical requirement verifications completed vs. total # of safety-critical requirement verifications
  • # of Open issues vs. # of Closed issues over time
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • # of tests completed vs. total # of tests
  • # of Hazards containing software that have been tested vs. total # of Hazards containing software
  • # of Requirements tested vs. total # of Requirements
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of tests executed vs. # of tests completed
  • # of safety-related requirement issues (Open, Closed) over time
  • # of Software Requirements being met via satisfactory testing vs. total # of Software Requirements
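
As a simple illustration of how the first metric in this list might be computed and tracked over time, the sketch below counts completed safety-critical requirement verifications against the total; the verification records, field names, and requirement IDs are hypothetical.

    # Minimal sketch: compute "# of safety-critical requirement verifications
    # completed vs. total" from hypothetical verification records.
    verifications = [
        {"req": "SRS-101", "safety_critical": True,  "completed": True},
        {"req": "SRS-102", "safety_critical": True,  "completed": False},
        {"req": "SRS-200", "safety_critical": False, "completed": True},
    ]

    sc = [v for v in verifications if v["safety_critical"]]
    done = sum(1 for v in sc if v["completed"])
    print(f"Safety-critical verifications completed: {done} of {len(sc)} "
          f"({100.0 * done / len(sc):.0f}%)")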

7.4 Guidance

Software assurance will review the project traceability matrix to confirm that any software requirements related to a hazard have been included in the set of documented requirements. A hazard requirements flow-down matrix should be developed that maps safety requirements and hazard controls to system/software functions. All requirements that trace to a hazardous event, cause, or mitigation should be included in this flow-down matrix. Software assurance should confirm that all the system/software functions in this flow-down matrix are traced to a test procedure/test. Software assurance will verify that all of these tests have been run successfully and passed.
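A minimal sketch of this confirmation step, assuming hypothetical hazard, requirement, and test identifiers, might cross-check the flow-down matrix against test results so that any hazard-traced requirement without a passed test is reported:

    # Minimal sketch: confirm every requirement in the hazard flow-down matrix
    # is traced to at least one test and that all of those tests passed.
    # Hazard IDs, requirement IDs, test names, and results are hypothetical.
    flow_down = {
        "HAZ-01": ["SRS-101", "SRS-102"],   # hazard -> safety requirements
        "HAZ-02": ["SRS-103"],
    }
    req_to_tests = {
        "SRS-101": ["test_overpressure_commands_safe_state"],
        "SRS-102": ["test_invalid_sensor_reading_commands_safe_state"],
        "SRS-103": [],                      # not yet traced to a test
    }
    test_results = {
        "test_overpressure_commands_safe_state": "PASS",
        "test_invalid_sensor_reading_commands_safe_state": "FAIL",
    }

    for hazard, reqs in flow_down.items():
        for req in reqs:
            tests = req_to_tests.get(req, [])
            if not tests:
                print(f"{hazard}/{req}: no test traced to this requirement")
            elif any(test_results.get(t) != "PASS" for t in tests):
                print(f"{hazard}/{req}: traced test(s) have not all passed")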

In addition, software assurance needs to perform a thorough requirements analysis of the safety-related requirements. Guidance on performing the requirements analysis can be found in Topic 8.16 - SA Products, in the section titled 8.54 - Software Requirements Analysis (see tab 2, SW Requirements Analysis Techniques), in this Handbook. In addition to the items listed there, other requirements analysis activities are very useful for helping ensure that all the safety requirements have been captured.

The requirements analysis activity verifies that safety requirements for the software were properly flowed down from the system safety requirements and that they are correct, consistent, and complete.  It also looks for new hazards, software functions that can impact hazard controls, and ways the software can behave that are unexpected. These are primarily top-down analyses.

A bottom-up analysis of the software requirements, such as a Requirements Criticality Analysis, is performed to identify possible hazardous conditions. This results in another iteration of the Preliminary Hazard Analysis (PHA) that may generate new software requirements. Specification analysis is also performed to ensure the consistency of requirements.

Analyses for safety that should be considered in the Software Requirements Phase include:

  • Software Safety Requirements Flow Down Analysis
  • Requirements Criticality Analysis
  • Specification Analysis
  • Formal Inspections
  • Timing, Throughput, and Sizing Analysis
  • Preliminary Software Fault Tree Analysis

For the safety-related requirements, a software safety analysis should also be done. See the SA Tasking for SWE-205 - Determination of Safety-Critical Software.

There is more guidance on testing safety-related requirements in the software guidance section of this SWE requirement. See also Topic 8.57 - Testing Analysis

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:
