Software Requirements Analysis

1. Introduction

The Software Requirements Analysis product focuses on analyzing the software requirements that have been developed from the system requirements. This topic describes some of the methods and techniques Software Assurance and Software Safety personnel may use to evaluate the quality of the software requirements that were developed.

The software requirements process begins with a good understanding of the operational concept, system architecture, and system design. From there, software engineering can begin to derive the requirements for the application, component, feature, etc.

Since the software requirements provide the roadmap for software engineering to build a correct, robust software system/application that meets or exceeds the operational needs of the system, it is key that the requirements adequately reflect what is being requested. It is the role of the Software Assurance and Software Safety (if applicable) Teams to ensure the documented set of requirements is the “best set” of requirements that can be defined by performing a software requirements analysis.

During the software requirements analysis activity, all aspects of the requirements are examined carefully to determine if any improvements are needed before design and implementation begin. There are many tools and techniques that may be used to perform a thorough analysis of the requirements and these are discussed on the various tabs of this topic. NPR 7150.2 and NASA-STD-8739.8 require some specific methods be used for the analysis. Those are listed in the table below.

Many other techniques are available and aid in locating various types of requirements problems. The Software Engineering, Software Assurance, and Software Safety teams should supplement the required methods by choosing any of the techniques listed that would benefit their project. Note: Safety analysis during the requirements development phase is an important part of the requirements analysis for safety-critical systems and, as such, is included in this topic (see tab 3). Each team performs the required SWEs or SA/Safety activities and then should choose other listed techniques/methods that would help the team do a more thorough job of analyzing the requirements. The results of the analysis and the techniques/methods used are documented in a Software Requirements Analysis Report (including the Software Safety Requirements Analysis).

For safety-critical software, the requirements portion of the Software Safety Analysis should be done in conjunction with the Software Requirements Analysis. See tab 3. Safety Analysis During Requirements for items to be included.

Many characteristics of the software requirements are considered. Requirements should be: complete, correct, understandable, unambiguous, testable, traceable to the higher-level requirements, consistent, able to meet the user’s expectations, and detailed enough to include boundary conditions, constraints, desired controls, etc.

The information in this topic is divided into several tabs as follows:

  • Tab 1 – Introduction
  • Tab 2 – SW Requirements Analysis Techniques – lists required and some other possible methods/techniques for requirements analysis
  • Tab 3 – Safety Analysis During Requirements – provides additional guidance when safety critical software is involved with analysis emphasis on safety features
  • Tab 4 – Requirements Analysis Report – guidance on reporting the results of the requirements analysis performed
  • Tab 5 – Resources for this topic

The following is a list of the applicable SWE requirements that relate to the generation of the Software Requirements Analysis product:

SWE #

NPR 7150.2 Requirement

NASA-STD-8739.8 Software Assurance and Software Safety Tasks

051

The project manager shall perform software requirements analysis based on flowed-down and derived requirements from the top-level systems engineering requirements, safety and reliability analyses, and the hardware specifications and design.

1. Perform a software assurance analysis on the detailed software requirements to analyze the software requirement sources and identify any incorrect, missing, or incomplete requirements.

052

The project manager shall perform, record, and maintain bi-directional traceability between the following software elements:

Bi-directional Traceability | Class A, B, and C | Class D | Class F
Higher-level requirements to the software requirements | X |   | X
Software requirements to the system hazards | X | X |  
Software requirements to the software design components | X |   |  
Software design components to the software code | X |   |  
Software requirements to the software verification(s) | X | X | X
Software requirements to the software non-conformances | X | X | X
1. Confirm that bi-directional traceability has been completed, recorded, and maintained.

2. Confirm that the software traceability includes traceability to any hazard that includes software.



080

The project manager shall track and evaluate changes to software products.

1. Analyze proposed software and hardware changes to software products for impacts, particularly safety and security.

081

The project manager shall identify the software configuration items (e.g., software records, code, data, tools, models, scripts) and their versions to be controlled for the project.

2. Assess that the software safety-critical items are configuration managed, including hazard reports and safety analysis.

087

The project manager shall perform and report the results of software peer reviews or software inspections for:
a. Software requirements.
b. Software plans, including cybersecurity.
c. Any design items that the project identified for software peer review or software inspections according to the software development plans.
d. Software code as defined in the software and or project plans.
e. Software test procedures.
1. Confirm that software peer reviews are performed and reported on for project activities.

2. Confirm that the project addresses the accepted software peer review findings.

134

If a project has safety-critical software or mission-critical software, the project manager shall implement the following items in the software:
a. The software is initialized, at first start and restarts, to a known safe state.
b. The software safely transitions between all predefined known states.
c. Termination performed by software of functions is performed to a known safe state.
d. Operator overrides of software functions require at least two independent actions by an operator.
e. Software rejects commands received out of sequence when execution of those commands out of sequence can cause a hazard.
f. The software detects inadvertent memory modification and recovers to a known safe state.
g. The software performs integrity checks on inputs and outputs to/from the software system.
h. The software performs prerequisite checks prior to the execution of safety-critical software commands.
i. No single software event or action is allowed to initiate an identified hazard.
j. The software responds to an off-nominal condition within the time needed to prevent a hazardous event.
k. The software provides error handling.
l. The software can place the system into a safe state.

1. Analyze the software requirements and the software design and work with the project to implement NPR 7150.2 requirement items "a" through "l."




184

The project manager shall include software related safety constraints, controls, mitigations, and assumptions between the hardware, operator, and software in the software requirements documentation.

1. Analyze that the software requirements documentation contains the software related safety constraints, controls, mitigations, and assumptions between the hardware, operator, and the software.

203

The project manager shall implement mandatory assessments of reported non-conformances for all COTS, GOTS, MOTS, OSS, and/or reused software components.

2. Assess the impact of non-conformances on the project software's safety, quality, and reliability.

2. SW Requirements Analysis Techniques

NASA Missions go through a logical decomposition in defining their requirements. Software Requirements Analysis addresses a system’s or application’s software requirements including analysis of the functional and performance requirements, hardware requirements, interfaces external to the software, and requirements for qualification, quality, safety, security, dependability, human interfaces, data definitions, user requirements, installation, acceptance, user operation, and user maintenance.  

There are several techniques that may be used by either Software Assurance or Software Safety personnel to aid in analyzing these software requirements. NPR 7150.2 and NASA-STD-8739.8 require some specific techniques be used for the analysis; those are listed in the table on Tab 1. Introduction. It is recommended that at least one other method or technique be selected to supplement the required techniques. Consider the areas where the software requirements are typically weak or where issues have been found previously and tailor the approach.

This tab contains some checklists and guidance for Software Requirements Analysis. Other guidance to consider while analyzing requirements may be found in tabs 3 and 7 of the related requirement pages in this SWE Handbook.

Some methods and techniques are:

  1. Walk-throughs – Establishes a common understanding of the system/software requirements and/or operations concept.
    1. The following roles are recommended for attendance/participation in the walk-throughs: System Engineers, Software Developers, Software Testers (including IV&V), Software Assurance, Software Safety, System Safety, Operations people, and users.
    2. Walk-throughs are intended to give participants a good understanding of the expected data flows through the system/software and the activities to be performed for each operational scenario. This helps determine whether the correct requirements are in place to support all the operational scenarios, data management needs, and helps identify potential system/software hazards.
    3. Walk-throughs also provide an opportunity for open discussion of the requirements. These in-depth discussions may lead to the identification of additional requirements as the requirements and their intent are better understood.
  2. Peer Reviews or Formal Inspections – Peer Reviews or Formal Inspections can be used to focus on small sections of concern or to look at potential problem areas of the requirements. For example, a peer review could focus on just correctness or consistency. Here is what is meant by these terms:

Determine correctness

Requirements are considered correct if they "respond properly to situations" 001 and are appropriate to meet the objectives of higher-level requirements. A technique for determining correctness is to compare the requirements set against operational scenarios developed for the system/software.

Determine consistency

Requirements are consistent if they do not conflict with each other within the same requirements set and if they do not conflict with system (or higher-level) requirements. It is helpful to have at least one person read through the entire set of requirements to confirm the use of consistent terms/terminology throughout.

Peer reviews or formal inspections are particularly important for the areas where software requirements are typically weak and where there is a history of issues found previously.

3. Checklists: Several checklists can be used to help analyze requirements. It is often useful to consider using one or more of these to supplement other analysis methods.

The checklist SAANALYSIS, shown below (previously located in 7.18), is a good general checklist that covers many areas to be considered in your analysis. Click on the thumbnail to view it. From the viewer, you may download a copy for your use.


When evaluating the software requirements, consider the list of items below:

    1. Is the approach to requirements decomposition reasonable, appropriate, and consistent?
    2. Are the system’s software requirements both individually and in aggregate of high quality (clear, concise, complete, consistent, feasible, implementation independent, necessary, singular, traceable, accurate, unambiguous, and verifiable)?
      • Clear and concise. The requirement is stated so that it can only be interpreted one way: “The terms and syntax used must be simple, clear and exact.” Weak terms, synonyms, and unclear sentence structure lead to misunderstandings.

      • Complete. The requirement describes adequately “the capability and characteristics to meet the stakeholder’s needs”. Further explanation and enhancement of the requirement are not necessary.

      • Consistent. The requirement has no conflicts. Defined terms are used consistently throughout the requirement.

      • Feasible. The requirement can be implemented technically and does not need further advanced technologies. The system constraints are considered regarding legal, cost, and schedule aspects.

      • Implementation independent. The requirement is specified independently from the implementation: “The requirement states what is required, not how the requirement should be met”.

      • Necessary. The requirement contains relevant information and is not deprecated.

      • Singular. A requirement cannot be divided into further requirements. It includes one single statement.

      • Traceable. “The requirement is upwards [and downwards] traceable”. Every requirement at each development stage can be traced to a requirement either to the current or to the previous and subsequent development stage. The requirement considers the dependency and possible conflicts among software.

      • Accurate. Each requirement must accurately describe the functionality to be built.
      • Unambiguous. A Requirements Document is unambiguous if and only if every requirement stated therein has only one interpretation.
      • Verifiable. The requirement necessitates the verification of the statement by using the standard methods of inspection, analysis, demonstration, or test.

    3. Will requirements adequately meet the needs of the system and expectations of its customers and users?
    4. Do requirements consider the operational environment under nominal and off-nominal conditions? Are the requirements complete enough to avoid the introduction of unintended features? Specifically:
      • Do the requirements specify what the system is supposed to do?
      • Do requirements guard against what the system is not supposed to do?
      • Do the requirements describe how the software responds under adverse conditions?
    5. Is this requirement necessary?
    6. Are the requirements understandable?
    7. Are the requirements organized in a manner such that additions and changes can be made easily?
    8. Are the requirements unnecessarily complicated?
    9. Has system performance been captured as part of the requirements?
    10. Are the system boundaries (or perhaps operational environment) well defined?
    11. Is a requirement realistic given the current technology?
    12. Is the requirement singular in nature, or could it be broken down into several requirements? (This is a question of grammar, not of whether the requirement can be functionally decomposed.)
    13. Within each requirement level, are requirements at an appropriate and consistent level of abstraction?
    14. In the traceability, are the parent requirements represented by appropriate child requirements?
    15. Do the parent requirements include outside sources such as:
      • Hardware specifications
      • Computer\Processor\Programmable Logic Device specifications
      • Hardware interfaces
      • Operating system requirements and board support packages
      • Data\File definitions and interfaces
      • Communication interfaces including bus communications Software interfaces
      • Derived from Domain Analysis
      • Fault Detection, Isolation and Recovery requirements
      • Models
      • Commercial Software interfaces and functional requirements
      • Software Security Requirements
      • User Interface Requirements
      • Algorithms
      • Legacy or Reuse software requirements
      • Derived from Operational Analysis
      • Prototyping activities
      • Interviews
      • Surveys
      • Questionnaires
      • Brainstorming
      • Observation
      • Software Test Requirements
      • Software Fault Management Requirements
      • Hazard Analysis
    16. Does the Software Requirements Specification contain the following information:
      • System overview.
      • CSCI requirements:
        • Functional requirements.
        • Required states and modes.
      • External interface requirements.
      • Internal interface requirements.
      • Internal data requirements.
      • Adaptation requirements (data used to adapt a program to a given installation site or given conditions in its operational environment).
      • Safety requirements.
      • Performance and timing requirements.
      • Security and privacy requirements.
      • Environment requirements.
      • Computer resource requirements:
        • Computer hardware resource requirements, including utilization requirements.
        • Computer software requirements.
        • Computer communications requirements.
      • Software quality characteristics.
      • Design and implementation constraints.
      • Personnel-related requirements.
      • Training-related requirements.
      • Logistics-related requirements.
      • Precedence and criticality of requirements.
      • FDIR requirements for system, hardware, and software failures
      • Software State transitions, state diagrams
      • Assumptions for design and operations are documented
      • Qualification provisions (e.g., demonstration, test, analysis, inspection).
      • Bidirectional requirements traceability.
      • Requirements partitioning for phased delivery.
      • Testing requirements that drive software design decisions (e.g., special system-level timing requirements/checkpoint restart).
      • Supporting requirements rationale.
    17. Is there bidirectional traceability between parent requirements, requirements, and preliminary design components?
    18. Do the detailed software requirements trace to a reasonable number of parent requirements? (e.g., is the ratio between detailed requirements and parent requirements reasonable, or do all of the detailed software requirements trace to too few parent requirements, indicating inadequate system requirement definitions?)
    19. Are trace links of high quality (e.g., avoidance of widows and orphans, circular traces, traces within a requirements level, etc.)?
    20. Have high-risk behaviors or functions been identified, and does SA agree with the identified behaviors? This should result in a list of critical activities to be performed by software and analyzed further by software assurance.
    21. For critical activities, are associated requirements correct and complete against SA understanding of the system behavior? Note: consider additional analysis rigor to address critical activities.
    22. Are interface requirements with the hardware, users, operators, or other systems adequate to meet the needs of the system concerning expectations of its customer and users, the operational environment, safety and fault tolerance, and both functional and non-functional perspectives?
    23. Has a fault tolerance strategy for fault detection, identification, and recovery (FDIR) from faults been provided, and is it reasonable?
    24. Is the role of software as part of the FDIR understood?
    25. Are software-related activities associated with the fault tolerance strategy for fault detection and identification and recovery captured as requirements?
    26. Is human safety addressed through the requirements?
    27. Have hazards and hazard causes been adequately identified, with associated software detection and mitigations captured as requirements?
    28. Are must-work and must-not-work requirements understood and specified?
    29. Have requirements addressed the security threats and risks identified within the system concept specifications and the system security concept of operations (e.g., System Security Plan)?
    30. Do requirements define appropriate security controls to the system and software?
    31. Can the requirement(s) be efficiently and practically tested?
    32. Do the requirements address the configuration of, and any associated constraints, associated with COTS, GOTS, MOTS, and Open Source software?
    33. Do the requirements appropriately address operational constraints?
    34. Does the requirement conflict with domain constraints, system constraints, policies, or regulations (local and governmental)?
    35. Have users/operators been consulted during requirements development to identify any potential operational issues?
    36. Have the software requirements been peer-reviewed?

4. Bidirectional Traceability

For the basic requirements on bi-directional traceability, see SWE-052 in NASA-STD-8739.8. Software Assurance should confirm that the bi-directional trace is complete and includes traces for all the levels specified in SWE-052.

The requirements traceability matrix identifies all of the system-level and higher-level requirements assigned to the software to be coded. It will also identify the parent requirements.  If a software requirement doesn’t have a parent, it may not be necessary. Traceability is important throughout the full life cycle tracing the requirements allocation to the design, implementation, and test phases. Traceability is particularly important in support of the system/software change or configuration management process. For example, if there is a requirement change during code development, the traceability matrix will help identify the design impacts in the code. The traceability between the requirements and the tests determines whether all the requirements in the system/software are being tested. Similarly, the traceability between the system and software hazards to the software requirements will help determine whether all the hazards are being addressed.
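The checks described above can be partly mechanized. The sketch below, with hypothetical requirement IDs and a simplified trace structure (not from any actual NASA project), illustrates how a trace can be screened for parent requirements with no children and for orphan software requirements that may not be necessary:

```python
# Hypothetical sketch: screening a bi-directional trace for childless
# parents and orphan children. All IDs and pairs are illustrative.

# Each pair maps a parent (higher-level) requirement to a child
# (software) requirement.
trace = [
    ("SYS-001", "SWR-010"),
    ("SYS-001", "SWR-011"),
    ("SYS-002", "SWR-012"),
]

parents = {"SYS-001", "SYS-002", "SYS-003"}
children = {"SWR-010", "SWR-011", "SWR-012", "SWR-013"}

traced_parents = {p for p, _ in trace}
traced_children = {c for _, c in trace}

# A "childless" parent has no software requirement tracing to it;
# an "orphan" child has no parent and may not be necessary (see text).
childless = parents - traced_parents   # {"SYS-003"}
orphans = children - traced_children   # {"SWR-013"}

print(sorted(childless), sorted(orphans))
```

The same pattern extends to the other trace levels in SWE-052 (requirements to design, design to code, requirements to verifications and hazards) by repeating the check per level.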

5. Verify Accuracy of Mathematical Specifications

During the requirements analysis process, the algorithms and mathematical specifications need to be verified to ensure that the inputs to the system/software are computed correctly, and that the specifications and algorithms produce the correct values.

Ensure that correct units have been specified for all components in the specifications/algorithms.

Any type conversion limitations should be identified.

The range of values that are valid for inputs should be specified.
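As a minimal sketch of the input-range and units checks described above, the function below rejects values outside a specified range. The parameter names, limits, and units are hypothetical examples, not values from any NASA specification:

```python
# Hypothetical sketch: validating that an input to an algorithm is in
# its specified range and stated units before use. Names and limits
# are illustrative only.

def check_input(name, value, lo, hi, unit):
    """Raise ValueError if value (in the stated unit) lies outside [lo, hi]."""
    if not (lo <= value <= hi):
        raise ValueError(f"{name} = {value} {unit} outside [{lo}, {hi}] {unit}")
    return value

# Example: a temperature input specified in kelvin, valid over 200-400 K.
t = check_input("sensor_temp", 273.15, 200.0, 400.0, "K")
print(t)
```

Stating the unit alongside the limits in the requirement itself makes mismatched-unit errors easier to catch during this kind of analysis.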

6. Models and Diagrams

Models and diagrams are techniques that can be used in requirements analysis to help obtain the best possible requirements set from the end-user, customer, and product perspectives. Some are:

  • Modeling of inputs and outputs
  • Activity Diagrams
  • Data Flow Diagrams
  • Control Flow Diagrams
  • Scenario-Based/Use Case Models
  • Behavioral-Based Modeling
  • Architectural Diagrams
  • State/Mode Diagrams

7. Reviewing Requirements in Agile - Use Cases, User Stories (Typically used in Agile)

When using Agile methodologies, use cases and user stories are used to generate and build the system/software requirements. The user stories/use cases are analyzed to extract the requirements and to ensure all of the details are captured. Well-written user stories/use cases will contain information that may be used to build the software as well as the Software Requirements Specification (SRS). It is assumed the Product Owner will not provide an SRS; their mechanism for providing requirements is user stories/use cases. Note: NASA organizations may develop their higher-level requirements in a more traditional Waterfall fashion, with the lower-level requirements generated using Agile methodologies.

a. To analyze the requirements in an Agile software development environment, it is important to understand the relationship between user stories and requirements and where use cases fit in.

i. Agile User Stories:

The Agile Alliance describes a user story as work to be done divided into functional increments.

A more simplified definition can be attributed to Atlassian: “A user story is an informal, general explanation of a software feature written from the perspective of the end-user. Its purpose is to articulate how a software feature will provide value to the customer.”

User stories are often used to plan the work in an Agile software development environment. They usually consist of one or two sentences in a specific structure to identify the user, what they want, and why. User stories are all about providing value to the customer. It’s what the user wants (the “results”). Typically, the low-level details will be derived from conversations at Sprint Planning and/or scrum meetings. These details should be captured in the SRS.


ii. Agile Use Cases

Per Wikipedia, a use case is a list of actions or event steps typically defining the interactions between a role and a system to achieve a goal.

Use cases are often more detailed than user stories. They are all about the “behavior” that is built into the software to meet the customer/user’s needs/goals. Use cases focus on all of the ways the system/software will be used and describe the process/procedures the user takes to accomplish the goal.


iii. When Agile use stories/use cases are used, analysis is still required but may take on a different flavor. They give context to the requirements. When doing requirements analysis in Agile systems, include the following activities:

-Ensure the User Stories are well formulated. To convey the appropriate information, User Stories are commonly in the form:

As a <role>, I want <requirement>, so that <rationale/goal>.

-Ensure that the set of user stories/use cases capture all of the information that may exist in the more traditional forms of requirements.

-Consider how the user stories can be tested and ensure that all the required capabilities can be tested through the user stories.

-Perform bidirectional traces between the traditional Waterfall requirements and the user stories/use cases. Ensure that all safety-critical hazard controls trace to user stories and use cases.

-Review Appendix A in NASA-STD-8739.8 to ensure that all applicable software hazard causes have been considered for inclusion in the user stories and use cases.
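As a small illustration of the first activity above, a user story's structure can be screened mechanically before its content is analyzed. The regular expression and the story text below are hypothetical examples, not a prescribed format checker:

```python
import re

# Hypothetical sketch: checking that a user story follows the
# "As a <role>, I want <requirement>, so that <rationale/goal>" form.
STORY_RE = re.compile(
    r"As an? (?P<role>.+?), I want (?P<requirement>.+?),? so that (?P<goal>.+)",
    re.IGNORECASE,
)

def parse_story(text):
    """Return (role, requirement, goal), or None if the story is malformed."""
    m = STORY_RE.match(text.strip())
    if m is None:
        return None
    return (m.group("role"), m.group("requirement"), m.group("goal"))

story = ("As an operator, I want to command a safe shutdown, "
         "so that the system reaches a known safe state.")
print(parse_story(story))
```

A structural check like this only confirms the form of a story; the analysis activities above are still needed to judge whether its content is correct, complete, and testable.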

b. Requirements have been traditionally used for the NASA systems and software development.

Requirements describe the features of the system being built and convey the user’s expectations. They tend to be very specific and go into detail on how the software should work. Traditionally, the requirements are written by the product manager, the systems engineer, the software engineer, or the technical lead. Projects using many of the traditional development methodologies use the written requirements and go directly into the design phase after their requirements analysis. In Agile development, where the team is provided with high-level NASA requirements, the Product Owner typically breaks the requirements into Agile user stories and/or use cases. It is up to Software Engineering to work with the Product Owner to derive the detailed requirements and develop an SRS.

8. Requirements Analysis Tools

The use of requirements analysis tools (e.g., Innoslate, DOORS) or requirements modeling tools (e.g., Xplenty, IBM Rational, SQL DBM) may be helpful in some analysis activities. Some tools can be used to check the attributes being reviewed.

In FY21, the Software Assurance Research Program (SARP) had two projects researching different aspects of requirements analysis, including the development of tools. This research may be available for NASA use in the near future.

3. SW Safety Analysis During Requirements

There are requirements analysis activities that apply specifically to software with safety-critical components. The safety team needs to focus on safety considerations while the requirements analysis is being performed and ensure that these safety-critical aspects have been addressed. The safety requirements analysis portion should be done in conjunction with the software requirements analysis and the results combined when reporting on the analysis results. For additional information on identifying safety-related requirements, see the SW Safety and Hazard Analysis topic.

Software Safety personnel should:

  1. Ensure that the requirements analysis discussed in Topic 8.9 Software Safety Analysis has been performed.
  2. Consider using the Safety Requirements Analysis Checklist below in addition to the other requirements analysis methods/techniques listed on Tab 2 – Software Requirements Analysis during the safety requirements analysis.


Safety Requirements Analysis Checklist
  1. Have all safety-critical software or components been identified?
  2. Have agreements been reached on criticality designations among the project, Software Assurance personnel, Software Safety personnel, and IV&V personnel?
  3. Is the bidirectional trace complete between the software requirements, software hazards, and the software-related system hazards, including hazard controls, hazard mitigations, hazardous conditions, and hazardous events? (Required by Task 1, SWE-052)
  4. Has there been an analysis to determine that all necessary safety requirements have been flowed down to the software requirements?
  5. Have all system hazards been reviewed to identify software contributions, mitigations, or controls?
  6. Have the requirements been reviewed to ensure they contain all the necessary safety and security-related requirements? Consider:
    1. Fault Detection, Isolation, and Recovery Requirements
    2. Software Fault Management Requirements
    3. Software Mitigations, Controls, etc. identified in Hazard Analysis Reports
    4. System constraints such as activities the hardware must not do or limitations in sensor precision.
    5. Software Security Requirements, e.g., command authentication schemes may be necessary for network-controlled systems. Unauthorized access may be either inadvertent or malicious. This could be for:
      1. Access control to the environment or data
      2. Communication controls per NASA-STD-1006.
      3. Review of COTS, MOTS, OTS, GOTS, and OSS software for security vulnerabilities and weaknesses
  7. Has a list of generic hazards been reviewed for applicability? (A set of software hazard causes may be found in Appendix A of NASA-STD-8739.8 or topic 21 - Software Hazard Causes.)
  8. Are there requirements associated with mitigating the hazards that need to be added to the requirements document? Have generic safety requirements been reviewed to see if any are applicable to this project? (See 2 - Checklist for General Software Safety Requirements under the tab Programming Checklists for an example of this. There may be other generic safety requirements lists in your Center Asset Library.)
  9. Have Software Assurance personnel or Software Safety personnel verified that the configuration items include safety and security requirements, hazard reports, and safety analysis reports? (Required by Task 2, SWE-081 – Software Configuration Items)
  10. Do the requirements include those listed in items a through l of SWE-134? (Required by Task 1, SWE-134 – Safety Critical Software Design Requirements)
  11. Are assumptions and boundary conditions identified for safety-related functions?
  12. Have the software safety requirements been derived from appropriate parent requirements, and do they include modes, states of operation, and safety-related constraints?
  13. Do the software safety requirements “maintain the system in a safe state” and provide adequate proactive and reactive responses to potential failures? For example, do the requirements include capabilities like system failover, redundant systems, backup servers, ability to shut down gracefully, etc.?
  14. Have timing, data throughput, and performance been considered, and are the requirements for them feasible to meet the safety requirements? Has adequate human operator and control system response time been included? Are there adequate margins of capacity for all critical resources?
  15. Are any safety-related constraints between the hardware and software included in the software requirements documentation?
  16. Have safety “Best Practices” been included in the requirements? Some examples of safety “Best Practices” are:
    1. Notifying the controller when an automated safety-critical process is executed.
    2. Requiring hazardous commands to involve multiple, independent steps to execute.
    3. Requiring hazardous commands or data to differ from non-hazardous commands by multiple bits.
    4. Making the current state of all inhibits available to controllers (human or executive program).
    5. Ensuring unused code cannot cause a hazard if executed.
  17. Have any planned COTS, MOTS, GOTS, Open Source, or reused software modules been included in the safety requirements analysis? (See Topic 8 COTS Software Safety Considerations)
  18. Have all changes in the requirements been evaluated for impacts to safety or security?
  19. Have verification methods for the safety requirements been considered? Often considering the verification methods for a requirement can highlight a requirement that is ambiguous, conflicting, or not testable.
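Some of the command-related best practices listed in item 16 (multi-step execution of hazardous commands, and hazardous command codes that differ from others by multiple bits) can be prototyped or checked mechanically. The sketch below is a hypothetical illustration only, not a NASA-mandated implementation; the command names and opcode values are invented for the example. It verifies that a set of opcodes differ pairwise by a minimum Hamming distance and models a minimal arm-then-execute pattern:

```python
from itertools import combinations

def hamming_distance(a: int, b: int) -> int:
    """Count the bit positions in which two opcodes differ."""
    return bin(a ^ b).count("1")

def check_min_separation(opcodes, min_bits=2):
    """Return every pair of opcodes closer than min_bits; an empty list means the set complies."""
    return [(n1, n2, hamming_distance(c1, c2))
            for (n1, c1), (n2, c2) in combinations(opcodes.items(), 2)
            if hamming_distance(c1, c2) < min_bits]

class HazardousCommand:
    """Two-step (arm, then execute) pattern for a hazardous command."""
    def __init__(self, name):
        self.name = name
        self.armed = False

    def arm(self):
        self.armed = True

    def execute(self):
        if not self.armed:
            raise RuntimeError(f"{self.name}: execute refused; command not armed")
        self.armed = False  # re-arming is required before each execution
        return f"{self.name} executed"

# Hypothetical opcode table; the hazardous opcodes differ by several bits.
opcodes = {"FIRE_PYRO": 0b1010_0101, "OPEN_VALVE": 0b0101_1010, "READ_TEMP": 0b0011_0011}
violations = check_min_separation(opcodes, min_bits=2)
```

An empty `violations` list indicates every pair of opcodes meets the minimum bit separation; the arm/execute object refuses to fire unless it has been explicitly armed, and it disarms itself after each execution.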

3. Consider performing a Fault Tree Analysis. (See Topic 7 - Software Fault Tree Analysis for information on performing a Fault Tree Analysis.)
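To illustrate the kind of reasoning a fault tree supports (Topic 7 describes the full method), the sketch below evaluates a small, entirely hypothetical fault tree: basic-event probabilities combine through an AND gate (all inputs must occur) and an OR gate (any input suffices), under the simplifying assumption that the events are statistically independent:

```python
def and_gate(probs):
    """AND gate: the output event occurs only if all independent inputs occur."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """OR gate: the output event occurs if any independent input occurs."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical tree: top event "erroneous actuator command" occurs if
# (sensor fault AND voting failure) OR a command-path software fault.
sensor_and_voting = and_gate([1e-3, 1e-2])      # both must fail
top_event = or_gate([sensor_and_voting, 1e-4])  # either branch suffices
```

With these example numbers the top-event probability is dominated by the command-path branch. The independence assumption is essential to this simple calculation; common-cause failures require the fuller treatment described in Topic 7.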

4. Consider performing a Software Failure Modes and Effects Analysis (SFMEA) or a Software Failure Modes, Effects, and Criticality Analysis (SFMECA). (See Topic 5 - SW Failure Modes and Effects Analysis for information on performing a SFMEA.)

4. Requirements Analysis Report

Documenting and Reporting Analysis Results

Once the requirements have been analyzed, the Software Requirements Analysis work product is generated to capture the findings and the corrective actions that need to be addressed to improve the overall requirements set. It should include a detailed report of the requirements analysis results. Analysis results should also be reported in a high-level summary and conveyed as part of weekly or monthly SA Status Reports. The high-level summary should provide an overall evaluation of the analysis, any issues/concerns, and any associated risks. If a time-critical issue is uncovered, it should be reported to management immediately so that the affected organization may begin addressing it at once.

When a project has safety-critical software, analysis results should be shared with the Software Safety personnel. The results of analyses conducted by Software Assurance personnel and those conducted by Software Safety personnel may be combined into one analysis report, if desired.

4.1 High-Level Analysis Content for SA Status Report

Any requirements analysis performed since the last SA Status Report or project management meeting should be reported to project management and the rest of the Software Assurance team. When a project has safety-critical software, any analysis done by Software Assurance should be shared with the Software Safety personnel.

When reporting the results of an analysis in a SA Status Report, the following defines the minimum recommended contents:

  • Identification of what was analyzed: Mission/Project/Application
  • Period/Timeframe/Phase during which the analysis was performed
  • Summary of analysis techniques used
  • Overall assessment of the requirements, based on the analysis
  • Major findings and associated risk
  • Current status of findings: open/closed; projection for closure timeframe

4.2 Detailed Content for Analysis Product

The detailed results of all software requirements analysis activities are captured in the Software Requirements Analysis product, along with the types of analysis techniques used, to provide insight into the robustness of the analysis performed and into which techniques/methods produced the most useful results. This document is placed under configuration management and delivered to the project management team as the Software Assurance record for the activity. When a project has safety-critical software, this product should be shared with the Software Safety personnel.

When reporting the detailed results of the software requirements analysis, the following defines the minimum recommended content:

  • Identification of what was analyzed: Mission/Project/Application
  • Person(s) or group/role performing the analysis
  • Period/Timeframe/Phase during which the analysis was performed
  • Documents used in analysis (e.g., versions of the system and software requirements, interfaces document, Concept of Operations)
  • A high-level scope and description of the techniques/methodologies used in the analysis
    • Use the list of possible analysis techniques/methodologies listed in Tab 2 as a starting point.
    • For each technique/methodology on the list, state why it was or was not used.
    • List any additional techniques/methodologies used that were not included in the Tab 2 list.
  • Summary of results found using each technique/methodology
    • How many findings resulted from each technique/methodology?
    • Difficulty/ease of using the technique/methodology
    • The general assessment of the technique/methodology
    • High-Level Summary of the findings
  • Results, major findings, and associated risk:
    • Overall assessment of the quality/completeness of the requirements, based on the analysis
    • Either list each result, finding, or corrective action or summarize them and list the links to the detailed findings.
    • Assessment of the overall risk involved with the findings.
  • Documentation should include types of findings:
    • Missing requirements
    • Requirements that need rewriting: because they are incomplete, incorrect, unclear, not testable/verifiable, etc.
    • Requirements with safety concerns
    • Requirements with security concerns (e.g., access control, vulnerabilities, weaknesses, etc.)
    • Interface incompatibilities or interfaces that are not clearly defined
    • Issues in traceability (Child with no parent, parent with no children, etc.)
    • Requirements with unnecessary functions
    • Requirements not detailed enough to provide the information needed to develop a detailed design that can be implemented in the code
    • Hazards and safety-related software controls, constraints, features not included in the requirements
    • Any other requirements issues discovered during the analyses
  • Minor findings
  • Current status of findings: open/closed; projection for closure timeframe
    • Include counts for those discovered by SA and Software Safety
    • Include overall counts from the Project’s problem/issue tracking system.

5. Resources 

5.1 References

  • (SWEREF-001) Software Development Process Description Document, EI32-OI-001, Revision R, Flight and Ground Software Division, Marshall Space Flight Center (MSFC), 2010. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.

  • (SWEREF-083) NPR 7150.2D, Effective Date: March 08, 2022, Expiration Date: March 08, 2027. https://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=7150&s=2D Contains a link to the full-text copy in PDF format. Search for "SWEREF-083" for links to older NPR 7150.2 copies.

  • (SWEREF-278) NASA-STD-8739.8A, NASA Technical Standards System, Approved 2020-06-01. Supersedes "NASA-STD-8739.8, With Change 1".

5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

5.3 Checklists and Guidance 

There are many lists of guidance items and checklists in this Handbook, but sometimes they are very difficult to locate. The table below is intended to help users find the assets they need more easily. Most of the lists provided here are intended to be guidance for activities and could be used as a self-check to see whether all the expected items have been included in the activity. The items that are noted as checklists can be used as audit checklists. Some are already formatted and can be downloaded directly for tailoring and use. Others are in the process of being formatted so they can be downloaded. Note: Currently, this list does not include every guidance list in the Handbook, but it has a good sampling of them.

Checklists and Guidance Lists in SWEHB

| Asset Name | Location | Format | Category |
| --- | --- | --- | --- |
| Maintenance, Operations, Retirement Planning | SWE-075 - Plan Operations, Maintenance, Retirement, Tab 3 (7.4 - Retirement) | List | Late Phase Planning Considerations |
| Auto-generated Code | SWE-146, Tab 3 | List | Implementation, Planning |
| Selection of Real Time Operating System (RTOS) | SWE-027, Tab 3 | Checklist | Commercial & Legacy SW |
| Selection of Real Time Operating System (RTOS) | Programming Practices Topic, 6.3 | Checklist | Programming Checklists, Planning, Commercial SW |
| Choosing Off-the-Shelf (OTS) Software | Assurance & Safety Topics, Programming Tab, 6.4 | Checklist | Commercial & Legacy SW |
| Selection of Commercial & Legacy SW | SWE-027, Tab 3 | Questions | Commercial & Legacy SW |
| Assurance of models, simulations, analysis tools | SWE-070, Tab 7.4 | List | Models, Sims, Tools |
| Requirements Development/Assessment (SRS contents) | SWE-050, Tab 7.4 | Questions (2 sets) | Requirements Development |
| Requirements Analysis | SWE-051, Tab 3; Topic 8.16: Software Req. Analysis, Tab 2 | Checklist: SAANALYSIS | Requirements Analysis |
| Requirements Development/Assessment | SWE-050, Tab 7.4 | List of SRS contents, Questions (2 sets) for requirements considerations | Requirements Development |
| Analysis of Requirements Changes | SWE-053, Tabs 3, 7.4 | List | Requirements Practices |
| Checklist for General SW Safety Requirements | Topics, Programming Practices Tab, 6.2 | Checklist | Safety Requirements |
| Configuration Items for Consideration | SWE-079, Tab 3 | List | Configuration Management |
| Functional Configuration Audit (FCA) Checklist | SWE-084, Tab 7 | Checklist | Configuration Management |
| Physical Configuration Audit (PCA) | SWE-084, Tab 7 (Not currently there) | Checklist | Configuration Management |
| Peer Review Best Practices | SWE-087, Tab 3 | List | Peer Review Guidance |
| SA Non-Conformance Activities | SWE-201, Tab 7.4 | List | Non-Conformance Handling Guidance |
| Change Evaluation | SWE-080, Tab 3 | List | Guidance for Handling Changes |
| Design Considerations | SWE-058, Tabs 3, 7.4 | Checklist | Design Practices |
| Design Evaluation (SARB) | SWE-143, Tabs 3, 7.4 | Questions | Design Practices |
| Software Design Analysis | Assurance & Safety Topics, 8.16, Tab SW Design Analysis | Checklist | Design Practices |
| Design for Safety | Assurance & Safety Topics, 6.1 | Questions | Design Practices |
| C Programming Practices for Safety | SWE-060, Tabs 3, 7.4 | Checklist | Safety, Coding Practices |
| C Programming Practices for Safety | Programming Practices Topic, 6.5 | Checklist | Safety, Coding Practices |
| C++ Programming Practices for Safety | Programming Practices Topic, 6.6 | Checklist | Safety, Coding Practices |
| Ada Programming Practices for Safety | Programming Practices Topic, 6.7 | Checklist | Safety, Coding Practices |
| FORTRAN Programming Practices for Safety | Programming Practices Topic, 6.8 | Checklist | Safety, Coding Practices |
| Generic (Non-Language-Specific) Programming Practices for Safety | Programming Practices Topic, 6.9 | Checklist | Safety, Coding Practices |
| General Good Programming Practices for Safety | Programming Practices Topic, 6.10 | Checklist | Safety, Coding Practices |
| ISO 27001-2013 Audit Checklist | Assurance & Safety Topics, 8.16, 5.2.2 | Checklist | Audit |
| Software Safety Process Audit | Assurance & Safety Topics, 8.17, Tab 2 | Checklist | Safety, Audits |
| Software Safety Activities for Internal Audit | Assurance & Safety Topics, 8.17, Tab 3 | Checklist | Safety, Audits |
| Software Safety-Specific Activities in Each Phase | Assurance & Safety Topics, 8.20, Tab 1 | List | Safety |
| Hazard Reports | SWE-205, Tab 7.4 | Steps in Hazard Analysis | Safety |
| Potential Software Hazard Causes | Assurance & Safety Topics, 8.21, Tab 1 | Table | Safety, Hazard Analysis |
| Considerations for Identifying SW Hazard Causes | SWE-205, Tab 7.4 | Checklist | Safety, Hazard Analysis |
| Considerations for Identifying SW Causes in a General SW-Centric HA | SWE-205, Tab 7.4 | List | Safety, Hazard Analysis |
| Updates to Test Documents | SWE-071, Tabs 3, 7.4 | List | Testing Guidance |
| Analysis of Test Results | SWE-068, Tab 3 | List | Test Analysis |
| Test Practices (incl. safety) | SWE-066, Tab 3 | List | Test Analysis |
| Test Documentation Changes | SWE-065 | List | Test Documentation |
| Unit Test Guidance | SWE-186, Tabs 3, 7.4 (repeated in SWE-062) | List | Test Results, Unit Testing |
| Release Package Activities | SWE-085, Tab 3 | List | Release Guidance |
| Confirmation of Delivery Activities | SWE-077, Tab 7 | List | Delivery Activities |




