SWE-062 - Unit Test

1. Requirements


4.4.5 The project manager shall unit test the software code.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

See SWE-062 History for the full history of this requirement.

1.3 Applicability Across Classes

Applicable classes: A, B, CSC, C, D, DSC, F
Not applicable classes: E, G, H


2. Rationale

Unit testing is the process of testing the range of inputs to a unit to ensure that only the intended outputs are produced. Testing at this lowest level means fewer issues are discovered later, when the components are integrated and tested as a whole. Therefore, during unit testing, it is important to check the maximum and minimum values, invalid values, empty and corrupt data, etc., for each input and output to ensure the unit properly handles the data (processes or rejects it).
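To make this concrete, the sketch below uses Python's built-in unittest module to exercise a hypothetical sensor-scaling unit at its minimum and maximum inputs, just outside its valid range, and with invalid data. It is an illustration only; NPR 7150.2 does not prescribe a language or framework, and scale_reading is an invented example.

    import unittest

    def scale_reading(raw):
        """Convert a raw 10-bit sensor count (0-1023) to a voltage (0.0-5.0 V).
        Anything outside the valid input range is rejected, not processed."""
        if not isinstance(raw, int) or isinstance(raw, bool):
            raise TypeError("raw count must be an integer")
        if raw < 0 or raw > 1023:
            raise ValueError("raw count out of range 0-1023")
        return raw * 5.0 / 1023

    class ScaleReadingTest(unittest.TestCase):
        def test_minimum_input(self):
            self.assertEqual(scale_reading(0), 0.0)

        def test_maximum_input(self):
            self.assertAlmostEqual(scale_reading(1023), 5.0)

        def test_rejects_value_above_range(self):
            with self.assertRaises(ValueError):
                scale_reading(1024)

        def test_rejects_negative_value(self):
            with self.assertRaises(ValueError):
                scale_reading(-1)

        def test_rejects_empty_data(self):
            with self.assertRaises(TypeError):
                scale_reading(None)  # corrupt/empty input is rejected

    if __name__ == "__main__":
        unittest.main()

Giving each boundary its own small test case means a failure immediately identifies which input condition the unit mishandles.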


Unit testing can be described as the confirmation that the unit performs the capability assigned to it, correctly interfaces with other units and data, and represents a faithful implementation of the unit design. 

Ensuring that developers perform unit testing following written test plans helps build quality into the software from the beginning and allows bugs to be corrected early in the project life cycle when such corrections cost the least to the project.


3. Guidance

A related requirement, SWE-186, adds that the project manager shall assure that the unit test results are repeatable.
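As a hedged illustration of repeatability, the Python sketch below routes all run-to-run variation through a seed parameter, so rerunning the test always reproduces the same result. The Monte Carlo estimator is a hypothetical unit, not part of the handbook.

    import unittest
    import random

    def estimate_pi(seed, n=10_000):
        """Hypothetical unit under test: Monte Carlo estimate of pi.
        All randomness comes from the supplied seed, so the result is
        deterministic for a given seed."""
        rng = random.Random(seed)
        inside = sum(
            1 for _ in range(n)
            if rng.random() ** 2 + rng.random() ** 2 <= 1.0
        )
        return 4.0 * inside / n

    class RepeatabilityTest(unittest.TestCase):
        def test_same_seed_reproduces_result_exactly(self):
            # Rerunning with the same seed must give the identical value;
            # pinning seeds (and isolating external state) is what makes
            # a unit test repeatable.
            self.assertEqual(estimate_pi(seed=42), estimate_pi(seed=42))

        def test_estimate_is_reasonable(self):
            # With the seed fixed, this value is the same on every run.
            self.assertAlmostEqual(estimate_pi(seed=42), 3.14159, delta=0.1)

    if __name__ == "__main__":
        unittest.main()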

Per IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology, a "unit" is defined as:
 
(1) A separately testable element specified in the design of a computer software component. 
(2) A logically separable part of a computer program.
(3) A software component that is not subdivided into other components.
 
Given the low-level nature of a unit of code, the person most able to fully test that unit is the developer who created it.  

Projects ensure that the appropriate test environment, test materials, and personnel training (SWE-017) are in place and then conduct unit tests per the approved plans (5.10 - STP - Software Test Plan), according to the schedule (SWE-016), and with proper monitoring per the software assurance plan, making sure that:

  • Criteria for a successful test are established before the test.
  • The test environment represents the inputs, outputs, and stimuli the unit will experience in operation.
  • Weaknesses or differences between the unit test environment and the actual target environment are captured.
  • Following the approved plans for unit testing:
    • Unit test results are captured.
    • Issues are identified and documented (some minor issues, such as typos, as defined by the project, may simply be corrected without documentation).
    • Unit test issues are corrected; these may include:
      • Issues found in the code.
      • Issues found in test instruments (e.g., scripts, data, procedures).
      • Issues found in testing tools (e.g., setup, configuration).
    • Unit test corrections are captured (for root cause analysis, as well as proof that the unit test plans were followed).
  • Unit test results are evaluated by someone other than the tester to confirm the results, as applicable and practical; evaluation results are captured.
  • Unit test data, scripts, test cases, procedures, test drivers, and test stubs are captured for reference and any required regression testing.
  • Notes recorded in software engineering notebooks or other documents are retained for reference.
  • Objective evidence that unit tests were completed and unit test objectives were met is captured in the Software Development Folders (SDFs) or other appropriate project location as called out in the project documentation (e.g., Software Development Plan (SDP)/Software Management Plan (SMP), Configuration Management (CM) Plan); one possible way to capture such evidence is sketched after this list.
  • Unit test metrics are captured, as appropriate and defined for the project.
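The sketch below shows one possible way to capture that objective evidence, assuming the project's tests use Python's unittest discovery; the run_and_record helper and the unit_test_evidence.json file name are illustrative, not handbook-mandated.

    import io
    import json
    import time
    import unittest

    def run_and_record(test_dir="tests", evidence_path="unit_test_evidence.json"):
        """Run every unit test under test_dir and append a timestamped
        summary to an evidence file kept in the project's SDF location."""
        stream = io.StringIO()
        suite = unittest.defaultTestLoader.discover(test_dir)
        result = unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "tests_run": result.testsRun,
            "failures": len(result.failures),
            "errors": len(result.errors),
            "passed": result.wasSuccessful(),
            "log": stream.getvalue(),  # full per-test output, kept as evidence
        }
        with open(evidence_path, "a") as fh:
            fh.write(json.dumps(record) + "\n")
        return result

    if __name__ == "__main__":
        run_and_record()

Appending one record per run, rather than overwriting, preserves evidence of every test pass.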

Per NASA-GB-8719.13, NASA Software Safety Guidebook (SWEREF-276), software assurance is to "Verify unit testing and data verification is completed before the unit is integrated." Either software assurance or Independent Verification and Validation (IV&V) personnel "Verify unit tests adequately test the software and are performed." When less formal confirmation of unit testing is needed, a software team lead or other designated project member may verify the completeness and correctness of the testing by comparing the results to the test plan, ensuring that all logic paths have been tested, and verifying that the test results are accurate.
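A branch-coverage tool can support that logic-path check. The sketch below assumes the third-party coverage.py package and a tests/ directory; it is one possible approach, not a handbook requirement.

    # Assumes coverage.py is installed (pip install coverage).
    import coverage
    import unittest

    cov = coverage.Coverage(branch=True)  # measure branch (logic-path) coverage
    cov.start()

    # Test modules and the code they import are loaded after cov.start(),
    # so their execution is measured.
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner(verbosity=1).run(suite)

    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # lists lines and branches never executed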

Unit testing tools and some integrated development environments (IDEs) can auto-generate unit tests based on the code. These tools provide a quick way to generate unit tests but may not completely exercise the unit of code. Rerun unit tests each time the unit is updated to ensure the code continues to work as expected. When continuous integration is part of the life cycle, all of the unit tests are rerun each time the code is updated so that only working code is integrated.

Documented test results, results evaluations, issues, problem reports, corrections, and tester notes can all serve as evidence that unit tests were completed. Comparing those documents to the software test plans for unit testing can ensure the tests were completed following those documented procedures.

Make sure evidence of all test passes is captured.

NASA-GB-8719.13, NASA Software Safety Guidebook, further states in the section on safety-critical unit test plans that "documentation is required to prove adequate safety testing of the software." Therefore, unit test results can play an important role in supporting reviews of safety-critical software.

Note: Consult Center PALs for Center-specific guidance and resources related to unit testing.


Additional guidance related to unit testing may be found in other related requirements in this handbook.


4. Small Projects

Projects with limited budgets and personnel may choose to perform unit testing or capture unit test results and artifacts in a less formal manner than projects with greater resources. Regardless of the formality of the procedures used, the software test plans for unit testing need to describe the test environment/setup, the results captured, simple documentation procedures, and compliance checks against the procedures. Some Centers have tailored lean unit test procedures and support tools specifically for small projects.
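For a small project, something as lean as Python's built-in doctest module may satisfy the intent: the test cases, their expected results, and the unit's documentation live in one place and run with no additional tooling. The checksum8 function below is a hypothetical example.

    def checksum8(data: bytes) -> int:
        """Compute a simple 8-bit additive checksum over a byte string.

        The examples below double as the unit tests:

        >>> checksum8(b"")            # empty input is handled, not rejected
        0
        >>> checksum8(bytes([0xFF]))  # single maximum-value byte
        255
        >>> checksum8(b"abc")
        38
        """
        total = 0
        for b in data:
            total = (total + b) & 0xFF
        return total

    if __name__ == "__main__":
        import doctest
        doctest.testmod(verbose=True)  # prints each example and its result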


5. Resources

5.1 References

SWEREFs called out in the text: 013, 047, 276, 452, 530, 533
SWEREFs not called out in the text but germane to this requirement: 001, 031, 220, 222, 271

5.2 Tools


See the Tools Table in this handbook for tools that support this requirement.


6. Lessons Learned

6.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to unit testing:

  • MPL Uplink Loss Timer Software/Test Errors (1998) (Plan to test against a full range of parameters.) Lesson Number 0939 (SWEREF-530):
      Lesson Learned No. 2 states: "Unit and integration testing should, at a minimum, test against the full operational range of parameters. When changes are made to database parameters that affect logic decisions, the logic should be re-tested."
  • Computer Software/Configuration Control/Verification and Validation (V&V) (Unit-level V&V needed for auto-code and auto-code generators.) Lesson Number 1023 (SWEREF-533):
      "The use of the Matrix X auto code generator for ISS software can lead to serious problems if the generated code and Matrix X itself are not subjected to effective configuration control or the products are not subjected to unit-level V&V. These problems can be exacerbated if the code generated by Matrix X is modified by hand."

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.


7. Software Assurance


7.1 Tasking for Software Assurance

  1. Confirm that the project successfully executes the required unit tests, particularly those testing safety-critical functions.

  2. Confirm that the project addresses, or otherwise tracks to closure, errors, defects, or problem reports found during unit tests.

7.2 Software Assurance Products

  • None at this time.


    Note: Objective Evidence
    • Unit test results.
    • Software problem or defect report findings related to issues identified in unit testing.

7.3 Metrics

  • # of planned unit test cases vs. # of actual unit test cases completed
  • # of tests successfully completed vs. total # of tests
  • # of tests executed vs. # of tests successfully completed
  • # of software work product Non-Conformances identified by life-cycle phase over time
  • # of Non-Conformances identified during each testing phase (Open, Closed, Severity)
  • # of Requirements tested successfully vs. total # of Requirements
  • Total # of Non-Conformances over time (Open, Closed, # of days Open, and Severity of Open)
  • # of Non-Conformances in the current reporting period (Open, Closed, Severity)
  • # of safety-related non-conformances identified by life-cycle phase over time
  • # of Closed action items vs. # of Open action items
  • # of Safety-Critical tests executed vs. # of Safety-Critical tests witnessed by SA
  • # of detailed software requirements tested to date vs. total # of detailed software requirements
  • # of safety-critical requirement verifications vs. total # of safety-critical requirement verifications completed
  • # of Open issues vs. # of Closed over time
  • # of Hazards containing software that has been successfully tested vs. total # of Hazards containing software
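
Several of these metrics fall directly out of the test framework's result object. The Python sketch below computes "tests executed vs. tests successfully completed" from a unittest run; the tests directory name is an assumption.

    import unittest

    def unit_test_metrics(start_dir="tests"):
        """Run the suite and report two of the metrics listed above."""
        suite = unittest.defaultTestLoader.discover(start_dir)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        executed = result.testsRun
        successful = executed - len(result.failures) - len(result.errors)
        print(f"Tests executed:               {executed}")
        print(f"Tests successfully completed: {successful} of {executed}")
        return successful, executed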

7.4 Guidance

Software assurance will confirm that a thorough set of unit tests is included as part of the software test plan and that the developers are executing those tests as planned. It is particularly important to include any code performing a safety-critical function in the unit tests, since that is often the only place those functions can be tested well. When confirming the tests are being run, be aware that the unit tests must also be repeatable, per SWE-186; see the SA guidance on SWE-186 for the information needed to make a test repeatable. The software guidance provides the following list of activities that should occur during unit testing:

  • Criteria for a successful test are established before the test.
  • The test environment represents the inputs, outputs, and stimuli the unit will experience in operation.
  • Weaknesses or differences between the unit test environment and the actual target environment are captured.
  • Following the approved plans for unit testing:
    • Unit test results are captured.
    • Issues are identified and documented (some minor issues, such as typos, as defined by the project, may simply be corrected without documentation).
    • Unit test issues are corrected; these may include:
      • Issues found in the code.
      • Issues found in test instruments (e.g., scripts, data, procedures).
      • Issues found in testing tools (e.g., setup, configuration).
    • Unit test corrections are captured (for root cause analysis, as well as proof that the unit test plans were followed).
  • Unit test results are evaluated by someone other than the tester to confirm the results, as applicable and practical; evaluation results are captured.
  • Unit test data, scripts, test cases, procedures, test drivers, and test stubs are captured for reference and any required regression testing.
  • Notes recorded in software engineering notebooks or other documents are retained for reference.
  • Objective evidence that unit tests were completed and unit test objectives were met is captured in the Software Development Folders (SDFs) or other appropriate project location as called out in the project documentation (e.g., Software Development Plan (SDP)/Software Management Plan (SMP), Configuration Management (CM) Plan).
  • Unit test metrics are captured, as appropriate and defined for the project.

Finally, software assurance will confirm that any errors, defects, etc., found during the unit tests are addressed or tracked to closure. Unit tests should be rerun to verify that the changes made to the software fixed the problem, and any unit tests for safety functions may need to be rerun to make sure they have not been affected.
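
As a closing illustration (the test names are hypothetical placeholders), a targeted rerun after a defect fix might reload the test that exposed the problem together with the safety-related tests the change could affect:

    import unittest

    # After fixing a defect, rerun the specific test that exposed it plus
    # the safety-related tests that the change could have affected.
    names = [
        "tests.test_scale_reading.ScaleReadingTest.test_rejects_value_above_range",
        "tests.test_safety",  # an entire safety-related test module
    ]
    suite = unittest.defaultTestLoader.loadTestsFromNames(names)
    unittest.TextTestRunner(verbosity=2).run(suite)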