- 1. The Requirement
- 2. Rationale
- 3. Guidance
- 4. Small Projects
- 5. Resources
- 6. Lessons Learned
- 7. Software Assurance
1. Requirements
3.1.14 The project manager shall satisfy the following conditions when a COTS, GOTS, MOTS, OSS, or reused software component is acquired or used:
a. The requirements to be met by the software component are identified.
b. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
c. Proprietary rights, usage rights, ownership, warranty, licensing rights, transfer rights, and conditions of use (e.g., required copyright, author, and applicable license notices within the software code, or a requirement to redistribute the licensed software only under the same license (e.g., GNU GPL, ver. 3, license)) have been addressed and coordinated with Center Intellectual Property Counsel.
d. Future support for the software product is planned and adequate for project needs.
e. The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
f. The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components.
1.1 Notes
The project responsible for procuring off-the-shelf software is responsible for documenting, prior to procurement, a plan for verifying and validating the software to the same level that would be required for a developed software component. The project ensures that the COTS, GOTS, MOTS, reused, and auto-generated code software components and data meet the applicable requirements in this directive assigned to its software classification as shown in Appendix C.
1.2 History
1.3 Applicability Across Classes
2. Rationale
All software used on a project, including incorporated Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software components, must meet the project requirements and be tested, verified, and validated. The project must know that each COTS, GOTS, MOTS, OSS, or reused software component meets NASA requirements; that all of the legal requirements for proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights are understood and met by the project’s planned use; and that future support for the software is planned. To reduce the risk of failure, the project performs periodic assessments of vendor-reported defects to ensure the defects do not impact the selected software components.
3. Guidance
This requirement addresses several key concepts for software that is used in a project but is not hand-generated by the project. Open Source Software is considered a software component for this requirement.
See also SWE-033 - Acquisition vs. Development Assessment.
3.1 Identify Requirements
Identifying the requirements to be met by the software component allows the project to perform analysis to ensure the software component is adequate for the function it is intended to fulfill. Identifying the requirements for the software component also allows the project to determine how the risk of using Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS) or reused software components affects the overall risk posture of the software system. Identifying the requirements to be met by the COTS, GOTS, MOTS, OSS, or reused software components allows the project to perform testing to ensure that the COTS, GOTS, MOTS, OSS, or reused software component performs the required functions. Without requirements for the COTS, GOTS, MOTS, OSS, or reused software components, the software functionality cannot be tested.
See also Topic 6.3 - Checklist for Choosing a Real Time Operating System (RTOS) and PAT-025 - Checklist for Choosing a Real Time Operating System (RTOS).
3.2 Obtain Documentation
The second concept is to ensure that the COTS, GOTS, MOTS, OSS, or reused software components include documentation that allows the software to be used such that its intended purpose can be fulfilled (e.g., usage instructions). Usage instructions are important to ensure the purpose, installation, and functions of the components are properly understood and used in the project.
3.3 Obtain Usage Rights
For COTS, GOTS, MOTS, OSS, or reused software components, projects need to be sure that any proprietary, usage, ownership, warranty, licensing, and transfer rights have been addressed to ensure that NASA has all the rights necessary to use the software component in the project. It may help for the project to work with procurement to determine the documents needed from the software developer.
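To support compiling the rights and licensing information for review (e.g., with procurement and Center Intellectual Property Counsel), a dependency license inventory can be generated mechanically. The following is a minimal sketch, assuming a Python-based project; it reads only the license metadata that installed packages declare, which is a starting point for review, not a substitute for it.

```python
# Sketch: inventory the declared licenses of installed Python packages so the
# rights information can be compiled for legal review. Reads standard package
# metadata only; declared licenses still need verification against the actual
# license text shipped with each component.
from importlib import metadata

def license_inventory():
    """Return {distribution name: declared license string} for installed packages."""
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        # Fall back through the older "License" field and newer "License-Expression".
        lic = (dist.metadata.get("License")
               or dist.metadata.get("License-Expression")
               or "UNDECLARED")
        inventory[name] = lic
    return inventory

if __name__ == "__main__":
    for name, lic in sorted(license_inventory().items()):
        print(f"{name}: {lic}")
```

Components reporting "UNDECLARED" would be the first candidates for manual follow-up.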
See also SWE-215 - Software License Rights, SWE-217 - List of All Contributors and Disclaimer Notice, and SWE-077 - Deliver Software Products.
3.4 Understand the License
Read the legal code, not just the deed. The human-readable deed is a summary of, but not a replacement for, the legal code. It does not explain everything you need to know before using the licensed material.
Make sure the license grants permission for what you want to do. There are different licenses. Some licenses prohibit the sharing of adaptations.
Take note of the particular version of the license. The license version may differ from prior versions in important respects. Similarly, the jurisdiction ports may differ in certain terms, such as dispute resolution and choice of law.
3.5 Understand the Scope of the License
Pay attention to what exactly is being licensed. The licensor should have marked which elements of the work are subject to the license and which are not. For those elements that are not subject to the license, you may need separate permission.
Consider clearing rights if you are concerned. The license does not contain a warranty, so if you think there may be third party rights in the material, you may want to clear those rights in advance.
Some uses of licensed material do not require permission under the license. If the use you want to make of the work falls within an exception or limitation to copyright or similar rights, you may do so. Those uses are unregulated by the license.
3.6 Know Your Obligations
Provide attribution. Some licenses require you to provide attribution and mark the material when you share it publicly. The specific requirements vary slightly across versions.
Do not restrict others from exercising rights under the license. Some licenses prohibit you from applying effective technological measures or from imposing legal terms that would prevent others from doing what the license permits.
Determine what, if anything, you can do with adaptations you make. Depending on what type of license is applied, you are limited in whether you can share your adaptation and, if so, what license you can apply to your contributions.
Termination is automatic. Some licenses terminate automatically when you fail to comply with their terms.
3.7 Consider Licensor Preferences
Consider complying with non-binding requests by the licensor. The licensor may make special requests when you use the material. We recommend you do so when reasonable, but that is your option and not your obligation.
3.8 Plan Future Support
COTS, GOTS, MOTS, OSS, or reused software components may have limited lifetimes, so it is important to plan for future support of this software adequate for project needs. As applicable, consider the following:
- Put in place a supplier agreement to deliver or escrow source code or a third party maintenance agreement.
- Ensure a risk mitigation plan is in place to cover the following cases:
- Loss of supplier or third party support for the product.
- Loss of maintenance for the product (or product version).
- Loss of the product (e.g., license revoked, recall of the product, etc.).
- Obtain an agreement that the project has access to defects discovered by the community of users. When available, the project can consider joining a product user group to obtain this information.
- Ensure a plan to provide adequate support is in place; the plan needs to include maintenance planning and maintenance costs.
- Document changes to the software management, development, operations, or maintenance plans affected by the use or incorporation of COTS, GOTS, MOTS, or reused software components.
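The risk items above can be tracked in a simple acquired-component register that flags components whose planned support lapses. The sketch below is illustrative only; the field names, component names, and dates are assumptions, not a NASA-mandated format.

```python
# Sketch: a minimal acquired-component register used to flag components whose
# planned support window has lapsed with no escrow fallback in place.
from dataclasses import dataclass
from datetime import date

@dataclass
class AcquiredComponent:
    name: str
    version: str
    license: str
    support_ends: date      # from the supplier or third-party maintenance agreement
    escrow_in_place: bool   # source code escrow or equivalent mitigation

def support_gaps(components, as_of):
    """Return components whose support lapses on/before `as_of` and have no escrow."""
    return [c for c in components
            if c.support_ends <= as_of and not c.escrow_in_place]

# Illustrative entries only — not real products.
register = [
    AcquiredComponent("example-rtos", "7.2", "commercial", date(2026, 6, 30), True),
    AcquiredComponent("example-math-lib", "1.4.0", "BSD-3-Clause", date(2024, 1, 1), False),
]
print(support_gaps(register, as_of=date(2025, 1, 1)))
```

Reviewing such a register at each major milestone is one way to keep the maintenance plan current.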
3.9 Ensure Fitness for Use
COTS, GOTS, MOTS, OSS, or reused software components are required to be tested, verified, and validated to the level required to ensure their fitness for use in the intended application. The software should be verified and validated, to the extent possible, in the same manner as hand-generated software, using the project classification and criticality as the basis for the level of effort to be applied.
For COTS, GOTS, MOTS, OSS, or reused software components such as commercial real-time operating systems, it is sufficient to test, in the project environment, the features being used to meet the software system’s requirements; it is not necessary to test every capability the software claims to provide. On Class A projects, when software test suites for the COTS, GOTS, MOTS, OSS, or reused software components are available, they are to be used when appropriate to address the intended environment of use, the interfaces to the software system, and the requirements of the project.
See also Topic 8.08 - COTS Software Safety Considerations.
3.10 Assess Vendor-Reported Defects
The project should periodically assess vendor-reported defects in the COTS, GOTS, MOTS, OSS, or reused software components. The assessment plan should include the frequency of evaluations, likely within a short period after vendor defect reports are released to users. The plan should capture how the project will assess the impact of vendor-reported defects in the project’s environment. This plan could refer back to the procedures used to ensure the COTS, GOTS, MOTS, OSS, or reused software component’s fitness for use in the project environment and capture any additional activities necessary to ensure the defects do not impact the system quality, reliability, performance, safety, etc.
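Part of this periodic assessment can be mechanized by screening vendor defect reports against the component version the project actually fields. The sketch below assumes a simple defect-record format; real inputs would come from the vendor's advisory feed or user group, and the component name and defect IDs are hypothetical.

```python
# Sketch: screen vendor-reported defects against the fielded component version.
# The record format {"id", "component", "affected_versions"} is an assumption
# for illustration; adapt it to whatever the vendor actually publishes.
def affected(defects, component, fielded_version):
    """Return IDs of defects whose affected-version list includes the fielded version."""
    return [d["id"] for d in defects
            if d["component"] == component
            and fielded_version in d["affected_versions"]]

vendor_defects = [
    {"id": "VD-101", "component": "example-1553-driver", "affected_versions": ["2.0", "2.1"]},
    {"id": "VD-102", "component": "example-1553-driver", "affected_versions": ["3.0"]},
]
print(affected(vendor_defects, "example-1553-driver", "2.1"))  # prints ['VD-101']
```

Any hit would then feed the project's normal impact analysis and problem-reporting process.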
See also Topic 8.02 - Software Reliability
3.11 Software Reuse
Software reuse (either software acquired commercially or existing software from previous development activities) comes with a special set of concerns. The reuse of commercially acquired software includes COTS, GOTS, MOTS, and OSS. Reuse of in-house software may include legacy or heritage software. Reused software often requires modification to be used by the project; the modification may be extensive or may just require wrappers or glueware to make the software usable. The acquired and existing software must be evaluated during the selection process to determine the effort required to bring it up to date. The basis for this evaluation is typically the criteria used for developing software as a new effort. The evaluation of the reused software requires ascertaining whether the software's quality is sufficient for the intended application. The requirement statement for SWE-027 calls for six conditions to be satisfied, and the note indicates additional items to consider. The key item in these listings is the need to ensure the verification and validation (V&V) activity for the reused software is performed to the same level of confidence required for a newly developed software component.
See also SWE-147 - Specify Reusability Requirements.
3.12 Software Certification by Outside Sources
Outside certifications can be taken into account depending on the intended NASA use of the software and on how closely that use matches the environment in which the software was certified. For example, a real-time operating system (RTOS) vendor's certification data and testing can be used in conjunction with certification data gathered in the NASA software environment. Even when software has been determined "acceptable" by a regulatory entity [e.g., the Federal Aviation Administration (FAA)], good engineering judgment has to be applied to determine the acceptable degree of use.
See also Topic 7.03 - Acquisition Guidance and SWE-156 - Evaluate Systems for Security Risks.
3.13 Off-The-Shelf Software
The remaining guidance on off-the-shelf (OTS) software is broken down into elements as a function of the type of OTS software. The following is an index of the guidance elements:
3.13.1 COTS / GOTS Software
3.13.2 MOTS Software
3.13.3 OSS Software
3.13.4 Auto-generated Code
3.13.5 Embedded Software
See also SWE-040 - Access to Software Products, SWE-042 - Source Code Electronic Access, SWE-058 - Detailed Design, SWE-211 - Test Levels of Non-Custom Developed Software
3.13.1 COTS/GOTS Software
Commercial Off the Shelf (COTS) software and Government Off the Shelf (GOTS) software are unmodified, out-of-the-box software solutions that can range in size from a portion of a software project to an entire software project. COTS/GOTS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).
If you plan to use COTS/GOTS products, be sure to complete the checklist tables under the Tools section of this guidance. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance. The checklist PAT-024, Checklist for Choosing Off-The-Shelf Software, provides many questions to answer before choosing a COTS product that will be used across the project life cycle.
If COTS/GOTS software is used for a portion of the software solution, the software requirements for that portion should be used in the testing, verification, and validation of the COTS/GOTS software. For example, suppose MIL-STD-1553 serial communication is the design solution for the project communications link requirements, and a COTS/GOTS software design solution is used along with the COTS/GOTS hardware design solution. The project's serial communications link software requirements should then be used to test, verify, and validate the COTS/GOTS MIL-STD-1553 software. The project requirements may not cover other functionality present in the COTS/GOTS MIL-STD-1553 software; this other functionality should be either disabled or determined to be safe by analysis and testing.
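The test-what-you-use approach can be illustrated with a small test suite that exercises only the features the project's requirements levy on the link. `Bus1553` below is a hypothetical stand-in stub for a vendor driver API, included so the pattern is runnable; a real project would call the actual driver through its wrapper.

```python
# Sketch: test only the project-used features of an off-the-shelf component.
# Bus1553 is a hypothetical stub standing in for a vendor-supplied 1553 driver.
import unittest

class Bus1553:
    """Stand-in for a vendor driver; enforces two real MIL-STD-1553 limits."""
    def send(self, rt_address, words):
        if not 0 <= rt_address <= 31:      # remote terminal addresses are 0-31
            raise ValueError("RT address out of range")
        if not 1 <= len(words) <= 32:      # a 1553 message carries 1-32 data words
            raise ValueError("word count out of range")
        return len(words)                  # words accepted for transmit

class UsedFeatureTests(unittest.TestCase):
    """Verify only the requirements the project levies on the serial link."""
    def test_nominal_transfer(self):
        self.assertEqual(Bus1553().send(rt_address=5, words=[0x1234] * 32), 32)

    def test_rejects_out_of_range_terminal(self):
        with self.assertRaises(ValueError):
            Bus1553().send(rt_address=40, words=[0x1])
```

Run with `python -m unittest` against the wrapped driver; claims the project does not rely on are deliberately left untested and instead disabled or shown safe by analysis.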
COTS software can range from simple software (e.g., in a handheld electronic device) to progressively more complicated software (e.g., launch vehicle control system software). A software safety assessment and a risk assessment can be made to determine whether using this software will result in an acceptable level of risk, even if unforeseen hazards arise. These assessments can be used to establish the approach to using and verifying the COTS/GOTS software.
Example: Checklist for selecting an RTOS PAT-025. See also, Topic 6.3 - Checklist for Choosing a Real Time Operating System (RTOS).
3.13.2 MOTS Software
As defined in Appendix A of NPR 7150.2:
In cases where legacy/heritage code is modified, MOTS is considered an efficient method to produce project software, especially if the legacy/heritage code is being used in the same application area as the NASA project. For example, Expendable Launch Vehicle simulations have been successfully modified to accommodate solid rocket boosters, payload release requirements, and other such changes. Further, if the "master" code has been designed with reuse in mind, such code becomes an efficient and effective basis for producing quality code for succeeding projects.
An Independent Verification and Validation (IV&V) Facility report, "Software Reuse Study Report," April 29, 2005, examines changes made on reused software. The conclusions are positive but caution against underestimating the extent and costs of modifying reused software.
The Department of Defense (DoD) has had extensive experience in COTS and MOTS. A Lesson Learned item, Commercial Item Acquisition: Considerations and Lessons Learned, specifically includes lessons learned from MOTS. Concerns in these lessons learned included commercial software vendors attempting to modify existing commercial products, limiting the changes to "minor" modifications, and underestimating the extent and schedule impacts of testing modified code.
Extreme caution should be exercised when attempting to purchase or modify COTS or GOTS code written for another application realm or for which key documentation is missing (e.g., requirements, architecture, design, tests).
Engineering judgment, including consideration of possible impacts on the software development activity, is needed when determining whether software is MOTS, legacy, or heritage.
See also Topic 7.07 - Software Architecture Description.
3.13.3 Open Source Software (OSS)
OSS can range in size from a portion of a software project to an entire software project. OSS software can include software tools (e.g., word processor or spreadsheet applications), simulations (e.g., aeronautical and rocket simulations), and modeling tools (e.g., dynamics/thermal/electrical modeling tools).
If you plan to use OSS products, be sure to complete the checklist tables under the Tools section of this guidance. The purpose of these tables is to ensure that the table entries are considered in your software life cycle decisions from software acquisition through software maintenance.
For OSS:
- The requirements to be met by the software component are identified.
- The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
- Proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed. The OSS license must be reviewed and approved by NASA legal before use on any government system.
- Future support for the software product is planned and adequate for project needs.
- The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
- The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components.
- An OSS software origin analysis should be done on all open source software before use.
- OSS software is required to be scanned or assessed for security vulnerabilities before use.
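One concrete piece of an OSS origin analysis is verifying that a downloaded release artifact matches the checksum published by the upstream project. A minimal sketch follows; the file path and digest are whatever the project actually downloads, and nothing here is specific to any particular OSS product.

```python
# Sketch: verify a downloaded OSS release artifact against its published
# SHA-256 checksum as one step of an origin analysis.
import hashlib

def sha256_of(path, chunk=1 << 16):
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_artifact(path, published_digest):
    """True when the file's digest matches the digest published upstream."""
    return sha256_of(path) == published_digest.lower()
```

Checksum verification confirms the artifact is the one the upstream project published; it does not by itself establish that the upstream source is trustworthy, so it complements rather than replaces the security scan.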
A table of example questions for assessing COTS, MOTS, OSS, and reused software for use by or in a system appears later in this section.
3.13.4 Auto-generated Software
Auto-generated software results from translating a model of the system behavior, created by software engineers, into a language such as C or Ada using an appropriate code generator. Changes are made by revising the model, and code is regenerated from the revised model.
As is required for software developed without a model or code generator, auto-generated software must be verified and validated to the level required to ensure its fitness for use in the intended application. Recommendations include: subjecting the model to the same level of scrutiny and review as source code; subjecting the auto-generated code to the same rigorous inspection, analysis, and test as hand-generated code; and including system engineers and software engineers in joint reviews to identify misunderstood or unclear requirements.
There are many considerations to be reviewed when deciding whether using auto-generated software is the right choice for a project, including determining what is necessary to ensure future support for the generated software.
See also Topic 8.11 - Auto-Generated Code and SWE-146 - Auto-generated Source Code.
3.13.5 Embedded Software
NASA commonly uses embedded software applications written by/for NASA for engineering software solutions. Embedded software is software specific to a particular application as opposed to general-purpose software running on a desktop. Embedded software usually runs on custom computer hardware ("avionics"), often on a single chip.
Care must be taken when using vendor-supplied board support packages (BSPs) and hardware-specific software (drivers), typically supplied with off-the-shelf avionics systems. BSPs and drivers act as the software layer between the avionics hardware and the embedded software applications written by/for NASA. Most central processing unit (CPU) boards have BSPs provided by the board manufacturer or third parties working with the board manufacturer. Driver software is provided for serial ports, universal serial bus (USB) ports, interrupters, modems, printers, and many other hardware devices.
BSPs and drivers are hardware dependent and are often developed by third parties on hardware/software development tools that may not be accessible years later. Risk mitigation planning should cover hardware-specific software, such as BSPs, software drivers, etc.
Board manufacturers provide many BSPs and drivers as binary code only, which could be an issue if the supplier is not available and BSP/driver errors are found. It is recommended that a project using BSPs/drivers maintain a configuration managed version of any BSPs with release dates and notes. Consult with avionics (hardware) engineers on the project to see what actions may be taken to manage the BSPs/drivers.
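Maintaining a configuration-managed record of binary-only BSPs/drivers can be as simple as filing each binary's content hash with its release metadata, so a fielded binary can be traced back to its record even if the supplier disappears. The sketch below is illustrative; the names, versions, and registry format are assumptions, not a prescribed process.

```python
# Sketch: a configuration-managed registry of binary-only BSPs/drivers, keyed
# by content hash so an unknown fielded binary can be matched to its record.
import hashlib

def record_bsp(registry, name, version, release_date, binary_bytes, notes=""):
    """File the BSP binary's SHA-256 digest with its release metadata."""
    digest = hashlib.sha256(binary_bytes).hexdigest()
    registry[digest] = {"name": name, "version": version,
                        "release_date": release_date, "notes": notes}
    return digest

def identify_bsp(registry, binary_bytes):
    """Match a fielded binary back to its record, or None if it is unmanaged."""
    return registry.get(hashlib.sha256(binary_bytes).hexdigest())

registry = {}
record_bsp(registry, "example-cpu-bsp", "4.1", "2024-03-15",
           b"\x7fELF...binary image...", notes="binary only; no source escrow")
print(identify_bsp(registry, b"\x7fELF...binary image...")["version"])  # prints 4.1
```

An unmatched binary (lookup returns `None`) is a flag that an unmanaged or updated BSP has reached the project.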
Consideration should also be given to how BSP/driver software updates will be handled, if and when they are made available, and how the project will learn that updates are available.
Vendor reports and user forums should be monitored from the time the hardware and associated software are purchased through a reasonable time after deployment. Developers should monitor supplier and user forums for bugs, workarounds, security changes, and other modifications to the software that, if unknown, could derail a NASA project. Consider the following snippet from a user forum:
"Manufacturer Pt. No." motherboard embedded complex electronics contains malware.
Published: 2010-xx-xx
A "Manufacturer" support forum identifies "manufacturer's product" motherboards that contain harmful code. The embedded complex electronics for server management on some motherboards may contain malicious code. There is no impact on either new servers or non-Windows based servers. No further information is available regarding the malware, malware mitigation, the serial number of motherboards affected, or the source.
Example Questions for Assessing COTS, MOTS, OSS, and Reused Software for Use by or in a System
This is not a complete list. Each Center or project should add to, remove from, or alter the list to fit its tools. Not all questions apply to all tool types. This checklist aids the thought process when considering COTS, MOTS, OSS, and reused software or software tools. If the system has safety-critical components that could contribute to a hazard, either by providing a false or inaccurate output or through flawed algorithms, paths, execution timing, etc., consider the safety-related questions below (see question 15). Record an answer of Y, N, NA, or unknown (?) for each question.
1. What requirements does the intended OTS, OSS, or reused software fulfill for the system?
   a. Is it a standalone tool used to produce, develop, or verify software (or hardware) for a safety-critical system?
   b. Is this an embedded product?
2. Why this COTS, MOTS, OSS, or reused software product, and why this vendor?
   a. What is the software's pedigree?
   b. Is it a known and respected company?
   c. What does the market/user community say about it?
   d. Does the purchasing company/industry have a track record of using this product?
   e. What is the volatility of the product? Of the vendor?
   f. What agreements and services will the vendor provide?
   g. Is escrow of the software a viable option?
   h. What documentation is available from the vendor?
   i. Are operator/user guides, installation procedures, etc., normally provided?
   j. Is additional requested documentation available (perhaps at additional cost)? Examples might include requirements, design, tests, problem reports, development and assurance processes, plans, etc.
3. What training is provided or available?
4. Does the vendor share known problems with their clients?
   a. Is there a user group?
   b. Is there a useful means to notify customers of problems (and any solutions/workarounds) found by the company or by another customer?
   c. Do they share their risk and/or safety analyses of new and existing problems/errors?
5. What plan/agreement is there if the vendor or product ceases to exist?
   a. What if the vendor goes out of business?
   b. What if another company buys the vendor?
   c. What if the vendor stops supporting either the product or the version of the product used?
6. Why not develop it in-house (this may or may not be obvious)?
7. How are those requirements traced throughout the life of the product?
8. What performance measures are expected and needed?
9. What requirements does it not meet that will need to be fulfilled with developed code?
10. Will wrappers and/or glueware be needed?
11. Will other tools and support software be needed (e.g., special tools for programming the COTS, adaptors for specific interfaces, drivers specific to an operating system or output device, etc.)?
12. Does it need to be programmed? Do one or more applications run on the COTS?
13. What features does the software have that are not used?
    a. How are they "turned off," if that is possible?
    b. Is a wrapper necessary to assure correct inputs to and/or outputs from the software?
    c. Can the unwanted features be made "safe," i.e., prevented from inadvertent functioning?
    d. Could operators/users/maintenance incorporate the unused features in the future?
       i. What would be the implications?
       ii. What would be the controls?
       iii. What would be the safety ramifications?
14. How can it be verified and validated functionally, performance-wise, and for stress/fault tolerance?
    a. Outside the intended system?
    b. With any programming or applications?
    c. With wrappers and/or glueware?
    d. As part of the incorporating system?
    e. What performance measures are to be used?
    f. What tests can be performed to stress the software standalone or within the system?
    g. What fault-tolerance tests can be performed either standalone or within the system (e.g., fault injection)?
15. Will it be used in a safety-critical system?
    a. Do the functions performed by the software meet the software safety criticality criteria?
    b. Has a preliminary hazard analysis identified the software functions as safety-critical?
    c. If so, what hazard contributions could it make?
       i. Think functionally at first. What happens if the function it is to perform fails?
       ii. Then work through common/generic faults and failure modes.
    d. How does it fail? List and test for all the modes.
    e. Will wrapper code be developed to protect the system from this software?
    f. What potential problems could the unused portions of the software cause?
16. For an operating system:
    a. Is it a reduced or "safe" OS (e.g., the real-time operating systems VxWorks or Integrity, or the DO-178B ARINC 653 version sold only to the aviation, life-critical software market)?
    b. How are exceptions handled?
    c. What compilers are needed? What debuggers?
    d. What is it running on? Is that an approved, recommended platform?
    e. Are the processing time, scheduler, and switching time adequate?
    f. Will partitioning be needed? How well does it perform partitioning?
17. Will it be used in a system that must be highly reliable?
    a. Is there reliability information from the vendor?
    b. How will its reliability be measured within the system it is operating in/contributing to?
    c. Is there company experience with this software to draw from? With which version?
    d. What error/discrepancy metrics can be collected?
       i. From the vendor?
       ii. From use in developing and testing the software both within and outside the system?
    e. Do the system functional FMEAs include the functions to be performed by the software?
       i. Have known potential faults and failures been adequately analyzed and documented?
18. What happens when versions of the software change?
    a. Is there an upgrade plan?
       i. During development?
       ii. During operations/maintenance?
       iii. What does the upgrade plan take into consideration?
    b. Is there a maintenance plan/agreement?
    c. Is there a support agreement for addressing any errors found?
    d. Should the software be put in escrow?
    e. Should there be an agreement to have the software revert to the company after so many years?
    f. Should the company purchase the rights to the software code and documentation?
19. What is the licensing agreement, and what are the limitations?
    a. How many seats are there?
    b. Is vendor support included?
    c. Can licenses be transferred?
    d. Does the licensing agreement meet project needs?
20. For software development and debugging tools:
    a. Which compilers and libraries have been chosen?
    b. Are there a reduced instruction set and code standards to be followed?
    c. Is there more than one debug tool to be used?
       i. What are their false positive and false negative rates?
    d. Autocode generators:
       i. What are their limitations and known defects?
       ii. What are their settings and parameters? (Are they easy to use? Do they meet project needs?)
       iii. Are the results usable and repeatable?
       iv. What are the support agreements?
       v. Is there verification and validation support? How will they be verified and validated?
    e. Modeling tools
    f. Development environment tools
21. For infrastructure tools (e.g., databases, configuration management and release tools, verification tools, etc.):
    a. Does it meet the requirements?
    b. Can it grow and expand if needed, or has it been specified for only current needs?
    c. Will the tool be verifying, creating (e.g., auto code generator), building, assembling, or burning in safety-critical software?
    d. How would the loss of data stored in the tool or accessed by the tool impact the project?
       i. Could safety data be lost, say, from the tool that stores hazard reports or problem reporting information?
    e. Are there sufficient and frequent enough backups? How and where are those stored?
    f. How much training is required to use the tools?
    g. Are there restrictions on and levels of access?
       i. How are access levels managed?
    h. Are any security features needed, either built-in or via access limitations?
3.14 Additional Guidance
Additional guidance can be found in related requirements in this Handbook.
3.15 Center Process Asset Libraries
SPAN - Software Processes Across NASA
SPAN contains links to Center managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki 197
See the following link(s) in SPAN for process assets from contributing Centers (NASA Only).
4. Small Projects
This requirement applies to all projects regardless of size.
5. Resources
5.1 References
- (SWEREF-040) Commissioned by the NASA Office of Chief Engineer, Technical Excellence Program, Adam West, Program Manager, and edited by Daniel L. Dvorak, Systems and Software Division, Jet Propulsion Laboratory, 2009.
- (SWEREF-121)
- (SWEREF-125) Asay, Matt. CNET, September 27, 2007
- (SWEREF-129) Baron, Sally J. F. (September, 2006). International Public Procurement Conference Proceedings.
- (SWEREF-130) Baron, Sally J. F., Ph.D. Management Consulting. IEEE Xplore, Sixth International IEEE Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (ICCBSS'07), 0-7695-2785-X/07.
- (SWEREF-143) Budden, Timothy J. AVISTA. CrossTalk - Journal of Defense Software Engineering, November 2003. See page 18.
- (SWEREF-148) Carney, D.J., Oberndorf, P.A. (May, 1997). Carnegie Mellon Software Engineering Institute, Carnegie Mellon University.
- (SWEREF-154) Clark, Drs. Brad and Betsy. Software Metrics, Inc. (June, 2007). CrossTalk - Journal of Defense Software Engineering.
- (SWEREF-167) June 1997. University of Southern California, Center for Software Engineering.
- (SWEREF-185) Feathers, Michael C. (2004). Prentice Hall.
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-242) Livingston, Jr. P.E., Wiley F. (June, 2007). Software Technology Support Center (STSC), Hill AFB.
- (SWEREF-249) McHale, John, Exec. Ed. (January, 2008). Military & Aerospace Electronics magazine. Title no longer available.
- (SWEREF-276) NASA-GB-8719.13, NASA, 2004. Access NASA-GB-8719.13 directly: https://swehb.nasa.gov/download/attachments/16450020/nasa-gb-871913.pdf?api=v2
- (SWEREF-369) Wagstaff, K., Benowitz, E. Byrne, D.J., Peters, K., Watney, G. (2008), NASA Jet Propulsion Lab (JPL). https://trs.jpl.nasa.gov/handle/2014/41374
- (SWEREF-373) NPR 2210.1C, Space Technology Mission Directorate, Effective Date: August 11, 2010, Expiration Date: January 11, 2022. See page 9.
- (SWEREF-424) NASA Langley Research Center (LaRC), August 20, 2004. Lessons Learned Reference. (See pages 45-56)
- (SWEREF-425) International Space Station, Multilateral Coordination Board, NASA Kennedy Space Center (KSC), July 22, 2009. Lessons Learned Reference.
- (SWEREF-426) Office of the Secretary of Defense, June 26, 2000. Lessons Learned Reference.
- (SWEREF-462) © 2014 Black Duck Software, Inc.
- (SWEREF-550) Public Lessons Learned Entry: 1346.
- (SWEREF-551) Public Lessons Learned Entry: 1370.
- (SWEREF-557) Public Lessons Learned Entry: 1483.
- (SWEREF-668) MIL-STD-1553B, published in 1978.
5.2 Tools
NASA users can find tools relevant to this requirement in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.
5.3 Process Asset Templates
(PAT-022 - ) Topic 8.56 - Source Code Quality Analysis, tab 2.2.
(PAT-024 - ) Topic 6.4. Also in SWE-027 and categories: Commercial and Legacy Software, and Coding Practices.
(PAT-025 - ) SWE-027, tab 3.1. Also in Topic 6.3.
6. Lessons Learned
6.1 NASA Lessons Learned
The NASA Lessons Learned database contains the following lessons learned related to the use of commercial, government, and legacy software:
- MER Spirit Flash Memory Anomaly (2004). Lesson Number 1483 557: "Shortly after the commencement of science activities on Mars, the Mars Exploration Rover (MER) lost the ability to execute any task that requested memory from the flight computer. The cause was incorrect configuration parameters in two operating system software modules that control files' storage in system memory and flash memory. Seven recommendations cover enforcing design guidelines for COTS software, verifying assumptions about software behavior, maintaining a list of lower priority action items, testing flight software internal functions, creating a comprehensive suite of tests and automated analysis tools, providing downlinked data on system resources, and avoiding the problematic file system and complex directory structure."
Recommendations:
- "Enforce the project-specific design guidelines for COTS software, as well as for NASA-developed software. Assure that the flight software development team reviews the basic logic and functions of commercial off-the-shelf (COTS) software, including the vendor's briefings and participation.
- "Verify assumptions regarding the expected behavior of software modules. Do not use a module without detailed peer review and ensure that all design and test issues are addressed.
- "Where the software development schedule forestalls completion of lower priority action items, maintain a list of incomplete items that require resolution before final configuration of the flight software.
- "Place a high priority on completing tests to verify the execution of flight software internal functions.
- "Early in the software development process, create a comprehensive suite of tests and automated analysis tools.
- "Ensure that reporting flight computer-related resource usage is included.
- "Ensure that the flight software downlinks data on system resources (such as the free system memory) so that the actual and expected behavior of the system can be compared.
- "For future missions, implement a more robust version of the dosFsLib module, and/or use a different type of file system and a less complex directory structure.".
- Lessons Learned From Flights of Off the Shelf Aviation Navigation Units on the Space Shuttle, GPS. Lesson Number 1370 551: "The Shuttle Program selected off-the-shelf GPS and EGI units that met the requirements of the original customers. It was assumed that off-the-shelf units with proven design and performance would reduce acquisition costs and require minimal adaptation and minimal testing. However, the time, budget, and resources needed to test and resolve firmware issues exceeded initial projections."
- ADEOS-II NASA Ground Network (NGN) Development and Early Operations – Central/Standard Autonomous File Server (CSAFS/SAFS) Lessons Learned. Lesson Number 1346 550: "The purpose of the Standard Autonomous File Server (SAFS) is to provide automated management of large data files without interfering with the assets involved in the acquisition of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over processing level to enhance reliability. The successful integration of COTS products into the SAFS system has been key to its becoming accepted as a NASA standard resource for file distribution, and leading to its nomination for NASA's Software of the Year Award in 1999."
Lessons Learned:
"Match COTS tools to project requirements. Deciding to use a COTS product as the basis of system software design is potentially risky. The potential benefits include quicker delivery, less cost, and more reliability in the final product. The following lessons were learned in the definition phase of the SAFS/CSAFS development.- "Use COTS products and re-use previously developed internal products.
- "Create a prioritized list of desired COTS features.
- "Talk with local experts having experience in similar areas.
- "Conduct frequent peer and design reviews.
- "Obtain demonstration [evaluation] versions of COTS products.
- "Obtain customer references from vendors.
- "Select a product appropriately sized for your application.
- "Choose a product closely aligned with your project's requirements.
- "Select a vendor whose size will permit a working relationship.
- "Use vendor tutorials, documentation, and vendor contacts during the COTS evaluation period."
"Test and prototype COTS products in the lab. The COTS evaluation prototyping and test phase allow problems to be identified as the system design matures. These problems can be mitigated (often with the help and cooperation of the COTS vendor) well before the field-testing phase, at which time it may be too costly or impossible to retrofit a solution. The following lessons were learned in the prototyping and test phase of the SAFS/CSAFS development: - "Prototype your system's hardware and software in a lab setting as similar to the field environment as possible.
- "Simulate how the product will work on various customer platforms.
- "Model the field operations.
- "Develop in stages with ongoing integration and testing."
- "Pass pertinent information on to your customers.
- "Accommodate your customers, where possible, by building in alternative options.
- "Don't approve all requests for additional options by customers or new projects that come online.
- "Select the best COTS components for product performance, even if they are from multiple vendors.
- "Consider the expansion capability of any COTS product.
- "Determine if the vendor's support is adequate for your requirements.
"Install, operate, and maintain the COTS field and lab components. The following lessons were learned in the installation and operation phase of the SAFS/CSAFS development: - "Personally perform on-site installations whenever possible.
- "Have support/maintenance contracts for hardware and software through development, deployment, and first year of operation.
- "Create visual representations of system interactions where possible.
- "Obtain feedback from end-users.
- "Maintain the prototype system after deployment.
- "Select COTS products with the ability to do internal logging."
- Lessons Learned Study Final Report for the Exploration Systems Mission Directorate, Langley Research Center; August 20, 2004. Lessons Learned Number 1838 424: "There has been an increasing interest in utilizing commercially available hardware and software as portions of space flight systems and their supporting infrastructure. Experience has shown that this is a very satisfactory approach for some items and a major mistake for others. In general, COTS [products] should not be used as part of any critical systems [but see the recommendation later in this Lesson Learned] because of the generally lower level of engineering and product assurance used in their manufacture and test. In those situations where COTS [software] has been applied to flight systems, such as the laptop computers utilized as control interfaces on [International Space Station] (ISS), the cost of modifying and testing the hardware/software to meet flight requirements has far exceeded expectations, potentially defeating the reason for selecting COTS products in the first place. In other cases, such as the [Checkout Launch Control System] (CLCS) project at JSC, the cost of maintaining the commercial software had not been adequately analyzed and drove the project's recurring costs outside the acceptable range.
Recommendation: Ensure that candidate COTS products are thoroughly analyzed for technical deficiencies and life cycle cost implications before levying them on the program.
- COTS systems can reduce system costs, but only if all of their characteristics are considered beforehand and included in the planned application. (Standards)
- COTS systems that look good on paper may not scale well to NASA's needs for legitimate reasons. These include sustaining engineering/update cycle/recertification costs, scaling effects, dependence on third party services and products. We need to ensure that a life cycle cost has been considered correctly. (Headquarters - CLCS)
6.2 Other Lessons Learned
- The following information comes from the NASA Study on Flight Software Complexity listed in the reference section of this document 040:
"In 2007, a relatively new organization in DoD (the Software Engineering and System Assurance Deputy Directorate) reported their findings on software issues based on approximately 40 program reviews in the preceding 2½ years (Baldwin 2007). They found several software systemic issues that were significant contributors to poor program execution." Among the seven listed were the following on Commercial Off The Shelf (COTS):
- "Immature architectures, COTS integration, interoperability."
"Later, in partnership with the NDIA, they identified the seven top software issues that follow, drawn from a perspective of acquisition and oversight." Among the seven listed were the following on COTS:
- "Inadequate attention is given to total life cycle issues for COTS/NDI impacts on life cycle cost and risk."
"In partnership with the NDIA, they made seven corresponding top software recommendations." Among the seven listed were the following on COTS:
- "Improve and expand guidelines for addressing total life cycle COTS/NDI issues."
- "Improve and expand guidelines for addressing total life cycle COTS/NDI issues."
- The following information is from Commercial Item Acquisition: Considerations and Lessons Learned July 14, 2000, Office of the Secretary of Defense 426:
This document is designed to assist DoD acquisition of commercial items. According to the introductory cover letter, "it provides an overview of the considerations inherent in such acquisitions and summarizes lessons learned from a wide variety of programs." Although it is written with the DoD acquirer in mind, it can provide useful information as NASA projects move down this increasingly significant path.
- International Space Station Lessons Learned as Applied to Exploration, KSC, July 22, 2009 425:
(Lesson 23): Use Commercial Off-the-Shelf Products Where Possible.
- An effective strategy in the ISS program was to simplify designs by utilizing commercial off-the-shelf (COTS) hardware and software products for non-safety, non-critical applications.
- Application to Exploration: The use of COTS products should be encouraged whenever practical in exploration programs.
7. Software Assurance
a. The requirements to be met by the software component are identified.
b. The software component includes documentation to fulfill its intended purpose (e.g., usage instructions).
c. Proprietary rights, usage rights, ownership, warranty, licensing rights, transfer rights, and conditions of use (e.g., required copyright, author, and applicable license notices within the software code, or a requirement to redistribute the licensed software only under the same license (e.g., GNU GPL, ver. 3, license)) have been addressed and coordinated with Center Intellectual Property Counsel.
d. Future support for the software product is planned and adequate for project needs.
e. The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
f. The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components.
7.1 Tasking for Software Assurance
1. Confirm that the conditions listed in "a" through "f" are complete for any COTS, GOTS, MOTS, OSS, or reused software that is acquired or used.
7.2 Software Assurance Products
- No products have been identified at this time.
Objective Evidence
- The requirements for any COTS, GOTS, MOTS, OSS, or reused software that is acquired or used.
- COTS, GOTS, MOTS, OSS, or reused software documentation.
- Test procedures and test reports that show that any COTS, GOTS, MOTS, OSS, or reused software is verified and validated to the same level required to accept a similar developed software component for its intended use.
- Data showing a review of vendor-reported defects.
7.3 Metrics
- # of Software Requirements (e.g., Project, Application, Subsystem, System, etc.)
See also Topic 8.18 - SA Suggested Metrics
7.4 Guidance
When a Commercial Off the Shelf (COTS), Government Off the Shelf (GOTS), Modified Off the Shelf (MOTS), Open Source Software (OSS), or reused software component is acquired or used:
- The requirements to be met by the software component are identified: Assess if the software requirements specification has requirements identified for any feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software component, not just a requirement to use a COTS, GOTS, MOTS, OSS, or reused software component. The requirements are needed to drive testing of the feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software components. Identify risks or issues if the software requirements specification does not have well-written requirements identified for the feature, function, or capability provided by the COTS, GOTS, MOTS, OSS, or reused software component. This includes requirements for RTOS software.
- The software component includes documentation to fulfill its intended purpose (e.g., usage instructions): Assess whether the documentation delivered with the COTS, GOTS, MOTS, OSS, or reused software component is adequate for its intended purpose (e.g., usage instructions).
- Proprietary rights, usage rights, ownership, warranty, licensing rights, and transfer rights have been addressed: Verify that engineering has addressed these questions for all COTS, GOTS, MOTS, OSS, or reused software components used.
- Future support for the software product is planned and adequate for project needs: - Assess the long-term support plans for all COTS, GOTS, MOTS, OSS, or reused software components used, including version support update plans by the project. Verify that you have access to all discrepancies identified by any user of the COTS, GOTS, MOTS, OSS, or reused software components.
- The software component is verified and validated to the same level required to accept a similar developed software component for its intended use.
- The project has a plan to perform periodic assessments of vendor reported defects to ensure the defects do not impact the selected software components. Verify that you have access to all discrepancies identified by any user of the COTS, GOTS, MOTS, OSS, or reused software components.
- Perform or review any risk analysis, trade studies, or heritage analyses that have been done to assess any potential impacts on safety, quality, security, or reliability.
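The periodic vendor-defect assessment called for by condition "f" can be sketched as a simple screening step: compare each vendor-reported defect's affected versions against the component versions the project actually uses. This is a minimal illustration under assumed inputs — the inventory, defect-feed format, and component names are hypothetical, not a real vendor interface.

```python
# Hypothetical sketch of condition "f": periodically screen vendor-reported
# defects against the project's component inventory. Data is illustrative.
inventory = {"ExampleRTOS": "6.9", "ExampleMathLib": "2.3"}

vendor_defects = [
    {"id": "VD-101", "component": "ExampleRTOS", "affected": {"6.8", "6.9"},
     "summary": "File system memory leak"},
    {"id": "VD-102", "component": "ExampleRTOS", "affected": {"7.0"},
     "summary": "Scheduler jitter"},
    {"id": "VD-103", "component": "ExampleMathLib", "affected": {"2.3"},
     "summary": "Rounding error in trig routines"},
]

def assess(inventory, defects):
    """Return vendor defects whose affected versions match a version in use."""
    return [d for d in defects
            if d["component"] in inventory
            and inventory[d["component"]] in d["affected"]]

# VD-102 affects only version 7.0, which the project does not use, so it
# is screened out; the remaining defects need project disposition.
for d in assess(inventory, vendor_defects):
    print(f'{d["id"]}: {d["component"]} {inventory[d["component"]]} - {d["summary"]}')
```

Each match would then be dispositioned by the project (impact analysis, workaround, upgrade, or acceptance of risk) and the screening repeated at the planned interval.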
If the COTS, MOTS, GOTS, or OSS is identified as safety-critical, evidence of the following needs to be evaluated by SA as part of the acceptance process:
- Software contributions to hazards, conditions or events have been identified.
- All controls for hazards, conditions, or events that require software implementation have been identified and properly implemented.
- All software requirements associated with safety and safety design elements have been identified and tracked.
- All software requirements associated with safety and safety design elements have been successfully implemented, validated, or waivers and deviations have been approved.
- All software requirements associated with safety and safety design elements have been properly verified, or waivers and deviations have been approved.
- All discrepancies in safety-critical software have been dispositioned with the SMA concurrence.
- All operational workarounds associated with discrepancies in safety-critical software have the concurrence of the acquiring SMA and operations.
Example COTS Safety Checklist
This is not a complete list. Each Center or project should add to, remove from, or alter the list so that it applies to their tools. Not all questions apply to all COTS, MOTS, GOTS, OSS, or reuse software or tool types. This checklist aids the thought process when considering whether software tools or programs (embedded or standalone) could contribute to a hazard, either by providing a false or inaccurate output or by developing software with flawed algorithms, paths, execution timing, etc. | |
1. Were any risk analyses or trade-off analyses performed? | |
a. Where and how are the COTS, MOTS, GOTS, OSS, or reuse software planned to be used? | |
b. What features will not be used, and how can they be prevented from inadvertent access? | |
c. What changes to the rest of the system are needed to incorporate the COTS, MOTS, GOTS, OSS, or reuse software? | |
d. Where are the results of the trade study documented, and are they being maintained? | |
2. How adequately does the SW Management Plan address the COTS, MOTS, GOTS, OSS, or reuse software in its system(s), or is there a standalone COTS, MOTS, GOTS, OSS, or reuse software management plan? | |
a. Does the plan address how version changes and problem fixes to the COTS, MOTS, GOTS, OSS, or reuse software will be handled during development? | |
i. What is the decision-making process for what upgrades will be made and when they will be made? | |
ii. How does it address version control for the COTS, MOTS, GOTS, OSS, or reuse software and any wrappers or glueware? | |
iii. If there are multiple COTS, MOTS, GOTS, OSS, or reuse software that interacts, how are upgrades coordinated? | |
iv. What retesting and additional analyses will take place to assure smooth incorporation? | |
b. How will COTS, MOTS, GOTS, OSS, or reuse software be included in the Data Acceptance package and version description documents? | |
c. What Software Classification is assigned to the COTS, MOTS, GOTS, OSS, or reuse software or the SW System in which the COTS, MOTS, GOTS, OSS, or reuse software is used? | |
d. Does SA agree with the Software Classification? With the Safety Assessment? | |
e. Is the plan complete for the appropriate level(s) of SW Classification? | |
f. How will risks be captured and managed? | |
g. Does it cover the issues listed above? | |
h. Does the plan make sense? | |
3. Other SW or system plans will need to be reviewed to assure that they address the COTS, MOTS, GOTS, OSS, or reuse software: | |
a. Has the software maintenance plan been reviewed? | |
i. How will COTS, MOTS, GOTS, OSS, or reuse software be upgraded or replaced once in operation | |
ii. What trigger points will be used to determine the need/benefits vs. potential instability caused by upgrades or replacement of COTS, MOTS, GOTS, OSS, or reuse software? | |
b. Have retirement plans been reviewed? | |
c. Have safety plans been reviewed? | |
d. Have assurance plans (which address all that is listed here and possibly more)been reviewed? | |
4. A review of the requirements the COTS, MOTS, GOTS, OSS, or reuse software is supposed to be fulfilling: | |
a. Functional requirements, | |
b. Interface requirements, | |
c. Performance requirements | |
d. Wrapper software requirements | |
e. Has the functionality of the COTS, MOTS, GOTS, OSS, or reuse software not to be used identified, and how will it be prevented from being used? | |
f. How are the requirements fulfilled by COTS, MOTS, GOTS, OSS, or reuse software being traced from beginning to delivery and beyond? | |
g. Have realistic and complete operational and failure scenarios been written? | |
5. | Participate in the design reviews that address how the COTS, MOTS, GOTS, OSS, or reuse software is to be architected into the system; at a minimum, the PDR and CDR of the systems should address the COTS, MOTS, GOTS, OSS, or reuse software. | |
a. Does it meet the requirements placed on it? | |
b. Has the risk analyses been performed? | |
c. Have the safety analyses been performed and presented to the appropriate phase? | |
6. Safety: How will Hazard analyses be run on the COTS, MOTS, GOTS, OSS, or reuse software or systems with COTS, MOTS, GOTS, OSS, or reuse software? | |
a. By its functions, or just as inputs and outputs through a wrapper? | |
b. What if it is an OS, are the safety personnel aware of how to cover OS in an HA? | |
c. Possible hazard causes and effects on safety-critical systems? | |
d. Its applications, glueware, and or wrappers? | |
e. How to mitigate possible hazards that a COTS, MOTS, GOTS, OSS, or reuse software could trigger? | |
7. | Review the verification and validation plans, procedures, and results. | |
a. How will it be tested? | |
i. White box testing? | |
ii. In situ testing? | |
iii. Can it be tested standalone to ensure it meets the needs it is intended for? | |
b. How are upgrades to the COTS, MOTS, GOTS, OSS, or reuse software verified and validated? | |
c. What are the plans and procedures? | |
d. Proof that it does not utilize undesired features? | |
e. Are any safety controls and mitigations tested sufficiently? | |
f. When best to participate in testing to assure the COTS, MOTS, GOTS, OSS, or reuse software are working properly and have been incorporated properly? | |
8. Reliability | |
a. What are the performance measures? | |
i. Expected? | |
ii. Measured? | |
iii. What are the issues with crashes, input, or memory overloads? | |
b. How does it fail? | |
i. What are the conditions that lead to failure or fault? | |
ii. What are the operational limits? | |
iii. What are the impacts of those failures or faults? | |
iv. Are there any predictors that can measure and lead to the prevention of a failure? | |
v. What protections need to be provided? | |
1. In the operations? | |
2. In the glueware/wrappers? | |
c. How does the COTS, MOTS, GOTS, OSS, or reuse software provide notifications of faults and failures? | |
d. How does it get reset? | |
e. What measurements should be taken, and when, to understand the reliability of the COTS, MOTS, GOTS, OSS, or reuse software? | |
i. During integration and incorporation into the system (interface problems, trouble with support SW, etc.)? | |
ii. During systems checkout and testing? | |
iii. During operations? | |
9. Metrics should be determined to assure performance and quality within the system or as a standalone tool. | |
10. SW Assurance of any associated developed software needs to carry the same SW Classification and safety assessment and thus the appropriate normal software engineering and assurance: | |
a. glueware | |
b. wrappers | |
c. applications | |
d. Interfaces | |
i. human | |
ii. other systems/software | |
iii. Hardware including Programmable Logic Devices | |
11. Lessons learned of problems, changes, adaptations, usage, programmability, etc.: | |
a. Including its applications, glueware, and or wrappers? | |
b. Provide information and evidence if the COTS, MOTS, GOTS, OSS, or reuse software product(s) worked, and provide documentation of both problems and solutions. |
See also Topic 8.02 - Software Reliability, 8.08 - COTS Software Safety Considerations.
7.5 Additional Guidance
Additional guidance can be found in the following related requirements in this Handbook: