

SWE-060 - Coding Software

1. Requirements

4.4.2 The project manager shall implement the software design into software code.

1.1 Notes

NPR 7150.2, NASA Software Engineering Requirements, does not include any notes for this requirement.

1.2 History

SWE-060 - Last used in rev NPR 7150.2D

Rev | SWE Statement

A | 3.3.1 The project shall implement the software design into software code.

Difference between A and B: No change

B | 4.4.2 The project manager shall implement the software design into software code.

Difference between B and C: No change

C | 4.4.2 The project manager shall implement the software design into software code.

Difference between C and D: No change

D | 4.4.2 The project manager shall implement the software design into software code.



1.3 Applicability Across Classes

Class        |  A  |  B  |  C  |  D  |  E  |  F
Applicable?  |     |     |     |     |     |

Key: ✓ - Applicable | ✗ - Not Applicable


2. Rationale

This requirement begins the implementation section of the NPR 7150.2. It acknowledges that the project has the primary responsibility for producing the software code. The NPR notes that the "software implementation consists of implementing the requirements and design into code, data, and documentation. Software implementation also consists of the following coding methods and standards. Unit testing is also a part of software implementation." Other guidance areas in this Handbook cover the requirements for data, documentation, methods, standards, and unit testing (see the table in the guidance section for this requirement).

See also SWE-058 - Detailed Design

3. Guidance

3.1 Coding Standards

Once the software development team has completed the software architecture and the software detailed design, the exacting task of turning the design into code begins. Use of and adherence to the project's software coding standards will enhance the resulting code and reduce coding errors (see SWE-061 - Coding Standards). In a team environment or group collaboration, coding standards ensure uniform coding practices, reducing oversight errors and the time spent in code reviews. When NASA software development work is outsourced to a supplier, agreement on a set of coding standards ensures that the contractor's code meets all quality guidelines mandated by NASA-STD-8739.8, Software Assurance and Software Safety Standard. 278

See also SWE-185 - Secure Coding Standards Verification

3.2 Accredited Tools

The software development team uses accredited tools to develop the software code (see SWE-136 - Software Tool Accreditation). This may include accredited tools that have not been previously used in, or adapted to, the new environment. The key is to evaluate the development environment and its associated development tools and then accredit them against an environment/tool system that is already accredited. The more typical case in new software development is the use of accredited tools in an accredited development environment that have not been used together previously and thus are not accredited as an integrated system. Many NASA missions and projects have used modeling tools such as Simulink and MATLAB. Auto-generated code (e.g., code from generators like MatrixX and Real-Time Workshop) has been used successfully in the past and is an important approach for developing software for current and future NASA projects. The potential for bugs, sometimes difficult to find, and the certification of the generated code are two problems the software development team needs to be aware of and plan for as the software coding occurs.

See also Topic 8.11 - Auto-Generated Code

Smaller software work products can be completed with a few stand-alone tools.  Larger software work products will benefit from using an integrated development environment (IDE) for producing the code. An IDE (also known as an integrated design environment) is a software application that provides comprehensive facilities to software coders and computer programmers for software development.

An IDE normally includes the following tools:

  • Source code editor (a text editor for editing the source code).
  • Compiler and/or an interpreter (a program (or set of programs) that transforms source code written in a programming language (the source language) into object code).
  • Build automation tools (activities or work aids that script or automate the wide variety of tasks that software developers do in their day-to-day activities).
  • Debugger (a program run on the software to surface coding errors or other software work product issues).

An IDE developed for a particular application may have more tools. The Process Asset Library (PAL) at the performing Center is the first place to search for an existing IDE to use.

This Handbook, along with the Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook, provides an extensive listing of individual tools that have been developed for particular applications (see section 5.1 of this SWE). The tool list contains both NASA and commercially developed products. SPAN has several examples of design environments. PALs from other Centers, easily located by NASA users through the NASA Engineering Network (NEN), are good places to search for individual tools and developed IDEs.

3.3 Executable Code

Code generated using an IDE is transformed by the compiler into an output language, usually the native machine language of the system. The work accomplished in this phase of the software life cycle includes the coding of the algorithmic detail developed during the component-level design activities. This results in the code needed for manipulating data structures, effecting the communications between software components across their interfaces, and implementing the processing algorithms allocated to each software work product component.

3.4 Unit Testing

The software team performs code unit testing and debugging regularly to find errors early in the coding cycle and avoid expensive fixes in the system and integration test phases of the software life cycle. Unit testing is intended to confirm that the software work product performs the capability assigned to it, correctly interfaces with other units and data, and represents a faithful implementation of the unit design. Static analysis tools are used to help uncover various problems in the code (dead code, security vulnerabilities, memory leaks, etc.). Debugging can be done with various tools, but experienced and knowledgeable personnel are often needed when addressing code that supports complex software architectures and designs. Code walk-throughs and peer inspections can help identify and correct issues and reveal opportunities for applying better coding practices. See also SWE-062 - Unit Test

See also Topic 8.19 - Dead / Dormant Code and Safety-Critical Software

3.5 Optimizing Code

Compiling may take one pass or multiple passes, and compilation can generate optimized code. A single pass through the compiler often does not produce all the possible optimizations, so a good practice is to plan multiple compiler passes to achieve the maximum code improvement. However, optimization is only one of many desirable goals in software work product development and is often at odds with other important goals such as stability, maintainability, and portability. Optimization is usually beneficial when applied at its most cursory level (e.g., efficient implementation, clean non-redundant interfaces). But at its most intrusive (e.g., inline assembly, pre-compiled/self-modifying code, loop unrolling, bit-fielding, superscalar and vectorizing techniques), it can be an expensive source of time-consuming recoding, recompiling, and bug hunting. Be cautious of the cost of optimizing your code. 208

3.6 Checklist Of C Programming Practices For Safety

Derived from NUREG/CR-6463 382, appendix B. See also the generic list of programming practices for safety, updated 10/21/2020 by the NASA Software Safety Guidebook Team.

  • Limit the number and size of parameters passed to routines. Too many parameters affect the readability and testability of the routine. Large structures or arrays, if passed by value, can overflow the stack, causing unpredictable results. Always pass large elements via pointer.
  • Use recursive functions with great care. Stack overflows are common. Verify that there is a finite recursion!
  • Utilize functions for boundary checking. Since C does not do this automatically, create routines that perform the same function. Accessing arrays or strings out-of-bounds is a common problem with unpredictable, and often major, consequences.
  • Do not use the gets function or related functions. These do not have adequate limit checks. Writing your own routine allows better error handling to be included.
  • Use memmove, not memcpy. Memcpy has problems if the memory regions overlap.
  • Create wrappers for built-in functions to include error checking.
  • If “if…else if…else if…” gets beyond two levels, use a switch…case. This increases readability.
  • When using the switch…case, always explicitly define default. Do not omit the break.
  • Initialize local (automatic) variables before first use. They contain garbage before explicit initialization. Pay special attention to pointers since they can have the most dangerous effects.
  • Initialize global variables in a separate routine. This ensures that variables are properly set at a warm reboot.
  • Check pointers to make sure they don’t reference variables outside of scope. Once a variable goes out of scope, what it contains is undefined.
  • Only use setjmp and longjmp for exception handling. These commands jump outside function boundaries and deviate from normal control flow.
  • Avoid pointers to functions. These pointers cannot be initialized and may point to non-executable code. If they must be used, document the usage.
  • Prototype all functions and procedures! This allows the compiler to catch errors rather than having to debug them at run-time. Also, when possible, use a tool or other method to verify that the prototype matches the function.
  • Minimize interface ambiguities, such as using expressions as parameters to subroutines or changing the order of arguments between similar functions. Also, justify (and document) any use of functions with an indefinite number of arguments. These functions cannot be checked by the compiler and are difficult to test.
  • Do not use ++ or -- operators on parameters being passed to subroutines; doing so can create unexpected side effects.
  • Use bitmasks instead of bit fields, which are implementation dependent.
  • Always explicitly cast variables. This enforces stronger typing. Casting pointers from one type to another should be justified and documented.
  • Avoid the use of typedefs for unsized arrays. This feature is poorly supported and error-prone.
  • Avoid mixing signed and unsigned variables. Use explicit casts when mixing is necessary.
  • Don’t compare floating-point numbers to 0 or expect exact equality. Allow some small differences due to the precision of floating-point arithmetic.
  • Do not compare different data types, such as comparing a floating-point number to an integer.
  • Enable and read compiler warnings. If an option, have warnings issued as errors. A warning may indicate that the deviation is fine, but it may also indicate a subtle error.
  • Be cautious if using standard library functions in a multitasking environment. Library functions may not be re-entrant and could lead to unspecified behavior.
  • Do not call functions within interrupt service routines. If it is necessary to do so, make sure the functions are small and re-entrant.
  • Avoid the use of the ?: operator. The operator makes the code more difficult to read. Add comments explaining it if it is used.
  • Place #include directives at the beginning of a file. This makes it easier to know what files are included. When tracing dependencies, this information is helpful.
  • Use #define instead of numeric literals. This allows the reader or maintainer to know what the number represents (RADIUS_OF_EARTH_IN_KM, instead of 6356.91). It also allows the number to be changed in one place if a change is necessitated.
  • Do not make assumptions about the sizes of dependent types, such as int. The size is often platform and compiler dependent.
  • Avoid using reserved words or library function names as variable names. This could lead to serious errors. Also, avoid using names that are close to standard names, to improve the readability of the source code.

See also SWE-157 - Protect Against Unauthorized Access

See also PAT-032 - Considerations When Using Interrupts

3.7 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

3.8 Center Process Asset Libraries

SPAN - Software Processes Across NASA
SPAN contains links to Center-managed Process Asset Libraries. Consult these Process Asset Libraries (PALs) for Center-specific guidance including processes, forms, checklists, training, and templates related to Software Development. See SPAN in the Software Engineering Community of NEN. Available to NASA only. https://nen.nasa.gov/web/software/wiki  197

See the following link(s) in SPAN for process assets from contributing Centers (NASA Only). 

4. Small Projects

No additional guidance is available for small projects. The community of practice is encouraged to submit guidance candidates for this paragraph.

5. Resources

5.1 References


5.2 Tools

Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN). 

NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN. 

The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool.  The purpose is to provide examples of tools being used across the Agency and to help projects and centers decide what tools to consider.

6. Lessons Learned

6.1 NASA Lessons Learned

No Lessons Learned have currently been identified for this requirement.

6.2 Other Lessons Learned

No other Lessons Learned have currently been identified for this requirement.

7. Software Assurance

SWE-060 - Coding Software
4.4.2 The project manager shall implement the software design into software code.

7.1 Tasking for Software Assurance

From NASA-STD-8739.8B

1. Confirm that the software code implements the software designs. 

2. Confirm that the code does not contain functionality not defined in the design or requirements.

7.2 Software Assurance Products

  • None at this time.


    Objective Evidence

    • Software design analysis results
    • Software code quality analysis results
    • Software requirements analysis results
    • Static code analysis results
    • Code coverage metric data

    Objective evidence is an unbiased, documented fact showing that an activity was confirmed or performed by the software assurance/safety person(s). The evidence for confirmation of the activity can take any number of different forms, depending on the activity in the task. Examples are:

    • Observations, findings, issues, or risks found by the SA/safety person, which may be expressed in an audit or checklist record, email, memo, or entry in a tracking system (e.g., Risk Log).
    • Meeting minutes with attendance lists or SA meeting notes or assessments of the activities and recorded in the project repository.
    • Status report, email or memo containing statements that confirmation has been performed with date (a checklist of confirmations could be used to record when each confirmation has been done!).
    • Signatures on SA reviewed or witnessed products or activities, or
    • Status report, email or memo containing a short summary of information gained by performing the activity. Some examples of using a “short summary” as objective evidence of a confirmation are:
      • To confirm that: “IV&V Program Execution exists”, the summary might be: IV&V Plan is in draft state. It is expected to be complete by (some date).
      • To confirm that: “Traceability between software requirements and hazards with SW contributions exists”, the summary might be x% of the hazards with software contributions are traced to the requirements.
    • The specific products listed in the Introduction of 8.16 are also objective evidence as well as the examples listed above.

7.3 Metrics

  • Code coverage data: % of code that has been executed during testing.
  •  # of planned units for implementation vs. # of units implemented and unit tested.

See also Topic 8.18 - SA Suggested Metrics

7.4 Guidance

Confirm that all of the implemented software traces back to some part of the design and that all of the design has been implemented in the software. The bi-directional trace matrices can be used to do this checking. Often the tools that are used for traceability can help with these checks.

Also, confirm that all of the design traces back to a documented requirement. If there are parts of the design that have been implemented but are not traced back to any requirement, either the design needs to change, or a requirement needs to be added to capture the need for that feature. Many traceability tools can be run to identify parent requirements without any children (no corresponding design) or orphan children (design) without any parent (requirement).

7.4.1 Checklist Of C Programming Practices For Safety

Derived from NUREG/CR-6463 382, appendix B. See also the generic list of programming practices for safety, updated 10/21/2020 by the NASA Software Safety Guidebook Team.

  • Limit the number and size of parameters passed to routines. Too many parameters affect the readability and testability of the routine. Large structures or arrays, if passed by value, can overflow the stack, causing unpredictable results. Always pass large elements via pointer.
  • Use recursive functions with great care. Stack overflows are common. Verify that there is a finite recursion!
  • Utilize functions for boundary checking. Since C does not do this automatically, create routines that perform the same function. Accessing arrays or strings out-of-bounds is a common problem with unpredictable, and often major, consequences.
  • Do not use the gets function or related functions. These do not have adequate limit checks. Writing your own routine allows better error handling to be included.
  • Use memmove, not memcpy. Memcpy has problems if the memory regions overlap.
  • Create wrappers for built-in functions to include error checking.
  • If “if…else if…else if…” gets beyond two levels, use a switch…case. This increases readability.
  • When using the switch…case, always explicitly define default. Do not omit the break.
  • Initialize local (automatic) variables before first use. They contain garbage before explicit initialization. Pay special attention to pointers since they can have the most dangerous effects.
  • Initialize global variables in a separate routine. This ensures that variables are properly set at a warm reboot.
  • Check pointers to make sure they don’t reference variables outside of scope. Once a variable goes out of scope, what it contains is undefined.
  • Only use setjmp and longjmp for exception handling. These commands jump outside function boundaries and deviate from normal control flow.
  • Avoid pointers to functions. These pointers cannot be initialized and may point to non-executable code. If they must be used, document the usage.
  • Prototype all functions and procedures! This allows the compiler to catch errors rather than having to debug them at run-time. Also, when possible, use a tool or other method to verify that the prototype matches the function.
  • Minimize interface ambiguities, such as using expressions as parameters to subroutines or changing the order of arguments between similar functions. Also, justify (and document) any use of functions with an indefinite number of arguments. These functions cannot be checked by the compiler and are difficult to test.
  • Do not use ++ or -- operators on parameters being passed to subroutines; doing so can create unexpected side effects.
  • Use bitmasks instead of bit fields, which are implementation dependent.
  • Always explicitly cast variables. This enforces stronger typing. Casting pointers from one type to another should be justified and documented.
  • Avoid the use of typedefs for unsized arrays. This feature is poorly supported and error-prone.
  • Avoid mixing signed and unsigned variables. Use explicit casts when mixing is necessary.
  • Don’t compare floating-point numbers to 0 or expect exact equality. Allow some small differences due to the precision of floating-point arithmetic.
  • Do not compare different data types, such as comparing a floating-point number to an integer.
  • Enable and read compiler warnings. If an option, have warnings issued as errors. A warning may indicate that the deviation is fine, but it may also indicate a subtle error.
  • Be cautious if using standard library functions in a multitasking environment. Library functions may not be re-entrant and could lead to unspecified behavior.
  • Do not call functions within interrupt service routines. If it is necessary to do so, make sure the functions are small and re-entrant.
  • Avoid the use of the ?: operator. The operator makes the code more difficult to read. Add comments explaining it if it is used.
  • Place #include directives at the beginning of a file. This makes it easier to know what files are included. When tracing dependencies, this information is helpful.
  • Use #define instead of numeric literals. This allows the reader or maintainer to know what the number represents (RADIUS_OF_EARTH_IN_KM, instead of 6356.91). It also allows the number to be changed in one place if a change is necessitated.
  • Do not make assumptions about the sizes of dependent types, such as int. The size is often platform and compiler dependent.
  • Avoid using reserved words or library function names as variable names. This could lead to serious errors. Also, avoid using names that are close to standard names, to improve the readability of the source code.

7.5 Additional Guidance

Additional guidance related to this requirement may be found in the following materials in this Handbook:

