
9.14 Resource Usage Measurement

1. Principle

Incorporate timely visibility into the use of computing resources into the software design.

1.1 Rationale

Measurement of resource usage enables determination and validation of operating margins throughout the life cycle of the project; eroding margins can be indicators of potential error and fault conditions. Measurement also makes the usage of critical computing resources observable, thereby maximizing the prospects for safe and reliable operation of the software.

2. Examples and Discussion

Examples of computing resources to be measured include: real time tasks, background tasks, throughput, memory utilization, bus utilization, stack size and headroom, cycle slip statistics, and fragmentation. Each project needs to perform an assessment and determine which resources should be measured.
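Stack size and headroom, one of the resources listed above, is commonly measured with the "painting" technique: the stack region is pre-filled with a known pattern at initialization, and the high-water mark is read off later by checking how much of the pattern survives. The sketch below simulates the idea in Python (flight code would typically do this in C against a real task stack); the stack size, paint value, and task usage are illustrative assumptions, not figures from any Center standard.

```python
# Sketch of the stack "painting" technique for measuring headroom.

STACK_SIZE = 256          # hypothetical task stack size, in bytes
PAINT = 0xA5              # fill pattern written at initialization

def paint_stack():
    """Pre-fill the simulated stack region with the paint pattern."""
    return bytearray([PAINT] * STACK_SIZE)

def run_task(stack, usage):
    """Simulate a task that dirties `usage` bytes of its stack."""
    for i in range(usage):
        stack[i] = 0x00   # stack grows from index 0 in this simulation

def headroom(stack):
    """Count trailing paint bytes still intact = worst-case remaining margin."""
    free = 0
    for b in reversed(stack):
        if b != PAINT:
            break
        free += 1
    return free

stack = paint_stack()
run_task(stack, 180)
print(headroom(stack))    # 76 bytes of margin remain after the worst case
```

Because the paint survives until a byte is actually written, the measurement captures the worst-case depth reached at any time since initialization, not just the current stack pointer.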

The Ames Research Center (ARC) standard describes this as part of a self-test capability that is intentionally built into the software and planned from day one. For example, functional requirements associated with the capability may be included at Preliminary Design Review (PDR). The requirements are implemented incrementally throughout the life cycle as needed to ensure maximum return on investment, with the full feature set being available to the test team during the Verification and Validation (V&V) phase.
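One way to support this incremental build-out is to register resource checks as they are implemented, so a single self-test entry point runs whatever subset exists at the current phase and the full set by V&V. The sketch below is a hypothetical illustration; the registry, check names, and stubbed measurements are assumptions, not part of the ARC standard.

```python
# Self-test checks register themselves as they are implemented; the
# report entry point runs whatever is available at the current phase.

SELF_TEST_CHECKS = {}

def self_test(name):
    """Decorator registering a resource-usage check under `name`."""
    def register(fn):
        SELF_TEST_CHECKS[name] = fn
        return fn
    return register

@self_test("cpu_utilization")
def check_cpu():
    # Stub measurement; a real check would sample the scheduler/timers.
    return {"used_pct": 62.0, "budget_pct": 70.0}

@self_test("heap_headroom")
def check_heap():
    # Stub measurement; a real check would query the allocator.
    return {"free_bytes": 4096, "budget_bytes": 2048}

def run_self_test():
    """Run every check registered so far and collect the results."""
    return {name: fn() for name, fn in SELF_TEST_CHECKS.items()}

report = run_self_test()
```

Adding a new requirement later in the life cycle then amounts to registering one more check, without touching the reporting path the test team already relies on.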

The capability presents a challenge at deployment, because a decision must be made to either extract the feature just before deployment or to deploy the software with it. None of the Center standards offer a suggestion here, except to indicate that (1) the project must ensure that inadvertent activation of the feature during operations does not introduce harmful effects, and (2) thorough regression testing ought to be performed if the decision is to extract the feature. See the 9.06 Dead Code Exclusion design principle for related discussion.
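If the decision is to deploy with the feature in place, one common guard against inadvertent activation is to require an explicit arming step before any self-test command is honored. The sketch below illustrates that pattern; the class, arming key, and interface are hypothetical, not drawn from any Center standard.

```python
# Self-test commands are inert unless the gate is deliberately armed.

class SelfTestGate:
    ARM_KEY = 0x5E1F      # hypothetical arming constant

    def __init__(self):
        self._armed = False

    def arm(self, key):
        # Arming requires a specific key value, so a stray or corrupted
        # command is very unlikely to enable the feature by accident.
        self._armed = (key == self.ARM_KEY)
        return self._armed

    def run(self, check):
        if not self._armed:
            return None   # inadvertent invocation has no effect
        return check()

gate = SelfTestGate()
assert gate.run(lambda: "report") is None       # not armed: inert
gate.arm(SelfTestGate.ARM_KEY)
assert gate.run(lambda: "report") == "report"   # armed: check runs
```

Making the unarmed path a no-op addresses concern (1) above directly, and leaving the feature in the flight build avoids the regression-testing burden of extracting it.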

3. Inputs


3.1 ARC

  • 3.7.2.5.3 Measurement of Constrained Resources - Software shall be designed to provide easy and timely visibility into the use of computing resources during testing and operations.


Note: Examples of resources to measure are: real time tasks, background tasks, throughput, memory, bus utilization, stack size and headroom, cycle slip statistics, fragmentation, memory leaks, and allocation latency. This makes it possible to validate margins and makes the flight software resource usage testable.
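The note above ties measurement to margin validation. A minimal sketch of turning measured usage into a pass/fail margin check follows; the 40% reserve requirement and the sample figures are assumptions for illustration, not values from the ARC standard.

```python
# Convert raw usage measurements into a margin check against a reserve.

REQUIRED_MARGIN = 0.40    # hypothetical project reserve requirement

def margin(used, capacity):
    """Fraction of the resource still unused."""
    return 1.0 - used / capacity

def validate(measurements):
    """Return the resources whose remaining margin is below the reserve."""
    return [name for name, (used, cap) in measurements.items()
            if margin(used, cap) < REQUIRED_MARGIN]

measured = {
    "cpu_cycles_per_frame": (5.2e5, 1.0e6),   # 48% margin: acceptable
    "heap_bytes":           (7.0e4, 1.0e5),   # 30% margin: violation
}
print(validate(measured))   # ['heap_bytes']
```

Running such a check against telemetry from every test campaign makes the margin requirement itself testable, rather than something asserted once at a design review.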

3.2 GSFC

None

3.3 JPL

  • 4.11.6.3 Measurement of Constrained Resources - Software shall be designed to provide easy and timely visibility into the use of computing resources during testing and operations.


Note: Examples of resources to measure are: real time tasks, background tasks, throughput, memory, bus utilization, stack size and headroom, cycle slip statistics, fragmentation, memory leaks, and allocation latency. This makes it possible to validate margins and makes the flight software resource usage testable.

3.4 MSFC

  • 4.12.1.4 Software shall be designed to support performance measurement of defined constrained computing resource or function and provide visibility into whether real-time and background tasks are completed.


Note: Examples of resources to measure are: real time tasks, background tasks, throughput, memory, bus utilization, stack size and headroom, cycle slip statistics, fragmentation, memory leaks, and allocation latency.

Rationale: These key metrics enable determination of operating margins, which can be indicators of potential error and fault conditions. This enables measurement of critical computing resources, thereby maximizing the prospects for safe and reliable operation of the software.

4. Resources

4.1 References

None

5. Lessons Learned

5.1 NASA Lessons Learned

The NASA Lessons Learned database contains the following lessons learned related to resource usage measurement:

  • Science Data Downlink Process Must Address Constraints Stemming from Fixed Deep Space Network (DSN) Assets. Lesson Learned 1483: "Given their minimal ability to mitigate DSN resource limitations, flight projects must consider mission design and mission operations improvements that may help to achieve Level 1 requirements, such as the 9 measures effectively employed by the Spitzer project."
  • Manage Reaction Wheels as a Limited Spacecraft Resource (2002). Lesson Learned 1598: "After two and one-half years of operational use, a bearing cage instability trend developed in a bearing in one of three Cassini reaction wheels. JPL responded to the indication of life-limiting wear through steps to manage RWA use, including tracking reaction wheel assembly (RWA) performance, limiting RWA usage, using a software tool to manage reaction wheel biasing events, and providing a reaction wheel drag torque estimator to identify anomalous bearing drag conditions."
