1. Requirements
5.4.2 The project manager shall establish, record, maintain, report and utilize software management and technical measurements.
1.1 Notes
IEEE Standard Adoption of ISO/IEC 15939, Systems and Software Engineering - Measurement Process, is a good generic model for developing a software measurement process for a project or Center. This international standard contains a set of activities and tasks that comprise a measurement process that meets the specific needs of organizations, enterprises, and projects. The NASA Chief Engineer may identify and document additional Center measurement objectives, software measurements, collection procedures and guidelines, and analysis procedures for selected software projects and software development organizations. This includes collecting software technical measurement data from the project’s software supplier(s).
1.2 Applicability Across Classes
Classes F and G are labeled with “X (not OTS)”. This means that this requirement does not apply to software that is considered to be “Off the Shelf.”
Applicability table: Classes A, B, C, CSC, D, DSC, E, F, G, H. Key: Applicable | Not Applicable.
2. Rationale
Numerous years of experience on many NASA projects demonstrate the following three key reasons for software measurement activities 329:
- To understand and model software engineering processes and products.
- To aid in assessing the status of software projects.
- To guide improvements in software engineering processes.
3. Guidance
It is often difficult to accurately understand the status of a project or determine how well development processes are working without some measures of current performance and a baseline for comparison purposes. Metrics support management and control of projects and provide greater insight into the way the organization is operating.
There are four major reasons for measuring processes, products, and resources. They are to Characterize, Evaluate, Predict, and Improve:
- Characterizations are performed to gain understanding of processes, products, resources, and environments, and to establish baselines for comparisons with future efforts.
- Evaluation is used to determine status with respect to plans. Measures are the signals that provide knowledge and awareness when projects and processes are drifting off track, so that they are brought back under control. Evaluations are also used to assess achievement of quality goals and to assess impacts of technology and process improvements on products and processes.
- Predictions are made so that planning is performed more proactively. Measuring for prediction involves gaining an understanding of the relationships among processes and products so that the values observed for some attributes are used to predict others. This is accomplished because of a desire to establish achievable goals for cost, schedule, and quality so that appropriate resources are applied and managed. Projections and estimates based on historical data help analyze risks and support design and cost tradeoffs.
- An organization measures to improve when quantitative data and information is gathered to help identify inefficiencies and opportunities for improving product quality and process performance. Measures help to plan and track improvement efforts. Measures of current performance give baselines to compare against, so that an organization can determine if the improvement actions are working as intended. Good measures also help to communicate goals and convey reasons for improving.
Measurement is an important component of the project and product development effort and is applied to all facets of the software engineering disciplines. Before a process can be efficiently managed and controlled, it has to be measured.
Software measurement programs are established with this rationale in mind, to meet objectives at multiple levels, and are structured to satisfy particular organization, project, program, and Mission Directorate needs. The data gained from measurement programs assist in managing projects, assuring quality, and improving overall software engineering practices. In general, measurement programs at the organizational, project, and Mission Directorate/Mission Support Office levels are designed to achieve specific time-based goals, such as the following:
- To provide realistic data for progress tracking.
- To assess the software's functionality when compared to the requirements and the user's needs.
- To provide indicators of software quality that provide confidence in the final product.
- To assess the volatility of the requirements throughout the life cycle.
- To provide indicators of the software's characteristics and performance when compared to the requirements and the user's needs.
- To improve future planning and cost estimation.
- To provide baseline information for future process improvement activities.
At the organizational level it can be said that "We typically examine high-level strategic goals like being the low cost provider, maintaining a high level of customer satisfaction, or meeting projected resource allocations. At the project level, we typically look at goals that emphasize project management and control issues or project level requirements and objectives. These goals typically reflect the project success factors like on time delivery, finishing the project within budget or delivering software with the required level of quality or performance. At the specific task level, we consider goals that emphasize task success factors. Many times these are expressed in terms of the entry and exit criteria for the task." 355
With these goals in mind, projects are to establish specific measurement objectives for their particular activities. Measurement objectives document the reasons for doing software measurement and the accompanying analysis. They also provide a view into the types of actions or decisions that may be made based on results of analysis. Before implementing a measurement program or choosing measures for a project, it is important to know how the measures are going to be used. Establishing measurement objectives in the early planning stages helps ensure organizational and project information needs are being met and that the measures that are selected for collection will be useful.
Projects are to select and document specific measures. The measurement areas chosen to provide specific measures are closely tied to the NASA measurement objectives listed above and in NPR 7150.2.
Software technical measurements
Given below is a set of candidate management indicators that might be used on a software development project (a computation sketch for the first of these, requirements volatility, follows the list):
- Requirements volatility: total number of requirements and requirement changes over time.
- Bidirectional traceability: percentage complete of System level requirements to Software Requirements, Software Requirements to Design, Design to Code, Software Requirements to Test Procedures.
- Software size: planned and actual number of units, lines of code, or other size measurement over time.
- Software staffing: planned and actual staffing levels over time.
- Software complexity: complexity of each software unit.
- Software progress: planned and actual number of software units designed, implemented, unit tested, and integrated over time; code developed.
- Problem/change report status: total number, number closed, number opened in the current reporting period, age, severity.
- Software test coverage: a measure used to describe the degree to which the source code of a project is tested by a particular test suite.
- Build release content: planned and actual number of software units released in each build.
- Build release volatility: planned and actual number of software requirements implemented in each build.
- Computer hardware and data resource utilization: planned and actual use of computer hardware resources (such as processor capacity, memory capacity, input/output device capacity, auxiliary storage device capacity, and communications/network equipment capacity, bus traffic, partition allocation) over time.
- Milestone performance: planned and actual dates of key project milestones.
- Scrap/rework: amount of resources expended to replace or revise software products after they are placed under any level of configuration control above the individual author/developer level.
- Effect of reuse: a breakout of each of the indicators above for reused versus new software products.
- Cost performance: identifies how efficiently the project team has turned costs into progress to date.
- Budgeted cost of work performed: identifies the cumulative work that has been delivered to date.
- Audit performance: indicators for whether the project is following defined processes, number of audits completed, audit findings, number of audit findings opened and closed.
- Risk Mitigation: number of identified software risks, risk mitigation status.
- Hazard analysis: number of hazard analyses completed, hazards mitigation steps addressed in software requirements and design, number of mitigation steps tested.
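As an illustration of the requirements volatility indicator at the top of this list, the following Python sketch computes per-month change counts and a running requirement total from a change log. The record layout, field values, and baseline count are hypothetical placeholders; actual data would come from the project's requirements management tool.

```python
from collections import Counter
from datetime import date

# Hypothetical change log exported from a requirements management tool.
# Each record: (date of change, kind of change). Values are illustrative only.
change_log = [
    (date(2024, 1, 15), "added"),
    (date(2024, 1, 28), "modified"),
    (date(2024, 2, 10), "deleted"),
    (date(2024, 2, 21), "modified"),
    (date(2024, 3, 5),  "added"),
]

baseline_requirement_count = 120  # assumed total requirements at baseline

def volatility_by_month(log, baseline):
    """Return per-month change counts and the cumulative requirement total."""
    changes_per_month = Counter((d.year, d.month) for d, _ in log)
    total = baseline
    report = []
    for (year, month) in sorted(changes_per_month):
        kinds = Counter(k for d, k in log if (d.year, d.month) == (year, month))
        total += kinds["added"] - kinds["deleted"]
        report.append({
            "period": f"{year}-{month:02d}",
            "changes": changes_per_month[(year, month)],
            "total_requirements": total,
        })
    return report

for row in volatility_by_month(change_log, baseline_requirement_count):
    print(row)
```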
Here is an example from the MSFC Organizational Metric Plan. This example is a table showing the metrics, who collects the information, the frequency of the collection, and where the information is reported.
There are specific example measurements listed in the software metrics report (see Metrics) that were chosen based on identified informational needs and from questions shared during several NASA software measurement workshops.
There is some cost to collecting and analyzing software measures, so it is desirable to keep the measurement set to the minimum that will satisfy informational needs. When measures are tied to objectives, a minimum useful set can be easily selected. Defining measurement objectives helps to select and prioritize the candidate measures to be collected. If a measurement isn't tied to an objective, it probably doesn't need to be collected.
Metrics (or indicators) are computed from measures using approved and documented analysis procedures. They are quantifiable indices used to compare software products, processes, or projects or to predict their outcomes. They show trends of increasing or decreasing values, relative only to the previous value of the same metric. They also show containment or breaches of pre-established limits, such as allowable latent defects. Establishing and collecting Center-wide data and analysis results provides information and guidance to Center leaders for evaluating overall Center capabilities and for planning improvement activities and training opportunities for more advanced software process capabilities. The method for developing and recording these metrics is written in the Software Development Plan (see SDP-SMP).
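To make the notions of trend and breach concrete, here is a minimal sketch that compares the latest value of a metric against a pre-established limit and reports its direction relative to the previous value. The metric name, history, and limit are illustrative assumptions, not values prescribed by this Handbook.

```python
def assess_metric(name, history, upper_limit):
    """Report the trend of a metric and whether its latest value breaches a limit.

    history: metric values in chronological order (assumed non-empty).
    upper_limit: pre-established ceiling, e.g., allowable latent defects.
    """
    latest = history[-1]
    previous = history[-2] if len(history) > 1 else None
    if previous is None:
        trend = "no prior value"
    elif latest > previous:
        trend = "increasing"
    elif latest < previous:
        trend = "decreasing"
    else:
        trend = "flat"
    status = "BREACH" if latest > upper_limit else "within limit"
    return f"{name}: latest={latest}, trend={trend}, {status} (limit={upper_limit})"

# Hypothetical latent-defect counts per build and an assumed allowable limit.
print(assess_metric("latent defects", [3, 4, 6], upper_limit=5))
```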
As the project objectives evolve and mature, and as requirements understanding improves and solidifies, software measures can be evaluated and updated if necessary or beneficial. Updated measurement selections, data to be recorded and reported, and recording and reporting procedures are documented during update and maintenance of the Software Development Plan.
Establish Measurements
Measurement objectives document the reasons why software measurement and analysis are performed. Measurement objectives consider the perspectives of both the organization and the project. The sources for measurement objectives may be management, technical, project, product, or process implementation needs, such as:
- Deliver software by the scheduled date.
- Improve cost estimation.
- Complete projects within budget.
- Devote adequate resources to each process area.
Corresponding measurement objectives for these needs might be:
- Measure project progress to ensure it is adequate to achieve completion by the scheduled date.
- Measure the actual project planning parameters against the estimated planning parameters to identify deviations.
- Track the project cost and effort to ensure project completion within budget.
- Measure resources devoted to each process area to ensure they are sufficient.
The measurement objectives a project defines are to be based on their Center's objectives and NASA's objectives. Usually project objectives are focused on informational needs. The project has to provide information for managing and controlling the project. When establishing measurement objectives, ask what questions will be answered with the data, why you are measuring something and what types of decisions will be made with the data. A project may also have additional objectives to provide information to their Center or to NASA. This information may be used for process improvement or for developing the organizational baselines and trends. For example, Goddard Space Flight Center (GSFC) has defined two high-level objectives for its software projects as follows:
- To assure that the effort is on track for delivery of the required functionality on time and within budget.
- To improve software development practices both in responses to immediate customer issues and in support of the organizational process improvement goals.
As can be seen, one of these objectives focuses on the project's ability to manage and control the project with quantitative data and the other objective focuses on organizational process improvement.
In addition, GSFC's Measurement Planning Table Tool lists the following more specific measurement objectives for their projects:
- Ensure schedule progress is within acceptable bounds.
- Ensure project effort and costs remain within acceptable bounds.
- Ensure project issues are identified and resolved in a timely manner.
- Deliver the required functionality.
- Ensure the performance measures are within margins.
- Ensure that the system delivered to operations has no critical or moderate severity errors.
- Minimize the amount of rework due to defects occurring during development.
- Ensure requirements are complete and stable enough to continue work without undue risk.
- Support future process improvement.
When choosing project measures, check to see if your Center has a pre-defined set of measurements that meets the project's objectives. If so, then the specification of measures for the project begins there. Review the measures specified and initially choose those required by your Center. Make sure they are all tied to project objectives or are measures that are required to meet your organization's objectives.
To determine if any additional measures are required or if your Center does not have a pre-defined set of measures, think about the questions that need to be asked to satisfy project objectives. For example, if an objective is to complete on schedule, the following might need to be asked:
- How long is the schedule?
- How much of the schedule has been used and how much is left?
- How much work has been done? How much work remains to be done?
- How long will it take to do the remaining work?
From these questions, determine what needs to be measured to get the answers to key questions. Similarly, think about the questions that need to be answered for each objective and see what measures will provide the answers. If several different measures will provide the answers, choose the measures that are already being collected or those that are easily obtained from tools.
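As a hedged example of turning these questions into measures, the sketch below combines a few basic schedule and progress measures into simple indicators. All inputs are illustrative placeholders; a real project would draw them from its plan and tracking data.

```python
from datetime import date

# Illustrative planning and progress data (placeholders, not real project values).
schedule_start = date(2024, 1, 1)
schedule_end = date(2024, 12, 31)   # How long is the schedule?
today = date(2024, 7, 1)            # How much of the schedule has been used?
work_units_planned = 200            # How much work has to be done?
work_units_completed = 90           # How much work has been done?

schedule_days = (schedule_end - schedule_start).days
days_used = (today - schedule_start).days
schedule_used = days_used / schedule_days
work_done = work_units_completed / work_units_planned

# Simple completion forecast assuming the observed rate of progress continues.
units_per_day = work_units_completed / days_used
days_remaining = (work_units_planned - work_units_completed) / units_per_day

print(f"Schedule consumed: {schedule_used:.0%}, work completed: {work_done:.0%}")
print(f"Estimated days to finish remaining work: {days_remaining:.0f}")
```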
The presentation by Carolyn Seaman, "Software Metrics Selection Presentation", gives a method for choosing project measures and provides a number of examples of measurement charts, with information showing how the charts might be useful for the project. The "Project-Type/Goal/Metric Matrix" 089, developed following a series of NASA software workshops at Headquarters, may also be helpful in choosing the project's measures. This matrix specifies the types of measures a project might want to collect to meet a particular goal, based on project characteristics such as size.
The measurements need to be defined so project personnel collect data items consistently. The measurement definitions are documented in the project Software Management Plan (see SDP-SMP) or Software Metrics Report (see Metrics) along with the measurement objectives. Items to be included as part of a project's measurement collection and storage procedure are:
- A clear description of all data to be provided.
- A clear and precise definition of terms.
- Who is responsible for providing which data.
- When and to whom the data are to be provided.
Some suggestions for specifying measures:
- Be sure the project is going to use the measures.
- Think about how the project will use them. Visualize the way charts look to best communicate information.
- Make sure measures apply to project objectives (or are being provided to meet sponsor or institutional objectives).
- Consider whether suitable measures already exist or whether they can be collected easily. The use of tools that automatically collect needed measures helps ensure consistent, accurate collection.
Issue tracking tools (e.g., JIRA) can be used to track and report measures and provide status reports of all open and closed issues, including their position in the issue tracking life cycle. This can be used as a measure of progress.
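As a sketch of how such a tool might feed a progress measure, the example below queries a JIRA server's REST search endpoint for open and closed issue counts. The base URL, project key, and credentials are hypothetical, and the example assumes the JIRA REST API v2 search resource is available on the server in use.

```python
import requests

JIRA_BASE = "https://jira.example.com"   # hypothetical JIRA server URL
PROJECT = "SWPROJ"                       # hypothetical project key
AUTH = ("username", "api-token")         # placeholder credentials

def issue_count(jql):
    """Return the number of issues matching a JQL query (maxResults=0 returns only the total)."""
    # Assumes JIRA REST API v2 (Server/Data Center); Cloud deployments may differ.
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

open_issues = issue_count(f"project = {PROJECT} AND statusCategory != Done")
closed_issues = issue_count(f"project = {PROJECT} AND statusCategory = Done")
print(f"Open: {open_issues}, Closed: {closed_issues}")
```

Equivalent counts can usually be produced directly with JIRA filters or dashboards; the script form is mainly useful when the numbers feed an automated metrics report.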
Static analysis tools (e.g., Coverity, CodeSonar) can provide measures of software quality and identify software characteristics at the source code level.
Measures such as requirements volatility can be tracked with general-purpose requirements development and management tools (e.g., DOORS), along with verification progress. These tools can also provide reports on software functionality and software verification progress.
Links to the aforementioned tools are found in the Tools listing in the Resources section of this SWE.
Record/Maintain Measurements
Fortunately, there are many resources to help a Center develop data collection and storage procedures for its software measurement program. The NASA "Software Measurement Guidebook" 329 is aimed at helping organizations begin or improve a measurement program. The Software Engineering Institute at Carnegie Mellon University has specific detailed practices within its CMMI-DEV, Version 1.3 157 model for collecting, interpreting, and storing data. The Software Technology Support Center (STSC), at Hill Air Force Base, has its "Software Metrics Capability Evaluation Guide". 336 Westfall's "12 Steps to Useful Software Metrics" 355 is an excellent primer for the development of useful software metrics. Other resources are suggested in the Resources section.
Typical software measurement programs have three components: Data collection, technical support, and analysis and packaging. To properly execute the collection of the data, an agreed-to procedure for this is needed within the Software Development Plan (see SDP-SMP).
Activities within the data collection procedure include:
- A clear description of all data to be provided. This includes a description of each item and its format, a description of the physical or electronic form to be used, the location or address for the data to be sent.
- A clear and precise definition of terms. This includes a description of the project or organization specific criteria, definitions, and a description of how to perform each step in the collection process.
"When we use terms like defect, problem report, size, and even project, other people will interpret these words in their own context with meanings that may differ from our intended definition. These interpretation differences increase when more ambiguous terms like quality, maintainability, and user-friendliness are used." 355
- Who is responsible for providing which data. This may be easily expressed in matrix form, with clarifying notes appended to any particular cell of the matrix.
- When and to whom the data are to be provided. This describes the recipient(s) and management chain for the submission of the data; it also specifies the submission dates, periodic intervals, or special events for the collection and submission of the data.
Data collection by the software development team works better if the team's time spent collecting the data is minimized. Software developers and software testers are the two groups that are responsible for collecting and submitting significant amounts of the required data. If the software developers or testers see this as a non-value added task, data collection will become sporadic and data quality will suffer, thus increasing the effort of the technical support staff that has to validate the collected data. Any step that automates some or all of the data collection will contribute to the efficiency and quality of the data collected.
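One example of such automation is a small script that derives a size measure (approximate non-blank, non-comment source lines) directly from the code base so that developers do not have to report it by hand. The source directory, file extensions, and comment convention below are assumptions to be adapted to the project's languages and layout.

```python
import os

SOURCE_ROOT = "src"          # assumed location of the project's source tree
EXTENSIONS = (".c", ".h")    # assumed file types of interest
COMMENT_PREFIX = "//"        # simplistic: a line counts as a comment only if it starts with this

def count_source_lines(root):
    """Count non-blank, non-comment lines in all matching files under root."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(EXTENSIONS):
                continue
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                for line in f:
                    stripped = line.strip()
                    if stripped and not stripped.startswith(COMMENT_PREFIX):
                        total += 1
    return total

if __name__ == "__main__":
    print(f"Approximate source lines of code: {count_source_lines(SOURCE_ROOT)}")
```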
Report Measurements
If a metric does not have a customer, there is no reason for it to be produced. Software measures are expensive to collect, report, and analyze so if no one is using a metric, producing it is a waste of time and money.
To ensure the measurement analysis results are communicated properly, on time, and to the appropriate people, the project develops reporting procedures for the specified analysis results. It makes these analysis results available on a regular basis (as designed) to the appropriate distribution systems.
Things to consider when developing these reporting methods in order to make them available to others are as follows:
- Stakeholders: Who receives which reports? What level of reporting depth is needed for particular stakeholders? Software developers and software testers (the collectors of the software measures) may only need a brief summary of the results, or maybe just a notification that results are complete and posted online where they may see them. Task and project managers will be interested in the status and trending of the project's metrics (these may be tailored to the level and responsibilities of each manager). Center leaders may need only normalized values that assist in evaluations of Center competence levels overall (which in turn provides direction for future training and process improvement activities).
- Management chain: Does one detailed report go to every level? Will the analysis results be tailored (abbreviated, synopsized) at progressively higher levels in the chain? Are there multiple chains (engineering, projects, safety and mission assurance)?
- Timing and periodicity: Are all results issued at the same frequency? Are weekly, monthly, or running averages reported? Are some results issued only upon request? Are results reported as deltas or cumulative?
- Formats for reports: Are spreadsheet-type tools used (bar graph, pie chart, trending lines)? Are statistical analyses performed? Are hardcopy, power point, or email/online tools used to report information?
- Appropriate level of detail: Are only summary results presented? Are there criteria in place for deciding when to go to a greater level of reporting (trend line deterioration, major process change, mishap investigation)? Who approves data format and release levels?
- Specialized reports: Are there capabilities to run specialized reports for individual stakeholders (safety-related analysis, interface-related defects)? Can reports be run outside of the normal "Timing and periodicity" cycle?
- Correlation with project and organizational goals: Are analysis results directly traceable or relatable to specific project and Center software measurement goals? Who performs the summaries and synopses of the traceability and relatability? Are periodic reviews scheduled and held to assess the applicability of the analysis results to the software improvement objectives and goals?
- Interfaces with organizational and Center-level data repositories: Are analysis results provided regularly to organizational and Center-level database systems? Is access open, or by permission (password) only? Is it project specific, or will only normalized data be made available? Where are the interfaces designed, maintained, and controlled?
The project reports analysis results periodically according to established collection and storage procedures and the reporting procedures developed according to this SWE. These reporting procedures are contained in the Software Development or Management Plan (see SDP-SMP).
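As one illustration of the trend-line report formats discussed above, the sketch below plots monthly opened and closed problem-report counts. The data values are placeholders; a real report would pull them from the project's problem-reporting system.

```python
import matplotlib.pyplot as plt

# Placeholder monthly problem/change report counts (not real project data).
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
opened = [12, 9, 15, 7, 5, 4]
closed = [8, 10, 11, 12, 6, 7]

plt.plot(months, opened, marker="o", label="Opened")
plt.plot(months, closed, marker="s", label="Closed")
plt.xlabel("Reporting period")
plt.ylabel("Problem/change reports")
plt.title("Problem report trend (illustrative data)")
plt.legend()
plt.tight_layout()
plt.savefig("problem_report_trend.png")  # or plt.show() for interactive review
```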
Utilize Measurements
Management and technical measurements should be analyzed and used in managing the software project. Measurements are used to help evaluate how well software development activities are being conducted across multiple development organizations. They show containment or breaches of pre-established limits, such as allowable latent defects. Trends in management metrics support forecasts of future progress, early trouble detection, and realism in current plans. In addition, adjustments to software development processes can be evaluated, once they are quantified and analyzed.
Additional guidance related to software measurements may be found in the following related requirements in this Handbook:
- SWE-091 - Establish and Maintain Measurement Repository
- SWE-092 - Using Measurement Data
- SWE-093 - Analysis of Measurement Data
- SWE-094 - Reporting of Measurement Analysis
NASA-specific software measurement collection information and resources are available in Software Processes Across NASA (SPAN), accessible to NASA users from the SPAN tab in this Handbook.
Projects should check with their Center to ensure they have chosen objectives that support their Center-level objectives.
4. Small Projects
Since small projects are typically constrained by budget and staff, they choose the objectives most important to them to help keep the cost and effort within budget. Some Centers have tailored measurement requirements for small projects. Be sure to check your Center's requirements when choosing objectives and associated measures.
A few key measures to monitor the project's status and meet sponsor and institutional objectives may be sufficient. Data collection timing may be limited in frequency. The use of tools that collect measures automatically helps considerably.
While a small project may propose limited sets and relaxed time intervals for the measures to be collected and recorded, the project still needs to select and record in the Software Development Plan the procedures it will use to collect and store software measurement data. Small projects may consider software development environments or configuration management systems that contain automated collection, tracking, and storage of measurement data. Many projects within NASA have been using the JIRA environment (see section 5.1, Tools) with a variety of plug-ins that help capture measurement data associated with software development.
Data reporting activities may be restricted to measures that support safety and quality assessments and the overall organization's goals for software process improvement activities. Data reporting timing may be limited to annual or major review cycles.
Often small projects have difficulty affording tools that would enable automatic measurement collection. There are several solutions for this issue. Some Centers, such as GSFC, have developed simple tools (often Excel-based) that will produce the measurements automatically. GSFC examples are the staffing tool, the requirements metrics tool, the action item tool, the risk tool, and the problem reporting tool.
Other solutions for small projects involve organizational support. Some organizations support a measurement person on staff to assist the small projects with measurement collection, storage, and analysis, and some Centers use tools that can be shared by small projects.
5. Resources
- (SWEREF-032) Measurement and Analysis for Projects, 580-PC-048-04, NASA Goddard Space Flight Center (GSFC), 2019. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook.
- (SWEREF-089) Project-Type/Goal/Metric Matrix, developed by NASA Software Working Group Metrics Subgroup, 2004. This NASA-specific information and resource is available in Software Processes Across NASA (SPAN), accessible to NASA-users from the SPAN tab in this Handbook.
- (SWEREF-157) CMMI Development Team (2010). CMMI for Development, Version 1.3, CMU/SEI-2010-TR-033, Software Engineering Institute.
- (SWEREF-197) Software Processes Across NASA (SPAN) web site in NEN SPAN is a compendium of Processes, Procedures, Job Aids, Examples and other recommended best practices.
- (SWEREF-252) Mills, Everald E. (1988). Carnegie-Mellon University-Software Engineering Institute. Retrieved on December 2017 from http://www.sei.cmu.edu/reports/88cm012.pdf.
- (SWEREF-329) Software Measurement Guidebook, Technical Report NASA-GB-001-94, Doc ID: 19980228474 (Acquired Nov 14, 1998), Software Engineering Program.
- (SWEREF-336) Software Technology Support Center (STSC) (1995). Software Metrics Capability Evaluation Guide, Hill Air Force Base. Accessed 6/25/2019.
- (SWEREF-355) Westfall, Linda, The Westfall Team (2005). "12 Steps to Useful Software Metrics." Retrieved November 3, 2014, from http://www.win.tue.nl/~wstomv/edu/2ip30/references/Metrics_in_12_steps_paper.pdf
- (SWEREF-367) IEEE Computer Society, Sponsored by the Software Engineering Standards Committee, IEEE Std 982.1™-2005 (Revision of IEEE Std 982.1-1988).
- (SWEREF-378) ISO/IEC/IEEE 15939:2017(en) NASA users can access ISO standards via the NASA Technical Standards System located at https://standards.nasa.gov/. Once logged in, search to get to authorized copies of ISO standards.
- (SWEREF-430) Basili, V.R., et al. (May, 2002). University of Maryland, College Park. Experimental Software Engineering Group (ESEG). Lessons Learned Reference.
- (SWEREF-567) Public Lessons Learned Entry: 1772.
- (SWEREF-572) Public Lessons Learned Entry: 2218.
- (SWEREF-577) Public Lessons Learned Entry: 3556.
- (SWEREF-583) Public Lessons Learned Entry: 1024.
5.1 Tools
Tools to aid in compliance with this SWE, if any, may be found in the Tools Library in the NASA Engineering Network (NEN).
NASA users find this in the Tools Library in the Software Processes Across NASA (SPAN) site of the Software Engineering Community in NEN.
The list is informational only and does not represent an “approved tool list”, nor does it represent an endorsement of any particular tool. The purpose is to provide examples of tools being used across the Agency and to help projects and Centers decide what tools to consider.
6. Lessons Learned
The NASA Lessons Learned database contains lessons learned addressing topics that should be kept in mind when planning a software measurement program. The lessons learned below are applicable when choosing objectives and related measures:
- Know How Your Software Measurement Data Will Be Used. Lesson Number 1772: When software measurement data used to support cost estimates is provided to NASA by a project without an understanding of how NASA will apply the data, discrepancies may produce erroneous cost estimates that disrupt the process of project assessment and approval. Major flight projects should verify how NASA plans to interpret such data and use it in their parametric cost estimating model, and consider duplicating the NASA process using the same or a similar model prior to submission. 567
- Space Shuttle Processing and Operations Workforce. Lesson Number 1024: Operations and processing in accordance with the Shuttle Processing Contract (SPC) have been satisfactory. Nevertheless, lingering concerns include: the danger of not keeping foremost the overarching goal of safety before schedule before cost; the tendency in a success-oriented environment to overlook the need for continued fostering of frank and open discussion; the press of budget inhibiting the maintenance of a well-trained NASA presence on the work floor; and the difficulty of a continued cooperative search for the most meaningful measures of operations and processing effectiveness. 583
A recommendation from Lesson Number 1024 states "NASA and SPC should continue to search for, develop, test, and establish the most meaningful measures of operations and processing effectiveness possible."
- Selection and use of Software Metrics for Software Development Projects. Lesson Learned Number 3556: "The design, development, and sustaining support of Launch Processing System (LPS) application software for the Space Shuttle Program provide the driving event behind this lesson.
"Metrics or measurements provide visibility into a software project's status during all phases of the software development life cycle in order to facilitate an efficient and successful project." The Recommendation states that: "As early as possible in the planning stages of a software project, perform an analysis to determine what measures or metrics will used to identify the 'health' or hindrances (risks) to the project. Because collection and analysis of metrics require additional resources, select measures that are tailored and applicable to the unique characteristics of the software project, and use them only if efficiencies in the project can be realized as a result. The following are examples of useful metrics:
- "The number of software requirement changes (added/deleted/modified) during each phase of the software process (e.g., design, development, testing).
- "The number of errors found during software verification/validation.
- "The number of errors found in delivered software (a.k.a., 'process escapes').
- "Projected versus actual labor hours expended.
- "Projected versus actual lines of code, and the number of function points in delivered software." 577
- Flight Software Engineering Lessons. Lesson Learned Number 2218: "The engineering of flight software (FSW) for a typical NASA/Caltech Jet Propulsion Laboratory (JPL) spacecraft is a major consideration in establishing the total project cost and schedule because every mission requires a significant amount of new software to implement new spacecraft functionality."
The lesson learned Recommendation No. 8 provides this step as well as other steps to mitigate the risk from defects in the FSW development process:
"Use objective measures to monitor FSW development progress and to determine the adequacy of software verification activities. To reliably assess FSW production and quality, these measures include metrics such as the percentage of code, requirements, and defined faults tested, and the percentage of tests passed in both simulation and test bed environments. These measures also identify the number of units where both the allocated requirements and the detailed design have been baselined, where coding has been completed and successfully passed all unit tests in both the simulated and test bed environments, and where they have successfully passed all stress tests." 572
Much of the software development experience gained in the NASA/GSFC Software Engineering Laboratory (SEL) is captured in the reference cited below, "Lessons learned from 25 years of process improvement: The Rise and Fall of the NASA Software Engineering Laboratory." The document describes numerous lessons learned that are applicable to the Agency's software development activities. From its early studies, the SEL was able to build models of the environment and develop profiles for the organization. One of the key lessons in the document is "Lesson 6: The accuracy of the measurement data will always be suspect, but you have to learn to live with it and understand its limitations." 430