Similar processes exist to guide software development programs. For example, the National Aeronautics and Space Administration has for many years relied on independent verification and validation (IV&V) for safety-critical software applications, and the Software Engineering Institute and others have defined guidelines for the verification and validation of large software applications. These processes, however, do not effectively address the complexities inherent in this class of information-based programs.

Multiple versions of what constitutes a program life cycle can be found in the literature; here the committee describes a generic model with the following phases:

  • Identification of needs. Analyze the current environment and solutions or processes currently in use; identify capability gaps or unmet needs.

  • Research and technology development. Develop potential solutions to meet the identified needs.

  • Systems development and demonstration. Develop and demonstrate the integrated system.

  • Operational deployment. Complete production and full deployment of the program.

  • Operational monitoring. Provide for ongoing monitoring to ensure that the deployed capability remains both effective and acceptable.

  • Systems evolution. Institute upgrades to improve or enhance system functionality.

An effective policy regime should address each of the above phases in turn as indicated below:

  • Identification of needs. During this phase, questions 1 and 2 from the summary of framework criteria for evaluating effectiveness in Section 2.5.1 of Chapter 2 should be addressed—that is, the research should proceed only if a clear purpose and a rational basis are established and documented. Measures of effectiveness (benefit) and measures of performance (risk) should be drafted during this phase.

  • Research and technology development. During this phase, testing should occur in a controlled laboratory setting—the equivalent of animal testing in the drug development process or developmental test and evaluation (DT&E) in traditional technology development programs. A key issue in testing information-based programs is access to data sets that adequately simulate real-world data so that algorithm efficacy can be evaluated. Ideally, standardized (and anonymized) data sets should be generated and maintained to support this phase of testing; the data sets maintained by the National Institute of Standards and Technology for evaluating algorithms such as those for biometric identification and text retrieval are examples of such resources. A brief illustrative sketch of this kind of evaluation follows.
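To make the preceding point concrete, what follows is a minimal sketch, in Python, of how algorithm efficacy might be measured against a standardized, anonymized test data set during laboratory-phase evaluation. The record format, the scoring threshold, and the flag_record routine are hypothetical stand-ins for whatever algorithm is under test; they do not correspond to any actual program or NIST data set.

    # Sketch of a laboratory-phase efficacy evaluation (all names and
    # data are hypothetical illustrations, not actual program artifacts).

    def flag_record(record, threshold=0.8):
        # Stand-in for the algorithm under test: flag a record as a
        # potential item of interest when its score meets the threshold.
        return record["score"] >= threshold

    def evaluate(test_set):
        # Compare the algorithm's flags against ground-truth labels in the
        # anonymized test set to compute a draft measure of effectiveness
        # (true positive rate) and of performance risk (false positive rate).
        tp = fp = fn = tn = 0
        for record in test_set:
            flagged = flag_record(record)
            if flagged and record["is_of_interest"]:
                tp += 1
            elif flagged:
                fp += 1
            elif record["is_of_interest"]:
                fn += 1
            else:
                tn += 1
        return {
            "true_positive_rate": tp / (tp + fn) if (tp + fn) else 0.0,
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        }

    if __name__ == "__main__":
        # A tiny stand-in for a standardized, anonymized data set that
        # simulates real-world records with known ground truth.
        test_set = [
            {"score": 0.95, "is_of_interest": True},
            {"score": 0.85, "is_of_interest": False},
            {"score": 0.70, "is_of_interest": True},
            {"score": 0.40, "is_of_interest": False},
        ]
        print(evaluate(test_set))

The true- and false-positive rates computed here correspond to the draft measures of effectiveness and performance called for in the needs-identification phase, allowing the laboratory results to be judged against criteria established before development began.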


