contracting team members early and to maintain that engagement over the course of the program.

Walker spoke next about the assessment of technology readiness, noting that Welby had alluded to DARPA’s use of the NASA technology readiness level (TRL) process and structure, and asked him to comment on the NASA TRL model in general. Welby said that the TRL scale was not a tool that DARPA used internally when discussing the status of programs; rather, it was used to communicate with external users, particularly as technology moved toward the transition stages. He continued by saying that TRL categorizations tended to gloss over many of the details specific to a particular technical solution or approach, so TRLs did not really help DARPA assess investments or make decisions about program milestones, decision points, or changes. However, he noted that TRLs provided an effective shorthand for communicating to customers what they should expect when DARPA finished an effort. DARPA tried to avoid delivering unexpected technology to customers, he said; communication is critical to making sure that end users understand what they will receive from a DARPA effort.

Walker continued this line of questioning by asking Welby to elaborate on the technology risk management processes that DARPA used. Welby responded that, because of the broad range of work involved, it was hard to point to a single example. Across the agency, however, an attempt had been made to define, early in each program, a set of quantitative metrics that program managers could use to assess whether a project was proceeding at the expected pace. For an aircraft effort, the metric might be progress in the understanding needed for modeling or progress toward a flight test. For other efforts, it might be the results of carefully designed laboratory measurements taken over a period of time. Those individual metrics, which are intended to be quantifiable, discrete items, form the basis of the contract between a program manager and the office director. DARPA calls them go/no-go metrics, but they are not hard, fixed criteria; rather, they form the structure within which management and an individual program manager can discuss the progress being made. As DARPA charts its way through unknown technological territory, milestones along the way help it understand how much progress has been made. Welby said that these milestones should be negotiated program by program at the outset of each program.

Walker then asked about relationships with small companies. Recalling that Welby had discussed STTR, SBIR, and other formalized federal processes, he asked Welby to elaborate on any other mechanisms used by DARPA. Welby responded that those processes were unique to the small business environment. He did not expand on that thought but said that a more interesting issue was one of program scale. A new program tends to start with an idea: it takes shape when a program manager comes to management with a new concept or when a potential recruit comes to DARPA with a new idea. In either case, early in the process, management typically looks for evidence that the idea has merit.

Almost all of DARPA’s projects—whether they eventually grow to be programs on the scale of millions, tens of millions, or hundreds of millions of dollars—start with small, proof-of-concept efforts, typically in the $250,000 to $500,000 range, that aim to show whether the idea embodies a new capability. If the idea cannot pass that initial test, it is probably not worth pursuing a full program based on it. Those small efforts are one


