5

Assessing Impact

Measuring the impact of R&D activities is the most subjective aspect of an assessment. An insightful definition of impact was posed by William Banholzer in his presentation at the National Research Council's workshop on best practices in assessment of research and development organizations: "What would not have happened if you did not exist, and how much would society have missed?"1

An assessment of impact may be designed by considering the following questions:

1. Does a survey of program outcomes for the time period being assessed (near term, midterm, and far term) indicate expected levels of success in the context of the assigned program objectives?
   a. Was the stakeholder, sponsor, or customer satisfied with the output delivered?
   b. Did the outcome advance the field significantly?
   c. Was there a sustainable advantage associated with the outcome?
   d. Was there external recognition of the outcome as fundamentally differentiating in the field?
2. Does a survey of programs launched within the relevant time period indicate an ability to consistently lead in defining the next steps required to continue to advance the field within the scope of the mission? For example, what is the track record of the organization in providing forecasts to the sponsor, stakeholder, or customer?
3. Has the organization consistently identified foundational discontinuities in the trajectory of the field of interest, opening unanticipated new fields of study and opportunities?

Providing answers to these questions has been challenging for most organizations. The answers are best sought primarily through analysis of completed R&D, sometimes by examining the recent past and at other times by conducting retrospective analyses of more distant history. R&D organizations are part of a system or process that leads to products that an organization's management, stakeholders, and customers will ultimately use as the basis for judgments about the worth of the R&D.
However, the links between the R&D and the final impact, perhaps well understood by those performing the R&D, are often not understood by some stakeholders and customers, who may not be technical experts. It is the task of the R&D organization to make the case for those links between R&D and impact, and various strategies have been used. For convenience, a single word, transition, is used here to characterize the process and the consequence of delivering the product of any accomplishment of the R&D organization to any one of its stakeholders and customers.

For basic research, the claim is often made that the distance between the research and the end-item product is so great that one should judge the research as an entity on its own. To identify transitions from basic research, measures are collected on papers published, citations by others, invitations to speak about the work, patents received, and awards conferred. These metrics are useful for assessing the quality of R&D, as discussed in Chapter 4, but they are often of little interest to those stakeholders and customers whom the organization wishes to satisfy. Organizations that conduct R&D recognize this, and so their reports to stakeholders frequently include anecdotal accounts of how research funded many years earlier has found its way into application today.

In many R&D organizations transitions from basic to applied research are frequent and are often the subject of management decisions. Within an R&D organization additional indicators for examining transitions from basic research are available, but they are less frequently tracked than the obvious outputs in papers and technical talks. Some related questions are the following: Has the technical advance in a particular project opened the door for more applied work? Does the applied work justify additional funding and possibly new hires? These questions may apply whether the basic work was done in-house or extramurally. They are regularly asked and decisions are made, but organized records of such transitions are not commonly compiled or recognized as appropriate indicators for judging the basic work.

With respect to assessing the impact of applied research or product development, the situation becomes far more complex. Industry will appropriately focus on its bottom line, and predicted return on investment for the R&D can be calibrated against actual sales. Feedback from both failures and successes may be communicated to stakeholders and used to modify future investments. Government organizations rarely have such a direct metric and must search for more information and a structure in order to communicate to their myriad stakeholders.

1. National Research Council, 2012. Best Practices in Assessment of Research and Development Organizations: Summary of a Workshop. The National Academies Press, Washington, D.C., p. 10.
Four general types of R&D organizations are identified above in Chapter 2: mission-specific organizations, industrial and contract organizations, product-driven organizations (e.g., national laboratories), and universities. The question of impact is different for each type of organization.

Mission-specific organizations in the government and industrial organizations with clearly defined missions are generally considered the underpinning of an organization that includes centers for development, testing and evaluation, and maintenance. Each organization includes management processes designed for transitioning the product of its efforts to the next stage of development and/or application. In many instances these processes include ensuring that adequate resources are available so that a transition can indeed occur. Formal agreements often ensure handoffs of responsibility upon completion of the organization's contributions. These processes offer excellent opportunities for examining the recent past as well as for aiding in the scholarly task of examining the more distant past.

In the NIST laboratories, for example, there are frequently tangible consequences of the R&D work that can be monitored, including standard reference materials sold, data used, and standards developed and promulgated by standards-making bodies. In addition, NIST participates, as do other government laboratories, in cooperative research and development agreements (CRADAs) that may be seen as a potential database of relevant transitions. Within such CRADAs, the R&D organization and its partners (other government organizations, universities, and industries) carry out collaborative efforts designed explicitly to transition the results of more basic research toward application. The challenge with respect to providing information supporting assessments is to identify these transitions in a manner that reveals the implications for future decisions and to make the data accessible and comprehensive enough to allow comparisons over time and within the organization.

For product-driven national laboratories, the impact of R&D activities varies more in character. An organization typically has carved out one or more core mission spaces, and impact there is measured by the degree to which those missions are being carried out. In emerging mission areas, impact is measured by the growth of external investment, and in future mission areas impact is measured by the successful establishment of the workforce and competencies that will be needed to address those areas.

In its simplest form, an industrial organization could measure its impact in terms of return on investment. This form of assessment is a best practice, but not a universal one, and involves caveats: many things affect earnings, such as price, volume, and cost of the existing product, and the time lag between investment and earnings can be as long as a decade.2

For academic research groups, a common best practice is to assess the quality of the organization by benchmarking it against other laboratories generally recognized to be successful. This is a qualitative but widespread practice.

TIMESCALES OF IMPACT

In attempts to project the future impact of current or proposed programs, the R&D organization is hampered by the fact that it may take many years or even decades before the full impacts of current programs are realized. When that eventually does come to pass, there have usually been so many different organizations involved in developing, engineering, producing, and fielding the end item that its identity with the research organization is often lost.
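The caveat noted above about the time lag between R&D investment and earnings can be made concrete with a minimal discounted-cash-flow sketch. All figures, the 7 percent discount rate, and the function name below are hypothetical, chosen only for illustration; they are not drawn from this chapter.

```python
# Hypothetical sketch: how a time lag between an R&D investment and the
# resulting earnings erodes a discounted return-on-investment figure.

def discounted_roi(investment, annual_earnings, lag_years, horizon_years, rate):
    """Net present value of a constant earnings stream (which begins after
    `lag_years`) divided by the up-front R&D investment."""
    npv = sum(
        annual_earnings / (1 + rate) ** year
        for year in range(lag_years, lag_years + horizon_years)
    )
    return npv / investment

# A $10M program earning $3M/yr for 10 years, discounted at 7 percent:
immediate = discounted_roi(10e6, 3e6, lag_years=0, horizon_years=10, rate=0.07)
delayed = discounted_roi(10e6, 3e6, lag_years=10, horizon_years=10, rate=0.07)

print(f"ROI with no lag:      {immediate:.2f}")
print(f"ROI with 10-year lag: {delayed:.2f}")
```

With these illustrative numbers, a decade-long lag roughly halves the measured return, even though the nominal earnings are identical, which is one reason a raw ROI figure can understate (or misattribute) the impact of long-gestation research.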
Regardless of how the story is formulated, the stakeholders' confidence in the organization and its management will be bolstered by demonstration that the decision making and processes of the present are comparable to, or better than, those of the past that led to measurable impacts. To tell this story properly, many organizations have had recourse to looking backward and tracing the consequences of R&D events long past.

An example of this approach for learning about impact is Project Hindsight.3 Carried out by the Department of Defense in the mid- to late 1960s, Project Hindsight was a study of the development of 22 different weapons systems drawn from across the military services. It involved more than 200 personnel over a period of approximately 6 years. For years afterward the observations and conclusions of Project Hindsight guided military R&D planning and decision making. In 2004, recognizing that much had changed in the intervening years, the U.S. Army commissioned a new study, Project Hindsight Revisited.4 The Air Force Research Laboratory (AFRL) identified research transitions over a 50-year period at the AFRL,5 and Berry and Loeb identified breakthrough Air Force capabilities spawned by basic research.6 Coffey and coauthors describe methods and challenges in tracing the development of selected technologies from basic research through significant exploitation of the research.7

The Department of Energy (DOE) utilized a similar retrospective analysis, with the assistance of the NRC, examining the impacts on energy-producing and energy-using industries of R&D programs executed by the DOE laboratories over the period 1978-2000. The report summarizing the findings of the assessment makes the case in economic terms for a return on investment that by itself could justify funding the research, while recognizing that societal impacts are far more difficult to measure and are not readily quantifiable.8

A prime example of after-the-fact analysis of impact is the set of economic impact analyses performed by NIST. Stimulated in part by the concerns of many in Congress that a program such as the NIST Advanced Technology Program (ATP) was inappropriate for government and that the R&D carried out under its aegis would be better left to industry, NIST began the ATP in the late 1980s with a coordinated plan to measure economic impact. Using outside expertise and a well-articulated process, NIST applied this impact analysis to all funded activities and made the record available to all interested parties. A summary of the first 10 years of ATP funding, published in 2003, provides methodologies and results gained from their application to many industrial sectors. NIST has gone on to apply such organized studies to many of its laboratory-based efforts, and the historical record of accomplishment provides a strong justification for future work in the areas whose impacts have been assessed.9

2. W. Banholzer and L. Vosejpka, 2011. Risk taking and effective R&D management. Annual Review of Chemical and Biomolecular Engineering 2:8.1-8.16.
3. Office of the Director of Defense Research and Engineering (DDRE), 1969. Project Hindsight: Final Report. Office of the DDRE, Washington, D.C.
4. J. Lyons, R. Chait, and D. Long, 2006. Critical Technology Events in the Development of Selected Army Weapons Systems: Project Hindsight Revisited. National Defense University, Washington, D.C.
5. Air Force Office of Scientific Research, 2002. AFOSR at 50: Five decades of research that helped change the world. Research Highlights, March/April.

SUMMARY OF FINDINGS

Measuring the impact of R&D activities is the most subjective aspect of assessment and is ill-suited to quantitative measures.
Transitions are measurable at every level of R&D (see Figure 2-1). The impact of research can be measured only after the fact. Near-term impacts, including many transitions during R&D phases, require looking back at the recent past but can be monitored for most of the R&D effort. Long-term impacts require deeper historical probes and are more likely to be assessed for only a few notable examples. In both cases, organized processes for gathering and analyzing the data require management attention and designated leadership. Developing the data and presenting them in a manner that makes them useful to the intended audience is a job for professionals. The R&D organization will benefit from having an appropriately supported historian and internal report requirements to ensure the utility of the process.

6. W. Berry and C. Loeb, 2007. Breakthrough Air Force Capabilities Spawned by Basic Research. National Defense University, Washington, D.C.
7. T. Coffey, J. Dahlburg, and E. Zimet, 2005. The S&T Innovation Conundrum. National Defense University, Washington, D.C.
8. National Research Council, 2001. Energy Research at DOE: Was It Worth It? Energy Efficiency and Fossil Energy Research 1978 to 2000. National Academy Press, Washington, D.C.
9. R. Ruegg and I. Feller, 2003. Evaluating Public R&D Investment: Models, Methods, and Findings from ATP's First Decade. National Institute of Standards and Technology, Gaithersburg, Md.