A research institution may be examined in three phases of R&D: (1) in the planning stage, (2) during ongoing research, and (3) when the work is completed. In the planning stage, prior to the initiation of research, an organization develops goals, selects strategies and tactics intended to reach those goals, identifies personnel and/or workforce development pathways, and lists metrics for evaluating progress. Planning is done in the context of the organization's mission and may entail everything from broad capability development to specific, targeted development efforts. Appendix D provides an example of the effective application of peer advice during the planning phase of a research project at the Army Research Laboratory (ARL). The example describes how a set of experts selected from a pool of individuals familiar with ARL contributed to the effective refinement of a request for proposals for two significant ARL programs involving consortia that would examine multiscale modeling of materials relevant to Army R&D.

Ongoing research is the most common subject of review by processes external to the performing organization. The technical content of the R&D being performed usually receives the most attention, but an effective assessment also considers all of the context elements identified in the planning-stage framework.

Retrospective analyses of programs may be made at various times following the completion of research and/or development activities. Many of the same metrics used to evaluate work in progress are useful in examining an R&D program after its completion. Publications and patent awards to individuals and groups may be indicative of quality, particularly shortly after completion. Evidence of technology transition within the larger research, development, testing and evaluation, and fielding or commercialization system is another metric, although documentation of this metric is more readily obtained after time has passed. Reviews of economic impact and/or increased capability in a military system are examples of retrospective analyses that require formal study by professionals, sometimes many years following the technical work of the organization. Such studies may be expensive and are most likely to be done on a case-by-case basis that tends to emphasize successes. Nevertheless, such anecdotal reviews conducted after sufficient time has elapsed remain the best evidence for later judgments of a program's effectiveness.

Findings from assessments in each of these phases provide information to organizational decision makers, who are responsible for maintaining the organization's preparedness to identify and respond to ongoing and future challenges.

After assessment findings are communicated to the organization, it is important to validate those findings—to assess the assessment itself. Appendix C provides a discussion of considerations pertaining to the validation of assessments, including discussion of various types of validity and reliability of assessment findings, factors relating to the efficiency of an assessment, and evaluation of the impacts of an assessment.


Any assessment implicitly assumes a set of attributes that characterize an effective R&D organization. The attributes listed below were suggested in a report to the Secretary of Defense as part of the Base Realignment and Closure decision-making process pertaining to the Defense laboratories in 1991.6 The list does not address the accomplishments of the past or offer


6 Federal Advisory Commission on Consolidation and Conversion of Defense Research and Development Laboratories, 1991. Report to the Secretary of Defense. U.S. Department of Defense, Washington, D.C.

Copyright © National Academy of Sciences. All rights reserved.