of these elements at all levels, from the most senior manager to the individual executing a given program.16 Some elements of the assessment of cultural factors will be quantitative—for example, the number of courses or degree programs associated with training, safety initiatives, accidents, and other safety-related indicators—although one important metric is largely qualitative in nature: Does the cultural environment provide meaningful support?

Other considerations related to organizational culture may include the following: To what extent are organizational members clear about organizational policies and processes, both explicit and implicit? To what extent do organizational members agree that “this is a great place to work”? To what extent are organizational members treated with respect and dignity? Are differences among people respected and encouraged, or is the expectation one of bias and prejudice? Is conflict surfaced and managed, or is it avoided?

As is readily gleaned from the above discussion, indicators may be either quantitative or qualitative in nature and may be found in various data sources, including employee surveys. For instance, quantitative measures include number of publications and presentations, citations, and new products and processes. Return on investment and performance outputs can be important metrics.
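To make such quantitative indicators comparable across organizations of different sizes, they are typically normalized—for example, publications per technical professional, or the percentage of doctorates among the professional population. The following is a minimal sketch of such normalization; the organization names and figures are hypothetical, used only for illustration.

```python
# A minimal sketch of size-normalized quantitative indicators for
# benchmarking R&D organizations. All names and figures are hypothetical.

def publications_per_professional(publications: int, professionals: int) -> float:
    """Archival publications per technical professional."""
    return publications / professionals

def doctorate_percentage(doctorates: int, professionals: int) -> float:
    """Percentage of doctorates among the professional population."""
    return 100.0 * doctorates / professionals

# Hypothetical figures for two organizations of different sizes.
orgs = {
    "Lab A": {"publications": 240, "professionals": 120, "doctorates": 84},
    "Lab B": {"publications": 90,  "professionals": 40,  "doctorates": 30},
}

for name, d in orgs.items():
    rate = publications_per_professional(d["publications"], d["professionals"])
    pct = doctorate_percentage(d["doctorates"], d["professionals"])
    print(f"{name}: {rate:.2f} publications/professional, {pct:.0f}% doctorates")
```

Normalized in this way, a small organization and a large one can be compared on the same scale, although—as discussed below—such metrics alone do not reveal organizational effectiveness.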

Other outputs not as readily associated with quantitative measures might also have significance. Examples include impact on customer satisfaction, contributions to the pool of innovations, global recognition, the effectiveness of organizational leadership, communication among various entities within the organization and with relevant stakeholders, and the ability to transition research from invention/innovation to later stages of development.

Appendix L provides a set of assessment metrics and criteria applied by NRC panels that review the ARL. This set of metrics and criteria is not presented as a prescription but, rather, as an example of a tailored set developed to meet the perceived assessment needs of one organization. The assessment items identified fall into the following categories: relevance to the wider scientific and technical community, impact on customers, formulation of the goals and plans for projects, methodology applied to the research and development activities, adequacy of supporting capabilities and resources, and responsiveness to the findings from previous assessments.


One commonly used assessment approach is to compare one R&D organization with one or more others judged to be at the top level of performance. This is usually done with metrics that are normalized to account for differences in size. Thus one may cite the number of archival publications per technical professional. Using percentages likewise accounts for size differences, e.g., the percentage of doctorates among the professional population. It is important that comparisons be made among R&D organizations operating in similar contexts.17 For example, comparing an engineering research organization with an academic department provides little meaningful information, because the two operate in different contexts. A limitation of benchmarking with metrics is that such assessments do not reveal the effectiveness of the organizations. A first-class organization may reside in a parent that fails to capitalize on the


16 B. Jaruzelski, J. Loehr, and R. Holman, 2011. Why Culture Is Key. Booz & Company, New York, N.Y.

17 National Research Council, 2000. Experiments in International Benchmarking of U.S. Research Fields. National Academy Press, Washington, D.C.
