Summary

Research and development (R&D) organizations are operated by government, business, academe, and independent institutes. The success of their parent organizations is closely tied to the success of these R&D organizations. In this report, organization refers to an organization that performs research and/or development activities (often a laboratory), and parent refers to the superordinate organization of which the R&D organization is a part. When the organization under discussion is formally labeled a laboratory, it is referred to as such. The question arises: How does one know whether an organization and its programs are achieving excellence in the best interests of its parent? Does the organization have appropriate research staff, facilities, and equipment? Is it doing the right things at high levels of quality, relevance, and timeliness? Does its work lead to successful new concepts, products, or processes that support the interests of its parent? The management of the National Institute of Standards and Technology (NIST) asked the National Research Council (NRC) to study methods of assessing research and development organizations. To conduct the study, the NRC appointed the Panel for Review of Best Practices in Assessment of Research and Development Organizations. This report summarizes the findings of that panel. The report offers assessment guidelines for senior management of organizations and of their parents. It lists the major principles of assessment, noting that details will vary from one organization to another. It provides sufficient information to inform the design of assessments, but it does not prescribe precisely how to perform them, because different techniques are needed for different types of organizations.
Three key factors underpin the success of an R&D organization: (1) the mission of the organization and its alignment with that of the parent; (2) the relevance and impact of the organization's work; and (3) the resources provided to the organization, beginning with a high-quality staff and management. Other resources include its budget, facilities, and capital equipment. Consideration of the alignment of the organization's mission and the relevance and impact of its work requires assessing the relationship that the laboratory and its parent have with their customers and stakeholders. Definitions of customer and stakeholder vary. A customer is often viewed as someone either inside or outside the organization who purchases products from the organization or its elements. A stakeholder may be viewed as an entity that can affect the organization's vision, mission, plans, or resources. Customers may differ from or be a subset of stakeholders. Although these definitions are arguable, the point remains that an effective assessment determines whether the organization has developed a clear and meaningful identification of its set of customers and stakeholders and a means for identifying and satisfying their needs.

THE CONTEXT OF EVALUATION

The context in which an organization is being evaluated relates first to the mission and vision of the parent organization. It is essential that the organization align its programs to be
consistent with the parent's mission and vision.1,2 Additionally, the organization may write its own mission and vision statements. (It is important to keep in mind, when discussing these missions, which is meant.)

The output of an organization depends on the kind of work that it is commissioned to do. R&D organizations perform a variety of technical work. Some conduct fundamental, long-term research; some do applied research; others do developmental work; still others support technical efforts leading to production and marketing or to implementation of new processes. Some organizations do all of the above. Effective assessments are structured to take into account what the organization is aspiring to do.

Research and development can be examined by considering three phases of the R&D: the planning stage, ongoing research, and evaluation of the relevance and impact of the R&D activities. In the planning stage, prior to launching a project, an organization develops goals for the projects, selects strategies and tactics intended to reach these goals, identifies needed personnel, and lists methods, including metrics, to assist in evaluating progress. Planning is done and assessed in the context of the organization's mission.

Assessing ongoing research, the most common subject of assessments, includes reviewing and evaluating the technical projects and considering the quality of the research staff and management, the facilities, and the capital equipment. An effective assessment compares the program to the parent's mission and vision. Relevance can be assessed by comparing the organization's portfolio with expressed needs of customers in terms of the substance of the work and of its priorities.

Retrospective analyses of programs may be made at various times following the completion of research and development activities. Many of the same metrics used to evaluate work in progress are useful in examining an R&D program after its completion.
THE THREE MAIN ELEMENTS OF AN ASSESSMENT

A comprehensive assessment evaluates three elements: management, technical quality, and impact. Aspects of these three elements may overlap. For example, the quality of the workforce falls under both management actions and quality of the work. Quality of the work will also be covered in considering impacts. Determining the relevance of the work is a key role of management.

Assessing Management

Customers and Stakeholders

An effective assessment begins with considering how well management of the organization has identified its mission and vision as they are aligned with those of the parent. The assessment involves identification of the stakeholders and customers of the organization. A key question is how management stays in close contact with these groups and how well it responds to changing demands. For organizations that are focused on a limited number of customers, contacts can be made directly with the ultimate users of the results. For other

1 National Research Council, 2002. Future R&D Environments: A Report for the National Institute of Standards and Technology. The National Academies Press, Washington, D.C.
2 J. Sommerer, 2012. "Assessing R&D Organizations: Perspectives on a Venn Diagram." Presentation at the National Research Council Workshop on Best Practices in Assessment of Research and Development Organizations, March 19, Washington, D.C.
organizations, such as NIST, there are too many customers for individual contacts. NIST's clientele includes the scientific and engineering communities; a variety of federal, state, and local organizations; and other countries. NIST of necessity deals through intermediary organizations such as trade groups, scientific and engineering societies, and aggregations of state and local government interests. It is important that an assessment of an organization include the efficacy of such interactions, using techniques such as polling and face-to-face meetings.

Resource Management

Any effective assessment is done in the context of the organization's mission. In order to satisfy its mission, an organization needs to be prepared to handle its current and future workload. This means that it will have a successful combination of the following:

- R&D portfolio--a collection of projects that are most likely to lead to successful accomplishment of the organization's mission;
- Resources--a workforce with an appropriate skill balance; the needed physical plant and equipment; and sufficient funding to enable accomplishment of the mission;
- Organizational leadership and management structure appropriate to the mission; and
- Planning for the future--the preparedness needed to ensure that the required resources will be in place as the mission evolves.

These elements are properly considered in context. Academic research focuses on generating new knowledge with relatively few mission objectives, whereas government and industrial research organizations have fairly clearly defined missions. Any effective management assessment also recognizes externally imposed limitations, including but not limited to regulatory and budgetary restrictions.

Portfolio

At all stages of R&D, it is important that the institution construct and manage its portfolio to maximize the probability of success.
In basic research, it is important that the portfolio cover those areas that are likely to be important in the long term to achieving the mission, and that the assessors look at the portfolio and comment on whether there are areas that may be missing and whether there are areas that are covered but not relevant. In product development the areas are often well specified, but it is important to consider whether or not the correct set of technologies is being applied to achieving the desired results. As noted above in the discussion on the context of the evaluation, portfolios can be assessed during the planning phase, during the ongoing research phase, and retrospectively.

There are three elements to consider when assessing the quality of the research portfolio: (1) current projects and their relevance to the mission; (2) anticipation of opportunities; and (3) alignment of the planned future portfolio to mission, opportunities, and budget. Every R&D organization has some systematic way of listing its investments. At the extreme of offering little specificity is identification of funding per group, with descriptions of the group responsibilities and recent accomplishments. This format is most common at the more basic or fundamental end of the R&D spectrum. At the more applied extreme are examples in some industrial
organizations in which each project is specified in great detail, including time lines and anticipated return on investment. In many respects, surveying the research portfolio of an organization or of a unit within that organization is straightforward. Individual projects are grouped under programs and evaluated in concert with stakeholders' and customers' needs and expectations. This process is often done through organized meetings. It may be a continuing process that includes formal oversight from stakeholders as well as outside reviewers and consultants. Industrial organizations may rely heavily on metrics involving financial return, whereas government organizations may focus more on delivering needed value to stakeholders consistent with mission statements.

Resources

Fulfilling the organization's mission requires a high-quality workforce with an appropriate mix of skills, an appropriate physical plant and laboratory equipment, and sufficient funding to accomplish the tasks.3

Managing the Workforce. The importance of the quality and expertise of people within an organization cannot be overemphasized. Of special significance for many R&D organizations is having, within any particular division, laboratory, or project, a staff with deep and creative technical capabilities. People with deep specialties but also broad perspectives and a history of varied assignments help prepare the organization for future assignments. At the heart of looking forward to the next generation of scientific and technological opportunities are the organization's scientists and engineers. Their knowledge of cutting-edge research is the starting point in all such efforts. To maintain and expand their knowledge, scientists and engineers require opportunities to attend scientific and technical meetings and to participate in the international community of scholars.
Supportive management will also encourage the staff to think about next-generation efforts and reward them for that effort by bringing resources to bear on the most promising ideas. It is important to assess the organization's policies and actions aimed at steadily building upon the sets of capabilities associated with individuals possessing both breadth of experience across multiple projects and depth in one or more systems and disciplines. An effective organization enables its staff to capture new skills as required for a given set of tasks at hand, while over the long term building the network required to make team members effective participants in global efforts to achieve the overall goals of the organization. To facilitate the creation of such capabilities, a diversity of personnel and work experience is vital. Effective assessments of an R&D organization include consideration of a diverse workforce whose contributions may affect and advance the R&D mission of an organization.

It is important that management continually plan for the future so that when the future arrives the laboratory is in a position to fulfill its mission. This means that the right resources and leadership must be in place when needed. The demographics of the workforce must be tracked so that there will be leaders in place as retirements and departures occur, and people with new skill sets must be recruited to be ready to deal with new technologies. An effective

3 J. Lyons, 2012. Reflections on Over Fifty Years in Research and Development: Some Lessons Learned. National Defense University, Washington, D.C.
assessment considers the adequacy of this planning as well as plans for the necessary physical infrastructure to support the new skill sets and technologies.

Physical Resources. Technical facilities encompass the organization's physical space and how it is occupied. Does the workforce have what it needs to carry out the research program as identified or planned? An evaluation of facilities independent of a clear understanding of program content is just as imprudent as an evaluation of program content without recognition of the need for appropriate facilities. Metrics concerning the facilities are not readily generalized, but it is important that the assessment of resources include capital equipment. It is important that management seek funding for purchases of modern items or for effective upgrades of existing equipment as well as maintenance of existing equipment. There is no established rule for how much of the budget management should allocate for equipment. The need for such funding depends on the nature of the work. Some work requires major purchases of equipment, but other work is not very dependent on expensive devices. Often, organizing equipment in the facilities to maximize utilization is desired, so individual "ownership" may need justification. "Home-built" equipment, properly identified and documented for the assessment team, is often a useful measure of a researcher's creativity.

Organizational Leadership and Management

An effective management structure will be consistent with the nature of the work. Basic research typically requires a very flat management structure, significant individual freedom in selection of research directions, and a management very receptive to suggestions (although projects involving very large experimental resources such as accelerators may require more structure). Product development typically requires a more hierarchical structure in order to ensure mission progress.
It is important that assessment of management include the effectiveness of the two-way communication between management and the workforce. How well does management explain the mission, vision, and strategy of the organization? How well does management explain the importance of the work and why the work has been assigned to the organization? Does management explain external changes that affect the organization? Is there a clear operational plan for executing the technical work? Does management provide copies of its planning documents to the staff?

It is important that assessors try to identify the culture of the organization, including how well the staff understand how "things are done here"; whether they feel that the organization is "a great place to work"; whether staff members are treated with dignity and respect; how well diversity is encouraged; and how conflict is surfaced and managed. It is also important to determine whether and how managers are noted for their ability to intercept and handle bureaucratic demands from above, thereby shielding research staff from administrative burdens.

Other important assessment items include whether an organization's leaders have experience matched to their assigned groups and experience in leading groups of professionals; whether there are programs to prepare staff members for future assignments involving more managerial functions; and whether training assignments, mentoring, and coaching are a part of personnel development.
Assessing Quality

An assessment of the quality of a research organization's work involves the consideration of a number of factors. Some can be measured quantitatively; others require more subjective judgments. Effective assessments will also include the quality of managers; the quality of research staff members; the output of the organization in terms of papers, patents, presentations, and handoffs to clients; and the adequacy of facilities and equipment. Some of these factors can be quantified in metrics; others require hearing presentations, walking through the organization, and speaking with staff and managers. Some of these assessments will also be made when looking at management and at impact.

The quality of the research staff is thought by many to be the most important factor in an organization's success. The assessment of the quality of staff is therefore most important. Some measures address the staff as a whole--for example, the percentage of Ph.D.'s or the number of postdoctoral fellows. Production by individual staff members includes the number of papers or patents per staff year, although these metrics do not really reveal the quality of the individual--papers can be routine, and patents can be trivial and unused. However, the quality of papers can be assessed by subject-matter experts on the assessment panels. The assessment of ongoing work is accomplished by hearing presentations by the individuals and visiting with them informally in their workplace or laboratory modules. In this way the overall capability of the laboratory staff can be estimated. An important assignment of the assessment panels, while touring the organization, is an evaluation of the state of the infrastructure--facilities, capital equipment, and support services.

Conducting the Assessment

The way that an assessment is done depends on the nature of the organization, the time frame to which the review applies, and who designs and manages the assessment.
Assessments can be done within the organization or by outside parties. There is a trade-off between inside and outside evaluations. Inside assessors have more detailed knowledge of the roles of the organization and the projects under review, but insiders may possess a bias with respect to the organization. External panels of independent experts must develop enough knowledge to make the assessment but necessarily assess with less intimate knowledge of the organization. Generally, more credibility is attached to an independent external assessment. A fully independent assessment is arranged and managed by an independent contractor.

The appointment of assessment panel members with requisite expertise is crucial. It is essential that candidates for membership on assessment panels be required to disclose any biases and potential conflicts of interest to the contractor so that the appointment decision can take such potential conflicts into consideration. Before the assessment is carried out, it is important that the panel's members receive briefing materials covering the background of the organization, including some history, a discussion of the parent organization, a number of quantitative measures (metrics) that the organization's management maintains, and any special charges to the assessors from the organization's management. It is essential that an assessment panel spend sufficient time visiting the organization to be able to perform an assessment at the desired scope. Normally, selection of the topics to be addressed is discussed with the management, but the decisions are generally best made in collaboration with panel members, guided by the panel chairs.
Outputs of the Organization

The results of an organization's work will normally be available for the assessment panels to review. These may be papers, presentations, or other means of conveying the nature of the technical work. The quality of the finished work is evident through the study of the documents and the discussion of them with those who did the research and, if possible, with the customers for it. It is important that evidence of the opinions of the customers be sought by either the panel chair or the managers at the organization. It is essential that the organization provide anecdotes of successful work by, for example, citing significant scientific advances (well-cited papers, awards, or other recognition) or, if appropriate, new products in the marketplace, new processes for producing the products, or new software in use by the technical communities.

Benchmarking

One commonly used technique of assessing quality is to compare one R&D organization with others judged to be at a high level of performance. Benchmarking is usually done with metrics, which have to be normalized to account for size and budget differences of the organizations examined. For example, one may cite the number of archival publications for each technical professional. Using percentages also avoids the problem of size differences--for example, the percentage of doctorates among the professional population. It is important to make comparisons among R&D organizations operating in similar contexts. Comparing an engineering research organization with an academic department would usually be inappropriate. A problem with benchmarking by metrics is that such assessments do not get at the effectiveness of the organization being assessed. There are examples of first-class organizations working in a parent organization that has failed to capitalize on the organization's breakthroughs. Nonetheless, benchmarking can be a useful addition to the assessment tool kit.
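The normalization described above can be made concrete with a small sketch. The following Python fragment is purely illustrative: the organization names, field names, and figures are invented for the example, and real benchmarking exercises would draw on many more metrics and on careful definitions of "professional staff" and "archival publication."

```python
def normalize_metrics(org):
    """Convert raw counts into size-independent benchmarking metrics.

    Dividing by the number of technical professionals (publications per
    professional) or expressing counts as percentages (share of doctorates)
    removes the effect of organization size, as the text suggests.
    """
    staff = org["professional_staff"]
    return {
        "name": org["name"],
        "publications_per_professional": org["archival_publications"] / staff,
        "percent_doctorates": 100.0 * org["doctorates"] / staff,
    }

# Hypothetical organizations of very different sizes.
orgs = [
    {"name": "Lab A", "professional_staff": 400,
     "archival_publications": 320, "doctorates": 260},
    {"name": "Lab B", "professional_staff": 80,
     "archival_publications": 72, "doctorates": 44},
]

for metrics in map(normalize_metrics, orgs):
    print(metrics)
```

On these invented numbers, the smaller Lab B publishes more per professional than the much larger Lab A, a comparison the raw counts alone would obscure; whether that difference is meaningful still depends on the organizations operating in similar contexts, as the text cautions.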
Assessing Impact

Measuring the impact of R&D activities is an important aspect of assessment. An insightful definition of impact was posed by William Banholzer of the Dow Chemical Company in his presentation to the National Research Council's workshop on Best Practices in Assessment of Research and Development Organizations: "What would not have happened if you did not exist, and how much would society have missed?"4

One looks to the customers and stakeholders for an evaluation of the impact of a research program. Supporting this evaluation, an organization will put in place and use on a regular basis a systematic process of outreach to this clientele. Polling by questionnaire and polling by interviews are alternatives. Sometimes impact can be assessed by talking to industrial and technical organizations that are able to represent individual companies or other groups. Holding

4 National Research Council, 2012. Best Practices in Assessment of Research and Development Organizations--Summary of a Workshop. The National Academies Press, Washington, D.C., p. 10.
periodic sessions with clientele would seem to be helpful not only for analysis of impact but also for validating the current research program against the needs of clientele. Meaningful evaluations also include analysis of completed R&D, sometimes examining the recent past and at other times undertaking retrospective analyses of more distant history. R&D organizations are part of a system or process that leads to products that both organizational management and stakeholders will ultimately use as the basis for judgments about the worth of the R&D.

In attempts to project the future impact of current or proposed programs, the R&D organization is hampered by the inevitable fact that it may take many years, or even decades, before the full impacts of current programs are realized by other organizations. And when that eventually does come to pass, there have usually been so many different organizations involved in developing, engineering, producing, and fielding the end item that its identity with the research organization is lost. Regardless of how the story is formulated, the stakeholders' confidence in the organization and its management will be bolstered by demonstration that the decision making and processes of the present are comparable to, or better than, those of the past that led to measurable impacts. To tell this story properly, many organizations have had recourse to looking backward and tracing the consequences of R&D events long past.

With respect to applied research and product and process development, industry will appropriately focus on its return on investment (ROI) for the R&D. Feedback from both failures and successes may be communicated to stakeholders and used to modify future investments. Government organizations rarely have such a direct metric, so it is important that they seek other evidence of impact and a structure for communicating it to their myriad stakeholders.
An example of this approach for learning about impact is Project Hindsight.5 Carried out by the Department of Defense in the mid- to late 1960s, Project Hindsight was a study of the development of 22 different weapons systems drawn from across the military services. It involved more than 200 personnel over a period of approximately 6 years. For years afterward the observations and conclusions of Project Hindsight guided military R&D planning and decision making. In 2004, recognizing that much had changed in the intervening years, the U.S. Army commissioned a new study, Project Hindsight Revisited.6

The Department of Energy (DOE) utilized a similar retrospective analysis, with the assistance of the NRC, examining the impacts on energy-producing and energy-using industries of R&D programs executed by the DOE laboratories over the period 1978-2000. The report summarizing the findings of the assessment makes the case in economic terms for an ROI that by itself could justify funding the research, while recognizing that societal impacts are far more difficult to measure and are not readily quantifiable.7

Companies, or even laboratories themselves, may commission histories. Sometimes a popular book describes developments in a technical organization; examples are developments in the Bell Laboratories8 or the General Electric laboratories.9 Occasionally the history of an

5 Office of the Director of Defense Research and Engineering (DDRE), 1969. Project Hindsight: Final Report. Office of the DDRE, Washington, D.C.
6 J. Lyons, R. Chait, and D. Long, 2006. Critical Technology Events in the Development of Selected Army Weapons Systems: Project Hindsight Revisited. National Defense University, Washington, D.C.
7 National Research Council, 2001. Energy Research at DOE: Was It Worth It? Energy Efficiency and Fossil Energy Research 1978 to 2000. National Academy Press, Washington, D.C.
8 J. Gertner, 2012. The Idea Factory: Bell Labs and the Great Age of American Innovation.
Penguin Press, New York, N.Y.
9 B. Gorowitz, 1999. The General Electric Story: A Heritage of Innovation, 1876-1999. Schenectady Museum, Schenectady, N.Y.
organization may appear in a biography of one of the founders; recently a biography of a founder of Apple described the accomplishments of that company.10

SOME QUESTIONS TO CONSIDER DURING ASSESSMENT

The following is a series of questions that a manager--either the organization's director or a responsible member of the parent organization--can ask when considering carrying out an assessment of a research organization. The body of this report addresses these questions in detail and suggests some best practices. A set of these questions is presented here and can serve as guidelines--a kind of "tool kit"--for anyone considering performing or sponsoring an assessment of an R&D organization.

Assessing Management

Answers to the following questions will be useful in the assessment of organizational management:

- Does the organization's management understand its mission and its relationship to that of its parent?
- Does the vision statement of the organization align with that of the parent organization?
- Is there a long-range plan for implementing the strategy by specific technical programs?
- Does the organization have an explicit strategy for its work and for securing the necessary resources?
- Do the program plans reflect a model for balance--that is, amount of basic versus applied and development research, and short-, medium-, and long-term work?
- Does the organization have a clear champion within the parent organization?
- Does management have an aggressive recruiting plan with well-defined criteria for new hires?
- Is there a set of practices for retaining, promoting, and recognizing the staff?
- Does the organization have a process for forecasting likely future technical developments in areas appropriate to its mission?
- Does the organization's management have discretionary authority to invest in new programs on its own initiative?
- Does management solicit ideas from the staff for new work?
- Does management regularly assess facilities and equipment for adequacy?
- Does it have a fiscal plan for updating or replacing laboratory equipment?
- Is there a process for regularly reviewing the organization's research portfolio for its alignment with the mission?
- What is the management climate, and how does one assess it?
- Is there enough flexibility to work across organizational lines?
- How does the structure of the organization support its mission?

10 W. Isaacson, 2011. Steve Jobs. Simon & Schuster, New York, N.Y.
- How much collaboration is there with outside organizations? How many staff exchanges are there?
- Does management have a well-defined process and criteria for determining what work is performed in-house versus what work is sponsored via grants, contracts, or other mechanisms with external entities?
- Does management support a culture of creativity, diversity, and entrepreneurship?

Assessing the Quality of Scientific and Technical Work

Answers to the following questions will be useful in the assessment of the quality of an organization's technical work:

- Does the assessment include the quality of the staff, equipment, and facilities?
- Does the assessment include the nature of the research portfolio as to alignment with the mission and the balance in regard to basic, applied, and development work and short-, intermediate-, and long-term research?
- Does the organization have a set of indicators that can serve as parameters when the time frame precludes immediate assessment?
- Does the organization benchmark itself against premier organizations?
- Who is the expected audience for the assessment?
- Is the review done by technical peers?
- What are the criteria for ensuring the credibility and validity of the assessment?
- What is the scope of the assessment? Does it include proposals for new work? Does it include assessment of completed work--internal review and authority to release a report, publications, patents, invited lectures, awards, and the like?
- Who designs and manages the assessment?

Assessing Relevance and Impact

Addressing the following questions will be useful in the assessment of an organization's relevance and impact:

- Does the organization have a process for identifying its stakeholders and customers?
- Does it have a regular process for reviewing its programs and plans with its stakeholders?
- Does the organization have a process for learning of its customers' current and likely future needs and expectations for the organization?
- Does the organization have an explicit process for tracking the utilization of its results (e.g., is transition to the next R&D stage actively managed and measured)?
- Does it have a formal program for recording the history of its work from concept to final utility or impact?
- Does the organization have a program to conduct retrospective studies of its earlier work?