
2

Summary of Presentations

The following summaries of the presentations by James H. Turner, John C. Sommerer, J. Stephen Rottler, William F. Banholzer, Roy Levin, and Gilbert F. Decker were prepared by the workshop rapporteur, James P. McGee. Except where noted, comments in the summaries are attributable to the speakers.

Presentation by James H. Turner:
Impact of Assessments and Merit Reviews on Stakeholders

Introducing the first part of his presentation, James Turner referred the audience to a report that he had prepared for the U.S. Department of Energy (DOE) and that had been published by the Association of Public and Land-grant Universities—Best Practices in Merit Review: A Report to the U.S. Department of Energy, December 2010.1 To inform the examination of its energy portfolio, the DOE had asked how peer review should be performed at the department; the topic was taken up at a DOE-sponsored workshop held in January 2010 and is documented in Best Practices in Merit Review.

Turner emphasized that assessments should include measurement against goals and intentions. Basic research and applied research are distinct. The goal of applied research is to get a task done on time and within budget. The goal of basic research is to develop science to the cutting edge and beyond.

Speakers at the January 2010 DOE workshop from industry and venture-capital research organizations showed commonality in various areas. For example, they did not want to spend time and funds evaluating proposals. Their emphasis was on securing the best team rather than worrying about the details of proposals. They considered past research the best indicator of future success. They suggested that managers must be hands-on and aware that time is the enemy (the maximum timeline for achieving commercialization is generally 4 years for industry and venture-capital initiatives). They suggested that if no progress is evident after a year or two, the work should be stopped and efforts directed elsewhere. They believed that every 3 to 6 months, work plans and progress should be assessed, that reviewers should be changed after 2 to 3 years, and that reviewer expertise should be based on current assessment needs. They said that the goal of the R&D efforts is to move to the marketplace. It was noted that the Department of Defense (DOD) has its “marketplace” as well, and that the DOD desires to streamline its procurement processes.

________________

1 James Turner, Best Practices in Merit Review: A Report to the U.S. Department of Energy, December 2010, Washington, D.C.: Association of Public and Land-grant Universities. Available at http://www.aplu.org/document.doc?id=2948. Accessed August 21, 2012.

Speakers at the January 2010 DOE workshop from organizations and agencies with a focus on basic research also shared common opinions, but their opinions differed from the opinions of those at that workshop who expressed a focus on applied research. Those who emphasized basic research considered it important to examine every proposal and proposer and to ask whether each team had the expertise, equipment, and facilities to succeed. They agreed that there is a need for competent researchers, but they also saw a need to fund new researchers and ideas. They contended that diversity of peer reviewers helps to enhance recognition of innovative proposals.

Participants at the DOE workshop highlighted a need to collectively evaluate the importance both of advancing the state of the art in basic research and of performing applied research.

For the second part of his presentation, Turner examined three stakeholder groups that he believes should be considered by government laboratories when assessing the satisfaction of stakeholder needs: the U.S. Congress, the Executive Branch, and the public.

The Congress is a trailing indicator, rather than a leading indicator. It is difficult for Congress to influence without achieving a consensus. Congress reports to its members’ constituents, the voting public. Responding to congressional leadership is important for the members of Congress, as is being mindful of reelection. Congress is a legal society in the sense that approximately half of the U.S. Senate and a third of the members of the U.S. House of Representatives are lawyers. For the Congress, it is axiomatic that a good anecdote is the coin of the realm.

So where does assessment evidence fit in? It is one step removed from the Congress. Congressional staff review assessment reports and have a key role in drafting bills. The congressional authorizing committee chair influences the Appropriations Committee chair. These are ultimate targets and prime readers of assessment reports. In general, congressional attention is thinly spread.

Other targets of assessment reports are the Executive Branch, support agencies (e.g., the Congressional Research Service and the Government Accountability Office), and think tanks (e.g., the National Research Council and the Cato Institute). Reports from those institutions make a great impact. The National Research Council and other think tanks have good reputations with the Congress.

The Executive Branch usually has its way during appropriations, and so influencing the Executive Branch is important. The most important stakeholder, though, is the public.

Presentation by John Sommerer:
Assessing R&D Organizations—Perspectives on a Venn Diagram

In his opening remarks, John Sommerer noted that organizations are motivated by the desire to innovate. The context within which an R&D organization exists is everything. It is not feasible to develop an assessment approach that will be applicable across all organizations—a tool kit is needed. When assessments go badly, it is because an approach is applied without regard to context. For example, academic metrics can be inappropriate for the assessment of stakeholder satisfaction.

The three main influences on Sommerer’s thinking in this regard have been Vannevar Bush,2 Terence Kealey,3 and Donald Stokes.4 Vannevar Bush proposed the linear model of innovation that is now codified within the DOD (as the 6.1, 6.2, etc., levels of R&D maturity). That approach, which assumes a linear transition from basic research to applied research, to development, and to production, is fundamentally wrong. Innovation rarely operates according to a linear model.

Terence Kealey updated the linear model to include on-ramps, off-ramps, and feedback loops. He assumed that old science informs new science and old technology informs new technology, that the two interact only infrequently, and that when they do interact, frequently serendipitously, significant results are produced. Kealey was hostile to the concept of government investment in R&D. This is a well-documented perspective—the view that government investment lacks appropriate context and motivation (it is a trailing rather than a leading indicator) and that the frequent result is waste. Kealey’s perspective is that an R&D organization exists as a modus vivendi between the stakeholders of the organization and its researchers, and that one cannot get people capable of embracing and understanding the cutting edge of the literature to capture value for the parent organization unless one gives them money to play with. Sommerer noted that one cannot expect a staff of nonpractitioners (those who are not directly and intimately involved with the R&D activities) to cull the literature for the purpose of identifying value for an organization, because they do not have the appropriate insights. Even in a private-investment context, an R&D organization is a way of having a captive populace to extract value from cutting-edge activities.

Donald Stokes identified an innovation web and emphasized the need to address very hard problems.

Sommerer emphasized that an R&D organization is successful if and only if there are three components in synergistic support: alignment, vision, and people. The interaction of these components can be illustrated by a Venn diagram in which the three components are represented as overlapping sets.

The alignment component is addressed by asking the following questions, and any assessment, even of technical quality, needs this context: Does the parent stakeholder have a strategy articulated with clear milestones so that it can be internalized by the organization? Does the organization have a supportive strategy? Is there a clearly articulated vision of what the parent and the organization are trying to achieve according to some milestones? Are all of these elements in synchrony? Are these strategies mutually supportive and updated? Are they good or bad strategies? Within this alignment, is the organization looking for first-mover advantages or second-mover advantages? What developments does the organization consider important to capture?

________________

2 Vannevar Bush, The Endless Frontier, Washington, D.C.: U.S. Government Printing Office, 1945. Available at http://www.nsf.gov/od/lpa/nsf50/vbush1945.htm. Accessed August 23, 2012.

3 Terence Kealey, The Economic Laws of Scientific Research, New York, N.Y.: Palgrave Macmillan, 1996.

4 Donald E. Stokes, Pasteur’s Quadrant, Washington, D.C.: Brookings Institution Press, 1997.

The vision component is addressed by asking the following questions: Does the organization know what it wants to become (in 1-, 5-, 10-year frameworks)? What expertise is it trying to achieve? Acknowledging that strategy is about what one is going to do and not do, where does the organization choose to be a leader as opposed to a fast follower? Does the organization have expertise in areas in which it desires to be a leader, and less expertise in areas in which it desires to be a follower? Are the synergies nurtured? Are there exit strategies? Are there realistic stretch goals? Are there sufficient resources? A vision without resources is a hallucination.

The component of people is addressed by the following considerations: Human capital is fundamental. Innovation requires free energy—that is, giving researchers some latitude and discretion in their work. There is no hope for the future of an organization without free energy. Peer reviews, which measure competence, have been well defined, but it is more difficult to measure motivation and external engagement. There is a need for external engagement globally in order to innovate. An assessment of human capital should ask: Are the people in the organization trying to become better?

The intersection of people with alignment is addressed by the following questions: Do the people know the strategy of the organization and its parent? Are there mechanisms by which the people can contribute to the strategy? Can they interact with the organization’s customers? Are the leaders administrators or role models? What are their credentials and qualifications? Do they have a strategy to support the people of the organization? Does the organization assess and mentor the people? Does the organization have the will to release people who should not be there? Does the organization have a strategy and the resources for engagement with the external world and for encouraging such engagement? Is innovation welcome, supported, protected? External engagement must be focused on the broad global community.

The intersection of vision with alignment is addressed by the following questions: Is there updating of the vision in response to changing external factors? Is there a process of self-assessment? Is there a list of lessons learned, and are they really learned, not just recorded? Is the self-assessment diligent, and does it have integrity? Is the assessment updated in acknowledgment of new strategies? There is need for both bottom-up and top-down assessment.

The intersection of people with vision is addressed by the following questions: Do the people within the organization know the vision? Can the people contribute to the vision? Does the R&D organization have a strategy and appropriate resources for engagement with the larger technical community, the commercial sector, and the global community? Is innovation welcomed, supported, and protected?

The intersection of vision, people, and alignment is addressed by an examination of the organization’s agility, flexibility, and adaptability in the face of changing pressures, budgets, and external contexts. This intersection needs to be consciously worked by staff and leadership, and it must be internalized.

There are potential cautions for external reviewers: Even robust review processes can be susceptible to inappropriate coaching. Any organization being assessed by an external group has a stake in the assessment. External assessment groups must be careful not to be used. An organization’s suggestion that the assessment be restricted to a “narrow lane” without the reviewers’ understanding the context of the organization and of the assessment is a warning sign of an attempt to influence the assessment. Given the freedom to do so, external reviewers can be very helpful in identifying suggestions for addressing the context questions mentioned here.

Presentation by J. Stephen Rottler:
Assessing Sandia Research

J. Stephen Rottler emphasized that Sandia National Laboratories has undergone a continuous evolution of assessment of quality, relevance, and impact, with quantitative assessment evolving into qualitative assessment that is informed by data.

Organizations are complex systems, composed of interconnected parts. The properties of the whole organization are not necessarily perceived by looking at individual parts. Systems behave in nonlinear ways that can be difficult to predict. Assessors must probe, watch behavior, probe, watch behavior, iteratively, being mindful that their assessment impacts behaviors. Over time, there has been a need to shift from quantitative to qualitative assessment informed by data.

The characteristics of Bell Laboratories identified in Jon Gertner’s book The Idea Factory: Bell Labs and the Great Age of American Innovation5 have been deliberately nurtured at Sandia. These characteristics, which cannot be reduced to simple rules, must be applied dynamically. They are as follows:

•  A critical mass of talented doers and thinkers;

•  An environment that encourages interaction, and an open-door policy under which experts are expected to participate in the everyday mix of work and to mentor junior staff;

•  Multidiscipline research teams who understand that the aim of the organization is to transform knowledge into new things;

•  Freedom (and time) to pursue solutions thought to be essential; and

•  Rich knowledge exchange between the creators and the users.

________________

5 Jon Gertner, The Idea Factory: Bell Labs and the Great Age of American Innovation, New York: Penguin Press, 2012.

Organizations that traditionally have been stovepiped are increasingly evolving strategies and funding approaches that acknowledge the importance of multidisciplinary research organizations.

Sandia performs assessment in order to continue to improve the performance of the laboratory as an organization. Sandia addresses serious and high-consequence security challenges faced by the nation. It must deliver to requirements today and in the long term, and position itself to operate and deliver according to long-term requirements. Sandia focuses on the assurance of error prevention—developing and exhibiting behaviors so that errors are less likely to occur. Peer-review and external advisory boards examine pathways to error so that the probability of success can be maximized.

There are three assessment categories: (1) Self-assessments are intended to be objective but are inherently limited. All successful organizations have mature self-assessments that are objective and that promote responsive behaviors. (2) Independent assessments conducted through external peer reviews and visiting committees (external advisory boards) are used to examine quality, relevance, impact, and responsiveness to customers. (3) Benchmarking compares an organization to other organizations and is accomplished by formal assessments (by visiting other organizations) and less formal interactions as well.

Self-assessment at Sandia has become increasingly formal and disciplined. Quarterly assessments present opportunities for leaders to examine with their teams whether their expectations about quality, relevance, and impact are being met. These assessments are performed at all levels of management.

Independent assessments are performed through a research advisory board that meets twice a year. The board is composed of senior individuals drawn from across academia and the public sector. The board is used in a broad sense to assess technical quality, using external measures and comparison against other organizations. The assessment examines whether Sandia is meeting the criteria of its roles as fast follower or first researcher. It also examines the health of the research environment and connections with internal and external customers. It elucidates what is either working or getting in the way in terms of innovation. The board also meets with customers of the organization and examines the impacts of prior investments. It assesses whether investments have enabled the laboratory to continue fruitful work or to initiate new work.

Laboratory Directed Research and Development (LDRD) funds are an important element of Sandia. Sandia’s director is permitted to decide how the LDRD funds are allocated across projects consistent with Sandia’s mission. The National Nuclear Security Administration (NNSA) provides oversight for this program, which captures principal-investigator-generated ideas within the management context. The program includes 5 or 6 grand challenge projects; each of these larger projects has an assigned external advisory board. Historically, these larger projects have transitioned successfully to have impact within Sandia or have achieved follow-on external funding—these impacts have been achieved with the help of the external advisory boards.

Assessments have traditionally been performed according to a balanced scorecard that guides the selection of data to support assessment decisions. Metrics are defined to assess three areas of measurement: value to customers, outputs, and inputs. Within each area, metrics are defined to support the assessment of what the organization is doing and how it is doing it. To assess value to customers, the value and impact in terms of leadership, stewardship, and mission satisfaction are addressed by examining measures of the effectiveness of strategic partnerships with industry and technical collaborators. To assess outputs, the excellence of scientific and technical advances is addressed within the context of management excellence, which involves measuring elements of the work environment and management assurance. To assess inputs, the capabilities of staff, technology infrastructure, and facilities are addressed by examining the science, technology, and engineering strategy through measurements of parameters indicative of the portfolio and the technical planning process.
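
The balanced-scorecard structure described above lends itself to a simple illustration. The sketch below is hypothetical: the three measurement areas follow the text (value to customers, outputs, inputs), but the individual metrics, weights, and scores are invented placeholders rather than Sandia’s actual scorecard.

```python
# Hypothetical sketch of a balanced scorecard with the three measurement areas
# named in the text; metric names, weights, and scores are invented.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    score: float   # assessed score on a 0-100 scale (invented convention)
    weight: float  # relative importance within its measurement area

SCORECARD = {
    "value to customers": [
        Metric("effectiveness of strategic partnerships", 82, 0.6),
        Metric("mission satisfaction of sponsors", 75, 0.4),
    ],
    "outputs": [
        Metric("scientific and technical advances", 88, 0.7),
        Metric("work environment and management assurance", 70, 0.3),
    ],
    "inputs": [
        Metric("staff capabilities", 90, 0.5),
        Metric("technology infrastructure and facilities", 65, 0.5),
    ],
}

def area_score(metrics):
    """Weighted average of the metrics within one measurement area."""
    total_weight = sum(m.weight for m in metrics)
    return sum(m.score * m.weight for m in metrics) / total_weight

for area, metrics in SCORECARD.items():
    print(f"{area}: {area_score(metrics):.1f}")
```

Such a rollup only organizes the data; as the text notes, the judgment about quality, relevance, and impact remains qualitative.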

The evolving assessment processes at Sandia increasingly include an examination of qualitative factors informed by the quantitative data. The following elements are assessed: clarity, completeness, and alignment of the research strategy; alignment of the research with the organization’s missions; quality and innovation of the research; vitality of the organization’s scientists and engineers; and long- and short-term impacts of the research with respect to the organization’s missions and to advancing the frontiers of science and engineering.

In summary, organizations and their assessors should be clear about the purposes of the assessment and its context; carefully decide what data to collect and what the assessment framework is; and link the assessment to the organization’s concept of what makes a great organization. Management is a qualitative sport, not a quantitative sport, but management in the absence of data is vacuous. The role of a leader in an R&D organization is to clearly express expectations to coworkers, to assess constantly whether those expectations are being met, and to take action to correct cases in which the expectations are not being met.

Presentation by William F. Banholzer:
An Industrial Perspective

William Banholzer introduced his presentation by noting that industry does not have a right to do research—research is a privilege earned by creating value. Value means that industry can commercialize something and that someone will pay for what industry has done. Therefore, for industry, three questions must be addressed: What do people want? What will people pay for? What can they afford?

Industrial organizations must convert research efforts into products that people want, will pay for, and can afford. Industry always has this compass: the return on investment (ROI) of research expense versus gain. By examining impact on society (i.e., will society purchase the product?), one can define benefit beyond such metrics as scientific publications.

Also, there are multiple stakeholders, including stockholders, employees, government, and suppliers. Each context—industry, academia, government—has different stakeholders, whose interests must be balanced and addressed. Sometimes these different interests are not aligned. Customers are those who pay for a product; stakeholders are those who have a vested interest in an organization. For industry, customers are those to whom the organization sells, and shareholders are those to whom the organization is responsible; stakeholders may include the government, community, employees, and suppliers. For universities, customers are the students, and stakeholders are parents, the community, the faculty, and society at large. For the national laboratories, customers are the sponsoring government department and society, and stakeholders are taxpayers, the community, employees, and companies.

Assessment practices must show recognition of the importance of time frame. The time frame for technology development is getting shorter and shorter. It is increasingly necessary to invest faster, and this has implications for the metrics used for assessment. The economy has also cycled—because almost all products involve chemistry, the chemical industry’s profits have followed the economic trends. The typical 7-year cycles are driving the need for faster research; cycles are getting shorter.

Commercial launch represents enormous investment. “Materiality” refers to the point at which the cumulative investment is matched by cumulative sales. Sales do not equal profit, and so the break-even point is even farther out. Materiality points are up to 15 years out, and the cash-flow-positive point can be 25 to 30 years out. In the meantime, investors are asking, “What have you done for me this quarter?” The message is that R&D is a long-term endeavor, and evaluations must consider the time frame. However, an organization cannot simply say, “Trust me for the long term.” Competitive analysis (pointing to how much competitors are spending) does not convince management. Not all technologies have the same time frames; for some areas (e.g., energy), one has to think in decades.
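
The timing concepts in this paragraph can be made concrete with a toy cash-flow calculation. Everything in the sketch below is invented for illustration; following the text, materiality is taken as the year in which cumulative sales first match cumulative investment, and break-even as the year in which cumulative profit does.

```python
# Toy illustration of materiality versus break-even over a 20-year horizon.
# All yearly figures (in arbitrary currency units) are invented.
from itertools import accumulate

years      = list(range(1, 21))
investment = [10] * 5 + [4] * 15            # heavy spending through launch, then sustaining
sales      = [0, 0, 0, 0, 3, 6, 10, 14] + [18] * 12
profit     = [s * 0.15 for s in sales]      # assume a 15 percent margin on sales

def first_year_reaching(cumulative_gain, cumulative_cost):
    """First year in which cumulative gain matches cumulative cost (None if never)."""
    for year, gain, cost in zip(years, cumulative_gain, cumulative_cost):
        if gain >= cost:
            return year
    return None

cum_invest = list(accumulate(investment))
cum_sales  = list(accumulate(sales))
cum_profit = list(accumulate(profit))

# With these invented numbers, materiality arrives in year 11, while cumulative
# profit never catches cumulative investment within 20 years, consistent with the
# point that the cash-flow-positive point can lie decades out.
print("materiality year:", first_year_reaching(cum_sales, cum_invest))
print("break-even year:", first_year_reaching(cum_profit, cum_invest))
```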

Great science does not necessarily convert into great business. Pragmatically, science is a necessary but not sufficient condition. Organizations must be wary of claims based on poor science and of fads. Fads have had a perturbing effect on pragmatic research.

What gets measured gets done. Measurements in industry are usually historical. R&D as a percentage of sales, number of new product sales, and number of patents are woefully inadequate to define the success of an industrial organization. The assumption that spending a lot yields great results is flawed. Better metrics, illustrated in the sketch after this list, are:

•  New product sales/R&D investment: a measure of productivity;

•  Margin on new products: shows whether new products account for expanded earnings;

•  Patent-advantaged sales: reveals whether patents are protecting sales.
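
A minimal sketch of how those three metrics might be computed from annual figures follows; the numbers are invented, and in practice the definitions (for example, how many years a product counts as “new” and what qualifies a sale as patent-advantaged) would have to be agreed on first.

```python
# Invented annual figures (millions of dollars) used only to illustrate the metrics.
rd_investment           = 1_600
total_sales             = 57_000
new_product_sales       = 18_000   # sales of products introduced within the last few years
margin_on_new_products  = 4_300    # earnings contributed by those new products
patent_advantaged_sales = 21_000   # sales protected by the organization's patents

# New product sales per R&D dollar: a productivity measure.
rd_productivity = new_product_sales / rd_investment

# Margin on new products: are new products expanding earnings, not just revenue?
new_product_margin_pct = 100 * margin_on_new_products / new_product_sales

# Patent-advantaged sales: are patents actually protecting sales?
patent_advantaged_pct = 100 * patent_advantaged_sales / total_sales

print(f"new product sales per R&D dollar: {rd_productivity:.1f}")
print(f"margin on new products: {new_product_margin_pct:.0f}% of their sales")
print(f"patent-advantaged sales: {patent_advantaged_pct:.0f}% of total sales")
```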

These metrics apply to industry, but they may not apply to national laboratories. An organization needs to define its customers and its criteria for success. Patents and publications do not indicate impact. The question to ask is, What would happen if you were not there? This question is more complicated when considering the long-term time frame.

National laboratories should measure impact and some form of ROI. Successes can be communicated through anecdotes, but past history does not prove future success. It is critically important to look at an organization’s total portfolio and to examine how well the organization is spending the whole portfolio, not just the best portion of it. If an organization wants to succeed, it must play to succeed. Failures should be appreciated if they represent reasonable attempts at addressing challenges. National laboratories must consider their R&D by comparison with centers of excellence, and the national laboratories need more exposure to industry.

Assessments should consider as key the question: Who are the customers and stakeholders, and how do they define success?

Presentation by Roy Levin:
Managing Innovation at Microsoft Research

Roy Levin emphasized that context is everything and that Microsoft Research (MSR) is in the business of innovation. MSR has laboratories in six locations around the world. Research at Microsoft is not part of the product organization; because its focus is research, MSR is funded as a corporate function. The MSR mission is to perform basic research and to advance the state of the art—which does bring advances quickly into products and services. The management challenge is to transition research into products and services. MSR contributions to Microsoft products are direct (by providing products) and indirect (by building software tools that enable product development).

The role of a research organization overall is to address the long term. MSR exists to look out for the company’s future. The rationale for the existence of a research organization is to provide the capacity to deal quickly with the unknown and unexpected. Disruptive technologies, new competitors, and new business models can occur suddenly and must be responded to. An example was Microsoft’s ability, supported by its forward-looking research, to respond effectively to Bill Gates’s exhortation to attend to the evolving Internet.

There are several norms in research at MSR. It conducts a broad spectrum of research in computer science and other fields, including social sciences. Research is bottom-up. Researchers, not managers, develop the projects and programs. The job of the research manager is to hire the best people and support them. Research is collaborative, both within the organization and with external collaborators such as product groups and academia.

MSR maintains a flat management structure as much as possible. This affects the size of the organization—the span of control at MSR is 70 people, so the management leader can interact individually with each staff member. Larger organizations can adopt this approach by breaking the research organization into smaller groups.

Work at MSR is “open”—most work is published. This is key for basic research and needed for effective collaborations. A research organization cannot reap from the community without participating in it. Software is based on intellectual property. MSR seeks patent protection for defensive purposes.

MSR is modeled after academia, with academic principles applied to this corporate computer science department, which is larger than its counterparts at academic institutions. MSR seeks to publish at the right time, with emphasis on the quality of publications rather than their quantity.

MSR applies formal assessment mechanisms. Mechanisms include peer-reviewed publication: how work is reviewed by external peers and whether it is published in the best venues (journals and conferences). External awards and participation on program committees and editorial boards are considered as vehicles that create publications and show leadership in the field. A corporate annual review is also conducted; its mechanisms are designed to serve the larger corporation and are not focused on research. These annual reviews include feedback from peers within MSR, the rest of Microsoft, and externally, generally with attribution of the feedback comments. Formal “stack ranking” that compares researchers at similar career levels is performed, although it is sometimes tantamount to comparing trees with rocks—corporate interests generally focus on shorter terms than do those of researchers. Researchers need to have faith that the corporation will support them even though their work does not often map to the corporate evaluation cycles. The assessment process must be tweaked by the manager to consider the expected time frame for each researcher. The assessments also include formal one-on-one meetings of staff with the MSR manager.

Less formal assessment mechanisms are also applied. Mentors provide feedback unencumbered by financial considerations. Mentors are internal, more senior researchers who provide career development guidance to mentees. Mentors draft the annual performance review, which is reviewed by the organization’s manager. Weekly all-hands meetings include technical talks; everyone provides a technical talk once per year. These talks are highly interactive, with discussion of innovative ideas preferred.

The hiring process is also collaborative, and it is considered the most important thing done in a research organization. Across-the-lab interviewing is conducted to cover all of the relevant technical areas; it assesses the breadth and depth of a candidate’s interests and capabilities. Committees-of-the-whole involve the entire MSR organization in hiring discussions. Management-by-walking-around is a key element of informal assessment, which should not be confined to annual review.

MSR evaluates what gets done, how it gets done, and when it gets done. In assessing “what gets done,” MSR looks at whether individuals and teams are advancing the field (e.g., through publication and professional service); helping Microsoft (e.g., through technology transfer, consulting, or spending time embedded within a product organization); and ensuring the future (e.g., through strategic engagement with product divisions and other parts of the corporation that consider strategy).

In assessing “how it gets done,” MSR evaluates collaboration (with the understanding that the best research usually involves collaboration); individual initiative; and success at mentoring others. In assessing “when it gets done,” MSR evaluates time to expected payoff and appropriate milestones. MSR recognizes that most deadlines are self-imposed but need some sense of when payoff can be expected. Ultimately management must decide whether to continue or stop a project or program.

In aggregate, MSR assesses the impact of research on the field of computing and on Microsoft Corporation.

Presentation by Gilbert F. Decker:
Concepts for Assessment of R&D Organizations

Gilbert Decker introduced his presentation by noting that industry and government, and to an extent academia, quite often use the term “research and development,” although in reality these are two somewhat different but related functions. It is constructive to discuss research and development and to show the differences in the assessment of each. The New Oxford American Dictionary defines research as both a noun and a verb. The noun is defined as “the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.” Then, of course, the verb is defined as “the act of conducting research.” The term “development” is best defined here as “the creation of a new or refined product or idea.” Said another way, research is usually based on a hypothesis and on the design of an experiment or analyses to confirm or disprove the hypothesis; development is the creation of a product.

Having defined terms, it is useful to discuss the assessment of development programs, followed by a discussion of the assessment of research programs and then a comparison of the two. Usually the objective of a development program is to produce a product that has applications and can be sold in the market and/or used by mission organizations, such as the military, to accomplish needed functions. The beginning of a development program, to be successful, requires fairly rigorous descriptions of the functions that the product must perform and specifications that drive the design. Consequently, the assessment of the quality and success of a development program is based on some definable metrics. Further, processes of managing the development program also need to be well defined, although they are not always well executed. One very key factor in the decision to begin a development program is the maturity of the technology. If the program is not based on known facts and conclusions established from proven research results, it is clearly unwise to pursue it. If one believes in the program, but it is not ready, it is best to recycle it back into applied research. Additionally, a development program usually has fairly rigorous cost estimates to which it tries to adhere.

So, the assessment of a development program has fairly straightforward metrics, based primarily on two parameters: how well the program functions compared to its intended functions and specifications, and how well it adheres to its cost estimates. Both of these assessment criteria can be measured by means of a well-designed test program.

Research is often described in two categories: basic research and applied research. A descriptive definition of “basic research” is that which is carried out to increase understanding of fundamental principles. It is not intended to yield immediate commercial benefits; basic research can be thought of as arising out of curiosity. However, in the long term, it is the basis for many commercial products and for applied research. Thus, basic research is a quest for knowledge and understanding. A descriptive definition of “applied research” is that which is designed to solve actual problems, rather than to acquire new knowledge or theory for its own sake. The objective of applied research is to define new concepts that are based on knowledge, theory, and understanding from basic research.

There is no sharp dividing line between basic and applied research. Both categories are often carried out in research organizations, such as at research universities and in government laboratories. Often, the results of basic research become evident, and the research team may morph into an applied research program.

There are two dimensions in assessing research and development activities: quality and relevance. Assessing quality is the focus of evaluating the technical merit of science and engineering work. The following issues are germane to the consideration of both quality and relevance: the adequacy of the resources available to support high-quality work; the effectiveness of the organization’s delivery of the services and products required to fulfill its goals and mission and to address the needs of its customers; the degree to which the organization’s current and planned R&D portfolio supports its mission; and the elements of technical management that affect the quality of the work.

As with development programs, metrics are required to assess the quality and relevance of research programs. Such metrics for research programs are not uniformly numerical measurements; if numerical scoring is desired, the assessment may require that judgments be made by skilled and experienced people, including peers and research managers.

In the case of basic research, relevance should not be weighted as heavily as quality. That is simply because of the definition and goal of basic research: the quest for new knowledge and understanding for its own sake. One should focus on quality of the research organization itself and also on the quality of the individual research programs. Assessments of applied research should address both relevance and quality.
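
The weighting principle in this paragraph can be written as a one-line formula. In the sketch below the weights and the 1-to-5 reviewer scores are invented; the text prescribes only the qualitative rule that relevance counts for less when the work is basic research.

```python
# Invented weights expressing the rule that relevance is weighted less heavily
# for basic research than for applied research; scores are on a 1-to-5 scale.
WEIGHTS = {
    "basic":   {"quality": 0.8, "relevance": 0.2},
    "applied": {"quality": 0.5, "relevance": 0.5},
}

def program_score(kind, quality, relevance):
    """Combine reviewer judgments of quality and relevance for one program."""
    w = WEIGHTS[kind]
    return w["quality"] * quality + w["relevance"] * relevance

# A curiosity-driven project with high quality but little near-term relevance
# still scores well when judged as basic research.
print(f"{program_score('basic', quality=4.5, relevance=2.0):.2f}")    # 4.00
print(f"{program_score('applied', quality=4.5, relevance=2.0):.2f}")  # 3.25
```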

The candidate metrics for assessing the quality and relevance of basic and applied research, presented below, are drawn in large part from two studies: Managing Air Force Basic Research6 and Improving Army Basic Research.7 Gilbert Decker was a member of the committee that conducted the former study and chaired the committee that conducted the latter study.

Metrics for assessing the technical merit and quality of the science and engineering work include the following (a simple self-check against several of these thresholds is sketched after the list):

•  Membership in professional societies. A high percentage (perhaps at least 75 percent) of the researchers of the organization should be members of professional organizations.

•  Memberships in the National Academies and/or high recognition by professional societies. A very low percentage would be indicative of overall inadequacy of staff.

•  Number of members of the research staff who have been awarded the National Medal of Science. There should certainly be at least two or three who have received this award. This is probably a good indicator of the overall quality of the research staff, the theory being that winners of these medals would not be part of a low-quality research organization.

•  Whether the research organization maintains a database of research projects, findings and results. This database should also include lists of publications in refereed journals, citations, and awards. It should also include, based on the findings and results, an assessment of lessons learned by the researchers on each project. Lessons learned from failed projects can be as valuable as those learned from successful projects. Such a database could enable an assessment by funding organizations and scientific peer-review committees of the quality of basic research as well as applied research.

•  Percentage of research staff members with doctorates and/or postdoctorate fellowships. Something greater than 50 percent is the metric for recognized high-quality research organizations.

•  The balance between internally sponsored basic and applied research funding and customer funding that seeks applied research and/or engineering support to address specific problems. High-quality research organizations have 10 to 15 percent of their total funding in internally generated basic and applied research projects; another 25 to 30 percent is devoted to applied research funded by external organizations looking for concept solutions to problems; the remainder is allocated to scientific and engineering support to advanced development programs. When the internally generated basic and applied research effort falls significantly below 10 percent, the overall quality and stature of the research organization diminish significantly.
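
The thresholds in the list above lend themselves to a simple self-check, sketched below with invented staffing and funding figures.

```python
# Self-check of invented organizational figures against several of the
# candidate thresholds listed above.
org = {
    "staff_total": 400,
    "staff_in_professional_societies": 320,
    "staff_with_doctorates_or_postdocs": 230,
    "internal_research_funding_pct": 12,   # internally generated basic/applied research
    "external_applied_funding_pct": 27,    # applied research funded by external customers
}

checks = [
    ("at least 75 percent belong to professional societies",
     org["staff_in_professional_societies"] / org["staff_total"] >= 0.75),
    ("more than 50 percent hold doctorates or postdoctoral fellowships",
     org["staff_with_doctorates_or_postdocs"] / org["staff_total"] > 0.50),
    ("internally generated research funding in the 10 to 15 percent band",
     10 <= org["internal_research_funding_pct"] <= 15),
    ("externally funded applied research in the 25 to 30 percent band",
     25 <= org["external_applied_funding_pct"] <= 30),
]

for label, ok in checks:
    print(("PASS" if ok else "FLAG"), label)
```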

The following are management functions: providing the resources available to support high-quality work, effectively delivering the services and products required to fulfill the organization’s goals and mission and to address the needs of its customers, and maintaining a current and planned R&D portfolio that supports the organization’s mission. Generally, managers or directors of research organizations should be scientifically trained personnel with research experience. There are certainly exceptions, but they are few; directors must genuinely understand and believe in basic and applied research. The manager must be capable of assessing and prioritizing research proposals from the research staff and from external funding sources and must also ensure that the aforementioned database is rigorously maintained. The manager must also be the marketing leader in finding research grants for internally generated basic and applied research proposals; must support customers who are seeking solutions to problems using applied research; and must diligently push for balance between internally generated basic and applied research proposals, customer-funded basic research proposals, and customer-funded advanced engineering support. Non-academic organizations (e.g., corporations and government organizations) that include R&D organizations productive in both research and engineering are most successful when the R&D director or manager reports to the overall leader of the organization.

________________

6 National Research Council, Managing Air Force Basic Research, Washington, D.C.: National Academy Press, 1993.

7 G. Decker, J. Davis, R.A. Beaudet, S. Dalal, and W.H. Forster, Improving Army Basic Research: Report of an Expert Panel on the Future of Army Laboratories, Santa Monica, Calif.: RAND Arroyo Center, 2012.

The following ideas for assessment processes might be considered (the first two are sketched in code after the list):

•  The research organization should conduct internal assessments of about 25 percent of basic and applied research projects, randomly selected each year, using the assessment criteria.

•  The research organization should have a “blue ribbon” external panel of scientific experts to assess the internally reviewed projects with respect to the assessment criteria. The external review should be compared with internal assessments, using the same criteria.

•  The research organization should solicit feedback from external customers who fund projects. The solicitation for feedback should be oriented around customer satisfaction. The feedback survey should be a tailored version of the assessment criteria.

•  The research organization should review the research staff composition annually with regard to the quality of staff and mix of staff expertise.
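
The first two ideas, randomly sampling roughly 25 percent of projects for internal review and then comparing internal and external scores on the same criteria, are sketched below. The 25 percent fraction comes from the text; the project list, scoring scale, and scores themselves are invented.

```python
# Sketch of the first two process ideas: draw a roughly 25 percent random sample
# of projects for internal assessment, then compare internal scores with an
# external panel's scores on the same criteria. All data are invented.
import random
from statistics import mean

projects = [f"project-{i:02d}" for i in range(1, 41)]            # 40 active projects
random.seed(2012)
sample = random.sample(projects, k=round(0.25 * len(projects)))  # ~25 percent per year

# (internal score, external-panel score) on a shared 1-to-5 scale for each sampled project.
scores = {p: (round(random.uniform(2, 5), 1), round(random.uniform(2, 5), 1))
          for p in sample}

print(f"sampled {len(sample)} of {len(projects)} projects")
print(f"mean internal score: {mean(s[0] for s in scores.values()):.2f}")
print(f"mean external score: {mean(s[1] for s in scores.values()):.2f}")

# Large gaps between the two reviews flag projects that deserve a closer look.
for project, (internal, external) in sorted(scores.items()):
    if abs(internal - external) >= 1.0:
        print(f"discrepancy on {project}: internal {internal} vs. external {external}")
```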
