At the conclusion of the workshop, a rapporteur from each of the six afternoon discussion groups provided an oral summary of the group’s discussion. The summary comments are organized below according to elements of the statement of task for the Panel for Review of Best Practices in Assessment of Research and Development Organizations. (The panel’s statement of task is presented above on page 1 of this report.) The following summary comments are phrased as a set of questions that might be considered during assessments.
Organizational Context: Does the Organization’s Current and Planned Portfolio
Align with Its Mission, and Are the Organization’s Plans and Strategies Aligned
with the Needs of Its Customers and Stakeholders?
In their presentations, John Sommerer, Roy Levin, and J. Stephen Rottler highlighted the importance of recognizing and addressing the context within which a given R&D organization exists. James Turner emphasized that assessments should include measurement against organizational goals and intentions. Sommerer emphasized the importance of a clearly articulated vision of what the parent organization is trying to achieve according to established milestones. Gilbert Decker emphasized that the following are key management functions: providing the resources needed to support high-quality work, effectively delivering the services and products required to fulfill the organization’s goals and mission and to address the needs of its customers, and maintaining a current and planned R&D portfolio that supports the organization’s mission. William Banholzer noted that an industrial organization must remain mindful of three questions: What do people want? What will people pay for? What can they afford? Banholzer also noted that assessments should consider the importance of R&D time frames and should address the question, Who are the customers and stakeholders, and how do they define success? Workshop discussants also identified the following questions:
• Does the assessment reflect understanding of the principle that context is, indeed, fundamentally important, and that the whole organization should be assessed, not only individual programs, projects, units, or people?
• Is the organizational environment created to produce outcomes?
• What is the organization’s definition of success?
• What are the appropriate time frames for research and development efforts being assessed?
• Is the assessment in synchrony with changing time frames?
• How is the relationship between the organization and its customers and stakeholders being addressed?
• Does the agency that funds the research have an appreciation for research?
• What is the level of direct interaction with customers?
• Are there mechanisms in place to cut stagnant and unnecessary programs in order to prevent the dilution of the quality of more important programs?
• How should the impact of programs in basic research be assessed?
• How should innovation be assessed?
• How should spin-offs and transitions be measured?
• How are publication citations and patents—measures of the transition of knowledge—assessed?
• Are assessment metrics focused on outcomes as well as on activities?
• Are both historical impact and predictions of future success considered?
J. Stephen Rottler emphasized that there has been a need within his organization to shift from quantitative to qualitative assessment informed by data. Workshop discussants identified the following questions:
• Is there external oversight of the assessment, even if the assessment is not being conducted by an external review board?
• Is there a strong internal review to ensure that the product is not trivial before being submitted to external review? Is this applied to publications (especially for scientific publications) as well as to programs?
• To allow for candor without worry about giving offense or meeting with reprisal, especially in small scientific communities, do external reviews include processes to preserve anonymity?
• Are the terms of external review board members appropriate (generally between 3 and 5 years)?
• Does the review team have a balance of expertise and backgrounds?
• Do the review team members have good community reputations?
• Does the chair of the review team show good judgment?
• Has a clear tasking charge been provided for the assessment team?
• Has the audience for assessment reports been identified?
• Have mechanisms for both formal and informal communication of assessment findings been established?
• Is benchmarking included in the assessment as a useful means for assessing process factors—that is, how things are accomplished within the organization?
Each of the six presenters examined elements of technical management that affect the quality of an organization’s scientific and technical work. Workshop discussants identified the following questions:
• Does the organization document processes and outputs when metrics are desired? Are the metrics gathered appropriate for examining data and trends to inform decisions about actions to take?
• How do organizational decision makers make judgments based on assessments of risk?
• Is a record kept of anecdotes, which often communicate accomplishments better than quantitative metrics such as publications and patents?
• Is the organization’s management assessed? Does the assessment ask staff how well management is performing? Are the senior managers technically competent? Does the organization have in place mechanisms to remove pathological managers who will not hire individuals more competent than themselves?
John Sommerer noted that a vision without resources is a hallucination. He and Roy Levin emphasized that human capital is a fundamental resource and that innovation requires that researchers be given some latitude and discretion with respect to their projects. Workshop discussants identified the following questions:
• How well does the organization support the education and development of staff?
• How well does the personnel selection and assessment system provide and maintain good performance? Are there any constraints on the system (e.g., constraints imposed within the federal context)?
• Are incentives in place to recognize individuals and teams?
• Are there efforts within the organization to seek external recognition?
• Does the organization promote teamwork? Does management connect with the team to discover talent, through social networking and an open-door policy?
• Does management communicate with junior researchers?
• As a measure of teamwork, are common cross-discipline terminologies used by the staff and the management?
• Do the staff members possess both technical and social skills?
• As a measure of organizational flexibility, how do the initial academic degrees of staff compare with their current work tasks?
• How does the organization inspire stellar performers? Does the organization have a rigorous and transparent process of rewards and acknowledgments in which the staff have confidence and faith?
• Does the organization have mechanisms for moving aside ossified individuals and those who block the performance of others—mechanisms whereby nonperformers can be flushed out?
• Are “wild ducks” (brilliant oddballs) identified and embraced?
• Are rewards other than money available to staff?
• Is there a mentoring system that creates a direct interaction between mentor and mentee in order to enhance productivity?
• If the organization applies a forced-ranking system to compare the goals and productivity of individual employees against those of others, is the size of the organization adequate to allow meaningful comparison of similar individuals?
John Sommerer noted that the intersection of vision, people, and alignment is addressed by an examination of the organization’s agility, flexibility, and adaptability in the face of changing pressures, budgets, and external contexts. Roy Levin emphasized that the rationale for the existence of a research organization is to provide the capacity to deal quickly with the unknown and unexpected. He noted that disruptive technologies, new competitors, and new business models can occur suddenly and must be responded to. Workshop discussants identified the following questions:
• Does the assessment reflect the differences in context for federal, industrial, academic, and national laboratory settings, which involve very different customers, stakeholders, missions, and goals?
• What can be learned from assessment methods applied in other countries?
• Does the organization foster participation in the global R&D community (e.g., by providing resources to prepare publications and attend conferences)?