
Best Practices in Assessment of Research and Development Organizations (2012)



Appendix K

Examples of Peer Review Conducted at Federal R&D Organizations

In addition to the reviews conducted at the National Institute of Standards and Technology (NIST), the Army Research Laboratory (ARL), and Sandia National Laboratories (SNL), described in Appendix J, peer reviews are conducted at other Department of Defense (DOD) laboratories and at laboratories of other federal agencies, either owned and operated by the government or contracted by the government to private entities. This summary of peer reviews conducted at other laboratories is taken from the report Strengthening Technical Peer Review at the Army S&T Laboratories.1 Cozzens and coauthors also provide a summary of the assessment processes at NIST, ARL, the Department of Energy (DOE), the Environmental Protection Agency (EPA), the National Institutes of Health (NIH), the Naval Research Laboratory (NRL), and the Agricultural Research Service (ARS), noting that relevance and quality are key assessment foci across these organizations.2 A 1999 report of the U.S. General Accounting Office summarizes the peer-review practices of the following organizations, noting significant variance among them in the amount and type of peer review applied: ARS, the Forest Service, NIST, the National Oceanic and Atmospheric Administration (NOAA), DOE, EPA, NIH, the U.S. Geological Survey (USGS), NASA, the National Science Foundation (NSF), and the Federal Aviation Administration (FAA).3 Strauss and Loper describe the assessment processes applied at the ARS, noting that the process has resulted in measurable improvements in the research.4

One type of review is that conducted by the National Research Council (NRC) of the National Academies. The NRC has boards that form and oversee study groups for all three military services; these boards may perform in-depth technical reviews.

The Defense Science Board (DSB) has reviewed the DOD laboratories, not in terms of specific technical work but in terms of DOD's policies for the laboratories: a higher level of review.

One consideration for external review is that its operation may be governed by the Federal Advisory Committee Act of 1972 (FACA; Public Law No. 92-463). Under the FACA, meetings must be public, members of the public may make presentations to the committee, and members are appointed only with the concurrence of agents of the President (typically the General Services Administration or agency heads). These requirements would have infringed on the traditional operations of the National Academies; after consideration, the FACA was amended in 1997 to exempt the Academies from many of its requirements.

The services have traditionally convened their own advisory committees to look at technology issues; examples include the Army Science Board, the Air Force Scientific Advisory Board (AFSAB), the Naval Studies Board, and the Naval Research Advisory Committee. These groups differ in their operations: some are involved in detailed technical studies, while others restrict themselves to policy studies. All of them come under the FACA (with modified FACA rules applied to the Naval Studies Board). On occasion, ad hoc advisory committees may be set up, often at the direction of the U.S. Congress. Congressional committees, the Government Accountability Office, and even the Library of Congress have also conducted policy studies of the laboratories.

Interviews with senior officials at a number of federal laboratories have revealed several different models for peer review. Some are formal review panels that come under the FACA; others are formal but are managed internally by the service or the laboratory being assessed.

An example of a review managed by a service is that by the Air Force Scientific Advisory Board. The AFSAB, a formal Federal Advisory Committee, reports to the Air Force Chief of Staff and the Secretary of the Air Force. Among other duties, the AFSAB reviews in depth the scientific work of the Air Force Research Laboratory (AFRL) on a 2-year cycle. The review covers the 6.1 through 6.3 work as well as work sponsored by downstream users, such as program managers or executive officers. One panel has been created for each AFRL directorate. Panel members are selected by the AFSAB, and all are external to the Air Force. Most are members of the AFSAB, but when necessary the AFSAB brings in consultants. At least one member of each panel must be a member of the National Academies. The AFSAB conducts reviews of each directorate, devoting a full week to each. The panels provide an exit briefing and a formal report. The reports are sometimes made available to the public, but some may be restricted (for official use only). The panels examine four factors for the programs: technical work, relevance for the near term (5 years), future impacts, and resources. When evaluating 6.1 programs, the AFSAB does not look at near-term impacts. For the technical work, it evaluates technical innovation, technical rigor, productivity, and collaboration.

Two external groups have been established to look at various technical topics for the Navy: the Naval Research Advisory Committee and the National Research Council's Naval Studies Board, created at the request of the Chief of Naval Operations. Both groups consider the impact of technical developments on the future of naval forces. For detailed technical review of its technical-base research programs, the Naval Research Laboratory establishes and manages its own peer review. The NRL has seven focus areas: materials and chemistry, electronics, battlespace environment, undersea warfare, electromagnetic warfare, space research/space technology, and information technology. The technical review of the technical base covers about one-third of the total research program each year. The NRL selects the members of the external review panel for each focus area. A panel typically has four to six members drawn from academia and elsewhere, along with at least one member of the National Academies; the NRL asserts that these members are unbiased. The panels meet for 2 to 4 days, with time for immersion in the laboratories and discussion with the staff. The panels give exit briefings to NRL management and subsequently submit a formal written report of about 10 to 15 pages, to which the NRL responds in writing. The panels evaluate the programs for scientific merit: they examine the research approach, the credentials of the staff, a project's relevance, equipment, and costs. These details differ for 6.1 and 6.2 programs: for a 6.1 program, the panel looks for work that seeks to expand the frontiers of known science; for a 6.2 program, the panel looks at whether the NRL is investigating and developing recent advances in science and technology. The panels use seven categories of evaluation, each containing metrics. For customer-funded work, the criteria used are those of the customers.


The National Institutes of Health conducts peer review through 19 formal advisory committees, termed boards of scientific counselors, one for each of the institutes. The duties of these boards are described as follows: boards of scientific counselors serve a dual function, providing expert scientific advice to scientific directors regarding particular employees and projects, and providing the NIH as a whole with an assessment of the overall quality of its intramural efforts. The Committee Management Office at NIH tracks these and many other NIH advisory committees and stays in contact with the FACA office of the General Services Administration. The NIH process is the most formal type of quality review discovered in interviews for this study.

The national security laboratories of the Department of Energy, namely, Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL), are government-owned, contractor-operated facilities under the DOE. Interviews at these three laboratories revealed very similar processes for peer review. The National Nuclear Security Administration (NNSA) contracts for LANL and LLNL explicitly call for regular peer review, and Sandia's contract contains similar language. LANL and LLNL are operated by a consortium of the University of California and the Bechtel Corporation, with additional partners from the Babcock and Wilcox Company and Washington Group International. Two limited-liability corporations have been created to manage the contracts: Los Alamos National Security, LLC, and Lawrence Livermore National Security, LLC. The two LLCs share a joint board of governors, which has a Subcommittee on Science and Technology (the S&T Committee) that oversees and controls peer reviews of S&T at both laboratories. The S&T Committee appoints the members of the peer-review panels upon nomination by the laboratories. The panels are independent and balanced, and each provides both review and critique. The panels have 8 to 10 members drawn from academia (about half), industry, and other laboratories, including some University of California faculty and some staff from other national laboratories. The operation of the panels does not come under the FACA.

Los Alamos National Laboratory conducts three kinds of review. The first is strictly concerned with the quality of scientific activity and is focused on capability areas rather than on scientific disciplines; typically, these capability areas, such as weapons science and information science, cut across the discipline areas. The other two kinds of review address weapons design and customer programs. The quality reviews cover eight capability areas per year, so each capability area is reviewed every 3 years; the review panels meet for 3 or 4 days. (In earlier years these reviews were run strictly in-house and covered only scientific disciplines, not capabilities.) The panels also look at the adequacy of the laboratory infrastructure, the morale of the staff, and the research environment. The design reviews are done internally by DOE weapons design teams; the customer reviews look at quality, relevance, and performance against the mission. Customer reviews are set up by the laboratory, subject to approval by the board of governors' S&T Committee.

Lawrence Livermore National Laboratory follows a similar assessment process. At LLNL there are four principal disciplines and three major program areas, all of which are reviewed by peer panels. In addition, LLNL conducts cross-cutting reviews of portfolios (for example, the National Ignition Facility) involving more than one of the seven areas. The panels consist of 10 to 15 members drawn from academia, industry, and other laboratories. Members are selected by the LLNL directorates and vetted by the board of governors' S&T Committee; selection factors include diversity on the panels and turnover of membership. The panels usually include a member from LANL and an observing auditor from the NNSA. Panel reports contain both critique and advice. Verbal exit briefings are given to the head of the unit under review, as well as to senior managers and the Director of LLNL. Reports of the panels are circulated within DOE but are restricted.

Sandia National Laboratories is operated under a contract between the NNSA and the Sandia Corporation, a wholly owned subsidiary of the Lockheed Martin Corporation. There are six Lockheed Martin employees on the Sandia Corporation board of directors, and the corporation's S&T Committee is chaired by the chief technical officer of Lockheed Martin. About half of the more than $2 billion budget at Sandia National Laboratories is for S&T. Peer review at Sandia is divided between S&T and nuclear weapons work. For S&T, the reviews are conducted under contract with the University of Texas (UT), which holds two positions on the Sandia Corporation board. Under the supervision of the board's Subcommittee on S&T, UT selects and convenes the review panels; both Sandia management and the Sandia Corporation have input into the final selections. Each panel has 6 to 12 members from various disciplines, including the physical sciences, computation, electronics, and materials. UT draws some panel members from its own faculty; others are drawn from other sources. There is an external panel for each area of scientific and technical competence. Panels meet for multiple-day sessions and cover individual projects and programs. They meet with staff members to assess morale and the research environment, and they may also meet with groups of principal investigators at the program level, as well as with individual project leaders. The review results are reported to laboratory management and to the board of directors' S&T Committee. Principal investigators receive the reports and must respond to panel critiques.

Nuclear weapons-related peer review at Sandia includes three kinds of internal review: design reviews, management reviews, and internal peer reviews (with members drawn from entities other than the design team under review). These reviews are assisted by a full-time office of assessment that reports directly to the laboratory director; the assessment staff is internal but separate from the program areas. There is also a standing panel for external independent review of these Sandia assessments. These assessments are a critical part of the regular certification of the U.S. nuclear weapons stockpile.

In 2010, the Congress mandated that the NNSA contract with the National Research Council to conduct a review of the quality of the science and engineering at LANL, LLNL, and Sandia, as well as of their management. That review is ongoing at the time of the publication of this report.

Entities that award grants, such as the NSF, the Army Research Office (ARO), the Office of Naval Research (ONR), and the Air Force Office of Scientific Research (AFOSR), do not operate laboratories and therefore conduct quality reviews in a different manner. The NSF is overseen by the National Science Board, which reports to the Congress. Each NSF directorate is monitored by a formal advisory committee that meets regularly to review performance. Periodically, under the auspices of the committee, the grant folders are reviewed to ensure that procedures have been followed. Grant proposals are sent out to experts for evaluation, and the folders are subsequently evaluated by NSF staff before an award decision is made. For work in progress, grantees are visited on-site by NSF program managers. Quality is judged by these reviews, by regular reports, and by examination of publications. The ultimate indication of how well a grantee is doing is the renewal or termination of the grant; this is true for all of the granting agencies.

The ARO conducts two kinds of peer review of single-investigator (SI) proposals for new work. One review evaluates technical merit; typically the proposal is sent to external reviewers, mostly university faculty. The other review focuses on military relevance and is done by Army and DOD scientists and engineers. The SI grants are typically for 3 years, and two or more site visits are usually conducted during this time. Program managers attend grantees' presentations at scientific meetings and gauge audience reaction. The ARO receives formal annual reports and copies of all publications by its grantees. The ARO divisions are evaluated biennially by external boards of visitors, one for each division. The boards look at the overall portfolios, evaluating the strategic direction of the divisions and looking out for overlap with other programs in the DOD or elsewhere.

The AFOSR, a directorate within the AFRL, manages the 6.1 funds for the Air Force. Funding executed by AFRL internal research directorates is evaluated by the AFSAB during the biennial reviews of those directorates. The AFOSR as a whole is reviewed by the AFSAB every 2 years.

For a number of years, the Director of Defense Research and Engineering conducted Technology Area Reviews and Assessments (TARA) that covered DOD basic research programs. Representatives of the service laboratories and operating commands participated in these reviews, along with outside experts. TARA reviews are no longer conducted.

Clearly there is no single, accepted best way to conduct peer review of federal S&T laboratories.

____________________________

1 J. Lyons and R. Chait, 2009. Strengthening Technical Peer Review at the Army S&T Laboratories. National Defense University, Washington, D.C.

2 S. Cozzens, B. Bozeman, and E. Brown, 2001. Measuring and Ensuring Excellence in Government Laboratories: Practices in the United States. Canadian Council of Science and Technology Advisors, Ottawa, Canada.

3 U.S. General Accounting Office, 1999. Peer Review Practices at Federal Science Agencies Vary. GAO/RCED-99-99. U.S. General Accounting Office, Washington, D.C.

4 M. Strauss and J. Loper, 2012. Peer Review of Prospective Research Plans at the USDA Agricultural Research Service. U.S. Department of Agriculture, Agricultural Research Service, Washington, D.C.

