Peer Review Process

The peer review process can be broken down into five general steps (Figure 1):

  1. selection of projects to be reviewed;
  2. definition of the objectives of the peer review and selection of specific review criteria;
  3. selection of peer review panel;
  4. planning and conducting the peer review; and
  5. use of peer review results in decision making.

In order for a peer review process to be credible and effective as a whole, each of these steps must be performed following well-defined procedures that are understood and accepted by everyone involved with the peer review program.

Figure 1

Flow-chart showing major steps of a peer review process.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





BOX 1
An Overview of OST's Peer Review Program

The peer review program is managed within OST by the Peer Review Coordinator. The peer reviews are conducted by the American Society of Mechanical Engineers (ASME) with administrative and technical support provided by the Institute for Regulatory Science (RSI). Funding for the review program is provided directly to RSI through a $1.2 million annual grant from OST.

OST focus area (FA) and crosscutting area (CC) program managers7 initiate the peer review process by making written requests for peer reviews to the Peer Review Coordinator. The FA/CC program managers are responsible for developing a prioritized list of technologies to be reviewed, providing documentation to reviewers, preparing responses to review panels' Consensus Reports, and covering the cost of FA/CC program personnel and material needed for the peer reviews. The FA/CC program managers are the main decision makers regarding the funding of technology projects within OST. Each project's principal investigators (PIs) are responsible for providing relevant project information for the peer reviewers, including written background documentation and oral presentations to the review panels, if needed.

The Peer Review Coordinator is the principal federal official responsible for managing the day-to-day activities of the program. Specific responsibilities of the coordinator include receiving, processing, and scheduling peer review requests from FA/CC program managers; coordinating peer review activities among FA/CC program managers, the ASME Peer Review Committee (see below), and the review panels; ensuring that reviews are executed in a timely manner; ensuring that FA/CC program responses to review recommendations are included in Final Reports; and managing the budget and records for OST.

The ASME Peer Review Committee is a standing body of the ASME whose sole purpose is to oversee OST's peer review program and enforce relevant ASME policies, including compliance with professional and ethical requirements. The Peer Review Committee is responsible for appointing members nominated by the Executive Panel (see below) to individual peer review panels and for assessing Interim Reports for conformity to ASME standards before they are issued as ASME-sanctioned public documents. The Executive Panel of the Peer Review Committee (EP) consists of three to five ASME members who have served in leadership positions within ASME and is responsible for overseeing the day-to-day operation of the peer review program.

Peer review panels consist of three or more technical experts chosen for their knowledge of the specific technology to be reviewed. Although ASME requires the peer review panel chair to be a member of ASME, other panel members need not be ASME members. The peer review panel conducts the review, prepares a Consensus Report detailing its recommendations and observations, and transmits the written report directly to the sponsoring FA/CC program manager.

7 Focus areas were created by EM to assist in management and coordination of its technology development activities. The four focus areas, which are based on EM's major problem areas, are mixed waste characterization, treatment, and disposal; radioactive tank waste; subsurface contamination (formerly two focus areas: contaminant plume containment and landfill stabilization); and decontamination and decommissioning. In addition, EM created several crosscutting areas to coordinate activities that apply to multiple focus areas: robotics; efficient separations; characterization, sensors, and monitors; industry and university programs; and technology integration.

The Institute for Regulatory Science provides administrative and technical support for peer review activities. RSI's responsibilities include meeting planning, the compilation and distribution of background materials for members of review panels, facilitating peer reviews, and providing an executive secretary to assist in the preparation of Consensus Reports of peer review panels.

The size and scope of each review panel depend on the specific technology(ies) being reviewed and the areas of expertise required to address the review criteria. OST classifies specific peer reviews as Level I, II, or III, as defined below:

  Level I: multi-technology review of a complex nature involving site visits (five or more reviewers);
  Level II: in-depth review of a technology involving site visits (three or more reviewers); and
  Level III: review of a document requiring no site visit (i.e., a mail review).

The following sections provide a general analysis of OST's peer review program based on the committee's review of OST documents, presentations by DOE staff, observations of recent peer reviews, and the committee's assessment of peer review reports. The committee recognizes that OST's peer review program is in its early stages of implementation and therefore offers these comments and recommendations for OST's consideration as it continues to develop and improve its program. For the reader's reference, Boxes 1 through 9 introduce relevant aspects of OST's current peer review program and the OST organization and are based on OST's descriptions of the program. A complete description of OST's peer review program can be found in Appendix A, from which the text of most of the boxes is extracted. An overview of OST's peer review program is presented in Box 1.

Selection of Projects for Peer Review

An effective project selection process ensures that projects are prioritized on the basis of criteria such as program relevance, cost effectiveness, and chance of implementation (Cole, 1991; McCullough, 1989). Box 2 presents OST's process for selecting technologies to be peer reviewed. Seventeen technologies were peer reviewed by the OST program between October 1996 and May 1997, and thirty additional projects are scheduled to be peer reviewed through September 1997. Table 1 summarizes the stages in OST's technology-development process at which these peer reviews took place; it also shows the total number of active projects within OST. Because of the relatively low rate of peer reviews conducted under this program relative to the large number of active projects, OST currently funds a large number of technologies that have never been peer reviewed. Many of these projects are in the later stages of development.

BOX 2
OST's Process for Selecting Technologies to be Peer Reviewed

OST policy requires that a peer review be conducted on each technology/system before it passes through Gate 4 of OST's Technology Investment Decision Model (TIDM) process (see Box 3 and Appendix B for descriptions of the TIDM). Specifically, OST policy states that

  a technology peer review will be conducted on each technology/system. However, FA/CC Program Managers will give priority attention to those technologies that are preparing to pass through Gate 4 of the OST Technology Decision Process Procedure (Version 7.0) [DOE, 1997c]. Specifically, this procedure states that technical peer review reports are one of the documents required to pass through Gate 4. In addition, peer reviews may be appropriate at other phases of the technology maturation stages. (DOE, 1997b, p. 3)

The actual decisions to conduct peer reviews of specific technologies are made by FA/CC program managers in response to five main "drivers": (1) funding, (2) changes in status (e.g., when a project approaches a gate in the TIDM process), (3) regulatory issues, (4) stakeholder concerns, and (5) any technical or other issues that arise concerning a given technology. Currently, there is no OST-wide process for selection of technologies to be reviewed, although the need for such a system has been recognized by OST staff.8

TABLE 1 Status of Peer Reviewed Technologies in the Stage-Gate Model

  Gate (see Box 3)             1     2     3     4     5     6     Unassigned
  Peer Reviews Completed(a)    0     5     0     3     2     6     0
  Peer Reviews Scheduled(a)    0     1     0     4     6     2     0
  Active Projects(b)           0     2     34    86    111   27    33

  (a) As of April 1997.
  (b) As of May 1997.

8 Presentation to committee by Miles Dionisio, April 14, 1997, Washington, D.C.
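The figures in Table 1 can be tallied to reproduce two numbers cited in the discussion that follows: the total count of active projects and the number that have already moved past Gate 4. A minimal consistency check, with values transcribed from the table (the gate labels are simply the table's column headings):

```python
# Tallies of the "Active Projects" row of Table 1 (as of May 1997).
# "Unassigned" projects have not been assigned a gate.
active_projects = {
    "Gate 1": 0, "Gate 2": 2, "Gate 3": 34, "Gate 4": 86,
    "Gate 5": 111, "Gate 6": 27, "Unassigned": 33,
}

total = sum(active_projects.values())
# Projects past Gate 4 are those now at Gates 5 and 6.
past_gate_4 = active_projects["Gate 5"] + active_projects["Gate 6"]

print(total, past_gate_4)  # -> 293 138
```

The totals match the 293 active projects and the 138 projects past Gate 4 cited in the text.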

BOX 3
The Technology Investment Decision Model (TIDM)

The TIDM is the procedure developed by OST to provide a common basis on which to assess and manage the performance, expectations, and transition of technologies through the development process. It is a user-oriented decision-making process for managing technology development and for linking technology-development activities with cleanup operations.

The TIDM identifies six R&D stages from basic research through implementation of a technology (see Figure 2). The model incorporates decision points (or "gates") within the R&D process where projects are evaluated for funding of the next stage of development. These TIDM gates represent milestones at which peer review might assess a technology's soundness. For each stage, specific criteria, requirements, and deliverables provide a common basis for technology assessment. The "stage-gate" process is meant to guarantee early evaluation of projects against technical and nontechnical criteria to ensure that the technologies will provide superior performance and also will meet the acceptance requirements of the intended customers. The TIDM also addresses the technology transfer and commercialization factors that must be considered to get the technological innovations into the marketplace.

The FA/CC technology leadership is responsible for evaluating all documentation in accordance with the criteria for each gate. If the FA/CC program determines that the technology warrants passing through a gate, the technology maturation process will continue. If the program determines that the technology does not warrant further consideration, then funding is discontinued.

Figure 2
Diagram of OST's Technology Investment Decision Model (DOE, 1997b).
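The stage-gate logic described in Box 3 can be summarized as a simple control flow. This is purely a conceptual sketch with hypothetical names: the TIDM is a management procedure, and each gate decision is a judgment made by FA/CC technology leadership against that gate's criteria, not a computation.

```python
# Conceptual sketch of Box 3's stage-gate process (hypothetical names).
# gate_passed maps each gate number to the leadership's judgment of
# whether the project's documentation satisfies that gate's criteria.

def stage_gate_outcome(gate_passed: dict) -> str:
    """Advance a project through Gates 1-6; funding is discontinued
    at the first gate whose criteria the project fails to meet."""
    for gate in range(1, 7):
        if not gate_passed.get(gate, False):
            return f"funding discontinued at Gate {gate}"
    return "passed all gates: ready for implementation"

# A project judged not to meet Gate 4's criteria stops there:
print(stage_gate_outcome({1: True, 2: True, 3: True, 4: False}))
# -> funding discontinued at Gate 4
```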

OST's decision to require peer reviews prior to Gate 4 is based upon the large increase in funding that typically occurs when a project moves from bench scale to field scale. This is a rational basis for prioritization, and Gate 4 is an appropriate time to schedule a review for projects that have not yet reached field scale. Such a mandate, however, does not constitute a sufficient and systematic approach for selecting and prioritizing the projects to be reviewed. OST's policy to review all technology projects prior to Gate 4 also does not address the large number of technology projects that have already moved past Gate 4 (138 of 293 active projects) and have never been the subject of peer review. The committee believes that these projects should be peer reviewed before being considered for additional funding. The committee is encouraged that a significant number of the peer reviews conducted to date have reviewed technologies that are past Gate 4 (Table 1).

Peer review of technologies beyond Gate 4 is particularly important for those technologies nominated for the Technology Deployment Initiative (TDI, see Box 4). This initiative is designed to aid in rapid deployment of demonstrated technologies that are ready for full-scale application. The probability that this initiative will lead to successful deployment of technologies can be maximized by assuring the technical merit of these technologies through the use of peer review results in the TDI selection process.

BOX 4
EM's Technology Deployment Initiative (TDI)

EM has requested a total of $30 million in fiscal year 1998 for the TDI, an effort that will provide funding for projects proposing immediate application of proven technologies. The TDI is intended to expedite application of remediation technologies at multiple DOE sites to help reduce the EM mortgage and achieve the goals of the Focus on 2006 program. TDI projects are selected by DOE managers through a competitive procurement process. Selection criteria include technical, business, and stakeholder/regulatory criteria. Field offices have primary management authority for TDI projects, with OST serving in an oversight capacity. Projects are funded as actual site cleanups, at a level of up to $5 million a year for four years. In order to encourage participation in the TDI, EM has committed to reinvesting cost savings obtained through TDI projects back into the site's programs, so that achieving cost savings would not result in decreased funding for the site.

In addition, the committee strongly encourages OST to focus a part of its peer review efforts earlier in the technology-development process. As previously discussed in the section entitled Benefits of Peer Review, peer review of technologies during the early stages of development maximizes the return on R&D expenditures. In an effective peer review/decision-making process, there will be fewer and fewer projects at later stages of development because many projects would be terminated at early gates after relatively small funding investments. This is the opposite of the current situation within OST, where there are relatively few projects in the early stages but many projects in the later stages of development (Table 1).9

9 Not including projects supported by the EM Science Program, which funds basic environmental research relevant to EM's cleanup mission. Over time, successful results from this program could feed into the early stages of OST's technology development program. All projects in the EM Science Program are peer reviewed before receiving any funding.

To address these issues, the committee recommends that OST develop a rigorous process for selecting projects to be peer reviewed. To be fair and credible, this process should employ well-defined project selection criteria, and OST peer review program staff should be directly involved in making decisions regarding which projects will be reviewed. Due to the large number of projects in the later stages of development, the committee encourages OST to focus much of its short-term peer review effort on high-budget, late-stage projects that have never been peer reviewed but that still face upcoming decisions with major programmatic and/or funding implications. In the long term, however, OST's objective should be to subject all new technologies to a peer review when they enter the program (i.e., at the proposal stage), followed by additional peer reviews at other critical points in the technology-development process (such as at Gate 4).

Definition of Peer Review Objectives and Selection of Review Criteria

The objectives of the peer review (i.e., what OST is attempting to achieve with the review) must be spelled out clearly so that they are understood by all involved in the process (Chubin, 1994; Chubin and Hackett, 1990). In addition, the specific review criteria (i.e., the specific questions/issues that reviewers are asked to address in a particular review) should be defined prior to the selection of the peer review panel to ensure that the panel has the appropriate expertise required to address these issues. Because peer reviews are by definition technical in nature, both the objectives of the review and the review criteria should focus on technical considerations. OST's procedure for defining peer review objectives and review criteria is described in Box 5.

BOX 5
OST's Procedure for Defining Peer Review Objectives and Review Criteria

Peer Review Objectives. FA/CC program managers are required to include a statement of the desired scope and purpose of the review when requesting a peer review.

Peer Review Criteria. The following general criteria for peer reviews have been established for OST's peer review program in the ASME Manual for Peer Review (ASME, 1997): (1) relevancy, (2) scientific and technical validity, (3) state of the art, (4) filling an existing void, (5) non-duplicative/superior to alternatives, (6) cost effectiveness, (7) reduced risk, (8) regulatory acceptability, and (9) public acceptance. OST has provided no guidance on the relative weighting of these general criteria. Although these nine criteria form the core of the criteria to be used during the peer review, the actual criteria used during each review are determined by the ASME Peer Review Committee in consultation with the FA/CC program manager. For each review, the FA/CC program manager develops a preliminary list of specific review criteria, which is then used by the ASME Peer Review Committee to develop the formal review criteria for the review panel. These criteria are used by project staff and other presenters to organize written materials and oral presentations for the peer review. In addition, even though these review criteria form the basis of the review panel's evaluation, the review panel does have the authority to pursue other issues that arise during the review.

Peer Review Objectives

The written statements of "scope" and "purpose" that OST focus area/crosscutting area program managers are required to submit to the Peer Review Coordinator when requesting a peer review can serve as a statement of the objective of the peer review. Such statements should be made available to all participants in the peer review (i.e., PIs, the ASME Peer Review Committee, all reviewers, and observers) prior to the review so that they clearly understand the context of the review. Although this has not yet been accomplished, the committee has been informed that these statements will be incorporated in all future peer review reports.10

Peer Review Criteria

OST's general list of review criteria includes broad issues such as cost effectiveness, reduced risk, regulatory acceptability, and public acceptance, which involve many nontechnical considerations. Although program managers must consider these criteria when making decisions regarding technologies, such broad, mostly nontechnical issues may sidetrack reviewers into questions that are beyond their expertise or difficult to resolve within the time constraints of a two- to three-day review. Thus, these criteria are generally inappropriate for peer review. For example, if technologies are peer reviewed very early in their development, both regulatory and public acceptability may be indeterminate. The committee therefore recommends that OST either revise these general nontechnical criteria to focus on the technical aspects of these issues or remove them from the list of review criteria. The committee encourages OST to continue its practice of developing a small number of technology-specific review criteria, provided these criteria are technical in nature and do not obscure or preempt the principal goal of determining technical merit. Specific review criteria allow individual peer review panels to focus their review efforts on issues that are especially relevant to a particular technology or a specific stage of technology development, or that are of particular interest to OST. The committee recommends that OST develop a well-defined general set of technical criteria for peer reviews, to be augmented by technology-specific criteria as needed for particular reviews.

Selection of Peer Reviewers

The selection of peer reviewers is a critically important step in the peer review process. The process for selecting reviewers must consider the fundamental characteristics of peer review described earlier in this report (see the section entitled Definition of Peer Review) and the specific objectives and criteria for the peer review, and it should be conducted by a group independent of the group being reviewed (Cozzens, 1987; Koning, 1990). Peer reviewers should be selected in accordance with formally established criteria. The minimum criteria for individual reviewers should include the following:

  - relevant, demonstrated experience in the subject to be reviewed (Bozeman, 1993; Porter and Rossini, 1985);

10 Statement by Alan Moghissi on changes in ASME policies during Radioactive Isolation Consortium peer review, July 8, 1997, Columbia, Maryland.

  - peer recognition;
  - knowledge of the state of the art of an aspect of the subject matter under review, including national and international perspectives on the issue; and
  - absence of real or perceived conflict of interest (e.g., not selected from DOE or the DOE contractor community) and of bias, such that the panel as a whole is balanced (Abrams, 1991; Cole, 1991; Moxham and Anderson, 1992).

In assessing an individual's qualifications for participation in a peer review panel, all relevant career experience, published papers, patents, and participation in professional societies should be considered. It is also important to consider the individual's experience with peer review itself (Royal Society, 1995). The peer review panel should be balanced by including individuals with an appropriate range of knowledge and experience to provide credible and effective peer review of the technology being judged (Porter and Rossini, 1985). The chair should be internationally respected in the field under review and should be an experienced peer reviewer. Additionally, some representation of persons qualified in competing or alternative technologies is desirable.

BOX 6
OST's Reviewer Selection Criteria

Reviewers are selected and approved by the ASME Peer Review Committee based on three generally recognized criteria: (1) education and relevant experience, (2) peer recognition, and (3) contributions to the profession. A minimum of a B.S. degree in an engineering or scientific field and sufficient experience in the technical area being reviewed are required for participation in an OST-related review. Peer recognition is assessed by measures such as activity in a professional society. Contributions to the profession include publications in peer-reviewed journals, patents, and meeting presentations. Individuals with any real or perceived conflicts of interest with respect to the subject of the review may not serve on the review panel. According to ASME, "an individual who has a personal stake in the outcome of the review may not act as a reviewer" (ASME, 1997, p. 8). Every panel member must sign a statement indicating a lack of personal or financial interest in the outcome of the review.

OST's reviewer selection criteria are described in Box 6. The committee finds that the criteria used to select reviewers in the OST peer review program are adequate to ensure the technical credibility of the peer review panel. In order to ensure the independence of the peer reviewers, however, the committee recommends that OST also include a criterion that explicitly excludes EM staff and contractors with real or potential conflicts of interest, including all OST staff and contractors, from consideration as peer reviewers. OST should also consider modifying the criteria to emphasize expertise relevant to the review. For example, its first criterion (education and relevant experience) could specifically include knowledge of the state of the art of an aspect of the subject matter under review, including national and international perspectives on the issue.

Another important factor in selecting a peer review panel is that all relevant areas of expertise required to address the review criteria should be represented on the peer review panel. This is particularly important for the very diverse and complex technologies that are being developed by OST for environmental cleanup of the DOE complex, which may require large

review panels. Thus, the size of the review panel should depend on a number of factors, including the complexity and number of projects being reviewed and the specific review criteria established for the peer review. As discussed in the section entitled Peer Review Criteria, the committee encourages OST to revise its general review criteria to focus on technical issues. If OST continues to include nontechnical review criteria such as public acceptance and regulatory acceptability, however, expertise in these areas also must be included on peer review panels.

Many organizations that conduct peer reviews of such complex projects rely on large data bases of potential reviewers from which to select peer reviewers.11 The committee notes that one reviewer served on six of the first ten review panels convened under the new peer review program. This suggests that a large data base was not used in reviewer selection at that time. One approach that could be used to help identify reviewers with relevant expertise would be to develop and use a large data base of potential reviewers.

Planning and Conducting the Peer Review

In order for the peer review to be fair and credible, the peer review panel should receive written documentation on the significance of the project, a focused charge that addresses technical review criteria, clear presentations by the project team, and adequate time to assess the project comprehensively so that it is able to write a report that effectively summarizes and supports its conclusions and recommendations.

BOX 7
OST's Documentation Requirements

OST requires that the PIs provide written materials relevant to the nine general criteria for peer review (see Box 5). Depending on the specific stage of development of the project, the OST Revised Guidance (DOE, 1997b) describes the documentation generally required for a peer review. At Gate 4 of the Technology Development Process Procedure, for example, PIs are required to provide the following: literature references, progress report (topical), needs document, test plan, quality assurance plan, proof of design, life-cycle cost analysis, risk analysis, regulatory issues and review, and public acceptance issues and plans.

11 Presentations to committee by Donald Beem on peer reviews conducted by AIBS on behalf of federal agencies, February 25, 1997, Washington, D.C., and by Carl Guastaferro, Information Dynamics, Inc., on peer review conducted for NASA's Office of Life and Microgravity Sciences and Applications, April 15, 1997, Washington, D.C.

The documentation requirements for OST's peer review program are presented in Box 7. As was the case with OST's general review criteria, some of these required documents address nontechnical issues and therefore should not be part of the materials reviewed by panelists during the peer reviews. The committee encourages OST to refine its list of required materials to include only technical documents relevant to the review criteria established for the peer review.

During some of the technology presentations at OST peer reviews, members of this committee have observed several problematic situations that impeded a thorough review, such as the withholding of confidential technical information from peer reviewers, which prevented panelists from judging the technical merit of a project, and a lack of facilitation by the chair. These problems presumably will be resolved as the program matures. Although complete respect for confidentiality is central to the successful operation of peer review (Royal Society, 1995), the confidentiality of proprietary information during a review can be addressed through panel selection and by requiring panel members to sign confidentiality agreements. Further, agreement to disclose information critical to a meaningful peer review under appropriate confidentiality agreements could be made a condition of the initial project award.12

Openness is also an important characteristic of a credible peer review process:

  One of the pre-requisites for confidence in the integrity of peer review (and, indeed, of other decision-making processes) is that its workings should be transparent. There are several levels to this. At the minimum, the membership of peer review and other decision-making panels should always be in the public domain. (Royal Society, 1995, p. 4-84)

The committee encourages OST to promote openness at the peer reviews in order to fully inform the public and others attending the reviews of the nature of the review. An approach used by the EPA Science Advisory Board is to make lists of the reviewers and their affiliations available and to have each reviewer publicly state his or her pertinent experience and any factors that could affect bias at the beginning of the peer review. The chair of the peer review panel also should clearly explain the objectives of the review and the specific review criteria that will be addressed by the panel.

Usefulness of Peer Review Results

The peer review program will be effective only if the results are used as an important factor in making decisions regarding future support for the reviewed project and/or as input to improve the technical merit of the project (Bozeman, 1993; Cozzens, 1987). Peer review reports that clearly provide the rationale for their conclusions and recommendations are an essential first step in achieving these objectives. In addition, procedures should be established for incorporating peer review results into the programmatic decision-making process and for requiring project personnel to follow through on the technical recommendations of the peer review

12 The confidentiality of proprietary technical material was well handled during a recent peer review of the Radioactive Isolation Consortium's SMILE and SIPS technologies, conducted on July 8-9, 1997, in Columbia, Maryland.

--> panel. In a well-established peer review program, specific metrics also should be in place to evaluate the ability of the peer review program to achieve its objectives. Peer Review Reports BOX 8 OST's Peer Review Reporting Procedures Each member of the review panel prepares a report containing the outcomes of the review. The review panel chair combines these individual reports into a Consensus Report, which typically contains four parts: (1) a brief description of the review subject, (2) the consensus of the panel on shortcomings and meritorious aspects of the project, (3) recommendations of the panel, and (4) an appendix containing the comments by each reviewer for which no consensus could be reached, or which were considered by the chair to be beneficial, but of secondary importance, to the investigators and managers. OST's procedures for peer review reporting are presented in Box 8. Some of the early peer review reports that were reviewed by this committee did not document the reasoning for conclusions or provide adequate support for recommendations. More recent peer review reports have more clearly explained the rationale for the peer review panel's conclusions and recommendations, however. Two other improvements in recent peer review reports have been the addition of a section that summarizes the review criteria and the inclusion of short biographical sketches of the peer reviewers. The committee believes that the peer review reports could be improved further, however, by also including a statement of the objective of the review and a list of references used in the analysis. These additions could help improve the overall quality of the reports by helping the reviewers focus more clearly on their charge and also could make the reports more useful to program management by documenting the context of the review more clearly. 
Feedback Procedures

BOX 9
OST's Review Feedback Procedures

One to two days after the completion of the peer review, RSI submits the peer review panel's Consensus Report to the FA or CC program manager who requested the review. The program manager is asked to prepare within 30 days a written response that describes how the Principal Investigator and/or the FA/CC program manager intend to respond to the findings and recommendations of the Consensus Report. This formal response is then combined with the substantive sections of the Consensus Report into an ASME Interim Report, which is used by DOE headquarters' staff to evaluate the management of the technology program. The Interim Report is then reviewed and approved by the ASME Peer Review Committee and released as an ASME Final Report.

OST's Technology Decision Process requires that peer review reports be used in all funding decisions made at Gate 4 or at any subsequent gate in the technology-development process. In these gate reviews, the reports constitute one input to be considered in determining whether to continue funding for a project (other inputs to the gate reviews include an estimated project budget, a life-cycle cost study, proof-of-design documents, and a commercialization plan).

Box 9 summarizes OST's procedures that govern the feedback of peer review results into program management and development decisions. The committee has reviewed OST's written procedures on feedback and finds that they are reasonable, although it has observed that the implementation of many of these procedures has been uneven.

The most significant problem concerns the timeliness of the reviews. To have any effect on programmatic decision making, peer reviews must be scheduled to occur before project decisions are made. Such planning has not occurred in a number of the peer reviews completed to date, and in these instances the main benefits of peer review have been lost. For example, the decision to fund the next stage of development of the In Situ Redox Manipulation project apparently had been made prior to the peer review. In another peer review (of the Large-Scale Demonstration project at Fernald Plant I), the facility had already been decommissioned when the peer review was conducted. Although retrospective reviews can be useful in guiding other projects, if OST's new peer review program is to be effective, the peer reviews clearly must occur prior to key decision points in the technology-development decision process. The committee also has observed instances in which peer reviews of specific projects were canceled shortly before they were scheduled to occur.

To address these issues, the committee recommends that OST develop a targeted plan for the peer review program. Such a plan should consider factors such as how many of OST's technology projects can be peer reviewed, realistic schedules for the reviews, and the peer review program budget. To be effective, this plan also must ensure that peer reviews are conducted early enough in the budget cycle to allow peer review results to be used as an input into meaningful funding decisions.
In developing such a plan, OST should consider expanding its practice of consolidating reviews of related projects into a single review or several overlapping reviews in order to increase the number of projects that can be reviewed in the peer review program. Another value of reviewing multiple projects during a single peer review is that it acts to normalize the peer review results (Kostoff, 1997b).

The timeliness and quality of the formal written responses from the FA/CC program managers also have been uneven. A number of formal responses from the program managers have not been transmitted to ASME within 30 days of the peer review, as required by OST policy. In addition, many of the written responses from OST program managers have not adequately documented how OST intends to follow through on the conclusions and recommendations of the peer review reports. For example, although the review panel for the Cost of In Situ Air Stripping of VOC Contamination in Soils Project made 10 specific recommendations for improving the project, OST's entire formal response was "The Program Manager agreed with the recommendations of the Review Panel, and the report is being revised" (ASME, 1996a, p. 5). For the peer review of Proposals for Salt Stabilization, OST's entire response was "The DOE agreed with most recommendations of the Review Panel. However, while the project in priority 4 was not funded, three projects prioritized lower were funded" (ASME, 1996b, unpublished, p. 8). Such brief written responses do not provide sufficient explanation or documentation to be useful to OST decision makers charged with making funding decisions. The committee therefore encourages OST to enforce the 30-day requirement for written responses and to require more detailed responses that fully describe how the recommendations of the peer review reports were implemented or considered.
A peer review report that finds that a project is technically sound, based on good science, and capable of practical realization should be a necessary but not sufficient condition for passing certain TIDM gates. A project could fail to pass the gate for programmatic reasons even if it
were technically sound, but a report stating that a project is not technically sound should be a sufficient reason to reject a project at any gate. The project need not necessarily be terminated (e.g., it might need more development before moving to the next stage, or the review might be "appealed" and reconsidered, especially if the panel was divided in its conclusions), but it should not pass a gate in the face of an adverse review.

Metrics to Evaluate Program Success

Metrics assist in the measurement of effectiveness and can be used to evaluate the success of a program in realizing its objectives. Two types of metrics can be considered: activity metrics and performance metrics.13 Activity metrics indicate whether the program is proceeding efficiently. Performance metrics indicate whether the program is achieving the desired results. Although OST currently has no activity or performance metrics for its peer review program, properly chosen and clearly defined metrics could be a powerful management tool to help OST improve the efficiency and performance of this program.

Activity metrics that could be chosen include (1) the percentage of projects reviewed at each gate, (2) the percentage of reviewed projects that were not funded in the next gate review decision, (3) the percentage of adequate DOE written responses to peer reviews that were received within the required 30 days, and (4) the degree of follow-up on recommendations of the peer review panels. Performance metrics for the peer review program could be selected to measure the "success" of the program in meeting its objectives. For example, the percentage of resources going to "successful" projects could be an appropriate performance metric. These activity and performance metrics are provided only as examples; ultimately, OST management will need to establish its own set of metrics based on the success criteria it sets for the technology-development program.
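The four activity metrics listed above are simple ratios over project records. As an illustration only (OST collects no such data, and every record field below is hypothetical, not drawn from the report), such metrics could be tallied as follows:

```python
# Hypothetical project records; field names are illustrative assumptions.
projects = [
    {"reviewed": True,  "funded_next_gate": False, "response_days": 25,
     "recs_followed": 8, "recs_total": 10},
    {"reviewed": True,  "funded_next_gate": True,  "response_days": 45,
     "recs_followed": 3, "recs_total": 10},
    {"reviewed": False, "funded_next_gate": True,  "response_days": None,
     "recs_followed": 0, "recs_total": 0},
]

reviewed = [p for p in projects if p["reviewed"]]

# (1) percentage of projects reviewed at the gate
pct_reviewed = 100 * len(reviewed) / len(projects)

# (2) percentage of reviewed projects not funded at the next gate
pct_not_funded = 100 * sum(not p["funded_next_gate"] for p in reviewed) / len(reviewed)

# (3) percentage of written responses received within the required 30 days
pct_on_time = 100 * sum(p["response_days"] <= 30 for p in reviewed) / len(reviewed)

# (4) degree of follow-up: share of panel recommendations acted upon
pct_followed = (100 * sum(p["recs_followed"] for p in reviewed)
                / sum(p["recs_total"] for p in reviewed))
```

This sketch merely shows that each proposed activity metric reduces to a count over routinely kept records, which is part of what makes them practical as management tools.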
The committee encourages OST to begin developing and collecting data for a set of activity and performance metrics for the peer review program. Metrics to evaluate the success of the peer review program will be discussed more fully in the committee's final report.

13   The terms "activity metrics" and "performance metrics" are referred to as "formative evaluation" and "summary evaluation," respectively, in the field of program evaluation.