5
Analysis of OST's Peer Review Program

This committee published its interim report (NRC, 1997b) in October 1997. At that time, the committee found that OST had made progress in the implementation of its peer review program. The committee recognized that OST had developed a process for selecting reviewers, developing technology-specific review criteria, and executing peer reviews. Despite the relatively small number of reviews that had been conducted at that time, the committee concluded that OST's peer review program had the potential to be fair and credible. The committee also recognized, however, a number of key obstacles that would have to be addressed before OST could fully achieve the objectives of this program. In particular, the committee noted problems with some of the procedures actually used to plan for and select projects to be reviewed, the general review criteria, and the application and usefulness of the results of the peer reviews. In this chapter, the committee examines the status of OST's peer review program by considering the main components of the peer review process.

The analysis is based on the committee's review of OST documents, presentations by DOE staff, observations of recent peer reviews and ASME Peer Review Committee meetings, and the committee's assessment of peer review reports. In each section of this chapter, the committee revisits the recommendations from the interim report and discusses specific changes that OST has implemented since the publication of that report to address these issues. Relevant aspects of OST's current peer review program and the OST organization are described in Chapter 4.








Definition of Peer Review

In the past, OST has used the term "peer review" generally to refer to internal reviews of OST projects by qualified EM technical staff who were not involved directly in the specific project under review. This use of terminology caused confusion and misunderstanding within both OST and external review groups (e.g., GAO, NRC), which continued to criticize OST for lack of a credible peer review program. The committee believes that at least part of the criticism leveled at the OST project review process has resulted from inconsistent and inaccurate descriptions of the processes involved (e.g., internal peer review, technical peer review). In the interim report, the committee therefore recommended that "OST should restrict the term 'peer review' to only those technical reviews conducted by independent, external experts. OST should adopt alternative terms, such as 'technical review,' for its internal reviews of scientific merit and pertinency" (NRC, 1997b, p. 10). In response to the committee's interim report, OST recently issued a policy restricting the term peer review to only those technical reviews conducted by independent, external experts (DOE, 1998b).

Benefits of Peer Review

The OST technology development program involves the expenditure of hundreds of millions of increasingly scarce federal discretionary dollars each year. It is particularly important that decisions about the investment of these funds be based on sound criteria (both technical and nontechnical) and that the decision-making process be respected by all parties in the technology development program. By ensuring that technologies are held to the highest technical standards, rigorous peer review can be an important tool for OST in meeting these objectives, as well as in improving those projects that are funded.
Thus, peer review at an early stage can help ensure that projects are technically sound before they are developed (and funded) so that they will be able to achieve customer satisfaction. If they are not technically sound, they will not satisfy the customer. The question also has been raised as to whether an assessment or expression of customer satisfaction should or could be a surrogate or substitute for peer review. Customer satisfaction and technical merit (as judged by independent peer review) are separate issues—each is important, but neither is sufficient in itself. In general, customers will not necessarily have knowledge of

the technical aspects of the technology in question or of alternatives to it. Even when there is great expertise resident in the customers, it is nonetheless useful (e.g., in the federal funding process) to have an alternative judgment provided by disinterested parties. By the same token, a high-quality peer review is no substitute for customer satisfaction. Regardless of the peer review results, the customer should have confidence in the technology and its efficacy for the customer's particular applications (about which the customer may, indeed, know more than any peer reviewer). In sum, customer satisfaction cannot substitute for peer review, nor can peer review substitute for customer satisfaction. These related but distinct elements are both important to the program's success.

Peer Review Process

The following sections address the state of OST's peer review program as of April 1998 in terms of the five general steps of a peer review process. It should be noted that in the committee's view, OST has made significant strides in improving its program since the publication of the committee's interim report. Although OST has made many policy changes in response to the recommendations in the interim report, it has not yet implemented all of the changes.

Selection of Projects for Peer Review

Forty-three technologies were peer reviewed by the OST program between October 1996 and April 1998. Table 5.1 indicates at what gate in OST's technology development process these peer reviews took place, the number of reviews conducted in FY97, the number completed in FY98 as of May 1, 1998, the number of reviews planned for FY98 (by gate), and the total number of projects that met OST's selection criteria. Because of the relatively low rate of peer reviews conducted under this program relative to the large number of active projects, OST currently funds a large number of technologies that have never been peer reviewed. Many of these projects are in the later stages of development.

TABLE 5.1 Status of Peer-Reviewed Technologies in the Stage-Gate Model

                                             Gate(a)
                                         1   2   3    4    5   6  Unassigned  Total
Peer reviews completed, FY97             0   7   1    5    8   7      0         28
Peer reviews completed, FY98(b)          0   0   1    8    5   1      0         15
Total peer reviews scheduled, FY98(b)    0   0   8   14    8   5      3         38
Projects meeting OST's selection
  criteria(b)                            0   1  10   61  107   3      0        182

(a) The last gate passed by the technology project.
(b) As of May 1, 1998.

Prioritizing Projects for Review

OST's decision to require peer reviews before a technology passes Gate 4 is based on the large increase in funding that typically occurs when a project moves from bench scale (Stage 4 or lower) to field scale (Stage 5 or higher). This is a rational basis for prioritization, and Gate 4 is an appropriate time to schedule a review for projects that have not yet reached field scale. Such a practice, however, does not constitute a sufficient and systematic approach for selecting and prioritizing the projects to be reviewed. In its interim report the committee recommended that OST develop a rigorous process for selecting projects to be peer reviewed. Such a process should employ well-defined project selection criteria, and OST peer review program staff should be directly involved in making decisions regarding which projects should be reviewed. The committee recommended that OST subject all technologies to a peer review when they enter the program (i.e., at the proposal stage), followed by additional peer review at other critical points in the technology development process (such as at Gate 4). To address the large number of projects at late stages of

development that have never been peer reviewed, the committee recommended that in the short term, peer review efforts should focus on high-budget, late-stage projects that have never been peer reviewed but still face upcoming decisions with major programmatic and/or funding implications. In response to these recommendations, OST recently issued Version 1.0 of a new policy (DOE, 1998b) describing the three criteria that "must be used in selecting the technologies as candidates subject to peer review" (p. 11), which were used in the selection of projects for FY98 (see Chapter 4). Although OST's three selection criteria are reasonable and should help it choose projects to be reviewed, they do not explicitly address two issues that have been emphasized recently by OST management: (1) deployment of new technologies in the field and (2) the need to reduce funding levels due to budget cuts. The committee believes that the procedure developed for selecting projects for peer review should ensure that OST would peer review any technology being considered for deployment if it has not already been peer reviewed. Although the suitability of a given technology for deployment at a specific site involves other considerations in addition to the technical merit of the process (e.g., site conditions, infrastructure), technologies should not be deployed without being peer reviewed at some point during development. This applies to all technologies that OST aids in deploying, whether in response to requests from a DOE site, through the Accelerated Site Technology Deployment (ASTD) program (originally named the Technology Deployment Initiative), or through OST's own initiative. Thus, it also applies to new technologies considered for deployment by OST that were developed without OST assistance.
To address these issues, the committee recommends that OST adopt two additional criteria to choose from among those projects that satisfy one of the three existing selection criteria: (1) technologies that are being considered for deployment, and (2) technologies for which a request for further funding has been received or is anticipated. The usefulness of peer reviews for projects that are already mature, that is, technologies that have already undergone field trials or initial deployment, has been questioned by several OST personnel during the committee's discussions. The committee notes that reviews of projects just prior to deployment, or even after initial deployment if an additional deployment is planned, could benefit OST and the site at which the technology is to be deployed in several ways. First, they provide confirmation of the validity of the technical process and its probable capabilities and limitations. Second, by providing a basis for comparison of competing technologies, they provide a rationale for technology selection. Third, a peer-reviewed evaluation of technology performance provides

the basis for the most defensible cost estimates and hence the most reliable basis for planning future deployment.

Timing of Reviews

The optimal time for a review relative to initial field demonstration will depend on several factors, including the relative cost of a demonstration and whether or not the technology was developed by OST. Projects with relatively small dollar value may not warrant a high enough priority to qualify for a Type I or Type II peer review. Reviews of small projects may be handled by mail (Type III review) or by inclusion in a Type I review convened for another, related technology. If a program has numerous small projects that are difficult to fit into the Type I or Type II peer review structure, a peer review of all projects in a program could be used. In such a review, the panel examines similar projects from a given program or subprogram. However, the range of expertise of the panel would have to include all the relevant areas from the entire suite of projects to be reviewed. A related problem arises when a technology is developed and deployed without OST involvement. Although application of the current policy of peer reviewing new starts and all projects at Gate 4 will ensure that all technologies developed by OST in the future are appropriately peer reviewed, technologies developed outside DOE and proposed for demonstration by OST would not have been subjected to an OST peer review during development. If these technologies have been deployed successfully at non-DOE sites, they have already undergone field trials. If the cost of demonstrating such projects is relatively low, the most appropriate time for review may be after completion of the initial DOE demonstration. A large group of small demonstrations could be reviewed by a single review panel.
Selection of projects to be included in a Type I review group thus requires consideration of the current state of development of each project, the current status of funding decisions for each project, and the technologies being considered for deployment at the various DOE sites. Because of the broad knowledge required for project prioritization, the direct participation of an OST headquarters program manager in the selection process may be required. Another important consideration for selecting projects for review is that the technology projects must be at a suitable stage. To evaluate any technology, basic data on system performance are necessary. For example, in a recent Type I review, two projects that had already received funding for construction of pilot

plants were reviewed before the pilot plants had operated.1 Because the pilot plants had not operated, there were no data on which to judge the effectiveness of each process. In this case, it would have been better to postpone review of the projects until after the plants had operated. Because the level of development of these projects was inappropriate for peer review, much of the potential value of these particular peer reviews was lost. The results of the reviews were not, and could not be, used in program management.

Grouping Projects in a Single Review

Although the two additional selection criteria recommended by the committee would assist OST in identifying those projects for which peer review is of highest priority, unless some other changes are made to the peer review program (e.g., increased funding, changes in process), application of these criteria by itself would still leave a large number of projects that are not peer reviewed. To address this issue, the committee recommends that OST expand its practice of evaluating a number of related technologies in a single peer review whenever possible. Such an approach would provide both efficiency in review, by allowing multiple projects to be reviewed by the same panel, and the opportunity for a rigorous comparative evaluation of competing technologies. Thus, if one project is selected for review, related projects could be reviewed concurrently for a marginal increase in cost. Similarity may be determined in at least two ways: by technology or by application. If OST decided to review various technical approaches to a group of similar problems (e.g., for one of OST's focus areas), the review would have to be constructed for this end. Another approach for addressing this issue is discussed in Chapter 6.
Definition of Peer Review Objectives and Selection of Review Criteria

Peer Review Objectives

The committee noted in its interim report that a statement of the objective of the peer review should be made available to all participants in a peer review (i.e., PIs, the ASME Peer Review Committee, all reviewers, and observers) prior to the review so that they clearly understand its context. These statements of objectives should be consistent with the general purpose of peer review (e.g., technical, useful for decision making) and should be approved (or revised) by the Deputy Assistant Secretary for Science and Technology or his or her designee. OST's newly revised Implementation Guidance (DOE, 1998b, p. 13) requires FA/CC program managers, coordinating with the EM-53 Office Manager, to submit to the Peer Review Coordinator the objective of the review, technology-specific review criteria, and a list of PIs who will be responsible for providing technical documentation and delivering the technical presentation. Also, although no FY98 peer review report as of April 1998 had included a statement of objectives, OST's Implementation Guidance lists such a statement as a typical feature of the reports. The committee concludes that OST has developed an appropriate policy but has not yet implemented it. The committee therefore encourages OST to follow up on this new requirement.

1. At the time of the peer review, one pilot plant was under construction, and construction (but not the permitting process) of the other had been completed.

Peer Review Criteria

The committee previously observed that OST's original list of general criteria for peer review included broad issues such as cost-effectiveness, reduced risk, regulatory acceptability, and public acceptance, which include many nontechnical considerations. Certainly it is important to include business criteria and other nontechnical criteria in the management process for deciding which projects or programs to support. For example, such factors as acceptance in the marketplace, political feasibility, and social equity often are just as important in decision making as technical concerns. However, if a technology does not perform up to technical requirements, the degree of customer satisfaction, cost, and feasibility are, in a sense, beside the point. Nevertheless, the peer review processes that the committee defines and discusses here focus exclusively on technical criteria.
In other types of reviews (such as business reviews) it may be useful and even necessary to integrate business and technical criteria. Such broad-scope program reviews may well include not only technical criteria related to the specifications and functionality of a technology but also such issues as payback period, implementation costs, and spillover effects of a technology (e.g., economic development impacts). In a peer review, however, such broad, mostly (or partially) nontechnical issues may sidetrack reviewers into issues that are outside their expertise or are difficult to resolve within the time constraints of a two- to three-day review. For example, if technologies are peer reviewed very early in their development, both regulatory and public acceptability may be

indeterminable. Thus, in peer review, where technical merit is the focus, business and technical criteria generally should not be integrated. The committee therefore recommended in its interim report that OST revise these broad criteria to focus on the technical aspects of the issues mentioned above, or remove them from the list of review criteria. The committee also recommended that OST develop a well-defined general set of technical criteria for peer reviews, to be augmented by technology-specific criteria as needed for particular reviews. These technology-specific criteria should supplement, not supplant, the basic criteria: they should not deflect the review from completing the basic evaluation. The general review criteria released by OST in early 1998 (DOE, 1998b) and listed in Chapter 2 are consistent with the committee's recommendations. If these criteria are implemented wholeheartedly, they should result in a major improvement in OST's peer review program. OST's new policy also states that these general review criteria will be augmented by technology-specific criteria, as recommended in the interim report. As mentioned in Chapter 4, ASME combines the general and technology-specific review criteria to develop the specific criteria for each peer review, thus enabling reviewers to focus on the specific objectives of a given review. In addition, even though these review criteria form the basis of the review panel's evaluation, the review panel has the authority to pursue other issues that arise (DOE, 1998b). The committee finds that these revised general criteria and the procedure for developing technology-specific criteria are a meaningful improvement over the original review criteria because they allow OST's peer review program staff to focus the reviews on important technical issues. The procedure also has sufficient flexibility to allow the review criteria to vary as a function of the stage of development of a technology.
Selection of Peer Reviewers

Qualifications

OST's criteria for reviewer selection are discussed in Chapter 4. Although the committee finds that OST's selection criteria are generally adequate to ensure the technical credibility of the review panel, in its interim report the committee recommended that OST consider modifying the criteria to emphasize expertise relevant to the review. For example, its first criterion (education and relevant experience) could specifically include knowledge of the

state of the art of an aspect of the subject matter under review, including national and international perspectives on the issue.

Panel Formation

Another important factor in selecting a peer review panel is that all relevant areas of expertise required to address the review criteria should be represented on the panel. This is particularly important for the very diverse and complex technologies being developed by OST for environmental cleanup of the DOE complex, which may require large review panels. In its interim report, the committee recommended that the scope of the review panel, and thus its size, should depend on a number of factors, including the complexity and number of projects being reviewed and the specific review criteria established for the peer review. The committee also suggested that OST use a large database of potential reviewers to help identify those with relevant expertise. The databases currently used by RSI to identify committee members are described in Chapter 4. Although the range of expertise on peer review panels observed by this committee has been acceptable, the committee believes that the databases may not have adequate scope to identify the broad range of reviewers likely to be necessary for some complex projects. In its interim report the committee noted that it would be appropriate for OST to gain access to databases from other professional societies or review organizations. ASME's Manual for Peer Review (ASME, 1998a) states that ASME will work to develop such collegiality agreements in situations where expertise beyond that available to ASME is required, but none had been employed to date.2 To the committee's knowledge, OST has not addressed this issue.
The committee therefore recommends that OST establish a more systematic approach for accessing reviewer information from other databases (e.g., chemical engineers, geologists, physicists, materials scientists, biologists) and from other professional societies, as needed to ensure the appropriate range of expertise for all review panels.

2. During the final stages of report publication, the committee learned that ASME has in fact established agreements with the American Chemical Society, the Society of Environmental Toxicology and Chemistry, and the Federation of American Societies for Experimental Biology to assist in the identification of qualified reviewers.

Conflict of Interest

In its interim report, the committee recommended that in order to ensure the independence of peer reviewers, OST should include in its requirements a criterion that explicitly excludes EM staff and contractors with real or potential conflicts of interest, including all OST staff and contractors, from consideration as peer reviewers. OST has revised its conflict-of-interest policy to explicitly exclude "all DOE staff and contractors with real or potential conflicts" from consideration as peer reviewers (DOE, 1998, p. 7); in practice, OST has interpreted this policy to exclude all DOE staff, whether or not there is a real or potential conflict of interest with the projects under review. This interpretation goes beyond the committee's recommendation but is consistent with the ASME conflict-of-interest policy. The committee would like to point out, however, that concerns over conflicts of interest should not necessarily preclude all DOE staff and contractors from serving on a peer review panel. The committee believes that DOE staff from organizations outside OST and its contractors could be used in special circumstances when the appropriate expertise is not available outside DOE (e.g., high-level waste tanks) and where these individuals have had no connection with the projects under review. For some reviews, for example, national laboratory researchers may be unique sources of expertise for technologies addressing certain DOE cleanup problems. In such cases, issues of potential conflicts of interest would have to be treated very carefully and openly. In general, however, the reviewer selection process should avoid DOE staff as peer reviewers and should ensure that DOE-affiliated persons are never more than a small fraction of a panel's membership.

Planning and Conducting the Peer Review

Organization and preparation of each peer review event are key to a successful peer review program.
Well in advance of the review, peer reviewers should receive written documentation on the significance of the project and a focused charge that addresses technical review criteria. When a review panel is convened, it should be provided with clear presentations by the project team, as well as adequate time to assess the project comprehensively so that the panel is able to write a report that effectively summarizes and supports its conclusions

and recommendations. Issues such as proprietary information also have to be handled in a systematic and confidential manner.

Documentation

The documentation required for an OST peer review is presented in Chapter 4. This list has been modified from OST's original list of required documentation in response to the committee's interim report, which pointed out that some of the required documents addressed nontechnical issues. The revised list identifies some of the documents required to evaluate the technical merit of a project and, if implemented, should improve the quality of background materials provided to peer review panels. One document that is not included in this revised list of required materials is a statement of work, or proposal, that describes the specific activities that will be carried out if the project is funded. The committee recommends that a detailed proposal or statement of work be required for all peer reviews.

Confidentiality of Technical Information

During some early OST peer reviews, members of the committee observed problems with how confidential technical information was handled. In particular, confidential materials that were necessary to evaluate a project were withheld from the review panel, which prevented panelists from judging its technical merit. The confidentiality of proprietary technical material has been ensured in more recent reviews, such as the review of the Radioactive Isolation Consortium's SMILE and SIPS technologies, conducted on July 8–9, 1997, in Columbia, Maryland, and the review of the Industry Program on January 21–22, 1998, also in Columbia, Maryland. Unfortunately, at the latter review, procedures that confined the panel's examination of proprietary materials to the site of the review unnecessarily restricted the time reviewers had to digest these materials.
In future cases where proprietary information cannot be sent to the panels prior to review, OST may want to consider convening the panels at the peer review site one day prior to the review to ensure sufficient time to review this material. In general, however, the system for handling proprietary information appears to be appropriate and effective.

Anonymous Versus "Open" Peer Reviews

As discussed in Chapter 2, another important consideration in planning and conducting peer reviews is whether the evaluations should be conducted anonymously or openly (i.e., using publicly known reviewers). In the case of OST, the committee believes that the strengths of open reviews (e.g., enhanced credibility of the process, the potential for more constructive evaluations) far outweigh the potential weaknesses (e.g., possible lack of candor by some reviewers when evaluating weak proposals), especially for the peer review of projects or programs.3 As such, the committee encourages OST to continue to promote the openness of its peer reviews and to fully inform the public and others attending the reviews of their nature. An approach used by the EPA Science Advisory Board is to make lists of the reviewers and their affiliations available and to have each reviewer publicly state his or her pertinent experience and any factors that could affect bias at the beginning of the peer review. Similarly, at the OST peer review of Industry Programs held in January 1998, members of the review panel were asked to give short descriptions of their backgrounds. The committee encourages OST to continue this practice in future reviews. In addition, at the beginning of each review, the chair of the peer review panel should clearly explain the objectives of the review, the specific review criteria that will be addressed by the panel, and how the results of the peer review will be used in OST's decision-making process.

Usefulness of Peer Review Results

Peer Review Reports

OST's procedures for peer review reporting are described in Chapter 4. Although some of the early peer review reports did not document the reasoning behind conclusions or provide adequate support for recommendations, recent reports have more clearly explained the rationale for the panel's conclusions and recommendations.
Two other improvements in peer review reports have been the addition of a section that summarizes the review criteria and the inclusion of short biographical sketches of the peer reviewers. In its interim report, the committee recommended that the peer review reports could be improved further 3   If OST decides to implement a system of peer review to evaluate large numbers of research proposals, however, it may want to consider the pros and cons of anonymous versus open peer reviews for this purpose.

by also including a statement of the objective of the review and a list of references used in the analysis. These additions could improve the overall quality of the reports by increasing the reviewers' focus on their charge and could make the reports more useful to program management by more clearly documenting the basis of the review. In response to this recommendation, OST recently established a new policy that requests ASME peer review reports to include the features of technical peer review reports listed in Chapter 4 (DOE, 1998b), including the objective of the review and a reference list. As of April 1998, neither of these items had yet appeared in the peer review reports, and the committee recommends that OST implement this new policy.

Feedback Procedures

OST's procedures governing the feedback of peer review results into program management and development decisions are described in Chapter 4. The committee has reviewed OST's written procedures on feedback and finds them reasonable, although it has observed that the implementation of many of these procedures has been uneven. Many peer reviews are not linked to decision points, despite previous and current policy guidance. The committee noted in its interim report that the timeliness and quality of the formal written responses from the FA/CC program managers also have been uneven. None of the formal responses from the program managers for the FY97 reviews were transmitted to ASME within 30 days of the peer review, as required by OST policy (personal correspondence from Sorin Straja, RSI, to Erika Williams, NRC, March 26, 1998). In addition, many of the written responses did not document how OST intended to follow through on the conclusions and recommendations of its peer review reports.

For example, although the review panel for the Cost of In Situ Air Stripping of VOC Contamination in Soils Project made 10 specific recommendations for improving the project, OST's entire formal response was, "The Program Manager agreed with the recommendations of the Review Panel, and the report is being revised" (ASME, 1997, p. 41). Such responses are not useful to OST decision makers charged with funding decisions. To address such problems, in its interim report the committee encouraged OST to enforce the 30-day requirement for written responses and to require more detailed responses that fully describe how the recommendations of

the peer review reports were implemented or considered. OST has indicated its intent to implement these recommendations; for example, the Implementation Guidance states that a schedule for acting on recommendations must be submitted to the EM-53 Office Director within 30 days of receipt of the Technical Report (DOE, 1998b, p. 7). The committee notes that since the release of the interim report, both the quality and the timeliness of the written responses from OST program managers have improved significantly, although none have yet been submitted on time. The committee also notes that the need to require a formal response to reviewer comments from FA/CC program managers (and their inability to meet the 30-day deadline) is a strong indication that peer review has not yet become a part of the "culture" of OST, whereby FA/CC program managers would embrace peer review as a valuable tool to help guide their decision process (see discussion in Chapter 7).