
Evaluating Research Efficiency in the U.S. Environmental Protection Agency (2008)

Chapter: 5 Findings, Principles, and Recommendations

Suggested citation: National Research Council. 2008. Evaluating Research Efficiency in the U.S. Environmental Protection Agency. Washington, DC: The National Academies Press. doi: 10.17226/12150.

5 Findings, Principles, and Recommendations

In this chapter, the committee draws together its preceding discussions in the form of findings, principles, and recommendations. The findings constitute a brief summary of major points discussed in Chapters 1-4. The principles are intended for use by both the Environmental Protection Agency (EPA) and other research-intensive federal agencies. The recommendations are intended specifically for EPA, although other agencies, including the Office of Management and Budget (OMB), may find them useful.1

1 It should be emphasized again that these recommendations apply only to R&D programs, not to the much broader universe of federal programs to which PART is applied.

To introduce this chapter, it is useful to begin with the two central issues on which the committee focused many of its discussions. The first is the emphasis on efficiency in Program Assessment Rating Tool (PART) reviews. In the planning, execution, and review of research programs, efficiency should normally be subordinate to the criteria of relevance, quality, and effectiveness for reasons explained in Chapter 3. However, all federal programs should use efficient spending practices, and the committee suggests which aspects of efficiency can be measured in research programs and how that might best be done.

Two kinds of efficiency should be differentiated. The first, process efficiency, uses primarily quantitative metrics to evaluate management processes whose results are known and for which benchmarks can be defined and progress can be measured against milestones. The second, investment efficiency, measures how well a program's resources have been invested and how well they are being managed. Evaluating investment efficiency involves qualitative measures, primarily the judgment and experience of expert-review panels, and may also draw on quantitative data. Investment efficiency is the responsibility of the portfolio manager, who identifies the most promising lines of research for achieving desired outcomes.

The second central issue is the charge question of whether metrics used by federal agencies to measure the efficiency of research are "sufficient" and "outcome-based." In approaching sufficiency, the committee gathered examples of methods used by agencies and organized them in nine categories. It found that most of the methods were insufficient for evaluating programs' process efficiency, either because they addressed only a portion of a program or because they addressed issues other than research, and all were insufficient for evaluating investment efficiency because they did not include the use of expert review.

In responding to the question of whether the metrics used are outcome-based, the committee determined that ultimate-outcome-based evaluations of the efficiency of research are neither achievable nor valid. The issue is discussed in Chapter 3.

Those two basic conclusions constitute the background of the major findings of this report. Findings 2, 4, 5, 6, and 7 are linked to specific charge questions, as indicated; findings 1 and 3 are more general.

FINDINGS

1. The key to research efficiency is good planning and implementation. EPA and its Office of Research and Development (ORD) have a sound strategic planning architecture that provides a multi-year basis for the annual assessment of progress and milestones for evaluating research programs, including their efficiency.

2. All the metrics examined by the committee that have been proposed by or accepted by OMB to evaluate the efficiency of federal research programs have been based on the inputs and outputs of research-management processes, not on their outcomes.

3. Ultimate-outcome-based efficiency metrics are neither achievable nor valid for this purpose.

4. EPA's difficulties in complying with PART questions about efficiency (questions 3.4 and 4.3)2 have grown out of inappropriate OMB requirements for outcome-based efficiency metrics.

5. An "ineffective" (OMB 2007a)3 PART rating of a research program can have serious adverse consequences for the program or the agency.

6. Among the metrics proposed to measure process efficiency, several can be recommended for wider use by agencies (see recommendation 1).

7. The most effective mechanism for evaluating the investment efficiency of R&D programs is an expert-review panel, as recommended in earlier reports of the Committee on Science, Engineering, and Public Policy and the Board on Environmental Studies and Toxicology. Expert-review panels are much broader than scientific peer-review panels.

2 Question 3.4 is "Does the program have procedures (e.g. competitive sourcing/cost comparisons, IT improvements, appropriate incentives) to measure and achieve efficiencies and cost effectiveness in program execution?" Question 4.3 is "Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year?"

3 OMB (2007a) states that "programs receiving the Ineffective rating are not using tax dollars effectively. Ineffective programs have been unable to achieve results due to a lack of clarity regarding the program's purpose or goals, poor management, or some other significant weakness. Ineffective programs are categorized as Not Performing."

PRINCIPLES

The foregoing findings led to a series of principles that the committee used to address the overall process of evaluating research programs in the context of agency long-term plans and missions. A central thesis of this report is that the evaluation principles can and should be applied to all federally supported research programs and can also be applied to research in other contexts. The committee hopes that these principles will be adopted by EPA and other research-intensive agencies in assessing their R&D programs.

Principle 1

Research programs supported by the federal government should be evaluated regularly to ensure the wise use of taxpayers' money.

The purpose of OMB's PART is to ensure that the government is spending taxpayers' money wisely. This committee's recommendations are designed to further that aim. More broadly, the committee agrees that the research programs of federal agencies should be evaluated regularly, as are other programs of the federal government.

During the evaluations, efforts should be made to evaluate the efficiency of the research programs of agencies. The development of tools for doing that is still in an early stage, and agencies continue to negotiate their practices internally and with OMB. EPA's multi-year plans, which provide an agency-wide structure for reviewing progress and are revised annually, constitute a useful framework for organizing evaluations that serve as input into the PART process.

Principle 2

Despite the wide variability of research activities among agencies, all agencies should evaluate their research efforts according to the same criteria: relevance, quality, and performance.

Those criteria are defined in this report as follows:

• Relevance is a measure of how well research supports an agency's mission.

• Quality is a measure of the novelty, soundness, accuracy, and reproducibility of research.

• Performance is described in terms of both effectiveness (the ability to achieve useful results) and efficiency (the ability to achieve quality, relevance, and effectiveness in timely fashion and with little waste).

The research performed by federal agencies varies widely by primary mission responsibility. The missions of the largest research-intensive agencies include defense, energy, health, space, agriculture, and the environment. Their research efforts share assumptions, approaches, and investigative procedures, so they should be evaluated by the same criteria.

Research that is designed appropriately for a mission (relevance), is implemented in accordance with sound research principles (quality), and produces useful results (effectiveness) should be managed and performed as efficiently as possible. That is, research of unquestionable quality, relevance, and efficiency is effective only if the information it produces is in a usable form. The committee emphasizes that research effectiveness, in the context of PART, is achieved only to the degree that the program manager makes the most effective use of resources by allocating resources to the most appropriate lines of investigation. This integrated view is a reasonable starting point for the evaluation of research programs.

Principle 3

The process efficiency of research should not be evaluated using outcome-based metrics.

PART encourages the use of outcome-based metrics to evaluate the efficiency of federal programs. For many or perhaps most programs, especially those with clearly defined and predictable outcomes, such as countable services, that is an appropriate and practical approach that makes it possible to see how well inputs (resources) have been managed and applied to produce outputs. But OMB recognizes the difficulty of using outcome-based metrics to measure the efficiency of core or basic-research programs. According to PART guidance (OMB 2007b),

    agencies should define appropriate output and outcome measures for all R&D programs, but agencies should not expect fundamental basic research to be able to identify outcomes and measure performance in the same way that applied research or development are able to. Highlighting the results of basic research is important, but it should not come at the expense of risk-taking and innovation. For some basic research programs, OMB may accept the use of qualitative outcome measures and quantitative process metrics (OMB 2007b, p. 76).

The committee agrees with that view, as elaborated below, and finds that ultimate-outcome-based efficiency metrics are neither achievable nor valid, as explained in Chapter 3.

In some instances, however, it may be useful to reframe the results of research to include the category of intermediate outcomes, the subject of Chapter 4. That category of results may include new tools, models, and knowledge for use in decision-making. Because intermediate outcomes are available sooner than ultimate outcomes, they may provide more practical and accessible metrics for agencies, expert-review panels, and oversight bodies.

Principle 4

The efficiency of R&D programs can be evaluated on the basis of two metrics: investment efficiency and process efficiency.

In the committee's view, the construct presented by PART has proved unworkable for research-intensive agencies partly because of their difficulty in evaluating the "efficiency" of research. In lieu of that construct, the committee suggests that any evaluation of a research program be framed around two questions: Is the program making the right investments? Is it managing those investments well?

This report has used the term investment efficiency for the first evaluation metric. Investment efficiency is determined by examining a program in light of its relevance, quality, and performance—in other words, by asking whether the agency has invested in the right research portfolio and managed it wisely. Those criteria are most relevant to research outcomes.

The issue of efficiency is not the central concern in asking whether a program is making the right investments. But it is implicit, in that the portfolio manager must make wise research investments if the program is to be effective and efficient; once resources, which are always finite, have been invested, they must be used to optimize results.

The totality of those activities might be called portfolio management, a more familiar term that suggests linking research activities with strategic and multi-year plans. Sound portfolio management is the surest route to desired outcomes.

The elements of investment efficiency are addressed in most agency procedures developed under the Government Performance and Results Act (GPRA) and in PART questions, although not in those addressing efficiency (that is, questions 3.4 and 4.3). Moreover, it is essential to correct the misunderstanding embodied in the following statement in the PART guidance: "Programs must document performance against previously defined output and outcome metrics" (OMB 2007b, p. 76). A consistent theme of the present report is that for many research programs there can be no "outcome metrics"; that is true especially for core research, as discussed in Chapter 3.

Distinct from investment efficiency is process efficiency, which has to do with how well research investments are managed. Process efficiency involves activities whose results are well known in advance and can be tracked by using established benchmarks in such quantities as dollars and hours.

Process efficiency is secondary to investment efficiency in that it adds value only after a comprehensive evaluation of relevance, quality, and effectiveness. Process efficiency most commonly addresses outputs, which are the near-term results of research. It can also—like investment efficiency—make use of intermediate outcomes, which can be identified earlier than ultimate outcomes and thus provide valuable data points for reviewers.

Principle 5

Investment efficiency is best evaluated by expert-review panels that use primarily qualitative measures tied to long-term plans.

PART questions 3.4 and 4.3 seem to require evaluation of the efficiency of research in isolation from review of relevance and quality and thus emphasize cost and time. Agencies find that this approach may place programs at risk because the failure to satisfy PART on efficiency-related questions can increase the chances of an unacceptable rating for the total R&D program.

As discussed in Chapter 3, quantitative metrics in the context of quality and relevance are important in measuring process efficiency but by themselves cannot assess the value of a research program or identify ways to improve it. A more appropriate approach is to adapt the technique of expert review, already recommended by the National Research Council for compliance with GPRA. Indeed, OMB (2007b, p. 76) specifically recommends, in its written instructions to agencies, that agency managers "make the processes they use to satisfy the Government Performance and Results Act (GPRA) consistent with the goals and measures they use to satisfy these [PART] R&D criteria."

One advantage of using an expert-review panel is its ability to evaluate both investment efficiency and process efficiency. It can determine the kind of research that is most appropriate for advancing the mission of an agency and the best management strategies to optimize the results of the research with the resources available.

An expert-review panel can also identify emerging issues and their place in the research portfolio. Those would be developing fields (for example, nanotechnology a decade ago) identified by the agency for their potential importance but not mature enough for inclusion in a strategic plan. Identification of new fields might be thought of as an intermediate outcome because the value of such fields can be anticipated as a result of continuing core or problem-driven research and through the process of long-term planning.

Because they may not seem urgent enough to have a place in a current strategic plan, emerging issues often fall victim to the budget-cutter's knife, even though an early start on a new topic can bring long-term efficiencies and strengthen research capabilities.

Principle 6

Process efficiency, which may be evaluated by using both expert review and quantitative metrics, should be treated as a minor component of research evaluation.

PART question 3.4, the one that addresses efficiency most explicitly, asks of every federal program whether it has procedures "to measure and achieve efficiencies and cost effectiveness in program execution", including "at least one efficiency measure that uses a baseline and targets" (EPA 2007b, p. 41). Research programs, especially programs of core or basic research, are unlikely to be able to respond "yes" to that question, because research managers cannot set baselines and targets for investigations whose outcomes are unknown. Therefore, such programs are unlikely to gain a "yes" for the question and are less likely to receive an acceptable rating under PART. In addition, failure on the PART efficiency questions precludes a "green" score on the Budget-Performance Integration initiative of the President's Management Agenda.4 Isolating efficiency as an evaluation criterion can produce a picture that is at best incomplete and at worst misleading. It is easy to see how an effort to reduce the time or money spent on a project, in order to increase efficiency, might also reduce its quality unless this effort is part of a comprehensive evaluation.

4 PART Guidance states, "The President's Management Agenda (PMA) Budget and Performance Integration (BPI) Initiative requires agencies to develop efficiency measures to achieve Green status" (OMB 2007b, p. 9).

To evaluate applied research, especially in a regulatory agency, such as EPA, it is essential to understand the strategic and multi-year plans of the regulatory offices, the anticipated contributions of knowledge from research to plans and decisions, and the rather frequent modifications of plans due to intervening judicial, legislative, budgetary, or societal events and altered priorities. Some of those intervening events may be driven by new scientific findings.

The efficiency of research-management processes should certainly be evaluated. They include such activities as grant administration, facility maintenance or construction, and repeated events, such as air-quality sampling. Process efficiency can be evaluated with quantitative management tools, such as earned-value management (EVM). But such evaluations should be integrated with the work of expert-review panels if they are to contribute to the larger task of program evaluation.
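
To make the arithmetic concrete, the following minimal sketch (in Python) computes the standard EVM indicators for a hypothetical laboratory-construction task. The formulas are the conventional earned-value definitions; the task and all dollar figures are invented for illustration and are not drawn from EPA practice.

    def evm_metrics(planned_value, earned_value, actual_cost):
        """Standard earned-value indicators for a task with known milestones.

        planned_value (PV): budgeted cost of the work scheduled to date
        earned_value  (EV): budgeted cost of the work actually completed
        actual_cost   (AC): amount actually spent to date
        """
        return {
            "schedule variance (SV = EV - PV)": earned_value - planned_value,
            "cost variance (CV = EV - AC)": earned_value - actual_cost,
            "schedule performance index (SPI = EV/PV)": earned_value / planned_value,
            "cost performance index (CPI = EV/AC)": earned_value / actual_cost,
        }

    # Hypothetical status: $400,000 of work scheduled, $360,000 of work
    # completed, $450,000 spent so far.
    for name, value in evm_metrics(400_000, 360_000, 450_000).items():
        print(f"{name}: {value:,.2f}")
    # SPI = 0.90 (behind schedule); CPI = 0.80 (over cost).

An SPI or CPI below 1.0 flags slippage against the plan. The comparison is meaningful here only because the task's end state and milestones are known in advance; as the committee notes, that condition rarely holds for core research.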

In summary, efficiency measurements should not dominate or override the overall evaluation of a research program. Parts of the program may not be amenable to quantitative metrics, and the absence of quantitative metrics should not be cause for a low rating that harms the reputation of the program or the agency.

RECOMMENDATIONS

The following recommendations flow from the committee's conclusion that undue emphasis has been placed on the single criterion of efficiency. That emphasis, which is often seen for non-R&D activities throughout the main body of the PART instructions, is not explicit in the PART Investment Criteria (OMB 2007b). Rather, it has emerged during agency reviews, appeal rulings, and outside evaluations of the PART process, despite its inappropriateness for the evaluation of research programs. The issue is important because unsatisfactory responses to the two PART efficiency-focused questions have apparently contributed to a low rating for an entire program (for example, EPA's Ecological Research Program) and later budget cuts (Inside EPA's Risk Policy Report 2007).5 Evaluation of research should begin not with efficiency but with the criteria of relevance, quality, and effectiveness, and should address efficiency only after these criteria have been reviewed.

5 According to EPA's Risk Policy Report, "previous PART reviews criticized ERP [the Ecological Research Program] for not fully demonstrating the results of programmatic and research efforts – and resulted in ERP funding cuts" (Inside EPA's Risk Policy Report 2007).

Recommendation 1

To comply with PART, EPA and other agencies should apply quantitative efficiency metrics only to measure the process efficiency of research programs. Process efficiency can be measured in terms of inputs, outputs, and some intermediate outcomes but not in terms of ultimate outcomes.

For compliance with PART, evaluation of the efficiency of a research program should not be based on ultimate outcomes. Ultimate outcomes can seldom be known until considerable time has passed after the conclusion of the research. Although PART documents encourage the use of outcome-based metrics, they also describe the difficulty of applying them.

Given that restriction, the committee recommends that OMB and other oversight bodies focus not on investment efficiency but on process efficiency when addressing questions 3.4 and 4.3—the ways in which program managers exercise skill and prudence in conserving resources. For evaluating process efficiency, quantitative methods can be used by expert-review panels and others to track and review the use of resources in light of goals embedded in strategic and multi-year plans.
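
As one illustration of such a metric, the sketch below (Python) tracks a hypothetical management activity, the average number of days to process a research-grant application, against a baseline and the kind of annual targets PART question 3.4 calls for. The activity, fiscal years, and figures are all assumed for illustration.

    # Hypothetical process-efficiency metric: average days to process a
    # research-grant application, measured against a fixed baseline year.
    baseline_days = 120                            # FY2005 baseline
    targets = {2006: 110, 2007: 100, 2008: 90}     # targets from a multi-year plan
    observed = {2006: 112, 2007: 98}               # results measured so far

    for year, target in targets.items():
        if year not in observed:
            print(f"FY{year}: target {target} days -- not yet measured")
            continue
        actual = observed[year]
        improvement = (baseline_days - actual) / baseline_days
        status = "met" if actual <= target else "missed"
        print(f"FY{year}: {actual} days, {improvement:.0%} below baseline, target {status}")

Note that the metric describes a management process surrounding research rather than the research itself, which is exactly the distinction Recommendation 1 draws.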

Moreover, to facilitate the evaluation process, the committee recommends including intermediate outcomes, as distinguished from ultimate outcomes. Intermediate outcomes include such results as an improved body of knowledge available for decision-making, comprehensive science assessments, and the dissemination of newly developed tools and models.

The PART R&D investment-criteria document (OMB 2007b; see also Appendix G) should be revised to make it explicit that quantitative efficiency metrics should be applied only to process efficiency.

Recommendation 2

EPA and other agencies should use expert-review panels to evaluate the investment efficiency of research programs. The process should begin by evaluating the relevance, quality, and performance6 of the research.

6 Performance is described in terms of both effectiveness (the ability to achieve useful results) and efficiency (the ability to achieve research quality, relevance, and effectiveness with little waste).

OMB should make an exception when evaluating R&D programs under PART to permit evaluation of investment efficiency as well as process efficiency. This approach will make possible a more complete and useful evaluation.

Investment efficiency is used in this report to indicate whether an agency is "doing the right research and doing it well." The term is used as a gauge of portfolio management to measure whether a program manager is investing in research that is relevant to the agency's mission and long-term plans, whether the research is being performed at a high level of quality, and whether timely and effective adjustments are being made in the multi-year course of the work to reflect new scientific information, new methods, and altered priorities. Those questions cannot be answered quantitatively; they require judgment based on experience. The best mechanism for measuring investment efficiency is the expert-review panel. The concept of investment efficiency may be applied to studies that guide the next set of research projects and stepwise development of analytic tools or other products.

EPA should continue to obtain primary input for PART compliance by using expert review under the aegis of its Board of Scientific Counselors (BOSC) and Science Advisory Board (SAB). Expert review provides a forum for evaluation of research outcomes and complements the efforts of program managers to adjust research activities according to multi-year plans and anticipated outcomes. To enhance the process, consideration should be given to intermediate outcomes. As outputs and intermediate outcomes are achieved, the expert-review panel can use them to adjust and evaluate the expected ultimate outcomes (see Logic Model in Chapter 4).
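
For readers without Chapter 4 at hand, here is a minimal sketch, in Python, of the logic-model stages a panel could record and compare against expectations. The stage names follow the report's usage; the example entries are hypothetical, not taken from the model itself.

    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        """Stages of the results chain, from resources to mission-level results."""
        inputs: list[str] = field(default_factory=list)                  # people, funds, facilities
        outputs: list[str] = field(default_factory=list)                 # near-term, countable results
        intermediate_outcomes: list[str] = field(default_factory=list)  # usable before ultimate outcomes
        ultimate_outcomes: list[str] = field(default_factory=list)      # often years away, hard to attribute

    program = LogicModel(
        inputs=["research staff", "grant funds", "monitoring facilities"],
        outputs=["publications", "air-quality data sets", "new measurement methods"],
        intermediate_outcomes=["comprehensive science assessments",
                               "models adopted by regulatory offices"],
        ultimate_outcomes=["measurable improvement in environmental quality"],
    )
    print(program.intermediate_outcomes)

A review panel would compare the recorded outputs and intermediate outcomes against those anticipated in the multi-year plan, adjusting expectations for the ultimate outcomes accordingly.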

The qualitative emphasis of expert review should not obscure the importance of quantitative metrics, which should be used whenever possible by expert-review panels to evaluate process efficiency when activities can be measured quantitatively and linked to milestones—for example, administration, construction, grant administration, and facility operation.

In evaluating research at EPA, both EPA and OMB should place greater emphasis on identifying emerging and cross-cutting issues. ORD needs to be responsive to short-term R&D requests from the program offices, but it must have an organized process for identifying future research needs. BOSC and SAB should assign appropriate weight in their evaluations to forward-looking exercises that sustain the agency's place at the cutting edge of mission-relevant research.

Expert-review panels and oversight bodies should recognize that research managers need the flexibility to adapt to the realities of input changes beyond the agency's control, especially budgeting adjustments. The most rigorous planning cannot foresee the steps that might be required to maintain efficiency in the face of recurrent unanticipated change.

Recommendation 3

The efficiency of research programs at EPA should be evaluated according to the same overall standards used at other agencies.

EPA has failed to identify a means of evaluating the efficiency of its research programs that complies with PART to the satisfaction of OMB. Some of the metrics it has proposed, such as the number of publications per full-time equivalent (FTE), have been rejected, although accepted by OMB for other agencies. OMB has encouraged EPA to apply the common management technique of EVM, which measures the degree to which research outputs conform to scheduled costs along a timeline, but EPA has not found a way to apply EVM to research activities themselves. No other agency has been asked to use EVM for research activities, and none has done so.

Agencies have addressed PART questions with different approaches, which are often not in alignment with their long-term strategies or missions. Many of the approaches refer only to portions of programs, quantify activities that are not research activities, or review processes that are not central to R&D programs. In short, many federal agencies have addressed PART with responses that are not, in the wording of the charge, "sufficient."

ADDITIONAL RECOMMENDATION FOR THE OFFICE OF MANAGEMENT AND BUDGET

OMB should have oversight and training programs for budget examiners to ensure consistent and equitable implementation of PART in the many agencies that have substantial R&D programs.

Evaluating different agencies by different standards is undesirable because results are not comparable. OMB budget examiners bear primary responsibility for working with agencies in PART compliance and in interpreting PART questions for the agencies. Although not all examiners can be expected to bring scientific training to their discussions with program managers, they must bring an understanding of the research process as it is performed in the context of federal agencies, as discussed in Chapters 1-3.7

7 Some examiners do have training and experience in science or engineering, but this is not a requirement for the position.

OMB decisions about whether to accept or reject metrics for evaluating the efficiency of research programs have been inconsistent. A decision to reject the metrics of one agency while accepting similar metrics at another agency can unfairly damage the reputation of the first agency and diminish the credibility of the evaluation process itself. Because the framework of PART is virtually the same for all agencies and because the principles of scientific inquiry are virtually the same in all disciplines, the implementation of PART should be both consistent and equitable in all federal research programs.

It should be noted that actual consistency is unlikely to be achieved in the vast and varied universe of government R&D programs, which fund extramural basic research, mission-driven intramural labs, basic-research labs, construction projects, facilities operations, prototype development, and many other operations. Indeed, it is difficult even to define consistent approaches that would be helpful to both agencies and OMB. But there is ample room for examiners to provide clearer, more explicit directions, understand the particular functioning of R&D programs, and discern cases when exceptions to broad requirements are appropriate.

REFERENCES

Inside EPA's Risk Policy Report. 2007. Improved OMB Rating May Help Funding for EPA Ecological Research. Inside EPA's Risk Policy Report 14(39). September 25, 2007.

OMB (Office of Management and Budget). 2007a. ExpectMore.gov. Office of Management and Budget [online]. Available: http://www.whitehouse.gov/omb/expectmore/ [accessed Nov. 7, 2007].

OMB (Office of Management and Budget). 2007b. Program Assessment Rating Tool Guidance No. 2007-02. Guidance for Completing 2007 PARTs. Memorandum to OMB Program Associate Directors, OMB Program Deputy Associate Directors, Agency Budget and Performance Integration Leads, and Agency Program Assessment Rating Tool Contacts, from Diana Espinosa, Deputy Assistant Director for Management, Office of Management and Budget, Executive Office of the President, Washington, DC. January 29, 2007. Attachment: Guide to the Program Assessment Rating Tool (PART). January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].


