
Evaluating Research Efficiency in the U.S. Environmental Protection Agency (2008)

Chapter: 2 Efficiency Metrics Used by the Environmental Protection Agency and Other Federal Research and Development Programs


2 Efficiency Metrics Used by the Environmental Protection Agency and Other Federal Research and Development Programs

The questions in the Program Assessment Rating Tool (PART) address many aspects of programs of the federal government, but the charge to this committee refers specifically to questions used to evaluate efficiency. The charge asks

1. What efficiency measures are currently used for EPA R&D programs and other federally-funded R&D programs?
2. Are these efficiency measures sufficient? Outcome based?
3. What principles should guide the development of efficiency measures for federally-funded R&D programs?
4. What efficiency measures should be used for EPA's basic and applied R&D programs?

This chapter addresses primarily the first question in the charge. To answer that question, the committee examined many of the efficiency metrics proposed to comply with PART by EPA and other federal agencies engaged in research. The committee reviewed documents, interviewed agency personnel, and heard presentations during a workshop in April 2007 that was attended by most of the research-intensive agencies and several large corporations that emphasize research.1 This chapter summarizes some of the challenges of evaluating research efficiency and ways in which agencies have approached those challenges.

1 The workshop was held on April 24, 2007, at the National Academies, 2101 Constitution Avenue, Washington, DC 20418. A workshop summary appears in Appendix B.

EVALUATING RESEARCH AND DEVELOPMENT

Research is difficult to evaluate by any mechanism. Useful evaluation requires substantial elapsed time because research on a given scientific question may span 3-5 years from initiation of laboratory or field experiments to analysis and publication of results. Substantial time may also be required for training of EPA staff or the scientific community in scientific and technical advancements prior to the conduct of the research. Considerably more time may elapse before the broader impacts of published research are apparent (NRC 2003).

Although the committee was asked specifically to render advice on how EPA could best comply with the efficiency questions of PART, it concluded that more general suggestions on the evaluation of research would also have value for research-intensive agencies, for the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP), and for Congress. It therefore examined the particular details of PART (Chapters 2-4), proposed principles that can be used to evaluate the results of research in any federal agency, and provided recommendations for EPA that other agencies also may find useful (Chapter 5).

THE PROGRAM ASSESSMENT RATING TOOL AND EFFICIENCY

Efficiency is a common enough concept, as illustrated by familiar dictionary definitions: "effective operation as measured by a comparison of production with cost (as in energy, time, and money)" and "the ratio of the useful energy delivered by a dynamic system to the energy supplied to it."2 The PART approach to efficiency is explained this way by OMB (OMB 2007a, p. 9):

Efficiency measures reflect the economical and effective acquisition, utilization, and management of resources to achieve program outcomes or produce program outputs. Efficiency measures may also reflect ingenuity in the improved design, creation, and delivery of goods and services to the public, customers, or beneficiaries by capturing the effect of intended changes made to outputs aimed to reduce costs and/or improve productivity, such as the improved targeting of beneficiaries, redesign of goods or services for simplified customer processing, manufacturability, or delivery.

2 Merriam Webster Online, http://www.m-w.com/.

APPLYING EFFICIENCY TO INPUTS, OUTPUTS, AND OUTCOMES

Any definition of efficiency depends on the process to which it is applied. Of relevance to this report is its application to the processes of research and development, which, as described by OMB, are complex and variable and involve inputs, outputs, and outcomes that vary by agency, program, and laboratory.
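Stated informally (this shorthand is the editor's restatement, not language taken from the PART guidance), the distinction that recurs throughout the chapter can be written as a pair of ratios:

\[
\text{output (process) efficiency} \approx \frac{\text{outputs delivered}}{\text{inputs consumed}},
\qquad
\text{outcome efficiency} \approx \frac{\text{outcomes achieved}}{\text{inputs consumed}}.
\]

The process-oriented measures that agencies have actually proposed, described later in this chapter, are built from the inputs and outputs side of this picture; OMB's stated preference is for measures built on outcomes.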

Those terms are discussed by OMB (OMB 2007a) as follows:

• Inputs for purposes of PART are any agency resources that support research, which may include "overhead, intramural/extramural spending, infrastructure, and human capital" (OMB 2007a, p. 76).
• Outputs "describe the level of activity that will be provided over a period of time, including a description of the characteristics (e.g., timeliness) established as standards for the activity. Outputs refer to the internal activities of a program (i.e., the products and services delivered)" (OMB 2007a, p. 83). Outputs that have been used by agencies to comply with PART include research findings, papers published or cited, grants awarded, adherence to a projected schedule, and variance from cost and time schedules (OMB 2007b).
• Outcomes, according to OMB guidance, "describe the intended result of carrying out a program or activity. They define an event or condition that is external to the program or activity and that is of direct importance to the intended beneficiaries and/or the public" (OMB 2007a, p. 8). OMB gives the example of a tornado-warning system, whose outcomes "could be the number of lives saved and property damage averted" (OMB 2007a, p. 9). An outcome of research in support of the mission of a regulatory agency, such as EPA, may be the consequence of a regulation or other change that brings about some improvement in health or environmental quality.

THE PROGRAM ASSESSMENT RATING TOOL GRADING SYSTEM

The overall structure of PART was introduced in Chapter 1; additional aspects are described briefly here.

Although this report focuses on the criterion of efficiency, the PART questions, taken in their entirety, attempt to address a broad range of issues with the goal of a thorough evaluation of federal programs. For example, section 1 of PART asks for general information about purpose and design. The second section, on strategic planning, asks such questions as whether a program has a long-term plan (2.1) and annual plans (2.2), both of which concern the relevance of research to the agency mission; it then asks whether the program is meeting long-term targets (2.3) and annual targets (2.4) concerning the quality of the research. The requirements for annual plans and annual targets often present problems for research managers. As discussed in Chapter 1, programs engaged in core research may be unable to specify the nature, timing, or benefits of their work annually because the results of core research can seldom be planned or analyzed in 1-year increments.

Many of these sections also assume that evaluation is based on ultimate outcomes. For example, the PART guidance for question 2.2 (Does the program have ambitious targets and timeframes for its long-term measures?) states as follows: "For R&D programs, a Yes answer would require that the program provides multi-year R&D objectives. Where applicable, programs must provide schedules with annual milestones, highlighting any changes from previous schedules. Program proposals must define what outcomes would represent a minimally effective program and a successful program." [Additional examples of the emphasis on ultimate outcomes can be seen in the sections of the guidance included in Appendix I.]

In Section 3, which deals with management, question 3.4 specifies quantitative efficiency metrics as follows: "Does the program have procedures (e.g., competitive sourcing/cost comparison, IT improvements, and appropriate incentives) to measure and achieve efficiencies and cost effectiveness in program execution?" To win approval, a program must have "regular procedures to achieve efficiencies and cost effectiveness and at least one efficiency measure that uses a baseline and targets" (OMB 2007a, p. 26).

Question 4.3 also addresses efficiency but appears in Section 4, on results and accountability: "Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year?" To pass question 4.3, a program must have passed question 3.4 (that is, have "at least one efficiency measure that uses a baseline and targets") and have demonstrated improved efficiency or cost effectiveness over the prior year.

In discussions with the committee, OMB officials were consistent in supporting the use of outcome-based measures to evaluate efficiency. They also acknowledged that efforts to do so had not yet been successful.3

3 As one example, the committee heard the following from OMB: "The requirement for 3.4 is that they have an efficiency measure. The highest standard is outcome. However, if that is not achievable, then they can have an output efficiency measure… But we are pushing for the outcomes, because if we just focus on the activities that we do, we don't necessarily have the ability to find out if those activities and strategies are effective. That's why another key component of the PART is evaluation" (from April 2007 workshop discussion; see summary in Appendix B).

Because agencies are expected to provide a satisfactory answer to every PART question, and several questions require answers on efficiency and annual achievements, this focus can lead to a poor PART grade even if the relevance, quality, and effectiveness of a research program are demonstrated. That and other difficulties are addressed further in Chapter 3.

THE USE OF "EXPERT REVIEW" AT THE ENVIRONMENTAL PROTECTION AGENCY

EPA uses multiple mechanisms to evaluate its R&D activities, including internal processes of strategic plans, multi-year plans, and annual performance goals (mentioned in Chapter 1). The multi-year plans provide a means for tracking and, when necessary, adjusting research activities as they progress toward long-term goals.

To gain independent external perspective, EPA uses several standing "expert review" boards. Expert review is a broadened version of peer review, the mechanism by which researchers' work is traditionally judged by other researchers in the same field.4 Expert review groups may include not only experts in the field under review but members from other fields and appropriate "users" of research results, who may represent the private sector, other agencies, nongovernment organizations (NGOs), state governments, labor unions, and other relevant bodies (NRC 1999).

4 According to one definition, "peer review is a widely used, time-honored practice in the scientific and engineering community for judging and potentially improving a scientific or technical plan, proposal, activity, program, or work product through documented critical evaluation by individuals or groups with relevant expertise who had no involvement in developing the object under review" (NRC 2000). Expert review was also recommended by the Committee on Science, Engineering, and Public Policy panel cited in Chapter 1 for evaluating research to comply with the Government Performance and Results Act.

At EPA, expert reviewers are chosen for their skills, experience, and ability to judge not only the quality, relevance, and effectiveness of a program but whether it is being efficiently planned, managed, and revised in response to new knowledge—that is, whether it is efficient. Each panel should include members who have successfully run research programs themselves and are able to recognize good performance.

The Science Advisory Board

One of EPA's long-standing review bodies is the Science Advisory Board (SAB), which includes a mix of scientists and engineers in academe, industry, state government, advisory bodies, and NGOs. The SAB was established by Congress in 1978 under a broad mandate to advise the agency on technical matters, including the quality and relevance of information used as the basis of regulations. The panel includes experts in science and technology policy, environmental-business planning processes, environmental economics, toxicology, resource management, environmental decision-making, ecotoxicology, risk perception and communication, decision analysis, risk assessment, civil and environmental engineering, epidemiology, radiologic health, air-quality modeling, public health, and environmental and occupational health (EPASAB 2007).

According to the Overview of the Panel Formation Process at the Environmental Protection Agency Science Advisory Board (EPASAB 2002), EPA uses the following criteria in evaluating an individual panelist to serve on the SAB:

• Expertise, knowledge, and experience (primary factors).
• Availability and willingness to serve.
• Scientific credibility and impartiality.
• Skills working in committees and advisory panels.

The Board of Scientific Counselors

The other principal EPA expert-review panel for ORD is the Board of Scientific Counselors (BOSC), a body of nongovernment scientists and engineers established in 1996 to provide advice, information, and recommendations to ORD. It has up to 15 members, and they meet three to five times a year. The BOSC reviews are relevant to this discussion because EPA is experimenting with their use as a mechanism for reviewing various aspects of research effectiveness.

In 2004, the BOSC review process was restructured to focus on the three evaluation criteria of PART and to include both prospective and retrospective reviews of research programs. In 2006, three charge questions were developed for use in BOSC's summary assessment of each program's long-term goals:

• How appropriate is the research used to achieve each long-term goal? Is the program still asking the right questions, or have they been superseded by advancements in the field? (Relevance)
• How good is the technical quality of the program's research products? (Quality)
• How much are the program results being used by environmental decision-makers to inform decisions and achieve results? (Performance)

The BOSC review process also feeds into PART through several other questions. For example, the BOSC review is submitted in response to question 4.5, "Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results?"5

5 For additional material about BOSC, see EPA's Draft Board of Scientific Counselors Handbook for Subcommittee Chairs, Appendix B, p. 18.

In spring 2006, a BOSC panel added the charge questions discussed above to its evaluation criteria; it used them for the first time early in 2007. Although there is much overlap between the BOSC investigations and the PART questions regarding results of a program, OMB had not by the time of this committee's investigation determined whether the use of BOSC's revised charge met the requirements of PART.6

6 During the July 2007 workshop, committee members discussed this issue with representatives of OMB and EPA. The EPA representatives described current efforts to develop a quantitative system for use by BOSC.

During the 2005 PART review for EPA's drinking-water program, EPA experimented with a quantitative version of the BOSC evaluation. It involved nine questions, for each of which the committee gave a rating of 1-5 to provide a numerical grade.

That process was not accepted as scientifically valid by the BOSC.

Two BOSC reviews were in progress at the time of this report: on pesticides and toxics and on sustainability research. EPA staff noted that the reviews will serve as baselines for later reviews. EPA is providing the BOSC review committees with two kinds of data, among others, to evaluate the performance of research programs: pilot surveys that evaluate how the research is being used and bibliometric analyses (P. Juengst, EPA, personal communication, 2007). Box 2-1 provides an example of a BOSC expert review.

According to EPA staff, there is increasing pressure from OMB to focus on outcome-based efficiency metrics, but the agency has been unable to establish such metrics for research.

EMERGING ISSUES

One important role of expert review is to complement the ability of program managers and agency leaders to anticipate important emerging issues. Strategic effectiveness rises when the agency plans for the "next big thing," rather than awaiting its sudden arrival. The program managers necessarily focus their attention on the day-to-day demands of administration, but expert reviewers can survey agency research in a wider context. To the degree that an agency can position itself at the forefront of a new field, it can increase its research relevance, quality, and performance.

METRICS PROPOSED BY THE ENVIRONMENTAL PROTECTION AGENCY

EPA, like other agencies, proposed quantitative metrics to measure various kinds of efficiency in its PART compliance, and many of them were accepted by OMB. Metrics considered or used by EPA included the use of its research results to support regulations, surveys to gauge client satisfaction with its products, average time spent in producing assessments, overhead as a fraction of research, and citations per dollar invested. Such metrics fit well with the existing strategic and multi-year planning that provides annual milestones against which to evaluate them.

Like other agencies, EPA has proposed that an increase in the number of peer-reviewed publications produced per full-time equivalent (FTE) complies with the PART guidance that "efficiency measures could focus on how to produce a given output level with fewer resources" (OMB 2006a, p. 10). That was not accepted by OMB examiners for question 3.4 as an efficiency metric for the Water Quality Research Program. OMB found that the lack of a tight linkage between publications and budget made it hard to determine whether money was being spent appropriately. Publications might have been far ahead of schedule and over budget, for example, or behind schedule and under budget (K. Neyland, OMB, personal communication, 2007).

BOX 2-1 BOSC: An Example of Expert Review

One example of the composition and function of a BOSC expert-review panel is its Human Health Subcommittee, which issued a report on EPA's Human Health Research Program (HHRP) in 2005. The panel included eight members in academe, industry, and government.7 The panel met for 3 days and stated its purpose as follows: "The objective of this review is to evaluate the relevance, quality, performance, and scientific leadership of the Office of Research and Development's (ORD's) Human Health Research Program." It evaluated the overall program's relevance, quality, performance, and leadership relative to each of its four long-term goals:

• Use of mechanistic data in risk assessment.
• Aggregate and cumulative risk assessment.
• Evaluation of risk to susceptible subpopulations.
• Evaluation of public-health outcomes.

The subcommittee visited the HHRP's main facility, in Research Triangle Park, North Carolina, where it heard from EPA offices and programs regarding the utility of research products developed by ORD scientists in the HHRP. The expert-review panel received extensive confirmation that ORD scientists were helpful to the various EPA regions in hosting regional scientists in ORD laboratories, collaborating with the regions on regional environmental problems, providing scientific consultation to the regions to help to ameliorate their environmental problems, and providing scientific consultation to the regions on specific problems in environmental toxicology.

7 The members of the Human Health Subcommittee, according to the BOSC Web site, had "considerable expertise in the area of human health research, including formal education, training, and research experience in biology, chemistry, biochemistry, environmental carcinogenesis, pharmacology, molecular biology and molecular mechanisms of carcinogenicity and toxicity, toxicology, physiologically based pharmacokinetic (PBPK) modeling, exposure modeling, risk assessment, epidemiology, biomarkers and biological monitoring, and public health, with additional expertise in the areas of children's health, community-based human exposure studies, and clinical experience" (EPA 2005, p. 1).

Earned-Value Management

EPA staff also approached OMB to discuss the use of earned-value management (EVM) as an efficiency metric for its ecological research program. EVM measures the degree to which research outputs conform to scheduled costs along a timeline. It is used by agencies and other organizations in many management settings, such as construction projects and facilities operations, where the outcome (such as a new laboratory or optimal use of facilities) is well known in advance, and progress can be plotted against milestones.
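The report does not spell out the arithmetic behind EVM, but in its standard project-management form it compares three quantities tracked over time: the budgeted cost of the work scheduled to date (planned value), the budgeted cost of the work actually completed (earned value), and the actual cost of that completed work. The sketch below illustrates the usual variance and index calculations; the figures are invented for illustration and are not drawn from any EPA program.

# Standard earned-value calculations (illustrative only; figures are invented).
# pv: planned value - budgeted cost of work scheduled to date
# ev: earned value  - budgeted cost of work actually completed to date
# ac: actual cost   - actual cost of the completed work to date
def earned_value_indices(pv: float, ev: float, ac: float) -> dict:
    return {
        "cost_variance": ev - ac,      # positive means under budget
        "schedule_variance": ev - pv,  # positive means ahead of schedule
        "cpi": ev / ac,                # cost performance index (>1 is favorable)
        "spi": ev / pv,                # schedule performance index (>1 is favorable)
    }

# Example: a construction-type project six months into its schedule.
print(earned_value_indices(pv=1_200_000, ev=1_050_000, ac=1_150_000))

Because the calculation presumes a fixed scope and a milestone schedule known in advance, it maps naturally onto construction and facilities work; the difficulty the agencies describe is that research outcomes rarely come with such a schedule.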

Although EPA and other agencies have found value in using EVM to measure the efficiency of some processes, they have not found a way to apply it to research outcomes.

METRICS THAT DID NOT PASS THE PROGRAM ASSESSMENT RATING TOOL PROCESS

Since 2004, when PART grading began, several major research-based EPA programs have been rated "ineffective" or "results not demonstrated." One was the Ecological Research Program (ERP), which in 2005 was given an "ineffective" rating,8 including a "no" on question 3.4. At that time, it was noted that the program lacked an acceptable efficiency measure, but was working to develop one.

8 PART rated 3% of federal programs "ineffective." It rated 19% as "results not demonstrated" (OMB 2007c).

In EPA's view, the agency failed to provide an acceptable efficiency metric because it could not measure the outcome efficiency of its research (M. Peacock, EPA, personal communication, 2007). It was also given a zero score on question 4.3 because a program that fails to have an acceptable efficiency metric on question 3.4 cannot demonstrate annual increases in efficiency. Other programs that have not passed the efficiency questions include the National Ambient Air Quality Standards Program and the Ground Water and Drinking Water Program, for similar reasons.

In examining OMB documents and the ExpectMore.gov Web site, the committee found that for most research-intensive agencies other than EPA, OMB accepted efficiency measures similar to those proposed by EPA, such as scheduled regulatory decision-making activities. No agency has responded to PART questions 3.4 and 4.3 by using outcome-based efficiency measures for R&D programs.

The Appeals Process

EPA appealed the denial of using publications per FTE as an efficiency metric for the Water Quality Research Program to an OMB Appeals Board, which accepted the appeal on several conditions. One was that "the program should include a follow-up action in its PART improvement plan relating to developing an outcome-oriented efficiency metric," and another was that the program "must have a baseline and targets" (OMB 2006b). Even though the use of "outcome-oriented efficiency metrics" is not required by the PART guidance or the R&D Investment Criteria, it is strongly preferred by OMB and sometimes required by the examiner and/or during the appeals process. In both EPA and other research-intensive agencies, concerns have been voiced by those involved in PART compliance that the application of rules sometimes seems inconsistent or confusing.9

9 According to one study of the early application of PART, "The patterns of rating programs are not very clear regarding the FY 2004 process, largely because of variability among the OMB budget examiners. The variability was pointed out by GAO in its assessment of the process" (Radin 2006, p. 123). See also the comments by a NASA representative later in this chapter.

In addition, OMB continues to encourage development of a version of EVM that will prove satisfactory for the purpose of evaluating the efficiency of R&D programs, but at the time of the present committee's study the issue had not been resolved (B. Kleinman, OMB, personal communication, 2007).

THE CONSEQUENCES OF A "NO" ANSWER TO A PROGRAM ASSESSMENT RATING TOOL QUESTION

Because PART was initiated in 2003 and has not yet examined all research programs of federal agencies, there is little information about the effects of low ratings on agencies. However, as demonstrated above by the example of the ERP, a program may do poorly in the PART process if it does not have an acceptable measure of efficiency even if it has high marks for relevance and quality. A rating of "ineffective" for research cannot be helpful for a regulatory agency like EPA, whose authority rests in part on its reputation for sound scientific research. Indeed, after the "ineffective" PART ratings were applied to the ERP in 2005, the program suffered substantial erosion of support (Morgan 2007).10

10 In his testimony before the Subcommittee on Energy and Environment, Committee on Science and Technology, U.S. House of Representatives, March 2007, M. Granger Morgan, of Carnegie Mellon University (CMU), noted the agency's difficulty in carrying out the work necessary to comply with PART while its budget was reduced. He stated that "it appears seriously misguided to raise the bar for comprehensive cost-effective or benefit-cost justification for environmental science research, while simultaneously shrinking the resources devoted to the types of research needed to assess the net social benefits of the outcomes of environmental science research." Morgan is chair of CMU's Department of Engineering and Public Policy, chair of the EPA SAB, and an expert in risk analysis and uncertainty (Morgan 2007, p. 4).

According to figures cited by EPA's Risk Policy Report (Sarvana 2007), Congress reduced extramural funding for "ecology and global change" through EPA's National Center for Environmental Research from about $32 million in FY 2002 to less than $25 million in FY 2003. The program received a similar cut from FY 2004, when it received about $24 million, to FY 2005, when it received only about $8 million.11

11 Most recently, the ERP received a positive rating, again as reported by InsideEPA.com: "The rating was part of OMB's Program Assessment Rating Tool (PART) process, which rates federal programs' performance and helps set budget levels. It is the third PART review of ERP since OMB launched the initiative in 2002, and the first to give the program a positive score. Previous PART reviews criticized ERP for not fully demonstrating the results of programmatic and research efforts—and resulted in ERP funding cuts" (Inside EPA Risk Policy Report 2007).

An "ineffective" PART rating also affects program ratings under the President's Management Agenda (PMA). The PMA is relevant to this document, even though the committee's charge did not specify it, because PART is a component of the PMA. The PMA is generally an agency-level effort with five initiatives, one of which is Budget-Performance Integration (BPI). All programs assessed by PART must have acceptable efficiency measures for the agency to receive a "green" score in the BPI.12

12 The PMA awards agencies a green, yellow, or red rating. As noted in PART guidance (OMB 2007a, p. 9): "The President's Management Agenda (PMA) Budget and Performance Integration (BPI) Initiative requires agencies to develop efficiency measures to achieve Green status."

EVALUATION MECHANISMS USED BY OTHER AGENCIES

The committee and staff have consulted with other agencies about their PART evaluation processes. They also invited research-intensive agencies' representatives to a workshop at the Academies in April 2007 to describe their processes and compare results (see Appendix B for the workshop summary). This section is based on the workshop presentations and followup conversations.

Metrics of Efficiency Accepted by the Office of Management and Budget

OMB makes clear its general preference for "outcome efficiency" in PART compliance (OMB 2006a), but the mechanisms proposed by agencies are metrics of output (process) efficiency. The committee compiled and reviewed an "efficiency measures table" of efficiency metrics used for research programs by 11 federal agencies, including EPA, in following the PART guidance13 (see Appendix E for Table E-1). The table also includes information gathered from four corporations that have R&D programs. The following list is a sample of the common types of metrics proposed by the agencies, many of which have been accepted by OMB:

• Time to process and award grants.
• Time to respond to information requests.
• Publications per FTE (or per dollar).
• Percentage of budget that is overhead.
• Percentage of work that is peer-reviewed.
• Average cost per measurement or analysis.
• Cost-sharing.
• Quality or cost of equipment and other inputs.
• Variance from schedule and cost.

13 The complete table is in Appendix E.
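Most of the measures above are simple ratios or durations that, to satisfy questions 3.4 and 4.3, must be tracked against a baseline and annual targets. The sketch below illustrates that bookkeeping in schematic form; the measure name and all figures are invented for illustration and are not taken from any agency's PART submission.

# Schematic tracking of a process-efficiency measure against a baseline and
# annual targets (measure name and figures are invented for illustration).
from dataclasses import dataclass

@dataclass
class EfficiencyMeasure:
    name: str
    baseline: float            # value in the baseline year
    targets: dict[int, float]  # fiscal year -> target value
    actuals: dict[int, float]  # fiscal year -> observed value

    def met_target(self, year: int) -> bool:
        # Here larger is better (e.g., publications per FTE); for time- or
        # cost-based measures the comparison would be reversed.
        return self.actuals[year] >= self.targets[year]

pubs_per_fte = EfficiencyMeasure(
    name="peer-reviewed publications per FTE",
    baseline=1.8,
    targets={2006: 1.9, 2007: 2.0},
    actuals={2006: 1.95, 2007: 1.85},
)

for fy in (2006, 2007):
    print(fy, pubs_per_fte.name, "target met:", pubs_per_fte.met_target(fy))

As noted earlier in the chapter, OMB rejected a publications-per-FTE measure for one EPA program because publication counts are only loosely linked to budget; the sketch illustrates the mechanics of tracking such a measure, not its merit.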

Chapter 3 asks whether those metrics are "sufficient" for evaluating research efficiency.

Department of Energy

The Department of Energy (DOE) uses PART to evaluate many management processes of its Office of Science. For evaluating the quality and relevance of research, DOE depends on peer review of all portfolios every second or third year by "committees of visitors." That was found to be fairly cost-effective, allowing the agency to look at what was proposed and how well it was performed, to identify ideas that lack merit, to discontinue inefficient processes, to redirect R&D, or to terminate a poorly performing project.

With the creation of PART, a committee was established to test appropriate metrics, but the committee has not found a way to assign value to a basic-research portfolio. A DOE representative commented that the work is valued according to its societal and mission accomplishments; this has to be done by working closely with the scientific community.

A DOE representative said that the director of the President's Office of Science and Technology Policy had established a committee charged with developing a mechanism for measuring the value of research and estimating the cost of compliance.

National Science Foundation

Like DOE, the National Science Foundation (NSF) uses external committees of visitors to perform peer review (called merit review) on programs or portfolios every 3 years. Merit review is a detailed and long examination of technical merit and broader impacts of research. NSF tracks efficiency primarily in two ways. One measures the time to decision on research awards; the second measures facility costs, schedules, and operations, with specific goals for each.

National Aeronautics and Space Administration

The National Aeronautics and Space Administration (NASA) uses PART exercises in evaluating the efficiency of repetitive, stable, and baseline processes and some aspects of R&D, such as financial management, contracting, travel processing, and capital-assets tracking. The agency has been using PART metrics to track and evaluate the complex launch process and to find safe ways to reduce the size of the Space Shuttle workforce. Other uses were planned, such as increasing the on-time availability and operation of ground test facilities and reducing the cost per minute of operating space network support for missions.

Like other agencies, NASA does not find PART useful for evaluating the efficiency of research, especially unrepeatable projects, such as discoveries dictated by science or the development of prototypes.

A NASA representative noted that the PART process depended heavily on the PART examiners, who tended to vary widely in attitudes and experience. Although the NASA examiners typically had scientific or engineering backgrounds, this was not necessarily the case for OMB policy-makers crafting the PART policies and guidance. The representative noted that because the OMB policy-makers generally did not have research backgrounds, NASA spends considerable time in educating them about the relevant differences in R&D programs, as these differences are not well considered in PART guidance. The representative suggested more flexibility in the actions of the reviewers—for example, in recognizing that short-term decreases in efficiency might lead to long-term efficiency gains and in seeing the need to balance efficiency with effectiveness.14

14 In a case study of the PART process at the Department of Health and Human Services, a similar difficulty was described. "In some cases, OMB budget examiners were willing to deal with multiple elements of programs as a package; in other cases, the examiner insisted that a small program would require individual PART submissions. It was not always clear to HHS staff why a particular program received the rating it was given; OMB policy officials did not appear to have a consistent view of the PART process" (Radin 2006, p. 142).

National Institutes of Health

National Institutes of Health (NIH) staff described using PART on research and research-support activities. For example, the extramural research program has achieved cost savings through improved grant administration. The intramural research program has used it to reallocate laboratory resources, and the building and facilities program has monitored its property condition index. The extramural construction program has achieved economies by expanding the use of electronic management tools to monitor construction and occupancy for 20 years post-completion.

NIH staff noted that the PART approach to efficiency is the same as a business model, which emphasizes time, cost, and deliverables. With such a model, efficiency can be increased by improving any variable, so long as the other two do not worsen. This approach does not fit the scientific discovery process. Some 99% of the NIH portfolio has been subjected to PART; 95% of the programs are rated as effective, and the other 5% as moderately effective. External research (close to 90% of the budget), which is not under NIH's direct control, is excluded, although the agency coordinates with awardees to ensure performance.

NIH staff noted that, in scientific discovery, variables are largely unknown because the outcome is unpredictable knowledge, and the inputs of time, cost, and resources are difficult to estimate. Research does not fit easily into a business model, for other reasons cited:

• In research, high-risk projects are strongly associated with innovative outcomes that may initially fail even though the scientific approach was sound.
• A research outcome may be unexpected or lead to an unexpected benefit.
• Changing direction, which may look like poor management in the context of a business model, may be good research practice in order to conduct good science.
• A business model does not capture the null hypothesis or a serendipitous finding that gives valuable information (Duran 2007).

These types of research results provide valuable information, which is likely not credited in a business-model approach.

National Institute for Occupational Safety and Health

The National Institute for Occupational Safety and Health (NIOSH) functions as the research partner of the Occupational Safety and Health Administration. NIOSH uses independent expert review to evaluate its 30 research programs, which exist in a "matrix" with substantial overlaps.

A model for evaluating a research program was provided by a framework developed for NIOSH by the National Academies with a blend of quantitative and qualitative elements. A central feature of the framework is that it adds to outputs and outcomes a third metric, "intermediate outcomes." The categories were associated with the following kinds of results (Sinclair 2007):

• Outputs. Peer-reviewed publications, NIOSH publications, communications to regulatory agencies or Congress, research methods, control technologies and patents, and training and information products.
• Intermediate outcomes. Regulations, guidance, standards, training and education programs, and pilot technologies.
• End outcomes. Reductions in fatalities, injuries, illnesses, and exposures to hazards.

The Academies' review of NIOSH procedures included several approaches to program evaluation, including the following general road map for characterizing the evaluation process as a whole (NRC 2007):

• Gather appropriate information from NIOSH and other sources.
• Determine timeframe for the evaluation.
• Identify program-subject challenges and objectives.
• Identify subprograms and major projects in the research program.
• Evaluate the program and subprogram components sequentially (this will involve qualitatively assessing each phase of a research program).
• Evaluate the research program's potential outcomes not yet appreciated.
• Evaluate and score the program's potential outcomes and important subprogram outcomes specifically for contributions to the environment and health.
• Evaluate and score the overall program for quality, using a numerical scale.
• Evaluate and score the overall program for relevance, using a numerical scale.
• Evaluate and score the overall program for performance (effectiveness and efficiency), using a numeric scale.
• Identify important emerging research.
• Prepare report.

One attribute of that approach, said a NIOSH representative, is that it is flexible enough to apply to R&D programs in which efficiency metrics are appropriate for some functions but not for others. It allows evaluation of a research program by focusing on outputs agreed to in the multi-year plan, and the outcomes are embedded in the plan itself. The plan can thus be evaluated by year-over-year results that lay the foundation for the outcomes.

The NIOSH plan could also make use of a numeric scale for performance. For example, expert-review panels could be asked to assess performance on a scale of 1-5.

METHODS USED BY INDUSTRY

Representatives of four industries that perform research participated in the committee's public workshop and described how research delivers value to their companies. They all had clear rationales for investing in research and spent substantial amounts (typically about 1% of sales) on research activities, but none used EVM, none evaluated research in terms of efficiency, and none described direct links to outcomes (sales).15 Chapter 3 offers additional perspective on evaluating research efficiency in industry.

15 See workshop summary in Appendix B.

SUMMARY

OMB has required that federal agencies measure the efficiency of their activities to comply with PART.

This chapter has summarized how agencies with substantial research programs, including EPA, have attempted to comply with that requirement. In the next chapter, we discuss whether the metrics that agencies have proposed and are using are "sufficient" for evaluating the performance of research, including its efficiency, and discuss a balanced approach to such evaluation.

REFERENCES

Duran, D. 2007. Presentation at the Workshop on Evaluating the Efficiency of Research and Development Programs at the Environmental Protection Agency, April 24, 2007, Washington, DC.

EPA (U.S. Environmental Protection Agency). 2005. Program Review Report of the Board of Scientific Counselors: Human Health Research Program. Office of Research and Development, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/osp/bosc/pdf/hh0507rpt.pdf [accessed Nov. 8, 2007].

EPASAB (U.S. Environmental Protection Agency Science Advisory Board). 2002. Overview of the Panel Formation Process at the Environmental Protection Agency. EPA-SAB-EC-02-010. Office of the Administrator, Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. September 2002 [online]. Available: http://www.epa.gov/sab/pdf/ec02010.pdf [accessed Nov. 8, 2007].

EPASAB (U.S. Environmental Protection Agency Science Advisory Board). 2007. U.S. Environmental Protection Agency Chartered Science Advisory Board FY 2007 Member Biosketches. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/sab/pdf/board_biosketches_2007.pdf [accessed Nov. 8, 2007].

Inside EPA's Risk Policy Report. 2007. Improved OMB Rating May Help Funding for EPA Ecological Research. Inside EPA's Risk Policy Report 14(39). September 25, 2007.

Morgan, G.M. 2007. Testimony of M. Granger Morgan, Chair, U.S. Environmental Protection Agency Science Advisory Board, Before the Subcommittee on Energy and Environment, Committee on Science and Technology, U.S. House of Representatives, March 14, 2007 [online]. Available: http://www.epa.gov/sab/pdf/2gm_final_written_testimony_03-14-07.pdf [accessed Nov. 8, 2007].

NRC (National Research Council). 1999. Evaluating Federal Research Programs: Research and the Government Performance and Results Act. Washington, DC: National Academy Press.

NRC (National Research Council). 2000. P. 99 in Strengthening Science at the U.S. Environmental Protection Agency: Research-Management and Peer-Review Practices. Washington, DC: National Academy Press.

NRC (National Research Council). 2003. P. 3 in The Measure of STAR: Review of the U.S. Environmental Protection Agency's Science to Achieve Results (STAR) Research Grants Program. Washington, DC: The National Academies Press.

NRC (National Research Council). 2007. Framework for the Review of Research Programs of the National Institute for Occupational Safety and Health. Aug. 10, 2007.

OMB (Office of Management and Budget). 2006a. Guide to the Program Assessment Rating Tool (PART). Office of Management and Budget. March 2006 [online]. Available: http://www.whitehouse.gov/omb/part/fy2006/2006_guidance_final.pdf [accessed Nov. 7, 2007].

OMB (Office of Management and Budget). 2006b. PART Appeals Board Decisions-Environmental Protection Agency. Letter from Clay Johnson, III, Deputy Director for Management, Office of Management and Budget, Washington, DC, to Marcus C. Peacock, Deputy Administrator, U.S. Environmental Protection Agency, Washington, DC. August 28, 2006.

OMB (Office of Management and Budget). 2007a. Guide to the Program Assessment Rating Tool (PART). Office of Management and Budget. January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].

OMB (Office of Management and Budget). 2007b. Circular A-11, Part 7, Planning, Budgeting, Acquisition and Management of Capital Assets. Office of Management and Budget, Executive Office of the President. July 2007 [online]. Available: http://www.whitehouse.gov/omb/circulars/a11/current_year/part7.pdf [accessed Nov. 9, 2007].

OMB (Office of Management and Budget). 2007c. ExpectMore.gov. Office of Management and Budget [online]. Available: http://www.whitehouse.gov/omb/expectmore/ [accessed Nov. 7, 2007].

Radin, B. 2006. Challenging the Performance Movement: Accountability, Complexity and Democratic Values. Washington, DC: Georgetown University Press.

Sarvana, A. 2007. Science Board Joins House in Opposing EPA Ecosystem Research Cuts. Inside EPA's Risk Policy Report 14(37):1, 8-9. September 11, 2007.

Sinclair, R. 2007. Presentation at the Workshop on Evaluating the Efficiency of Research and Development Programs at the Environmental Protection Agency, April 24, 2007, Washington, DC.
