
Evaluating Research Efficiency in the U.S. Environmental Protection Agency (2008)

Chapter: 4 A Model for Evaluating Research and Development Programs

Suggested Citation: "4 A Model for Evaluating Research and Development Programs." National Research Council. 2008. Evaluating Research Efficiency in the U.S. Environmental Protection Agency. Washington, DC: The National Academies Press. doi: 10.17226/12150.


4 A Model for Evaluating Research and Development Programs

This report has discussed the difficulty of evaluating research programs in terms of results, which are usually described as outputs and ultimate outcomes. However, between outputs and ultimate outcomes are many kinds of "intermediate outcomes" that have their own value as results and can therefore be evaluated. The following is a sample of the kinds of activities that might be categorized as outputs, intermediate outcomes, and ultimate outcomes:

• Outputs include peer-reviewed publications, databases, tools, and methods.

• Intermediate outcomes include an improved body of knowledge available for decision-making, integrated science assessments (previously called criteria documents), and the dissemination of newly developed tools and models.

• Ultimate outcomes include improved air or water quality, reduced exposure to hazards, restoration of wetland habitats, cleanup of contaminated sediments, and demonstrable improvements in human health.

Those steps can be described in different terms, depending on the agency using them and the scope of the research involved. For the Environmental Protection Agency (EPA) Office of Research and Development (ORD), for example, results that might fit the category of intermediate outcome might be the provision of a body of knowledge that can be used by EPA's customers and the use of that knowledge in planning, management, framing of environmental regulations, and other activities. Intermediate outcomes are bounded on one side by outputs (such as toxicology studies, reports of all kinds, models, and monitoring activities) and on the other side by ultimate outcomes (such as protection and improvement of human health and ecosystems).

As a somewhat idealized example of how EPA (or other agencies) might conceptualize and make use of these terms, the following logic model shows the sequence of research, including inputs, outputs, intermediate outcomes, and ultimate outcomes. These stages in the model are roughly aligned with various events and users as research knowledge is developed. However, it is important to recognize that this model must be flexible to respond to rapid changes in research direction based upon unanticipated issues. The shift of personnel and resources to meet a new or newly perceived environmental challenge inevitably will impact the ability to complete planned R&D programs.

In the top row of Figure 4-1, the logic flow begins with process inputs and planning inputs. Process inputs could include budget, staff (including the training needed to keep a research program functioning effectively), and research facilities. Planning inputs could include stakeholder involvement, monitoring data, and peer review. Process and planning inputs are transformed into an array of research activities that generate the research outputs listed in the first ellipse, such as recommendations, reports, and publications. The combination of research and research outputs leads to intermediate outcomes.

A helpful feature of the model is that there are two stages of intermediate outcomes: research outcomes and customer outcomes. The intermediate research outcomes are depicted in the arrow and include an improved body of knowledge available for decision-making, new tools and models disseminated, and knowledge ready for application. The intermediate research outcomes in the arrow are followed by intermediate customer outcomes, in the ellipse, that describe a usable body of knowledge, such as regulations, standards, and technologies. Intermediate customer outcomes also include education and training. They may grow out of integrated science assessments or out of information developed by researchers and help to transform the research outputs into eventual ultimate outcomes. The customers who play a role in the transformation include international, national, state, and local entities and tribes; nongovernment organizations; the scientific and technical communities; business and industry; first responders; decision-makers; and the general public. The customers take their own implementation actions, which are integrated with political, economic, and social forces.

The use of the category of intermediate outcome does not require substantial change in how EPA plans and evaluates its research. The strategic plan of ORD, for example, already defines the office's mission as to "conduct leading-edge research" and to "foster the sound use of science" (EPA 2001). Those lead naturally into two categories of intermediate outcome: intermediate outcomes from research and intermediate outcomes from users of research.

EPA's and ORD's strategic planning architecture fits into the logic diagram as follows: the ellipse under "Research Outputs" contains the annual performance metrics and the annual performance goals (EPA 2007b), the arrow under "Intermediate Outcomes from Research" contains sub-long-term goals, the ellipse under "Intermediate Outcomes from Users of Research" contains the long-term goals (EPA 2007b), and the box under "Ultimate Outcomes" contains EPA's overall mission (EPA 2006).

[FIGURE 4-1 EPA research presented as a logic model. Source: Modified from NRC 2007. The figure shows a left-to-right flow: Inputs (process inputs: budget, staff, training, facilities; planning inputs: stakeholders, program staff, program and regional offices, state and local counterparts, monitoring data, peer review) → Research Activities (intramural and extramural: monitoring, epidemiologic, physical, and toxicologic studies, laboratory and field studies, exposure measurements, risk assessments, expert review) → Research Outputs (recommendations, reports, publications, guidance, workshops, databases, conferences, tools and methods, papers, developmental technologies) → Intermediate Outcomes from Research (improved body of knowledge available for decision-making; new tools and models disseminated; application-ready technology provided to national, state, and local regulatory offices) → Intermediate Outcomes from Users of Research (customers such as EPA program offices and other federal agencies, state and local governments, nongovernmental organizations, tribes, the science and technical community, business and industry, first responders, decision-makers, and the public transform outputs into regulations, standards, technologies, integrated science assessments, education, and best practices) → Ultimate Outcomes (to protect human health and the environment via EPA's goals: Clean Air and Addressing Global Climate Change; Clean and Safe Water; Land Preservation and Restoration; Healthy Communities and Ecosystems; Compliance and Environmental Stewardship).]
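Because Figure 4-1 is essentially a typed pipeline, its categories can be made concrete in a few lines of code. The following Python sketch is purely illustrative (the class names, program name, and example results are invented for this report's categories, not EPA software); it encodes the figure's stages and lets an evaluator retrieve all results recorded at a given stage:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    """Stages of the logic model in Figure 4-1."""
    INPUT = "input"
    ACTIVITY = "research activity"
    OUTPUT = "output"
    INTERMEDIATE_RESEARCH = "intermediate outcome from research"
    INTERMEDIATE_CUSTOMER = "intermediate outcome from users of research"
    ULTIMATE = "ultimate outcome"

@dataclass
class Result:
    description: str
    stage: Stage
    customers: list = field(default_factory=list)  # who transforms or uses the result

@dataclass
class ResearchProgram:
    name: str
    results: list = field(default_factory=list)

    def by_stage(self, stage):
        """Return every recorded result at the given logic-model stage."""
        return [r for r in self.results if r.stage is stage]

# Invented example entries, following the categories in the chapter text.
program = ResearchProgram("hypothetical-air-research-program")
program.results += [
    Result("Peer-reviewed publication on exposure measurements", Stage.OUTPUT),
    Result("Improved body of knowledge available for decision-making",
           Stage.INTERMEDIATE_RESEARCH),
    Result("Air-quality standard informed by the research",
           Stage.INTERMEDIATE_CUSTOMER, customers=["EPA program offices", "states"]),
]

for result in program.by_stage(Stage.INTERMEDIATE_CUSTOMER):
    print(f"{result.stage.value}: {result.description}")
```

Nothing here is specific to EPA; the point of the sketch is that "intermediate outcome" becomes an explicit label attached to each result rather than an afterthought of the review.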

In general, ultimate outcomes are evaluated at the level of the mission, intermediate outcomes at the level of multi-year plans, and outputs at the level of milestones.

Specific examples of outputs, intermediate outcomes, and ultimate outcomes taken from the Ecological Research Multi-Year Plan (EPA 2003; p. 14 of that plan shows a logic diagram of how all the sub-long-term goals connect to feed into the long-term goal) fit into the framework as follows:

• Outputs: a draft report on ecologic condition of western states, and the baseline ecologic condition of western streams determined.

• Intermediate outcome from research: a monitoring framework is available for streams and rivers in the western United States that can be used from the local to the national level for statistical assessments of condition and change.

• Intermediate outcome from customers: the states and tribes use a common monitoring design and appropriate ecologic indicators to determine the status and trends of ecologic resources.

• Ultimate outcomes: critical ecosystems are protected and restored (EPA objective), healthy communities and ecosystems are maintained (EPA goal), and human health and the environment are protected (EPA mission).

Similar logic models might be drawn from EPA's other multi-year plans, including water-quality monitoring and risk-assessment protocols for protecting children from pesticides.

The use of the model can have several benefits. First, it can help to generate understanding of whether and how specific programs transform the results of research into benefits for society. The benefits—for example, an identifiable improvement in human health—may take time to appear because they depend on events or trends beyond EPA's influence. The value of a logic model is to help to see important intermediate points in development that allow for evaluation and, when necessary, changes of course.

Second, the model can help to "bridge the gap" between outputs and ultimate outcomes. For a project that aims to improve human health through research, for example, there are too many steps and too much time between the research and the ultimate outcomes to permit annual evaluation of the progress or efficiency of a program. The use of intermediate outcomes can add results that are key steps in its progress, as the sketch below illustrates.
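As a purely hypothetical sketch (the plan entries, years, and completion flags below are invented for illustration and loosely echo the western-streams example above; they are not taken from an actual EPA plan), intermediate results can be recorded as dated, trackable items and rolled up in an annual review:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlannedResult:
    year: int          # year the result is planned to be in hand
    description: str
    achieved: bool = False

# Invented entries; a real multi-year plan would supply the items and dates.
plan = [
    PlannedResult(2005, "Draft report on ecologic condition of western states", True),
    PlannedResult(2006, "Baseline ecologic condition of western streams determined", True),
    PlannedResult(2007, "Monitoring framework usable from local to national scales", False),
]

def annual_review(plan, year):
    """Fraction of results due by `year` that were achieved, plus the gaps."""
    due = [p for p in plan if p.year <= year]
    achieved = sum(1 for p in due if p.achieved)
    rate = achieved / len(due) if due else 1.0
    gaps = [p.description for p in due if not p.achieved]
    return rate, gaps

rate, gaps = annual_review(plan, 2007)
print(f"On-track fraction through 2007: {rate:.0%}")
for item in gaps:
    print("Not yet achieved:", item)
```

The point is not the code but the bookkeeping: once intermediate outcomes are explicit, an annual review has concrete items to count even when ultimate outcomes remain years away.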

The use of intermediate outcomes can also give a clearer view of the value of negative results. Such results might seem "ineffective and inefficient" to an evaluator, perhaps on the grounds that the project produced no useful practice or product. Making use of intermediate outcomes in the reviewing process, however, may clarify that a negative result is actually "effective and efficient" if it prevents wasted effort by closing an unproductive line of pursuit.

Intermediate outcomes are already suggested by the section of the 2007 PART guidance entitled "Categories of Performance Measures" (OMB 2007, p. 9). The guidance acknowledges the difficulty of using ultimate outcomes to measure efficiency and proposes the use of proxies when difficulties arise, as in the following example:

    Programs that cannot define a quantifiable outcome measure—such as programs that focus on process-oriented activities (e.g., data collection, administrative duties or survey work)—may adopt a "proxy" outcome measure. For example, the outcomes of a program that supplies forecasts through a tornado warning system could be the number of lives saved and property damage averted. However, given the difficulty of measuring those outcomes and the necessity of effectively warning people in time to react, prepare, and respond to save lives and property, the number of minutes between the tornado warning issuance and appearance of the tornado is an acceptable proxy outcome measure.
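The arithmetic behind such a proxy is deliberately simple, which is what makes it usable for annual reporting. A minimal sketch, using made-up timestamps rather than real warning data, of how the lead-time proxy might be computed:

```python
from datetime import datetime

# Invented (warning issued, tornado appeared) timestamp pairs; the proxy
# outcome measure is the lead time in minutes, per the OMB example above.
events = [
    (datetime(2007, 5, 3, 14, 2), datetime(2007, 5, 3, 14, 15)),
    (datetime(2007, 6, 9, 17, 40), datetime(2007, 6, 9, 17, 51)),
    (datetime(2007, 8, 21, 9, 5), datetime(2007, 8, 21, 9, 22)),
]

lead_minutes = [(tornado - warning).total_seconds() / 60
                for warning, tornado in events]
average_lead = sum(lead_minutes) / len(lead_minutes)
print(f"Average warning lead time: {average_lead:.1f} minutes")
```

Trending that single number year over year is far more tractable than measuring lives saved, which is exactly the substitution the PART guidance endorses.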

Identification of intermediate steps brings into the PART process an important family of existing results that may lend themselves to qualitative and sometimes quantitative assessment, which can provide useful new data points for reviewers. The terms in which those steps are described depend on the agency, its mission, and the nature and scope of its work.

SUMMARY

Although the task of reviewing research programs is complicated by the limitations of ultimate-outcome-based metrics, the committee suggests as a partial remedy the use of additional results that might be termed intermediate outcomes. This class of results, intermediate between outputs and ultimate outcomes, could enhance the evaluation process by adding trackable items and a larger body of knowledge for decision-making. The additional data points could make it easier for EPA and other agencies to see whether they are meeting the goals they have set for themselves, how well a program supports strategic and multi-year plans, and whether changes in course are appropriate. Using this class of results might also improve the ability to track progress annually.

REFERENCES

EPA (U.S. Environmental Protection Agency). 2001. Strategic Plan. EPA/600/R-01/003. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. January 2001 [online]. Available: http://www.epa.gov/osp/strtplan/documents/final.pdf [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2003. Sub-long-term goals, annual performance goals and annual performance measures for each long-term goal. Appendix 1 of the Ecological Research Multi-Year Plan. Office of Research and Development, U.S. Environmental Protection Agency. May 29, 2003 Final Version [online]. Available: http://www.epa.gov/osp/myp/eco.pdf [accessed Nov. 1, 2007].

EPA (U.S. Environmental Protection Agency). 2006. EPA Strategic Plan 2006-2011: Charting Our Course. U.S. Environmental Protection Agency. September 30, 2006 [online]. Available: http://www.epa.gov/cfo/plan/2006/entire_report.pdf [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007a. Research Programs. Office of Research and Development, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/ord/htm/researchstrategies.htm [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007b. Research Directions: Multi-Year Plans. Office of Science Policy, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/osp/myp.htm [accessed Nov. 13, 2007].

OMB (Office of Management and Budget). 2007. Guide to the Program Assessment Rating Tool (PART). Office of Management and Budget. January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].

NRC (National Research Council). 2007. Framework for the Review of Research Programs of the National Institute for Occupational Safety and Health. Aug. 10, 2007.

