EXECUTIVE SUMMARY

The Government Performance and Results Act (GPRA), enacted by Congress in 1993, requires that all federal agencies evaluate and report annually on the results of their activities. Evaluating federal research programs in response to GPRA is challenging because we do not know how to measure knowledge while it is being generated, and its practical use might not occur, and cannot be predicted, until many years after the research is conducted. For example, today's global positioning system is the result of research conducted 50 years ago in atomic physics. In 1999, the National Academies Committee on Science, Engineering, and Public Policy (COSEPUP) addressed this issue for research programs in its report Evaluating Federal Research Programs: Research and the Government Performance and Results Act. That report indicated that federal research programs could be evaluated by a process it called expert review, which makes use of three evaluation criteria: quality, relevance, and leadership. Expert review is more than traditional peer review by scholars in the field; it also includes the users of the research, whether they are in industry, nongovernment organizations, or public-health organizations or are other members of the public who can evaluate the relevance of the research to agency goals. This follow-up report, by the COSEPUP Panel on Research and the Government Performance and Results Act 2000, describes the panel's analysis of how federal agencies that support science and engineering research are responding to GPRA. The panel decided to focus its work on the five agencies that provide the
majority of federal funding for research: National Science Foundation (NSF), National Institutes of Health (NIH), Department of Defense (DOD), Department of Energy (DOE), and National Aeronautics and Space Administration (NASA). As it began its examination of the strategic and performance plans and reports of these agencies, the panel found that, given the preliminary state of the agencies' approaches to GPRA for their research programs and the different organization and methodology of each, it could conduct only a "snapshot" of each agency's approach. Accordingly, only general, not agency-specific, conclusions and recommendations were appropriate at this time. After a series of focus groups, a workshop, and numerous other communications with agency representatives and oversight bodies (primarily Congress's General Accounting Office and the White House Office of Management and Budget), the panel reached the following 10 conclusions:

Conclusion 1: All five agencies have made a good-faith effort to develop reporting procedures that comply with the requirements of GPRA. Some agencies stated that GPRA compliance has added substantially to the cost of their planning and evaluation activities in the form of staff time and resources. Others report that they have been able to integrate GPRA with their traditional budget and planning processes, although at some cost of time and effort.

Conclusion 2: Some agencies are using the GPRA process to improve their operations. These agencies report benefits in strengthening program management and in enhancing communication about their programs to the users of research and the general public. The need to do so depends on the goals of the agency and the degree to which there is concern about a given field of research or about new and emerging programs. A few agencies
found that GPRA requirements added to their reporting workload and are still struggling to adapt to those requirements.

Conclusion 3: The most effective technique for evaluating research programs is review by panels of experts using the criteria of quality, relevance, and, when appropriate, leadership. Agency approaches to GPRA for research programs demonstrate the utility of expert review using the same criteria of quality and relevance outlined in COSEPUP's original report. The international-leadership criterion is generally not evaluated by most federal agencies at this time, although several are interested in such a measure. Given the diversity in mission, complexity, culture, and structure of the federal agencies that support research, however, it is not surprising that their approaches to GPRA have varied. One size definitely does not fit all.

Conclusion 4: Oversight bodies and some agencies need clearer procedures to validate and verify agency evaluations. In particular, oversight bodies expressed a desire for a better understanding of the methodology and results of expert-review evaluations.

Conclusion 5: Agencies choose to aggregate their research programs at different levels. Some agencies provide evaluations on a field-specific or program-specific basis; others do so for the research program in its entirety. Aggregating at a high level can make it difficult for oversight bodies to see and understand clearly the methods and programs that are the focus of the analyses.

Conclusion 6: The development of human resources as an agency objective sometimes does not receive explicit emphasis or visibility in GPRA plans and reports.
When this objective is explicit, it affirms the value of educating young scientists and engineers by involving them in the research programs of their advisers. In addition, such an explicit linkage between research and education makes it easy to show how reductions in research funding can jeopardize the preparation of the scientists and engineers that the nation will need in the future.

Conclusion 7: Agencies often receive conflicting messages from oversight bodies about the desired format, content, and procedures to be used in GPRA compliance. For example, one agency made an effort to tie its GPRA reports more closely to its annual budget, as required by the act, only to be told by a congressional committee to return to a previously used format; another was told the reverse.

Conclusion 8: Because of timing requirements built into the legal guidelines of GPRA, agencies find that they must begin work on performance plans before the relevant performance reports are complete. As a result, the potential benefit of GPRA in providing a mechanism for incorporating the performance results of previous years into the performance plans of later years is limited. A longer performance schedule (say, 3 years) would probably provide sufficient time in most cases.

Conclusion 9: Communication between agencies and oversight groups is not sufficiently regular, extensive, or collaborative. During the focus groups, the workshop, and interviews, it was consistently clear that improved communication between these two sectors could reduce the difficulties and misunderstandings experienced by some agencies.

Conclusion 10: The degree to which the results of GPRA reporting of research programs are being used
by oversight groups for programmatic decision-making is not clear. In particular, agencies have not yet seen their reports used in the congressional decision-making that determines the size and priorities of their budgets.

On the basis of these observations, the panel offers specific recommendations in Chapters 2 and 3. They can be summarized in the following four general recommendations:

Recommendation 1: Federally supported programs of basic and applied research should be evaluated regularly through expert review, using the performance indicators of quality, relevance, and, where appropriate, leadership.

Recommendation 2: Agencies should continue to improve their methods of GPRA compliance and to work toward the goals of greater transparency, more-realistic reporting schedules, clear validation and verification of methods, and the explicit use of human-resources development as an indicator in performance plans and reports.

Recommendation 3: Agencies and oversight bodies should work together as needed to help agencies integrate their GPRA requirements with their internal planning, budgeting, and reporting processes. In addition, they should work together to adjust the timing of GPRA reporting to capitalize on the value of the planning process.

Recommendation 4: Agencies should strive for effective communication with oversight groups on the implementation of GPRA. For their part, oversight
bodies should clarify their expectations and meet more often among themselves to coordinate their messages to agencies.

Much has been learned about the procedures of planning, evaluation, and management in the last several years, and the agencies have gained some value from their own discussions of accountability. However, one key remaining question is the degree to which oversight groups are using the results of the "results act" for programmatic decision-making. Unless the agency responses to GPRA are useful to Congress in the urgent task of setting priorities and budgeting, the value of the act might not warrant the time and effort that it requires of the federal government. But by working more closely than they have in the past, the federal agencies and the oversight bodies can implement the letter and the spirit of GPRA in ways that lead to greater efficiency, lower cost, and more-effective research programs that are demonstrably conducted in the national interest.