. "7 Identification of Issues for General Discussion." Research Restructuring and Assessment: Can We Apply the Corporate Experience to Government Agencies?. Washington, DC: The National Academies Press, 1995.
RESEARCH RESTRUCTURING AND ASSESSMENT: Can We Apply the Corporate Experience to Government Agencies?
To answer this question, it is necessary to know who the customers are. We have come back to that theme a number of times already. Our corporate speakers have told us that if there are no customers, there is no value. You cannot tell whether research is working if you do not know who it is working for—if you cannot actually interact with those people and get their judgments about it. For that purpose, the notion of the U.S. taxpayers as a set of customers, while it is certainly accurate, may not be very helpful operationally. It may be that every program at NSF (and we have 180 of them) has a slightly different set of customers. The programs themselves probably know who their customers are, but they have to interact with them somehow in the process. If the programs do not know who their customers are, then one of the issues that NSF may have to face is how to find its customers. Where are they?
The next operational question is what role customers should play in judging the results of NSF programs. The requirements of the GPRA are symptomatic of a general change in the way government operates—a shift toward results-oriented management. The GPRA, which was passed in 1993, does not come into full effect as a law until the end of the decade. The Office of Management and Budget (OMB), however, has moved up the timetable on it, in part because it reflects the administration's philosophy, and in part because OMB knows that we have to experiment with the performance indicator process quite a bit to make it effective.
There is general pressure from many directions to start focusing on results of programs rather than on inputs. NSF has traditionally, of course, been an agency that has done a meticulous job of input selection. It devotes major resources to this. Now we have to start thinking about what comes out on the other end of the grants. That is what the performance movement in government is calling on us to do.
GPRA distinguishes between outputs and outcomes. Outputs are the activities that go on under a program. These are immediate, tangible things that you can see being produced as a result of program activities. Outcomes are things that happen over much longer periods of time. Most of the payoffs for the U.S. taxpayer from NSF programs are in the outcomes category. The results that we will be able to track easily and count, if they are even worth counting, will largely be outputs. In the organizations that have to deal with these performance plans, there is a broad recognition that most agencies are going to report outputs in their performance indicators on an annual basis. They are going to learn about outcomes in other ways. For instance, at NSF we can learn about outcomes through program evaluation, rather than through annual performance indicators. By program evaluation, I mean a much more in-depth look, a process that can be much more sophisticated, that takes all kinds of elements into account other than just numerical indicators, and that looks at what the programs are actually producing. We also can provide anecdotal evidence in the form of lists of accomplishments.
In short, we are going through a significant learning process in the government in the shift toward results-oriented management. The smart organizations are embracing that movement. They are moving aggressively into the learning process, ahead of the curve. I am pleased that NSF is one of those organizations. I see some people at other agencies who are saying that they can just put up any set of numbers and it will not make any difference in the end, that they can "stonewall" anything. That approach is not going to work by the year 2000. I am sure of that.
What does GPRA actually require NSF to deliver and to whom? It requires us to deliver three things: a strategic plan, performance plans, and performance reports. Those will be delivered first to OMB, at which point they are incorporated into government-wide versions, and then to Congress. So these are documents and indicators that will go to very important and influential people in both the executive and the legislative branches of government.
Let me talk about the strategic plan element first. What was discussed this morning was in some ways more relevant to the NSF strategic plan than to performance indicators. We actually have very little guidance from anywhere—from GPRA, OMB, or anywhere else—on what the GPRA strategic plan needs to look like. We have been told only that it needs to be consistent with administration policy and that it needs to be prepared in consultation with Congress. There are no specifications at this point, however, for what level of consultation with Congress is required. In a sense, we have an open slate to write on for this plan, but it is supposed to reflect the desires of our customers for what NSF is producing.
So the question becomes, again, who are we going to treat as the customers in this process? Should we take OMB and Congress as proxies for the customers? Legally, that is their role, but they actually do not want us to do that. They want to hear from us about other customers that we are serving. They do not consider themselves to be our customers per se. Are we going to treat ourselves the way the Research Councils in the United Kingdom are now being asked to treat themselves? The U.K. basic research system has moved to a very user-oriented approach. The Research Councils are being told that they are proxies for the public as customers, and that they are procuring