
Research Restructuring and Assessment: Can We Apply the Corporate Experience to Government Agencies? (1995)


7

Identification of Issues for General Discussion

Susan E. Cozzens

The NSF Office of Policy Support has the task of trying to facilitate the process of developing the performance indicators and performance plans required under the Government Performance and Results Act (GPRA). I am therefore right in the middle of the machinery that is actually going to produce these numbers, and in my remarks I am going to try to draw you into that machinery, too. Obviously, one item that we must extract from this morning's discussion is the set of analogies (or lack of them) between, on the one hand, the corporate research department, the larger corporation, and its customers and, on the other hand, the National Science Foundation and the nation. The core shared elements are the excitement about science, the technical content, and the emphasis on excellence; without them, neither organization would be in business. In both settings, scientific excitement is the part of the enterprise that no one thinks can be quantified—none of us believes that we can capture excitement, quality, or excellence in a set of numbers. As I talk to people at the agencies that support research, there seems to be a broad consensus on that point in the federal government, similar to the consensus we have heard from the industrial speakers. John McTague, for example, said that all the things you can measure are trivial. One could put that statement in a petition and get virtually everyone in the research agencies in Washington to sign it.

The main ambiguity in our analogy has to do with the “relevance of relevance” in research. The message from the speakers this morning is that relevance is the key to value. If performance indicators are supposed to indicate value, and if we follow the model proposed by industry, relevance is going to have to come in somewhere. The firms we have heard about know what business they are in, and the research organizations within them have had to treat the business divisions as customers. They have to provide service to those divisions and maintain an interactive relationship with them around business goals. This whole area is less clear for NSF, as has been pointed out. The National Science Foundation serves the nation, but the managers of that business keep changing. It is possible that every two or four years we will have a new set of managers, with a new vision of what business government is in. If the country cannot conduct good strategic planning, if the country does not know exactly what businesses it is in, then I do not know whether NSF can decide what businesses it is going to be in, either. At least, we have to cope with that ambiguity.

Another issue has been raised: even if we knew what businesses the country was in, should NSF care? John Seely Brown proposed that the National Science Foundation has gone from the grounded end of one of his dimensions to the open-loop end. Instead of having researchers who are steeped in real-world problems, Brown suggested, NSF has moved into the business of having researchers who are doing something else, disconnected from those problems. He raises the question of whether NSF is now moving back toward the grounded end of his “relevance” dimension. If so, is the Foundation experiencing the same kind of trauma that several of our speakers have described in reconnecting to the business?


To answer this question, it is necessary to know who the customers are. We have come back to that theme a number of times already. Our corporate speakers have told us that if there are no customers, there is no value. You cannot tell whether research is working if you do not know who it is working for—if you cannot actually interact with those people and get their judgments about it. For that purpose, the notion of the U.S. taxpayers as a set of customers, while it is certainly accurate, may not be very helpful operationally. It may be that every program at NSF (and we have 180 of them) has a slightly different set of customers. The programs themselves probably know who their customers are, but they have to interact with them somehow in the process. If the programs do not know who their customers are, then one of the issues that NSF may have to face is how to find its customers. Where are they?

The next operational question is what role customers should play in judging the results of NSF programs. The requirements of the GPRA are symptomatic of a general change in the way government operates—a shift toward results-oriented management. The GPRA, which was passed in 1993, does not come into full effect as a law until the end of the decade. The Office of Management and Budget (OMB), however, has moved up the timetable on it, in part because it reflects the administration's philosophy, and in part because OMB knows that we have to experiment with the performance indicator process quite a bit to make it effective.

There is general pressure from many directions to start focusing on results of programs rather than on inputs. NSF has traditionally, of course, been an agency that has done a meticulous job of input selection. It devotes major resources to this. Now we have to start thinking about what comes out on the other end of the grants. That is what the performance movement in government is calling on us to do.

GPRA distinguishes between outputs and outcomes. Outputs are the activities that go on under a program. These are immediate, tangible things that you can see being produced as a result of program activities. Outcomes are things that happen over much longer periods of time. Most of the payoffs for the U.S. taxpayer from NSF programs are in the outcomes category. The results that we will be able to track easily and count, if they are even worth counting, will largely be outputs. In the organizations that have to deal with these performance plans, there is a broad recognition that most agencies are going to report outputs in their performance indicators on an annual basis. They are going to learn about outcomes in other ways. For instance, at NSF we can learn about outcomes through program evaluation, rather than through annual performance indicators. By program evaluation, I mean a much more in-depth look, a process that can be much more sophisticated, that takes all kinds of elements into account other than just numerical indicators, and that looks at what the programs are actually producing. We also can provide anecdotal evidence in the form of lists of accomplishments.
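As a toy illustration of the bookkeeping distinction, here is a minimal sketch assuming invented categories and counts; none of this is drawn from an actual NSF report:

```python
# Outputs: immediate, countable products of one year of program activity.
outputs = {
    "publications": 120,
    "graduate_students_supported": 45,
    "workshops_held": 6,
}

# Outcomes: longer-term results that annual indicators cannot capture;
# these are learned about through in-depth program evaluation instead.
outcomes = [
    "new research subfield established",
    "measurement technique adopted by industry",
]

print(f"Annual output indicators: {outputs}")
print(f"Outcome evidence (via evaluation, anecdote): {outcomes}")
```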

In short, government is going through a significant learning process in the shift toward results-oriented management. The smart organizations are embracing that movement. They are moving aggressively into the learning process, ahead of the curve. I am pleased that NSF is one of those organizations. I see some people at other agencies who are saying that they can just put up any set of numbers and it will not make any difference in the end, that they can “stonewall” anything. That approach is not going to work by the year 2000. I am sure of that.

What does GPRA actually require NSF to deliver and to whom? It requires us to deliver three things: a strategic plan, performance plans, and performance reports. Those will be delivered first to OMB, at which point they are incorporated into government-wide versions, and then to Congress. So these are documents and indicators that will go to very important and influential people in both the executive and the legislative branches of government.

Let me talk about the strategic plan element first. What was discussed this morning was in some ways more relevant to the NSF strategic plan than to performance indicators. We actually have very little guidance—from GPRA, OMB, or anywhere else—on what the GPRA strategic plan needs to look like. We have been told only that it needs to be consistent with administration policy and that it needs to be prepared in consultation with Congress. There are no specifications at this point, however, for what level of consultation with Congress is required. In a sense, we have a blank slate to write on for this plan, but it is supposed to reflect the desires of our customers for what NSF is producing.

So the question becomes, again, who are we going to treat as the customers in this process? Should we take OMB and Congress as proxies for the customers? Legally, that is their role, but they actually do not want us to do that. They want to hear from us about other customers that we are serving. They do not consider themselves to be our customers per se. Are we going to treat ourselves the way the Research Councils in the United Kingdom are now being asked to treat themselves? The U.K. basic research system has moved to a very user-oriented approach. The Research Councils are being told that they are proxies for the public as customers, and that they are procuring research from the scientific and engineering communities on behalf of the public. Is that the concept we are going to adopt, or are we going to work with the more complicated one that I just heard? Are we going to be a coupling mechanism between the taxpayers and the research community?

We have heard several points in the discussion this morning that are relevant to developing the strategic plan we have to provide in two years. One is that you need to change the language around research to orient it toward customers. A strategic planning process, in essence, is a discussion. It is discourse, and it can change the language. We also have heard that measures do not work without credibility. Credibility is obviously built up in an interactive process with the people that you are trying to serve. For instance, it has been argued this morning that in the strategic planning process of the National Institutes of Health, it was the research community that killed the plan. What killed it was not the planning effort per se; it was the unwillingness of the community to participate in it.

The second thing that we have to deliver under GPRA is performance plans. These must be produced annually, starting with fiscal year 1999, and therefore they can be changed annually. This is an important point. If we develop a set of indicators, we do not need to keep them forever. Indicators can be dropped or added. Also, even if the indicators stay the same, the performance goals, which are the heart of the plan, can be changed from year to year. The act requires that the goals be expressed in quantified, objective, and measurable terms. Although there is another option available for things that absolutely cannot be quantified, there are very strict requirements for being able to verify whether you have met those goals. It may actually be harder to respond completely under that alternative option, because of the way GPRA is written, than to use a mix of quantitative and descriptive kinds of performance reports. This mix of quantitative and descriptive elements in performance reports seems to be the mode that research agencies, at least, are already moving toward strongly and are likely to recommend to OMB.

We have heard this morning about at least three different types of performance goals that industrial firms have been using and that could be used in this process. One, used by Ford, relies on quantitative balance among activities of various sorts. This type of performance goal would probably work in the GPRA process, but in the NSF context, it would be an activity indicator. We would be saying, “We are going to put our activities in various categories, and we want a certain balance among them.” This goal is not really focused on results.

A second kind of goal would be reflected in a citation impact indicator like the one John Seely Brown showed. For instance, one could set a goal that NSF's publications in the aggregate are expected to be 25 percent above the average for the journals in which they appear. We could actually collect data on that and report on a quantitative goal.
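To make the arithmetic behind such a goal concrete, here is a minimal sketch of the computation, assuming hypothetical citation counts and journal averages; none of these figures come from NSF data:

```python
# Hypothetical data: citations received by each supported paper, paired
# with the average citations per paper of the journal it appeared in.
publications = [
    {"citations": 12, "journal_avg": 8.0},
    {"citations": 30, "journal_avg": 20.0},
    {"citations": 5, "journal_avg": 6.0},
]

total_citations = sum(p["citations"] for p in publications)
expected_citations = sum(p["journal_avg"] for p in publications)

# Aggregate relative impact: 1.0 means "at the journal average";
# the goal described above corresponds to a ratio of at least 1.25.
relative_impact = total_citations / expected_citations
print(f"Relative citation impact: {relative_impact:.2f}; "
      f"goal met: {relative_impact >= 1.25}")
```

One design choice worth noting: this sketch aggregates as a ratio of sums rather than as an average of per-paper ratios; the two can differ, and either could serve as the reported indicator.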

A third example involved customer satisfaction measures. Other research agencies already use such measures. The question of who the customers are and what role they are going to play in the process becomes very important when such measures are proposed. The Army Research Laboratory (ARL), for example, has adopted a corporate model very much like the one that we heard described this morning. It has gone through the various steps of strategic planning, and it has adopted a set of performance measures. One of the key ones is customer satisfaction, on a project-by-project basis. The ARL model is under consideration by an informal interagency group as a standard to be recommended to OMB for performance indicators in research agencies. As I have pointed out, NSF has more difficulty in defining its customers than the ARL, but customer satisfaction is certainly quantifiable. Like customer satisfaction, scientific quality can be put on a scale and fed in a quantitative form into the GPRA reporting process, but it would be expensive to do this for all of the research systems.
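As a rough illustration of how project-by-project satisfaction ratings might roll up into the kind of broad, agency-level number GPRA reporting favors, here is a minimal sketch; the projects, ratings, and 1-to-5 scale are all invented for the example, not taken from the ARL's actual instrument:

```python
# Hypothetical customer satisfaction ratings, one list per project,
# on a 1-to-5 scale (5 = fully satisfied).
project_ratings = {
    "project_a": [5, 4, 4],
    "project_b": [3, 4],
    "project_c": [5, 5, 4, 3],
}

# Project-level means for internal management purposes ...
project_means = {name: sum(r) / len(r) for name, r in project_ratings.items()}

# ... and a single agency-wide aggregate for external reporting.
all_ratings = [r for ratings in project_ratings.values() for r in ratings]
agency_mean = sum(all_ratings) / len(all_ratings)

print(project_means)
print(f"Agency-wide mean satisfaction: {agency_mean:.2f} on a 5-point scale")
```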

One question that arises, then, is how much the resulting information is worth. Is it worth going through the whole GPRA process just to turn satisfaction or scientific quality into a set of numbers? To give you an example: we have a GPRA pilot project on research facilities at NSF. We have developed a set of performance indicators for those facilities through a very successful interactive process with the facilities' managers. The facilities currently have site visits at least every five years, although we conduct renewal reviews more often than that. One could ask site visit teams, at no additional cost to the government, to produce ratings on scales so that we could report those in the GPRA process. But this rating would not add any information for our internal management purposes, because we have very good site visit reports already. Whether to ask reviewers to produce such ratings is part of the quantification issue.

The foregoing also draws attention to something else I want to mention. Whatever quantitative measures are reported under GPRA also should be useful to the organizations that are doing the reporting. They should be useful for management purposes within NSF, in part because we are going to have to bear the costs of gathering those data. GPRA is an unfunded mandate. Congress provided no additional money to agencies to produce their GPRA responses. So, ultimately, the performance measures have to be worth their cost to us, as well as to people outside the agency.

In that regard, it is important to mention that GPRA requires us to report only on very broad aggregates. We do not have to add all the measures up from the project level, and they do not have to be reported at the project level. They do not even have to be reported at the directorate level. We are looking at various choices now, but whatever we choose will be broad aggregate numbers that focus attention on the agency as a whole.

From this discussion of what performance goals and plans might look like, you should have some idea of what a performance report might contain. I also mentioned that we are required in our strategic plan to describe how we are using program evaluations—more in-depth looks at programs—along with the indicators, to learn about our performance and to communicate it externally.

Let me end, then, with what some people would consider the unpleasant aspect of the GPRA scenario, but which was in fact described this morning as the way these processes are actually used within firms: performance-based budgeting. Performance-based budgeting is the specter at the other end of this process. Such an approach uses performance measures in an algorithm to produce resource allocations. We heard from some of our speakers this morning that their pay and their units' resources are based on some of the performance measures used in their firms. GPRA does not actually state that the government will move to performance-based budgeting. OMB has affirmed recently that this exercise is not about performance-based budgeting; it is about results-oriented management.

GPRA, however, does call for a set of pilot projects in this regard. The pilots would test whether we can come up with sensible information indicating how much more of your result you could produce with an increment of X amount in your budget, or, alternatively, how much of your result the nation would lose with a decrement of X amount. That is what is meant by performance-based budgeting in GPRA. Under present circumstances, those budget scenarios are realistic. One can see such a scenario working its way through OMB and Congress much better at this time than the question of whether doubling the NSF budget would be a good thing, or whether some particular number is the right level. The right level is not absolute. The present-day question is what the country would gain or lose with increments or decrements of specific sizes. That question is very difficult to answer until you have some quantitative totals for outputs and outcomes.
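Purely as a sketch of the arithmetic such a pilot would call for, here is the simplest possible version, assuming a naive linear output-per-dollar model; the linearity and every number are hypothetical, since estimating real marginal returns is exactly what the pilots would have to work out:

```python
# Hypothetical baseline for one program: budget (in millions of dollars)
# and an annual count of some output, e.g., supported publications.
BASELINE_BUDGET = 100.0
BASELINE_OUTPUT = 2000

def projected_output(budget: float) -> float:
    """Naive linear model: outputs scale proportionally with budget."""
    return BASELINE_OUTPUT * (budget / BASELINE_BUDGET)

# Increment and decrement scenarios of $10 million each.
for delta in (+10.0, -10.0):
    change = projected_output(BASELINE_BUDGET + delta) - BASELINE_OUTPUT
    print(f"Budget change {delta:+.0f}M -> projected output change {change:+.0f}")
```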
