The general principles of GPRA have been implemented by many state governments and in other countries (for example, Canada, New Zealand, and the U.K.), but implementation by the U.S. federal government is the largest-scale application of the concept to date and differs in important respects. Over the last five years, various states have tried to develop performance measures for their investments. With respect to performance measures of science and technology activities, states tend to rely on an economic-development perspective, with measures reflecting job creation and commercialization. Managers struggle to define appropriate measures, and level-of-activity measures dominate their assessments.3 With respect to other countries, our limited review of their experiences showed that most are struggling with the same issues that concern the United States, notably how to measure the results of basic research.
Not every aspect of the system worked perfectly the first time around in the United States. Some agencies started the learning process earlier and scaled up faster than others. OMB allowed considerable agency experimentation with different approaches to similar activities, waiting to see what ideas emerged. The expectations of, and thus the guidance from, the various congressional and executive audiences for strategic and performance plans have not always been consistent, which has made it difficult for agencies to develop plans agreeable to all parties. Groups outside government that are likely to be interested in agency implementation of GPRA have not been consulted as extensively as envisioned. There is general agreement that all relevant parties should be engaged in a continuing learning process, and there are high expectations for improvement in future iterations.
The development of plans to implement GPRA has been particularly difficult for agencies responsible for research activities supported by the federal government. A report by GAO (GAO, 1997) indicates that measuring performance and results is particu-