5
Evaluation as a Framework for Management

The evaluation of government research and information programs in support of public policy can be undertaken at different levels. At the highest level, evaluation involves consideration of the broad objectives of public policy. Consistent with the objectives of this report, our goal here is much more modest. The agenda of the Economic Research Service (ERS), and of economic policy support research agencies generally, derives from the mandate of the economic policy maker to whom the agency reports. In the case of ERS, policy is made by the secretary of agriculture working with Congress.

This chapter concerns the process of evaluating the services provided by ERS. It does not undertake the evaluation itself for any particular service. In fact, it does not even define the units appropriate for evaluation: the specific functions referred to later in the chapter might be too broad and some of the accomplishments too narrow, and in any event definitions of services can be made only by those with day-to-day responsibility for their organization. Instead, this chapter concentrates on a framework for the retrospective evaluation of each service ERS provides. This framework can also be used prospectively, in deciding how each service should be delivered (Chapter 6), and this, in turn, drives the internal organization of ERS and its placement within USDA (Chapter 7). None of the later development is possible without first knowing what the services are and how they will be evaluated.

Evaluating ERS Programs

General Guidelines for Program Evaluation

Every day individuals and organizations make decisions and choices. Much of this process involves comparison of goods and services—both those that are
available now and those that may become available in the future. The evaluation of a service provided by an organization or individual is the comparison of that service with alternative services that are or could be provided by others. It entails a ranking of the services provided by alternative suppliers. There are no operative absolute standards for providing any service. However, comparison of a service currently provided by a supplier with that provided by a hypothetical supplier can be quite important; for example, this question may be paramount in contemplating whether to discontinue a service. Having determined the best provider of a service, the question of whether that service should be offered at all remains open; evaluation is only the first step in this determination.

The evaluation process therefore requires that the service being evaluated be defined and that alternative providers of the service, perhaps including hypothetical providers, be identified. The relevant comparison of providers of the service is that made by clients, the users and potential users of the service. In an ideal, competitive market for a service, clients compare and rank alternative providers, and each chooses its most favored provider. Food retailing, laundering and dry cleaning, fast food, and a score of other service industries approximate this ideal. Successes and failures of establishments in these industries reflect the evaluations of thousands of clients.

Although a market may reveal the evaluations of clients in the aggregate, even a perfect market does not directly indicate the reasons for clients' rankings. Understanding how clients make comparisons is essential to success in a competitive environment. For example, success in the laundering and dry cleaning business requires that managers understand how clients take into account establishment location, hours of operation, promptness of delivery, quality of service, price, and other characteristics in making comparisons and choices. Thus, if evaluation is to be used to improve the delivery of a service, it is necessary to identify those attributes of the service that underlie comparisons by clients.

Many important service markets are far removed from the competitive ideal. There may be only a few suppliers, information may be hard to acquire, and clients may find it very costly to switch suppliers. The same prerequisites for evaluation operate in these situations; namely, it is necessary to identify:

(1) the service provided,
(2) potential providers of the service,
(3) the clients for the service, and
(4) the attributes of the service that underlie clients' comparisons.

To the extent that a market is not competitive, it may indicate the service, providers, and clients less clearly, and the first three prerequisites may be more difficult to satisfy. In any case, it is critical to concentrate on those attributes that are important to the comparisons that clients make.

A specific example of such an imperfectly competitive service market is postgraduate professional education. Professional schools provide services for at
least two kinds of clients: their prospective students and those who might hire their graduates. Identification of prospective students is not a straightforward task, for this group is broader than the students who actually apply for admission. For students, alternative providers of the service include, of course, other professional schools in the same discipline; but if the service is defined more broadly, alternative providers may include professional schools in other disciplines or, even more generally, other career opportunities.

Determining the attributes of postgraduate professional education that underlie choices requires study of a fairly wide group of prospective clients. Simply asking students, as they approach graduation, to provide numerical scores for attributes of their professional school is nearly meaningless. Such a survey may uncover comparisons of experience with expectations or with vicarious experiences elsewhere, but it does not elicit the attributes that underlie any choice, and comparison of such numerical scores across institutions is of dubious value.

A complete evaluation of an entire program of an organization may be conceived as entries in a four-dimensional matrix of services, potential providers, clients, and the service attributes that underlie clients' comparison of alternative providers.

Evaluation can be more or less formal. More formal evaluations are more costly than less formal ones, but they can provide information of greater strategic value. For example, complaints received by an airline identify attributes of service that are important to clients and are very likely to affect their choice of airlines. But because complaints come only from the carrier's actual clients, the airline is missing an important group of prospective clients: those who are using its competitors' services. A well-designed survey that includes prospective clients in the sampling frame would be more useful to the airline.

As a second example, consider a government agency providing primary data of very high quality to the private sector free of charge. A systematic assessment of clients' comparison of the current data with prospective data of lower quality could inform the appropriate strategic response to a future question of whether to continue the current program, provide lower-quality data at lower cost, or institute user fees to partially or completely cover the additional cost of the higher-quality data.

The concept of evaluation as entries in the four-dimensional matrix of services, providers, clients, and attributes applies to government as well as private-sector programs. When a service is provided by both government and the private sector, there is no important complication if user fees cover the cost of government provision of the service. (Examples involving quasi-governmental organizations include check clearing by the Federal Reserve banks and express package delivery by the U.S. Postal Service.) But this situation is the exception, not the rule. There are very large sectors of the economy, such as education, transportation, and health care, in which government agencies provide or subsidize services that are also provided by the private sector. This is especially the case in the increasingly important information-based sectors of the economy.
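To make the four-dimensional matrix concrete, the sketch below represents each evaluation entry as a score indexed by (service, provider, client, attribute) and ranks alternative providers of one service as one client would. The services, providers, clients, scores, and the additive scoring rule are all hypothetical illustrations, not data or methods from this report.

```python
# A minimal sketch of the four-dimensional evaluation matrix: each
# entry records how one client scores one attribute of one provider's
# version of one service. All names and scores are hypothetical.

from collections import defaultdict

# entries[(service, provider, client, attribute)] = score
entries = {
    ("commodity outlook", "agency", "trade analyst", "timeliness"): 4,
    ("commodity outlook", "agency", "trade analyst", "price"): 5,
    ("commodity outlook", "private vendor", "trade analyst", "timeliness"): 5,
    ("commodity outlook", "private vendor", "trade analyst", "price"): 2,
}

def rank_providers(entries, service, client):
    """Rank alternative providers of one service as one client would,
    here by simply summing that client's attribute scores per provider
    (an assumed aggregation rule, not one prescribed by the chapter)."""
    totals = defaultdict(int)
    for (svc, provider, cli, _attr), score in entries.items():
        if svc == service and cli == client:
            totals[provider] += score
    return sorted(totals, key=totals.get, reverse=True)

print(rank_providers(entries, "commodity outlook", "trade analyst"))
# prints ['agency', 'private vendor']
```

Summing a client's attribute scores is only one possible aggregation rule; the chapter's point is the indexing by service, provider, client, and attribute, not any particular way of combining attributes.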
Typically, government provides primary data and some infrastructure, and the private sector provides secondary data and builds on the infrastructure. Leading examples are censuses and large national surveys, including not only the decennial census but also the quinquennial censuses of manufacturing and agriculture and major household surveys, which form the basis for extensive information gathering in the private sector to support private-sector decision making.

At a conceptual level, the reasons for government involvement in providing information can be traced to the public good arguments set forth in Chapter 2. But at an operational level, determining the demarcation between public and private provision of services requires careful evaluation. Here, the level of user fees charged by government agencies for services complicates the evaluation process. If the private sector also provides the service, then it is important to include price (user fees for the government agency, market price for private suppliers) in the list of attributes. Determining an appropriate user fee for the government service may be difficult, and identifying the hypothetical clientele for this service, if that user fee were charged, is more difficult still.

There are two initial steps that can be carried out, however. The first is ascertaining the cost of providing the government service. Since most government agencies jointly produce many services, this may not be straightforward, but essentially the same problem is faced in cost accounting in private firms, and methods developed there can be applied in government agencies. A second initial step is to elicit clients' and prospective clients' assessments of their demand at different user fee levels.1

The most constructive evaluation processes include hypothetical as well as actual providers of the service. This process is never-ending in any competitive industry and is clearly evident to the casual observer in industries undergoing rapid change. To enter, survive, or thrive, suppliers and potential suppliers are constantly considering and experimenting with new combinations of attributes for services. This is true of successful public-sector markets, for example the market for basic scientific research in the academic sector and government agencies, as well as of successful private-sector markets.

If a government agency is the only provider of a service, then the only alternative suppliers are hypothetical, but the evaluation process is still essential to improving, and even sustaining, the quality of services provided. The principle is taken for granted even in those cases in which it is given that only government should provide services; most notably, every election involves the comparison by the electorate of hypothetical alternative suppliers of services. It should also be taken for granted in the regular and systematic evaluation of services provided by government agencies, whether or not there are private-sector alternatives for these services.

1 A finding of little or no demand at a reasonable user fee does not immediately imply that the service should be discontinued by the government agency. For example, equity considerations might intervene, as discussed in Chapter 2. But the burden of proof is then on the side of continuing agency provision of the service in question.
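The two initial steps just described, ascertaining the cost of a jointly produced service and eliciting demand at alternative user fee levels, can be sketched as a back-of-the-envelope calculation. The staff-time shares (roughly 20 percent staff analysis, 40 percent data and indicators, 40 percent research) are taken from the description of ERS services later in this chapter; the budget figure and the demand schedule are hypothetical.

```python
# A rough sketch of the two initial steps: (1) allocate an agency's
# joint costs to individual services in proportion to staff time, and
# (2) tabulate elicited demand at alternative user-fee levels. The
# 20/40/40 time shares come from this chapter; the budget and the
# demand schedule are hypothetical.

total_budget = 50_000_000  # hypothetical annual agency budget, dollars

time_shares = {            # share of professional staff time
    "staff analysis": 0.20,
    "data and indicators": 0.40,
    "intermediate and long-term research": 0.40,
}

# Step 1: attribute cost to each service by its staff-time share.
service_cost = {svc: share * total_budget for svc, share in time_shares.items()}

# Step 2: hypothetical elicited demand (subscriptions) at each fee level.
demand_at_fee = {0: 12_000, 50: 7_500, 100: 4_000, 200: 1_200}

# Projected fee revenue at each level, to compare against service cost.
revenue_at_fee = {fee: fee * q for fee, q in demand_at_fee.items()}

print(service_cost["data and indicators"])          # prints 20000000.0
print(max(revenue_at_fee, key=revenue_at_fee.get))  # prints 100
```

Allocating joint costs strictly by staff time is the simplest possible cost-accounting rule; the private-sector methods the text refers to would also apportion overhead, data collection, and capital costs across services.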
Three Types of Services

In its information and research support of the economic policy function of USDA, ERS provides three kinds of services: staff analysis, secondary data preparation and analysis, and intermediate and long-term research.

The most immediate and visible category of service is staff analysis in response to questions from the secretary's office, often from the Office of the Chief Economist. In fiscal 1997, ERS responded to 346 such formal requests. The ERS administrator maintains logs of these requests and the corresponding responses. Table 5.1 provides a list of the requests for a representative period during 1997, and it indicates the diversity of questions to which ERS responds. Responding to these requests takes about 20 percent of the time of the ERS staff of professional employees, most of whom are economists. Table 5.2 provides information about the distribution of ERS professionals by grade level and job title. Table 5.3 provides summary classifications of staff analysis by source of request, assignment to divisions, and timeliness of response for fiscal 1995, 1996, and 1997.

The second category of service is secondary data preparation and analysis. Included in this category are Situation and Outlook (S&O) and indicator activities. The output of these activities supports policy analysis and is used extensively, through subscription, by both industry and researchers in analyzing agricultural and food economics. S&O and indicator work is most closely associated with commodity market analysis, although indicators are also provided in the areas of natural resources and food economics. The S&O work in the commodity area supports the interagency commodity estimates provided by the USDA World Agricultural Outlook Board. (Other USDA agencies participating in the interagency estimates committees are the Foreign Agricultural Service, the Farm Service Agency, and the Agricultural Marketing Service.)

Development of indicators and the preparation of S&O reports and other information take about 40 percent of the time of the ERS staff of professional employees. Much of the market- and trade-related S&O output is reported through the publication Agricultural Outlook, the main source for USDA's farm and food price forecasts. Agricultural Outlook emphasizes the short-term outlook for all major areas of the agricultural economy and also presents long-term analyses of such issues as U.S. agricultural policy, trade forecasts and export-market development, food safety, the environment, and farm financial institutions. The publication presents extensive data on individual commodities, the general economy, U.S. farm trade, farm income, production expenses, input use, prices received and paid by farmers, per capita food consumption, and related issues. In addition to this general publication, ERS also regularly publishes S&O reports several times a year for numerous individual commodities and financial measures (examples include Agricultural Income & Finance; International Agriculture & Trade; Cotton and Wool; Feed; Fruit & Tree Nuts; Livestock, Dairy and Poultry; Aquaculture; Oil Crops; Agricultural Exports; Rice; Sugar & Sweet-
TABLE 5.1 ERS Staff Analyses: Requests Received During January 1997

Analysis requested (dates received; due; sent)

Provide background on research, education, and analysis pertaining to global change, including responses to specified questions (1/2; 1/8; 1/8)
Assemble a list and cost of subscriptions to newspapers, magazines, other periodicals, and on-line news services (1/2; 1/8; 1/8)
Comment on the thesis that declining rice production in the South has a negative impact on migratory birds and on the environment (1/2; 1/10; 1/10)
Review draft GAO report, "Commodity Programs: Despite Reforms, Some U.S. Prices Will Remain Higher Than World Prices" (1/3; 1/8; 1/8)
Comments and suggestions regarding the coverage content of the Business Expenditures Survey (1/6; 1/15; 1/17)
Provide the number of occupied housing units by region and summarize uses of fuel and uses of electricity (1/6; 1/8; 1/8)
Briefing materials relating to the National Cattlemen's Beef Association (1/7; 1/15; 1/15)
Prepare a 3-part briefing paper relating to the domestic supply, use, and price for dairy over the next year, the international dairy situation re Oceania, EU and GATT commitments, and relation of dairy situation to domestic food assistance (1/7; 1/8; 1/9)
Draft response relating to use of lecithin in Mexican electricity generators (1/7; 1/10; 1/10)
Prepare a briefing paper on the propane price issue (1/8; 1/9; 1/10)
Provide a description of the ERS proposed FY 98 global change program (1/8; 1/8; 1/8)
Briefing materials relating to the Pacific Northwest flooding (1/9; 1/9; 1/9)
Describe the status of assessments of economic impacts of a worldwide ban on methyl bromide (1/9; 1/13; 1/14)
Review the proposed Western Governors Association memorandum of understanding regarding future management of drought in the West (1/9; 1/14; 1/16)
Provide comments, talking points, and guidance on United Nations Statistical Commission documents (1/9; 1/28; 1/29)
Background on the tax income averaging concept and comments on the Nick Smith bill (1/10; 1/17; 1/17)
Briefing materials relating to Iowa Pork Producers Association (1/10; 1/15; 1/15)
List major science- and technology-related publications issued by ERS over the 1993–1996 period (1/13; 1/16; 1/16)
Discuss the food component of the December Consumer Price Index (1/14; 1/14; 1/14)
Identify hot issues that governors might raise at National Governors Association meeting (1/15; 1/17; 1/17)
Prepare slides of selected charts from briefing handouts and Food Review (1/15; 1/28; 1/23)
Provide information on the civilian labor force, employment, unemployment, and unemployment rate for selected countries (1/14; 1/16; 1/16)
Prepare draft ERS testimony for the appropriation hearings (1/16; 1/29; 1/28)
Update and add/delete Q's and A's for the FY 1998 appropriation hearings ERS witness book (1/17; 2/18; 2/20)
Briefing materials relating to data development associated with the new food safety law (1/22; 1/22; 1/22)
Provide information on persistent poverty counties and Great Plains (1/22; 1/31; 1/28)
Briefing paper on the effect of the January 1997 freeze in Florida vegetable areas (1/23; 1/23; 1/23)
Review and comment on "Working Paper Toward a National Rural Policy" (1/24; 1/29; not recorded)
Briefing materials relating to California vegetable and fruit growers, including any flood-related issues (1/24; 1/29; 1/29)
Briefing materials relating to small and minority farmers (1/24; 2/5; 2/5)
Provide background information on the Commercial Agriculture Division involvement in the Canada-United States Joint Commission on Grains (1/27; 1/30; 1/30)
Briefing information relating to the Southern Rural Sociological Association and emerging roles of land grant universities (1/27; 1/30; 1/30)
Review and comment on U.S.-Japan science and technology relations and agreement (1/27; 2/5; 2/3)
Briefing materials relating to the winter storms and cold in the Dakotas (1/28; 1/29; 1/29)
Update tables for the 1997 Statistical Abstract of the United States (1/29; 3/14; 3/19)
List of regional centers, consortia, programs, and projects that ERS supports (1/30; 2/6; 2/6)
Review and comment on material for a legislative report on the proposed draft bill on the Treasury Amendment to the Commodity Exchange Act (1/30; 1/30; 1/31)
Review and comment on draft FY 1998 "Our Changing Planet" (1/30; 2/6; 2/6)
Prepare a white paper on industrial hemp (1/31; 2/6; 2/6)
Briefing materials relevant to the National Cotton Council annual meetings in Florida (1/31; 2/7; 2/6)
Review and comment on Summary Report on Class I Price Proposals for Milk Marketing Order Reform (1/31; 2/4; 2/4)

Source: Staff analysis logs provided to panel by ERS.
TABLE 5.2 Social Scientists in ERS by Grade and Job Series (May 23, 1998)

Job series                         SES   15   14   13   12   11    9    7  Total
Economist/agricultural economist     2   37   78  137   27    8    4    6    299
Geographer                           -    -    -    1    -    -    -    -      1
Mathematician                        -    -    -    -    1    -    -    -      1
Operations research analyst          -    -    1    -    -    -    -    -      1
Social science analyst               -    -    1    3    1    -    -    -      5
Social science aid technician        -    -    -    -    -    -    -    2      2
Sociologist                          -    -    3    3    -    -    -    -      6
Statistician                         -    1    -    1    1    -    -    -      3
Total                                2   39   83  145   30    8    4    8    318

Source: Provided to panel by ERS.

TABLE 5.3 Staff Analysis in ERS, Fiscal 1995–1997

                           1995        1996        1997
Total requests              455         456         346

By source (a)
REE                      60 (13%)    93 (20%)    62 (18%)
OCE                     125 (27%)   105 (23%)    52 (15%)
Agencies                 52 (11%)    58 (13%)    47 (14%)
OSEC                    100 (22%)    86 (19%)   106 (31%)
White House              55 (12%)    54 (12%)    26 (8%)
Legislative branch       33 (7%)     22 (5%)     21 (6%)
Other                    30 (7%)     38 (8%)     32 (9%)

By division assignment (b)
RED                     151 (23%)   130 (18%)   115 (20%)
FCED                     97 (14%)   109 (15%)    94 (17%)
CAD                     235 (35%)   281 (39%)   148 (26%)
NRED                    113 (17%)   115 (16%)   124 (22%)
OENU                     46 (7%)     58 (8%)     63 (11%)
ISD                      28 (4%)     28 (4%)     25 (4%)

By timeliness
On time or early        356 (78%)   378 (83%)   292 (84%)
One day late             60 (13%)    47 (10%)    31 (9%)
Two or more days late    39 (9%)     31 (7%)     23 (7%)

a. USDA Assistant Secretary for Research, Education, and Economics (REE); USDA Office of the Chief Economist (OCE); USDA agencies (Agencies); Office of the Secretary, USDA (OSEC); White House offices, including the Office of Management and Budget and the Council of Economic Advisers (White House); members of Congress, General Accounting Office, Congressional Budget Office, and Congressional Research Service (Legislative branch); remaining clients (Other).
b. Rural Economics Division (RED); Food and Consumer Economics Division (FCED); Commercial Agriculture Division (CAD); Natural Resources and Environment Division (NRED); Office of Energy and New Uses (OENU); Information Services Division (ISD). Categories sum to more than total requests due to assignment of some requests to more than one division.
ener; Tobacco; Vegetables & Specialties; and Wheat) and a yearbook containing data and related information on an annual basis for most of these commodities.

Another important outlet for this type of work is the Farm Business Economics Report (formerly called Economic Indicators of the Farm Sector), which includes national and state farm income estimates, farm-sector balance sheets, government payments, farm-sector debts, and costs of production by commodity.

ERS produces numerous indicators that summarize the status of natural resource use in agriculture and associated environmental quality. Every few years these indicators are integrated into a comprehensive report, Agricultural Resources and Environmental Indicators (AREI). Following publication of the comprehensive report, as new data and information are collected, ERS publishes AREI Updates to supplement and update the information contained in AREI. AREI identifies trends in land, water, and commercial input use; reports on the condition of natural resources used in the agricultural sector; and describes and assesses public policies that affect conservation and environmental quality in agriculture.

Indicators of individual, household, and market food consumption, expenditures, and nutrients, as well as food marketing costs, marketing margins, and farm-to-retail price spreads, are also regularly developed and reported by ERS. Periodicals such as Food Review and annual publications such as the Food Marketing Review and Food Consumption, Prices, and Expenditures report data and statistics related to food consumption and nutrition, as well as the structure and performance of the food system.

The third category of service, intermediate and long-term research, accounts for the remaining 40 percent of ERS professional staff time. This research is related to the economic policy mandate of USDA. Currently, this entails a diverse set of projects, as indicated in Box 5.1. This box presents specific research functions and accomplishments of ERS, organized by division, as detailed by ERS in April 1998. This summary reflects the greatly increased diversity of the policy mandate of USDA, and by implication the diversity of the ERS research program, over the past 20 years, as discussed in Chapters 3 and 4. Table 5.4 provides information on the research publications of ERS professional staff.

The services of data preparation and analysis and of intermediate and long-term research, all in support of public policy, are made available not only to public servants charged directly with policy making, but also to the private citizens to whom public policy makers are responsible, and who are free to make use of these services in private decision making. As discussed in Chapter 2, in a democracy with a market economy, government might well provide information because it is a pure public good, even if the information were not needed to support the making of public policy. Some ERS services, particularly the provision of reports and indicators, are clearly used for private as well as public decision-making purposes. To determine in any useful sense which ERS services are primarily in support of public policy and which primarily provide information as a pure public good is beyond the scope of this report. In any event, in the panel's
BOX 5.1 Specific ERS Research Functions and Accomplishments

Market and Trade Economics Division

Specific function: Conduct research on U.S. and foreign agricultural and trade policies and their relationships to U.S. and world supply, demand, and trade of agricultural products.
1997 accomplishments: Analysis of China and Taiwan joining the World Trade Organization (WTO); support for WTO implementation and future negotiations; foreign direct investment and trade; technical assistance to emerging market countries.

Specific function: In cooperation with other ERS divisions, analyze the relationships between U.S. food, health and safety, environmental, and rural economic policies and programs and the structure and competitive performance of U.S. and world agricultural markets.
1997 accomplishment: Study of U.S. agricultural growth and productivity.

Specific function: Develop and maintain an analytic understanding of U.S. and foreign agricultural economic developments, including policy changes and institutional developments that affect agricultural markets.
1997 accomplishment: Implications of NAFTA for U.S. agriculture.

Food and Rural Economics Division

Specific function: Examine the demographic, social, and economic determinants of food and nutrient consumption; interrelationships between food and nonfood consumption; consumer valuation of quality, safety, and nutrition characteristics; and the role of information in determining food choices.
1997 accomplishment: Estimating nutrition information differentials and their impact on individual diets.

Specific function: Examine the adequacy and effectiveness of government programs, particularly food assistance and nutrition programs, with respect to the nutritional adequacy of diets and food security, including the costs and benefits of food assistance and nutrition programs, the extent and social cost of food insecurity, and the role of food assistance in meeting the larger goals of welfare programs.
1997 accomplishments: Study of low-income household food prices and costs; study of children's diets and nutritional shortcomings; evaluating commodity procurement for food assistance programs.

Specific function: Analyze the food processing and distribution sector, including the ability of the sector to meet changing consumer demand, the effect of government market interventions to facilitate that response, and the effect of government interventions and rapid changes in the sector on consumer and producer welfare.
1997 accomplishments: Estimating and addressing America's food losses; monitoring and analyzing the U.S. food industry.

Specific function: Analyze food safety issues, including consumer benefits from risk reduction, production trade-offs in reducing hazards, impacts of proposed regulations and international harmonization, and the implications of changing demographics for food safety economics. Also, examine the role played by food safety attitudes, knowledge, and awareness in shaping food choices and eating behavior.
1997 accomplishments: Economic assessment of the new meat and poultry inspection system; benefits of improved drinking water quality; estimates of societal costs from Campylobacter-associated Guillain-Barré syndrome.

Specific function: Analyze the economic, social, and demographic factors influencing the infrastructure of rural communities, agribusiness activity, and the industrial base of rural areas. In particular, analyze the development of rural portions of geographic regions of the United States, including changes in industry mix, tax policy, credit availability, and other economic activities, and means of measuring overall economic development.
1997 accomplishments: Rural credit study; rural empowerment zones; comparing income and wealth of farm operator households with all U.S. households.

Specific function: Determine the effects of economic, social, and governmental policy behavior on the demand for and supply of state and local government services including low-income
judgment, there are no ERS services that clearly are not provided in part to support public policy making. As noted in Chapter 4, however, ERS services with a substantial private-sector clientele, including many of the reports and indicators, have been a source of political support for the agency.

Current Framework for Program Evaluation

The most important characteristic of the ERS mission, plans, and goals is that, taken together, they provide the foundation for evaluation of the ERS program. As mandated by the Government Performance and Results Act of 1993, USDA has prepared a strategic plan for 1997–2002 (U.S. Department of Agriculture, 1997). This plan includes the following mission statement for ERS (p. 7-61):

The Economic Research Service provides economic analysis on efficiency, efficacy, and equity issues related to agriculture, food, the environment, and rural development to improve public and private decision making.

The ERS strategic plan identifies five goals (pp. 7-61 to 7-65):

(1) The agricultural production system is highly competitive in the global economy;
(2) The food system is safe and secure;
(3) The Nation's population is healthy and well-nourished;
(4) Agriculture and the environment are in harmony;
(5) Enhanced economic opportunity and quality of life for rural Americans.

In support of each goal is an objective. For example, the objective in support of the first goal is (p. 7-61):

Provide economic analyses to policy makers, regulators, program managers, and those shaping public debate that help ensure that the U.S. food and agriculture sector effectively adapts to changing market structure, domestic policy reforms, and post-GATT and post-NAFTA trade conditions.

For each objective, there is a statement of strategies for achieving the objective. The strategy statement for the first objective is (p. 7-62):

Identify key economic issues relating to the competitiveness of U.S. agriculture, use sound analytical techniques to understand the immediate and broader economic and social consequences of alternative policies and programs and changing macroeconomic and market conditions on U.S. competitiveness, and effectively communicate research results to policy makers, program managers, and those shaping the public debate regarding U.S. agricultural competitiveness.

Finally, there are performance measures corresponding to each goal. For the first goal, the performance measures are "Reports, briefings, staff papers, articles, and responses to requests that provide . . ." followed by a list of seven substantive topics, for example, "economic analyses on the linkage between domestic and

OCR for page 91
global food and commodity markets and the implications of alternative domestic policies and programs for competitiveness." The objectives, strategies, and performance measures corresponding to the other four goals are quite similar.

With respect to evaluation, the ERS strategic plan states in its section on "linkage of goals to annual performance plan" (p. 7–67):

Performance measures will assess the extent to which policy makers, regulators, program managers, and organizations (including major media) affecting the public policy debate have high-quality, comprehensive, objective, relevant, and accessible economic analyses for senior policy officials. ... ERS will use metrics to partially describe its volume of output. ... The annual performance reports also will include narratives covering characteristics of ERS output that demonstrate that ERS analyses were high quality, objective, relevant, timely, and accessible. The narratives will cover ERS anticipation of issues and the timeliness of output, review prior to release, customer views on relevance and accessibility of ERS analyses, and how ERS analyses contributed to informed decision making.

Evaluating ERS Services

The ERS mission statement and the goals, objectives, and performance measures of the ERS strategic plan concentrate on the substance of ERS research and information provision. The performance measures identify services at levels approaching, but still broader than, the level required for the four dimensions of evaluation described earlier in this chapter. Assessment of specific services by specific clients is raised only in the linkage of goals to the annual performance plan. This section of the strategic plan also identifies five global attributes for ERS analyses: quality, objectivity, relevance, timeliness, and accessibility.

With respect to the framework for evaluation discussed above, however, the ERS strategic plan has three critical shortcomings. First and most important, it gives no indication of comparisons of ERS services with either existing or hypothetical alternatives. It does not hint at comparing ERS performance with that of any other organization along any quantitative or qualitative dimension. Second, the strategic plan makes no provision for assessing the costs of providing any specific service, or for evaluating costs in light of the quality or other attributes of the service provided. Third, the plan concentrates largely on setting forth the substance of what ERS currently does, at the expense of focusing on improving the relevant attributes of the services it delivers.

RECOMMENDATION 5-1. Taken together, the ERS mission statement, strategic plan, and annual performance plan should identify the services provided by ERS, the clients and potential clients for each service, potential providers for each service, and the attributes of each service critical
to evaluation. An effective system of program evaluation will seek to establish the competitive position of ERS with respect to the services it provides and the reasons for that position.

Mission

To be effective as the driving force for an organization, a mission statement must explain the function of the organization with respect to the services provided and prospective clients. The ERS mission statement should be sufficiently broad that it rarely needs modification, whereas the strategic plan for a branch for one year should be much more specific and change from year to year. The current ERS mission statement defines one very broad product (economic analysis) and indicates the broad substantive scope of this analysis. This product is further specified in the specific functions of each ERS division, presented in Box 5.1. In fact, the substantive scope of ERS work derives from the mandate of USDA and, as detailed in Chapters 3 and 4, this derived mandate has changed greatly throughout the history of the BAE and ERS, especially in the past 20 years. Moreover, as indicated earlier in this chapter, ERS provides secondary data as well as economic analysis. The mission statement of ERS should recognize its functions and indicate broadly how those functions are carried out.

RECOMMENDATION 5-2. The mission of ERS should be to provide timely, relevant, and credible information and research of high quality to inform economic policy decision making in USDA, the executive and legislative branches of the federal government, and the private and public sectors generally. It should identify information and frame research questions that will enhance and improve economic policy decisions within the authority of the secretary of agriculture, organize the subsequent collection of information and conduct of research, and evaluate alternative approaches to policy problems. The work of ERS should address anticipated as well as current and continuing policy questions.
Services

Services provided by ERS should be defined both narrowly enough that ranking them against alternative potential suppliers is possible and broadly enough that a workable group of clients for the service can be identified. For purposes of organization, it is natural to group substantively related services into branches and divisions within the agency. But it makes little sense to try to compare how ERS analyses "help ensure that the U.S. food and agriculture sector effectively adapts to changing market structure, domestic policy reforms, and post-GATT and post-NAFTA trade conditions" along the attributes of quality, objectivity, relevance, timeliness, and accessibility with other actual or hypothetical providers, because the topic is so broad. To do the same with "economic analyses on the linkage between domestic and global food and commodity markets and the implications of alternative domestic policies and programs for competitiveness" is more realistic, but it remains a task requiring further organization. If one moves to the level of 5- and 10-year projections of agricultural commodity prices, the job is manageable.

For a given substantive activity of ERS (for example, economic analyses on the linkage between domestic and global food and commodity markets and the implications of alternative domestic policies and programs for competitiveness), it may be important to distinguish among intermediate and long-term research, monitoring, reporting, and staff assignments as separate services.

Clients

Clients are the potential users of the identified service, whether that service is provided by ERS, a private firm, an international organization, or another government agency. Clients, and especially potential clients, may be difficult to identify. The universe of "policy makers, regulators, program managers, and organizations shaping public debate of economic issues," so frequently mentioned in the ERS strategic plan, is a large one. An essential task of ERS management is to identify the subset of this important group of individuals and organizations for each service it provides or contemplates providing. In the case of 5- and 10-year projections of agricultural commodity prices, the primary client has been the USDA Budget and Program Analysis Office, because the impact of commodity support programs on future federal budgets depends substantially on future prices. As mandated in the farm bill of 1996, these support programs are gradually being reduced and may even be eliminated, in which case the Budget and Program Analysis Office will no longer be a client for these services.
There are other clients, for example, the Office of Management and Budget and the Congressional Budget Office. Determination of clients and potential clients for this particular service is essential to evaluating the service, and ultimately to deciding whether ERS should continue to perform this function.

Providers

Once services are identified at an appropriately narrow level, other actual providers of the service should not be hard to identify. In the case of information and research services, this amounts to knowing the secondary sources. Identifying potential providers of the service is a more sophisticated, but reasonable, task. As elaborated in the next chapter, this task is an essential first step in deciding whether a particular information or research service should be provided directly by ERS or should be procured from another party. In the case of 5- and 10-year projections of agricultural commodity prices, another provider is the Food and Agricultural Policy Research Institute (FAPRI), based at the University of Missouri and Iowa State University. Neither ERS nor FAPRI charges user fees for these projections; both receive congressional appropriations. Whether alternative providers would emerge, and what the relevant attributes of their projections would be, if ERS and FAPRI were either to charge user fees or to cease providing projections, is the sort of comparison with a hypothetical provider that is essential to serious evaluation.

Attributes

Simply eliciting clients' comparisons of the providers of an identified service is not enough. If ERS is to improve its delivery of services on the basis of evaluation, it must know why clients rank providers as they do. This is no less true if the ERS service is the hands-down favorite among clients than if potential clients universally disdain the ERS service. In many cases, ERS managers will have a good indication of the important broad attributes (for example, quality, objectivity, relevance, timeliness, and accessibility), but it is critical to obtain clients' open-ended assessment of the reasons for the comparisons they make. For example, in the case of 5- and 10-year projections of agricultural commodity prices, a client might indicate that FAPRI provides projections earlier, but only by regions of the world, whereas ERS provides projections country by country.

When a government agency does not charge user fees for its services, it is easy to overlook the cost of the service in question as one of its key attributes. Currently ERS makes no provision for allocating costs or staff time among the services it provides. It is not difficult to record this information; indeed, for ERS operations supported by interagency transfers (for example, from the Agency for International Development), this sort of accounting is maintained.
Without accounting for costs and staff time according to the services provided, any evaluation tool will be of quite limited usefulness.

RECOMMENDATION 5-3. ERS should allocate its costs and staff time across the services used in its system of evaluation, according to generally accepted accounting principles.

Evaluation Process

The evaluation process can be either formal or informal. Regular informal contact between providers and clients in a competitive environment leads to ongoing evaluation, for which any formal process is likely to be a poor substitute. This model is very familiar in private markets, including those for information and research embodied in products like newsletters and consulting. It is also familiar in the public sector, in the form of expert advisers (for example, the chairman and members of the Council of Economic Advisers) who are appointed to serve their primary client (the president, in this example). For ERS programs, this model is not directly applicable, and a more formal evaluation process is required. More formal approaches are also mandated by the Government Performance and Results Act. Formal instruments for evaluation should be organized along the four dimensions described here. In addition, the evaluation should be administered by a third party, and, ideally, clients and potential clients should not know that ERS is sponsoring the evaluation. Examples of third parties include the Measurement Laboratory of the Bureau of Labor Statistics and private-sector auditing and consulting firms.

RECOMMENDATION 5-4. Formal program evaluation instruments should elicit from clients and potential clients their choices among alternative providers and potential providers of the services provided by ERS, and the attributes of the services critical to their choices, including prices. The instrument should solicit the identities of additional potential clients and alternative providers of these services. ERS should participate in the design of evaluation instruments, but their administration should be delegated to an independent party.

Evaluating Individuals

The evaluation of professionals in ERS or any other organization poses a set of questions distinct from the evaluation of programs. Within ERS, the dimensions of evaluation for economists are set by the Office of Personnel Management (OPM). The five dimensions are scope of assignment, technical complexity, technical responsibility, administrative responsibility, and policy responsibility. Some examples from current OPM policy (U.S. Office of Personnel Management, 1996) indicate the kinds of qualities involved.
With regard to scope: "The GS-13 economist must initiate, formulate, plan, execute, coordinate, and bring studies to meaningful conclusions."

With regard to technical complexity: "GS-14 economists are almost entirely dependent on their own personal professional knowledge and imagination in the assessment and understanding of problems of critical importance."

With regard to technical responsibility: "The GS-12 economist is accountable not only for the factual accuracy of his results but for the thoroughness of his research plan and the cogency of his interpretations."

With regard to administrative responsibility: "Subject to supervisory approval, economists at the GS-13 level are responsible for identifying, defining, and selecting specific problems for study and for determining the most fruitful investigations to undertake."
With regard to policy responsibility: "GS-14 economists serve as authoritative technical advisors, within the area of assignment, in the highest councils of Government."

In 1996, ERS initiated an Economist Position Classification System, based on the OPM economist standard. Under this system, "economists have open-ended promotion potential based on their personal research and leadership accomplishments, which can change the complexity and responsibility of their positions" (Economic Research Service, 1997:1). Evaluation is carried out by a peer review panel, whose chair has final authority for determination of position grade level. This system supplements, but does not replace, annual reviews by immediate supervisors. Peer panel reviews are conducted every three years for positions GS-12 and below, every four years for GS-13, and every five years for GS-14 and above. There are provisions for early and delayed reviews and for reevaluation. The objective of the peer panel review is to grade the incumbent against the five dimensions of the economist classification standard and to assign the grade level that best matches the incumbent's qualifications.

General Guidelines for Evaluation of Individuals

The principle of comparison is as valid for the evaluation of individuals as it is for the evaluation of programs. The objective is to rank an individual's performance relative to the performance of other individuals in similar positions doing similar work. There can be no absolute standards. Regardless of the kind of work being done, any aspect of individual performance must satisfy five characteristics to be appropriate for evaluation, discussed below. The way that evaluation is carried out differs greatly, depending on the universe of comparison for individual performance.
Characteristics of Individual Performance

An aspect of individual performance is consequential to the extent that it directly affects the attributes of services provided by ERS that are identified in the program evaluation process set forth earlier in this chapter. An aspect of individual performance is controllable if the individual has substantial control over that aspect of his or her work. Ascertaining those aspects of performance that are controllable is more difficult for individuals working in teams than it is for individuals working alone. An aspect is observable if the immediate supervisor can monitor that aspect of the individual's performance. It is verifiable if the supervisor's observation can be replicated by others. Organizing tasks so that consequential aspects of individual performance are also observable and verifiable is a key task of management in any organization. A good organization of tasks will provide individual incentives that support favorable program
evaluations; if tasks are poorly organized, then it is not possible to reward individuals for performance that supports program objectives. Finally, an aspect of performance is comparable if it exists and can be measured in much the same way as the performance of others. This characteristic presents special problems, to which we return shortly. First, we take up three examples that illustrate how these characteristics are important to the evaluation of individual performance.

Consider an individual who is responsible, as part of his or her position, for written staff assignments in response to requests for information coming directly from the Office of the Secretary. An important aspect of the individual's performance is whether or not the written response addresses the question posed. This is largely controllable by the individual, although not entirely: for example, time constraints and the availability of data or other pertinent information must be taken into account in trying to isolate how well the individual addressed a particular question. (As this activity is repeated, these uncontrollable factors may be about the same for this individual and for those with whom he or she is compared.) This aspect is also an important attribute in the evaluation of the staff analysis services of ERS. If the individual's supervisor is appropriately qualified, then this characteristic is observable, and it can be verified by asking the ultimate client, the secretary, whether the response was relevant; indeed, this information is likely to be volunteered if the information provided is badly off target. Since there are many professionals performing similar activities in ERS and other agencies, this aspect of the individual's performance is comparable.
In the second case, consider a senior economist with substantial discretion and responsibility for intermediate and long-term research in support of a service provided by ERS: for example, economic analysis of alternative designs for the auction process used in the Conservation Reserve Program, or the synthesis and commissioning of studies on the impact of agricultural policy changes on carbon dioxide emissions. The related academic publication and citation record of the senior economist, or of the studies that he or she has managed, is consequential to the quality and credibility of ERS research, as well as controllable, observable, verifiable, and comparable.

In the final example, consider an individual who has had discretionary responsibility for a computable general equilibrium (CGE) model used to address questions about the impact of changes in trade agreements and tax policy. The quality of the model directly affects the quality, and ultimately the credibility, of ERS responses to questions about trade and tax policy, both important attributes in the evaluation of this service of ERS. Although the model is important, it uses established and proven methods, and its use is therefore not a candidate for publication in refereed journals. In both program and individual evaluation, critical anonymous reviews of the model might be solicited. Direct or indirect comparison of this individual with others might be complicated by the unique characteristics of the problems addressed by the CGE model and the technical complexity of the work.
RECOMMENDATION 5-5. The evaluation of professional staff should be grounded in aspects of individual performance that directly affect attributes of ERS services. The aspects of an individual's performance that are evaluated should be under the individual's control and should be capable of being verified by ERS staff beyond the individual's immediate supervisor.

The three examples illustrate that the appropriate method of evaluation differs greatly depending on the kind of activity in which the individual is engaged. The OPM grade definitions are based on the complexity of the tasks assigned to a position. The Economist Position Classification System currently implemented by ERS is a method for assessing these complexities in some depth. Neither, however, directly addresses the fundamental question of how successful the individual in the position is in carrying out the tasks important to the mission of ERS, relative to others in similar positions. Doing so requires greater flexibility and imagination than has so far been achieved in federal agencies.

RECOMMENDATION 5-6. Standards for evaluation of professional staff should be driven by the tasks that are important to the success of ERS programs. Standards of evaluation should therefore be different for professional staff engaged in different activities. No one standard is appropriate for all economists, much less for all professional staff.

RECOMMENDATION 5-7. Career evaluation of professional staff should be conducted by supervisors and appropriate peers, including ones outside ERS. In each case, it is essential that these evaluators gather information from the widest appropriate sources. Sources include clients for ERS services, external critical evaluators of technical work retained for the purpose, and publications and citations of research.

The ease with which individuals can be compared with their peers, and the universe of peers, varies greatly across positions.
For example, many research assistants perform similar functions, and most senior economists have worked with quite a few research assistants in their careers. Comparing the work of research assistants is therefore a more or less straightforward matter. Very senior economists engaged in intermediate and long-term research do work that is typically impossible for supervisors to observe directly, due to its technical specificity and complexity, but the institutions of peer-reviewed academic journals and journal article citation provide systematic comparisons by experts, with observable and verifiable outcomes.

For other positions, comparison by any means can be quite difficult. One example is provided by information specialists who have responsibility for monitoring and reporting in particular substantive areas. Consider a specialist in ERS
with sole responsibility for monitoring and reporting for a crop or group of crops. If there is no other specialist with that responsibility anywhere, including other government agencies as well as the academic and private sectors, then comparisons with actual alternatives cannot be elicited in either the program or the individual evaluation exercise. Comparison with hypothetical alternatives is problematic, and if comparisons are made with specialists in other areas, then sorting out how much of the difference is controllable is likely to be difficult. As a second case, return to the earlier hypothetical example in which there is no market for comparison of the CGE model used to address trade and tax policy changes, as there is for academically innovative research papers. Since the work is technical and intricate, the details are likely to be observable and verifiable only at high cost, and determining the extent to which the quality of the outputs is controllable is nearly impossible.

More generally, how to deal with positions in which individuals control key aspects of technical processes is a universal and increasingly important question in both private- and public-sector organizations. It is essential for managers to prevent positions involving critical, unique, and complex tasks from becoming bottlenecks to the success of agency programs. To this end, ERS should make openness and transparency in the performance of these functions central in the evaluation of individuals in those positions. One device for doing so is to require that such positions include documentation of the work performed, to a level at which a new individual could assume the position with minimal disruption to the services provided by ERS. The evaluation of individuals would then depend in large part on the quality of this documentation, which in turn would be evaluated by those who would potentially replace the individual if performance were substandard.
Conclusion

Evaluation of ERS programs must be conducted against world standards, not simply against the best that can be produced within ERS as a strictly intramural research and information agency. This is the appropriate standard for research and information in support of public policy in the United States, and it is especially important for ERS at this time, as its mandate is extended well beyond the training and background of much of its permanent professional staff. An open system of evaluation can do much to ensure that perceptions of the quality of ERS work are consistent with reality. A continuous, systematic program of evaluation will also help to insulate ERS and USDA from shifts in course as division directors come and go. Systematic evaluation of individuals on program-related criteria will facilitate internal rewards for good work.

A well-articulated system of evaluation of ERS programs is the appropriate cornerstone for management decisions within ERS. An effective and regular system of program evaluation will provide information essential to determining
the appropriate scope of ERS activities, to evaluating the performance of individual professional employees, to managing and allocating programs among potential suppliers, and to meeting ERS responsibilities under the Government Performance and Results Act of 1993. The next two chapters establish the link between the evaluation process and, respectively, the internal administration and organization of ERS.