Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.

3
Improving the Tools and Uses of Policy Analysis

A STRATEGY FOR INVESTMENT

The agencies that supply policy analysis for social welfare issues need to improve their databases and modeling tools. Although the climate of support for a well-targeted investment program is more positive on the part of both branches of government than at any time in recent years, agencies still face difficult choices in deciding where best to direct limited investment dollars.

Microsimulation models, in our view, offer important capabilities to the policy analysis process—in particular, the ability to evaluate fine-grained as well as broader policy changes from the perspective of their impact on subgroups of the population that are of interest to the user. However, microsimulation models do not serve all policy analysis needs, and the capabilities they provide typically require highly complex model structures and databases that can be resource-intensive for development and use. Other tools that are available for policy analysis, which may, in particular circumstances, offer appropriate and cost-effective capabilities, include:

  • large-scale macroeconomic models based on systems of simultaneous equations estimated with historical time series, which can project the effects of aggregate factors, such as rising inflation or changes in the federal budget deficit, on aggregate outcomes, such as gross national product or unemployment;

  • single-equation time-series models, which can use historical experience to project aggregate costs and caseloads for specific programs, such as AFDC or food stamps, under varying assumptions about changing family composition, inflation, employment, or other factors;

  • cell-based models, which can develop estimates of the effects of proposed policy changes, such as raising the social security retirement age or payroll tax, for the specified population subgroups (such as people categorized by age and sex) that comprise the "cells" in the model;

  • econometric models of individual behavior, which can estimate the probabilities that decision units (e.g., families and individuals) will make different program participation choices or otherwise alter their behavior in response to a policy change; and

  • second-round effects models, which can develop estimates of the longer-run effects of policy changes, such as the effects of changes in tax laws on long-run changes in the character of economic markets.
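The flavor of a cell-based calculation can be conveyed with a short sketch. The population cells, earnings figures, and tax rates below are entirely invented for illustration; a real model would use far more detailed cells and actual program rules.

```python
# Toy cell-based model: revenue effect of raising a payroll tax rate,
# computed cell by cell for population subgroups defined by age and sex.
# All populations, earnings, and rates below are hypothetical.

cells = {
    # (age_group, sex): (persons, mean annual taxable earnings in dollars)
    ("25-44", "F"): (28_000_000, 21_000.0),
    ("25-44", "M"): (27_500_000, 29_000.0),
    ("45-64", "F"): (19_000_000, 18_000.0),
    ("45-64", "M"): (18_500_000, 27_000.0),
}

def payroll_tax_change(cells, old_rate, new_rate):
    """Return (aggregate revenue change, change by cell) in dollars."""
    per_cell = {}
    for key, (persons, earnings) in cells.items():
        per_cell[key] = persons * earnings * (new_rate - old_rate)
    return sum(per_cell.values()), per_cell

total, detail = payroll_tax_change(cells, old_rate=0.062, new_rate=0.065)
print(f"aggregate change: ${total / 1e9:.1f} billion")
```

The point of the cell structure is that the same calculation yields both the aggregate total and the subgroup detail; the cost is that everyone within a cell is assumed to behave identically.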

Many policy applications require more than one modeling technique, and, indeed, many models themselves incorporate multiple approaches. Some models are explicitly "hybrids"—for example, models that link a microsimulation-based model of the household sector and a macroeconomic-based model of the economy. Other models reflect primarily one approach but make use of the outputs of other kinds of techniques. Hence, agencies will benefit from adopting a broad perspective as they consider how best to improve the tools and associated data they need for policy analysis.

In framing an investment strategy, agencies confront the fact of continual change in the policy landscape even though the basic concerns of social welfare policy have not changed much in the years since the Great Depression and World War II: for example, the current interest in revamping the nation's patchwork system of health care financing carries echoes of similar debates going back at least as far as the Truman administration. However, the relative priorities among issues change, as do the particular features of the debate on each issue.

In looking to the next 5-10 years, it is clear that issues related to health care will more and more occupy center stage as the nation faces escalating needs and costs. Thus, it is obvious that investments should be made to improve the capability for modeling health care policies, but it is by no means clear precisely what form these investments should take. Moreover, it would be unwise of agencies to assume that other policy topics, such as income support for the poor or retirement income, will be quiescent and that they can safely defer investment in modeling capability for those topics.

What agencies can assume is threefold: that policy options are likely to involve several topics—for example, the use of tax policy to achieve health care cost containment or income support goals; that changes in the debate within and across topics will occur, sometimes with stunning speed—for example, tax policy debate may well shift from capital gains to energy taxes; and that policy makers will certainly not reduce and may well expand their appetite for detailed information about the impacts of proposed policy changes.

These features of the policy landscape rule out extreme strategies. There is little merit in a totally proactive strategy of trying to forecast future analysis needs in great detail and so developing a highly targeted investment program, because the inevitable miscalculations will result in wasted dollars. Nor is there merit in a totally reactive strategy of producing on-the-fly estimates in response to the policy needs of the moment, because this posture throws away any opportunity to develop analysis tools that have a longer-term payoff or that can lead to improvement in the quality of estimates. Instead, in our view, agencies need to accord priority to investments in policy analysis tools that maximize their capacity to respond flexibly to shifts in policy interests and that provide capabilities for evaluating the quality and meaning of the estimates and maintaining high standards of documentation and replicability. Given the current climate of constrained resources, agencies also need to seek strategies that promise to reduce costs of model development and future application. All of this is a tall order, particularly in the case of large, complex models. In our review of the investment needs for microsimulation, we identified several approaches that we believe offer promise of success for this class of models and, possibly, also for other classes.

One imperative is for agencies to act upon the maxim that, in the case of multifaceted policy models that are intended to have a long-term use, it is essential to allocate sufficient resources and attention for good model design and implementation at the beginning. Attempts to achieve "savings" in development and testing are all too likely to have disastrous consequences for the users of the model. (The recent failure of the Hubble Space Telescope to achieve full functionality offers a highly visible object lesson of this point.) Another strategy, with high potential payoffs for making complex policy models more cost-effective, is for agencies to take advantage of the important technological advances in microcomputing hardware and software that are already having an impact in the business and academic worlds. Still another worthwhile approach is for the agencies to work for changes in the policy analysis community, to foster wider use of complex models by analysts and researchers, to encourage production of research that is relevant to modeling needs, and to improve upon some of the ways in which agencies have traditionally operated, both individually and as a group. We discuss these approaches in detail with respect to microsimulation models in Part II.

In determining investment strategies, whether for microsimulation or other types of models, it is important to focus on the goal of policy analysis, which is not just to produce numbers, but to produce numbers that provide useful guidance for decision making. "Useful" in this context implies many things, including relevance to the issue at hand, timeliness, and multidimensionality, for example, shedding light on distributional as well as aggregate effects. Most important, "useful" implies that the numbers are of reasonable quality. Given a known level of quality, analysts and policy makers can debate the merits of investing in the next increment of quality or investing in some other dimension. However, for many extant policy analysis models, the level of quality is simply unknown.

In our review, we found that policy analysis agencies have generally skimped on investment in model validation and related activities, such as archiving and documentation, that support validation. In the absence of systematic validation efforts, agencies are blindly spending precious dollars for model application and development: they can neither assess the return to date on their investment in policy analysis tools in terms of the quality of the estimates nor make rational decisions about future investments to improve quality.

Moreover, in the absence of validation, decision makers are using policy analysis estimates as if they were error-free. In fact, all estimates have some uncertainty associated with them, and the level of that uncertainty, in many cases, may be high. By ignoring the errors in estimates, policy makers may reach decisions, with possibly far-reaching social consequences, that they would not have made if they had realized how uncertain was the available information. Or they may waste time and resources in exploring very fine-grained policy alternatives that cannot be distinguished reliably with the available information. Finally, without information on uncertainty, policy makers cannot determine what investments in databases and models are most needed to improve the quality of the estimates for future decision making.
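The last point can be made concrete with a small sketch. The cost figures and standard errors below are invented; the rule of thumb (comparing the gap between two independent estimates with the standard error of their difference) is standard, but any real model would need its own error analysis.

```python
# Sketch: two hypothetical policy options whose estimated costs differ by
# less than the uncertainty in the estimates cannot be reliably ranked.
import math

def distinguishable(est_a, se_a, est_b, se_b, z=1.96):
    """Do approximate 95% intervals for two independent estimates
    exclude a zero difference between them?"""
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(est_a - est_b) > z * se_diff

# Estimated annual costs in billions of dollars (invented figures):
# Option A: 10.2 +/- 0.9, Option B: 10.8 +/- 1.1
print(distinguishable(10.2, 0.9, 10.8, 1.1))  # the 0.6 gap is within the noise
```

A decision maker told only "Option A costs $10.2 billion and Option B costs $10.8 billion" would see a difference the underlying data cannot actually support.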

We vigorously urge investment that will facilitate validation of model estimates on a regular and systematic basis. We also urge investment to improve the underlying databases for modeling and for the applied and basic socioeconomic research on which models rely for many important elements such as behavioral response functions. Although little systematic information is available about the overall quality of estimates produced by many models, there is ample evidence that critical data inputs to models have deteriorated in quality and relevance. Moreover, there is evidence that, in some cases, problems with data have had serious consequences for social and economic policy.

In this chapter we discuss in considerable detail important overall improvements that are urgently needed with regard to the quality and availability of data to support a wide range of policy analysis applications using a variety of modeling tools. We also discuss major changes that are needed in the approach of policy analysis agencies to validating and documenting model results and communicating uncertainties in these results to policy makers. We offer recommendations on each of these topics.

DATA QUALITY AND AVAILABILITY

Policy analysis of alternative legislative proposals is undeniably a "data-hungry" enterprise. Although some analyses require relatively few data, many kinds of analyses depend heavily on large amounts of data, and all analyses require data of good quality. Among the widely used policy analysis tools, both macroeconomic and microsimulation models stand out in their voracious data appetites. The major macroeconomic models rely on hundreds of historical time series extending over many decades that capture specific elements of aggregate economic behavior (e.g., public and private spending, industrial production, employment, prices). Microsimulation models require large samples with rich sets of information on each individual record in order to estimate policy effects for detailed population subgroups.

In this regard, policy analysis simply mirrors the larger society: contemporary Americans are avid consumers of information of all types, and the federal government supplies much of the data that the public and private sectors use for everything from entertainment to research and analysis to critical decision making. At the lighter end of the spectrum, media outlets rely heavily on government statistics for a wide range of information about the characteristics of Americans and their predilections and problems. At the weightier end, statistics such as the monthly unemployment rate and consumer price index have consequential impact on the national economy, indirectly through their influence on financial markets and business behavior and directly through their use in indexing some wage contracts, entitlement programs (e.g., social security and food stamps), and federal grant programs to states and localities. In between, federal statistics are used by all levels of government, businesses, nonprofit organizations, and academia for all manner of research, planning, decision support, and evaluation purposes.

Good data are obviously necessary for good analysis and informed decision making; consequently, improvements in data quality and relevance for policy analysis and other purposes represent worthwhile investments on the part of the federal government. Certainly, a well-considered continuing program of investment in data (and modeling tools) needed for social welfare policy analysis seems warranted in light of the resources that are at stake. The federal government spends more than $300 billion annually on social insurance programs (including social security, Medicare, unemployment insurance, and workers' compensation) and almost $75 billion annually on public assistance programs (including supplemental security income (SSI), AFDC, food stamps, and Medicaid); state and local governments spend an additional $43 billion and $46 billion, respectively, on social insurance and public assistance programs (Bureau of the Census, 1991:Table 583).1 In comparison, the entire statistical budget of the federal government is less than $2 billion in most years.2

1  These figures are for 1988; social insurance expenditures exclude federal and state and local public employee pensions.

2  The Office of Management and Budget (1990:Table 1) reported fiscal 1988 budget obligations for all statistical activities of the federal government—including programs of large and small statistical agencies, statistics-related activities of policy research agencies, and the programs of administrative agencies (such as the Immigration and Naturalization Service and the Internal Revenue Service) that generate statistical data as a byproduct of administrative actions—at $1.7 billion, including $0.2 billion for the 1990 decennial census. The 11 principal statistical agencies, including the Bureau of Economic Analysis, Bureau of Labor Statistics, Bureau of Justice Statistics, Census Bureau, Economic Research Service of the Department of Agriculture, Energy Information Administration, National Agricultural Statistics Service, National Center for Education Statistics, National Center for Health Statistics, Policy Development and Research Office of the Department of Housing and Urban Development, and Statistics of Income Division of the IRS, accounted for $0.9 billion. In fiscal 1990, the peak year of spending on the decennial census, total budget obligations of federal statistical programs are estimated at $3.0 billion, including $1.3 billion for the census.

However, we recognize that very difficult resource allocation issues arise in considering, first, the share of the federal budget to devote to data production and, second, the share of the federal data budget to devote to particular data needs. We do not pretend to have the answers to questions such as whether a dollar invested in improved intercensal small-area estimates of population and income for use in federal fund allocation and state and local government planning has a higher payoff than a dollar invested in improved sample surveys on income and health care for federal policy analysis use or than a dollar invested in improved input data for the national economic accounts. We do offer some observations and recommendations that we believe have a broad utility for improving data for policy analysis.

Investment in Data Production

A disturbing feature of the decade just completed has been the declining federal investment in the production of high-quality, relevant data for many policy areas. At the start of the decade, nine major federal statistical agencies experienced sizable cutbacks, amounting to a 20 percent reduction (in constant dollars) in their budgets between fiscal 1980 and fiscal 1983. Subsequently, budgets were adequate for the agencies to maintain and, in some cases, expand the core activities that remained after the initial reductions. However, across-the-board cuts implemented again in 1986 and yet again in 1988 resulted in an overall decline of 13 percent in the expenditures of the major statistical agencies between 1980 and 1988 (Wallman, 1988:13).

With regard to information specifically needed for social welfare policy, we note first that federal and state spending for social insurance and public assistance programs increased by 32 percent from fiscal 1980 to 1988 in real terms (Bureau of the Census, 1991:Table 583).3 In contrast, spending for the statistical agencies that produce relevant data—including the Bureau of Economic Analysis, Bureau of Labor Statistics, Census Bureau, National Center for Health Statistics, and Statistics of Income Division in the Internal Revenue Service (IRS)—increased at most by only 12 percent in real terms from fiscal 1980 to 1988 and actually fell by 6 percent in real terms from fiscal 1985 to 1988 (see Table 3-1). A major new survey was introduced during the 1980s to support improved social welfare policy analysis—the Survey of Income and Program Participation (SIPP). However, repeated cutbacks in the SIPP sample size and length of panels greatly undercut its usefulness.

3  Fiscal 1988 budgets were converted to constant 1980 dollars by using the GNP implicit price deflators for federal nondefense and state and local government purchases of goods and services (from Joint Economic Committee, 1990:2).

Although budget constraints in some cases encouraged agencies to scale back or eliminate outmoded programs, they have had many serious consequences. Most attention has focused on quality problems with basic economic statistics, such as monthly measures of retail sales, imports and exports, and the gross national product (see Council of Economic Advisers, 1990; Economic Policy Council, 1987; Juster, 1988; Office of Technology Assessment, 1989b; see also Sy and Robbin, 1990, who consider problems with a broad range of economic and other federal statistics as they affect policy uses of the data). Failure of concepts and measurement to keep up with economic trends (such as the shift from a manufacturing to a service economy), reductions in survey samples and in the availability of administrative records, and inadequate staff resources are among the factors cited for deterioration in basic economic data series.

These deficiencies in the quality and relevance of economic data have had important policy consequences. Schultze (Policy Users Panel, 1988) recounts the experience in the late 1970s and early 1980s, when the consumer price index (CPI) overstated the rise in the cost of living by some 1-2 percent a year, with serious economic consequences for wage escalation and overadjustment of social security and other federal entitlements.4 Fuerbringer (1990) comments on inadequacies in economic data series that, with disturbing frequency, have resulted in large differences between the preliminary and revised estimates of the gross national product (GNP) and other key economic indicators. The preliminary estimates are heavily used for business decisions, and they influence decisions of policy-setting agencies such as the Federal Reserve Board.5 Samuelson (1990) cites problems with current business surveys that produced overestimates of wages and salaries in 1989 and hence overestimates of projected federal tax revenues. The revised wage and salary estimates, based on more complete information that became available in spring 1990, necessitated downward revisions of projected tax revenues and a sharp upward revision in the forecast of the federal budget deficit, a revision that presented grave political and economic policy problems for the President and Congress.

4  The overestimate was caused by calculations regarding owner-occupied housing, which was treated in the CPI as an investment good rather than as an element in the cost of living that provided a stream of housing services. Hence, soaring interest rates and house prices in the 1970s led to overestimates of the rise in the cost of housing and thereby in the CPI. In 1983 the Bureau of Labor Statistics changed the measurement of housing costs to a rental-equivalence approach.

5  David (1990:3) cites an example in which the Federal Reserve Board tightened interest rates in the summer of 1989, at least in part in response to a measure of declining inventories: "The higher rates forestalled home purchases and caused entrepreneurs to delay new enterprises. Subsequent revision of the data showed a large error in the original estimate."

TABLE 3-1 Trends in Funding for Major Statistical Agencies Providing Data for Social Welfare Policy

                                                      Budget Obligations (millions of dollars)    Percentage Increase (constant dollars)
Agency                                                Fiscal 1980   Fiscal 1985   Fiscal 1988     Fiscal 1980-1988   Fiscal 1985-1988
Bureau of Economic Analysis (BEA)                            15.8          21.8          23.6                + 4.9              - 4.5
Bureau of Labor Statistics (BLS)                            102.9        170.6a         175.3                +19.5              - 9.8
Census Bureau (current programs only;
  not including censuses)                                    52.5         84.8b          94.3                +25.8              - 2.4
National Center for Health Statistics                        43.3          42.8          54.4                -11.8              +11.6
Statistics of Income Division (SOI),
  Internal Revenue Service (IRS)                             14.6         19.0c          17.2                -17.2              -20.5
TOTAL                                                       229.1        339.0d         364.8                +11.7              - 5.5

NOTE: Analyzing statistical agency budgets over time is difficult. Some budget changes are more apparent than real because of transfers of programs from one agency to another. See notes a, b, and d for estimates of the effects of the largest transfers. Other changes reflect cyclical funding patterns: for example, including the population and economic censuses in the Census Bureau's budget would produce dramatically different trends depending on whether the comparison years represented high or low points in the funding cycle.

a The fiscal 1985 budget figure reflects a transfer of programs from the Employment and Training Administration of an estimated $23.7 million. If this amount is excluded from the BLS budget for fiscal 1988 (assuming conservatively that it did not increase in nominal terms), the increase for BLS from fiscal 1980 to 1988 in constant dollar terms is 3.4 percent instead of 19.5 percent.

b The 1985 budget figure reflects the introduction of SIPP. (The first SIPP interviews were conducted in fall 1983, and the design of overlapping panels was fully phased in by fiscal 1985.) The 1985 budget figure also reflects program transfers from other agencies of an estimated $2.3 million (Baseline Data Corporation, 1984:Table 1). If the transferred amount is excluded from the Census Bureau budget for fiscal 1988 (assuming conservatively that it did not increase in nominal terms), the increase from fiscal 1980 to 1988 in constant dollar terms is 22.9 percent instead of 25.8 percent.

c The fiscal year 1985 budget amount is from Wallman (1986:260). The amount reported in Office of Management and Budget (1986:Table 2) includes IRS field costs not previously assigned to SOI.

d The fiscal 1985 total budget figure reflects transfers from other agencies of an estimated $26 million (see notes a and b). If this amount is excluded from the fiscal 1988 total, the increase from fiscal 1980 to 1988 in constant dollar terms is 3.7 percent instead of 11.7 percent.

SOURCES: Unless otherwise noted, net direct budget obligations are from Office of Management and Budget (1986:Table 2) for fiscal 1980 and 1985 and from Office of Management and Budget (1990:Table 1) for fiscal 1988. Constant dollar comparisons were calculated by using the GNP implicit price deflator for federal government nondefense purchases of goods and services (from Joint Economic Committee, 1990:2).
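The constant-dollar comparisons described in note 3 and in the Table 3-1 sources reduce to a simple deflator calculation, sketched below. The index values are invented for illustration; the report's own conversions use the Joint Economic Committee deflator series.

```python
# Sketch of a constant-dollar (real) comparison: deflate the later nominal
# budget to base-year prices before computing the percentage change.
# Index values here are invented for illustration.

def real_change_pct(nominal_start, nominal_end, deflator_start, deflator_end):
    """Percentage change between two nominal amounts, expressed in
    constant start-year dollars."""
    real_end = nominal_end * deflator_start / deflator_end
    return 100.0 * (real_end - nominal_start) / nominal_start

# A hypothetical budget rising from $100 million to $140 million while the
# price index rises from 85.0 to 120.0 actually shrinks slightly in real terms:
print(round(real_change_pct(100.0, 140.0, 85.0, 120.0), 1))
```

This is why several agencies in Table 3-1 show nominal budget growth alongside negative constant-dollar changes.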

Data series that are used for policy analysis of social welfare issues have also suffered deterioration in terms of quality and relevance to policy needs. A prime example is the continued reliance on outmoded concepts for characterizing families and economic decision units in widely used household surveys such as the Current Population Survey (CPS) and Consumer Expenditure Survey (see David, 1990). As Juster (1988:16) comments,

In today's society, the traditional notion of the stable family as the unit of observation in economic and social statistics is in need of rethinking. For example, unrelated individuals in a modern household may have little or no information on the labor force attachment, or the income and wealth positions, of the other members of the household. Is that why teenage unemployment rates appear to be so high? And members of a household may depend more on transfers of income or wealth from outside the household for food and clothing and shelter, or for access to higher education, than on the income and wealth of household members. Is that why forecasts of college attendance have been too low?

Inability to provide adequate descriptions of today's complex family structures and relationships has made it increasingly difficult to assess many important policy initiatives for social welfare. Thus, analysis of child support enforcement programs, which offer the potential to reduce government income support costs, is hampered in the absence of joint information on the family circumstances of both the custodial and the noncustodial parents. There are other examples of problems in socioeconomic data series (see Baseline Data Corporation, 1984; General Accounting Office, 1984; Wallman, 1988):

  • Reduced sample sizes and content of major surveys in a wide range of areas have limited the analyses that the data can support. Surveys with reduced content include the CPS (some supplements were dropped), the SIPP (one or more interview waves were dropped for some panels), and the Health Interview Survey. Surveys with sample size reductions include the CPS (a change that did not necessarily affect national estimates, but did affect state data, which are important for programs such as AFDC and Medicaid); the SIPP, for which the sample reduction—as much as 40 percent for some panels—was particularly drastic; the Health Interview Survey (the sample was later restored for this survey); the National Health and Nutrition Examination Survey (NHANES); and the youth cohort of the National Longitudinal Surveys of Labor Market Experience. As noted earlier, samples for business surveys, such as wholesale and retail trade, that are important for estimates of the GNP, projections of budget deficits, and other policy purposes, were reduced, as were samples of tax returns prepared by the Statistics of Income Division.

  • Cutbacks in the periodicity of many surveys, particularly health surveys, have adversely affected the ability of analysts to measure important trends. Thus, the NHANES was cut back in frequency from every 5 years to every 10 years, the National Ambulatory Medical Care Survey from every year to every 3 years, the National Nursing Home Survey from every 2 years to every 6 years, and the Survey of Family Growth from every 3 years to every 6 years. The periodicity of several business surveys was also cut back. The American Housing Survey (formerly the Annual Housing Survey) was made biennial instead of annual.

We note just one example—from health care policy—of the impact of less frequent updates of important surveys. National medical care expenditure surveys were conducted in 1977 and 1980 but not again until 1987. Estimates originally prepared by the Congressional Budget Office (CBO) of the likely costs of covering prescription drug costs under Medicare were determined to be much too low, once the 1987 data, which showed a rising trend in prescription drug use on the part of the elderly, became available (see Chapter 8).

  • Important differences in concepts across data series, which agencies have identified but not been able to address, have precluded definitive assessment of the quality of policy-relevant information. Thus, the personal income measures in the National Income and Product Accounts (NIPA), which provide important data for evaluating the quality of income measurement in household surveys, have never been disaggregated to permit appropriate comparisons for the household sector. (For example, the personal income estimates of interest and dividends in the NIPA include receipts of nonprofit organizations as well as households.)

  • A range of measurement problems, which agencies have not been able to analyze adequately or remedy, has hampered assessment of economic well-being. Such problems include rising nonresponse rates to questions about income, as well as errors in reporting types and amounts of income in household surveys. Also, there is a lack of adequate data on sources of income as diverse as nonwage benefits, which are estimated to account for more than one-quarter of employer labor costs, and receipts from illegal enterprises, which in one estimate account for one-quarter of the income of inner-city men (Levitan and Gallo, 1989:14,25).

Congress has expressed concern over the deterioration of the nation's information base, and the administration has recently expressed support for budget increases and reallocations to make it possible to effect improvements in important statistical concepts and data series (see Boskin, 1990; Council of Economic Advisers, 1990, 1991; Darby, 1990; Office of Technology Assessment, 1989a). Some of the proposals that are relevant to social welfare policy analysis data needs include conducting research on measurement of poverty and
income; restoring the SIPP sample size; exploring ways to link SIPP data with information from administrative records while safeguarding confidentiality; and modernizing the labor force component of the CPS. Although we have not examined the merits of specific elements in the administration's package and do not presume to judge among data needs across areas, we want to record our support for increased investment in the federal statistical system.

Recommendation 3-1. We recommend that the federal government increase its investment in the production of relevant, high-quality, statistical data for social welfare policy analysis and other purposes.

Coordination of Data Production

In addition to budget and staffing constraints, the federal statistical system over the past decade has suffered a deterioration in mechanisms for interagency coordination and the ability to draw on and integrate information from a range of databases, particularly administrative records. The consequences have been reduced timeliness, quantity, and quality of policy-relevant data.

With its traditionally decentralized statistical system, whereby one agency collects data on health conditions, another collects data on health care financing, another collects income data, and so on, the United States depends heavily on effective coordinating mechanisms to achieve optimal allocation of data production resources. Yet the principal coordinating mechanism—a statistical policy group (variously named over the years) in the Office of Management and Budget (OMB)—with no more than half a dozen staff members and limited resources is today a shadow of its former self. Indeed, the demise of the statistical coordinating function has occurred over a longer period than the past decade: resources for this office, established in 1933 and located throughout most of its existence in OMB, peaked just after World War II (Wallman, 1988; see also Policy Users Panel, 1988; Sy and Robbin, 1990). The deleterious consequences of this situation include long lags between revisions of important government-wide coding schemes, such as the Standard Occupational and Industrial Classifications, and a reduced ability to evaluate interrelationships among data collection activities, given that the statistical policy group can provide limited or no oversight to the OMB desk officers who are assigned to clear survey questionnaires of specific agencies.

Various interagency and intraagency coordination efforts, of greater or lesser formality, continued or started up during the 1980s. They were typically organized around specific surveys—for example, the federal interagency committee on SIPP, which is chaired by OMB—or around specific topics—for example, the interagency forum on aging-related statistics, which is cochaired by the Census Bureau, the National Center for Health Statistics, and the National
Institute on Aging. These efforts, although useful and important, do not wholly fill the need for a carefully designed structure of coordinating mechanisms devoted to optimizing the cost-effectiveness of federal data production.

The Paperwork Reduction Act, which governs the OMB role in setting statistical policy, is due for reauthorization by the Congress. The legislative discussion, although focused on OMB's role in reviewing federal agency regulations and reducing the burden of administrative paperwork on the private sector, has also considered ways to strengthen OMB's statistical coordination function. We have not studied the merits of alternative legislative proposals, nor have we considered whether OMB remains the appropriate place to lodge the responsibility for statistical coordination, but we want to record our support for efforts to improve coordination among federal statistical agencies.

Recommendation 3-2. We recommend that the federal government strengthen and increase its investment in the coordination of federal statistical activities, with the goal of improving the quality and relevance of data for policy analysis and other purposes.

Broadened Data Collection

The decentralized character of the federal statistical system, coupled with the near certainty of changing data requirements over time, argues, in our view, not only for improved coordination of data production generally, but specifically for adoption of more far-seeing strategies of government data collection. It is important for reasons of both total cost and analytical usefulness that the major surveys on particular topics—health care, income support, retirement income, tax policy—be designed with a broad focus and in ways that facilitate relating the survey data to other survey and administrative data.

Cost concerns are frequently used to argue that surveys be focused in terms of subject matter and that they not duplicate topics covered in other surveys.6 While not denying that costs are important, we believe that false economy is frequently introduced in large-scale data collection efforts by not also considering the benefits gained from more inclusive survey strategies. First, as alluded to previously, there is considerable uncertainty about what the important policy issues of tomorrow will be. Data collection efforts that are finely tuned to today's issues are likely to be unresponsive to future concerns. Second, investments in analytical tools for policy analysis, such as microsimulation models, are generally tied to specific surveys. The investment payoffs will be limited to the extent that the database constricts the model's ability to respond to new policy questions. Third, the most analytically useful surveys frequently include panel observations stretching over a number of years.7 In these cases, it is especially important to anticipate the range of potential applications. Although no one survey can or should strive to serve all purposes, these factors suggest that proper decision making should look beyond just the cost of mounting the specific survey or data collection effort. Breadth of subject matter—for example, health care surveys including some socioeconomic variables and income surveys including some health status variables—has clear analytical value.

6  

Under the general heading of "costs" we include such diverse things as the time required by respondents to provide information, the costs of encoding and processing survey responses, and the effects on the complexity of the analysis tasks required to use the data.

In addition, inclusion of overlapping variables is very useful for evaluation of data quality (e.g., comparing income reports in the March CPS and SIPP) and is essential when it becomes necessary for analysis purposes to add variables to a primary data set from other sources through some type of imputation or matching technique. Hence, duplication across surveys should not be resisted solely on cost grounds—considerations of data quality and utility must enter the calculation as well.8 We repeat that not all future needs can be anticipated. It is therefore desirable as well to build mechanisms into surveys, such as supplemental modules with changing content, to enhance timely response to emerging data needs. It is also important that such supplemental modules be designed and incorporated in a timely manner.9

Recommendation 3-3. We recommend that federal data collection strategies emphasize breadth of use and ability to respond to changing policy needs. In this regard, duplication of selected questions across surveys should be encouraged to the extent that such duplication enhances utility and facilitates evaluation of data quality.

Another way in which federal data collection efforts should be broadened concerns the need for data that relate characteristics of individuals and institutions. There is increasing recognition that the success of social welfare policies depends importantly on complex delivery systems. The precise manner in which services are provided frequently influences—sometimes decisively—the extent to which services are truly available to individuals and the extent to which eligible individuals choose to avail themselves of the services. Indeed, a wide variety of social policies operate, not directly on recipients but through various institutions. For example, Medicaid reimbursements to hospitals influence directly the health services provided to the poor; nondiscrimination rules of the corporate tax code have an impact on the character of individual retirement benefits; the hours of operation, location of offices, and treatment of individuals by welfare agencies affect participation rates of eligible families and individuals. David (1990:4) contrasts the effort put into measuring participation in public welfare programs, which has produced evidence from the SIPP and other sources of high nonparticipation rates for some programs, with the lack of effort put into measuring the reasons for nonparticipation, which may include misunderstanding, fear of stigma in the community, the costs of applying, and other aspects of program administration.

7  

Panel surveys collect repeated data on the same units (individuals, families, firms, etc.) over time, allowing analysis of how changed circumstances affect changed behavior. Panel data differ dramatically in usefulness from purely cross-sectional data, which include only a single observation for each of the sampled units.

8  

For a similar argument, see the interim assessment of SIPP prepared by the Committee on National Statistics (1989). We note that care must be exercised in interpreting the results for overlapping items obtained from two or more surveys, because it is rarely the case that the items can be duplicated in all respects: wording, placement in the sequence of questions, etc.

9  

Several major surveys, including CPS, SIPP, and the Health Interview Survey, currently have a structure of core questions that are asked at every interview together with supplements that vary in content. For some of these surveys, however, including SIPP, long lead times are needed to include new supplements.

The interaction of individuals and service suppliers generates a special informational demand: the need to link individuals with a variety of institutions. Good analysis of policy alternatives must in general take the underlying linkages into account. For example, understanding the effects of hospital regulations necessitates data that relate the characteristics of patients and hospitals of different kinds. Similarly, understanding the effects of various regulations with regard to retirement requires joint information about characteristics of both individuals and their employers. In this regard, the Social Science Research Council Advisory Group on a 1986 Quality of Employment Survey recommended that the U.S. Department of Labor sponsor surveys that would provide linked data about employers and employees. The group commented (Kalleberg, 1986:8):

It is imperative to study the organizational contexts of human resource issues since organizations—small or large—are today the central structures in American society through which changes in the nature of work and industry occur and where policies are enacted. It is organizations that, among other things, sign union contracts, adopt automated equipment, relocate to communities or nations with lower wage rates, make capital investments, create supervisory structures, provide fringe benefits, set salary scales, and create or eliminate jobs.

Linkages of data about individuals and institutions must be incorporated into the design of data collection efforts. Most data collection efforts, however, are focused on a single entity—the family, the firm, or the like—and only tangentially collect information on related institutional data.

Recommendation 3-4. We recommend that federal statistical agencies give more attention to data collection strategies that recognize key interactions among individuals and institutions—employers, hospitals, government agencies, and others.

Linking Survey and Administrative Data

We believe it is imperative to address a serious problem that stems from the decentralized structures of the statistical and administrative data functions in the United States and heightened concern for preserving confidentiality of individual records: namely, barriers to relating survey and administrative data. The United States, although not having developed as fully as some European countries the use of administrative records for statistical purposes,10 has traditionally relied on administrative data to supplement information obtained from censuses and surveys. Records such as social security earnings histories, case files from public assistance programs, health care claims, and tax returns have provided additional data at reduced costs and also served to evaluate the quality of census and survey data. A principal mechanism for relating survey and administrative records has been to perform an exact match based on common identifiers, such as social security number.11 Yet developments in the past two decades have greatly undercut the contributions to the nation's information base from administrative sources. In the world of economic statistics, deregulation of key industries has eliminated entire databases because there are no longer regulatory agencies in place to generate records. In the world of social statistics, major restrictions on the availability of administrative data for research and analysis have come from legal and administrative actions to guard against possible breaches of confidentiality.
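The exact-match mechanism just described can be sketched in a few lines of code. The record layouts, field names, and data below are entirely hypothetical, standing in for a survey file and an administrative file keyed by a shared identifier such as a social security number.

```python
# Illustrative sketch of an "exact match": link survey records to
# administrative records that share a common identifier. The field
# names and values here are invented for illustration only.

survey_records = [
    {"person_id": "A1", "age": 34, "reported_income": 21000},
    {"person_id": "B2", "age": 67, "reported_income": 9500},
    {"person_id": "C3", "age": 45, "reported_income": 38000},
]

admin_records = [
    {"person_id": "A1", "recorded_earnings": 22500},
    {"person_id": "B2", "recorded_earnings": 8700},
    # C3 has no administrative record -- matches are rarely complete.
]

def exact_match(survey, admin, key="person_id"):
    """Return survey records augmented with administrative fields
    where the identifier matches; unmatched records are flagged."""
    admin_by_id = {rec[key]: rec for rec in admin}
    linked = []
    for rec in survey:
        admin_rec = admin_by_id.get(rec[key])
        combined = dict(rec)
        combined["matched"] = admin_rec is not None
        if admin_rec:
            combined.update(
                {k: v for k, v in admin_rec.items() if k != key})
        linked.append(combined)
    return linked

linked = exact_match(survey_records, admin_records)
match_rate = sum(r["matched"] for r in linked) / len(linked)
```

As footnote 11 notes, even an "exact" match is rarely 100 percent complete, which is why the sketch carries a match flag rather than assuming every survey record finds an administrative counterpart.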

Federal statistical agencies historically have an exemplary record in protecting the confidentiality of individual data records, whether from surveys or administrative systems. But in the past decade, many agencies, particularly the Census Bureau, tightened their policies and procedures to protect confidentiality and restrict access to both administrative records and survey data. In so doing, the agencies responded directly to legislation, such as the 1974 Privacy Act and the 1976 Tax Reform Act,12 that reflected heightened public unease about potential abuses of government data and indirectly to a broader set of concerns about growing disinclination to cooperate with government surveys.13

10  

For example, Denmark conducted its most recent census by extracting information from administrative data registers rather than by canvassing the population, and Sweden is moving in this direction (Redfern, 1987).

11  

"Exact match" is something of a misnomer in that data problems and inconsistencies in one or more of the input files inevitably make it difficult to be 100 percent certain about each match. However, exact matches are far preferable to other kinds of matching and imputation in which it is not possible to link records for the same people; see discussion of exact matching, which has generated useful files for microsimulation modeling, in Chapters 5 and 8.

12  

The 1976 Tax Reform Act extended the limitations on access to tax return data to the social security earnings data that employers report to the IRS and the IRS subsequently passes on to the Social Security Administration for administering benefit programs.

13  

During this period, response rates declined in some government (as well as many private) surveys, which could be attributed, in part, to growing distrust of measures to protect confidentiality. Census Bureau officials were particularly concerned about public outcries over privacy issues in some European countries, which led to the cancellation of the Netherlands census planned for 1981 and the West German census planned for 1983 (Butz, 1984).

Such restrictions have limited access not only for policy analysis agencies and the research community, but also for other statistical agencies. The effects have been most pronounced on the quality and breadth of publicly available microdata files containing individual responses, which are a necessary input to both microsimulation modeling and microeconometric research. The consequences are also evident for the quality and breadth of aggregate data series developed by statistical agencies from internal microdata sources. To cite one example, the Census Bureau no longer prepares for public release exact-match files (with all identifiers removed) from household surveys such as the March CPS matched with social security earnings histories.14 Because the available CPS-SSA exact-match files date back to 1978 and 1973, models and analyses of future retirement income program benefits are in the anomalous position of having to generate data for 10 or more past years before they can begin their projections, not only adding costs but inevitably impairing the quality of the estimates.15

To cite another example, access to IRS tax return data for statistical purposes can only be described as byzantine. The IRS Statistics of Income (SOI) Division makes available samples of income tax returns for research use; however, only the Office of Tax Analysis and the Joint Committee on Taxation have access to the full data records for these samples. Other agencies involved in tax policy analysis, such as CBO and ASPE, have access only to the much more limited public-use version of the SOI files. The information from tax returns does not provide all of the needed data for tax modeling, which typically requires linked information from household surveys as well. Thus, analysis of new tax proposals frequently requires information on people who do not file tax returns or who do not itemize deductions. Moreover, meaningful distributional analyses require data on the family composition and other characteristics of people who do file returns. Files containing exactly matched CPS (or SIPP) and SOI data would be very useful; however, they are not available.16 The IRS permits Census Bureau employees to have access to a limited set of tax return variables that have been used for evaluating censuses and surveys via exact-match techniques, but the Census Bureau does not, in turn, release matched files.17

14  

The Census Bureau recently prepared an exact-match file of SIPP and Social Security Administration (SSA) data, which are available only for 2 years to SSA researchers who are sworn in as special census employees.

15  

The 1973 exact-match file is publicly available. The 1978 exact-match file was never widely distributed, but it was made available to President Reagan's Commission on Pension Policy for use in modeling alternative retirement income policies. See Chapter 8 for a fuller discussion of information needs for retirement income modeling.

16  

Tax policy analysis agencies use statistical matching, which is a much less satisfactory technique than exact matching, to relate CPS and SOI data (see Chapters 5 and 8). The Census Bureau itself must develop its estimates of after-tax income from the CPS by imputing tax return information based on public-use SOI data because, as noted in the text, it has access to only a subset of items for all tax returns.

17  

The Census Bureau will also not release the bulk of the tax return data collected in the SIPP, although these and other confidential data can be used by outside researchers who come to work at the Census Bureau under its fellowship program and are sworn in as special census employees. See Chapter 8 for a fuller discussion of information needs for tax modeling.

We strongly support appropriate measures to protect the confidentiality of individual data records, taking all due precautions against either deliberate or inadvertent disclosure. However, we believe that mechanisms can and must be found to make it possible for the rich sets of data that are generated for federal administrative purposes to be used more fully for statistical purposes. The current situation of very limited availability, not only for policy analysis agencies and researchers but also for statistical agencies other than the originating agency, adds to the cost and reduces the quality of vitally important databases. We note that the administration recently took a step toward addressing this problem, announcing (Council of Economic Advisers, 1991:6) its intention to seek legislation "to provide a standardized mechanism for limited sharing of confidential statistical information solely for statistical purposes between statistical agencies under stringent safeguards. This will improve the accuracy, consistency, and timeliness of data throughout the Federal statistical system."

A panel of the Committee on National Statistics is currently investigating issues of confidentiality and access to federal data from surveys and administrative sources and exploring mechanisms for improved access by outside users as well as other statistical agencies. Such mechanisms include:

  • setting up "enclaves" whereby statistical agencies could share survey and administrative records; such data sharing would reduce costs and enhance quality even if public access remained limited (e.g., the Census Bureau could use IRS data to improve imputations for missing income data in the CPS and SIPP and to develop improved estimates of the income distribution for publication);

  • using sophisticated techniques to mask or blur data values so that microdata files containing survey and administrative data could be made publicly available;

  • swearing in analysts as special employees to use confidential data onsite at a statistical agency or in a secured facility; and

  • requiring researchers to sign agreements that provide them with access to more complete data sets but also subject them to stiff penalties for any data disclosure that breaches confidentiality.

Other mechanisms are also possible. Each has advantages and disadvantages; some may require legislation as well as changes in policies and procedures. But we believe it is imperative for the federal statistical system to find ways out of the bind that currently puts too many data sources, collected at considerable cost, off limits for research and analysis purposes.
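Among the mechanisms listed above, masking or blurring of data values can take many forms. The sketch below illustrates two simple, widely used ones, top-coding and bounded random noise; the thresholds, noise bounds, and data are invented purely for illustration and do not reflect any agency's actual disclosure rules.

```python
import random

# Hypothetical sketch of masking a sensitive variable before public
# release: extreme values are top-coded at a cap, and bounded random
# noise blurs the remaining amounts. All parameters are made up.

TOP_CODE = 100_000        # incomes above this are reported at the cap
NOISE_FRACTION = 0.05     # noise drawn uniformly from +/-5 percent

def mask_income(income, rng):
    """Top-code an income value, then perturb it with bounded noise."""
    capped = min(income, TOP_CODE)
    noise = rng.uniform(-NOISE_FRACTION, NOISE_FRACTION) * capped
    return round(capped + noise, 2)

rng = random.Random(12345)   # fixed seed for reproducibility
true_incomes = [18_500, 42_000, 250_000, 67_300]
masked = [mask_income(x, rng) for x in true_incomes]
```

The trade-off such techniques manage is exactly the one discussed in the text: enough distortion to prevent reidentification of individual records, but little enough that the file remains analytically useful.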

Recommendation 3-5. We recommend development and implementation of mechanisms to improve access, under appropriate circumstances, to administrative and survey microdata for statistical research and analysis purposes.

Adding Value to Existing Data

Increased investment, improved coordination, and broader data collection strategies are all important for improved quality and utility of federal data for policy analysis and other purposes. In addition, we believe there is a need for reallocation of resources within statistical agencies to emphasize analysis and amelioration of data quality problems, together with a realignment of the data production functions of statistical agencies vis-à-vis those of users in policy analysis agencies (and elsewhere). Although important data gaps need to be filled in many areas, simply collecting more data more frequently is not the answer to the problems confronting the nation's information base. As the experience with flawed economic indicators and their serious policy consequences shows, the available information must reflect appropriate and accurate measurement.

Statistical agencies, of course, are cognizant of the need to devote resources to evaluation of the quality of their data. However, budget and staff constraints, coupled with the difficulty of convincing decision makers of the value of methodological work, have often forced agencies to emphasize the operational activities necessary for timely data release at the expense of research on measurement issues and assessment of data quality. Moreover, evaluation research and the information on data quality that is provided to users have tended to emphasize the easily measured sources of errors, such as sampling variability, and to give less attention to other kinds of errors, such as nonresponse biases and content reporting errors, that may be equally—if not more—consequential in their impact. For example, Census Bureau publications from surveys such as the CPS and SIPP typically devote an entire appendix to sampling errors, but give very little space to nonsampling errors. Although there are encouraging examples of focused attention to data quality issues—such as the extensive research and evaluation program of the Census Bureau for SIPP (see Jabine, King, and Petroni, 1990) and the cognitive research laboratories set up by the National Center for Health Statistics and the Bureau of Labor Statistics to conduct pioneering studies of how respondents perceive survey questions—it is our view that much more attention needs to be devoted to data quality. And the need for evaluation of quality extends to data from administrative records as well as those from surveys. All too often, administrative data are accepted
uncritically as providing more accurate measures than surveys, when, in fact, they also suffer from a variety of reporting errors.

An important consideration for evaluation and measurement research conducted by statistical agencies is that the analysis be well informed by the policy and research needs for the data. Triplett (1990) argues that agencies have generally been responsive to the policy issues of the moment in making decisions about improved data collection but that they have failed to heed the data needs of policy research and more basic socioeconomic research, which are critical for effecting long-run improvements in the quality of policy analysis and decision making. This inattention goes a long way, in his view, to accounting for the relative slowness with which measurement improvements, such as better data on service industries, family structures, or economic well-being, are identified and implemented in the statistical system.

In addition to assessing data quality, statistical agencies need to add more value to the data series they release than is currently the practice. The statistical agencies have traditionally seen their role as preparing survey-specific data files and published tabulations from those files. They have not seen their role as producing analytical databases or publishing the best estimates—for such statistics as household income or poverty—that could be developed from multiple data sources. For example, in processing surveys like the CPS and SIPP, the Census Bureau has concentrated on such tasks as adjusting the data records for nonresponse by households and individuals and editing item responses for consistency. It has not performed additional data adjustments—such as correcting income amounts for reporting errors or modifying family structure to reflect population coverage errors—that would involve use of administrative records and other external data sources. Yet these and other adjustments are often critical for policy analysis and research uses of the data.
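One simple form of the value-added adjustment described above is a ratio adjustment that benchmarks survey-reported amounts to an administrative control total. The record layout, weights, and benchmark figure below are invented for illustration; real adjustments would be far more refined, varying the correction by income type and population subgroup.

```python
# Hypothetical sketch of a ratio (benchmark) adjustment: scale
# survey-reported amounts so that their weighted total matches an
# administrative control total. All figures are invented.

records = [
    {"weight": 1500.0, "interest_income": 200.0},
    {"weight": 2000.0, "interest_income": 0.0},
    {"weight": 1200.0, "interest_income": 850.0},
]

ADMIN_BENCHMARK = 2_000_000.0   # external control total (made up)

# Weighted survey total falls short of the benchmark, reflecting the
# underreporting of income discussed in the text.
survey_total = sum(r["weight"] * r["interest_income"] for r in records)
ratio = ADMIN_BENCHMARK / survey_total

for r in records:
    r["adjusted_interest"] = r["interest_income"] * ratio

adjusted_total = sum(r["weight"] * r["adjusted_interest"] for r in records)
```

Performing such corrections once, at the originating agency, is precisely the consistency gain argued for in the text: every downstream user starts from the same benchmarked amounts rather than applying (or skipping) the adjustment independently.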

Currently, policy analysis agencies and other users carry out these kinds of data adjustments if they are performed at all. But these users often lack the information as well as the resources to perform an adequate job, and users at one agency frequently duplicate the efforts of users at another agency. Having statistical agencies add value to data would, of course, increase the costs to those agencies, but it could result in overall savings for policy analysis and other important applications because of reduced data processing expenditures by user agencies. Moreover, substantial gains in data quality could be made by having the originating agencies with better access to related data sources produce their best estimates of, for example, family composition, health care costs, or household income. These estimates in turn would provide a measure of consistency for all subsequent analyses.

We recognize that it will not be easy to implement a major change in the relationship of statistical agencies and such users as policy analysis agencies. The users will have concerns as to whether the statistical agencies can provide enhanced databases in a manner that is timely and addresses particular analysis

needs. The statistical agencies, on their part, will have concerns about whether meeting user requirements for enhanced databases will adversely affect their primary responsibilities for original data collection and processing. Nonetheless, we urge that a dialogue be started.18

Recommendation 3-6. We recommend that federal statistical agencies increase their investment in evaluation of the quality of survey and administrative data. We further recommend that they use the results of evaluation studies to implement corrections, when feasible, to databases and published data series, with the objective of improving the quality and reducing the overall costs of providing analytically useful data for policy analysis and other important purposes.

Finally, we mention one specific issue that falls under the rubric of evaluating and adding value to data and is of especially widespread importance, namely, the coverage of the population in household surveys and censuses. Decades of research have documented a persistent pattern of coverage errors in the decennial census, amounting to a small net undercount of the population (estimated at 1.4 percent in 1980) and large net undercounts of particular population subgroups (see Fay, Passel, and Robinson, 1988). Research has also documented additional large undercounts in household surveys such as the CPS and SIPP (see Chapter 5 and Citro, in Volume II). The mechanisms and correlates of coverage error are not definitively established, but variables such as age, race, sex, family relationship, and income appear to relate strongly to undercount: for example, net undercount rates in the census are high for black men, black children of both sexes, household members outside the nuclear family, and low-income people generally (see Citro and Cohen, 1985; Fein, 1989).

Coverage errors in the census may have important implications for many analyses. Census data are often used directly for policy analysis and research. In addition, they indirectly affect the quality of many other data sets because of their use in the design of survey samples, in adjusting survey data to match census-derived population controls, as denominators for vital rates and other socioeconomic indicators, and as the basis for postcensal population projections.19

18. See our more detailed discussion in Chapter 5 of realigning data production responsibilities between statistical and policy analysis agencies with respect to the Census Bureau's recently announced intentions to develop a database from the March CPS, SIPP, and administrative records that will support an improved system of income statistics.

19. In this discussion we consider only the importance of adjustment mechanisms for analytical uses of census data, not the political consideration of geographic adjustments and their potential impact on apportionment, funding formulas, and the like. Clogg, Massagli, and Eliason (1986) review the potential impact of census coverage errors on direct and indirect uses of the data, such as denominators for vital rates and weighting adjustments for sample surveys, and cite several examples of important effects.

The additional coverage errors in household surveys may also have important analytical implications—for example, for studies of low-income population groups. The use of census-derived population estimates to correct coverage errors in household surveys realigns the survey data only by the age, race, and sex distribution of the population as a whole, not by other characteristics, and does not adjust for coverage errors in the census itself.

Evaluation studies of the 1990 census will provide important information about coverage errors and their correlates. We believe it is critically important to use these and other studies of population coverage to assess the extent and implications of coverage errors in censuses and surveys for important analytical uses of the data and to determine whether and what kinds of adjustments the data may need.

Recommendation 3-7. We recommend that the Census Bureau conduct a thorough evaluation of population coverage errors in the major household surveys and decennial census and their potential impacts on policy analysis and research uses of the data. Should important coverage errors be identified, we recommend that the Census Bureau develop ways to adjust census and survey data that have wide application for policy analysis and research.

VALIDATION

We must help those who want all knowledge to be clear, definite, and sure to deal with uncertainty…and to judge accuracy.

Janet Norwood (1990:67)

The stock in trade of policy analysis is the production of answers to a series of "what if" questions. If the government wants to require enrollment in job training programs as a prerequisite to receipt of income support payments, what will be the effect on program enrollment and costs? If the Medicare system decides to pay a flat amount to hospitals for treatment of individuals with a given illness, what will happen to costs and quality of care? If food stamps cannot be used for certain types of purchases, what will be the impact on nutrition and on program expenditures?

Answering such questions requires estimates that amount to conditional forecasts, that is, projections of hypothetical future events. If a given policy is put into place, what will be its effects? The users of these kinds of estimates—decision makers in the executive branch or Congress and their staffs—must decide how to evaluate the information that is provided to them. They, of course, would like accurate data with no uncertainty, a standard that cannot be met in the real world. Knowing that the standard of certainty cannot be met, users must consider how to incorporate the estimates into policy decisions. Moreover, because there may be competing estimates, which give different answers, users need to have information for assessing the relative quality, or certainty, of the estimates.

We begin our discussion by posing a simple question: Can users make reasonable judgments about the quality of estimates of the likely effects of a proposed policy change? That is, do they have information about the degree of uncertainty or variability in the estimates? And can they judge the track record of a policy modeling tool in terms of how well its previous estimates have corresponded with what actually occurred? We conclude that the answer to these questions is generally no. Indeed, this sweeping statement is not restricted to the work of specific agencies or government contractors. Nor is it restricted to social welfare policy issues or to the use of particular analytical tools. It is the exceptional analysis that can be assessed for validity, not the typical analysis.

In this section we focus primarily on the estimates produced by policy analysis techniques and models that are developed for more than one-time use. We first describe the magnitude of the validation problem, then provide more explicit definitions of what we mean by "model validation" and the kinds of techniques that are involved, and then discuss ways to facilitate the use of model validation techniques. In the next section we argue for the need for good documentation and archiving systems to support model validation and discuss ways to communicate the results of model validation exercises to decision makers.

The Difficulty of Validation

The problem of assessing the quality of estimates, forecasts, projections, or other analytical results is not restricted to the policy process and to policy analysis. It is faced in most scientific and academic inquiries and in business decision making, and a variety of approaches are available and used for validation. These approaches generally involve isolating the various sources of uncertainty, using observed data when possible to estimate the magnitude of such uncertainty, and generally undertaking a series of sensitivity analyses to assess the effect on the results of alternative specifications and assumptions.

The application of these approaches to policy analysis is particularly difficult. The accuracy of policy estimates depends on many different factors, including the precision of specification of the policy to be considered; the validity of the assumed relationship between program characteristics and outcomes of interest; the accuracy of forecasts of other relevant factors, such as the state of the economy; and the completeness of knowledge of how other factors affect outcomes. Each of these factors—as well as the quality of the database used for the estimation task—contributes to the inaccuracy of specific estimates and, as a result, contributes to uncertainty in the estimates. Also contributing to uncertainty and complicating the task of validation is the fact that the development of policy estimates often necessitates the use of more than one model or modeling approach.

Evaluating the results of policy analysis is not more difficult simply because of the number of different sources of uncertainty, however; other areas of inquiry face a similar array of sources of potential error.20 The difficulty in the policy process is that almost all analyses involve conditional rather than unconditional forecasts: that is, almost all analyses apply to a hypothetical set of policy alternatives—for example, a set of proposals to mandate minimum AFDC benefits so as to bring recipients' income up to specified levels of the poverty line. Because none of the alternatives may ultimately be adopted, reality checks on the quality of policy estimates are necessarily limited. This basic source of uncertainty is then compounded by the uncertainties from other, nonpolicy sources.

It is useful to compare this situation with other cases in which analysts make unconditional forecasts. For example, when economists forecast macroeconomic aggregates, such as the rate of growth of GNP, the inflation rate, or the unemployment rate, it is possible to compare what happens with the forecasts. In these cases, even though the forecasts may have been predicated on certain assumptions about exogenous conditions or about policies, the objective is a forecast of what will in fact occur at a specific time in the future.

Although validating unconditional forecasts requires time—it is necessary to wait until the actual data are available—pursuing such a process is straightforward. Indeed, forecasts of leading macroeconomic models are regularly evaluated against actual events. If the model or analytical method producing the estimates remains stable over time, analysts can compile data on the forecast accuracy of the approach and can use the model's track record as part of developing an estimate of uncertainty for future forecasts. Analyzing the precise sources of uncertainty, which is important for understanding the quality of a model's estimates and planning improvements to the model, is also possible, although the task may be quite complex. For example, in evaluating the forecasts produced by macroeconomic models, it is necessary to determine the contribution to total error not only from badly specified relationships inside the model and erroneous assumptions about external factors such as fiscal and monetary policy, but also from the ad hoc adjustments that forecasters typically make to their models' outputs. Nonetheless, a growing body of literature is attempting, with some success, to do just that—namely, disaggregate the observed errors from macroeconomic model forecasts by source (e.g., see McNees, 1989, 1990).

20. It is generally true, however, that the more complex the model, the more difficult the validation task, and the less likely it is that the task will be carried out. For example, there are only a handful of validation studies of microsimulation models (see Cohen, Chapter 7 in Volume II), but a considerable literature evaluating cell-based population projection models (see Grummer-Strawn and Espenshade, in Volume II).
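The track-record idea, compiling the errors of a stable model's past forecasts to gauge the uncertainty of its future ones, can be sketched in a few lines of code. This Python sketch is illustrative only; the forecast and actual figures below are invented, not drawn from any published evaluation.

```python
# Sketch: compiling a forecast "track record" for a model whose structure
# has remained stable over time. All figures are hypothetical.
forecasts = {1985: 3.1, 1986: 2.4, 1987: 3.0, 1988: 3.9}   # projected GNP growth, %
actuals   = {1985: 3.6, 1986: 2.7, 1987: 3.4, 1988: 4.1}   # observed GNP growth, %

errors = [forecasts[y] - actuals[y] for y in forecasts]
n = len(errors)
bias = sum(errors) / n                          # systematic over- or under-prediction
mae  = sum(abs(e) for e in errors) / n          # typical miss, ignoring sign
rmse = (sum(e * e for e in errors) / n) ** 0.5  # penalizes large misses more heavily

print(f"bias={bias:+.2f}  MAE={mae:.2f}  RMSE={rmse:.2f} percentage points")
```

Summary statistics of this kind, accumulated over many forecast rounds, are what allow analysts to attach a rough uncertainty band to the next forecast from the same model.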

Most policy analysis efforts, which involve conditional estimates of the outcomes of specific policy alternatives, are much more difficult to validate and analyze than are unconditional forecasts.21 First, data may never become available on the actual outcome of an analyzed policy. Although any given analysis may consider a range of policy options, the specific policy ultimately enacted is frequently not included among those analyzed: the policy process often begins with a specification of the potential range of policies but ends with a compromise outcome of the legislative debate. Second, even if an analyzed policy is enacted, it is difficult to disentangle the specific sources of errors. For example, in projections of participation rates for a program, it is hard to distinguish between an error that occurs because of poor understanding of the behavior of program participants and one that occurs because of bad forecasts of the economic environment. Yet it is often vital to know the source of error in order to evaluate the utility of different policy modeling approaches. Third, the policy process does not generate data on a regular basis. Most programs go through only occasional significant changes, making it hard to analyze how specific programmatic elements affect the observed policy outcomes.

These factors suggest that developing information on the quality of policy analysis efforts is inherently difficult. The policy process, however, makes the task even more complex. Typically, programmatic considerations and deliberations are conducted in an environment of time constraints and distractions from other policy deliberations. The pressures to "do something" generally move any discussion away from consideration of the underlying analysis. These same pressures often leave analysts with little or no time to spend on such tasks as documenting the analyses and developing evidence on the quality of estimates for future use.

Although the validation of policy analysis is inherently difficult, we are still troubled by the few attempts that have been made at it. We conclude that the users of policy analysis have rarely asked for data on the uncertainty of model estimates. This lack of interest has serious ramifications for the long-term quality of the analysis that is used in the decision-making process and, consequently, for the quality of the outcomes of that process. Without information to assess uncertainty, users may make decisions on the assumption that estimates are reasonably accurate when, in fact, they are full of errors.

One egregious example occurred in the decision to amend the tax code in 1981 to offer tax-deferred Individual Retirement Accounts (IRAs). Tax analysts substantially underestimated the revenue losses of this provision: many more taxpayers took advantage of the provision than projected. This is not necessarily a criticism of the analysts, because the data available for estimating IRA participation were very sparse; the point is that the decision process made use of the estimates as though they were certain.

21. In this regard, the use of macroeconomic models for simulation rather than forecasting purposes—that is, to estimate the effects of alternative government fiscal policies as distinct from forecasting the state of the economy—presents the same kinds of validation problems as do other kinds of conditional modeling exercises.

Moreover, without information to assess uncertainty, policy analysis agencies cannot determine where their investment dollars are most needed to improve the quality of future estimates. They must make resource allocation decisions based largely on instinct rather than on a cumulative body of evidence. Inevitably, investment dollars will be misdirected, and progress in improving the quality of policy analysis results will be erratic at best.

Before recommendations are considered, it is important to point out one way in which policy analysis estimates may be easier to produce reliably than the unconditional forecasts first discussed. The typical policy estimate, done early in the process, considers a variety of alternative specifications of policies designed to achieve a given goal. The policy deliberations themselves tend to concentrate on differences among the alternative approaches. Therefore, the key information from the policy analysis is the differential effects or costs of the alternative approaches. This fact frequently makes the task of validation easier because errors or uncertainties that affect the alternatives in a similar fashion have little importance for the consideration of which alternative is preferable. For example, erroneous forecasts of macroeconomic aggregates, such as the unemployment rate or inflation rate, may have little effect on the projected difference in costs of alternative welfare reform proposals. Even though the level of projected costs may be wrong, it is quite possible that the projected levels for each alternative are "equally wrong," making the analysis somewhat immune to some of the major sources of potential errors.
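A toy calculation can illustrate why differences between alternatives are often more robust than levels. In this hypothetical Python sketch (all figures invented), a macroeconomic error shifts the projected cost of both alternatives by nearly the same amount, so the estimated difference between them barely moves even as the levels swing widely.

```python
# Two hypothetical welfare-reform alternatives evaluated under three
# macroeconomic scenarios. A near-common cost shock moves both levels
# together, so the projected difference is comparatively stable.
scenarios = {"optimistic": -2.0, "baseline": 0.0, "pessimistic": 3.0}  # $B shock

base_a, base_b = 14.0, 16.5          # projected baseline costs, $ billions
levels, diffs = [], []
for name, shock in scenarios.items():
    a = base_a + shock               # shock hits alternative A in full
    b = base_b + shock * 1.05        # and alternative B almost identically
    levels.append(a)
    diffs.append(b - a)
    print(f"{name:11s} A=${a:5.1f}B  B=${b:5.2f}B  B-A=${b - a:.2f}B")

print(f"level of A swings by ${max(levels) - min(levels):.1f}B, "
      f"the difference by only ${max(diffs) - min(diffs):.2f}B")
```

The levels are "equally wrong" under each scenario, so the ranking of the alternatives, and the approximate size of the gap between them, survives a macroeconomic forecast error that would badly distort either cost level on its own.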

However, this argument is less likely to apply for relatively long projection periods, and it does not apply at all when assessing the impact of completely new programs, for which the critical estimate is the level of expected costs and caseloads. Moreover, in order to assess errors that do affect estimates of differences among program alternatives, as well as errors that affect estimates of levels for new programs, one must be able to decompose the sources of errors—which, as noted above, is a difficult task.

Kinds of Model Validation

The technical nature of the validation task warrants a brief definition of key terms and concepts (see also the Appendix to Part I).22 The first concept we need to define is "model." A model is a replicable, objective sequence of computations used for generating estimates of quantities of interest. By replicable, we mean that the sequence of computations generating an estimate can be reproduced by anyone who runs the model. By objective, we mean that, given the input data for the model, the output estimates are not a function of any analyst's opinions or assessments.

22. The discussion in this section benefited greatly from a set of notes prepared by Michael L. Cohen, consultant to the panel.

"Model validation," or, sometimes, "model evaluation," has been used in a variety of ways to describe techniques to assess the strengths and weaknesses of a model and the quality of its estimates. We acknowledge the range of topics that at times have been included under the validation rubric, including assessment of how easy it is to use the model and the completeness of the documentation.23 However, we use the term model validation in a somewhat narrower sense: the processes for measuring the uncertainty or variability in a model's estimates and identifying the sources of that uncertainty.

External Validation

One principal technique for model validation, as we have defined it, is "external validation," or assessment of the validity of a model's estimates compared with measures of reality. For example, externally validating the cost and caseload estimates produced during the legislative debate on the Family Support Act would involve comparing them with the corresponding costs and caseloads obtained from AFDC program administrative records after the act took effect. By definition, one cannot carry out an external validation of a model's estimates when the change is under consideration. However, after-the-fact assessments of external validity can help identify model weaknesses and contribute to measuring the likely uncertainty in successive estimates produced by the same model.

Sometimes it is possible to carry out an external validation simply by comparing a model's estimates with what actually happened. More often, the policy change will not correspond with one particular estimate, or the estimates, as in the case of the Family Support Act, will represent a combination of outputs from several models. Hence, other approaches will be needed.

A common external validation technique is "ex post forecasting." Using this approach (e.g., see Kormendi and Meguire, 1988), one puts oneself in the place of the analyst who, say, 5 years ago was asked to simulate a program alternative to take effect in some future year. One chooses that "future" year to be in the recent past so that measures of what happened, from administrative records or other sources, are available. Correspondingly, one chooses the program alternative to be the actual program rules in effect during the comparison year. The panel, as part of its work, carried out an ex post forecasting evaluation of the TRIM2 microsimulation model, which proved quite informative. (The study, which also involved a sensitivity analysis, compared TRIM2 estimates of AFDC costs and caseloads under 1987 law, developed from a 1983 database, with administrative data for AFDC in 1987; see Chapter 9 and Cohen et al., in Volume II.) Alternatively, in a method called "backcasting," one uses the current model and database to simulate program provisions that were operative at some period in the past and compares the model estimates with administrative data or other measures for that period (e.g., see Hayes, 1982).

23. The General Accounting Office (1979) lists features of models to consider in a global assessment, including but not limited to model validation as we have defined it. GAO suggests looking at the completeness and adequacy of the documentation; the validity of the model's theoretical concepts, data, and computer code; the ease of maintenance and updating; the understandability of the model's outputs; public accessibility to the model and data; portability among computer environments; and cost and time to run the model. We consider these features and others in our review of microsimulation models in Part II.
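The logic of an ex post forecasting exercise can be sketched as follows. This hypothetical Python fragment stands in for a real microsimulation run; the simulate() stub and all figures are invented and do not reproduce the panel's TRIM2 results.

```python
# Sketch of an ex post forecasting check: simulate a past year's program
# rules from an earlier database, then compare the model's output with
# administrative totals for that year. Everything here is a stand-in.
def simulate(rules_year, database_year):
    # Placeholder for a full microsimulation run; a real model would age
    # the database and apply the rules. Returns (caseload, cost in $B).
    return 3.85e6, 17.2

admin_caseload, admin_cost = 3.78e6, 16.4      # "what actually happened"
sim_caseload, sim_cost = simulate(rules_year=1987, database_year=1983)

caseload_err = (sim_caseload - admin_caseload) / admin_caseload
cost_err = (sim_cost - admin_cost) / admin_cost
print(f"caseload error {caseload_err:+.1%}, cost error {cost_err:+.1%}")
```

Repeating such a comparison for several past years, including relatively stable ones, is what turns a single reality check into evidence about the model's likely uncertainty.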

In either backcasting or ex post forecasting, differences between the model results and the measures of what occurred may involve economic or social changes that the model could not have been expected to capture, such as an unanticipated recession. Differences may also be due to chance variation. Hence, it is important to conduct external validation studies for a number of time periods, including those that were relatively stable on key social and economic indicators. It is also important, to the extent feasible, to construct measures of variability both for the model output (see discussion below) and for the measures of what actually occurred, which may themselves contain errors (see Andrews et al., 1987, for exploratory work on this topic).

Internal Validation

To understand the extent and sources of uncertainty in a model's estimates of the effects of a proposed policy change, one needs to conduct not only external validation studies but also direct investigations, or internal validation studies, of the underlying model. Internal validation refers to all of the procedures that are part of conducting an intensive step-by-step analysis of how model components work, including the theory behind the various modules, the data used, the computer programming, and the decisions made by the analysts running the model. All aspects of internal validation are important; in the context of our discussion of the measurement of uncertainty in a model's estimates, however, we focus on internal validation techniques—namely, variance estimation and sensitivity analysis—that contribute to such measurement. Both estimation of the underlying variance of the estimates and analyses of the sensitivity of the results to alternative model specifications yield potentially important information that can become a standard part of the model improvement process.

It is useful to think of the uncertainty or variability—"errors"—in the outputs from a model as resulting from four sources: (1) sampling variability in the input database, which is only one of a family of possible data sets that could have been used; (2) sampling variability in other inputs such as imputations, regression coefficients, and control totals; (3) errors in the database and other inputs; and (4) errors due to model misspecification. Even though conceptually clear, the complete partitioning and full estimation of a model's uncertainty are generally beyond current capabilities. Indeed, in many complicated analytical situations, even the most rudimentary estimates of uncertainty have been intractable until recently. Nonetheless, a combination of approaches can assess portions of the uncertainty and can pinpoint areas of concern—that is, aspects of a model for which uncertainty is likely to have particularly important effects on the results.

For estimates produced by simple models, standard variance estimation techniques are available to assess the variability due to the first two sources of error noted above. For complex models, these techniques cannot be readily applied, but it has recently become possible to use some relatively new variance estimation methods, called "sample reuse" techniques, that harness the power of modern computers.
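A bootstrap calculation is one such sample reuse technique. The Python sketch below estimates the sampling variability of a single model output (a poverty rate computed from toy microdata) by recomputing it on repeated resamples of the input database; a real application would rerun the full model on each resample. All data here are simulated for illustration.

```python
# Minimal "sample reuse" (bootstrap) variance estimate for a model output.
import random

random.seed(12345)
incomes = [random.lognormvariate(10, 0.8) for _ in range(2000)]  # toy microdata
poverty_line = 12000.0

def poverty_rate(sample):
    # Stand-in "model": share of the sample below the poverty line.
    return sum(y < poverty_line for y in sample) / len(sample)

point = poverty_rate(incomes)

# Resample the database with replacement and rerun the calculation each time;
# the spread of the replicate estimates approximates the sampling variability.
reps = [poverty_rate(random.choices(incomes, k=len(incomes))) for _ in range(500)]
mean = sum(reps) / len(reps)
se = (sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
print(f"estimate {point:.3f}, bootstrap standard error {se:.4f}")
```

The appeal of the approach is that it needs no analytic variance formula, only the computing power to rerun the calculation many times, which is why it has become practical only recently for complex models.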

Sensitivity analysis is a useful supplement to formal variance estimation techniques. Sensitivity analysis, carried out by developing and running one or more alternate versions of one or more model components, looks at the impact on the estimates of decisions about the structure or specification of a model. For example, the decision to use a particular equation to estimate program participation in a microsimulation model or to use a particular set of vital rates in a cell-based population projection model is properly investigated as part of a sensitivity analysis. A sensitivity analysis typically will not identify the optimal method for modeling a component, but it will provide a rough idea as to the components that matter for a specific result. Sensitivity analysis is, in simplest terms, a diagnostic tool for ascertaining which parts of an overall model could have the largest impact on results and therefore are the most important to scrutinize for potential errors that could be reduced or eliminated.24
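In code, a minimal sensitivity analysis amounts to rerunning the same calculation with alternate specifications of one component while holding everything else fixed. In this hypothetical Python sketch, three invented specifications of a participation-rate component are swapped into a toy program-cost calculation:

```python
# Sensitivity-analysis sketch: vary one model component (the participation
# rate) across alternate specifications and observe the effect on the
# result. The specifications and figures are invented for illustration.
eligible = 5.0e6            # eligible individuals (hypothetical)
annual_benefit = 4800.0     # average benefit per participant, $ (hypothetical)

participation_specs = {
    "logit fit":       lambda: 0.62,   # stand-ins for genuinely different
    "historical rate": lambda: 0.58,   # ways of modeling take-up
    "full take-up":    lambda: 1.00,
}

results = {name: eligible * f() * annual_benefit / 1e9
           for name, f in participation_specs.items()}
for name, cost in results.items():
    print(f"{name:15s} projected cost ${cost:.2f}B")

spread = max(results.values()) - min(results.values())
print(f"spread across specifications: ${spread:.2f}B")
```

A wide spread flags the participation component as one that matters for this result and therefore deserves the closest scrutiny; a narrow spread would suggest that scarce validation effort is better spent elsewhere in the model.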

In the current state of the art, sensitivity analysis is the only way to obtain rough estimates of the variability in model outputs due to misspecification (the fourth source of error noted above). Sensitivity analysis is also often the best approach to estimate the variability from errors in the input data sets (the third source). For complex models, sensitivity analysis may also represent the most feasible approach at the present time to assess the variability from the second source, that is, the sampling variability in data sources other than the primary database. What one gives up when going from a variance estimation methodology to a sensitivity analysis is that the probabilistic mechanism underlying a sensitivity analysis is not rigorously determined. Thus, construction of confidence intervals—a type of formal "error bound"—to express the uncertainty in the estimates is immensely more difficult. Indeed, in complex models the variability in the estimates is often not understood well enough to construct any reliable confidence intervals.25

24. Another way of learning about deficiencies in a model is to make use of completely different modeling approaches to the entire problem, rather than experimenting with individual components. This form of "global" sensitivity analysis is not effective in operating a feedback loop, but it is effective in providing a rough indication of the level of error in estimates from several models.

Whatever mix of sensitivity analysis and variance estimation is used, the critical point is the need to obtain measures of the uncertainty in the estimates for proposed program changes that are used in the policy debate. We next discuss ways and means of moving model validation from a rare to a regular part of the policy analysis function.

Investment in Model Validation

We believe that the policy process must include consideration of the quality of the information used in reaching decisions, that is, of the level and sources of uncertainty in estimates of the effects of proposed policy alternatives. In order to achieve such consideration, a series of fundamental changes must be made to the routine production of policy analyses.

First of all, it is clear that information on uncertainty in policy analyses will be produced only if the users insist on receiving it. We have noted that decision makers shy away from estimates of uncertainty, particularly of cost and revenue projections, because it is hard to integrate such information into a decision process in which the numbers must add up. Today's severe budget constraints and the perceived necessity by members of Congress to balance changes in expenditures against changes in revenue down to the last dollar reinforce the predilections of policy makers for certainty—or what appears to be certainty—in the numbers.

Our message to decision makers is that they must demand, as a matter of regular practice, information about the level and sources of uncertainty in policy analysis work. It is in both their short-run and their long-run best interests to do so.

Estimates of uncertainty can be very helpful for legislators who are facing immediate decisions on policy issues. First, if there are competing estimates from two different agencies,26 information about the uncertainty in each estimate

25  

Confidence intervals are ranges about an estimate, constructed so that one can say with a specified "coverage" probability, such as 95 or 90 percent, that the confidence interval includes the actual value in the population (or, more precisely, the average value that would be obtained from all possible samples of the population). For example, according to the Census Bureau, the estimate of the number of people below the poverty level in 1988, from the March 1989 Current Population Survey, is 31.9 million, with a 90 percent confidence interval of plus or minus 0.9 million. That is, one can be 90 percent confident that the range of 31.0-32.8 million people includes the true value (Bureau of the Census, 1989a:2; see also the Appendix to Part I).

26. A recent example, with serious policy implications, involves competing estimates of the 1991 federal budget deficit from CBO and OMB (excluding projected costs for the savings and loan bailout). In January 1990, CBO projected the 1991 deficit at $138 billion; OMB projected $101 billion. In September 1990, CBO projected the 1991 deficit at $232 billion; OMB projected $149 billion (Magnuson, 1990).


can help determine which of the estimates is better. As an example, an estimate that the added costs of a proposed change are in the range of $10-11 billion is clearly more useful than an estimate for the same proposal that the costs are in the range of $5-20 billion. Second, if all of the available cost estimates have wide error bounds, such as $5-20 billion or, worse, minus $20 billion to plus $20 billion, decision makers would be well advised to give greater weight to criteria other than overall cost, such as distributional effects or agreement with important societal values, in reaching a conclusion about the merits of a particular proposal. Third, if the available estimates of costs and distributional effects for program alternatives are not reliable enough to distinguish among them (because, for example, the alternatives involve small changes or focus on small population groups), decision makers would be well advised to minimize the effort spent to fine-tune the policy proposal.

Perhaps more compelling, it is also in the long-run interests of decision makers to demand estimates of uncertainty. Decisions made on the basis of erroneous information can have large unintended social costs. The goal of completely certain information is illusory; however, with estimates of uncertainty for the information provided by currently available models and databases, decision makers can target funds for policy analysis agencies to develop better information on which to base their policy choices in the future. In other words, developing uncertainty information can serve an important feedback function that leads, over time, to the development of better models and better policy information.

Recommendation 3-8. We recommend that users of policy projections systematically demand information on the level and sources of uncertainty in policy analysis work.

We recognize the practical difficulties of changing the behavior of decision makers, who have avoided information about uncertainty in the past and who may, despite arguments about the short-term and long-term benefits, remain hesitant to seek such information in the future. Hence, we urge that the heads of policy analysis agencies assume the challenge of working toward the goal of having information on uncertainty available as a matter of course for the estimates their agencies produce. Agency heads can take several actions. They can set and enforce standards requiring validation as part of the policy analysis work of their staffs; they can allocate staff and budget resources to validation; they can support efforts by their staffs to educate the staffs of decision makers about the need for information on the quality of the estimates and how to interpret such information; and they can support their staffs when time constraints and demands for certainty threaten to short-circuit validation efforts.

Recommendation 3-9. We recommend that heads of policy analysis agencies assume responsibility for ensuring, to the extent feasible, that their staffs regularly prepare information about the level and sources of uncertainty in their work. Agency heads should also support efforts of their staffs to accustom decision makers to request and use such information in the policy process.

In some instances, agency staffs perform policy analyses from start to finish; in many other instances, agencies contract out for analytic work. Under either approach, policy analysis agencies must explicitly consider how to develop relevant information about the uncertainty in the results. They must plan to obtain this information at the beginning, when they are under less time pressure.

Policy analysis work, whether conducted in-house or by contractors, should always include some type of validation effort that, at a minimum, develops approximate estimates of uncertainty in the results and the main sources of this uncertainty. In addition, for major analyses that are contracted out, we believe that the agencies should at the same time let separate contracts to independent agencies or firms to conduct thoroughgoing evaluations of the work, including external validation studies and sensitivity analyses. The reason for independent contracts is to ensure objectivity of the evaluation and to minimize the likelihood that the evaluation will be sacrificed to the need for immediate results to feed into the policy debate. We recommend independent in-depth evaluations of this type for major policy analysis work per se—that is, work using models to develop policy impact estimates—and for research and demonstration projects whose results may have subsequent applicability for policy modeling.

Recommendation 3-10. We recommend that policy analysis agencies earmark a portion of the funds for all major analytical efforts for evaluation of the quality of the results. For large-scale, ongoing research and modeling efforts, the agencies should let a separate contract for an independent evaluation.

Information on sources of error obtained from sensitivity analysis, along with the results of external validation, is important for determining the priorities for resources for the improvement of policy analysis tools. The focus of the independent evaluation studies will necessarily be on the feedback process whereby evaluation results give rise to better analysis tools that, in turn, produce better numbers for future policy debates.

At the time of a debate, of course, no comparisons with reality are possible, and the time available for extensive investigation of sources of uncertainty is necessarily limited. Still, information about uncertainty and its sources can and should be provided. Some types of analysis are quite amenable to investigations about the magnitude and source of uncertainty. For example, a simple projection of numbers of program participants that comes from a single regression equation can include information on estimated error variances due to randomness in the estimated parameters of the underlying model. Providing this information is a step in the right direction, although such randomness is only part of the error, and additional information would be necessary to describe the potential errors completely (such as those from misspecification of the underlying model, which may introduce more error than the simple sampling errors that receive most of the attention).
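Such a single-equation projection can be sketched as follows. This is an illustration only: the caseload figures are invented, and the 90 percent interval uses a normal approximation in place of the more exact t quantile.

```python
# A minimal sketch of a single-equation caseload projection: fit a linear
# trend and report a forecast with its standard error due to randomness in
# the estimated parameters plus residual noise. All figures are hypothetical.
import numpy as np

years = np.arange(1980, 1990, dtype=float)
caseloads = np.array([3.1, 3.2, 3.4, 3.6, 3.5, 3.7,
                      3.8, 4.0, 4.1, 4.2])  # millions, invented

# Ordinary least squares: caseload = a + b * year
n = len(years)
b, a = np.polyfit(years, caseloads, 1)
residuals = caseloads - (a + b * years)
s2 = residuals @ residuals / (n - 2)  # residual variance

# Forecast for 1990 and its standard error, from the standard OLS
# prediction-variance formula.
x0 = 1990.0
sxx = ((years - years.mean()) ** 2).sum()
forecast = a + b * x0
se = np.sqrt(s2 * (1 + 1 / n + (x0 - years.mean()) ** 2 / sxx))

# Approximate 90 percent interval (normal approximation; a t quantile
# would be more exact for a sample this small).
low, high = forecast - 1.645 * se, forecast + 1.645 * se
print(f"1990 projection: {forecast:.2f} million ({low:.2f}-{high:.2f})")
```

Even this simple interval captures only parameter and residual randomness; misspecification of the trend itself, as the text notes, may matter more.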

Other analyses are less amenable to such quantitative estimates. At one extreme, rough back-of-the-envelope estimates almost defy formal error analysis because they rely so heavily on an analyst's judgment. At the other extreme, estimates from large, complex models are difficult to assess because of the sheer number of inputs. Nevertheless, in the former case it should be possible and routine practice for an analyst to identify major potential uncertainties in his or her estimates, even if they cannot be measured in quantitative form; in the latter case, recently developed computer-intensive techniques are available to develop error bounds for projections from complex models due to randomness in one or more of the inputs (see above; Chapter 9; and Cohen, Chapter 6 in Volume II). Sensitivity analysis techniques, in which data inputs and model components are systematically varied in a series of model runs, can also be used to assess the magnitude and major sources of variation.
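The sensitivity analysis technique just described can be sketched as follows; the toy cost model, its baseline inputs, and the participation-rate range are all hypothetical, invented for illustration.

```python
# A hedged sketch of sensitivity analysis: systematically vary one input of
# a toy cost model across a plausible range and observe the output spread.

def program_cost(eligible_millions, participation_rate, benefit_per_person):
    """Toy model: annual program cost in billions of dollars."""
    return eligible_millions * participation_rate * benefit_per_person / 1000.0

baseline = dict(eligible_millions=10.0, participation_rate=0.65,
                benefit_per_person=4000.0)

# Vary the participation rate, often a major source of uncertainty, holding
# the other inputs at their baseline values.
results = {}
for rate in (0.55, 0.60, 0.65, 0.70, 0.75):
    inputs = dict(baseline, participation_rate=rate)
    results[rate] = program_cost(**inputs)

spread = max(results.values()) - min(results.values())
for rate, cost in results.items():
    print(f"participation {rate:.2f}: ${cost:.1f} billion")
print(f"spread due to participation-rate uncertainty: ${spread:.1f} billion")
```

In practice each major data input and model component would be varied in turn, so that the largest contributors to output variation can be identified.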

We acknowledge the difficulties in developing error estimates, given the complexities of the real world and the policy alternatives that analysts are trying to model, but we believe it is possible to make significant progress with allocation of sufficient resources and a strong commitment to the task. We are also optimistic about the prospects that technological developments in computing will make it possible to conduct validation studies of even very complex models with relative ease.

We note that some policy deliberations occur at regular times and are supported by a consistent set of analyses, which would enable error studies to be carried out on a continuing basis. For example, at the beginning of each budget season, both the Office of Management and Budget and the Congressional Budget Office provide estimates of budgetary aggregates. Although they come in various forms, each contains budget (deficit) projections under the assumption of no policy changes. It is then possible, as CBO does each year in its August update of the budget analysis, to consider how changes in economic conditions and changes in actual policies affect the budget projection (see, e.g., Congressional Budget Office, 1989a:38). This analysis provides a model for how to approach the task, but it does not go far enough because it is not revisited after the actual data become available.

Routine estimates are made in a wide variety of program areas. It is important, whenever possible, to match estimates with actual outcomes. Clearly, such an activity is most valuable when a reasonably stable projection method is used, because in such cases the time series of evidence can be used to estimate errors and decompose them into various sources. However, this technique is not restricted to time-series approaches. Some program estimates are made on a state-by-state basis, and observed state variations provide another way of inferring the importance of uncertainty in policy information.
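The matching of estimates with actual outcomes can be sketched as follows; the projected and actual deficit figures here are invented for illustration and are not the CBO or OMB series.

```python
# A sketch of routine error tracking: compare a series of hypothetical
# annual projections against the outcomes later observed, then summarize
# the average signed error (bias) and the typical error magnitude.
import math

projected = [138.0, 155.0, 170.0, 180.0]  # billions, made at budget time
actual    = [232.0, 160.0, 150.0, 200.0]  # billions, observed later

errors = [a - p for p, a in zip(projected, actual)]
mean_error = sum(errors) / len(errors)                      # bias
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # typical size

print(f"signed errors: {errors}")
print(f"mean error (bias): {mean_error:.1f} billion")
print(f"root-mean-square error: {rmse:.1f} billion")
```

Accumulated over many budget cycles, such a record lets analysts decompose errors into, say, a component due to economic assumptions and a component due to the projection method itself.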

Recommendation 3-11. We recommend that policy analysis agencies routinely provide periodic error analyses of ongoing work.

DOCUMENTATION AND COMMUNICATION OF THE RESULTS OF POLICY ANALYSIS

Documentation and Archiving as Aids to Validation

We turn next to the critical role of good documentation practices for the proper use of models that provide estimates to the policy debate and for evaluation of the quality of their outputs. From the perspective of applying policy analysis tools, complete and intelligible documentation is essential for their appropriate and efficient use, whether the model is based on microsimulation, macroeconomic modeling, multiple regression, or some other technique. The larger and more complex the model, the harder the task of preparing adequate documentation, but, at the same time, the more necessary the task becomes.27 Such models can quickly take on the aspect of "black boxes," which can be fully understood only by a handful of experienced analysts who have invested the time to master the intricacies of their operation. Agencies can become overly dependent on these experienced analysts to use complex models. Moreover, inadvertent errors due to misunderstanding the interactions of elements of complex models may occur even on the part of highly experienced users.

The quality of any validation effort is highly dependent on the quality of the documentation, that is, the documentation of the particular analysis that was performed, which is needed in addition to the documentation of the policy analysis tool itself. Documentation of policy analysis exercises (such as assessing the cost implications of a proposed program change) should include information about the specifications for the analysis (e.g., the particulars of the policy alternatives modeled), data inputs, key assumptions, changes that were made to the basic model, analyst inputs, and other information needed to understand what was done and to place the results in context.

The necessity for such documentation underscores the importance of making the evaluation process a regular and expected part of policy analysis work. The best time to document an analysis is during the process of performing the

27. David and Robbin have written extensively on the necessity of providing "metadata" for complex databases and models, that is, information that helps users work with them appropriately. They have outlined design concepts for information management systems to facilitate the production of complete documentation and the generation of audit trails that keep track of users' applications on an automated basis (see David, 1991; David and Robbin, 1989, 1990).


work; if the documentation effort is deferred for very long, the result is likely to be the loss of key information.

We have noted that it is often difficult to provide information with which to judge the usefulness of a given analysis. In particular, complete information on uncertainties in the estimates may be unavailable. However, when documentation of the methods used is available to other analysts, an internal validity check is possible. For instance, other analysts can ascertain whether reasonable scientific methods were followed or whether the best current information was used. Although the scope and scale of the analysis must be considered in deciding how much documentation is needed (it is not likely to be feasible or sensible to document each and every policy analysis result), the provision of adequate documentation should be a broad objective and requirement of policy analysis work.

Recommendation 3-12. We recommend that policy analysis agencies allocate sufficient resources for complete and understandable documentation of policy analysis tools. We also recommend that, as a matter of standard practice, they require complete documentation of the methodology and procedures used in major policy analyses.

The task of validating policy analysis estimates and building a cumulative body of knowledge about the merits of particular analytic approaches and tools also depends on the continued availability of previous versions of models and databases that were used for analyses, as well as the results of those analyses. When a legislative change has enacted one of a set of proposed policies, validation studies that look at what actually happened need access to the estimates that were made at the time of the debate. Such studies also need access to the model and data that were used, in order to determine the sources of errors in the estimates. In particular, such studies need to separate errors due to the model per se from errors due to conditional assumptions about exogenous factors, such as the state of the economy, that did not turn out as assumed. The availability of complete documentation will make it possible, at least occasionally, to rerun the model with the erroneous assumptions removed.

When none of a proposed set of policies has been implemented but some other legislative change has occurred, validation studies can simulate the estimates that would have been made if the analysts at the time had been asked to simulate what was enacted. Such studies can use a current model, but they need access to the original database. Hence, it is important that at least large-scale analytical efforts based on underlying quantitative models be archived in a form that allows them to be used in the future for validation purposes.

Recommendation 3-13. We recommend that policy analysis agencies require that major analytical efforts be subject to archiving so that the models, databases, and outputs are available for future analytical use.

Communicating Validation Results to Decision Makers

Developing information on sources of uncertainty represents a formidable undertaking and is only one part of the task of providing information on uncertainty to the policy debate. The other equally important part, with substantial problems of its own, is presenting the information to decision makers in a manner that they can understand and use. There is very little experience to build on in this area: to our knowledge, presentations of policy estimates are rarely if ever accompanied by formal error bounds; at most, there may be oral or written statements identifying those estimates that the analysts believe to be most problematic.28

A self-fulfilling prophecy may be seen at work here. Policy analysts are convinced that decision makers will not accept, let alone welcome, information about uncertainty. Hence, such information is not provided, and decision makers are not educated about the need for it.

The prospects for changing this situation are not entirely bleak. In the media, it is now standard practice to provide error bounds due to sampling error for estimates from public opinion polls. Articles based on government statistical reports sometimes cite error bounds as well. To make the job easier for the media and other users, the Census Bureau recently instituted a practice in its reports of including error bounds (90 percent confidence intervals) for each estimate mentioned in the text, in addition to appending technical information about errors and how to calculate them at the end of the report (see, e.g., Bureau of the Census, 1989a). Such error reports generally pertain only to sampling error and not other, often more important, sources of uncertainty, but they represent a step forward.
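The arithmetic behind such a reported interval can be sketched with the poverty figures cited in footnote 25 (31.9 million people, plus or minus 0.9 million, at 90 percent confidence); the translation of the margin into an implied standard error assumes a normal approximation and is our illustration, not the Census Bureau's calculation.

```python
# Illustration of confidence-interval arithmetic for a published estimate:
# a point estimate of 31.9 million with a 90 percent margin of 0.9 million.

def confidence_interval(estimate, margin):
    """Return the (lower, upper) bounds of estimate +/- margin."""
    return estimate - margin, estimate + margin

# Point estimate and margin of error, in millions of people.
lower, upper = confidence_interval(31.9, 0.9)
print(f"90% confidence interval: {lower:.1f}-{upper:.1f} million")

# Implied standard error, assuming a normal approximation in which a
# 90 percent interval spans about 1.645 standard errors on each side.
standard_error = 0.9 / 1.645
print(f"implied standard error: {standard_error:.2f} million")
```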

The experience with tax reform in Wisconsin in the late 1970s also provides an encouraging precedent for accompanying estimates of the effects of alternative proposals with error bounds. The confidence intervals provided to the legislators (shown graphically, in most instances) were not resisted or disdained. Rather, they served the useful purpose of eliminating discussion of numbers that could not be made precise because they pertained to rare populations or events (Wisconsin Department of Revenue, 1979).

28. This statement applies to estimates that are delivered during the course of the policy debate. Subsequently, agencies often prepare more detailed descriptions that include attempts to identify important assumptions and sources of error: see, for example, the write-up of the estimates for key provisions of the Family Support Act in a study released by CBO in January 1989 (Congressional Budget Office, 1989d). However, even these analyses rarely include estimates of error bounds or the results of formal sensitivity analyses.


Recommendation 3-14. We recommend that policy analysis agencies include information about estimated uncertainty and the sources of this uncertainty as a matter of course in presentations of results to decision makers. The agencies should experiment with modes of presentation to facilitate understanding and acceptance of information about uncertainty on the part of decision makers.

The question of precisely how to communicate uncertainty to users of various degrees of technical sophistication, particularly how best to express uncertainty in terms of a single measure such as a confidence interval, is a difficult one for which we offer no specific recommendations. Instead, we present below several approaches that might be adopted in different situations, along with their advantages and disadvantages (see also the Appendix to Part I). As measures of uncertainty—including confidence intervals—are provided to various audiences, we believe that the most effective methods will become apparent over time. Of course, whatever the measures of uncertainty that are used, they should always accompany—and not replace—the point estimates themselves.

  • Policy analysts can provide formal error bounds (i.e., confidence intervals) for their estimates that represent the variability due to the sampling variance of the input data. In technical reports they can also include information about uncertainty due to model misspecification and other factors. This approach gives an overall impression of the uncertainty of estimates from a model. The major disadvantage of this approach is that the variability with the greatest visibility, namely, sampling variance, is likely to be the least important source of error.

  • Policy analysts can provide error bounds for their estimates that represent total uncertainty by presenting the widest range obtained through the variety of techniques used in the evaluation, including sensitivity analysis and variance estimation. This approach strongly—perhaps too strongly—communicates the total uncertainty to model users. Its disadvantage is that constructing the broadest range is dependent on the ability of the analyst to perform sensitivity analyses of all important components of the model, which can be difficult to do. More important, it will generally not be possible to state the probability with which such a range includes the actual value in the population, in the way that one can do with a 95 or 90 percent confidence interval.

  • Policy analysts can provide error bounds for their estimates based on combining the results of previous external validation studies of similar uses of the model. In a static modeling environment, this approach is a highly appropriate method for conveying the variability in a model's estimates. Its major disadvantage is that the modeling environment is likely to be dynamic in one or more respects, so that the current application of the model may not resemble the applications included in the external validation studies. For example, important elements of the database may have changed, important elements of the model itself may have been rewritten, and important aspects of the policy question may have altered.
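The first of these approaches, error bounds reflecting sampling variance, can be sketched with the bootstrap, one instance of the computer-intensive techniques mentioned earlier. The microdata and the toy model below are hypothetical.

```python
# A minimal bootstrap sketch: resample the input microdata with replacement
# and re-run a toy model many times, so that the spread of the replicated
# estimates yields error bounds due to sampling variability alone.
import random

random.seed(12345)

# Hypothetical microdata: per-household annual benefit amounts, in dollars.
sample = [random.gauss(3000, 800) for _ in range(500)]

def estimate_cost(households):
    """Toy model output: mean benefit per household."""
    return sum(households) / len(households)

point = estimate_cost(sample)

# Re-estimate on many resamples drawn with replacement from the input data.
replicates = []
for _ in range(1000):
    resample = random.choices(sample, k=len(sample))
    replicates.append(estimate_cost(resample))

replicates.sort()
low, high = replicates[50], replicates[949]  # approximate 90 percent bounds
print(f"point estimate: ${point:.0f}; 90% bounds: ${low:.0f}-${high:.0f}")
```

As the first bullet cautions, bounds of this kind reflect only sampling variance; uncertainty from model misspecification requires the other approaches.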

There are other issues that need to be addressed in considering how best to express the uncertainty in policy analysis estimates. First, there are few incentives for analysts to pursue the difficult problems involved in developing appropriate measures of uncertainty for their estimates. Models that have accurately estimated confidence intervals are likely to suffer in comparison with models that have confidence intervals that are wrongly estimated to be too narrow. Second, it will be difficult for analysts to communicate to an unsophisticated audience the extent to which the commonly available confidence intervals are conditional—that is, predicated on the assumption that sampling variability is the only source of error in the estimates—and hence the implications for how the estimates should be interpreted in the policy debate.

Yet even with all these difficulties, we remain convinced that the objective of conveying information about the uncertainty in policy analysis results through some type of error bound is critically important. From the perspective of improving models through feedback, the availability of confidence intervals can be very helpful in distinguishing statistically significant differences that may have to be addressed from nonsignificant differences.

From the perspective of the policy process itself, there are several important uses of confidence intervals or, more generally, statements of uncertainty. First, as noted above, decision makers can readily use measures of uncertainty to assess the quality of two competing estimates. Also, information about uncertainty can help decision makers decide how much weight to give to the estimates of costs and of winners and losers that are produced by policy analysis tools vis-à-vis other important considerations for the policy debate.

Finally, decision makers can make use of measures of uncertainty to help judge the utility of allocating additional resources to the improvement of policy analysis models and databases. Today, many policy makers recognize the growing deficiencies in data series and modeling tools that support the policy analysis function, but they are not able to relate those problems to the quality of the resulting estimates of costs and distributional effects that are of concern in the policy debate. Having measures of uncertainty available for policy analysis estimates would enable decision makers to target issues with a high degree of policy importance and a high degree of uncertainty for concentrated investment of resources.

In summary, regular, systematic evaluation of the tools used for policy analysis is critical for improving the quality of their estimates. And decision makers need information about the quality of the estimates to be able to weigh them appropriately in making the critical choices that shape the nation's public policies.

Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 52
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 53
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 54
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 55
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 56
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 57
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 58
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 59
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 60
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.
×
Page 61
Suggested Citation:"3 Improving the Tools and Uses of Policy Analysis." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume I, Review and Recommendations. Washington, DC: The National Academies Press. doi: 10.17226/1835.