A SCENE IN WASHINGTON, D.C.
It is mid-1988. In the quarters of the Congressional Budget Office (CBO) at the foot of Capitol Hill, exhausted analysts are working overtime to prepare final estimates of the budgetary and programmatic effects of what will become the Family Support Act (FSA) of 1988 (signed by President Reagan a few months later). This piece of legislation includes some of the most far-reaching changes in more than two decades to the Aid to Families with Dependent Children (AFDC) program—a federal-state program that is an important component of the nation's "safety net" for the poor. The act both extends program coverage and provides incentives for families to reduce their reliance on welfare support and be more responsible for their own economic well-being. The act represents a policy compromise among the goals of providing more adequate income support to poor children, strengthening family ties, and strengthening attachments to the labor force. It also represents a budgetary compromise: under the guidelines agreed to by both political parties at the outset, the 5-year projected cost of the legislation cannot exceed $3.0-$3.5 billion.
The process of developing the final form of the FSA has spanned a period of almost 2 years, beginning in late 1986. During that time, CBO analysts have prepared cost estimates for half a dozen major bills, each of which contains as many as 50 separate provisions. They have also prepared estimates of which population groups would be affected by the various bills and whether those
groups would gain or lose under each proposal, in comparison with alternative proposals and with current law.
To prepare their estimates for the FSA, the CBO analysts have called on a wide variety of data sources and data processing tools. For some provisions, they have used a complex computer model, one of a class of microsimulation models that process large, nationally representative samples of families as if they were applying to the local welfare office for benefits. For each family in the sample, the microsimulation model determines which family members would be included in the assistance unit, their eligibility for benefits under the current AFDC program rules, whether they would choose to participate, and the amount of benefits they would receive. The model then repeats the process using the proposed new rules. Tabulations of program costs and caseloads under the current and alternative proposed programs show the impact of reform.
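The family-by-family simulation loop described above can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the family records, benefit rules, and parameters are invented for the example, and real models of the TRIM2 class apply far more detailed program rules, state variations, and participation equations.

```python
# Hypothetical sketch of a microsimulation pass over a weighted family sample.
# All records, rules, and parameters are illustrative inventions, not the
# actual AFDC rules or any real survey file.

def afdc_benefit(family, rules):
    """Monthly benefit for one family under a given rule set (simplified)."""
    unit_size = family["children"] + family["adults"]
    need_std = rules["need_per_person"] * unit_size
    countable_income = max(0.0, family["income"] - rules["income_disregard"])
    if countable_income >= need_std:       # fails the eligibility test
        return 0.0
    if not family["participates"]:         # eligible but does not enroll
        return 0.0
    return min(need_std - countable_income, rules["max_benefit"])

def simulate(sample, rules):
    """Weighted program cost and caseload over the whole sample."""
    cost = caseload = 0.0
    for fam in sample:
        b = afdc_benefit(fam, rules)
        if b > 0:
            cost += b * fam["weight"]      # weight scales the case to the nation
            caseload += fam["weight"]
    return cost, caseload

# Run the same sample under current and proposed rules; the difference
# between the two tabulations is the estimated impact of the reform.
sample = [
    {"children": 2, "adults": 1, "income": 300.0, "participates": True,  "weight": 1500.0},
    {"children": 1, "adults": 1, "income": 900.0, "participates": True,  "weight": 1200.0},
    {"children": 3, "adults": 2, "income": 150.0, "participates": False, "weight":  800.0},
]
current  = {"need_per_person": 150.0, "income_disregard": 90.0,  "max_benefit": 500.0}
proposed = {"need_per_person": 175.0, "income_disregard": 120.0, "max_benefit": 600.0}

cost0, cases0 = simulate(sample, current)
cost1, cases1 = simulate(sample, proposed)
print(f"monthly cost change: {cost1 - cost0:,.0f}; caseload change: {cases1 - cases0:,.0f}")
```

The essential design point is that the same sample is passed through both rule sets, so the comparison isolates the effect of the rule change from the composition of the sample.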
For provisions that would affect only families already receiving AFDC, the analysts have used a similar but less complicated model that processes administrative data from AFDC case records. For still other provisions, existing models were not designed to handle the task or required too much time and resources to modify. The analysts have developed their own ad hoc computations that use available data and research findings—of varying quality and relevance—from a variety of sources. In these instances, they have often used personal computer spreadsheet programs to develop their calculations so that they can more readily handle interactions among various program provisions, incorporate state differences when appropriate, and so on.
In accordance with the requirements of the 1974 Budget Act, the cost estimates must be projected for 5 years, beginning at the time the reform is anticipated to take effect. To make such projections, the CBO analysts have used a variety of data, including forecasts of employment changes and other economic factors developed within CBO with input from forecasts of large macroeconomic models of the national economy. Finally, the analysts have used spreadsheet and word processing programs to pull all of the estimates together into a package for presentation to Congress and to keep track of the many such packages that have been generated during the course of the legislative debate.
During the same period, analysts two buildings away in the Office of the Assistant Secretary for Planning and Evaluation (ASPE) of the U.S. Department of Health and Human Services (HHS) have been preparing their own estimates of the various FSA welfare reform proposals, using similar data sources and processing tools but with a somewhat different mix. The ASPE analysts have relied more heavily on a large microsimulation model for their estimates than have the analysts at CBO. Analysts at other federal agencies, including the Food and Nutrition Service of the U.S. Department of Agriculture, the Family Support Administration in HHS, and the Health Care Financing Administration (HCFA) in HHS, have also been involved in preparing estimates of the effects
of the proposed changes on the food stamp, AFDC, child support enforcement, Medicaid, and other programs.
At the same time in mid-1988, other groups of analysts—principally in the U.S. Department of the Treasury and the congressional Joint Committee on Taxation, but also in CBO and ASPE—are hard at work estimating the impact on revenues and tax burdens facing different income groups from proposed changes in the nation's tax laws. Their pace is somewhat less frantic than that of the welfare program analysts because 1988 is a period of relative calm following the frenzy of activity that culminated in the Tax Reform Act of 1986. Although the specific models and databases differ, analysts involved in the estimates for tax legislation use many of the same kinds of tools as the analysts looking at welfare program changes, including microsimulation models operating on administrative and survey data, macroeconomic models, and models designed to estimate "second-round" effects, that is, the effects after the economy has adjusted to the legislative changes.
At the same time, still other analysts working on retirement policy, principally in the Social Security Administration, are assessing the impact of current law and proposed changes on social security revenues and benefits. Because of their need to produce frequently updated long-range projections for 30 years or more, social security analysts have typically relied on "cell-based" models. These models, which estimate program effects for specific population groups (e.g., retired men age 65) instead of individual assistance units, provide less detail than microsimulation models and are typically less complex and less expensive to maintain and run. But social security analysts have also used microsimulation models, both "static" models that use samples of current households to represent the population in future years (through statistical adjustments to the survey weights) and "dynamic" models that represent the future by "growing" individual sample cases forward in time.
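The "static" aging just described, adjusting survey weights so the sample represents a future population, amounts to a simple reweighting step. The sketch below is a hypothetical illustration: the age groups, sample records, and projected population targets are invented for the example.

```python
# Hypothetical sketch of static aging: scale each record's survey weight so
# that every age group's weighted total matches an external projection for a
# future year. The records and targets are illustrative, not real data.

sample = [
    {"age_group": "20-64", "weight": 100.0},
    {"age_group": "20-64", "weight": 120.0},
    {"age_group": "65+",   "weight":  80.0},
]

# Current weighted totals by age group
totals = {}
for rec in sample:
    totals[rec["age_group"]] = totals.get(rec["age_group"], 0.0) + rec["weight"]

# Projected totals for the target year (an aging population in this example)
targets = {"20-64": 230.0, "65+": 120.0}

# Scale each record's weight by its group's ratio of target to current total
for rec in sample:
    rec["weight"] *= targets[rec["age_group"]] / totals[rec["age_group"]]
```

After reweighting, the same sample cases stand in for the future population, and program rules can be simulated against them; a dynamic model would instead age, marry, retire, and otherwise transition each case forward year by year.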
Separate groups of analysts in CBO, HCFA, and other agencies are also working hard to assess the cost and social effects of changes in the nation's programs to provide and pay for health care. Health care policy in the United States is of immense and growing complexity: policy issues run the gamut from how to provide health insurance coverage to the working poor and long-term care benefits to the elderly to how to alter the behavior of hospitals and physicians to achieve the most cost-effective medical care. Correspondingly, the world of health care policy analysis is highly fragmented. To estimate the impact of health care policy changes, most individual analysts rely on their own data extracts and special-purpose computer models (frequently, cell based) to address very specific questions affecting particular segments of the health care system.
The results of all of these activities are numbers—numbers of dollars and numbers of participants—that are estimates and projections of what will happen if new programs are enacted or existing programs are maintained, changed,
or eliminated. And on the basis of these numbers, policy makers throughout Washington are making decisions about the government's role in social policy.
THE TOOLS OF POLICY ANALYSIS
Twenty-five years ago, no one would have anticipated the major role that quantitative information about the effects of alternative proposals now plays in shaping legislation. In the early 1960s, computer technology was in its infancy, data sources were limited, and modeling techniques were primitive. Although analysts in executive branch agencies (there was no CBO until 1975) often made estimates of the impact of policy changes, they were severely limited in the number of different estimates they could prepare, the detail they could provide about the effects on specific population groups, and the time frame in which they could respond to requests. The quality of these "back-of-the-envelope" estimates rested heavily on the experience and judgment of individual analysts, and the estimates themselves played no more than a secondary role in the legislative process.
Today, whatever the policy issue, "the numbers" play a prominent role. Indeed, in the Washington of the early 1990s, neither top administration officials nor members of Congress will move very far to develop legislation in the absence of detailed estimates of the cost and other effects of the proposed changes. Top officials will not necessarily know specifically how the estimates were developed, but they will know that the estimates rely heavily on complex computerized techniques. They treat the estimates not only as informative but often, in the case of costs, as binding.
The cost estimate for mandating a minimum benefit standard for AFDC in all states—produced largely for CBO and ASPE by a microsimulation model called TRIM2 (Transfer Income Model 2)—was one of the factors that led Congress to jettison this provision early in the development of the Family Support Act, despite strong support from many representatives. The reason: the estimated cost would have far exceeded the agreed-upon dollar amount.
Three times during the hectic days of drafting the final version of the 1986 Tax Reform Act, the staff of the Joint Committee on Taxation determined that problems in their data or specifications had caused the model to overestimate revenues by $17 billion. Each time, although despairing of success, the members shepherding the legislation felt compelled—and ultimately were able—to find ways to make up the revenue shortfall so that the net impact compared with current law would be zero (Birnbaum and Murray, 1987). Senators and Representatives also paid close attention to the estimates of the impact of proposed tax reforms on different income groups in the population. There was strong support for cutting top tax rates, but not in a manner that would injure the middle class or the poor. The final legislation raised corporate taxes to make it possible to reduce personal income tax rates.
These examples are by no means unique. Formal computer modeling techniques implemented by teams of analysts and programmers today contribute importantly to the policy analysis function—that is, the production of information evaluating the effects of alternative proposals for legislative change. Yet, despite extensive use over the past two decades, most policy analysis tools have rarely been the focus of any explicit evaluation of their utility or effectiveness. Recently, however, policy analysts at HHS, concerned about the increasing pace and new directions of legislative activity in many areas, determined that it was imperative to take stock of the available modeling tools and databases and to evaluate whether they were up to the job.
In the spring of 1988, the Office of the Assistant Secretary for Planning and Evaluation in HHS and the Food and Nutrition Service (FNS) in the Department of Agriculture asked the Committee on National Statistics to convene a panel. The two agencies asked that the panel be charged to evaluate the large microsimulation models that have been heavily used for estimating the costs and effects on population subgroups of proposed changes in social welfare programs. These models had not been subjected to a major evaluation since an initial study by the General Accounting Office in 1977.
The need for an evaluation of policy analysis tools is evident from the perspectives of both demand and supply. The demand for social welfare policy analysis, after a relatively slack period in the early 1980s, has gained strength in recent years and suggests a boom in the 1990s. One source fueling the demand is the recognition that changes in the U.S. population, economy, and society are generating problems that the federal government cannot ignore (although the solutions need not always involve direct federal action). As just one example, the aging of the population is making it imperative to assess the potentially staggering costs of providing health care and retirement support for the elderly and to determine cost-effective approaches for meeting these needs. Similarly, the huge growth in the labor force participation of women, along with other factors, has generated renewed interest in legislative initiatives for child care, through both tax credits to parents and funding for day care services.
The large federal budget deficits, resulting from the tax cuts of the early 1980s and growth in government spending throughout the decade, are another strong and continuing source of demand for policy analysis. The deficits force Congress to engage in often desperate attempts to find more revenue or produce more cost savings by fine-tuning tax provisions and program regulations. The Joint Committee on Taxation, which produces revenue estimates for every proposed change to the tax code, received 1,290 requests for estimates in 1989; in comparison, it had received only 150 requests yearly in the early 1980s (Haas, 1990).
Yet just as the demand for social welfare policy analysis is on the rise, the supply of effective modeling tools and databases has been impaired by constrained resources over the past decade for modeling, data collection, and
socioeconomic research. Funding support in the 1980s (in real dollar terms) declined—drastically in some cases—or at best held steady for all three elements of the policy analysis infrastructure. Both Congress and the executive branch have acknowledged the deterioration in the nation's statistical and research knowledge base and the need for retooling. An evaluation of the tools that are widely used for policy analysis is timely to help chart directions for future investment to improve the quality and effectiveness of the information on which policy makers rely.
THE PANEL STUDY
The panel formed by the Committee on National Statistics in late 1988 was asked to evaluate microsimulation models as a tool for policy analysis and to make recommendations about the role for microsimulation modeling in addressing the policy analysis needs of the 1990s. The charge was to focus on the modeling requirements of "social welfare programs," specifically, the programs within the purview of ASPE and FNS. With this mandate, we could and did seek to make our task more manageable by excluding a wide array of policy domains—defense, education, energy, the environment, housing, transportation, and the work force—even though many issues in these areas are also amenable to similar formal modeling and analysis. The remaining policy areas that fell within our scope—support for the low-income population, support for retirees, health care benefits, and taxation (which intersects at many points with social welfare policy)—were sufficiently broad to strain our capacity to carry out a useful study.
In considering the utility and cost-effectiveness of microsimulation modeling, as a particular class of techniques for analyzing social welfare policy issues, we found it necessary to look more globally at the policy analysis process. Microsimulation models bring unique characteristics to the policy analysis effort; they also share common aspects with other classes of models. More important, it appeared that some of the major problems confronting microsimulation models today plague other kinds of policy analysis tools as well.
Part I of our report considers the role of information in the policy process broadly and presents recommendations in several areas that we believe are important to address, whether the analysis tool is microsimulation or some other approach. Part II presents recommendations specific to microsimulation modeling as a tool for policy analysis. A companion volume of technical papers provides additional information on topics related to microsimulation modeling.
In our assessment, we examine both the inputs and outputs of policy analysis tools and the structure and design of the models themselves. On the input side, the quality and scope of the available data and research about socioeconomic behavior determine the scope, amount of detail, and quality of the
estimates that models can provide and affect their cost structure. The capabilities and efficiency of a third input—the technological means for computation—importantly affect the cost and timeliness of model outputs as well as the accessibility of the model to policy analysts and researchers.
On the output side, we are most concerned with the added information that is required for model estimates to be useful in the policy process. The estimates need to be evaluated in terms of their likely variability and sensitivity to key assumptions. Such validation is important both for purposes of informing decision makers about the utility and quality of model estimates and for purposes of improving the models themselves. The models and their inputs and outputs need to be appropriately described, or documented, for future use. Finally, model estimates, together with information about their quality, need to be communicated to the decision makers in the executive branch and Congress in a form that they can understand and use. How well the validation, documentation, and communication functions are performed has profound implications for the quality and cost-effectiveness of the information produced by policy analysis and thereby for the legislative process itself.
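The sensitivity analysis called for above can take a very simple form: re-running an estimate while one uncertain assumption is varied over a plausible range, so that decision makers see a band rather than a single number. The sketch below is a hypothetical illustration; the participation rates, eligible-unit count, and average benefit are invented for the example.

```python
# Hypothetical sensitivity check: recompute a simple annual cost estimate
# while varying one uncertain assumption (the participation rate).
# All numbers are illustrative inventions, not actual program figures.

def estimate_cost(participation_rate, eligible_units=3.5e6, avg_benefit=4200.0):
    """Annual cost = eligible units x participation rate x average annual benefit."""
    return eligible_units * participation_rate * avg_benefit

# Low, base, and high participation assumptions bound the estimate
low, base, high = (estimate_cost(r) for r in (0.60, 0.70, 0.80))
print(f"cost range: ${low/1e9:.1f}B - ${high/1e9:.1f}B (base ${base/1e9:.1f}B)")
```

Reporting the full range alongside the point estimate is one concrete way to inform decision makers about the variability of model output.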
We conclude that the policy analysis world needs a "second revolution." The "first revolution" of the past two decades institutionalized the use of detailed estimates of cost and population effects of alternative proposals as part of the legislative process, and contributed to the development and widespread application of large computerized models as estimation tools. The second revolution requires significant investments in data, research knowledge, and computing to improve the quality of these models and the estimates they produce.
Even more important, the second revolution requires a commitment to model validation, with implications for improved communication and documentation as well. The validation function has, for many reasons, been the stepchild of the policy analysis process—chronically short on resources and attention. Indeed, the absence of a validation literature severely constrained the ability of the panel to make definitive recommendations for the microsimulation models that we were charged to examine in detail. Our strongest recommendation to policy analysis agencies, for these and other kinds of models, is to invest the needed resources to make validation an integral part of the policy estimation process. Without systematic and rigorous evaluation of models and their inputs and outputs, no one can know their quality today or make informed choices about how best to allocate scarce investment resources to improve their quality and usefulness for tomorrow.