Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling, Volume I - Review and Recommendations

Summary

Since the inception of the U.S. federal system in 1789, decision makers in the executive and legislative branches have sought information to help make choices among alternative public policies. However, throughout most of the nation's history, the supply of policy information has been limited and the demand for it sporadic and ad hoc in nature. Beginning in the 1960s, quantum improvements in data sources, socioeconomic research, and computing technology made it possible to supply information of much greater depth and breadth to the policy process. In turn, the activist posture of the federal government during that period both stimulated the production of policy research and analysis and drew on its results. At one end of the process, policy research helped identify problems and move them onto the federal agenda; at the other end, it contributed to an understanding of the successes and failures of enacted programs. At the middle stage of the process, in which legislative initiatives are debated, the role of information about the costs and benefits of alternative proposals became institutionalized. Today, the policy community in Washington takes for granted that neither the administration nor Congress will consider legislation to alter any of the nation's expenditure programs or the tax code without looking closely at "the numbers." Often, these numbers are the product of team efforts to apply formal computerized modeling techniques and large-scale databases to the task of estimating the impact of alternative policies. The kinds of formal models that are used for policy analysis, defined as the production of estimates of the budgetary and population impacts of proposed program changes, vary
widely. They include large-scale macroeconomic models, single-equation time series models, cell-based models of population groups, econometric models of individual behavior, and large-scale microsimulation models (and, of course, these approaches are frequently supplemented, or sometimes supplanted, by a range of less formal means of developing policy information). Despite the widespread use of formal models to provide information to the legislative debate, neither the policy analysis tools employed nor the estimates they produce have been subject to much explicit evaluation of their utility or accuracy. Two years ago, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) in the U.S. Department of Health and Human Services and the Food and Nutrition Service (FNS) in the U.S. Department of Agriculture asked the Committee on National Statistics at the National Research Council to convene a panel of experts. They asked that the panel evaluate microsimulation-based policy models, such as TRIM2 (Transfer Income Model 2) and MATH (Micro Analysis of Transfers to Households). ASPE, FNS, and other agencies have used microsimulation models for many years to estimate the impacts of proposed changes in social welfare programs, including programs for income support for the poor, retirement income support, and provision of health care, as well as in tax laws. Models of this class were first developed for policy analysis in the late 1960s, but they have not been the focus of a major evaluation since a study by the General Accounting Office in 1977. Our panel sought to evaluate microsimulation models within a broad context. Microsimulation models bring important strengths to policy analysis, but they are far from the only useful type of analytical tool.
Moreover, while having unique characteristics, they share aspects in common with other classes of models. Most important, some of the major problems confronting microsimulation models today plague other kinds of policy analysis tools as well. Below we summarize first our findings and recommendations that apply to policy models generally (from Part I of our report) and then those about the current state of microsimulation modeling specifically (from Part II). The text of all of our recommendations follows, keyed to the chapter in which they appear in the body of the report.

IMPROVING THE TOOLS OF POLICY ANALYSIS: INVESTMENT PRIORITIES

We identified two major deficiencies that demand attention if policy models, of whatever type, are to provide cost-effective information to the legislative debates of the future. The first problem (one of long standing) is lack of regular and systematic model validation. Ingrained patterns of behavior on the part of both decision makers and policy analysts have led to systematic underinvestment in the validation task. The second problem (of more recent
origin) is underinvestment and consequent deterioration in the scope and quality of needed input data for policy models.

Validation

Model-based estimates of the costs and population effects of proposed policy changes, although certainly not the only type of information used in the legislative process, are regularly consulted and often play a pivotal role in the debate. The estimates produced by the TRIM2 model of the high costs of mandating a federal minimum benefit standard for the Aid to Families with Dependent Children (AFDC) program helped kill this provision in the debate over the Family Support Act of 1988. The estimates produced by the tax policy microsimulation model operated by the Joint Committee on Taxation and the Office of Tax Analysis in the U.S. Department of the Treasury were critical in shaping the 1986 Tax Reform Act. Given the importance of these and other policy estimates, it is essential that the legislative debate have available, in addition to the estimates themselves, an assessment of their quality. Any estimate, whether coming from a rough back-of-the-envelope calculation or produced by one or another type of formal model, will inevitably contain errors and be subject to uncertainty—from sources such as sampling variability and errors in the input data, as well as errors in the specification of model components. An obvious question to ask is, How reliable is the estimate? For instance, can the policy analyst be reasonably confident that the estimated cost of a proposed policy change of, say, $25 billion lies within a relatively narrow bound, such as, say, $22 billion to $28 billion? Or, given the limitations of available knowledge, must the analyst acknowledge that the likely range is much wider—say, from $1 billion to $49 billion—and hence that the estimate is of much less utility as a guide to decision making?
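The arithmetic behind such ranges can be sketched briefly. The standard errors below are invented solely to reproduce the two hypothetical ranges above, under a simple normal approximation:

```python
# Hypothetical illustration: how the standard error of a cost estimate
# determines the width of its uncertainty interval.  The figures below
# are invented for illustration; they do not come from any actual model.

def interval(point_estimate, std_error, z=1.645):
    """Approximate 90% interval under a normal approximation."""
    return (point_estimate - z * std_error, point_estimate + z * std_error)

# A $25 billion cost estimate with a small standard error ($1.8 billion)
# yields a narrow, decision-relevant range ...
lo, hi = interval(25.0, 1.8)
print(f"${lo:.0f}B to ${hi:.0f}B")   # prints $22B to $28B

# ... while the same point estimate with a large standard error
# ($14.6 billion) yields a range too wide to guide fine-grained choices.
lo, hi = interval(25.0, 14.6)
print(f"${lo:.0f}B to ${hi:.0f}B")   # prints $1B to $49B
```

The point is not the particular numbers but that the same $25 billion point estimate can be either highly informative or nearly useless, depending entirely on the uncertainty behind it.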
Another obvious question to ask is, What is the track record of the model that produced the estimate? Under reasonably similar conditions, has the model done a good job or a poor job in projecting actual outcomes? Despite the clear need, it is rare, on the one hand, for questions about the quality of policy estimates or the track record of modeling tools to be asked by decision makers and, on the other hand, for information about the uncertainty surrounding policy estimates to be provided to the policy debate. We identified many reasons for this state of affairs. First of all, the task of evaluating the results of policy analysis is quite hard. There are many sources of uncertainty. Moreover, a basic difficulty is that almost all policy analyses involve conditional rather than unconditional forecasts: that is, they are designed to answer the question of "what if." What will be the effect on program costs and caseloads if a national minimum AFDC benefit standard is set at one or another level? What will be the effect on tax revenues and investment behavior if rich households are assessed a surtax of
one or another percentage? In many cases, none of the hypothetical policy alternatives that were analyzed during the course of a debate may be enacted, so that no data ever become available against which to check the validity of the estimates. (In contrast, unconditional forecasts, such as those made with macroeconomic models of expected growth in the gross national product and other economic aggregates for the next quarter or year, can be and often are checked against reality.) Even in cases in which an analyzed policy is enacted, it is likely to be difficult to distinguish among sources of error—for example, an error in projected participation rates for a program that occurs because of poor understanding of the behavior of program participants as opposed to bad forecasts of the overall state of the economy. The policy process itself exacerbates the problem. Typically, the debate on a policy issue takes place in an environment of time constraints and distractions from competing debates. Analysts are usually under great pressure to prepare a large number of estimates for many different program variants. Consequently, they have little time for such tasks as documenting and evaluating the quality of their estimates. Finally, they have little incentive to find time to develop measures of uncertainty, given the desire of decision makers for precise numbers that "add up"—a desire that is particularly strong in the current constrained fiscal environment in which new program expenditures must be offset to the dollar by new revenues or cuts in other programs. Despite the difficulties in validating estimates of the impacts of proposed policy alternatives and ingrained institutional behaviors, we believe that it is essential for users and producers of policy information to elevate validation to a priority task.
Our message to users, including decision makers and their staffs, is that they must systematically demand information on the level and sources of uncertainty in policy analysis work. It is in both their short-run and their long-run interests to do so. In the short run, users need information about uncertainty for several purposes: to evaluate competing estimates, for example, from congressional and executive branch agencies; to determine how much weight to give to the "numbers" in making policy choices; and to determine when it no longer makes sense to fine-tune a policy proposal because the available information cannot reliably distinguish among alternatives. In the long run, they need information about uncertainty to help set investment priorities for policy analysis tools and databases that are most likely to improve the quality of critical estimates. Recognizing the difficulty of changing the behavior of decision makers, we urge the heads of policy analysis agencies to take the lead in working to ensure that information on uncertainty becomes available as a matter of course for the estimates their agencies produce. Agency heads should set and enforce standards that validation be part of the policy analysis work of their staffs; allocate staff and budget resources to the validation task; support efforts by their staffs to educate the staffs of decision makers about the need for information on the
quality of the estimates and how to interpret such information; and back up their staffs when time constraints and demands for certainty threaten to undercut the validation effort. Policy analysis work, whether conducted in-house or by contractors, should always include some type of validation effort that, at a minimum, develops approximate estimates of uncertainty in the results and the main sources of this uncertainty. For microsimulation and other types of large, complex models, recently developed computer-intensive techniques make it feasible to develop estimates of variance from sources such as sampling variability in the primary database and other inputs. In addition, for large-scale, ongoing research and modeling efforts, the agencies should let separate contracts with independent analytical suppliers for evaluation. The reason for independent contracts is to ensure objectivity and minimize the likelihood that the evaluation will be sacrificed to the need for immediate results to feed to the policy debate. The focus of these independent evaluation studies should be on two major types of validation that provide important information for determining priority areas for future investment in policy analysis tools. The first type is sensitivity analysis, which involves running alternate versions of one or more model components and data inputs to determine the effects on the estimates; the second type is external validation, which involves comparing model outputs with measures of what actually occurred. Given how rarely an enacted policy corresponds to the alternatives a model actually analyzed, external validation must often be accomplished by other means. Thus, one can use the model with an earlier database to project current program law, thereby making possible comparisons with administrative data on actual program outcomes.
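As an illustration, sensitivity analysis of the first type can be sketched in a few lines: hold everything else fixed, swap in an alternate version of a single component, and compare the estimates. The toy "model," its participation component, and all numbers here are hypothetical, not drawn from TRIM2 or any actual system:

```python
# Toy sensitivity analysis: rerun the model with an alternate version of
# one component (here, program participation) and compare the estimates.
# Records, eligibility rules, and benefit amounts are invented.

def simulate_cost(records, participates):
    """Stand-in for a model run: total benefits paid to simulated participants."""
    return sum(r["benefit"] for r in records if participates(r))

records = [
    {"income": 8000, "benefit": 3.2},   # benefits in billions of dollars
    {"income": 12000, "benefit": 2.1},
    {"income": 15000, "benefit": 1.4},
]

def full_takeup(r):      # baseline: every eligible household participates
    return r["income"] < 14000

def partial_takeup(r):   # variant: only the poorest households participate
    return r["income"] < 10000

baseline = simulate_cost(records, full_takeup)
variant = simulate_cost(records, partial_takeup)
print(f"baseline {baseline:.1f}, variant {variant:.1f}, "
      f"swing from one assumption: {baseline - variant:.1f}")
```

A large swing from a single component flags that component as a priority for better data or research; a small swing suggests the estimate is robust to it.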
The panel itself conducted an illustrative validation experiment with the TRIM2 model, including an external validity study combined with a sensitivity analysis. Systematic, ongoing validation of policy analysis estimates also requires attention to ancillary activities. Specifically, policy analysis agencies need to allocate sufficient resources for complete and understandable documentation of policy analysis tools and of the methodology and procedures employed in major policy analyses. The agencies also need to require that major analytical efforts be archived so that the models, databases, and outputs are available for future evaluation. Finally, the agencies need to experiment with modes of presenting information about the uncertainty in their estimates to facilitate understanding and acceptance of such information on the part of decision makers.

Better Data

An essential requirement for policy analysis of alternative government programs, whatever the type of estimate and estimating tool used, is that there be data to analyze. Good data are a critical ingredient for models and other analysis
tools to produce good policy estimates. Data that are poor in quality, limited in scope, or low in relevance will increase the uncertainty and decrease the validity of model outputs. Poor data also make it harder for models to respond to changing policy analysis needs in a timely and cost-effective manner. Given the resources that are at stake, a well-considered, ongoing program of investment in data sources for social welfare policy analysis on the part of the federal government is more than justified. Federal expenditures total over $300 billion a year for social insurance programs, such as social security and Medicare, and almost $75 billion a year for public assistance programs, such as AFDC and food stamps. In comparison, the entire statistical budget of the federal government runs under $2 billion in most years. A disturbing feature of the decade just completed has been declining federal investment in the production of high-quality, relevant data in many areas of ongoing policy concern. Significant cutbacks in budgets and staff resources for the major federal statistical agencies (amounting to a 13 percent overall budget reduction in real dollar terms from 1980 to 1988), although encouraging the demise of some outmoded programs, in most cases had debilitating effects. Important surveys were reduced in sample size and frequency; programs to review and improve data quality were stretched out or canceled; and key concepts and measurements were not revised to keep up with changing social and economic trends. These problems are not of purely academic interest; they have had real-world policy consequences. Lack of recent trend data resulted in estimates of the cost of covering prescription drugs as part of the 1988 Medicare Catastrophic Coverage Act (since repealed) that turned out to be much too low.
Inadequate data greatly hampered the development of good estimates of the likely impact of child support and employment programs for welfare recipients: Congress had to enact these major new provisions of the 1988 Family Support Act in large measure on faith. Decision makers in both the public and the private sectors have made policy choices based on preliminary economic statistics that later turned out to have large errors. Members and committees of Congress have expressed concern over the deterioration of the nation's information base, and the administration has expressed support for budget increases and reallocations to make it possible to effect improvements in important statistical concepts and data series. We strongly support these developments and recommend increased investment by the federal government in the production of relevant, high-quality statistical data for social welfare policy analysis and other purposes. In addition to budget and staffing constraints, the federal statistical system has suffered over the past few decades a deterioration in mechanisms for interagency coordination and the ability to draw upon and integrate information from a range of databases, particularly administrative records. This situation has
also contributed to reduced timeliness, quantity, and quality of policy-relevant data. With its traditionally decentralized statistical system, whereby one agency collects data on health care financing, another on income, and so on, the United States depends heavily on effective coordinating mechanisms to achieve optimal allocation of data production resources. Yet the principal coordinating mechanism, lodged in the Office of Management and Budget, with no more than half a dozen staff members and limited resources, is today a shadow of its former self. We strongly recommend that the federal government strengthen and increase its investment in the coordination of federal statistical activities. To better position the decentralized federal statistical system to serve changing data requirements over time, there is a need not only for improved coordination of data production, but also for the adoption of more far-seeing strategies of government data collection that emphasize flexibility and breadth of use. In this regard, duplication of selected questions across surveys can be very beneficial and should not be rejected out of hand. The collection of overlapping data—for example, the collection of income data in health-related surveys and health data in income surveys—makes it easier to relate multiple data sources and evaluate their quality. Such overlaps also facilitate the ability of policy models, which are usually based on one primary data source, to respond more readily to changing policy agendas. Federal statistical agencies should also give more attention to data collection strategies that recognize key interactions among individuals and society's institutions: employers, hospitals, government agencies, and others. Most data collection efforts are focused on a single entity, such as the family or firm.
However, the characteristics of service providers as well as beneficiaries greatly influence the operation of social welfare programs. For example, the hours of operation, location of offices, and treatment of individuals by welfare agencies affect participation by eligible people. Administrative records, such as social security earnings histories, case files from public assistance programs, health care claims, and tax returns, are valuable sources of information with which to augment, evaluate, and improve surveys and censuses. Their use can enhance the scope and quality of available data for policy analysis and other purposes at very low additional cost. Yet developments in the past two decades have greatly undercut the contributions to the nation's information base from administrative sources. A major factor has been the increased emphasis placed by statistical agencies on restricting data access in order to guard against possible breaches of confidentiality. For example, the Census Bureau no longer prepares exact-match files for public release from household surveys such as the March Current Population Survey (CPS) matched with Social Security Administration (SSA) earnings records. Because the available CPS-SSA exact-match files date back to the 1970s, models of future retirement income programs must generate data for 10 or more
past years before they can begin their projections, which not only increases costs but inevitably impairs the quality of the estimates. We strongly support the need to take appropriate measures to protect the confidentiality of individual data records and to take all due precautions against either deliberate or inadvertent disclosure. However, we believe that mechanisms must be found to make it possible for the rich sets of data that are generated for federal administrative purposes to be used more fully for statistical analysis purposes. Finally, we support reallocation of effort within federal statistical agencies and between these agencies and users to emphasize the development of the highest quality data for policy analysis and research purposes. More data, although needed in many areas, are not enough. The data must also reflect appropriate and accurate measurement. Yet budget and staff constraints, coupled with the difficulty of convincing decision makers of the value of methodological work, have often forced agencies to emphasize the operational activities necessary for timely data release at the expense of measurement research and assessment of quality. We urge that this imbalance be redressed. In addition, we urge that federal statistical agencies use their assessments of quality to add more value to the data series they release. Traditionally, statistical agencies have seen their role as preparing survey-specific data files and publications. They have not seen their role as producing integrated databases or the best published estimates for such statistics as household income or poverty that could be developed from multiple data sources.
(For example, the Census Bureau adjusts household surveys for nonresponse by people in the sample but does not perform other adjustments, such as correcting income amounts for misreporting, that would involve the use of outside sources such as administrative records.) Currently, policy analysis agencies and other end users must perform many additional adjustments to survey data to make them suitable for modeling and analysis. Users often lack the information as well as the resources to perform an adequate job, and users at one agency frequently duplicate the efforts of other users. We recommend that statistical agencies seek, where feasible, to use evaluative studies and multiple data sources to develop improved databases and published series for policy analysis and other important purposes.

THE ROLE OF MICROSIMULATION AS A POLICY ANALYSIS TOOL

The microsimulation model approach to producing estimates of the effects of proposed changes in government programs involves obtaining inputs from microlevel databases of individual records, mimicking how current and alternative program provisions apply to the individuals described in those records, and maintaining the simulated outputs for each program scenario on each of the individual records. For example, in simulating the effects of changes to the
AFDC program, microsimulation models process records for families as if they were applying to the local welfare office for benefits, and in simulating the effects of tax law changes, they process records for people as if they were filling out their 1040 tax forms. Models based on microsimulation techniques are conceptually highly attractive because they operate at the appropriate decision level and take into account the diverse circumstances and characteristics of the relevant population, whether it be low-income families, taxpayers, or health care providers. Such models are able to respond to important needs of the policy process for information about the effects of very fine-grained as well as broader policy changes, the effects of changes that involve complicated interactions among more than one government program, and the effects of changes on specific population groups as well as total program costs and caseloads. Of course, microsimulation models are by no means the only useful tool for policy analysis. Indeed, the policy analysis community benefits from having available a wide range of modeling tools to provide alternative perspectives and answer a variety of questions, not all of which require greatly detailed information. Yet, when flexible, fine-grained analysis of proposed policy changes is called for, no other type of model can match microsimulation in its potential to respond. However, the capability for the detailed analysis provided by microsimulation modeling comes at a price. Microsimulation models tend to be highly complex (reflecting the complexities of government programs and individual circumstances) and must usually meld together a variety of data and research results of varying quality (making many, often unsupported, assumptions in the process).
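The record-by-record logic described above can be sketched in miniature: apply one set of program rules to every record, keep the simulated result on each record for every scenario, and derive aggregates from the per-record results. The eligibility threshold and benefit formula below are invented for illustration; they do not represent actual AFDC rules:

```python
# Toy microsimulation: process each household record through hypothetical
# program rules under current law and under an alternative, storing the
# simulated outcome on every record for every scenario.

def simulate(records, income_limit, base_benefit, reduction_rate):
    """Apply one set of hypothetical program rules to every record."""
    results = []
    for r in records:
        eligible = r["income"] <= income_limit and r["children"] > 0
        benefit = max(0.0, base_benefit - reduction_rate * r["income"]) if eligible else 0.0
        results.append({**r, "eligible": eligible, "benefit": benefit})
    return results

households = [
    {"id": 1, "income": 4000.0, "children": 2},
    {"id": 2, "income": 9000.0, "children": 1},
    {"id": 3, "income": 15000.0, "children": 3},
]

current_law = simulate(households, income_limit=10000, base_benefit=5000, reduction_rate=0.3)
alternative = simulate(households, income_limit=12000, base_benefit=6000, reduction_rate=0.3)

# Aggregate outputs -- total cost and caseload -- fall out of the
# per-record results, as do breakdowns for specific population groups.
cost_delta = sum(r["benefit"] for r in alternative) - sum(r["benefit"] for r in current_law)
print(f"cost change under the alternative: {cost_delta:.0f}")
```

Because results are kept per record, the same run supports total costs, caseloads, and fine-grained distributional comparisons across scenarios.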
As a result, the history of microsimulation model development has witnessed instances in which model development and application incurred extra time and costs; in which the model became inflexible in operation and difficult to understand and access; and in which it was hard for the analyst, let alone the decision maker, to evaluate the quality of the outputs. A typical response in the past to the problems posed by the complexity of microsimulation modeling was to pare back model capabilities or focus new development on the accounting functions that mimic program rules and to leave aside other, more difficult aspects, such as modeling behavioral response to program changes. However, these kinds of design choices limit the usefulness of the models for the policy debate. Very little information is available with which to assess the performance of current microsimulation models, including how well their outputs compare with actual policy outcomes or the degree and sources of uncertainty in the estimates—which we suspect may be high. Although we could not make definitive judgments, we identified several causes for concern. We found that the data sources used to construct microsimulation model databases have serious
weaknesses and deficiencies. We are concerned that the mainframe, batch-oriented computer processing technology that is used to implement most current microsimulation models is no longer cost-effective and presents barriers to model validation and experimentation and to direct use of the models by analysts. We also note that the underlying base of research knowledge that is needed to support modeling behavioral responses to government program changes and in other ways expand the capabilities of current models has important limitations. Finally, we are troubled by some aspects of the current structure of the microsimulation modeling community—that is, the interrelationships among the agencies that use the models, the statistical agencies that produce needed input data, the contractors that generally operate the models, and the academic research community. The highly decentralized nature of policy analysis and database production in the federal government often adds costs for duplicative work and raises barriers to effective communication among analysts and between them and data producers. In addition, because the policy community that actively works with microsimulation models today is largely limited to a small number of expert staff in a few firms and agencies, there are few avenues for new ideas and perspectives—whether from users in the agencies, academic researchers, or others—to lead to improvements in models and the estimates they produce. We believe that microsimulation models are important to the policy process, and we anticipate that the need for the kinds of detailed estimates that they can best generate will grow, not diminish, in future years. We recommend allocating sufficient resources to the current models to maintain and improve them incrementally where appropriate and cost-effective.
Further, we recommend in-depth validation studies of the current models, both to provide information on the quality of their estimates to the policy process and to guide decisions on future model development. However, because there is so little information with which to assess current models and because of the limitations of available databases and research knowledge, we do not advocate expanding the capabilities of existing models in any specific direction at this time. The ultimate objective, in our view, is to develop a new generation of microsimulation models that incorporate improvements in quality, flexibility, accessibility to a broader user community, and overall cost-effectiveness. To achieve this goal will require significant investments in data, research knowledge, model design and validation techniques, and computing technology. We urge policy analysis agencies, over the next few years, to devote the needed level of resources to investment activities, including validation of current models. If budgets remain tight, the agencies should be prepared to cut back on resources for current applications in the short term, in order to have available improved modeling tools to satisfy the detailed information requirements of policy debates over the medium and long term.
Databases

In considering the data inputs to microsimulation models, we focused on the March income supplement to the Current Population Survey and the new Survey of Income and Program Participation (SIPP), which currently or potentially provide much of the data required for modeling income support programs, as well as programs in other areas. We found a need for in-depth evaluation of the March CPS and also of administrative records that often are used in models to supplement household survey data. We propose investigation of short-term and long-term alternatives for improving model databases through combined use of March CPS, SIPP, and administrative records data. In order to improve quality and perhaps reduce the total costs of generating suitable data for policy analysis, we propose that the Census Bureau play a much more active role in preparing useful databases for modeling and policy research. We note specifically the need to investigate the impact on microsimulation model estimates of population undercount in censuses and surveys and, should important effects be determined, to develop ways to implement coverage error adjustments in surveys that the models use.

Model Design and Development

In reviewing the structure and capabilities of microsimulation models, we found that a number of basic principles of model design (such as modularity) and implementation (such as prototyping) have often but not always been followed in the past. They are necessary to the development of cost-effective, useful, and usable models that are well positioned to respond to changing policy needs. In particular, future models need to be designed to facilitate the conduct of validation studies that involve altering model components and data inputs.
In considering strategic directions for future microsimulation model development, we identified three components that are problematic or not well developed in many current models: techniques to project (or age) the data forward in time, to simulate behavioral responses to proposed policy changes, and to simulate longer-term effects of policy changes. In the absence of a body of research knowledge on which to base specific recommendations in these areas, we recommend a research agenda designed to identify the most important kinds of enhancements for microsimulation models and the best ways to implement them.

Computing Technology

Several technologies, such as powerful microcomputer workstations, possibly linked to other computers, and new kinds of software, such as graphical user interfaces and computer-assisted software engineering, promise to enhance
greatly the flexibility and accessibility of a new generation of microsimulation models. We recommend that policy analysis agencies position themselves to take advantage of these new directions in computing, particularly when promising software tools become more standardized.

Validation

We identified three validation techniques that we believe can and must be used to obtain vitally important information about the quality of microsimulation outputs: (1) external validity studies, in which model results are compared with data from program administrative sources or other targets; (2) sensitivity analyses, which assess the effects of alternative versions of specific model components on the estimates; and (3) computer-intensive sample reuse techniques, such as the bootstrap, which measure the variance in model estimates. We recommend that policy analysis agencies support research on model validation methods and bring together available information about model quality. We also propose that agencies organize cost-effective programs to obtain validation results of two kinds: rough-and-ready information for use in informing current policy debates, and in-depth information, provided by independent organizations, for use in identifying model weaknesses and planning needed investments in future model development.

Documentation and Archiving

In comparing current documentation and archiving practices for microsimulation models with industry standards for software documentation, we found a number of problem areas. We recommend greater attention to documentation, particularly from the perspective of making microsimulation models more accessible and their outputs more understandable to end users.
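The computer-intensive sample reuse (bootstrap) technique described under Validation above can be illustrated with a small sketch. This is purely illustrative: the program rules, record layout, and function names below are our own inventions for exposition, not those of any model reviewed in this report.

```python
import random

def simulate_program_cost(households):
    """Toy 'microsimulation': weighted total benefit cost, where a
    household qualifies if its income falls below a cutoff.
    Cutoff and benefit amounts are hypothetical."""
    CUTOFF, BENEFIT = 15_000, 2_400
    return sum(h["weight"] * BENEFIT
               for h in households if h["income"] < CUTOFF)

def bootstrap_variance(households, n_replicates=500, seed=12345):
    """Resample households with replacement, rerun the simulation on
    each replicate, and measure the spread of the resulting estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_replicates):
        resample = rng.choices(households, k=len(households))
        estimates.append(simulate_program_cost(resample))
    mean = sum(estimates) / n_replicates
    variance = sum((e - mean) ** 2 for e in estimates) / (n_replicates - 1)
    return mean, variance

# Hypothetical survey microdata: each record carries a sampling weight.
rng = random.Random(1)
data = [{"income": rng.uniform(0, 60_000), "weight": 100.0}
        for _ in range(1_000)]
point = simulate_program_cost(data)
boot_mean, boot_var = bootstrap_variance(data)
std_error = boot_var ** 0.5  # reported alongside the point estimate
```

In practice, the payoff is exactly what the panel recommends: a standard error that can accompany the point estimate when results are presented to decision makers.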
Similarly, to facilitate model validation, we see a need to set higher standards for archiving of microsimulation model databases and all of the inputs to major policy applications of the models.

The Structure of the Community

We found several areas for improvement in the relationships among all of the organizations and people involved in developing, using, and applying microsimulation model estimates for the policy debate. Adding to our earlier recommendation that the Census Bureau and other federal statistical agencies play a more active role in preparing usable databases for modeling and policy research uses, we recommend that policy analysis agencies undertake cooperative activities and encourage relevant academic research to further microsimulation model development. Given the particularly fragmented nature of health care
policy data collection and analysis, we urge the Department of Health and Human Services to establish a high-level group to coordinate microsimulation model development in this area. Finally, we recommend greater use of the models on the part of agency staff.

The Use of Microsimulation for Basic Research

Guy Orcutt, an economist who pioneered the concept of microsimulation modeling for policy analysis, had a dream that microsimulation models would also make important contributions to social science research knowledge. With some exceptions, notably in the field of family demography, that dream has gone largely unfulfilled. Today, there is the likelihood that improved data, model design, and computing technology will greatly enhance the cost-effectiveness of microsimulation. Moreover, many contemporary research problems present complexities that microsimulation can potentially address. Hence, we believe the time may be ripe for the realization of Orcutt's dream. In turn, a larger role for microsimulation modeling as a basic research tool, which we encourage agencies to help foster and to benefit from, should make possible advances in the usefulness of microsimulation techniques for policy research and analysis and thereby contribute to better information for public policy decision making.

RECOMMENDATIONS FOR IMPROVING POLICY ANALYSIS

Data

3-1 We recommend that the federal government increase its investment in the production of relevant, high-quality statistical data for social welfare policy analysis and other purposes.

3-2 We recommend that the federal government strengthen and increase its investment in the coordination of federal statistical activities, with the goal of improving the quality and relevance of data for policy analysis and other purposes.
3-3 We recommend that federal data collection strategies emphasize breadth of use and ability to respond to changing policy needs. In this regard, duplication of selected questions across surveys should be encouraged to the extent that such duplication enhances utility and facilitates evaluation of data quality.

3-4 We recommend that federal statistical agencies give more attention to data collection strategies that recognize key interactions among individuals and institutions—employers, hospitals, government agencies, and others.
3-5 We recommend development and implementation of mechanisms to improve access, under appropriate circumstances, to administrative and survey microdata for statistical research and analysis purposes.

3-6 We recommend that federal statistical agencies increase their investment in evaluation of the quality of survey and administrative data. We further recommend that they use the results of evaluation studies to implement corrections, when feasible, to databases and published data series, with the objective of improving the quality and reducing the overall costs of providing analytically useful data for policy analysis and other important purposes.

3-7 We recommend that the Census Bureau conduct a thorough evaluation of population coverage errors in the major household surveys and the decennial census and of their potential impacts on policy analysis and research uses of the data. Should important coverage errors be identified, we recommend that the Census Bureau develop ways to adjust census and survey data that have wide application for policy analysis and research.

Validation

3-8 We recommend that users of policy projections systematically demand information on the level and sources of uncertainty in policy analysis work.

3-9 We recommend that the heads of policy analysis agencies assume responsibility for ensuring, to the extent feasible, that their staffs regularly prepare information about the level and sources of uncertainty in their work. Agency heads should also support efforts of their staffs to accustom decision makers to request and use such information in the policy process.

3-10 We recommend that policy analysis agencies earmark a portion of the funds for all major analytical efforts for evaluation of the quality of the results.
For large-scale, ongoing research and modeling efforts, the agencies should let a separate contract for an independent evaluation.

3-11 We recommend that policy analysis agencies routinely provide periodic error analyses of ongoing work.

3-12 We recommend that policy analysis agencies allocate sufficient resources for complete and understandable documentation of policy analysis tools. We also recommend that, as a matter of standard practice, they require complete documentation of the methodology and procedures used in major policy analyses.

3-13 We recommend that policy analysis agencies require that major analytical efforts be subject to archiving, so that the models, databases, and outputs are available for future analytical use.
3-14 We recommend that policy analysis agencies include information about estimated uncertainty and the sources of this uncertainty as a matter of course in presentations of results to decision makers. The agencies should experiment with modes of presentation to facilitate understanding and acceptance of information about uncertainty on the part of decision makers.

RECOMMENDATIONS FOR MICROSIMULATION MODELS

Databases

5-1 We recommend that the Census Bureau evaluate the Current Population Survey March income supplement in its role as a primary source of data for analysis of the income distribution and economic well-being of the population. The evaluation should be designed with input from the policy analysis agencies that are major users of the data. It should be comprehensive, covering the impact on data quality of every stage of data collection and processing. It should also compare the March CPS estimates with estimates from other sources. The results should be brought together in a quality profile that is published for users and updated periodically as further evaluations are conducted and new findings obtained.

5-2 We recommend that the responsible agencies sponsor in-depth evaluations of the quality of administrative data that are used as primary or supplemental inputs to social welfare policy microsimulation models. Such data sets include the Integrated Quality Control System samples on the characteristics of welfare recipients and the Statistics of Income samples from federal income tax returns. The results of each evaluation should be brought together in a quality profile that is published for users and updated periodically as further evaluations are conducted and new findings obtained.
5-3 We recommend that the Census Bureau, in conjunction with policy analysis agencies, immediately evaluate alternative options for short-term improvements to the data used for microsimulation modeling, and policy analysis generally, of income support and related social welfare programs. Alternatives that should be investigated include: proceeding with the current plan to obtain added resources to restore the SIPP sample size and overlapping panels, beginning with the 1991 panel; and keeping the SIPP budget at its current level with the 1990 design of fewer, larger panels, while reallocating the added budget to some combination of initiatives, including adding a low-income sample to the March CPS; adding a limited set of questions to the March CPS to ascertain family composition during the income reference year; exploiting the longitudinal information available in the CPS; exploring sophisticated imputations that use SIPP data to improve
CPS information on intrayear income, employment status, and other variables; and exploring matches of SIPP and CPS data with administrative records in a form that can be made publicly available.

5-4 For the longer term, we note that the Census Bureau now has studies under way to consider the future design of SIPP. We recommend that these studies focus on improving the databases for modeling and analysis of income support and related social welfare programs. We recommend that the studies review all aspects of the SIPP design (such as the sample size and length of each panel and the extent to which overlapping of panels is desirable) and consider how best to design SIPP to facilitate relating data from the SIPP, the March CPS, and administrative records.

5-5 After current studies of SIPP and the CPS are completed, we recommend that the policy analysis agencies plan accordingly to redesign their income-support program microsimulation models to make best use of the improved data on income and related subjects that should be available after 1995.

5-6 We recommend that the Census Bureau assume a more active role in adding value to databases for modeling and research purposes and for generating published data series. In particular, we recommend that the Census Bureau seek to produce the best estimates of the income distribution and related variables, such as household and family composition. Steps necessary to achieve this goal include evaluating income reporting errors in the SIPP and March CPS, on the basis of administrative records and other information sources, and using data from multiple sources to develop improved estimates.
Model Design and Development

6-1 We recommend that policy analysis agencies set standards for the design of future microsimulation models that include:

- setting clear goals and priorities for the model;
- using self-contained modules that can be readily added to (or deleted from) the model and that are constructed to facilitate documentation and validation, including the assessment of uncertainty through the use of sensitivity analysis and the application of sample reuse techniques to measure variance;
- providing for entry and exit points in the model that facilitate linkages with other models;
- attaining a high degree of computational efficiency of the model and its components, consonant with other objectives such as ease of use; and
- attaining a high degree of accessibility of the model to analysts and other users who are not computer systems experts.
6-2 We recommend that policy analysis agencies set standards of good practice for the development of future microsimulation models that require:

- constructing prototypes and establishing milestones throughout the development process, so that design flaws can be identified at an early stage and the agency provided with some analysis capability before the entire model is completed;
- preparing fully adequate documentation on a timely basis for the model and its components;
- conducting validation studies of the model and its components, including estimates of variance and sensitivity analyses (the latter should be conducted for each new module, prior to full implementation, by examining its impact on the rest of the model in order to identify any unexpected or dysfunctional interactions or adverse effects on use); and
- subjecting the model to a "sunset" provision, whereby the model is periodically reevaluated, obsolete components are deleted, and other components are respecified to optimize the model's usefulness and efficiency.

6-3 We recommend that policy analysis agencies sponsor an evaluation program to assess the quality of estimates from current static microsimulation models as a function of the aging technique that is used, and that they further support such evaluations on a periodic basis for future models.

6-4 We recommend that policy analysis agencies require that future static microsimulation models build in an aging capability in a manner that facilitates evaluation and use of alternative aging assumptions and procedures.
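Static aging of the kind addressed in Recommendations 6-3 and 6-4 is commonly implemented by reweighting base-year records so that weighted counts reproduce projected control totals. The following is a minimal, purely illustrative sketch; the record layout, cell definitions, and control totals are assumed for exposition, and production models use more elaborate multi-dimensional raking.

```python
def static_age(records, control_totals):
    """Rescale sampling weights within each demographic cell so that
    the weighted counts match projected (future-year) control totals.
    Simple one-dimensional cell-based reweighting."""
    base_totals = {}
    for r in records:
        base_totals[r["cell"]] = base_totals.get(r["cell"], 0.0) + r["weight"]
    factors = {cell: control_totals[cell] / base_totals[cell]
               for cell in base_totals}
    # Return new records; the originals are left unmodified.
    return [dict(r, weight=r["weight"] * factors[r["cell"]]) for r in records]

# Hypothetical base-year microdata in two age cells.
records = [{"cell": "under_65", "weight": 50.0},
           {"cell": "under_65", "weight": 50.0},
           {"cell": "65_plus",  "weight": 40.0}]
# Assumed projected population counts for the simulation year.
controls = {"under_65": 120.0, "65_plus": 60.0}
aged = static_age(records, controls)
# After reweighting, the weighted counts match the projections.
```

Building the aging step as a separate, swappable routine of this kind is what makes it feasible to evaluate alternative aging assumptions, as Recommendation 6-4 urges.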
6-5 We recommend that policy analysis agencies devote resources to studies of the relationship between behavioral research and microsimulation modeling, including studies of ways in which research and modeling can complement one another, as well as ways in which the two are alternative modes of deriving answers to policy questions.

6-6 We recommend that policy analysis agencies sponsor studies to determine when behavioral response effects are most likely to be important in different policy simulations and, hence, how investment in developing behavioral response capabilities in microsimulation should be concentrated. On the basis of such studies, policy analysis agencies should commission research to attempt to narrow the range of statistical estimates of behavioral parameters that may be of major importance to critical policy changes. Such research may require additional data analysis, replication studies, and multiple econometric analyses that use different data sets and analytic techniques.

6-7 We recommend that policy analysis agencies commission methodological research to develop methods for systematically assessing the impact on microsimulation model estimates of the degree of uncertainty in the behavioral
parameters that are used—both the uncertainty arising from the variance of specific parameters and that arising from the range of estimates from different behavioral studies. This work should be tied into the development of similar methods for assessing uncertainty of the estimates produced by microsimulation models without behavioral response.

6-8 We recommend that policy analysis agencies support research on second-round effects of policy changes that may be important to understand. We also recommend that the agencies require that future microsimulation models include entry and exit points that could facilitate linkages with second-round effects models. However, except perhaps for health care issues, we do not recommend investment at this time in building second-round effects capabilities into microsimulation models.

Computing Technology

7-1 We recommend that policy analysis agencies invest resources in developing prototypes of static and dynamic microsimulation models that use new computer technologies to provide enhanced capabilities, such as the ability for a wider group of analysts to apply the models; conduct timely and cost-effective validation studies, including variance estimation and sensitivity analyses; and alter major components, such as the aging routines, without requiring programmer intervention.

7-2 We recommend that policy analysis agencies, after experience with prototypes and reviews of developments in computer hardware and software technologies, make plans to invest in a new generation of microsimulation models that facilitate such design criteria as user accessibility and adequate documentation and evaluation of model components, as well as computational efficiency.

Health Care and Retirement Policy Modeling

8-1 We recommend that the U.S.
Department of Health and Human Services establish a high-level, department-wide coordinating and steering body to set priorities for development of microsimulation models and related data collection and research needed for improved analysis of alternative government policies and programs for health care.

8-2 We recommend that the Census Bureau perform a new exact match of social security earnings histories with the March CPS as soon as possible. The Census Bureau should develop a program for periodically conducting matches of social security earnings histories with both the March CPS and SIPP records. Ways should be found to make the matched data files available for research and modeling use.
Validation

9-1 We recommend that policy analysis agencies commit sufficient resources and accord high priority to studies validating the outputs of microsimulation models. Specifically, we recommend the following:

- Agencies, in letting major contracts for development, maintenance, and application of microsimulation models, should allocate a percentage of resources for model validation and revisions based on validation results. The types of validation studies to be carried out by the modeling contractor should include estimates of variance and focused sensitivity analyses of key sets of model outputs. The goal of these efforts should be to provide timely, rough-and-ready assessments of selected estimates that are important for informing current policy debates.
- In addition, agencies, when practical, should let separate microsimulation model validation contracts to independent organizations or in other ways arrange to carry out comprehensive, in-depth evaluations. The types of studies to be performed by a validation contractor should include external validation studies that compare model outputs with other values and detailed sensitivity analyses. The goal of these longer range efforts should be to identify priority areas for model improvement.

9-2 We recommend that policy analysis agencies provide support, through such mechanisms as grants and fellowships, for research on improved methods for validating microsimulation model output.

9-3 We recommend that policy analysis agencies support the development of quality profiles for the major microsimulation models that they use. The profiles should list and describe sources of uncertainty and identify priorities for validation work.

Documentation and Archiving

10-1 We recommend that policy analysis agencies set high standards for documentation of microsimulation models and their inputs and outputs.
Agencies should investigate existing standards, such as those published by the Institute of Electrical and Electronics Engineers, for relevance to microsimulation models and determine what additional standards are needed. The kinds of documentation that agencies should require to be developed for analysts and programmers who use, or expect to use, the models include general informational materials; tutorials; and detailed reference documents for model components that describe their theoretical basis, assumptions, operation, inputs, and outputs.

10-2 In order to facilitate model validation, we recommend that policy analysis agencies require archiving of microsimulation model databases on a regular
basis. In addition, we recommend that the agencies require full documentation and archiving of major applications of microsimulation models. The archived materials should include the model itself, the documentation of the model, the database and other inputs, the analyst's specifications, and the outputs.

Structure of the Microsimulation Modeling Community

11-1 We recommend that executive and legislative branch policy analysis agencies expand their communications and undertake cooperative efforts to improve the quality of microsimulation models and associated databases, through such means as cosponsoring research on model validation methods and other initiatives.

11-2 We recommend that policy analysis agencies adopt a strict policy that only public-use, nonproprietary microsimulation models—for which documentation, inputs, outputs, and programming code can be freely exchanged—will be considered for agency use.

11-3 We recommend that policy analysis agencies set a goal of increasing the in-house use of microsimulation models by agency analysts, who have the ultimate responsibility of interpreting model results for policy makers.

11-4 We recommend that policy analysis agencies encourage and support the involvement of social science researchers in work that is relevant to microsimulation modeling, and other microlevel policy analysis, through sponsoring regular research conferences. The conferences should highlight pertinent research results that can be used for models, with an emphasis on the synthesis of research findings and the reconciliation of conflicting results. These conferences should also work to develop research agendas to address emerging policy needs. The agencies should prepare and disseminate proceedings from all such conferences.