Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling, Volume I - Review and Recommendations

6 Model Design and Development

We begin our consideration of the structure and design of the models themselves with a strong belief in the importance of following good design principles and practices in the development of policy analysis tools that are intended to be used many times in response to the changing requirements of the policy process. Skimping on resources for model design and testing in the development stage is likely to be a classic "penny-wise, pound-foolish" decision that comes back to haunt the model users in the application stage, particularly for highly complex microsimulation models. A poorly designed model may not perform all of the functions that were intended, may entail higher-than-expected costs of operation, may be inordinately difficult to modify or to understand, or may experience other difficulties that require analysts to scale back their expectations or invest resources in an effort to patch up the problems after the fact.

We thus begin this chapter by outlining the design principles and practices that are appropriate for microsimulation models as a general class, and we briefly review the history and current status of microsimulation model development in the United States from the viewpoint of good and bad design practices. We then turn to the substantive issues involved in determining the kinds of capabilities that might be added to existing microsimulation models or included in new models. We discuss the issues and trade-offs involved in the choice of aging strategies for microsimulation models, in the decision of whether and how to incorporate first-round behavioral responses, and in the decision of whether and how to model second-round responses.
We do not make specific recommendations about preferred approaches to the modeling task in any of these areas because there is an inadequate body of knowledge evaluating
the impact of particular design choices on the quality and utility of model outputs.1 In the absence of such evidence, it would be foolhardy and arrogant to attempt to dictate the future course of microsimulation model development. Our recommendations, instead, lay out an agenda for research and evaluation that we hope will result in the information needed to make wise choices for future model development.

One concern that is common to all the issues we address is when a microsimulation model becomes "too big": When do the benefits from additional capabilities sink under the weight of added costs for model development and use—when costs include not only staff and computer resources but also reduced capability for timely response, reduced flexibility, and reduced accessibility of the model? The issue of how wide a range of program interactions to simulate poses this concern as well.

Historically, policy analysts have asked microsimulation models, as a bedrock capability, to estimate the direct effects of proposed program changes on individual decision units, that is, the effects if one assumes no changes in people's behavior in response to the program (other than to accept or reject the program's benefits). Analysts have also typically asked models to estimate interactions among closely related programs, such as SSI, AFDC, and food stamps.
At times, analysts have asked models to provide additional capabilities, such as the following:

- the ability to project, inside the model, the cost and population effects of a program change when a new policy is expected to be implemented and for some number of years into the future (referred to as the process for "aging" the data);

- the ability to estimate, inside the model, the likely extent and effects of program-induced behavioral responses, such as the likelihood that people who are ineligible for income benefits will behave so as to become eligible (e.g., by quitting their jobs in order to obtain AFDC and Medicaid benefits) or that households will change their investment patterns in response to tax incentives;

- the ability to estimate, inside the model or through links to other models, so-called second-round effects, that is, the longer run effects of program changes—for example, the effects on wage rates of labor supply responses to welfare program changes, or the effects on health care providers of changes in the demand for health care services induced by mandated changes in the proportion of costs that patients have to pay themselves; and

1 In general, we also do not discuss the detailed approach of particular microsimulation models to specific aspects of the models' operations. However, we note considerable variability in the way in which similar functions are implemented in different models due to factors such as differences in client interests and modeling "style." Citro and Ross (in Volume II) describe some of the differences among three static models, and Ross (in Volume II) describes some of the differences between two dynamic models. The panel's validation experiment with TRIM2 (see Chapter 9 and Cohen et al., in Volume II) found important effects of different approaches to such operations as aging the data.
- the ability to model interactions among a wide variety of programs across several issue areas, such as income support, taxation, retirement income, and health care benefits.

MODEL DESIGN PRINCIPLES AND PRACTICES

The majority of microsimulation models today incorporate some, but far from all, of these capabilities. For example, most models analyze interactions among multiple programs (although, typically, within just one or two issue areas), but few of them analyze second-round effects or, indeed, first-round behavioral effects other than program participation choices. Yet, in simply modeling the complex provisions of the nation's income support programs or tax codes, today's microsimulation models are already very complicated and difficult for anyone other than experienced analysts to use or modify to respond to changing policy concerns.

Design Principles

The complexity that reflects the real world of policies and individual circumstances is an inherent feature of microsimulation modeling, so it is especially important to minimize unnecessary complexities. We believe models should follow design strategies that enable them to meet the three criteria that we believe are essential for their cost-effective use in the policy process:

- flexibility and timeliness in responding to emerging policy needs;

- understandable outputs that can be assessed in terms of their quality; and

- tolerable costs of model development, use, and modification.

These requirements argue strongly for a design approach that has the following characteristics: clear analytic goals, self-contained modules, facilitated documentation and evaluation, linkages, computational efficiency, and accessibility to users.
Clear Analytic Goals

It is important that the main analytic goals of a model be well thought out and that development work give priority to those goals, accepting limitations with respect to other possible goals. In other words, no one model can or should try to be all things to all people. Examples of models that foundered because the goals were far too ambitious, particularly given the restricted capabilities of the computer hardware and software technology available at the time, litter the history of microsimulation model development in the United States (see below).
We encourage broad conceptual thinking about model goals, and we support, when appropriate, model designs that are ambitious in their scope. For example, there are strong arguments that future model development in the health care policy world (whether for models of health care consumers or providers or both) ought to simulate behavioral responses and second-round effects as well as the direct effects of policy changes (see Chapter 8). Nonetheless, we argue strongly that model developers must make explicit choices about priorities. If they design a model that is broad in scope, then they should plan to develop only some of the model components in depth. To continue with the example, the developers of a behavioral health care policy model might pick one type of response to model in detail, while constructing a much cruder model of other responses. Conversely, a very detailed model should most likely restrict the range of issues or behaviors that it encompasses.

We are hopeful that current and expected future improvements in computer technology will make it possible to build more ambitious models that do not crumble under their own weight. Improvements in research knowledge and databases will also support this goal. However, we still see the need to trade off model breadth and depth. If such trade-offs are not made, the model development process is likely to abort or result in a model that is dysfunctional, wasting scarce resources.

Self-Contained Modules

Good design should be based on self-contained modules that can be readily added to (or deleted from) a model. As a corollary, the design should permit an analyst to specify all or only a subset of the existing modules to be used for an application.
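The self-contained, parameterized module idea can be sketched in a few lines of code. The sketch below is purely illustrative: the names (Household, afdc_benefit) and all dollar amounts are invented and do not correspond to TRIM2, MATH, or any actual program rules. The point is the structure—the program rules live in a parameter table that an analyst can edit, rather than in code that a programmer must rewrite.

```python
# Hypothetical sketch of a self-contained, parameterized benefit module.
# All names and values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Household:
    earned_income: float
    unearned_income: float
    num_persons: int

# Program rules live in a parameter table, not in code: an analyst can
# simulate a policy change by supplying new values, without reprogramming.
AFDC_PARAMS = {
    "work_expense_deduction": 90.0,   # monthly allowable deduction (illustrative)
    "earned_income_disregard": 30.0,  # flat disregard (illustrative)
    "benefit_reduction_rate": 2 / 3,  # share of countable earnings offset
    "max_benefit_by_size": {1: 200.0, 2: 300.0, 3: 400.0, 4: 500.0},
}

def afdc_benefit(hh: Household, params: dict = AFDC_PARAMS) -> float:
    """Compute a stylized benefit from a household record and a parameter set."""
    countable_earned = max(
        0.0,
        hh.earned_income
        - params["work_expense_deduction"]
        - params["earned_income_disregard"],
    )
    countable = params["benefit_reduction_rate"] * countable_earned + hh.unearned_income
    maximum = params["max_benefit_by_size"].get(hh.num_persons, 500.0)
    return max(0.0, maximum - countable)
```

Under this design, raising the allowable work-expense deduction requires only a new parameter value—for example, `afdc_benefit(hh, {**AFDC_PARAMS, "work_expense_deduction": 120.0})`—rather than new program code.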
Furthermore, the modular principle applies not only to the components of the model that simulate programs such as AFDC or SSI, but also to the various steps that are used to generate a model database. Finally, each module should be highly "parameterized," that is, allow the user to change various features of the module by simply supplying a new value for one or more parameters instead of writing new code. For example, it should be possible to change the value of allowable deductions in the formula for calculating AFDC benefits or to change the value of the factor used to adjust income amounts for underreporting.

In our view, a well-specified modular structure is one of the most important means for enabling a model to respond in a timely and flexible manner to new policy issues, while minimizing development costs because the entire model does not need to be rebuilt. Similarly, a modular structure minimizes costs of model use because only those modules that are required for a particular application have to be used. Looking to the future, it may be that modular designs implemented with enhanced computer technology will enable microsimulation
models to become bigger in scope than current models without becoming "too big."2

Facilitated Documentation and Evaluation

A good design provides for features in each module that facilitate documentation and evaluation. Although many existing microsimulation models have been designed along modular lines, relatively few have sought ways to improve the production of model documentation, and none has paid sufficient attention to model validation. Good documentation is essential to enable people other than a model's original developers to understand the model and modify it in a cost-effective manner and to evaluate the interactions of model components and their effects on the quality of the model output. Good model design should seek ways to reduce the costs and time of preparing complete documentation, such as providing for self-documenting program code in each module and automated audit trails that can track the effects of modifications in a module on other model components (see David, 1991; see also Chapter 10).

Good model design should also seek ways to facilitate the evaluation of model components, separately and collectively. To date, the task of model validation has been costly and time consuming and, partly for those reasons, has been generally ignored. Providing features such as the ready ability to use different procedures for constructing variables in the database could greatly facilitate the conduct of validation studies, including sensitivity analyses and estimation of variance in the model results through sample reuse techniques. Again, developments in computer technology should make possible great strides on these dimensions of good model design.

Linkages

Good design provides for entry and exit points in the model that facilitate linkages with other models.
In addition to a modular structure, another way to tackle the problem of providing enhanced capability without designing an overly elaborate and cumbersome model is to build in pointers that make it possible to use more than one model. For example, with such a design, a microsimulation model could feed results to a macroeconomic model in order to obtain a reading of second-round effects of proposed policy changes and, in turn, use outputs from the macroeconomic model in running further microsimulations.

2 Indeed, enhanced computer technology may well make it possible for a good model design to be "integrated," that is, to require running all instead of just some of the model's component modules without incurring excessive computational costs. Such a design requires careful specification of default values for each module to minimize the burden on the user of determining whether all modules are properly specified for the particular application.
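The micro/macro linkage just described—a microsimulation model feeding results to a macroeconomic model and then consuming the macro model's outputs—amounts to iterating the two models until the exchanged quantities settle. The toy sketch below illustrates only the control flow; both "models" are invented stubs with made-up coefficients, not real model behavior.

```python
# Illustrative control flow for linking a micro model and a macro model.
# Both functions are stand-in stubs with invented coefficients.

def run_microsimulation(wage_index: float) -> dict:
    """Stub micro model: returns an aggregate first-round output
    (here, an assumed change in aggregate labor supply)."""
    baseline_hours = 1000.0
    return {"labor_supply_change": 0.05 * wage_index * baseline_hours}

def run_macro_model(micro_outputs: dict) -> float:
    """Stub macro model: maps the labor supply shift into a revised
    economy-wide wage index (second-round effect)."""
    return 1.0 - 0.0004 * micro_outputs["labor_supply_change"]

# Iterate the loop until the exchanged wage index stops changing,
# i.e., until the linked system reaches a fixed point.
wage_index, prev = 1.0, None
for _ in range(50):
    micro_out = run_microsimulation(wage_index)
    wage_index = run_macro_model(micro_out)
    if prev is not None and abs(wage_index - prev) < 1e-9:
        break
    prev = wage_index
```

In a real linked system each stub would be a full model run, so the design question is how few iterations (and how clean an entry/exit interface) the linkage can get away with.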
Computational Efficiency

Good design strives to attain a high degree of computational efficiency of a model and its components consonant with other important design objectives. It is obvious to assert that the design of computer-based models should strive to minimize computational costs, and, indeed, developers have historically given high priority to this aspect of model design. However, declining costs of computing mean that it may be feasible, and perhaps desirable, to give up some computational efficiency for other goals, such as ease of use and ease of modification of the model. In any case, in considering ways to reduce computing costs, developers need to consider not just the costs of a single run but the combined costs of all of the runs that are necessary for a particular application.

Accessibility to Users

Good model design incorporates features that provide a high degree of accessibility of a model to analysts and other users who are not computer systems experts. At present, policy analysis agencies generally rely heavily on a few expert analysts, often affiliated with a contractor, to carry out microsimulation model applications. In turn, the analysts rely on programming staff to make changes in the model and even to set up model runs. Although, to our knowledge, the experts have met high performance standards, we believe that the cost-effectiveness of microsimulation modeling for the policy process would be improved by making the models accessible to more people. This goal requires making the models more "user friendly" to people who are not computer buffs, as well as providing adequate documentation, training, and support services.
Greater accessibility should reduce overall costs for model applications by making it possible for analysts to run the models themselves (at least for straightforward applications), should enhance the analysts' understanding of the properties of the model, and should encourage greater experimentation with the models and closer scrutiny of the quality of the model outputs.

Recommendation 6-1. We recommend that policy analysis agencies set standards for the design of future microsimulation models that include:

- setting clear goals and priorities for the model;

- using self-contained modules that can be readily added to (or deleted from) the model and that are constructed to facilitate documentation and validation, including the assessment of uncertainty through the use of sensitivity analysis and the application of sample reuse techniques to measure variance;
- providing for entry and exit points in the model that facilitate linkages with other models;

- attaining a high degree of computational efficiency of the model and its components consonant with other objectives such as ease of use; and

- attaining a high degree of accessibility of the model to analysts and other users who are not computer systems experts.

Practices

In addition to adopting good design principles, it is important to follow good practices in the actual implementation of a microsimulation model design. Such practices include careful development, full documentation, validation, and follow-up assessment.

The development process has to be carefully staged, with work directed toward intermediate goals or milestones, so that progress can be assessed and timely corrective action taken if there are undue delays or budget problems. In addition, the process should include the construction of prototypes of model components so that design flaws can be identified and corrected at an early stage. Prototypes or test versions of the model can also provide the sponsor agency with some analysis capability before the model is completed, an important consideration if the development process is expected to extend over several years.

Fully adequate documentation should be prepared on a timely basis for the model, its components, and its database. Not only is it important for the design to facilitate documentation, but the development process must assign sufficient priority and resources so that good documentation does, in fact, result. Ideally, documentation is developed on a flow basis, including an audit trail of all modifications, instead of being left to the end.

It is important to carry out validation studies of the model and its components, including sensitivity analyses as well as estimation of variance.
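The "sample reuse" approach to estimating variance mentioned in this chapter can be illustrated with a minimal bootstrap sketch. Everything below is a stand-in: the per-household benefit amounts are randomly generated rather than produced by any real model, and the aggregate of interest is simply total simulated program cost. The mechanics, however, are the generic ones: resample the microdata with replacement, recompute the aggregate for each resample, and take the spread of the replicates as an estimate of the sampling variability of the model's estimate.

```python
# Minimal bootstrap sketch of a sample reuse variance estimate for a
# simulated aggregate. The "model output" here is synthetic data.
import random

random.seed(12345)

# Hypothetical per-household simulated benefit amounts (stand-in microdata).
benefits = [random.uniform(0.0, 500.0) for _ in range(1000)]

def total_cost(sample):
    """The aggregate the analyst reports: total simulated program cost."""
    return sum(sample)

# Resample households with replacement and recompute the aggregate each
# time; the dispersion of the replicates estimates sampling variance.
replicates = []
for _ in range(200):
    resample = [random.choice(benefits) for _ in range(len(benefits))]
    replicates.append(total_cost(resample))

mean_rep = sum(replicates) / len(replicates)
variance = sum((r - mean_rep) ** 2 for r in replicates) / (len(replicates) - 1)
std_error = variance ** 0.5
```

In a real model, each replicate would mean rerunning the (often expensive) simulation—which is exactly why the chapter argues that a design facilitating such reuse matters.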
It is particularly important to perform sensitivity analyses for each new module prior to full implementation, including studies of the impact on the rest of the model, in order to identify any unexpected or dysfunctional interactions or adverse effects on use. Again, the development process should assign sufficient priority and resources to this task.

In our experience, model developers typically follow good practice with regard to validation in the development of individual components of modules. For example, standard procedure in developing a regression equation for imputing variables, such as child care deductions for the AFDC program, is to look at various measures of goodness of fit before determining the final form of the equation. However, model developers are much less likely to look at the effect
of alternative forms of an equation on the operation of the module as a whole or of alternative modules on the operation of the entire model.3

Finally, once a model is in use, developers should periodically take stock of the model's capabilities and functioning. That is, they should conduct regular "sunset" or "zero-based" reviews, in which they reevaluate the structure of the model, determine components that have outlived their usefulness and need to be dropped, and identify other components that need to be rebuilt. Such zero-based reviews are important because models constantly undergo revision and adaptation to respond to users' changing requirements for estimates. Without regular reevaluations, it is all too likely that a once well-designed model will become increasingly less cost-effective over time. It is important that policy analysis agencies build into their budgets provision for reassessing and reoptimizing the microsimulation models they use for input to the policy process.

Recommendation 6-2.
We recommend that policy analysis agencies set standards of good practice for the development of future microsimulation models that require:

- constructing prototypes and establishing milestones throughout the development process so that design flaws can be identified at an early stage and the agency provided with some analysis capability before the entire model is completed;

- preparing fully adequate documentation on a timely basis for the model and its components;

- conducting validation studies of the model and its components, including estimates of variance and sensitivity analyses (the latter should be conducted for each new module, prior to full implementation, by examining its impact on the rest of the model in order to identify any unexpected or dysfunctional interactions or adverse effects on use); and

- subjecting the model to a "sunset" provision, whereby the model is periodically reevaluated, obsolete components are deleted, and other components are respecified to optimize the model's usefulness and efficiency.

3 An exception is a recent study of food stamp participation rates, based on comparing counts and characteristics of participants from the IQCS with simulations of eligible units from the SIPP. In it, Doyle (1990) discussed sources of error in the participation rate estimates and provided an assessment of some components of uncertainty, including the impact of two different ways of estimating countable assets for households; adjusting versus not adjusting the program administrative data for administrative errors; and comparing wave 7 of the 1984 SIPP panel with wave 3 of the 1985 SIPP panel, with the two waves combined as the database for the eligibility simulations.
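The kind of sensitivity analysis described here—for instance, comparing two ways of estimating countable assets, as in the Doyle study—has a simple generic form: run the same simulation under alternative specifications of one component and compare the headline aggregate. The sketch below is entirely hypothetical; the asset-imputation variants, eligibility cutoffs, and data are invented for illustration and do not reflect any actual program or study.

```python
# Hedged sketch of a module-level sensitivity analysis: compute the same
# eligibility estimate under two alternative asset-counting rules.
# All rules, limits, and data below are invented for illustration.

households = [
    {"income": 400.0, "reported_assets": 1500.0},
    {"income": 900.0, "reported_assets": 3000.0},
    {"income": 250.0, "reported_assets": 500.0},
    {"income": 600.0, "reported_assets": 1800.0},
]

def countable_assets_v1(hh):
    """Variant 1: take reported assets at face value."""
    return hh["reported_assets"]

def countable_assets_v2(hh):
    """Variant 2: adjust reported assets upward for assumed underreporting."""
    return hh["reported_assets"] * 1.25

def eligible_count(data, asset_rule, asset_limit=2000.0, income_limit=700.0):
    """Count units passing both tests under a given asset-counting rule."""
    return sum(
        1
        for hh in data
        if asset_rule(hh) <= asset_limit and hh["income"] <= income_limit
    )

v1 = eligible_count(households, countable_assets_v1)
v2 = eligible_count(households, countable_assets_v2)
# The gap between v1 and v2 indicates how sensitive the eligibility
# estimate is to the choice of asset-imputation procedure.
```

A well-parameterized modular design makes such comparisons cheap, because only the one component varies between runs.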
CURRENT MICROSIMULATION MODEL DESIGN

Many current microsimulation models have been constructed on the general principles that we advocate. But some modeling efforts have foundered because (in addition to whatever data and other problems occurred) the design was too grandiose, the structure did not provide for sufficient modularization, or the documentation was inadequate to permit easy updating or adaptation of model components. In addition, some models that were initially well designed became encumbered with ill thought-out additions or obsolete components and lost significant flexibility and ease of use.

In the history of microsimulation modeling, RIM (Reforms in Income Maintenance) and the KGB (Kasten, Greenberg, Betson) model represent notable examples of models that were developed under acute time pressures; asked to simulate more and more proposed policy changes; and then, having collapsed under their own weight, abandoned. Insufficient attention to documentation, modularity, and computing efficiency made them increasingly difficult for their own developers—let alone other analysts—to use. The developers of TRIM (Transfer Income Model) originally tried to rewrite RIM but gave up the attempt and instead developed an entirely new model; the KGB model wound up on the shelf after its developers left the originating agency, ASPE.

Other models have been constructed according to good design principles but, over the years, have become increasingly cumbersome and difficult to access. Thus, a major impetus to the development of the KGB model was the perception that TRIM, as it then existed, could not readily be adapted to the requirements of simulating a combined income support and jobs program and that it would take too many resources to conduct such simulations with the MATH (Micro Analysis of Transfers to Households) model.
The current TRIM2 model is the result of rewriting TRIM to achieve greater efficiency and modularity. The MATH model was also rewritten to improve computing efficiency.

Other modeling efforts have suffered from being overly ambitious, particularly given the limitations of the computing technology at the time of their development. The original DYNASIM (Dynamic Simulation of Income Model) was intended to model a wide range of human behaviors and policy issues and to be accessible to a broad research community. The goals were laudable, but the costs of running the model on more than a very small sample of households proved prohibitive. DYNASIM2 is a scaled-back version that was redesigned to maximize computational efficiency and to focus on policy issues surrounding the provision of adequate retirement income support.

The MRPIS (Multi-Regional Policy Impact Simulation) model has also experienced high costs and long development times in its developers' effort to construct a complex system of fully linked models. The system includes a detailed microsimulation model of the household sector, together with input-output and cell-based models that
simulate the effects of changes in household disposable income on savings and consumption that, in turn, affect regional economic production and, ultimately, in a feedback loop, affect household employment. To date, MRPIS has had only limited application.

Overall, the history of microsimulation has exhibited a pattern of advances and retreats. Advances occurred as new capabilities of various kinds were added to the models to respond to policy needs. Invariably, there came a time when the added capability proved dysfunctional for the continued cost-effective operation of the model. Then came retreat. Historically, the response has been (if the model was not simply shelved) to redesign the model for maximum efficiency with the available computing technology. Another typical response has been to refocus the model on core modules: for example, current maintenance and development efforts for the MATH model are limited almost entirely to those modules that are necessary to simulate the food stamp program. Other modules have been largely shelved.

Today, the major microsimulation models are optimized for the mainframe, batch-oriented computing technology that prevailed in the late 1970s and early 1980s. Although this strategy has greatly reduced computing costs, the models are not well positioned to move toward adding new capabilities. Development as well as operation of the models can entail substantial overall costs because many applications require multiple runs and the intervention of skilled programmers to make necessary changes in the program code. Moreover, the linearity embedded in the models' hardware and software configuration imposes constraints, sometimes quite severe, on analysts' flexibility of use.
For example, once a new TRIM2 or MATH baseline file has been constructed, it is very expensive to go back and alter its characteristics, say, to age the data forward in time if aging has not already been done or to construct different kinds of program filing units. This inflexibility also inhibits evaluation studies of models.

Today, the major models—particularly the cross-sectional "static" models such as TRIM2 and MATH—are focused on those capabilities that are central to simulating the direct effects of proposed program changes. The core of these models is the set of modules that mimic the detailed operation of government programs, and new model development in the past decade has generally focused on further elaborating these modules or developing modules for additional programs. For example, the federal and state income tax modules in TRIM2 were completely rewritten and expanded in the mid-1980s, as was the TRIM2 Medicaid module. This type of development increases the breadth of a microsimulation model in terms of the policy areas for which it is relevant. Another example of adding breadth, in this case to a dynamic model, is the recent development of a module for simulating long-term care financing programs in PRISM (Pension and Retirement Income Simulation Model).

In contrast, development of the kinds of capabilities that increase the depth of a microsimulation model in terms of simulating the effects of a proposed
policy change has either not taken place or resulted in less-than-satisfactory modules that were later mothballed. Examples of depth include functions for aging the data and simulating first-round and second-round behavioral responses inside a model. All of the major static models include aging modules; however, this capability in TRIM2 has not been used for policy purposes since the early 1980s. As we have noted, none of the static models currently includes modules for behavioral response (other than the participation decision) or for second-round effects.4 The modules that were developed for MATH in the mid-1970s to simulate work responses to welfare program changes have been dormant for many years.

The dynamic models, such as DYNASIM2 and PRISM, that are used to model long-range policy issues involving retirement income and other areas obviously include aging capabilities. Indeed, aging is the focus of these models. They also make heavy use of research knowledge about behavior to build their databases. However, they incorporate relatively few feedback effects of future policy changes on behavior, except for the decision to retire, which is the functional equivalent to the program participation decision in the static income support models.

Many factors have resulted in the situation we just described. The need to keep their models competitive in the policy analysis market has obviously motivated the model developers (and the analysts who use the results) to strive for reduction of computer costs and sufficient streamlining so as to facilitate their use. In turn, the predilections of policy makers (and their staffs) have exerted a strong influence in the direction of emphasizing the models' accounting elements—that is, the detailed program rules—in preference to other capabilities.
Particularly in the Gramm-Rudman-Hollings era, policy makers have often been more concerned about the immediate consequences of a proposed welfare or tax policy change for next year's budget deficit, for example, than about longer term behavioral effects. They have also been interested in obtaining results for very fine-grained changes in programs. An added factor in reinforcing emphasis on the accounting modules is the concern of the staffs in the policy analysis agencies to have credible models whose baseline results closely match available administrative data on program recipients, although, as noted above, the process of calibrating model results to control totals carries its own perils (see Chapter 5).

Last, but far from least, an important factor in the underemphasis on capabilities such as modeling behavioral response is the skimpy base of research knowledge on which to develop these kinds of capabilities. Social science is far from being able to predict the interactions of individuals with government policies and other social and economic forces with the precision that would be

4 Recently, work started on developing a labor supply response module for TRIM2.
More generally, changes in transfer policy, tax policy, and health care policy can be expected to affect participation by individuals in transfer and health care programs, levels of work effort among program recipients or the population as a whole, levels of health care expenditures and other health outcomes, and levels of investment and other variables affected by the policies under consideration.

Conceptually, one can distinguish two separate functions of microsimulation modeling: the accounting function of aggregating the effects of a policy change across the individual units in a given population and the function of predicting the behavioral response to a policy change. The first function is the more basic and must precede the second. This section considers the desirability and feasibility of incorporating behavioral responses into microsimulation models in addition to their accounting structures. We consider initially only first-round behavioral effects and hence only partial responses to policy changes; later, we discuss the incorporation of second-round effects.

Behavioral Responses in Current Models

The documentation for some of the models that we reviewed left us in doubt as to the extent to which the models incorporated behavioral responses. Nonetheless, it is clear that in the microsimulation models currently in operation, as well as those implemented in the past, the incorporation of behavioral response has been the exception rather than the rule. MATH and TRIM2 do not currently have operational components for behavioral response in labor supply (or other dimensions) to policy changes in transfer programs; thus, the models almost exclusively serve the accounting function.
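The accounting function can be illustrated with a minimal sketch: hypothetical program rules are applied to each household record in a survey database, and the results are aggregated with survey weights. The benefit formula, the guarantee and reduction-rate parameters, and the microdata below are all invented for illustration and do not represent any actual program or model.

```python
# Sketch of the "accounting function": apply program rules to each
# household record and aggregate with survey weights. All rules and
# records are hypothetical.

def simulate_benefit(income, guarantee=6000.0, reduction_rate=0.5):
    """Hypothetical transfer: a guarantee reduced 50 cents per dollar earned."""
    return max(0.0, guarantee - reduction_rate * income)

# Each record: (survey weight, annual earnings) -- illustrative microdata.
records = [(1500.0, 0.0), (1200.0, 4000.0), (900.0, 9000.0), (700.0, 15000.0)]

total_cost = sum(w * simulate_benefit(y) for w, y in records)
caseload = sum(w for w, y in records if simulate_benefit(y) > 0)

print(f"Simulated program cost: ${total_cost:,.0f}")
print(f"Simulated caseload: {caseload:,.0f} households")
```

Because each record represents many similar households, the weights, not the raw record counts, determine the simulated cost and caseload aggregates.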
Previously, however, labor supply modules were developed in both MATH and KGB to simulate the labor supply effects of the Carter welfare reform program, and a labor supply module is currently under development in TRIM2.

The dynamic microsimulation models of retirement income programs (e.g., DYNASIM2 and PRISM) make use of many behavioral parameters in developing their longitudinal databases, for example, using probabilities of job change as a function of variables such as age and previous job experience to build individual employment and earnings histories. However, they model only a limited range of the likely behavioral responses to proposed changes in retirement income programs. Thus, in neither model does the level or potential level of pension and social security income affect hours of work or earnings of retirees or nonretirees. Rather, changes in retirement income programs affect labor supply only by their effects, if any, on the decision to retire at a particular age (see Burtless, 1989:20-27; Ross, in Volume II).

For tax policy, most microsimulation models are tax calculators that simulate the revenue effects of changes in the tax code. Tax models typically include a function to decide whether or not taxpayers will itemize, on the basis
of the assumption that they will choose to minimize their tax liability. Other behavioral responses are also frequently taken into account, but not inside the models. Rather, the agencies and congressional committees using tax simulators adjust their accounting estimates for behavioral response outside the models, for example, adjusting the revenue estimates of a change in the capital gains tax rate to reflect assumptions about the effects on the timing of sales of capital assets (see Strauss, 1989).

For health care policy, strong behavioral responses have been documented in some cases: for example, hospitals responded when Medicare implemented a prospective payment system by reducing the average length of stay (see Grannemann, 1989). However, behavioral responses are rarely or only crudely incorporated in health care policy models at the present time.

As we have noted, the one behavioral response that is incorporated in MATH, TRIM2, and other microsimulation models for income support programs is participation in such programs. Given that some proportion of eligible households voluntarily choose not to apply for benefits, the models must incorporate the determinants of the participation decision. The analogous behavioral response function in models of retirement income programs is the decision to accept retirement benefits, which entails a corresponding decision to stop work or reduce hours.14

Issues and Trade-Offs

There are many issues that must be considered in an assessment of the desirability and feasibility of incorporating behavioral response in microsimulation models. First is the potential magnitude of behavioral responses that are thought to occur. Clearly, the greater the potential for a behavioral response, the more seriously it merits consideration for incorporation into models.
The level of the potential response is likely to differ by the type of policy considered, the magnitude of the policy stimulus, the type of individual unit whose response is under consideration, and the type of behavior being examined.

A related issue comes into play when the microsimulation modeling is designed to rank-order different policy alternatives rather than to obtain the best estimate of a single alternative. In this case, the relevant question is whether the potential magnitude of behavioral response across the alternatives being ranked is likely to affect their respective rankings.

A second major consideration is whether there are generally agreed-upon and reasonably reliable statistical estimates of the magnitude of a behavioral response in order to simulate it in the first place. To some extent, this issue

14 The participation and retirement decision components of existing microsimulation models vary widely in their form and complexity; see the comparison of the MATH, TRIM2, and HITSM participation functions in Citro and Ross (in Volume II) and the comparison of DYNASIM2 and PRISM in Ross (in Volume II).
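As a purely illustrative sketch of how a participation function might operate inside a model, the fragment below assigns each eligible unit a take-up probability that rises with the size of the benefit and then draws participation stochastically, record by record. The logistic form and its coefficients are invented; the actual participation functions in MATH, TRIM2, and HITSM differ and are compared in Citro and Ross (in Volume II).

```python
import math
import random

def takeup_probability(benefit, a=-1.0, b=0.0004):
    """Hypothetical logistic take-up function: larger benefits make an
    eligible household more likely to claim them. Coefficients invented."""
    return 1.0 / (1.0 + math.exp(-(a + b * benefit)))

# (survey weight, monthly benefit if the household participates);
# illustrative eligible units only.
eligibles = [(1000.0, 150.0), (1000.0, 400.0), (1000.0, 700.0)]

# Expected caseload implied by the take-up function.
expected_caseload = sum(w * takeup_probability(b) for w, b in eligibles)

# Microsimulation models typically assign participation stochastically,
# comparing a uniform draw against each unit's take-up probability.
random.seed(12345)
simulated_caseload = sum(w for w, b in eligibles
                         if random.random() < takeup_probability(b))

print(f"Expected caseload:  {expected_caseload:,.0f}")
print(f"Simulated caseload: {simulated_caseload:,.0f}")
```

A single stochastic run will generally differ from the expected caseload; production models control this by aligning draws or averaging over replications.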
precedes the first one, because judgment of the potential magnitude of behavioral response presumes the existence of relatively good estimates. Nevertheless, it is often the case that the statistical estimates available from the research literature are sufficiently large to warrant concern over behavioral response but far from sufficiently reliable to give any degree of confidence to incorporating them in models. Unfortunately, it appears that this situation is the rule rather than the exception for social welfare policy issues. For income support issues the variances of estimates of the work-effort effects of welfare programs are often large and, more important, the estimates obtained from different studies range widely (see Burtless, 1989). The same characterization applies to tax policy estimates, such as the effect of capital gains taxation (see Strauss, 1989). Estimates of other kinds of behavioral responses with important policy implications, including charitable giving, demand for health services, and demand for health insurance, also vary widely (Strauss, 1989:Tables 4-1 to 4-10). In many cases, it appears that research designed to replicate and reconcile the different findings from different studies could narrow considerably the range of estimates in the statistical literature. However, studies that attempt such a narrowing of the range of estimates are rarely conducted.15

Because of the problem of the unreliability of response estimates, there is a need to develop concrete and well-defined methods for assessing the uncertainty of behavioral response parameters if they are incorporated in microsimulation models. If a single estimated parameter is introduced in a microsimulation model, the variance of that parameter must be translated into variances of the aggregates simulated from the model.
If multiple parameters are introduced, their covariance must be similarly incorporated; this task is often difficult because the different parameters are drawn from different studies. In addition, the uncertainty of parameters resulting from the range of estimates in the statistical literature must be assessed through some type of sensitivity analysis and the findings reflected in the caveats that accompany the output from a model. The degree of uncertainty from the range of estimates may often be much larger than that from the variance of any single estimated parameter.

We have not found any systematic attempt in microsimulation modeling efforts to date to address the problem of measuring the degree of uncertainty in model outputs attributable to the use of particular behavioral parameters

15 Meta-analysis techniques, although still evolving and posing a number of thorny methodological issues, may be useful in helping to narrow the range of estimates of behavioral responses that are important to include in microsimulation models. Meta-analysis involves a systematic way to aggregate all available studies on a topic and, with the aid of statistical techniques, to determine the best estimate based on all of the studies, without conducting new research or secondary data analysis; see Wachter and Straf (1990). Empirical or hierarchical Bayes techniques, which provide a means to combine information from different sources, of varying quality, through a weighting process, may also be helpful in developing the best available estimate of a parameter for inclusion in a microsimulation model (see Efron and Morris, 1973; Lindley and Smith, 1972).
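The simplest version of the weighted combination of estimates mentioned in note 15 is fixed-effect, inverse-variance pooling, in which each study's estimate is weighted by the reciprocal of its sampling variance. The sketch below applies this to invented estimates of a labor supply elasticity; a real application would also have to confront the between-study heterogeneity discussed in the text, which fixed-effect pooling ignores.

```python
# Precision-weighted (inverse-variance) pooling of parameter estimates
# from several studies -- the simplest fixed-effect meta-analytic
# combination. Estimates and standard errors below are invented.

studies = [
    ("Study A", -0.10, 0.05),   # (label, elasticity estimate, std. error)
    ("Study B", -0.25, 0.10),
    ("Study C", -0.05, 0.08),
]

# Weight each study by the inverse of its sampling variance.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for w, (_, est, _) in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled elasticity: {pooled:.3f} (s.e. {pooled_se:.3f})")
```

The pooled standard error is necessarily smaller than any single study's, which is exactly why such pooling is attractive when, as the text notes, individual estimates are too unreliable to use alone.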
or to alternative specifications for those parameters.16 Indeed, there has been little attention to the development of scientifically based methods for measuring uncertainty in the accounting function estimates of the models. Yet the assessment of uncertainty with regard to the estimation of behavioral response should not proceed without a similar assessment for the no-response estimates from microsimulation models that are obtained in the first place (see further discussion of this set of issues in Chapter 9).

The third major consideration concerns the problems involved in actually incorporating behavioral parameters in microsimulation models in an appropriate manner. This task is not always straightforward (see Burtless, 1989; also Grossman and Watts, 1982). The database used to estimate a behavioral response is usually not the same as the database used by the microsimulation model. Thus, researchers have typically estimated labor supply and other responses on panel data sets such as the Michigan Panel Study of Income Dynamics and the Retirement History Survey. However, these surveys, because of small sample size, limited populations, and other reasons, are rarely suitable as simulation databases. In contrast, large, regularly updated, nationally representative surveys that are heavily used for simulation, such as the March CPS income supplement, generally lack the richness of information desired for estimation. For example, the March CPS does not obtain direct measures of hourly wage rates, so that modelers must impute wage rates to the CPS records in order to simulate a labor supply response to a policy change.
As Burtless (1989) suggests, a useful, but rarely performed, step to help assess the impact of basing the simulation on imputed values would be to compare the estimated response in the original data set using both actual and imputed variables.

A related concern is that the available behavioral specifications may reflect an inappropriate degree of aggregation in relation to the microsimulation model. Thus, a good deal of behavioral analysis is carried out in terms of a representative economic agent, although microsimulation models are designed to carry out simulations on diverse kinds of units. For example, using a specification that predicts a uniform decline in work hours in response to an increase in welfare benefits fails to take advantage of the capabilities of microsimulation models for detailed analysis of population subgroups. Moreover, use of such a specification may distort the results, if, in fact, the pattern of response is that some workers drop out of the labor force entirely while the rest maintain their work hours.

Furthermore, the interaction of a behavioral response module with other aspects of a microsimulation model may produce anomalous effects. Burtless (1989) reported a comparison of the labor supply effects of four negative income tax plans simulated by the MATH model and a variant of MATH developed

16 Betson (1988) is one of the few studies to address such questions. Betson delineates major sources of errors in the microsimulation modeling process; he then explores the issue of the variability in microsimulation model estimates of a negative income tax plan that could be expected from the variability in estimates of labor supply response parameters from experimental versus nonexperimental studies.
by SRI International (called TATSIM). The two models used the same labor supply functions, estimated from the Seattle and Denver Income Maintenance Experiments: MATH simulated that aggregate labor supply and earnings of single mothers would decline under all four plans; TATSIM simulated that labor supply and earnings for such women would rise. These discrepant results could not be attributed to the behavioral response function itself; rather, they must have been caused by differences in the two models' simulation of the baseline prereform policy environment.

The fourth major consideration concerns the means by which behavioral responses are presented to policy makers—provided that reasonably good estimates are available, that they can be incorporated into a microsimulation model, and that they are of sufficient magnitude to be important for policy purposes. One issue is whether simulations should be presented both with and without the behavioral response included or just including the response. In the case of cost estimates, it may be that the simulated values should be provided to policy makers with the behavioral response, because such response will generally affect costs. However, in the case of other estimates more directly related to the response variable in question, such as labor supply, health expenditures, or capital gains realizations, it may be that the simulated values should be presented both with and without response, because the magnitude of the response itself may be of interest to policy makers.

Another issue involving presentation is how to convey the uncertainty in the estimates to policy makers. Many policy analysts argue that only the best possible point estimates should be provided because policy makers will base their decisions on such point estimates and not on a range, even if it is presented.
We argue throughout this report that policy makers should be made aware of the degree of uncertainty in the simulated values either directly or indirectly so that they can take uncertainty into account in their decisions.

Over the past decade there has been little investment in capabilities for simulating behavioral response inside microsimulation models. There has been even less investment in the kinds of research and evaluation studies that would permit charting a sensible course for the development of future behavioral components of such models. Concern about the cost implications of major development work has been one important factor in creating this situation. As we noted above, the nature of the legislative process in this decade—in which policy makers have been most concerned about total costs and distributional effects only in the very short run—has limited the motivation to add behavioral responses to models. The absence of a body of research demonstrating the impact on model estimates of incorporating behavioral parameters has further dampened interest, while the absence of sufficient research to produce parameter estimates within a reasonably narrow range or in a form suitable for microsimulation modeling has made the modelers cautious about expending much effort in this direction. In short,
there has been a self-perpetuating cycle in which lack of investment leads to downplaying the need and also makes it more difficult to incorporate behavioral response in models, which, in turn, leads to continued lack of investment.

Yet, undoubtedly, there are important behavioral effects of proposed policy changes about which decision makers should be informed. Certainly, legislative initiatives such as the 1988 Family Support Act and the 1986 Tax Reform Act were predicated on the assumption that policy changes would in fact lead to behavioral changes—for example, that transitional Medicaid and day care benefits would make work more attractive to AFDC recipients and that lower tax rates would stimulate investment and economic growth. But these effects were not simulated, nor with current modeling capability could they have been simulated; instead, to the extent that the effects were treated at all, they were handled with ad hoc, out-of-model adjustments. Yet microsimulation modeling cannot continue to ignore these behavioral response issues and expect to remain intellectually strong or policy relevant in the future. (In Chapter 8 we note the particular importance of incorporating behavioral response effects for modeling policy changes related to health care. Such responses are also very important for modeling tax policy issues.)

Thus, the goal must be to devise a strategy for development of enhanced modeling capabilities in the area of behavioral response that makes the best use of scarce investment resources. Clearly, a balance is required between the two extremes of always and never developing in-model estimates of behavioral responses. Research is required to determine the appropriate mix.
In some cases, the best method of obtaining estimates may be through behavioral response studies separate from microsimulation models, perhaps involving the application of microeconometric behavioral research results to aggregate estimates of key population subgroups. In other cases, it may be that estimates of the behavioral effects of policy changes are best obtained within the structure of microsimulation models themselves. Or it may be that a dual strategy should be followed in which behavioral response estimates are obtained in both ways. We note that, even where it appears more cost-effective to obtain estimates of behavioral responses outside microsimulation models, it is quite possible to use the models to generate estimates of the population groups that are most likely to have a response in order to obtain a rough assessment of the likely overall impact on program costs and caseloads.

Recommendation 6-5. We recommend that policy analysis agencies devote resources to studies of the relationship between behavioral research and microsimulation modeling, including studies of ways in which research and modeling can complement one another, as well as ways in which the two are alternative modes of deriving answers to policy questions.
An important criterion for deciding to spend the resources necessary to imbed behavioral response capabilities in microsimulation models is the likely magnitude of the impact. Hence, policy analysis agencies need to determine the kinds of behavioral responses to policy measures that could be important to consider. If behavioral responses are potentially large for some policy issues, and if it is decided to build some or all of those responses into microsimulation models, policy analysis agencies should attempt to reduce the extent of uncertainty surrounding existing statistical estimates of such responses. For many reasons, existing incentives for research, especially in academic circles, do not lead to the replication studies, additional data analysis, and sensitivity testing that are needed to reconcile the diversity of estimates from different analyses. Policy analysis agencies will need to provide those incentives by supporting such work.

Recommendation 6-6. We recommend that policy analysis agencies sponsor studies to determine when behavioral response effects are most likely to be important in different policy simulations and, hence, how investment in developing behavioral response capabilities in microsimulation should be concentrated. On the basis of such studies, policy analysis agencies should commission research to attempt to narrow the range of statistical estimates of behavioral parameters that may be of major importance to critical policy changes. Such research may require additional data analysis, replication studies, and multiple econometric analyses that use different data sets and analytic techniques.

One of the major issues regarding the incorporation of behavioral response parameters into microsimulation models concerns measurement of their uncertainty and the resulting impact on the uncertainty of the model outputs.
After the analyses that we recommend above have been conducted, some uncertainty will still remain, both because the different results of varying studies will prove difficult to reconcile and because all behavioral estimates have variance. How to incorporate that uncertainty in measuring the total uncertainty of the estimates from a microsimulation model is far from obvious and requires the resolution of difficult technical issues (see further discussion in Chapter 9).

Recommendation 6-7. We recommend that policy analysis agencies commission methodological research to develop methods for systematically assessing the impact on microsimulation model estimates of the degree of uncertainty in the behavioral parameters that are used—both the uncertainty arising from the variance of specific parameters and that arising from the range of estimates from different behavioral studies. This work should be tied into the development of similar methods for assessing uncertainty of the estimates produced by microsimulation models without behavioral response.
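One concrete way to carry out the kind of assessment called for in Recommendation 6-7 is Monte Carlo propagation: redraw the behavioral parameter repeatedly from its estimated sampling distribution, rerun the simulation for each draw, and summarize the dispersion of the resulting aggregates. The toy cost model, the elasticity and its standard error, and the microdata below are all invented solely to illustrate the mechanics.

```python
import random
import statistics

def simulate_cost(elasticity, records):
    """Toy simulation: program cost after a benefit increase. Each unit's
    earnings respond to the change via a hypothetical labor-supply
    elasticity, and the program makes up half of any lost earnings."""
    cost = 0.0
    for weight, earnings, benefit_change in records:
        new_earnings = earnings * (1.0 + elasticity * benefit_change / 1000.0)
        cost += weight * (benefit_change + 0.5 * (earnings - new_earnings))
    return cost

random.seed(0)
records = [(1000.0, 8000.0, 500.0), (1000.0, 12000.0, 300.0)]

# Behavioral parameter with an estimated mean and standard error.
mean_elasticity, se_elasticity = -0.1, 0.05

# Propagate the parameter's sampling variance through the simulation.
draws = [simulate_cost(random.gauss(mean_elasticity, se_elasticity), records)
         for _ in range(5000)]

print(f"Point estimate:   {simulate_cost(mean_elasticity, records):,.0f}")
print(f"Monte Carlo mean: {statistics.mean(draws):,.0f}")
print(f"Monte Carlo s.d.: {statistics.stdev(draws):,.0f}")
```

The standard deviation of the draws is the translation of the parameter's variance into a variance for the simulated aggregate; the range of estimates across studies would be handled separately, by rerunning the exercise with each study's parameter, as the recommendation distinguishes.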
Incorporation of Second-Round Effects

As discussed above, the first function of a microsimulation model is the accounting, or aggregation, function; a second function is the simulation of behavioral response to a policy change. Here, we discuss a third possible function of microsimulation models: simulation of the second-round effects of a policy change.17 By second-round effects, we refer not to the immediate responses by individual economic units directly affected, but to effects that alter the nature of factor or product markets or the level and distribution of consumption, production, and employment in the economy or in a sector of it affected by the policy change.

Examples of potential second-round effects are many and pervasive:

- A change in a transfer program that alters labor supply may change the wage rate in the labor market and therefore further change labor supply. In addition, in the short run, prior to an equilibrating change in the wage rate, the unemployment rate may be affected and displacement or replacement effects may occur.

- Altering the coinsurance rate in Medicare or health care benefit programs in general will affect the demand for health care and therefore its price. If care is rationed because prices are inflexible, the policy change may affect the amount of health care in the market.

- Lowering the marginal tax rate in the federal income tax code will affect the tax price of all deductions and therefore affect the demand for the goods that provide such deductions, such as housing.

- Changes in transfer, health care, or tax policy will generally affect the level of consumption of different goods, which will affect employment in different industries, which, in turn, will affect the distribution of income in the population.
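The first example above can be made concrete with a toy partial-equilibrium calculation: constant-elasticity labor supply and demand curves, a transfer expansion that shifts supply down by 2 percent at every wage, and a bisection search for the wage at which the market again clears. The elasticities and the size of the shift are invented for illustration.

```python
# Toy partial-equilibrium illustration of a second-round effect: a
# transfer change shifts labor supply, and the wage adjusts until
# supply again equals demand. All numbers are invented.

ETA_S, ETA_D = 0.2, -0.3      # supply and demand wage elasticities
SHIFT = 0.98                  # transfer expansion cuts supply 2% at any wage

def supply(wage):
    return SHIFT * wage ** ETA_S

def demand(wage):
    return wage ** ETA_D

# Bisection on excess demand to find the new equilibrium wage
# (old equilibrium normalized to wage = 1, employment = 1).
lo, hi = 0.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if demand(mid) > supply(mid):
        lo = mid              # excess demand: wage must rise
    else:
        hi = mid

wage = (lo + hi) / 2.0
print(f"New equilibrium wage: {wage:.4f} (was 1.0000)")
print(f"New employment:       {supply(wage):.4f} (was 1.0000)")
```

Even in this toy version, the second-round wage adjustment partially offsets the first-round supply reduction: employment falls by less than the 2 percent shift because the higher wage calls forth additional supply.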
Existing Models of Second-Round Effects

The incorporation of second-round effects has been studied for many years by economists and others in the field of macroeconomic policy modeling. Indeed, the development of macroeconomic models preceded the development of microsimulation models, and early microsimulation modelers such as Guy Orcutt explicitly looked to macroeconomic models as sources of input. Thus, Orcutt

17 The discussion in this section benefited greatly from presentations to the panel by Don Fullerton on computable general equilibrium (CGE) models and by Barry Bluestone, Alan Clayton-Matthews, and John Havens on MRPIS. Useful references on CGE models are Berkovec and Fullerton (1989), Shoven and Whalley (1984), and Whalley (1988). Useful references on MRPIS are Havens and Clayton-Matthews (1989), Havens et al. (1985), and Social Welfare Research Institute (no date). Finally, Anderson (1990) and Hanson and Robinson (1988) discuss linkages of macroeconomic and microsimulation models generally.
proposed originally to build a macroeconomic component into DYNASIM that would interact with the microsimulation components (Zedlewski, 1990). In addition to this type of interaction, the development of input-output models has provided another avenue for linkage with microsimulation models, and input-output models have been incorporated in some microsimulation models even on a regional basis (Haveman et al., 1980).

Inadequate documentation made it difficult for us to determine the detailed structure of many such models. To focus our efforts, we examined two types of second-round models: computable general equilibrium (CGE) models and the Multi-Regional Policy Impact Simulation (MRPIS) model.

CGE models, such as those developed by Shoven and Whalley (1984), simulate the equilibration of a full set of interconnected markets in an economy and permit full long-run adjustment of prices to changes in supply and demand.18 CGE models are calibrated by a process of setting parameter values for supply and demand elasticities drawn from the econometric literature or picked to fit the available data on market prices and quantities. Although CGE models as generally implemented are not micro in nature, it is possible to develop disaggregated CGE models or to link them with microsimulation models by iterating back and forth between them until market equilibrium is reached (see, e.g., Berkovec and Fullerton, 1989; Slemrod, 1985).

The MRPIS model simulates shorter run responses (see Havens and Clayton-Matthews, 1989). Its focus is on modeling the impact of program changes on regional product mix and therefore on regional employment.
Program changes, such as welfare reform or tax reform, are simulated first on a microlevel database with a microsimulation model, and marginal propensities to consume different goods are then applied to the changes in household income on the micro file. The implied region-specific changes in consumption demand are then applied to a regional input-output matrix to obtain regional employment demands by occupation. These changes in employment are then allocated across the individuals in the original micro file, which in turn generates income changes that are used to begin the process anew. The model iterates to convergence. All prices and wages are held fixed during the iteration.

Issues and Trade-Offs

Many of the same issues and trade-offs discussed for the incorporation of behavioral response in microsimulation models apply to the incorporation of

18 David (1991) reviews the general equilibrium modeling that was conducted by the U.S. Department of the Treasury as part of the debate on tax reform. He asserts (p. 784) that this approach is sophisticated in that it "integrates consumption, saving, and factor supply decisions within a household population. And it models the total response of the economy to a change in tax structure." However, he notes a number of problems in the ability of general equilibrium approaches thus far to model in a satisfactory manner the complex behavior of actors in the economy.
second-round effects. First, the potential empirical importance of second-round effects should be a primary determinant of whether attempts to incorporate such effects are undertaken. Second, the uncertainty surrounding the estimates must be an important concern. Third, the best manner to present estimates of second-round effects to policy makers—if the effects are of sufficient magnitude and the estimates of sufficient quality—is an important issue.

The second issue, that concerning uncertainty, is likely to be even more important for the modeling of second-round effects than for the modeling of first-round behavioral effects. For example, CGE models include simulation of behavioral effects in all markets on both supply and demand sides, and hence must rely on estimated elasticities for many different relationships. The range of uncertainty created by a simulation from such a large number of uncertain parameter values is likely to be larger than that of a first-round simulation, by one or more orders of magnitude, because the uncertainty is likely to increase multiplicatively rather than additively in the number of parameters. The MRPIS model also involves a large number of parameters, including detailed marginal propensities to consume, in order to project changes in consumption, and it also has a large number of parameters from a multiregional input-output matrix. The model uses estimated econometric equations to allocate changes in labor demand across the micro units as well.

Although the literature on these models, particularly the CGE literature, has discussed issues of uncertainty, too little attention has been paid to the development of methods by which uncertainty can be quantified. This inattention reflects a more general lack of attention to the issue of model validation.
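The micro-to-macro loop that MRPIS performs, described earlier, is in essence a fixed-point iteration: income changes generate consumption demand through marginal propensities to consume, consumption generates earnings through input-output relationships, and those earnings feed back into income. The sketch below collapses this to a single region and two sectors with invented coefficients; prices and wages are held fixed, as in MRPIS, so the loop converges to a simple multiplier.

```python
# Drastically simplified MRPIS-style loop. Every coefficient below is
# invented for illustration; the actual model works record by record
# on a micro file and uses a multiregional input-output matrix.

MPC = [0.4, 0.3]          # marginal propensity to consume each sector's good
LABOR_COEF = [0.6, 0.5]   # earnings generated per dollar of sectoral output

def iterate(initial_income_change, tol=1e-9, max_iter=500):
    """Iterate income -> consumption -> earnings -> income to convergence."""
    total = 0.0
    injection = initial_income_change
    for _ in range(max_iter):
        total += injection
        # Income change -> sectoral consumption demand -> earnings fed back.
        feedback = sum(m * l for m, l in zip(MPC, LABOR_COEF)) * injection
        if abs(feedback) < tol * abs(initial_income_change):
            break
        injection = feedback
    return total

# A $100 million transfer increase, with prices and wages held fixed:
print(f"Cumulative income change: ${iterate(100.0):.1f} million")
```

With these invented coefficients the per-dollar feedback is 0.39, so the loop converges to the geometric-series multiplier 1/(1 - 0.39); the number of parameters in the real model (detailed MPCs plus a full input-output matrix) is what drives the compounding of uncertainty discussed above.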
The addition of second-round effects to models also raises difficult issues relating to short-run versus long-run adjustment. For example, CGE models are best thought of as relatively long-run simulations, but such models rarely provide guidance about the time horizon for full adjustment or the dynamic path of the adjustment process. Policy makers will naturally be quite concerned with these issues.

A related issue is the nature of the counterfactual scenario that is being simulated by second-round models. For example, in the MRPIS model a simulated increase in transfer program benefits raises national output and lowers the unemployment rate, and the simulated estimates reflect this fiscal stimulus as well as the change in distribution of employment across different skill classes. Yet policy makers are unlikely to be directly interested in the fiscal stimulus provided by a particular change in a transfer program, because fiscal stimulus could be provided in many other ways as well. This issue has been discussed for some years in the economic theory of tax incidence; it is often proposed that programs be evaluated on the basis of their distributional effects, while total government expenditures are held constant.

Given these difficulties with second-round models, it is worth considering whether there is any middle ground between the polar extremes of completely
ignoring second-round effects and fully incorporating them into microsimulation models through a process of iterating with a second-round model. Such iteration is likely to be difficult to carry out appropriately with any degree of confidence in the results: the quality of the simulation depends critically on properly allocating outcomes from the aggregated second-round model across the individual units in the micro model. A middle ground could be achieved by a single iteration in which the initial aggregated outcomes from a microsimulation model are fed into a second-round model, from which second-round effects on prices and quantities are estimated. Such second-round estimates would provide a barometer of the importance of those effects as well as an indication of their sign (or direction). These outputs from the second-round model could then be presented to policy makers as qualifiers to the initial estimates.

We are less than sanguine about the cost-effectiveness of devoting investment resources to modeling second-round effects, given the difficult modeling issues involved and the high degree of uncertainty surrounding the outputs. However, it seems advisable for policy analysis agencies to support research to determine what kinds of second-round effects of policy changes are likely to be of large magnitude and hence important to understand. It also seems useful for the agencies to explore the issues involved in linking microsimulation models with second-round effects models. At a minimum, the agencies should require that microsimulation models be designed to facilitate linkages with such models. As we discuss in Chapter 8, it may be particularly important to consider modeling second-round effects for health care issues.

Recommendation 6-8.
We recommend that policy analysis agencies support research on second-round effects of policy changes that may be important to understand. We also recommend that the agencies require that future microsimulation models include entry and exit points that could facilitate linkages with second-round effects models. However, except perhaps for health care issues, we do not recommend investment at this time in building second-round effects capabilities into microsimulation models.
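The single-iteration middle ground described above, in which aggregated first-round outcomes are passed once through a second-round model to obtain a qualifier on sign and magnitude, can be sketched as follows. Everything here is a hypothetical stand-in: the record layout, the marginal propensity to consume, and the output multiplier are illustrative assumptions, not the panel's models or estimates:

```python
from dataclasses import dataclass

@dataclass
class HouseholdRecord:
    wage_income: float
    benefit_change: float  # first-round simulated change in transfer benefits

def second_round_model(aggregate_benefit_change, mpc=0.9, output_multiplier=1.5):
    """Stylized stand-in for an aggregate second-round model (e.g. a CGE or
    input-output model). Returns the rough magnitude and direction of induced
    effects; mpc and output_multiplier are assumed values for illustration."""
    demand_stimulus = mpc * aggregate_benefit_change
    output_change = output_multiplier * demand_stimulus
    return {
        "output_change": output_change,
        "direction": "expansionary" if output_change > 0 else "contractionary",
    }

# Step 1: run the microsimulation and aggregate its first-round outcomes.
households = [
    HouseholdRecord(30_000, 500.0),
    HouseholdRecord(45_000, 250.0),
    HouseholdRecord(20_000, 800.0),
]
aggregate_change = sum(h.benefit_change for h in households)

# Step 2: a single pass through the second-round model -- no iteration back
# to the micro level, just a barometer of magnitude and direction that can
# be reported to policy makers as a qualifier on the first-round estimates.
qualifier = second_round_model(aggregate_change)
print(f"First-round aggregate benefit change: {aggregate_change:,.0f}")
print(f"Second-round qualifier: {qualifier['direction']}, "
      f"induced output change ~ {qualifier['output_change']:,.0f}")
```

The design choice the sketch emphasizes is the absence of a feedback loop: the second-round output is never reallocated back across the household records, which is precisely the step the panel identifies as difficult to do with confidence.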