FUTURE COMPUTING ENVIRONMENTS FOR MICROSIMULATION MODELING 218

and simulation as well as more effective programming tools for building such systems; and

• language processors of various kinds that will provide users with access to the capacious real memories we expect as well as to the very substantial virtual address spaces that will also be available.

Strategies for evolving systems for microanalytic simulation activities, including model specification, submodel integration, simulation, database preparation, and data management, need to be based on the likely existence and exploitation of such an environment. Adoption of such strategies is likely to produce an efficient and effective environment for microanalytic simulation modeling activities: teaching, research, and policy formulation and analysis.

RECOMMENDATIONS AND CONCLUSIONS

Microanalytic simulation is an important technique for designing and assessing the impact of a wide variety of public policy measures. In areas where sufficient detail exists at the micropopulation level and the implications of policy proposals can be predicted with some accuracy, microanalytic simulation experiments can provide a degree of accuracy and distributional detail unmatched by other methodologies. The methodology is perhaps at its best in performing sensitivity experiments, in which the aggregate and distributional impacts of changes between two policies are considered, rather than the derivation of aggregate effects for either policy separately with no other point of reference. From a policy analyst's point of view, the function of such models is to provide answers to questions regarding proposed policy measures. The techniques underlying the model are not of great importance, provided that the tools used give accurate, understandable, and explainable results.
From a broader perspective, if microsimulation tools and techniques are of such central importance in certain policy areas, our investment in them should be as efficient and as effective as possible, because the leveraged effect of such tools on policy formulation in the past has been extraordinary.85 To us, efficient means the best possible set of outputs for a given investment; effective means ensuring that the tools and outputs are in the best form possible to contribute to the overall task. We adopt here a broad view of the notion of effectiveness that encompasses the policy formulation and evaluation process. The development and use of microanalytic simulation models in a policy environment are important parts of the evaluation process, but only parts. The contribution that such models should make is to ensure that the entire policy formulation and evaluation process is conducted in the most efficient and effective manner possible. Optimizing the use of computer-related resources is not necessarily the same as assisting in such a way as to optimize the entire process.

A primary goal of the present study has been to assess what pattern of investments in and use of microanalytic simulation models will best meet that objective. Because of their primacy in the United States and Canada, attention here is focused on SPSD/M and TRIM2 as representative of static model developments that have found substantial use in both countries. Assessment and comparison of these systems are useful in understanding costs, benefits, problems, and opportunities that might influence the nature, timing, and extent of further investments. We have also investigated the likely hardware platforms and software techniques that are emerging and that are likely to be affordable in the medium term (5–10 years); they form the technical base that should guide investment decision making.

Our concerns can be divided into two groups: short-run and medium-run considerations. Short-run considerations affect current work requiring access to simulation models where relatively prompt responses are necessary, generally within a year. Medium-run considerations do not depend on the existing stock of data capital or software for modeling and executing models.86 Distinct separation of the discussion according to these two time horizons is essential because previous investments in systems for simulation experiments have generally been funded on the margin, resulting in medium-run development questions being either ignored or defined a year at a time.

85 All of the changes in federal individual income tax policy in the United States since 1963 and in Canada since 1965 have depended critically on this methodology. Most of the analysis of transfer programs has required the use of such models, starting with negative income tax proposals in the United States in 1969. The detailed analysis required for studying interactions between related tax and transfer measures is not feasible without such tools. Their existence has increased the level of informed debate regarding proposed measures, since tools for assessing impact analysis can now be made available to and used by interested parties. Finally, legislative bodies understand the strengths of using this methodology and increasingly expect it to be used when appropriate.
Although no one experience is typical, a retrospective look at how the KGB system evolved within the U.S. Department of Health and Human Services provides a story that is not atypical (Lewis and Michel, 1990:50–51):

No analysts were left in HHS/ASPE who understood the then-existent TRIM model [in 1977], which had been complicated by ad hoc developments since 1975. Furthermore, the current-law bias and rigidity of the TRIM structure did not seem to lend itself well to the innovative linked public-jobs-and-cash strategy favored by many policy officials within the administration…. The result was the creation, over a period of about five weeks of intense work, of a new microsimulation model named KGB for its developers, Richard Kasten, David Greenberg, and David Betson. KGB was intended as a relatively simple model to be used in-house for a specific project. It used data files derived from TRIM and included

86 Our medium-run considerations would ordinarily be termed long run. However, it is difficult to think of a period of only 6 years as long run, even though it does represent approximately one hardware generation.