EVALUATIONS OF MICROSIMULATION MODELS: LITERATURE REVIEW

assess the likely variability of the output. Betson (1988) notes that there have been scant efforts to study the statistical properties of estimates derived from microsimulation models. Doyle and Trippe (1989) agree, remarking that even though microsimulation models have been used extensively to help set public policy, there has been little effort to ascertain their quality. Burtless (1989) adds that there have been no comparisons of behavioral predictions from microsimulation models with actual historical experience from a period other than the one used to derive the estimates, and that periodically comparing model predictions with actual experience would increase public confidence in microsimulation model results and probably improve the reliability of behavioral routines as well.3 The rest of this chapter summarizes the literature on microsimulation model validation.4

HENDRICKS AND HOLDEN (1976A)

Hendricks and Holden (1976a) compared the variation in earnings among individuals and within individual earnings histories produced by DYNASIM for 1967 through 1972 with earnings from the March Current Population Survey (CPS) for 1968-1973 and from the Michigan Panel Study of Income Dynamics (PSID). The DYNASIM outputs came from a simulation that began in 1960 and proceeded annually through 2000. The initial sample, drawn from the public-use sample of the 1960 U.S. census, consisted of 3,029 families containing 8,013 persons. To make the comparison fair, some data processing was necessary to overcome several sources of incomparability among the databases, including the inclusion or exclusion of the institutional population and of Alaska and Hawaii. In addition, the PSID experienced some attrition and missing data over the period studied.

3 When microsimulation models are compared to "true" values, those values are often only convenient approximations to the truth.
Data from the Current Population Survey, the Michigan Panel Study of Income Dynamics, and other data sets used to produce comparison values have several potential sources of error, including sampling error, undercoverage, unit and item nonresponse, and datedness. These errors must be taken into account when comparing such estimates with those from microsimulation models; statistical techniques that address this issue are presented in Andrews et al. (1987). The errors blur the distinction between comparing estimates with the truth and comparing estimates with independent estimates, in which case it is often unclear whether the model or the "truth" is responsible for any large discrepancy between them. However, when the target can reasonably be argued to have substantially smaller errors on average than the output of the microsimulation model under study, as is generally the case, approximations to the truth serve about as well as the truth itself in assessing the variability of the model's estimates.

4 Readers should note the publication dates of the various validation studies. The criticisms of model capabilities or performance presented in a study do not necessarily apply to the current version of the model reviewed. (For example, DYNASIM2 and TRIM2 differ in many respects from their precursors, DYNASIM and TRIM.) Indeed, a number of these validation studies led to subsequent improvements to the models.
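The point made in footnote 3, that both the model estimate and the benchmark carry error, can be sketched with a simple standardized-discrepancy calculation. This is only an illustrative sketch, not the specific techniques of Andrews et al. (1987), and the function name and earnings figures are hypothetical: a model-versus-benchmark gap is judged against the combined sampling error of the two independent estimates.

```python
import math

def discrepancy_z(model_est, model_se, bench_est, bench_se):
    """Standardized discrepancy between a model estimate and an
    error-prone benchmark estimate, treating the two as independent
    so their sampling variances add."""
    return (model_est - bench_est) / math.sqrt(model_se**2 + bench_se**2)

# Hypothetical numbers: simulated mean earnings vs. a CPS benchmark,
# each with its own standard error.
z = discrepancy_z(31200.0, 450.0, 30100.0, 300.0)

# A |z| well above 2 suggests the gap exceeds what combined sampling
# error alone would explain; a small |z| means the comparison cannot
# distinguish model error from benchmark error.
```

The design point is the denominator: ignoring the benchmark's own standard error (using only `model_se`) would overstate how precisely the "truth" is known, which is exactly the blurring the footnote warns about.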