Suggested Citation:"HENDRICKS AND HOLDEN (1976A)." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume II, Technical Papers. Washington, DC: The National Academies Press. doi: 10.17226/1853.
Page 256


EVALUATIONS OF MICROSIMULATION MODELS: LITERATURE REVIEW

assess the likely variability of the output. Betson (1988) notes that there have been scant efforts to study the statistical properties of estimates derived from microsimulation models. Doyle and Trippe (1989) agree, remarking that even though microsimulation models have been used extensively to help set public policy, there has been little effort to ascertain their quality. Burtless (1989) adds that there have been no comparisons of behavioral predictions from microsimulation models with actual historical experience from a period other than the one used to derive the estimates, and that periodically comparing model predictions with actual experience would increase the public's confidence in model results and probably the reliability of behavioral routines as well.3

The rest of this chapter summarizes the literature on microsimulation model validation.4

HENDRICKS AND HOLDEN (1976A)

Hendricks and Holden (1976a) compared the variation in earnings among individuals, and within individual earnings histories, produced by DYNASIM for 1967 through 1972 with earnings from the March Current Population Survey (CPS) for 1968–1973 and from the Michigan Panel Study of Income Dynamics (PSID). The DYNASIM outputs came from a simulation that began in 1960 and proceeded annually through 2000; the initial sample of 3,029 families containing 8,013 persons was drawn from the public-use sample of the 1960 U.S. census.

To make the comparison fair, some data processing was necessary to overcome several sources of database incomparability, including the inclusion or exclusion of the institutional population and of Alaska and Hawaii. In addition, the PSID suffered some attrition and missing data over the period studied.

3. When microsimulation models are compared to "true" values, those values are often only convenient approximations to the truth. Data from the Current Population Survey, the Michigan Panel Study of Income Dynamics, and other data sets used to produce comparison values have several potential sources of error, including sampling error, undercoverage, unit and item nonresponse, and datedness. These errors must be considered when comparing such estimates with those from microsimulation models; statistical techniques that address this issue are presented in Andrews et al. (1987). The errors blur the distinction between comparing estimates with the truth and comparing estimates with independent estimates, where it is often unclear whether the model or the "truth" is responsible for any large discrepancy between them. However, if the target can be argued, as is generally the case, to have substantially smaller errors on average than the output of the microsimulation model under study, approximations to the truth serve about as well as the truth in assessing the variability of the model's estimates.

4. Readers should be careful to note the publication date of each validation study. Criticisms of model capabilities or performance presented in a study do not necessarily apply to the current version of the model reviewed. (For example, DYNASIM2 and TRIM2 differ in many respects from their precursors, DYNASIM and TRIM.) Indeed, a number of these validation studies resulted in subsequent improvements to the models.
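The two components of earnings variation that Hendricks and Holden examined, variation among individuals and variation within an individual's own earnings history, can be illustrated with a small sketch. All earnings figures and the simple decomposition below are hypothetical, standing in for DYNASIM output and observed panel data; they are not taken from the study.

```python
import statistics

# Hypothetical six-year earnings histories (thousands of dollars) for a few
# individuals: one list per person, one entry per year.
simulated = [
    [8.0, 8.5, 9.1, 9.0, 9.8, 10.2],
    [12.0, 11.5, 13.0, 12.8, 13.5, 14.0],
    [5.0, 5.2, 4.8, 6.0, 6.1, 6.5],
]
observed = [
    [7.5, 8.8, 9.5, 8.9, 10.1, 10.6],
    [12.2, 11.0, 13.4, 12.5, 14.0, 13.8],
    [4.9, 5.6, 4.5, 6.3, 5.9, 6.8],
]

def variability(histories):
    """Return (between-person, within-person) earnings variance."""
    means = [statistics.mean(h) for h in histories]
    # Variation among individuals: variance of each person's average earnings.
    between = statistics.variance(means)
    # Variation within histories: average variance around each person's own mean.
    within = statistics.mean(statistics.variance(h) for h in histories)
    return between, within

sim_between, sim_within = variability(simulated)
obs_between, obs_within = variability(observed)
# A simulator that understates year-to-year earnings mobility would show a
# within-person variance well below the observed one.
print(f"within-person variance: simulated {sim_within:.2f}, observed {obs_within:.2f}")
```

With these made-up numbers the two sources show similar between-person spread but the simulated histories are smoother year to year, which is the kind of discrepancy such a comparison is designed to surface.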
