A VALIDATION EXPERIMENT WITH TRIM2

The primary goals of the experiment were to (1) determine the resources required to conduct a sensitivity analysis and an external validation and, more generally, examine the feasibility of microsimulation model validation; (2) identify those modules that, when altered, have an appreciable effect on model outputs; (3) obtain an admittedly limited measure of model validity against comparison values from administrative records (limited because only one time period was examined and the comparison values used are subject to several sources of error); and (4) illustrate some of the data analysis techniques that analysts can use to help answer questions about the sources and magnitude of errors in projections from microsimulation models. In addition, we were interested in attempting to identify component alternatives that were superior to those currently used in TRIM2.

Other efforts to validate microsimulation models are presented in Cohen (Chapter 7 in this volume). Two closely related studies can be found in Kormendi and Meguire (1988) and Doyle and Trippe (1989).

OVERVIEW OF THE EXPERIMENT

The experiment was a combination of a sensitivity analysis and an external validation of TRIM2. The general idea was to use a previous year's TRIM2 database to predict various quantities associated with a program change put into effect in a later year. The sensitivity analysis was used to measure the variability of TRIM2's projections resulting from the specification of various components of the model. This was accomplished by using alternatives for the components in various combinations to generate projections associated with the program change. The variability of these projections provided evidence of their sensitivity to the use of the various combinations of component alternatives.

For the external validation, comparison values from the Integrated Quality Control System (IQCS) were used in a variety of ways to measure the correspondence between the comparison values and the projections resulting from various combinations of alternatives. Depending on the closeness of the comparison values to the true values, these correspondences may be used to rate or rank model versions defined by the choices for the various component alternatives.

Specification of the experiment involved identification of the year for the TRIM2 database, the year for the program that was instituted, the component modules to be examined and the alternatives used, and, finally, what is meant by a good model, which entails deciding which outputs in which form are important to predict well. As part of this decision, it is important to note that analysts
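The design just described lends itself to a compact computational summary: run the model once for every combination of module alternatives, summarize the spread of the resulting projections (the sensitivity analysis), and rank the versions by their distance from the administrative comparison values (the external validation). The sketch below is a minimal illustration of that logic only; it is not TRIM2 code, and the module names, alternatives, and numbers in it are hypothetical placeholders.

    # A minimal, purely illustrative sketch of the experimental logic described
    # above. Not TRIM2 code: MODULE_ALTERNATIVES, run_model, and the numeric
    # offsets are all hypothetical placeholders.
    from itertools import product
    from statistics import pstdev

    # Assumed: each module under study has a small set of alternative specifications.
    MODULE_ALTERNATIVES = {
        "participation": ["baseline", "alternative_a"],
        "income_imputation": ["baseline", "alternative_b"],
        "aging": ["static", "dynamic"],
    }

    def run_model(version):
        """Stand-in for a model run under one combination of module alternatives.

        Returns a toy projection so the sketch executes; a real run would execute
        the simulation and tabulate the output of interest (e.g., projected caseload).
        """
        base = 100.0
        offsets = {"alternative_a": 3.0, "alternative_b": -2.0, "dynamic": 1.5}
        return base + sum(offsets.get(choice, 0.0) for choice in version.values())

    def sensitivity_and_validation(comparison_value):
        # Enumerate every combination of module alternatives (a full factorial
        # design) and generate one projection per combination.
        versions = [dict(zip(MODULE_ALTERNATIVES, combo))
                    for combo in product(*MODULE_ALTERNATIVES.values())]
        projections = {tuple(sorted(v.items())): run_model(v) for v in versions}

        # Sensitivity: the spread of projections across combinations indicates how
        # much the choice of component alternatives moves the output.
        spread = pstdev(projections.values())

        # External validation: rank model versions by their distance from the
        # comparison value (informative only to the extent that the comparison
        # value approximates the truth).
        ranking = sorted(projections.items(),
                         key=lambda item: abs(item[1] - comparison_value))
        return spread, ranking

    spread, ranking = sensitivity_and_validation(comparison_value=101.0)
    print(f"spread across versions: {spread:.2f}")
    print("closest version:", dict(ranking[0][0]), "projection:", ranking[0][1])

In the actual experiment, of course, the discrepancy measure depends on which outputs, and in what form, the analyst judges important to predict well, a choice that the text notes is itself part of the specification of the experiment.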
