Suggested Citation:"Number of Replications Needed." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume II, Technical Papers. Washington, DC: The National Academies Press. doi: 10.17226/1853.
VARIANCE ESTIMATION OF MICROSIMULATION MODELS THROUGH SAMPLE REUSE

module to use. This is discussed above for the situation in which the modules can be identified through the choice of one or several parameters, and distributions for the "correctness" or plausibility of these parameters can be elicited. However, there are modeling situations in which there is no obvious sample space of alternatives, and even when a well-defined sample space exists, it may be extremely difficult to attach subjective probabilities of correctness to the various alternatives. In these situations it is common to use sensitivity analyses (running various alternative leading cases for the module) to develop some understanding of the variability resulting from the use of alternative modules. Clearly, this process can underestimate the variability, since not all possible alternatives may be included in the analysis; it can also overestimate the variability, since the module used in the microsimulation model might be much closer to the truth than the alternatives. The hope is that if the various alternatives are of similar plausibility (or nearly so, given the current state of knowledge), the resulting range of output estimates will provide some information about the uncertainty in the output that can be attributed to misspecification of that component.

In addition to assessing variability due to model misspecification, one could also use sensitivity analysis alone or together with the bootstrap to estimate uncertainty due to sampling variability of inputs from data sources, such as control totals, and the bias from errors or untimeliness in the primary database and secondary sources, such as undercoverage of the target population or misreporting of key variables.
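A minimal sketch of such a module sensitivity analysis follows. The toy model, the participation-cutoff modules, and all figures are invented for illustration; the point is only that running the same model under several equally plausible module specifications yields a range of output estimates.

```python
# Hypothetical sketch: swap alternative specifications of one module
# into a toy microsimulation and report the spread of the outputs.

def run_model(data, participation_module):
    """Toy microsimulation run: total benefits paid to participants."""
    return sum(benefit for income, benefit in data
               if participation_module(income))

# Three alternative (and, we assume, similarly plausible) versions of
# the participation module, differing only in an income cutoff.
modules = {
    "low cutoff":  lambda income: income < 10_000,
    "mid cutoff":  lambda income: income < 12_000,
    "high cutoff": lambda income: income < 15_000,
}

# Invented input records: (income, potential benefit) pairs.
data = [(8_000, 200), (11_000, 150), (14_000, 100), (20_000, 50)]

estimates = {name: run_model(data, mod) for name, mod in modules.items()}
spread = max(estimates.values()) - min(estimates.values())  # 450 - 200 = 250
```

The spread across alternatives is read as a rough indication of the uncertainty attributable to misspecification of that module, with the caveats noted above about under- and overestimation.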
For example, to assess uncertainty due to undercoverage, one would need to reweight the primary input data set to mimic the type of reweighting that might result from correcting undercoverage. This could be accomplished in several ways, creating several artificial data sets for input into the microsimulation model. If one were also interested in assessing variance due to sampling in the input data set, one could create such a family of artificial data sets for each bootstrap replication.

Number of Replications Needed

It is not clear how large K needs to be to apply bootstrapping to microsimulation models. There are examples in the literature in which as few as 10 replications have been used profitably (see, e.g., Diaconis and Efron, 1983); this might be all one could hope to compute with today's models in their current computing environments, ignoring such possibilities as embedding a statistical match within the bootstrap process. When one is interested in variance estimation (or, what amounts to roughly the same thing, 67 percent confidence intervals), Tibshirani (1985) and others indicate that a K of 50 is sufficient, and reasonable estimates could possibly be computed with even smaller values. This is currently difficult for some microsimulation models, but it is quite feasible for others that have been developed recently (see, e.g., Wolfson and Rowe, 1990).
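The two ideas above can be combined in a short sketch: draw K = 50 bootstrap replications (the value the text suggests suffices for variance estimation), and for each replication also create a reweighted artificial data set that mimics correcting undercoverage. The model function, the income distribution, and the reweighting rule are all invented for illustration.

```python
import random

random.seed(0)

# Invented primary input data set: (weight, income) records.
records = [(1.0, random.gauss(30_000, 8_000)) for _ in range(500)]

def run_model(sample):
    """Toy stand-in for a microsimulation run: weighted mean income."""
    total_w = sum(w for w, _ in sample)
    return sum(w * x for w, x in sample) / total_w

def reweight_for_undercoverage(sample, factor=1.3):
    """Upweight low-income records to mimic correcting undercoverage
    of that group (the cutoff and factor are hypothetical)."""
    return [(w * factor if x < 25_000 else w, x) for w, x in sample]

K = 50  # modest replication count, per the suggestion in the text
plain, reweighted = [], []
for _ in range(K):
    resample = [random.choice(records) for _ in records]  # bootstrap draw
    plain.append(run_model(resample))
    reweighted.append(run_model(reweight_for_undercoverage(resample)))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

sampling_var = variance(plain)  # uncertainty due to sampling in the input
coverage_shift = (sum(reweighted) - sum(plain)) / K  # shift from reweighting
```

Here `sampling_var` estimates the variance due to sampling in the input data set, while `coverage_shift` indicates how much the output moves under the undercoverage correction, a crude proxy for the associated bias.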

