Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume II, Technical Papers (1991)

Chapter: Use of Resampling Techniques in Constructing Confidence Intervals

Suggested Citation:"Use of Resampling Techniques in Constructing Confidence Intervals." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume II, Technical Papers. Washington, DC: The National Academies Press. doi: 10.17226/1853.
VARIANCE ESTIMATION OF MICROSIMULATION MODELS THROUGH SAMPLE REUSE

This need for replication is a reason to restructure microsimulation models so that replications are less expensive.

Use of Resampling Techniques in Constructing Confidence Intervals

A much more difficult problem than simple variance estimation is the construction of confidence intervals with high coverage probability, say 95 or 99 percent. For issues on which policy makers would like assurance that expenditures for a revised program will not exceed a particular amount, providing confidence intervals with 95 or 99 percent coverage is likely to require more replicates, probably in the hundreds. The number needed depends, however, on the approach used to generate the confidence intervals and on the assumptions one is comfortable making. (See Efron [1981] for an introduction to this discussion.) There are currently two approaches to this problem, although this is an active area of research and new developments can be expected.

First, if one is willing to assume that the output of interest follows an approximately normal distribution, adding and subtracting the usual critical values from the normal distribution times the estimated standard deviation should provide reasonable confidence intervals, with coverage probability corresponding to the critical values used. This approach is clearly feasible with a relatively small number of replications.

The second approach is the percentile method. The original idea was to use percentiles of the bootstrap replications as the confidence limits, with coverage probability corresponding to the percentiles used. Estimating these percentiles clearly requires a large number of replications. The percentile method has performed poorly in some situations, and various bias-correcting procedures have been proposed to remedy the problem.
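The two constructions can be sketched in a few lines of code. The following is a minimal illustration, not drawn from the chapter: it uses the sample mean of synthetic data as a stand-in for a microsimulation output, and the function and variable names are our own.

```python
import random
import statistics

def bootstrap_cis(data, stat=statistics.mean, n_boot=1000, alpha=0.05, seed=0):
    """Normal-approximation and percentile bootstrap intervals for `stat`."""
    rng = random.Random(seed)
    n = len(data)
    # Draw n_boot bootstrap replicates of the statistic and sort them.
    reps = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)
    )
    # First approach: assume approximate normality and use
    # point estimate +/- z * (bootstrap standard deviation).
    point = stat(data)
    sd = statistics.stdev(reps)
    z = 1.96  # standard normal critical value for 95 percent coverage
    normal_ci = (point - z * sd, point + z * sd)
    # Second approach (percentile method): read the limits
    # directly off the sorted replicates.
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return normal_ci, (lo, hi)

rng = random.Random(1)
data = [rng.gauss(100.0, 15.0) for _ in range(200)]
normal_ci, pct_ci = bootstrap_cis(data)
```

Note the asymmetry in cost: the percentile limits are resolved only to roughly the nearest sorted replicate, so they need many replications, while the normal-approximation interval needs only a stable standard deviation estimate.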
These bias-correcting procedures are all very costly in terms of replications.

If one is interested in providing confidence ellipsoids for estimates of more than one result from a microsimulation model, covariances can be estimated in the same way that variances are. However, the construction and use of confidence ellipsoids in two and higher dimensions have received very little investigation, so little is currently known about their performance.

Finally, Johns (1987) has presented methods that have been successful in reducing the necessary number of bootstrap replicates through importance sampling: the pseudosamples likely to contribute a good deal to the variability in the results are identified and then oversampled. In the case of microsimulation models, one could oversample, in the replication process, the groups most affected by the proposed changes in regulations. It is unclear to what extent this would reduce the number of replications needed to construct useful 95 or 99 percent confidence intervals.
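The importance-sampling idea can be illustrated with a small sketch; this is not Johns's actual algorithm, only the underlying device. Bootstrap observations are drawn with non-uniform probabilities that favor an "affected" group (here, simply the values above the sample mean), and each replicate is reweighted by its likelihood ratio so the estimate still targets the ordinary uniform-bootstrap quantity. The `boost` parameter and the above-the-mean thresholding rule are illustrative assumptions.

```python
import random

def tail_prob_importance(data, t, n_boot=2000, boost=3.0, seed=0):
    """Estimate the ordinary-bootstrap probability that a resample mean
    exceeds t, oversampling large observations and correcting with
    likelihood-ratio weights so the estimator stays unbiased for the
    uniform bootstrap."""
    rng = random.Random(seed)
    n = len(data)
    center = sum(data) / n
    # Proposal: observations above the sample mean are drawn
    # `boost` times as often as the rest.
    w = [boost if x > center else 1.0 for x in data]
    total = sum(w)
    q = [wi / total for wi in w]  # proposal probabilities
    p = 1.0 / n                   # uniform bootstrap probability
    est = 0.0
    for _ in range(n_boot):
        idx = rng.choices(range(n), weights=w, k=n)
        # Likelihood ratio of this pseudosample: uniform vs. proposal.
        lr = 1.0
        for i in idx:
            lr *= p / q[i]
        if sum(data[i] for i in idx) / n > t:
            est += lr
    return est / n_boot
```

The same device would apply to a microsimulation: replicate the households in the groups most affected by a proposed rule change more often, and carry the corresponding weights through to the tabulated results. Note that if the proposal probabilities are badly chosen, the likelihood-ratio weights become highly variable, which is why designing the oversampling scheme is the hard part of the method.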

This volume, second in the series, provides essential background material for policy analysts, researchers, statisticians, and others interested in the application of microsimulation techniques to develop estimates of the costs and population impacts of proposed changes in government policies ranging from welfare to retirement income to health care to taxes.

The material spans data inputs to models, design and computer implementation of models, validation of model outputs, and model documentation.
