

Appendix: Models, Uncertainty, and Confidence Intervals
Pages 89-96



From page 89...
... If a component is a regression model, the precise form of the regression model, the covariates to use, and the data set on which to estimate the regression coefficients may be arbitrary, at least to some extent. Examples of this appear in population projections and macroeconomic models, in which the selection of projected fertility and mortality rates in the former, or the selection of projected inflation rates and productivity indices in the latter, is somewhat arbitrary.
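As a small, hedged illustration of how such arbitrary specification choices propagate into outputs, the sketch below fits two equally defensible regression specifications to the same synthetic data and compares the resulting projections; the data, covariates, and projection point are made-up assumptions, not details from the report.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic data: an outcome driven by two covariates (assumed for illustration only).
    n = 300
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)
    y = 1.0 + 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)

    def fit_and_project(X, x_new):
        """Least-squares fit with an intercept on the outcome y, projected at x_new."""
        design = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.concatenate([[1.0], x_new]) @ coef

    # Two defensible specifications: covariate x1 only, or both x1 and x2.
    proj_a = fit_and_project(x1.reshape(-1, 1), np.array([1.5]))
    proj_b = fit_and_project(np.column_stack([x1, x2]), np.array([1.5, 1.2]))

    print(f"projection with x1 only:   {proj_a:.2f}")
    print(f"projection with x1 and x2: {proj_b:.2f}")

The gap between the two projections is a direct, if crude, measure of the specification uncertainty the passage describes.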
From page 90...
... The first two categories of uncertainty listed above (sampling variability and errors from imprecise estimation of other model inputs) are most easily estimated and summarized. These estimates are frequently labeled mean square error.
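For reference, the standard decomposition underlying this usage (general statistical background, not a quotation from the report): for an estimator \hat{\theta} of a quantity \theta,

    \mathrm{MSE}(\hat{\theta}) = E\bigl[(\hat{\theta} - \theta)^{2}\bigr]
                               = \mathrm{Var}(\hat{\theta}) + \bigl[\mathrm{Bias}(\hat{\theta})\bigr]^{2},

so that the summary combines a variance term with a squared bias term.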
From page 91...
... For simple models, mean square error is measured with standard techniques for variance estimation. For relatively complicated models, it has recently become possible to use nonparametric sample reuse techniques for this purpose. Available sample reuse techniques include the jackknife, bootstrap, balanced half-sample replication, and cross-validation; in particular, the bootstrap has shown good flexibility and utility (see Efron, 1979).
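A rough sketch of the bootstrap idea mentioned above, for estimating the variability of a model output: resample the data with replacement, refit the model on each resample, and take the spread of the resulting estimates as the variance estimate. The simple linear model, data, and number of replications are illustrative assumptions, not details taken from the report.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: one covariate x and a response y (assumed for the example).
    x = rng.normal(size=200)
    y = 2.0 + 1.5 * x + rng.normal(size=200)

    def model_output(x, y):
        """Fit a least-squares line and return the fitted slope as the model output."""
        design = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return coef[1]

    # Bootstrap: resample cases with replacement and refit the model each time.
    B = 1000
    n = len(y)
    boot_estimates = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        boot_estimates[b] = model_output(x[idx], y[idx])

    point_estimate = model_output(x, y)
    boot_se = boot_estimates.std(ddof=1)   # bootstrap standard error of the output
    print(f"estimate = {point_estimate:.3f}, bootstrap SE = {boot_se:.3f}")

The jackknife and balanced half-sample replication follow the same resample-and-refit pattern, differing only in how the replicate data sets are formed.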
From page 92...
... In terms of the above taxonomy, model outputs can be biased as a result of errors in an input data set or as a result of model misspecification, and therefore the root mean square error is often a better summary of the performance of a model's estimates than the standard deviation. External validation directly measures root mean square error (if there is a well-defined experiment) and is therefore directly useful in assessing an estimator's uncertainty.
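To make the contrast concrete, here is a minimal sketch (with assumed placeholder numbers, not figures from any validation study) of how the root mean square error from an external validation combines a bias component with ordinary variability, whereas the standard deviation of the errors reflects only the latter.

    import numpy as np

    # Assumed placeholders: model estimates and externally validated values for
    # the same quantities.
    estimates = np.array([102.0, 98.5, 105.2, 99.8, 103.1])
    validated = np.array([100.0, 97.0, 101.0, 98.0, 100.5])

    errors = estimates - validated
    bias = errors.mean()                   # systematic component
    sd = errors.std(ddof=1)                # spread around the mean error
    rmse = np.sqrt(np.mean(errors ** 2))   # equals sqrt(bias**2 + errors.var(ddof=0))

    print(f"bias = {bias:.2f}, sd = {sd:.2f}, rmse = {rmse:.2f}")

When the model is noticeably biased, the root mean square error exceeds the standard deviation, which is the sense in which it is the more honest summary.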
From page 93...
... In addition, even when information about the nonsampling error components can be quantified roughly through the use of sensitivity analysis, it is often impossible to incorporate this information into an unconditional confidence interval with a known coverage probability. The problem, therefore, is how to present unsophisticated users with information that combines different levels of probabilistic rigor.
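One hedged reading of the sensitivity-analysis point, as a sketch: vary a nonsampling input over a plausible range, rerun the model, and report the resulting spread next to the ordinary sampling-based interval, while being explicit that only the latter carries a coverage probability. The undercoverage adjustment factor, its range, and the toy estimate below are assumptions invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy survey-based estimate with a conventional sampling-based interval.
    sample = rng.normal(loc=50.0, scale=10.0, size=400)
    estimate = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(sample.size)
    sampling_interval = (estimate - 2 * se, estimate + 2 * se)   # approx. 95% coverage

    # Sensitivity analysis over a hypothetical nonsampling input: an undercoverage
    # adjustment factor whose true value is unknown but thought to lie in [1.00, 1.06].
    adjustments = np.linspace(1.00, 1.06, 7)
    adjusted_outputs = estimate * adjustments
    sensitivity_range = (adjusted_outputs.min(), adjusted_outputs.max())

    print("sampling-based interval (probabilistic):", sampling_interval)
    print("sensitivity range (no coverage probability):", sensitivity_range)

Presenting both numbers side by side, with their different levels of rigor labeled, is one way to handle the problem the passage raises.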
From page 94...
... Thus, even in the uncommon situation in which policy analysts can make use of external validation ... In addition, the extent to which the estimates in the validation study follow an approximately normal distribution remains unknown, making the usual confidence interval formed by adding and subtracting twice the root mean square error somewhat suspect.
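As a sketch of why the normality caveat matters, the code below forms the usual interval by adding and subtracting twice the root mean square error and then checks, on assumed (deliberately skewed) validation errors, how often the validated value would actually fall inside such an interval; the empirical figure need not match the nominal roughly 95 percent. The error distribution and the point estimate are placeholders.

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed validation errors (estimate minus validated value), drawn from a
    # skewed distribution to illustrate the check; replace with real errors.
    errors = rng.exponential(scale=1.0, size=500) - 1.0

    rmse = np.sqrt(np.mean(errors ** 2))
    point_estimate = 100.0                                  # hypothetical model output
    interval = (point_estimate - 2 * rmse, point_estimate + 2 * rmse)

    # Empirical check: fraction of errors no larger in magnitude than 2 * RMSE,
    # i.e., how often an interval of this form would cover the validated value.
    coverage = np.mean(np.abs(errors) <= 2 * rmse)
    print(f"interval: ({interval[0]:.1f}, {interval[1]:.1f}), empirical coverage: {coverage:.2%}")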
From page 95...
... It is generally impossible to provide range 6 with a rigorous estimate of coverage probability, but it might admit of a probabilistic interpretation if a probabilistic model for the various macroeconomic forecasts could be developed. (Range 6 also might be considered by some to represent root mean square error, with contributions of variance from sampling variability and bias from use of incorrect macroeconomic forecasts.)
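The parenthetical suggests reading range 6 as a root mean square error with a variance contribution from sampling and a bias contribution from possibly incorrect macroeconomic forecasts. The sketch below shows one hedged way such a figure could be assembled if alternative forecast scenarios were available; the baseline output, scenario outputs, and sampling standard error are placeholder numbers, not values from the report.

    import numpy as np

    # Assumed placeholders: output under the baseline macroeconomic forecast,
    # outputs under alternative forecast scenarios, and a sampling standard error.
    baseline_output = 250.0
    scenario_outputs = np.array([242.0, 247.0, 255.0, 261.0])
    sampling_se = 3.0

    # Treat the squared spread of scenario outputs around the baseline as the bias
    # contribution and add the sampling variance, in the spirit of a mean square error.
    scenario_bias_sq = np.mean((scenario_outputs - baseline_output) ** 2)
    rmse_like = np.sqrt(sampling_se ** 2 + scenario_bias_sq)

    print(f"RMSE-like summary for a range-6-style interval: {rmse_like:.1f}")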
From page 96...
... In addition, range 8 cannot be given an associated coverage probability, which greatly weakens its utility. However, it does provide an indication of the amount of uncertainty due to sampling variability, macroeconomic forecasts, imputation routines, and modeling approaches.
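In the same spirit, and purely as a hypothetical sketch, a range like range 8 could be tabulated by rerunning the model under every combination of macroeconomic scenario, imputation routine, and modeling approach, attaching sampling variability to each run, and reporting the overall spread. As the passage notes, no coverage probability attaches to the result; every concrete element below (the alternative lists and the run_model placeholder) is invented for illustration.

    import itertools

    # Hypothetical alternatives for each nonsampling source of uncertainty.
    macro_scenarios = ["low growth", "baseline", "high growth"]
    imputation_routines = ["hot deck", "regression imputation"]
    modeling_approaches = ["cell-based", "microsimulation"]

    def run_model(macro, imputation, approach):
        """Placeholder for a real model run; returns an output and its sampling SE."""
        output = (250.0
                  + 5.0 * macro_scenarios.index(macro)
                  - 2.0 * imputation_routines.index(imputation)
                  + 3.0 * modeling_approaches.index(approach))
        return output, 3.0

    bounds = []
    for macro, imp, appr in itertools.product(macro_scenarios, imputation_routines,
                                              modeling_approaches):
        output, se = run_model(macro, imp, appr)
        bounds.append((output - 2 * se, output + 2 * se))

    lower = min(b[0] for b in bounds)
    upper = max(b[1] for b in bounds)
    print(f"range-8-style spread (no coverage probability): ({lower:.1f}, {upper:.1f})")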

