After-the-fact process evaluations can suggest important areas for research to improve operations in future censuses.

Quality Indicators

Having effective real-time quality control and assurance procedures in place for key census operations will contribute to but not ensure high-quality data. First, processes may have been well executed but poorly designed to achieve their goal. Furthermore, even when well-designed operations are carried out as planned, respondent errors from such sources as nonreporting, misreporting, and variability in reporting may result in poor-quality data. As signals of possible problems that require further research, it is common to construct and review a variety of census quality indicators, such as mail return rates, household (unit) nonresponse rates, item nonresponse rates, inconsistency in reporting of specific items, and estimates of duplicates in and omissions from the census count. For census long-form-sample estimates, it is also important to look at variability from sampling and imputation for missing data.
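To make the arithmetic behind two of these indicators concrete, the following is a minimal illustrative sketch (not Census Bureau code): it computes a unit (household) nonresponse rate and an item nonresponse rate from hypothetical household records, where the field names and eligibility flags are assumptions for the example.

```python
# Hypothetical record layout: each housing unit is a dict with an
# eligibility flag, a response flag, and (for responders) item values.

def unit_nonresponse_rate(households):
    """Fraction of eligible housing units with no usable response."""
    eligible = [h for h in households if h["eligible"]]
    nonresponding = [h for h in eligible if not h["responded"]]
    return len(nonresponding) / len(eligible)

def item_nonresponse_rate(households, item):
    """Among responding eligible units, fraction missing a given item."""
    responders = [h for h in households if h["eligible"] and h["responded"]]
    missing = [h for h in responders if h.get(item) is None]
    return len(missing) / len(responders)

# Hypothetical data: five units -- one ineligible, one eligible
# nonrespondent, and one responder missing the "income" item.
sample = [
    {"eligible": True,  "responded": True,  "income": 41000},
    {"eligible": True,  "responded": True,  "income": None},
    {"eligible": True,  "responded": True,  "income": 58000},
    {"eligible": True,  "responded": False},
    {"eligible": False, "responded": False},
]

print(unit_nonresponse_rate(sample))            # 1 of 4 eligible units
print(item_nonresponse_rate(sample, "income"))  # 1 of 3 responders
```

Real census indicators involve many definitional choices this sketch glosses over, such as which units count as eligible, what counts as a usable response, and how proxy or partial responses are treated; those choices are exactly why, as the text notes, each measure must itself be evaluated and refined.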

Each of these quality measures must itself be evaluated and refined. For example, the initial measure of duplicates and other erroneous enumerations from the Census Bureau’s 2000 Accuracy and Coverage Evaluation (A.C.E.) Program—issued in March 2001—turned out to be substantially lower than a revised estimate released in October 2001, which in turn was lower than the final estimate issued in March 2003. As another example, it is generally assumed that a high nonresponse rate to a questionnaire item impairs data quality. This rests on a twofold assumption: that the values reported by respondents are likely to differ from those the nonrespondents would have reported, and that imputations for missing responses will tend to reflect respondents’ values more than the true values for nonrespondents (as well as add variability). While often valid, neither part of this assumption is always or necessarily true (see Groves and Couper, 2002).

Quality indicators (and process evaluations) also need a standard for comparison—for example, what is a “high” versus a “low” nonresponse rate? Such standards are commonly drawn from prior censuses; they may also be drawn from other surveys and admin-

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001