increases, and vice versa. (See Box 2-5 for explanations of statistical terms associated with the measurement of sampling error.)

  • Other elements of a survey can increase the imprecision of estimates, including variability introduced by respondents, interviewers, and coders, and by procedures to impute values for missing responses. Errors arising from such other sources are referred to as nonsampling errors. For example, the same respondent may give different answers to a question about income or race when interviewed more than once because of random factors, such as how the respondent interprets the question. For estimates of totals from large-scale surveys, these other sources of variance may contribute much more to imprecision than sampling variance.

  • The estimates from a survey may also differ systematically from the true value for any number of reasons; such systematic differences are referred to as bias. Nonsampling error sources are a frequent cause of bias.

  • Sources of bias include that the question wording elicits responses that differ from the construct intended by the survey designer; that respondents consistently overestimate or underestimate the true value (for example, the amount of their income last year); that imputation and weighting adjustment procedures may not compensate adequately for nonresponse and noncoverage; and that the weighting adjustment controls used to correct for coverage errors are inaccurate for certain areas and population groups.

Some variability and bias in survey estimates are inevitable (bias and some sources of variability also affect censuses). The challenge for users, with the help of methodologists, is to understand the extent and nature of sampling and nonsampling errors in survey estimates well enough to assess the utility of the estimates for the user’s purpose and to identify possible strategies for ameliorating the effects of these errors on survey inferences.
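The distinction drawn above between sampling variability (estimates scatter around the true value) and bias (estimates center away from the true value) can be illustrated with a small Monte Carlo sketch. The population, sample sizes, and the 10 percent income underreporting factor below are purely hypothetical, chosen only to make the two kinds of error visible; they do not come from any actual survey.

```python
import random
import statistics

random.seed(12345)

# Hypothetical population of 100,000 incomes; all numbers are
# illustrative, not drawn from any real survey or census.
population = [random.lognormvariate(10.5, 0.6) for _ in range(100_000)]
true_mean = statistics.mean(population)

def survey_estimate(sample_size, underreport_factor=1.0):
    """Mean income estimated from one simple random sample.

    underreport_factor < 1 models a nonsampling (response) error in
    which respondents systematically understate their income.
    """
    sample = random.sample(population, sample_size)
    return statistics.mean(x * underreport_factor for x in sample)

# Repeat the survey many times to see the two kinds of error.
unbiased = [survey_estimate(1_000) for _ in range(200)]
biased = [survey_estimate(1_000, underreport_factor=0.9) for _ in range(200)]

# Sampling error: the unbiased estimates vary around the true mean.
print(f"true mean                  : {true_mean:,.0f}")
print(f"spread of unbiased estimates: {statistics.stdev(unbiased):,.0f}")
# Bias: the biased estimates also vary, but center about 10% too low,
# and drawing larger samples would not remove that systematic shortfall.
print(f"average biased estimate     : {statistics.mean(biased):,.0f}")
```

Averaging the unbiased replications recovers the true mean to within sampling noise, while the replications with simulated underreporting cluster near 90 percent of the true mean no matter how many samples are drawn, which is why bias, unlike sampling variance, cannot be reduced simply by enlarging the sample.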

tween the two surveys, as well as a few differences. These comparisons were performed for the nation as a whole and for individual counties in the ACS test sites, which were oversampled relative to the other C2SS counties. The finding of consistency between estimates from the C2SS and the long-form sample cannot prove that the C2SS estimates are unbiased. Consistency, however, does offer reassurance that the C2SS—and, by extension, the ACS—are measuring items in the same way.

The highlights of the overall and individual item evaluations of the C2SS compared with the 2000 long-form sample are summarized below; the complete findings are available in seven reports issued by the Census Bureau (U.S. Census Bureau, 2002b, 2004a-f; see also National Research Council, 2004b:Ch. 7; Schneider, 2004).
