discussion of respondent burden and the relationship of real and perceived burden with the willingness to take part in surveys. Several of the methods we discuss in detail, such as matrix sampling or greater reliance on administrative records, represent attempts to greatly reduce the burden on respondents.

We then discuss several approaches that are being taken or have been proposed to increase survey response rates. The first group of approaches involves sampling procedures—respondent-driven sampling (RDS), matrix sampling, and address-based sampling (ABS)—that may have implications for response rates. Other approaches are aimed at increasing our understanding of the conditions and motivations underlying nonresponse; changing the interaction of interviewer and respondent; making better use of information collected in the survey process to adjust the collection strategy in an attempt to achieve higher response rates, lower costs, or both; using other data sources (e.g., transaction data and administrative data) as strategies to reduce burden; and using mixed-mode methods of data collection.


It is widely accepted that nonresponse is, at least in part, related to the perceived burden of taking part in a survey. It is less clear how to define and measure burden. Two flawed but widely used indicators of burden are the number of questions in the survey and the average time taken by respondents to complete those questions. The notion that the time used in responding is directly related to burden seems to be the working principle behind the federal government’s Paperwork Reduction Act. This act requires the computation of burden hours for proposed federal data collections and has provisions that encourage limiting those burden hours. The use of a time-to-complete measure (in hours) for response burden is fairly widespread among the national statistical agencies (Hedlin et al., 2005, pp. 3–7).

Which factors enter the calculation of burden hours is an important consideration. Burden could relate only to the actual time spent completing the instrument, but it could also take into account the time respondents need to collect relevant information before the interviewer arrives (for example, in keeping diaries) and any time after the interview is completed. For example, the time incurred when respondents are re-contacted to validate data could also be counted. Without these additions, a measure that uses administration time or total respondent time per interview as a metric for burden is clearly problematic.
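The arithmetic at issue can be made concrete. The sketch below, with purely illustrative figures and a hypothetical function name, shows how a burden-hours estimate changes depending on whether preparation time (e.g., diary keeping) and post-interview follow-up are counted alongside administration time, which is the crux of the measurement problem described above.

```python
# Hypothetical sketch of burden-hours accounting in the spirit of the
# Paperwork Reduction Act. All figures are illustrative, not from any
# actual federal collection.

def burden_hours(n_respondents, prep_hours=0.0, admin_hours=0.0,
                 followup_hours=0.0):
    """Total burden hours: per-respondent time (preparation before the
    interview, the interview itself, and post-interview follow-up such
    as validation re-contacts) scaled by the number of respondents."""
    per_respondent = prep_hours + admin_hours + followup_hours
    return n_respondents * per_respondent

# Counting only administration time understates burden when preparation
# and validation re-contacts are substantial.
admin_only = burden_hours(10_000, admin_hours=0.5)   # 5,000 hours
full = burden_hours(10_000, prep_hours=0.25,
                    admin_hours=0.5,
                    followup_hours=0.1)              # 8,500 hours
```

Under these assumed figures, the administration-only measure captures well under two-thirds of the total respondent time, illustrating why a time-per-interview metric alone is problematic.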

Bradburn (1978) suggested that the definition of respondent burden should include four elements: interview length, required respondent effort, respondent stress, and the frequency of being interviewed. The effort

Copyright © National Academy of Sciences. All rights reserved.