Summary

For many household surveys in the United States, response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries. Efforts to raise response rates have used such strategies as monetary incentives or repeated attempts to contact sample members and obtain completed interviews, but these strategies increase the costs of surveys.

This review addresses the core issues regarding survey nonresponse. It considers why response rates are declining and what that means for the accuracy of survey results. These trends are of particular concern for the social science community, which is heavily invested in obtaining information from household surveys. The evidence to date makes it apparent that current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create a risk of bias in estimates and affect survey design, data collection, estimation, and analysis.

The survey community is painfully aware of these trends and has responded aggressively to these threats. The interview modes employed by surveys in the public and private sectors have proliferated as new technologies and methods have emerged and matured. To the traditional trio of mail, telephone, and face-to-face surveys have been added interactive voice response (IVR), audio computer-assisted self-interviewing (ACASI), Web surveys, and a number of hybrid methods. Similarly, a growing research agenda has emerged in the past decade or so focused on seeking solutions



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





to various aspects of the problem of survey nonresponse; the potential solutions that have been considered range from better training and deployment of interviewers to more use of incentives, better use of the information collected in the data collection, and increased use of auxiliary information from other sources in survey design and data collection. In addition, considerable effort has gone into developing weighting adjustments and adjustment models to compensate for the effects of nonresponse.

This report also documents the increased use of information collected in the survey process (paradata) in nonresponse adjustment. Some of this work is in early stages, while other work is more advanced. Two relatively new indicators of the nature and extent of nonresponse bias, representativity and balance indicators, may assist in directing focus on the core of the problem in ways that the traditional measures, such as overall nonresponse rates, cannot.

Several approaches to increasing survey response are being taken or have been proposed. Some of these approaches are aimed at increasing general knowledge about the conditions and motivations underlying response and nonresponse; others are focused on identifying techniques that change the interaction of interviewer and respondent or that could motivate respondents; still others employ paradata to identify possible survey design and management techniques that can be used to positively adjust the collection strategy to minimize the level or effects of nonresponse. As part of these efforts, survey researchers are enriching auxiliary information for both the reduction of nonresponse and adjustment for it, exploring matrix sampling ("planned missingness") and other strategies to reduce burden, exploring mixed-mode alternatives for data collection, and deploying responsive or adaptive designs.
The research agenda proposed in this report is needed to develop even better approaches to improve survey response and to improve our ability to use the data for analytical purposes even when response rates cannot be efficiently improved. The agenda should be multifaceted. In these times of increasingly constrained human and financial resources in the social science survey community, this agenda must be mindful of both costs and benefits. Based on the panel’s assessment of the state of knowledge about the problem of nonresponse in social surveys, the report suggests several key research areas in which the statistical community could fruitfully invest resources. Some of the recommended agenda items are designed to further advance our knowledge of the scope and extent of the problem, others to enhance our understanding of the relationship between response rates and bias, and still others to improve our ability to address the problems that come with declining response rates. The recommendations for research include basic research that would

help define the problem, develop appropriate measures, and expand our understanding of the scope and extent of the problem, such as:

• Research on people's general attitudes toward surveys and on whether these have changed over time.
• Research about why people take part in surveys and the factors that motivate them to participate.
• Research to identify the person-level and societal variables that have created the downward trend in response rates, taking into account changes in technology, communication patterns, and survey administration.

As a part of a research program that would illuminate why people take part in surveys, research is needed to clarify the factors that provide positive motivation (such as incentives) as well as those that provide pressure to participate. As specific examples:

• Research on the overall level of burden from survey requests and on the role that burden plays in an individual's decision whether to participate in a specific survey.
• Research on the different factors affecting contact and cooperation rates. In an era when more and more people are taking steps to limit their accessibility, research is needed on whether the distinction between contact and cooperation is still useful to maintain.

It is well documented that the increase in nonresponse has led to increasing costs of conducting surveys. But cost measures are not standardized and are hard to come by. Research is needed on:

• The cost implications of nonresponse and how to capture cost data in a standardized way.

Likewise, it is important to periodically challenge the fundamentals that underlie our understanding of the statistical nature of nonresponse control and adjustment. This calls for a variety of research initiatives, including:

• Research on the theoretical limits of what nonresponse adjustments can achieve, given low correlations with survey variables, measurement errors, missing data, and other problems with the covariates.
• Research on and development of new indicators for the impact of nonresponse, including application of the alternative indicators to real surveys to determine how well the indicators work.

• Research on understanding mode effects, including ways in which mixed-mode designs affect both nonresponse and measurement errors and the impact of modes on reliability and validity.

The panel notes that there has been increasing appreciation of the role of nonresponse bias, but this only draws attention to the lack of a comprehensive theory of nonresponse bias. A more comprehensive theory would help further a basic understanding of the relationship between response rates and nonresponse bias, enhance the understanding of such bias, and aid in the development of adjustment techniques to deal with bias under differing circumstances. A unifying theory would ensure that comparisons of nonresponse bias in different situations would lead to the development of standard nomenclatures and approaches to the problem. To assist in the development of such a theory, the report suggests a need for:

• Research on the relationship between nonresponse rates and nonresponse bias and on the variables that determine when such a relationship is likely.
• Research to examine both unit and item nonresponse bias and to develop models of the relationship between nonresponse rates and bias.
• Research on the impact of nonresponse reduction on other error sources, such as measurement error.
• Research to quantify the role that nonresponse error plays as an overall component of total survey error.
• Research on the differential effects of incentives offered to respondents (and interviewers) and the extent to which incentives affect nonresponse bias.

Finally, research is needed to identify those plans, policies, and procedures that would assist in overcoming the problem:

• Research to establish, empirically, the cost–error trade-offs in the use of incentives and other tools to reduce nonresponse.
• Research on the nature (mode of contact, content) and the effects of the contacts that people receive over the course of a survey, based on data captured in the survey process.
• Research leading to the development of minimal standards for call records and similar data in order to improve the management of data collection, increase response rates, and reduce nonresponse errors.
• Research on the structure and content of interviewer training as well as on the value of continued coaching of interviewers. Where possible, support should be given to experiments designed to identify the most effective techniques.

• Research to improve the modeling of response as well as to improve methods to determine whether data are missing at random.
• Research on the use of auxiliary data for weighting adjustments, including whether weighting can make estimates worse (i.e., increase bias) and whether traditional weighting approaches inflate the variance of the estimates.
• Research to assist in understanding the impacts of adjustment procedures on estimates other than means, proportions, and totals.
• Research on how best to make a switch from the telephone survey mode (and frame) to mail, including how to ensure that the right person completes a mail survey.
• Research on the theory and practice of responsive design, including its effects on nonresponse bias, information requirements for its implementation, types of surveys for which it is most appropriate, and variance implications.
• Research on the availability, quality, and application of administrative records to augment (or replace) survey data collections.
• Research to determine the capability of information gathered by mining the Internet to augment (or replace) official survey statistics.
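The concern in the list above that weighting can inflate variance is often quantified with Kish's classical approximation, deff_w = 1 + cv^2(w) = n * sum(w_i^2) / (sum(w_i))^2, where cv(w) is the coefficient of variation of the weights. The sketch below is illustrative only and is not from the report; the weights are invented.

```python
# Illustrative sketch (not from the report): Kish's approximation for
# the variance inflation caused by unequal weights.
# deff_w = n * sum(w_i^2) / (sum(w_i))^2; it equals 1 + cv^2(w).

def kish_deff(weights):
    """Weight-based design effect; 1.0 means no variance inflation."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

equal = [1.0] * 8
unequal = [0.5, 0.5, 1.0, 1.0, 1.5, 1.5, 3.0, 3.0]  # invented weights

print(kish_deff(equal))    # 1.0: equal weights cost nothing
print(kish_deff(unequal))  # > 1: unequal weights inflate variance

# The effective sample size shrinks by the same factor: n_eff = n / deff_w.
print(len(unequal) / kish_deff(unequal))
```

This is why large nonresponse adjustments face a trade-off: the weights may reduce bias, but the more variable they are, the smaller the effective sample becomes.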
