Prepublication Copy — Uncorrected Proofs

Summary

For many household surveys in the United States, response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries. Efforts to raise response rates have used such strategies as monetary incentives or repeated attempts to contact sample members and obtain completed interviews, but these strategies increase the costs of surveys.

This review addresses the core issues regarding survey nonresponse. It considers why response rates are declining and what that means for the accuracy of survey results. These trends are of particular concern for the social science community, which is heavily invested in obtaining information from household surveys. The evidence to date makes it apparent that current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create the risk of bias in estimates and affect survey design, data collection, estimation, and analysis.

The survey community is painfully aware of these trends and has responded aggressively to these threats. The interview modes employed by surveys in the public and private sectors have proliferated as new technologies and methods have emerged and matured. To the traditional trio of mail, telephone, and face-to-face surveys have been added interactive voice response (IVR), audio computer-assisted self-interviewing (ACASI), web surveys, and a number of hybrid methods.
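The risk that nonresponse poses to estimates can be made concrete with the classic deterministic decomposition of the bias of an unadjusted respondent mean: the bias equals the nonrespondent share of the population times the difference between respondent and nonrespondent means. The sketch below illustrates this with hypothetical numbers (the function name and values are ours, not the report's):

```python
def nonresponse_bias(resp_mean, nonresp_mean, nonresp_share):
    """Deterministic nonresponse bias of the unadjusted respondent mean:
    bias = W_m * (Y_r - Y_m), where W_m is the nonrespondent share and
    Y_r, Y_m are the respondent and nonrespondent means."""
    return nonresp_share * (resp_mean - nonresp_mean)

# Hypothetical illustration: 40% nonresponse; respondents average 52.0
# on some survey item, nonrespondents 48.0, so the respondent mean
# overstates the population mean by 0.4 * (52.0 - 48.0) = 1.6.
print(nonresponse_bias(52.0, 48.0, 0.40))
```

Note that the bias vanishes when respondents and nonrespondents do not differ on the item, regardless of the response rate — one reason the rate alone is an imperfect proxy for bias.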
Similarly, a growing research agenda has emerged in the past decade or so focused on seeking solutions to various aspects of the problem of survey nonresponse; the potential solutions that have been considered range from better training and deployment of interviewers to more use of incentives, better use of the information collected in the data collection, and increased use of auxiliary information from other sources in survey design and data collection. In addition, considerable effort has gone into developing weighting adjustments and adjustment models to compensate for the effects of nonresponse. This report also documents the increased use of information collected in the survey process (paradata) in nonresponse adjustment. Some of this work is in early stages, while other work is more advanced. Two relatively new indicators of the nature and extent of nonresponse bias—representativity and balance indicators—may assist in directing focus on the core of the problem in ways that the traditional measures, such as overall nonresponse rates, cannot.

Several approaches to increasing survey response are being taken or have been proposed. Some of these approaches are aimed at increasing general knowledge about the conditions and motivations underlying response and nonresponse; others are focused on identifying techniques that change the interaction of interviewer and respondent or that could incentivize respondent behavior; still others employ paradata to identify possible survey design and management techniques that can be used to positively adjust the collection strategy to minimize the level or effects of nonresponse. As part of these efforts, survey researchers are enriching auxiliary information for both the reduction of nonresponse and adjustment for it, exploring matrix sampling (“planned missingness”) and other strategies to reduce burden, exploring mixed-mode alternatives for data collection, and deploying responsive or adaptive designs.
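One published form of the representativity indicator mentioned above (the R-indicator of Schouten, Cobben, and Bethlehem) is R = 1 − 2·S(ρ), where S(ρ) is the standard deviation of the estimated response propensities: when every sample member is equally likely to respond, R = 1, and growing spread in propensities pushes R toward 0. A minimal sketch, assuming propensities have already been estimated (e.g., from a response model on auxiliary data):

```python
import statistics

def r_indicator(propensities):
    """R-indicator: R = 1 - 2 * S(rho), where S(rho) is the standard
    deviation of estimated response propensities. R = 1 indicates a
    response set that is representative with respect to the model's
    covariates; lower values signal imbalance."""
    return 1.0 - 2.0 * statistics.pstdev(propensities)

# Equal propensities -> fully representative response:
print(r_indicator([0.5, 0.5, 0.5, 0.5, 0.5]))  # 1.0
# Spread-out propensities (hypothetical subgroup estimates) -> lower R:
print(r_indicator([0.2, 0.4, 0.5, 0.6, 0.8]))  # 0.6
```

Unlike an overall response rate, this measure reacts to *variation* in response behavior across subgroups, which is what drives bias.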
The research agenda proposed in this report is needed to develop even better approaches to improving survey response and to improving our ability to use the data for analytical purposes even when response rates cannot be efficiently improved. The agenda should be multifaceted. In these times of increasingly constrained human and financial resources in the social science survey community, this agenda must be mindful of both costs and benefits. Based on the panel’s assessment of the state of knowledge about the problem of nonresponse in social surveys, the report suggests several key research areas in which the statistical community could fruitfully invest resources. Some of the recommended agenda items are designed to further advance our knowledge of the scope and extent of the problem, others to enhance our understanding of the relationship between response rates and bias, and still others to improve our ability to address the problems that come with declining response rates.

The recommendations for research include basic research that would help define the problem, develop appropriate measures, and expand our understanding of the scope and extent of the problem, such as:

• Research on people’s general attitudes toward surveys and on whether these have changed over time.
• Research about why people take part in surveys and the factors that motivate them to participate.
• Research to identify the person-level and societal variables that have created the downward trend in response rates, taking into account changes in technology, communication patterns, and survey administration (including interviewer variables, where relevant).

As a part of a research program that would illuminate why people take part in surveys, research is needed to clarify the factors that provide positive motivation (such as incentives) as well as those that provide pressure to participate. Specific examples include:

• Research on the overall level of burden from survey requests and on the role that burden plays in an individual’s decision whether to participate in a specific survey.
• Research on the different factors affecting contact and cooperation rates. In an era when more and more people are taking steps to limit their accessibility, research is needed on whether the distinction between contact and cooperation is still useful to maintain.

It is well documented that the increase in nonresponse has led to increasing costs of conducting surveys. But cost measures are not standardized and are hard to come by. Research is needed on:

• The cost implications of nonresponse and how to capture cost data in a standardized way.

Likewise, it is important to periodically challenge the fundamentals that underlie our understanding of the statistical nature of nonresponse control and adjustment. This calls for a variety of research initiatives, including:

• Research on the theoretical limits of what nonresponse adjustments can achieve, given low correlations with survey variables, measurement errors, missing data, and other problems with the covariates.
• Research on and development of new indicators for the impact of nonresponse, including application of the alternative indicators to real surveys to determine how well the indicators work.
• Research on understanding mode effects, including their impact on reliability and validity.

The panel notes that there has been increasing appreciation of the role of nonresponse bias, but this only draws attention to the lack of a comprehensive statistical theory of nonresponse bias. A more comprehensive statistical theory would help further a basic understanding of the relationship between response rates and nonresponse bias, enhance the understanding of such bias, and aid in the development of adjustment techniques to deal with bias under differing circumstances. A unifying theory would ensure that comparisons of nonresponse bias in different situations would lead to the development of standard nomenclatures and approaches to the problem. To assist in the development of such a theory, the report recommends:

• Research on the relationship between nonresponse rates and nonresponse bias and on the variables that determine when such a relationship is likely.
• Research to test both unit and item nonresponse bias and to develop models of the relationship between rates and bias.
• Research on the impact of nonresponse reduction on other error sources, such as measurement error.
• Research to quantify the role that nonresponse error plays as an overall component of total survey error.
• Research on the differential effects of incentives offered to respondents (and interviewers) and the extent to which incentives affect nonresponse bias.

Finally, research that is needed to identify those plans, policies, and procedures that would assist in overcoming the problem includes:

• Research to establish, empirically, the cost–error tradeoffs in the use of incentives and other tools to reduce nonresponse.
• Research on the nature (mode of contact, content) of the contacts that people receive over the course of a survey, based on data captured in the survey process.
• Research leading to the development of minimal standards for call records and similar data in order to improve the management of data collection, increase response rates, and reduce nonresponse errors.
• Research on the structure and content of interviewer training as well as on the value of continued coaching of interviewers. Where possible, support should be given to experiments designed to identify the most effective techniques.
• Research to improve the modeling of response as well as to improve methods to determine whether data are missing at random.
• More research on the use of auxiliary data for weighting adjustments, including whether weighting can make an estimate worse (i.e., increase bias) and whether traditional weighting approaches overly inflate the variance of the estimates.
• Research to assist in understanding the impacts of adjustment procedures on estimates other than means, proportions, and totals.
• Research on how best to make a switch from the telephone survey mode (and frame) to mail, including how to ensure the right person completes a mail survey.
• Research on the theory and practice of responsive design, including its effect on nonresponse bias, information requirements for its implementation, types of surveys for which it is most appropriate, and variance implications.
• Research on the availability, quality, and application of administrative records to augment (or replace) survey data collections.
• Research to determine the capability of information gathered by mining the Internet to augment (or replace) official survey statistics.
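Several of the recommendations above concern weighting adjustments for nonresponse. The most common such adjustment in practice is the weighting-class (cell) adjustment: within each cell defined by auxiliary data available for the whole sample, respondents' base weights are inflated by the inverse of the cell's weighted response rate, so respondent weights sum to the full-sample weight total in every cell. A minimal sketch (the data layout and function name are ours, not any agency's production method):

```python
from collections import defaultdict

def weighting_class_adjustment(sample):
    """Weighting-class nonresponse adjustment: multiply each
    respondent's base weight by the cell's full-sample weight total
    divided by the cell's respondent weight total."""
    cell_total = defaultdict(float)  # base-weight sum, all sampled units
    cell_resp = defaultdict(float)   # base-weight sum, respondents only
    for unit in sample:
        cell_total[unit["cell"]] += unit["base_weight"]
        if unit["responded"]:
            cell_resp[unit["cell"]] += unit["base_weight"]
    return [
        {**u, "weight": u["base_weight"] * cell_total[u["cell"]] / cell_resp[u["cell"]]}
        for u in sample
        if u["responded"]
    ]

# Hypothetical sample: in cell "A" one of two equally weighted units
# responded, so its weight doubles; cell "B" responded fully.
sample = [
    {"cell": "A", "base_weight": 1.0, "responded": True},
    {"cell": "A", "base_weight": 1.0, "responded": False},
    {"cell": "B", "base_weight": 1.0, "responded": True},
]
adjusted = weighting_class_adjustment(sample)
print([u["weight"] for u in adjusted])  # [2.0, 1.0]
```

This also makes concrete the concern in the recommendations above: when response within a cell is sparse, the adjustment factor becomes large and variable, which is exactly how weighting can inflate the variance of estimates — and, if cells correlate poorly with the survey variables, the adjustment can leave bias in place.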