This report documents the rich set of extant findings about the causes, consequences, and remedies for the general decline in survey response throughout the developed world. This decline represents a growing threat to the quality of social science data in the United States and elsewhere. This report also identifies a number of gaps in that knowledge and promising paths to advance the state of the science and develop more effective remedies.
In the various sections of this report, the panel has recommended research on a long list of topics. These topics fall into three broad categories: (1) research that would deepen our understanding of the phenomenon of nonresponse and the causes of the decline in response rates over the past few decades; (2) research aimed at clarifying the consequences of nonresponse; and (3) research designed to improve our tools for boosting response rates or more effectively compensating for the effects of nonresponse statistically. The panel thus supports a series of research programs and projects that are brought together here. We believe that, together, these topics constitute a comprehensive and multifaceted research agenda.
The panel is aware that, in these times of increasingly limited human and financial resources for the social science survey community, a research agenda must reflect both costs and benefits. Where possible, priorities have been suggested in the report itself.
The panel does not attempt to assign responsibility for these research items among the various players that make up the social science survey community and the research and academic institutions that support that community. Much of the path-breaking basic research, which often does not require significant investment of resources for testing and development, will come from academic and other research centers. Large data collection and analytical organizations in the private sector and in government would be responsible for conducting the research that requires new data collection, such as research on interviewer and mode effects. Organizations that provide a platform for integrating the research work generated in these various venues, such as the American Association for Public Opinion Research (AAPOR), the American Statistical Association, the International Statistical Institute, and, within the federal government, the Federal Committee on Statistical Methodology, would also have important roles to play.
The first set of research topics would help further define the problem, develop appropriate measures, and deepen understanding of its causes, scope, and extent:
• Research to identify the person-level and societal variables responsible for the downward trend in response rates. These variables could include changes in technology, in communication patterns, and in methods of collecting survey data.
• Research on people’s general attitudes toward surveys and on whether these have changed over time.
• Research about why people take part in surveys and the factors that motivate them to participate.
• Research on the factors affecting contact and cooperation rates.
• Research on the nature (mode and content) of the contacts that people receive over the course of a survey, based on data captured in the survey process.
• Research on the overall level of burden from survey requests and on the role of that burden in the decision to participate in a specific survey.
In considering burden, it is important to conduct basic research on the dimensions of response burden and how they should be operationalized. It would be useful to consider factors (e.g., time, cognitive difficulty, or invasiveness) that may determine how potential respondents assess the burden involved in taking part in a survey. These research paths should lead to more practical consideration of how interviewers, advance letters, or other explanatory or motivational material could effectively alter perceptions about the likely burden of a survey.
The second set of topics concerns statistical and other tools for understanding and mitigating the consequences of nonresponse:
• Research on the cost implications of nonresponse and on how to capture cost data in a standardized way.
• Research on the relationship between nonresponse rates and nonresponse bias and on the variables that determine when such a relationship is likely.
• Research to examine unit and item nonresponse bias and to develop models of the relationship between nonresponse rates and bias.
• Research on the theoretical limits of what nonresponse adjustments can achieve, given low correlations with survey variables, and on the effects of measurement errors, missing data, and other problems with the covariates.
• Research on the impact of nonresponse reduction on other error sources, such as measurement error.
• Research to quantify the role that nonresponse error plays as a component of total survey error.
• Research on the differential effects of incentives offered to respondents (and interviewers) and the extent to which incentives affect nonresponse bias.
The panel notes that research activities designed to expand knowledge of the relationship between response rates and nonresponse bias are likely to assist in the development of a much-needed theory of nonresponse bias. A more comprehensive theory would not only further our understanding of the relationship between response rates and nonresponse bias but also aid in the development of adjustment techniques to deal with bias under different circumstances.
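The relationship between response rates and bias discussed above can be made concrete with the standard deterministic identity, under which the bias of the unadjusted respondent mean is the nonresponse rate times the difference between respondent and nonrespondent means. The sketch below is purely illustrative; all numbers are hypothetical and the function name is ours, not the report's.

```python
# Illustrative sketch of the deterministic nonresponse bias identity:
#   bias(y_bar_r) = (1 - RR) * (y_bar_r - y_bar_m)
# where RR is the response rate, y_bar_r the respondent mean, and
# y_bar_m the nonrespondent mean. All figures below are invented.

def nonresponse_bias(response_rate, respondent_mean, nonrespondent_mean):
    """Bias of the unadjusted respondent mean relative to the full-sample mean."""
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# Two hypothetical surveys with the SAME 60% response rate but very
# different biases -- the response rate alone does not determine bias.
print(nonresponse_bias(0.60, 50.0, 50.0))  # respondents resemble nonrespondents: 0.0
print(nonresponse_bias(0.60, 50.0, 40.0))  # respondents differ: 4.0
```

The two calls illustrate the panel's point: bias depends on how respondents differ from nonrespondents on the survey variable, not merely on how many sample members respond.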
The third set of research topics concerns methods for coping with nonresponse:
• Research to establish empirically the cost–error trade-offs in the use of incentives and other tools to reduce nonresponse.
• Research on and development of new indicators of the impact of nonresponse, including applying these alternative indicators to real surveys to determine how well they work.
• Research on understanding mode effects, including the impact of mode on reliability and validity.
• Research leading to the development of minimal standards for call records and similar data in order to improve the management of data collection, increase response rates, and reduce nonresponse errors.
• Research on the structure and content of interviewer training as well as on the value of continued coaching of interviewers. Where possible, experiments should be supported to identify the most effective techniques.
• Research to improve the modeling of responses as well as to improve the methods to determine whether data are missing at random.
• Research on the use of auxiliary data for weighting adjustments, including whether weighting can make estimates worse (i.e., increase bias) and whether traditional weighting approaches inflate the variance of the estimates.
• Research to assist in understanding the impact of adjustment procedures on estimates other than means, proportions, and totals.
• Research on the impact that reduction of survey nonresponse would have on other error sources, such as measurement error.
• Research on how best to switch from the telephone survey mode (and frame) to mail, including how to ensure that the right person completes a mail survey.
• Research on the theory and practice of responsive design, including its effects on nonresponse bias, information requirements for its implementation, types of surveys for which it is most appropriate, and variance implications.
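One of the adjustment techniques listed above, weighting by auxiliary data, can be sketched with a simple weighting-class adjustment: respondents in each class are weighted by the inverse of the class response rate. The example is a minimal, hypothetical illustration (invented classes, response rates, and outcome values), not a description of any specific survey's procedure.

```python
# Hypothetical sketch of a weighting-class nonresponse adjustment.
# Sampled units fall into classes (e.g., age groups) with differing
# response rates; each respondent receives a weight equal to the
# inverse of its class response rate. All figures are invented.

def weighting_class_estimate(classes):
    """Weighted mean with weight = 1 / class response rate.

    `classes` is a list of (n_sampled, respondent_values) tuples.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for n_sampled, values in classes:
        response_rate = len(values) / n_sampled  # class-level response rate
        weight = 1.0 / response_rate             # inverse-response-rate weight
        total_weight += weight * len(values)
        weighted_sum += weight * sum(values)
    return weighted_sum / total_weight

# Class A: 100 sampled, 80 respond, outcome 20; Class B: 100 sampled,
# 40 respond, outcome 60. The unadjusted mean overrepresents Class A.
classes = [(100, [20.0] * 80), (100, [60.0] * 40)]

unadjusted = sum(sum(v) for _, v in classes) / sum(len(v) for _, v in classes)
adjusted = weighting_class_estimate(classes)
print(round(unadjusted, 2))  # 33.33 -- tilted toward the high-response class
print(adjusted)              # 40.0  -- each class restored to its sampled share
```

The sketch also hints at the variance question raised in the bullet above: the adjustment removes bias only to the extent that the weighting classes predict the outcome, and unequal weights generally inflate the variance of the resulting estimates.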
Finally, the panel recognizes the need to explore alternatives to traditional survey data collection. There are increasing suggestions that administrative data and Internet “scraping” can produce data that could substitute for surveys. The panel suggests that further research is needed to ascertain the quality of data gleaned from these sources, and makes two final recommendations:
• Research into the availability, quality, and application of administrative records to augment (or replace) survey data collections.
• Research to determine the capability of information gathered by mining the Internet to augment (or replace) official survey statistics.