on completion of a household interview, completion of interviews with individual members, and completion of time use diaries. Nevertheless, as Singer (2011) pointed out to the panel, interviewer expectations may have an independent effect on respondent behavior; for example, Lynn's effects might have been larger if interviewers had held a more positive view of incentives. Nor can the possibility of contamination in Singer's experiment be entirely ruled out, since the same interviewers administered both conditions. Finally, the effect of incentives may vary at different points over a survey's field period.

An important consideration is the effect of incentives on response quality. Singer and Kulka (2001) found no decline in response quality attributable to incentives, whether measured by item nonresponse or by the length of open-ended answers. Since then, the small number of studies (mail, RDD, and face-to-face) that have examined incentive effects on data quality have, with one exception, found no effects. The exception is Jaeckle and Lynn (2008), who found that incentives increased item nonresponse. Cantor et al. (2008) argued for additional tests that would control for such factors as survey topic, the size and type of incentive (e.g., prepaid, promised, refusal conversion), and whether the study is cross-sectional or longitudinal.

Do incentives affect sample composition? Cantor et al. (2008), in their review of 23 RDD studies, concluded that incentives, whether prepaid or promised, have little effect on measures of sample composition. Nevertheless, a number of studies have demonstrated such effects on specific characteristics (see Singer, 2013, pp. 128–129). Targeted attempts to use incentives to bring into the sample groups that are less disposed to respond because of lower topic interest, however, have received only qualified support (Groves et al., 2004, 2006). Singer pointed out that very few studies have considered the effect of Web survey incentives on sample composition, and she concluded that more research is clearly needed.

A key question concerns the effect of incentives on the responses that respondents provide. The research findings are mixed. James and Bolstein (1990), Brehm (1994), and Schwarz and Clore (1996) reported results consistent with the mood hypothesis, according to which incentives improve respondents' mood and thereby affect their answers, and Curtin et al. (2007) found an interaction between race and receipt of incentives (nonwhites receiving an incentive gave more optimistic answers on the Index of Consumer Confidence). Groves et al. (2004, 2006) used incentives to reduce nonresponse attributable to lack of topic interest, and the resulting change in bias from the increased participation of those with less interest was not statistically significant. The possibility that incentives bias responses directly, through an effect on attitudes, has found no support in the experimental literature, even though Dirmaier et al. (2007) specifically tested such a hypothesis. There is no evidence that
