... survey information. It is certainly beyond the scope of the present report to examine the potential for using or not using cell phones in the future, or to recommend changes to ethics standards that would permit the use of such phones.

4.3.2 Incentives

Standardized procedures for the type of incentives to be used have been described in Section 2.2.8. However, it is not known how different survey methodologies would affect the reception of a cash incentive. For example, it is unknown how a $10 cash incentive would be received among those who respond to a CATI survey compared with those who respond to a face-to-face interview. This issue may become greater for survey practitioners wanting to use multi-modal surveys: what level of incentive is likely to reduce non-response across the different survey modes? This needs to be investigated before any standardized procedures or guidelines can be suggested.

Further, as noted in Section 5.8 of the Technical Appendix, there has been no comprehensive test of the effect of incentives. It is not known how much of an increase in response rate can be obtained with incentives of different sizes, nor what biases may result from their use. Such research would be warranted.

To determine the effect of incentives, it would be necessary to undertake a survey in at least two locations in which varying incentives (including no incentive) were offered in a random pattern, and in such a way that comparisons could be made of the response rates and of who responds with and without an incentive. In addition, a non-response survey could be conducted in which the survey is repeated with respondents who refused or terminated on the first occasion, offering either an incentive where none was offered before, or a larger incentive where a small one was offered before.

It may even be worth exploring incentives from a completely different angle. Instead of attempting to establish an "invisible" sense of reciprocation through an obligation-free incentive, one could go a step further and enter a formal agreement that establishes an explicit connection between the reward being offered and the tasks required of the respondent. This could develop a greater sense of reciprocation, moving the role of the respondent away from that of a "donor" toward something resembling an "employee." It is recommended that research be done to evaluate the impact of such an approach on the recruitment process, as well as on response and completion rates. Offering a more substantial gift, such as a football ticket or a manicure, may appear exorbitantly expensive on one hand, but the additional cost may be justifiable if it results in significant improvements in data quality or if the survey itself runs more quickly and smoothly. It may not even be necessary to offer a large incentive: there is some evidence to suggest that response rates improve simply through the act of having respondents sign a document saying they will complete the survey.
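The randomized incentive experiment described above could be organized along the following lines. This is a minimal illustrative sketch in Python, not part of the report's recommended procedures: the incentive amounts, sample size, and response outcomes are hypothetical, and the chi-square test is only one possible way of comparing response rates across incentive groups.

# Illustrative sketch only: randomly assign incentive levels to sampled households
# and compare response rates across levels. All values below are hypothetical.
import random
from collections import defaultdict

from scipy.stats import chi2_contingency  # test of association: response x incentive level

INCENTIVE_LEVELS = [0, 5, 10]  # dollars; 0 = no-incentive control group


def assign_incentives(household_ids, seed=1):
    """Randomly assign each sampled household to one incentive level.

    The same scheme would be applied independently in each survey location.
    """
    rng = random.Random(seed)
    return {hh: rng.choice(INCENTIVE_LEVELS) for hh in household_ids}


def compare_response_rates(assignments, responded):
    """Tabulate completes/non-completes by incentive level and test for a difference."""
    counts = defaultdict(lambda: [0, 0])  # level -> [completed, not completed]
    for hh, level in assignments.items():
        counts[level][0 if hh in responded else 1] += 1
    table = [counts[level] for level in sorted(counts)]
    chi2, p_value, _, _ = chi2_contingency(table)
    rates = {level: cells[0] / sum(cells) for level, cells in counts.items()}
    return rates, p_value


# Hypothetical use with simulated outcomes (for illustration only).
households = [f"hh{i:04d}" for i in range(600)]
assignments = assign_incentives(households)
rng = random.Random(2)
responded = {hh for hh in households if rng.random() < 0.30 + 0.02 * assignments[hh]}

rates, p_value = compare_response_rates(assignments, responded)
print("Response rate by incentive level:", rates)
print("Chi-square p-value:", p_value)

In practice, the same assignment scheme would be repeated in each survey location, and respondent characteristics would also be recorded so that the comparison of who responds with and without an incentive could be made alongside the comparison of response rates.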
4.3.3 Personalized Interview Techniques

In this project, it was not possible to explore personalized interview techniques and the impacts they have on response rates, completion rates, and the quality of data. The most well-known alternative approach to interviewing respondents is the "Brög technique." This approach differs from conventional interviewing techniques in that it stresses the importance of trust between the interviewer and the respondent. Instead of being contacted by several interviewers over the course of a survey, the respondent is given the name and phone number of a specific member of the interviewing staff who serves as a "motivator" (Brög, 2000). Respondents are given the freedom to communicate using their own terms rather than those specified in a questionnaire, and a certain degree of flexibility is permitted in the interview while maintaining coverage of the essential topics. In general, the survey is made to be respondent-friendly