CHAPTER 5

5. Design of Data Collection Procedures

5.1 D-1: NUMBER AND TYPE OF CONTACTS

5.1.1 Definition

This item concerns how many times, and by what methods, households should be contacted to obtain complete household responses. In terms of recruitment, the question arises as to the number of times a household should be contacted to obtain a complete recruitment response, especially if the initial contact results in the household requesting to be called back, or simply in a non-contact (answering machine, busy signal, or modem/fax). In relation to data retrieval, the number of reminders and the methods of delivering them depend on the survey mode employed initially. For example, if recruitment took place through e-mail and data retrieval was through the internet, then mailing out reminder postcards or letters is unlikely to provide as good a result as e-mail reminders. However, this warrants further investigation, given that this survey mode is not widely used, especially for travel surveys.

5.1.2 Review of Number and Type of Contacts

Table 31 shows the variability in the number and type of contacts made during the survey process for six recent travel-related surveys. The Victorian Activity and Travel Survey, conducted in the state of Victoria, Australia, has used the following schedule:

• Initial contact letter;
• First mailing;
• First reminder;
• Second reminder;
• Third reminder, with the entire survey package re-sent;
• A cover letter from the Survey Director stressing the importance of cooperation by respondents; and
• A fourth reminder (Richardson, 2000).

The wide variability in the survey process, in terms of contacts and reminders, emphasizes the need for standards. The following is a summary of recent literature on this topic.

Literature Review

The number and type of contacts, along with the data retrieval method employed, affect the final response rate (Axhausen, 1999). Various other design features also influence the response rate. For example, it was found that university sponsorship, pre-notification, personalized letters, salience, and follow-up procedures led to improved response rates (Ettema et al., 1996; Melevin et al., 1998; Cook et al., 2000). Advance letters may increase the response rate by 5 to 13 percent.

The time of receipt of the letter, as well as distinctive postage markings [13], also plays a role (Zmud, 2003). However, the advance letter may have a negative impact, because the respondent can now prepare to refuse to participate (Zmud, 2003). In addition, the use of a "motivator," who encourages household members to undertake the survey task and is available to be contacted at any time by any member of the household, has been shown to be effective in increasing response rates in Europe (Brög, 1983; van Evert, Brög, and Erl, 2005).

[13] The markings and distinct features of the letter may help the respondent to remember what the research is about (Zmud, 2003).

Table 31: Type and Number of Contacts During the Recruitment and Retrieval Phases of Various Recent Travel Surveys

| Survey | Advance Letter | Telephone Recruitment | Next Contact | First Reminder | Second Reminder |
|---|---|---|---|---|---|
| Emergency Evacuation (ITS - Sydney) | No | Yes | A week later: e-mail contact containing information about the principal agents involved in the study, the web address for the internet survey, and an ID number (password) | A week later: e-mail reminder to households that have not responded | A week later: second e-mail reminder to households that have not responded |
| SE Florida | No | Yes | Mail-out of survey package: cover letter and survey materials, including 24-hour travel diary | CATI retrieval | None |
| OKI | No | Yes | Mail-out of survey package: cover letter and survey materials, including 24-hour travel diary | CATI retrieval | None |
| Broward | Yes | Yes (3 attempts) | Mail-out of survey package: cover letter and survey materials, including 24-hour travel diary | Follow-up calls after assigned travel day; mail-back retrieval | None |
| NYC | Yes, and call | Yes (9 attempts) | Mail-out of survey package: cover letter and survey materials, including 24-hour travel diary | Reminder call | CATI retrieval after assigned travel day |
| DFW | No | Yes, and intercept recruitment | Mail-out of survey package: cover letter and survey materials, including 24-hour travel diary | Reminder call | CATI retrieval after assigned travel day |

The number and type of contacts made to households depend on the recruitment and retrieval mode(s) employed. For example, if the internet and e-mail are the retrieval methods used, then telephone reminders to respondents may not be useful. If mixed-mode surveys are employed, then follow-up modes will most likely also be mixed, to achieve the greatest level of contact. However, it is important to acknowledge the finding of Dillman et al. (2001) with respect to mixed-mode surveys: the success of the second mode of survey delivery in reducing unit non-response was very small. The key is to give respondents the choice of how to respond; this was a significant finding from the stated-choice analysis for non-respondents in section 5.6 of this report.

Non-contacts

Non-contact is becoming more of an issue due to changing household structures and flexible work arrangements, as well as technological and physical barriers (Kalfs and van Evert, 2003; Zmud, 2003). From this arise two important questions: when is the best time to contact households, and how many calls should be attempted before a household is no longer included in the sample? In many multicultural societies, such as the United States, the United Kingdom, and Australia, the time and type of contact with a household can have social implications, and this may influence unit non-response and, hence, overall response rates. For example, households may engage in certain cultural activities at particular times during the week. If contacted about survey participation during these times, the households may become quite annoyed, the disruption to their cultural activities being perceived as a lack of respect.

This has not yet been investigated (Kalfs and van Evert, 2003) and, unfortunately, is beyond the scope of the current project.

Young persons and better-educated persons are more difficult to contact, and households with one adult and employed household members required more calls to first contact (Keeter et al., 2000). The highest contact rates for first calls occurred for households with incomes between $25,000 and $35,000 (29.6 percent) on Monday through Thursday evenings between 6 and 9 pm (Dennis et al., 1999). Both the low-income group ($0-$15,000) and the high-income group ($75,000+) had the lowest contact rates for the same time slot. Overall, the median household income group ($25,000 to $35,000) had the highest household contact rate (Dennis et al., 1999). This is also the case for respondents to travel surveys; hence, non-response bias exists, because households with higher or lower incomes have different trip rates (De Heer and Moritz, 1997; Kam and Morris, 1999; Richardson, 2000).

Reminders

Reminders may involve a reminder postcard, a telephone call, an e-mail, or re-sending the entire survey package. As a rule of thumb, if the initial contact generates a response rate of R, then the first reminder will add about 0.5R, the second 0.25R, the third 0.125R, and so on. Three reminders thus almost double the initial response rate, and studies have indeed shown that reminders can double the response rate that would otherwise have been obtained from a single mailing of a survey (Richardson, 2000; Lahaut et al., 2003).
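To make the rule of thumb above concrete, the following sketch (an illustration added here, not taken from the report) computes the cumulative response rate when each reminder yields half the gain of the previous contact; the initial response rate of 0.30 is an assumed value.

```python
def cumulative_response(initial_rate: float, reminders: int) -> float:
    """Cumulative response rate after a number of reminders, assuming
    each reminder adds half the gain of the previous contact."""
    total = gain = initial_rate
    for _ in range(reminders):
        gain *= 0.5   # each reminder yields half the previous gain
        total += gain
    return total

R = 0.30  # assumed initial response rate (illustrative only)
for n in range(4):
    print(f"{n} reminder(s): {cumulative_response(R, n):.3f}")
# 0 reminder(s): 0.300
# 1 reminder(s): 0.450
# 2 reminder(s): 0.525
# 3 reminder(s): 0.562  (almost double the initial 0.300)
```

Because the gains halve each time, the cumulative rate can never exceed twice the initial rate (the geometric series sums to 2R), which is consistent with the diminishing returns from additional reminders noted below.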
Larger households and households with children were slower to respond to the survey; reminders therefore increase the likelihood of these households responding, which helps to decrease non-response bias (Kam and Morris, 1999). In relation to mail-out/mail-back surveys, reminder calls were not successful in increasing response rates, because many households appeared to have thrown out their survey package (Freeland and Furia, 1999); in this case, a second mailing was conducted to increase the response rate. Telephone reminder calls were also found to be ineffective when limited to listed telephone numbers only: people with higher incomes are more likely to have "silent" (unlisted) telephone numbers, and these people are also highly mobile (Freeland and Furia, 1999). People of higher socio-economic status are more difficult to contact (Kalfs and van Evert, 2003; Zmud, 2003); a postcard reminder may therefore be more useful. Response rates in areas that have historically been difficult to contact improve when a second mailing of the questionnaire is conducted; this was found to be the case when a second mailing was sent to households that had not yet responded (Whitworth, 1999). However, additional reminders are subject to diminishing returns (Cook et al., 2000). Moreover, respondents to later mail-outs often under-report trips or genuinely travel less (Kam and Morris, 1999; Polak, 2000; Richardson, 2000), and the longer households take to respond, the higher the item non-response rates, so data reliability decreases with increasing response time (Kam and Morris, 1999). There thus appears to be a trade-off between increasing response rates and data reliability (Kam and Morris, 1999). How many reminders, then, should be made to households? This is really a function of the initial response rate, the survey environment, and the time frame of the data collection period.

Call Attempts (Re-Calls [14])

[14] Re-calls are not call-backs. A call-back is a disposition code indicating that the household requested to be called back; it means the call has not been resolved and more calls are required to reach a final call resolution.

Research by Black and Safir (2000) found that a statistical difference exists on test variables between households that completed a survey and households that could not be contacted. Also, according to Stec et al. (1999) and Colombo (2000), re-calling households provides information on response probabilities that may be used in estimates of non-response bias. However, the maximum number of calls made varied between the two studies consulted. In travel surveys, at least ten call attempts may be made. In essence, to reduce the incidence of non-response bias, non-contact and call-back conversions must be conducted. In travel surveys, however, these call attempts are not distributed evenly across the population, which creates problems for bias reduction. As shown in Section 5.8 of this report, call-back and non-contact conversions showed different mean trip rates for every call attempt: for example, the mean trip rate for households that required two calls to be converted from a non-contact was 8.005, while for households that required three calls it was 8.5636. Therefore, selectivity [15] in subsequent call attempts will not reduce the incidence of non-response bias.

[15] From the analyses of the call history files, it was discovered that for some non-contacted households, and some households that requested a call-back, three subsequent call attempts were conducted, whereas for other households with the same disposition codes, as many as nine subsequent call attempts were conducted.

The question now arises as to the number of calls that should be conducted. Research by Harpuder and Stec (1999) indicated that an average of five call attempts was required to obtain a complete interview, and suggested that between four and six call attempts is most appropriate. After six call attempts, the reduction in non-response bias resulting from the number of non-contacts is not significant (Harpuder and Stec, 1999).

Call History File Analyses

These results are from analyses conducted on call history files (files containing the recruitment history for sampled households) for travel surveys conducted in two major areas of the United States. The data retrieval mode was either CATI or mail-back. The importance of this analysis is threefold:

1. Call history files have not previously been analyzed in this depth;
2. This type of analysis has not been conducted on two-stage surveys, and the results show the effectiveness of non-contact and call-back conversions; and
3. Recommendations are presented as to the number of calls that should be made to convert non-contacted households, and households that requested to be called back, to complete recruitment interviews.

For a particular study, three types of initial contact were employed:

1. Cold call - the household was simply called and asked to participate in the study, without prior knowledge of the survey;
2. Pre-notified - the household was informed about the survey, and about a future recruitment call, in a letter stating the objectives of the survey; and
3. Intercept - individuals were approached at bus stops and asked whether their households would be interested in participating in a travel survey.

Table 32 shows that an association exists between the type of contact and the number of calls, although the strength of this association varies with contact type. The contact type "cold call" showed the strongest association (Cramer's V = 0.406), representing a moderate association. Overall, the results show that if a household had prior knowledge about the interview, it required the fewest calls to reach a final call status. These results were expected and confirm what was found in the literature (Melevin et al., 1998; Cook et al., 2000; Kalfs and van Evert, 2003; Zmud, 2003).

Table 32: Statistical Tests between Number of Call Attempts and Type of Contact, File 1

| Contact Type | Chi-Square | df | Cramer's V |
|---|---|---|---|
| Cold call | 4917.93* | 19 | 0.406 |
| Pre-notified | 4159.04* | 19 | 0.373 |
| Intercept | 647.54* | 19 | 0.147 |

* Significant at p = 0.001.
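For readers wanting to reproduce statistics like those in Table 32, Cramer's V is computed from the chi-square statistic as V = sqrt(chi2 / (n * (k - 1))), where n is the number of observations and k is the smaller dimension of the contingency table (df = 19 together with the reported V values is consistent with k = 2). The sketch below is an added illustration; the sample size is an assumed value chosen to reproduce the cold-call figure, since the underlying file size is not restated at this point.

```python
import math

def cramers_v(chi_square: float, n: int, min_dim: int) -> float:
    """Cramer's V for a contingency table: sqrt(chi2 / (n * (min_dim - 1)))."""
    return math.sqrt(chi_square / (n * (min_dim - 1)))

# Hypothetical example: the cold-call chi-square from Table 32 with an
# assumed n of 29,800 observations in a table whose smaller dimension is 2.
print(round(cramers_v(4917.93, 29_800, 2), 3))  # -> 0.406
```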

Given these results, it may be best for researchers to mail advance letters to households to inform them about the upcoming survey. This may "legitimize" the survey process in the minds of respondents, because cold-call contacts can sound very much like marketing interviews [16]. It will be interesting to see whether the National Do Not Call Registry, set up by the Federal Trade Commission (FTC) in the U.S. in late December 2002 (CMOR News, 2003) [17], will have a positive effect on response rates to household travel surveys.

[16] This depends on interviewer training and experience.
[17] The registry was not activated until early July 2003 (Overington, 2003).

In the two call history files, between 10 and 11 percent of all refusals were initially call-backs (350 out of 3,279 in File 1 and 7,461 out of 76,612 in File 2). The more call-backs respondents request, the more likely they are to refuse to participate in the survey; in the files investigated, this was especially the case when households requested two or more call-backs. These results support what was stated in the literature (Zmud, 2003). Of the 12,978 call-backs requested by households in File 1, 2.2 percent (284) became refusals after subsequent call attempts. Of these 284, 41 (14.4 percent) were converted from refusals to completed recruitment interviews, representing only 0.3 percent of the total call-backs. Finally, of the 41 who were converted to completing the recruitment, only 13 actually completed the household survey. The overall conversion is thus very small, with 0.1 percent of call-backs that became initial refusals eventually completing the entire survey.

Table 33 shows the number of call attempts needed for households that initially requested a call-back and then completed both the recruitment interview and the household survey. For call 1, the number of call-backs converted to complete recruitment interviews is shown as "n/a": if the disposition code of the first call is a call-back, it remains a call-back, and only when two or more calls are made can the disposition change from a call-back to, in this case, a complete recruitment interview. For call 2, it can be seen that 612 of the call-backs from call 1 were converted to complete recruitment interviews on the second call; of the total of 855 complete recruitment interviews obtained from call-backs, 71.6 percent occurred when a second call was made to the household. From calls 7 through 10, no call-backs were converted to complete recruitment interviews, confirming what is reported elsewhere (Zmud, 2003).

Table 33: Call Attempts Required to Complete Interviews with Households Initially Requesting a Call-Back (File 1)

| Conversion | Call 1 | Call 2 | Call 3 | Call 4 | Call 5 | Call 6 | Call 7 | Call 8 | Call 9 | Call 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Call-backs | 4,292 | 2,881 | 1,885 | 1,199 | 984 | 558 | 386 | 336 | 296 | 161 | 12,978 |
| Call-back to complete recruitment | n/a | 612 | 136 | 52 | 37 | 18 | 0 | 0 | 0 | 0 | 855 |
| Converted (%) | n/a | 71.6 | 15.9 | 6.1 | 4.3 | 2.1 | 0 | 0 | 0 | 0 | 100 |
| Call-back to complete interview | n/a | 209 | 46 | 12 | 7 | 2 | 0 | 0 | 0 | 0 | 276 |
| Converted (%) | n/a | 75.7 | 16.7 | 4.4 | 2.5 | 0.7 | 0 | 0 | 0 | 0 | 100 |

It is interesting to observe that, of the 612 complete recruitment interviews achieved on the second call, 209 were subsequently converted to complete household surveys; these represent 75.7 percent of all call-backs that eventually completed the household survey (Table 33). The percentage distributions across calls of conversions from call-backs to complete recruitment interviews, and of conversions onward to complete household surveys, are almost identical for each call.

Table 34 shows the same information as Table 33 for File 2. Here, 2,785 of the 41,467 call-backs from call 1 were converted to complete recruitment interviews on the second call; of the total of 3,958 complete recruitment interviews from call-backs, 70.4 percent were obtained when a second call was made to the household. Of the 2,785 complete recruitment interviews achieved on the second call, 1,151 were subsequently converted to complete household surveys, representing 71.6 percent of all call-backs that eventually completed the household survey. Again, the percentage distributions of conversions from call-backs to complete recruitment interviews, and onward to complete household surveys, are almost identical for each call.

Table 34: Call Attempts Required to Complete Interviews with Households Initially Requesting a Call-Back (File 2)

| Conversion | Call 1 | Call 2 | Call 3 | Call 4 | Call 5 | Call 6 | Call 7 | Call 8 | Call 9 | Call 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Call-backs | 41,467 | 23,021 | 14,764 | 9,250 | 5,784 | 3,313 | 1,938 | 1,048 | 536 | 18 | 101,139 |
| Call-back to complete recruitment | n/a | 2,785 | 762 | 259 | 87 | 39 | 16 | 5 | 4 | 1 | 3,958 |
| Converted (%) | n/a | 70.4 | 19.3 | 6.54 | 2.2 | 0.9 | 0.4 | 0.13 | 0.1 | 0.03 | 100 |
| Call-back to complete interview | n/a | 1,151 | 308 | 103 | 26 | 17 | 2 | 1 | 0 | 0 | 1,607 |
| Converted (%) | n/a | 71.6 | 19.2 | 6.4 | 1.62 | 1.0 | 0.12 | 0.06 | 0 | 0 | 100 |

It is interesting to note that call-back conversions to complete recruitment interviews occurred throughout the ten calls. However, the conversion of these to complete household surveys drops sharply after the second call: the share of complete-household-survey conversions occurring at call two is 71.6 percent, but at call three it is only 19.2 percent (the corresponding figure for File 1 at call three is 16.7 percent). Nevertheless, two call attempts should not be set as the call limit for households requesting call-backs. The overall conversion to complete household surveys of households that requested to be called back is 276 (2.1 percent) for File 1 and 1,607 (1.6 percent) for File 2. If a five-call limit had been set, these numbers would drop only to 274 (2.1 percent) and 1,588 (1.6 percent), respectively; the overall conversion rate is thus virtually identical to that achieved with ten call attempts. Setting a call limit for call-backs would therefore save time and money, and would allow resources to be diverted to converting refusals or non-contacted households.
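The effect of a call limit can be checked directly from the table rows. The sketch below is an added illustration using the File 1 figures from Table 33; the list simply transcribes the "call-back to complete interview" row.

```python
# Complete household interviews obtained from call-backs, by call number
# (File 1, Table 33). Call 1 cannot produce a conversion, hence the leading 0.
completed_by_call = [0, 209, 46, 12, 7, 2, 0, 0, 0, 0]
total_callbacks = 12_978

def conversions_under_limit(by_call, limit):
    """Conversions retained if no household receives more than `limit` calls."""
    return sum(by_call[:limit])

for limit in (2, 5, 10):
    kept = conversions_under_limit(completed_by_call, limit)
    print(f"limit {limit:>2}: {kept} conversions "
          f"({100 * kept / total_callbacks:.1f}% of all call-backs)")
# limit  2: 209 conversions (1.6% of all call-backs)
# limit  5: 274 conversions (2.1% of all call-backs)
# limit 10: 276 conversions (2.1% of all call-backs)
```

The five-call limit retains essentially all of the conversions achieved with ten calls, which is the quantitative basis for the cost-saving recommendation above.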

Table 35 and Table 36 show the number of non-contacted households that were converted to complete recruitment interviews after subsequent call attempts, for call history Files 1 and 2. For both files, the overall conversion again drops sharply after call two. The overall conversion rates for non-contacted households that went on to complete the recruitment interview, and later the household survey, are 5.4 percent for File 1 and 1.3 percent for File 2.

Table 35: Call Attempts Required to Complete Interviews with Households Initially Not Contacted (File 1)

| Conversion | Call 1 | Call 2 | Call 3 | Call 4 | Call 5 | Call 6 | Call 7 | Call 8 | Call 9 | Call 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-contacts | 11,859 | 5,642 | 2,980 | 1,402 | 286 | 129 | 65 | 34 | 5 | 5 | 22,407 |
| Non-contacts to complete recruitment | n/a | 1,518 | 687 | 429 | 246 | 36 | 0 | 0 | 0 | 0 | 2,916 |
| Converted (%) | n/a | 52.1 | 23.6 | 14.7 | 8.4 | 1.2 | 0 | 0 | 0 | 0 | 100 |
| Non-contacts to complete interview | n/a | 638 | 291 | 168 | 92 | 17 | 0 | 0 | 0 | 0 | 1,206 |
| Converted (%) | n/a | 52.9 | 24.1 | 14 | 7.6 | 1.4 | 0 | 0 | 0 | 0 | 100 |

Table 36: Call Attempts Required to Complete Interviews with Households Initially Not Contacted (File 2)

| Conversion | Call 1 | Call 2 | Call 3 | Call 4 | Call 5 | Call 6 | Call 7 | Call 8 | Call 9 | Call 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-contacts | 116,586 | 71,163 | 43,658 | 26,311 | 16,311 | 11,989 | 9,885 | 8,807 | 8,010 | 7,609 | 320,329 |
| Non-contacts to complete recruitment | n/a | 5,288 | 2,757 | 1,547 | 599 | 229 | 71 | 33 | 36 | 16 | 10,576 |
| Converted (%) | n/a | 50 | 26 | 14.6 | 5.7 | 2.2 | 0.7 | 0.33 | 0.33 | 0.15 | 100 |
| Non-contacts to complete interview | n/a | 2,199 | 1,121 | 613 | 243 | 76 | 26 | 5 | 11 | 2 | 4,296 |
| Converted (%) | n/a | 51.2 | 26 | 14.3 | 5.7 | 1.8 | 0.6 | 0.1 | 0.26 | 0.04 | 100 |

If only five call attempts had been conducted, the overall conversion would remain essentially the same for File 2 and would drop by only 0.07 percentage points for File 1. This is not a significant loss, but given that six call attempts was the limit for re-calls to non-contacted households in study one, the cost saving there is minimal. These results are similar to those of the study conducted by Harpuder and Stec (1999). With this in mind, either five or six calls should be set as the limit for converting non-contacted households to complete recruitment and household interviews.

Motivators

As noted earlier, one method that has been proposed and used in Europe is to provide a motivator for each household. This is intended to increase the motivation of household members to complete the survey task by building rapport between the motivator and the household members. In most current telephone surveys, a different interviewer contacts the household on each call, with the result that the survey may seem very impersonal, and the individual respondent may assume that his or her contribution is of relatively little value. The use of a motivator is one method of counteracting this feeling of unimportance and the impersonal nature of the survey. As part of this project, Westat undertook a small pilot survey using an adaptation of the motivator procedure devised by Socialdata (Moritz and Brög, 1999). Because of the nature of the CATI system used by Westat, and the availability of staff to act as motivators, Westat used a three-person team of interviewers for each household in a sample of 50 to 100 completed households. Respondents were provided with the names and phone numbers of each of the three interviewers, so that the interviewers could serve as motivators for the household and respond to its questions. The pilot survey was undertaken as part of the Metropolitan Washington Council of Governments (COG) longitudinal survey, a multi-contact telephone survey using random digit dialing as the sampling procedure. There was a screener interview, followed by an extended interview to obtain trip data; the latter could sometimes require multiple calls to retrieve data from all household members. Three teams of three interviewers were used for the pilot survey, with each team including at least one bilingual interviewer in case a Spanish-speaking household was encountered. There was little overlap among the three interviewers in a team, other than briefing the next shift's team member on any call appointments that had been made and on the status of each household.

Using this procedure, little difference would be expected in the initial screener interviews, since rapport had not yet been established at that stage. Also, because Westat used a manual dialing procedure for the pilot survey, compared to the automated call assignment system of the CATI software for the balance of the sample, it is possible that the initial screener interviews would be less successful than those outside the pilot. This proved to be the case, although the lower response rate to the screener interview was due principally to a larger proportion of no-answers, answering machines, and households reaching the maximum number of call attempts without being contacted successfully than in the main survey. There is no clear reason why this would have occurred, and it is apparently not due to the different methods of assigning calls to interviewers. With respect to completion of the screener interview, there was no statistically significant difference between the pilot survey and the main survey. It is worth noting that there was a slightly higher refusal rate in the pilot, which may have resulted from the absence of refusal conversion specialists in the interviewer teams, whereas such specialists were used in the main survey.

For the extended interview, there was again no statistical difference between the results from the motivator teams and the main survey. The average time spent on the phone was about 7 percent higher for the pilot survey than for the main survey, although there was considerably more scheduling and down time for interviewers in the pilot, largely because of the small sample used and the number of interviewers assigned to it. Overall, the results of this test were inconclusive, and served mainly to show that the automated system used by Westat could not respond readily to this different procedural design. It should also be noted that the method employed was not strictly the same as the method developed by Moritz and Brög (1999), so the results may not reflect the gains to be obtained from a more rigorous application of the motivator method.

Conclusion

Given the above results, and the results of other studies, it would appear that no more than five call attempts should be made, during recruitment and retrieval, to convert households that request to be called back or that could not be contacted (Harpuder and Stec, 1999). There is no significant reduction in non-response bias if more than five call attempts are made (Harpuder and Stec, 1999), and there are no real changes in the percentages of call-back or non-contacted households converted to complete household interviews. Further research is warranted on the motivator approach, which may serve to reduce termination rates and incomplete surveys, although it is felt unlikely to affect the initial response to the recruitment call. Although the Westat experiment was inconclusive on this point, there is enough indication in its results that a full-scale application should be attempted and compared to the conventional procedure. However, no recommended standards or guidelines can be proposed on this specific issue at this time.

Table 37 shows a proposed schedule of contacts and reminders, devised from the current state of practice in travel surveys. This is a proposed schedule; field-work investigation is required before a standard can be devised.
The recommendations for standardized procedures for number and type of contacts may be found in section 2.2.1 of the Final Report.

Table 37: Recommended Schedule of Contacts and Reminders

| Ref. | Day | Contact Type | Content | Received by Household |
|---|---|---|---|---|
| 1 | Advance letter (R-7) | Mail | Pre-notification letter | A week before recruitment is scheduled to commence |
| 2 | Recruitment (R) | Telephone | Recruitment interview | Recruitment day |
| 3 | R+1 | Mail | Survey package sent out | R+3 to R+5 |
| 4 | Day before Diary Day (D-1) | Telephone | Pre-Diary-Day reminder (motivation call) | D-1 |
| 5 | D+1 | Telephone | Reminder to return completed survey (motivation call) | D+1 |
| 6 | D+2 | Mail | Postcard reminder/reset of Diary Day to D+7 | D+4 to D+6 |
| 7 | D+6 | Telephone | Reminder and check on second opportunity for Diary Day | D+6 |
| 8 | D+9 | Mail | Postcard reminder and reset of Diary Day to D+14 | D+11 to D+13 |
| 9 | D+13 | Telephone | Reminder and check on third opportunity for Diary Day | D+13 |
| 10 | D+15 | Mail | Re-mailing of survey package and reset of Diary Day to D+21 | D+17 to D+19 |
| 11 | D+20 | Telephone | Reminder and check on fourth opportunity for Diary Day | D+20 |
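Because every contact in Table 37 is defined by a fixed offset from the recruitment day (R) or the assigned diary day (D), the schedule can be generated mechanically for any household. The sketch below is an added illustration: the offsets for the first seven contacts are transcribed from Table 37, while the function name and the example dates are ours.

```python
from datetime import date, timedelta

# (label, base, offset_days, contact_type) transcribed from Table 37.
# base is "R" (recruitment day) or "D" (assigned diary day). Only the
# first seven contacts are shown; the later postcard and re-mail rounds
# (refs. 8-11) follow the same pattern.
SCHEDULE = [
    ("Advance letter",                   "R", -7, "Mail"),
    ("Recruitment interview",            "R",  0, "Telephone"),
    ("Survey package mail-out",          "R",  1, "Mail"),
    ("Pre-Diary-Day motivation call",    "D", -1, "Telephone"),
    ("Return reminder call",             "D",  1, "Telephone"),
    ("Postcard reminder (reset to D+7)", "D",  2, "Mail"),
    ("Second-opportunity check call",    "D",  6, "Telephone"),
]

def build_schedule(recruit_day: date, diary_day: date):
    """Resolve the Table 37 offsets into calendar dates for one household."""
    base = {"R": recruit_day, "D": diary_day}
    return [(label, kind, base[b] + timedelta(days=off))
            for label, b, off, kind in SCHEDULE]

# Assumed example dates, for illustration only.
for label, kind, when in build_schedule(date(2007, 3, 5), date(2007, 3, 13)):
    print(f"{when:%a %d %b}: {kind:<9} {label}")
```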
5.2 D-3: PROXY REPORTING

5.2.1 Definition

In surveys that use telephone or personal interviews as the method to retrieve completed data, there is a continual issue regarding who provides the activity or travel information: the person performing the activity or travel (the direct respondent) or someone else. Instances in which the activities or travel are reported by someone other than the person who actually performed them are referred to as having been reported by "proxy."

5.2.2 Effects of Proxy Reporting

There is a relatively large body of research concurring that the number of trips is lower when reported by proxies (e.g., Richardson et al., 1995). Among recent travel surveys, the 1996 Dallas-Ft. Worth Household Travel Survey found that proxies reported statistically significantly fewer activities than direct reports (a mean of 11.4 activities from proxies against a mean of 12.3 activities from direct reports, p < 0.0001). The Bay Area Travel Survey 2000 also found fewer trips from proxy reports, with an average of 3.8 trips on Day 1 (of a two-day travel diary) reported by proxy compared to 4.4 trips from direct reports. Both of these surveys permitted proxy reporting for persons under 18 years of age.

Other studies have examined the types of trips that are most often differentially reported by proxies. Analyzing data from the 1995 Nationwide Personal Transportation Survey (NPTS), Greaves (2000) found that proxy reports tended to overestimate the trip rates for regular trips, such as work and school trips, while severely underestimating more spontaneous or discretionary trips, such as non-home-based trips. Badoe and Steuart (2002) examined travel data collected in Toronto and found somewhat similar results, with proxy reports tending to underestimate home-based discretionary and non-home-based trips. In contrast to Greaves, however, they found that work and school trips were not over-reported.

To date, survey practitioners and local survey designers have developed their own rules and protocols for determining when proxy reporting is acceptable and for reducing proxy reporting. As shown in Table 38, different household travel surveys have used slightly different guidelines for determining when a proxy report is acceptable, for calculating the percentage of proxy reports, and for reducing the number of proxy reports. If, as is suggested elsewhere in this report, the percentage of proxy reports may be used as an indicator of survey method quality, then it is imperative that survey practitioners have a standard approach.

Acceptable Proxy Reports

There are clear instances in which having someone else report activities is not only appropriate but desirable, foremost among them the reporting of children's activities or travel. The issue is whether proxy reporting should be required for certain ages and, if so, what age categories should be used.

Table 38: Proxy Reporting Guidelines

| Issue | 2001 National Household Travel Survey | Bay Area Travel Survey 2000 | 2000-02 Southern California Travel and Congestion Study | 1996 Dallas-Ft. Worth Household Travel Survey |
|---|---|---|---|---|
| Minimum age threshold | Proxy requested for all household members aged less than 16 years; members aged 14 or 15 could respond for themselves if approval was obtained from an adult household member | 17 and under for proxy reporting; 18 and older for direct | None specified in documentation | Aged 14 and under: always a proxy; proxy permitted for ages 15-18; 19 and older direct |
| Attempts on primary respondent before accepting a proxy | Interviewers could speak directly to persons aged 16 and older; however, a proxy for these individuals was acceptable beginning on the fourth day after the trip date | If the adult respondent was not available for direct retrieval on the initial call, and a completed diary was available, travel could be collected from a proxy immediately | None specified in documentation | Initially, two attempts were required before a proxy would be accepted; this rule appeared to negatively impact the completion rate, so it was relaxed to one attempt halfway through the survey |
| % proxy among adults | 16.9% | 23.7% | Not reported in Final Report | 19% |
| Coded whether respondent or proxy | Yes | Yes | Yes | Yes |

The Federal Office of Human Research Protection considers research participants under the legal age of consent to be minors and therefore requires parental consent in most cases. Because the legal age of consent varies among the states (usually from 16 to 18), Human Subjects Guidelines have different procedures for respondents under 18 years of age (17 and under). In practice, most travel surveys in the United States have permitted adult proxies to report the travel of children aged 14 and under. This matches European practice, which is to use proxy reporting for persons aged 14 and under (CORDIS, 2003). Practice varies as to the upper age limit, with some surveys accepting proxy reports for persons aged 16 and younger and others using 18 as the upper limit for acceptable proxy reporting. The following standards are recommended (a minimal sketch implementing them appears after the list):

1. For persons aged 14 and under, require parental or other adult proxy reporting;
2. For persons aged 15 to 17, permit proxy reporting unless the individual is available to report their activities directly with parental permission; and
3. All persons aged 18 or older should be asked directly for their activities or travel.
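The three recommended standards reduce to a small decision rule. The sketch below is our illustration (the function name and return labels are not from the report) of how the standards might be encoded in survey-management software.

```python
def proxy_rule(age: int, parental_permission: bool = False) -> str:
    """Encode the recommended proxy-reporting standards by age:
       <= 14 : adult proxy required
       15-17 : proxy permitted, but direct reporting preferred when the
               minor is available and parental permission is given
       >= 18 : direct reporting required
    """
    if age <= 14:
        return "proxy required"
    if age <= 17:
        return "direct preferred" if parental_permission else "proxy permitted"
    return "direct required"

# Illustrative checks
assert proxy_rule(10) == "proxy required"
assert proxy_rule(16) == "proxy permitted"
assert proxy_rule(16, parental_permission=True) == "direct preferred"
assert proxy_rule(30) == "direct required"
```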

Among adult survey participants, there are other instances in which a proxy report might be appropriate, including when the individual is ill, or physically or mentally unable to complete the survey. However, determining these conditions requires adding at least two questions to the survey: one asking about long-term disabilities, and the other asking about short-term reasons for not responding directly. A standard question is recommended regarding long-term disabilities that would prevent an individual from traveling alone outside the home (section 4.3); this may also be used to assess whether a person is capable of responding directly. In cases where there was no travel outside the home on the travel day, it has also been recommended that a question be asked to probe the reasons why, and temporary illness is one possibility (section 8.6). The responses to both these questions may be used to help determine whether a proxy report is acceptable.

Procedures for Reducing Proxy Reporting Among Adult Respondents

There is wide variation among survey practitioners in the protocols, if any, used to reduce the number of proxy responses. The most common is to make repeated attempts to speak directly with the individual respondents. Both the 2001 NHTS and the 1996 Dallas-Ft. Worth Household Travel Survey included provisions that a proxy report was not accepted until a certain number of prior attempts had been made to reach the respondent directly by telephone. The issue is often framed as one of balancing the desire for higher-quality data (obtaining activity or travel information from a direct report) against the desire for complete households (obtaining some data from all members of a household). In the Dallas-Ft. Worth survey, the requirement of at least two attempts to reach the direct respondent was relaxed during the study to require only one attempt, in response to perceptions that the protocol was leading to a lower completion rate than desired. However, the results were somewhat counter-intuitive: a higher percentage of households (78.9 percent) were complete prior to the relaxation of the proxy protocol than after (53.2 percent). While this finding may be confounded with the fact that the protocol change occurred roughly three-quarters of the way through the study, and some of the later households may simply have been "abandoned" at the end of the survey period, it still provides some indication that additional attempts to reach the direct respondent do not necessarily impact household completion rates negatively.

Some survey practitioners accept as a quasi-direct response those instances in which an individual has written her/his activity or travel information in a diary, and someone else in the household reports it during retrieval. However, Greaves' (2000) analyses showed that more trips were reported by proxies when a completed diary was present than when it was not; in both instances, though, the number of trips reported by proxies was less than the number from direct reports (even when the direct respondent did not bother to fill out the diary). Given this evidence, it is recommended that even when a completed diary is available for reporting by another household member, this should not be considered a direct report. Only a direct response permits missing activities to be discovered and the full slate of trips/activities to be obtained. Direct reporting is especially crucial when survey designers choose not to include all of the desired information, such as travel costs, on the written diary form.

Recommendations on proxy reporting are provided in section 2.2.2 of the Final Report, while proxy reporting as a survey quality indicator is discussed further in section 10.4 of this Technical Appendix and section 2.7.4 of the Final Report.
5.3 D-4: COMPLETE HOUSEHOLD DEFINITION

5.3.1 Description

A complete household response is generally defined as a household in which complete information is obtained from all eligible household members (Stopher and Metcalf, 1996; NuStats International, 2000; Ampt and Ortuzar, 2004). The main problems that result from this rather stringent definition are:

1. Lower response rates; and
2. Exclusion of many households due to incomplete responses. Larger and smaller households are less likely to provide complete responses, and this usually results in biased databases, because the demographic and travel characteristics of these households differ from those of completely responding households (De Heer and Moritz, 1997; Kam and Morris, 1999; Richardson, 2000).

Other related issues include the accepted levels of proxy reporting and data imputation. Together, these troublesome issues, given the varying levels of acceptability across different surveys, raise the need for standardization of this survey element.

5.3.2 Review of Complete Household Definitions

Stopher and Metcalf (1996) found that 56 percent of recent travel surveys defined a complete household as one in which complete information was obtained from all eligible household members. In more recent investigations, it was found that the definition of a complete household varied across nine metropolitan and national data sets from the U.S. Table 39 provides a summary of the key features of these data sets in terms of complete household definition and response rates. For four studies, two of which employed travel diaries while the other two employed activity-based travel diaries, a household response was considered complete if all household members provided all travel, or all travel and activity, information. Activity-based travel diaries are popular because they prompt the respondent to recall travel undertaken between and during activities, so a lower incidence of item non-response is expected. However, another two studies that used activity-based travel diaries required complete information from all household members for all survey components: vehicle, household, personal, and activity forms all had to have complete information. Correspondingly, these two studies yielded low response rates for diaries of this type. This may also be because no partial responses were incorporated into the analyses, and because poorly designed survey instruments led respondents to believe that much effort was required to complete the diaries (high respondent burden). The lowest response rate, however, was recorded for a study that used an activity-based travel diary and involved separate recruitment and retrieval stages; attrition occurred in both stages, leading to a low overall response rate.

Whether proxy reporting is permitted ultimately shapes how the research agency defines a complete household response. For example, two studies (NYMTC and the Bay Area Travel Survey) specified in detail the circumstances in which proxy reporting was accepted:

1. Proxy reporting was accepted if an adult was reporting on behalf of a minor, or for an adult who had completed and returned the activity/trip diary (NuStats, 2000); and
2. Proxies were allowed if the subject was not capable of being interviewed because of an impairment or a language barrier; if the interviewer was told that the subject would not be available for the entire six-day recall period; if the interviewer was told that the subject would never participate; or if the proxy was knowledgeable about the subject's travel on the assigned travel day and the interviewers had attempted, without success, to reach the subject during the first three days of the six-day call-back period (U.S. Department of Transportation, 2001a).

Table 39: Summary Features of Nine Data Sets Examined

New York Metropolitan Transportation Council Household Interview Survey
  Complete response definition: All records for all household members
  Proxy reporting: Accepted if an adult is reporting on behalf of a minor, or for an adult who completed the activity/trip diary
  Eligibility: All household members
  Partial response definition: Complete information from all employed household members; partial responses excluded from analyses
  Response rate: 26.2%

Bay Area Travel Survey 2000
  Complete response definition: All household members to provide travel and activity information
  Proxy reporting: Permitted for individuals unavailable at the time of the interview
  Eligibility: No one under 18
  Partial response definition: Not defined
  Response rate: 7.5%

Dallas-Fort Worth Travel Survey
  Complete response definition: All records for all household members
  Proxy reporting: Permitted
  Eligibility: All household members
  Partial response definition: Not defined
  Response rate: 37%

South East Florida Household Travel Survey
  Complete response definition: All household members to provide travel information
  Proxy reporting: Permitted
  Eligibility: All household members
  Partial response definition: Not defined
  Response rate: 33%

Broward Household Travel Survey
  Complete response definition: All household members to provide travel information
  Proxy reporting: Permitted
  Eligibility: Household members over five years
  Partial response definition: Not defined
  Response rate: 33%

Ohio-Kentucky-Indiana Activity and Travel Survey
  Complete response definition: All household members to provide travel and activity information
  Proxy reporting: Permitted
  Eligibility: All household members
  Partial response definition: Complete household responses except missing start and end of travel times
  Response rate: 57%

Little Rock Household Travel Survey
  Complete response definition: All household members to provide travel and activity information
  Proxy reporting: Permitted
  Eligibility: All household members
  Partial response definition: Not defined
  Response rate: n/a

Yakima, Charleston, and Wilmington Surveys
  Complete response definition: All household members to provide travel and activity information
  Proxy reporting: Permitted
  Eligibility: All household members 16 years and over
  Partial response definition: Not defined
  Response rate: n/a

National Household Travel Survey
  Complete response definition: 50% of adults in household complete the person interview
  Proxy reporting: Permitted
  Eligibility: All household members
  Partial response definition: Not defined
  Response rate: 36.8%

Source: Adapted from Carr Smith and Corradino (2000a, 2000b); Morpace International (1995, 2002); Applied Management and Planning Group (1995); NuStats International (2000, 2003a, 2003b); U.S. Department of Transportation (2001a).
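The definitions in Table 39 differ chiefly in which household members must report and how much of their reporting must be complete. As a minimal sketch of how two of these rules differ in practice (the strict all-members rule and the NHTS-style fifty percent rule discussed below); the field names used here are hypothetical illustrations, not taken from any of the surveys reviewed:

```python
# A minimal sketch contrasting two of the completeness rules in Table 39.
# Field names ("age", "diary_complete") are hypothetical.

def complete_all_members(household):
    """Strict rule: every eligible member must return a complete diary."""
    return all(m["diary_complete"] for m in household)

def complete_nhts_fifty_percent(household):
    """NHTS-style rule: at least 50% of adults complete the person interview."""
    adults = [m for m in household if m["age"] >= 18]
    if not adults:
        return False
    done = sum(1 for m in adults if m["diary_complete"])
    return done / len(adults) >= 0.5

household = [
    {"age": 44, "diary_complete": True},
    {"age": 41, "diary_complete": False},
    {"age": 9,  "diary_complete": False},
]
print(complete_all_members(household))         # False under the strict rule
print(complete_nhts_fifty_percent(household))  # True (1 of 2 adults = 50%)
```

The same household can therefore be a complete response under one definition and a lost case under another, which is one reason the response rates in Table 39 are not directly comparable.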

In the documentation of another study, it was stated that proxy reporting was permitted so as not to reduce the response rate or discard part of the sample, because high levels of unit and item non-response were expected. However, it was found that proxy reporting led to an underestimation of trip rates by as much as 0.43 trips for males and 0.69 trips for females (Morpace International, 2003). In addition, this study only sought travel and activity information from household members 18 years and over. This is an example of why the level of proxy reporting in the resulting data set has to be carefully examined (section 5.2 looks at proxy reporting in more detail).

A strict definition of a complete household response, permitting proxy reporting in certain cases only, combined with the elimination of partial responses from the final data set, resulted in a low response rate of 26.2 percent for the New York Metropolitan Transportation Council Household Interview Survey. The exclusion of these data is problematic for two major reasons: first, the rising costs of data collection; and second, these data can be useful and provide insight into partial non-respondents. In addition, the Bay Area Travel Survey allowed a high level of proxy reporting but used a stringent definition of a complete household response; the resulting response rate was 7.5 percent, and the travel data obtained were relatively poor in quality.

The National Household Travel Survey employs the following definition of a complete household response: if fifty percent or more of adults within a household complete the person interview, which incorporates travel information and trip diaries, then the household response is considered complete. It has been widely documented that non-respondents to travel surveys tend to have the following characteristics:

1. Very low and very high income;
2. High and low mileage drivers;
3. Young single males and females;
4. Zero vehicle use;
5. Residence in metropolitan areas; and
6. Households with children (De Heer and Moritz, 1997; Richardson, 2000; Kam and Morris, 1999).

Therefore, this rule was adopted to address the concern that larger households and low-income households are less likely to have all household members complete the person interview and travel diary, due to complex travel patterns or the perception that their travel data are worthless to the data collection agency (Kam and Morris, 1999). The fifty percent rule aims to minimize non-response bias in travel surveys, thus obtaining a more accurate picture of people's travel behavior. However, despite the less stringent complete household definition and the permission of proxy reports for eligible household members, the overall response rate for the 2001 NHTS was 36.8 percent (U.S. Department of Transportation, 2001a). This demonstrates the problem of increasing non-response, which adds to survey costs; this is addressed later in this report.

The NHTS definition of a complete household response does not incorporate important demographic characteristics of the household that may affect trip rates and the types of trips undertaken: household age structure could be problematic (U.S. Department of Transportation, 2001b). For example, the household may consist of adults only. Simply requiring that fifty percent of adults provide all personal and travel information, for the household response to be considered complete, leads to a generalization that all household members, regardless of age, exhibit the same travel patterns and behavior.
For example, there is a marked difference in trip rates across the rather broad age groups 18 to 64 years, 65 to 75 years, and over 75 years (Alsnih and Hensher, 2003). The difference between these age groups is even more apparent when looking at the activities undertaken, and if the first age group is broken down into smaller categories, further differences in the types of activities and the numbers of trips undertaken emerge. If all adults within the household fall within one age group, the problem posed by the fifty percent definition is minimal compared to the case where household members fall in all three broad age categories. For example, if there are six adults in the household and two adults fall in each of the three age categories described, the fifty percent rule as it stands means that only three adults have to respond for the

household response to be considered complete. Thus, we may receive complete information for all adults in the 18-64 age group, information from only one adult in the over-75 age group, and no information about adults in the 65-75 age group. Capturing information for some adult members in particular age groups and none for household members in other age groups means that imputation will not provide accurate travel-related information at the household level. This clearly demonstrates the need to elaborate the fifty percent definition of a complete household response to allow for varying household characteristics that are otherwise unaccounted for.

Certainly, there are higher costs associated with a more stringent definition of a complete household response. In addition, it appears that lenient definitions of both a complete household response and proxy reporting, employed together in one survey, do not boost response rates significantly and actually undermine data quality. Given this outcome, a lenient definition of a complete household response should be paired with a stringent definition of proxy reporting; the two standards are interdependent. Proxy reporting should only be permitted when repeated attempts to obtain the information from the respondent in question have failed, and time and budget constraints require finalization of the data collection process.

Clearly, a more lenient definition of a complete household response would be less likely to result in bias in the data set, because partial responses (households in which at least one member did not provide any trip or activity information) would not be dropped (U.S. Department of Transportation, 2001a; Richardson and Meyburg, 2003). Suppose a study required a sample of 450 complete households and expected a response rate of 30 percent; the recruited sample would have to be 1,500 households. If a lenient definition of a complete household response is adopted, the 450 households would be relatively easy to obtain compared with a stringent definition that drops partial household responses. Standardized procedures recommended for the complete household definition are provided in section 2.2.3 of the Final Report.

5.4 D-6: SAMPLE REPLACEMENT

5.4.1 Issues in Sample Replacement

Refusals result in lost sample and require some sample make-up or replacement. Procedures for sample replacement are critical in preserving the integrity of the initial sample. Two questions arise:

1. When should a sampled household or person be considered non-responsive and a replacement (make-up) household or person be selected?
2. How should replacements for the sample be provided?

Quite frequently, the decision to make up sample is not seriously considered, and additional sample is added after a relatively minor attempt to gain the original sample. This practice has the potential to create serious biases in the sample and should be avoided. In addition, high non-response rates and increasing problems with data integrity have made the issue of sample replacement more important, because the demographic characteristics and trip rates of non-responding households differ from those of households that participate in the survey (DeHeer and Moritz, 1997; Kam and Morris, 1999; Polak, 2000; Richardson, 2000; Kalfs and van Evert, 2003; Zmud, 2003). This section discusses how to provide replacement of the sample. Call history files are also examined to determine the rate of refusal conversion for households that originally gave a "soft" refusal to participate in the recruitment interview, and further analysis is conducted to determine the overall conversion of these soft refusals to complete household survey responses.
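The over-sampling arithmetic behind the example above (450 complete households at an expected 30 percent response rate implies 1,500 recruited households) can be stated directly, and the same calculation applies per stratum, anticipating the stratified make-up problem discussed in the next section. A minimal sketch; the strata names and response rates in the usage example are hypothetical:

```python
import math

def required_recruited_sample(target_completes, expected_response_rate):
    """Recruited sample needed so that, at the expected response rate,
    the target number of complete household responses is still reached."""
    return math.ceil(target_completes / expected_response_rate)

def required_by_stratum(targets, response_rates):
    """Per-stratum version: each cell of a stratified sample is over-sampled
    according to its own anticipated response rate."""
    return {s: math.ceil(targets[s] / response_rates[s]) for s in targets}

# The example above: 450 complete households at a 30 percent response rate.
print(required_recruited_sample(450, 0.30))  # 1500

# Hypothetical strata and per-cell response rates, for illustration only.
print(required_by_stratum({"0-car": 100, "1-car": 200, "2+-car": 150},
                          {"0-car": 0.25, "1-car": 0.35, "2+-car": 0.30}))
# {'0-car': 400, '1-car': 572, '2+-car': 500}
```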

5.4.2 Discussion, Review, and Analysis of the Issues

An important issue is the make-up of the sample when there are refusals. Most surveys are based on anticipated response rates and set up samples that represent sufficient over-sampling to handle expected non-response. Two problems arise. First, if the sample is not a simple random sample but a stratified or other more complex sample, the over-sampling must account for varying non-response in different strata, which is not easy to anticipate. In a survey that stretches over several weeks of recruitment and retrieval, the final sample achieved in each cell of a stratified sample can be tracked, and households can be sought to provide the make-up sample in each cell that is falling short. However, to avoid a costly search for households in specific cells of the matrix, many surveys have diverged from the specified stratified sample and made up the total sample without regard to the distribution of the final sample. It is possible that examination of previous surveys would indicate the relative sizes of non-response rates in different sampling cells, which could, in turn, allow recruitment to over-sample at a rate that more nearly compensates for the eventual non-response levels. Alternatively, the level of over-sampling at recruitment may need to be increased, so that not all recruited households are used in the final sample. Finally, sampling "on the fly" as a mechanism to make up the sample needs to be examined. In this case, new households are added to the sample as needed, whenever non-response drives the total sample below what is desired. However, this method is likely to produce a very distorted sample, particularly when attempts to gain the cooperation of respondents are not pursued aggressively.

Review of Recent Travel Surveys

After careful examination of a number of recent travel surveys, it was found that the number of subsequent call attempts made to households that initially refused to participate varied from zero to six. In other words, some surveys did not re-call households that gave a soft refusal during recruitment at all. Most surveys allowed the household to give a soft refusal once. However, these households often respond on subsequent call attempts with hard refusals (a strong indication by the respondent that he or she does not want to participate; in this situation, the household is not called again). Some households, during subsequent call attempts, may refuse to answer the telephone, especially if they are expecting the call and caller ID displays the origin of the call. Calls with a non-contact disposition (no answer, busy, or answering machine) remain unresolved until contact is achieved with the household; such households are usually called numerous times until contact is again achieved and the call resolved. The next section describes the results of call history file analyses. The analyses show the maximum number of call attempts made, during the recruitment phase of recent travel surveys conducted in the United States, to households that initially refused to take part in the survey but later went on to complete the recruitment interview.

Call History File Analyses

Analysis of call history files gives important information that is not found in any other database. Two important pieces of information contained in the call history files are:

1. The call disposition codes for every call attempt made to a household; and

2. The (implicit) pattern of initial response behavior.

In studying the response behaviors of households that initially refused to participate in one particular survey, but later completed the recruitment interview and eventually the household travel survey, it was found that 521 first refusals were converted to complete recruitment interviews (22.5 percent of all first refusals). The overall conversion of first refusals to complete household surveys was 7.4 percent. In other words, had refusal conversion not been attempted, 172 complete household responses would have been lost. Losing these households would add to non-response bias, given that refusers differ from respondents, especially on the statistic of interest, mean trip rates. This confirms what is stated in the literature (Kam and Morris, 1999; Richardson, 2000; Kalfs and van Evert, 2003; Zmud, 2003).

Only 9 percent of households that initially refused to participate, and that were non-contactable during subsequent call attempts, eventually completed the recruitment interview; the overall conversion of these households to complete household responses is only 3.1 percent. These results suggest that households that initially refuse and are later non-contactable are much more reluctant to participate than households that initially refuse but are contactable during the next few rounds of call attempts. This response behavior is similar to that of households that request to be called back numerous times but, during the final call, refuse to participate; these households eventually respond like outright refusers (Zmud, 2003). For the households that requested to be called back, the conversion rate to complete household response was 2.1 percent, much lower than the conversion of first refusals.

Almost 16 percent of initial "soft" refusals became "hard" refusals on subsequent call attempts. This is a substantial share, and it reinforces the observation that it is becoming much more difficult to persuade respondents to participate in surveys. For example, total hard refusals in this particular survey numbered 3,279, of which 11.3 percent were originally soft refusals. By comparison, 7.4 percent of soft refusals were converted to complete household responses.

Table 40, which shows the number of subsequent call attempts made to convert households from initial refusals, indicates that 52 percent of first refusals required one more call to be converted to a complete recruitment interview that later resulted in a complete household survey, while 28 percent required two call attempts. Also in Table 40, 56 percent of first refusals that were converted to a non-contact during the second call required one more call attempt to be converted to a complete recruitment interview. The number of first refusals that were converted to non-contacts during intermediate call attempts, and that were finally converted to complete interviews, drops sharply after two call attempts are made to convert the non-contact from the initial refusal; in total, these households required four calls to achieve a complete recruitment response. If only three call attempts are allowed after the first non-contact is recorded for households that initially refused, the overall conversion to complete household response is 3 percent, a drop of only 0.1 percent.
This call limit could be proposed given the cost savings relative to the limited sample loss; however, further research is required to reach conclusive results. Recommendations on sample replacement are provided in section 2.2.4 of the Final Report.

Table 40: Number of Call Attempts Required to Convert First Refusals to Complete Recruitment Interviews

Number of call attempts                                        1      2      3      4      5
First refusals converted to complete recruitment interview     273    148    67     23     10
Percent                                                        52.4   28.4   12.9   4.4    1.9
First refusals to non-contacts to complete recruitment
interviews                                                     96     49     19     6      2
Percent                                                        55.8   28.5   11.0   3.5    1.2
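The percentages in Table 40 follow directly from its counts: the 521 first-refusal conversions reported in the text are the sum of the first row. A small sketch that reproduces the first row's percentages:

```python
# Counts from the first row of Table 40: first refusals converted to complete
# recruitment interviews, by the number of additional call attempts required.
converted = {1: 273, 2: 148, 3: 67, 4: 23, 5: 10}

total = sum(converted.values())  # 521, matching the figure reported in the text
for calls, n in converted.items():
    print(f"{calls} call(s): {n:4d}  ({100 * n / total:.1f}%)")
# 1 call(s):  273  (52.4%) ... 5 call(s):   10  (1.9%)
```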

5.5 D-7: ITEM NON-RESPONSE

5.5.1 Definition

Item non-response has been defined as "the failure to obtain a specific piece of data from a responding member of the sample" (Zimowski et al., 1997a), or the "failure to obtain 'true' and complete data from each respondent" (Zmud and Arce, 2002). The latter definition draws attention to an important issue: item non-response occurs not only when data are missing but also when incorrect data are provided. Within this context, Statistics Canada defines "incorrect" data as data that are either invalid or inconsistent (1998a, p. 38). Invalid data are data items whose values are beyond the possible or feasible range of that item. Inconsistent data are data items whose values are inconsistent with the values of other data items of the respondent.

Item non-response is closely linked to several other items discussed in this report. First, it is linked to the definition of a complete household addressed in Section 5.3, because a responding household is considered complete only when item non-response is within tolerable limits. Second, it relates to survey design and survey execution, because the form in which questions are posed and the manner in which the survey is conducted are known to have a significant impact on item non-response.

5.5.2 Analysis of Item Non-Response

The need for standardization in the identification and measurement of item non-response in travel surveys is motivated by the desire to achieve two features in future travel surveys: consistency among surveys, so that meaningful comparisons can be made, and the potential to use item non-response as a measure of data quality. One of the first needs in standardizing item non-response in travel surveys is to standardize its definition so that a consistent interpretation exists among all travel surveys. There is general acceptance that any data item that is missing or whose value is incorrect (i.e., invalid or inconsistent, as defined in section 5.5.1 of this Technical Appendix) is an item non-response. However, among past travel surveys, some provide response categories such as "don't know" and "refused" while others do not, and the number of missing item values is affected by the presence or absence of these categories. In a review of seven recent travel surveys, shown in Table 41, four did not provide the options of responding to the question on household income with "don't know" or "refused". The results vary quite widely from survey to survey, but it is clear that "don't know" and "refused" have effectively replaced the missing-values category in the surveys that offer them. Thus, it seems appropriate that "don't know" and "refused" responses be counted as item non-response. Of course, other response options that may be provided, such as "not applicable" (see section 8.3), should not be counted as non-response.

Table 41: Non-Response on Household Income Among Several Surveys

Survey Data Set                                       Date       Sample size  Percent missing  Percent don't know or refused
Regional Travel Household Interview Survey for
New York and North Jersey                             1997-1998  11,264       24.4             not included
Maricopa Regional Household Travel Survey             2001       4,018        0                9.9
Salt Lake City Survey                                 1993       3,082        0                4.8
Southeast Florida Regional Characteristics Study      1999       5,168        23.5             not included
Ohio-Kentucky-Indiana Survey                          1990       3,001        0                8.6
Dallas-Fort Worth Survey                              1996       3,996        8.0              not included
Broward Travel Characteristics Study                  1996       702          13.2             not included

In addition to the non-response on household income shown in Table 41, other items displaying a high incidence of missing data in the seven data sets reviewed are driver license status of individuals (0-55 percent), travel mode (0-26 percent), start time of trip (0-24 percent), end time of trip (0-25 percent), travel time of trip (0-27 percent), and vehicle occupancy (0-35 percent). Because most of these variables are collected on a routine basis in travel surveys, they could feature as standard variables in the construction of a single measure of item non-response in a data set. That is, these variables, or a subset of them, could be used to establish a single statistic reflecting the overall level of item non-response in a data set. No record of the establishment of such a single statistic of item non-response was encountered in the literature. Recommendations on standardized procedures are given in section 2.2.5 of the Final Report.
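As an illustration of what such a single statistic might look like, the sketch below pools the routinely collected variables named above into one cell-level non-response rate, counting "don't know" and "refused" as non-response but not substantive answers. This is a hypothetical construction, not a standard drawn from the literature, and all field names are invented:

```python
# One possible construction of a single item non-response statistic over a set
# of routinely collected variables. Illustration only; as noted above, no such
# standard statistic was found in the literature. "Don't know" and "refused"
# count as non-response; "not applicable" would be treated as a valid answer.

NONRESPONSE_VALUES = {None, "", "don't know", "refused"}
STANDARD_VARIABLES = ["income", "license", "mode", "start_time", "end_time"]

def item_nonresponse_rate(records, variables=STANDARD_VARIABLES):
    """Share of (record, variable) cells that are missing or non-substantive."""
    cells = [(r.get(v) in NONRESPONSE_VALUES) for r in records for v in variables]
    return sum(cells) / len(cells)

records = [
    {"income": "refused", "license": "yes", "mode": "car",
     "start_time": "08:10", "end_time": "08:40"},
    {"income": "45000", "license": "yes", "mode": None,
     "start_time": "09:00", "end_time": "don't know"},
]
print(f"{item_nonresponse_rate(records):.2f}")  # 0.30 (3 of 10 cells)
```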

5.6 D-8: UNIT NON-RESPONSE

5.6.1 Definition of Unit Non-Response

Unit non-response can be defined as the absence of information from some part of the target population of the survey sample (Harpuder and Stec, 1999; Black and Safir, 2000). However, what also needs to be specified is the definition of a complete response. Are travel data required from all of the household's members? If not, then the significance of unit non-response is reduced. What constitutes a complete household is discussed in Section 5.3 of this Technical Appendix.

5.6.2 Review and Analysis of Unit Non-Response

High rates of unit non-response are generally associated with non-response error. Non-response error is a function of the non-response rate and the difference between respondents and non-respondents on the statistic of interest (Keeter et al., 2000). For example, non-respondents to travel surveys are more likely to be low- and high-income households and households with low or high mobility rates (De Heer and Moritz, 1997; Richardson, 2000). A lower unit non-response rate is desired because it reduces the incidence of non-response bias. Non-response rates are influenced by the survey topic, the number of call-backs, the sponsor of the research, incentives, the number of follow-ups, and the survey environment (Schneider and Johnson, 1994; Ettema et al., 1996; Melevin et al., 1998). Interestingly, it has also been observed that late respondents to a survey (those who respond only after numerous waves and reminders) actually resemble non-respondents (Richardson, 2000; Lahaut et al., 2003); essentially, had later waves not been conducted, these late respondents would have been non-respondents.

There are two broad categories of unit non-response: refusals (hard refusals, soft refusals, and terminations) and non-contacts (busy, no reply, and answering machines). In relation to call-backs, if eligibility status is never determined and the household requested to be called back, but no contact was achieved on subsequent call attempts, the case becomes a unit of unknown eligibility and cannot be regarded as a non-responding unit. However, if eligibility was determined and the household requested to be called back, but no contact was achieved on subsequent call attempts, the case becomes a non-responding unit.
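A minimal sketch of the classification rule just described, in which a never-recontacted call-back case counts as a non-responding unit only when eligibility had already been determined; the disposition labels are simplified for illustration:

```python
# Sketch of the unit classification described above. Disposition labels are
# simplified stand-ins for the coding schemes used in actual surveys.

REFUSALS = {"hard refusal", "soft refusal", "termination"}
NON_CONTACTS = {"busy", "no reply", "answering machine"}

def classify_unit(final_disposition, eligibility_determined):
    if final_disposition in REFUSALS:
        return "non-responding"
    if final_disposition in NON_CONTACTS:
        # Unknown eligibility cases are excluded from unit non-response.
        return "non-responding" if eligibility_determined else "unknown eligibility"
    return "responding"

print(classify_unit("no reply", eligibility_determined=False))  # unknown eligibility
print(classify_unit("no reply", eligibility_determined=True))   # non-responding
```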
To reduce unit non-response in both the recruitment and retrieval stages of a survey with two or more stages (most travel surveys are two-stage surveys, in which recruitment is conducted through RDD and retrieval through either CATI or mail-back), the number of refusals, terminations, and non-contacts needs to be reduced. In addition, the researcher may opt to employ refusal conversion techniques,

if the survey environment allows for this. In terms of non-contacts, greater effort is needed to reach the difficult to contact. Characteristics of non-respondents to travel surveys, found in numerous studies, are:

1. Very low and very high income;
2. High and low mileage drivers;
3. Young single males and females;
4. Zero vehicle use;
5. Residence in metropolitan areas; and
6. Households with children (De Heer and Moritz, 1997; Kam and Morris, 1999; Richardson, 2000).

Non-response bias is common in travel surveys and must be minimized to obtain a more accurate picture of people's travel behavior. The development of a standard definition of unit non-response that effectively incorporates the definition of a complete household response will enable comparability across different surveys and, hence, provide a more accurate picture of what is actually happening to response rates.

It has been well documented that response rates have been declining (Atrostic et al., 1999; Dillman and Carley-Baxter, 2000; Dillman et al., 2001; Kalfs and van Evert, 2003; Richardson and Meyburg, 2003). For example, in a study that compared data from 1976 to 1996, it was found that it took double the number of calls to complete an interview and that the number of people not responding increased significantly; it took four calls to complete an interview in 1979, whereas in 1996 it took eight calls (Oldendick and Link, 1999; Curtin et al., 2000). Also, given the high number of non-respondents reported in this study, it is not surprising that the rate of refusal conversion jumped from 7.4 percent in 1976 to 14.6 percent in 1996 (Oldendick and Link, 1999; Curtin et al., 2000). The phenomenon of rising unit non-response rates may be attributed to the nature of the data collected, which requires more of participants' time (increased respondent burden), and to more physical barriers inhibiting contact with prospective participants, such as call-screening devices (telephone surveys) and gated communities (face-to-face surveys) (Melevin et al., 1998; Kam and Morris, 1999; Oldendick and Link, 1999; Vogt and Stewart, 2001; Kalfs and van Evert, 2003). Also, the increasing number of marketing-type surveys has led people to perceive increased respondent burden, so these individuals no longer even consider participating (Kalfs and van Evert, 2003; Black and Safir, 2000). However, it will be interesting to see the effect of the National Do Not Call Registry, which allows respondents who have placed their number on the registry to permit only research surveys, or a selected few telemarketing surveys chosen by the consumer, to call their household. In relation to face-to-face surveys, another inhibiting factor is the decreasing number of potential respondents at home when the study is conducted.

Ways to overcome rising unit non-response rates have also been well documented (Schneider and Johnson, 1994; Melevin et al., 1998; Cook et al., 2000; Dillman et al., 2001; Kalfs and van Evert, 2003; Zmud, 2003). To reduce the number of refusals and increase the chance of obtaining a complete interview, three commonly recommended strategies are:

• The use of pre-survey monetary incentives;
• The use of advance letters and reminders (follow-ups); and
• Special interviewer training (Ettema et al., 1996; Leslie, 1997; Melevin et al., 1998; Kam and Morris, 1999; Cook et al., 2000; Kalfs and van Evert, 2003).
Incentives, especially pre-survey monetary incentives, are effective in increasing response rates by as much as twenty percent (Melevin et al., 1998; Dillman et al., 2001; Zmud, 2003). Self-interest is a powerful motivator for respondents to participate in a study (Dillman et al., 2001; Zmud, 2003). It has also been found that pre-paid incentives have positive effects on response rates for short mail-out surveys (Kurth et al., 2001). Other ways to improve response rates and lower unit non-response include good

questionnaire design and easy-to-answer questions, thereby decreasing respondent burden (Axhausen, 1999). Evoking respondents' interest in the research topic (salience) is associated with higher response rates: salience is a significant determinant of response rates (Cook et al., 2000; Dillman and Carley-Baxter, 2000). Research sponsored by a government agency or academic institution yields higher response rates because respondents usually trust this type of research, especially in terms of confidentiality and privacy (Kalfs and van Evert, 2003). Scarcity (when a respondent belongs to an exclusive group of people being asked to participate in the study) is also associated with higher response rates (Kalfs and van Evert, 2003). This is an example of Social Exchange Theory (Schneider and Johnson, 1994; Kalfs and van Evert, 2003). Figure 5 provides a conceptual framework for survey cooperation, and Figure 6 depicts graphically how the interviewer may influence the rate of survey participation.

[Figure 5: A Conceptual Framework for Survey Cooperation. Source: Groves and Couper, 1998. The figure distinguishes factors out of researcher control (the social environment and the householder) from factors under researcher control (survey design and the interviewer), all feeding the householder-interviewer interaction and the decision to cooperate or refuse.]

An identified area that could be improved is the interaction between the interviewer and respondent (special interviewer training) (Groves and Couper, 1998; CMOR, 2000; Kalfs and van Evert, 2003). It has been acknowledged that the interviewer's behavior should be tailored to the social situation and the respondent. This helps to establish rapport quickly and avoid discomfort between the respondent and interviewer, which in turn explains why more experienced interviewers are more successful in obtaining higher response rates (Groves and Couper, 1998).

[Figure 6: Interviewer Influences on Survey Participation. Source: Groves and Couper, 1998. The figure links interviewer experience, socio-demographic attributes, assignment area, survey design features, and interviewer expectations to interviewer behavior and, ultimately, cooperation by the householder.]

Refusal conversion may also involve changing the survey mode from telephone to face-to-face interviews. However, this is more costly, and it has been found that the success of the second mode of survey

delivery in reducing unit non-response is very small (Dillman et al., 2001). However, changing the survey mode combined with the use of post-incentives to induce response among non-respondents has not been tested; this is addressed later in this section.

In relation to reminders, according to Freeland and Furia (1999), telephone reminders to mail surveys did not significantly improve response rates. This, however, has not been borne out in a number of transport surveys, where telephone reminders for mail surveys have been found to be quite effective. For mail-back surveys, if the person who initially contacted the household delivered the questionnaire and a post-payment incentive was offered, the result was an overall increase in the response rate: personal delivery evokes reciprocity (Dillman et al., 2001). Realistically, however, personal delivery of surveys to households is unlikely due to high costs.

In travel surveys, households with children are less likely to respond, and if they do respond, it is after numerous mail-outs of the original questionnaire and numerous reminder letters or postcards. These households are less likely to respond because of the complex structure of the travel diary; the respondents perceive completion of the questionnaire as a cumbersome exercise, a perception exacerbated by the complex nature of the trips undertaken by these households (Kam and Morris, 1999). When these households eventually complete the travel survey, they tend to under-report trips because of that complexity. Thus, even though response rates increase, this is quite often at the expense of data quality (Kam and Morris, 1999).

In travel surveys, RDD is frequently employed to recruit households by telephone. This means that the location of a household in the area under investigation is not known from the telephone number. However, telephone numbers are usually retained. Therefore, the researcher may call the non-responding household and ask for the address, ask about particular questions, mail out the questionnaire a second time, or schedule a face-to-face interview; the mode of delivery depends on the prospective respondent's preference. Another option is to devise a survey for non-respondents, as described later in this section.

Non-Contacts

As mentioned earlier, the main recruitment method in travel surveys is RDD. The problem is that non-contacts are increasing, adding to rising unit non-response rates. The number of non-contacts encountered in a survey is a function of the repeated calls that interviewers make on these particular cases (Zmud, 2003). Addressing non-contacts is becoming more of an issue due to changing household structures, flexible work arrangements, and physical and technological barriers; physical barriers are becoming more prevalent in today's societies, making it more time-consuming and difficult for interviewers to reach prospective respondents. In travel surveys, non-contacted households may have higher mobility rates than households that refused to take part in the survey; therefore, if the researcher is unable to contact these households, the result may be an underestimation of trip rates (Zmud, 2003). Figure 7 depicts the interactions among the factors influencing the likelihood of contact.

[Figure 7: Influences on the Likelihood of Contacting a Sample Household. Source: Groves and Couper, 1998. The figure relates social environmental attributes, socio-demographic attributes, physical impediments, accessible at-home patterns, interviewer attributes, and the number and timing of calls to the likelihood of contact.]

According to Groves and Couper (1998), households with members who have physical impediments should be called first because, on average, these households require more calls to obtain the first contact. This also applies to multi-unit dwelling structures and unlisted numbers. If these numbers are called first, more call attempts, and more attempts at converting refusals, become possible. To enhance the rate of contact, four methods should be employed:

• Increase the number of calls for non-contacted units;
• Designate certain times for calling non-contacted units, e.g., Tuesday evenings;
• Expand the data collection period; and
• Conduct face-to-face interviews (Groves and Couper, 1998).

According to Dennis et al. (1999), Monday to Thursday evenings are the best times to contact households (to obtain complete recruitment and complete interviews), and the highest contact rates on first calls occurred for households with incomes between $25,000 and $35,000 (29.6 percent) on these evenings between 6 and 9 pm. In relation to technological barriers, it was found that households with answering machines were just as likely to complete an interview once contact was established. Also, if a researcher leaves a brief message describing the purpose of the research, it gives the respondent the impression that the researcher has gone to some trouble to contact them, making it more likely that the person will participate in the study (reciprocity) (Kalfs and van Evert, 2003).

Non-contacts become problematic if their responses differ significantly from the responses of contacts, because this adds to non-response bias (Zmud, 2003). For example, younger households and households with higher incomes required more calls to complete an interview due to telephone screening devices; these households also have higher refusal rates (Zmud, 2003). Also, non-contacts who become refusers after subsequent call attempts usually have the same socio-demographic characteristics as outright refusers (Zmud, 2003). Respondents who initially refused an interview but were later converted were predominantly of lower socioeconomic status and from households with children, whereas the non-contact group was dominated by younger, more highly educated, and wealthier respondents of higher socioeconomic status (Stec et al., 1999; Curtin et al., 2000; Keeter et al., 2000). It is also important to acknowledge that non-contacts lead active lifestyles and are highly mobile; in travel surveys, the absence of data from these households results in an underestimation of trip rates. In addition, potential refusers possess different demographic characteristics from non-contacts: higher refusal rates have been found among the elderly and persons with less education (Kurth et al., 2001). For this reason, it is important to distinguish between the two components of bias reduction (converting refusals and establishing contact with the difficult-to-contact group) when trying to improve response rates (Zmud, 2003). It has also been documented that respondents who initially stated that they were too busy to participate and scheduled a call-back were more likely to be "refusers" than "participators" (Zmud, 2003). This raises the questions of whether these households should be re-called and, if so, what should happen if, on subsequent calls, they again schedule a call-back. This question is addressed in Sections 5.1 and 5.4 of this report.
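One way the call-ordering advice above could be operationalized is to release the cases expected to need the most attempts first, so that more call attempts fit inside the field period. A rough sketch only, with hypothetical case flags (multi-unit dwelling, unlisted number); actual survey operations would use richer frame information:

```python
# Sketch of a call-queue ordering rule: hard-to-contact cases (multi-unit
# dwellings, unlisted numbers) are dialed first. Flags are hypothetical.

def release_order(cases):
    def expected_difficulty(case):
        # Booleans sum as 0/1, so more flags means higher priority.
        return case.get("multi_unit", False) + case.get("unlisted", False)
    return sorted(cases, key=expected_difficulty, reverse=True)

sample = [
    {"id": 101, "multi_unit": False, "unlisted": False},
    {"id": 102, "multi_unit": True,  "unlisted": True},
    {"id": 103, "multi_unit": True,  "unlisted": False},
]
print([c["id"] for c in release_order(sample)])  # [102, 103, 101]
```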
Call History File Analysis

For households that initially refused, and for households that were initially non-contactable but later went on to complete the household travel survey, the call history characteristics were added to the household database to compare the important characteristics of these households with those of the

entire sample. Table 42 shows the number of call attempts made to convert households that were initially non-contactable (original call dispositions include no answer, answering machine, fax line, and busy) for call history file 1.

Table 42 shows that households that required fewer call attempts to establish contact and yield a complete household response differed, in terms of mobility (mean trips), from the entire sample. For non-contacted households that required 3 calls to become a complete household response, the mean number of trips was 8.56, compared with 8.47 for the entire sample. However, for households that required 2, 4, 5, or 6 calls, the mean numbers of trips were 8.01, 8.13, 7.80, and 6.47, respectively. These results show that the mean number of trips for households that required more than 4 calls differed markedly from that of the entire sample. It also appears that the non-contact conversions that required 6 calls consisted mainly of households without children and households with higher income, which is consistent with the literature (Colombo, 2000; Zmud, 2003). The most important result of these findings is that, whether households are easy or difficult to contact (in terms of the number of call attempts), bias is present in an important key statistic, the mean number of trips. It also appears that the increase in response rates has led to a decrease in data quality in relation to mean trip rates, due to under-reporting, confirming what was stated in the literature (Kam and Morris, 1999).

Table 42: Descriptive Statistics for Original Non-Contacts

Variable                           Call 2  Call 3  Call 4  Call 5  Call 6  Total Calls (2-6)  Sample
One-Person HHs                     32%     35%     42%     51%     47%     36%                27%
Two-Person HHs                     34%     36%     32%     25%     24%     33%                35%
One Worker                         43%     41%     51%     53%     53%     44%                40%
Two Workers                        38%     40%     34%     29%     29%     37%                37%
One Car                            37%     36%     43%     45%     47%     39%                33%
Two Cars                           42%     44%     40%     34%     29%     41%                43%
Single Detached Dwelling           71%     69%     65%     59%     71%     69%                74%
Home Owner                         68%     63%     60%     49%     53%     64%                68%
Mean Trips per HH                  8.01    8.56    8.13    7.80    6.47    8.12               8.47
No Infants in Household (0-4 yrs)  87%     90%     90%     88%     94%     88%                87%
No School-Aged Children in HH      77%     77%     80%     84%     94%     78%                72%
One-Adult Households               36%     29%     48%     53%     47%     40%                31%
Two-Adult Households               54%     53%     45%     40%     29%     51%                56%
Income under $50,001               61%     60%     68%     67%     47%     62%                62%

Table 43 shows that the differences are even greater for households that initially refused to participate in the survey, even though the mean number of trips across all households converted from refusals was closer to the sample mean than was the mean for all households that were originally non-contacts: 8.24, 8.47, and 8.12 trips, respectively. Regardless of the number of call attempts made to convert the households from refusals, the mean number of trips differed from that of the entire sample; the greatest difference was for households that required one more call attempt to convert the refusal successfully. In addition, the socioeconomic characteristics of refusers that required four conversion attempts (five calls altogether) appear to be lower than those of the non-contact conversions that required five call attempts. This also confirms what was stated in the literature (Stec et al., 1999; Curtin et al., 2000; Keeter et al., 2000; Richardson, 2000; Zmud, 2003).
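Comparisons of the kind shown in Tables 42 and 43 amount to grouping converted households by the number of calls they required and setting each group's mean trips against the full-sample mean. A minimal sketch, with hypothetical field names and toy data:

```python
# Sketch of the Table 42/43 style comparison: mean household trips by the
# number of calls needed for conversion, against the full-sample mean.
from collections import defaultdict

def mean_trips_by_calls(households):
    groups, trips = defaultdict(list), []
    for h in households:
        groups[h["calls_to_convert"]].append(h["trips"])
        trips.append(h["trips"])
    sample_mean = sum(trips) / len(trips)
    return {k: sum(v) / len(v) for k, v in sorted(groups.items())}, sample_mean

households = [
    {"calls_to_convert": 2, "trips": 8}, {"calls_to_convert": 2, "trips": 8},
    {"calls_to_convert": 5, "trips": 6}, {"calls_to_convert": 5, "trips": 7},
]
by_group, overall = mean_trips_by_calls(households)
print(by_group, round(overall, 2))  # {2: 8.0, 5: 6.5} 7.25
```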
Table 43: Descriptive Statistics for Original Refusals

Variable                           Call 2  Call 3  Call 4  Call 5  Total Calls (2-5)  Sample
One-Person HHs                     19%     29%     18%     23%     22%                27%
Two-Person HHs                     49%     48%     36%     23%     46%                35%
One Worker                         36%     40%     55%     23%     37%                40%
Two Workers                        34%     29%     45%     23%     33%                37%
One Car                            25%     38%     18%     23%     28%                33%
Two Cars                           51%     50%     55%     31%     49%                43%
Three or More Cars                 19%     10%     27%     46%     19%                19%
Single Detached Dwelling           83%     74%     91%     85%     81%                74%
Home Owner                         81%     79%     60%     62%     79%                68%
Mean Trips per HH                  7.86    8.82    8.95    8.67    8.24               8.47
No Infants in Household (0-4 yrs)  87%     92%     82%     92%     88%                87%
No School-Aged Children in HH      79%     79%     55%     77%     77%                72%
One-Adult Households               27%     35%     36%     23%     30%                31%
Two-Adult Households               60%     60%     45%     38%     58%                56%
Income under $50,001               61%     68%     64%     50%     61%                62%

Table 44 shows the differences in the mean number of trips between households converted from non-contacts to complete household surveys and the entire sample. The households that required from eight to ten calls to be converted appear to have a higher socioeconomic status than households that required between two and seven calls. This again confirms the literature (Stec et al., 1999; Colombo, 2000; Keeter et al., 2000; Zmud, 2003). Employing refusal and non-contact conversion requires careful and thorough analysis of call history files, for both CATI recruitment and retrieval, because sufficient numbers of refusals and non-contacts must be successfully converted, at every call attempt, to reduce the incidence of bias in the data set. According to Polak (2002), households with more vehicles are more likely to be non-respondents, due to their high mobility rates; exclusion of these households therefore tends to lead to a downward bias in trip rates. This requires further investigation.

Table 44: Descriptive Statistics for Non-Contact Conversions (File 2)

Variable                  Call 2  Call 3  Call 4  Call 5  Call 6  Call 7  Call 8  Call 9  Call 10  Total Calls (1-10)  Sample
One Person                35%     39%     44%     43%     46%     46%     40%     45%     0%       38%                 29%
Two Persons               38%     39%     35%     36%     42%     27%     20%     36%     100%     37%                 38%
One Vehicle               36%     40%     45%     38%     46%     39%     40%     64%     0%       40%                 33%
Two Vehicles              41%     38%     39%     43%     38%     42%     40%     27%     100%     39%                 43%
Single Detached Dwelling  63%     61%     54%     60%     59%     69%     60%     73%     50%      61%                 66%
Owner/Occupier            67%     66%     60%     61%     54%     65%     60%     82%     100%     65%                 69%
Mean Trips per HH         9.18    8.30    8.19    8.20    6.99    7.73    13.8    6.82    11.0     8.70*               9.11
Income over $40,000       77%     76%     74%     78%     74%     73%     80%     73%     100%     76%                 77%

* Significant difference between the mean number of trips for households converted from non-contacts and that of the sample, at P = 0.05.

Non-Response Surveys

Numerous mathematical formulations have been proposed for calculating non-response bias (Groves and Couper, 1998; Black and Safir, 2000). Also, given that late respondents to a survey, after numerous mail-outs, usually respond like non-respondents (Richardson, 2000; Lahaut et al., 2003), it may be best to reduce the number of mail-outs, given the associated decline in data quality, and instead adopt non-response surveys to correct for non-response bias. Non-response surveys are important because they enable the researcher to gain some knowledge about the travel patterns of non-respondents and to determine whether these differ significantly from respondents' travel characteristics. Non-response surveys also allow the researcher to understand why these individuals refused to participate in the original study, and they aid in the development of future travel surveys.
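One common deterministic formulation of non-response bias in a sample mean, consistent with the description in Section 5.6.2 (bias as the product of the non-response rate and the respondent/non-respondent difference on the statistic of interest), can be written as follows; this is offered as a standard textbook form, not a formula quoted from the studies cited above:

```latex
% Deterministic decomposition of non-response bias in a sample mean.
\[
\operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}
\;=\; \frac{n_{nr}}{n}\,\bigl(\bar{y}_r - \bar{y}_{nr}\bigr),
\]
% where $\bar{y}_r$ and $\bar{y}_{nr}$ are the respondent and non-respondent
% means of the statistic of interest (e.g., trips per household), and
% $n_{nr}/n$ is the unit non-response rate.
```

Under this formulation, a non-response survey serves to estimate the otherwise unobservable non-respondent mean, so that the size of the bias can be gauged and corrected.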

Calling non-responding households and reminding them to participate will be of no use if they have discarded or misplaced the survey package (Richardson, 2000). This may be why Freeland and Furia (1999) recorded no significant increase in response rates to a mail-out survey when telephone reminders were made. However, a second mailing to hard-to-enumerate households did increase response rates (Whitworth, 1999). If this procedure does not yield a significant increase in response rates, then a face-to-face interview should be conducted using the original questionnaire, or a non-response survey should be devised and either mailed, e-mailed, conducted by CATI, or administered in a face-to-face interview with non-responding households, depending on the funds available to the research agency. Such a survey should be relatively short and ask about the number of trips undertaken by the household on an allocated day, the means of travel, household size and age structure, housing tenure status, type of dwelling, combined annual household income, and the employment status of the respondent. This survey reduces the perception of high respondent burden because the questions asked are less complex, the form is shorter and takes less time to complete, and the visual presentation of the survey is more "aesthetically pleasing." In addition, the respondent will notice the effort interviewers have made to contact them and may therefore be more inclined to participate, a response otherwise known as reciprocity (Kam and Morris, 1999; Kalfs and van Evert, 2003).

A non-response survey was devised to gain some insights about non-responding households in the 1997 Denver Region Travel Behavior Inventory Household Travel Survey. A two-dollar incentive was offered and was found to be very effective in reducing item non-response on the income question (Kurth et al., 2001). The results of this non-response survey showed that elderly households were over-represented among the non-contact and quick-refusing households and, therefore, that their trip rates were not adequately accounted for in the original survey.

Another non-response survey was conducted in Sydney in 2001 by the Transport Data Centre (TDC), NSW Department of Transport, to investigate non-response and its effects on data quality in relation to the Sydney Household Travel Survey (HTS), as well as to test the telephone as an alternative data collection method to the costly face-to-face interview (TDC, 2002). Households that could not be contacted after at least five visits (non-contacts), and those that still refused after refusal conversion was attempted, were moved into the Non-Response Study. A full HTS telephone interview was offered first if the main reason for non-response was unavailability for a face-to-face interview. If the non-respondents still declined, a shorter Person Non-Response Interview, collecting only core demographic and trip information, was offered. If the non-respondent did not want to complete the Person Non-Response Interview form, a Person Non-Interview form was offered, with information collected by proxy. From the results of this study, TDC was unable to state with any confidence the accuracy of the telephone interview data relative to that of the personal interview (regular HTS), due to insufficient sample size (TDC, 2002). However, the results of the non-response study conducted by TDC are useful for providing some insight into the characteristics of non-respondents to a face-to-face interview.
Table 45 shows some key characteristics of non-respondents. Interestingly, total household trip rates for non-responding households do not differ significantly from those of responding households. However, use of the train and walk modes was significantly higher among non-responding households than among responding households. This is important information, particularly useful for the revision and planning of transport services in the area(s) under investigation, and it illustrates the benefits of conducting follow-up non-response studies.

Table 45: Transport Data Centre Non-Respondent Summary Characteristics

Attribute                  Characteristics
Dwelling Type              More likely to live in a unit or apartment
Housing Tenure             More likely to be renters
Age                        Significantly over-represented in the 15-49 age group
Employment                 60.2% full-time workers, significantly different from responding adults reported as full-time workers (43.2%)
Reason for not responding  60% stated that they were "not interested" and "did not want to" respond
Total Trip Rates           Not significantly different from responding households
Mode Use                   Significantly higher walk and train trip rates than respondents

The following section describes the non-response surveys developed by the Institute of Transport Studies (The University of Sydney) and Louisiana State University, and conducted by NuStats, as well as the data analyses. The actual surveys are shown in Appendices A and B.

5.6.3 Non-Response Follow-Up Study

It was decided that a non-response follow-up study to a recent travel survey would be undertaken to investigate the reasons why people do not respond to surveys and whether any remedial steps can be employed to lessen the number of non-respondents and, therefore, decrease the incidence of non-response bias. In addition, a Stated Choice experiment was devised and incorporated in one of the non-response surveys. This was considered important because it allowed the testing of various survey elements noted in the literature as significant in determining participation and response rates.

NuStats, a survey research firm, had conducted recent travel surveys in four regions of the United States: Wilmington, NC; Charleston, SC; Little Rock, AR; and Yakima, WA. It was therefore approached to undertake the recruitment phase of the Non-Response Follow-Up Survey for these four regions. These surveys followed a similar general approach to the conduct of household travel surveys. Selected households received an advance mailing, followed by a recruitment telephone call, the mailing of a travel diary package, and a telephone call to retrieve the travel diary information. The recruitment call also collected demographic information about the household and its members. All studies used sample generated via random digit dialing (RDD) techniques. In this respect, the sampling frames of the initial travel studies consisted of list-assisted 1+ sample, in which only exchanges with at least one working residential telephone number were included in the universe. The sample was purged as much as possible in advance to identify non-working and business numbers. The final sample was reverse address-matched using Targus, a telephone and address match information source, to identify addresses for the advance mailings. In other respects, the initial travel surveys differed from each other, as noted below:

• Little Rock:
  o Study area comprised Faulkner, Lonoke, Pulaski, and Saline Counties, AR;
  o Travel in April and May, 2003; and
  o Everyone in the household completes a 24-hour travel log.
• Charleston:
  o Study area comprised Berkeley, Charleston, and Dorchester Counties, SC;
  o Travel in April and May, 2003; and
  o Persons age 16+ in the household complete a 24-hour travel log.
• Wilmington:
  o Study area comprised New Hanover County, NC, and a small portion of Brunswick County;
  o Travel in April and May, 2003; and
  o Persons age 16+ in the household complete a 24-hour travel log.
• Yakima:
  o Study area comprised Yakima County, WA;
  o Travel in April and May, 2003; and
  o Persons age 16+ in the household complete a 24-hour travel log.

Independent samples of non-completers were drawn from the sample of telephone numbers generated for each of the four target household travel surveys. This study focused on two categories of non-completes:

• Refusers: households that were contacted during the recruitment phase of the survey but refused to participate. Telephone numbers in the frame were eligible for selection into the Refuser category if the contact resulted in a Refusal (R1, RF) or Hang-Up (HU).
• Terminators: households that were recruited to participate in the household travel survey but did not complete the travel diary portion. Telephone numbers in the frame were eligible for selection into the Terminator category if they remained a non-complete (NC) subsequent to the retrieval phase of the survey.

Non-contacts were not considered in the main non-response study because of limited time and budget constraints. Certainly, non-contacts should be part of ongoing research in this field.

The research team felt it necessary to devise two different surveys for the two categories of non-respondents under investigation, even though both are types of refusers. Terminators, by definition, actually saw the original travel survey form; therefore, questions could be asked directly about the content and structure of the survey form. For the same reason, a Stated Choice experiment was included, because various characteristics of the original travel survey could be tested. Each choice set described two surveys, and respondents were asked to choose between a survey with one given set of characteristics and a survey with another set of characteristics. The survey characteristics tested were: the recruitment method, the type of incentive offered, how and when the completed survey should be returned, and the length of the survey. Respondents were asked to answer two questions:

1. Which of the two surveys would they prefer to complete? and
2. If they were given this survey, would they actually complete it?

Thus, the first question posed a conditional choice, whereby respondents had to choose between the two surveys, and the second question gave respondents the opportunity to indicate that the survey was not something they would complete, despite having selected it in the previous question.

The survey for the Refusers was shorter. The objective was to gain some insight into why these people refused to respond to the original travel survey and how they would like to be contacted in the future if they were to participate in travel surveys. A Stated Choice experiment was not included because, never having seen the original travel survey, these respondents had no recent travel survey to refer to. The two surveys are shown in Appendices C and D, respectively.

Table 46 shows the number and percentage of households that fall into the two categories of non-completers in the original travel studies. Sample for the Non-Response Follow-Up study (pilot and full study) was drawn from these groups.
Table 46: Population for Non-Response Follow-Up Study

Original Travel Studies   Charleston   Little Rock  Wilmington   Yakima
Total Sample Loaded       12,154       12,809       11,153       6,769
Refusers                  4,212 (35%)  3,502 (27%)  3,543 (32%)  1,596 (24%)
Total Recruited Sample    1,369        1,366        1,420        1,505
Terminators               339 (25%)    306 (22%)    351 (25%)    371 (25%)

A random sample of cases with telephone numbers and addresses, fitting the definitions of Terminators and Refusers, was selected from the sampling frame of each target household travel survey. In total, 360 telephone numbers were selected to represent the Refuser category (90 from each

The research team developed the instruments for implementation by NuStats. The content of the instruments is best described as follows:

• Refuser Instrument – 31 questions, covering reasons for not participating in the study and the importance of those reasons (four questions), preferred times and modes of contact (seven questions), household travel patterns (13 questions), and demographics (seven questions); and
• Terminator Instrument – 32 questions, covering reasons for not participating in the study and the importance of those reasons (three questions), preferred times and modes of contact (seven questions), stated preference choices (one question with 12 pairs of choices, plus one additional question), household travel patterns (13 questions), and demographics (seven questions).

The mail-out/mail-back questionnaires were produced by the research team and provided to NuStats “ready to go.” The research team also produced the internet questionnaire, which was hosted on the University of Sydney website; potential respondents were given a NuStats-hosted URL that linked to the University of Sydney site. The telephone questionnaire was a version of the mail-out/mail-back instrument re-worked for telephone administration and programmed into NuStats’ computer-assisted telephone interviewing (CATI) software system, VOXCO. The in-person instrument was the mail-out/mail-back booklet, administered orally by the in-person surveyor. Supporting respondent materials included:

• A cover letter to accompany the mail-out/mail-back booklet;
• A 9½" × 6½" envelope to send the mail-out/mail-back booklet to respondents and another envelope (9" × 6") for respondent return of the booklet; and
• A reminder postcard.

NuStats conducted a pilot survey of the data collection activities. A specific number of cases, or pieces of sample, was selected to test each instrument type (i.e., mail, internet, telephone, and in-person). The pilot study was conducted from June 3 to June 25, 2003. The main consequence of the pilot test was the addition of incentives to the full study to increase the response rate among these known “non-responders.”

To increase response rates, and to test for the effects of different post-incentive levels on response rates, both pre- and post-incentives were used. A $2 bill pre-incentive was included with all mail-out booklets. Three levels of post-incentive were used in the study: $0, $10, and $20. The sample was randomly assigned so that 45 percent of the total sample was offered a $0 post-incentive, 45 percent was offered $10, and 10 percent was offered $20. The cover letter told the respondent of the post-incentive, contingent upon receipt of their completed booklet or internet survey, unless they were in the $0 incentive group, in which case no mention was made of the post-incentive.
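This random assignment can be reproduced in a few lines. The sketch below is illustrative only: the 45/45/10 percent split and the 1,000-household total come from the study design, while the household identifiers and seed are invented.

    import random

    random.seed(2003)  # any fixed seed; chosen here for reproducibility
    households = [f"HH{i:04d}" for i in range(1, 1001)]  # 1,000 selected households
    random.shuffle(households)

    n = len(households)
    assignment = {}
    for i, hh in enumerate(households):
        if i < int(0.45 * n):          # first 45 percent: no post-incentive
            assignment[hh] = 0
        elif i < int(0.90 * n):        # next 45 percent: $10
            assignment[hh] = 10
        else:                          # remaining 10 percent: $20
            assignment[hh] = 20

    for amount in (0, 10, 20):
        print(amount, sum(1 for v in assignment.values() if v == amount))
    # prints 450, 450, and 100 households, as in the full study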
The full study methodology called for a hierarchy of interviewing modes, beginning with mail (which also included an internet option), followed by CATI, and then in-person interviewing. All 1,000 selected households (640 from the Terminator category and 360 from the Refuser category) were mailed the survey booklet, a cover letter, a return envelope, and the $2 pre-incentive on July 29 and 30, 2003. The cover letter referenced the website so that respondents who preferred to complete the survey via the internet were able to do so. The cover letter also offered a toll-free number for inbound CATI surveying. Reminder postcards were mailed on July 31 and August 1, 2003. For the mail and internet portion of the study, 450 households were offered no post-incentive, 450 households were offered the $10 post-incentive, and 100 households were offered the $20 post-incentive.

Households that did not respond to the survey via mail, internet, or inbound telephone call (there were no inbound completed surveys) were eligible to participate in the CATI phase of the survey from September 9 to October 1, 2003. Due to budget limitations, the CATI portion of the study was restricted to reaching a target number of completed interviews: 20 for the Terminator category and 10 for the Refuser category.

Initially, the CATI interviewing focused on Little Rock; it was later expanded to all cities. At the start of CATI interviewing, respondents were offered the same incentive as in the mail/web portion of the study; mid-way through interviewing, all were offered a $10 post-incentive.

Little Rock was selected as the site for in-person interviewing. The intended in-person respondents received a Priority Mail advance letter informing them of the in-person interviewer’s visit. A team of two in-person interviewers completed surveys in Little Rock from September 25 to September 28, 2003. The instrument was the mail-back booklet, administered orally by the in-person surveyor. All in-person respondents received a $10 post-incentive for their participation.

Table 47 shows the number of responses by survey mode, while Table 48 and Table 49 show completes by incentive level for all survey modes; CATI and in-person respondents were all offered a $10 post-incentive for participation. The number of mail-back booklets reported in the tables includes partially completed booklets. Budget constraints limited the amount of CATI dialing and in-person interviewing that could be completed. According to Dillman et al. (2001), in relation to telephone surveys, the post-incentive is not as effective as the pre-incentive; unfortunately, this could not be tested in the Non-Response Follow-Up Survey.

Table 47: Response by Mode (Main Follow-Up Survey)

    Group        Total Sample  Mail-Back Booklets  Web Complete  CATI Complete  In-Person Complete  Total Returns
    Terminators  640           125                 13            20             12                  170
    Refusers     360           92                  1             10             6                   109

Table 48: Terminator Completes by Incentive

    Incentive  Mail  Internet  CATI  Face-to-Face  Total
    $0         38    1         0     0             39
    $10        67    11        20    12            110
    $20        20    1         0     0             21
    Total      125   13        20    12            170

Table 49: Refuser Completes by Incentive Amount

    Incentive  Mail  Internet  CATI  Face-to-Face  Total
    $0         44    1         0     0             45
    $10        33    0         14    10            57
    $20        15    0         0     0             15
    Total      92    1         14    10            117

Table 50 shows the response rates, for the terminator and refuser surveys, for the pilot and main survey. By definition, all units are eligible if they are units of non-response; otherwise, they would be units of unknown eligibility, not units of non-response. Hence, the response rate calculation is simply:

\[ RR = \frac{CI}{Sample} \]

where: RR = the response rate, CI = the number of completed household interviews, and Sample = the sample size.
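As a worked check of this formula against the tables, the sketch below uses the Table 47 return counts; the rounded results match the main-survey rates reported in Table 50.

    samples   = {"Terminators": 640, "Refusers": 360}
    completes = {"Terminators": 170, "Refusers": 109}  # total returns, Table 47

    for group, n in samples.items():
        rr = completes[group] / n  # RR = CI / Sample
        print(f"{group}: RR = {completes[group]}/{n} = {rr:.0%}")
    # Terminators: RR = 170/640 = 27%
    # Refusers: RR = 109/360 = 30%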

Table 50: Response Rates for Terminator and Refuser Samples, Pilot and Main Survey

                 Sample Size (Pilot)  Complete (Pilot)  Response Rate  Sample Size (Main)  Complete (Main)  Response Rate
    Terminators  66                   11                17%            640                 170              27%
    Refusals     30                   8                 27%            360                 117              30%

Table 50 shows that respondents who initially refused to take part in the original travel survey do not appear to care about incentives as much as those who were classed as terminators (given the difference in response rate between the pilot and the main non-response survey). However, it must be noted that problems were encountered during the pilot stage because the timing of this survey coincided very closely with the timing of the original travel survey; some terminator non-respondents were confused, and quite upset, about being bothered to do the “same survey” again.

Table 51 shows the percentage of completed surveys for each survey mode, for both samples, and shows that mail was the dominant mode, as expected given the hierarchical application of the survey modes. However, the number of CATI interviews conducted was a function of budget and time; the maximum permitted was therefore 30. For a few of those respondents who did not respond to the CATI interview, a face-to-face interview was organized.

Table 51: Percentage of Completed Surveys for Each Survey Mode

    Mode          Terminators  Refusals
    Mail          73.5%        84.4%
    Internet      7.6%         0.9%
    Telephone     11.8%        9.2%
    Face-to-Face  7.1%         5.5%

Table 52 shows the percentage of mail and internet responses by the post-incentive amount offered.

Table 52: Mail and Internet Responses by Incentive Amount (Terminators)

                   $0          $10         $20
    Mail/internet  39 (13.5%)  78 (27.1%)  21 (32.8%)
    Sample size    288         288         64

The purpose of this analysis was to determine whether changing survey mode on subsequent waves would have an effect on response level, in terms of the post-incentive level offered. As already mentioned, 45 percent of the terminator sample was given no post-incentive, 45 percent was given a $10 post-incentive, and 10 percent was given a $20 post-incentive; this incentive structure was repeated for the refusers. For the mail/internet survey mode used in wave one, the $20 post-incentive had the most significant effect for terminator non-respondents; this is also the result from the Stated Choice analysis, described later in this section. Terminators were also least likely to respond to the survey if no post-incentive was offered. This may be because some of these individuals indicated that they did not have the time to do the survey; hence, they may have believed that a zero post-incentive was not appropriate compensation for their effort if they were to make time to complete the survey.

Table 53 shows the percentage of mail and internet responses by post-incentive amount for the households that refused to participate in the original travel survey. Again, the highest percentage of responses is for the $20 post-incentive. However, comparing this table with Table 52, refusers were more likely than terminators to respond if no incentive was offered, but less likely if a $10 incentive was offered. In this survey, then, refusers were more likely to respond at the extreme levels of post-incentive offered.
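As an illustrative check, not part of the original analysis, the Table 52 counts can be tested for an association between post-incentive level and terminator response with a standard chi-square test of independence:

    from scipy.stats import chi2_contingency

    responded     = [39, 78, 21]                   # completes at $0, $10, $20
    not_responded = [288 - 39, 288 - 78, 64 - 21]  # remainder of each group

    chi2, p, dof, expected = chi2_contingency([responded, not_responded])
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
    # A small p-value would indicate that response propensity differs
    # across the three incentive groups.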

Table 53: Mail and Internet Responses by Incentive Amount (Refusers)

                   $0          $10         $20
    Mail/internet  45 (27.8%)  33 (20.4%)  15 (41.7%)
    Sample size    162         162         36

Table 54 shows that the majority of terminator and refuser non-respondents would answer the telephone if their caller-id displayed a research institute, university, or government agency, confirming other reports (Kalfs and van Evert, 2003). Also, the terminator and refuser samples are dominated by 1-, 2-, and 3-person households. However, two important differences between the characteristics of the terminators and refusers are that the terminator sample is younger than the refuser sample and contains a higher proportion of females.

Table 54: Key Summary Statistics for Both the Terminator and Refuser Samples

    Attribute (all values are percentages of each sample)             Terminators  Refusers
    1-, 2-, or 3-person households                                    70%          76%
    Would answer the phone if caller-id displayed the name of a
      research institute or university                                93%          75%
    Would answer the phone if caller-id displayed the name of a
      government agency                                               77%          75%
    Respondents who drive                                             91%          80%
    Gender: female                                                    64%          52%
    Male respondents aged under 55 years                              64%          41%
    Female respondents aged under 55 years                            67%          38%
    Households with no vehicle                                        6%           12%
    Rode a bus during the last weekday                                3%           4%
    Rode in a car during the last weekday                             90%          84%
    Did not ride in a car, bus, or taxi during the last weekday       6%           12%
    Combined household income less than $50,000                       65%          63%
    Own or are buying the dwelling in which they reside               68%          77%
    Not employed                                                      9%           5%

Comparing these results with those from the TDC Non-Response Study: in the TDC study, 90 percent of refusals were one- and two-person households or households with two adults and two or more children, whereas 76 percent of the refuser sample in the Non-Response Follow-Up Study were 1-, 2-, or 3-person households. The TDC results also showed that 7 percent of refuser households had no vehicle, whereas 12 percent of refusers in the Non-Response Follow-Up Study had no vehicle. Unfortunately, because different income categories were used in the two studies, income levels could not be compared. A similar finding between the TDC Non-Response Study and the Non-Response Follow-Up Study relates to the gender of refuser respondents: 53.3 percent and 52 percent female, respectively. Also, the results for the terminator sample could not be compared with the TDC Non-Response Study because that study did not classify non-respondents in the manner defined earlier in this section.

Non-Response Follow-Up Study Results

This section is divided into two sub-sections:

1. Terminator results – the results of the multidimensional scaling, background information about the type of model used in the analysis of the Stated Choice data, and the results of the stated choice experiment; and
2. Refuser results – the results of the multidimensional scaling.

Terminators

The survey asked respondents to circle a number between 1 and 5 indicating how they felt about each statement in terms of agreement and importance. The following three tables show the results. Table 55 shows the percentage of respondents who strongly disagreed and strongly agreed with the statements in relation to the original travel survey. The majority of respondents strongly disagreed with the statements “I didn’t understand the questions being asked” and “The person on the phone put me off”. The statements “You called me at a bad time” and “I didn’t have the time to do it” drew the highest percentages of respondents strongly agreeing: 30 percent and 29 percent, respectively.

Table 55: Status of Agreement with Statements in Relation to Original Travel Survey (Terminators)

    Statement                                          Strongly Disagree  Strongly Agree
    The survey form was too long                       22%                18%
    I don’t care about transportation issues           44%                8%
    You called me at a bad time                        19%                30%
    I didn’t like the questions being asked            39%                12%
    I travel too much                                  44%                10%
    I didn’t understand the questions being asked      59%                7%
    I didn’t have the time to do it                    24%                29%
    I travel too little to be of interest to you       38%                21%
    I didn’t want to say no to the interviewer         39%                13%
    I don’t do surveys                                 46%                11%
    I couldn’t get other family members to take part   34%                28%
    I thought it was a marketing deal or scam          37%                23%
    The person on the phone put me off                 60%                8%
    I just couldn’t be bothered to do it               32%                14%

Table 56 shows how important the statements were to the respondents in their decisions not to participate in the original travel survey. The most important statements, in terms of the decision not to participate, were:

• You called me at a bad time (31 percent); and
• I didn’t have the time to do it (28 percent).

Table 56: Status of Importance of Statements in Terms of the Decision Not to Participate in Original Travel Survey

    Statement                                          Not at all important  Very important
    The survey form was too long                       25%                   18%
    I don’t care about transportation issues           28%                   23%
    You called me at a bad time                        19%                   31%
    I didn’t like the questions being asked            34%                   16%
    I travel too much                                  44%                   13%
    I didn’t understand the questions being asked      47%                   12%
    I didn’t have the time to do it                    19%                   28%
    I travel too little to be of interest to you       34%                   23%
    I didn’t want to say no to the interviewer         35%                   17%
    I don’t do surveys                                 37%                   17%
    I couldn’t get other family members to take part   33%                   26%
    I thought it was a marketing deal or scam          37%                   26%
    The person on the phone put me off                 52%                   13%
    I just couldn’t be bothered to do it               28%                   17%

This was expected, given that almost the same percentages of respondents also strongly agreed with these statements. Table 57 shows the results of cross-tabulating the same statements in terms of agreement and importance.

In Table 57, 12 percent of respondents strongly disagreed with the statement “The survey form was too long” and did not regard it as important in their decision not to participate in the survey, whereas 9 percent strongly agreed with it and thought it was important in their decision. Twenty-one percent of respondents strongly disagreed with the statement “I don’t care about transportation issues” and did not regard it as important in their decision, whereas only 3 percent strongly agreed with the statement and thought it was important.

Table 57: Cross-Tabulation of Statements in Terms of Agreement and Importance (Terminators)

    Statement                                          Strongly disagree and   Strongly agree and   Undecided
                                                       not at all important    very important
    The survey form was too long                       12%                     9%                   31%
    I don’t care about transportation issues           21%                     3%                   26%
    You called me at a bad time                        12%                     19%                  3%
    I didn’t like the questions being asked            25%                     6%                   22%
    I travel too much                                  33%                     5%                   18%
    I didn’t understand the questions being asked      40%                     2%                   15%
    I didn’t have the time to do it                    13%                     19%                  22%
    I travel too little to be of interest to you       25%                     15%                  20%
    I didn’t want to say no to the interviewer         23%                     6%                   27%
    I don’t do surveys                                 29%                     5%                   23%
    I couldn’t get other family members to take part   24%                     18%                  20%
    I thought it was a marketing deal or scam          25%                     14%                  20%
    The person on the phone put me off                 47%                     4%                   17%
    I just couldn’t be bothered to do it               19%                     9%                   30%

In relation to the statement “You called me at a bad time”, 19 percent of respondents strongly agreed with it and regarded it as very important in their decision not to participate in the survey, whereas 12 percent strongly disagreed with it and did not regard it as important. Twenty-five percent of respondents strongly disagreed with the statement “I didn’t like the questions being asked” and did not regard it as important in their decision, whereas only 6 percent strongly agreed with it and thought it was very important. Nineteen percent of respondents indicated that they strongly agreed with the statement “I didn’t have time to do it” and regarded it as very important in their decision not to participate in the survey.

Multidimensional scaling (MDS) analysis, using the ALSCAL procedure in SPSS®, was employed to determine whether the agreement statements could be grouped into “new” variables. Initially, the model was asked to create a matrix with a maximum of three dimensions. None of the stimulus coordinates in dimension three was significant, so a two-dimensional solution was requested instead. The stress and R-squared values for this solution are 0.16807 and 0.85656, respectively, representing a relatively good-fitting model. (Lower stress values and higher R-squared values are desired; these values depict the goodness of fit of the model to the data.)

Table 58 shows the results of the Euclidean Distance Model for the agreement statements.

Table 58: Euclidean Distance Model Results for Statements in Terms of Agreement

    Stimulus  Stimulus Name                                      Dimension 1   Dimension 2
    Number                                                       (Interest)    (Survey Content)
    1         The survey form was too long                       0.378         1.1763
    2         I don’t care about transportation issues           -1.2226       -0.0860
    3         You called me at a bad time                        2.0040        0.2222
    4*        I didn’t like the questions being asked            -0.3696       -0.2416
    5         I travel too much                                  -1.1001       1.6863
    6         I didn’t understand the questions being asked      -1.6755       -0.2543
    7         I didn’t have the time to do it                    1.8812        -0.1415
    8         I travel too little to be of interest to you       0.0974        -1.2900
    9*        I didn’t want to say no to the interviewer         -0.3649       -0.3051
    10*       I don’t do surveys                                 -0.4464       -0.4028
    11        I couldn’t get other family members to take part   1.8821        0.4069
    12        I thought it was a marketing deal or scam          0.3940        -1.2412
    13        The person on the phone put me off                 -1.5287       -0.0008
    14*       I just couldn’t be bothered to do it               0.0763        0.4716

    * Not significant in either dimension in terms of agreement.

These results show that many respondents disagreed with the statements “I don’t care about transportation issues”, “I didn’t understand the questions being asked”, and “The person on the phone put me off”, confirming the results shown in Table 55. The results in Table 58 also show that the statements can be placed in two clusters (groups), based on their scores on the two dimensions: survey content and interest.

Statements grouped under survey content are:
• The survey form was too long;
• I travel too much;
• I travel too little; and
• I thought it was a marketing deal or scam.

Statements grouped under interest are:
• I don’t care about transportation issues;
• You called me at a bad time;
• I didn’t understand the questions;
• I couldn’t get other family members to take part;
• The person on the phone put me off; and
• I didn’t have time to do it.

Statements that are not significant in either dimension for the original travel survey are:
• I didn’t like the questions asked;
• I didn’t want to say no to the interviewer;
• I don’t do surveys; and
• I just couldn’t be bothered to do it.
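For readers without access to SPSS, the sketch below shows the same kind of analysis using scikit-learn’s metric MDS in place of ALSCAL. The dissimilarity matrix and statement subset are invented for illustration; in the actual analysis the dissimilarities would be derived from the respondents’ 1-to-5 agreement ratings.

    import numpy as np
    from sklearn.manifold import MDS

    statements = ["form too long", "don't care", "bad time", "no time"]
    # Hypothetical pairwise dissimilarities among four of the statements.
    dissim = np.array([
        [0.0, 0.8, 0.6, 0.5],
        [0.8, 0.0, 0.7, 0.9],
        [0.6, 0.7, 0.0, 0.3],
        [0.5, 0.9, 0.3, 0.0],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)  # stimulus coordinates, as in Table 58
    for name, (d1, d2) in zip(statements, coords):
        print(f"{name:>14}: dim1 = {d1:+.3f}, dim2 = {d2:+.3f}")
    print("stress:", mds.stress_)  # lower stress indicates a better fit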

Similarly, multidimensional scaling analysis, using the ALSCAL procedure in SPSS®, was employed to determine whether the importance statements could be grouped into “new” variables. In this case, the model was asked to create a matrix with a maximum of four dimensions. Some stimulus coordinates in dimensions three and four were significant; hence, this model was retained for analysis. The stress and R-squared values for this solution are 0.10756 and 0.91684, respectively, depicting a good-fitting model. Table 59 shows the results of the Euclidean Distance Model for the importance statements, which fall into four clusters (groups): survey content, interest, respondent burden, and communication.

Statements grouped under survey content are:
• The survey form was too long;
• I travel too much; and
• I travel too little.

Statements grouped under interest are:
• You called me at a bad time;
• I didn’t understand the questions;
• The person on the phone put me off; and
• I didn’t have time to do it.

Table 59: Euclidean Distance Model Results for Statements in Terms of Importance

    Stimulus  Stimulus Name                                      Dimension 1  Dimension 2       Dimension 3          Dimension 4
    Number                                                       (Interest)   (Survey Content)  (Respondent Burden)  (Communication)
    1         The survey form was too long                       -0.3348      1.0339            0.3373               -0.9906
    2*        I don’t care about transportation issues           0.2825       0.4992            0.9963               -0.6813
    3         You called me at a bad time                        -2.7390      0.0835            -0.6107              -0.0008
    4*        I didn’t like the questions being asked            0.9365       0.0221            -0.0902              0.1920
    5         I travel too much                                  1.2248       1.9263            -1.0847              -0.3396
    6         I didn’t understand the questions being asked      2.3208       0.1613            0.1866               -0.2245
    7         I didn’t have the time to do it                    -2.3830      -0.6047           -0.1692              -0.5424
    8         I travel too little to be of interest to you       0.3851       -1.6619           1.2350               -0.3219
    9         I didn’t want to say no to the interviewer         0.4827       -0.9596           -0.0693              -1.0008
    10*       I don’t do surveys                                 0.4294       -0.0264           0.2009               0.3620
    11        I couldn’t get other family members to take part   -1.6066      0.9637            1.6736               1.3719
    12        I thought it was a marketing deal or scam          -0.0624      -0.3550           -0.6322              1.6571
    13        The person on the phone put me off                 1.7123       -0.6372           -0.9605              0.4732
    14        I just couldn’t be bothered to do it               -0.6480      -0.4453           -1.0529              0.0457

    * Not significant in any dimension in terms of importance.

Statements grouped under respondent burden are:
• I couldn’t get other family members to take part; and
• I just couldn’t be bothered to do it.

Statements grouped under communication are:
• I didn’t want to say no to the interviewer; and
• I thought it was a marketing deal or scam.

Statements that are not significant in any dimension in the decision to participate in the original travel survey are:
• I don’t care about transportation issues;
• I didn’t like the questions asked; and
• I don’t do surveys.

In summary, the MDS analysis for the terminator non-respondents showed positive values for the following statements, meaning that respondents tended to agree with them rather than disagree, in relation to their decision not to participate in the original study. These statements are grouped as follows:

• Survey content:
  o The survey form was too long; and
  o I travel too much.
• Interest:
  o You called me at a bad time;
  o I didn’t have the time to do it; and
  o I couldn’t get other family members to participate.

Also, the terminator non-respondents tended to consider the following statements important rather than unimportant in their decision not to participate in the original study. These statements are grouped as follows:

• Survey content:
  o The survey form was too long; and
  o I travel too much.
• Interest:
  o I didn’t understand the questions being asked; and
  o The person on the phone put me off.
• Respondent burden:
  o I couldn’t get other family members to take part.
• Communication:
  o I thought it was a marketing deal or scam.

A stated choice (SC) experiment involving the decision to respond to alternative hypothetical surveys was conducted on the 640 terminators, 200 of whom completed the survey. The socio-demographic characteristics of the respondents are shown in Table 54. The choice experiment consisted of two unlabeled survey alternatives defined on six attributes, each described by eight, four, or two attribute levels. The attributes and attribute levels are reported in Table 60. A balanced, main-effects-only, orthogonal fractional factorial design was constructed with 24 treatment combinations. To minimize the cognitive burden on respondents, each respondent was shown only 12 of the 24 treatment combinations; the structure of such a design is sketched in the code after Table 60. For each choice set, respondents were first asked to select the survey to which they would be more likely to respond, based on the attributes and attribute levels that defined each of the two (unlabeled) survey alternatives. This represents a constrained choice, because respondents were not given the option of not responding. Next, respondents were given the option not to respond, and asked whether or not they would respond to either survey. Figure 8 shows an example choice set.

Table 60: Choice Experiment Attributes and Attribute Levels

    Attribute                                           Attribute Levels
    Incentive offered                                   None, small gift, lottery ticket, major prize draw, $1, $2, $5, $10
    Recruitment method                                  Telephone, e-mail, mail, face-to-face
    Survey conducted by                                 Research institute, private firm, university, government
    Who decides when the completed survey is returned   Respondent chooses, interviewer chooses
    Who decides how the completed survey is returned    Respondent chooses, interviewer chooses
    Length of survey                                    Less than 10 mins, 10–19 mins, 20–29 mins, more than 30 mins
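The sketch below illustrates the structure of such a design in code. It is not the actual design used in the study: a genuinely balanced orthogonal main-effects plan would come from design software or published orthogonal arrays, so the random 24-run fraction here is only a stand-in.

    import random
    from itertools import product

    levels = {
        "incentive": ["none", "small gift", "lottery ticket", "major prize draw",
                      "$1", "$2", "$5", "$10"],
        "recruitment": ["telephone", "e-mail", "mail", "face-to-face"],
        "conducted_by": ["research institute", "private firm",
                         "university", "government"],
        "when_returned": ["respondent chooses", "interviewer chooses"],
        "how_returned": ["respondent chooses", "interviewer chooses"],
        "length": ["<10 mins", "10-19 mins", "20-29 mins", ">30 mins"],
    }

    random.seed(0)
    full_factorial = list(product(*levels.values()))  # 8*4*4*2*2*4 = 4,096 runs
    design = random.sample(full_factorial, 24)        # stand-in for the orthogonal fraction
    shown = random.sample(design, 12)                 # each respondent sees 12 treatments
    print(len(full_factorial), len(design), len(shown))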

    Survey Features                     Green Survey         Blue Survey
    Reward                              $1.00                Major prize draw
    Recruitment Method                  Telephone            Telephone
    Survey conducted by                 Government           Private firm
    When completed survey is returned   Interviewer chooses  You choose
    How completed survey is returned    You choose           You choose
    Length of survey                    Under 10 minutes     10 to 19 minutes

    Would you be more likely to fill out the green survey [ ] or the blue survey [ ]?
    If you were given the survey you just checked, would you fill it out? Yes [ ] No [ ]

Figure 8: Example Choice Set

A number of models were estimated to assess the influence that various attributes and attribute levels have on the choice to respond to a travel survey. A more thorough review of the Mixed Logit model used in this work is given in Hensher and Greene (2003). Consider a situation in which a sample of individuals evaluates a finite number of alternatives, j = 1, 2, …, J. Let subscripts i, j, and k refer to individual i, alternative j, and alternative attribute k. The utility of any given alternative may be written as:

\[ U_{ij} = \beta_i' x_{ij} + \varepsilon_{ij} \]

where:
U_ij = the utility possessed by individual i for alternative j,
x_ij = a vector of explanatory variables x_ijk observed by the analyst, which may include attributes of the alternatives, socioeconomic characteristics of the respondent, and descriptors of the decision context and choice task under consideration,
β_i = the vector of weights (or parameters) associated with the x_ijk, and
ε_ij = the unobserved influences for sampled respondent i and alternative j.

Neither β_i nor ε_ij is observed by the analyst, and hence both must be treated as stochastic influences. Within the logit model framework, ε_ij is assumed to be independently and identically distributed (IID) extreme value type 1. The IID assumption derived through the use of the extreme value type 1 distribution allows for ease of computation (as well as providing a closed-form solution). Nevertheless, as with any assumption, violations both can and do occur. When they occur, violations of the IID assumption mean that the cross-substitution effects observed between pairs of alternatives are no longer equal given the presence or absence of other alternatives within the model (Louviere et al., 2000).

The Mixed Logit (ML) model relaxes the IID assumption by partitioning the stochastic component of the model additively into two parts. The first element is allowed to be correlated over alternatives and to be heteroskedastic; the second maintains the IID assumption over alternatives and individuals, so the model remains within the logit family. This partitioning is shown in the equation below:

\[ U_{ij} = \beta_i' x_{ij} + [\eta_{ij} + \varepsilon_{ij}] \]

where:
η_ij = a random component with zero mean whose distribution over individuals and alternatives depends on the underlying parameters and observed sample data relating to alternative j and individual i.

The ML model assumes a general distribution for η_ij, which can take any of a number of distributional forms, such as normal, lognormal, uniform, or triangular. Within the ML framework, ε_ij is treated as a random term with zero mean that is IID over alternatives and independent of the underlying parameters and sample data. We denote the joint density of [η_1i, η_2i, …, η_Ji] as f(η_i | Ω), where the elements of Ω are the parameters of the distribution (i.e., mean and standard deviation) and η_i denotes a vector of J random elements across the universal set of utility functions. Given that ε_ij is distributed IID extreme value type 1, for any value of η_i the conditional probability for choice j is logit. Hence:

\[ L_{ij}(\beta_i \mid \eta_i) = \frac{\exp(\beta_i' x_{ij} + \eta_{ij})}{\sum_j \exp(\beta_i' x_{ij} + \eta_{ij})} \]

This equation is similar in form to the simple multinomial logit model, differing only in that, for each sampled individual, we now have additional information with regard to the unobserved sources of influence, as defined through the vector η_i. The unconditional choice probability is this logit probability integrated over all values of η_i, weighted by the density of η_i, as shown in the equation below (see Hensher and Greene, 2003):

\[ P_{ij}(\beta \mid \Omega) = \int_{\eta_{1i}} \int_{\eta_{2i}} \cdots \int_{\eta_{Ji}} L_{ij}(\beta_i \mid \eta_i)\, f(\eta_i \mid \Omega)\, d\eta_{1i}\, d\eta_{2i} \ldots d\eta_{Ji} \]

An important output of the ML model is the standard deviation parameter. The standard deviation of an element of the β_i (random) parameter vector, denoted σ_ik, accommodates the presence of preference heterogeneity in the sampled population around the mean of the random parameter. This allows for the exploration of possible sources of preference heterogeneity across sampled respondents, accomplished by interacting each random parameter with other attributes or variables suspected of being sources of preference heterogeneity (for example, if one suspects that observed heterogeneity in a price parameter may be the result of gender differences, one may interact the price random parameter with a variable indicating each respondent’s gender to determine whether this is indeed the case).

The model results for the constrained choice experiment are reported in Table 61. Two models are reported: a multinomial logit (MNL) model and a mixed logit (ML) model estimated using 500 Halton sequence intelligent draws. Given the qualitative nature of the attributes, each attribute was effects coded. Effects codes were used, as opposed to dummy codes, to avoid confounding the base attribute level with the average of the unobserved effects in the model’s single utility function (because this is an unlabeled choice experiment, a single utility function is estimated to represent both unlabeled alternatives; see Hensher, Rose, and Greene, 2004).
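In estimation, the integral above has no closed form and is approximated by simulation: the conditional logit probability is averaged over draws from the distribution of the random parameters. The sketch below, with invented data and parameter values, illustrates the idea using a plain Halton sequence and a symmetric triangular distribution (the study used 500 “intelligent” Halton draws; the scrambling details are omitted here).

    import numpy as np

    def halton(n, base=3):
        """First n terms of the Halton sequence for a given prime base."""
        seq = np.zeros(n)
        for i in range(1, n + 1):
            f, k, h = 1.0, i, 0.0
            while k > 0:
                f /= base
                h += f * (k % base)
                k //= base
            seq[i - 1] = h
        return seq

    def triangular_from_uniform(u, mean, spread):
        """Map uniform draws onto a symmetric triangular(mean-spread, mean+spread)."""
        return mean + spread * np.where(u < 0.5,
                                        np.sqrt(2 * u) - 1,
                                        1 - np.sqrt(2 * (1 - u)))

    x = np.array([[1.0, 0.5], [0.0, 1.0]])  # 2 alternatives x 2 attributes (invented)
    beta_fixed = 0.4                         # fixed coefficient on attribute 2
    R = 500                                  # number of draws, as in the text

    draws = triangular_from_uniform(halton(R), mean=0.5, spread=0.4)  # random coeff.
    probs = np.zeros(len(x))
    for b in draws:
        v = b * x[:, 0] + beta_fixed * x[:, 1]  # conditional utilities
        p = np.exp(v) / np.exp(v).sum()         # conditional logit probabilities
        probs += p / R                          # simulated integration over draws
    print(probs)  # simulated unconditional choice probabilities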

Table 61: Results of the Constrained Choice Experiment

    Estimate Results                                   Model 1: MNL       Model 2: ML
    Random parameters
      Survey length (<10 mins)                         –                  0.4957 (7.838)
      Survey length (10–19 mins)                       –                  0.1407 (2.261)
    Non-random parameters
      Survey length (<10 mins)                         0.4881 (8.055)     –
      Survey length (10–19 mins)                       0.14 (2.269)       –
      Reward (No incentive)                            -0.9116 (-8.29)    -0.9133 (-8.276)
      Reward (Small gift)                              -0.5516 (-4.427)   -0.5506 (-4.399)
      Reward ($5)                                      0.2352 (2.308)     0.2342 (2.29)
      Recruitment (Telephone)                          0.3091 (2.43)      0.3077 (2.405)
      Recruitment (E-mail)                             -0.2253 (-3.399)   -0.2252 (-3.377)
      Recruitment (Mail)                               0.4025 (4.497)     0.4062 (4.496)
      When to reply (1 = respondent determines)        0.2387 (6.006)     0.2398 (6.006)
      How to reply (1 = respondent determines)         0.1537 (3.983)     0.15508 (4.000)
      Survey conducted by research institute           0.1559 (2.351)     0.1572 (2.355)
      Survey conducted by university                   -0.2448 (-2.433)   -0.2435 (-2.409)
    Standard deviations of parameter distributions
      Survey length (<10 mins)                         –                  0.3966 (7.838)
      Survey length (10–19 mins)                       –                  0.0703 (2.261)
    No. of observations†                               1,879              1,879
    Constants-only log-likelihood at convergence       -1302.4236         -1302.4236
    Log-likelihood (β) at convergence                  -1144.129          -1144.065
    -2 [LL(constants) - LL(β)]                         316.5892           316.7174
    Degrees of freedom                                 12                 14
    Critical chi-square (χ²) at 5% level               21.026             23.685

    † Some observations were lost due to non-response.

Insignificant parameter estimates were removed from the utility specifications of both models. The MNL model is statistically significant (χ² = 316.5892 with 12 degrees of freedom) with a pseudo-R² of 0.12. The parameter estimated for offering no incentive to complete a survey is statistically significant and negative, which is in the expected direction: offering no incentive creates a disutility with regard to completing surveys. The parameter associated with offering a small gift is also statistically significant and negative, although the disutility is less than that associated with no incentive, suggesting that offering a small gift is preferred to offering no incentive but is less preferred than the other reward strategies. Of the remaining reward strategies, only the parameter estimate for the $5 attribute level was significant. Because the parameters removed from the analysis are set to zero, the estimate for the $10 attribute level is calculated as minus one times the sum of the reward parameters that remain; this gives a parameter estimate of 1.2297 for the $10 level (see the worked example below). The positive parameter estimates for the $5 and $10 attribute levels suggest a strong preference for relatively large monetary rewards for answering surveys.

In terms of recruitment strategies, the model suggests a strong preference toward telephone and mail contact and a strong preference against e-mail contact. Calculation of the base recruitment attribute level, representing face-to-face contact (β = -0.4887), shows an even stronger preference against that method. Respondents clearly prefer the option of determining how and when to reply to surveys. Not surprisingly, respondents also prefer shorter surveys to longer ones, with surveys under ten minutes preferred most. The model further suggests that respondents are more likely to respond to surveys conducted by known research institutes, slightly less inclined to answer surveys instigated by government bodies (β = 0.0863), and far less inclined to respond to university research efforts.

A number of sociodemographic variables were also tested within the utility function of the model: household size, age, gender, number of drivers in the household, number of vehicles in the household, and the type of contact used to recruit the respondent for the study. In no instance were any of these variables statistically significant, and in several cases their inclusion actually produced worse model fits.
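The base-level calculation described above is a direct consequence of effects coding, under which the level parameters of each attribute sum to zero. A worked check using the Table 61 (Model 1) estimates follows; the small difference from the reported 1.2297 reflects rounding of the published coefficients.

    # Significant reward-level parameters from Table 61 (Model 1: MNL).
    reward_params = {
        "no incentive": -0.9116,
        "small gift": -0.5516,
        "$5": 0.2352,
        # Insignificant levels (lottery ticket, major prize draw, $1, $2)
        # were removed from the model, i.e., set to zero.
    }

    # Effects coding: all level parameters sum to zero, so the omitted
    # base level ($10) is minus the sum of the estimated levels.
    beta_10 = -sum(reward_params.values())
    print(f"$10 level: {beta_10:.4f}")  # 1.2280, vs. 1.2297 reported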
The ML model may be used to identify preference heterogeneity and, should it be found, its possible sources. The ML model is statistically significant (χ² = 316.7174 with 14 degrees of freedom) with a pseudo-R² of 0.12.

Given the additional degrees of freedom required to estimate it, the ML model does not represent a statistically significant improvement over the MNL model reported earlier. Nevertheless, the survey-length attributes (less than 10 minutes, and 10–19 minutes) were estimated as random parameters with constrained triangular distributions. The standard deviation of the “less than 10 minutes” parameter was constrained to be 0.8 of the population mean of the random parameter, and that of the “10–19 minutes” parameter was constrained to be 0.5 of the population mean. The mean population parameter of each attribute is statistically significant (p < 0.05), as are the standard deviation parameters, indicating the presence of preference heterogeneity in these parameter estimates. Interactions between the mean of each random parameter estimate and each of the sociodemographic variables mentioned previously were tested within the model; such interactions are equivalent to revealing the presence or absence of preference heterogeneity around the mean of each random parameter estimate (Hensher and Greene, 2003). In no case was statistical significance found, suggesting that these variables are not the source of the observed preference heterogeneity within the model. As such, the model suggests the existence of preference heterogeneity, but its source remains to be determined. With the exception of the presence of preference heterogeneity, which is not detectable within the MNL model framework, the remaining non-random parameter estimates of the ML model are similar to those of the MNL model.

Table 62 shows the model results for the unconstrained choice experiment, in which respondents were able to choose not to respond to either unlabeled survey alternative. The “not respond” alternative is treated as the base alternative in both models. Both the MNL and ML models are statistically significant (χ² = 332.1052 with 10 degrees of freedom and χ² = 359.5484 with 13 degrees of freedom, respectively, with pseudo-R² values of 0.082 and 0.088). As with the two models estimated on the constrained choice experiment, insignificant parameter estimates were removed from both models. The results for the MNL model suggest a strong preference against e-mail as a recruitment strategy and a strong preference toward telephone and mail recruitment. The MNL model also suggests a strong preference against offering no incentive for completing a survey, as well as a preference against offering small sums of money. Larger sums of money offered as an incentive to complete a survey are strongly preferred, with the $10 incentive (calculated as β = 0.8217) preferred to $5, as might be expected. The model suggests that offering lottery tickets or small sums of money is a disincentive to reply, relative to larger cash payments.
Table 62: Results of the Unconstrained Choice Experiment

    Estimate Results                                   Model 3: MNL       Model 4: ML
    Random parameters
      Recruitment (E-mail)                             –                  -0.215 (-2.203)
      Reward (No incentive)                            –                  -1.1656 (-4.783)
      Survey conducted by private firm                 –                  0.1387 (1.819)
    Non-random parameters
      Recruitment (E-mail)                             -0.209 (-2.966)    –
      Reward (No incentive)                            -0.5918 (-5.137)   –
      Survey conducted by private firm                 0.1031 (1.873)     –
      Reward (Lottery ticket)                          -0.3432 (-3.366)   -0.4354 (-3.107)
      Reward ($2)                                      -0.3527 (-3.681)   -0.3778 (-2.895)
      Reward ($5)                                      0.466 (4.936)      0.7869 (5.91)
      Recruitment (Telephone)                          0.2596 (3.528)     0.2992 (3.11)
      Recruitment (Mail)                               0.3275 (4.844)     0.4992 (5.248)
      How to reply (1 = respondent determines)         0.1014 (2.665)     0.1383 (2.805)
      Survey length (<10 mins)                         0.7106 (12.627)    0.9338 (10.157)
    Standard deviations of parameter distributions
      Recruitment (Telephone)                          –                  2.3821 (3.533)
      Reward (No incentive)                            –                  4.8953 (4.958)
      Survey conducted by private firm                 –                  2.3774 (3.696)
    No. of observations†                               1,829              1,829
    Constants-only log-likelihood at convergence       -2035.7286         -2035.7286
    Log-likelihood (β) at convergence                  -1869.676          -1855.954
    -2 [LL(constants) - LL(β)]                         332.1052           359.5484
    Degrees of freedom                                 10                 13
    Critical chi-square (χ²) at 5% level               18.307             22.362

    † Some observations were lost due to non-response.

When respondents are allowed to choose not to respond, the when-to-reply parameter estimate becomes insignificant. The how-to-reply parameter, however, remains statistically significant, such that respondents are more likely to respond when given the opportunity to select how they do so. Further, when respondents can choose not to reply, there exists a strong preference for short surveys (less than 10 minutes) but indifference toward longer surveys, ceteris paribus.

The e-mail recruitment, no-incentive, and survey-conducted-by-private-firm attributes were estimated as random parameters in an ML model, shown as Model 4 in Table 62. Each random parameter was drawn from an unconstrained triangular distribution using 500 Halton sequence intelligent draws. The population means of the e-mail recruitment and no-incentive random parameters are statistically different from zero (p < 0.05); the mean of the survey-conducted-by-private-firm random parameter estimate is not. The standard deviation parameters of all three random parameters are statistically significant, indicating the presence of preference heterogeneity around the population mean parameter estimates. As with the constrained-choice ML model, various socio-demographic variables, interacted with the mean parameter estimates, were investigated as possible sources of the observed heterogeneity; none was found to be a statistically significant determinant. This suggests the need for further research to determine the possible sources of the observed heterogeneity. The remaining non-random parameter estimates are similar in sign and magnitude to those of the MNL model.

Refusals

The survey for the refusers also asked respondents to circle a number between 1 and 5 indicating how they felt about each statement in terms of agreement and importance. The following three tables show the results. Table 63 shows the percentage of respondents who strongly disagreed and strongly agreed with the statements about the original travel survey.

Table 63: Status of Agreement with Statements in Relation to Original Travel Survey (Refusers)

    Statement                                   Strongly Disagree  Strongly Agree
    You called me at a bad time                 17%                51%
    I don’t do surveys                          27%                30%
    I didn’t have the time to do it             14%                40%
    I thought it was a marketing deal or scam   13%                57%
    The person on the phone put me off          4%                 13%
    I don’t care about transportation issues    38%                14%
    I just couldn’t be bothered to do it        19%                36%

Table 63 shows that the majority of respondents strongly agreed with the statements “You called me at a bad time” and “I thought it was a marketing deal or scam”. A relatively high percentage of respondents also strongly agreed with the statements “I didn’t have the time to do it” and “I just couldn’t be bothered to do it”: 40 percent and 36 percent, respectively. The TDC Non-Response Study indicated that 57 percent of refusers stated that the reason for not responding to the Sydney Household Travel Survey was that they were “Not interested/didn’t want to”, and 17 percent indicated that they “Had no time/were too busy”.

These results are very different from those shown in Table 63. This was expected, given that the original data retrieval methods of the two surveys differ: the Sydney Household Travel Survey employs face-to-face data retrieval, whereas NuStats used telephone interviews (CATI) to retrieve household travel information.

Table 64 shows how important the statements were to the respondents in their decisions not to participate in the original travel survey. The most important statements, in terms of the decision not to participate, were:

• You called me at a bad time (49 percent); and
• I thought it was a marketing deal or scam (58 percent).

This was expected, given that almost the same percentages of respondents also strongly agreed with these statements.

Table 64: Status of Importance of Statements in Terms of the Decision Not to Participate in the Original Travel Survey (Refusers)

    Statement                                   Not at all important  Very important
    You called me at a bad time                 15%                   49%
    I don’t do surveys                          14%                   30%
    I didn’t have the time to do it             12%                   31%
    I thought it was a marketing deal or scam   14%                   58%
    The person on the phone put me off          34%                   8%
    I don’t care about transportation issues    31%                   17%
    I just couldn’t be bothered to do it        19%                   27%

In Table 65, 34 percent of respondents strongly agreed with the statement “You called me at a bad time” and regarded it as important in their decision not to participate in the survey, whereas 2 percent strongly disagreed with it and thought it was not important in their decision. Twenty-one percent of respondents strongly disagreed with the statement “I don’t care about transportation issues” and did not regard it as important in their decision, whereas only 7 percent strongly agreed with the statement and thought it was important; however, 40 percent of respondents were undecided about this statement, in relation to agreement and importance.

In relation to the statement “I thought it was a marketing deal or scam”, 45 percent of respondents strongly agreed with it and regarded it as very important in their decision not to participate in the survey, whereas only 2 percent strongly disagreed with it and did not regard it as important. Twenty-one percent of respondents indicated that they strongly agreed with the statement “I didn’t have time to do it” and regarded it as very important in their decision; however, 26 percent of respondents were undecided. Also, 21 percent of respondents strongly agreed with the statement “I just couldn’t be bothered to do it” and regarded it as very important in their decision not to participate in the original travel survey; surprisingly, though, 38 percent were undecided as to whether they agreed with the statement or regarded it as important. There was a much higher incidence of respondents being undecided about how to rate this statement, in relation to agreement and importance, than among the terminator non-respondents.

Table 65: Cross-Tabulation of Statements in Terms of Agreement and Importance (Refusers)

    Statement                                   Strongly disagree and   Strongly agree and   Undecided
                                                not at all important    very important
    You called me at a bad time                 2%                      34%                  17%
    I don’t do surveys                          7%                      23%                  27%
    I didn’t have the time to do it             2%                      21%                  26%
    I thought it was a marketing deal or scam   2%                      45%                  11%
    The person on the phone put me off          20%                     8%                   18%
    I don’t care about transportation issues    21%                     7%                   40%
    I just couldn’t be bothered to do it        12%                     21%                  38%

Multidimensional scaling analysis, using the ALSCAL procedure in SPSS®, was employed to determine whether the agreement statements could be grouped under “new” variables. Initially, the model was asked to create a matrix with a maximum of three dimensions. None of the stimulus coordinates in dimension three was significant, so a two-dimensional solution was requested instead. The stress and R-squared values for this solution are 0.01157 and 0.999886, respectively – a good-fitting model (as before, lower stress values and higher R-squared values indicate a better fit of the model to the data). Table 66 shows the results of the Euclidean Distance Model for the statements in terms of agreement, which indicate that many of the respondents disagreed with the statement “I just couldn’t be bothered to do it”. Table 66 also shows that the statements can be placed into two clusters (groups): interest and communication.

The statement placed under communication is:
• I thought it was a marketing deal or scam.

Statements grouped under interest are:
• You called me at a bad time;
• I didn’t have time to do it; and
• I just couldn’t be bothered to do it.

Table 66: Euclidean Distance Model Results for Statements in Terms of Agreement (Refusers)

    Stimulus  Stimulus Name                               Dimension 1   Dimension 2
    Number                                                (Interest)    (Communication)
    1         You called me at a bad time                 2.0160        -1.0624
    2*        I don’t do surveys                          0.4664        0.0748
    3         I didn’t have the time to do it             -1.2716       0.5556
    4         I thought it was a marketing deal or scam   -1.1576       1.3347
    5*        The person on the phone put me off          -0.5451       -0.7277
    6*        I don’t care about transportation issues    -0.5537       -0.7105
    7         I just couldn’t be bothered to do it        -1.2697       -0.5355

    * Not significant in either dimension in terms of agreement.

Statements that are not significant in either dimension, in terms of agreement, in relation to the original travel survey are:
• I don’t do surveys;
• The person on the phone put me off; and
• I don’t care about transportation issues.

Similarly, multidimensional scaling analysis was employed to determine whether the statements, in terms of importance, could be grouped under “new” variables. This time, the model was asked to create a matrix with a maximum of three dimensions. Some stimulus coordinates in the third dimension were significant; hence, this model was retained for analysis.

The stress and R-squared values for this solution are 0.00215 and 0.99996, respectively, depicting a good-fitting model. Table 67 shows the results of the Euclidean Distance Model for the importance statements, which fall into three clusters (groups): interest, communication, and respondent burden.

Statements grouped under interest are:
• You called me at a bad time; and
• The person on the phone put me off.

The statement placed under communication is:
• I thought it was a marketing deal or scam.

Finally, the statement placed under respondent burden is:
• I don’t do surveys.

Table 67: Euclidean Distance Model Results for Importance Statements (Refusers)

    Stimulus  Stimulus Name                               Dimension 1   Dimension 2       Dimension 3
    Number                                                (Interest)    (Communication)   (Respondent Burden)
    1         You called me at a bad time                 1.7576        -0.6586           1.3510
    2         I don’t do surveys                          0.3991        -0.2625           -1.4623
    3*        I didn’t have the time to do it             0.6752        0.6527            -0.6832
    4         I thought it was a marketing deal or scam   -1.3681       -2.0009           -0.0321
    5         The person on the phone put me off          -1.7468       1.3143            0.6665
    6*        I don’t care about transportation issues    -0.2080       0.3844            0.1066
    7*        I just couldn’t be bothered to do it        0.4918        0.5706            0.2667

    * Not significant in any dimension in terms of importance.

In summary, the MDS analysis for the refuser non-respondents showed positive values for the following statements, meaning that respondents tended to agree with them rather than disagree, in relation to their decision not to participate in the original study. These statements are grouped as follows:

• Interest:
  o You called me at a bad time.
• Communication:
  o I thought it was a marketing deal or scam.

Also, the refuser non-respondents tended to consider the following statement important rather than unimportant in their decision not to participate in the original study. This statement is grouped as follows:

• Interest:
  o You called me at a bad time.

Conclusion

Addressing the non-respondent issue will become increasingly difficult in the future for the following reasons:

1. Higher levels of multiculturalism and more languages spoken within urban areas.
2. Less free time, making individuals more reluctant to devote limited spare time to completing surveys.
3. Advances in communication that will enable people to become even more selective about whom they communicate with. However, the introduction of the Do Not Call Registry (Federal Trade Commission, 2003) may benefit researchers: households on this registry will no longer assume that calls from an unknown source are telemarketing calls and hence will be less likely to avoid incoming calls (a higher contact rate).
4. Introduction of more restrictive privacy legislation in many countries.
5. Less public funding available to conduct research (Griffiths et al., 2000).

There will always be a percentage of non-respondents to surveys, regardless of the recruitment and retrieval methods employed. However, the purpose of this research was to gain some insight into the demographic and travel characteristics of non-respondents, why they did not respond, and whether there are any particular elements of survey design and execution that would appeal to non-respondents. A summary of the overall results follows.

The unconstrained stated choice model for the terminator sample showed that respondents were strongly against e-mail recruitment, and that $10 was the most preferred incentive level (the highest amount offered in the choice sets); a preference for larger post-incentives among terminator non-respondents was also found in the descriptive data analysis (Table 52). Terminator non-respondents were unlikely to respond if small gifts, lottery tickets, or small cash payments were offered. The choice of how to respond was a significant parameter in inducing response, and terminator non-respondents preferred shorter surveys (under ten minutes). The latter is consistent with the multidimensional scaling analysis, in which the length of the original survey was an important factor in the decision not to complete the original travel survey. Therefore, the results indicate that, to reduce the number of terminator non-respondents, a $10 cash post-incentive should be offered, mail and telephone contact and retrieval methods should be employed (contrary to popular belief), and shorter surveys should be devised.

For the refuser sample, an appropriate time of contact will more likely invoke interest in the survey topic. Refusers believed that they were contacted at a bad time; hence, interest in the survey topic was reduced or non-existent, resulting in an outright refusal. Even though research has investigated the optimal time to contact respondents, no research has investigated the best time to contact non-respondents, such as refusers. It may be that there is no particular time slot suitable for contacting refusers because, at every given time slot, a percentage of respondents will refuse; research is needed to confirm or refute this.

From the MDS analyses of the agreement and importance data for the terminators and refusers, it can be seen that survey content, interest, communication, and respondent burden are significant influences on respondents’ decisions to participate in the original travel survey. With this in mind, survey design should carefully incorporate these elements to increase response rates by decreasing the number of terminator and refuser non-respondents.
In essence, good survey design and experienced interviewers (where data retrieval is through CATI; personal interviews were not preferred) will more likely lead to higher response rates. In the future, the use of internet and multimedia techniques will increase, as will the use of GPS devices to accompany surveys, enabling the collection of more accurate data (Griffiths et al., 2000). Consequently, response behavior toward these new technologies needs to be thoroughly investigated before such instruments can be used. For example, Bosnjak and Tuten (2001) identified seven distinct response behaviors in web-based surveys:

1. Complete responders – view and answer all questions;
2. Unit non-responders – do not participate in the survey at all; of these there are two types, those technically hindered from participating and those who purposely withdraw;
3. Answering drop-outs – provide answers to questions but drop out before completion;

4. Lurkers – view all of the questions but do not answer any of them;
5. Lurking drop-outs – a combination of answering drop-outs and lurkers;
6. Item non-responders – view all of the survey but answer only some of the questions; and
7. Item non-responding drop-outs – a mixture of answering drop-outs and item non-responders.

These different response behaviors need to be investigated more thoroughly, especially if this medium is to be used more regularly for research purposes.

Recommended standardized procedures and guidelines on unit nonresponse are found in Section 2.2.6 of the Final Report.

5.7 D-10: INITIAL CONTACTS

5.7.1 Item Description

The subject of this section is the first contact made with a potential respondent in a survey. Contact can be by telephone, mail, e-mail, or possibly even personal interview. In telephone surveys and personal interviews, it involves the very first few words uttered following contact with a prospective respondent. When the initial contact is by mail, it is the envelope in which the material is mailed, the documentation in the envelope, and the opening sentence of the cover letter.

5.7.2 Importance and Nature of Initial Contact

The primary need is to design the introduction to surveys in such a fashion that refusals are avoided as much as possible. Currently, the proportion of refusals that occur during initial contact is surprisingly high. In the pretest of the National Household Travel Survey (NHTS) in 2000, 83 percent of the refusals occurred before the introduction was complete (McGuckin et al., 2001). Those conducting the National Survey of America’s Families report that “more than 80 percent of the refusals occur during the introduction or first question” (Vaden-Kiernan et al., 1997, p. 2-3). The number of refusals as a fraction of the number of calls made varies considerably with the type of survey and sampling frame. A political polling company using the telephone to conduct its polls has estimated that it needs to make 15 calls for each successful contact, and that the contact rate is declining as resistance to telemarketing grows (Lessner, 2000). Such resistance is understandable given the extent of telemarketing’s penetration of the market: a national survey among registered voters showed that almost three-quarters of the sample had been called in the past to participate in a poll or product survey (Lessner, 2000). However, in the National Household Travel Survey, only nine percent of eligible households were, in the end, refusals.

The factors that influence the rate at which people hang up seem to have received relatively little research in the past. One study experimented with different opening scripts and observed a “cooperation rate” that varied between 53 and 64 percent (Vaden-Kiernan et al., 1997). Cooperation rate was defined as the percentage of calls in which the person picking up the phone listened to the entire opening message and permitted the interviewer to determine the eligibility of the household (i.e., establish that the household contained at least one person between the ages of 18 and 64). The survey was conducted in areas with a high concentration of low-income households and, therefore, the results cannot be generalized. Of particular interest, however, are the results of the experiment conducted to identify the impact of various features of the opening message on cooperation rate.
5.7 D-10: INITIAL CONTACTS

5.7.1 Item Description

The subject of this section is the first contact made with a potential respondent in a survey. Contact can be by telephone, mail, e-mail, or possibly even personal interview. In telephone surveys and personal interviews, it involves the very first few words uttered following contact with a prospective respondent. When the initial contact is by mail, it is the envelope in which the material is mailed, the documentation in the envelope, and the opening sentence of the cover letter.

5.7.2 Importance and Nature of Initial Contact

The primary need is to design the introduction to surveys in such a fashion that refusals are avoided as much as possible. Currently, the proportion of refusals that occur during initial contact is surprisingly high. In the pretest of the National Household Travel Survey (NHTS) in 2000, 83 percent of the refusals occurred before the introduction was complete (McGuckin et al., 2001). Those conducting the National Survey of America’s Families (NSAF) report that “more than 80 percent of the refusals occur during the introduction or first question” (Vaden-Kiernan et al., 1997, p. 2-3).

The number of refusals as a fraction of the number of calls made varies considerably with the type of survey and sampling frame. A political polling company using the telephone to conduct its polls has estimated that it needs to make 15 calls for each successful contact, and that the contact rate is declining as resistance to telemarketing grows (Lessner, 2000). This resistance is understandable given the extent of telemarketing’s penetration of the market: a national survey among registered voters showed that almost three-quarters of the sample had been called in the past to participate in a poll or product survey (Lessner, 2000). However, in the National Household Travel Survey, only nine percent of eligible households were, in the end, refusals.

The factors that influence the rate at which people hang up seem to have received relatively little research in the past. One study experimented with different opening scripts and observed a “cooperation rate” that varied between 53 and 64 percent (Vaden-Kiernan et al., 1997). Cooperation rate was defined as the percentage of calls in which the person picking up the phone listened to the entire opening message and permitted the interviewer to determine the eligibility of the household (i.e., establish that the household contained at least one person between the ages of 18 and 64). The survey was conducted in areas with a high concentration of low-income households and, therefore, the results cannot be generalized. What is interesting, however, are the results of the experiment conducted to identify the impact of various features of the opening message on cooperation rate.

A pretest experiment was conducted using sample sizes varying between 100 and 200 observations per changed feature in the introductory message. Features tested included variations in the length of the introduction, inclusion of a $5 incentive in the opening statement, identification of the organization sponsoring the survey, altering the first question from a screening question (i.e., “are you a member of the household 18 years of age or older?”) to a request for the respondent’s opinion on ways to improve education, and inclusion of a statement assuring the respondent that no money was being solicited. Brevity in the opening message appeared to be important, although the difference between long and short messages was not statistically significant with the sample sizes considered. In this experiment, the $5 incentive was placed toward the end of a relatively long introductory message and was found to have no positive impact on cooperation rate, possibly because many respondents terminated the call before they learned of the incentive. Identification of the organization sponsoring the survey had a mildly positive impact on cooperation rate. Altering the first question from a screening question to one where the person’s opinion was immediately elicited did not change the cooperation rate. On the other hand, including a statement on non-solicitation seemed to improve cooperation rate, although the improvement, like all the comparisons in this study, was not statistically significant. In general, the conclusions of the experiment were that the introduction should be brief, state the purpose of the study, identify official sponsorship of the survey, and make it clear that no funds were being solicited.
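Whether such differences could have reached statistical significance is straightforward to check. The following is a minimal sketch of a standard two-proportion z-test (not an analysis performed in the NSAF study itself), using the extremes of the observed 53 to 64 percent cooperation rates and group sizes within the stated 100-200 range:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p(p1: float, p2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# The widest observed spread, at sample sizes within the stated range:
print(round(two_proportion_p(0.53, 0.64, 100, 100), 3))  # ~0.114
print(round(two_proportion_p(0.53, 0.64, 150, 150), 3))  # ~0.053
```

Even this widest contrast sits at or above the conventional 0.05 threshold at these sample sizes, so the more modest differences between individual message features were well beyond detection.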
The introductory text ultimately selected from the pretest experiment was (Vaden-Kiernan et al., 1997):

“Hello, my name is (NAME), and we are preparing to do a study for private foundations interested in education, health care, and other services in (STATE). The study has been endorsed by state governments concerned with how recent changes in policies affect people’s lives. I am not asking for money – I’d only like to ask you a few questions.”

However, the introduction was later changed in response to comments from interviewers who felt that a shorter introduction would be better (Vaden-Kiernan et al., 1999). Information on the purpose of the survey was withheld unless it was specifically requested. The amended introduction was:

“Hello, this is (NAME) with the National Survey of America’s Families. I am not asking for money – this is a study for private foundations on education, health care, and other services in the state of (STATE). [IF ASKED: This study is to see how recent changes in federal laws affect people’s lives in your community.]”

In the NHTS pretest, it was found that most refusals involved the recipient terminating the call while the following message was being conveyed (McGuckin, Liss, and Keyes, 2001):

“Hello, my name is _____ and we’re conducting a survey for the Department of Transportation...”

Considering that very limited information was conveyed before the call was terminated, it is interesting to speculate, in the context of the NSAF pretest findings, which words were responsible for this response. When comparing the opening statement used in the NHTS pretest with the final text used in the NSAF survey, at least three differences are apparent. First, starting with the phrase “...my name is...” rather than “...this is...” may mark the caller as a stranger more readily: “my name is” implies that the caller is unknown to the person being called, whereas a caller who says “this is so-and-so from XYZ” is merely identifying herself or himself and could be known or unknown. Introductions of the latter kind are frequently used in business calls among acquaintances and strangers alike. Second, the word “survey” immediately conveys the purpose of the call and suggests an activity that few people enjoy; hanging up is an easy and non-confrontational way to avoid a time-consuming and unrewarding experience. The NSAF text instead uses the word “study”, which is probably less evocative. Third, the NSAF text assures the person being called that no money will be solicited, which distinguishes the call from telemarketing.

In the actual 2001 NHTS, the introduction was changed to:

“Hello, this is _____ and I'm calling for the U.S. Department of Transportation. We are conducting the National Household Travel Survey.”

The company conducting the survey felt this wording was an improvement over the pretest. While there was much debate among the survey team over whether to use the word “study” or “survey” in the mail-out material and the questionnaire, “survey” was ultimately used because it was felt to be a more straightforward presentation of the truth. The NHTS study team was of the opinion that the low refusal rate was likely due to these changes, together with effective interviewer training and refusal conversion efforts (Freedman, 2003).

Firm research findings on appropriate introductory text for travel surveys could not be found in the literature. The topic has become increasingly important in recent years, owing to the rise in telemarketing and the general decline in survey participation rates, and is likely to become an active area of research in the future. Initial contact in mail surveys is closely associated with other topics addressed in this document, namely “Mailing Materials” (section 8.2) and “Incentives” (section 5.8). Publicity surrounding the survey is also likely to affect the extent to which respondents open and read survey material. If the population is informed of the survey through television, radio, or the press, and the survey is presented as an activity worthy of support, a positive impact on the cooperation rate is likely. This is probably true of personal interviews as well, but insufficient research has been conducted to make definitive statements. Conclusions on initial contacts are provided in section 2.2.7 of the Final Report.

5.8 D-13: INCENTIVES

5.8.1 Review of Incentives

Incentives are offered in some surveys to induce respondents to complete the survey. Many surveys do not offer incentives, and among those that do, considerable variability in type and magnitude is found. There is considerable difference of opinion among transportation professionals as to whether incentives should be offered. The review of recent practice (chapter 2 of this Technical Appendix) showed that generally less than one quarter of surveys in the 1990s used incentives, while the TTI scan of surveys showed a slightly higher rate of incentive use (almost 35 percent).24 There is also substantial diversity in what is offered as an incentive. Incentives have ranged from a gift to a significant payment of money ($10 and more per household, and as much as $50 for some GPS surveys); some are offered only to those completing the survey, while others are offered to all potential respondents.

24 Informal presentation made to the mid-year meeting of the TRB Committee A1D10 on April 22, 2001, by David Pearson of TTI.

The only extensive review of the use of incentives in transportation surveys was performed in the mid-1990s by Tooley (1996), who concluded that “…general survey literature supports the use of monetary pre-incentives as being the most effective incentive method.” She also noted that the general survey literature supports non-monetary incentives, but finds them less effective than money, while the same literature is not supportive of post-incentives of any form.
In general, one could conclude from this that the general survey literature would rank monetary pre-incentives as the most effective, followed by non-monetary pre-incentives, and then, as least effective, any form of post-incentive. The transportation profession appears to remain generally unaware of this: post-1995 surveys have still offered post-incentives, as well as non-monetary incentives. In spite of the findings of Tooley (1996), it remains unclear how much of an effect incentives have on survey response rates, because of the lack of controlled experiments. A major problem is that comparisons of different incentives are confounded by design differences between the surveys, differences in publicity, survey technique, and so on. There are only two known cases in which comparisons have been made of incentives for the same instrument and the same population, both of which occurred in pilot tests (Stopher, 1992; Goldenberg et al., 1995). In these cases, there were clear indications that incentives improved response rates, although it must be noted that, here again, other design changes may have affected the results obtained. Zmud (2003) provides more concrete evidence, however. She states:

“One of the most compelling principles is reciprocation. The rule requires that one person try to repay, in kind, what another person has provided. Cialdini said the rule is extremely powerful, often overwhelming the influence of other factors (Cialdini et al., 1975). This principle underlies the large literature that finds consistent positive effects of incentives on survey cooperation. Monetary incentives for participation have long been used in surveys, including both pre-paid and promised incentives and contributions to charity. Kropf et al. (1999) conducted an incentive analysis using the survey administration opportunity from an annual National Omnibus telephone survey. They found, as have other researchers, that a pre-paid incentive is more effective than the promise of an incentive. Offers of a charitable contribution did not appear to motivate participation in the survey. Self-interest, as noted by Dillman above, is a very compelling factor in survey participation.” (Zmud, 2003, p. 93)

Similarly, Kalfs and van Evert (2003) discuss the use of incentives as a means to reduce unit nonresponse. They note that response rates to postal surveys can be increased significantly if incentives are offered (Dillman, 1991). They also note that financial remuneration generally works better than other incentives, such as gifts, although gifts tailored to specific target populations, or related in some way to the survey objectives, are an exception to this rule. Incentives provided in advance work better than those promised in return for a completed survey. Importantly, Kalfs and van Evert (2003) note that, if the value of the incentive is too high, it will have an adverse effect on response. Dillman (1978) has explained that these results come about because people will respond if the psychological costs and benefits are in balance: “[T]he social standard of reciprocity only works if the gift or favor received is seen as fair; if it is seen as an attempt to coerce the respondent, make him feel guilty, or bribe him, the gift has an adverse effect.” (Kalfs and van Evert, 2003). There is also reciprocity in the sense that interviewers who know they can do something nice for respondents are more likely to be assured and convincing – more persuasive – in their approach to potential respondents.
While Kalfs and van Evert (2003) refer principally to postal surveys, they also note that incentives are used frequently in face-to-face surveys, while their use in telephone interviews (with no postal component) has been rare and there is no literature on their effects. Of course, in such surveys, an advance incentive will not normally be possible.

It is quite clear that consistency on incentives would be helpful. Standardization should address whether incentives should be offered at all, whether they should be pre- or post-incentives, and what form they should take. Consistency would also be useful on how to present the incentive to prospective respondents, because Tooley (1996) points out that the wording used in offering a pre-incentive is almost as important as the incentive itself. She suggests that the incentive be provided explicitly in return for the respondent completing and returning the questionnaire, rather than in appreciation for the respondent’s time and effort in completing the survey.

Incentives are clearly cost-effective, even when only modest gains are obtained in response rates. As an illustration, suppose a survey recruits 6,000 households comprising 15,000 individuals, and a $2 incentive is paid to each recruited individual, at a cost of $30,000. If the average cost of a completed household survey is $200, the incentive would need to change only 150 households from refusals to responses to pay for itself. Furthermore, in the context of a survey that may cost $600,000 or more, an expenditure of $30,000 on incentives is a small price to pay for a higher response rate.

An alternative way to see the value of incentives is to consider the recruitment requirements. Suppose that, without incentives, 40 percent of recruited households will respond, while 45 percent will respond with an incentive, and that a final sample of 3,500 households is required. A 40 percent response rate requires 8,750 households to be recruited, while a 45 percent response rate requires about 7,780. Assuming that recruited households that do not respond cost approximately $25 per household to contact and attempt to complete, the non-incentive recruitment will cost $24,250 more than the incentive-based recruitment. The cost of a $1 per person incentive in the latter case will be on the order of $19,500, representing a savings of $4,750. (These calculations are illustrated in the short sketch below.) In addition, there are further savings from the probably less-biased response at 45 percent compared to 40 percent. These figures are also very conservative, since anecdotal reports suggest that response rates may be 10-25 percent higher with incentives than without, and the cost of a recruitment that fails to yield a completed survey could be much higher if ten or more attempts are made to collect the data from non-responding households.

Recommendations for consistent approaches to incentives are given in section 2.2.8 of the Final Report.
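The figures above are easy to reproduce. A minimal sketch (the dollar amounts and response rates are the illustrative values quoted in the text, not survey data):

```python
import math

# Illustration 1: break-even point for a $2-per-person incentive.
incentive_cost = 15_000 * 2.00            # $30,000 for 15,000 individuals
breakeven = incentive_cost / 200.00       # completed surveys cost $200 each
print(breakeven)                          # 150 households must convert

# Illustration 2: recruitment savings from a 40% vs. 45% response rate.
final_sample = 3_500
recruit_no_inc = math.ceil(final_sample / 0.40)        # 8,750 households
recruit_inc = round(final_sample / 0.45, -1)           # ~7,780 (rounded to tens, as in the text)
extra_contacts = (recruit_no_inc - recruit_inc) * 25   # $24,250 more without incentives
incentive_bill = recruit_inc * 2.5 * 1.00              # $19,450, "on the order of $19,500"
print(extra_contacts - incentive_bill)                 # net saving, roughly $4,800
```

The net saving of roughly $4,750 to $4,800 depends only on how the intermediate figures are rounded; the qualitative conclusion is unaffected.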
5.9 RESPONDENT BURDEN

5.9.1 Definition

Respondent burden is both tangible and intangible. In tangible terms, it can be measured as the amount of time, cost, and effort involved in a respondent complying with the requests of a survey; it could also be measured in terms of the number of times a respondent is contacted and asked to provide information. The intangible aspects of respondent burden are much less easily measured and may be subsumed under the general title of perceived burden. There is general agreement that efforts should be made to reduce the data collection burden on respondents to travel surveys; there is less agreement as to what constitutes respondent burden and how reductions in burden may be achieved. Respondent burden is examined here in terms of the measured burden (amount of time, cost, etc. to complete a survey) and the perceived burden. Thus, standardized procedures are needed both on how to measure burden and on how much burden is too much.

5.9.2 Assessing Respondent Burden

Measured Respondent Burden

The Paperwork Reduction Act (PRA) of 1995 provides that a United States federal agency may not conduct or sponsor the collection of information unless the agency has submitted, in advance, material to the Office of Management and Budget (OMB) certifying that the proposed data collections “reduce burden to the extent practicable” and “use information technology to reduce burden and improve quality.” According to OMB guidelines (OMB, 2004), respondent burden is defined as the “time, effort, or financial resources” expended by the public to provide information to or for a federal agency, including:

• “Reviewing instructions;
• Using technology to collect, process, and disclose information;
• Adjusting existing practices to comply with requirements;
• Searching data sources;
• Completing and reviewing the response; and
• Transmitting or disclosing information.” (OMB, 2004)

Burden is estimated in terms of the “hour burden” that individuals expend in filling out forms, and in terms of the “cost burden” arising from electronic recordkeeping and reporting. Of the larger household travel surveys conducted in the United States within the past decade, only the National Household Travel Survey (NHTS) has undergone a review by OMB. The NHTS estimated the total reporting and record-keeping burden at 52 minutes per household: 8 minutes for the household interview (screener); 30 minutes per household for the person-level interviews (assuming 2.5 persons per household at 12 minutes each); plus 14 minutes of record keeping and recording odometer readings (NHTS, 2001f).

For travel surveys conducted using CATI systems for recruitment and retrieval, it is possible to obtain the actual average duration of the telephone calls. Table 68 presents the average duration (in minutes) of the telephone calls in some of the more recent travel surveys that have used the telephone for both recruitment and travel diary retrieval. Of the surveys represented, the average household respondent burden varied from 32.6 minutes in the 2001 California Statewide survey to an estimated 77.1 minutes in the 1996 Dallas-Fort Worth survey. The 2001 NHTS was estimated to require 41.8 minutes per household (based on average household size) for the telephone portion.

Table 68: Measured Respondent Burden in Terms of Average Call Duration, for Telephone Recruitment and Retrieval

Survey | Recruitment/Screener Call | Reminder Call | Retrieval Call | Total Call Portion of Respondent Burden (per household)
2001 NHTS | 7.8 | Not reported | 14.8 minutes per person (using 2.3 persons per useable household, estimated total household time: 34.0 minutes) | 41.8 minutes (25, 26)
2001 California Statewide Survey | 15.6 | Not reported | 17.0 minutes per household | 32.6 minutes (25)
2002 Regional Transportation Survey, Greater Buffalo-Niagara Regional Transportation Council | 21.2 | Not reported | 25.5 minutes per household | 46.7 minutes (25)
1996 Dallas-Ft. Worth Household Travel Survey | 8.0 | 3.6 | 33.4 minutes for household information, plus 13.2 minutes per person (using 2.4 persons per household retrieved, estimated total household time: 65.5 minutes) | 77.1 minutes

25 Does not include reminder call average duration.
26 Does not include separate calls to the household to collect odometer readings.
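The per-household totals in Table 68 are simple compositions of the per-call figures; as a quick check (values taken from the table and the OMB submission described above):

```python
# 2001 NHTS telephone burden (Table 68): recruitment call plus retrieval calls.
recruitment = 7.8                          # minutes per household
retrieval = 14.8 * 2.3                     # 14.8 min/person x 2.3 persons/household
print(round(recruitment + retrieval, 1))   # 41.8 minutes per household

# NHTS burden estimate submitted to OMB: screener + person interviews + records.
print(8 + 12 * 2.5 + 14)                   # 52.0 minutes per household
```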

Ampt (2000) has suggested that respondent burden is more than just the measured burden in minutes: it depends on the “perceived difficulty” of a survey and, as a perception, can vary from person to person. She suggests that response burden is perceived as being less when:

• The respondent has greater influence in choosing the time (and perhaps the place) to complete the survey;
• The survey topic or theme is important or relevant to the respondent and/or their community;
• The questionnaire design is as simple as possible, to minimize perceived difficulties (physical, intellectual, and/or emotional);
• Negative external influences (other people) are avoided, and/or positive external influences are enhanced; and
• The survey appeals to the respondent’s sense of altruism.

Perceived Respondent Burden

Many of the suggested approaches to reducing perceived respondent burden are addressed directly or indirectly in other sections of this report. These include such measures as providing respondents with information about the importance of the survey topic and designing the questionnaire layout and wording to be as simple as possible. Among the key suggestions is to provide a variety of response options (mail-back, telephone, in-person, Internet) so that respondents can direct the how and when of completing the survey. Recent household surveys have offered respondents the option of mail-back, telephone, or Internet retrieval. The in-person option has almost completely disappeared in the United States, usually because of cost and security considerations; it remains, however, the preferred method in several countries, such as New Zealand, Australia, and the U.K.

Methods of Reducing Measured Respondent Burden

The methods proposed specifically to reduce measured respondent burden include:

1. Reduce the number of questions (Murakami, 2000). This is the simplest method of reducing respondent burden, and yet the one used least. In many recent household surveys, respondent burden has been increased by asking for multiple days of travel instead of one, or by asking for detailed information about in-home activities instead of simple trip purpose. While these may be fascinating data, respondent burden can be viewed as the fulcrum between more data and higher response rates.

2. Reduce the sample size. This reduces the respondent burden as measured across all respondents, but does not necessarily reduce the burden on any given respondent.

3. In CATI retrieval, use automated techniques such as “trip rostering” to reduce the need to ask the same questions of all household members. Trip rostering involves collecting detailed trip information for household members who traveled together during the travel day (or survey period) from only one household member. The trip is entered into a “roster,” and for the other household members participating in the same trip, the interviewer merely confirms that the household member did indeed make that trip; the full trip detail is later copied into each household member’s trip record. The 2001 NHTS used trip rostering (NHTS, 2001f). (A minimal sketch of the copying step appears after this list.)

4. Use split questionnaires, where each respondent is asked only a statistically selected subset of the overall survey. This approach has been used successfully in studies of education and health care, but has not been used in household travel surveys, where the insistence has been on full data from each household or respondent. Its most prominent use has been in stated preference surveys, which have generally focused on asking about perceived travel.

5. Use administrative or census data to impute or estimate non-travel household characteristics instead of asking respondents. For example, instead of asking a series of questions to elicit household income, census data could be used to derive an expected income level for households within a defined geographical area.

6. Use the variability in travel patterns from previously conducted surveys to model statistically the travel of different types of households. This is similar to the second option above in suggesting smaller samples, but goes further in suggesting that not only are large samples unnecessary, but that perhaps the collection of additional primary travel data is unnecessary. Using statistical modeling techniques on the vast array of household travel data already collected, both travel patterns and their variability could be closely estimated.
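The roster-copying step in option 3 amounts to duplicating one fully detailed trip record for each confirming co-traveler. A minimal sketch follows; the data structure and field names are hypothetical illustrations, not the NHTS implementation:

```python
from dataclasses import dataclass, replace
from typing import Dict, List

@dataclass
class Trip:
    person_id: str
    origin: str
    destination: str
    depart: str       # e.g., "07:45"
    mode: str
    purpose: str

def apply_roster(reported: Trip, confirmed_ids: List[str],
                 records: Dict[str, List[Trip]]) -> None:
    """Copy one reported trip to every co-traveler who confirms it,
    so the interviewer never re-asks the full trip detail."""
    for pid in confirmed_ids:
        # Duplicate the rostered trip, changing only the person it belongs to.
        records.setdefault(pid, []).append(replace(reported, person_id=pid))

# Usage: person 1 reports the trip in full; persons 2 and 3 merely confirm it.
records: Dict[str, List[Trip]] = {}
trip = Trip("hh1-p1", "home", "school", "07:45", "car driver", "serve passenger")
records["hh1-p1"] = [trip]
apply_roster(trip, ["hh1-p2", "hh1-p3"], records)
```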
The last three options for reducing the time incurred by participants in responding to household travel surveys may require additional research before they can be fully implemented.

Respondent burden, whether measured or perceived, is widely regarded as one of the key factors contributing to the decline in response rates to travel surveys. While many of the standards discussed in this report may assist in reducing perceived respondent burden, it is impossible to recommend a standard for measured respondent burden: until there is further evidence, it cannot be suggested that no survey should require more than, for example, 40 minutes per household from respondents. Accordingly, the recommendations for standardized procedures focus on the need for consistent reporting of measured respondent burden. These are provided in section 2.2.9 of the Final Report.
