Suggested Citation:"7. Survey Implementation." National Academies of Sciences, Engineering, and Medicine. 2007. Technical Appendix to NCHRP Report 571: Standardized Procedures for Personal Travel Surveys. Washington, DC: The National Academies Press. doi: 10.17226/22042.

CHAPTER 7

7. Survey Implementation

7.1 E-2: ETHICS

7.1.1 Definition

Ethics describe minimum acceptable standards of conduct or practice. In travel surveys, this relates to how a survey agency conducts itself with respect to those interviewed, the client, any subcontractors, and the public as a whole. It also relates to a survey agency's actions after data collection, when data are cleaned, coded, analyzed, and archived. Ethics reflect what all stakeholders may consider "fair" or "reasonable" conduct by those involved. In practical terms, applying ethics means implementing precautions to protect those affected from adverse effects. Ethics protect the rights of individuals and groups and serve to reduce public disapproval and criticism of what is done.

7.1.2 Review of Survey Ethics

Several survey research associations have established regulations, codes of practice, guidelines, or ethical standards that their members are expected to maintain (CASRO, 1997; ESOMAR, 1999a; MRA, 2000a). However, there is no assurance that members of these associations abide by these standards. In addition, while the ethical standards are similar among these associations, they are not identical, meaning that there is no standard code of conduct in the travel survey industry. Establishing a standard code of conduct that would apply to the entire travel survey industry would:

• Bring all future travel survey practice under a single, uniform standard;
• Provide a reference that survey respondents, survey clients, and survey practitioners can refer to, evaluate, and update;
• Facilitate publicizing the standard; and
• Make it easier for organizations commissioning travel surveys to require that surveys comply with these standards.

Recommended ethical conduct is described in section 2.4.1 of the Final Report.
7.2 E-3: MAILING MATERIALS

7.2.1 Background

Most surveys involve mailing some materials to respondents, whether an initial contact letter describing the survey to be done, recruitment materials, or the full survey form. There is evidence to suggest that the materials mailed to households, as well as the materials households mail back, have an effect on response rates. Some survey practitioners maintain that the appearance of mailing materials is of considerable importance if households are to take a survey seriously (Dillman, 2000). This is particularly relevant in North America, where the amount of "junk mail" received by most households has become excessive, and anything that appears to be another item of junk mail is likely to be discarded without being opened.

7.2.2 Discussion and Review of Literature

Both the survey profession and the direct mail advertising industry face the problem of declining response rates. In the direct mail advertising business, industry publications and journals have devoted a fair amount of space to discussing the benefits of appearance (Cavusgil and Elvey-Kirk, 1998; Vriens et al., 1998; Graham, 2002; Selzer and Garrison, 2002). Figure 9 shows the interrelationships between the motivating constructs of response behavior and their operationalization. These are the basis for the following set of questions, with potential solutions shown in parentheses:

• Does the mailing have eye appeal? Will the recipient take it seriously, or discard it in the same way as cheap and poorly presented junk mail? (envelope type, personalization)
• Does the mailing create the right impression with regard to the content and origin of the enclosed material? (source, envelope type, postage)
• How is the material being mailed to the recipient? Is it sent as bulk mail, or as first class or express mail? (postage, envelope type)
• How easy is it for the recipient to respond? Does the package contain a prepaid return envelope? Does the survey participant have the opportunity to respond in some other form, e.g., faxback forms, a web interface, or a toll-free number? (follow-up, postage)

The suggested solutions above were drawn from what has been observed to work in practice, as documented by Dillman in his two books, Mail and Telephone Surveys: The Total Design Method (Dillman, 1978) and Mail and Internet Surveys: The Tailored Design Method (Dillman, 2000). For example, letterhead stationery is suggested to be important because it is integrated with personalization, and this may evoke feelings of importance. These feelings, together with the acknowledgement that researchers have taken some effort to select and contact these households, may make respondents feel comfortably obliged to participate in the survey. This is otherwise referred to as reciprocity, which is believed to have a positive effect on response rates (Kalfs and van Evert, 2003; Zmud, 2003).

Figure 9: Motivators of Mail Survey Response Behavior and their Operationalization (Source: Cavusgil and Elvey-Kirk, 1998). The figure pairs six underlying motivators of mail survey response behavior with their operationalizations:

1. Net individual benefit: appeal, personalization, incentive
2. Societal outcome: source, promised anonymity
3. Commitment: prenotification, cut-off date notification, follow-up
4. Novelty: envelope type, cover letter, form or postscript, questionnaire (format, color, …)
5. Convenience: postage, home versus work address, identifying the "informed population"
6. Expertise

The appearance of the mailing package should not resemble marketing material. For example, it should not be so colorful that at first glance it is confused with "junk mail." On the other hand, Dillman (2000) has suggested that unusual packaging will draw attention to the package, but states that the color of the outer envelope should be white or off-white. Postage stamps should be unique or commemorative (not bulk mail or pre-printed bulk mail) because this reinforces personalization and heightens the novelty motivator (Dillman, 1978; Cavusgil and Elvey-Kirk, 1998). This is also related to the use of stamped return envelopes which, in turn, is interrelated with the convenience motivator (Cavusgil and Elvey-Kirk, 1998; Dillman, 2000). Recommendations for mailing materials are provided in section 2.4.2 of the Final Report.

7.3 E-4: RESPONDENT QUESTIONS

7.3.1 Introduction

In virtually any travel survey, respondents have concerns regarding the legitimacy of the survey and those conducting it. While some of these concerns may be addressed in a cover letter, the typical survey has more nuances than can be explained in a single (or even double) page letter. The state of the practice has evolved three methods for respondents to verify the survey and obtain answers to frequently asked questions:

• a telephone contact number;
• an informational brochure with frequently asked questions (FAQs); and
• an Internet web site.

The use of each of these methods to answer respondent questions, and the potential to standardize aspects of these methods, are discussed below.

7.3.2 Elements for a Consistent Approach

Telephone Contact Number

As respondents find the typical list of questions asked in a travel survey increasingly invasive, it is essential that the legitimacy of the survey be established. In addition to stating in the cover letter the authority under which the survey is being conducted, respondents should be provided with at least two telephone contact numbers: one for the sponsoring agency and one for the data collection entity. The purposes of the telephone number for the sponsoring agency are threefold:

• To provide a direct line to a designated employee of the sponsoring agency (survey spokesperson) who is knowledgeable about the survey effort and who can address public concerns regarding the legitimacy of the survey (and the survey firm);
• To provide answers to basic respondent questions about how to complete the survey; and
• To serve as a quality assurance mechanism to address complaints (if any) regarding the survey staff.

This telephone number should ideally be toll-free to all potential respondents in the survey area. However, a regular number to the designated employee's desk is nearly as effective, because most respondents are likely to be local and thus will not incur large long-distance charges. Care must be taken to ensure that the personnel who answer the main number for the sponsoring agency are informed about the survey so that they can route calls to the designated in-house spokesperson. Some agencies have connected toll-free hotlines to a telephone answering system that plays a brief message about the survey and then offers callers a menu of options: listening to recorded answers to frequently asked questions, recording a message, or transferring to the designated survey spokesperson (DRCOG, 1998).

It has become standard practice for the survey data collection entity, whether inside the sponsoring agency or a separate contracted entity, to provide a toll-free telephone number for survey respondents to call for assistance in completing the survey. Respondents also use this number to request a change in travel days or to change a scheduled appointment for data retrieval. A summary of hotline activity was presented in the Dallas-Fort Worth Household Travel Survey Report on Survey Methods (NCTCOG, 1996). The hotline received 621 calls over a six-month period. Of these, 66.5 percent were responding households indicating they had completed data collection and were awaiting a retrieval call. Another 10.8 percent of the calls were attempted refusals, which were referred to the survey data collection firm. Of the remaining calls, 7.4 percent were respondents with questions or requesting assistance.

Informational Brochure/Fact Sheet

Most household travel surveys conducted since the mid-1990s have included a separate informational brochure or fact sheet that contains frequently asked questions and responses to them. The purpose of the brochure or fact sheet is to encourage respondents to participate in the survey and to provide more detail than could otherwise be given in a cover letter. The brochure or fact sheet is mailed to respondents along with any other materials provided for them to use in the survey. The frequently asked questions (FAQs) that have appeared on informational brochures or fact sheets include:

• Who is conducting this survey?
• What is the survey about? (Why?)
• How long does the survey take?
• How did you choose me? (How was I selected?)
• What kinds of questions will you be asking?
• Why are you asking about my income?
• Why are you asking for the names of people who live here?
• What if I don't travel much?
• What if I drive for a living?
• Why do you need to know where and how my children travel?
• What about privacy?
• Will we ever know the results?
• How do I get help in answering a question?

Much effort has gone into ensuring that the brochures or fact sheets are easy to read. Often, the brochure is a simple tri-fold, printed on both sides. Fact sheets are usually unfolded single sheets printed on one or both sides. Color and graphics are frequently used to help clarify a point or brighten the display. Despite the prevalence of informational brochures and fact sheets, there are no documented studies of their impact on response rates. Sample brochures available on the Internet include:

• National Household Travel Survey 2001 (NHTS, 2001d; NHTS, 2001e);
• 1997/98 New York Metropolitan Transportation Council's Regional Travel Household Interview Survey (RTHIS); and
• Perth and Regions Travel Survey, Government of Western Australia (PARTS, 2001).

Internet Web Site

Almost all of the major household travel surveys conducted in the United States within the past three years have provided survey respondents with one or more web site addresses for purposes of survey verification, frequently asked questions and, in some cases, on-line responding. Web sites may include pages that:

• Provide general information about the survey and survey status;

• List frequently asked questions;
• Provide email addresses and telephone contacts for assistance or further information;
• Provide the ability to download survey materials;
• Permit respondents to complete the survey on-line (in some instances); and
• If the web site is provided by the survey data collection entity, link to the web sites of the sponsoring agency or agencies.

There has been no systematic reporting to date (2003) of such web site usage. However, counters on various web sites indicate an average of slightly more than 30 hits per month. Standardized procedures for respondent questions may be found in section 2.4.3 of the Final Report.

7.4 E-5: CALLER ID

7.4.1 Description

Caller ID, Caller Line Identification, and Caller Display are different names for a service provided by many telephone companies that allows the customer to see the telephone number, and sometimes the directory listing, of the person who is calling. With the addition of Call Blocking, telephone customers may automatically block incoming calls that do not permit the display of a telephone number. A recent industry survey estimated that 41 percent of all households in the United States subscribed to Caller ID (ATA, 2002). According to this study, age was one of the best indicators of whether a household subscribed to Caller ID services: 57 percent of those 18-24 years old and 54 percent of those 25-34 subscribed, but only 26 percent of those 65 or older did.

7.4.2 Impact on Surveys

In light of the general decline in telephone survey response rates, it is incumbent upon legitimate survey researchers to provide any information that may encourage responses from the full range of households. One of the primary uses of Caller ID is to screen out unwanted telephone calls by simply ignoring calls that do not display a known number or caller identity.
In one telephone company study, nearly 70 percent of the company's Caller ID users reported that they considered the ability to screen calls the most important attribute of Caller ID, and 15 percent said they had not answered their phones based on information displayed on their Caller ID screen (Southwestern Bell Telephone, 1998). A more recent study (Tuckel and O'Neill, 2001) showed similar findings, with 33.2 percent of households reporting that they were "frequent screeners."

Use of Caller ID to Screen Calls

The issue for transport surveys is whether households use Caller ID to screen out calls from survey researchers. A study by Link and Oldendick (1999) found that households that used Caller ID to screen calls tended to be younger and from higher income levels than those that did not screen. However, they also found that reported call screening behavior did not significantly increase the number of attempts or number of days needed to complete interviews with these respondents, nor was screening behavior significantly related to the likelihood of encountering a refusal before completion. They concluded that the increasing incidence of nonresponse to telephone surveys "does not appear to be driven by an increase in screening behavior..."

Caller ID Listing

It has been suggested that any impact call screening does have on response rates could be mitigated if the Caller ID were to display the name of the agency commissioning the survey, the name of the state government, or another entity involved in funding or commissioning the survey. In a survey conducted through a state university, Link and Oldendick (1999) asked their respondents what was displayed as the Caller ID listing. They found that the university name was displayed for 14.7 percent of the calls, "state government" was displayed for 26.6 percent, and the remainder received no specific listing, just an "out-of-area" or "listing unknown" message. Of those who saw a particular listing, 17.6 percent of those who saw the university name, and 20.7 percent of those who saw the "state government" listing, said the listing made them more willing to answer the call. More of those who saw "out-of-area" or "listing unknown" said it made them more hesitant to answer (26.9 and 22.3 percent, respectively). However, in each case the majority indicated that the particular listing made no difference at all (64.2 to 76.5 percent). The study authors concluded that their survey was helped, at least marginally, by being identified as either "university" or "state government."

This raises the question of whether a household travel survey could be conducted so that the sponsoring agency's name appears as the Caller ID listing. The technological (and legal) answer is that the caller listing must be the directory listing of the telephone from which the call originates. Technically, if all calls were routed through the sponsoring agency's telephone system via some very sophisticated routing, the listing displayed to respondents would be that of the sponsoring agency. There was mention on the Internet of a market research survey conducted by a private survey firm in which the outgoing calls were routed through a private advertising agency, so the advertising agency's name came up on the Caller ID listing instead of the survey firm's. Practically speaking, however, there are several reasons why this approach would not be fruitful for public or government agencies sponsoring travel surveys. First, most government agencies cannot accommodate such re-routing for security and legal reasons. Second, it is not a given that a specific Caller ID listing would consistently be displayed to all respondents. The listing displayed depends on the respondent's service level and equipment (not all displays show the name along with the number) and the vagaries of the long-distance telephone routing system. Long-distance routing may switch providers when lines become congested in order to maintain a certain level of line efficiency, and the Caller ID listing that shows depends on the listing available in that provider's directory. Thus, the Caller ID listing displayed on a given respondent's telephone could differ for the same number, or could come up as "unknown" or "out-of-area." Finally, some telephone firms route outgoing calls through predictive or power dialers, and these types of hardware do not "pulse out" digits that Caller ID devices can read.

Call Blocking

Many telephone companies offer their customers the option of electronically blocking the receipt of calls that either are not from a list of approved telephone numbers or do not provide a telephone number. One such service, referred to as "Privacy Manager," is a fairly recent call screening service that works with Caller ID to identify incoming calls that have no telephone number provided and are identified as "anonymous," "unavailable," "out-of-area," or "private." The caller hears a message such as: "The person you are trying to reach does not accept unidentified calls. Your Caller ID was not received. To enter an access code, press 1. Or, to record your name so that we may announce your call, press 2." If the caller provides a name, the customer then has the option of taking the call, or rejecting it with a message either to call back at another time or warning that this number does not accept telephone solicitations.

To examine the extent to which such Privacy Manager devices affect survey response, data from the Bureau of Transportation Statistics (BTS) of the U.S. Department of Transportation were examined. BTS has been conducting a monthly, nationwide Omnibus Survey of customer satisfaction with

transportation issues. Detailed information on the call dispositions of each monthly survey is posted on the BTS web site (BTS, 2002). Table 70 shows the percentage of eligible telephone numbers that were placed in the disposition category "Scope Undetermined" due to Privacy Manager devices and answering machines or voice mail in the six-month period from March through August 2002. As shown in Table 70, one percent of eligible telephone numbers were categorized as "Scope Undetermined" due to Privacy Manager devices over the six-month period. While data on the characteristics of households that use Privacy Manager devices were not found, their distribution necessarily reflects that of Caller ID subscribers, because Caller ID is a prerequisite for Privacy Manager. Thus, the households that could not be reached in the BTS survey due to Privacy Manager were likely to be disproportionately composed of younger persons.

Table 70: Percentage of Unresolved Telephone Numbers Due to Privacy Manager and Answering Machines/Voice Mail

                              Mar-02  Apr-02  May-02  Jun-02  Jul-02  Aug-02  Six-Month  Percent of
                                                                              Total      Eligible Numbers
Telephone Numbers Dialed       3,511   3,645   3,871   3,559   3,512   3,339  21,437
Eligible Numbers (Total
Numbers Dialed minus
Out-of-Scope Numbers)          2,778   2,834   3,006   2,953   2,622   2,450  16,643
Scope Undetermined Due To:
  Answering Machine              331     355     473     438      82      50   1,729     10.4%
  Privacy Manager                 52      36      34      12      23      13     170      1.0%

Recommendations for standardization on Caller ID are offered in section 2.4.4 of the Final Report.

7.5 E-9: ANSWERING MACHINES AND REPEATED CALL-BACK REQUESTS

7.5.1 Introduction

There are two related issues encountered by every telephone-based survey. First, when an answering machine is reached, does leaving a message improve completion rates? Second, when a household asks an interviewer to call back at another time, is there a point beyond which repeated call backs do not increase completion rates?
Each of these issues is discussed in the following section.

7.5.2 Discussion of Issues

Leaving Messages on Answering Machines/Voice Mail

There are several points in the typical telephone-based survey at which a potential household may be contacted:

• During initial screening/recruitment;

• As a reminder in advance of their assigned travel day; and
• During the process of retrieving travel information.

A review of recent household travel surveys indicates that the practice of leaving a message when an answering machine was reached on the initial screening call varied, but that all surveys left messages during the reminder and retrieval phases. While there has been no systematic study within the transportation field of the effect on completion rates of leaving a message on an answering machine, there have been studies in other areas. The National Immunization Survey compared completion rates among households that had, and had not, had a message left on an answering machine (Kochanek et al., 1995). When examined across different time periods of survey implementation, the results were inconclusive, with response rates fluctuating in different directions, sometimes in favor of leaving messages and other times not. The authors concluded, however, that "when used properly, answering machines can achieve a higher cooperation rate."

Among transportation surveys, the practice appears to be to leave messages at least once during the initial recruitment/screening. The Bureau of Transportation Statistics (BTS), in its Omnibus Surveys (Bureau of Transportation Statistics, 2002), required interviewers to leave messages on the seventh, fourteenth, and twentieth occasions an answering machine was reached. The message included the call center's toll-free number for arranging interviewing appointments. The rationale was that, given the dialing schedule, households with answering machines might be dialed two to three times per day, so leaving a message on every call might contribute to potential respondents feeling "harassed." Thus, BTS left a message for the first time at the seventh call. Other surveys have required a message be left on the third, and sometimes the first, contact with an answering machine (NuStats, 2003a). Anecdotally, concerns have been raised that interviewers must "start out on the defensive" when finally reaching a household where a message has been left (NuStats, 2003a).

On the recruitment/screening call, the message generally includes the name of the sponsoring organization, the nature of the survey, and the purpose of the call. In transportation surveys, a toll-free number to call to participate is rarely left, because experience has shown that households almost never call in to volunteer. This is not the experience in other types of surveys, particularly health care surveys, which routinely leave a toll-free number and recruit slightly less than one percent of their respondents as volunteers (McGuckin et al., 2001).

Within the transportation survey arena, there are some data on the effectiveness of leaving a message on an answering machine during the reminder call. In the Dallas-Fort Worth Household Travel Survey, 43.2 percent of the households for which an answering machine message was left during the reminder process ultimately completed the survey (Applied Management and Planning Group, 1996). This was much higher than the completion rate of 32.1 percent for households that did not receive any reminder contact, as shown in Table 71. Once a household has been recruited, leaving messages when an answering machine is reached is routine during the retrieval process.

Repeated Call Back Requests

There are two types of call back requests. The first is an unspecified call back request, in which the person answering the telephone or the door (for a face-to-face interview) indicates that this is not a convenient time to respond to the survey and asks the interviewer to call back at another time. No specific time is suggested.
Of course, this may be a subtle refusal that is difficult to convert to a full response, because repeated call back requests are not usually categorized as "soft" refusals.

Table 71: Survey Completion for Households Receiving an Answering Machine Message During the Reminder Call (Dallas-Fort Worth 1996 Household Travel Survey)

Type of Reminder Contact | Number | Percent of Reminder Calls | Percent Retrieved Completely
Spoke with Household | 6,051 | 67.5 | 49.2
Answering Machine Message | 1,272 | 14.2 | 43.2
Other (refused to participate, disconnected number, language barrier, etc.) | 593 | 6.6 | 0
Attempted-No Contact | 1,055 | 11.8 | 32.1
No Contact Attempted | 427 | -- | 30.2
Total | 9,398 | 100.0 | --

A recent study of non-response in the National Household Travel Survey (NHTS) pretest (McGuckin et al., 2001) found that 24 percent of the households that requested a call back at least once eventually completed the survey. Table 72 presents the final disposition of all households that requested a call back at least once during the survey process. This means, however, that for roughly three-quarters of these households, the repeated request for a call back was a form of "soft" refusal. The issue then becomes: how many times should a household that has requested a call back be called? The survey protocol for the NHTS called for at least eight attempts (2001 National Household Travel Survey, 2003). BTS left call back attempts in excess of seven to the discretion of the interviewer, based on his or her perception of the likelihood of completing the interview. That perception was determined, in part, by how vigorously the potential respondent or another member of the household encouraged the interviewer to call back to complete the interview.

Table 72: Of Households Requesting a Call Back, Percentage Completing Survey (National Household Travel Survey, 2000 Pre-Test)

Final Disposition Once a Household Requested a "Call Back" | Percentage of "Call Back" Households
Completed | 24.0
Refused | 18.5
Requested another "call back" | 47.3
Never spoke to the household again (ring/no answer) | 10.2

In light of the general decline in telephone survey response rates, anything within reason that can be done to encourage response should be done.
Unless or until there is clear evidence that leaving a message when an answering machine is reached does more harm than good, messages should be left. Similarly, survey researchers should treat call back requests as a standard part of the survey process. Treating each request as genuine, and honoring it, appears to encourage potential respondents to participate. Recommended procedures on this topic are provided in section 2.4.5 of the Final Report.

7.6 E-10: INCORRECT REPORTING OF NON-MOBILITY

7.6.1 Description

In any travel survey, it is to be expected that some portion of respondents will not have traveled from their home during the survey period. However, a claim of non-mobility on the diary day or days may also be a form of non-response: some potential respondents may realize that a claim of non-mobility will significantly shorten the interview. The issue addressed in this section is how to reduce incorrect reporting of non-mobility made as a form of non-response.

7.6.2 Genuine and False Non-Mobility

Users of travel survey data frequently assume that a high percentage of reports of non-mobility is an indicator of poor survey technique. To use the percentage of non-mobile surveys reliably as an indicator of survey quality, a standard set of questions must be asked and, at a minimum, the percentage of non-mobile persons must be routinely reported.

Legitimate Non-Mobility

A preliminary analysis of reported non-mobile persons in a sample of about 400 travel surveys from around the world, conducted by Madre, Axhausen and Gascon (2003), found that the average share of non-mobile person-days was 17 percent. The authors suggested this figure was high because it included surveys that extended over a period of several weeks, and that the "true" range of daily non-mobile persons (immobility) should be 8-15 percent. The 2001 National Household Travel Survey (NHTS) found that 11.8 percent of surveyed persons reported staying in one place/home the entire travel day, which is well within the range suggested above. The 2001 survey included a step in which persons who reported they stayed in one place/home were asked to confirm this. An analysis of the characteristics of these non-mobile persons (shown in Table 73) revealed:

• 31 percent also reported they were retired;
• 22 percent also reported having a medical condition that made travel difficult; and
• 10 percent were aged four or younger.

Note that this analysis is based on a review of the characteristics of non-mobile persons, not on their reported reasons. Only recently have travel surveys included questions asking why a person did not leave home during the travel day.
Table 73: Non-Mobile Persons (Unweighted): 2001 National Household Travel Survey

Statistic | Number | Percent of Total | Percent of "Stayed in Same Place/Home"
Total Persons | 60,282 | -- | --
Total reporting "Stayed in same place/home" all day | 7,141 | 11.8% | --
Also reported having a medical condition that made travel difficult | 1,537 | -- | 21.5%
Were aged 4 or less | 730 | -- | 10.2%
Also reported being retired | 2,226 | -- | 31.2%
Also reported being temporarily absent from a job or business | 246 | -- | 3.4%

Methods for Reducing Spurious Reports of Non-Mobility

There have been several approaches to reducing non-response through spurious reports of non-mobility:

• Some surveys have included a question in which respondents who reported no out-of-home trips or activities were asked to verify that they did not leave the house the entire day (verification question);

• In a few surveys, all or a sample of persons who reported no trips received a follow-up telephone call for verification; and
• Other surveys have asked respondents to provide reasons why they did not leave the house on the diary day (gently challenging questions).

Table 74 presents a review of the percentage of non-mobile persons in several recent U.S. household travel surveys, and the methods (if any) used to reduce spurious reporting of non-mobility. As may be seen from the table, the NHTS reduced the percentage of non-mobile persons from 25 percent in 1995 to 11.8 percent in 2001. It is difficult, however, to attribute all of this difference to the introduction of a verification question, because many other methodological changes were made at the same time. Recommended strategies relating to false reporting of non-mobility are provided in section 2.4.6 of the Final Report.

Table 74: Summary of Approaches to Reduce Incorrect Reports of Non-Mobility

Survey | Percent of Persons Reporting Zero Trips | Verifying/Challenging Question Asked? | If asked, wording
2001 National Household Travel Survey (U.S.) | 11.8% | Yes | "Does this mean {you/SUBJECT} stayed at {the same place/home} all day?"
1995 National Person Travel Survey (U.S.) | 25% | No |
2001-2002 Southern California Association of Governments | Data not yet available | Yes | "So, <YOU> made no trips, including for work or school?" (Asked in CATI.) Also, for each student and employee/worker in the household who reported not going to school or work on the travel day, interviewers were instructed to record the reason on the sample sheet.
2000 Bay Area Travel Survey | 10.1% | No* |
1996 Dallas-Ft. Worth Survey | 8.6% in pretest | Yes | "I want to confirm that you (he/she) stayed at home during the whole diary day." If yes: "Why were you at home during the whole diary day?" 01 Temporary illness; 02 Child/other household member was ill/needed care at home; 03 Homebound (does not leave the house; includes newborns/infants); 04 Fulltime homemaker; 05 Employed and worked at home; 06 Home school; 07 Day off; 08 Vacation day; 09 Other (specify). Zero-trip diaries were flagged for review by a supervisor.

*Separate telephone calls were made to verify reports of non-mobility.
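As an illustration of the verification-question and supervisor-flagging approach used in the Dallas-Ft. Worth survey, the sketch below shows how a zero-trip person-day record might be processed. All field names and the processing flow are hypothetical, not taken from any actual CATI system; only the reason codes follow the list above.

```python
# Hypothetical sketch of the zero-trip verification step: confirm the
# claim of non-mobility, record a reason, and flag the diary for
# supervisor review. Field names are illustrative assumptions.

NON_MOBILITY_REASONS = {
    "01": "Temporary illness",
    "02": "Child/other household member was ill/needed care at home",
    "03": "Homebound (does not leave the house)",
    "04": "Fulltime homemaker",
    "05": "Employed and worked at home",
    "06": "Home school",
    "07": "Day off",
    "08": "Vacation day",
    "09": "Other",
}

def process_diary(record):
    """Apply the zero-trip verification step to one person-day record."""
    if record["trips"]:
        # At least one trip reported: no verification needed.
        record["non_mobile"] = False
        return record
    # Verification question: "I want to confirm that you stayed at home
    # during the whole diary day."
    record["non_mobile"] = record.get("confirmed_home_all_day", False)
    if record["non_mobile"]:
        # Challenging question: "Why were you at home the whole diary day?"
        code = record.get("reason_code")
        record["reason"] = NON_MOBILITY_REASONS.get(code, "Unknown")
        # All zero-trip diaries are flagged for supervisor review.
        record["flag_for_review"] = True
    return record
```

A record that fails the confirmation would be routed back to trip retrieval rather than accepted as non-mobile; that branch is omitted here for brevity.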

7.7 E-11: RECORDING TIME OF DAY

7.7.1 Definition

This item refers to coding time-of-day values for database entry. It relates to how data are recorded (i.e., entered by the interviewer) and stored, rather than how respondents provide the information.

7.7.2 Discussion

Although time-of-day reporting may seem a trivial issue, the way times are recorded can lead to the estimation of negative travel or activity times. Travel or activity diaries tend to start at 3 a.m. or 4 a.m. and end at the same time one or more days later, depending on the design of the survey. Standard practice in most travel surveys is to transform a.m. and p.m. times into military time. This is an appropriate practice and should, in theory, allow elapsed durations to be obtained by subtracting the start time from the end time. However, a problem arises with a diary that starts at 3 a.m. on one day and ends at 3 a.m. on the second day. Using military time alone, the first day runs from 03:00 to 23:59 hours and the second day runs from 00:00 to 03:00 hours. While this means there is no duplication of hours, any activity that spans midnight becomes a problem: subtracting a time before midnight, such as 23:30, from a time after midnight, such as 00:30, yields a negative duration. Using a format such as elapsed time in minutes would alleviate this problem, but the time of day would not be readily apparent from the raw data. The same applies to a modified military time that adds 24 hours to the times on each additional day (e.g., 01:30 on the second survey day would be written as 25:30). In most modern database environments, the time of day can be stored together with the date, allowing the application of a time-difference function that takes the date into account.
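The midnight-crossing problem and the date-aware fix can be illustrated with a short script (using Python's standard `datetime` module; the specific dates are arbitrary illustrations):

```python
from datetime import datetime

# Naive subtraction of military clock times fails across midnight:
# 00:30 minus 23:30 gives a negative "duration."
start_minutes = 23 * 60 + 30   # 23:30 as minutes past 00:00
end_minutes = 0 * 60 + 30      # 00:30 on the following day
print(end_minutes - start_minutes)   # -1380: a negative travel time

# Storing the date together with the time lets the difference
# function account for the day boundary.
start = datetime(2001, 3, 15, 23, 30)   # 23:30 on diary day 1
end = datetime(2001, 3, 16, 0, 30)      # 00:30 on diary day 2
print((end - start).total_seconds() / 60)   # 60.0 minutes

# "Modified military time" achieves the same result by adding 24 hours
# per additional diary day, so 00:30 on day 2 is recorded as 24:30.
mod_start = 23 * 60 + 30   # 23:30
mod_end = 24 * 60 + 30     # 24:30
print(mod_end - mod_start)   # 60
```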
This would be the most practical way of storing times, but it has the potential to slow down data entry, as do the elapsed-time and modified military time methods. This is especially a problem in face-to-face or CATI data collection, where the additional time required to enter the date or convert the time adds to interviewer time, respondent burden, and costs. Also, including the date, especially if the data are to be released for public use, may raise confidentiality problems. Therefore, a simple but easy-to-process format for data entry seems appropriate. A format is recommended in section 2.4.7 of the Final Report.

7.8 E-12: TIME OF DAY TO BEGIN AND END REPORTING

7.8.1 Description

Surveys use various times at which to start and end a 24-hour (or longer) diary. The aim is usually to choose a time that is expected to interrupt relatively little travel, so that respondents are not put in the awkward situation of trying to report travel that started before the diary's start time. However, there is wide discrepancy in the selection of this time, which appears to range anywhere from midnight to 5 a.m.

7.8.2 Analysis

Standardizing the time of day to begin and end reporting is more a convenience to make surveys compatible and comparable, and probably has little overall effect on survey quality. However, some diaries fail to specify start and end times, or specify only a start time and not an end time, leading to ambiguity about the actual period of reporting. Generally, diaries tend to start around 2 a.m., 3 a.m., or 4 a.m. Ideally, start and end times should be selected so that there is little likelihood of beginning and ending in the middle of travel, or of any activity other than sleeping. Average hourly traffic volumes from highways and roads in North America, as well as in Great Britain and Australia, suggest that the lowest volumes consistently occur between 3 a.m. and 4 a.m. A review of recent data sets in the U.S. generally confirms that the optimal time to start a diary is between 2 a.m. and 4 a.m. Figure 10 through Figure 21 show the distribution of trip start and end times from six recent surveys. Table 75, Figure 22, and Figure 23 summarize the information for the hours from midnight to 4 a.m. From this, it is clear that the hour from 2 a.m. to 3 a.m. has the lowest percentage of both trip starts and trip ends. Therefore, a start time between 2 a.m. and 3 a.m. has the least chance of intercepting a trip in progress. There is also little regional variation in this across the surveys analyzed. A recommendation for standardization on this point is provided in section 2.4.8 of the Final Report.

[Figures 10 through 23 are charts of trip start and end time distributions; only their captions are recoverable here.]
Figure 10: Trip Start Times for New York City
Figure 11: Trip End Times for New York City
Figure 12: Trip Start Times for Dallas-Fort Worth
Figure 13: Trip End Times for Dallas-Fort Worth
Figure 14: Trip Start Times for the Ohio-Kentucky-Indiana Area
Figure 15: Trip End Times for the Ohio-Kentucky-Indiana Region
Figure 16: Trip Start Times for Phoenix
Figure 17: Trip End Times for Phoenix
Figure 18: Trip Start Times for South East Florida
Figure 19: Trip End Times for South East Florida
Figure 20: Trip Start Times for Salt Lake City
Figure 21: Trip End Times for Salt Lake City
Figure 22: Trip Start Times for Merged and Weighted Files
Figure 23: Trip End Times for Merged and Weighted Files
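As a quick check on the claim that the 2 a.m. to 3 a.m. hour is quietest, the merged-file percentages in Table 75 can be scanned for the hour with the fewest trip starts and ends (values transcribed from the table; an illustrative sketch only):

```python
# Merged-file percentages of trips starting and ending in each
# early-morning hour, transcribed from Table 75 (start, end).
merged = {
    "12:01-1:00am": (0.3, 0.4),
    "1:01-2:00am":  (0.1, 0.2),
    "2:01-3:00am":  (0.1, 0.1),
    "3:01-4:00am":  (0.2, 0.1),
}

# The hour minimizing combined trip starts and ends is the diary start
# time least likely to intercept a trip in progress.
best = min(merged, key=lambda hour: sum(merged[hour]))
print(best)   # 2:01-3:00am
```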

Table 75: Percentages of Trips Starting and Ending in the Early Morning Hours

Trip Times | NYC (Start/End) | Phoenix (Start/End) | DFW (Start/End) | OKI (Start/End) | SEF (Start/End) | SLC (Start/End) | Merged (Start/End)
12:01-1:00am | 0.5 / 0.6 | 0.2 / 0.2 | 0.5 / 0.7 | 0.2 / 0.1 | 0.4 / 0.3 | 0.3 / 0.4 | 0.3 / 0.4
1:01-2:00am | 0.2 / 0.2 | 0.1 / 0.2 | 0.2 / 0.2 | 0.1 / 0.1 | 0.1 / 0.1 | 0.1 / 0.2 | 0.1 / 0.2
2:01-3:00am | 0 / 0.1 | 0.1 / 0.1 | 0.1 / 0.2 | 0.3 / 0.1 | 0.1 / 0.1 | 0.03 / 0.1 | 0.1 / 0.1
3:01-4:00am | 0.1 / 0.1 | 0.3 / 0.2 | 0.1 / 0.1 | 0.3 / 0.1 | 0.2 / 0.1 | 0.1 / 0.1 | 0.2 / 0.1
Total | 0.8 / 1.0 | 0.7 / 0.7 | 0.9 / 1.2 | 0.9 / 0.4 | 0.8 / 0.6 | 0.5 / 0.8 | 0.7 / 0.8

7.9 E-13: CREATION OF ID NUMBERS

7.9.1 Introduction

Each completed survey requires a unique identification number. In addition, if data are retained on incomplete households, then all contacted households require a unique identification number.

7.9.2 Need for Standardized Procedures

The primary issue with respect to identification numbers is that they should permit ready retrieval of specific records and should identify each unit in the survey uniquely. In addition, the identification number can carry additional information, such as membership in a specific sampling category, permitting easy checking of sampling progress during the survey and ready identification for purposes of expansion and weighting after the survey is completed. In some CATI programs, a new ID number is assigned each time a call attempt or reminder is made; this should be avoided at all costs in personal travel surveys. Some surveys assign ID numbers only to completed households and not to incomplete households; this should also be avoided, particularly if the standardized procedure of retaining data on incomplete households is adopted. There are two alternative procedures that can be used for creating ID numbers.
The first procedure creates the ID number by starting with a digit indicating the day of the week on which the diary was started (e.g., 1 for Monday, 2 for Tuesday, etc.). The second through fifth digits consist of the date of recruitment of the household. Thus, a household recruited on March 15 with a diary day of Tuesday would have an ID number beginning with 20315. The remainder of the number is a sequential number that can either run through the entire survey or restart on each day of the survey. In the former case, if the above household was the 1,537th household recruited since the beginning of the survey, and the total number of households to be recruited exceeds 9,999, the household ID number would be 2031501537. In the latter case, if this household was the 38th household recruited that day, and no day would have more than 150 households recruited, the ID number might be 20315038.

The second procedure, used where stratified samples are drawn, uses the initial digits to indicate the stratum to which the household belongs, with the remainder being a sequential number assigned as each interview or contact is completed. Suppose a survey is undertaken in which households are stratified by county of residence, household size, and vehicle ownership. The household of the

previous example is drawn from a county coded 3, and is a 4-person household with 2 cars. The ID number could be either 34201537 or 342038. In surveys where different sources are used to generate the sample, a digit in the ID number may indicate the source of the sampling unit, e.g., RDD or a bus intercept survey. Thus, in the previous example, if the household had been obtained through RDD, and this source is coded 1 as the first digit, the ID number would become 134201537 or 1342038. Similarly, if the date-based ID number were adopted, the ID number would be either 12031501537 or 120315038.

Again, it would be helpful if all personal travel surveys used the same procedures for assigning identification numbers to survey units. This would mean, first, that complete and incomplete households were always handled identically, and second, that any information encoded in the ID number would be encoded consistently across surveys. Such consistency would allow standard processing software to be set up to use the information in the ID number. Recommendations on this are included in section 2.4.9 of the Final Report.
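Both numbering schemes can be sketched in code. The helper functions below are hypothetical illustrations that reproduce the worked examples in the text; the field widths and code values are assumptions, not a prescribed format.

```python
def date_based_id(weekday, month, day, sequence, seq_width=5, source=None):
    """Date-based ID: weekday digit (1 = Monday), MMDD of recruitment,
    then a zero-padded sequential number; an optional leading digit
    indicates the sample source (e.g., 1 = RDD)."""
    body = f"{weekday}{month:02d}{day:02d}{sequence:0{seq_width}d}"
    return f"{source}{body}" if source is not None else body

def stratum_based_id(county, hh_size, vehicles, sequence, seq_width=5, source=None):
    """Stratum-based ID: county, household-size, and vehicle codes,
    then a zero-padded sequential number; optional leading source digit."""
    body = f"{county}{hh_size}{vehicles}{sequence:0{seq_width}d}"
    return f"{source}{body}" if source is not None else body

# Worked examples from the text: recruited March 15, diary day Tuesday (2),
# 1,537th household overall; county 3, 4-person household, 2 cars; RDD = 1.
print(date_based_id(2, 3, 15, 1537))               # 2031501537
print(stratum_based_id(3, 4, 2, 1537))             # 34201537
print(stratum_based_id(3, 4, 2, 1537, source=1))   # 134201537
print(date_based_id(2, 3, 15, 38, seq_width=3))    # 20315038
```

Fixing the sequence width up front, as these functions do, is what keeps every ID the same length and therefore sortable and parsable by position.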

TRB’s National Cooperative Highway Research Program (NCHRP) Web-Only Document 93 is the technical appendix to NCHRP Report 571: Standardized Procedures for Personal Travel Surveys, which explores the aspects of personal travel surveys that could be standardized with the goal of improving the quality, consistency, and accuracy of the resulting data.
