Suggested Citation:"2. Literature Review Results." National Academies of Sciences, Engineering, and Medicine. 2007. Technical Appendix to NCHRP Report 571: Standardized Procedures for Personal Travel Surveys. Washington, DC: The National Academies Press. doi: 10.17226/22042.
CHAPTER 2

2. Literature Review Results

2.1 REVIEW OF THE STATE OF PRACTICE

2.1.1 Introduction

There has been a tendency for metropolitan areas to concentrate their survey efforts around each turn of a decade, respecting the wishes of the U.S. Census Bureau that they stay out of the field in the spring of the year in which the Decennial Census is undertaken. As a result, the most recent major push to complete household travel surveys took place in the early 1990s and is well documented in both NCHRP Synthesis 236 (Stopher and Metcalf, 1996) and the TMIP “Scan of Recent Travel Surveys” (TMIP, 1996). These two documents reviewed surveys through 1994-95. Relatively few surveys were conducted after that until 2000, when a number of major surveys were initiated. In this review, it has largely not been possible to include surveys initiated since 2000, except insofar as details are available from the Requests for Proposals that were issued. In addition, such surveys do not provide information on outcomes, which are a large part of what is important in this review. We have therefore opted not to include any surveys currently in the field, or those that have recently finished fieldwork but whose final outcomes are not yet known or documented.

This review examines a number of aspects of each survey and outlines how each was achieved. The aspects covered are:

• Design of Survey Instruments;
• Design of Data-Collection Procedures;
• Sample Design;
• Pilot Surveys and Pretests;
• Survey Implementation;
• Data Coding, including Geocoding; and
• Data Analysis and Expansion.

Detailed descriptions are not provided; rather, an attempt is made to identify what appear to be the customary methods, procedures, and measures used in the surveys. This section concludes with a discussion of the impact of technological and social changes on travel surveys.
2.1.2 Review of Recent Surveys

Some Recent Surveys

A few recent surveys were gathered together as part of this project. Of these, six were not included in other recent reviews of surveys. A brief description of each survey is provided below.

The 1997-98 New York and North Jersey Regional Travel Household Interview Survey (RT-HIS). The NY/NJ Regional Travel Household Interview Survey (RT-HIS) was conducted to provide data to construct a state-of-the-art transportation planning model for the New York/New Jersey/Connecticut metropolitan region. Data were collected from 28 counties, and the study was conducted by the New York Metropolitan Transportation Council (NYMTC) and the North Jersey Transportation Planning Authority (NJTPA) over a period of 16 months, from February 1997 to May 1998. It used telephone recruitment and a telephone retrieval (CATI) procedure; the survey materials were mailed to the recruited households. The sampling plan was intended to provide sufficient information for mode choice model development and a snapshot of county-level weekday travel. The sample design and selection procedures used a stratification based on different levels of mode utilization and residential density, with sampling rates varying from 0.04 percent to 0.68 percent among the strata. Travel diaries were used to record the travel of participating households, which were assigned specific travel days and recorded their travel over a period of 24 hours. The sampling frame comprised the 7,180,538 households in the 28-county survey area. In total, 14,441 households (including a 323-household weekend sample) were recruited to participate in the survey. Of these, 11,264 households completed the travel diaries (10,971 weekday and 293 weekend). Travel information was retrieved from all household members regardless of age.

The 1996 Bay Area Travel Survey. The 1996 Bay Area Travel Survey was conducted to provide data for the continuing development and improvement of the Metropolitan Transportation Commission’s (MTC) Regional Travel Demand Forecasting Model, as well as to provide a better understanding of travel behavior in the San Francisco Bay Area. The MTC’s jurisdiction includes the nine counties of Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma.
The survey was conducted by the Metropolitan Transportation Commission (MTC) of Oakland, California, in two phases: January through May 1996, and again between September and December 1996. A 48-hour activity diary was used to record all activities and travel conducted during an assigned 48-hour period. Household recruitment was conducted by telephone, followed by mailing of the survey materials; retrieval of the data was conducted by telephone (CATI). The proposed sample size for the Bay Area Travel Study was 3,750 households, consisting of 1,750 randomly sampled households and 2,000 households that use the Bay Bridge corridor. The sample size was designed to attain a margin of error of 1.3 percent overall at a 90 percent confidence level; in addition, all strata were to achieve less than a 10 percent margin of error at the same confidence level. According to preliminary 1996 projections by the Association of Bay Area Governments, the total number of households in the nine-county area was 2,339,160. In total, 5,857 households were recruited to participate in the study, of which 3,678 completed travel diaries. Information was retrieved from all household members regardless of age.

The 1996-97 Corpus Christi Study Area Travel Survey. The Corpus Christi Survey was conducted to collect and update data on travel characteristics in the Corpus Christi metropolitan area. The survey population included residents of Nueces and San Patricio counties. The survey was part of an ongoing effort of that period by the Texas Department of Transportation (TxDOT) to collect and analyze travel behavior across the state. The data were to be used by TxDOT and local agencies to identify transportation needs in urban areas and to update transportation and air quality models in the region. The survey collected travel and activity information from respondents over a specified 24-hour period.
The sample was designed to attain an accuracy of ±12 percent at the 90 percent confidence level, resulting in an estimated sample size of 1,550 households. However, to allow for unforeseen circumstances, the final sample target was set at 1,705 households. According to the 1990 Census, Nueces and San Patricio counties had 349,894 residents in 118,333 households. In total, 2,182 households were recruited to participate in the study, of which 1,712 completed travel diaries. Information was collected from all household members aged 5 years and older.
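Precision targets like those quoted for the Bay Area and Corpus Christi samples map to sample sizes through the standard normal-approximation formula. The sketch below is a generic illustration, not the contractors' documented method: it assumes a worst-case proportion of p = 0.5 and applies an optional finite-population correction. It lands near 4,000 households for the Bay Area specification; the published 3,750-household target also reflected stratification and the Bay Bridge corridor oversample.

```python
from math import ceil
from statistics import NormalDist

def sample_size(moe, confidence, p=0.5, population=None):
    """Households needed to estimate a proportion within +/- moe
    at the given confidence level (normal approximation)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = z ** 2 * p * (1 - p) / moe ** 2
    if population is not None:
        # finite-population correction
        n = n / (1 + (n - 1) / population)
    return ceil(n)

# Bay Area specification: 1.3% margin of error at 90% confidence,
# ~2.34 million households in the nine-county frame
n_bay = sample_size(0.013, 0.90, population=2_339_160)
```

The same function reproduces familiar textbook values, e.g. a 5 percent margin at 95 percent confidence requires 385 respondents.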

The survey was conducted between April 1996 and April 1997. It used the telephone to recruit participants; after recruitment, a respondent package was mailed to each recruited household, and retrieval of travel information was accomplished by CATI.

The 1995 Origin-Destination Survey for Northwestern Indiana. In the fall of 1995, the Northwestern Indiana Regional Planning Commission (NIRPC) conducted an Origin-Destination Survey, the first such survey conducted for Northwest Indiana in over twenty years. The survey was conducted to address the ever-increasing challenge of providing more efficient transportation facilities to accommodate escalating travel in the region. It was a self-administered, mail-out, mail-back travel survey; households were randomly recruited from a list of mailing addresses. The specific objective of the survey was to identify all trips made by members of the participating households on a single survey day. The survey was conducted during September, October, and November of 1995, between Labor Day and Thanksgiving. The geographic scope of the survey was the three counties of Lake, Porter, and LaPorte in northwestern Indiana, which were subdivided into 292 traffic analysis zones (TAZs). Approximately 25,000 households were randomly selected in the three-county area with the objective of obtaining 2,500 usable, completed surveys. This represented a targeted one percent sample, because it was estimated that there were approximately 257,000 households in the sampling frame. Because households were not evenly distributed throughout the three-county area, sampling was stratified by urban, suburban, and rural parts of the region. The survey used a one-day trip diary to collect the travel information of the participating households, covering all household members 14 years of age and older.

The 1996 Broward Travel Characteristics Survey.
The Florida Department of Transportation initiated the Broward Travel Characteristics Study (BTCS) in February 1996 to identify the localized trip-making characteristics of Broward County and to improve the travel forecasting accuracy of the Florida Standard Urban Transportation Model System (FSUTMS) for the area. A survey package was developed that requested information on household characteristics and income (the Household Verification Survey), daily trip-making events (the Travel Logs), and the propensity for using transit (the Direct Utility Assessment Survey). The survey used a series of telephone and mail-out questionnaire surveys to establish the socio-economic and travel characteristics of Broward County. A systematic random sample pool of 6,851 households was drawn from the Property Appraiser records of Broward County, and more than 13,000 telephone calls were made to screen and recruit households to participate in the survey. From the initial sample of 6,654 recruited households, 42 percent (2,803) participated in the Household Characteristics Survey, and 93 percent of those households (2,625) agreed to participate in a subsequent travel log survey. All households were requested to complete the Household Verification Survey, which included most of the questions asked in the Screener Survey along with additional information on the Travel Maker’s Profile Code and household income. The major goal of the Direct Utility Assessment (DUA) survey was to identify participants’ propensity to use travel modes other than “drive alone” and to develop coefficients for use in transit modeling. For the Travel Log Survey, a series of questionnaires was used to identify the travel characteristics of the study area. The major travel characteristics sought in the study included household trip generation, trip purpose, trip length, travel time, and modal split.
Mail-out survey packages were sent to the 2,625 households that agreed to participate in the mail-out portion of the study. Approximately 33 percent of these households returned travel logs, and 22 percent of those returned DUA surveys, yielding 867 households that returned the travel log packages and 194 households that returned the DUA survey. The survey used a one-day travel diary to collect travel information from participating households, which were advised to complete the travel diaries for a selected day. The survey collected travel information from all household members six years of age and older. The survey used mail-out, mail-back for the Travel Log Survey, while the Household Verification Survey data were collected by CATI.
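The Broward figures illustrate how multi-stage designs shed respondents at every step. Chaining the counts reported above shows that the end-to-end DUA yield was only about 3 percent of recruited households; the sketch below simply restates that arithmetic.

```python
# Broward response chain; counts are taken from the survey description above.
stages = [
    ("recruited by telephone", 6_654),
    ("completed Household Characteristics Survey", 2_803),
    ("agreed to travel log survey", 2_625),
    ("returned travel logs", 867),
    ("returned DUA surveys", 194),
]

# Stage-to-stage retention rates (e.g. 2,803 / 6,654 = 42%)
retention = {
    label: count / prev
    for (_, prev), (label, count) in zip(stages, stages[1:])
}

# End-to-end yield: share of recruited households that returned a DUA survey
overall_yield = stages[-1][1] / stages[0][1]
```

Each retention rate here matches the percentages quoted in the text (42, 93, 33, and 22 percent, to rounding), which is a quick consistency check on the reported counts.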

The Travel Log survey period for the Broward Survey was scheduled for the fourth and fifth weeks of March 1996. The Department required the travel log portion of the survey to be completed before the end of the “peak season,” and therefore by the end of March 1996; March 19-21 and March 26-28 were selected as the travel log dates for the survey.

The 1991 California Statewide Travel Survey. The California Department of Transportation maintains a statewide travel database, which is used to estimate, model, and forecast travel throughout the state. The information supports transportation planning, project development, air quality analysis, and a variety of other program areas. The database contains socio-economic and travel data for California as a whole, for all rural counties combined, and for each of the following 15 urban regions:

• AMBAG (Monterey and Santa Cruz counties);
• MTC (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma counties);
• SACOG (Sacramento, Sutter, Yolo, Yuba, and the western portions of El Dorado and Placer counties);
• SCAG (Los Angeles, Orange, Riverside, San Bernardino, and Ventura counties); and
• The counties of Butte, Fresno, Kern, Merced, San Diego, San Joaquin, San Luis Obispo, Santa Barbara, Shasta, Stanislaus, and Tulare.

The survey data include socio-economic household data such as persons per household, vehicle availability, employment status, and household income. For travel, the survey collected information from all household members, as well as out-of-state overnight guests, who were five years of age and older. The travel data contain information such as the characteristics of all trips by all modes, the location of trip origins and destinations, the time of each trip, trip purpose, and vehicle occupancy. The survey used telephone recruitment, a mail-out questionnaire, and telephone retrieval.
Sampling was performed using random digit dialing (RDD), with samples stratified by region for weekday travel and a single statewide sample for weekend travel. Regional sample sizes were determined in advance to meet minimum regional statistical reliability requirements; the statewide total sample was thus built as an aggregation of basic regional needs. The use of regionally based sample size determination resulted in non-proportional sampling across regions for weekday travel. The survey resulted in approximately 13,501 households being interviewed for weekday travel and 900 households being interviewed for weekend travel. Interviews were conducted from one of several centralized telephone facilities, with 75 percent of the sample using CATI. Interviews were conducted in English, Spanish, Vietnamese, Korean, Cantonese, and Mandarin. Interviews for the weekday sample were conducted between May 15 and June 28, 1991, while interviewing for the weekend sample occurred between September 9 and October 29, 1991.

The Greenville Urban Area MPO Household Travel Survey. The Greenville Urban Area MPO Household Travel Survey was conducted to obtain accurate information on the travel characteristics of households in the MPO area and to support the development and calibration of travel demand models. The Greenville MPO planning area is located in Pitt County, North Carolina. For transportation planning purposes, the Greenville MPO is subdivided into 229 Transportation Analysis Zones (TAZs), which together comprise the entire MPO planning area and contain a total of 75,000 persons and 33,000 dwelling units. The survey used telephone recruitment with a mail-out, mail-back questionnaire. The target sample size for this survey was 1,000 households, with a targeted response rate of 50 percent. A total of 1,596 households completed the telephone survey and agreed to complete the travel diaries.
Of these, 1,058 households actually returned usable travel diaries. Participating households were assigned a travel day (Sunday through Saturday) to record all trips made in a 24-hour period by household members aged 5 years and over. The survey was conducted over a period of eight weeks (September-November 1998).

The 2000 Southeast Florida Regional Travel Characteristics Survey. The 2000 Southeast Florida Regional Travel Characteristics Survey was conducted to provide information for developing planning tools for the surface transportation facilities of the three-county region of Broward, Palm Beach, and Miami-Dade counties. The survey was designed to allow state and local government planners to understand when, where, how, and why people travel. It was also designed to provide information allowing local planners to estimate where growth will occur, estimate congestion, and estimate how the development of roads, buses, and trains might improve travel in the region. Recruitment was by telephone, followed by a mail-out demographic survey form and a one-day travel diary; retrieval was by CATI. Travel diaries were distributed to all household residents. Data collection began in December 1998 and was completed in September 1999. The required sample size was estimated as 5,060 households, with specified numbers of households for each of 18 geographic areas (districts) of the three counties under study and for a set of demographic characteristics. Travel logs were collected for 5,100 households, covering all travel conducted by all members of the household on one day. In addition, approximately 10,000 bus riders were surveyed, visitors were surveyed at 79 hotels, trucking information was gathered from 848 commercial establishments that use trucks, and workers were surveyed at seven major employment areas in the three counties.

The 1993 Wasatch Home Interview Travel Survey. The 1993 Wasatch Home Interview Travel Survey was conducted on a sample of 3,000 households. The survey was coordinated and managed by the Wasatch Front Regional Council (WFRC). The goal of the survey was to collect information about the demographics and travel characteristics of households within the jurisdictions of the WFRC and another local agency, the Mountainland Association of Governments (MAG). The WFRC covers Salt Lake City and Ogden, Utah, while MAG covers Provo and Orem, Utah.
The survey used a one-day activity diary in which respondents were asked to record each activity they conducted during the 24-hour period. The assigned survey days were weekdays (Monday-Friday) only, with diary days assigned between March 22 and May 14, 1991. Data were collected from all persons aged five or older in the sampled households. The survey was a telephone recruitment, mail-out, mail-back survey, with the household sample selected using RDD. It was estimated that approximately 7,500 households would have to be recruited to yield 3,000 completed households (1,200 in Salt Lake City, 900 in Ogden, and 900 in Provo/Orem), given an estimated 40 percent response rate. The actual number of households recruited was more than 7,512, and the total number of completed households was 3,082 (2,181 for WFRC and 901 for MAG).

The 1995 Ohio, Kentucky, and Indiana (OKI) Household Activity and Travel Survey. The OKI Household Activity and Travel Survey was conducted to provide the OKI region with a new database of travel patterns and behavior to assist in updating the region’s transportation models. Data were collected from seven counties: the Ohio counties of Butler, Warren, Clermont, and Hamilton (including the City of Cincinnati); the Indiana county of Dearborn; and the Kentucky counties of Boone, Campbell, and Kenton. The study was conducted by URS Consultants with Market Opinion Research over a period of six months in the latter half of 1995. It used a telephone recruitment and retrieval (CATI) procedure; survey materials, including cover letters, were mailed to recruited households. A 24-hour activity diary was used to record all activities and travel conducted during an assigned day. The proposed sample size for the OKI Household Activity and Travel Survey was 3,000 households, designed to attain a margin of error of 1.8 percent overall at a 95 percent confidence level.
Initially, 5,000 households were recruited, allowing for a response rate of 60 percent. Information was retrieved from all household members regardless of age.

The 1996 Treasure Coast Survey. The Florida Department of Transportation initiated the Treasure Coast Travel Characteristics Study in January 1995 to identify the localized trip-making characteristics of the Treasure Coast region and to improve the travel forecasting accuracy of the Florida Standard Urban Transportation Model System (FSUTMS) for the area. Data were collected from Martin, St. Lucie, and Indian River counties. The study was conducted by Walter H. Keller, Inc., with sub-consultants Regional Research Associates, Inc. and Marda L. Zimring, Inc. It used telephone recruitment and retrieval (CATI) as well as mail-back retrieval; survey materials were mailed to recruited households.

A survey package was developed that requested information on household characteristics and income (the Household Verification Survey), daily trip-making events (the Travel Logs), and the propensity for using transit (the Direct Utility Assessment Survey). The survey used a series of telephone and mail-out questionnaire surveys to establish the socio-economic and travel characteristics of the Treasure Coast region. A systematic random sample pool of 5,000 households was drawn from the Property Appraiser records of the Treasure Coast counties. From the initial sample of 1,531 recruited households, 46.4 percent (702) participated in the travel log survey. This survey was identical to the Broward County Survey, with the same Household Verification Survey and Direct Utility Assessment survey. Mail-out survey packages were sent to the 1,531 households that agreed to participate in the mail-out portion of the study; approximately 46 percent of households returned travel logs, and 38 percent of those returned the DUA surveys. The survey used one-day, two-day, and three-day travel diaries to collect travel information from participating households, which were advised to complete the travel diaries for the selected day(s). The survey collected travel information from all household members six years of age and older. As in Broward County, the survey used mail-out, mail-back for the Travel Log Survey, while the Household Verification Survey data were collected by CATI. The Travel Log survey period for the Treasure Coast Survey was scheduled for the fourth and fifth weeks of March 1995.

The 1994 Research Triangle Survey. This survey data set was used in this research, but documentation was not provided to us beyond what was needed to understand and make use of the data for analysis. Data from this source are reported in the following sections of the report, but no documentation is available to summarize the execution of the surveys.
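Several of the surveys described above inflated their recruitment targets to allow for attrition: the Wasatch survey recruited roughly 7,500 households against an assumed 40 percent response rate, and the OKI survey recruited 5,000 against an assumed 60 percent rate, both aiming at 3,000 completed households. The underlying arithmetic is simply a division, sketched below.

```python
from math import ceil

def recruits_needed(target_completes, expected_response_rate):
    """Households to recruit so that the expected number of completed
    surveys still meets the target after attrition."""
    return ceil(target_completes / expected_response_rate)

wasatch = recruits_needed(3_000, 0.40)   # Wasatch assumption: 40% response
oki = recruits_needed(3_000, 0.60)       # OKI assumption: 60% response
```

The assumed response rate is the critical input: an optimistic assumption leaves the survey short of completes, which is why the final sample targets in several of these surveys were padded further.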
Examination and Comparison of the Acquired Surveys

The surveys above were examined and compared in an effort to identify common trends in current travel survey practice. Because of the limited sample of surveys considered in this comparison, the results cannot be taken as entirely representative of current practice. However, this review, together with the findings of the survey scans conducted in NCHRP Synthesis 236 and the Travel Model Improvement Program (TMIP) Scan of Recent Surveys, provides a reasonable review of the state of the art in travel survey practice.

Most of the surveys used a one-day travel diary, although the 1996 Bay Area Survey used a two-day activity diary, while the Wasatch Survey, the Corpus Christi Survey, and the OKI Survey used one-day activity diaries. None of these surveys used a time-use diary. Most of the surveys collected similar socio-demographic data, the usual variables being gender, age, relationship of household members, driver status, and employment status. However, the methods for collecting age, and the categories used for relationships and employment status, vary from survey to survey. On employment status, for example, Table 1 shows some of the categories used.

Table 1: Examples of Working Status from Recent Surveys

Survey             Work Status Categories
Research Triangle  Retired, homemaker, unemployed but looking, unemployed not looking, student, employed, multiple jobs
Wasatch            Employed full time, employed part time, multiple jobs, retired, unemployed
Broward            Retired, homemaker, working, unemployed
Bay Area           Employed, unemployed, homemaker, retired, other
OKI                Working outside the home, working within the home

Trip purpose is another variable that was collected with varying categories from survey to survey. Activity surveys do not explicitly collect data on trip purpose, although it can be derived from the activities reported. To provide an idea of the variations that were used in these surveys, Table 2 provides an overview of some of the categories. Nine of the twelve surveys collected information on work status, but two did not. Two of the eleven surveys collected data on occupation, and one collected data on industry; the remainder did not attempt to collect any occupation or industry data. In the two cases where occupation was collected, the categories used were not the same in the two surveys.

Table 2: Examples of Trip Purpose Categories Used in Some of the Surveys
(Surveys compared: Research Triangle, Southeast Florida, Treasure Coast, Indiana, Greenville, Broward, California; each X marks one survey using the category.)

Work                                X X X X
Work commute or work errand         X
Work at regular jobsite             X
Primary work location               X
Work at other location              X
Work-related site                   X X X
Work at home                        X
Business                            X X
Drop off/pick up someone            X X X X X
Visit friends/relatives             X
Eat meals                           X X X X X
Social/recreational/entertainment   X X X X X1
Recreational                        X X X
Shop                                X X X X X X
Doctor/dentist/other professional   X X2 X2
Other family/personal business      X X3 X3 X3
Household                           X
Religious/civic                     X
School                              X X X X
School at regular place             X
School at other place               X
Daycare/babysitter                  X X
Sleep                               X
Other activities at home            X
Other activities not at home        X
Home                                X X X X
Change travel mode                  X X X
Other                               X X X

1 Social/entertainment (recreation was defined as a separate purpose)
2 Medical or dental
3 All except Research Triangle used the phrase “Personal Business”

As can be seen from Table 2, only the Southeast Florida and Treasure Coast surveys used identical categories, and this is because both were conducted by the same firm for the same client. Otherwise, there is no agreement on trip purpose categories. For the four surveys that used activity diaries, there is similarly no agreement on activity categories.

DESIGN OF DATA COLLECTION PROCEDURES. Two of the surveys, Greenville and Wasatch, used mail-out with mail-back for the household travel survey. 
The remainder used mail-out with telephone retrieval. All of the surveys used telephone recruitment to recruit households, followed by mail-out of the survey materials. Four of the twelve surveys used a reminder call on the eve of the assigned travel diary day. Of those surveys specifying the number of attempts that should be made to retrieve data from a recruited household, one specified three attempts, while two others specified six attempts. Six of the twelve surveys specified that data were to be collected from all household members, regardless of age. Five specified five years old and above for data collection, and one specified six years old and above. Most of the reports did not specify rules with respect to proxy reporting. The Research Triangle survey was an exception, specifying rules on proxy reporting for adults, for minors, and for an adult who had completed a written diary. The definition of a complete household was also not provided in the documentation of most of the surveys. For Research Triangle, it was defined as a household with completed records for all household members.

Nine of the surveys did not use an incentive. The Bay Area survey provided a calculator as an incentive to a subsample of the recruited households. The Research Triangle Survey provided $1 per household and a pen for each member of the household.

Sample sizes and response rates, where reported, are shown in Table 3. Not all surveys reported response rates. In most cases, it appears that the rate reported was that of recruited households, not the overall response rate including response to the recruitment. In the cases of the Bay Area, Corpus Christi, and Research Triangle surveys, the response rates are overall response rates. In the other cases, it is not known, although the reports suggest they are completion rates, not response rates. No other aspects of data collection procedures were defined in the reports on the surveys.

SAMPLE DESIGN. Three surveys used simple random samples. The remainder used stratified samples, with stratification by geographic area, vehicle ownership, and household size. 
Table 3: Sample Size and Response/Completion Rate for the Twelve Surveys

Survey                                          Sample Size   Response/Completion Rate
Bay Area                                        3,678         63% (completion rate)
Greenville Travel Study                         1,058         55% of recruited HH
Wasatch Travel Study                            3,082         N/A
Indiana Transportation Study                    1,070         N/A
Broward Travel Characteristics Study            702           46% (completion rate)
Treasure Coast Study                            N/A           N/A
California Statewide Survey                     14,417        N/A
Corpus Christi Study                            1,712         72% of recruited HH
Southeast Florida Regional Travel Char. Study   5,168         N/A
RT-HIS Regional Travel Interview Study          11,264        78% (completion rate)
OKI Survey                                      2,870         57% (completion rate)
Research Triangle Survey                        N/A           N/A

PILOT SURVEYS AND PRETESTS. Five of the surveys reported using a pilot test or pretest. No details were provided of the changes that resulted from these preliminary surveys, nor of how much of the survey implementation was subjected to pilot testing.

SURVEY IMPLEMENTATION. Again, few details were provided in the reports on survey implementation. Most of the telephone retrieval surveys used Computer-Assisted Telephone Interviewing (CATI) procedures for data retrieval and also for recruitment. In most cases, the reports indicated that the CATI programming included various logic and validity checks. Other implementation details are not provided.

DATA CODING INCLUDING GEOCODING. Six (possibly eight) of the surveys used GIS software to geocode data and provided geocodes to the level of latitude and longitude. One survey geocoded the data to Traffic Analysis Zone (TAZ) level only and did not specify the method, although it appears to have been manual. Similarly, one geocoded to both TAZ and latitude/longitude, and appears to have done so using manual procedures or computer-based address-matching software. No information is provided on the remainder, and no other information was provided on data coding.

DATA ANALYSIS AND EXPANSION. 
Only one survey, the Research Triangle survey, indicated the methods used for expansion and weighting. This survey used a fairly intricate method of weighting to correct for various biases in the sampling plan, including the presence of multiple telephone lines in some households and shared lines in others. Weighting was also applied to correct for nonresponse bias on the basis of household size, household income, number of vehicles owned, and age. The comparison base was the 1990 Public Use Microdata Sample (PUMS) data of the Bureau of the Census. However, five of the surveys reported sample biases that were determined either from the sampling plan or from comparison to supplemental data, predominantly the decennial census. The results are shown in Table 4. In all cases, except as noted, the categories of households identified are under-represented in the sample data.

Table 4: Sources of Identified Bias in Five Surveys

Survey          Biases Identified
Bay Area        Household size > 4; households with no workers or more than 2 workers; households with no vehicles; households earning less than $20,000; households earning between $60,000 and $75,000; renters
Indiana         Two-person households (over-represented)
Broward         Low income households; households with no vehicles
Corpus Christi  Low income households; two-person households (over-represented)
OKI             Two-person households (over-represented)

2.1.3 NCHRP Synthesis of Highway Practice 236

NCHRP Synthesis 236 (Stopher and Metcalf, 1996) provides a review of 55 household travel surveys conducted in the period from 1989 through 1995. The principal aspects of these surveys are summarized below.

Design of Survey Instruments

For the design of the instrument, most surveys comprise three elements: a household element, a person element, and a travel or activity element. In addition, when telephone recruitment is followed by mail-out of the survey, at least two instruments are required: a recruitment script and a survey package. 
To this may be added a retrieval script, when survey data are collected by telephone retrieval (54 percent of recent surveys), or reminder scripts, when survey data are collected by mail (22 percent). A number of features of recent survey instruments are summarized in Table 5. At the date of this review, no time-use surveys had yet been implemented, although two were underway, one in Portland and one in Dallas-Fort Worth. The retrospective surveys did not use diaries, and presumably collected trip-based data rather than activity data, although this was not established formally.

Table 5: Design of Survey Instruments

Design Feature                         Proportion Using
Retrieval Method
  Telephone                            54%
  Mail                                 22%
  Other/Unspecified                    24%
Prospective/Retrospective
  Prospective                          95%
  Retrospective                        5%
Instrument Type
  Trip Diary                           76%
  Activity Diary                       19%
  Other/Unspecified                    5%
Instrument Format – Trip Diaries
  Sheet                                86%
  Booklet                              14%
Instrument Format – Activity Diaries
  Sheet                                10%
  Booklet                              90%

Design of Data-Collection Procedures

The most important elements of this topic are timing, incentives, reminders, and response rates. Timing of surveys has traditionally been in the spring or fall, with the desire to produce an “average” travel day. Table 6 summarizes data on the season in which the survey was conducted. Weather was the reason the majority (80 percent) gave for conducting the survey only in the spring, the fall, or both. A second timing issue is the days of the week for the survey. Again, the convention has been to collect data only on weekdays, and this was followed in 87 percent of cases. The remaining 13 percent also included weekend days in the survey.

As of the mid-1990s, the use of incentives was still not widespread in household travel surveys. Of the surveys reviewed, 80 percent used no incentive. Of those using an incentive, half used cash, one-third used some form of lottery or drawing, and the remainder provided a gift, such as a pen. One case used both cash and a pen. The amount of the cash incentives was not reported, but other anecdotal information indicates that incentives of one dollar per person (diary) are the most common. Incentives can be offered as an inducement to respond (sent in advance) or as a reward for responding (sent to those completing the survey). The lottery or drawing is normally restricted to a reward. In those cases where cash was used, half provided the incentive in advance, and half as a reward to those completing the survey. 
Table 6: Season in Which Survey Was Conducted

Seasons Included        Proportion Reporting
Spring only             40%
Fall only               22%
Fall and Spring         10%
Fall/Spring and Summer  11%
Fall/Spring and Winter  8%
All Four Seasons        1.5%
Not Specified           7.5%

Table 7 shows the number of reminders, the type of reminders, the planned contacts, and the mix of multiple reminders used in the surveys. Where three contacts were planned, these were usually a recruitment contact, one reminder, and the retrieval contact. Response rate has become one of the most critical areas of household travel surveys, as a result of falling rates over the past several decades. This review found a lack of consistency in how response rates were calculated, making it somewhat difficult to determine comparative statistics on response rates. Response rate also depends on the method of data collection. The Synthesis reported that mail-back surveys achieved response rates between 5 and 24 percent, with a mean of 14 percent. For telephone surveys, recruitment rates were reported as ranging from 12 to 100 percent, with a mean of 49.9 percent and a median of 50 percent. Although there appears to be little reason for it, the retrieval method also shows an influence on the recruitment rate, with the rate averaging 58.3 percent for mail-back surveys and 45.7 percent for telephone retrieval of the data.

Table 7: Profile of Reminders in Recent Surveys

Aspect of Reminders                 Proportion Reporting
Using Reminders
  Yes                               80%
  No                                20%
Form of Reminders
  Telephone Call                    93%
  Letter                            7%
Number of Reminders Used
  One                               60%
  Two to Three                      20%
  Four or More                      20%
Form of Multiple Reminders
  Telephone only                    75%
  Telephone and Postcards           10%
  Telephone and Letter              8%
  Telephone, Letter, and Postcard   8%
Planned Contacts
  One Contact (no reminders)        10%
  Two Contacts (no reminders)       20%
  Three Contacts                    50%
  Four or More Contacts             20%

Sample Design

Sample design covers several sub-elements. First, there is the sample size, which is summarized in Table 8. Of the surveys using telephone recruitment (78 percent), 83 percent used random digit dialing to draw the sample, while 17 percent used published telephone directories. Seventy percent of the surveys used a minimum age cut-off of five years for collecting data, while 15 percent set no limit; 93 percent of the surveys intended to exclude group quarters, although four percent inadvertently included some in the end. The most common method of selecting the sample was telephone recruitment, followed by mail-out of surveys and telephone retrieval of the data; this method was used in 54 percent of cases. Mail-back of surveys following telephone recruitment and mail-out of a package of materials was used in 22 percent of cases. Thus, three-quarters of the surveys used telephone recruitment and mail-out of materials.

Table 8: Some Sample Properties

Sample Property           Final Sample   Recruitment Goal
Sample Size
  Mean                    4,167          12,400
  Median                  2,460          7,700
  Percent <2,000          45%            -
  Percent >10,000         15%            -
Sampling Method
  Stratified              56%
  Simple Random Sampling  24%
  Other                   30%
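As an illustration of the stratified sampling that a majority of these surveys used, the sketch below draws a proportionally allocated stratified random sample of households. The strata, frame sizes, and sample size are invented for illustration, not taken from any of the surveys reviewed.

```python
import random

# Hypothetical sampling frame: household IDs grouped by stratum
# (e.g., vehicle-ownership class or geographic area).
frame = {
    "0 vehicles":  [f"hh_0v_{i}" for i in range(500)],
    "1 vehicle":   [f"hh_1v_{i}" for i in range(3000)],
    "2+ vehicles": [f"hh_2v_{i}" for i in range(1500)],
}

def stratified_sample(frame, n_total, seed=1):
    """Draw a stratified random sample, allocating the total sample
    size to each stratum in proportion to its share of the frame."""
    rng = random.Random(seed)
    frame_size = sum(len(households) for households in frame.values())
    sample = {}
    for stratum, households in frame.items():
        n_stratum = round(n_total * len(households) / frame_size)
        sample[stratum] = rng.sample(households, n_stratum)
    return sample

sample = stratified_sample(frame, n_total=500)
for stratum, drawn in sample.items():
    print(stratum, len(drawn))  # 50, 300, and 150 households respectively
```

Stratification by household size or vehicle ownership, as in several of the surveys reviewed, works the same way; only the frame grouping changes.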

Pilot Surveys and Pretests

As noted in NCHRP Synthesis 236, the terms pilot test and pretest tend to be used interchangeably by the transportation profession, even though the survey research literature distinguishes these two activities. In this section, we use the term pilot test to cover either a true pilot test or a pretest. It was reported that 74 percent of the surveys included in the review used some form of pilot test. Among these, all tested the instrument, 58 percent tested survey management, and fewer than 50 percent tested other elements of the survey, such as training, sampling, data entry, geocoding, analysis, or incentives. Table 9 shows some statistics of the pretest and pilot surveys. It should be noted that some pilot tests were performed on agency staff and received a 100 percent response rate, which biases the response rates upwards. Also, not all regions reviewed provided both recruitment and completion figures for the pilot, so the recruitment and sample data do not relate exactly to one another. As a result of conducting a pilot test, 85 percent of those testing the instrument changed it in some way, and 65 percent of those testing the management changed some element of the survey management. Similarly high rates of change are reported for each of the other survey elements, except for data entry and analysis, where changes were reported by only 18 and 11 percent, respectively, of those testing these elements.

Table 9: Pretest and Pilot Survey Samples

Attribute                        Statistic
Sample Size
  Mean                           336
  Median                         67
  Percent Under 75 Households    82%
  Percent Under 200 Households   94%
Responses
  Range                          0 to 1,800
Recruitment
  Mean                           121
  Median                         40
Response Rates
  Mean Response Rate             57.5%
  Median Response Rate           61.7%

Survey Implementation

Items included under this topic were not reported in NCHRP Synthesis 236. 
They have to do with such elements as interviewer training, retention of data on incomplete households, cross-checks of data, days and periods to avoid collecting data, and so on. These items were not elicited in the review done for the Synthesis report.

Data Coding including Geocoding

In recent surveys, 43 percent reported manual coding (usually to a separate document) followed by data entry. The second most popular method of data entry was direct entry through CATI, which was used by 39 percent of the surveys. There were two reported instances (three percent) of the use of mark-sensing. The remaining 15 percent used some combination of manual and direct entry procedures.

Geocoding is generally required to be a separate activity, following coding and data entry of literal addresses. Among the reviewed surveys, 30 percent used manual geocoding, consisting of having coders look up addresses, locate them on a map, and provide the appropriate geocodes. Fifty-five percent reported the use of a combination of computer and manual geocoding, with the manual element usually reserved for those addresses that the computer could not recognize. Nine percent used computer geocoding alone, and six percent reported some other method of geocoding, such as relying on respondents to provide a zip code. The single most frequently used source for geocodes was the TIGER or GBF/DIME files from the U.S. Bureau of the Census, used by 48 percent of recent surveys as either their sole source or one of their sources of geocodes. The second most frequently used source was telephone directories, used by 37 percent. Maps were used by 34 percent, while a community database, such as a 911 database, was used by 28 percent.

The level of geocoding has been changing from the sole use of Traffic Analysis Zones (TAZs) to the use of latitude and longitude. However, as of the mid-1990s, the most common geocoding level was still the TAZ, used by 36 percent, followed by 33 percent who used the TAZ together with at least one other level of geocode. Coding to latitude and longitude was performed by 31 percent of surveys, while 17 percent used the census tract, 15 percent the zip code, and eight percent census blocks or block groups.

As noted earlier, data are most frequently coded into three files: a household file, a person file, and a trip or activity file. Each of these files may contain some data from the higher-aggregation file, while the higher-aggregation files may contain summaries from the lower-level files. Approximately 50 percent of the surveys reviewed followed this type of file structure, or a combination of these into a single file. Ninety percent of the surveys used a household file, 80 percent used a trip file, and 65 percent used a person file. 
Two other file types were reported: an activity file (16 percent) and a vehicle file (18 percent). As of the mid-1990s, 38 percent of agencies did not make their data available to anyone outside the agency, while 26 percent made the data available to any interested party. Most agencies provide the findings from the survey through a final report, with 85 percent reporting that such final reports are available. Both newsletters and public forums were reported as being used by 13 percent of agencies.

Data Analysis and Expansion

The rate of completion of recruited households ranged from 36 to 97 percent, with a mean of 69.5 percent and a median of 72.5 percent. Completion for mail-back was lower, at 61 percent, while telephone retrieval had a mean of 72.5 percent. As a percentage of contacted households, these response rates provide a range of 10 to 75 percent, with a mean and median of about 36 percent (these figures being obtained by multiplying the recruitment and completion percentages together). For all telephone contact methods, the average response rate was 33 percent; for telephone contact with mail-back, the mean was 35 percent, and for telephone contact with telephone retrieval, it was 32 percent.

In addition to confusion over how to calculate a response rate, there are also differences in what constitutes a completed household for the purposes of calculating the response rate. Of the agencies that reported response rates, 56 percent required information from all household members for the household to be considered complete. Thirty-three percent allowed some household members to provide incomplete information, provided that data on critical variables were not missing. In one survey (two percent), the household was considered complete if no more than one person was missing from the household, while 19 percent permitted varying numbers of missing persons, depending on household size. 
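The overall response rate described above is simply the product of the recruitment rate and the completion rate, expressed as fractions of contacted households. A minimal sketch, using the mean rates reported in the Synthesis:

```python
def overall_response_rate(recruitment_rate, completion_rate):
    """Overall response rate as a fraction of contacted households:
    the share of contacts recruited, times the share of recruited
    households that complete the survey."""
    return recruitment_rate * completion_rate

# A 49.9% mean recruitment rate and a 72.5% mean completion rate
# yield roughly the 36% overall rate cited in the text.
print(round(overall_response_rate(0.499, 0.725), 3))  # 0.362
```

The same arithmetic explains why a survey reporting a "78% response rate" among recruited households may represent a far lower rate among all households contacted.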
Another measure of the survey that was not reported in NCHRP Synthesis 236 was the rate of non-mobile persons and households, that is, households and persons reporting that they made no trips on the day of the survey. In trip diaries, this is potentially a mechanism of nonresponse, in that persons may indicate they did not travel on the survey day as a way to avoid completing the diary. It is not an effective nonresponse mechanism for activity and time-use diaries, if in-home activities are also to be reported. However, for an activity diary that requests only detailed out-of-home activities, it is again a potential nonresponse mechanism.
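Once trips are coded to a person file, non-mobility rates are straightforward to tabulate. The sketch below, using invented records, shows one way to count non-mobile persons and to flag entirely non-mobile households as candidates for validation call-backs:

```python
# Hypothetical person-level records: (household_id, person_id, trips_reported)
person_trips = [
    ("h1", 1, 4), ("h1", 2, 0),
    ("h2", 1, 0), ("h2", 2, 0),
    ("h3", 1, 5), ("h3", 2, 3),
]

# Non-mobile persons: those reporting zero trips on the diary day.
non_mobile_persons = [p for p in person_trips if p[2] == 0]

# Non-mobile households: every member reported zero trips.
households = {}
for hh, _, trips in person_trips:
    households.setdefault(hh, []).append(trips)
non_mobile_households = [hh for hh, t in households.items()
                         if all(x == 0 for x in t)]

print(len(non_mobile_persons) / len(person_trips))  # person non-mobility rate: 0.5
print(non_mobile_households)                        # ['h2']
```

Whether a flagged household reflects genuine immobility or hidden nonresponse cannot be determined from the diary data alone; it requires follow-up contact.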

The second issue here is the correction of data. Approximately 20 percent of recent surveys took the position that the data retrieved were not correctable; obvious errors in the retrieved data were used, in these instances, as a criterion for accepting or rejecting a household from the sample. Because time is essential in gaining information from the respondent for correction of data, 70 percent of recent surveys reviewed at least some aspects of each survey record on a daily basis, so that call-backs could be made to resolve errors. Of the remainder, eight percent reviewed the data on a weekly basis, two percent on a monthly basis, and ten percent at the end of the survey. The remaining ten percent did not check the data or were not aware of how the survey consultant checked the data. Even with the checks, 20 percent made no corrections to the data. Two-thirds made corrections to both missing and invalid data, and 14 percent restricted corrections to invalid data only. Only seven percent of surveys reported making corrections from inference alone, while 62 percent made corrections through a combination of re-contact information and inference. When no re-contact was successful or possible, 38 percent left the data as invalid or missing, while 62 percent made some type of repair or discarded the data entirely. The use of data repair methods such as hot-deck imputation was not reported; all repairs reported were made by inference and correction of earlier invalid, missing, or otherwise erroneous data. About 80 percent of recent surveys defined certain questions as critical, and 81 percent of these then discarded households that were missing any of the critical data. The remainder set such households to one side, for use only in analyses where the missing data item was not needed. Households that terminated part way through the retrieval of data were dropped from the data set in 60 percent of the surveys, while 30 percent retained them in a separate file. 
Six percent of surveys indicated that such data remained in the main survey file. Issues of validation and weighting were not addressed in the NCHRP Synthesis 236 report. It seems that the profession has largely ignored the issue of weighting of data, and it also rarely concerns itself with expansion of data, particularly because the data are generally intended to be used in unexpanded form for model estimation. One part of validation, the examination of trip rates, was reported in some studies. However, as NCHRP Synthesis 236 reports, this is a complex issue, because there are many ways to define trip rates, and many inconsistencies in how this is generally done. With all of the definitions that can be used (linked and unlinked trips, inclusion or exclusion of non-motorized trips, minimum trip-length definitions, trip purposes, person or vehicle trips, etc.), the review reported that trips per person per day were generally between 3.5 and 4.2, with household person-trip rates averaging between 8.9 and 10.2 in the surveys that were reviewed.

2.1.4 TMIP Scan of Recent Travel Surveys

A document from the Travel Model Improvement Program (TMIP) of the U.S. Department of Transportation reports on a number of recent travel surveys of various types (TMIP, 1996b). These surveys partially overlap those included in NCHRP Synthesis 236, but neither set out to be exhaustive, and each contains some different surveys. In addition, the Scan of Recent Travel Surveys (TMIP, 1996a) includes surveys other than household travel surveys; for the purposes of this project, those are not included. 
A useful point is made in this report about the implementation of household travel surveys:

“All but two of the large MPOs have carried out household surveys since 1990...About two-thirds of the smaller MPOs surveyed have carried out household surveys since 1990...Overall, the largest MPOs have apparently been the most diligent about conducting surveys, due to available resources and to the greater extent of problems confronting large urban areas. It also appears that new survey efforts and practices generally are first introduced into the largest MPOs and then gradually spread over time into smaller urban areas. In particular, a select few of the larger MPOs have been at the forefront of revising and expanding both the nature and scope of household travel surveys.” (Scan of Recent Travel Surveys, TMIP, 1996a, page 2-1)

NCHRP Synthesis 236 included 55 surveys; the TMIP Scan also covers 55 surveys. Interestingly, the two sets are not identical: only 32 surveys appear in both. As a result, the findings from the two documents are not necessarily identical. The Scan does not provide information as detailed as that in the Synthesis, and much of the information is not organized in summary form but is provided through brief half-page summaries of each of the 55 surveys. Table 10 attempts to summarize most of the relevant information from these brief descriptions. The surveys are organized here by three MPO size groups and a category of statewide surveys.

Table 10: Summary of Scan of Recent Surveys

Reported items for each survey, in order: sample size; sampling method; recruitment method; minimum age; inclusion of bike/walk trips; retrieval method; diaries returned; type of diary; pilot test; coding method; data repair; timing; response rate; incentives. Items not reported for a survey are omitted. An asterisk (*) indicates a survey underway at the time of the scan.

Group 1 (>2,000,000 population):
Atlanta: 2,400; phone; Phone; No; Trip (1-day); None
Baltimore: 2,700; phone; 5; Yes; Phone; Trip (1-day); On-line; Fall; 44%; None
Boston: 3,800; Stratified; phone; 5; No; Mail; Yes; Activity (1-day); M/C; Yes; Spring; Lottery Ticket
Chicago: 19,314; Random; Mail; 14; No; Mail; Yes; Trip (1-day); Yes; 24%; None
Cleveland: 1,600; phone; Phone
Dallas/Ft Worth*: 6,000; Stratified; phone; None; Yes; phone/mail; Yes; Activity (1-day); Yes; Computer; Yes; Sp/F; $2/person
Detroit: 7,400; Stratified; phone; 5; No; Phone; No; Activity (1-day); Computer; None
Houston: 2,443; Stratified; phone; 5; Phone; Yes; Activity (1-day); Computer; F/W/Sp; None
Los Angeles: 16,086; Stratified; phone; 5; No; Phone; No; Activity (1-day); Yes; Comp/M; F/Sp; 45%; None
Miami: 2,650; phone; No; Mail; Yes; Trip (1-day); $2.00
Minneapolis-St. Paul: 9,746; phone; 5; No; Phone; No; Trip (1-day); Su/F; None
New York (1995): 2,000; phone; Yes; Mail; Yes; Activity (2-day); $5.00 per wave
New York (1996)*: 12,000; Stratified; phone; 5; Yes; Phone; No; Activity (1-day); Yes; F/Sp; None
New York (1989): 20,500; phone; No; Phone; No; Trip (1-day recall); Spring; None
Pittsburgh: 450; Stratified; phone; Mail; Yes; Trip (1-day); Yes
San Diego: 2,049; phone; Yes; Phone; No; Trip; Spring; None
San Francisco (1990): 10,900; phone; 5; Yes; Phone; No; Trip (1, 3, or 5 days); $5 for 3 or 5 day survey
San Francisco (1996): 3,800; phone/transit; None; Yes; phone/mail; Yes; Time-Use (2-day); W/Sp
Seattle: 1,700; Stratified; phone (panel); None; Yes; Mail; Yes; Activity (2-day); $2/person
St. Louis: 1,400; phone; No; Phone; Yes; Trip (1-day); Spring; No
Tampa: 1,800; Stratified; Mail; No; Mail; Yes; Trip (1-day); Map
Washington, DC: 4,800; phone; Phone; No; Trip (1-day)

Group 2 (750,000 to 2,000,000 population):
Buffalo: 2,700; Stratified; phone; 5; No; Mail; Yes; Trip (1-day); Yes; Yes; Spring; None
Cincinnati: 3,000; Probability; phone; None; No; Phone; No; Activity (1-day); Computer; Yes; Fall; None
Denver*: 5,000; phone/transit; Yes; Phone; Activity (1-day); Spring
Indianapolis: 1,000; phone; 5; Yes; Phone; Trip (1-day); None
Kansas City: 1,221; Stratified; phone; Mail; Yes; Trip (1-day); Computer; Fall; $1, $2, gifts
Louisville: 2,643; None; Sp/Su
Milwaukee: 17,000; phone/home; No; phone/home; Yes; Trip (1-day); None
Portland, OR: 4,451; Stratified; phone; None; Yes; Phone; Yes; Time-Use (2-day); Computer; Yes; Sp/F; None
Raleigh-Durham: 2,000; Random; phone/transit; Yes; Phone; Time-Use (2-day); Yes; Computer; None
Sacramento: 4,000; phone; No; Phone; No; Trip (1-day); Computer; Spring; $1
Salt Lake City: 3,082; phone; Yes; Mail; Yes; Activity (1-day); Spring; None
San Antonio: 2,643; phone; Phone; Yes; Trip (1-day); W/Sp; 28%; None
San Juan: 1,610; phone; 5; Yes; phone/home; No; Trip (2-day); F/W; Lottery for prizes

Group 3 (<750,000 population):
Albuquerque: 2,000; Stratified; phone; Yes; Mail; Yes; Trip (1-day); GIS; None
Amarillo: 2,590; phone; Phone; Yes; Trip (1-day); None
Boise: 1,500; Random; phone; 5; Yes; Phone; No; Activity (1-day); Yes; Computer; Spring; None
Brownsville, TX: 1,411; phone; Phone; Yes; Trip (1-day); None
Charleston, WV: 1,500
Des Moines: 1,139; Random; Mail; No; Mail; Yes; Trip (1-day); $100 drawing
El Paso: 2,510; phone; 5; Yes; Phone; Yes; Trip (1-day); W/Sp/Su; None
Fort Collins: 1,000; Mail; 5; Yes; Mail; Yes; Trip (1-day); Spring; None
Harrisburg: 1,161; Mail; Mail; Yes; Trip (1-day); None
Honolulu*: 4,000; phone; None; Yes; Phone; No; Activity (1-day); Winter; pen
Little Rock: 856; Stratified; phone; No; Mail; Yes; Trip (1-day); Fall; None
Reno: 1,050
Sherman-Denison, TX: 2,289; phone; Mail; Yes; Trip (1-day); None
Tucson: 1,913; Stratified; phone; None; Yes; Phone; No; Trip (1-day); M/C; Yes; None
Tyler, TX: 2,646; phone; Mail; Yes; Trip (1-day); None

Statewide surveys:
California: 13,500; Stratified; phone; No; Phone; No; Trip (1-day); Computer; $1
Indiana: 1,000; phone; 5; No; Phone; no; Trip (1- and 14-day); Fall; None
New Hampshire: 2,000; Stratified; phone; None; Yes; Phone; No; Activity (1-day); Computer; None
Oregon: 10,000; phone; None; Yes; phone; Yes; Activity (2-day); None
Vermont: 2,425; Mail; No; mail; Yes; Trip (1-day); None

Mean sample size: 4,406.9.
* Indicates survey underway at the time of the scan.

Design of Survey Instruments

From Table 10, the only aspect of survey instrument design reported on is the type of diary. In 33 of the 55 cases (60 percent), trip diaries are specified as being used, and four cases did not provide that information. Of the remainder, 15 (27 percent) used activity diaries and three (five percent) used time-use diaries. The percentages using time-use and activity diaries are higher than among the surveys reviewed in NCHRP Synthesis 236, and the Scan covers more recent surveys than the NCHRP Synthesis 236 report.

Two other design issues are reported in the Scan. The first is the minimum age from which data were collected. In 15 surveys, the minimum age was five years old, and in one survey it was 14. In ten surveys, there was no minimum age. The remainder did not report this information. The second is the inclusion of non-motorized trips (specifically walk and bicycle trips). In 21 cases, these trips were included. In 18 cases, they were definitely not included, so that only motorized trip data were collected. The remaining cases are not specified.

Design of Data-Collection Procedures

On timing, 19 of the surveys are indicated as being performed in the Spring, Fall, or both. Seven of the surveys indicated that either Winter or Summer was included with either or both of Spring and Fall, while one survey was done in Winter only. The remaining surveys are not specified as to season. Again, this indicates the strong preference to survey in Fall, Spring, or both.

Thirteen surveys reported using an incentive, while 34 indicated no incentive was used. The remainder did not specify. This is a slightly higher rate of incentive use than in NCHRP Synthesis 236 (24 percent compared to 20 percent). Only four surveys reported a response rate, which ranged from 24 to 45 percent.
This appears to show a lower average response rate than in NCHRP Synthesis 236, but that is probably due to the small number of surveys reporting a response rate. No indication is provided of how response rates were calculated.

Sample Design

Of those surveys for which the sampling method is reported (22, or 40 percent), the majority (17) selected stratified sampling. Sample sizes varied from 450 to 19,314, with a mean of 4,407. The median is just over 2,500. These figures are almost identical to the results reported in NCHRP Synthesis 236. As in the Synthesis, the most common method used to recruit households was the telephone, which was used in 46 cases, either alone or with augmentation such as transit intercepts or on-board surveys. In five cases, solicitation was by mail, and the remainder did not specify.

Pilot Surveys and Pretests

This information was rarely reported in the Scan. Only six surveys indicated that a pilot test or pretest was performed, and usually this was because major changes occurred as a result of the pilot test. No details of the samples for pilot testing were provided.
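Because the Scan does not say how response rates were computed, rates from different surveys cannot be compared with confidence. One common convention divides completed interviews by eligible sample units; a minimal sketch under that assumption follows, with entirely hypothetical figures rather than data from any survey in the Scan:

```python
def response_rate(completed: int, eligible: int) -> float:
    """Response rate as a percentage, under one common convention:
    completed household interviews divided by eligible sample units.
    Conventions differ (e.g., in how units of unknown eligibility are
    treated), which is why unreported methods hinder comparison."""
    return 100.0 * completed / eligible

# Hypothetical figures: 900 completed households from 3,750 eligible units.
rate = response_rate(900, 3750)
print(round(rate, 1))  # 24.0, the low end of the range reported above
```

A survey that instead counted only households of confirmed eligibility in the denominator would report a higher rate from the same fieldwork, which is why the calculation method matters.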

Survey Implementation

No aspects of implementation were reported in the Scan.

Data Coding, including Geocoding

This area was not consistently reported in the Scan. It appears that several surveys completed direct data entry from CATI, and a few specified that geocoding was done by a combination of computer and manual entry.

Data Analysis and Expansion

As with several of the previous topic areas, the Scan includes little information on this subject. In seven of the surveys, explicit mention is made that data repair activities were undertaken, mainly through re-contacting households to resolve anomalies in the data. In all other cases, no mention was made of data repair. No other aspects of data analysis and expansion were covered by the Scan.

2.1.5 Other Reviews

There have been several other reviews performed recently. Included among these is the "Survey of Travel Surveys II" by Purvis (1990), which covers a number of surveys conducted in the late 1980s. It is not summarized further here, because it is largely superseded by NCHRP Synthesis 236 and the TMIP Scan, and it provides only very brief summary information on each of the surveys included, the primary items being the sample size, timing, cost, and contact method.

In 1994, Benjamin prepared a report for FHWA entitled Current Trends in Travel Demand Data Gathering (Benjamin, 1994). This report reviews four surveys that are also included in NCHRP Synthesis 236 and the TMIP Scan, and also briefly reviews the 1990 NPTS and some urban regional studies. Benjamin outlines a possible description of the State of the Art of Household Travel Surveys, based on his reviews of the surveys of the early 1990s.
However, contrary to what one might expect, this description does not make recommendations about what should be included in the design of a survey, but rather outlines some recent practices.

In 1994, Axhausen prepared a working paper at the University of London Centre for Transport Studies on Travel Diaries: An Annotated Catalogue, which was updated to a second edition in 1995 (Axhausen, 1995). This review is very useful in that it covers many different countries; in fact, of the 21 surveys with travel/activity diaries reviewed, only six are from the U.S. One of the most useful features of the review is its set of recommendations on the data items that should be included in future surveys, classified into those describing the household, the persons, the vehicles, transit ticketing, movements, and activities. These are reproduced here in Table 11 through Table 16. The review concentrates on the content of the survey diaries and does not deal with other aspects of the design and implementation of the household travel survey.

Table 11: Suggested Items for a Comprehensive Travel Survey: Household (Axhausen, 1995)

Ref.  Item  Description
H1  Location  Home address
H2  Size of residence  Some measure of the size of accommodation, such as number of rooms, square feet of usable space and of garden, plot size, etc.

H3  Type of building  Detached, semi-detached, terraced, flat; private, subsidized by privately owned, public sector controlled, public sector operated
H4  Tenure
H5  Duration of residence
H6  Duration of ownership
H7  Age of mortgage
H8  Number of members
H9  Number of visitors
H10  Relationships  Matrix of relationships between all members of the household, plus an indication of the persons visited by visitors
H11  Parking spaces  Number, kind, location, and cost of the parking spaces owned or rented by household members
H12  Communications  Inventory of the media available (number and type) to the household (daily newspapers, telephones, pagers, television, teletext, ...)
H13  Income  Indication of disposable income of the household as a whole
H14  Visits  Number of visits to the residence, especially for the delivery of goods or service provision (preferably with an indication of the access modes)

Table 12: Suggested Items for a Comprehensive Travel Survey: Person (Axhausen, 1995)

Ref.  Item  Description
P1  Sex
P2  Year of birth
P3  Marital status
P4  Education level
P5  Profession
P6  Ethnicity  Indication of ethnicity using the national Census standard
P7  Language  Self-assessed level of proficiency in the relevant languages of the survey area
P8  Commitments  Indication of the firm commitments of the respondent current during the survey period; at a minimum, work status (working, searching for work, not working) and participation in education; ideally, indications of further firm commitments
P9  Paid jobs  Number and type of paid positions
P10  Hours worked  Number of hours contracted for and average over the last month in each
P11  Working hours  Contractual timetable(s) for the survey day
P12  Flexibility  Level and type of flexibility of the working hours (flextime, shift work, etc.)
P13  Mode to work  Most common mode to work location(s) during the last week/month...
P14  Travel times  Expected travel times for the modes used during the last week/month...
P15  PT accessibility  The n (= 3, 4, 5) most frequently used public transport services; for each service: initial stop/station, distance from home (in min. or m), service number, usual destination
P16  Car pooling  Indication of participation in a car pool and the cost-sharing arrangements
P17  Parking  For employer/school-provided parking: type, location, and cost; otherwise most common type, location, and cost over the last week/month...
P18  Education  Type of current course
P19  Driving license  Types and length of ownership of the different licenses held
P20  Cycling  Indication of ability to cycle
P21  Vehicles and tickets  Cross-reference to all household vehicles owned and used
P22  Income  Indication of the disposable income and its sources (wages, retirement pensions, disability pensions, parental allowance, transfer payments, i.e., grants, welfare, housing benefit, etc.)
P23  Handicap  Types of mobility handicap, both temporary and permanent
P24  No mobility  Indicator of why no out-of-home activities were performed on a survey day
P25  Start location  Location at the beginning of the first survey day (e.g., at 3:00 a.m.)

Table 13: Suggested Items for a Comprehensive Travel Diary: Vehicle (Axhausen, 1995)

Ref.  Item  Applicable  Description
V1  Make  ODU
V2  Model  ODU
V3  Body  ODU  Type of body (saloon, estate, etc.; touring bike, mountain bike, etc.)
V4  Seats  ODU  Number of regular seats
V5  Year of Production  O

V6  Year of Acquisition  O
V7  Replacement Status  O  Indication if vehicle replaced an earlier one or was an additional purchase
V8  Fuel  O  Type of fuel used
V9  Motor  O  Indication of motor size: cc, number of cylinders, power
V10  Weight  O
V11  Converter  O  Presence of catalytic converter
V12  Current kilometrage  O  Odometer reading at the start of the survey period
V13  Kilometrage  O  Odometer reading at the end of the survey period
V14  VKT  O  VKT during the last year
V15  Check up  O  Date of last inspection of the motor
V16  Information sources  O  Types of information sources attached to the vehicle (radio, RDS-TMC, telephone, route guidance systems, etc.)
V17  Owner  ODU  Reference to household member or outside institution
V18  Responsible  O  Reference to legally responsible household member
V19  Users  O  List of users among the household members and their level of use
V20  Fixed costs  O  Distribution of fixed costs between different persons and institutions involved; may be broken down by further categories
V21  Variable costs  ODU  Distribution of variable costs between different persons and institutions involved; may be broken down by further categories
V22  Home location  O  Indication of where the vehicle was located during the last week/month
V23  Parking  O  Which, if any, of the household parking spaces is allocated to this vehicle for overnight parking

Key: O = vehicles owned by household members; D = vehicles driven, but not owned, by household members (associated with person form); U = vehicles used, but not owned, by household members (associated with person form).

Table 14: Suggested Items for a Comprehensive Travel Diary: Season Tickets and Similar (Axhausen, 1995)

Ref.  Item  Applicable  Description
S1  Type  OU  Type of ticket
S2  Area  O  Area covered by the ticket
S3  Validity  O  Period of validity of the ticket
S4  Date of acquisition  O  Month
S5  Replacement status  O  Indication if the ticket replaced an earlier one or was an additional purchase
S6  Owner  OU  Reference to household member or outside institution
S7  Users  O  List of users among household members and their level of use
S8  Fixed costs  O  Distribution of fixed costs between different persons and institutions involved
S9  Loan  O  Availability and amount of season ticket loan
S10  Variable costs  OU  Distribution of variable costs between different persons and institutions involved

Key: O = tickets owned by household members; U = tickets used, but not owned, by household members (associated with person form).

Table 15: Suggested Items for a Comprehensive Travel Diary: Movement (Axhausen, 1995)

Ref.  Item  Applicable  Description
M1  Start time  ST  End of last activity
M2  End time  ST  Start of next activity – end time of movement
M3  Start wait  S  Duration of wait before start of movement
M4  Waiting time  T  Amount of waiting and transfer times during the trip
M5  End location  ST
M6  Mode  S
M7  Mode sequence  T
M8  Route  ST  Indication of route by major facilities used (bridges, tunnels, motorways, public transport lines, etc.)
M9  Stops  T  Public transport stops
M10  Costs  ST  Total amount spent on tolls or fares and share covered by respondent

M11  Parking  ST  Type, legality, and location/distance to destination; total cost and share of respondent; cross-reference to employer parking or parking space at home
M12  Company  ST  Size of company and breakdown by household and non-household members
M13  Situational handicap  ST  Type of situational handicap
M14  Parallel activity  ST  Type of parallel activity engaged in during travel (reading, working, phoning, etc.)
M15  Availability  TJ  Cross-reference to all household vehicles/season tickets available for the duration of the trip/journey, including ensuing activity
M16  Information sources  ST  Type of information sources available during the movement
M17  Information used  ST  Type of information sources used during the movement and usage cost

Key: S = applicable at stage level; T = applicable at trip level; J = applicable at journey level.

Table 16: Suggested Items for a Comprehensive Travel Diary: Activities (Axhausen, 1995)

Ref.  Item  Description
A1  Purpose
A2  Land use  Type of environment
A3  Time window  Earliest and latest possible start time
A4  Start time  Arrival time at the activity location
A5  End time  End of activity
A6  Wait time  Time spent waiting before the start of the activity
A7  Importance  Importance relative to the other activities of the day
A8  Success  Degree to which expectations for the activity were fulfilled
A9  Commitment  Level of commitment to other persons participating in or depending on the activity
A10  Substitutability  Ability to replace activity with a different one
A11  Flexibility  Ability to forgo the activity at the time of arriving at the destination
A12  Planning interval  Time since the traveler planned to engage in the activity
A13  Execution horizon  Time before the activity has to be executed
A14  Frequency  Number of activities of this type per week/month...
A15  Regularity  Presence of a fixed rhythm for the activity
A16  Expenses  Amount of money spent during the activity by the respondent
A17  Company  Size of party, divided by household and non-household members
A18  Situational handicap  Type of situational handicap encountered during the activity
A19  Information sources  Information sources available during the activity
A20  Information used  Information sources used during the activity and their costs

In 1996, the Institute of Urban and Regional Development at the University of California at Berkeley published a working paper on "Land Use and Travel Survey Data: A Survey of the Metropolitan Planning Organizations of the 35 Largest U.S. Metropolitan Areas" (Porter et al., 1996). Again, this document overlaps significantly with NCHRP Synthesis 236 and the TMIP Scan, and, again, the details provided are brief, generally noting the timing of the survey, the sample size, the method of contact, and, in some cases, the survey instrument. From the summary of results, it is noted that 32 of the 35 metropolitan areas had conducted at least one household travel survey since 1985, and 28 had conducted one since 1990. For most household surveys, the sample size was between 1,500 and 3,000 households. Nothing else in this report is new or relevant to this study.

In a paper by Ampt et al. (1998), some characteristics of current best practice are outlined that are relevant to this study. These are:

• "Collection of stage-based trip data – ensuring that analyses can relate specific modes to specific locations/times of day/trip lengths, etc.;
• Inclusion of all modes of travel, including non-motorized trips;
• Measurement of highly disaggregate levels of trip purposes;

• Coverage of the broadest possible time period: e.g., 24 hours of the day, seven days of the week, and even possibly all seasons of the year (365 days);
• Collection of data from all members of the household;
• High quality data that is robust enough to be used even at a disaggregate level; and
• An integrated data collection system incorporating household interviews as well as origin-destination data from other sources such as screenlines and cordon surveys."

These points suggest some of the important elements that should be included in any effort to standardize household travel surveys. They also raise the issue of whether part of the standardization should address the other surveys that may be required, as a matter of necessity, to support the household or personal travel survey. This paper also introduces an idea not discussed in any of the sources reviewed so far: a continuous survey process. Specifically, the authors recommend a survey that should be collected "...each day of the week throughout the year and over several years" (Ampt et al., 1998, italics in the original). Some of the issues relating to this type of design are addressed elsewhere in this project, where such a continuous, year-round design is considered further.

The paper also describes a different way of sampling that permits the sample to be drawn from a small number of traffic analysis zones, but with sufficient richness to permit stratification not only on socioeconomic data but also on such things as spatial differences in distance from the CBD and access to the transit network. At the same time, the authors demonstrate a sampling procedure that permits the use of 26 classes, stratified on household size, income, and vehicles.
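Stratified designs such as those discussed above must allocate a fixed total sample across strata; one simple rule is proportional allocation, where each stratum receives a share of the sample equal to its share of the population. A minimal sketch follows, with hypothetical strata and counts (it does not reproduce the 26-class scheme in Ampt et al.):

```python
# Hypothetical strata on household size and vehicle ownership, with
# hypothetical population counts -- purely illustrative.
population = {
    ("1 person", "0 cars"): 12_000,
    ("1 person", "1+ cars"): 18_000,
    ("2+ persons", "0 cars"): 6_000,
    ("2+ persons", "1+ cars"): 64_000,
}

def allocate(total_sample: int, pop: dict) -> dict:
    """Proportional allocation: each stratum's sample is proportional
    to its population share (rounded to whole households)."""
    n = sum(pop.values())
    return {stratum: round(total_sample * count / n) for stratum, count in pop.items()}

targets = allocate(2000, population)
print(targets[("2+ persons", "1+ cars")])  # 1280 of the 2,000 households
```

In practice, designs like the one described also oversample small strata (e.g., zero-car households) to ensure adequate cell sizes, which proportional allocation alone does not guarantee.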
The paper also outlines some aspects of instrument design, correction, expansion, and validation of data that may be helpful in the standardization of personal travel surveys, although it must be noted that, in the context for which the paper was written, personal face-to-face interviews were feasible and were considered a potential major data-collection strategy. Conclusions based on this methodology must be applied with care in contexts where such interviews are not feasible.

2.1.6 Impact of Technological and Social Changes on Travel Surveys

Overview of Trends in the New Global Economy

The increasing availability of small, powerful, and affordable technology, and connection via the Internet to the global economy, has led to the adoption of telework – literally, work "at a distance" – as a means to address environmental problems, help balance work/life responsibilities, and gain flexibility and quick response to opportunities in the emerging e-commerce economy. Telework is a reorganization of the workplace, both in concept and in execution. Telecommuting, or telework, falls under the umbrella of flexible work arrangements. Many authors see teleworking as closely parallel to the creation of new organizations variously called virtual, imaginary, extended, and collaborative organizations (Cohen, 1997). This enlarges the concept of telecommuting from a trip reduction strategy to viewing telework as just one component of the response to new business opportunities. In standardizing travel surveys, the critical point to consider is whether new travel patterns will emerge as location becomes relatively unimportant due to increasing reliance on telecommunications. The initial motivation for telecommuting was to reduce commuting trips and thus mobility. Instead, "…the home is becoming only one location of an increasingly decentralized, multi-locations working environment" (Gareis, 2000).
The corporation as a physical entity will probably continue to be needed, but mobility is becoming an increasingly important part of modern society (Drucker, 1994). In sum, we have a situation in which new technologies and telecommunications are rapidly developing. They include PCs, notebooks, personal digital assistants (PDAs), cell phones, and broadband (3G) with both fixed and wireless access to the Internet. They also include integrated products such as the wireless

phone connected to personal databases and a universe of information. In effect, the tools necessary for work are transferred from the office to the worker. The possibility of working anywhere in time and space is intersecting with societal trends such as more women working, a greater choice of career options, and opportunities to realize work/life preferences. Therefore, compared with the population surveyed in the past, the trends indicate:

• Greater variety of travel patterns;
• More home-based work;
• A more mobile workforce;
• Blurring of the 40-hour work week into a 24-7 integration of work, family responsibilities, and leisure; and
• More mixing of work with non-work travel.

The question is, how can travel surveys be standardized so as to capture these trends?

Standardization Challenges

DEFINITIONAL PROBLEMS. Working at home has an impact on organizational behaviors and on the individual worker (Sparrow and Daniels, 1999). Home-based work may occur on a full-time schedule or, more typically, on a part-time or episodic basis (Pratt, 2002). Tasks are performed not only in the home but also at other locations distant from corporate headquarters, such as on a plane or in the car. As those work patterns are accepted as normal practice, the words "telecommuting" and "telework" will most likely disappear. If forecasts are correct, there will be one billion mobile phone subscribers worldwide by 2005, and "this will be more than all the PCs and automobiles combined" (Golob, 2001). More significantly, the mobile phone, combined with the PDA and access to broadband Internet, puts the power of an office in one's hand. It is equivalent to shrinking the grandfather clock onto everyone's wrist – but far more profound. An approach to monitoring these technological and social changes within the context of travel surveys is first to measure home-based work, which is being done, as described in the next section.
The greater challenges are to measure mobile work and the global workforce, which are covered in the subsequent subsection.

Measurement of Work at Home

ASKING THE RIGHT QUESTIONS. In designing travel behavior surveys, the problem is to define "work," "home," and similar words that are commonly used in our language but have acquired associated meanings (Pratt, 2000a). The difficulty has not been resolved by coining new terms to describe non-traditional ways of working. Such words as "telecommuting," "teleworking," "at-home work," "hoteling," "home-based business," "road warriors," and "mobile workers" lack agreed-upon definitions, yet they are used in common parlance as if they had them. These new work styles need to be measured by objective criteria in order to provide meaningful data for understanding any consequent variations in travel behavior. Standardizing questions in terms of measurable variables, such as the place of work and the time in days and hours spent at each location, leaves researchers the option of applying their own definitions to fit the context of their analyses. Thus, rather than ask "How many days a week do you telecommute?", the more precise question can be asked: "How many days last week did you work at home instead of going to your usual work location?" This approach has the advantage that information gathered over years can be used unambiguously in various contexts. Definitions can be applied at the point of analysis (STILE, 2004). Thus, using the phrase "work at home" as the standard and clearly identifying the time

units measured – in this survey, "days per month" and "during normal business hours" – the number of "telecommuters" in 1999 can be reported as follows:

Classic telecommuting, as understood by employers, is allowing some employees to work at home one or two days per week. As of 1999, 19.6 million employees and independent contractors, or ten percent of U.S. adults, were working at home during normal business hours for one or more days per month (Pratt, 1999). They worked at home an average of 9 days per month. An additional 10.4 million employees would like to work at home if their employers would let them.

PIGGYBACKING STRATEGY USED TO MEASURE WORK AT HOME. Large samples and lengthy questionnaires are necessary to capture the variety of travel behaviors. Yet cost, respondent burden, and other barriers usually preclude separate surveys devoted to work at home. However, piggybacking work-at-home questions onto ongoing surveys, as illustrated below, has provided rich detail that contributes to a deeper understanding of today's travelers. The two-fold methodology obtains new perspectives on travel behavior by: 1) phrasing the questions in objective terms so that the responses can be compared across data sets, and 2) adding questions to existing periodic surveys (Pratt, 2001).

FEDERAL SURVEYS. Following that strategy, a series of questions was added to federal surveys including the Nationwide Personal Transportation Survey, the American Housing Survey, the Current Population Survey, the National Longitudinal Surveys of Labor Market Experience, the Survey of Income and Program Participation, and the Characteristics of Business Owners survey. Table 17 lists some of the relevant topics included in some of the surveys. When those variables are cross-tabulated with work at home, a wealth of information becomes available for supplementing or aiding interpretation of travel data.
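The cross-tabulation described above is a simple counting operation; a minimal sketch follows, using hypothetical records (not data from any of the federal surveys) and a commute-mode variable chosen for illustration:

```python
from collections import Counter

# Cross-tabulate a work-at-home indicator against another survey variable
# (here, usual commute mode). Records are hypothetical.
records = [
    {"works_at_home": True,  "mode": "car"},
    {"works_at_home": False, "mode": "car"},
    {"works_at_home": False, "mode": "transit"},
    {"works_at_home": True,  "mode": "car"},
    {"works_at_home": False, "mode": "car"},
]

# Each cell of the cross-tabulation is a (works_at_home, mode) pair count.
table = Counter((r["works_at_home"], r["mode"]) for r in records)

print(table[(True, "car")])       # 2 work-at-home car commuters
print(table[(False, "transit")])  # 1 non-work-at-home transit commuter
```

With real survey records, the same pattern yields the supplementary tables the text describes, one cell per combination of work-at-home status and the other variable's categories.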
For example, the 1995 Nationwide Personal Transportation Survey (NPTS) inventories daily personal travel and therefore serves as a baseline for comparing data collected regionally. A number of questions included work at home as a listed response in a choice set. As Figure 1 shows, three questions directly probed the practice and frequency of working at home (Pratt, 1997).

Table 17: Characteristics of Mobile Workers Collected by Federal Surveys (as of 1995) (Pratt, 1997)

Survey columns: AHS, CBO, Census, CPS Supplement, CPS Computer Supplement, NPTS, SIPP.

COMMUTING
  Distance  X X
  Time  X X X
  Mode  X X X
TRIPS
  Local  X
  75+ miles
  Purpose  X
FAMILY
  Home Address  X X X X X X
  Income  X X X X X X X
  Unit  X X X X X X
WORK
  Activities  X X X X X
  Address  X X X
  Computer use  X
  Days of week  X
  At home  X X X X X X X
  Home hrs/days  X X X X
  Multiple jobs  X X
  Schedule  X X
WORKER
  Classification  X X X X X X

  Education  X X X X X X X
  Occupation  X X X X1 X
  Industry  X X X X X

Surveys, in the order listed, are: American Housing Survey (AHS), Characteristics of Business Owners (CBO), 1990 Decennial Census, Current Population Survey Supplement (CPS), Current Population Survey Computer Supplement, Nationwide Personal Transportation Survey (NPTS), and Survey of Income and Program Participation (SIPP).
1 Asked only of persons whose work required driving a licensed motor vehicle as part of the job.

The phrasing of the actual work-at-home questions asked is difficult to standardize, since the context of each survey differs. For example, the American Housing Survey (AHS) collects data on housing, including household characteristics, income, neighborhood quality, recent movers, work space in the home, and home-based work. National data are collected in alternate years, covering, on average, 55,000 of the same housing units each time. The AHS identifies job classification, which the NPTS does not. Individuals are differentiated by those who work at home 1) on a wage and salary job, 2) as a self-employed person, contract worker, or business owner, or 3) instead of traveling to work. However, even within that one survey, some of the results are not directly comparable, because the wording and skip patterns of the questions that identified spaces within the dwelling used for work differ in the two survey years.

Figure 1: 1995 Nationwide Personal Transportation Survey (NPTS)

SECTION F – EDUCATION AND TRAVEL TO WORK – (HOUSEHOLD MEMBERS 16 YEARS OR OLDER; PROXY PERMITTED)
Q3 Do you have more than one job?
   1 YES – The next questions are about your primary job or occupation.
Q4 What is the street address of your workplace? (IF R WORKS AT OR OUT OF HOME, ENTER "HOME" FOR STREET NUMBER. IF R HAS NO FIXED WORKPLACE, ENTER "NONE" FOR STREET NUMBER.)
Q5 What is the one-way distance from your home to your workplace?
   ____ blocks or miles
   NO FIXED WORKPLACE – GO ON TO NEXT SECTION
   WORKS AT OR OUT OF HOME – GO TO NEXT SECTION
Q8 How do you usually get to work? Please tell me all the kinds of transportation you usually use.
   WORKED FROM HOME/TELECOMMUTED (20 possible responses)
Q9 What is the main means of transportation you usually use to get to work – that is, the one used for most of the distance?
   WORKED FROM HOME/TELECOMMUTED (20 possible responses)
Q19 On any day last week, did you work from home instead of traveling to your usual workplace? (CODE YES ONLY IF R WORKED AT HOME INSTEAD OF GOING TO THE WORKPLACE. DO NOT INCLUDE WORKING AT HOME IN ADDITION TO WORKING AT THE WORKPLACE.)
Q20 On any day in the past two months, did you work from home instead of traveling to your usual workplace? (CODE YES ONLY IF R WORKED AT HOME INSTEAD OF GOING TO THE WORKPLACE. DO NOT INCLUDE WORKING AT HOME IN ADDITION TO WORKING AT THE WORKPLACE.)

METROPOLITAN AREA SURVEYS. Several regional surveys have included work at home as a topic. Again, the phrasing of the questions varies, but objective information is obtained that makes comparisons possible.

1996 DALLAS-FORT WORTH HOUSEHOLD ACTIVITY SURVEY 24-HOUR DIARY. The household survey conducted from January to May 1996 in the Dallas-Fort Worth region collected extensive activity and travel data on a sample of over 4,000 households. The work-at-home questions were included in the one-day travel diary1 (Figure 2). Frequency of work at home was asked in regard to both the main job and any second job.

1994 ACTIVITY AND TRAVEL SURVEY, OREGON AND SW WASHINGTON. Sponsored by Metro of Portland, Oregon, the 1994 Activity and Travel Survey asked respondents to fill out a 10-day diary on assigned travel days (Figure 3). The household diary did not identify non-travel activities except as implied by destination: "What was your activity?" "When did your activity take place?" Thus, if the activity was working at home, it would be listed as "work," with the home address filled in under "location" (Pratt, 1997, p. 65). In addition to the household diary, a CATI questionnaire was used that collected work-at-home and other transportation-related data, as shown in Figure 3. In both the CATI questionnaire and the diary, respondent heads of households were asked for information about all members of the household, including themselves.2

1 Source: 1996 Dallas/Fort Worth Household Activity Survey
2 Oregon CATI Questionnaire Version #2

Figure 2: 1996 Dallas-Fort Worth Household Activity Survey 24-Hour Diary

Q16 Where do you usually work for your main job?
   There is no address (e.g., traveling salesman, repairman)
   In my home
Q18 Did you work at this address on the diary day? Yes / No / Why not? (No. 8 of 12 possible responses = Worked at home today)
Q25 Including today, how many days in the past seven days did you work at home for your main job INSTEAD of going to your main job place?
Q27 Do you have a second job?
Q32 Where do you usually work for your second job?
   There is no address (e.g., traveling salesman, repairman)
   In my home
Q35 Including today, how many days in the past seven days did you work at home for your second job INSTEAD of going to your main job place?

Q21 In the past two months, about how often have you worked from home instead of traveling to your usual workplace?
   1. TWO OR MORE DAYS A WEEK (11+ TIMES)
   2. ABOUT ONCE A WEEK (5-10 TIMES)
   3. ONCE OR TWICE A MONTH (2-4 TIMES)
   4. LESS THAN ONCE A MONTH (ONE TIME)
   (CODE YES ONLY IF R WORKED AT HOME INSTEAD OF GOING TO THE WORKPLACE. DO NOT INCLUDE WORKING AT HOME IN ADDITION TO WORKING AT THE WORKPLACE.)

THE 1995 OHIO-KENTUCKY-INDIANA (OKI) REGION SURVEY. The OKI survey specified two categories of in-home activities, of which one was "Paid Work (in-home)." Nine types of out-of-home activities included "Paid Work." In recording each activity on the assigned day, the respondent had the option of checking IN-HOME, PAID WORK (in home). Thus the survey produced a complete record of periods of working at home interspersed with trips and other activities. The note "All activities in the home not related to paid work should be recorded as…" clarifies that the respondent is not necessarily paid extra for time worked at home.

Figure 3: 1994 Activity and Travel Survey Oregon and SW Washington (excerpted CATI items)

Q35F. In a typical week, how many hours does [NAME OF OTHER PERSON 1] work?
Q38A-38F. What is the address of (NAME OF OTHER PERSON 1)’s primary job?
Q39A-39F. Does (NAME OF OTHER PERSON 1) work at home?
Q40A-40F. Of the [# HRS FROM Q35] hours (NAME OF OTHER PERSON 1) works in a typical week, how many hours are worked at home?
Q45A-45F. In the past five work days, how many days did (NAME OF OTHER PERSON 1) travel to work by: (READ LIST. MUST SUM TO “5”.)
1 CAR (DROVE ALONE)
2 CARPOOL
3 PUBLIC TRANSIT (SCHOOL BUS/TRAIN)
4 OTHER
5 DID NOT TRAVEL TO WORK DURING PAST 5 DAYS

THE 1997-98 RESEARCH TRIANGLE HOME INTERVIEW STUDY. The Research Triangle CATI interviews did not differentiate whether or not work at home was “paid.” As the respondent filled each time slot by checking “meals,” “shop,” “work,” or by writing in an activity, he or she was asked “Where did PERSON/you do that?” (PLACE, STREET, CROSS STREET, CITY AND ZIP), with boxes to check indicating “home,” “work,” or “other.” Thus work was captured as taking place in one of those three places.

Key Variables for Identifying Work at Home

Based on the review of the surveys that included telework:
• Work at home needs to be differentiated according to when it is performed, that is, during normal business hours (self-defined), after hours, on weekends, or interspersed with trips;
• Time-use surveys must clarify whether work at home is “income-producing” (versus unpaid housework);
• The job classification variables, including employee, self-employed, and contractor status, are critical to measure because there are differences in the travel behavior of employees and the self-employed;
• Work at home is associated with a second job or business, so multiple job-holding may be important to capture; and
• Because an advanced degree, higher income, use of technology, and the occupations of manager, professional, or sales are strongly associated with home-based work, the items education, income, occupation, and technology ownership are useful to include in surveys.

Measurement of the Mobile, Global Workforce

The impacts of mobility on traffic and air quality will be increasingly important to measure as workers respond to new opportunities in the e-business economy. The literature review suggests that if, as expected, travel patterns vary as behavioral change follows technology innovation, a number of factors must be considered in any attempt to standardize the measurement of mobile workers. They include, for example:
• Identification of where the individual is in time and space.
• Identification of his or her activity at that time.
• Identification of multitasking, e.g., driving a car while conducting work on a mobile phone.
• Identification of the work place(s). Special caution is needed because of the traditional phrase “home-work” trip: work no longer takes place in one non-residential location. It may take place at the corporate workplace, in the home (during normal business hours or after hours and on weekends), during travel, or at a customer’s or client’s job site.
• Identification of the routine work/travel pattern, e.g., does the person regularly work at home (“telework”), work at home one day a week, work in the employer’s office four days but travel to another city once a month, etc.?
• Knowledge of the technology and telecommunications used. Although it may not provide primary knowledge of travel, information on the use of wireless devices, PDAs, the Internet, and combinations of all three may supply valuable data for interpreting and forecasting travel behavior. The information is essential for capturing the relation between use of the Internet and trip substitution or complementarity. (For example, does shopping on the Internet increase or decrease trips to the mall? Does it increase truck trips to neighborhoods?)
• Travel increases: work, leisure, work/leisure.
• Travel decreases: trip substitution; tele-, Internet, and video conferencing.

2.2 REVIEW OF RELEVANT STANDARDIZATION PROCEDURES

2.2.1 Introduction

A review of current travel survey practice reveals that standards are not prevalent in the execution or evaluation of travel surveys.
As stated by the Chief Statistician of Statistics Canada:

“In some professions best practice is codified precisely or defined by reference to professional codes and standards. No such precise code exists in the domain of survey methodology. Indeed, survey methodology is a collection of practices, backed by some theory and empirical evaluation, among which practitioners have to make sensible choices in the context of a particular application. These choices must attempt to balance the often competing objectives of quality, relevance, timeliness, cost, and reporting burden.” (Statistics Canada, 1998, p. 2)

Thus, the closest things to standards in the travel survey field are generally accepted good practices. However, there is little doubt that standards in travel survey practice can assist in maintaining quality and can facilitate the evaluation and comparison of travel survey data. Before proceeding, it would be helpful to clarify the terms “standards” and “standardized procedures” as used in this report. Standards are considered minimum thresholds of the properties of a product that must be attained in order for the product to be acceptable. In the context of travel surveys, and taking a broad view of the properties a travel survey should embrace, the properties considered would typically be the quality of the data, the ethics employed in collecting the data, and the procedures used to evaluate, document, archive, and disseminate the information collected. Standardized procedures, on the other hand, are stipulated methods of conducting an activity. By fixing a process, ambiguity is reduced, standards are indirectly achieved, and assessment is promoted by clarity of concept and the opportunity to compare values from different sources. Thus, standardized procedures are an indirect application of standards, but they also enhance communication and understanding, promote efficiency, and facilitate assessment of the product.

There is evidence in the literature of both the setting of standards and the imposition of standardized procedures in travel surveys. For example, standard time use categories have been recommended by several agencies, including the Statistics Division of the United Nations and the Australian Bureau of Statistics (United Nations Secretariat, 2000b; Trewin, 1997). However, the move toward establishing standards in the industry is in its infancy, and suggested standards tend to be general and tentative in nature. As described in the opening paragraph of this section, specific standards and procedures for travel surveys do not exist at the moment, but documented “good practices” are emerging that serve as guidelines in the industry. Similarly, there are suggested standardized procedures, such as the Council of American Survey Research Organizations (CASRO) and American Association for Public Opinion Research (AAPOR) methods of response rate calculation, although neither of these procedures is universally accepted as a standard in travel survey practice. There have been attempts in the past to define quality in travel surveys, establish norms of ethical conduct, describe good practices, and introduce the concept of certification or accreditation of agencies that conduct travel surveys. These represent the initial efforts within the travel survey industry at establishing standards and standardized procedures.
2.2.2 Standards

Defining Quality

In the manufacturing world, where standards are used extensively, it is common to define the quality of a product in terms of criteria such as size tolerances, hardness, resistance to fatigue, and so on. In travel surveys, however, quality is a much more comprehensive concept. Statistics Canada (1998a) suggests that quality in travel survey data should be measured in terms of six properties: relevance, accuracy, timeliness, accessibility, interpretability, and coherence. Relevance is the value of the data to a user. Thus, data may have different relevance depending on the use to which they are put, and the more data items that are relevant, the higher the quality of the data for the specified use. In surveys with adequate sample sizes, accuracy is primarily the lack of bias (Richardson et al., 1995, p. 99). Timeliness is the time value of information, whose usefulness and value decrease with age. Accessibility is the ease with which data are obtained from a holding agency, where ease is considered in its broadest sense and includes the form in which the data are provided, the availability of supporting descriptive information, the means of dissemination, and how likely a user is to know whom to contact and be able to contact them. Interpretability is the ease with which a user will understand and correctly use data provided by an agency. Definitions, descriptions of the procedures used, and a clear description of the data set and the codes used enhance the interpretability of data. Coherence is the consistency of terms, codes, concepts, and procedures within and across data sets. Many of the properties describing the quality of data above must be subjectively assessed. Because relevance is a component of quality, this also means that the quality of data will change from application to application depending on the purpose to which the data are put.
Thus, not only must quality assessments be made subjectively, but they will also vary from user to user. This makes setting standards for data quality difficult except in terms of the general or generic features of a survey. Furthermore, a careful review of these proposed quality terms reveals that several of them are not actually related to the generic quality of data but instead incorporate characteristics of the user or the value of the data at a different time. These include relevance, timeliness, and accessibility. Each of these helps measure the value of data in a particular application, but none of them provides a measure of the quality of data per se.

Ethics

Several survey research organizations have established codes of practice and regulations aimed at directing their members toward practices that ensure a certain code of conduct or ethical standard. CASRO has produced a document titled the Code of Standards and Ethics for Survey Research, in which the responsibilities of the survey company regarding the execution of the survey, interaction with the client, and handling of the data are described (CASRO, 1997). In this code of standards, the respondent’s interests are described in terms of anonymity in any reported data, ready identification of the company conducting the survey, prohibition of taping or recording of an interview without the respondent’s knowledge, and respect for the right of the person being interviewed to refuse to be interviewed or to terminate an interview in progress. The Marketing Research Association (MRA) also has a Code of Ethics by which its members are expected to abide (MRA, 2000a). In the code, guidelines are provided on how the research firm is to conduct itself with respect to those interviewed, to the client, to subcontractors, and to the public as a whole. Most significant is the manner in which members of the MRA are required to treat those they interview.
In a document titled the Respondent Bill of Rights, the MRA requires that its members abide by the following principles when interviewing members of the public (MRA, 2000b):
• The privacy of the individual, and the information they provide in the survey, will be protected;
• The name, address, phone number, or any other personal information of the respondent will not be disclosed to third parties without the respondent’s permission;
• The interviewer will always be prepared to identify himself or herself, the research company he or she represents, and the nature of the survey being conducted;
• The respondent will not be sold anything or asked for money as part of the survey;
• Persons will be contacted at reasonable times to participate in the survey, and they may request to be re-contacted at a later date if that is more convenient;
• A person’s decision to participate, answer specific questions, or terminate the interview will be respected without question;
• A participant will be advised in advance if the interview is to be recorded and will be informed of the purpose of the recording; and
• The respondent is assured of the highest professional standards in the collection and reporting of the information provided.

In Europe, similar standards have been established by the European Society for Opinion and Marketing Research (ESOMAR). ESOMAR is primarily European in its membership but also has members in approximately 80 other countries around the world. ESOMAR has published rules for its members that describe the rights of respondents, the professional responsibilities of the researcher, and the mutual rights and responsibilities of the researcher and client (ESOMAR, 1999a). The rules are very similar to those stipulated by CASRO and the MRA, with a few qualifications to adapt them to the multinational environment in which they are applied. ESOMAR has separate guidelines for tape and video recording and for client observation of interviews or discussions (ESOMAR, 1999b).
It also has rules regarding the conduct of market and opinion research using the Internet (ESOMAR, 2000). In the case of tape and video recording of individuals, or their observation from a hidden location, the main issues relate to prior notification and permission, and to safeguards on the release of recordings. With regard to surveys conducted via the Internet, the same principles apply as outlined before, but extra care is required to ensure that information transfer is secure, that permission is obtained from parents for children under the age of 14 to participate, and, if e-mail is used, that respondents who have indicated that they do not want to be re-contacted are omitted from any further communication.

Good Practices

The general standards and ethics of the previous paragraphs describe the overall approach that must be adopted by survey research companies when conducting travel surveys. To provide guidance on how to implement those principles, some organizations have accumulated and documented “good practices” that are consistent with them. These “good practices” are not standardized procedures, because they are not prescribed and do not define a specific procedure. However, they do direct practice in a general direction that leads to more uniform procedures than would otherwise be achieved. Statistics Canada has produced a comprehensive set of “good practices” in travel surveys in its document Quality Guidelines (Statistics Canada, 1998a). Guidance is provided on how to conduct each step in a survey and how to structure and operate a survey company so as to collect quality data. With respect to advice on individual steps in conducting a travel survey, guidance is provided on the most efficient and effective manner of executing the following tasks:
• Objectives;
• Concepts, definitions, and classifications;
• Coverage and frames;
• Sampling;
• Questionnaire design;
• Response and non-response;
• Data collection operations;
• Editing;
• Imputation;
• Estimation (i.e., estimating population parameters from sample values);
• Seasonal adjustment and trend-cycle estimation;
• Data quality evaluation;
• Disclosure control;
• Data dissemination;
• Data analysis and presentation;
• Documentation; and
• Administrative data use.

With respect to the management environment, it is recommended that a Quality Assurance Framework be established. This involves establishing an institutional structure and assigning responsibilities to specific individuals in the company to maintain quality.
This is similar to the rapidly growing Total Quality Management process employed by many companies to establish and maintain quality in their operations (Richardson and Pisarski, 1997). CASRO has produced similar guidelines on “good practice” in its Survey Research Quality Guidelines document (CASRO, 1998). It provides guidance on the execution of the following steps in the survey execution process:
• Problem definition;
• Sample design;
• Interview design;
• Data collection;
• Data processing; and
• Survey reporting.

In providing guidance on establishing a problem definition, they describe the necessity of obtaining background information on the need for and use of the data to be collected, of establishing objectives with the client, and of determining the topics to be covered in the survey. The sample design includes definition of the population to be sampled; determination of the sample frame, sample size, and weighting; and a full description of the procedure to be followed in conducting the survey, including call-back and replacement procedures, if any. In coding the disposition of the sample, including non-response, they suggest that all of the following categories be used:
• Respondents not reachable (i.e., busy, etc.);
• Respondents not available after callbacks;
• Total refusals;
• Respondents not interviewable (i.e., language/speech problems, etc.);
• Respondents not qualified; and
• Completed interviews.

In the interview design, general guidelines are provided on designing the questionnaire or interview. In the guidelines for data collection, considerable guidance is offered on interviewer training, supervisor procedures, interviewing protocol, and validation procedures. In data processing, it is noted that data editing must first be applied to remove illegible, incomplete, or inconsistent entries in the data. During this phase, missing data that can be inferred from other complete data (e.g., the gender of a respondent from his or her name) may also be replaced. Coding must be consistently conducted, and detailed coding of missing data must be provided. Survey reporting should always include the study title, the names of the client and the research company, the date, and information on the survey such as the target population, location, respondent qualification requirements, and sample size. Information regarding the execution of the project, such as the interview dates, sample design, disposition rules, response rate, weighting, and results of validation runs, should also be reported.
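The disposition categories above feed directly into response-rate calculation. As a minimal sketch (not the official CASRO or AAPOR formula, and with eligibility assumptions of our own), a completed-over-eligible rate might be computed as follows:

```python
# Illustrative sketch: computing a simple response rate from the
# disposition categories listed above. The category names and the
# eligibility assumptions are ours; CASRO and AAPOR each publish
# their own, more detailed formulas.

def response_rate(dispositions):
    """Completed interviews divided by units presumed eligible.

    'not_qualified' cases are excluded from the denominator.
    'not_reachable' cases are assumed eligible here, a simplifying
    assumption; in practice only a fraction may be eligible.
    """
    eligible = (dispositions["completed"]
                + dispositions["refused"]
                + dispositions["not_available"]
                + dispositions["not_interviewable"]
                + dispositions["not_reachable"])
    return dispositions["completed"] / eligible

sample = {
    "not_reachable": 150,      # busy, no answer, etc.
    "not_available": 80,       # not available after callbacks
    "refused": 120,            # total refusals
    "not_interviewable": 30,   # language/speech problems
    "not_qualified": 70,       # screened out; excluded from denominator
    "completed": 620,
}

print(f"Response rate: {response_rate(sample):.1%}")  # 620 / 1000 = 62.0%
```

How cases of unknown eligibility are treated is precisely where the CASRO and AAPOR formulas differ, which is why an agreed standard matters when comparing response rates across surveys.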
Certification/Accreditation

One of the needs satisfied by standards is the assurance a user or client has when a product they plan to purchase carries the approval or certification of a recognized standards agency. One of the main functions of standards in the manufacturing industry is to assure the consumer that a product carrying the seal of a reputable standards organization is of reliable quality. Standards of this type are usually handled at the national level by national standards organizations, although international bodies such as the International Organization for Standardization (ISO) also exist. The ISO uses national standards organizations and experts from each individual field to establish standards in those areas in which standards are requested by suppliers or consumers. The ISO requires that suppliers structure and operate their companies according to quality management principles. The ISO defines a quality management principle as:

“... a comprehensive and fundamental rule or belief, for leading and operating an organization, aimed at continually improving performance over the long term by focusing on customers while addressing the needs of all other stakeholders.” (ISO, 1997)

The rationale is that by adopting appropriate quality management principles within an organization, the best quality product is produced irrespective of the type or nature of the industry involved. The ISO requires that organizations registered with it abide by the following eight quality management principles (ISO, 1997):

• The organization must be customer-focused. That is, it must understand the customer’s needs, meet the customer’s requirements, and strive to exceed the customer’s expectations.
• The organization must have effective leadership. The leaders must direct the organization’s progress and promote unity of purpose among the employees of the organization.
• The organization must involve all its members in its operation. Members must be able to contribute their individual abilities to the benefit of the organization.
• Individual components in the operation of an organization must be managed as a process. Applying a process approach improves the efficiency of the operation.
• The organization should manage its operation as a system of interrelated processes.
• The organization should always be looking for ways to improve its operation.
• Decisions in the organization should be based on factual information.
• A mutually beneficial relationship must be maintained between the organization and its suppliers. Relationships are sustained when both parties benefit from the association.

These quality management principles could be expected in any well-managed organization. Although they are philosophical and general in nature, ISO provides guidelines on how to structure and operate a company so as to pursue and maintain them. These are described in the ISO 9000 series of guidelines, which apply to a wide array of activities and are not limited to manufacturing as typically perceived. Richardson and Pisarski (1997) have translated the ISO guidelines into requirements for a travel survey company. They maintain that, while ISO registration requires considerable commitment from the company to implement and maintain, its benefits in being able to deliver a quality product in a consistent manner are substantial.
One factor that may drive travel survey companies to seek ISO certification or accreditation in the future is that agencies commissioning surveys may increasingly require it. Agencies may be attracted to this option because it significantly reduces the responsibility they bear in ensuring that good quality data are produced. This may be a particularly attractive option for agencies that feel uncertain about their own ability to assess quality effectively.

2.2.3 Standardized Procedures

There is little evidence in the literature of standardized procedures in travel surveys. However, there are at least two areas in which the prospects of introducing standardization have been discussed: the standardization of terminology or concepts, and the standardization of the measures used to assess the quality of a survey. These areas are reviewed below.

Terminology

One of the greatest barriers to comparison of data between different data sets is the inconsistency in terminology and survey procedure used in different surveys. While standardization of survey procedure may be undesirable, given the variety of purposes and objectives directing individual data collection efforts, confusion due to inconsistent use of terms is unnecessary. The classic example is the definition of a trip, which is likely to vary from survey to survey (Richardson, 1997). Other terms that may not always generate a common perception include coverage, validation, deduction, and calibration. Overall, a distinct need appears to exist to establish a glossary of terms that can serve as a standard description of commonly used terms. Another area where standard terminology would be beneficial is in the phrasing of questions typically included in a travel survey. An example would be the phrasing of the question to determine the vehicle ownership of a household.
The question could be phrased so as to clearly indicate whether vehicles not owned by the household but available for its full-time use, or vehicles owned by the household but not in operating condition, should be included in the total number of vehicles. A particular difficulty is incorporating new behaviors that affect travel, such as work at home and Internet shopping. The question of standardizing the questions and content of travel surveys needs further study before recommendations can be made.

Classification

An area with great potential for standardization is the establishment of standard classifications. Data are often classified into categories to reduce the variety of cases, to make obtaining the data less offensive to the person being interviewed (as when establishing household income), or to combine the characteristics of several variables into a single category. Data are often categorized to portray household income, occupation, educational level, stage in life cycle, land use, industry, and race. If standard classification systems can be adopted, the opportunity to compare values among different data sets will be enhanced. Because some secondary data sets, such as the Decennial Census and the Nationwide Personal Transportation Survey (NPTS) (or its successor, the National Household Travel Survey), are important sources of information and are likely to be used to supplement a travel survey, it would be advantageous to adopt classification systems that are as similar as possible to those of these data sets. A standard classification of industry and other economic activities used in the past has been the Standard Industrial Classification (SIC) system. However, the SIC has recently been replaced by the North American Industry Classification System (NAICS), which provides a detailed classification of industrial, commercial, and public service activities (NTIS, 1997). Due to its wide acceptance and use, it would be advantageous to use the NAICS classification scheme in travel surveys.
ESOMAR has established a standard socio-economic classification system called the European Social Grade (ESOMAR, 1997). The European Social Grade is a function of the “terminal education age of the main income earner” and the occupation of the main income earner for those households that have an actively employed income earner. For households without an employed person, the occupation of the main income earner is replaced by the “economic status of the household.” Terminal education age is defined as the age category in which the main income earner received his or her last professional training or education. The age categories are 13 years or younger, 14, 15-16, 17-20, and 21 years or older. Occupation is described in terms of seven categories ranging from management through professional to unskilled worker. Economic status is determined by the number of the following consumer items owned by the household:
• Color television;
• Video recorder;
• Video camera;
• Two or more cars;
• A still camera;
• A home computer;
• An electric drill;
• An electric deep-fat fryer;
• A radio clock; and
• A second home or holiday home/apartment.

Six economic status scale categories were established from this information by giving the value of six to households possessing five or more of the above items, ranging down to a value of one for households owning none of the items or failing to answer the question. From the five terminal education age categories and seven occupation categories, eight social grade categories were established for households with workers, as shown in Table 18. The numbers in the table range from 1 for the social grade described as “well-educated top managers and professionals” to 8 for the social grade described as “less well educated skilled and unskilled manual workers, small business owners, and farmers/fishermen.” A similar classification into eight social grades is established for households without a worker, using economic status in place of occupation category.

Table 18: Eight Social Grade Categories

Terminal Education Age  | Occupation Categories 1-7 (social grades)
21+                     | 1 2 3
17-20                   | 1 2 3 4 5
15-16                   | 2 3 4 5 6
14                      | 3 5 6
13 or less              | 5 7 8 8

Source: Adapted from ESOMAR (1997)

The European Social Grade was applied in the 12 nations of the European Union in seven separate waves of surveys between 1992 and 1995. The samples were random samples of approximately 1,000 households per nation in each wave. The Social Grade has been used to compare the socio-economic composition of the different countries. It can also be used to observe the change in socio-economic status within a country over time, given sufficient passage of time between surveys. The developers of the European Social Grade believe that, while the measure was developed using European data and reflects European conditions, the concept could be used in other areas of the world with the necessary adjustments to the economic and social factors in the expression.

Coding

Another part of the travel survey process that would benefit from standardization is coding. Coding is the assignment of labels to data to facilitate identification or analysis. For example, household income intervals are assigned numerical or alphanumeric labels to distinguish individual income categories. Descriptive variables such as driver license status, gender, educational level, and occupation, as well as item non-response categories such as no answer, refused, or not applicable, are usually assigned codes.
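The ESOMAR economic status scale described above is a simple illustration of how a standardized classification can double as a standardized code: the scale value depends only on how many of the ten listed consumer items a household owns. The sketch below infers the intermediate mapping (four items to category 5, three to 4, and so on), since the source specifies only the endpoints explicitly:

```python
# Sketch of the ESOMAR economic-status scale: households owning five
# or more of the ten listed consumer items score 6, down to 1 for
# households owning none (or not answering). The intermediate mapping
# (4 items -> 5, 3 -> 4, 2 -> 3, 1 -> 2) is inferred from the text,
# which states only the endpoints explicitly.

ITEMS = [
    "color_tv", "video_recorder", "video_camera", "two_or_more_cars",
    "still_camera", "home_computer", "electric_drill",
    "deep_fat_fryer", "radio_clock", "second_home",
]

def economic_status(owned, answered=True):
    """Map a household's list of owned items to the 1-6 status scale."""
    if not answered:
        return 1  # failure to answer is grouped with the lowest category
    count = len(set(owned) & set(ITEMS))
    if count >= 5:
        return 6
    return count + 1  # 0 items -> 1, 1 -> 2, ..., 4 -> 5

print(economic_status(["color_tv", "home_computer", "radio_clock"]))  # 4
print(economic_status([], answered=False))                            # 1
```

Because the item labels and scale values are fixed, two agencies coding the same households would produce identical codes, which is exactly the comparability benefit standardized classifications are meant to deliver.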
The benefit of standardizing codes arises when categories with the same intervals use the same codes across different data sets. Similarly, if the types of missing data items were coded into the same number of descriptive categories, comparison and understanding of the terms would be enhanced.

Assessment

The means of assessing the quality or accuracy of travel survey data are currently few and are not applied uniformly among practitioners. Three measures of data quality suggested in the literature as good candidates for assessment measures are coverage error, response rate, and sampling error for key variables (Statistics Canada, 1998a, p. 51). The topic of assessment and recommendations on how it can be measured are discussed in greater depth in Chapters 4 and 5.
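Of the three suggested measures, sampling error is the most mechanical to compute. As a sketch, under simple random sampling the standard error of an estimated proportion p from n responses is sqrt(p(1 - p)/n); real travel surveys with weighting or clustering would inflate this by a design effect that is not modeled here:

```python
import math

# Sketch: standard error and normal-approximation 95% confidence
# interval for a key proportion under simple random sampling. The
# example figures are illustrative, not taken from the report.

def proportion_se(p, n):
    """Standard error of a sample proportion p from n responses."""
    return math.sqrt(p * (1.0 - p) / n)

def confidence_interval(p, n, z=1.96):
    """Normal-approximation confidence interval for the proportion."""
    se = proportion_se(p, n)
    return (p - z * se, p + z * se)

# e.g., 12% of 2,500 responding households report zero vehicles
p, n = 0.12, 2500
low, high = confidence_interval(p, n)
print(f"SE = {proportion_se(p, n):.4f}; 95% CI = ({low:.3f}, {high:.3f})")
```

Reporting such a figure for a handful of key variables, as suggested above, gives data users a directly comparable indication of precision across surveys.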

TRB’s National Cooperative Highway Research Program (NCHRP) Web-Only Document 93 is the technical appendix to NCHRP Report 571: Standardized Procedures for Personal Travel Surveys, which explores the aspects of personal travel surveys that could be standardized with the goal of improving the quality, consistency, and accuracy of the resulting data.
