CHAPTER 3

Current State of Practice

Background

To better understand the current state of practice for origin–destination (OD) studies, an online questionnaire was distributed to 67 organizations throughout the United States responsible for overseeing these studies, including transit providers and metropolitan planning organizations (MPOs). Participants were selected from the National Transit Database to represent a range of transit systems by size, location, and mode, and based on the availability of contacts and recommendations from the study committee. Fifty-seven of the organizations responded at least partially to the questionnaire, resulting in an 85 percent response rate.

Responses to the study questions were divided into the following sections:

A. Planning a Survey
B. Survey Approach and Instrument Design
C. Survey Sampling Plan
D. Assessing Survey Data Quality
E. Survey Expansion
F. Emerging Data Alternatives to On-Board, or Intercept, Surveying

Modes Surveyed

Survey respondents reported conducting OD surveys on all common modes of public transportation, with 92 percent stating they completed their latest survey on local bus (see Table 4).

System Size

Survey respondents were categorized in certain instances by the size of the transit system being surveyed. System size was defined as total annual unlinked trips according to 2016 NTD data (FTA, 2016c; see Table).

Planning a Survey

Frequency of Survey Completion

The use of OD surveys among transit providers and MPOs is extremely common (see Figure 5). Of the organizations that participated in this study, 93 percent reported completing an OD survey at some point in their history. This is not surprising, given that these types of surveys provide information on a wide variety of topics relevant to service planning and service provision.
Public Transit Rider Origin–Destination Survey Methods and Technologies

The frequency with which respondent organizations administer new surveys varies (see Figure 5). Of the organizations that participated in this study, 7 percent responded that they do so on a rolling (continual) basis; another 7 percent, annually; 40 percent, every 2 to 5 years; 23 percent, every 6 to 10 years; and 2 percent complete a survey less than once every 10 years. The remaining organizations that have completed rider surveys in the past have done so either at irregular intervals or when specific circumstances arise that require a survey to be completed. These circumstances include the completion of large projects, service changes, the availability of funding, or a wider survey effort being administered by the local MPO. There was no discernible relationship between the size of an organization and the frequency at which it completes a rider survey, though the only respondents that reported collecting rider survey data on a regular or rolling basis were doing so for very large transit systems.

How Respondents Use Survey Data

Respondents stated that they conducted OD surveys to support long-range planning, travel-demand modeling, route planning, and Title VI compliance. Most respondents stated that they did not have trouble justifying the cost of the survey to their leadership; federal compliance was the most commonly cited justification made to leadership for funding.

Table 4. Modes surveyed.

Mode | Count | Percentage
Local bus, including electric bus and trolley bus | 47 | 92
Commuter bus | 21 | 41
Streetcar or light rail | 16 | 31
Bus rapid transit | 13 | 25
Commuter rail | 13 | 25
Heavy rail (subway, metro, rapid transit) | 8 | 16
Paratransit or demand-response service | 4 | 8
Ferry | 2 | 4
Other (express bus) | 1 | 2
Total respondents | 51 |

Note: Respondents had the option to choose multiple responses.
[Figure 5. Frequency of conducting origin–destination surveys, by organization size. Bar chart of percent of respondents (0 to 80 percent) by transit system size (All, Small, Medium, Large, Very Large) and survey frequency: ongoing/rolling basis, annually, every 2 to 5 years, every 6 to 10 years, less than every 10 years, never, other.]
How Surveys Are Funded

A wide variety of funding sources are used to pay for rider surveys, the most common being local and federal funds (see Table 5). Many organizations rely on a mix of funding sources to pay for surveys.

Regional Coordination

In most circumstances the transit agency (86 percent of respondents) is the primary party responsible for overseeing a rider survey effort (see Figure 6). Another 10 percent of surveys are led by the MPO. When asked whether their most recent survey was administered in coordination with other transit agencies or the regional MPO, 59 percent responded in the affirmative. Rider survey results can affect many aspects of transportation planning, including those under the purview of other agencies and government entities.

Survey Approach and Instrument Design

Survey Modes Used

The survey methods used by respondents were generally consistent with those identified in the literature review (Chapter 2). Respondents most often conducted surveys by paper (67 percent) or by tablet computer (53 percent); a much smaller share of respondents conducted surveys online or with a two-step approach (see Figure 7).

Table 5. Funding sources.

Funding Source | Count | Percentage
Local government | 25 | 51
Federal government | 23 | 47
Metropolitan planning organization | 13 | 27
State government | 3 | 6
Total respondents | 49 |

Note: Respondents had the option to choose multiple responses.

[Figure 6. Lead organization for survey. Pie chart: regional transit agency, 86 percent; MPO, 10 percent; city, 2 percent; regional organization, 2 percent.]
Survey Methods Used

Respondents used a variety of methods, including self-administered, interview-administered, and two-step surveys:

• The most common method overall among respondents was a self-administered paper survey that staff members distributed to riders, who then returned the survey directly to staff members or placed it in a collection box (57 percent of respondents).
• The second most common survey method was an interview-administered survey using tablets (53 percent of respondents). One-fifth of respondents offered both a tablet and a paper survey option.
• Forty-one percent of respondents used multiple survey methods. Paper-based survey instruments were most frequently combined with other methods, which may indicate that organizations responsible for completing OD studies continue to offer paper as an option as they introduce other methods.
• Notably, no respondents administered their last survey using an online method alone. Invitations to the online survey were most frequently handed out by survey staff directly to riders or distributed through seat-drops. One respondent reported advertising their online survey in transit stations, on buses, and through social media (not advisable, because this method may yield a very high rate of response bias); another invited participants through e-mail; and another gave respondents to a paper survey the option to complete it online.
• Eight percent of respondents used a two-step method in which riders were asked to complete a short survey aboard the vehicle, followed by an in-depth computer-aided telephone interview (CATI).
• Only one respondent used an "other" method. This organization tracked boardings and alightings entirely electronically and thus collected trip-making behavior automatically, without interacting directly with riders.
Tablets

Because tablets are an emerging survey mode, this synthesis questionnaire included several questions specifically on how and why respondents are utilizing tablets in their survey practice. All respondents chose to adopt tablets to improve data quality; a majority also stated that they use tablets to shorten data processing times and to reduce labor costs associated with data cleaning and analysis, printing, and paper handling (see Table 6).

[Figure 7. Frequency of survey mode. Bar chart of percent of respondents (0 to 70 percent) by survey mode: paper, online, tablet, two-step, other.]
There were a few less commonly cited reasons for using tablets. Two respondents mentioned FTA encouragement to use tablet technologies. Others chose tablets because they allow geocoding on the fly, provide a higher level of quality control and assurance, provide better origin and destination GIS data, or because previous research showed that tablets would increase response rates.

Table 6. Reasons why tablets were used.

Reason | Count | Percentage
Improved data quality due to automated validation of responses | 28 | 100
Shortened wait period between completion of the field survey and availability of data for analysis | 17 | 61
Reduced labor costs related to data cleaning and analysis | 17 | 61
Reduced labor and materials costs related to printing and paper handling | 14 | 50
Enabled a new approach to survey design | 11 | 39
Other | 9 | 32
Total respondents | 28 |

Note: Respondents had the option to choose multiple responses.

Tablet surveys can be administered either as a "smart survey," with automatic data validation built into the process, or as a "static survey" that simply collects the data through a tablet interface without taking full advantage of data validation capabilities. Most respondents that used tablets in their most recent survey reported incorporating some type of automatic data validation (see Table 7). Surprisingly, 17 percent of respondents administered a tablet survey that does not incorporate any response validation.

Table 7. Type of validation strategies used.

Validation Used | Count | Percentage
Consistency of responses (e.g., transit route used was consistent with stops used) | 19 | 66
Addresses | 18 | 62
Reasonableness of responses (e.g., number of vehicles available in household less than 20) | 17 | 59
Stop locations | 16 | 55
Responses required for each question (e.g., to prevent missed questions) | 12 | 41
No automatic response validation | 5 | 17
Total respondents | 29 |

Note: Respondents had the option to choose multiple responses.

While most respondents indicated that tablets improved data quality, responses were inconclusive on whether tablets reduced survey costs. The majority of respondents did not know whether costs went up or down, while 18 percent reported lower costs and 29 percent reported higher costs.

Days of Week and Times of Day Surveyed

Nearly all providers (94 percent) completed surveys during all weekday time periods, whereas two respondents did so only during peak and midday hours and one only during off-peak hours. Saturday and Sunday time periods were surveyed to a lesser extent. Of those
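The validation strategies in Table 7 can be combined into a single record-level check. The sketch below is illustrative only: the field names, required questions, and vehicle-count ceiling are hypothetical, not taken from any respondent's tablet software.

```python
# Sketch of "smart survey" record validation of the kinds listed in
# Table 7. Field names, required questions, and the vehicle-count
# ceiling are hypothetical.

def validate_record(record, max_household_vehicles=20):
    """Return a list of validation problems for one survey record."""
    problems = []

    # Required-response check (prevents missed questions).
    required = ["origin", "destination", "boarding_stop",
                "alighting_stop", "trip_purpose"]
    for field in required:
        if not record.get(field):
            problems.append(f"missing response: {field}")

    # Reasonableness check (e.g., household vehicles below a sane ceiling).
    vehicles = record.get("household_vehicles")
    if vehicles is not None and not 0 <= vehicles < max_household_vehicles:
        problems.append("implausible household vehicle count")

    # Consistency check: reported stops must lie on the reported route.
    route_stops = record.get("route_stops", [])
    for stop_field in ("boarding_stop", "alighting_stop"):
        stop = record.get(stop_field)
        if stop and route_stops and stop not in route_stops:
            problems.append(f"{stop_field} not on reported route")

    return problems
```

A smart survey runs checks like these as the interviewer enters each answer, so problems can be corrected while the rider is still present; a static survey defers them to post-processing.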
that surveyed riders on Saturdays and Sundays, most drew their sample from all time periods. Finally, no respondent limited their surveys to only one direction of travel.

Survey Distribution Locations/Extent of Routes

Respondents reported using a variety of survey sampling strategies, with a notable difference in approach between self-administered and interview-administered surveys. Regardless of the method, most respondents conducted the surveys aboard vehicles rather than at transit stops and stations.

• Among those with a paper survey instrument or invitation, 55 percent distributed the survey only on select trips and 28 percent reported distributing surveys on all trips. A small number of respondents distributed surveys at select bus stops or stations, and others distributed surveys both at stops and onboard vehicles or on one select route.
• If a tablet or other interview-supported method was used, respondents approached recruitment differently than with paper surveys. Because an interview-supported survey takes more staff time per person approached, survey administrators cannot easily invite all riders on all trips to participate. To organize their sampling method, 79 percent of respondents set up a randomized method for recruitment, such as inviting riders to participate based on their boarding order. The remaining respondents approached all adult riders on a surveyed trip.

Survey Question Design

While respondents use rider survey data for many different reasons (see Table 8), they tended to cover similar topics on their surveys. Nearly all respondents indicated that they included questions on trip purpose, demographics, fare type, access/egress modes, and frequency of transit use. Fewer than half of respondents asked questions on rider satisfaction, and only 8 percent asked about support for policy and planning proposals.
Finally, in the "other" category, a variety of questions were posed to riders, including several system-specific questions.

The literature review illustrates the importance of survey question design and wording; however, there is little standardization in survey instrument development. Most respondents based their surveys on their own past surveys or developed questions from scratch. Ten percent of respondents relied on their MPOs for question wording, and a handful of respondents used external sources, such as surveys from other providers.

Table 8. Topics addressed.

Topics | Count | Percentage
Trip purpose | 51 | 96
Rider demographics | 50 | 94
Type of fare used | 46 | 87
Means of access and egress | 45 | 85
Frequency of transit use | 41 | 77
Customer satisfaction | 21 | 40
Rider support for policy and planning proposals | 4 | 8
Other | 17 | 32
Total respondents | 53 |

Note: Respondents had the option to choose multiple responses.
Boarding and alighting information is the most basic type of question included in OD surveys. Fifty-one percent of respondents asked riders to indicate the boarding and alighting location for all trip segments to capture transfer locations and routing. Thirty-five percent asked only for the first and last boarding location. A small number of respondents reported inferring the transfer location based on boardings, alightings, and/or routes.

Special Accommodations for Specific Population Groups

When conducting surveys of transit riders, survey administrators have found that special accommodations are often needed to ensure that specific population groups can participate. Two groups that may require structural changes in survey administration are persons with disabilities and those with limited English proficiency (LEP). In most circumstances, without specific accommodations, the opinions and trip-making behaviors of members of these groups are unlikely to be captured.

• Accessibility: Though many people with disabilities can participate fully without accommodation, respondents reported training and resources for survey staff that can increase participation. Fifty-four percent of respondents provided training to survey staff on how to interact with people with different kinds of disabilities (e.g., physical, visual, auditory, and cognitive). Fewer than half of respondents provided special accommodations for riders, such as a screen-reader-compatible online survey, integration of adaptive technologies with tablet surveys, or the availability of proxy respondents for those unable to complete the survey on their own. More than one-quarter of respondents reported not using any specific strategies to accommodate passengers with disabilities.
• Multilingual Surveys: Because it is not feasible to accommodate every language spoken by passengers, survey administrators tend to focus resources on the most widely spoken languages in their service areas. Ninety-three percent of respondents provided surveys in languages other than English, the most common being Spanish, Chinese, Korean, and Vietnamese. Fifty-eight percent of respondents reported hiring multilingual surveyors who could administer their survey in a language other than English.

Survey Length

Survey administrators must design rider surveys to be long enough to capture the desired information, yet not so long or complex as to cause confusion or response fatigue. For respondents, the approximate time needed to complete their most recent rider surveys ranged from 2 to 30 minutes, though the majority reported times between 5 and 10 minutes. The average completion time was 7.4 minutes across all respondents. The number of questions reported ranged from 10 to 61, with an average of 29 (see Table 9).

Table 9. Survey length.

Survey Characteristic | Average | Min | Max | Total Respondents
Time to complete (minutes) | 7.4 | 2 | 30 | 35
Number of questions | 29 | 10 | 61 | 45

Survey Cost

Survey administration costs varied widely among respondents. Of the organizations that responded to these questions, the cost of completing a rider survey ranged from $2,100 for a
small transit agency to $5 million for a very large one, with a median cost of $530,899. Survey costs increased with agency size (see Table 10 and Figure 8). The substantial overlap in the range of costs indicates that agency size is not the only factor dictating survey costs. The most expensive survey completed by a small agency, for instance, cost more than the cheapest survey completed by a very large agency.

Other factors that influence cost include the desired sample size, the length and complexity of survey questions, whether the survey relied on contractors, the transit mode surveyed (e.g., a large agency surveying a low-ridership mode), and the survey methods used. The cost per complete and usable response also varied across respondents, ranging from $6.19 to $139.78 per response, with a median of $36.83 (see Table 11 and Figure 9).

A plurality of respondents reported that survey costs grew by more than 10 percent between their second most recent survey and their most recent survey. Only 7 percent of respondents reported a decline in survey costs, while 20 percent saw no change (see Figure 10).

Survey Sampling Plan

The decisions that survey designers make concerning sampling frame, population, and sample size can have a large impact on results. Sample size is directly related to the level of specificity that a summary of survey results can provide (see Table 12). The greater the level of geographic detail, the larger the sample required: a survey of ridership characteristics taken at the transit stop level will need a much larger sample than a survey of general rider characteristics across the system.
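The link between geographic specificity and sample size can be illustrated with the standard sample-size formula for estimating a proportion, including a finite population correction; the rider counts in the sketch below are hypothetical.

```python
import math

def required_sample(population, margin=0.05, z=1.96, p=0.5):
    """Completed surveys needed to estimate a proportion within the
    given margin of error (95 percent confidence by default), with a
    finite population correction for small rider populations."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Hypothetical system: 100,000 daily riders systemwide vs. one
# 1,000-rider route.
systemwide = required_sample(100_000)  # 383 completed surveys
one_route = required_sample(1_000)     # 278 completed surveys
```

A systemwide summary needs only a few hundred completes, but reporting each of, say, 50 such routes separately needs a few hundred per route, which is why stop-level and route-level summaries demand far larger total samples than systemwide ones.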
Table 10. Survey cost by agency size.

Agency Size | Median | Min | Max | Total Respondents
Small | $80,000 | $2,100 | $412,000 | 10
Medium | $365,000 | $20,000 | $753,000 | 7
Large | $750,000 | $340,000 | $1,248,443 | 6
Very large | $1,140,000 | $91,390 | $5,000,000 | 10
All | $530,899 | $2,100 | $5,000,000 | 34

[Figure 8. Survey costs by agency size (median, average, range, and 2 standard deviations). Chart with cost axis from $0 to $6,000,000 for small, medium, large, and very large agencies.]
Table 11. Costs per completed survey.

Measure | Median | Min | Max | Total Responses
Cost per complete/usable response | $36.83 | $6.19 | $139.78 | 21

[Figure 9. Cost per completed survey (median, average, range, and 2 standard deviations). Chart with cost axis from $0 to $160.]

[Figure 10. Change in cost compared to previous survey efforts. Pie chart: cost decreased with most recent survey (less than -10 percent change), 7 percent; cost increased with most recent survey (more than 10 percent change), 43 percent; little to no change (within +/- 10 percent), 20 percent; do not know, 30 percent.]

Table 12. Level of geography for which survey results can be summarized.

Level of Geography | Count | Percentage
Certain higher-ridership routes as well as the entire mode | 3 | 7
Only for an entire mode of service | 2 | 5
Route level (may exclude lower-ridership routes) | 16 | 39
Route segment level | 8 | 20
Stop level | 12 | 29
Total respondents | 41 |
The majority of respondents indicated that their sample size was large enough to provide results at the route, stop, or route segment level of geographic specificity. A smaller percentage of organizations were able to summarize their results only for select high-ridership routes or for an entire mode of service.

Surveying Minors

Minors (under 18 years old) have historically been excluded from intercept OD survey samples, though many respondents now appear to be implementing sampling techniques to remedy this (see Table 13). Thirty-three percent of respondents surveyed minors who appeared to be above a certain age, ranging from 11 to 16. Seventeen percent of respondents approached all riders regardless of age, and 3 percent included a supplementary survey for minors.

Table 13. How minors were incorporated in the survey sample.

Approach | Count | Percentage
No specific efforts | 17 | 47
Riders younger than 18 approached | 12 | 33
All riders approached | 6 | 17
Supplementary survey | 1 | 3
Total respondents | 36 |

Assessing Survey Data Quality

To maximize the accuracy of a survey and minimize response bias, survey designers aim for high response and completion rates. A high response rate indicates that a survey likely has good coverage across a selected sample of a population, whereas a high completion rate indicates that many of those responses are usable for the survey's expressed purpose.

Response Rate Calculations

Among respondents there is no consistent method for calculating the survey response rate, making it challenging to compare response rates across transit providers. FTA recommends using the number of surveys distributed (for self-administered surveys) or the number of persons approached (for interview-administered surveys) as the response base; a plurality of respondents used one of these two methods (see Table 14). More than half used another method to calculate the response rate, with a surprising 35 percent of respondents basing the response rate on passenger counts.

Table 14. Response base method.

Method | Count | Percentage
Passenger counts | 16 | 35
Number of surveys distributed | 12 | 26
Number of persons approached by interviewers | 9 | 20
Other | 9 | 20
Total respondents | 46 |

Reported Response Rates

The average response rate reported was 49 percent, but the range of reported response rates was extremely large (see Table 15). The lowest response rate was 3 percent and the highest was 88 percent. Respondents using tablets to administer their surveys had the highest response rate (63 percent), followed by paper (34 percent) and online (5 percent). These results are consistent with the findings of the case examples in Chapter 4.

This study struggled to collect accurate data on survey response rates. Only 23 respondents provided adequate data to calculate the response rate. Moreover, respondents calculated response rates in a variety of ways and, anecdotally, some respondents voiced skepticism about the accuracy of response rate data provided by a contractor. More research is needed to draw definitive conclusions on how survey methods affect response rates across a larger array of transit agencies.

The response rates reported by respondents remained relatively consistent from one survey to the next. Comparing their most recent survey to past survey efforts, 47 percent reported little change in response rates, 39 percent reported an increase, and 14 percent reported a decrease.

Sampling Short Trips

No matter the rider survey method, accurately incorporating short trips into the sample is a challenge, usually because riders completing short trips are not on a vehicle long enough to encounter a surveyor or complete a survey. When asked an open-ended question about how they captured these types of trips, respondents indicated that the most common method was to gather the rider's name, phone number, and alighting location for short trips and follow up later with a phone call.
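The effect of the choice of response base can be seen in a small worked example; all figures below are illustrative, not drawn from the synthesis responses.

```python
# How the choice of response base changes the reported response rate.
# All figures are illustrative, not from the synthesis data.

completed = 480          # usable completed surveys
distributed = 900        # self-administered forms handed out
approached = 1_100       # riders approached by interviewers
passengers = 2_400       # boardings on surveyed trips (e.g., APC counts)

def response_rate(completes, base):
    return round(100 * completes / base, 1)

print(response_rate(completed, distributed))  # 53.3 (surveys distributed)
print(response_rate(completed, approached))   # 43.6 (persons approached)
print(response_rate(completed, passengers))   # 20.0 (passenger counts)
```

The same field effort reads as anything from 20 to 53 percent depending on the base, which is why rates computed from passenger counts are not comparable with those computed from the FTA-recommended bases.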
Other respondents allowed riders to return their survey instruments by mail, provided an online link to complete the questionnaire, or distributed a custom short version of the survey aimed at these riders.

Table 15. Average response rates by survey method.

Method | Average Response Rate (%) | Min (%) | Max (%) | Total Respondents
All methods/method undefined* | 49 | 3 | 96 | 19
Tablet | 63 | 5 | 88 | 8
Paper | 34 | 10 | 71 | 11
Online | 5 | 3 | 7 | 2

*Some respondents specified response rates by survey method while others provided only totals; hence the "all methods" row is not an average of the tablet, paper, and online rows.

Underrepresented Population Groups

Respondents identified a variety of user groups from whom they struggle to field a representative sample during surveys (see Table 16). The most commonly reported underrepresented groups are LEP riders, minors, limited-literacy riders, and short-trip riders. Other groups reported as undercounted, but to a lesser extent, include riders with disabilities, riders
belonging to particular ethnic or racial groups, low-income riders, and riders who use transit during specific time periods and days.

Table 16. Underrepresented population groups.

Population Group | Count | Percentage
Non-English speakers | 22 | 63
Persons under the age of 18 | 14 | 40
Persons with limited literacy | 11 | 31
Riders making short transit trips | 10 | 29
Riders with disabilities | 4 | 11
Riders during times of day surveyed (e.g., PM peak vs. AM peak ridership) | 4 | 11
Particular ethnic or racial groups | 3 | 9
Low-income riders | 2 | 6
Transit-dependent riders | 0 | 0
Total respondents | 35 |

Note: Respondents had the option to choose multiple responses.

Standards for Completion

Respondents used different standards to determine whether a completed survey was usable (see Table 17). Only 17 percent reported that all survey questions had to be answered to count a survey as usable. Accurately recorded origin and destination locations and boarding and alighting stops were the most commonly required responses for a survey to be considered usable.

Number of Completed Surveys

The total number of completed surveys needed for statistically significant results is a function of the size of the transit system and the level of specificity with which results are meant to be summarized (see Table 18). As expected, larger transit agencies on average achieve a larger sample of completed surveys, yet overall there was a large range in the number of completed surveys achieved by respondents. The purpose of the survey, the desired level of specificity, and the modes being surveyed all affect an organization's target number of completed surveys.
Table 17. Standards for completion.

Standards | Count | Percentage
Origin and destination locations could be geocoded accurately | 25 | 54
Boarding and alighting stops were identified | 25 | 54
Trip purpose was collected | 16 | 35
Access and egress mode was collected | 14 | 30
Route sequence (for trips that included a transfer) was collected | 11 | 24
Respondents' demographic information was collected | 11 | 24
All questions were completed | 8 | 17
Other | 15 | 33
Do not know | 7 | 15
Total respondents | 46 |

Note: Respondents had the option to choose multiple responses.
Survey Expansion

Most OD surveys include only a sample of riders, so survey results must be expanded to reflect the total rider population (see Chapter 2). Route-level ridership was the most commonly used expansion factor (62 percent). Other common factors include on and off counts collected from station- or stop-level data using automatic passenger counters (APCs) or turnstile counts (52 percent), boardings by route (50 percent), boardings by route and direction (50 percent), boardings by time of day (50 percent), and boardings by stop or segment (48 percent). To a lesser extent, respondents reported using census data, park-and-ride counts, and other travel survey results, such as modal split, to determine the appropriate expansion factor.

In addition to performing an initial expansion of the survey sample, four respondents chose to re-expand their survey results with new data. The reasons for doing so varied, but most reported re-expanding survey results to reflect more recent ridership levels in order to validate regional travel demand models.

Emerging Data Alternatives to On-Board Surveying

Big data refers to large data sets (including passive data automatically generated for other purposes) that can be analyzed to yield information on rider behavior and travel patterns. These data sources are more robust than passenger surveys but report on a narrower set of characteristics.

Transit providers already generate a range of big data internally. Automatic vehicle location (AVL) systems and APCs provide a steady stream of data. Systems with electronic fare payment can use fare-card data to analyze travel patterns and rider behavior. Even General Transit Feed Specification (GTFS) feeds can be harnessed to analyze level of service and automate the creation of sampling plans. Alongside internally generated data is a range of third-party data that transit providers can harness.
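Route-level expansion, the most commonly used approach above, can be sketched as follows; the ridership and survey counts are invented for illustration.

```python
# Route-level expansion: weight each completed survey so that the
# weighted surveys sum to the route's observed ridership. All figures
# are invented for illustration.

route_ridership = {"Route 1": 4_200, "Route 2": 1_300, "Route 3": 650}
completed_surveys = {"Route 1": 210, "Route 2": 130, "Route 3": 25}

expansion = {route: route_ridership[route] / completed_surveys[route]
             for route in route_ridership}
# Each Route 1 record now represents 20 riders, each Route 3 record 26.

expanded_total = sum(expansion[r] * completed_surveys[r] for r in expansion)
print(expanded_total)  # 6150.0, matching total observed ridership
```

Expansion by route and direction, time of day, or stop segment follows the same calculation with a finer stratification of both the ridership counts and the survey records.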
Table 18. Number of completed surveys by provider size.

Provider Size | Definition (Annual Unlinked Trips) | Average | Min | Max | Total Responses
All | | 16,070 | 339 | 96,614 | 29
Small | <10 million | 2,092 | 339 | 4,189 | 9
Medium | 10–30 million | 5,520 | 352 | 7,987 | 6
Large | 30–100 million | 20,958 | 5,008 | 33,897 | 5
Very large | >100 million | 38,755 | 2,274 | 96,614 | 7

Location-based services from mobile apps and cellular location data can be anonymized and aggregated to provide a robust snapshot of travel behavior.

Use of Big Data

When asked how they incorporate big data into their organization, 98 percent of respondents said they use fare-box and fare-card data collected from boardings and alightings, and 93 percent reported using AVL and APC data to support their planning and administrative processes (see Table 19). Eighty-two percent of respondents use data created for a GTFS feed, which defines a common format for public transportation schedules and associated geographic information.
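As one illustration of using GTFS to automate a sampling plan, the sketch below reads a made-up trips.txt table and draws a random subset of trips per route for surveyor assignment; a real plan would read an agency's published GTFS files the same way.

```python
# Sketch: deriving a randomized sampling plan from a GTFS trips.txt
# table. The inline feed is a made-up two-route example.
import csv
import io
import random

trips_txt = io.StringIO(
    "route_id,service_id,trip_id\n"
    "1,WKDY,1-0600\n1,WKDY,1-0630\n1,WKDY,1-0700\n"
    "2,WKDY,2-0615\n2,WKDY,2-0715\n"
)

trips_by_route = {}
for row in csv.DictReader(trips_txt):
    trips_by_route.setdefault(row["route_id"], []).append(row["trip_id"])

# Assign surveyors to roughly half of each route's trips, at random.
random.seed(42)  # fixed seed so the plan is reproducible
plan = {route: sorted(random.sample(trips, max(1, len(trips) // 2)))
        for route, trips in trips_by_route.items()}
print(plan)
```

Joining trips.txt with stop_times.txt in the same way yields trips stratified by time of day, supporting the peak/off-peak sampling frames discussed earlier in this chapter.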
Table 19. Prevalence of the utilization of big data.

Data Source | Count | Percentage
Fare-box and fare-card data | 43 | 98
Automatic vehicle locator and automatic passenger counter data | 41 | 93
General Transit Feed Specification data | 36 | 82
Third-party data services that provide travel flow based on passively collected geospatial information | 12 | 27
Third-party data on rider characteristics and demographics | 4 | 9
Video analytics (e.g., facial recognition software) | 0 | 0
Other | 6 | 14
Total respondents | 44 |

Note: Respondents had the option to choose multiple responses.

When asked how incorporating big data has affected intercept OD surveys, respondents predominantly used the data to improve sampling strategies and refine expansion factors (see Table 20). One-fifth of respondents were able to reduce the scope and scale of their survey efforts because of big data, and one transit provider decided to forgo traditional surveys entirely.

Challenges with Big Data

Though external data sources can benefit transit providers, there are challenges to incorporating these new resources into the planning process. Respondents were most concerned about the lack of quality data related to transit ridership. Other widely cited challenges to harnessing big data were concerns over certain riders being systematically underrepresented in the data, the cost of such data sources, and a lack of in-house knowledge to process and use the data (see Table 21).

Summary of Current State of Practice

The respondents to the synthesis survey are only a small fraction of the 2,000-plus providers in the National Transit Database, but they provide a valuable snapshot of transit survey practice. Survey respondents carried almost 70 percent of nationwide transit ridership in 2016 according to the National Transit Database (FTA, 2016c) and are diverse in terms of size, location,
Impact Count Percentage Improve sampling strategy 28 64 Refine expansion factors applied to survey results 22 50 Not currently affecting the survey process 13 30 Reduce the scope and scale of traditional rider surveys 9 20 Eliminate entirely the need for rider surveys 1 2 Other 1 2 Total respondents 44 Note: Respondents had the option to choose multiple responses. Table 20. Impact of big data on survey techniques.
Current State of Practice 39 and mode. Transit providers, as opposed to regional organizations and jurisdictions, represent the majority of respondents. Frequency, Motivator, and Justification for OD Surveys Seventy-seven percent of participants conducted an OD survey in the last 10 years, with a plurality conducting such surveys on 2- to 5-year intervals. Generally, larger organizations tend to conduct surveys more frequently than smaller ones. While the motivating factor behind such surveys varied widely, the most commonly cited reason was federal compliance (48 percent), followed by travel demand model development (23 percent) and planning purposes (21 per- cent). The most common source of funding for OD surveys was federal and local funds. Standardization of Survey Practices One of the key findings of this synthesis is that while respondents are collecting much of the same kinds of data in their surveys, there is a lack of standardization in survey practice, most notably in question wording and survey instrument design. Most of the commonalities among survey instruments submitted by respondents for this study were due to having the same survey vendor or consultant and not because of any industrywide guidance. Survey Mode While paper-based surveys continue to be the most common mode of survey, 53 percent of respondents used either tablet surveys alone or tablet surveys in combination with another survey mode. Tablet surveys were most commonly adopted to improve data quality, because they can eliminate errors from transcribing answers and validate responses in real time. Reducing Sample Bias Sample bias is one of the major challenges to successfully conducting a rider survey. Respon- dents reported that they struggle most with sufficiently sampling LEP riders. 
Table 21. Big data challenges.

Challenge                                                          Count   Percentage
Lack of quality data related to transit ridership                  27      61
Concerns that certain groups of riders are underrepresented        23      52
Cost of acquiring data                                             22      50
Lack of in-house knowledge to fully use and process data sources   21      48
Concerns that utilizing big data will not comply with regulatory
  requirements such as Title VI                                    16      36
Other                                                              9       20
Total respondents                                                  44

Note: Respondents had the option to choose multiple responses.

Several strategies have been implemented to improve language access, including the use of bilingual survey teams, translation of the survey instrument, and alternative survey methods for specific language communities. Other groups commonly cited as underrepresented in surveys are people with disabilities or limited literacy. These groups have been accommodated by respondents in a variety of ways,
including improved survey staff training, the availability of a verbal or written survey option, and accessible survey options (e.g., screen reader-compatible surveys).

Use of Passive/Big Data

When using big data, respondents rely largely on internally generated sources of data, such as fare cards, GTFS feeds, and AVL or APC systems. Twenty-seven percent of respondents use third-party passive data sources, such as aggregated cell-phone location data. Presently, big data are used mostly to refine sampling plans and expansion factors. Only one-fifth of respondents use these data to reduce the scope of, or eliminate, their intercept OD surveys.
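To make concrete how passive data refine expansion factors, a common approach is to weight each completed survey so that the weighted responses in a stratum (for example, a route and time period) sum to the boardings observed by APC equipment. The sketch below uses made-up counts and a hypothetical route name purely for illustration; the general technique, not these numbers, is what respondents described.

```python
# Expansion factor = observed boardings / completed surveys, computed per
# stratum (here route x time period). All counts are illustrative only.
apc_boardings = {                     # boardings observed via APC data
    ("Route 10", "AM peak"): 1200,
    ("Route 10", "Midday"): 800,
}
completed_surveys = {                 # responses from the intercept OD survey
    ("Route 10", "AM peak"): 60,
    ("Route 10", "Midday"): 25,
}

expansion_factors = {
    stratum: apc_boardings[stratum] / n
    for stratum, n in completed_surveys.items()
}

# Each AM-peak response now represents 20 riders; each midday response, 32.
print(expansion_factors)
```

Weighting by stratum this way prevents heavily surveyed periods from dominating the expanded totals, which is why better APC coverage directly improves the quality of survey expansion.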