Developing a Guide to Bus Transit Service Reliability (2020)

Chapter: Appendix C - Case Study Summary Report


Appendix C – Case Study Summary Report

1.0 Introduction

This report summarizes Task 6 of TCRP Project A-42, "Minutes Matter: A Guide to Bus Transit Service Reliability." The original intent of Task 6 was to work with one or more transit agency partners on a demonstration study to identify the impact of specific reliability strategies, including their costs and benefits, through "before" and "after" data collection and analysis. However, follow-up contacts after the initial transit agency survey showed limited agency interest in partnering on such field demonstration studies. Thus, the TCRP A-42 research team, with A-42 panel approval, instead turned its focus to conducting more detailed interviews with selected transit agencies. At its February 14, 2017 meeting, the A-42 panel decided to abandon the field demonstration studies, directed the research team to focus on in-depth phone-based case studies, and asked the team to develop an approach for conducting them.

This report summarizes the findings of 10 in-depth case study interviews conducted for the A-42 project, covering a mix of large, medium, and small transit agencies. While the transit agency survey summarized in the Interim Report focused on the broader measures and strategies that agencies use to assess reliability, these case study interviews were designed to obtain an in-depth understanding of the state of the practice in several specific areas and at a few key agencies. The goal of each case study was to explore how transit agencies measure fixed-route bus service reliability and apply different treatments to improve reliability, using an open-ended discussion of the current practices at each agency.

It was clear during the case studies that transit agency staff perspectives on reliability were primarily focused on how to consistently improve daily service, recognizing agency resource limitations and the impacts of outside circumstances and conditions. What most significantly affects agencies' ability to measure and monitor reliability is the availability of vast amounts of data gathered through technology resources, including CAD/AVL systems, GPS, and wireless technology. The industry is in a transitional period of learning how to cope with vast amounts of real-time data and the most effective ways to process the information and use it meaningfully. How this is addressed from an industry perspective will build incrementally and will vary based on each agency's resources to develop and transition traditional programs and processes.

This report's findings will inform the development of the Bus Service Reliability Guidebook in Task 7, in areas where less information was available through the literature review and agency survey. As needed, the research team will follow up with one or more of the case study agencies in Task 7 to probe more deeply into certain responses if it benefits the presentation of certain material in the Guidebook.

1.1 Methodology

Ten transit agencies were interviewed, including six large agencies, two medium agencies, and two small agencies, listed below. These three categories were defined by the size of the fixed-route bus fleet: under 100 buses for small, 101-300 buses for medium, and over 300 buses for large.
Eight of the 10 agencies (all except New York City Transit and Los Angeles Metro) had responded to the initial agency survey. The study panel requested a focus on larger agencies for the case studies because few large agencies had responded to that survey.

Large Agencies
- New York City Transit (New York, New York)
- Chicago Transit Authority (Chicago, Illinois)
- VIA Metropolitan Transit (San Antonio, Texas)
- Regional Transportation District (Denver, Colorado)
- Los Angeles County Metropolitan Transportation Authority (Los Angeles, California)
- Transport for London (London, England)

Medium Agencies
- Southwest Ohio Regional Transit Authority (Cincinnati, Ohio)
- Pierce Transit (Tacoma, Washington)

Small Agencies
- Kingston Transit (Kingston, Ontario)
- Manatee County Area Transit (Sarasota, Florida)

The case studies were used to understand five critical aspects of the transit reliability measurement and improvement process:
1. High-level measures that the agencies use to determine reliability, including both traditional and non-traditional measures
2. The standards used in assessing and communicating transit performance
3. How specific causes of unreliability are determined through more specific data collection and diagnostic tools
4. How improvement actions are chosen
5. How the response to improvement actions is measured to evaluate the level of success

Since most of the agencies proposed for the case studies had also responded to the survey conducted in Task 2, their previous survey answers were used to frame the initial conversation. The discussions were used to identify linkages between the measures of reliability and any action plans developed in response to factors affecting transit service reliability. As some of the proposed case study agencies are represented on the TCRP A-42 panel, the panel member was the first individual contacted, to identify the best individual(s) to speak with in more detail. For other agencies, the first interview was with the respondent to the survey, whose functional role varied from agency to agency. From there, up to two additional staff were interviewed, including supervisors, operations leads, planning and scheduling staff, or specific groups that collectively oversee issues related to reliability.

The interviews and discussions conducted for these case studies were not intended to be prescriptive, as that would have hindered the nature of the discussions from agency to agency. Since every transit agency's operating environment is different, the discussion was framed by specific topic areas and provided the opportunity to dive deeper into subject areas specific to each agency. Discussions focused on the following topic areas:

Reliability Program Organizational Structure:
- Is there an internal team structured to review on-time performance and reliability measures, and to identify corrective actions?
- What is the internal structure for:
  - gathering reliability data and information
  - developing reliability reports
  - reviewing reports and identifying problem areas
  - selecting diagnostic tools and techniques to identify causes of poor reliability
  - selecting improvement actions
  - implementing improvements
  - evaluating the success of improvements

Reliability Measures and Standards:
- What elements of reliability does the agency monitor? Punctuality of service? Variability of travel times and wait times? Non-operation of service? Customer perceptions?
- What measures and standards are used to assess reliability? (confirm online survey response)
- How were measures and standards selected? Have they changed? How so and why?
- Are targets/standards set differently by service type? By area? By time period/day of week?
- What are the triggers used to signify that action should be taken?

Reliability Data Collection, Analysis and Reporting:
- What are the primary datasets used for reliability measures?
- How is data collected, stored, and manipulated for use?
- What reports are developed?

Diagnosing and Treating Reliability Issues:
- What practices are used to diagnose possible causes of poor reliability or a decrease in reliability after a trigger signifies that action is needed?
- What are the strategies employed internal to the agency to address reliability after possible causes have been identified?
- What are the strategies employed which require outside agency coordination?

Evaluating the Reliability Improvement Program:
- How are before and after studies conducted? What data is used?
- Have the higher-level reliability standards been assessed to understand if any modifications to the processes are required?
- Have there been any proposals made, or new ideas discussed, for changes to the process, new measures or tools, or new strategies to improve reliability, that have not yet been implemented? What are the impediments to implementation?

2.0 Case Study Findings

2.1 Large Transit Agencies

2.1.1 New York City Transit

Service area: 321 sq. miles
Area population: 8,550,405
Number of bus routes: 312
Size of fixed-route bus fleet: 3,286 buses
Annual passenger miles: 1.6 billion
Annual trips: 743.8 million
Annual vehicle revenue miles: 86.9 million

The TCRP A-42 interview with New York City Transit (NYCT) was held September 6, 2017 with the following individuals:
- Buckley Yung (Director, Short Range Bus Planning, Operations Planning)
- Mauve Clements (Global Benchmarking and Best Practice Manager, Operations Planning)
- Ally Reddy (Senior Director, System Data and Research, Operations Planning)
- Lisa Schreibman (Chief of Staff, Operations Planning)
- Aileen White (Assistant General Manager, Road Operations, Department of Buses)
- Marlene Connor (A-42 Research Team)
- Ted Orosz (A-42 Research Team)

NYCT operates over a 321-square-mile area, serving 8.5 million people in New York City. NYCT operates bus, bus rapid transit, commuter express bus, paratransit, and subway services. There are two bus units, NYCT and MTA Bus Company (MTABC), a sub-authority comprising formerly private operators placed under state control over the last 20 years. This interview was with NYCT staff.

There are five boroughs (counties) in New York City, and bus operations management is decentralized to the borough level. Queens is the largest borough geographically and is divided into north and south sectors, so functionally there are six borough offices. Road Operations regulates both NYCT and MTABC buses in a unified manner.

Bus system reliability is a topic of significant interest to NYCT, and as of 2017 the agency had invested heavily in measuring and monitoring reliability through a system called Bus Trek that provides continual data feeds gathered using GPS technology. As recently as five years ago, all reliability data was collected manually. While this system and its associated processes are still developing, many traditional programs and processes are being changed to respond to the availability of "big data" and to use these data streams to inform services both within the organization and externally to communicate with customers. While the intensity of bus service in New York City is significantly different from that of other transit programs nationally, the daily activities are the same, but they are measured and monitored at a scale commensurate with the amount of service on the street and the levels of ridership and passenger activity under NYCT control. Two themes resonate within NYCT: proactive service management and a strong reliance on communication. Both are enhanced by the active engagement of a strong data gathering system now in its implementation phases.

Reliability Program Organizational Structure

Within NYCT, the Bus Service Planning, IT, System Data and Research, and Road Operations Departments work together in the development of the new data system and internal processes that can be used to manage service. Each borough Road Supervision team has a mini command center staffed with dispatchers and road supervisors. Performance is reviewed and

compared against the previous year's performance to ensure that the agency is accomplishing its mission of improving year to year and identifying solutions to specific problems. With the abundance of information now available, route managers can view individual bus operator on-time performance in real time and can address issues and delays as they occur. This challenges management to establish policies and procedures that use the data and information to enhance reliability and system performance.

At NYCT, the chain of command begins with the Route Manager and runs through Road Supervisors, Superintendents, General Superintendents, the Borough Assistant General Manager (AGM) of Road Operations, and the General Manager of Road Operations to the Vice President, Buses. Data and reports are made available daily, weekly, and monthly in a variety of formats for monitoring performance and system reliability. Route Managers are typically responsible for monitoring and managing four to five routes, while Road Supervisors, who are typically located at terminals or at mid-route relief points, can also monitor service and performance through the Bus Trek system, which is visible on an iPad in their vehicles. Supervisors receive an Electronic Booking Tally Sheet (EBS) so they can observe performance in real time, including on-time performance and wait assessment by timepoint. Staff can view service in comparison with the schedule, so supervisors can constantly see current service performance with accuracy.

Reliability Measures and Standards

Like many other transit agencies, NYCT uses on-time performance data to measure bus system reliability, defining on-time as one minute early (-1) to five minutes late (+5). Management is primarily by exception: the percent of service operating early or late is tracked, and actions are triggered when routes fall below 60 percent on-time performance.

Service that operates at 10-minute headways or better is evaluated on a wait assessment basis. The concept is that for frequent services, customers should not have to wait more than 1.5 times the scheduled interval. The guideline varies by time of day: during peak periods the pass/fail tolerance is plus three minutes (so, for a 10-minute headway, a fail occurs at 14 minutes), while the tolerance during the off-peak period is plus five minutes. This information is part of the report presented to the MTA Board monthly, displaying data at the borough level and by service type: local, limited-stop, Express, and SBS (Select Bus Service, NYCT's BRT service). There is competition among the boroughs, and performance at that level is always on the board agenda.

Other measures captured and reported for reliability include:
- Number of trips completed
- Timely pull-outs
- Lost trips
- Detours in effect daily (used for tracking and comparison purposes)

Reliability Data Collection, Analysis and Reporting

Information on the system is captured by a wireless system that uses GPS on a continuous feed to provide data to the research team at NYCT. That data is processed nightly, compiled into a report format, and distributed daily to the Road Operations group. The reports show every bus at every bus stop and timepoint and can be used to identify buses that are early or late and to see locations where bunching is prevalent.
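The daily reports described above flag trips that fall outside the on-time window and frequent-service intervals that fail wait assessment. The sketch below is a minimal, hypothetical illustration of how those thresholds could be applied to a single timepoint arrival or an observed headway; it is not NYCT's Bus Trek code, and the function names are invented for illustration.

```python
from datetime import timedelta

# Thresholds as described above (illustrative constants, not NYCT configuration).
EARLY_LIMIT = timedelta(minutes=1)        # on-time window: up to 1 minute early ...
LATE_LIMIT = timedelta(minutes=5)         # ... and up to 5 minutes late
PEAK_TOLERANCE = timedelta(minutes=3)     # wait assessment tolerance, peak periods
OFFPEAK_TOLERANCE = timedelta(minutes=5)  # wait assessment tolerance, off-peak periods

def classify_arrival(scheduled, actual):
    """Classify a timepoint arrival as 'early', 'on_time', or 'late'."""
    deviation = actual - scheduled        # positive = late, negative = early
    if deviation < -EARLY_LIMIT:
        return "early"
    if deviation > LATE_LIMIT:
        return "late"
    return "on_time"

def wait_assessment_passes(scheduled_headway, observed_headway, peak):
    """A frequent-service interval passes if it is within the headway tolerance."""
    tolerance = PEAK_TOLERANCE if peak else OFFPEAK_TOLERANCE
    return observed_headway <= scheduled_headway + tolerance

# Example: a 14-minute gap on a 10-minute scheduled headway fails in the peak period.
print(wait_assessment_passes(timedelta(minutes=10), timedelta(minutes=14), peak=True))  # False
```

With per-trip classifications like these, the share of on-time trips on a route can then be compared against the 60 percent threshold that triggers management action.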

NYCT takes a very data-driven approach through a system that is being rolled out and provides bus operations data through GPS feeds daily. The system provides more than six million records per day, and it is anticipated that this data system, and the changes it will require to fully support internal and external processes and procedures, will be fully operational within two years. The real-time and location data is estimated to capture service performance information with 85 percent accuracy. This data is supplemented by and integrated with other data sets, including timekeeping registered at the depots and supervisor observation reports.

NYCT reviews and processes data on a daily, weekly, and monthly basis. Data is reviewed for the quarterly operator pick (schedule adjustment) and by time period and compared, for example, with the previous period. Daily reviews are done at the Route Manager level for the routes within their responsibility. These daily reports highlight any routes with specific issues. In addition, there are offices in every borough where dispatchers and superintendents assist in the daily management of services. Real-time bus activity can be seen by dispatchers on their desktop computers, as well as by supervisors located at various terminals and in vehicles.

It is anticipated that these data feeds will eventually be integrated with other sources of information, such as customer-reported information, which is currently registered and tracked in a separate document called the Customer Relationship Management (CRM) report. This integration process is currently in development. Currently there is no systematic analysis of social media to identify trends in reliability, but Customer Communications shares what insights it can glean.

The relationship between the Road Supervision and Scheduling Departments is changing rapidly and for the better. In the past, running time data was gathered through a two-day survey once every three years. The more dynamic data cycle allows NYCT to pinpoint problems with precision and adjust running times on a far more regular basis. These systems and processes are still in development.

One area that continues to affect service reliability is the availability of peak buses. When there is a shortage of buses, reliability will suffer, as there may not be spare buses to correct for breakdowns.

Sample reports presented to the NYCT Board are illustrated in Figure 2.1 below and in Tables 2.1, 2.2 and 2.3 on the following pages.

Figure 2.1 – NYCT Bus Weekday Wait Assessment

Table 2.1 – NYCT Bus On-Time Performance Summary by Route

Table 2.2 – NYCT Bus Performance Indicator Summary

Table 2.3 – NYCT Road Calls Service Availability

Diagnosing and Treating Reliability Issues

There is coordination and communication between the borough offices and the command centers. Dispatchers use a color-coding system to communicate to supervisors those situations that need immediate attention and incident management, such as accidents, road calls, or other operator-driven communication.

Performance reports are reviewed daily, and those analyses flag routes or locations where attention or activity is required. The initial response is for the supervisors to review the reports and then conduct observations. The "big data" availability allows supervisors to review performance by timepoint and by time of day, drilling down to specific problems and locations. Where warranted, transit personnel coordinate with the local police precinct to enforce parking violations, coordinate with the NYC Department of Transportation (DOT) on traffic signal problems, or reinstruct problem bus operators. Where appropriate, managers will communicate with partner agencies and stakeholders, or request field meetings with DOT staff to review a specific location or issue that has developed. The intent is to employ problem-solving techniques at the lowest staff level. A very common technique is to lengthen or relocate bus stops in busy areas. When these approaches are not successful, managers will escalate to appropriate partners and stakeholders to solve problems collectively. When escalation is not successful, NYCT and NYCDOT may jointly undertake a Bus Corridor Study.

Generally, transit personnel address issues related to customers, origin-destination data, dwell time, travel time, bus operations, and bus operator performance. City DOT personnel are responsible for traffic signals, signs, lane markings, bus stop length and location, and shelters. NYCDOT has a Transit Development Team that focuses specifically on surface transit issues. Other possible approaches include transit signal priority and the development of queue jumps, with the intent of making service more responsive to road conditions and supporting infrastructure.

Bus bunching reports are reviewed daily using the tools described. If bunching is occurring, typical tools Road Supervisors employ include holding a bus for time or skipping a stop or stops if another bus is close behind. These techniques are situational and dependent on local conditions, recognizing factors such as proximity to the peak load point and ensuring customers can reach transfer locations to meet their needs.

Evaluating the Reliability Improvement Program

NYCT is developing new policies, procedures, and tools, including processes to help customers provide more actionable input into the customer database so that this information can be added to the growing data sets available for use. Follow-up reports comparing before and after conditions can help quantify the effects of instituting treatments such as bus-only lanes, transit signal priority, or off-board fare collection. A sample SBS report summary below shows the impact of changes implemented on a route to enhance service performance. The 86th Street crosstown corridor connects the dense and vibrant Manhattan neighborhoods of the Upper East Side and Upper West Side.
Although the M86 bus route serving the corridor had the highest per-mile ridership in New York City, in recent years ridership had been dropping due to rising travel times and declining reliability. This made the route a strong candidate for Select Bus Service (SBS) conversion. Through targeted street treatments at problem intersections, the introduction of off-board fare payment, and an array of bus customer amenities and safety upgrades, NYCDOT and MTA NYC Transit have worked to improve this underperforming route. Since launching in July 2015, the M86 SBS route has shown improvement across the board, including:

- 7% growth in ridership
- 8-11% decrease in travel time
- 10% improvement in reliability
- 96% customer satisfaction rating

The improvements along this corridor serve as a model for similar short, high-ridership crosstown routes, with the M79 along 79th Street scheduled for SBS implementation in spring 2017. It should be noted that Second Avenue Subway service began operation a year and a half after the start of the M86 SBS. It is expected that the subway extension will have an effect on M86 SBS ridership, and MTA NYCT will monitor possible ridership changes in the coming months.

2.1.2 Chicago Transit Authority

Service area: 309 sq. miles
Area population: 3,272,295
Number of bus routes: 140
Size of fixed-route bus fleet: 1,572 buses
Annual passenger miles: 633.6 million
Annual trips: 259.1 million
Annual vehicle revenue miles: 52.3 million

The TCRP A-42 interview with the Chicago Transit Authority (CTA) was held September 13, 2017 with the following individuals:
- Mersija Besic (Director, Performance Management)
- Derrick McFarland (General Manager, Bus Service Management)
- Elias Mechaber (Senior Analyst, Performance Management)
- Bonnie Fan (Senior Analyst, Performance Management)
- Jon Czerwinski (Director, Scheduling)
- Marlene Connor (A-42 Research Team)
- Jim McLaughlin (A-42 Research Team)

CTA operates 140 bus routes over a 314-square-mile area, serving 3.43 million people in the greater Chicago metropolitan area. CTA operates a bus system that reported 259 million bus boardings in 2016. With such an extensive system, CTA has a strong focus on transit reliability. In 2011, CTA implemented broad-scale improvements in how it manages transit reliability. This section summarizes best practices from CTA's ongoing efforts to manage and monitor bus system performance, considering the availability of technology to measure and monitor performance, including reliability.

Reliability Program Organizational Structure

At CTA, several departments are involved in measuring and monitoring transit reliability, including the Performance Management Department (Administration), Bus Service Management (Operations), Service Planning and Traffic Engineering, and Scheduling. In addition, IT plays a role in technology development and deployment. A large part of the staff focus is on reliability. Of these departments, the Scheduling Department (13 people) has the largest staff.

Reliability Measures and Standards

The CTA Department of Performance Management oversees on-time performance and reliability and uses three primary measures for bus service reliability, based on AVL system data:
- On-Time Terminal Departures: one minute early to five minutes late (-1 to +5).
- Large Gaps in Service: measured from a fixed point, with two buses more than 15 minutes apart or more than double the scheduled headway apart.

- Bus Bunching: measured as two buses passing a fixed point within 60 seconds of each other, using departure from that point as the measure.

The purpose of CTA's monthly performance metrics is to set internal goals for agency performance to encourage improvement and establish accountability. CTA uses a systemwide goal of at least 80 percent for On-Time Terminal Departures. Goals for headways are a maximum of 4 percent large-gap headways and 3 percent bunched headways.

Reliability Data Collection, Analysis and Reporting

Most recently, CTA has been utilizing Clever Cad, a robust system that enables dispatchers to communicate directly with transit vehicles and manage routes more effectively. It delivers greater efficiency and security to bus operations by providing dispatchers and supervisors with a clear, real-time picture of the location and status of every in-service vehicle using AVL information. Supervisors use a tool called Real-Time Bus Management (RTBM), installed on computers in the mobile units, which identifies actual bus arrival times by location. The Ventra next bus and next train apps also make information on services and vehicle locations available to the public.

Representative Key Performance Indicator (KPI) measures are reported monthly to the CTA Board and cover a broad range of metrics for both bus and rail service. These reports are published monthly on the CTA website, and a sample is provided in Table 2.4. In the example presented, green boxes indicate that CTA met or exceeded its monthly target, and yellow boxes mean that CTA came within 10 percent of the monthly performance target. Targets missed by more than 10 percent are indicated by a red box, and these routes are targets for further evaluation and analysis. In April 2017, CTA met, exceeded, or came within 10 percent of the agency's monthly internal targets in nearly all categories for bus and rail, including the following bus reliability measures:
- Percentage of big gap intervals in bus service
- Percentage of bunched intervals in bus service
- Mean miles between reported rail and bus vehicle defects
- Average daily percent of bus and rail fleet availability

A run time analysis is conducted for every pick, looking at running time internally, including level of service. Service performance is monitored daily as well as monthly by route. Daily performance is reviewed internally by Bus Service Management via information relayed through the control center. Most street supervisors are mobile and assigned to monitor routes either with delays or interruptions reported by operators or based on previously identified service problems. Five supervisors are posted at key locations, primarily train terminals.

Table 2.4 – CTA Monthly Performance
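The gap and bunching definitions above translate directly into classification rules applied to the sequence of observed headways at a fixed point. The sketch below is a hypothetical illustration of those rules, not CTA's actual reporting code; the sample headways are invented, and the resulting shares would be compared against the goals of no more than 4 percent big-gap intervals and 3 percent bunched intervals.

```python
def classify_headway(observed_min, scheduled_min):
    """Classify one observed interval at a fixed point using the CTA-style rules above."""
    if observed_min <= 1.0:                                   # two buses within 60 seconds
        return "bunched"
    if observed_min > 15.0 or observed_min > 2 * scheduled_min:
        return "big_gap"
    return "normal"

def interval_shares(observed_headways_min, scheduled_min):
    """Percent of observed intervals that are bunched or big gaps."""
    labels = [classify_headway(h, scheduled_min) for h in observed_headways_min]
    n = len(labels)
    return {
        "pct_bunched": 100.0 * labels.count("bunched") / n,
        "pct_big_gap": 100.0 * labels.count("big_gap") / n,
    }

# Invented example: observed headways (minutes) on a route scheduled every 8 minutes.
observed = [7.5, 0.8, 16.2, 8.1, 9.0, 7.9, 0.9, 8.3, 8.0, 17.5]
print(interval_shares(observed, scheduled_min=8))
# {'pct_bunched': 20.0, 'pct_big_gap': 20.0} -- compared against the 3% and 4% goals
```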

Diagnosing and Treating Reliability Issues

Routes selected for on-street supervision are prioritized based on several criteria, such as routes with the highest ridership, how long it has been since running time was evaluated, changes to street and area conditions, and where the three primary reliability measures (departures, bunching, or gaps) have been flagged as below target. In addition to the supervisors, dispatchers at the control center monitor and relay information on service delays or interruptions via the Clever Cad system. Depending on daily conditions, dispatchers can request field units (supervisors or spare buses) to assist in restoring service.

If operators on the street run into unplanned events, such as accidents or other blockages, and need to consider rerouting, they report to the control center and a supervisor is dispatched to the field to determine a course of action. If a reroute needs to be scheduled, supervisors are set up on both ends of the reroute. The control center communicates to all operators if there is a planned reroute, and it creates service bulletins and information that is disseminated through the CTA website to all garages. Information is also distributed to the public through the CTA Twitter account. On a day-to-day basis, a "gap" bus (a bus with a driver that is strategically positioned to go into service on an as-needed basis) can be scheduled, but that is primarily the exception, and staff work to manage situations in the field.

CTA uses a process called the Schedule Capacity Form to report when operators indicate that they do not have sufficient time in their schedule. Those forms are also submitted to Service Planning staff, who coordinate with Scheduling and determine whether a run time analysis is required before the next operator pick.

Causes of unreliability are primarily identified by supervisors doing field observations, who also work with the Service Planning Department. Issues identified are first routed to Bus Service Management, where field observations are conducted for up to four days at the locations where issues were identified. After the field observations, reports are prepared. Planning and Operations meet regularly with the city to get updates on construction projects so that staff can make changes and adjustments as necessary. Service Planning looks at the busiest routes, tries to identify problem flow areas and long-term causes of slow operations, and then works with the city to develop longer-term improvements. Transit signal priority is used by CTA on several routes, along with other transit priority strategies such as dedicated transit lanes and queue jumps. CTA and the City of Chicago Department of Transportation (CDOT) meet regularly to discuss future corridors presently in the development stage for TSP application.

CTA also works closely with CDOT to provide input at various stages of a construction project. For short-term construction, CTA attends weekly meetings with the city's Permitting Department to advocate ways to maintain service reliability. Examples of what has been done in the past for construction reroutes:
1. Reducing the number of travel lanes instead of closing all travel lanes during construction.
2. Encouraging motorists to take one reroute and bus service a different reroute, with preference for the shortest reroute given to transit.
3. Maintaining transit service in one direction while construction is being done in the opposite direction.
4. Adjusting construction work hours or day types to minimize impacts to transit customers, i.e., whether the work can be done when transit service is not operating, such as weekends, evenings, and midday.

5. Requesting adjustments to traffic timings along the reroute to improve overall traffic flow, and removing parking to provide bus movement clearances along the reroute.

For long-term construction (six weeks or more), CTA works with CDOT to review maintenance of traffic (MOT) plans to develop the most efficient reroute with minimal impacts to customers. Similar techniques as for short-term construction are applied, but with longer lead time. One strategy applied is to convert a parking lane to a bus-only lane through the rerouted area to allow buses to bypass the congestion. Most bus schedules are not adjusted to account for construction delays, due to the uncertainty of the start and end dates and whether they coincide with the four scheduled pick changes a year, as well as a lack of available funds in the capital project budget to cover increased travel times and thus the added buses needed due to construction. CTA does not specifically collect data on individual construction project impacts, but feels it addresses customer concerns very responsively.

Evaluating the Reliability Improvement Program

From an evaluation perspective, the service planning process looks in depth at performance and tracks before-and-after performance when a run time analysis has been completed. The review looks to identify whether schedule adherence has improved, as well as at other metrics such as ridership and the other measures of performance. These reports are completed monthly for three to six months following a change.

2.1.3 VIA Metropolitan Transit

Service area: 1,213 sq. miles
Area population: 1,825,502
Number of bus routes: 93
Size of fixed-route bus fleet: 378 buses
Annual passenger miles: 158.3 million
Annual trips: 37.8 million
Annual vehicle revenue miles: 21.8 million

The TCRP A-42 interview with VIA Metropolitan Transit (VIA) was held August 14 and 17, 2017 with the following individuals:
- Tracy Manning (Manager, Service Planning and Scheduling)
- Manjiri Akalkotkar (Service Planning and Scheduling Coordinator)
- Michelle Garza (Operations Supervisor)
- Mark Vargas (Operations Supervisor)
- Marlene Connor (A-42 Research Team)
- Jim McLaughlin (A-42 Research Team)

VIA operates 93 bus routes over a 1,213-square-mile area, serving 1.8 million people in the greater San Antonio metropolitan area. VIA operates a bus system that reported 37.8 million bus boardings in 2016. Transit reliability is of increasing importance to VIA staff due to many internal and external factors, including the expansion of routes operated at more frequent headways, the changing profile of the community, and the increased availability of technology and other tools that gather and communicate information. In addition, the agency is hiring an individual with responsibility for developing a performance measurement program and communicating that information to the VIA Board and the community.

Reliability Program Organizational Structure

Within VIA, the lead responsibility for addressing bus service reliability rests with the Service Planning Department, which works collaboratively with both the IT Department, which gathers information

and publishes reports, and the Operations Department, which has responsibility for putting service on the street. With the expansion of routes operated at more frequent headways, there is a perception among planners that more people need to see the benefits of headway management. The communication of reliability may be expanded as the agency looks to highlight more performance measures. Balancing activity and input from field supervision and dispatch center staff, along with improving real-time data and arraying different data sets, requires a significant amount of resources. However, VIA staff are comfortable with the use of performance and headway management to address reliability.

Reliability Measures and Standards

There are two primary indicators that VIA uses to measure transit reliability: on-time performance and wait assessment.

On-time performance is used for all routes except the trunk portion of BRT routes and the Downtown Circulators. On-time performance measures schedule adherence at timepoints on each scheduled trip. A timepoint is considered on time by the automated system if the scheduled block arrives at the timepoint no more than 30 seconds early or 5 minutes late. However, early arrivals at the end of the line, where the primary concern is usually making transfer connections, are not flagged as early. Data anomalies such as delays caused by short-term detours, construction, train cycles, inclement weather, and other delays that exceed normal boundaries not accounted for in the schedules (e.g., flooding) are excluded. While trips arriving at any scheduled timepoint less than 30 seconds early are considered on time, operators are instructed not to depart any timepoint location ahead of the scheduled time. At transit centers and "super stops" that serve several less-than-frequent routes, a "pulse" (a timed transfer system in which as many routes as possible arrive and depart during the same interval every 30 or 60 minutes) is most desirable. While "pulses" are not part of the on-time performance measurements, the planning and scheduling of "pulse" routes consider these timed transfers a critical component of helping passengers reach their destinations in a timely manner. VIA defines on-time performance as satisfactory when the percentage of on-time transit trips arriving at all official timepoints does not fall below an 80 percent average, except on high-frequency routes.

Wait assessment is used where service is based on a constant headway and the schedule is published in terms of headways rather than specific arrival times. Wait assessment measures adherence to headways at each designated trip starting point. VIA defines wait assessment performance as satisfactory when the percentage of acceptable headways (measured at the starting point for the trip) does not fall below 80 percent. Headways between trips are considered acceptable if they are within five minutes of the scheduled headway. Wait assessment is employed on routes that operate at higher frequencies, such as the two high-frequency Primo (BRT) routes, which operate at 10-minute headways. This measure will continue to gain importance, as VIA will have 12 routes with 12-minute or better headways by January 2019.
A comprehensive operations analysis currently underway could also recommend an additional four corridors with 12-minute or better headways, which would bring the total to 16 routes/corridors where VIA would like to use headway management techniques.

VIA has used these measures historically, and the agency's ability to improve its processes has been assisted by AVL and APC data systems that have been upgraded and implemented over the last year. While managing this data requires significant resources, staff are comfortable with the use of on-time performance and wait assessment to address reliability.
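Both VIA indicators reduce to simple threshold checks against the definitions given above. The following sketch is a minimal, hypothetical illustration (not VIA's automated system, and the sample data are invented): an arrival counts as on time if it is no more than 30 seconds early and no more than 5 minutes late, with early arrivals at the end of the line exempted; a headway counts as acceptable if it is within five minutes of the scheduled headway; and 80 percent is the satisfactory threshold for either measure.

```python
# Illustrative thresholds drawn from the VIA definitions above (in seconds).
ON_TIME_EARLY_SEC = 30          # no more than 30 seconds early
ON_TIME_LATE_SEC = 5 * 60       # no more than 5 minutes late
HEADWAY_TOLERANCE_SEC = 5 * 60  # acceptable headway: within 5 minutes of schedule
TARGET_PCT = 80.0               # satisfactory threshold for both measures

def is_on_time(deviation_sec, end_of_line=False):
    """deviation_sec = actual minus scheduled arrival at a timepoint (negative = early)."""
    if end_of_line and deviation_sec < 0:
        return True  # early arrivals at the end of the line are not flagged as early
    return -ON_TIME_EARLY_SEC <= deviation_sec <= ON_TIME_LATE_SEC

def is_acceptable_headway(observed_sec, scheduled_sec):
    """A headway is acceptable if it is within five minutes of the scheduled headway."""
    return abs(observed_sec - scheduled_sec) <= HEADWAY_TOLERANCE_SEC

def meets_target(flags):
    """flags: booleans, one per trip (on-time) or per interval (acceptable headway)."""
    return 100.0 * sum(flags) / len(flags) >= TARGET_PCT

# Invented example: eight timepoint deviations in seconds (negative = early).
deviations = [-20, 45, 310, -40, 120, 0, 290, 600]
print(meets_target([is_on_time(d) for d in deviations]))  # False: 5 of 8 on time (62.5%)
```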

Reliability Data Collection, Analysis and Reporting

VIA staff have used a variety of data sources for reliability measurement, including several types of technology (e.g., APC and AVL data). The dispatch center views the data feeds, input from field supervisors, and data and input from customers received through established customer service processes and systems, including social media. Supervisors use tablets in their vehicles to see buses in real time, similar to what the dispatchers at the central terminal see. IT gathers and communicates all the reports, including daily and monthly average on-time performance. Dispatchers review daily radio reports from operators to monitor anomalies such as late pull-outs (both AM and PM), incidents, delays, missed trips, and turn-backs.

Several data-related issues were noted. First, staff believe there is a delay in pushing out real-time information; for example, 30-second cycles cause breaks in the data, so there is an inherent delay and real-time information is not communicated to riders. Also, since data is collected on multiple systems that are not merged (e.g., CAD/AVL and APC), there is an issue of managing and reporting data without expending further resources.

Diagnosing and Treating Reliability Issues

VIA's current process is to address reliability on a route basis, looking at those routes with reliability issues and developing an action plan. Those routes are reviewed to identify any specific issues, which could range from construction to congestion, and an action plan is developed. Results are reviewed monthly, building toward the three-times-per-year service adjustments and operator picks. The focus of the review is internal to VIA and includes multiple departments.

VIA has been specifically focusing on higher-frequency routes, especially its Primo BRT service, which operates as frequently as every ten minutes for much of the day. The process VIA has established is to review APC data, considering stop boarding data and dwell times, to look for efficiencies, and to validate that data against the AVL data also collected on board vehicles. This review determines at what times and locations delays are occurring. Initial steps are to look at stops, route interlines, and vehicle blocks. This review is used to make any changes to route structure. VIA has over time changed interlines, and ultimately broken previously interlined routes, to meet reliability standards. Schedules are developed based on both experience and new running time analyses.

As in many cities, there is currently a significant amount of construction activity in downtown San Antonio, which is the hub for many routes, including the two Primo services. There are consistently detours that have been impacting traffic and reliability, so the Service Planning Department adjusts run block times only during the three-times-per-year driver picks, as there are fewer resources available for reliability-related activities at other times. VIA participates in weekly construction meetings with the City of San Antonio to discuss traffic plans, road closures, and acceptable bus rerouting to minimize transit delay. For conditions needing immediate attention, operations staff can allocate four available spacer buses to augment or fill in for a run.
With the increase in the number of higher-frequency routes, there is a recommendation for a dedicated dispatcher to monitor those routes, which should also improve reliability.

In 2017, VIA participated in a pilot test with the Georgia Institute of Technology to assess tools for measuring transit reliability on two routes. The test indicated that there was value in maintaining headways mid-route for longer-distance services using field supervision. However,

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-19 sustaining that level of personnel deployment is currently not feasible from a budgeting perspective. The headway maintenance was especially beneficial in decreasing bus bunching which was adversely affecting reliability. It was anticipated that the dedicated dispatch person would be able to improve bus spacing management by communicating with operators and field supervisors. VIA has shown an interest in reaching out to others and believes that balancing input from field supervision and dispatch center monitoring can be effective in reducing bus bunching and reliability impacts at mid-route time points. Staff noted that there could be a benefit to giving supervisors visuals at midpoints and the ends of the route to keep operations on track and make real-time adjustments to address bus bunching, particularly as runs will be operating at higher frequencies. VIA is also working on implementing a smart card program which should also assist in increasing the speed of passenger boarding. For its Primo routes, VIA has also instituted back facing self- securement devices for persons with disabilities which has made the boarding process faster. The street running BRT also is connected to a transit signal priority system which is operated by the City Traffic and Engineering Departments, thus the total data are not available to VIA. VIA staff expressed the view that transit signal priority, which they believe has been appropriately modified to only be active for late-running buses, does not have a significant impact on running times and reliability, based on run time comparisons with vs. without TSP. Evaluating the Reliability Improvement Program The increase in the number of high frequency routes will likely increase the emphasis on reliability in overall service planning and delivery. VIA is working to improve the organization of information and lessons learned. Current reports are good, but are organized in PDF format, requiring personnel resources to retrieve the various reports, merge data, and determine how to use all the information more holistically.
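The bus bunching that the headway-maintenance pilot targeted can be detected with a simple check on observed headways. The sketch below flags consecutive buses running closer together than a set fraction of the scheduled headway; the one-half threshold, the data layout, and the function name are illustrative assumptions rather than VIA's or Georgia Tech's method.

# Illustrative sketch: flag bus bunching at a mid-route timepoint.
# Assumptions (not VIA's actual rule): arrivals are minutes past the hour
# for one stop and direction, and two buses are "bunched" when the observed
# headway is less than half of the scheduled headway.
def find_bunching(arrival_times_min, scheduled_headway_min, threshold=0.5):
    """Return (leader, follower, observed_headway) for bunched pairs."""
    arrivals = sorted(arrival_times_min)
    bunched = []
    for leader, follower in zip(arrivals, arrivals[1:]):
        observed = follower - leader
        if observed < threshold * scheduled_headway_min:
            bunched.append((leader, follower, observed))
    return bunched

# Ten-minute scheduled headway (as on the Primo routes for much of the day).
arrivals = [0.0, 9.5, 12.0, 22.5, 24.0, 34.0]
for leader, follower, obs in find_bunching(arrivals, scheduled_headway_min=10):
    print(f"Bunched pair at {leader:.1f} and {follower:.1f} min (headway {obs:.1f} min)")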

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-20 2.1.4 Regional Transportation District Service Area Area Population Number of Bus Routes Size of Fixed-Route Bus Fleet Annual Passenger Miles Annual Trips Annual Vehicle Revenue Miles 2,342 sq. miles 2,920,000 138 873 buses 338.6 million 73.3 million 36.8 million The TCRP A-42 interview with Regional Transportation District (RTD) was held September 5, 2017 with the following individuals:  Jeff Becker (Senior Manager of Service Development)  Tim Lesaro (General Superintendent of Street Operations)  Doug Monroe (Service Planner), Natalie Hunt (Service Planner)  Jeff Dunning (Senior Service Planner Scheduler)  Jessie Carter (Manager Service Planning)  Jonathan Wade (Manager of Support)  Marlene Connor (A-42 Research Team)  Jim McLaughlin (A-42 Research Team) RTD operates (with both in-house and contracted operations) 138 bus routes over a 2,340-square mile area, serving 2.92 million people in the greater Denver metropolitan area. RTD currently operates a bus system that reported 73.3 million bus boardings in 2016. As Denver RTD continues to grow, the agency places increasing importance on transit reliability. This section summarizes best practices from RTD’s ongoing efforts to manage and monitor bus system performance. Reliability Program Organizational Structure At RTD, the Service Planning team manages on-time performance with support of the Operations and IT teams and regular oversight of the RTD Executive Team. RTD’s new acting Chief Operating Officer has been working to provide weekly reliability updates to the RTD Senior Leadership team for discussion and quarterly reliability reports to the RTD Board. In addition, there is a weekly KPI report to the Executive Team. Reliability Measures and Standards Denver RTD, along with most other large transit agencies, uses on-time performance (at system and route level) to measure fixed-route bus service reliability. System wide, RTD’s primary standard for on-time performance is one minute early (-1) to five minutes late (+5), with a goal of maintaining bus reliability at 88 percent for local bus service (scheduled to transition to 86 percent in 2018) and 94 percent for regional bus service. Recently, RTD engaged in an on-time performance review of their bus operations, to understand how their agency is meeting established goals and identifying underlying causes for unreliability. This initial performance review approach used AVL/APC data to calculate each operator’s on- time performance leaving the first timepoint. RTD chose the first timepoint for this study effort specifically as a comparison point because it is the activity which is most under the operator’s and scheduler’s control, and thereby least affected by other sources of unreliability. This initial approach focused on supporting operators by uncovering specific reasons why they were experiencing any difficulties leaving the first timepoint. For purposes of this specific review, first timepoint on-time performance was categorized as follows for leaving the stations:  Very Early: 30 minutes early to 5 minutes early  Early: 5 minutes early to 15 seconds early  Exactly On-Time: 15 seconds early to 1 minute late

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-21  Late: 1 minute to 5 minutes late  Very Late: 5 minutes to 30 minutes late In addition to on-time performance, RTD monitors reliability factors by trip and by time period, including: miles between road calls, service availability, customer response time and complaints and variability of running time (on a trip-by-trip basis). RTD contracts out between 43 and 45 percent of their fixed-route bus operations. Their contracts ensure consistency with the RTD stated performance goals and contain liquidated damages clauses for failure to meet stated goals. Listed in Tables 2.5, 2.6, and 2.7 are KPI measures, reported monthly by service type, that illustrate on-time performance, service availability, and adherence to revenue service trip start time for RTD Board Reports on a system wide basis. RTD also evaluates customer, operator, contractor, and social media input to evaluate the need for a further analysis and review of potential service modifications. Comments trigger a review of data to determine whether it is a recurring situation. Table 2.5 - RTD Performance Measures On-Time Performance Table 2.6 - RTD Performance Measures – Service Availability (Missed Trips) Table 2.7 - RTD Performance Measures – Trip (Scheduled) Start Time
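RTD's first-timepoint categories can be expressed directly as bands on the departure deviation. The sketch below encodes the bands listed above, with deviation measured in seconds (negative values are early); the exact handling of values falling on a boundary is an assumption made for illustration.

# Sketch of RTD's first-timepoint departure categories (seconds of deviation,
# negative = early, positive = late). Which side of a band an exact boundary
# value falls on is an assumption for illustration.
def first_timepoint_category(deviation_sec):
    if -1800 <= deviation_sec < -300:
        return "Very Early"        # 30 minutes early to 5 minutes early
    if -300 <= deviation_sec < -15:
        return "Early"             # 5 minutes early to 15 seconds early
    if -15 <= deviation_sec <= 60:
        return "Exactly On-Time"   # 15 seconds early to 1 minute late
    if 60 < deviation_sec <= 300:
        return "Late"              # 1 minute to 5 minutes late
    if 300 < deviation_sec <= 1800:
        return "Very Late"         # 5 minutes to 30 minutes late
    return "Out of Range"

for dev in (-400, -60, 0, 120, 900):
    print(dev, first_timepoint_category(dev))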

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-22 Reliability Data Collection, Analysis and Reporting Beginning in 2014, RTD began using only INIT CAD/AVL/APC (Computer Aided Dispatch/Automated Vehicle Locator/APC) data along with their Station Starter data. The CAD/AVL/APC system is based on GPS measurements, and generates a record each time a bus door is opened, when the bus stops (including at traffic signals), when a bus begins moving again, and every 30 seconds during movement. If the bus is APC equipped (around 60 percent of the RTD fleet), passenger boardings and alightings are collected when the doors are opened. All records are stored in a mobile computer on the bus and downloaded at the end of each day, and in addition, a smaller subset of the records are transmitted on an ongoing basis to the RTD Dispatch Center for real-time operations. RTD uses CAD/AVL/APC data to calculate on-time performance only at timepoints, and only for the departure time from each timepoint. In response to incorrect measurements that occur when operators do not pull completely into a stop at a start/end terminal, RTD instituted a policy of not using the terminus of each route pattern for calculating on-time performance. RTD collects transit signal priority data across its system, but not consistently due to jurisdictional coordination challenges. RTD operates transit signal priority on the main arterial roadway pairs (Broadway Street and Lincoln Street) in downtown Denver, as well as a peak hour bus lane recently converted to a 24-hour operation. RTD is currently piloting transit signal priority technology using their on-board INIT-AVL/APC system, which will send a request via cellular communications to a traffic controller with a modem installed at a few locations. They had previously used an Opticom System, which was both expensive and challenging as every jurisdiction has a different signal system, making that system unworkable from a regional perspective. In addition to electronic data, RTD uses Road Supervisors and Station Starters to perform physical on-time performance checks along routes and at stations. Road Supervisors are responsible for physically checking one route per day, by following buses from point to point and monitoring on-time performance. These check points can vary such as from start to end, from midpoint to end or any combination, thereof, which varies from day to day and route to route. Station Starters located at the Union Station and Civic Center Stations record the arrival time of buses entering the station using an electronic logging system. These logs are combined with the CAD/AVL/APC system data in the process of calculating on-time performance. If Station Starter data is available, it is used instead of CAD/AVL/APC data. While they compile data each month, the data reported to the RTD Board is always a summary from the beginning of the year to the end of the evaluation month RTD’s planning team uses transit reliability data to perform several types of analysis, such as by route, bus run, and time of day. This includes a monthly on-time performance review which is evaluated through observations by bus operating company by service type. Due to the large quantity of data generated (over one million observations each month) from each CAD/AVL/APC tracker, station logs, and supervisor reports, RTD developed a prioritization process for data review and usage. 
For statistical integrity, the order of preference for selecting data sets for a single event is: station log, AVL, and service monitor data. RTD has a concept called ‘free running time’ on many routes. If the route is operating in ‘free running time’ it means the operator has permission to leave the timepoint early. This usually takes place on the drop-off only portions of express routes where they do not want the operator to stop and ‘burn off’ time in a suburban neighborhood if the bus happens to be early that day because of light traffic on the street system. RTD does not use any data for calculating on-time performance that occurs on portions of routes that operate in ‘free running time’.
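The source-preference order and the 'free running time' exclusion described above amount to two filtering steps applied before on-time performance is calculated. The sketch below illustrates both; the field names, the event-matching key, and the way free-running segments are identified are assumptions for this example, not RTD's implementation.

# Illustrative sketch of the two filtering rules described above:
#   1) prefer station log > AVL > service monitor when one event has records
#      from more than one source;
#   2) drop observations on route segments operating in 'free running time'.
# Field names and structures are assumptions for this example.
SOURCE_PREFERENCE = {"station_log": 0, "avl": 1, "service_monitor": 2}

def select_preferred(observations):
    """Keep one observation per (trip, timepoint), using the preferred source."""
    best = {}
    for obs in observations:
        key = (obs["trip_id"], obs["timepoint"])
        if key not in best or SOURCE_PREFERENCE[obs["source"]] < SOURCE_PREFERENCE[best[key]["source"]]:
            best[key] = obs
    return list(best.values())

def drop_free_running(observations, free_running_segments):
    """Exclude timepoints on segments where early departure is permitted."""
    return [o for o in observations
            if (o["route"], o["timepoint"]) not in free_running_segments]

obs = [
    {"trip_id": 101, "timepoint": "Union Station", "route": "AB", "source": "avl", "dev_min": 1.2},
    {"trip_id": 101, "timepoint": "Union Station", "route": "AB", "source": "station_log", "dev_min": 0.8},
    {"trip_id": 101, "timepoint": "Oak & 5th", "route": "AB", "source": "avl", "dev_min": -3.0},
]
free = {("AB", "Oak & 5th")}   # hypothetical drop-off-only segment
print(drop_free_running(select_preferred(obs), free))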

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-23 RTD does not include on-time data from some days which are ‘atypical’, e.g. due to snow when there are a significant number of buses that do not run on time. Their criterion is that if more than 30 percent of the buses arrive late at Union Station and Civic Center Station, they consider the date an ‘atypical day’ and that day is not counted toward on-time performance. For example, in 2013, 27 days were eliminated because they were considered ‘atypical’, but this was higher than usual because of flooding. While RTD compiles data each month, the data reported to the RTD Board is always a summary from the beginning of the year to the end of the evaluation month. For example, June data would be the summary of data from January 1 to June 30 of that year. Starting in 2014, route detours and lane closures were considered in the on-time performance calculations. For example, those routes with lane closures or rerouting due to construction, accidents, bad weather and other transportation issues were excluded from on-time performance analysis from the date and time of the beginning of the notice until its expiration. Those adjusted numbers are then reported in the on-time performance report each month. Diagnosing and Treating Reliability Issues By monitoring and reviewing data reports from the APC/AVL system, RTD can determine routes that are not operating per their on-time standards. Those routes are assigned street supervisors to conduct specific observations and determine whether there are issues with a specific operator, delays due to temporary conditions or a long-term condition that requires development of a specific action plan for the next bid cycle. Daily, dispatchers can communicate which routes need additional observation using a color-coded prioritization system. Supervisors focus on trying to prevent early departures and to get operators into position appropriately, using a loop bus2 if necessary and available. Where issues are identified with respect to meeting schedules at timepoints, RTD also conducts running time analyses to make changes to run blocks without requiring additional funds. RTD looks for extreme values for routes at the first time point. Looking further into a query, e.g., did the bus arrive at a final point in time or was it late? Did the bus make its interline time or pullout time? RTD serves several jurisdictions, and has developed a good working relationship with cities, counties and CDOT to get daily reports of items such as permits for special events and construction, and prioritizing snow removal of bus routes. Lead time for making schedule adjustments can vary based on the construction project situation. This information allows RTD to proactively make service adjustments and avoid potential detrimental impacts on reliability. RTD communicates detours and other schedule changes to its riders due to construction through email and text rider alerts, and deploys flyers and signs for longer term detours. Evaluating the Reliability Improvement Program Recently, RTD completed a stop consolidation analysis that included research on usage, boarding times and through routing, and developed a stop consolidation program. On one route, for example, approximately two dozen stops were consolidated and on a new bus lane in Denver, four of 18 stops were removed. They anticipate conducting a follow-up analysis of travel time, speed, and boardings for comparison. 
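The 'atypical day' screen and the year-to-date reporting convention described above are both mechanical rules once arrival records are available. The sketch below applies the 30 percent late-arrival threshold and a cumulative year-to-date roll-up; the input layouts and the binary late flag are assumptions for illustration.

# Sketch of the 'atypical day' screen: if more than 30 percent of buses
# arrive late at Union Station and Civic Center Station on a given day,
# that day is dropped from on-time performance. Input layouts are assumed.
def atypical_days(arrivals_by_day, threshold=0.30):
    """arrivals_by_day: {date: [True if late, False otherwise, ...]}."""
    flagged = []
    for day, late_flags in arrivals_by_day.items():
        if late_flags and sum(late_flags) / len(late_flags) > threshold:
            flagged.append(day)
    return flagged

def year_to_date_otp(daily_on_time_counts, excluded_days):
    """Cumulative OTP through the evaluation month, skipping atypical days."""
    on_time = total = 0
    for day, (n_on_time, n_total) in daily_on_time_counts.items():
        if day in excluded_days:
            continue
        on_time += n_on_time
        total += n_total
    return 100.0 * on_time / total if total else None

arrivals = {"2013-09-12": [True] * 40 + [False] * 60,   # 40% late -> atypical
            "2013-09-13": [True] * 10 + [False] * 90}
print(atypical_days(arrivals))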
2.1.5 Los Angeles Metro
Service Area
Area: 1,513 sq. miles
Population: 8,626,817
Number of Bus Routes: 172
Size of Fixed-Route Bus Fleet: 1,902 buses
Annual Passenger Miles: 1.3 billion
Annual Trips: 312.8 million
Annual Vehicle Revenue Miles: 74.2 million
2 A "loop bus" at RTD is a bus with driver that is strategically positioned to be able to go into service on an as needed basis.

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-24 The TCRP A-42 interview with Los Angeles County Metropolitan Transportation Authority (Metro) was held September 25, 2017 with the following individuals:  Dan Nguyen (Deputy Executive Officer, Operations)  Steve Rank (Superintendent Bus Transportation)  Diane Frazier (Superintendent Bus Transportation)  James Pachan (Superintendent Bus Maintenance)  Donell Harris (Superintendent Bus Maintenance)  Conan Cheung (Senior Executive Officer, Service Development, Scheduling and Analysis)  Al Martinez (Senior Director, IT)  Marlene Connor (A-42 Research Team)  Jim McLaughlin (A-42 Research Team) Metro operates 172 bus routes over a 1,513-square mile area, serving 8.6 million people in the greater Los Angeles metropolitan area. Metro operates a bus system that reported 312.8 million bus boardings in 2016. System reliability and on-time performance is an overarching and continuing priority for Metro. While there have been specific Reliability Task Forces within the agency from time to time, none are underway currently. With decreasing vehicle speed an ongoing concern, overall improvement would require political will from a regional perspective to employ treatments such as bus lanes and transit signal priority improvements. Metro is in the process of preparing for a complete system restructure review soon. Before such a review, the agency is looking to bring in engineering support to look at hot spot treatments such as bus lanes, queue jumps, and signal priority for major corridors, that can assist bus movement through congested intersections. Reliability Program Organizational Structure At Metro, transit reliability is the collaborative responsibility of several departments, including Transportation, IT, Planning, Scheduling and Maintenance. The weekly material management reports are given to 11 agency directors, ten managers and the executive management team. From a community stakeholder perspective, the on-time performance and other KPI’s are reported to the MTA’s five Service Councils who are engaged daily and hence have more detailed perspectives than the MTA Board. Members of these Councils are appointed by the MTA Board or the Council of Governments. These Councils have jurisdiction over service changes and are a front line to the community and riders. Reliability Measures and Standards Metro uses on-time performance to measure reliability. The on-time window ranges from minute early (-1) to five minutes late (+5), and is measured at timepoints. Metro also measures a range of maintenance-related factors for reliability, ensuring that they have well-maintained vehicles in the fleet to meet pull out demands. These measures include meeting KPI’s for preventive maintenance, road calls, and miles between mechanical failures. The standards applied for these measures are consistent with FTA and manufacturer guidelines and are primarily driven by mileage factors. The standard for preventive maintenance is that there should be no past due preventive maintenance (typically 6,000 miles between general inspections). The five bus divisions are expected to ensure that preventive maintenance is completed before it is due.
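Metro's expectation of no past-due preventive maintenance can be checked mechanically against odometer readings. The sketch below flags buses that have exceeded the general-inspection interval; the record fields are assumptions, and the 6,000-mile interval is the typical figure quoted above.

# Illustrative check of the preventive-maintenance standard: no bus should
# exceed the general-inspection interval (typically 6,000 miles) before its
# next inspection. Record fields are assumptions for this sketch.
INSPECTION_INTERVAL_MILES = 6000

def past_due_buses(fleet):
    """fleet: list of dicts with 'bus_id', 'odometer', 'last_inspection_odometer'."""
    overdue = []
    for bus in fleet:
        miles_since = bus["odometer"] - bus["last_inspection_odometer"]
        if miles_since > INSPECTION_INTERVAL_MILES:
            overdue.append((bus["bus_id"], miles_since - INSPECTION_INTERVAL_MILES))
    return overdue

fleet = [{"bus_id": "5501", "odometer": 182400, "last_inspection_odometer": 175900},
         {"bus_id": "5502", "odometer": 90310, "last_inspection_odometer": 88000}]
for bus_id, miles_over in past_due_buses(fleet):
    print(f"Bus {bus_id} is {miles_over} miles past due for inspection")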

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-25 Reliability Data Collection, Analysis and Reporting Like many other transit agencies, Metro uses APC data to measure bus system reliability, recorded at timepoints. The same APC system has been in use since 2004 and generally works well, although plans are in development to move to the next generation for this technology to improve accuracy and reduce required maintenance. The Metro Scheduling Department relies on robust APC data sets by time of day and day of week and by operator for updates. For the most part, since the on-time performance data is used in conjunction with ridership information, the bus routes that carry the most passengers are given the most attention for on-time performance monitoring. The Metro Transportation Department evaluates the APC data from the route and operator perspective to track individual performance and patterns as they develop. Each week, reliability data is gathered and prepared into a material management report which is given to management and community stakeholders. In addition, Metro collects and tracks customer comments using their Customer Comment and Tracking System (CCATS). Metro staff reviews APC data on a daily and weekly basis, and each department has its own perspective on how that information is reviewed and actions which occur. For example:  Data Analysis Department reviews APC data information and prepares reports.  Planning Department looks to identify any root causes for on-time performance issues.  On-time performance data is available and shown on screens at the Control Centers.  Supervisors have the same information available on laptops on their mobile units and can view whether routes are running ahead of time, late, or are off route for some reason.  Supervisors are also located in the field and often positioned at timepoints which are known to be problematic. Terminal departures are not monitored specifically, but there are several high ridership and high priority routes which have dedicated supervisors assigned. The Customer Relations team at Metro tracks and reviews customer complaints and accidents and determines which division is involved as the subject of the filing. Bus Divisions have 15 days to respond to customer complaints with a variety of actions, including field investigation, data gathering, and review. Complaint records are available for supervisors and executives to review, and some Metro departments have protocols in terms of how often complaint records are reviewed. The Transportation Department, for example, reviews customer complaint files on a weekly basis to view progress on each specific complaint. Diagnosing and Treating Reliability Issues If there is a problem on a route which is running late between two timepoints, supervisors conduct field reviews and then work specifically with the Scheduling Department to complete a run time analysis and determine if adjustments in time or route terminal need to occur. Like other transit agencies, Metro has experienced issues with operator vacancies, which impacts system reliability. For example, some of the laws that govern protected employee leaves impact their ability to be fully staffed. The current operator to assignment ratio goal for those operators active for assignment is set at 1.2, however LA Metro is experiencing a level of 1.14, which translates to a deficit of approximately 200 operators. 
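The operator shortfall quoted above follows from the ratio goal and the number of scheduled assignments. The short calculation below shows the arithmetic; the implied assignment count is an inference from the figures in the text, not a number reported by Metro.

# Back-of-the-envelope check of the operator deficit quoted above.
# If the goal is 1.2 active operators per assignment and the actual level is
# 1.14, a deficit of roughly 200 operators implies about 200 / (1.2 - 1.14),
# or roughly 3,300 assignments. The assignment count is an inference, not a
# Metro figure.
goal_ratio = 1.20
actual_ratio = 1.14
deficit_operators = 200

implied_assignments = deficit_operators / (goal_ratio - actual_ratio)
print(f"Implied assignments: {implied_assignments:,.0f}")
print(f"Operators needed at goal: {goal_ratio * implied_assignments:,.0f}")
print(f"Operators on hand: {actual_ratio * implied_assignments:,.0f}")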
In addition, increases in the California minimum wage have brought the general pay rate close to Metro's starting wage rate. In response, Metro raised its starting wage rate this year by $2.00 an hour. From a maintenance perspective, staffing issues relate more to ensuring appropriate training for mechanics on the evolving technologies associated with vehicles and other system components. Transit signal priority has been actively used in the Los Angeles metropolitan area for many years, but the system is independent of Metro, requiring ongoing inter-agency coordination. Since Metro serves many cities in Los Angeles County, there is not a consistent approach to

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-26 transit signal priority. There has been a goal of trying to establish a countywide program, but that is considered a long-term effort. Metro believes that while technology is available to improve reliability, through transit signal priority, queue jumps, or dedicated lanes, for example, there is no consistent political will across the various jurisdictions to establish these strategies for the benefit of transit. Thus, communicating the benefits of transit preference and priority, including enforcement of lane restrictions, for example, is a work in progress. Metro does have a relationship with the City of Los Angeles Bureau of Street Services which is the traditional conduit by which they receive information that might require service changes such as onetime events or construction, either long term or short term. Evaluating the Reliability Improvement Program Metro conducts evaluations on changes that relate to reliability and performance. For example, Figure 2.2 and 2.3 illustrate an example of a report completed for a change to the Silver Line BRT service where they implemented all door boarding to speed passenger boardings and reduce dwell time as part of their reliability improvement program. The recommendation is shown below as well as a chart which shows the impact of that change to on-time performance. Figure 2.2 – Metro Implementation of All-Door Boarding

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-27 Figure 2.3 – Metro Silver Line On-Time Performance 2.1.6 Transport for London Service Area Area Population Number of Bus Routes Size of Fixed-Route Bus Fleet Annual Passenger Miles Annual Trips Annual Vehicle Revenue Miles 977 sq. miles 8,788,000 675 routes 9,300 buses 5.16 billion 1.37 billion 305 million The TCRP A-42 interview with Transport for London (TfL) was held September 4, 2017 with the following individuals:  Shane Hymers (Buses Directorate, Planning Department, Policy)  Michael Smith (Buses Directorate, Planning Department, Policy)  Kenneth Cobb (A-42 Research Team)  Alan Danaher (A-42 Research Team)  Ted Orosz (A-42 Research Team) TfL operates an extensive system in the greater London metropolitan area that reported 1.37 billion bus boardings in 2016 over 977 square miles. Over the past decade, TfL has presided over a substantial increase in bus passenger journeys, with most of the bus trips in England now made in London. Since the mid-1990s, London’s bus services have been designed and planned by TfL, but contracted out to private operators. Reliability Program Organizational Structure At TfL, the primary responsibility for reliability management falls to the Bus Policy team. For the purposes of bus planning, the city is split into eight sectors – north, south, east and west for both inner and outer London. The Bus Policy team reviews reliability data with each lead TfL Bus Sector Planner to understand the cause of any decline in bus reliability (e.g. road works, changes in road conditions) and to develop solutions. Since the Bus Planners are geographically-based, they particularly have knowledge about their own area of London, which may be served by multiple operators. The Bus Policy team is also responsible for liaison with staff at the London boroughs, which involves dealing with local elected officials. In addition, the TfL Specifications Team reviews the details of route timetables and performance standards of operators, and the Contract Management Team carries out negotiation with operators, split into three regions. Each of the performance managers has two support staff (total of six support staff).

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-28 Reliability Measures and Standards For frequent services (every 12 minutes or more frequent), TfL uses Excess Wait Time (EWT) as the primary method of evaluation of reliability, which represents the additional wait experienced by passengers due to the irregular spacing of buses or those that failed to run. EWT is the difference between Scheduled Wait Time (SWT) and Average Wait Time (AWT) – frequent services represent around 80 percent of the TfL network. EWT results are published by individual service in each borough, as well as at the aggregated operator level. EWT statistics are calculated on a route-by-route basis using data from London’s CAD/AVL system (called iBus), measured at most scheduled timing points (rather than every bus stop) in both directions, between 5 AM and midnight every day. iBus allows TfL and bus operators to track the location of every bus in London by time and point. Since April 2012, iBus has been used to measure Quality of Service Indicators (QSIs), providing a major enhancement in:  time periods covered;  number of QSI points monitored; and  continuous monitoring. For non-frequent services (those with an advertised timetable, operating with four buses an hour or less frequently), TfL uses Percentage On-Time (POT), which is the percentage of journeys which operate “on-time”, defined as between 2 minutes early and 5 minutes late at fixed timing points compared to the schedule – non-frequent services represent around 20 percent of the TfL network. Compared to other definitions, TfL recognizes that this is relatively ‘generous’ to operators through its involvement with the International Bus Benchmarking Group (IBBG), which uses 1 minute early to 3 minutes late as the “on-time” window for its KPIs. The target on-time performance varies by non-frequent service route in the same way that each frequent route has its own EWT target, reflecting the characteristics of the route. The range of targets for each of the two basic reliability measures are shown below in Table 2.8. Table 2.8 – TfL Reliability Targets Reliability Targets EWT (Minutes) % On-Time Lowest 0.60 74.0 Highest 1.40 90.0 Separately, and only internally at this stage, TfL is monitoring average bus speed as a performance measure, since it is thought to be one of the main causes of the recent reduction in the number of journeys made by bus passengers in London. TfL is looking to use bus speed within an existing tool that uses passenger smartcards to estimate bus journey length, and potentially as part of another tool, “Run Time Variability,” which maps variation of the actual running time on a route (Figure 2.4). Once the use of these tools is refined, output will be mapped with passenger behavior to see what affects the variability that average bus speed has on passenger decision making (Figure 2.5).
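EWT for a high-frequency service is conventionally computed from headways rather than schedule deviations: for passengers arriving at random, the expected wait is the sum of squared headways divided by twice the sum of headways, and EWT is that quantity for observed headways minus the same quantity for scheduled headways. The sketch below applies this standard formulation; it illustrates the measure itself, not TfL's iBus processing.

# Illustrative EWT calculation for a high-frequency route segment.
# Expected wait for passengers arriving at random = sum(h^2) / (2 * sum(h)).
# EWT = AWT (from observed headways) - SWT (from scheduled headways).
# This is the standard formulation of the measure, not TfL's iBus code.
def expected_wait(headways_min):
    return sum(h * h for h in headways_min) / (2.0 * sum(headways_min))

def excess_wait_time(observed_headways_min, scheduled_headways_min):
    awt = expected_wait(observed_headways_min)
    swt = expected_wait(scheduled_headways_min)
    return awt - swt

scheduled = [6.0] * 10                                            # even 6-minute service
observed = [3.0, 9.0, 2.0, 10.0, 6.0, 6.0, 4.0, 8.0, 5.0, 7.0]    # same total, uneven
print(f"SWT = {expected_wait(scheduled):.2f} min")
print(f"AWT = {expected_wait(observed):.2f} min")
print(f"EWT = {excess_wait_time(observed, scheduled):.2f} min")

In this example the uneven spacing adds half a minute of excess wait even though every scheduled trip operated.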

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-29 Figure 2.4 – TfL Run Time Variability Example Figure 2.5 – TfL Passenger Journey Time Graph using ODX Data Reliability Data Collection, Analysis and Reporting iBus records bus movement on a continual basis, and separates reporting into daytime (05:00 to 23:59) and nighttime (00:00 to 04:59). Operator performance is compared against the minimum performance standard set in the contract for the route to determine any penalties or Quality Incentive Contract payments i.e. an operator with multiple routes will have multiple EWT targets, each tailored to the characteristics of the route. There is a single EWT target for the entire route, i.e. not separate targets for weekdays and weekends or different times of day. The volume of data generated from tracking each bus 24/7 is very significant and can cause some of TfL’s systems to run more slowly than would be ideal. Standard reports are therefore used to manage the analysis. iBus data is fed into Hyperion performance management software, which then automatically processes the data into standard reports. For further investigation, data is

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-30 exported into other software, but this is done before the raw data is deleted from iBus after 2 years. Data feeding standard reports in Hyperion is retained for a rolling two-year period and so creating a longer time series of data requires the results to be recorded separately Even with special events taking place in London on a regular occasion, most of the bus reliability data is maintained in overall performance scores. Service adjustments are planned for many event types, such as soccer matches or foreign heads of state visits. During service adjustments, revised scheduled data is fed into the iBus system, therefore retaining the performance score in the overall quarterly dataset. Unplanned diversions are not included within the performance scores as the iBus system cannot reconcile the actual tracking with the expected route, therefore data for such diversions is not recorded. Revised schedules will be loaded into iBus for weekends, single days or even part-day events if there is sufficient lead time for the schedule to be loaded (6 weeks minimum). Operators are not penalized for performance affected by traffic delays (unlike mechanical or staff issues). TfL aims to be transparent with their reliability and QSI data by making it available online, by borough and by bus route. Reports are published on the TfL website quarterly and to the London boroughs, the UK Department for Transport and London Travelwatch (the independent, statutory watchdog for transport users in and around London). Routes are reported individually, even if they share common sections with other routes. The EWT measure (an average across the whole network of around 700 bus routes) is on the Director of Buses’ performance scorecard and is therefore a key performance indicator for TfL in its entirety. Performance against the target is recorded by period and annually as a Red/Amber/Green (RAG status) dashboard. Diagnosing and Treating Reliability Issues In response to variable performance by the private operators and deteriorating reliability, TfL introduced the Quality Incentive Contract (QIC), which rewards operators for improving reliability of service. Under QICs, which retain deductions made for mileage lost for causes within the bus operator’s control (e.g. lack of staff and/or vehicles), Minimum Performance Standard (MPS) for reliability of service are set for operators. MPS are derived from the pre-existing performance level and are tightened progressively over time. Operators are rewarded financially for improvement (of up to 15 percent of the contract price), and can be financially penalized for failing to meet the target (by up to 10 percent). Operators are further incentivized for good performance by the prospect of a two-year extension of the service contract beyond the initial five years. As with any performance improvement strategy, it can be expected that the greatest progress can be delivered in the initial phase, with further improvements (and maintaining improved performance) being much harder to deliver as the issues outside the direct control of the operator will typically represent a greater proportion of the residual level of unreliability. Performance on high frequency routes has now improved so close to an EWT of 1 minute that scope for further improvement is both limited and likely to be disproportionately expensive to achieve, shown in Figure 2.6.
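The reward and penalty structure described above can be sketched as a scaling of the contract price against the route's Minimum Performance Standard. The mapping below from EWT improvement to a percentage adjustment is a hypothetical linear rule, capped at the +15 percent and -10 percent limits quoted above; it is not TfL's actual QIC formula.

# Hypothetical sketch of a Quality Incentive Contract adjustment.
# Operators can earn up to +15% of contract price for beating the route's
# Minimum Performance Standard (MPS) EWT, or lose up to -10% for missing it.
# The linear scaling per 0.1 minute of EWT is an illustrative assumption,
# not TfL's actual formula.
def qic_adjustment(contract_price, mps_ewt_min, actual_ewt_min,
                   pct_per_tenth_minute=3.0, max_bonus_pct=15.0, max_penalty_pct=10.0):
    improvement_tenths = (mps_ewt_min - actual_ewt_min) / 0.1
    pct = improvement_tenths * pct_per_tenth_minute
    pct = max(-max_penalty_pct, min(max_bonus_pct, pct))
    return contract_price * pct / 100.0

# A route with an MPS of 1.2 minutes EWT, delivered at 0.9 minutes.
print(f"Adjustment: {qic_adjustment(5_000_000, 1.2, 0.9):,.0f}")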

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-31 Figure 2.6 – Reliability Compared to Contract Type TfL is implementing several strategies to continue addressing reliability, as declining reliability is believed to be a contributing factor in the current reduction in bus ridership, including:  Roads Reliability Plan (RRP) – This is an amalgamation of work across TfL’s departments to ensure coherency and alignment of activity internally and with key stakeholders (e.g. local boroughs). Targets for the RRP include returning road and bus network operational performance to 2012/13 levels and delivering a 30 percent reduction in serious and severe disruption.  Bus Priority Programme (BPP) – This delivers targeted bus priority infrastructure to improve reliability and reduce bus travel times at key ‘hot spots’ and is delivered in partnership with the particular local borough. To date, the programme has focused on mitigating the impact of the wider Road Investment Programme on buses. Results from these strategies will be reviewed at the most senior levels of TfL as bus performance is now included as a Key Performance Indicator (KPI) in the Roads department’s business plan. Evaluating the Reliability Improvement Program As the principal measures of reliability are EWT and percent on-time, the outcomes of reliability improvement actions are considered in terms of the impact they make on the overall reliability score, fed by data from iBus, which can be analyzed over the section of route to which the reliability improvement action relates. The recent decline in the number of bus passenger journeys has renewed attention on bus operating speeds and reliability as key factors in determining choice of travel mode. This has prompted the new analytical methods which TfL is developing, although the concept of a contractual standard tailored to each route is so well established across TfL and its contracted operators that it is likely to remain the routine measure of performance. Although TfL has one of the lowest-emission bus fleets in Europe, air quality remains a major issue for London; measures to promote environmentally friendly modes such as walking and cycling are therefore reducing road space allocation for motorized traffic, and implementing physical bus priority measures such as bus lanes is likely to become ever more difficult in future.

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-32 2.2 Medium Transit Agencies 2.2.1 Southwest Ohio Regional Transit Agency Service Area Area Population Number of Bus Routes Size of Fixed-Route Bus Fleet Annual Passenger Miles Annual Trips Annual Vehicle Revenue Miles 262 sq. miles 845,303 40 299 buses 83.3 million 15.0 million 9.6 million The TCRP A-42 interview with Southwest Ohio Regional Transit Agency (SORTA) was held September 28, 2017 with the following individuals:  Ted Meyer (Manager of Planning & Scheduling)  Mark McEwan (Manager of Service Analysis)  Sean O'Leary (Director of Transit Operations)  Shaun Gatherwright (Radio Control Center Manager)  Eunice Brown (Operations Manager)  Carlos Rowland (Director of Maintenance)  Jeff Mundstock (Maintenance Manager)  Marlene Connor (A-42 Research Team)  Jim McLaughlin (A-42 Research Team) SORTA operates 40 bus routes over a 262-square mile area, serving 845,000 people in the greater Cincinnati metropolitan area. SORTA operates a bus system that reported 15.0 million bus boardings in 2016. At SORTA, maintaining bus system reliability is an integral part of the organization’s focus to manage and monitor the performance of their system. While there are a range of impacts which are out of their control, such as unpredictable traffic on the expressways, or the amount of construction activity, the agency believes the availability of data and information has made their work more effective both understanding trends and developing tools and techniques to assist in solving issues that arise. Reliability Program Organizational Structure There are several teams at SORTA that work together collaboratively including Planning, Scheduling, Radio Control, Transportation, and Maintenance. They work together frequently although this interaction has no formal structure. The staff also communicates regularly with the operators. Additionally, KPIs are regularly communicated with the SORTA Executive Team and Board of Directors. From a planning and scheduling perspective, SORTA works closely with the different jurisdictions, most closely with the City of Cincinnati staff and Police Department, especially with respect to routing and bus stop locations. They communicate prior to making any changes. Other communication avenues are related to planned detours or construction activities. In fact, SORTA uses email to communicate with the municipal jurisdictions about special events and any upcoming road work. The Communications Department handles social media, and distributes comments received appropriately. Reliability Measures and Standards Like many other transit agencies, SORTA uses on-time performance to measure bus system reliability, defining on-time as 59 seconds early to five minutes and 29 seconds late. This data is measured at timepoints and reported at the system, route and trip levels. SORTA also relies upon the Distance Without Service Interruption and Road Failures measures to determine their vehicle reliability, with a target at 5,700 miles between breakdowns. These performance measures are

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-33 presented to the SORTA Board monthly as measures of local service or express service. KPIs also include information related to ridership and revenue. In Figure 2.7 and 2.8, the blue line represents the agency target and the yellow line represents the agency’s minimum goal. Figure 2.7 – On-Time Performance: Local Figure 2.8 – On-Time Performance: Express Reliability Data Collection, Analysis and Reporting Using CAD/AVL equipped buses, the agency can record every timepoint, every day, for each trip. However, the first and last timepoints are not used to measure on-time performance, since it is not viewed as a negative that riders arrive early at the end of a trip. The data is an accurate and complete measure of schedule adherence for the services that are operated. It is robust information which can be used to view the system in many ways, such as by block, by route, by operator, by time of day. From this data, SORTA uses an analysis spreadsheet tool which they have developed to evaluate running times and deviations from planned services. The tool can be used to evaluate whether the running time is appropriate, what the correct layover should be and what recovery time should be in the schedule to account for unexpected delays. The staff relies heavily on technology and believes that the technology evolution has been dramatic with respect to the amount of data which is available. However, they believe it is still important to also add field experience to see what is happening on a regular basis and add value to the data.
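The running time and recovery analysis described above typically works from a distribution of observed trip times. The sketch below derives a candidate scheduled running time and a recovery allowance from percentiles of observed times; the 50th and 85th percentile choices are a common scheduling heuristic used for illustration, not the parameters of SORTA's spreadsheet tool.

# Illustrative running-time analysis in the spirit of the tool described
# above: pick a scheduled running time near the median of observed trips and
# size recovery so a high-percentile trip still departs its next trip on time.
# The 50th/85th percentile choices are a common scheduling heuristic, not
# SORTA's actual parameters.
def percentile(sorted_values, p):
    idx = (len(sorted_values) - 1) * p / 100.0
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_values) - 1)
    frac = idx - lo
    return sorted_values[lo] * (1 - frac) + sorted_values[hi] * frac

def suggest_schedule(observed_run_times_min, run_pct=50, recovery_pct=85):
    times = sorted(observed_run_times_min)
    scheduled = percentile(times, run_pct)
    recovery = max(0.0, percentile(times, recovery_pct) - scheduled)
    return round(scheduled, 1), round(recovery, 1)

observed = [42, 44, 45, 45, 46, 47, 48, 50, 53, 58]   # minutes, one route and time period
scheduled, recovery = suggest_schedule(observed)
print(f"Scheduled running time ~{scheduled} min, recovery ~{recovery} min")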

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-34 SORTA has APCs installed on 25-30 percent of the fleet and they anticipate continuing to add more APCs. There is other technology on the SORTA wish list including getting turn-by-turn software installed on buses so that substitute operators could be more self-guided. The maintenance staff uses “Maximo” as the data base tracker for their maintenance reliability factors such as mileage. The current average distance between service disruptions is tracking at 6,500 miles, which is more than their target of 5,700 miles. There are four operator picks per year when major service and schedule adjustments occur. The structure is for staff to select from four to 10 routes during each pick to evaluate for reliability and on-time performance. The review can be targeted, but more frequently the scheduling staff conduct a complete evaluation of a route including weekday, Saturday and Sunday, developing adjustments that make sense based on recent running time data and history. There are two bus divisions and each division has one schedule maker to complete this assignment. Each does an evaluation of three to five routes. Which routes they evaluate might be selected based on time since the last evaluation, if there have been specific issues or complaints, or if there is upcoming construction which might affect running time. When schedule makers look at reliability, they supplement the data with field reviews so that there is a continual effort to update the data behind the system. Now that there are three to five years of data history, they can compare running times, schedules and performance and understand trends. In general, they are working to reduce variations in schedules. Diagnosing and Treating Reliability Issues The radio control center is tasked with the responsibility to monitor the CAD/AVL system 24/7 for service. There are established rules that trigger an immediate response. Any timepoint which registers a bus operating five and a half minutes late triggers the dispatcher to document and file an incident report. If resources are available, a substitute vehicle will be deployed to get the bus back on time. How many buses are available to fill in depends on the operator work force and absenteeism; there is not a specific number of buses located at strategic locations. Previously, first response had been to call in what had been termed the “show up” staff. In previous years, there used to be stand by runs in the books for operators, but the cost for retaining that process was high and therefore was discontinued. With the regular review of data, SORTA has come to discern specific problems or events which trigger a response. Supervisors are stationed throughout five strategic areas. If there are ongoing issues at a specific location, supervisors are dispatched to those locations to determine next steps and work with the transportation staff to resolve specific issues. Field personnel have the same information on their mobile units as that available in the radio control center. There is also a supervisor located at the Government Square Transit Center with the responsibility for oversight of the primary hub. That individual also has a mobile computer unit with real-time performance data. Since SORTA operates through a crowded urban core, delays can be consistent across the board, but they pay attention to providing coverage with extra buses so that customers are not negatively impacted. 
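The dispatcher trigger described above, where a timepoint registering five and a half minutes late prompts an incident report and, resources permitting, a substitute bus, is a simple threshold rule on the real-time feed. The sketch below illustrates such a rule; the event fields and the dispatch action are assumptions, not SORTA's CAD/AVL configuration.

# Illustrative real-time trigger: any timepoint crossing logged more than
# 5.5 minutes late generates an incident record, and a substitute bus is
# requested when one is available. Event fields and the dispatch hook are
# assumptions, not SORTA's CAD/AVL configuration.
LATE_TRIGGER_MIN = 5.5

def process_timepoint_event(event, spare_buses_available):
    """event: dict with 'route', 'block', 'timepoint', 'deviation_min'."""
    actions = []
    if event["deviation_min"] > LATE_TRIGGER_MIN:
        actions.append(f"File incident report: route {event['route']} at {event['timepoint']} "
                       f"({event['deviation_min']:.1f} min late)")
        if spare_buses_available > 0:
            actions.append(f"Dispatch substitute bus to block {event['block']}")
    return actions

event = {"route": "43", "block": "4307", "timepoint": "Government Square", "deviation_min": 6.2}
for action in process_timepoint_event(event, spare_buses_available=1):
    print(action)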
With the availability of the large data set, staff believe they can narrow down and focus on specific issues, drawing on data from both the CAD/AVL and APC systems as well as input from customers through social media. Another technique SORTA employs is to collect fares at alighting on most afternoon express services. This practice, which SORTA calls pay as you go, speeds boarding in the congested downtown area, where it is easier to board first and pay when alighting. SORTA has used this approach for over 30 years and reports high customer acceptance of the practice.

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-35 SORTA has a zone based fare system which also affects boarding time and thus on-time performance. From a long-term perspective, they are studying the potential for an off-board fare payment system. SORTA staff view the variability of speed and congestion on the freeways as a cause of inconsistent running times. As SORTA operates approximately 20 routes on freeways that is a significant challenge. SORTA operates an eight-mile bus on shoulder segment to assist in reliability on Interstate 71. Only two routes operate on this freeway, the Route 71x and Route 72 (which is a seasonal route). Use of the shoulder lane is operated through rules established approximately 10 years ago; use of the bus shoulder lane requires that speed on the expressway has slowed to 35 MPH or less and the bus can only operate 25 miles per hour faster than the next running lane. Customers like this feature and report to staff if they believe the operators should be in the lane and are not using it. Operators’ use of the bus on shoulder lane is voluntary. However, staff is not aware of any operators who have refused the use of the bus on shoulder lane. There are dedicated transit lanes which are located around the Government Square Transit Center in downtown Cincinnati. In addition, there is one queue jump which has been developed, but since there is no transit signal priority available to support this, once the bus exits the lane it must mix with traffic. SORTA has done some limited work in developing other techniques to enhance reliability, such as bus stop consolidation. The agency will be undertaking a Bus Stop Optimization program, to conduct a comprehensive review of all routes to reduce stops. As part of this process, benefits to reliability will need to be forecast and any implementation will be closely monitored. Evaluating the Reliability Improvement Program The SORTA team does before and after monitoring of any service changes that they make, including doing run time analysis and on-time performance measurements. For the most part, they believe they see reliability improvements in more than two out of three routes which they analyze. 2.2.2 Pierce Transit Service Area Area Population Number of Bus Routes Size of Fixed-Route Bus Fleet Annual Passenger Miles Annual Trips Annual Vehicle Revenue Miles 292 sq. miles 547,975 32 118 buses 34.9 million 8.6 million 4.5 million The TCRP A-42 interview with Pierce County Transportation Benefit Area Authority (Pierce Transit) was held September 21, 2017 with the following individuals:  Peter Stackpole (Service Planning Assistant Manager)  Max Henkle (Senior Planner)  Jason Kennedy (Planner Analyst)  Marlene Connor (A-42 Research Team)  Jim McLaughlin (A-42 Research Team) Pierce Transit operates 32 bus routes over a 292-square mile area, serving 547,975 people in the greater Tacoma metropolitan area. Pierce Transit operates a bus system that reported 8.6 million bus boardings in 2016. Pierce Transit believes there is a strong relationship between

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-36 ridership and reliability and thus has created a culture of measuring on-time performance and reporting results to the policy board and to executive leadership. Reliability Program Organizational Structure At Pierce Transit, the Planning Department takes the lead in working with the IT and Operations Departments regarding service reliability. The Planning Department utilizes gathered data to prepare the dashboard for reporting purposes to the system’s executive leadership and its policy board. A Route Schedule Adherence Committee, an interdisciplinary group of operators, supervisors, planners and schedulers, meets on a regular basis to review route performance data. Reliability Measures and Standards Pierce Transit uses on-time performance as the primary way to measure bus transit reliability and defines on-time as one minute early (-1) to four minutes late (+4), at both the system and route levels. The established target is 85 percent for system on-time performance. In addition, Pierce Transit also monitors missed trips. Reliability Data Collection, Analysis and Reporting Pierce Transit utilizes CAD/AVL and APC data for tracking on-time performance. In addition, they offer data to customers through the “One Bus Away” mobile application, which provides real-time information. Pierce Transit is also working on a program to install real-time information for customers at their transit centers. Weekly on-time performance data is shared with the Operations and Data Analytics teams. Like other transit agencies, Pierce Transit has had some issues with blending their APC and CAD/AVL data into a combined format. However, their Planning and IT departments are working collaboratively to blend those data sets to view passenger loads at timepoints. The goal is to view the data sets holistically. One of the challenges facing Pierce Transit from a data gathering and communication viewpoint is their radio bandwidth, which results in CAD/AVL data gaps and slow radio communication from operators. Currently, Pierce Transit views their APC data as more reliable than the CAD/AVL data thus is working to develop a business case to upgrade their CAD/AVL system to improve its reliability. Each week, Pierce Transit reviews on-time performance data to diagnose any recent changes in reliability and highlight any “hot” routes (those with a greater than 5 percent change from the previous week). Samples of the on-time performance dashboard are illustrated in Figure 2.9, which shows a trend analysis of the seven highest ridership routes, and Table 2.9, which shows route level on-time performance quarterly.

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-37 Figure 2.9 – Pierce Transit On-Time Performance for Top Seven Fixed Routes Diagnosing and Treating Reliability Issues Each day, Pierce Transit supervisors view activity regularly and receive anecdotal information from operators through their daily logs. Field supervisors investigate any problems they observe in the data and use that analysis to initiate field reviews. Most transit centers operate on a pulse system, so supervisors located at the transit centers can view if there are issues, such as a route running early. For the most part, running time concerns are more evident at intermediate time points, not at the transit centers. If need be, cover (spare) buses can be sent to assist in late running operations. Operators have display units in their buses that allow them to keep track of their own on-time performance. In their view, downtown traffic is relatively predictable, thus they are aware of which corridors might experience issues with reliability. Other issues with respect to reliability in their service area include construction, wheelchair boardings, and impacts on the State Route 7 Corridor due to light rail conflicts in the signal priority lanes. In addition, as boarding time (dwell time) is valuable to meeting on-time performance, fare disputes are not viewed as a reason to slow boardings, thus disputes are not contested. However, they are currently working with the public safety department on issues related to fare evasion. Each week, Pierce Transit reviews on-time performance data to diagnose any recent changes in reliability and highlight any “hot” routes (those with a greater than 5 percent change from the previous week). In response to the “hot” routes, the Operations Department works with the Area Supervisor to determine next steps, which typically involves a running time analysis after review of any other related issues.
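The weekly 'hot route' screen described above compares each route's on-time performance with the previous week. The sketch below flags routes whose performance moved by more than five points; whether Pierce Transit measures the change in percentage points or in relative percent is not stated, so percentage points are assumed here.

# Illustrative 'hot route' screen: flag routes whose on-time performance
# moved by more than 5 points versus the previous week. Percentage points
# (rather than relative percent change) are an assumed reading of the rule.
def hot_routes(this_week_otp, last_week_otp, threshold_points=5.0):
    flagged = {}
    for route, otp in this_week_otp.items():
        prior = last_week_otp.get(route)
        if prior is not None and abs(otp - prior) > threshold_points:
            flagged[route] = otp - prior
    return flagged

last_week = {"1": 86.2, "2": 90.4, "3": 78.9}
this_week = {"1": 79.8, "2": 91.0, "3": 85.1}
for route, change in hot_routes(this_week, last_week).items():
    print(f"Route {route}: {change:+.1f} points vs last week")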

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-38 Table 2.9 - Pierce Transit On-Time Performance Dashboard The results of that running time analysis influence any service changes implemented as part of the twice-yearly operator picks (and a third window during the year that is available for minor service changes). Accurate scheduling has an important effect on ensuring system reliability, and Pierce Transit works to ensure that any long-term scheduled construction is factored into its route scheduling. In addition, the agency tracks customer complaints and can use its CAD/AVL and APC data to confirm input from customers, including late or missed stops or trips.

Developing a Guide to Bus Transit Service Reliability Appendix C – Case Study Summary Report C-39 Like other interviewed transit agencies, operator hiring and availability has been a problem for Pierce Transit and has affected their performance and reliability. Recently, the agency has accelerated hiring of new operators and thus are not as dependent on the extra board for filling assignments. In previous years, the agency has experienced a high attrition rate which they believed was at least in part due to their scheduling practices which had a high number of split runs. Recently, they completed a service change and are now operating more straight runs which they believe has resulted in a higher operator retention rate. With respect to coordination on-street construction projects, Pierce Transit has a dedicated person on staff responsible for attending meetings with the local jurisdictions and the Washington State Department of Transportation (WSDOT), particularly agency traffic engineers, to identify mitigation plans to minimize disruption to bus service. To communicate information on construction detours to transit riders, rider alerts are posted at stops and on-board buses, as well as information on the Pierce Transit website and through press releases (the agency is hopeful for GTTS-real-time service alerts soon). Within Pierce Transit’s service area, 15 routes in six corridors use transit signal priority. This system is currently manually operated by operators, and thus there are opportunities for inconsistencies. Interlocal agreements exist with several jurisdictions governing transit signal priority utilization which address attributes such as signal timing, red truncations and green extensions. The agreements also address access to the signal cabinets by transit staff. The longest corridor is State Route 7 and the agency agreement for that corridor includes WSDOT and the City of Tacoma. In their view, the transit signal priority originally had more impact on schedule adherence than it does currently. In developing their business case for an upgraded CAD/AVL system, they are studying the availability of systems which can communicate with transit signal priority transponders. Pierce Transit staff believe that investments in upgrades to transit signal priority, improved fare payment systems, development of transit only lanes and other corridor based elements such as queue jumps would each positively impact bus service reliability. Additionally, social impediments such as improving passenger awareness in reducing boarding times would speed vehicle loading at stops. Staff also feel that some barriers to reliability are political, rather than technological, for example, not allowing two-person vehicles in HOV lanes would speed bus service. Staff also feel that, for the most part, consolidating or changing bus stops is a slow process and not likely used to improve on-time performance. One of the system goals is to try to get paratransit customers onto the fixed route thus keeping stops is important in their estimation. Evaluating the Reliability Improvement Program Pierce Transit staff consistently reviews changes made to ensure they achieved the desired result. It is an ongoing cycle with route picks and changes. 2.3 Small Transit Agencies 2.3.1 Kingston Transit Service Area Area Population Number of Bus Routes Size of Fixed-Route Bus Fleet Annual Passenger Miles Annual Trips Annual Vehicle Revenue Miles N/A 129,653 19 69 buses N/A N/A N/A

The TCRP A-42 interview with Kingston Transit (Ontario, Canada) was held September 20, 2017 with the following individuals:
• Jeremy DaCosta (Transit Manager)
• Andrew Morton (Service Project Manager)
• Marlene Connor (A-42 Research Team)
• Jim McLaughlin (A-42 Research Team)

Kingston Transit operates 19 routes, serving 129,000 people in the City of Kingston and the neighboring community of Amherstview. Kingston Transit has placed a high emphasis on transit service reliability and on measuring the service provided from a customer perspective. For example, in 2016, Kingston Transit set a goal of reducing the number of customer complaints about buses leaving timepoints early by 20 percent. To accomplish this goal, Kingston Transit focused on monitoring and managing on-time performance from a customer perspective, closely tracking operators and their performance. Kingston Transit believes that this focus on customer feedback is an important contributor to its success, as evidenced by the fact that, since 2014, Kingston Transit has experienced double-digit ridership growth each year against population growth of less than 1 percent. For the agency, the focus is truly on the customers rather than the data.

Reliability Program Organizational Structure

The City of Kingston Transit Department maintains the primary focus and responsibility for bus service reliability within the City government. Transit falls under the Transportation and Parking Department, which is part of the Transportation and Infrastructure Department. Partnering departments such as City Engineering and City Planning are separate, independent entities, but they work collaboratively with Transit.

Reliability Measures and Standards

To measure bus system reliability, Kingston Transit uses on-time performance versus the scheduled timetable, with an exact standard for measurement: 0 minutes early and 0 minutes late. On-time performance is recorded at the system, route, and timepoint levels and measured against scheduled time. Other measures under consideration include missed trips, headway adherence, and travel time variability; standards for these measures are in development with the new AVL system.

Reliability as communicated internally and externally is tracked on a customer-comment basis and is tied to the annual performance appraisals of supervisors. An annual goal for reduced customer complaints is established, tracked, and then related to pay-for-performance for supervisors. Since adopting this customer focus, supervisors have been reviewing customer complaints daily. This measure has been tracked and reported to decision makers since 2014, and during that time the number of complaints has been reduced. A sample performance assessment for this measure is shown in Figure 2.10.

Figure 2.10 – Sample Performance Assessment

Reliability Data Collection, Analysis and Reporting

Recently, Kingston Transit began using AVL technology to monitor and measure bus pull-outs. Prior to AVL, Kingston Transit measured reliability with farebox transaction data from its smart card system, which identified when and where passengers boarded buses at timepoints; that system is still available as a fallback if there is an issue with the AVL and as a verification mechanism. On-time performance is tracked at the beginning, middle (timepoint), and end of each route.

The CAD/AVL information provides continuous tracking, and screens are available for viewing by staff stationed at the operations center. Dispatchers and supervisors look at deviations of buses against scheduled time continuously, so performance is constantly monitored. Prior to the implementation of the CAD/AVL system, many supervisors were positioned in the field; now, supervisors are often located at the operations center, tracking performance from bus monitoring screens. Information is updated every three seconds. Supervisors in the field have laptops and can view information while they are mobile.

Weekday afternoon peak reliability is the most critical area of performance. Staff conduct queries of the CAD/AVL database by time period and day of week, but they are still refining the most effective ways to measure and monitor performance data using the CAD/AVL system. They are confident they can capture passenger tolerance levels and develop programs to measure those levels. Kingston Transit uses a performance scorecard, but schedule adherence and reliability are not currently recorded or reported on it. Deviation data is stored by time period and day of week with the AVL, but a system for communicating the information is still in development. In general, they will try to identify what matters from a customer perspective; for example, for trips that require a transfer, did you make your connection and complete your trip?

Diagnosing and Treating Reliability Issues

Kingston Transit's service planning group uses CAD/AVL data to review route schedule changes on a monthly and quarterly basis. They recently completed changes in September 2017 after a review covering the prior eight months.
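The kind of deviation query described above, grouped by time period and day of week, can be illustrated with a minimal sketch. The Python example below assumes a simple list of deviation records exported from a CAD/AVL database; the record layout, route names, and time-period boundaries are hypothetical and are not Kingston Transit's actual configuration.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical deviation records: (timestamp, route, deviation_minutes).
# Positive deviation = late, negative = early.
records = [
    (datetime(2017, 9, 18, 7, 40), "Route 501", 1.5),
    (datetime(2017, 9, 18, 16, 10), "Route 501", 4.0),
    (datetime(2017, 9, 19, 16, 25), "Route 502", -0.5),
]

def time_period(ts):
    """Assign a record to an illustrative time period."""
    if 6 <= ts.hour < 9:
        return "AM peak"
    if 15 <= ts.hour < 18:
        return "PM peak"
    return "off peak"

# Group deviations by day of week and time period.
summary = defaultdict(list)
for ts, route, dev in records:
    summary[(ts.strftime("%A"), time_period(ts))].append(dev)

for (day, period), devs in sorted(summary.items()):
    early = sum(d < 0 for d in devs)
    print(f"{day} {period}: n={len(devs)}, mean deviation={mean(devs):+.1f} min, "
          f"early departures={early}")
```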

To ensure that morning pull-out occurs seamlessly, Kingston Transit developed a model for tracking and monitoring this function. Within the model, Kingston Transit created the position of cover operator, which is separate from the spare board operators. The cover operator arrives prior to scheduled pull-out times and is responsible for starting the required pre-trip inspections for buses and for being available for immediate pull-out if an operator is late or calls out for the shift. As part of pre-trip inspections, cover operators report defects and can get a replacement vehicle if necessary. Peak pull-out is 55 buses, and two cover operators are available for the AM pull-out. The Supervisor on Duty communicates with the cover operator as needed and makes those immediate assignments. More often, an operator is simply running late; in that case, even if the cover operator has completed the pre-trip inspection and is prepared for a regular route run, the originally assigned operator usually takes over and the cover operator goes on to other duties.

Traditional traffic patterns are such that mornings run smoothly, and delays occur more frequently from midday through the afternoon. Often there is a cover bus available in the afternoon to restore routes to schedule. Supervisors and managers look at schedule deviation patterns regularly, so trouble areas become apparent, and they can locate the cover bus near those areas to respond quickly. There is no established threshold for what requires a cover bus, but history, such as information about an operator's ability to get back on time, is part of the consideration, as is passenger volume on a given route. If there are heavy passenger loads on certain routes, those routes are the priority for getting back on schedule (see the sketch below for a simple illustration).

Kingston Transit has not utilized transit signal priority to the extent other agencies have, having applied it at only one intersection with a modified queue jump lane. The agency is currently running a pilot of the same treatment on three express routes at three intersections. Additionally, Kingston Transit has consolidated bus stops, not through a route-by-route program but using a systematic approach when other changes are underway, such as street reconstruction or the addition of bus stop amenities; it is part of a total approach, not a one-off event. Kingston Transit recently worked with other City departments to add shelters, benches, and other amenities on a route and at the same time reduced the number of stops from three to one. The overall response from the public was positive.

Because the City Transit Department is part of the overall city government, it is relatively easy to work collaboratively with other City departments to consider piloting transit signal priority and upgrading bus stops. A recent transportation master plan encourages active transportation modes, so transit and active modes have been positioned for prioritization relative to the automobile. They receive priority as a matter of the city's mission and vision, and the plan sets aggressive targets for improving modal share.
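The sketch below gives a simple, hypothetical illustration of the cover-bus prioritization described earlier in this subsection, in which heavily loaded late trips are assisted first. The route names, loads, and ranking rule are illustrative only and are not Kingston Transit's actual dispatching logic.

```python
# Hypothetical snapshot of late trips: (route, minutes_late, passengers_on_board).
late_trips = [
    ("Express 701", 6.0, 12),
    ("Route 2", 4.5, 48),
    ("Route 18", 8.0, 9),
]

def cover_bus_priority(trips):
    """Rank late trips for cover-bus assistance, weighting passenger load first,
    then lateness."""
    return sorted(trips, key=lambda t: (t[2], t[1]), reverse=True)

for route, minutes_late, load in cover_bus_priority(late_trips):
    print(f"{route}: {minutes_late:.1f} min late, {load} passengers aboard")
```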
Because missed transfers due to late buses can be a significant source of unreliable travel times, Kingston Transit has placed an emphasis on reducing the need for passengers to transfer and has looked to create one-seat rides between major activity centers, such as employment, university, and hospital sites, from all locales rather than relying on a hub-and-spoke design. They have worked toward a system design that connects origin and destination pairs without reliance on transfers. In their view, making trips easier for customers is what matters.

Evaluating the Reliability Improvement Program

Kingston Transit staff consistently review changes made to ensure they achieved the desired result. It is an ongoing cycle with route picks and changes.

2.3.2 Manatee County Area Transit

Service Area: 743 sq. miles
Area Population: 322,833
Number of Bus Routes: 10
Size of Fixed-Route Bus Fleet: 23 buses
Annual Passenger Miles: 7.2 million
Annual Trips: 1.7 million
Annual Vehicle Revenue Miles: 1.4 million

The TCRP A-42 interview with Manatee County Area Transit (MCAT) was held October 26, 2017 with the following individuals:
• Ryan Suarez (Transit Planning Manager)
• Susan Montgomery (Transit Planner)
• Jim Egbert (Transit Operations Manager)
• Marlene Connor (A-42 Research Team)
• Alan Danaher (A-42 Research Team)

MCAT operates 10 bus routes over a 743-square-mile area, serving 322,800 people in the greater Sarasota-Bradenton, Florida metropolitan area. The bus system reported 1.7 million bus boardings in 2016. MCAT places a strong emphasis on transit system reliability for its customers, underscoring its policies with an understanding of the importance of dependability: due to traffic congestion, particularly during the high tourist season, a bus may be a minute or two late, but customers can always be assured the bus will come. MCAT's philosophy echoes that notion of dependability, which is also communicated to drivers through training. In their view, barriers to reliability include traffic congestion and operator variability.

Reliability Program Organizational Structure

As a small team, MCAT's Operations and Planning staff are responsible for a wide range of functions and activities, including the management of bus reliability.

Reliability Measures and Standards

MCAT uses on-time performance to measure bus service reliability, comparing APC system data against published timetables, with a system-wide on-time performance goal of 60 percent. The range for on-time performance is one minute early (-1) to five minutes late (+5).

Reliability Data Collection, Analysis and Reporting

Before 2015, MCAT used field checks by supervisory staff to gather data on ridership and reliability. Since 2015, the entire MCAT fleet has been equipped with APCs and a radio system, which records data at every timepoint and allows for communication between field staff, including supervisors and operators, and the control center. There are plans to implement a new CAD/AVL system on MCAT buses in spring 2019.

Staff regularly reviews on-time performance when changes to schedules and running times are implemented. MCAT compiles its APC data nightly, reviews the APC data weekly, and compiles an on-time performance analysis monthly. Each month, MCAT updates its board on ridership and revenue, but not on-time performance.

Diagnosing and Treating Reliability Issues

If a vehicle is running late, MCAT adds extra buses when standby operators are available. Typically, MCAT has two standby buses available in the AM and PM peak periods. Depending on availability, standby buses are located at the central terminal so they can be staged to assist when needed. There are no specific criteria or triggers that determine when to deploy the buses; it depends on the specific situation, such as traffic, collisions, or detours, and on whether staff can determine that the bus can catch up to its scheduled service. Because of the highly seasonal nature of MCAT's service area, late winter and spring present the greatest opportunity for congestion and heavy traffic.

On-time performance at each timepoint, whether early or late, is assigned a rating that compares actual to scheduled time; these ratings are exported through the Trapeze system. MCAT employs three supervisors, one on each shift, with one shift scheduled to overlap, but monitoring on-time performance is not their priority. Three stations are currently in operation, including two major hubs (downtown and the DeSoto station) and one to the north in Palmetto; no vehicles are scheduled to meet at the stations. MCAT has some reliability issues on a route operated jointly with Sarasota County, which connects Sarasota and Bradenton, so attendants are deployed to monitor those departures from the downtown terminal.

Like many transit agencies, MCAT has been experiencing an operator shortage, which affects activities and services. In fact, the shortage required MCAT to increase the headway on one of its major routes from 30 minutes to 60 minutes. To remedy the shortage, MCAT staff was recently authorized by the board to add 16 operators and establish an extra board program, rather than rely on overtime, to meet published timetables. At MCAT, daily peak pull-out is 22 vehicles, and there are 67 operators. Importantly, all operators are cross-trained for both fixed-route and Handibus services. Two operator picks are conducted annually, at which time any major schedule or running time changes are implemented; during the remainder of the year, only moderate running time or schedule changes are made.

There is no transit signal priority in place within MCAT's service area; however, several communities have an Opticom traffic signal system in place for emergency vehicles. The Route 99 corridor is under consideration for transit signal priority in the future. Coordination on road construction is poor; in fact, MCAT often finds out only one day in advance of scheduled construction, which inhibits planning for route detours. The new CAD/AVL system will allow MCAT to communicate detour information directly to passengers in real time via mobile app, website, wayside signage, and on-board bus video monitors.

MCAT regularly briefs its board on efforts to improve overall bus system reliability. More detail is provided to users about running time and specific stop and service changes, while summaries are provided to the Board of County Commissioners, County Administration, and external stakeholders.

Evaluating the Reliability Improvement Program

Manatee County Area Transit staff consistently review changes made to ensure they achieved the desired result. It is an ongoing cycle with route picks and changes.
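To make the on-time calculation in this case study concrete, the sketch below rates hypothetical timepoint deviations against the one-minute-early to five-minute-late window MCAT reports using and compares the result with the 60 percent system-wide goal. The deviation values themselves are invented for illustration; this is not MCAT's actual Trapeze export or analysis code.

```python
# Hypothetical timepoint deviations in minutes (negative = early, positive = late).
deviations = [-2.0, -0.5, 0.0, 1.2, 3.4, 4.9, 5.5, 7.1, -1.5, 2.0]

def rate_timepoint(dev, early_limit=-1.0, late_limit=5.0):
    """Rate a single timepoint using the -1/+5 minute on-time window."""
    if dev < early_limit:
        return "early"
    if dev > late_limit:
        return "late"
    return "on time"

ratings = [rate_timepoint(d) for d in deviations]
otp = ratings.count("on time") / len(ratings) * 100
print(f"On-time performance: {otp:.1f}% (goal: 60%)")
```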

3.0 Case Study Summary

3.1 Reliability Program Organizational Structure

In most of the interviewed transit agencies, bus transit reliability is a shared responsibility among many departments, most often including Service Planning, Operations, Performance Management, and IT (summarized in Table 3.1). The largest agencies have developed processes for on-time performance review, analysis of problem routes through running time analyses, and addressing issues using a combination of resources and departments. Most agencies produce reports for monthly or quarterly board meetings.

From a planning and scheduling perspective, most transit agencies work closely with their surrounding jurisdictions, most closely with city staff and police departments regarding routing and bus stop locations. Other communication avenues relate to planned detours or construction activities. During several of the interviews, a few agencies noted the challenges associated with hiring and retaining operators and, in some places, maintenance staff. During construction or detours, or when operators call in sick, it can be challenging for agencies to find available operators to staff cover or standby buses.

Table 3.1 – Reliability Program Organizational Structure Comparison

Agency | Departments | In Charge of Daily Operations

Large Transit Agencies
New York City Transit | Service Planning, IT, System Data and Research, Road Operations | Route Manager
Chicago Transit Authority | Performance Management, Bus Service Management, Service Planning, Traffic Engineering, Scheduling, IT |
VIA Metropolitan Transit | Service Planning, IT, Operations | Service Planning
Regional Transportation District | Service Planning, Operations, IT |
Los Angeles Metro | Transportation, IT, Planning, Scheduling, Maintenance |
Transport for London | |

Medium Transit Agencies
Southwest Ohio Regional Transportation Authority | Planning, Scheduling, Radio Control, Transportation, Maintenance |
Pierce Transit | Planning, IT, Operations | Planning

Small Transit Agencies
Kingston Transit | City Transit | City Transit
Manatee County Area Transit | Operations, Planning |

3.2 Reliability Measures and Standards

The interviewed transit agencies showed many similarities, particularly in their performance measures and standards. Nearly all rely upon on-time performance as their primary measure of bus transit reliability, and for the most part, agencies seemed to agree on an "on-time" definition of around 1 minute early to 5 minutes late (summarized in Table 3.2). NYCT, CTA, and VIA also measure headway adherence on frequent routes, while TfL uses a similar headway-derived measure, Excess Wait Time; TfL has also been exploring the use of running time variability to assess reliability. While most agencies indicated that they consider customer complaints when assessing reliability, Kingston Transit sets goals for and closely tracks customer complaints and uses the measure in assessing supervisor performance. The only other measures mentioned were measures of service provision, such as missed trips and miles between road calls.
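This report does not detail TfL's exact calculation, but excess wait time is commonly derived by comparing the average wait experienced by randomly arriving passengers, E[h^2] / (2 E[h]) over the observed headways h, with the same quantity computed from the scheduled headways. The sketch below uses invented headway values to illustrate that calculation; it should be read as a generic example of a headway-derived measure, not as TfL's published methodology.

```python
def mean_wait(headways_minutes):
    """Average wait for passengers arriving at random: E[h^2] / (2 * E[h])."""
    total = sum(headways_minutes)
    return sum(h * h for h in headways_minutes) / (2 * total)

scheduled_headways = [10, 10, 10, 10, 10, 10]  # even 10-minute service
observed_headways = [4, 16, 9, 11, 3, 17]      # same number of buses, but bunched

ewt = mean_wait(observed_headways) - mean_wait(scheduled_headways)
print(f"Scheduled wait:   {mean_wait(scheduled_headways):.1f} min")
print(f"Actual wait:      {mean_wait(observed_headways):.1f} min")
print(f"Excess wait time: {ewt:.1f} min")
```

In this example, bunching raises the average wait from 5.0 minutes to about 6.4 minutes, an excess wait of roughly 1.4 minutes even though the same number of buses operated.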

Agencies were less similar in their reliability goals, which ranged from a low of 60 percent (NYCT and MCAT) to 88 percent (Denver RTD and SORTA). System size appears to have little to do with the goal: the largest agency (NYCT) and the smallest (MCAT) have the same low goal, while the highest goals belong to large and medium agencies.

Table 3.2 – Reliability Measures and Standards Comparison

Agency | Size of Fixed-Route Bus Fleet | On-Time Definition | Acceptable Headway Definition | Bus Reliability Target

Large Transit Agencies
New York City Transit | 3,286 buses | -1 minute to +5 minutes | 1.5 x headway OR +3 minutes (peak), +5 minutes (off-peak) | 60%
Chicago Transit Authority | 1,572 buses | -1 minute to +5 minutes | > 60 seconds, < 15 minutes OR < 2 x headway | 80% on-time; < 4% gap headway; < 3% bunched
VIA Metropolitan Transit | 378 buses | -30 seconds to +5 minutes | +5 minutes | 80%
Regional Transportation District | 873 buses | -1 minute to +5 minutes | N/A | 88% in 2017; 86% in 2018
Los Angeles Metro | 1,902 buses | -1 minute to +5 minutes | N/A |
Transport for London | 7,300 buses | -2 minutes to +5 minutes | uses EWT (goal varies by route) | varies by route

Medium Transit Agencies
Southwest Ohio Regional Transportation Authority | 299 buses | -59 seconds to +5 minutes, 29 seconds | N/A | 86% target; 85% minimum
Pierce Transit | 118 buses | -1 minute to +4 minutes | N/A |

Small Transit Agencies
Kingston Transit | 69 buses | 0 minutes | N/A | no target
Manatee County Area Transit | 23 buses | -1 minute to +5 minutes | N/A | 60%

3.3 Reliability Data Collection, Analysis and Reporting

All of the transit agencies interviewed rely upon a data-driven approach to measuring bus transit reliability, combined with customer comments and feedback. Nine of the 10 agencies rely upon CAD/AVL and APC data, recorded at timepoints, to measure bus system reliability (summarized in Table 3.3). These systems are relied upon by the Service Planning and IT departments to review reliability and performance on a daily, weekly, or monthly basis, and are used in real time by supervisors and control centers to manage day-to-day operations. Among the small transit agencies, MCAT currently uses only APC data, although a CAD/AVL system is planned. In addition to CAD/AVL and APC data analysis, customer relations staff at most agencies play a large role in tracking and reviewing customer complaints received through social media and other customer comment mechanisms.

All but Kingston Transit have regular reliability data reporting procedures, with various departments receiving daily, weekly, or monthly reports and publishing reports at varying frequencies. Transport for London, as a contracting agency, publishes quarterly reliability reports by route and contract operator, though individual contractors may produce reports for internal use more frequently. Most larger transit agencies produce reports for their policy boards each month or quarter.

Medium-sized or smaller transit agencies might produce reports upon request or annually. Kingston Transit has not yet developed a reliability reporting mechanism, and several agencies, large and small, noted that this is an evolving process.

Table 3.3 – Reliability Data Collection, Analysis and Reporting Comparison

Agency | Type of Data Collected | Frequency of Reporting

Large Transit Agencies
New York City Transit | CAD/AVL | Daily/Weekly
Chicago Transit Authority | CAD/AVL | Daily/Weekly
VIA Metropolitan Transit | CAD/AVL and APC | Daily/Weekly
Regional Transportation District | CAD/AVL and APC | Daily/Weekly
Los Angeles Metro | APC | Daily/Weekly
Transport for London | CAD/AVL | Quarterly

Medium Transit Agencies
Southwest Ohio Regional Transportation Authority | CAD/AVL | Daily/Weekly
Pierce Transit | CAD/AVL and APC | Weekly

Small Transit Agencies
Kingston Transit | CAD/AVL | None
Manatee County Area Transit | APC | Daily/Weekly

3.4 Diagnosing and Treating Reliability Issues

Each transit agency monitors its bus service using real-time CAD/AVL data, and most act in real time as reliability issues develop. However, most lack explicit rules that trigger an immediate response; rather, supervisors rely on their experience to identify and address reliability issues. SORTA is the one exception: any timepoint at which a bus registers as operating five and a half minutes late triggers the dispatcher to document and file an incident report. Throughout the day at most agencies, supervisors view activity regularly and receive anecdotal information from operators through their daily logs. Interaction with the agency dispatch center allows supervisors to monitor buses in real time for increased insight into potential causes.

Once reliability data is collected, most agencies use their Service Planning department to analyze the available CAD/AVL and APC data, although in many places the IT or Operations departments are heavily involved. Service Planning evaluates the APC data from the route and operator perspectives to track individual performance and patterns as they develop. Some agencies review customer, operator, contractor, and social media input to assess the need for a running time analysis on a route, and often verify that input by having road supervisors perform physical on-time performance checks along routes and at stations.

At most agencies, the first step in addressing persistent, recurring reliability problems is to conduct a running time analysis. Running time data is evaluated and schedules are updated to better reflect actual running times; however, schedule adjustments are typically made only three or four times per year at scheduled operator picks. When evaluating route reliability data, many of the larger transit agencies focus their efforts on higher-frequency routes with headways of 15 minutes or less. Since those are the routes most dependent upon reliable, fast service, it is important that the transit agency guarantee performance and address any route issues.
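A rule like SORTA's five-and-a-half-minute guideline described above is straightforward to automate against a timepoint feed. The sketch below is a hypothetical illustration only; the trip numbers and timepoint names are invented and do not reflect SORTA's actual dispatch systems.

```python
# Hypothetical real-time timepoint records: (trip_id, timepoint, minutes_late).
timepoint_feed = [
    ("4101", "Timepoint A", 2.0),
    ("4102", "Timepoint A", 6.2),
    ("4215", "Timepoint B", 5.6),
]

LATE_TRIGGER_MIN = 5.5  # guideline described for SORTA in this report

def incidents_to_document(feed, threshold=LATE_TRIGGER_MIN):
    """Return timepoint records late enough to trigger an incident report."""
    return [rec for rec in feed if rec[2] >= threshold]

for trip, timepoint, minutes_late in incidents_to_document(timepoint_feed):
    print(f"Document incident: trip {trip} was {minutes_late:.1f} min late at {timepoint}")
```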

All of the case study agencies coordinate actively with local jurisdictions on street construction projects, identifying adequate bus detours and construction schedules to minimize the impact on bus travel time and reliability. In some cases this includes established committees that meet on a regular basis to discuss transit/construction coordination issues. Customer outreach to provide information on construction detours and service schedules has been enhanced through the application of CAD/AVL systems that provide real-time data, and through the use of social media via websites, Twitter, and Facebook accounts, as well as more traditional press releases.

3.5 Evaluating the Reliability Improvement Program

While many agencies indicated that their reliability data gathering and reporting procedures are constantly evolving, most had little to say regarding any reassessment of their reliability measures, standards, and goals, or of their overall reliability management process. Most service reliability standards have been in place for a long time, even as the ability to measure reliability accurately on an ongoing basis has increased considerably, and there appears to have been little thought about changing them. Strategies to address reliability largely follow long-standing procedures: supervisors manage service on a day-to-day basis, and schedulers adjust running times with the next pick if problems persist. There was little mention of using the extensive new data sources to evaluate possible reliability improvements beyond increased supervision and schedule adjustments. Several agencies did mention conducting before-and-after evaluation studies of specific improvements to see the magnitude of the impact a change has on service. In addition to data-driven evaluation, most transit agencies draw on field experience or watch customer feedback on a regular basis to see what is happening. In many cases, however, it appears that schedule adjustments are made and the route is revisited only if supervisors notice continuing reliability issues.
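A before-and-after evaluation of a specific improvement, as mentioned above, can be as simple as comparing average on-time performance across the periods on either side of the change. The sketch below uses invented daily values to illustrate the comparison; it is not any agency's actual evaluation procedure, and a real study would also account for seasonality and other confounding factors.

```python
from statistics import mean

# Hypothetical daily on-time percentages for one route, before and after a
# running-time adjustment made at a schedule pick.
otp_before = [72.0, 74.5, 70.1, 73.2, 71.8]
otp_after = [79.4, 81.0, 78.2, 80.6, 79.9]

change = mean(otp_after) - mean(otp_before)
print(f"Mean OTP before: {mean(otp_before):.1f}%")
print(f"Mean OTP after:  {mean(otp_after):.1f}%")
print(f"Observed change in the before/after comparison: {change:+.1f} points")
```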
