Minutes Matter: A Bus Transit Service Reliability Guidebook

Chapter 3. Developing a Bus Service Reliability Improvement Program

3.1 Introduction

This chapter details the steps that agencies can use to start and maintain a reliability improvement program. Although the focus of this work is on fixed-route bus services, the same process could easily be used for any mode of public transportation subject to similar delays in boarding, alighting, and travel time. The eight steps detailed in this chapter are shown in Figure 3.1. What follows is a general description of each step. To keep this description concise, detailed information on the various measures and strategies is not included in this chapter. More detailed discussions of the issues in each step are included in Chapter 4, Chapter 5, and Chapter 6, while detailed information sheets for each measure and treatment strategy are included in Chapter 7. For ease of use, hyperlinks provided throughout this chapter link to those discussions and information sheets.

3.2 Step 1 – Define Goals and Objectives

The process described here assumes that the reliability improvement program is being established by agency management. If the program is not being initiated by upper management, securing upper management's buy-in is a critical first step. As with every productive process, a guiding set of goals and objectives must be established at the outset. The specific goals and objectives of the reliability improvement program should be tied to the overall mission and goals of the agency. As an example, an agency should first understand whether it intends to provide broad coverage over a wide service area or to focus on high-ridership corridors. Similarly, an agency should understand whether its focus will be on impeccable customer service or on highly cost-effective operations. These broader goals can be used to shape specific goals and objectives regarding reliability, as a customer-centric process may differ from an agency-centric one.
In establishing the goals and objectives, the agency should first identify the stakeholders, both direct and indirect, and the resources and constraints associated with the system.

3.2.1 Defining Stakeholders

Direct stakeholders are those who interact directly with the final product, in this case transit system users; however, these direct stakeholders can be grouped and understood in multiple ways. These various groups may have different needs and desires regarding their experience. As an example, an agency may serve a large college population or a large senior population, and the differences in their needs should be taken into account in establishing goals and objectives. Indirect stakeholders are those affected by the service but not as direct users. These include groups such as the transit agency's employees (operators, supervisors, service planners, etc.), other agencies that the agency works with (city/county public works, state transportation department, utilities, etc.), and the general public (car drivers, cyclists, taxpayers, local businesses, etc.). Reliability of the system matters to these groups because they may be affected by attempts to improve reliability, such as a city agency approving right-of-way or the public voting on funding for service enhancements.

As an initial exercise, the agency should undergo a stakeholder mapping process that includes identifying different groups of direct and indirect stakeholders and their relationships. For both direct and indirect stakeholders, the use of personas/roles and scenarios allows the agency to better understand their interactions with the system, their requirements, and their needs. Personas are fictional descriptions of individuals who represent a real group of stakeholders. They can be used effectively to focus on users and their goals, prioritize requirements, prioritize audiences, challenge assumptions, and recognize how the priorities of users differ from those of transit operators, among other things. Scenarios describe the personas' use of a system. A scenario is essentially a story that includes the setting; the motivation, knowledge, and capability of the persona; and the tools that the persona encounters and manipulates. An example of a simplified persona/scenario is Beth, a college student living in the area, who uses the system to travel to and from school as well as in the evening and on weekends for leisure trips. She is concerned about making it to class on time.

3.2.2 Resources and Constraints

A critical part of the first step is gaining an understanding of the data, staff, and software available for measuring and analyzing reliability.
If the data needed for a particular measure are not available, or the staff and software to analyze the data are not available, either another measure must be chosen or additional funding must be allocated to begin collecting or analyzing the data. However, a lack of data or analysis capabilities should not be treated as a permanent barrier to using a certain measure, since additional data collection can be justified with improved reliability as the goal. Resources for potential actions should also be assessed at this stage. Importantly, smaller agencies may find it more difficult to collect data to quantify the results of their efforts, but that should not stop them from implementing strategies that have been successful elsewhere.

The availability of automatic vehicle location (AVL) data, and in some cases automated passenger counter (APC) or automated fare collection (AFC) data, is essential for routine and accurate measurement of on-time performance, travel times, and wait times. Without such capabilities, a serious analysis of improving reliability cannot be undertaken, although some very small agencies could use manual data collection techniques. At a minimum, a routine manual assessment of arrival and departure times at both terminals as well as at major boarding and alighting stops would be needed to assess on-time performance, running times, headways, and wait times. Low-cost GPS-enabled devices could also be used to measure running times in some cases.

Passenger count data are an important component of reliability analysis as well. APC data can be used for passenger counts, but AFC data in combination with AVL data have also been shown to be a viable way to infer alighting stops. This opens the possibility of estimating travel time for passenger trips as well as for journeys involving transfers.
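As a minimal sketch of how arrival observations (whether from AVL records or a manual checker's log) can be turned into an on-time performance figure, the following assumes matched pairs of scheduled and actual arrival times; the 1-minute-early/5-minute-late window is illustrative, since each agency sets its own thresholds:

```python
from datetime import datetime, timedelta

def on_time_performance(arrivals,
                        early=timedelta(minutes=1),
                        late=timedelta(minutes=5)):
    """arrivals: iterable of (scheduled, actual) datetime pairs.
    Returns the share of arrivals falling within [-early, +late]
    of the scheduled time (0.0 if there are no observations)."""
    total = on_time = 0
    for scheduled, actual in arrivals:
        total += 1
        deviation = actual - scheduled
        # On time if not too early and not too late.
        if -early <= deviation <= late:
            on_time += 1
    return on_time / total if total else 0.0
```

The same matched pairs also support running-time and headway calculations, so a single collection effort can feed several of the measures discussed in this guidebook.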
Figure 3.1. The steps discussed in this chapter: (1) define goals and objectives; (2) select reliability measures; (3) select reliability standards; (4) implement the program and monitor performance; (5) perform diagnostic assessment; (6) identify treatments; (7) implement and monitor treatments; and (8) review and update.
Similarly, an agency's capacity to gather data on customer perceptions and preferences can play a role in selecting measures of reliability. Developing the data needed to estimate customer travel times and wait times requires more extensive survey capabilities than do less customer-focused assessments of on-time performance and running times. Customer preferences obtained through surveys can also guide the selection of measures, the relative weight to give to the various measures, and the standards to be used to distinguish acceptable from unacceptable performance.

An agency's analysis capabilities can also play a role in selecting measures of reliability. Several measures, particularly those relating to customer travel times and wait times, customer-demand impacts on reliability, and the details of running-time components, require more sophisticated data and analysis capabilities than basic assessments of on-time performance and running times. Agencies with greater capabilities and resources may also be able to measure reliability on a more frequent, or at least more regular, basis than agencies with more limited capabilities.

Finally, the existing performance measurement program should be assessed and taken into account in setting goals and objectives for a reliability improvement program. The existing program may include reliability measures that can be refined in the new process, as well as other performance aspects, such as travel speed or ridership, that may be negatively or positively affected by changes in reliability.

3.2.3 Feedback Processes

As part of the initial goal-setting step, it is critical that the agency obtain feedback from both direct and indirect stakeholders. This feedback process will be used to identify the goals and objectives as well as the resources and constraints.
This process can be undertaken using a full suite of interaction tools, such as surveys, focus groups, and advisory committees.

Rider and non-rider surveys can help agencies understand the importance of various aspects of transit service reliability to users and potential users, such as the balance between on-time performance and travel time, or the possible impact of treatments, such as the potential usage of real-time information apps.

Focus groups can be used with both direct and indirect stakeholders, and can be used with riders to further refine feedback obtained from surveys. TCRP Synthesis 105: Use of Market Research Panels in Transit provides guidance on rider-based feedback panels that can be used with the same participants repeatedly over time to gauge changes in opinion. Other focus group formats gather a sample of riders to express detailed opinions about a transit service. Focus groups can also be used with employees (to assess internal processes) and with external indirect stakeholders.

Advisory committees can be internal to the agency, or they can involve external partners such as city/county agencies (e.g., public works and city planning) or the state department of transportation. Advocacy organizations can also be a powerful asset as a new goals and objectives process is being undertaken.

TCRP Synthesis 89: Public Participation Strategies for Transit and TCRP Report 179: Use of Web-Based Rider Feedback to Improve Public Transit Services detail methods and best practices for in-person and online public involvement and feedback processes. These guides are also relevant to the feedback processes for developing a reliability improvement program.

3.2.4 New Versus Existing Reliability Improvement Programs

Every transit agency, except for a brand-new agency just starting service, has a reliability improvement program of some sort.
However, many agencies may have a program that was set up with limited consideration of goals and objectives, or treatments may have been undertaken without a thorough understanding of the factors that lead to unreliable service. Therefore, the tasks described previously will be more extensive for a new comprehensive reliability improvement program. Similarly, reliability is one of many performance measures an agency might track. The reliability improvement program should therefore be integrated into the larger performance goals and objectives of the agency, and any new program to track performance should treat reliability as a set of critical performance measures.

With a reliability program already in place, an agency may choose not to revisit the process of setting goals and objectives, but an assessment of resources and constraints would often still be applicable. New sources of data, such as AVL, APC, or AFC data, should be considered as part of a comprehensive review of the reliability program.

3.3 Step 2 – Select Reliability Measures

Once the available resources and constraints as well as the goals and objectives are understood, an agency should begin the process of selecting reliability measures appropriate for its needs, considering its data and resources. At a high level, these measures address three fundamental concepts: punctuality (scheduled versus actual arrivals), variability (consistency of a service), and non-operation (failure to run). These measures should take multiple perspectives (those of the customer, the agency, and the operator) into account. The measures should be applied at a system level and at more detailed levels to identify problems. Finally, the measures may be presented in multiple formats. These ways of categorizing potential measures are explained in more detail in Chapter 4.
Each set of reliability measures discussed in this guidebook has a specific purpose, and all have some value in assessing reliability and identifying causes and corrective actions. For a given agency, however, some may have more value than others. As discussed previously, agencies differ in their data availability and analysis capabilities. Goals and objectives from the previous step should take local issues and perceptions into account. Agencies can also have widely varying service levels, travel patterns, and network types.

3.3.1 Schedule-Based Versus Headway-Based Services

There is a fundamental difference between reliability measures for low-frequency (schedule-based) routes and high-frequency (headway-based) routes. Schedule-based services are those that have a published timetable available to the public. Successful performance is based on how the timing of the service compares to this timetable, using measures such as on-time performance and travel time variability. Customers are usually targeting specific trips, although wait time variability should be assessed by measuring on-time performance at major customer boarding locations in addition to destinations.

Headway-based services are those that arrive at a certain interval (e.g., every 10 minutes or less) with no published timetable. Successful performance is based on how regularly the service arrives and is determined using measures such as headway regularity. Agencies operating many high-frequency routes should focus on wait time and customer travel time measures, including buffer time estimates. These services often give some indication of the scheduled journey time; therefore, measures that compare travel time to promised travel time can be useful as well. On-time measures may not be meaningful, or even possible, as customers are not focused on specific trip arrival times. If services are overcrowded, unreliability caused by customer demand should also be explored.
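For headway-based services, a common way to quantify regularity (a sketch, not the only formulation) is the coefficient of variation of headways; the standard result that randomly arriving passengers experience an expected wait of (mean headway / 2) × (1 + cv²) then ties regularity directly to the customer's wait time:

```python
from statistics import mean, pstdev

def headway_stats(headways_min):
    """headways_min: observed headways (minutes) at a stop.
    Returns mean headway, coefficient of variation, and the
    expected wait for randomly arriving passengers:
    E[W] = (h_bar / 2) * (1 + cv**2)."""
    h_bar = mean(headways_min)
    cv = pstdev(headways_min) / h_bar
    expected_wait = (h_bar / 2.0) * (1.0 + cv ** 2)
    return {"mean_headway": h_bar, "cv": cv, "expected_wait": expected_wait}
```

Note that perfectly even 10-minute headways give a 5-minute expected wait, while bunched service with the same mean headway increases the expected wait even though just as many buses ran; this is why headway regularity, not mean headway, is the measure of interest here.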
The cut-off point between schedule-based and headway-based services is traditionally between 10 and 15 minutes, the point at which passengers have been shown to arrive randomly rather than to consult a timetable. However, research has shown that even at 5-minute headways, some customers will prefer to time their arrivals at the stop to a schedule.

In establishing reliability measures for the route and the system, the type of service must be considered. Systems with only a few headway-based services may be fine using an overall measure that is schedule-based, such as on-time performance. Systems with many headway-based services should look to alternative system-wide measures. At the route level, measures can be specific to the type of service provided, although switching from schedule-based to headway-based service during the day (such as peak versus off-peak), or branding routes as high-frequency or high-reliability routes, can make overall route measures more complicated.

3.3.2 Route Network Structure and Type of Service

Regarding route network structure, systems with higher transfer rates should consider focusing on travel times and the time needed for transfers. If the system is radial and based on timed transfers, on-time arrivals at the transfer point would be a critical concern. Systems with grid networks and multiple transfer points would need to emphasize on-time arrivals at all transfer points.

3.3.3 Selecting Reliability Measures

Taking the stated goals and objectives as well as the resources and constraints into account, the agency should select the high-level measures appropriate for the type of service. Chapter 7 includes reliability measure information sheets for an agency to consider in its selection of reliability measures. The measures are categorized by the three aspects (punctuality, variability, and non-operation) as well as by orientation (customer, agency, and operator). The calculation and analysis section of each sheet provides information on resource requirements related to staff time, training, and data.
The usage section explains general usage in the industry and understanding by the public. Finally, each measure is evaluated in seven categories:

• Definition of Reliability – Does the measure address applicable aspects of reliability?
• Customer Impact – Does the measure reflect the impacts of unreliability on customers rather than impacts internal to the agency?
• Cost/Ease – What are the cost and ease of use by the agency, in terms of analysis time and data required?
• Transit Comparisons – Does the measure allow system-to-system, route-to-route, stop-to-stop, or temporal comparisons?
• Multimodal Comparisons – Does the measure allow comparison to other modes of transportation (automobile, bicycle, walking)?
• Corrective Actions – Does the measure allow for reasonable corrective actions to be undertaken?
• Communicating Results – Is the measure easily understood by high-level officials or customers?

The combination of these items should be considered in the selection of a program of measures. Although not every measure needs to address all aspects, as a set, the measures should be chosen to be comprehensive. Agencies should ask themselves a series of questions as measures are selected:

1. Is the measure to be used primarily internally or as a customer-facing measure? There may be differences in the required ease of understanding.
2. What data are available? Would additional data be required for the measure? What additional processing is required to validate the data and remove bad records? Additional data collection systems, such as AVL and APC, may be costly to implement, and some measures may be difficult to calculate without such data. Conversely, a new technology system may allow a new measure to be used.
3. Is the agency able to analyze the data and obtain the measure frequently enough to make it usable for monitoring reliability? Data must be used at least every regular service update or, for smaller agencies, multiple times per year. Designing a thorough program with several detailed measures is appealing, but with limited staff, such a program will not be sustainable over time.
4. On what basis will the measure be applied temporally and spatially? Many agencies calculate measures for the public at a system-wide level and internally at a route or time-point level. The desired geography and time period over which a measure will be calculated should be considered in the selection.

3.3.4 Developing Consensus Among Stakeholders

Once measures are identified, they should be vetted across departments within the agency to ensure buy-in from intra-agency stakeholders. Users of the measures and providers of the necessary data should be consulted to ensure that data will be available in a timely manner for calculation and that the measures will be appropriate for decision making. A task force or steering committee that includes multiple departments internal to the agency as well as external stakeholders may be used to vet the chosen measures.

3.4 Step 3 – Select Reliability Standards

Once measures are chosen, targets for an acceptable range of each measure can be set. Other approaches, such as measuring change over time or using relative standards, can be used as well.

3.4.1 Choosing Realistic Standards

Measures require some method to assess whether performance is meeting a target considered to be acceptable. In many cases, this will be a standard that includes a range; however, some measures have multiple dimensions to such a standard.
Using on-time performance as an example, the range within which a vehicle is considered on time varies by agency, but based on the survey results, it typically falls in a window from 0 to 1 minute before the scheduled time to 5 minutes after it. Agencies then choose a percentage of service that must fall within this range, such as 80 percent. Both dimensions, the range of time and the percentage of vehicles that must fall within the range, can differ based on the type of service and the time of day. Similarly, these dimensions can differ based on the location of the stop, such as the end of the line versus mid-route.

In some cases, a standard may be difficult to establish or may not be appropriate. Another method is to set a relative target, such as flagging the bottom 10 percent of routes in any period for further assessment on any selected reliability measure. Similarly, reliability measures may be compared to trends over time so that the individual characteristics of routes are automatically considered: if reliability suddenly drops or steadily declines over time, that particular route can be assessed further.

The standard or target may also differ depending on the audience. Internal to the agency, one standard or target may be used to trigger more in-depth analysis. For any routes or runs that routinely fall below the standard, the diagnostic tools discussed in Step 5 would be used to evaluate the problem and determine the root cause and applicable treatments. Another standard or target may be used at the service or route level for agency management, external stakeholders, and riders, allowing the agency to demonstrate that the service is performing as expected. The agency can also consider whether its standards are intended to be aspirational (not achieved now, but a goal to work toward to continually improve performance) or minimums (used to flag low-performing services to be targeted for improvements). This will relate back to the goals and objectives and the agency's broader performance measurement philosophy. Although the same measure may be used for multiple purposes, the standards may differ.

At the time of writing, there was little guidance from the FTA or APTA on standards, although there were reliability reporting requirements. Thus, agencies often conduct a peer-agency analysis to find standards used by other agencies with similar levels of service and ridership demographics. The reliability measure information sheets in Chapter 7 include applicable information about the standards used by example agencies. However, this is an area in need of future research to establish standards that are achievable by agencies and that meet passenger expectations.

3.4.2 Developing Consensus Among Stakeholders

It is very important that the standards chosen to evaluate the measures are reasonable and agreeable to all parties. Although standards should be set high enough to encourage improvement in reliability, all standards should have an element of realism to ensure that the agency has the means to react to results that fall below the standard. As in previous steps, users of the measures should be consulted in setting the standard. This would also be an appropriate item for review by a task force or steering committee to ensure buy-in from internal and external stakeholders.

3.5 Step 4 – Implement the Program and Monitor Performance

3.5.1 Implementation

Once the reliability measures and applicable standards have been chosen, the program should be implemented. Agencies may want to start with a small pilot program to better assess the data and analysis capabilities available.
These small programs can be a good way to assess the initial effectiveness of a reliability measure and to understand whether the cost of measurement is worth the possible gains in performance. Agencies often attempt such pilot programs with the introduction of a major new service aimed at improving reliability, such as a bus rapid transit (BRT) corridor.

Along with implementation, the responsibility for data collection, data analysis, and reporting of reliability measures must be understood. Data collection systems may already be in place, but new data sharing within the agency may be needed for analysis to take place. Similarly, an agency may already be analyzing the data but not reporting it in a format that can be used to monitor the reliability measure.

3.5.2 Monitoring and Reporting Performance

One key aspect of the implementation process is the reporting of the reliability measures. As previously discussed, the frequency and scale of measures are a key component of the system. It is one thing to assess on-time performance for the entire transit system, and another to look at on-time performance on a stop-level basis. Perhaps more applicable is the need to aggregate on-time performance both by route, to assess whether a particular route is consistently behind schedule, and by stop, to understand whether characteristics of the stop (near side versus far side, improved boarding platform, etc.) can be altered.

In the monitoring process, the measures should be calculated by an analyst and compared regularly to applicable standards or to trends over time, with any deviations noted. External factors influencing measures in any particular period should be noted, such as a bad weather day, a major incident, or construction. Although detailed backup data may be provided, the reports should be easy to read and digest by decision makers within the agency, who will push reliability issues to the next step in the process: diagnostic assessment.

Along with the frequency and scale of measures comes a separate question of the frequency and scale of reporting. An agency may have one set of measures that it monitors internally to identify problem times and locations, and another set of measures that are available to the public and other stakeholders for accountability.

3.5.3 Checking Results for Reasonableness

As the data are analyzed and reported, errors may be found in the accuracy of the data sources or calculations. Given the importance of the measures for decision making, checks for reasonableness should be undertaken to ensure a practical level of validity in the reliability measures. Dramatic changes in measures over time may require additional checks to ensure that the data are not out of line with the long-term trend.

3.6 Step 5 – Perform Diagnostic Assessment

3.6.1 Choosing the Right Diagnostic Tools

Overarching reliability measures can assess unreliability on a refined temporal and spatial basis, but often the types of measures implemented in Step 2, Step 3, and Step 4 will only identify that a problem exists. For an agency to be able to address unreliability, the root cause should be identified to determine the most appropriate treatments. Causes of unreliability can be grouped into five areas: non-operation, early or late starts, inconsistent travel speeds, inconsistent dwell times, and inconsistent transfer times. Diagnostic tools must be capable of identifying which of these causes is present in the system, on the route, at the stop, and so forth.
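One minimal diagnostic sketch along these lines: if AVL/APC records can be decomposed into per-trip component times (running time on each segment, dwell time at each stop), ranking the components by their variance points to where inconsistency originates. The component names below are purely illustrative:

```python
from statistics import pvariance

def rank_variability(component_times):
    """component_times: dict mapping a component name (e.g., a running
    segment or a stop's dwell) to a list of observed times, one per trip.
    Returns (name, variance) pairs sorted with the most variable
    component first."""
    return sorted(((name, pvariance(times))
                   for name, times in component_times.items()),
                  key=lambda pair: pair[1],
                  reverse=True)
```

In this toy form the ranking only says where variability is largest, not why; it is a starting point for applying the cause and factor frameworks of Chapter 5, not a substitute for them.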
Chapter 5 provides a discussion of how these causes affect reliability. Furthermore, there is a set of factors that affect reliability, including factors both internal to the agency (bus operators, route planning, bus fleet, fare collection) and external to the agency (traffic, weather, incidents, customer flows). These factors are also presented in Chapter 5. Again, understanding which of these factors is influencing the reliability problem under investigation will aid in selecting treatments.

3.6.2 Developing Consensus Among Stakeholders

As with Step 2, the implementation of diagnostics will require the buy-in of other departments within the agency to ensure that the necessary data will be available in a timely manner. The accuracy and precision of the data to be used must be assessed by both the department conducting reliability diagnostics and the departments providing the data. This may again involve a task force or steering committee that includes multiple departments internal to the agency, especially if diagnostic measures are being used for the first time because of a new system (for example, use of AVL or APC data) or a new process being put in place.

3.7 Step 6 – Identify Reliability Treatments

Using the results of the diagnostic assessment, an agency can now begin to identify and assess possible treatments to improve reliability.
3.7.1 Range in External Involvement with Treatments

For the purposes of this guidebook, treatments have been grouped into four categories: operational, physical, technological, and policy-related. Operational treatments are those that agencies implement at the system, route, trip, or stop level through service planning and real-time control. These treatments are often the first line of defense in treating reliability problems, but they sometimes come at the expense of other aspects of service, such as longer travel times. Physical treatments are those that alter vehicles or infrastructure; they often take longer and cost more to implement. Technological treatments use technology, such as signal priority, real-time information, and fare payment systems, to improve actual or perceived reliability. Finally, policy treatments change the rules or the behavior of customers, or other policies enforced by the agency.

Of these categories, physical and technological treatments usually require the most external involvement. Operational treatments are often entirely within the agency's control. Some policy treatments, such as bus shoulder operations or yield-to-bus laws, typically require legislation or cooperation with other agencies. Agencies should take the involvement of external stakeholders into account in the selection of treatments, but should not shy away from implementing treatments that require coordination. With good established relationships among the agency, the state department of transportation, and the city/county public works and traffic departments, these treatments may be easier to implement than some operational treatments that require extensive public involvement processes (such as service changes), and they may yield better results in terms of reliability improvements.
3.7.2 Selecting Effective Treatments

In selecting treatments, multiple items must be considered, including the causes of unreliability, treatment trade-offs, the expected effect of the treatment in addressing unreliability, the capital and operating costs associated with the treatment, and the ease of implementation. Chapter 6 includes tables listing treatments of each of the four types (operational, physical, technological, and policy) with rankings (low, moderate, high) for each of these factors. Chapter 7 includes much more detail about each factor for each of the treatments. Although it is possible to write generalities about each factor for the purposes of a guidebook, internal and external factors specific to the agency must be considered in the selection of the most appropriate treatment.

As a process, the causes of unreliability identified in Step 5 should first be used to identify treatments that could address the problem at hand. Table 6.1 organizes the treatments based on the causes of unreliability they address. To select an appropriate treatment, it is essential for the agency to have identified the root cause of the unreliability.

Agencies should then use the generalized capital costs, operating costs, and ease of implementation (presented in Chapter 6 and Chapter 7) to choose the most appropriate treatments based on the resources available and the severity of the problem being addressed. Knowledge of local factors must be used to assess possible treatments. As an example, a dedicated transitway is among the most expensive capital expenditures of all the treatments; however, if right-of-way is available, the cost can be much lower, and its expected effect is one of the highest of all the treatments. Similarly, the balance between capital and operating costs should be considered. Capital funds are often available through specific FTA, state, or regional grants, but operating costs will be limited by the agency's budget.
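A first pass at this screening process can be sketched as a simple filter-and-rank step. The treatment entries and their low/moderate/high rankings below are hypothetical placeholders; an agency would populate the table from Chapter 6 and its own local knowledge:

```python
# Hypothetical treatment catalog: names, causes addressed, and
# low/moderate/high rankings are illustrative only.
RANK = {"low": 1, "moderate": 2, "high": 3}

TREATMENTS = [
    {"name": "Schedule adjustment",
     "causes": {"inconsistent travel speeds"},
     "effect": "moderate", "capital_cost": "low", "ease": "high"},
    {"name": "Transit signal priority",
     "causes": {"inconsistent travel speeds"},
     "effect": "high", "capital_cost": "moderate", "ease": "moderate"},
    {"name": "Off-board fare payment",
     "causes": {"inconsistent dwell times"},
     "effect": "high", "capital_cost": "moderate", "ease": "moderate"},
]

def screen(cause, treatments=TREATMENTS):
    """Keep treatments addressing the diagnosed cause, then rank them:
    higher expected effect first, then lower capital cost, then ease."""
    candidates = [t for t in treatments if cause in t["causes"]]
    return sorted(candidates,
                  key=lambda t: (-RANK[t["effect"]],
                                 RANK[t["capital_cost"]],
                                 -RANK[t["ease"]]))
```

Such a screen only produces a short list; the trade-off and local-factor review described in this section still has to be applied to each candidate by hand.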
Finally, the ease of implementation can vary by agency. Bus operator training and incentives require the support of labor, but for an agency with a good operator–agency relationship, this may be an easy undertaking. Similarly, the ease of implementing transit signal optimization or transit signal priority (TSP) can vary greatly based on the relationship with the municipality.
Developing a Bus Service Reliability Improvement Program 25

The treatment trade-offs and expected effects should also be considered to ensure that the treatment is an appropriate fit. The trade-offs in customer impact, community impact, or necessary changes or training within the agency may not be acceptable in exchange for the expected effect on reliability. Therefore, the downside of any treatment should be carefully considered before proceeding. Likewise, spending time implementing a treatment that is expected to have little effect on a major reliability problem may not be worthwhile. In the same way, the expense of a high-impact treatment may not be needed in corridors where reliability issues are minor and may be addressed with more incremental improvements.

3.7.3 Developing Consensus Among Stakeholders

This step in the process will require the most consensus among stakeholders. Very few treatments require no interaction outside of the department overseeing reliability. Internal to the agency, many of the most effective treatments require interaction with operations or scheduling. Many of the most effective treatments also require involvement by stakeholders external to the agency, such as the local public works or traffic department, the city, or a state agency. Still others involve outreach to the public. Relying on relationships built in earlier steps will be critical to ensuring that treatments are accepted and buy-in is achieved. As in previous steps, a task force or steering committee that includes multiple departments internal to the agency as well as external stakeholders may be used to assess possible treatments.

3.8 Step 7 – Implement and Monitor Reliability Treatments

Once one or more treatments are selected, the process of implementing them begins. As stated in Step 6, the ease of implementation is generally discussed in Chapter 7.
However, the effectiveness of working relationships within and outside the agency can have a vast impact on the success of the implementation process.

3.8.1 Implementation Tips for Working Within the Agency

For a reliability improvement program to succeed, multiple departments must often work in coordination, and buy-in from each must be achieved. In a typical organizational structure, these departments would include planning, scheduling, capital budgeting, maintenance, and bus operations. The bus operations department in particular must be involved and not feel that change is being dictated to it without its input.

Once a treatment has been selected, buy-in has been achieved from the applicable departments and external partners, and the chief executive officer and board of the agency have approved the treatment, an implementation plan must be developed. This plan should include before-and-after data collection to test the effectiveness of treatments. Ideally, treatments will be rolled out one by one to allow the effectiveness of each to be monitored and understood individually.

3.8.2 Implementation Tips for Working with Outside Agencies

Often, treatments must be implemented in coordination with outside agencies. Typically, these treatments are roadway elements such as bus lanes, TSP, and enhanced stops, for which the approval of municipal roadway agencies is needed. Transit agencies must often work with multiple municipalities, which can make such efforts even more challenging. The transit agency's budget can also be affected if the municipality expects the agency to pay for any improvements to the roadway environment.
Although such outside coordination can make implementing treatments more time-consuming and costly, with good ongoing relationships between agencies, such efforts can be much easier and may be less costly than some technological treatments that require substantial equipment cost. Many metropolitan planning organizations (MPOs) host regional forums to encourage communication between partners such as local governments and transit agencies. With regular forums such as these, projects can be discussed in advance to judge feasibility. Furthermore, once a treatment project has begun, periodic project management meetings should include all relevant stakeholders.

3.8.3 Monitoring Treatments for Effectiveness

One of the key aspects of a reliability improvement program is determining the effectiveness of the treatments being implemented. Often, when reliability is an issue, treatments will be attempted without a clear plan for assessing them. With a comprehensive reliability improvement program, reliability measures will be continuously monitored and reported to assess changes in the higher-level performance of the system in terms of reliability. However, if the diagnostic measures from Step 5 are not continued after the implementation of the treatment, detailed data to assess its effectiveness will not be available. Not only should this assessment take place within the agency, but the before-and-after results should be published to make other agencies aware of ranges in treatment effectiveness.

3.9 Step 8 – Review and Update the Program

In the final step of the reliability improvement process, the existing program should be reviewed and updated.

3.9.1 Reviewing Reliability Measures over Time

At least every 5 years, the program should be reviewed to assess whether it is working.
The process should additionally be reviewed when there are major changes to the structure or operations of the agency, such as the hiring of a new general manager, the introduction of a new service type such as rapid bus, a new transit mode such as light or heavy rail, or a major new funding source such as a sales tax.

There are several ways to determine whether the reliability improvement program is working, but one of the primary ways is by measuring customer and employee satisfaction. Using annual or biannual rider surveys, an assessment of the perception of reliability can clarify whether an agency is making improvements. Periodic non-rider surveys can ask for opinions about reliability that may be affecting a non-rider's willingness to try transit. Finally, few transit agencies conduct internal employee satisfaction surveys, but as mentioned previously, bus operators can affect and be affected by reliability issues, and their perception of reliability and willingness to improve it are critical to the agency's success.

Additionally, as agencies monitor their own reliability over time, they should compare it to the reliability of their peer agencies' services to understand whether their program needs revision to focus more on particular aspects of reliability or to reset standards to match those in the industry.

3.9.2 Updating the Program of Reliability Measures

When the changes or issues discussed previously are identified, it is time to begin the process again from Step 1. In the interim, agencies will often find themselves revisiting diagnostics
in Step 5, treatment selection in Step 6, and treatment implementation in Step 7, as the reliability measures in Step 2 identify new issues that should be assessed and treated.

3.10 Example Application

This section contains several examples of how to use this guide, including setting up or modifying a program, as outlined earlier in this chapter; using the more detailed discussions in Chapter 4, Chapter 5, and Chapter 6; and using the information sheets in Chapter 7. The first example follows a new transit agency that is just establishing a reliability improvement program. The second example explains how an agency integrates a new computer-aided dispatch (CAD)/AVL system into its program. The third example shows how an agency might move from identifying a problem to finding applicable treatments.

3.10.1 Example 1: A New Reliability Improvement Program

Situation: The Regional Transit Agency (RTA) is a newly established agency providing regional express bus service. As a new agency, it is starting a reliability improvement program from scratch. It is simultaneously creating an entire performance management program and making decisions about equipment and technology purchases.

Initial Process: With buy-in from the general manager, the planning group at RTA goes through each of the reliability improvement program development steps in order, beginning with Step 1 – Define Goals and Objectives. Through the process that established the agency, a general set of goals and objectives has already been defined, and the major stakeholders have been identified. As a new agency, RTA is in the process of purchasing a fleet that comes equipped with a CAD/AVL system on every vehicle. This will give the agency the data it needs for reliability analysis in later steps. RTA establishes a monthly advisory committee meeting to allow feedback on each stage of the process, beginning with the goals and objectives.
RTA then begins Step 2 – Select Reliability Measures and selects a program of reliability measures that allows it to monitor its schedule-based express services for punctuality, variability, and non-operation. RTA wants to monitor punctuality on a route-level and time-point-level basis. It selects on-time performance as a measure that can be used internally to adjust schedules and can be reported on its dashboard for public outreach. RTA also wants to monitor variability on a route-level basis. It selects travel time variability because many of its customers are choice riders, and the regional planning agency to which it is tied would like to monitor average travel times and travel time variability for multiple modes of travel. RTA wants to monitor non-operation on a trip-level basis for use within the operations group and within the planning group for longer-term analysis. It selects missed pullouts and number of crashes as measures that can be used to assess operations over time. This package of measures gives RTA both internal and customer-facing measures. Finally, RTA has a small customer research group that will be conducting biannual passenger surveys. The planning group prepares several questions that can be added to the onboard surveys to monitor passenger ratings of reliability.

RTA then turns to Step 3 – Select Reliability Standards and begins setting its standards for each measure by conducting a peer-agency analysis. It contacts 10 peer agencies to learn at what level each has set its standards for all the measures selected and the process used to set them. For on-time performance, RTA decides to set two levels: one that will be used internally at the time-point level to identify segments of trips that are not meeting performance targets, and another at the route level that will be reported to passengers on the public-facing dashboard. As an operator of express buses, the agency would be most interested
in not departing early from stops where passengers are picked up and in arriving at the destination on time. This might require different on-time definitions for pick-ups and drop-offs.

Ongoing Process: RTA then begins Step 4 – Implement the Program and Monitor Performance. It has set up a process by which data flow from the CAD/AVL system to the operations staff for immediate reactions. Data are also archived each night, and an analyst assesses the measures on a monthly basis for all routes. As issues are identified, RTA will begin to address Step 5 – Diagnostic Assessment. Initially, however, it is simply pleased to have a plan for a reliability program in place before beginning service.

3.10.2 Example 2: Revision of a Reliability Improvement Program in Conjunction with a New CAD/AVL System

Situation: The City Transit Authority (CTA) has just purchased a new CAD/AVL system with automatic passenger counters (APCs) on every vehicle, which allows much more in-depth analysis of vehicle location data, including signal delays, dwell times, and stop-level boardings and alightings.

Initial Process: Although CTA already has a reliability improvement program in place, the availability of better data encourages it to revise its program to conduct more in-depth analysis of reliability. The agency goes back to the initial two steps, Define Goals and Objectives and Select Reliability Measures, to assess how its goals and measures may change with the availability of new data. In addition, the new system allows the agency to conduct better diagnostic assessment of some long-standing reliability issues it is facing.

CTA begins with Step 1 – Define Goals and Objectives by reviewing its existing goals and objectives. It finds that its existing goals still apply, although it adds the goal of better assessing the components of travel time to diagnose problems in the system. CTA then begins Step 2 – Select Reliability Measures.
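On-time performance with separate definitions at pick-up and drop-off stops, as contemplated in Example 1, might be computed along these lines. The thresholds and stop-event records below are illustrative assumptions, not agency standards:

```python
# Illustrative on-time performance (OTP) check with different
# definitions for pick-up stops (no early departures) and the drop-off
# terminal (no late arrivals). Thresholds and events are assumptions;
# deviation_min is (actual - scheduled) time in minutes.
EARLY_PICKUP_LIMIT = 0.0   # pick-up stops: never depart before schedule
LATE_PICKUP_LIMIT = 5.0    # pick-up stops: up to 5 min late is on time
LATE_DROPOFF_LIMIT = 5.0   # drop-off: arrive no more than 5 min late

def is_on_time(event):
    dev = event["deviation_min"]
    if event["kind"] == "pickup":
        return EARLY_PICKUP_LIMIT <= dev <= LATE_PICKUP_LIMIT
    return dev <= LATE_DROPOFF_LIMIT  # drop-off: early arrival is fine

events = [
    {"kind": "pickup", "deviation_min": -1.0},   # left early: not on time
    {"kind": "pickup", "deviation_min": 3.0},    # on time
    {"kind": "dropoff", "deviation_min": -2.0},  # arrived early: on time
    {"kind": "dropoff", "deviation_min": 7.0},   # too late
]

otp = sum(is_on_time(e) for e in events) / len(events)
print(f"On-time performance: {otp:.0%}")  # 2 of 4 events are on time
```

Internal time-point-level and public route-level standards, as in Example 1, would apply different aggregations of the same event-level check.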
In addition to several non-operation measures, CTA previously used on-time performance at route start and end points and running time variability at the trip level. However, its new CAD/AVL system allows it to look more deeply into the components of travel times, including dwell times, signal delays, and time between stops. It specifically adds dwell time variability to the program of reliability measures that it tracks as an agency.

For this new dwell time variability measure, CTA addresses Step 3 – Select Reliability Standards. Rather than a numeric standard, it decides to track stop-level data to identify the worst 10 percent of routes, allowing continuous improvement over time. The agency then turns to Step 4 – Implement the Program and Monitor Performance.

Ongoing Process: Previously, when operators were not meeting scheduled time points, CTA usually added time to the schedule. Although the agency's on-time performance is one of the best in the business, its average speeds are much slower than prevailing speeds on the roadway. The new CAD/AVL system allows CTA to look more deeply into why speeds are inconsistent by examining all the components of travel time that could be affecting reliability. Therefore, CTA begins Step 5 – Diagnostic Assessment.

Along with the new measure of dwell time variability, the first analysis CTA conducts is a dwell time analysis. Using its new CAD/AVL system, it can merge dwell time data with passenger boarding data from its new APCs and combine the data for multiple routes. The agency finds that it has variable passenger demand on several routes, and it consults Table 6.5, Possible Treatments for Inconsistent Dwell Times. Several treatments may be applicable, including the operational treatments introduce standby buses, right-sizing bus stops, schedule and headway optimization, and increase fleet size.
Several physical and technological treatments are also applicable, including level boarding and low-floor buses, articulated buses, transit signal priority, and boarding limits.
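CTA's stop-level standard, flagging the worst 10 percent of routes on dwell time variability, could be screened roughly as follows. The dwell observations are fabricated, and the coefficient of variation is an assumed choice of variability statistic:

```python
import statistics

# Hypothetical dwell-time samples (seconds) per route from the CAD/AVL
# archive; real data would cover every route and stop over a period.
dwells = {
    "Route 1": [20, 22, 21, 23, 20],
    "Route 2": [15, 45, 10, 60, 20],   # highly variable dwell times
    "Route 3": [30, 32, 31, 29, 30],
}

def dwell_cov(samples):
    """Coefficient of variation of dwell times: stdev / mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Rank routes from most to least variable, then flag the worst decile
# (at least one route) for diagnostic attention.
ranked = sorted(dwells, key=lambda r: dwell_cov(dwells[r]), reverse=True)
worst_count = max(1, round(0.10 * len(ranked)))
flagged = ranked[:worst_count]
print("Flagged for diagnosis:", flagged)
```

Because the standard is relative rather than numeric, the flagged set shrinks in severity as overall performance improves, which supports the continuous-improvement goal described above.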
CTA happens to be looking at a major bus purchase in the next few years, so it takes several possible treatments into further consideration along with that purchase. In addition, CTA decides to review its reliability improvement program once the CAD/AVL system has been collecting data for one full service period. The agency believes the additional knowledge of the system will allow it to bring even more diagnostic-level measures into its program over time.

3.10.3 Example 3: Reliability Issue Identified in Variable Travel Speeds

Situation: The Area Transit District (ATD) has been monitoring its reliability using a set of measures for a number of years. Over time, the agency has noticed that its travel speeds have become more variable and that its performance against its travel speed standard has been slipping.

Initial Process: ATD has been through the entire reliability improvement program process and, as part of Step 8, regularly revisits previous steps. In this case, the agency begins with Step 5 – Diagnostic Assessment to further analyze its measure of variable travel speed. The agency finds that the worst variability occurs regularly in two corridors. A field visit identifies that buses are delayed merging into traffic from several stops and that passenger demand is often variable.

ATD begins Step 6 – Identify Reliability Treatments. It consults Table 6.4, Possible Treatments for Inconsistent Travel Speeds, and finds right-sizing bus stops to be one possible treatment. This treatment also addresses some of the agency's dwell time issues when it consults Table 6.5, Possible Treatments for Inconsistent Dwell Times. During the field visit, agency staff observe that many of the stops along these two corridors are configured poorly, causing delays for passengers trying to board the bus. The agency decides that right-sizing bus stops may be one of the best options.
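A corridor-level screen for travel speed variability like ATD's might look like the following sketch; the corridor names and AVL-derived speeds are illustrative assumptions:

```python
import statistics

# Hypothetical AVL-derived average speeds (mph) for the same corridor
# segment across many trips; names and values are illustrative only.
corridor_speeds = {
    "Main St": [12.0, 18.5, 9.0, 20.0, 11.5],   # variable
    "Oak Ave": [16.0, 15.5, 16.5, 15.8, 16.2],  # steady
    "5th St":  [14.0, 22.0, 8.5, 19.5, 10.0],   # variable
}

def speed_cov(speeds):
    """Coefficient of variation of observed speeds: stdev / mean."""
    return statistics.stdev(speeds) / statistics.mean(speeds)

# Rank corridors from most to least variable to prioritize field visits.
by_variability = sorted(corridor_speeds,
                        key=lambda c: speed_cov(corridor_speeds[c]),
                        reverse=True)
print(by_variability[:2])  # the two corridors to examine first
```

The same combined-AVL approach extends to other travel time components (such as signal delay per intersection) once the diagnostic focus narrows to specific corridors.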
Armed with knowledge of how much delay the poorly configured stops are causing both transit and general roadway traffic, ATD reaches out to the city to begin discussions about reconstructing the areas close to those stops.

Ongoing Process: In its meeting with the city, ATD also learns about a Smart Cities signal systems pilot program to which the city is interested in applying. The city would like to know on which corridors transit signal priority would be most useful to ATD for improving reliability. Jumping at this chance to improve its operations, ATD conducts additional analysis of the signal delays along several corridors, using combined AVL data for multiple routes. The agency finds that three corridors, each serving multiple routes, have certain signals that cause significant signal delay. ATD reports to the city with detailed data on the reduction in signal delay that various signal priority configurations would enable.

The city and ATD decide to implement both right-sizing bus stops and transit signal priority as part of a pilot program, so the agency turns to Step 7 – Implement and Monitor Reliability Treatments. ATD would like to follow best practices for a pilot program so it can understand the impacts of each treatment individually. It therefore develops a plan with the city to phase in the treatments one after the other. The agency establishes a data collection program to compare all reliability measures before and after the implementation of each treatment.

3.10.4 Summary

These three examples provide a sample of the types of situations in which the sections of the guidebook may be used. They are meant to serve as an overview of how to consult Chapter 4, Chapter 5, and Chapter 6 as agencies address the relevant steps and how to use the descriptions of measures and treatments in Chapter 7.