Transit Service Evaluation Standards (2019)

Chapter 5 - Conclusions

Suggested Citation: "Chapter 5 - Conclusions." National Academies of Sciences, Engineering, and Medicine. 2019. Transit Service Evaluation Standards. Washington, DC: The National Academies Press. doi: 10.17226/25446.

The purpose of this synthesis report is to explore issues and document effective practices in the development and application of performance measures, service evaluation standards, and data collection methods at North American transit agencies. Findings from the literature review, survey responses, and case examples identify and assess how performance metrics and standards are used throughout all levels of the organization to determine service delivery. This chapter presents conclusions from this synthesis project and offers areas for future study. The areas of future study offered here address the design and use of metrics that reflect the customer’s experience; ways to expand benchmarking efforts and opportunities; the process of ensuring the accuracy of performance data; ways to implement a service evaluation process; extension of the process to new modes and technologies; multimodal applications; and the education process for board members, elected officials, and stakeholders.

Findings from the Survey and Literature Review

• A majority of respondents rated their efforts to design and apply service evaluation standards as either “very successful” (22%) or “somewhat successful” (44%).
• The four most-often-mentioned challenges, in response to an open-ended question about the single biggest challenge faced by the transit agency, were
  – Limited staff time,
  – Data quality and accuracy,
  – The process of achieving buy-in and consensus around evaluation standards and what to do if these are not met, and
  – How to balance evaluation results against competing goals.
• The most common strategies for overcoming the single biggest challenge were
  – Ongoing education within the organization and with stakeholders and the public,
  – Investment in new technologies for collecting and evaluating data, and
  – Attention to data collection protocols.
• Benefits of service evaluation standards included a transparent rationale for service changes and an objective, data-based framework for analysis.
• Drawbacks of service evaluation standards included
  – The difficulty of obtaining buy-in by the board and others,
  – An ongoing need for education,
  – The need to strike a balance between rigid application of standards and flexibility,
  – The complexity and time-consuming nature of the evaluation process,
  – Inability to increase service where warranted because of budget constraints, and
  – Inconsistent application of standards.

• Adjusting service levels (by modifying or reallocating service, enabling cuts to low-demand routes, identifying opportunities for successful service improvements, or guiding a major redesign of the existing network) was the most successful action resulting from service evaluation.
• The most common responses to the question, “If you could change ONE aspect in the development and use of service evaluation standards by your agency, what would you change?” were
  – Formalizing the process (both standards and actions to be taken) and applying it uniformly, educating everyone involved, and automating data collection and analysis.
  – Maintaining a focus on the evaluation process and potential improvements was a high-level suggestion from multiple agencies.
• The most frequently mentioned lessons learned regarding service evaluation standards were
  – A flexible process,
  – Verified and understandable data,
  – Simplicity,
  – A connection between standards and action,
  – Active board involvement, and
  – Performance standards based on agency mission and goals, communication, and education.
• Several respondents provided final thoughts at the end of the survey. Their comments stress connections between standards and goals and between evaluation and action, widespread involvement, consistency in how data are collected and analyzed, a transparent process, benchmarking, appropriate timeframes for presentation of information, and accountability to the standards.

Lessons Learned and Keys to Success from Case Examples

The six case example cities and agencies were selected for specific reasons:

• Boston’s MBTA is pioneering the development of customer-based metrics that better reflect the passenger experience rather than the operational characteristics of delivered service.
• CCRTA has a flexible process that takes board concerns into account in the application of performance standards; the agency recently changed bus stop spacing in accordance with these standards after an in-depth discussion between board members and staff.
• Denver’s RTD has a long history of service evaluation using standards tied directly to the agency’s mission and goals (1) to define the type and level of service that a community can expect and (2) to provide an objective, transparent basis and rationale for RTD’s service-level decisions that everyone can understand.
• Modesto’s MAX is a small agency that consulted its stakeholders and riders as it developed metrics, guiding policies, and principles to guide the future development of transit. MAX continues to educate the public and elected officials as it begins to use these metrics to make decisions about its system.
• Seattle’s KCM developed a performance evaluation process in conjunction with its strategic plan and has established priorities that directly inform service decisions.
• PalmTran in West Palm Beach created a new PMO that reports directly to the executive director, with the purpose of using performance metrics to improve the agency rather than simply monitoring performance and producing reports. The agency created nine process improvement teams around individual performance areas, with specific metrics assigned to each team.

Although each agency has specific concerns of its own, recurring themes emerge throughout the case examples, as summarized below.

• Develop a transparent, data-driven process that uses verified and understandable data. Buy-in at the very top of the organization is an absolute necessity.
• Begin at the highest level: tie performance standards directly to the agency mission and goals. Ensure policies are aligned with the metrics to facilitate or enable the allocation of resources to fix problems revealed by the metrics. Then develop other specific criteria and measurements (standards) that support the principal objectives and provide specific, clearly defined support for service development.
• Keep official or public-facing standards and metrics simple, broad, and few in number. Ensure the agency has the capability to measure the metrics being proposed without spending excessive staff time collecting, cleaning, and analyzing data.
• Conduct as much outreach and education as possible. Encourage broad participation—including staff, stakeholders, and members of the public—in deciding on the high-level standards. Public feedback is very valuable in striking a balance between operational and public views of level of service. Build flexibility into the process.
• Never assume knowledge of what a service standard means or how it is used. Show your management, board, and customers (preferably annually) how individual services are meeting the standards or how shortfalls will lead to investigation of changes. Express the standards as clearly and concisely as possible, balancing detail and simplicity for your audience. Use graphics to convey complex ideas. Address multiple audiences in a public-facing dashboard; that is, provide the key metrics but offer the opportunity to explore in greater detail.
• Choose standards that can lead directly to taking action. Link standard problem-solving actions to underperforming services (“If service x is underperforming on metric y, do z to correct it.”). Use metrics as a decision-making tool to seed discussion and get employees involved. Educate your employees and build strong working relationships across departments. Employees cannot be held accountable if they do not understand the concepts and know the expectations.
• Communicate internally, with your board, and with stakeholders. Communication needs to be a constant process, not a one-shot event.
• Think through the implications of performance standards and be willing to revise them as necessary. Evolve on the basis of available data, analytical tools, and new situations. Find out what works well at peer agencies, then see how that can apply to your agency’s context and your board’s goals and objectives.
• Make regular presentations to the leadership team to demonstrate a commitment to ongoing identification of potential solutions. Without continuous process improvement, all an agency has is a scorecard.

Summary of Key Metrics and Standards in Use Today

Appendix D includes performance evaluation standards submitted by 23 responding transit agencies. Chapter 2 summarized the evolution of these standards. Table 16 summarizes the metrics and standards in use today; a brief computational sketch follows the table.

Conclusions

• A majority of respondents rate their efforts to design and apply service evaluation standards as either “very successful” (22%) or “somewhat successful” (44%). Compared with the overall results, midsized agencies were more likely to rate their efforts as “very successful,” large agencies as “somewhat successful,” and small agencies as “neutral.”

Table 16. Performance metrics and standards in use today. Each entry lists the indicator, the number of systems using it (n), and typical criteria and examples. Source: Appendix D, Sample Performance Evaluation Standards.

Productivity/Effectiveness
• Passengers per revenue hour (n = 13): By route type; fixed (20 corridor, 10 neighborhood); percentage of system average (below 66% probationary); percentile (bottom 10% local, 25% limited)
• Passengers per trip (n = 2): Trip must meet a minimum percentage of the system average; varies by time of day
• Ridership (n = 2): Specific numerical goal
• Ridership trend (n = 2): Ridership increase ≥ population increase

Financial/Efficiency
• Farebox recovery ratio (n = 7): Varies from 17% to 60%; most in the range of 18% to 32%; may include other revenue sources
• Operating expense per revenue vehicle hour (n = 2): Operating expense increase < consumer price index increase
• Subsidy per passenger (n = 2): Fixed ($4) or percentage (35% above peer group average triggers intensive review)

Service Availability
• Service span (n = 12): Varies by type of service; median 14–18 hours Monday–Friday; usually less on weekends
• Population served within a given distance (n = 9): Varies by density; can include employment; 75%–85% within 1/2 mile, 50%–80% within 1/4 mile
• Route spacing (n = 8): 1/2 mile typical; varies by density

Service Quality
• Schedule adherence (n = 20): 0–5 minutes late common; –3 to 7 widest; 65%–95% range; varies by stop location (lower midroute), time of day (lower in peak), and type of service
• Bus shelters (n = 10): Minimum daily boardings 20–50 (for a bench, 10–25); fewer if there is a concentration of low-income, senior, or disabled passengers
• Bus benches (n = 6): Minimum daily boardings 10–25; fewer for stops used by senior or disabled riders
• Lost runs/missed trips (n = 5): Maximum 0.5%–1.5% of all trips, or 1 per day
• Complaints (n = 3): 20–48 per 100,000 boardings; lower for “valid complaints”
• Headway adherence (n = 3): Wait time or time between arrivals ≤ headway (or headway + 2 minutes); 87%–90% standard
• Vehicle accessibility (n = 3): 100% of buses equipped with lifts/ramps; 98.6% of trains with at least one accessible car

Service Design
• Policy headways (n = 19): 30 minutes peak/60 off-peak is common; more frequent for the strongest routes (10–15 peak, 15–20 off-peak)
• Loading standard (n = 18): 135% average for peak periods/100% off-peak; express 100%; time limits for standees (15 minutes); space per passenger
• Bus stop spacing (n = 10): Median 660–1,250 feet; varies by central business district/urban/suburban location
• New service design (n = 9): Meet performance standards within a given period (typically 1 year; up to 3 years)
• Directness of service (n = 9) (see also transfers): Bus travel time/distance no more than 150%–175% of auto (125%–133% for express/limited); added passenger-minutes per boarding/alighting on a route deviation < 3, 5, or 10; limit on terminal loop length; no midroute loops
• Route structure (n = 6): Limits on number of branches/turnbacks (0–2)
• Performance index (n = 6): Combines productivity and financial metrics, sometimes demographics; point system or percentage of average as standards
• Transit-dependent areas (n = 4): Relaxed productivity standards or specific requirements
• Stop placement (n = 4): Far side versus near side; pullouts for stops on high-speed arterials
• Transfers (n = 4): Timed transfers, especially for low-frequency routes; maximum transfer time by route type, e.g., 10 minutes local routes/20 minutes regional routes
• Route duplication (n = 3): Offset schedules on common segments of routes
• Population/employment density (n = 3): Enhanced service levels in areas with 7/7.5+ dwelling units per acre
• Two-way service (n = 3): Wherever possible; one-way couplets no more than 2 blocks apart
• Route terminals (n = 3): At transit centers and major activity centers
• Distribution of service (n = 3): 60%–80% ridership/40%–20% coverage
• Recovery time (n = 3): Minimum 10%–15% of running time or 5 minutes, whichever is greater
• Interlining (n = 3): List interlined routes as a single route; evaluate as separate routes

Safety
• Preventable bus accidents (n = 6): Between 1 and 2 per 100,000 miles, or 0.25 per 100,000 kilometers
• Revenue miles per road call (n = 5): Range 4,000–20,000; median 10,000; average 12,400
• Street network/sidewalks (n = 3): No stops if inadequate pedestrian facilities or (in one case) limited street network connectivity
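Several of the productivity and financial indicators in Table 16 are simple ratios, and a number of agencies key review actions to a percentage-of-system-average threshold. The following minimal Python sketch is purely illustrative (hypothetical route data; the 66%-of-average probationary trigger is one example drawn from the table), not any agency’s actual procedure:

```python
# Minimal sketch: screen routes against a productivity standard.
# All data and names are hypothetical; the 66%-of-system-average
# "probationary" trigger is one illustrative threshold from Table 16.

routes = [
    # (route, annual passengers, annual revenue hours)
    ("1", 520_000, 18_000),
    ("2", 210_000, 12_000),
    ("3",  40_000,  6_000),
]

PROBATIONARY_SHARE = 0.66  # below 66% of system average -> review

# Productivity metric: passengers per revenue hour.
pph = {route: pax / hours for route, pax, hours in routes}
system_avg = sum(p for _, p, _ in routes) / sum(h for _, _, h in routes)

for route in sorted(pph):
    share = pph[route] / system_avg
    status = "probationary" if share < PROBATIONARY_SHARE else "meets standard"
    print(f"Route {route}: {pph[route]:.1f} pax/rev-hr "
          f"({share:.0%} of system average) -> {status}")
```

In keeping with the case-example theme of linking standards to problem-solving actions, a flag of this kind would seed discussion and further investigation rather than trigger automatic service cuts.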

• A transparent rationale for service changes and an objective, data-based framework for analysis are among the benefits of service evaluation standards.
• Drawbacks of service evaluation standards include lack of buy-in by the board and others, an ongoing need for education, the difficulty of striking the appropriate balance between rigid application of standards and flexibility, and the complexity and time-consuming nature of the evaluation process. Respondents also noted the inability to increase service where warranted because of budget constraints and inconsistent application of standards as drawbacks.
• Ongoing education within the organization and with stakeholders and the public, investment in new technologies to collect and evaluate data, and attention to data collection protocols were the most common strategies to overcome challenges. Limited staff time was rated as a major challenge by more than half of all respondents; new technologies and consistent data collection protocols address this challenge.
• Adjusting service levels was the most successful action resulting from service evaluation. Performance standards provided a clear and objective justification for needed changes.
• Suggested changes (in response to the question, “If you could change ONE aspect in the development and use of service evaluation standards by your agency, what would you change?”) included uniform application of a formal process (standards and actions), education, and automated data collection and analysis. Maintaining a focus on the evaluation process and potential improvements was a high-level suggestion from multiple agencies.
• Lessons learned included flexibility in the process, use of verified and understandable data, simplicity, active board involvement, and ongoing communication and education.

• Final thoughts from survey respondents stressed connections between standards and agency goals and between evaluation and action, consistency in how data are collected and analyzed, a transparent process, benchmarking, appropriate timeframes for presentation of information, and accountability to the standards.
• Case examples provide details on procedures, challenges, lessons learned, and keys to success. Smaller agencies (MAX in Modesto, California, and CCRTA in Corpus Christi, Texas) have greater limits on staff time and a greater need for education and outreach, yet have used performance standards to justify changes to their transit networks. King County Metro (Seattle, Washington) uses performance standards to inform investment decisions, following established priorities (reduced crowding, improved reliability, increased frequency, and investment in highly productive routes). Denver RTD has a transparent, long-established performance evaluation process that balances a business-like approach with jurisdictional equity issues. MBTA (Boston, Massachusetts) represents an emerging trend toward service evaluation focused on the passenger experience. PalmTran (West Palm Beach, Florida) highlights a trend toward creating a dedicated department responsible for performance evaluation that reports directly to the executive director; the agency has also drawn on the principles of Lean Six Sigma, a methodology for improving performance through collaborative team effort, in designing its performance evaluation process.
• The case examples emphasized transparency, explicit connection to agency goals and objectives, buy-in from senior management, flexibility, simplicity, use of peer agencies’ experience, a link between standards and problem-solving actions, outreach and education, ongoing communication (internally, with the board/governing body, and with stakeholders), and a willingness to evolve.

Areas of Future Study

Findings from this synthesis suggest several areas of future study:

• Exploration and use of metrics reflecting the customer’s experience,
• Expansion of benchmarking efforts,
• Ensuring accuracy of performance data,
• Improving the data analysis process,
• Implementing a service evaluation process,
• Extending the service evaluation process to emerging technologies and modes (such as microtransit or mobility on demand),
• Customer-focused performance metrics for multimodal trips, and
• Guides for board members and stakeholders.

These areas are discussed below.

Exploration and Use of Metrics Reflecting the Customer’s Experience

The MBTA case example includes customer-focused measures of schedule adherence on rail and passenger comfort on buses that are more directly relevant to the customer experience than operations-based measures. With continued growth in data availability, customer-based metrics are likely to become increasingly possible and practical. How can transit best measure performance from the customer’s perspective, and what additional data would be most useful in this regard? Can simplifying assumptions, such as those used to estimate origin-destination pairs on transit systems that record only boarding locations, extend agencies’ ability to measure things they cannot readily measure today? How can these new measures be forecasted? Can accessibility be integrated into performance standards, beyond stop spacing, in a way that acknowledges the importance of customer satisfaction among riders with disabilities? Individual agencies are exploring these issues, but an industry-wide effort could avoid re-inventing the wheel.
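On the origin-destination question raised above: a common simplifying assumption in the literature is trip chaining, which infers each alighting as the stop on the boarded route nearest the rider’s next boarding, and assumes the last trip of the day heads back toward the first boarding location. A minimal sketch under those assumptions, with hypothetical stops and fare-tap records:

```python
import math

# Minimal trip-chaining sketch: infer alightings from boarding-only
# fare records. All stops, routes, and taps below are hypothetical.

stops = {  # stop id -> (x, y) in arbitrary planar units
    "A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (2, 1), "E": (0, 1),
}
route_stops = {"10": ["A", "B", "C"], "20": ["C", "D", "E"]}

# One rider's boardings for the day, in time order: (route, boarding stop).
taps = [("10", "A"), ("20", "C")]

def dist(s1, s2):
    (x1, y1), (x2, y2) = stops[s1], stops[s2]
    return math.hypot(x2 - x1, y2 - y1)

# Pair each tap with the next one; the day's last trip is assumed to
# return toward the first boarding (a standard closure assumption).
od_pairs = []
for (route, origin), (_, next_board) in zip(taps, taps[1:] + taps[:1]):
    # Inferred alighting: the stop on the boarded route nearest the next boarding.
    alight = min(route_stops[route], key=lambda s: dist(s, next_board))
    od_pairs.append((origin, alight))

print(od_pairs)  # -> [('A', 'C'), ('C', 'E')]
```

Production implementations add feasibility checks (for example, the inferred alighting should be downstream of the boarding and within walking distance of the next tap) and validate results against on-board survey data.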

Expansion of Benchmarking Efforts

Several respondents noted the benefits of their agency’s membership in a benchmarking group, but only a minority of transit agencies are involved in formal benchmarking efforts. How do benchmarking efforts begin, and what actions could encourage additional efforts that would offer more transit agencies opportunities to interact with and learn from their peers?

Ensuring Accuracy of Performance Data

Data accuracy was a critical concern for many respondents, but this study was not designed to elicit information regarding the best means of ensuring it. Some technology providers have proprietary methods of cleaning APC data, as one example. What steps can a transit agency take to enhance accuracy in a situation of limited staff availability? How can data be verified as accurate, especially data that streams out to the public? Can new technologies assist in this area?

Improving the Data Analysis Process

Existing procedures are often time-consuming, exacerbating the challenge of limited staff time. Can technology help to streamline data collection and data analysis tasks? Can the needs of various departments be accommodated by a streamlined process? Are there models for a streamlined process?
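The accuracy and streamlining questions above lend themselves to simple automated screens that do not depend on proprietary tools. As one minimal sketch (hypothetical trip records and tolerance), the following flags APC trip totals whose boardings and alightings fail to balance, a common symptom of a miscounting sensor or a missed stop record:

```python
# Minimal APC sanity-check sketch: flag trips whose boardings and
# alightings fail to balance. All records and the tolerance are hypothetical.

TOLERANCE = 0.10  # allow a 10% imbalance before flagging

trips = [
    # (trip id, total boardings, total alightings)
    ("t1", 120, 118),
    ("t2",  45,  31),  # suspicious: 14 riders unaccounted for
    ("t3",   0,   0),
]

for trip_id, ons, offs in trips:
    base = max(ons, offs)
    if base == 0:
        print(f"{trip_id}: no activity recorded -- verify the counter was online")
    elif abs(ons - offs) / base > TOLERANCE:
        print(f"{trip_id}: imbalance ({ons} on / {offs} off) -- hold for review")
    else:
        print(f"{trip_id}: OK ({ons} on / {offs} off)")
```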

Implementing a Service Evaluation Process

The results of this study show that outreach and communication are important elements of a service evaluation process, but how might an agency design its process? What are the best ways of incorporating input from riders, the general public, stakeholders, and governing bodies? How does an agency select performance measures and standards that are congruent with its goals and objectives? How important is a review of peer service evaluation processes, and how does an agency identify appropriate peers? Are there models of implementation that can be adapted for agencies that have never used performance measures or that seek to update their existing processes?

Extending the Service Evaluation Process to Emerging Technologies and Modes

As transit agencies explore new means of service delivery enabled by emerging technologies (such as microtransit or mobility on demand), how do they evaluate these new types of services? Can existing standards be adapted to the new modes, or are new standards more appropriate? How have other agencies incorporated emerging modes of service delivery into their service evaluation processes?

Customer-Focused Performance Metrics for Multimodal Trips

Forward-looking transit agencies are moving toward a service evaluation process based on the passenger perspective rather than the operational perspective. The new availability of data allows metrics such as schedule adherence and crowding to be evaluated as the customer actually experiences them. Multimodal trips are a promising avenue of research from the customer’s perspective. How can the ease of transferring best be evaluated? Timed transfers are used by many agencies, but wait time is not the only consideration and is possibly not the most important one. The importance of the path from one mode to another has not been explored in detail. The environment of the transfer path, including changes in elevation, and of the waiting area is obviously important, but there is no information on which factors matter most to the customer and how these can best be measured. (A minimal scheduling sketch at the end of this chapter illustrates the measurable wait-time component.)

Guides for Board Members and Stakeholders

Education is another important concern for respondents, especially given turnover among board members and elected officials. Is there an optimal way to provide the needed information on the service evaluation process to decision-makers, and even to interested members of the public, who do not have a strong transit or statistical background? The agency performance standards included in Appendix D may provide a starting point for addressing this question. What is the appropriate level of detail? Would a “frequently asked questions” section of an agency’s website help? How do agencies educate their stakeholders and constituents today?
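Finally, returning to the multimodal-transfer question: the wait-time component, at least, is directly measurable from published schedules. A minimal sketch with hypothetical timetables, using a fixed walk time to stand in for the transfer path:

```python
# Minimal transfer-wait sketch. Timetables are hypothetical; times are
# minutes after midnight. MIN_WALK stands in for the transfer path.

MIN_WALK = 3  # minutes needed to move between stops/platforms

arrivals_route_a = [482, 512, 542]         # route A arrivals at the hub
departures_route_b = [480, 500, 520, 540]  # route B departures from the hub

def transfer_wait(arrive, departures, walk=MIN_WALK):
    """Wait (minutes) until the first departure the customer can catch."""
    ready = arrive + walk
    feasible = [d for d in departures if d >= ready]
    return feasible[0] - arrive if feasible else None  # None: no connection

for a in arrivals_route_a:
    w = transfer_wait(a, departures_route_b)
    print(f"arrive {a}: wait {w} min" if w is not None else
          f"arrive {a}: no feasible connection")
```

Path quality itself (changes in elevation, the waiting environment) requires facility data that schedules do not capture, which is precisely the gap the research question identifies.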


TRB’s Transit Cooperative Research Program (TCRP) Synthesis 139: Transit Service Evaluation Standards provides an overview of the purpose, use, and application of performance measures, service evaluation standards, and data collection methods at North American transit agencies.

The report addresses the service evaluation process, from the selection of appropriate metrics, through the development of service evaluation standards and data collection and analysis, to the identification and implementation of actions to improve service.

The report also documents effective practices in the development and use of service evaluation standards and includes an analysis of the state of the practice of the service evaluation process at agencies of different sizes, geographic locations, and modes.

Appendix D contains performance evaluation standards and guidelines provided by 23 agencies.
