CHAPTER 1

Introduction

Project Background and Objectives

Section 15 of the National Mass Transportation Act of 1974 required the Urban Mass Transportation Administration (UMTA) to develop a National Urban Transportation Reporting System. The system was based on the concept that sufficient data should be collected on a systematic and permanent basis to allow various types of reviews, analyses, and evaluations of urban transportation. There had been prior efforts to specify service standards and measurement techniques, but the availability of reliable transit data for all transit agencies in the country spurred a wave of quantitative analysis of transit performance.

Today, transit agencies of all sizes use service evaluation standards to measure their performance. As analysts wade through an explosion of transit data, a quote attributed to Albert Einstein is relevant: "Not everything that counts can be counted, and not everything that can be counted counts." The challenge for this study was to identify what is worth counting. This synthesis report addresses the service evaluation process, from the selection of appropriate metrics through the development of service evaluation standards, data collection and analysis, and the identification of actions to improve service, to implementation.

Two important concepts in performance evaluation are efficiency and effectiveness. The two terms are often used interchangeably and are not clearly defined in the literature or in agency service standards. For the purposes of this synthesis report, efficiency means minimizing the cost per unit of output (e.g., cost per revenue mile), and effectiveness means maximizing the benefit per unit of output (e.g., riders or revenue per revenue hour).

This synthesis report identifies successful strategies and best-practice solutions through a review of literature documenting the history and development of transit service evaluation standards, a web-based survey of a cross-section of North American transit agencies, and case examples of six specific transit agencies. The report focuses on lessons learned through transit agencies' experiences with performance evaluation. An important element is Chapter 4, which documents case examples that provide details on the development and updates of the service evaluation process, how standards are used, priorities among standards, board and agency attitudes toward the service evaluation process, challenges, lessons learned, and keys to success. Findings from all of these efforts are combined to report on the state of practice, including lessons learned, challenges, and gaps in information. A summary of areas of future study is also included.
Transit Service Evaluation Standards

Technical Approach

The approach to this synthesis report included the following elements:

1. A literature review: a search of the Transportation Research Information Database (TRID) on several different keywords was conducted to aid the literature review;
2. A survey of transit agencies, described in the following paragraphs; and
3. Telephone interviews with six agencies selected as case examples to examine details regarding the development and use of service evaluation standards.

The survey was designed to solicit information on data sources and collection protocols, definition and updating of standards, priorities among standards, use of standards in making decisions, responsibility for data analysis, unintended consequences, agency assessments, and lessons learned. Once finalized by the panel, the survey was posted on the Survey Monkey website and pretested. The pretest revealed no issues with survey logic and flow.

The sample for this survey included 59 transit agencies and was designed to include agencies of all sizes and from all regions of the United States and Canada. An email with an attachment from a program manager in the Transit Cooperative Research Program (TCRP) that explained the importance of the survey and provided a link to the online survey site was sent to a known contact at each agency. Follow-up emails were sent as needed on a regular schedule after the original contact to encourage response.

Fifty-one completed surveys were received from the 59 transit agencies in the sample, a response rate of 86%. The effort to attract participation by small agencies was successful: 12 responding agencies (24%) operate fewer than 75 vehicles in maximum service (Figure 1). The distribution of respondents by Federal Transit Administration (FTA) region was also examined (see Figure 2).
FTA Region IX (Arizona, California, and Nevada) accounted for 29% of responding agencies, but an analysis of National Transit Database (NTD) data indicates that 18% of 2016 full-reporting agencies are located in Region IX. Geographic representation among survey respondents was reasonably balanced, as shown in Table 1. Figure 3 shows the locations of the 51 transit agencies that completed the survey. The locations of the case examples are shown by a large red dot. The case example agencies also completed the survey.

[Figure 1. Responding transit agencies by size. Small (fewer than 75 peak vehicles): 24%; Medium (75–299 peak vehicles): 37%; Large (300 or more peak vehicles): 39%. Source: Survey results.]
Figure 2. Map of FTA regions.

Table 1. Survey respondents by FTA region.

                                Agencies Responding
FTA Region                      Number    Percent
I                                    2          4
II                                   4          8
III                                  4          8
IV                                   6         12
V                                    8         16
VI                                   4          8
VII                                  0          0
VIII                                 1          2
IX                                  15         29
X                                    5         10
Non–United States (Canada)           2          4
Total                               51        100

Source: FTA; survey results.
Note: Percentages do not add to 100 because of rounding.
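The rounding note under Table 1 can be confirmed with a short arithmetic check. The following sketch (illustrative only, not part of the report) recomputes each region's percentage share of the 51 responding agencies and shows that the independently rounded shares sum to 101 rather than 100.

```python
# Illustrative check of Table 1 (not from the report): recompute each
# FTA region's share of the 51 responding agencies and round to the
# nearest whole percent, as the table does.
counts = {
    "I": 2, "II": 4, "III": 4, "IV": 6, "V": 8, "VI": 4,
    "VII": 0, "VIII": 1, "IX": 15, "X": 5,
    "Non-United States (Canada)": 2,
}

total = sum(counts.values())
percents = {region: round(100 * n / total) for region, n in counts.items()}

print(total)                   # 51 responding agencies
print(percents["IX"])          # 29 (matches Table 1)
print(sum(percents.values()))  # 101 -- rounding pushes the sum past 100
```

Because each share is rounded independently, the small upward roundings (e.g., 2/51 = 3.9% reported as 4%) accumulate, which is exactly what the table's note acknowledges.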
Organization of This Report

Following this introductory chapter, Chapter 2 summarizes the findings of the literature review. Chapter 3 presents the results of the survey. The first part of the chapter addresses data sources and data collection and analysis protocols, definition and updating of standards, priorities among performance standards, use of standards in making decisions, responsibility for data analysis, and unintended consequences. The second part discusses the agencies' assessments of their efforts to design and apply service evaluation standards and summarizes the agencies' assessments of challenges, benefits and drawbacks, and lessons learned that would be of interest to other transit agencies.

Chapter 4 reports detailed findings from each of the six case examples. The selection process for case examples had two criteria: to include (a) transit agencies of various sizes in different parts of North America and (b) agencies that reported detailed observations in the survey that would add value to the synthesis report.

Figure 3. Survey respondents and case examples. (Source: Survey results and case examples.)
Chapter 5 summarizes the findings, presents conclusions from this synthesis project, and suggests areas of future study. Findings from the surveys and, particularly, the case examples provide an appraisal of the current state of the practice. Appendix A lists the transit agencies that participated in the online survey. Appendix B is the online survey of transit agencies. Appendix C provides verbatim responses to the survey. Appendix D contains samples of agency guidelines and standards.