
A Methodology for Performance Measurement and Peer Comparison in the Public Transportation Industry (2010)

Chapter 3 - Applications and Performance Measures


Applications

The range of questions where benchmarking and peer comparison are valuable spans all aspects of a transit agency’s functions. Applications can range from the very detailed, such as a comparison of mean time between farebox failures, to broad public policy goals, such as a planning effort to develop a balanced, multi-modal regional transportation system. Peer-comparison applications are divided below into four general categories that describe the overall focus of a particular analysis, recognizing that there is room for overlap between the various categories.

1. Administration – questions related to the day-to-day administration of a transit agency, including (but not limited to) financial-performance questions asked by agency management, agency board members, and transit funding organizations.
2. Operations – questions related to a transit agency’s daily operations.
3. Planning – long-term policy and service questions of interest to transit operators, metropolitan planning organizations, and state departments of transportation.
4. Public and market focus – questions that consider the viewpoint of the broad range of customers, including riders, non-riders, local jurisdictions, and policy-makers.

Administration

Performance questions falling into the agency administration category can be raised at all levels of management and oversight, including department managers, top-level transit agency managers, transit board members, oversight and funding agencies, and legislative bodies. Historically, peer comparison has been most widely applied in the United States to the financial aspects of transit agency administration (and financial questions were the most common performance topic picked by participating agencies in this project’s methodology testing), but peer comparison can also be applied to other aspects of transit agency administration, particularly aspects relating to labor costs and labor utilization. Examples of performance questions relating to agency administration include:

• How efficient are our bus and rail operator work schedules?
• How comparatively cost-effective is our operation?
• What percentage of transit revenue comes from advertising?
• What is the typical subsidy level for an area our size?
• How does our absenteeism compare to peer agencies?
• What is the farebox recovery ratio for peer agencies’ long-distance regional commuting routes?
• How do our state’s small urban operators compare to their peers in terms of cost-effectiveness, cost-efficiency, and productivity?
• How relatively cost-efficient are the transit agencies that we fund?
• How does our employee compensation compare to other transit agencies?

Operations

These are performance questions asked by those responsible for the day-to-day operations of the transit agency to help ensure that the service provided meets the agency’s stated goals. These kinds of questions can also be asked when looking for ways to improve specific departmental operations. Transit operators typically ask these questions in support of continuous process improvement and short-term (e.g., 1-year) planning efforts. Examples of these questions include:

• How cost-effective are our vehicle and non-vehicle maintenance programs?

• How often do our buses break down on the road, compared to our peers?
• How does the average speed of our buses while in service compare to our peers?
• What is our vehicle fuel economy compared to our peers?
• What are vehicle accident rates for other agencies our size?
• How do other agencies compare to ours in terms of demand-response ridership trends?

Planning

Planning performance questions are typically longer-term in nature and have policy and funding implications. They can be asked by transit agencies as part of their own internal planning or by external agencies [e.g., cities, Metropolitan Planning Organizations (MPOs), states] in support of long-range or modal plans. Planning questions can be hypothetical in nature and involve looking at peers that have characteristics that a transit agency expects or wants to have in the future. Examples of these kinds of questions include:

• How well do peer agencies with dedicated local funding sources perform in terms of ridership and financial performance, compared to ours?
• What mix of funding sources is used by transit agencies that have just reached the 200,000 population threshold?
• How much does it cost per hour for relatively new light-rail systems to provide service?
• What mix of transit services do peer regions provide?

Public and Market Focus

Chapter 2 identified the importance of integrating the customer perspective into benchmarking efforts. Public transit has multiple customer types, both those who use the service directly and those who benefit indirectly (for example, through improved air quality, reduced congestion, or land use and infrastructure improvements designed to support transit). Public and market-focus applications look at the viewpoints of a broad range of customers. Examples of performance questions in this area include:

• How does our service quality compare to that of our peers?
• How do we compare to our peers in terms of customer service and satisfaction?
• How do we compare to our peers in terms of how much transit service is provided?
• How does our level of investment in transit compare to peer regions?
• How do our fares compare to fares of other agencies?

Performance Measures

Performance measures are used in peer-comparison and benchmarking processes to (a) provide quantitative information about the selected performance topic, (b) provide context about the peer agencies, and (c) screen out potential peers based on specific transit agency characteristics. Once a performance topic has been picked, it is necessary to identify measures that can be used to compare a transit agency to its peers in a standardized, credible way. Some performance measures used in a peer comparison quantify outcomes, while other, descriptive, measures provide context about peer agencies or are used to screen out transit agencies with particular characteristics from consideration as potential peers.

The performance measures selected for any given peer comparison will vary, depending on the performance question being asked. For example, a question about the cost-effectiveness of a transit agency’s operations would focus on financial outcome measures, while a question about the effectiveness of an agency’s maintenance department could use measures related to maintenance outcomes (e.g., maintenance expense per vehicle mile), agency investments (e.g., average fleet age), and performance outcomes (e.g., revenue miles between failures).
In addition, some descriptive measures would often be incorporated into the review to provide context about the individual peer agencies.

Because each performance question is unique, it is not possible to provide a standard set of measures. Instead, the remainder of this section provides lists of standardized measures that are available from (or derivable from) the FTIS software tool, categorized by type of measure (descriptive versus performance ratio) and subject area (e.g., maintenance performance, agency characteristics). Scan through these lists and read the accompanying text on the definitions and limitations of the measures to identify readily available measures that relate to the performance question. The case studies in Chapter 5 can also be used to identify performance-measure examples for selected performance questions. TCRP Report 88 (1) provides definitions and further information about these and other measures.

The lists in this section also provide a selection of standardized measures available outside FTIS, along with other measures commonly collected by transit agencies. These measures can be used for peer comparisons on topics where the NTD lacks measures. However, refer to Chapter 4 for cautions about the extra time and effort required when incorporating non-NTD measures into a peer comparison. Note also that the NTD only provides detail at the agency and mode levels, so it will be necessary to obtain data directly from peer agencies if a finer level of detail is desired. Chapter 4 also discusses things to consider when requesting data directly from peer agencies.

Outcome Measures

Outcome measures describe the performance achieved by the transit agency, given a set of inputs. Many of these measures are performance ratios that compare an outcome (e.g., ridership) to an input (e.g., revenue hours). These ratios can often be derived from two or more NTD variables, and FTIS provides a set of “Florida Standard Variables” that includes common performance ratios as direct outputs. Outcome measures are organized into the following nine categories:

• Cost-efficiency,
• Cost-effectiveness,
• Productivity,
• Service utilization,
• Resource utilization,
• Labor administration,
• Maintenance administration,
• Perceived service quality, and
• Safety and security.

Cost-Efficiency

Cost-efficiency measures (Table 1) assess an agency’s ability to provide service outputs within the constraints of service inputs. According to TCRP Report 88 (1), “These types of measures are very common and are utilized by virtually all transit systems when evaluating system-wide performance. However, these measures should be viewed with caution, because they do not measure a transit system’s ability to meet the needs of its passengers. These measures only evaluate how efficiently a system can put service on the street, irrespective of where the service is going or how much it is utilized.” Four cost-efficiency measures are directly available from FTIS. Operating cost per revenue hour and operating cost per revenue mile measure how much it costs to provide a unit of service. Vehicle miles (hours) per revenue mile (hour) assesses how much vehicle usage occurs in revenue service (as opposed to traveling to or from a garage or other non-revenue service). Operating cost per peak vehicle in service looks at how much it costs annually to operate each vehicle used in peak service.

Cost-Effectiveness

Cost-effectiveness measures (Table 2) compare the cost of providing service to the outcomes resulting from the provided service. As with the cost-efficiency measures, many of these measures are commonly used by the transit industry. Farebox recovery ratio measures how much of a transit agency’s operating costs are covered by farebox revenue. As noted in Chapter 4, some agencies may have significant directly generated revenue that does not come from the farebox (e.g., service contracts with universities or advertising); therefore, the operating ratio (directly generated non-tax revenue divided by operating costs) may be a better measure in those situations. Operating ratio is not directly provided by FTIS, but it can be derived from other measures available through FTIS. (When doing a mode-specific analysis, a portion of the agency’s non-farebox revenue will need to be allocated to each mode, for example in proportion to ridership.) Operating cost per boarding looks at how much it costs to serve one unlinked trip, while subsidy per boarding (derivable from FTIS) measures the difference between the average cost to provide a trip and the average fare paid. Operating cost per passenger-mile relates costs to passenger loads, while operating cost per service area capita relates costs to the number of people within the agency’s service area. (Because service area populations are reported inconsistently by transit agencies, this variable should be used with caution.)

Productivity

Productivity measures (Table 3) look at how many passengers are served per unit of service: hours, miles, vehicles, or employee full-time equivalents.
Table 1. Cost-efficiency measures.
Directly Available from FTIS:
• Operating cost per revenue hour
• Operating cost per revenue mile
• Vehicle miles (hours) per revenue mile (hour)
• Operating cost per peak vehicle in service

Table 2. Cost-effectiveness measures.
Directly Available from FTIS:
• Farebox recovery ratio
• Operating cost per boarding
• Operating cost per passenger-mile
• Operating cost per service area capita
Derivable from FTIS:
• Operating ratio
• Subsidy per boarding

Table 3. Productivity measures.
Directly Available from FTIS:
• Boardings per revenue hour
• Boardings per revenue mile
Derivable from FTIS:
• Boardings per vehicle operated in maximum service
• Boardings per employee full-time equivalent
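To make the Table 2 derivations above concrete, the following is a minimal sketch of how the derivable measures could be computed. The input names are hypothetical placeholders, not actual NTD or FTIS field names, and the allocation helper simply follows the proportional-to-ridership rule suggested in the text.

```python
# Illustrative sketch only: variable names are hypothetical, not actual NTD/FTIS fields.

def cost_effectiveness_measures(operating_cost, farebox_revenue,
                                other_directly_generated_revenue, boardings):
    """Compute the Table 2 measures that can be derived from NTD-style totals."""
    farebox_recovery_ratio = farebox_revenue / operating_cost
    # Operating ratio: all directly generated non-tax revenue divided by operating cost.
    operating_ratio = (farebox_revenue + other_directly_generated_revenue) / operating_cost
    operating_cost_per_boarding = operating_cost / boardings
    average_fare = farebox_revenue / boardings
    # Subsidy per boarding: average cost to provide a trip minus average fare paid.
    subsidy_per_boarding = operating_cost_per_boarding - average_fare
    return {
        "farebox_recovery_ratio": farebox_recovery_ratio,
        "operating_ratio": operating_ratio,
        "operating_cost_per_boarding": operating_cost_per_boarding,
        "subsidy_per_boarding": subsidy_per_boarding,
    }

def allocate_nonfare_revenue_by_mode(agency_nonfare_revenue, boardings_by_mode):
    """For a mode-specific analysis, split agency-wide non-farebox revenue
    across modes in proportion to ridership (one possible allocation rule)."""
    total = sum(boardings_by_mode.values())
    return {mode: agency_nonfare_revenue * b / total
            for mode, b in boardings_by_mode.items()}

# Example with made-up numbers:
print(cost_effectiveness_measures(operating_cost=50_000_000,
                                  farebox_revenue=12_500_000,
                                  other_directly_generated_revenue=2_000_000,
                                  boardings=10_000_000))
```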

Service Utilization

Service utilization measures (Table 4) look at how passengers use the service that is provided. Annual boardings (unlinked trips) is one of the most basic performance indicators for a transit agency; however, it overstates the number of person-trips made by transit each day, as each transit vehicle boarding is counted as a separate trip (i.e., a one-way trip involving a transfer between vehicles is counted as two unlinked trips). Annual linked trips measures the number of actual person-trips made using transit, which is useful for comparing transit usage to the usage of other modes. Annual linked trips can be calculated as annual unlinked trips minus annual transfers (which may be available from agency farebox data, depending on the type of fare media used, or may have been estimated from rider surveys). Annual passenger miles reflects both how many people use transit and the length of their trips; average trip length can be calculated as annual passenger miles divided by annual unlinked trips. Average boardings per service area capita is a useful measure for comparing transit usage between regions, but it can be influenced by the service pattern used by an agency. (Agencies with timed-transfer hubs, grid networks, or multiple modes, for example, may have more boardings than agencies using radial networks that have an equivalent number of people making transit trips.) Using linked trips, if possible, addresses this issue. Since service area population is reported inconsistently to the NTD, urban area population can be used as a substitute, but only when the agencies being compared have similar service patterns (e.g., when they are the only agencies providing service to their regions).

Resource Utilization

Resource utilization measures (Table 5) investigate how well the agency’s resources (vehicles, employees, consumables, and so on) are used. Most of these measures are self-explanatory. Peak-to-base ratio compares the number of vehicles operated during the highest peak period to the number of vehicles operated midday. It can be derived from FTIS for larger agencies (those operating 150 or more vehicles, not including demand-response and vanpool vehicles).

Labor Administration

Labor administration measures (Table 6) include an array of measures that are applicable both to day-to-day transit agency management and to labor negotiations (e.g., comparisons of wages and benefits). A number of these measures can be derived from other FTIS measures. The relative proportions of administrative, vehicle operator, vehicle maintenance, and non-vehicle maintenance staff costs to total operating costs can be compared. Pay-to-platform hours compares vehicle operators’ total regular paid working time (including reporting and turn-in time, minimum work guarantees, and other time allowances) to platform time worked (i.e., time spent operating the vehicle). Percent of labor hours that are overtime looks at the contribution of overtime to overall costs.
Some overtime may be beneficial to a transit agency’s bottom line, as it can cost less than the total wages and benefits required to hire someone else to do the work, but excessive overtime, and overtime required to cover other employees’ absences, is not cost-efficient (1). Percent of operating costs that are wages (and benefits) measures how much employee compensation contributes to total operating costs. Other employee-related data that are not available from FTIS, but may be available from peer agencies’ human resources departments, are employee absenteeism rate (which affects the costs required to pay other employees to do the work scheduled for the absent employees) and staff turnover rate (which reflects the costs required to train new staff and the inefficiencies that arise when other employees cover for staff who have left).

Table 4. Service utilization measures.
Directly Available from FTIS:
• Annual boardings (unlinked trips)
• Annual passenger miles
• Average trip length
• Annual boardings per service area capita
Not Available from FTIS:
• Annual linked trips
• Annual linked trips per service area capita

Table 5. Resource utilization measures.
Directly Available from FTIS:
• Vehicle hours per vehicle operated in peak service
• Vehicle miles per vehicle operated in peak service
• Revenue hours per employee full-time equivalent
• Vehicle miles per gallon of fuel consumed
Derivable from FTIS:
• Revenue hours per vehicle operated in peak service
• Revenue miles per vehicle operated in peak service
• Peak-to-base ratio
• Vehicle miles per kilowatt-hour of power consumed

Table 6. Labor administration measures.
Derivable from FTIS:
• Cost of staff type/operating costs
• Pay-to-platform hours
• Percent of labor hours that are overtime
• Percent of operating costs that are wages (and benefits)
Not Available from FTIS:
• Employee absenteeism rate
• Staff turnover rate
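The service-utilization and resource-utilization derivations described above (linked trips, average trip length, boardings per capita, and the peak-to-base ratio) are simple arithmetic; a minimal sketch follows. As before, the input names are hypothetical rather than actual NTD or FTIS fields.

```python
# Illustrative sketch only: inputs are hypothetical, not actual NTD/FTIS field names.

def service_utilization(unlinked_trips, transfers, passenger_miles, service_area_population):
    """Derivations described in the Service Utilization discussion."""
    # Linked trips: person-trips, removing the double-counting caused by transfers.
    linked_trips = unlinked_trips - transfers
    average_trip_length = passenger_miles / unlinked_trips            # miles per boarding
    boardings_per_capita = unlinked_trips / service_area_population
    linked_trips_per_capita = linked_trips / service_area_population
    return linked_trips, average_trip_length, boardings_per_capita, linked_trips_per_capita

def peak_to_base_ratio(vehicles_in_peak, vehicles_in_midday):
    """Vehicles operated in the highest peak period vs. the midday (base) period."""
    return vehicles_in_peak / vehicles_in_midday

# Example with made-up numbers:
linked, trip_len, per_cap, linked_per_cap = service_utilization(
    unlinked_trips=10_000_000, transfers=2_000_000,
    passenger_miles=45_000_000, service_area_population=750_000)
print(linked, round(trip_len, 1), round(per_cap, 1), round(linked_per_cap, 1))
print(peak_to_base_ratio(vehicles_in_peak=120, vehicles_in_midday=80))
```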

Maintenance Administration

Maintenance administration measures (Table 7) focus on the performance of the transit agency’s vehicle maintenance function and also provide insights into the overall condition of the vehicle fleet. Vehicle (car) miles between failures is a measure of how often vehicles break down while in service, while number of vehicle system failures looks at the total number of failures. It should be kept in mind that these measures do not tell the whole story about maintenance quality, as fleet age and overall agency investment in maintenance activities (e.g., maintenance cost as a percentage of operating costs) also play a role. Labor cost per vehicle hour is an indicator of how much maintenance work is required relative to the amount of time that vehicles are operated. Cost data are available for several maintenance categories (labor, parts, consumables), which can be compared to the overall maintenance budget. Finally, the average annual maintenance cost per vehicle operated in maximum service can be derived from FTIS.

Perceived Service Quality

Perceived service quality measures (Table 8) describe the transit agency’s service as perceived by customers. (Delivered service quality, taking the agency’s point of view, is discussed later in the descriptive measures section.) Except for average system speed (revenue miles per revenue hour), which is provided directly by FTIS, the NTD does not provide any measures of perceived service quality. However, a number of useful measures may be obtainable from peer agencies. On-time performance is a measure of reliability; however, it is not defined consistently by transit agencies [i.e., what constitutes “on-time” and the location(s) where it is measured vary]. If archived automatic vehicle location data are available, excess wait time (the number of extra minutes passengers had to wait past the scheduled departure time) is an alternative measure of reliability that avoids the “on-time” definition issue. Passenger load data may be available from archived automatic passenger counter data or (with considerable manual effort) from the data-collection sheets used for NTD passenger-mile reporting. Many transit agencies conduct customer satisfaction surveys, and questions relating to overall satisfaction are often asked in a consistent manner (although the scale used to measure satisfaction may vary from survey to survey). Many transit agencies also track complaints and compliments, but because the process for submitting comments may be easier at some agencies than at others, it may be necessary to analyze the total volume of comments in conjunction with (for example) the number of complaints or compliments per 1,000 boardings to get an accurate picture of relative satisfaction or dissatisfaction with the service.
Call-center response time is a measure of how conveniently passengers can request information or book a demand-response trip by telephone. Finally, missed trips tracks how many scheduled demand-response trips were missed due to a problem on the part of the transit agency or its service contractor.

Table 7. Maintenance administration measures.
Directly Available from FTIS:
• Vehicle (car) miles between failures
• Number of vehicle system failures
• Maintenance cost as a percentage of operating costs
• Vehicle maintenance cost/vehicle (car) mile
• Non-vehicle maintenance cost/track mile
Derivable from FTIS:
• Labor cost per vehicle hour
• Maintenance category cost/total maintenance cost
• Average annual maintenance cost per vehicle operated in maximum service
• Maintenance full-time equivalents (FTEs)/vehicle operated in maximum service

Table 8. Perceived service-quality measures.
Directly Available from FTIS:
• Average system speed
Not Available from FTIS:
• On-time performance
• Excess wait time
• Passenger loading
• Overall satisfaction
• Number of complaints per 1,000 boardings
• Number of compliments per 1,000 boardings
• Call-center response time
• Missed trips
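Where archived automatic vehicle location (AVL) data are available, excess wait time can be computed along the lines described above: the extra minutes passengers wait past the scheduled departure. The sketch below is one simple interpretation of that definition, with early departures counted as zero excess wait; agencies using this measure in practice may apply more refined rules (for example, headway-based calculations for frequent service). The comment-rate helper simply normalizes complaint counts by ridership.

```python
# Illustrative sketch only; matching actual to scheduled departures from raw AVL
# records is assumed to have been done already.

def average_excess_wait_minutes(departures):
    """departures: list of (scheduled_minutes, actual_minutes) pairs for one
    stop/route/period. Excess wait is time past the scheduled departure,
    floored at zero so early departures do not offset late ones."""
    excess = [max(actual - scheduled, 0.0) for scheduled, actual in departures]
    return sum(excess) / len(excess) if excess else 0.0

def complaints_per_1000_boardings(complaints, boardings):
    """Normalization used when comparing customer comment volumes across peers."""
    return 1000.0 * complaints / boardings

# Example with made-up observations (times in minutes past the hour):
obs = [(0, 2.5), (10, 9.0), (20, 26.0), (30, 31.0)]
print(round(average_excess_wait_minutes(obs), 2))            # -> 2.38
print(round(complaints_per_1000_boardings(450, 10_000_000), 3))
```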

Safety and Security

Safety and security measures (Table 9) look at performance related to accidents, crimes, and quality-of-life incidents that can affect passengers’ perceptions of the transit agency. Except for casualty and liability cost per vehicle mile, these measures are not available through FTIS because safety and security data are not publicly released by the FTA. However, as discussed in Chapter 4, when peer agencies are willing to share their NTD viewer password with the target agency, safety and security data reported to the NTD can be readily incorporated into a peer comparison. It should be kept in mind that there are consistency issues in how crime data are reported by transit agencies, depending on, for example, whether or not a transit agency has its own police force, how frequently arrests are made for lesser incidents, and how incidents are coded in police reports (46).

Descriptive Measures

Descriptive measures provide context about a particular transit agency. While they are not direct indicators of transit agency performance (i.e., outcomes), they are nevertheless valuable components of a performance-measurement process. Descriptive measures are particularly useful for diagnosing why outcome measure results vary between transit agencies. They can also be used as screening tools to make sure the selected peer agencies match the target agency in specific characteristics relevant to the performance question being asked. Finally, descriptive measures can provide additional information to stakeholders in the benchmarking process that confirms that the selected peer agencies are reasonably similar to the target agency.

Many descriptive measures are available from FTIS. These measures usually come directly from NTD reporting data, but they also include selected measures available from (or derivable from) other standardized national databases. These measures are organized into five categories:

• Urban area characteristics,
• Transit service characteristics,
• Transit agency characteristics,
• Delivered service quality, and
• Transit investment.

Urban Area Characteristics

The urban area characteristics measures available from FTIS (Table 10) describe the region’s population characteristics, geographic size, land use patterns, demographic characteristics, congestion level, and presence of a state capital. These measures were derived from Census Bureau or Urban Mobility Report (45) data or were developed by TCRP Project G-11. See Appendix B for definitions of these measures; urban areas themselves are defined by the Census Bureau. Other standardized measures that are available outside of FTIS relate to climate (available from the National Oceanic and Atmospheric Administration) and a cost-of-living index (for example, from the Council for Economic and Community Research).
Table 9. Safety and security measures.
Derivable from FTIS:
• Casualty and liability cost per vehicle mile
Not Available from FTIS:
• Collisions per 1,000 miles
• Collisions per 1,000 boardings
• Incidents per 1,000 boardings

Table 10. Urban area characteristics measures.
Directly Available from FTIS:
• Urban area population
• Urban area size
• Urban area population density
• Urban area population growth rate
• Census block density
• Population dispersion
• Employment dispersion
• Percent residents in transit-supportive areas
• Percent college students
• Percent low-income residents
• Annual delay per traveler
• Freeway lane-miles per capita
• State capital (yes/no)
Not Available from FTIS:
• Annual rainfall
• Mean January high temperature
• Mean July high temperature
• Cost-of-living index

Transit Service Characteristics

The service characteristics measures available from FTIS (Table 11) describe the size and population of a transit agency’s service area; the type of service provided by a transit agency (e.g., service to the entire region vs. service to a portion of the region’s suburbs combined with commuter trips into the central city); the hours and miles of service provided; the amount of transit infrastructure provided; the amount of service that is contracted; the share of total service that is demand-response; and the average fare. Except for service type (developed by TCRP Project G-11), these measures are taken directly from the NTD or are derived from other NTD measures (for example, average fare is defined as annual fare revenue divided by annual unlinked trips). Note that service area population and size are not currently reported consistently by transit agencies. Analysts with access to transit route network data in a geographic information system (GIS) compatible format can combine these data with census data to estimate the percent of a region’s population served by fixed-route transit.

Transit Agency Characteristics

The transit agency characteristics measures available from FTIS (Table 12) describe the organization type (e.g., public agency that directly operates all service); the institutional structure (e.g., independent agency with an appointed board of directors); the demand-response provider type (e.g., social service agency); the number of employee full-time equivalents (FTEs) in vehicle operations, vehicle maintenance, non-vehicle maintenance, and general administration; and the amount of revenue from various sources. All of these measures are taken directly from the NTD. A transit agency’s service philosophy (i.e., service coverage emphasis vs. efficiency emphasis) is a potential screening measure. It can often be identified from an Internet search, by looking at an agency’s goals and objectives, or by looking at the system’s route map.

Delivered Service Quality

Delivered service quality measures (Table 13) describe the transit agency’s service as delivered by the agency. (The service quality perceived by passengers was discussed earlier, in the outcome measures section; see Table 8.) Except for service span, which applies to the agency as a whole, the NTD does not provide any direct measures of delivered service quality. A few measures are derivable from NTD data and are provided directly by FTIS, including average system peak headway (derived from directional route miles, average system speed, and the number of vehicles operated in maximum service), revenue miles per urban area square mile, and revenue hours per capita (measures of coverage). Finally, percent of fleet with ramps or low floors is a measure of ADA accessibility that can be derived from the NTD vehicle fleet data available through FTIS.

Table 11. Service characteristics measures.
Directly Available from FTIS:
• Service area population
• Service area size
• Service area type
• Annual vehicle miles operated
• Annual revenue hours operated
• Miles of track
• Number of stations
• Percent of service operated as fixed-route
• Percent of service that is demand-response
• Average fare
Not Available from FTIS:
• Percent of population served by fixed-route transit

Table 12. Agency characteristics measures.
Directly Available from FTIS:
• Organizational type
• Institutional structure
• Demand-response provider type
• Number of employee FTEs by category
• Revenue by source
Not Available from FTIS:
• Service philosophy (coverage vs. efficiency)

Table 13. Delivered service quality measures.
Directly Available from FTIS:
• Service span
• Average system peak headway
• Revenue miles per urban area sq. mi
• Revenue miles (hours) per capita
Derivable from FTIS:
• Percent of fleet with ramps/low-floor
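The text notes that average system peak headway can be derived from directional route miles, average system speed, and vehicles operated in maximum service. One plausible form of that derivation is sketched below; it is an approximation consistent with those three inputs, not necessarily the exact FTIS formula. The fleet-accessibility helper uses a hypothetical vehicle-inventory flag name.

```python
# Illustrative sketch; the exact FTIS derivation may differ in detail.

def average_system_peak_headway_minutes(directional_route_miles,
                                        average_speed_mph,
                                        vehicles_in_max_service):
    """Approximate average headway: the time needed to traverse all directional
    route miles once at the average system speed, spread across the peak fleet."""
    one_way_cycle_hours = directional_route_miles / average_speed_mph
    return 60.0 * one_way_cycle_hours / vehicles_in_max_service

def percent_fleet_accessible(fleet):
    """fleet: list of dicts built from vehicle inventory data, each with a
    boolean 'low_floor_or_ramp' flag (hypothetical field name)."""
    accessible = sum(1 for v in fleet if v["low_floor_or_ramp"])
    return 100.0 * accessible / len(fleet)

# Example with made-up numbers:
print(round(average_system_peak_headway_minutes(400, 13.5, 120), 1))   # ~14.8 min
print(percent_fleet_accessible([{"low_floor_or_ramp": True},
                                {"low_floor_or_ramp": False},
                                {"low_floor_or_ramp": True},
                                {"low_floor_or_ramp": True}]))          # 75.0
```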

Transit Investment

Transit investment measures (Table 14) look at local, state, and federal investments in transit service and infrastructure and at the agency’s investment in transit vehicles. These measures can also compare the total transit investment to the number of people within an agency’s service area or region. (As discussed previously, per-capita measures based on service area population should be used with caution, as service area population is reported inconsistently to the NTD.) Average fleet age is based on the active vehicles in the fleet. Spare ratio is the difference between the number of vehicles available and the number of vehicles operated in maximum service, divided by the number of vehicles operated in maximum service. Low spare ratios may indicate potential problems in scheduling preventive maintenance and a lack of vehicle capacity to respond to increased demand for service, while high spare ratios may indicate an inefficient use of capital and maintenance funds. Local, state, and federal operating and capital revenue amounts are available through FTIS, both as aggregate amounts and broken down by source.

Table 14. Transit investment measures.
Directly Available from FTIS:
• Average fleet age
• Spare ratio
• Local revenue
• State revenue
• Federal revenue
Derivable from FTIS:
• Operating funding per capita
• Operating subsidy per capita
• Capital funding per capita
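A final sketch, again assuming hypothetical input names, shows the spare ratio as defined above and the per-capita investment measures listed as derivable in Table 14 (per-capita figures based on service area population carry the reporting caveat noted in the text).

```python
# Illustrative sketch only; input names are hypothetical, not NTD/FTIS field names.

def spare_ratio(vehicles_available, vehicles_operated_max_service):
    """(Available - operated in maximum service) / operated in maximum service."""
    return ((vehicles_available - vehicles_operated_max_service)
            / vehicles_operated_max_service)

def per_capita_investment(operating_funding, operating_cost, fare_revenue,
                          capital_funding, service_area_population):
    """Table 14 derivable measures; subsidy follows the earlier definition
    (operating cost net of fare revenue)."""
    return {
        "operating_funding_per_capita": operating_funding / service_area_population,
        "operating_subsidy_per_capita": (operating_cost - fare_revenue)
                                        / service_area_population,
        "capital_funding_per_capita": capital_funding / service_area_population,
    }

# Example with made-up numbers: 138 vehicles available, 120 in maximum service.
print(round(spare_ratio(138, 120), 2))   # 0.15, i.e., a 15% spare ratio
print(per_capita_investment(operating_funding=50_000_000, operating_cost=50_000_000,
                            fare_revenue=12_500_000, capital_funding=8_000_000,
                            service_area_population=750_000))
```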
