Suggested Citation:"Chapter 5 - Case Studies." National Academies of Sciences, Engineering, and Medicine. 2010. A Methodology for Performance Measurement and Peer Comparison in the Public Transportation Industry. Washington, DC: The National Academies Press. doi: 10.17226/14402.


CHAPTER 5
Case Studies

Overview

This chapter presents six real-world applications of this report’s peer-comparison and performance-measurement methodology. These case studies have been selected as examples of the variety of applications, transit agency sizes, and modes that the methodology can be applied to, but are in no way comprehensive. Agencies considering performing peer comparisons similar to the ones shown here should not feel constrained by the case studies’ choices of performance measures and screening criteria. Every agency’s goals, objectives, and reasons for performing a peer comparison will be different, resulting in different choices. Each case study includes a description of the context of the study, which helps in understanding the choices that were made.

These case studies are based on studies that were performed during the course of the research to test different drafts of the peer-grouping methodology. As a result, applying the final peer-grouping methodology described in this report and implemented in FTIS may not result in exactly the same peer group members or likeness scores presented in this chapter. The focus here is on the process of conducting a peer comparison.

The following case studies are included in this chapter:

• Altoona, PA: An application of state performance indicators to a small urban bus operator and an example of exploring the causes of performance results.
• Knoxville, TN: An example of applying secondary screening criteria to help answer a “what-if” question at a medium-sized bus operator.
• Salt Lake City, UT: A comparison of bus and light rail operator schedule efficiency at a large multimodal transit agency.
• Denver, CO: A financial performance comparison for a large multimodal transit agency, illustrating the normalization of cost data.
• San Jose, CA: A maintenance performance comparison for a light rail operator.
• South Florida: A comparison of transit investments and outcomes for a commuter rail operator receiving significant funding from a state department of transportation.

Altoona, Pennsylvania

Context

Altoona Metro Transit serves the Altoona, Pennsylvania, urban area, which had a population of just over 80,000 in 2007. The agency operates fixed-route bus service and contracts its demand-response service. In 2007 it operated about 581,000 vehicle miles and had an operating budget of $3.7 million.

This case study was developed on behalf of the Pennsylvania DOT. PennDOT is required by its state legislature (Act 44 of 2007) to report four performance indicators annually for all urban and rural transit operators in Pennsylvania: cost per revenue hour, fare revenue per revenue hour, boardings per revenue hour, and cost per boarding. In addition, PennDOT includes performance factors in its operating grant funding formula. Similar case studies were developed for the other nine small-urban transit operators in Pennsylvania that report to the NTD, giving PennDOT a picture of how Pennsylvania small-urban operators compare to their peers in the areas of interest to the state legislature.

Performance Question

How do Pennsylvania’s small-urban transit systems compare to their peers in the areas focused on by state legislation?

Performance Measures

In this case, the set of performance measures had already been decided by the state legislature, and the measures are listed above. The

performance question is basic and the agencies involved in the full case study were spread across the state, so no secondary screening measures were necessary.

Peer Grouping

FTIS was used to develop a set of peers for Altoona, using this report’s methodology. Table 16 shows which peers were identified through this process.

All of the likeness scores are very good (0.50 or less), so no further investigation of the peers was performed. As identified above, no secondary screening was needed.

Performance Results

FTIS was used to retrieve the desired performance data from the NTD. All of the desired performance measures are ratios of NTD measures, and three of the four are provided directly by FTIS as part of its set of Florida Standard Variables (these are labeled as operating expense per revenue hour, operating expense per passenger trip, and passenger trips per revenue hour in FTIS). The fourth desired measure, fare revenue per revenue hour, can be calculated from three of the Florida Standard Variables in a spreadsheet as follows: multiply average fare by passenger trips to get total farebox revenue, and then divide the result by revenue hours. The advantage of using the Florida Standard Variables is that FTIS provides agency-wide totals (all modes combined) and service totals (directly operated and purchased transportation combined) for the Florida Standard Variables. If the raw NTD measures were retrieved from FTIS, the analyst would need to manually sum the individual mode and service results to get the same agency-wide total.

A spreadsheet’s pivot table function was used to organize the data for each measure by year and agency. A 2007 peer-group median value was also determined for each measure within the spreadsheet. Finally, the spreadsheet’s charting functions were used to develop comparative graphs for each measure of interest, as shown in Figure 9.
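The same derivation and pivoting can be done outside a spreadsheet. The sketch below is a minimal illustration, assuming a pandas DataFrame with one row per agency-year of Florida Standard Variables; the column names and numbers are illustrative assumptions, not FTIS’s actual export headers or Altoona’s data.

```python
import pandas as pd

# Illustrative agency-year data; column names mimic Florida Standard
# Variables but are assumptions, not FTIS's actual export headers.
df = pd.DataFrame({
    "agency":          ["Altoona", "Altoona", "Sheboygan", "Sheboygan"],
    "year":            [2006, 2007, 2006, 2007],
    "average_fare":    [0.95, 1.00, 0.80, 0.85],
    "passenger_trips": [780_000, 800_000, 900_000, 910_000],
    "revenue_hours":   [52_000, 53_000, 60_000, 61_000],
})

# Fare revenue per revenue hour =
#   (average fare x passenger trips) / revenue hours
df["fare_rev_per_rev_hour"] = (
    df["average_fare"] * df["passenger_trips"] / df["revenue_hours"]
)

# Organize the measure by year (rows) and agency (columns),
# like the spreadsheet pivot table described above
table = df.pivot(index="year", columns="agency",
                 values="fare_rev_per_rev_hour")

# 2007 peer-group median for the measure
median_2007 = table.loc[2007].median()
```

With real data, one such pivot per measure reproduces the comparative tables behind Figure 9.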
Interpreting Results

Altoona has the highest operating expense per revenue hour in its peer group, more than $25 per hour above the peer group median in 2007 [Figure 9(a)]. Altoona’s trend of a sharp increase in this measure over the 5-year period is consistent with its peers. At the same time, Altoona generates the second-highest fare revenue per revenue hour in its peer group [Figure 9(b)]. Altoona generated nearly $5 per revenue hour more than the peer group median. Altoona’s upward trend in this measure is consistent with its peers. A peer of note in this category is Sioux City, which more than doubled its fare revenue per revenue hour between 2003 and 2007 while maintaining ridership levels over the longer term.

Looking at the other two measures, Altoona is slightly above the group median for boardings per revenue hour [Figure 9(c)] and at the group median for cost per boarding [Figure 9(d)]. Altoona’s small upward trend for boardings per revenue hour is better than most of its peers, which generally held steady or dropped from 2003 to 2007.

Asking Questions

Altoona’s relatively high hourly operating cost stands out as an area to investigate more closely to see if any clues can be found that would indicate the source(s) of the high costs, which could then be the focus of efforts to lower those costs. FTIS’s data-exploration functions, such as its cross-table feature, can be used to quickly go through a list of possible causes.

As a first step, demand-response costs can be compared to motorbus costs to try to narrow the cause down by mode. Altoona has the second-lowest demand-response cost per boarding and is at the group median for cost per revenue hour, so demand-response can be eliminated as a significant contributor. Next, Florida Standard Variables relating to costs can be investigated for the motorbus mode specifically.
Table 16. Altoona peer group candidates.

Agency                                          City          State  Likeness Score
Sheboygan Transit System                        Sheboygan     WI     0.26
Sioux City Transit System                       Sioux City    IA     0.27
Ohio Valley Regional Transportation Authority   Wheeling      WV     0.31
Wausau Area Transit System                      Wausau        WI     0.35
Battle Creek Transit                            Battle Creek  MI     0.38
Belle Urban System - Racine                     Racine        WI     0.39
City of Anderson Transportation System          Anderson      IN     0.42
Springfield City Area Transit                   Springfield   OH     0.43

Altoona’s average bus fleet age (by far the highest at 16 years), vehicle miles per gallon (lowest), vehicle system failures (second highest), and maintenance cost per revenue mile (second highest) all suggest

that the cost of maintaining an old fleet is contributing to the high operations costs. From a state DOT perspective, channeling grant funding to Altoona for vehicle replacement could pay off with ongoing maintenance cost savings.

Data available on NTD form F-30, relating to agency expenses, can be used to dig deeper into possible causes for the higher costs, particularly when the data are normalized by revenue hours. Here, fleet maintenance costs also stand out in terms of maintenance wage cost per revenue hour (highest), fuel costs per revenue hour (highest), and other materials/supplies costs per revenue hour (second-highest). At the same time, other cost factors are uncovered: fringe benefit costs per revenue hour are $3.35 higher than the peer group median, non-vehicle operations staff wage costs per revenue hour are $4.30 higher, and administrative staff wage costs per revenue hour are $0.80 higher. These data do not indicate by themselves that these costs are “too high,” as no context is available from the data to make that determination, but merely that the costs are higher and that it could be worthwhile for Altoona to investigate them further.

Knoxville, Tennessee

Context

The Knoxville urban area had approximately 452,000 residents in 2007. The urban area is served by Knoxville Area Transit, which operates both motorbus and demand-response service, including service contracted by the University of Tennessee (which is operated fare-free). The agency operated 3.2 million vehicle miles in 2007 and had a budget of $14.3 million. The largest source of operations funding for the agency is the city’s general fund.
Figure 9. Performance results for Altoona: (a) operating expense per revenue hour; (b) fare revenue per revenue hour; (c) boardings per revenue hour; (d) cost per boarding. (Comparative bar charts by agency, 2003–2007, with 2007 peer group medians.)

Performance Question

How does Knoxville’s performance compare to similarly sized transit agencies that have a dedicated local funding source, both in terms of the amount of service that can be delivered and the cost-effectiveness of that service?

Performance Measures

There are three types of measures that need to be considered:

• Measures that address the service-delivery question,
• Measures that address the cost-effectiveness question, and
• Measures that screen for the presence of dedicated local funding.

To address the service-delivery question, the tables in Chapter 3 relating to transit investment and delivered service quality are consulted, and the following measures are selected: operating expense per capita, operating subsidy per capita, and revenue hours per capita.

To address the cost-effectiveness question, the tables in Chapter 3 relating to cost-effectiveness, cost-efficiency, and productivity are consulted, and the following measures are selected: cost per revenue hour, boardings per revenue hour, cost per boarding, and boardings per capita. The farebox recovery ratio would also be a common measure to include in this kind of analysis, but because Knoxville’s university-subsidized service is fare-free, this measure would not be particularly informative in this case. Instead, a measure that looks at the percentage of operating costs that are subsidized is used, as this accounts for all of an agency’s directly generated non-tax revenue.

Finally, information from NTD form F-10 will be used to identify potential peers that do not have a dedicated local funding source.

Peer Grouping

FTIS was used to develop an initial set of potential peers for Knoxville (Table 17) using this report’s methodology. A secondary screening process was then used to eliminate peers without a dedicated local funding source, as was illustrated in Chapter 4 (methodology Step 3c and Table 15).
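The secondary screening step reduces to a simple filter over the candidate list. The sketch below assumes a hypothetical table of candidates carrying likeness scores and an F-10-derived flag for dedicated local funding; the agency names, scores, and flag values are illustrative, not a statement of any agency’s actual funding situation.

```python
import pandas as pd

# Hypothetical candidate peers with likeness scores and an F-10-derived
# flag for a dedicated local funding source (values are illustrative).
candidates = pd.DataFrame({
    "agency": ["Agency A", "Agency B", "Agency C", "Agency D"],
    "likeness_score": [0.25, 0.36, 0.39, 0.46],
    "dedicated_local_funding": [True, True, False, False],
})

# Secondary screening: keep only candidates that have a dedicated
# local funding source, preserving their likeness-score order
peers = (candidates[candidates["dedicated_local_funding"]]
         .reset_index(drop=True))
```

Any secondary screening criterion (region-wide service, population growth, modes operated) can be applied the same way, as an additional boolean column and filter.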
Table 17. Knoxville peer group candidates.

Agency                                          City           State  Likeness Score
Winston-Salem Transit Authority                 Winston-Salem  NC     0.25
South Bend Public Transportation Corporation    South Bend     IN     0.36
Birmingham-Jefferson County Transit Authority   Birmingham     AL     0.36
Connecticut Transit - New Haven Division*       New Haven      CT     0.39
Fort Wayne Public Transportation Corporation    Fort Wayne     IN     0.41
Transit Authority of Omaha                      Omaha          NE     0.41
Chatham Area Transit Authority                  Savannah       GA     0.42
Stark Area Regional Transit Authority           Canton         OH     0.44
The Wave Transit System*                        Mobile         AL     0.46
Capital Area Transit*                           Raleigh        NC     0.48
Capital Area Transit*                           Harrisburg     PA     0.48
Shreveport Area Transit System*                 Shreveport     LA     0.49
Rockford Mass Transit District*                 Rockford       IL     0.50
Erie Metropolitan Transit Authority*            Erie           PA     0.52
Capital Area Transit System*                    Baton Rouge    LA     0.52
Western Reserve Transit Authority               Youngstown     OH     0.53

*Eliminated by secondary screening (no dedicated local funding source).

Performance Results

Service Delivery

FTIS was used to retrieve the desired performance data from the NTD (Figure 10). None of the selected service-delivery performance measures are provided directly by FTIS, but they can be derived from other variables available through FTIS. Following the guidance in Chapters 3 and 4, urban area population from the American Community Survey (ACS) was used for “per capita” measures, as all of the agencies in the peer group are the sole agencies in their respective urban areas. ACS population estimates include university students based on a “2-month rule”: if they are staying in their university residence for at least 2 months at the time of survey contact, they

are counted as living in the community where the university is located. This is different from the decennial census procedure, where persons are counted based on their “usual residence,” which may not be their current residence (48). The ACS’s population-counting methodology (and, therefore, the per-capita measures based on those population estimates) reasonably accounts for Knoxville’s student population, as well as the student populations of other communities in the peer group, such as South Bend.

Operating funding per capita is a ratio of total operating expenses (a Florida Standard Variable) and urban area population (a TCRP Project G-11 variable). Similarly, revenue hours per capita divides the Florida Standard Variable revenue hours by urban area population. Operating subsidy per capita subtracts total farebox revenue and total directly generated park-and-ride/other/auxiliary revenue (both from NTD form F-10) from total operating expenses and divides the result by urban area population.

Service Cost and Productivity

Three of the five measures, cost per revenue hour, boardings per revenue hour, and cost per boarding, are available directly from FTIS as Florida Standard Variables. Boardings per capita is calculated from passenger trips (a Florida Standard Variable) and urban area population (a TCRP Project G-11 variable).
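These derivations reduce to simple ratios. The sketch below illustrates them for a single agency-year; the variable names and dollar figures are hypothetical placeholders (the underlying NTD/FTIS sources are noted in comments), not any agency’s actual data.

```python
# Hypothetical agency-year inputs (illustrative values, not actual NTD data)
total_operating_expense = 14_300_000.0   # Florida Standard Variable
revenue_hours = 200_000.0                # Florida Standard Variable
farebox_revenue = 2_000_000.0            # NTD form F-10
other_directly_generated = 500_000.0     # park-and-ride/other/auxiliary, form F-10
urban_area_population = 452_000.0        # ACS urban area population (TCRP G-11)

# Operating funding per capita
operating_funding_per_capita = total_operating_expense / urban_area_population

# Revenue hours per capita
revenue_hours_per_capita = revenue_hours / urban_area_population

# Operating subsidy per capita: expenses minus directly generated
# non-tax revenue, divided by urban area population
subsidy = total_operating_expense - farebox_revenue - other_directly_generated
operating_subsidy_per_capita = subsidy / urban_area_population

# Percent of operating costs subsidized (the farebox-recovery substitute
# used in this case study): subsidy as a share of total operating expense
percent_subsidized = subsidy / total_operating_expense
```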
Figure 10. Service delivery performance results for Knoxville: (a) operating funding per capita; (b) annual revenue hours per capita; (c) operating subsidy per capita. (Comparative bar charts by agency, 2003–2007, with 2007 peer group medians.)

Percent of operating costs subsidized is calculated by subtracting total farebox revenue and total directly generated

park-and-ride/other/auxiliary revenue (both from NTD form F-10) from total operating expenses and dividing the result by total operating expenses. Figure 11 shows the service cost and productivity results for Knoxville.

Interpreting Results

Service Delivery

On a per-capita basis, the Knoxville region’s investment in transit is the lowest in the peer group. Although it grew from 2003 to 2007, so did the peer regions’ investments, as shown in Figure 10(a). Despite the relatively low investment, the amount of service Knoxville has been able to put on the street (revenue hours per capita) is slightly above the peer group median. Knoxville’s revenue hours per capita held steady during 2003–2007, while the peer group trend was a slight increase, as seen in Figure 10(b). In terms of operating subsidy per capita, Knoxville is slightly below the group median. Knoxville’s subsidy increased sharply in 2006 due to a reduction in the revenue received from its contract with the university to provide shuttle service. The peer group trend for subsidy has been higher to sharply higher [Figure 10(c)].

Service Cost and Productivity

Knoxville has the lowest cost per revenue hour of any agency in the peer group. Knoxville’s costs are increasing, as are those of the peers [Figure 11(a)]. Knoxville’s boardings per revenue hour are slightly below the peer median [Figure 11(b)]; its long-term trend is generally upward, though, while there is no clear trend among the peers (some are decreasing, some are steady, and some are increasing). There is a wide spread of cost per boarding values within the peer group and no clear peer trend [Figure 11(c)]; Knoxville is slightly below the group median value and held costs steady during 2003–2007. Knoxville’s boardings per capita and percent of service subsidized [Figures 11(d) and (e)] are both at the group median, and both values have increased over time.
Savannah, South Bend, and Winston-Salem are the other top performers in the peer group that Knoxville could consider looking to for ideas to further improve its service.

Answering Questions

In terms of building support for a dedicated local funding source, the operating funding per capita measure indicates that all of Knoxville’s peer cities have invested more in transit operations than Knoxville, while the revenue hours per capita measure indicates that Knoxville is doing a good job converting revenue into service on the street. Knoxville’s subsidy per capita is currently a little below the peer group average; adding a new tax-supported revenue source would tend to increase this value, but fare revenue derived from the new service would tend to decrease it. Determining the overall impact of new funding and new service on this measure would require more detailed analysis. Both the cost per revenue hour and cost per boarding values support an argument that Knoxville has done a good job relative to its peers of controlling costs. Boardings per capita is at the group median and would be expected to increase with new service. Neither of the other cost-related measures would argue against seeking additional funds, compared to looking first internally for opportunities for cost savings. However, the cost data also highlight the importance of Knoxville Area Transit’s relationship with the University of Tennessee, and the agency could also look to see what it could do to strengthen that partnership.

Salt Lake City, Utah

Context

Utah Transit Authority serves the Salt Lake City and Provo urban areas. It operates light rail, motorbus, and vanpool service, and started commuter rail service in 2008. Demand-response service is partially directly operated and partially contracted. UTA operated 30.1 million vehicle miles in 2007 and had an operating budget of $136.8 million. The Salt Lake City urban area had a 2007 population of 944,000.
Performance Question

How efficient are UTA’s motorbus and light rail operator work schedules?

Performance Measures

The following measures are derived from the tables in Chapter 3 relating to labor administration and resource utilization, using data specific to operating employees:

• Operator wages as a percent of total operating expenses,
• Operator wages and fringe benefits as a percent of total operating expenses,
• Pay-to-platform hours,
• Premium hours as a percent of total operating hours,
• Vehicle revenue hours per operating employee full-time equivalent, and
• Boardings per operating employee full-time equivalent.
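The schedule-efficiency measures in the list above are simple ratios of labor-hour totals. The sketch below is a minimal illustration in the spirit of NTD form F-50; the field names and values are hypothetical, not actual NTD fields or UTA figures.

```python
# Hypothetical agency-wide operator labor-hour totals, in the spirit of
# NTD form F-50 (names and values are illustrative assumptions).
platform_hours = 1_000_000.0   # hours operators spend operating vehicles
total_pay_hours = 1_150_000.0  # all hours operators are paid for
premium_hours = 180_000.0      # overtime and other premium-rate hours
total_operating_hours = 1_150_000.0

# Pay-to-platform hours: paid hours per hour of scheduled platform work;
# values closer to 1.0 indicate more efficient operator work schedules
pay_to_platform = total_pay_hours / platform_hours

# Premium hours as a percent of total operating hours
premium_share = premium_hours / total_operating_hours
```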

Figure 11. Service cost and productivity performance results for Knoxville: (a) cost per revenue hour; (b) boardings per revenue hour; (c) cost per boarding; (d) boardings per capita; (e) percent of service subsidized. (Comparative bar charts by agency, 2003–2007, with 2007 peer group medians.)

Three cost-effectiveness and cost-efficiency measures are also selected to provide context about overall mode efficiency: revenue hours per vehicle hour, cost per boarding, and cost per revenue hour.

UTA desires that the peer agencies operate bus and light rail service, provide region-wide service, and be located in regions with growing populations that have similar land-use characteristics.

Peer Grouping

FTIS was used to develop two sets of peers for UTA, one for the light rail mode and one for the motorbus mode.

Light Rail

Table 18 shows the initial set of potential light rail peers that was identified, based on selecting all peers with likeness scores of 1.00 or less. Based on UTA’s screening criteria, Baltimore is eliminated on the basis of also operating heavy rail, while Minneapolis is eliminated because (a) its light rail line opened during the 2003–2007 period planned to be studied and (b) other agencies provide service to its suburbs (determined from the “service type” measure in FTIS’s peer-grouping results). The five peer agencies in the group are fewer than the recommended ideal number of eight to ten, but exceed the four-agency minimum. Larger, multimodal agencies typically have fewer agencies with similar characteristics available to consider as peers.

Motorbus

Table 19 shows the initial set of potential motorbus peers that was identified, based on selecting all peers with likeness scores of 1.00 or less. Based on UTA’s screening criteria, North County Transit District is eliminated because it only provides suburban service and its (diesel) light rail line opened in 2008, San Francisco MUNI is eliminated because its service area is limited to its region’s central city, Buffalo is eliminated because its region is losing population, and Jacksonville is eliminated because it does not operate light rail.
All of these eliminated agencies’ likeness scores are over 0.75 (i.e., are in the “consider with caution” category), so eliminating them is reasonable.

Table 18. UTA light rail peer group candidates.

Agency Name                                                Location     State  Total Likeness Score
Denver Regional Transportation District                    Denver       CO     0.52
Santa Clara Valley Transportation Authority                San Jose     CA     0.59
Sacramento Regional Transit District                       Sacramento   CA     0.65
Maryland Transit Administration                            Baltimore    MD     0.79
Tri-County Metropolitan Transportation District of Oregon  Portland     OR     0.88
Bi-State Development Agency                                St. Louis    MO     0.94
Metro Transit                                              Minneapolis  MN     0.95

Table 19. UTA motorbus peer group candidates.

Agency Name                                                Location       State  Total Likeness Score
Santa Clara Valley Transportation Authority                San Jose       CA     0.58
Sacramento Regional Transit District                       Sacramento     CA     0.63
Denver Regional Transportation District                    Denver         CO     0.68
Tri-County Metropolitan Transportation District of Oregon  Portland       OR     0.74
North County Transit District                              Oceanside      CA     0.78
Charlotte Area Transit System                              Charlotte      NC     0.80
Bi-State Development Agency                                St. Louis      MO     0.85
San Francisco Municipal Railway                            San Francisco  CA     0.90
Niagara Frontier Transportation Authority                  Buffalo        NY     0.94
Jacksonville Transportation Authority                      Jacksonville   FL     1.00

Performance Results

Data Retrieval

FTIS was used to retrieve the desired performance data from the NTD. Operator wages as a percent of total operating expenses and operator wages and fringe benefits as a percent of total operating expenses are derivable from data on NTD form F-30.

(Note that fringe benefit costs need to be proportioned between vehicle operators and other operating staff.) Pay-to-platform hours and premium hours as a percent of total operating hours are derivable from data on NTD form F-50. Vehicle revenue hours per operating employee full-time equivalent, annual boardings per operating employee full-time equivalent, cost per boarding, and cost per revenue hour are directly available from FTIS as Florida Standard Variables. Vehicle hours per revenue hour are derivable from the Florida Standard Variables vehicle hours and revenue hours.

Denver Regional Transportation District (RTD) is the only agency in the motorbus peer group to use a significant amount of purchased transportation; in 2007, about 47% of RTD's motorbus revenue hours were contracted. Because many of the detailed wage-related variables are not reported to the NTD for purchased transportation, only Denver's directly operated service is included in the comparison. However, the broader cost-efficiency variables can be compared: for example, in 2007, RTD's purchased transportation revenue hours per vehicle hour was 85%, cost per boarding was $4.33, and cost per revenue hour was $60.08.

Agency-Wide Results

The NTD data used to derive two of the measures in this case study, pay-to-platform hours and premium hours as a percent of total operating hours, are only reported on an agency-wide basis (i.e., mode-specific data are not available). Figure 12 shows the performance results for these two measures.

Light Rail

Figure 13 shows the performance results for the light rail mode.

Figure 12. Agency-wide performance measure results for UTA. [(a) Pay-to-Platform Hours; (b) Premium Hours as a Percent of Total Operating Hours. Each panel compares UTA, Denver, Portland, Sacramento, San Jose, and St. Louis for 2003–2007 against the 2007 peer group median.]

Figure 13. Light rail performance measure results for UTA. [(a) Operator Salary as a Percent of Total Operating Expense; (b) Operator Wages and Benefits as a Percent of Total Operating Expense.]

Figure 13. (Continued). [(c) Revenue Hours per Operating Employee Full-Time Equivalent; (d) Annual Boardings per Operating Employee Full-Time Equivalent; (e) Revenue Hours per Vehicle Hours; (f) Cost per Boarding; (g) Cost per Revenue Hour.]

Motorbus

Figure 14 shows the performance results for the motorbus mode.

Interpreting Results

Agency-Wide

UTA's pay-to-platform hours ratio had been the second-lowest in the group, but rose sharply in 2007 and is now above the group median [Figure 12(a)]. (Portland's low values for this measure in most years are explained by its union contract, which allows operator breaks to occur as part of layover and recovery time; the breaks are thus treated as platform time rather than as separately paid break time.) UTA's percentage of hours worked that were overtime is the lowest in the group [Figure 12(b)], which is not necessarily good or bad, but more a reflection of the agency's philosophy regarding overtime. However, having a low overtime rate means that more operators are needed to work the same number of hours, which can result in higher fringe benefit costs.

Light Rail

Operator wages [Figure 13(a)] and the combination of operator wages and benefits [Figure 13(b)] form a greater proportion of overall operating costs at UTA than at any other agency in the peer group. This result is not necessarily good or bad, but it does indicate that increases in costs in these categories will translate more significantly into increased operating costs at UTA than at its peer agencies. UTA operates more revenue hours per employee FTE than any of its peers [Figure 13(c)], although this ratio dropped in 2007 to its lowest level in the 5-year analysis period while the ratio rose at all of UTA's peer agencies. UTA's boardings per employee FTE is second-highest in the peer group [Figure 13(d)]. Here,

Figure 14. Motorbus performance measure results for UTA. [(a) Operator Salary as a Percent of Total Operating Expense; (b) Operator Wages and Benefits as a Percent of Total Operating Expense; (c) Revenue Hours per Operating Employee Full-Time Equivalent; (d) Annual Boardings per Operating Employee Full-Time Equivalent.]

Figure 14. (Continued). [(e) Revenue Hours per Vehicle Hours; (f) Cost per Boarding; (g) Cost per Revenue Hour.]

too, UTA's result dropped in 2007 while rising at all the other peer agencies. Through 2006, UTA was the peer group leader for revenue hours per vehicle hour, but dropped to the group median in 2007 [Figure 13(e)]. UTA's cost per revenue hour is second-lowest in the peer group, but increased in 2007 while peer costs held steady or declined [Figure 13(g)]. UTA's cost per boarding is lowest in the peer group. It increased slightly in 2007, and there was no consistent peer trend [Figure 13(f)].

Motorbus

Vehicle operator wages are above the group median [Figure 14(a)], while the combination of wages and fringe benefits is at the group median [Figure 14(b)]. The same comments that applied to these measures for light rail also apply here. In terms of both revenue hours per operating employee FTE [Figure 14(c)] and trips per operating employee FTE [Figure 14(d)], UTA was second-lowest in the peer group. There was a fairly narrow range of values among the peer group for revenue hours per operating employee FTE; Portland and Denver stand out for trips per operating employee FTE, with the other agencies in a relatively narrow range. UTA's performance in both categories is improving. UTA's revenue hours per vehicle hour are by far the lowest in the peer group [Figure 14(e)].
UTA’s service area is more spread out than any of the other peers, with the possible exception of Denver, so significant deadheading may be required to serve longer-distance commute trips. (This case study focuses on scheduling efficiency; however, a comparison of farebox recov- ery ratio would provide clues as to whether UTA is recouping the cost of providing longer-distance service.) UTA’s cost per boarding is above the peer-group median; the cost has increased in recent years, consistent with the peers [Figure 14(f)]. Finally, UTA’s cost per revenue hour is at the peer-group median, but is increasing at a faster rate than its peers [Figure 14(g)].

Asking Questions

On the light rail side, UTA's performance was generally among the top in its peer group, and UTA appears to have historically scheduled its employees efficiently. However, in the final year, UTA's pay-to-platform hours increased substantially, from well below average to above average, which had an impact on costs. UTA would want to look into the reasons for the increase to see if actions could be taken to reverse it. On the bus side, UTA's performance lags its peers in many areas. With the peer data now in hand, UTA could use the motorbus results to dig deeper into its own data; for example, by comparing efficiency by service type (e.g., urban bus service vs. commuter bus service). As noted previously, a comparison of farebox recovery and other related financial indicators would help answer the question of whether UTA is recouping the cost of the extra deadhead time it incurs. UTA could also analyze its garage locations and the impacts of future commuter rail service on commuter bus routes to see if deadheading could or will be reduced in the future.

UTA's light rail cost indicators moved up in 2006–2007, which was opposite the peer-group trend. Although UTA can obviously track its own current-year costs, NTD data typically have a 2-year time lag, which makes it difficult to apply peer-group trend insights to near-term decision-making. Peer-group information that is as up to date as one's own would be more useful in that regard. However, now that UTA has identified its peer group, it could contact its peers to either (a) request their NTD viewer passwords (to obtain NTD data submitted, but not yet released by FTA) or (b) request the desired data directly. Ideally, if the peer group members agreed to share their current cost information with each other on a regular basis, all could benefit from having up-to-date peer trend information to work with.
Because this performance question focused on schedule efficiency, no adjustments were made to costs to reflect either inflation or differences in wage rates between regions. However, since average wages in Salt Lake City were among the lowest of comparably sized Western metropolitan areas, an adjustment for wage rates would provide insights into how much of UTA's relatively low cost per revenue hour and cost per boarding for light rail is due to efficient operation and how much is due to the region's lower labor costs. This kind of adjustment is illustrated next as part of the Denver case study.

Denver, Colorado

Context

The Denver RTD serves the Denver and Boulder urban areas. RTD provides light rail, motorbus, demand-response, and vanpool service. About 47% of the motorbus revenue hours, nearly all of the demand-response revenue hours, and all of the vanpool service are contracted. RTD operated and purchased 54 million vehicle miles in 2007 and had an operating budget of $320 million. The Denver urban area had a 2007 population of slightly over 2 million.

Performance Question

How comparatively cost-efficient is RTD's overall operation in terms of a fairly calculated and compared cost per revenue hour of service?

Performance Measures

The performance question identifies one measure, cost per revenue hour. Two other measures, cost per boarding (cost-effectiveness) and boardings per revenue hour (productivity), will also be included to provide a more rounded comparison. Costs will be adjusted for both inflation and regional wage rates, and the two cost-related measures will be compared on both an adjusted and an unadjusted basis in order to look at the impact of including those two factors in the analysis. No secondary screening criteria were identified.

Peer Grouping

FTIS was used to develop an agency-wide peer group for RTD. A peer group of eight was desired, likeness score values permitting. The candidate peers that were identified are shown in Table 20.
Five of the potential peers have likeness scores over 0.75, which suggests the need for a closer look at the suitability of the peers. Based on the peer-grouping and service area data supplied by FTIS, the following are noted:

• Denver RTD has the largest service area by far of the peer group, even after accounting for the fact that—like most agencies—it reports its district size (which in this case includes large unpopulated and unserved areas) rather than its actual service area [determined as 3⁄4 mile from bus routes and rail stations, according to the NTD reporting instructions (49)].
• Houston is very comparable to Denver in operating budget and revenue miles operated and is the only regional transit agency in its urban area.
• Metro Transit's urban area contains multiple transit operators, unlike Denver, where RTD serves the entire region. However, Metro Transit's service area population is comparable to that of other peer agencies, and both Denver and St. Paul are state capitals (which provide concentrations

of office employment). Metro Transit's light rail service started during the analysis period.
• DART serves just the Dallas sub-region of the Dallas–Ft. Worth–Arlington urbanized area, but its budget is similar in size to RTD's.
• Sacramento's revenue miles operated and budget are one-third and one-half of Denver's, respectively, but the urban areas have similar population densities—Sacramento's urban area population is larger than both San Jose's and Salt Lake City's—and both Denver and Sacramento are state capitals. Other transit operators serve about 20% of the population within the Sacramento urban area.
• St. Louis is a comparably sized urban area, and Bi-State Development Agency is the only multimodal transit operator in its urban area, although it operates only about half the amount of service that Denver does.

Keeping in mind the principle that peers should be similar but should not be expected to be exactly the same, Houston, Dallas, and St. Louis can readily be included as peers. Minneapolis and Sacramento differ more substantially from Denver, but also have notable similarities. Therefore, they will be retained as peers, but the differences will be kept in mind when interpreting the results.

Performance Results

Cost Adjustments for Inflation and Cost of Living

All three performance measures are directly available from FTIS as Florida Standard Variables. Inflation and wage data required to calculate adjusted costs were obtained from the BLS at the websites identified in Chapter 4. Inflation data specific to metropolitan areas are available for seven of the nine agencies. For the two regions without detailed inflation data, Sacramento and Salt Lake City, average inflation data for urban areas in the Western United States will be used instead. Table 21 shows the CPI (all consumers) values for each urban area, by year.
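These CPI values drive the inflation adjustment: each year's costs are multiplied by the ratio of the region's 2007 CPI to that year's CPI. A minimal sketch, using the Denver and Dallas rows of Table 21, reproduces the corresponding Table 22 factors (the function and data layout are our own):

```python
# Convert prior-year costs into 2007-equivalent dollars using the
# region's own CPI: factor(year) = CPI(2007) / CPI(year).
# The CPI values are the Denver and Dallas rows of Table 21.

cpi = {
    "Denver": {2003: 186.8, 2004: 187.0, 2005: 190.9, 2006: 197.7, 2007: 202.0},
    "Dallas": {2003: 176.2, 2004: 178.7, 2005: 184.7, 2006: 190.1, 2007: 193.2},
}

def inflation_factor(region, year, base_year=2007):
    return round(cpi[region][base_year] / cpi[region][year], 3)

# These reproduce the matching Table 22 entries:
assert inflation_factor("Denver", 2003) == 1.081
assert inflation_factor("Dallas", 2003) == 1.096

# Applying the factor: a $100.00 Denver cost incurred in 2003 is
# about $108.10 in 2007 dollars.
print(100 * inflation_factor("Denver", 2003))
```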
It can be seen that different regions experienced different levels of inflation between 2003 and 2007, with Denver experiencing the lowest percent inflation. The process described in Chapter 3 was used to develop factors that convert prior-year costs into 2007 equivalents. The results are shown in Table 22.

Table 20. Denver peer group candidates.

Agency | City | State | Likeness Score
Santa Clara Valley Transportation Authority | San Jose | CA | 0.50
Utah Transit Authority | Salt Lake City | UT | 0.59
Tri-County Metropolitan Transportation District of Oregon | Portland | OR | 0.66
Metropolitan Transit Authority of Harris County, Texas | Houston | TX | 0.77
Metro Transit | Minneapolis | MN | 0.88
Dallas Area Rapid Transit | Dallas | TX | 0.88
Sacramento Regional Transit District | Sacramento | CA | 0.88
Bi-State Development Agency | St. Louis | MO | 0.93

Table 21. Consumer Price Index values for Denver peer group.

Urban Area | 2003 | 2004 | 2005 | 2006 | 2007 | % change
Dallas | 176.2 | 178.7 | 184.7 | 190.1 | 193.2 | 9.6%
Denver | 186.8 | 187.0 | 190.9 | 197.7 | 202.0 | 8.1%
Houston | 163.7 | 169.5 | 175.6 | 180.6 | 183.8 | 12.3%
Minneapolis | 182.7 | 187.9 | 193.1 | 196.2 | 201.2 | 10.1%
Portland | 186.3 | 191.1 | 196.0 | 201.1 | 208.6 | 12.0%
Sacramento | 188.6 | 193.0 | 198.9 | 205.7 | 212.2 | 12.5%
Salt Lake City | 188.6 | 193.0 | 198.9 | 205.7 | 212.2 | 12.5%
San Jose | 196.4 | 198.8 | 202.7 | 209.2 | 216.0 | 10.0%
St. Louis | 173.4 | 180.3 | 186.2 | 189.5 | 193.2 | 11.4%

Average hourly wage data across all occupations are available for all nine transit agencies' urban areas. As described in Chapter 3, it is possible to drill down into the BLS wage database to get more-specific data—for example, average wages for

“bus drivers, transit and intercity.” However, the more-detailed category would be dominated by the transit agencies' own workforces. The intent here is to (a) investigate whether Denver is spending more or less for its labor relative to its region's average wages and (b) adjust costs to reflect differences in a region's overall cost of living (which impacts overall average wages within the region). Table 23 shows the average hourly wage values for each urban area by year. It can be seen that different urban areas experienced varying amounts of wage growth between 2003 and 2007 and that there is a relatively wide spread in the cost of living (as reflected by the regional average wage) among the peer agencies.

The process described in Chapter 4 was used to develop factors that reflect how much higher or lower each region's wages are compared to Denver's. These factors are applied to each region's cost data to produce adjusted costs that reflect the approximate cost each agency would have experienced if its region's average wages and cost of living were the same as Denver's. The results are shown in Table 24.

Performance Comparison Graphs

Figure 15 shows the performance results, based on costs adjusted for regional differences in inflation, labor market conditions, and cost of living. For illustrative purposes, results based on unadjusted costs are also presented.

Interpreting Results

Cost-Efficiency

Looking at the adjusted cost per revenue hour first, Denver has the best performance among its peers [Figure 15(a)]. The trend data indicate that Denver's costs held steady relative to inflation during 2003–2007. There is no apparent peer trend: some agencies' costs increased at a faster rate than inflation, while other agencies' costs increased at a slower rate (indicated by a declining trend in cost per revenue hour, as measured in 2007 dollars).
Table 22. Inflation cost adjustments for Denver peer group.

Urban Area | 2003 | 2004 | 2005 | 2006 | 2007
Dallas | 1.096 | 1.081 | 1.046 | 1.016 | 1.000
Denver | 1.081 | 1.080 | 1.058 | 1.022 | 1.000
Houston | 1.123 | 1.084 | 1.047 | 1.018 | 1.000
Minneapolis | 1.101 | 1.071 | 1.042 | 1.025 | 1.000
Portland | 1.120 | 1.092 | 1.064 | 1.037 | 1.000
Sacramento | 1.125 | 1.099 | 1.067 | 1.032 | 1.000
Salt Lake City | 1.125 | 1.099 | 1.067 | 1.032 | 1.000
San Jose | 1.100 | 1.087 | 1.066 | 1.033 | 1.000
St. Louis | 1.114 | 1.072 | 1.038 | 1.020 | 1.000

Table 23. Mean hourly wages (all occupations) for Denver peer group.

Urban Area | 2003 | 2004 | 2005 | 2006 | 2007 | % change
Dallas | $18.35 | $18.86 | $19.23 | $19.68 | $20.57 | 12.1%
Denver | $19.65 | $20.05 | $20.49 | $21.15 | $21.93 | 11.6%
Houston | $18.05 | $18.51 | $18.71 | $19.09 | $19.72 | 9.3%
Minneapolis | $19.92 | $20.59 | $21.07 | $21.63 | $22.31 | 12.0%
Portland | $18.50 | $18.97 | $19.35 | $20.07 | $20.85 | 12.7%
Sacramento | $19.19 | $19.81 | $20.11 | $20.98 | $21.64 | 12.8%
Salt Lake City | $16.51 | $16.91 | $17.49 | $18.22 | $19.04 | 15.3%
San Jose | $25.99 | $26.84 | $27.88 | $28.84 | $29.67 | 14.2%
St. Louis | $17.88 | $18.03 | $18.22 | $18.72 | $19.53 | 9.2%

If the comparison had been performed with unadjusted data, Denver would have been second-best in the peer group, just behind Houston [Figure 15(b)]. However, Houston's cost performance is influenced by the fact that average wages in Houston are 10% lower than in Denver. Region-wide wages are something that is out of the control of a transit agency, whereas one objective of performing a peer comparison is to find things that are under an agency's control that can be improved. Using adjusted costs as a basis of comparison helps

to eliminate some of these external factors. A comparison of the adjusted and unadjusted data also indicates that operating costs in Dallas and Sacramento are relatively high regardless of the cost basis used, while much of San Jose's relatively high operating costs can be explained by that region's high cost of living.

Cost-Effectiveness

Again looking at the adjusted data first, Denver is slightly below the group median for cost per boarding [Figure 15(c)]. Denver's value increased during 2003–2007, while five of the eight peers showed a decrease. The relative placement of the agencies does not change much when the unadjusted data are compared [Figure 15(d)]. This is due in part to the fact that cost per boarding measures both a service input and a service outcome, while cost per revenue hour compares two service inputs. Denver's shift from best-in-class for cost per revenue hour to middle-of-the-pack for cost per boarding suggests that other agencies generate more boardings per revenue hour. This tentative conclusion will be confirmed by the productivity measure.

Productivity

The final graph comes in only one version, as boardings per revenue hour does not involve any cost data. This graph shows that Denver has the second-lowest productivity among the peer group [Figure 15(e)]. Denver's trend of a slight decline in this measure is consistent with five of the eight peer agencies. The two leaders in this category are Minneapolis and Portland.

Asking Questions

The results show that Denver has done a good job relative to its peers at controlling the costs of providing transit service. However, the service that Denver provides has not been as productive as that of most of its peers. One possible explanation is that because Denver has a larger service area than any of its peers, it provides relatively more long-distance routes, which would be expected to have lower productivity due to the amount of time that passengers spend on the bus.
This theory could be tested in at least two ways. First, Denver could look more in-depth at UTA's and Houston's results. Those two systems are similar to Denver in terms of regional coverage and the operation of longer-distance bus routes, yet they had better productivity. Second, Denver could use its own in-house data to remove operating costs, revenue hours, and boardings for routes serving outlying communities (e.g., routes originating outside the Denver urban area). The results for the remainder of the system could then be compared to the results of the six remaining peers with more compact service areas, since there would then be more of an apples-to-apples comparison of service area sizes.

San Jose, California

Context

The Santa Clara Valley Transportation Authority (VTA) serves Santa Clara County, located at the south end of San Francisco Bay and containing the Bay Area's largest city, San Jose. VTA directly operates light rail and motorbus service and purchases about 3% of its motorbus revenue hours and all of its demand-response service. VTA operated 25 million vehicle miles in 2007 and had an operating budget of $282 million. The San Jose urban area had a 2007 population of 1.58 million.

Performance Question

How effective are VTA's light rail vehicle maintenance and non-vehicle maintenance programs?

Table 24. Labor market and cost-of-living adjustments for Denver peer group.

Urban Area | 2003 | 2004 | 2005 | 2006 | 2007
Dallas | 1.071 | 1.063 | 1.066 | 1.075 | 1.066
Denver | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
Houston | 1.089 | 1.083 | 1.095 | 1.108 | 1.112
Minneapolis | 0.986 | 0.974 | 0.972 | 0.978 | 0.983
Portland | 1.062 | 1.057 | 1.059 | 1.054 | 1.052
Sacramento | 1.024 | 1.012 | 1.019 | 1.008 | 1.013
Salt Lake City | 1.190 | 1.186 | 1.172 | 1.161 | 1.152
San Jose | 0.756 | 0.747 | 0.735 | 0.733 | 0.739
St. Louis | 1.099 | 1.112 | 1.125 | 1.130 | 1.123
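Each Table 24 factor is simply Denver's average wage divided by the region's average wage for the same year. A minimal sketch using 2007 values from Table 23 (the function and data layout are our own):

```python
# Table 24's labor-market factors: how much each region's costs would
# change if its average wage equaled Denver's in that year.
# factor = Denver wage / region wage. Wages are from Table 23 (2007).

wages_2007 = {
    "Denver": 21.93,
    "Houston": 19.72,
    "San Jose": 29.67,
    "Salt Lake City": 19.04,
}

def wage_factor(region, reference="Denver"):
    return round(wages_2007[reference] / wages_2007[region], 3)

# These reproduce the 2007 column of Table 24:
assert wage_factor("Houston") == 1.112        # Houston wages ~10% lower
assert wage_factor("San Jose") == 0.739       # San Jose wages much higher
assert wage_factor("Salt Lake City") == 1.152
```

Multiplying a region's reported costs by both its inflation factor (Table 22) and its wage factor yields the adjusted costs used in Figure 15.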

Figure 15. Performance results for Denver. [Agency-wide; each panel compares Denver, Dallas, Houston, Minneapolis, Portland, Sacramento, Salt Lake City, San Jose, and St. Louis for 2003–2007 against the 2007 peer group median. (a) Operating Cost per Revenue Hour (Adjusted Costs); (b) Operating Cost per Revenue Hour (Unadjusted Costs); (c) Operating Cost per Boarding (Adjusted Costs); (d) Operating Cost per Boarding (Unadjusted Costs); (e) Boardings per Revenue Hour.]
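The three measures graphed in Figure 15 are algebraically linked: boardings per revenue hour equals operating cost per revenue hour divided by operating cost per boarding, which is why a low hourly cost combined with a mid-pack cost per boarding implies below-average productivity. A minimal sketch (the dollar values are hypothetical, not Denver's actual figures):

```python
# Link between the three Figure 15 measures:
#   boardings per revenue hour = (cost per revenue hour) / (cost per boarding)
# The inputs are hypothetical illustration values only.

cost_per_revenue_hour = 90.00  # $ per revenue hour (input vs. input)
cost_per_boarding = 3.60       # $ per boarding (input vs. outcome)

boardings_per_revenue_hour = cost_per_revenue_hour / cost_per_boarding
print(boardings_per_revenue_hour)  # 25.0
```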

Performance Measures

The following measures are selected or derived from the tables in Chapter 3 relating to maintenance administration, service characteristics, and transit investment:

• Percent of maintenance costs that are labor,
• Average fleet age,
• Spare ratio,
• Miles of track,
• Vehicle maintenance cost per vehicle operated in maximum service,
• Vehicle maintenance cost per car mile,
• Car miles between failures,
• Maintenance costs as a percentage of total operating costs, and
• Non-vehicle maintenance cost per track mile.

The first four measures are descriptive measures that provide context about each light rail operator. The remaining measures are outcome measures. No secondary screening measures were identified. As noted in the Denver case study, San Jose's cost of living is higher than in many other parts of the country. Therefore, vehicle maintenance cost comparisons will be adjusted to account for wage differences between regions.

Peer Grouping

FTIS was used to develop a light rail peer group for VTA. A peer group of eight was desired, likeness score values permitting. Table 25 shows the candidate peers that were identified. MBTA was dropped as a peer on the basis of being an operator of streetcars rather than modern light rail vehicles (LRVs). San Francisco Muni also operates some historic streetcars, but the majority of its fleet consists of modern LRVs. A significant portion of Muni's system operates underground, unlike the others in its peer group, so that fact will need to be considered when non-vehicle maintenance costs (e.g., stations and right-of-way) are compared.

Performance Results

Cost Adjustments for Labor Market and Cost of Living

Wage data required for cost adjustments were obtained from the BLS at the website identified in Chapter 3. The process used to adjust wage data is similar to the one used in the Denver case study (except that San Jose is used as the reference point this time) and will not be repeated here.
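The outcome measures above reduce to simple ratios of the variables retrieved from FTIS and the NTD. The sketch below shows the derivations, with hypothetical field names and sample values standing in for actual VTA data:

```python
# Derive the case study's maintenance outcome measures from the base
# variables retrieved from FTIS/the NTD. Field names and sample values
# are hypothetical placeholders, not actual VTA data.

record = {
    "vehicle_maint_cost": 30_000_000.0,     # vehicle maintenance cost ($)
    "nonvehicle_maint_cost": 12_000_000.0,  # non-vehicle maintenance cost ($)
    "total_operating_cost": 80_000_000.0,
    "car_miles": 3_000_000.0,
    "vehicle_system_failures": 300,
    "vehicles_max_service": 60,
    "track_miles": 40.0,
}

measures = {
    "maint_cost_per_voms":
        record["vehicle_maint_cost"] / record["vehicles_max_service"],
    "maint_cost_per_car_mile":
        record["vehicle_maint_cost"] / record["car_miles"],
    "car_miles_between_failures":
        record["car_miles"] / record["vehicle_system_failures"],
    "maint_share_of_operating_cost":
        (record["vehicle_maint_cost"] + record["nonvehicle_maint_cost"])
        / record["total_operating_cost"],
    "nonvehicle_maint_cost_per_track_mile":
        record["nonvehicle_maint_cost"] / record["track_miles"],
}

for name, value in measures.items():
    print(f"{name}: {value:,.3f}")
```

For the cost-based measures, the cost inputs would first be multiplied by the San Jose-referenced wage factors described above.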
Table 25. San Jose peer group candidates.

Agency | City | State | Likeness Score
Denver Regional Transportation District | Denver | CO | 0.49
Sacramento Regional Transit District | Sacramento | CA | 0.55
Maryland Transit Administration | Baltimore | MD | 0.57
Utah Transit Authority | Salt Lake City | UT | 0.58
Tri-County Metropolitan Transportation District of Oregon | Portland | OR | 0.61
San Diego Trolley, Inc. | San Diego | CA | 0.70
San Francisco Municipal Railway | San Francisco | CA | 0.71
Massachusetts Bay Transportation Authority | Boston | MA | 0.82

Data Retrieval

Average fleet age, spare ratio, total maintenance costs, total operating costs, and number of vehicle system failures are Florida Standard Variables. Total rail track miles is available from NTD form A-30. Vehicle maintenance labor costs come from two variables on NTD form F-30 (vehicle maintenance other salaries/wages and vehicle maintenance fringe benefits); two other variables on NTD form F-30 provide the same information for non-vehicle maintenance labor costs. Finally, car miles is available from NTD form S-10. All of the desired performance measures can be determined directly from these variables or as ratios of these variables.

Data Issues

The following data issues were noted when the data were retrieved from FTIS:

• Denver did not report vehicle system failures in 2003–2006. It did report them in 2007, but the resulting average distance between failures was 20 times greater than that of any other peer in 2007. Therefore, Denver's 2007 data were discarded from the analysis. Similarly, Salt Lake City's average distance between failures was four times greater than that of any other peer in 2003–2005 and was discarded.
• San Diego's light rail data were reported by the San Diego Metropolitan Transit System (MTS) in 2007, which is a separate NTD reporter from the former San Diego Trolley, Inc., which reported in earlier years. Some key variables

needed for this case study's performance measures were not reported by MTS in 2007.
• San Francisco Muni reported the exact same number of light rail vehicle failures (2,002) each year from 2003–2005.
• Track miles was not an NTD reporting variable until 2005.

Performance Comparison Graphs

Figure 16 shows the performance results. Note that some of the graphs show a 2006 median value to maximize the number of peers included in the calculation of the median, because some variables could not be calculated for San Diego for 2007.

Interpreting Results

Descriptive Measures

VTA has the second-youngest fleet among its peers [Figure 16(a)] and the third-most track miles [Figure 16(b)]. Labor makes up slightly more than 50% of the total maintenance budget, which is a higher ratio than at all but one peer [Figure 16(c)]. However, this result is unsurprising, given the urban area's high average hourly wage rate. VTA has by far the largest spare ratio of any of its peers [Figure 16(d)], with 50% more LRVs available as spares than are operated in maximum service (i.e., a spare ratio of 150%).

Outcome Measures

Even after adjusting for labor costs, VTA has the highest maintenance cost per vehicle in maximum service [Figure 16(e)] and the second-highest maintenance cost per car mile operated [Figure 16(f)], although both measures have been trending downward while generally holding steady or increasing at VTA's peers. VTA's non-vehicle maintenance costs, on the other hand, are at the peer group median after adjusting for labor costs [Figure 16(g)]. Maintenance costs make up slightly more than half of total operating costs [Figure 16(h)], which is second-highest in the peer group. In terms of distance between light rail car failures, VTA has been at or near best-in-class within its peer group throughout the analysis period [Figure 16(i)].
However, there appears to be a wide variation in how agencies report light rail car failures to the NTD, so it may not be possible to conclude much from this measure.

Asking Questions

The data suggest that VTA's high spare ratio may be a key driver of the agency's relatively high maintenance costs, after controlling for labor cost differences among the peer regions. VTA received 70 new low-floor LRVs during the analysis period. VTA's own maintenance records could be used to compare the maintenance costs of the two fleets to confirm whether or not this theory is true. If true, and if the agency anticipated keeping its older high-floor vehicles to support future service expansions, it could contact its peers that have also purchased low-floor vehicles to learn from their experiences maintaining mixed high- and low-floor fleets. The objective of the contacts would be to try to identify whether VTA is performing more maintenance than needed on low-usage vehicles to keep them in good working order.

To effectively draw solid conclusions from the maintenance data, agency contacts would be needed to provide more context about maintenance activities and needs. For example, VTA would be interested in finding out the types of non-vehicle maintenance performed at its peer agencies and the ages of various components of its peers' light rail infrastructure. Because VTA's peers appear to report vehicle failures differently, agency contacts would also be needed to find out what definitions they used before firm conclusions could be drawn from the car miles between failures measure.

South Florida

Context

The Florida Department of Transportation (FDOT) contributed capital and operating funds to double-track the Tri-Rail commuter rail line operated by the South Florida Regional Transportation Authority (SFRTA) and was SFRTA's largest source of funds in 2005, 2006, and 2007.
Consequently, FDOT is interested in comparing Tri-Rail's performance to that of similar commuter rail operations to make sure that the value of its investment is maximized.

SFRTA contracts commuter rail service in a single corridor running from Palm Beach County through Broward County into northern Miami-Dade County. It also contracts motorbus and demand-response service that feeds its commuter rail stations. In 2007, SFRTA's commuter rail service operated just over 2 million vehicle miles and had an operating budget of $33.5 million. The Miami urbanized area had a 2007 population of 5.23 million and contains all three counties in which SFRTA operates.

Performance Question

Compare Tri-Rail's level of service, investment in public transportation, and cost-effectiveness to that of its peers.

Performance Measures

The following measures are selected or derived from the tables in Chapter 3 relating to cost-efficiency, cost-effectiveness,

[Figure 16. Performance results for San Jose (light rail): (a) Average Fleet Age; (b) Track Miles; (c) Labor as a Percentage of Maintenance Costs; (d) Spare Ratio; (e) Adjusted Annual Maintenance Cost per Vehicle in Maximum Service; (f) Adjusted Maintenance Cost per Annual Car Mile.]

resource utilization, service utilization, perceived service quality, and delivered service quality:

• Operating cost per revenue hour,
• Operating cost per revenue mile,
• Operating cost per passenger mile,
• Operating funding per capita,
• Revenue miles per capita,
• Vehicle hours per vehicle operated in peak service,
• Service span,
• Average system speed,
• Average trip length, and
• Average system peak headway.

A comparison of miles of track to directional route miles was also investigated (to describe the prevalence of double tracking); however, all the agencies in the peer group reported the same number of miles of track as directional route miles, so this was not possible. (According to the NTD reporting guidelines, miles of track should be one-half the directional route miles for commuter rail lines operating on single track, plus the length of any sidings/passing tracks and yard tracks, so the two values should never be equal, even when a route is fully double-tracked.) Individual agency contacts would therefore be required to determine the prevalence of double tracking.

The average peak headway is derived by FTIS from other NTD measures. [FTIS divides directional route miles by average speed (revenue miles per revenue hour) to give the average time for a train to make a round trip, then divides this result by the number of trains operated in peak service to give an average peak headway in hours, and finally multiplies the result by 60 to give a value in minutes.]
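The bracketed FTIS derivation, together with the NTD single-track guideline noted above, can be sketched as follows. All numeric inputs here are hypothetical illustrations, not NTD-reported values:

```python
# Sketch of the FTIS average-peak-headway derivation and the NTD
# track-miles relationship described in the text. Inputs are hypothetical.

def average_peak_headway_min(directional_route_miles, revenue_miles,
                             revenue_hours, trains_in_peak_service):
    """Average peak headway in minutes, per the FTIS derivation."""
    avg_speed_mph = revenue_miles / revenue_hours        # average system speed
    round_trip_hours = directional_route_miles / avg_speed_mph
    return round_trip_hours / trains_in_peak_service * 60

def single_track_miles(directional_route_miles, siding_and_yard_miles):
    """NTD guideline: track miles for a single-track commuter rail line
    should equal half the directional route miles plus the length of
    sidings/passing tracks and yard tracks."""
    return directional_route_miles / 2 + siding_and_yard_miles

# A hypothetical 2 x 72-mile corridor averaging 36 mph with 6 peak trains:
print(round(average_peak_headway_min(144, 1_800_000, 50_000, 6)))  # -> 40
```

A reported track-miles value exactly equal to directional route miles is thus a flag that an agency may not be following the guideline, which is why agency contacts would be needed to confirm the extent of double tracking.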
[Figure 16 (Continued): (g) Adjusted Non-Vehicle Maintenance Cost per Track Mile; (h) Maintenance Cost as a Percentage of Operating Cost; (i) Car Miles Between Failures.]

Since many commuter rail lines tend to have strongly directional peak service and some also
operate a variety of service patterns, the reported value will often not correspond to the peak-direction headway experienced at a given station. However, the measure is still useful as a comparative indicator of the relative frequency of service operated by different systems. (Note that a direct comparison of rail schedules using the Internet would also have difficulty accounting for multiple service patterns, variations in headways during the peak period, and the relative amounts of peak-direction and off-peak-direction service.)

For this comparison, FDOT wishes to focus on other commuter rail operators that, like SFRTA, operate a single route or a single route with two branches.

Peer Grouping

FTIS was used to develop a commuter rail peer group for Tri-Rail. Table 26 identifies the candidate peers. Commuter rail operators in the Baltimore and Los Angeles regions were screened out by the criterion that peers should not operate multiple routes (excepting a trunk-and-branch operation like that of Virginia Railway Express). Trinity Railway Express (TRE) is unusual in that it is jointly operated by the transit agencies in Dallas and Ft. Worth. Therefore, data used for TRE's likeness score calculation had to be manually combined in a spreadsheet from the individual data reported by the two agencies.

Two commuter rail lines had likeness scores that warranted further investigation. Coaster is the smallest operator in the peer group in terms of metropolitan area population and operating budget, but the San Diego region experiences a level of congestion similar to Miami's, as measured by annual hours of delay per traveler. A significant portion of TRE's likeness score came from the difference between its parent agencies' service type (primary agency serving a region's central city) and SFRTA's service type (suburban service connecting to a central city).
If TRE were treated as a stand-alone agency with the same service type as SFRTA, its likeness score would be a satisfactory 0.63. Therefore, both of these commuter rail lines were retained as peers.

Performance Results

Cost Adjustments for Labor Market and Cost of Living

Because there is a wide range of average wages among the urban areas represented in the peer group, operating costs were adjusted to reflect wage differences between regions. As in the previous two case studies, the wage data were obtained from the BLS at the website identified in Chapter 4, and a similar adjustment process was used. In three cases (Chicago, Miami, and San Francisco), wage data are available for subareas within the urbanized area; in these cases, the subarea containing the transit agency's headquarters was used.

Data Retrieval

All of the desired performance measures are available as Florida Standard Variables or as ratios of Florida Standard Variables, except for urban area population (TCRP Project G-11 variable), weekday A.M. peak number of vehicles/trains in operation (NTD Form S-10), and rail total track miles (NTD Form A-20). Following this report's guidance, urban area population was used instead of service area population. (Note that both Tri-Rail and South Shore report the population of the entire Miami and Chicago urban areas, respectively, as their service area populations, even though those lines serve only relatively small portions of their urban areas.
Table 26. Tri-Rail peer group candidates.

Agency                                                           City              State  Likeness Score
South Florida Regional Transportation Authority (Tri-Rail)       Pompano Beach     FL     0.00
Central Puget Sound Regional Transit Authority (Sounder)         Seattle           WA     0.47
Peninsula Corridor Joint Powers Board (Caltrain)                 San Carlos        CA     0.47
Virginia Railway Express (VRE)                                   Alexandria        VA     0.65
Northern Indiana Commuter Transportation District (South Shore)  Chesterton        IN     0.67
Southern California Regional Rail Authority (Metrolink)          Los Angeles       CA     0.79
North County Transit District (Coaster)                          Oceanside         CA     0.80
Maryland Transit Administration (MTA)                            Baltimore         MD     0.95
Trinity Railway Express (TRE)                                    Dallas-Ft. Worth  TX     1.05
Dallas Area Rapid Transit (TRE)                                  Dallas            TX     1.27
Fort Worth Transportation Authority (TRE)                        Fort Worth        TX     1.41

Because the other peers all operate single lines or a trunk with two branches, the portion of the urban area served by each commuter rail line should
be relatively similar.) As noted previously, while service area population would be the theoretically preferable basis of comparison, consistently reported service area data are not available from the NTD, the NTD service area definition does not include commuter rail's park-and-ride market area in any event, and the detailed census data required to develop station-area population estimates may be up to 10 years old. While not perfect, urban area population is sufficient for developing insights that can be followed up later with a more detailed analysis, if necessary.

As was the case with the peer-grouping data, performance data for TRE had to be combined from the data separately submitted by the Dallas and Ft. Worth transit agencies.

Descriptive Measure Graphs

Figure 17 presents descriptive measures for the peer group.

Outcome Measure Graphs

Figure 18 presents outcome measure results for the peer group.

Interpreting Results

Descriptive Measures

Tri-Rail's operating funding per capita is at the peer group median [Figure 17(a)]. Except for Caltrain, which is much higher than the rest of the peer group, Tri-Rail operated the most revenue miles per capita [Figure 17(b)]. Tri-Rail's values for both of these measures increased at the same rate as or faster than its peers between 2003 and 2007. Tri-Rail's weekday service span is a little above the peer group median [Figure 17(c)], while its average peak headway is a little longer than the median [Figure 17(d)].
[Figure 17. Descriptive measure results for Tri-Rail: (a) Operating Funding per Capita; (b) Annual Revenue Miles per Capita; (c) Weekday Service Span; (d) Average Peak Headway.]

[Figure 18. Outcome measure results for Tri-Rail: (a) Average Passenger Trip Length; (b) Average Speed; (c) Annual Vehicle Hours per Peak Vehicle; (d) Adjusted Cost per Passenger Mile; (e) Adjusted Cost per Revenue Hour; (f) Adjusted Cost per Revenue Mile.]

Weekday service span has increased
(compared to a peer group trend of holding steady) and average peak headway has gotten shorter (compared to a peer group trend of steady to shorter).

Outcome Measures

Tri-Rail's passengers take the longest trips of any in the peer group [Figure 18(a)] and travel at the second-fastest average speed [Figure 18(b)]. Speeds increased significantly in 2007, while the long-term trend for the peers has been one of little change, except for Caltrain. Tri-Rail gets good utilization out of its vehicles (in terms of vehicle hours per vehicle operated in maximum service), although it dropped sharply in 2007, from being consistently the best in this category to third in the peer group, opposite the peer group trend [Figure 18(c)].

Looking at cost-related measures, Tri-Rail has the second-highest adjusted cost per passenger mile [Figure 18(d)] and adjusted cost per revenue hour [Figure 18(e)] in the peer group, and these values have increased more between 2003 and 2007 than those of any of its peers. (Note that cost per passenger mile values are fairly tightly clustered for five of the agencies, including Tri-Rail, with one outlier above and one below.) Tri-Rail's adjusted cost per revenue mile, on the other hand, is at the peer group median [Figure 18(f)], although it, too, has gone up significantly since 2003 (while holding steady between 2006 and 2007). If unadjusted costs had been used, Tri-Rail's position relative to its peers would have been the same, but VRE's unit costs would have increased more than Tri-Rail's due to the Washington, DC, region's relatively high wages ($26.37 per hour in 2007 vs. $18.75 for Ft. Lauderdale) and much greater increase in average wages from 2003 to 2007 (19.6%, compared to 12.5% for the Ft. Lauderdale region).

Asking Questions

The peer-comparison results show that, on a per-capita basis, the state's and region's investment in commuter rail service is on a par with that of Tri-Rail's peers.
The aspects of Tri-Rail's quality of service that can be assessed through the NTD were as good as or better than its peers'. Two of Tri-Rail's cost-effectiveness and cost-efficiency values, on the other hand, were higher than most of its peers', and all three cost-related measures increased significantly during the analysis period. Since Tri-Rail began a service expansion during this period, associated with its double-tracking project, a question to be investigated further would be: What aspects of the service expansion, if any, are contributing to the significant unit cost increases? Caltrain stands out as best-in-class in the peer group, even with its region's high cost of living, and could be an agency that Tri-Rail looks to for cost-saving ideas and for ideas on operating varying service patterns on double track.
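Several of the comparisons in this chapter rely on wage-adjusted operating costs. The sketch below illustrates one plausible form of such an adjustment, a simple wage-ratio scaling; this form is an assumption for illustration only (the report's actual procedure is described in Chapter 4), although the 2007 BLS average hourly wages for Washington, DC, and Ft. Lauderdale are those cited above:

```python
# Illustrative wage-based cost adjustment (assumed simple ratio form;
# see Chapter 4 for the report's actual adjustment procedure).

def adjust_cost(cost, local_avg_wage, reference_avg_wage):
    """Scale a local operating cost to a reference region's wage level."""
    return cost * (reference_avg_wage / local_avg_wage)

# 2007 BLS average hourly wages cited in the text:
DC_WAGE = 26.37    # Washington, DC region (VRE)
FTL_WAGE = 18.75   # Ft. Lauderdale region (Tri-Rail)

# A hypothetical $500 cost per revenue hour in the DC region, expressed
# at Ft. Lauderdale wage levels:
print(round(adjust_cost(500, DC_WAGE, FTL_WAGE), 2))
```

Scaling high-wage regions' costs downward in this way is what allows a comparison such as VRE versus Tri-Rail to separate genuine efficiency differences from differences in regional labor markets.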

TRB’s Transit Cooperative Research Program (TCRP) Report 141: A Methodology for Performance Measurement and Peer Comparison in the Public Transportation Industry explores the use of performance measurement and benchmarking as tools to help identify the strengths and weaknesses of a transit organization, set goals or performance targets, and identify best practices to improve performance.
