From page 30...
... This report's peer-grouping methodology screens potential peers based on a number of common factors that can cause performance results to differ between otherwise similar agencies; however, additional screening factors may be needed to ensure that the final peer group is appropriate for the performance question being asked. These secondary screening factors are also identified at this stage.
From page 31...
... Step 2: Develop Performance Measures

Step 2a: Performance Measure Selection

The performance measures used in a peer comparison are, for the most part, dependent on the performance question being asked. For example, a question about the cost-effectiveness of an agency's operations would focus on financial outcome measures, while a question about the effectiveness of an agency's maintenance department could use measures related to maintenance activities (e.g., maintenance expenses)
From page 32...
... Step 2c: Identify Thresholds The peer-grouping methodology seeks to identify peer transit agencies that are similar to the target agency. It should not be expected that potential peers will be identical to the target agency, and the methodology allows potential peers to be different from the target agency in some respects.
From page 33...
... , as nearly all potential peers will be smaller or will operate modes that the target agency does not operate;
• Transit agencies operating relatively uncommon modes (e.g., commuter rail), as there is a smaller pool of potential peers to work with; and
• Transit agencies with uncommon service types (e.g., bus operators that serve multiple urban areas)
From page 34...
... As discussed in more detail in Appendix A, bus-only operators that wish to consider rail operators as potential peers can export a spreadsheet containing the peer-grouping results and then manually recalculate the likeness scores, excluding these three screening factors. Depending on the type of analysis (rail-specific vs.
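The manual recalculation described above amounts to dropping the rail-related screening factors from the exported results and recombining the remaining factor scores. The sketch below illustrates one way this could be done with the pandas library; the file name and every column name are assumptions for illustration (not the tool's actual export format), and the recombination follows the likeness-score formula excerpted from page 35 below.

```python
# Hypothetical sketch: recalculate likeness scores from an exported
# spreadsheet after dropping rail-related screening factors.
# All file and column names below are illustrative assumptions.
import pandas as pd

RAIL_SCREENS = ["screen_rail_mode_1", "screen_rail_mode_2", "screen_rail_mode_3"]
OTHER_SCREENS = ["screen_service_area_type", "screen_multi_state"]
GROUPING_FACTORS = ["factor_vehicles_operated", "factor_service_area_population",
                    "factor_operating_budget"]

peers = pd.read_csv("peer_grouping_results.csv")  # assumed export file

# Recombine the remaining scores: sum of screening-factor scores plus the
# average of the peer-grouping factor scores.
peers["recalculated_likeness"] = (
    peers[OTHER_SCREENS].sum(axis=1)
    + peers[GROUPING_FACTORS].sum(axis=1) / len(GROUPING_FACTORS)
)

print(peers.sort_values("recalculated_likeness").head(10))
```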
From page 35...
... In general, a total likeness score under 0.50 indicates a good match, a score between 0.50 and 0.74 represents a satisfactory match, and a score between 0.75 and 0.99 represents potential peers that may be usable, but care should be taken to investigate potential differences that may make them unsuitable. Peers with scores greater than or equal to 1.00 are undesirable due to a large number of differences with the target agency.

Total likeness score = Sum of screening factor scores + (Sum of peer-grouping factor scores / Count of peer-grouping factors)
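As a concrete illustration of the thresholds and formula above, the following minimal sketch computes a total likeness score from hypothetical factor scores and maps it to the match categories given in the text; the factor values are illustrative and the formula follows the reconstruction above, not data from the report.

```python
# Minimal sketch of the total-likeness-score calculation and the match
# categories described in the excerpt above. Factor values are illustrative.

def total_likeness_score(screening_scores, grouping_scores):
    """Sum of screening-factor scores plus the average peer-grouping factor score."""
    return sum(screening_scores) + sum(grouping_scores) / len(grouping_scores)

def match_quality(score):
    """Map a total likeness score to the categories described in the text."""
    if score < 0.50:
        return "good match"
    elif score < 0.75:
        return "satisfactory match"
    elif score < 1.00:
        return "usable with caution"
    return "undesirable"

# Illustrative values only.
screening = [0.0, 0.0, 0.25]     # e.g., penalties from secondary screens
grouping = [0.3, 0.1, 0.6, 0.2]  # e.g., normalized differences from the target
score = total_likeness_score(screening, grouping)
print(round(score, 2), match_quality(score))  # 0.55 satisfactory match
```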
From page 36...
...

Rank | Agency | City | State | Likeness Score | Use as Peer?
-- | Knoxville Area Transit (target agency) | Knoxville | TN | 0.00 | --
1 | Winston-Salem Transit Authority | Winston-Salem | NC | 0.25 | Yes
2 | South Bend Public Transportation Corporation | South Bend | IN | 0.36 | Yes
3 | Birmingham-Jefferson County Transit Authority | Birmingham | AL | 0.36 | Yes
4 | Connecticut Transit - New Haven Division | New Haven | CT | 0.39 | No
5 | Fort Wayne Public Transportation Corporation | Fort Wayne | IN | 0.41 | Yes
6 | Transit Authority of Omaha | Omaha | NE | 0.41 | Yes
7 | Chatham Area Transit Authority | Savannah | GA | 0.42 | Yes
8 | Stark Area Regional Transit Authority | Canton | OH | 0.44 | Yes
9 | The Wave Transit System | Mobile | AL | 0.46 | No
10 | Capital Area Transit | Raleigh | NC | 0.48 | No
11 | Capital Area Transit | Harrisburg | PA | 0.48 | No
12 | Shreveport Area Transit System | Shreveport | LA | 0.49 | No
13 | Rockford Mass Transit District | Rockford | IL | 0.50 | No
14 | Erie Metropolitan Transit Authority | Erie | PA | 0.52 | No
15 | Capital Area Transit System | Baton Rouge | LA | 0.52 | No
16 | Western Reserve Transit Authority | Youngstown | OH | 0.53 | Yes
17 | Central Oklahoma Transportation & Parking Auth.
From page 37...
... The following variables are available on a monthly basis, with only an approximate 6-month time lag: unlinked passenger trips, revenue miles, revenue hours, vehicles operated in maximum service, and number of typical days operated in a month.
From page 38...
... or 1.121. Average labor wage rates can be used to adjust costs for differences in labor costs between regions since labor costs are typically the largest component of operating costs.
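The paragraph above describes adjusting costs for regional differences in labor costs. The sketch below is a minimal illustration of one way such an adjustment could work, assuming the adjustment factor is a simple ratio of regional average wages (the report's exact procedure may differ); the wage and cost values are illustrative and chosen only to produce a factor similar to the 1.121 fragment above.

```python
# Minimal sketch of a wage-based cost adjustment. Assumes the adjustment
# factor is a simple ratio of regional average wages; wage and cost values
# are illustrative, not taken from the report.

def wage_adjustment_factor(target_region_wage, peer_region_wage):
    """Factor for restating a peer's operating costs in the target region's
    labor market (target wage / peer wage)."""
    return target_region_wage / peer_region_wage

def adjusted_operating_cost(peer_cost, target_region_wage, peer_region_wage):
    return peer_cost * wage_adjustment_factor(target_region_wage, peer_region_wage)

# Illustrative: the target region's average wage is 12.1% higher than the peer's.
factor = wage_adjustment_factor(target_region_wage=22.42, peer_region_wage=20.00)
print(round(factor, 3))                                         # 1.121
print(round(adjusted_operating_cost(1_000_000, 22.42, 20.00)))  # 1121000
```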
From page 39...
... Figure 5 shows a graph of spare ratio (the number of spare transit vehicles as a percentage of transit vehicles used in maximum service)
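As a quick illustration of the spare-ratio measure defined above, the sketch below computes it from made-up fleet figures, assuming spares are counted as the active fleet minus the vehicles operated in maximum service; the numbers are not taken from Figure 5.

```python
# Minimal sketch of the spare-ratio calculation described above.
# Fleet figures are illustrative, not taken from Figure 5.

def spare_ratio(active_fleet, vehicles_in_max_service):
    """Spare vehicles as a percentage of vehicles operated in maximum service,
    assuming spares = active fleet minus vehicles operated in maximum service."""
    spares = active_fleet - vehicles_in_max_service
    return 100.0 * spares / vehicles_in_max_service

print(round(spare_ratio(active_fleet=70, vehicles_in_max_service=56), 1))  # 25.0
```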
From page 40...
... Therefore, farebox recovery does not tell the entire story about how much of Knoxville's service is self-supporting. As an alternative, all directly generated non-tax revenue used for operations can be compared to operating costs (a measure known as the operating ratio)
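The sketch below contrasts the two measures described above using illustrative figures; the revenue categories and amounts are assumptions for illustration only, not Knoxville's actual financial data.

```python
# Minimal sketch contrasting farebox recovery with the operating ratio
# described above. Revenue and cost figures are illustrative.

def farebox_recovery(fare_revenue, operating_cost):
    """Fare revenue as a share of operating cost."""
    return fare_revenue / operating_cost

def operating_ratio(directly_generated_nontax_revenue, operating_cost):
    """All directly generated non-tax revenue used for operations
    as a share of operating cost."""
    return directly_generated_nontax_revenue / operating_cost

fares = 2_500_000
other_directly_generated = 600_000   # e.g., advertising, charter, contract revenue
cost = 12_000_000

print(round(farebox_recovery(fares, cost), 3))                            # 0.208
print(round(operating_ratio(fares + other_directly_generated, cost), 3))  # 0.258
```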
From page 41...
... A comparison of the two graphs also shows that Knoxville is the only agency among its peers (all of which have dedicated local funding sources) that currently gets much directly generated revenue from anything other than fares.
From page 42...
... Contacting top-performing peers addresses the "why" aspect and can lead to identifying other transit agencies' practices that can be adopted to improve one's own performance. In most cases a transit agency will find one or more areas where it is not the best performer among its peers.
From page 43...
... The agency's peers hopefully will also have been working to improve their own performance, so there may be something new to learn from them -- either by investigating a new performance topic or by revisiting an old one after a few years. A successful initial peer-comparison effort may also serve as a catalyst for forming more-formal performance-comparison arrangements among transit agencies, perhaps leading to the development of a benchmarking network.

