National Academies Press: OpenBook

Transit Service Evaluation Standards (2019)

Chapter: Appendix C - Transit Agency Survey Results

Suggested Citation:"Appendix C - Transit Agency Survey Results." National Academies of Sciences, Engineering, and Medicine. 2019. Transit Service Evaluation Standards. Washington, DC: The National Academies Press. doi: 10.17226/25446.

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

APPENDIX C

Transit Agency Survey Results

ESTABLISHING AND UPDATING TRANSIT SERVICE EVALUATION STANDARDS

1. Does your agency have service evaluation standards or guidelines?
   Yes, formal guidelines approved by our governing board: 34 (66.7%)
   Yes, formal guidelines approved internally but not by our governing board: 6 (11.8%)
   Yes, informal guidelines used by operations and/or planning departments: 6 (11.8%)
   Evaluation standards are being developed: 3 (5.9%)
   No: 2 (3.9%)

2. How many buses does your agency operate in maximum service?
   Small (<75 peak vehicles): 12 (23.5%)
   Medium (75 to 299 peak vehicles): 19 (37.3%)
   Large (300+ peak vehicles): 20 (39.2%)

3. How long has your agency been using service evaluation standards?
   More than 10 years: 30 (66.7%)
   5 to 10 years: 9 (20.0%)
   2 to 4 years: 5 (11.1%)
   Less than 2 years: 2 (2.2%)

4. Has your agency updated its performance standards or measures within the past five years?
   Yes: 25 (62.5%)

   No: 15 (37.5%)
   Unsure: 0 (0.0%)

5. Please describe what you changed in the update and how this was done.
   - Cost performance measures and trips per capita were added to the annual budget process.
   - Our Service Delivery Policy was revised, including the performance metrics used and the standards applied. An internal process analyzed the proposed changes and submitted recommendations to our Board of Directors, which then approved the new policy.
   - As part of our New Bus Network, the agency revised its service standards in summer 2015. The agency provided consultants with the then-existing service standards; the consultants proposed revisions; the agency countered with other revisions; a draft was presented to the Board, which proposed revisions; the agency countered; and the Board adopted the standards with its revisions.
   - Redefined span of service and included farebox recovery.
   - In September 2017, the agency rolled out several new weekday subway performance measures to be reported each month to the Board and posted on a dashboard on the agency website. These are: Additional Platform Time (average time waiting for trains compared to scheduled wait times), Additional Train Time (average time on board trains compared to scheduled running times), Service Delivered (percent of rush-hour service passing a central point on each line compared to scheduled train frequencies), and Major Incidents (number of incidents not associated with maintenance and construction work causing 50 or more train delays). The agency is also continuing all older performance measures (On-Time Performance, Wait Assessment, Mean Distance Between Failures), but may reevaluate their continued use in the coming months and years. The new additional-time measures are made possible by the 2017 rollout of systemwide train tracking technology (previously, train tracking was possible on only 32 percent of all lines) and an expanded ridership modeling methodology capable of generating daily route paths for all customers based on train performance. There have been no changes in bus performance indicators.
   - Changes are implemented during the development of our Comprehensive Plan and its subsequent adoption by our Board of Commissioners. This is done annually. Most recently, we changed our on-time performance window from two minutes and 59 seconds early through five minutes and 59 seconds late to two minutes and no seconds early through five minutes and no seconds late. We changed our crowding threshold from one based solely on seats ("load factor") to one based on seats and the space available for standing passengers. We also made changes to the way we determine how much service each corridor in our system deserves (target service levels). Several other policy changes, which would have the most implication during service-reduction scenarios, were also made. These changes were formulated and approved by a task force of stakeholders and elected officials, vetted through our executive branch, and approved by our governing board through a legislative process.
   - We changed the levels of underperformance that triggered a review of a route. We also changed our route classifications and minimum standards to conform to Title VI and ADA regulations.
   - Most of the updates added our latest "mode," bus rapid transit, to the standards. The standards also received a graphical update. The revised standards were adopted by the board of directors.
   - We changed ridership-based standards based on ridership forecasts, and financial standards based on current and upcoming operations contracts.
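The tightened on-time window described in the last response above amounts to a simple classification rule. The sketch below is illustrative only — the function name and second-based parameters are hypothetical, not the agency's implementation — but it encodes the new window of no more than 2:00 early through no more than 5:00 late.

```python
# Illustrative sketch of the revised on-time window described above:
# a departure counts as on time if it is no more than 2:00 early and
# no more than 5:00 late (previously 2:59 early through 5:59 late).
def classify_departure(deviation_seconds: int,
                       early_limit: int = 2 * 60,
                       late_limit: int = 5 * 60) -> str:
    """deviation_seconds: actual minus scheduled departure time.
    Negative values mean the vehicle left early."""
    if deviation_seconds < -early_limit:
        return "early"
    if deviation_seconds > late_limit:
        return "late"
    return "on time"
```

Under the old window, a departure 2 minutes 59 seconds early still counted as on time; under the tightened rule it is flagged as early.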

   - Service standards are evaluated during each SRTP process. During the last SRTP, the measures remained the same, but the goal changed. Additionally, rather than a single pass/fail goal, the measure was modified to a stoplight approach with pass, OK-but-improve, and fail (green, yellow, red) ratings.
   - Changed mean distance between road calls.
   - Changes to the service classification system; new service warrants; service design and performance standards; demand-responsive transit; and standards for transit-dependent riders. Extensive analysis and review, both internal and with stakeholders.
   - Revisions to how we evaluate areas for establishing new service.
   - The update was quite significant and was paired with a restructuring of our bus network. We took a different design philosophy than in the past by separating the goals of our service into two categories, ridership and coverage. Seventy percent of our resources were dedicated to ridership lines, with the goal of being as productive as possible. Thirty percent of our service was allocated to coverage lines — those expected not to be productive but which provide vital access to locations such as suburban job centers, hospitals, housing clusters, and other sensitive areas. How we evaluate service now depends on the goals: ridership is measured based on productivity, coverage on access.
   - In relation to KPIs, actual short- and long-term goals.
   - Rail has standards, but service was changed in a way that does not adhere to the standards; bus has guidelines that are not board adopted.
   - We hired a consultant to complete a Comprehensive Operational Analysis (COA) and the Short Range Transit Plan; they recommended modifying the standards to better reflect the new route network. The Board approved new service design guidelines as part of the COA, and then the new service standards, which were incorporated into the 2016 SRTP.
   - On-time performance, bus stop spacing, bus stop amenities (revised number of weekday daily boardings requirement).
   - We moved away from ranking routes within their route categories and began ranking all routes together with a composite score based on the following performance measures: Passengers per Revenue Mile, Passengers per Revenue Hour, Cost per Passenger, and Farebox Recovery.
   - We developed an additional service standard to comply with the revised Title VI circular.
   - Bus stop spacing, route spacing.
   - Goals and objectives. New modes added; vehicle load standards updated; shelter placement guidelines.
   - Some standards are in our regional policy plan and were updated through that process. Other updates were made through internal recommendation and review.
   - The first set of changes was incorporated in the County's Master Transportation Plan within the transit element section. The next set of updates is going through an internal review and will be incorporated in the County's next Transit Development Plan update in the middle of fiscal year 2019.
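The composite-score ranking described in one of the responses above can be sketched as follows. The four measures are those named in the response; the min-max normalization, equal weighting, and function names are assumptions for illustration, not the agency's actual method.

```python
def composite_scores(routes):
    """routes: dict of route name -> dict with the four measures named
    in the survey response. Higher is better for passengers per revenue
    mile/hour and farebox recovery; lower is better for cost per
    passenger, so that metric is inverted. Returns all routes ranked
    together by an equally weighted, min-max-normalized composite score.
    (Normalization and weights are illustrative assumptions.)"""
    metrics = ["pax_per_rev_mile", "pax_per_rev_hour",
               "cost_per_pax", "farebox_recovery"]
    lower_is_better = {"cost_per_pax"}
    scores = {r: 0.0 for r in routes}
    for m in metrics:
        vals = [routes[r][m] for r in routes]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero when all equal
        for r in routes:
            norm = (routes[r][m] - lo) / span
            if m in lower_is_better:
                norm = 1.0 - norm
            scores[r] += norm / len(metrics)
    return sorted(scores, key=scores.get, reverse=True)
```

The key design point is that every route competes in a single ranking, rather than only against routes in its own category.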

CHALLENGES

6. Please characterize the following elements as major challenges, minor challenges, or not an issue in the development and use of service evaluation standards.

   Element                                                            Major   Minor   Not a challenge
   Lack of data                                                        16%     44%     40%
   Disagreement between data sources                                   17%     61%     22%
   Accuracy of data                                                    26%     52%     22%
   Limited staff time                                                  51%     33%     16%
   Lack of consensus on appropriate measures                           16%     40%     44%
   Too many measures/indicators                                         7%     27%     67%
   Disagreement on key performance indicators                           7%     33%     60%
   Lack of interest by Board/governing body                            11%     36%     53%
   Difficulty in obtaining approval for performance-based proposals    11%     20%     69%
   Multiple reporting agencies and data requirements                    7%     31%     62%

Comments include:
   - Limited staff time is by far the biggest challenge, even though we have limited the performance standards to just three (we have to produce a lot of other data in the process).
   - The agency is adding measures and standards that will govern future service changes and updates to the transit development plan. The current standards have been shared but have not been used to effect changes to service. The public is also not aware of the standards and how the County uses them for changes to transit service, and these measures do not have any connection to the regional provider of bus service. The next update will seek to tie some measures to regional service for comparison.
   - Informal standards are sometimes applied selectively. Staff have some discretion as to when to apply standards; they are not hard and fast.
   - The agency finished installing a new ITS system a year ago. This past year has been used to validate data and develop reports. There is a strong desire to update and use the new information, so as of today these elements are not an issue.
   - One of our challenges is understanding of evaluation standards and metrics. There is a clear disconnect between the technical skill level and understanding of the staff, which is high, and that of management, which is low.

   - Overall recognition of the importance of the measures and use of the measures to manage the agency across departments.
   - We had lots of data and little staff time. We have added staff just for data, so this should improve. We are also looking at updating and adopting new standards in 2018. Our board is very interested in how we measure performance.
   - We currently do not have CAD/AVL, and our APC system is too unreliable to provide accurate data. Ridership data is primarily gathered from farebox data. Once every three years we conduct 100% sampling of all trips on all of our routes, which is time consuming and expensive and still only yields a sample. We now have funding to implement a full transit ITS package, starting with CAD/AVL and APC systems.
   - Amount of time required to gather and clean/process data: major challenge.
   - Consultant direct communication with Board members.

7. Please describe the one major challenge in contracting fixed-route bus service and strategies or tactics used to overcome this challenge.
   - The primary challenge is to inform and educate elected officials and stakeholders on the purpose and value of using service evaluation standards. Software tools can be helpful in this process. Setting our service levels to meet our service evaluation standards often faces opposition.
   - The primary challenge in the development of the service evaluation standards was the difference between the level of detail required by staff for evaluation, desired by management for open discussion, and wanted by Board members for clarity. The primary challenge in the use of the standards was the addition of several standards that sound great but for which data are difficult to collect and which could change, yet are required by a Board member. The agency is still struggling with the latter.
   - The governing board is a city council, which has other, more pressing issues to tackle. They often don't seem to get too involved with the operation or planning of the system. We continue to educate and talk with councilmembers as often as we can.
   - The range of standards and various data sources makes it time consuming to compile the information, especially with limited staffing resources. Starting before the fiscal year ends and compiling as much as possible monthly and quarterly help alleviate this issue. Data quality is an issue, especially in non-automated systems like maintenance and missed trips.
   - Public acceptance.
   - The primary challenge had been a lack of reasonably accurate train tracking information. Prior to March 2017, only 8 of 25 routes had end-to-end train tracking. This was overcome in 2017 with the rollout of new beacon-based train tracking technology on the remaining 17 routes in the system. With the ability to collect this information, the agency was able to combine it with frequently updated origin-destination ridership models, based on farecard entry swipes, to calculate, on a daily basis, the amount of time every passenger trip (1) should take if trains ran perfectly on schedule and (2) did take based on actual train performance. This yields an average additional journey time figure, which is split into wait time and on-board train time. The background research into this methodology, including the development of the approach to ridership modeling, was conducted over several years, which enabled a swift rollout (six months) of the new performance measures once train tracking data was available for the entire system.
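The additional journey time calculation outlined in the last response above reduces to averaging, over modeled passenger trips, the gap between actual and scheduled wait and on-board times. The sketch below uses hypothetical record fields (not the agency's data model) to show the split into additional platform (wait) time and additional train (on-board) time.

```python
def additional_journey_time(trips):
    """trips: iterable of per-passenger trip records with scheduled and
    actual wait and on-board times in seconds (field names are
    illustrative). Returns (average additional platform time,
    average additional train time), mirroring the two measures
    described in the survey response."""
    extra_wait = extra_onboard = n = 0
    for t in trips:
        extra_wait += t["actual_wait"] - t["scheduled_wait"]
        extra_onboard += t["actual_onboard"] - t["scheduled_onboard"]
        n += 1
    return extra_wait / n, extra_onboard / n
```

In practice the scheduled figures come from the timetable and the actual figures from train tracking joined with an origin-destination ridership model, as the response describes.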

   - Staff time to analyze multiple data points.
   - The primary challenge would be staff time to compile and analyze the necessary data. We have made investments in two ways to combat this. First, we created a dedicated Performance Measurement and Analytics group, a team of data scientists and information technology professionals with the requisite skill set to perform the necessary tasks. Having a dedicated team alleviates the burden on others within the organization. Second, we have begun implementation of a custom enterprise data warehouse and business intelligence solution. This creates a single source of truth and allows for improved self-service reporting throughout the organization, as well as enabling the aforementioned Analytics group to more easily gather the data necessary to inform our service standards.
   - Good, reliable data. Setting the standards in the beginning. Developing the data stream.
   - Balancing the results of service evaluation against other, sometimes competing, goals is challenging. It is unlikely that a single, manageable set of evaluation criteria will comprehensively capture every goal an agency has in terms of providing mobility. Therefore, when resource allocation is tied directly to performance (as it is at our agency), only those services that "pop" in the analysis become available for investment, even though investment in other services may help advance certain other agency or county goals. Striking a balance between having a manageable set of metrics that identify top investment priorities and being able to allocate resources to projects, places, or services that don't "pop" in the analysis — and then being able to communicate the rationale behind those investments — can be difficult.
   - Collecting useful data beyond the basics, such as ridership, trip length, hours, etc.
   - Limited staff time is the major challenge. We need a dedicated staff person or persons to regularly evaluate data.
   - Most of our issue was data; we are investing in better data collection equipment.
   - While we have the data, the real issue is in gaining consensus within the organization regarding proper standards and what to do when services do not meet the established standards. Existing service standards reflect existing performance and the need for standards rather than leading to improved performance. Low performance may only be addressed in times of fiscal crisis.
   - Staff time is the biggest challenge. Staff does not have the time to actually analyze routes and apply the standards.
   - The lack of transit ITS data collection. We have used onboard sampling and video analysis for on-time performance.
   - Lack of uniform education at all levels about which evaluation standards should be used and what each one represents, and therefore what business decision should be made based on the trend of the KPIs. Lack of exchange of experience with peer transit systems that are more advanced in establishing and using KPIs.
   - Our primary challenge was when to bring it out to the public and how to message it when we have so many other projects going on, such as a network redesign, new BRT service, and a referendum.
   - The primary challenge is the ridership decline trend affecting most agencies. Our strategy is to set the standard for the year and let the board know that some of the decline factors are beyond the control of the agency.

Transit Agency Survey Results 85 Consensus. Not as much on the measure, but what is the purpose of the measure and then utilizing the measures to manage the business. Too many internal staff believe that the standards are just measures where the only importance is in collecting the data and reporting it. They don't see it as a tool to be used to guide improvement strategy. The primary challenge was to get all units (Union, Operators, Public, Special Interest groups, School Districts, etc.) to buy into the ideas based around the establishment of the Service Standards. Numerous meetings, fact gathering adventures, were used to overcome these issues. Institutional buy-in. In 2012, the planning department attempted to update the agency's performance standards to make them more simple, understandable, and flexible; however, there was not uniformity of this vision, and more elaborate provisions were added back into the performance standards (e.g., a binding, automatic process for route elimination). Enthusiasm for application of the new standards vanished in less than six months, leaving the rank-and-file planning staff with an extensive evaluation program that went from being burdensome and laborious to being outright ignored, effectively rescinded, and forgotten or unknown by the time of the next administration. Staff placing emphasis on monitoring standards. Understanding changes in the political, regulatory and operating environment and how these should effect changes to our standards. This required research, evaluation and major discussions. The transit agency has 17 member agencies who all had a seat at the table to develop our Transit Standards and Performance Measures. Achieving consensus was a challenge that took months to solve. Politically-driven decisions tend to override data-driven policy. Outside factors that the agency cannot control. Regardless of the outcome of evaluating service, political, public and stakeholder pressure trump sound planning. 
Data accuracy and not having a unified reporting system. Also, we have incorporated new technology and there have been hiccups along the way.

Ongoing maintenance and updating of ITS. While the new technologies provide greater insight and data to review, plan and implement routes, the systems do require a different set of skills to maintain. In light of this, the Planning and Scheduling department includes a staff position that is assigned to this task and oversees the health of the systems and equipment that provide this large set of data.

The primary challenge for us is the amount of time it requires to compile various metrics. A service standard that is reported regularly to the Board for bus and rail is crowding. Our methods to collect this information are lacking, though. Until recently, we reported data observed manually at select time points for rail. Bus is modeled data to illustrate the most crowded trip on a route. To overcome this, we are building an O&D model tied to our vehicle location data to more precisely quantify crowding across all trips, all times of day, etc.

Making sure the service standards accurately reflected the operating conditions and likely ridership in a suburban service area. Education was very important!

Governing Board was not aware of current standards, which made decisions to approve difficult.
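The crowding measurement described in the responses above (using counter and vehicle location data to find the most crowded trip on a route) could be sketched roughly as follows. This is an illustrative sketch only: the record layout, seated capacity, and 125% load-factor threshold are invented assumptions, not any responding agency's actual method.

```python
# Illustrative sketch: estimate the peak on-board load of each trip from
# APC (automatic passenger counter) boarding/alighting records, then flag
# trips that exceed a crowding standard. All figures are invented.

SEATED_CAPACITY = 40      # assumed seats per bus (hypothetical)
CROWDING_FACTOR = 1.25    # assumed standard: load <= 125% of seats

# Each record: (trip_id, stop_sequence, boardings, alightings)
apc_records = [
    ("trip_1", 1, 12, 0), ("trip_1", 2, 20, 3),
    ("trip_1", 3, 30, 5), ("trip_1", 4, 0, 54),
    ("trip_2", 1, 5, 0), ("trip_2", 2, 8, 2), ("trip_2", 3, 0, 11),
]

def peak_loads(records):
    """Return {trip_id: maximum simultaneous on-board load}."""
    peaks, current = {}, {}
    for trip, _seq, ons, offs in sorted(records, key=lambda r: (r[0], r[1])):
        current[trip] = current.get(trip, 0) + ons - offs
        peaks[trip] = max(peaks.get(trip, 0), current[trip])
    return peaks

def crowded_trips(records):
    """Trips whose peak load exceeds the crowding standard."""
    limit = SEATED_CAPACITY * CROWDING_FACTOR
    return [t for t, peak in peak_loads(records).items() if peak > limit]

print(peak_loads(apc_records))     # {'trip_1': 54, 'trip_2': 11}
print(crowded_trips(apc_records))  # ['trip_1']
```

Evaluating every trip this way, rather than sampling select time points, is what lets an agency report crowding "across all trips, all times of day" as the respondent describes.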

The primary challenge in the use of service evaluation standards is the need to provide adequate service coverage to important destinations and adhere to Title VI requirements. The agency conducts a Title VI evaluation for all major service change proposals and solicits customer input. A Title VI report and customer feedback will factor into the final decision to alter routes that fall below service evaluation standards. All proposals are brought to the public and the Board of Directors for review and approval.

Service standards may work against each other. Improving in one area may cause issues with compliance in another.

Ensuring data integrity.

System accuracy.

There is a focus in our standards on service productivity; however, policy makers often express support for service that is by nature low-productivity (coverage, suburban job access) and expect it to perform well. We could use better understanding and guidance in how to allocate service dollars. A recent initiative is to better quantify coverage vs. productivity in our current service in order to develop understanding and move towards having a target.

The primary challenge in developing the new and updated service evaluation standards has been the limited amount of staff time to work on the new developments and how to communicate the current measures and their impact on service to the general public.

Staff time and organizational capacity to compile the data. We have but three benchmarks, but it's not easy to compile the data. We have increased overall staff, but the problem is that the increase came with significant planned growth in service, so staff have less time to monitor and focus more on making service improvements, which of course is informed by monitoring, but less so than preferred because of the lack of time.

ASSESSMENT

8. How would your agency rate its efforts to design and apply service evaluation standards?
Very successful          10    22.2%
Somewhat successful      20    44.4%
Neutral                   9    20.0%
Somewhat unsuccessful     3     6.7%
Very unsuccessful         0     0.0%
Other                     3     6.7%

Other includes:

For most of the agency's history, service changes have been about adding service, so standards were not developed until 2012/2013, when the Board focused on its role as a policy-setting body. Unfortunately, with changes in Board members and executive leadership, the rail service standards were not referred to or updated in the 2017 service changes that reduced headways and hours of service.

Not sure yet. We have worked on the updates over the past few years but are only rolling out to the public in 2018.

The evaluation standards are developed in house and are limited to the usage of planners. Boardings/service hour, subsidies per boarding, and ridership per period are not sufficient to get a better understanding of the service. Management levels disagree on what KPIs should be used and which is the most important: boardings/service hour, subsidies, minutes of lost time, etc.

9. What have been the primary benefits of transit service evaluation standards? Verbatim comments below.

Standards provide reasons/support for recommended changes to service levels.

Giving planners a clear directive on what levels of service they should try to provide (frequency, span).

The primary benefits of transit service evaluation standards have been the creation of objective, or less subjective, support for service change options, as well as enhanced understanding, which was a critical factor leading to the development of the New Bus Network.

A written standard that everyone knows about.

Good baseline for how we're doing.

Peer group comparisons.

Measures that better reflect passenger experience, compared to vehicle-based measures like On-Time Performance.

Support of data to cut unproductive service or increase service where needed.

It has enabled us to run a data-driven decision making process where routes are continually evaluated by well-defined and agreed-upon measures that balance the financial and operational needs of our customers. At the very least it means that the County and the Contractor are on an even playing field.

The ability to point to the standards and apply them equally.

The relative transparency of process and results; data-driven decision-making; ridership growth and increased mobility.

The measurement of trips per hour has become a basic measurement understood by everyone from front line to Board members.

A tool for evaluating proposals from the public and providing feedback.

Allows us to plan better routes, especially in older parts of the city where demographics have shifted.

Identifying services that need review, where the population or employment have changed over the years.

Existing standards reflect existing performance.

The minimum productivity for fixed routes was used extensively during the agency's COA.

A uniform approach to evaluating the performance of the system and the ability to present the results with confidence.

Performance monitoring: results used in decision-making.

Planners seeing which route in what period of the day performs the worst, so a cut of service could be proposed. Also, good performing routes are acknowledged, but due to the lack of buses, and lately of available operators, a proposal to increase frequency where required is delayed.

It's a great way to explain what we are doing and why, and how we can improve.

Ability to adjust service levels to meet the current demand.

An ability to use the standard and resulting measures to recommend strategic decisions.

The Standards let people know what the system is set to offer and maintain.

None, in practical terms.

Making decisions based on real data. Everyone - board, stakeholders, senior management, staff, et al. - knows the same ground rules, their application, and where specific services stand at any given time. Transparency and credibility and equal treatment.

Being able to point to empirical data and rules for decision-making. Identifying areas to focus additional revenues.

Consistency and efficiency.

As evidence to support our decision when asked why we cannot or will not change our service.

Benchmarking with others through the ABBG.

Uniformity of assessing routes and potential service changes.

Multiple measures allow for a broader understanding of performance; otherwise, the Board concentrates on one metric.

Under a different board, they were useful for helping the board understand their role in establishing what service we were to provide, apart from solely a budget discussion.

Having realistic numbers to compare our system to.

Defined and approved guidelines to follow to communicate to the public and others.

The ability to make a case to the public for why we propose service changes.

Allows us to have a rational process for recommending changes to bus service.
Used as an official guide.

Knowing our strengths and weaknesses.

Provides a framework for decision making (and rationale for decisions made).

The current standards pose a neutral benefit in the fact that, prior to this year, the measures had only been used internally.

During the recession it helped us articulate why certain routes should be discontinued completely. We also are able to tell a much richer narrative of how the service works.

10. What have been the primary drawbacks of transit service evaluation standards? Verbatim comments below.

A primary drawback has been that, when used to justify eliminating a bus route, external forces caused the County to reconsider and retain the low-performing bus route.

Determining when to make exceptions or apply them selectively.

While we tell a good story with the data, it is still not complete. It does not address more qualitative attributes of service operations. For instance, people want service to the airport. I agree, we should serve the airport. But it is hard to serve it effectively because of its location and the surrounding geography. So performance standards are poor and suggest we should do something about it, but it's not easily solvable. I will say, the benefit is that we continue to search for ways. Our solution is reliant on future capital projects, but we have to have a long view of performance remediation.

Data collection and integrity.

Making changes to meet standards may be too costly or otherwise decrease overall efficiency. Standards may be perceived by the public as ignoring customer needs, although we address all concerns received through Customer Service and formal public meetings.

Sometimes too rigid or narrow. Do not allow for special circumstances at times.

Not applying them regularly.
Seen as inflexible, ignored.

The organizational leadership fixates excessively on low performance; the reality is that augmenting high performance is sometimes a better path forward than trying to "fix" something that is hopeless.

Making the process easy to understand for the public and stakeholders.

Ultimate buy-in and data accuracy.

Leadership does not prioritize the outcomes of the process, and it is time consuming for little benefit.

Not being able to meet all of the community demands due to budget constraints. There are times where we can identify transit not meeting standards, but don't have the funds or fleet to correct it.

It is still necessary to have flexibility to accommodate a variety of conflicting needs.

Staff time to collect the needed data.

Time-consuming to prepare and not be used.

Trying to update the system as the areas of coverage have grown, along with the changing ADA guidelines.

Lack of consistent use.

We only have one standard but different service types, local and commuter; it is not one size fits all.

Many don't know or are only interested in their own service. Reaching a broad audience is challenging.

It is not widely published what each measure could indicate and the possible solutions.

Helping our governing body understand the difference between slight variances in data and actual trends. They tend to react to any change without considering context such as seasonal swings in ridership.

We have not done a good job of actually evaluating the routes and communicating to the public which routes are performing poorly and may need restructuring or possibly elimination.

Existing standards reflect existing performance.

Route performance is somewhat stationary. Therefore, after a few years, the same routes that have always been evaluated (and kept for reasons not related to the standards) are the ones that are showing up for review. This creates issues when you start choosing other routes for review.

Can't think of any.

There are always exceptions to applying them; they can require intensive data analysis for which we lack the resources.

The time to collect and proof the data.

In our particular case, future service growth was largely restricted to the network (route alignments) as it existed when the standards were established; in other words, the standards did not tell us how the network should grow and expand. We had to embark on a separate policy-making process to accommodate that shortcoming.

Applying them equally, not on a case-by-case basis.

They have not been evaluated in recent years; do they still hold?

None.
Consistency and timeliness of data.

A learning curve as to what the measures mean; a general degree of skepticism of the small size of the average additional journey time (about 3 minutes), since the average measure is net of both slower and faster travel times.

Meeting goals.

Too many to know what's most "important."

Some decision makers don't know/understand or ignore them.

The primary drawback of transit service evaluation standards comes when the data lead to politically incorrect or politically difficult decisions which, because of the application of transit service evaluation standards, are now documented.

The complexity of the standards makes them difficult for planners to use when adjusting service levels (crowding/comfort, reliability).

11. What was the most successful action based on service evaluation standards, and why? Verbatim comments below.

They have primarily been used to bring service levels, i.e., headways and service spans, into compliance.

The most successful use of service standards we've had so far is to show the Board and our public the current extent to which we're failing those standards. In so doing, we shine a light on a problem and are tasked with solving it.

New Bus Network. The designation of bus routes as coverage routes allowed the Board to better understand the differing levels of performance and stop expecting routes in areas with low population, low employment, and poor infrastructure to produce high ridership.

We were able to reduce some routes that don't perform well based on those standards.

OTP review and the service adjustments that result from it on an annual basis, and operational adjustments on a quarterly basis.

Route designs resulting in route efficiencies.

Establishment of a Dashboard on our website that all members of the public can access. This has generated positive feedback.

Reduced some unproductive service on the weight of performance data, with little push back.

The chance to embark on our system redesign project. By continuously monitoring key metrics, we were well poised to approach our Board when it became apparent that wholesale change was required to meet the changing demands of our local communities and customer base.

I think for us it has been OTP: understanding how poorly the service performed before and the direct impacts it has on the customers.
Where and why we place amenities at certain locations.

Keeping crowding in check. Reducing overcrowding is our top investment priority as established by our service evaluation standards. By routinely evaluating our service in standardized fashion, we are able to monitor crowding and invest in service where it is in most demand, leading to less crowding and/or higher ridership.

The use of trips per hour in our budgeting for service.

I cannot think of one.

We redesigned the north side of our city; the service standards created a very detailed path for us to use to evaluate.

When several routes bucked the usual trend and showed up as under-performing, we were immediately able to take corrective action.

No examples.

Our service standards are somewhat ineffective, since the board has been resistant to removing service from anyone, regardless of how poorly it performs. While the board approved the standards, I don't believe that they have "buy-in" to applying the standards.

The ability to start the process of service enhancement due to new funding with the confidence that we now have standards to evaluate the new service from its inception. Standards are critical to be able to monitor the overall health of the system and justify changes as needed.

Cutting back the service with low demand.

I think moving to the frequency vs. coverage model helps us better explain our service distribution and why we don't measure the different types against each other as we might have in the past.

Reducing/increasing service to meet needs and adjusting run times to improve on-time performance. This helps promote a more reliable and efficient service.

Reallocation of service resources and a relatively smooth political response to changes, because they were based on data compared to board-approved standards.

The creation of the bus stop locations and amenities section. This section spells out how far apart stops will be, where stops are to be located, and where buses will pick up passengers. Dispels the idea of a "flag" system.

Modifications to service based on performance measures.

We regularly propose changes, often significant lately, that significantly affect groups of riders, and back them with the standards rationale. These are 90% successful.

Ending duplicative service that did not meet standards and seeing the improvement of the underlying service.

Adding significant resources to and restructuring a struggling route.
Resulted in a significant increase in reliability and ridership.

Rapid success of new routes.

The restructuring of our bus network used some of our evaluation standards to aid in redesigning service.

Moving towards On-Time Performance goals.

Identifying opportunities to expand service and reduce cost.

Following our route restructure in 2014, we more or less "got it right" in terms of where we located our frequent transit network. However, it was clear, due to loads on some buses (and the lack of the same on others), that some smaller adjustments were necessary.

When the new standards were still fairly new, quarterly reports were made to the board of directors, which were somewhat educational and which raised awareness of planning principles and personnel, although this did backfire at times by opening up issues at the board level that were probably best worked out administratively.

That the Board receives regular reporting on crowding. That report remains, and the members are increasingly interested to see how the service change (headways went from 6 min to 8 min) has impacted crowding.

Updating the standards to realistic numbers that actually reflect the suburban operating conditions, as well as changing ridership trends.

Guides annual budget decisions.

In response to declining ridership and revenue, we were able to make the case to reduce service in FY18 by highlighting routes that failed to meet adopted performance standards.

The agency was able to reallocate bus service in 2016 and used some of the standards to justify route eliminations or improvements.

Development of Balanced Scorecards that drive behavior:
1. Developing service reduction plans (standards provide a rationale).
2. Tempering unrealistic expectations from stakeholders (demonstrating what it takes for service to be considered successful).
3. Applying load standards for service management (responding to ridership changes with consistency).

To date, we have not had or received any actions based upon our current service evaluation standards. We are in the first year of implementing and applying the standards in a document for publication.

Our Board decisively accepting a recommendation to discontinue the poorest performing routes during the recession.

12. If you could change ONE aspect in the development and use of service evaluation standards by your agency, what would you change? Responses summarized in Figure 22, Chapter 4. Verbatim comments below.

Make the data easier to assemble/analyze.

Hope to change how the service evaluation standards are used in proposing service changes and in updating the Transit Development Plan and needs for capital and operational resources through the budget process.
Despite efforts to clarify, we still occasionally find that some standards are subject to different interpretations by different people using them.

Automation, eKPI.

Federal policy requires that four service standards (on-time performance, headway, accessibility, and loading) be taken out to the public for extensive review prior to making changes. This instead should be subject to board approval, and the public can participate through the regular board agenda process.

Improve awareness in the public regarding said standards.

Because we operate in a hot climate area throughout most of the year, I would recommend more shelters at stops. I believe this would help keep ridership levels at a higher mark throughout the year.

I would look to apply the service standards regularly in the evaluation of the routes, and then have a standardized "corrective action" plan to address routes that aren't meeting the standard.

That we find ways to ensure executives are aware that they exist. Changes in leadership at the Board and executive level make this difficult.

I think it would be a good idea to have a small allocation for leadership to "blow" on underperforming service. They decide what it is used for, and balance various non-performance goals. So long as the amount is limited, it should not deleteriously affect what works well.

Agreement on data sources and more time.

Formalize a decision-making process that is founded in evaluation of service rather than anecdotal evidence or stakeholder pressure.

N/A

Update/formalize service standards and apply them firmly and uniformly during the service development/implementation process. Include more "teeth" in the standards that would not be subject to decisions by disparate city councils.

Not much; must simply keep bringing them up and asking if "you" have a better idea that will be acceptable to everyone else.

Don't want to change anything.

Remove the "sunset clause" for new routes, which says that new routes must be eliminated within their first two years if they don't meet a certain productivity standard. In practice, all this does is force the organization to contrive time-consuming and cynical workarounds to "save" routes that, for political or coverage reasons, are destined to exist. This includes laborious Title VI analyses, wasteful "marketing campaigns" to boost ridership on routes with no manifest demand or potential markets, and embarrassing presentations to the public and board of directors iterating all the formal steps to save a route from elimination, consuming hours of the board's time in procedure.

The maintenance standards would be my change. The Service Standards have too many exact numbers to adhere to for mechanical items. Those numbers are hard to meet, if not impossible, as the fleet of vehicles ages.
Standards, or more specifically the goals, need to be fluid and adjust to trends, rather than be static. The standard needs to include how the goal is calculated, but you may not want it to be a number that is static for several years.

Develop multiple standards based on service type, i.e., local vs. express.

Getting the message out. We work very hard to do this, but it's a huge effort to reach people and have them understand what and why.

Education about the KPIs.

I would have done it earlier, much earlier.

I'd make sure that the FTA service monitoring program (part of the triennial review) is considered when developing the standards. If you have optimistic or aspirational standards for your routes, it can cause difficulties when an agency conducts this program. For example, the agency's standard says all feeder routes should operate every 30 minutes. However, only 2 out of 10 routes meet this standard. In reality most feeders operate every 45-60 minutes. Therefore the agency has to state that routes don't meet the standard. When the standards are revised, I'd like the standards required in the FTA service monitoring program to be closer to reality than aspirational.

Require annual performance review and resulting changes to service.

Make route standards that are customer and geography (i.e., connectivity to the network) oriented, instead of the usual operating standards. I wish the standards for doing so were a regulation.

I would hire more people to spend more time on it.

The reliance on the fareboxes to collect the ridership data.

Vehicle headways; they do not allow for routes that are lifeline routes.

Should have started sooner.

Keeping metrics simple, both in definition and methodology, is critical to being able to perform the measurement without excessive staff time and to being able to easily communicate the results to stakeholders and governing boards (particularly when your governing board is a local government).

I think it again goes back to data; there is so much technology on these vehicles that getting these agencies to talk and to share the data can be a challenge.

A better way of explaining the new performance measures, and perhaps finer gradations in reported measures (currently they are reported for all-day/6am-11pm, AM rush hour/7am-10am, and PM rush hour/4pm-7pm). Midday and evening journey time measures can be affected by planned work on the subway.

Timeliness.

Automation of data sources.

It would be to get all decision makers on board, with a thorough/deep discussion on what they want transit to be in the city.

Since FY2008, the agency has reported two indicators on a monthly basis for each bus route (passenger boardings per revenue hour, passenger boardings per revenue mile). As these data are reported monthly, they are stand-alone and lack the perspective required to reflect the impacts of seasonality, such as bus routes which carry large numbers of school children.
The result is concern over perceived declines in performance in November and December, when ridership is always lower than in September and October.

I'd try to make sure that the service standards provide a clear guide on not just how to measure service, but also how to structure it.

n/a
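The seasonality problem the respondent describes (monthly indicators that invite alarm every November) is commonly handled by comparing each month against the same month a year earlier rather than the preceding month. A minimal sketch of that approach; the route names and productivity figures are invented for illustration:

```python
# Illustrative sketch: flag route-months whose boardings per revenue hour
# fell year-over-year, instead of comparing November to October (where a
# seasonal drop is expected). All figures are invented.

# {(route, year, month): passenger boardings per revenue hour}
bprh = {
    ("Route 10", 2017, 11): 22.0, ("Route 10", 2018, 11): 19.8,
    ("Route 12", 2017, 11): 15.0, ("Route 12", 2018, 11): 15.6,
}

def yoy_change(data, route, year, month):
    """Percent change vs. the same month one year earlier."""
    prior = data[(route, year - 1, month)]
    return 100.0 * (data[(route, year, month)] - prior) / prior

for route in ("Route 10", "Route 12"):
    change = yoy_change(bprh, route, 2018, 11)
    status = "DECLINE" if change < 0 else "ok"
    print(f"{route}: {change:+.1f}% year-over-year ({status})")
```

A November figure that is lower than October's but higher than last November's then reads as normal seasonality, not a decline.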

13. Please describe any "lessons learned" with regard to service evaluation standards that would benefit other transit agencies.

We currently have a disconnect between certain standards that are used to assess service and the metrics we use to adjust service (i.e., our reliability standard doesn't tell us how to set run times or minimum layover times; our crowding/comfort standard doesn't tell us what frequencies are necessary to get comfort to the acceptable level). The performance metrics we use to adjust service and assess service should be the same if we hope to structure service to meet the standards we've applied.

From inception in 1979 until 1999, the agency used informal service standards. In April 1999, the agency formalized its service standards and they were adopted by the Board of Directors. These service standards were not revised again until 2011. The twelve-year time period did not owe to a lack of need for revisions but rather to a desire on the part of management to maintain maximum flexibility from a decision-making standpoint. It is critical to recognize that once you establish service standards you set a benchmark which may force future decisions. Most senior managers and Board members want the maximum flexibility in their decision making and do not want a document to effectively force them into decisions which may be contrary to other non-ridership or non-service-related agency goals. Service standards must be flexible enough to force these decisions when there is truly justification for them.

We just implemented these new standards, so it's still a work in progress. I think individual board member education is key.

Cull the number down to a manageable number that answers how you're doing in relation to your strategic plan. Too many indicators are a distraction. If it doesn't relate to the strategic plan or core mission, toss it.
Avoid massive overhauls.

(1) Establish measures that the operating departments can understand and act upon. Improving maintenance to boost infrastructure reliability can reduce the number of Major Incidents as well as lead to fewer delays reflected in the journey time measures. Similarly, managing incidents better so that they don't lead to an excessive number of delays not only reduces the number of Major Incidents, but also leads to fewer delays reflected in the journey time measures. (2) Be cognizant of how measures can be "gamed" and monitor them to make certain operating departments adhere to the new measures. For instance, the agency closely monitors all incidents to make certain that there is no trend to cap the number of delays associated with any incident at 49, so that they are not defined as "Major Incidents." Thus far, we have not seen any instances of such "gaming."

Make sure data source(s) are accurate and consistent. Double check and check again. We were too reliant on the on-board passenger counter data and found many inconsistencies, causing us to question the results.

Creating a culture of data-driven decision making needs to start from the top. The executive and leadership teams need to believe in the process and support the creation of metrics for their adoption to be successful throughout the organization. Defining a stretch goal as well as a minimum acceptable standard is a great way to achieve continuous improvement. As goals are met, do not be afraid to continue to set them higher and higher. The minimum acceptable standard will then follow as improvement is achieved.

The best piece of advice is to take a minute to think about why you are measuring something and what value that measurement brings your organization. There are so many pieces of data that we can measure when it comes to transit, but just because all the data is there doesn't mean taking the time to analyze it all will bring us value.

Allow for standards that can be applied equally but with wiggle room for lifeline-type routes.

Keeping "official" standards and metrics simple, broad, and few in number can be beneficial. Sub-metrics can always be developed for deeper insight, and can be used to justify investments and reductions under the umbrella of a smaller, broader set of metrics. Ensure you have the capability to measure the metrics being proposed without excessive staff time being spent collecting, cleaning, and analyzing data. The less complex, the better.

Choose standards that can lead directly to taking action. Link standard problem-solving actions to under-performing services. ("If service 'x' is under-performing on metric 'y', do 'z' to correct it.") Ensure policies are aligned with the metrics to facilitate/enable the allocation of resources to fix problems revealed by the metrics.

We have been able to get our Board to understand trips per hour and how to use that metric as a yearly measurement of our efficiency, in terms of usage of the buses, and as a way to compare our ridership to other transit systems in the state.

In an area with different development patterns - cities, suburban development and rural areas - it is difficult to have service standards that apply to all areas for all times of day on local and express routes. Standards must be judiciously developed, but also wisely applied.
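The "if service 'x' is under-performing on metric 'y', do 'z'" pattern recommended above can be expressed as a simple lookup from each metric to a minimum standard and a first-line corrective action. A sketch only: the metrics, thresholds, and actions here are invented examples, not any agency's adopted standards.

```python
# Illustrative sketch: map each evaluation metric to a minimum acceptable
# value and a standard corrective action, then report which actions a
# route's actual figures trigger. All standards and actions are invented.

STANDARDS = {
    # metric: (minimum acceptable value, corrective action)
    "boardings_per_rev_hour": (15.0, "review alignment / reduce frequency"),
    "on_time_pct": (85.0, "adjust run times and layovers"),
    "farebox_recovery_pct": (20.0, "review fares and service span"),
}

def corrective_actions(route_metrics):
    """Return (metric, action) pairs for every metric below standard."""
    actions = []
    for metric, value in route_metrics.items():
        minimum, action = STANDARDS[metric]
        if value < minimum:
            actions.append((metric, action))
    return actions

# Hypothetical route: productive enough on time, weak on ridership/revenue.
route_7 = {"boardings_per_rev_hour": 11.2, "on_time_pct": 88.0,
           "farebox_recovery_pct": 14.5}
for metric, action in corrective_actions(route_7):
    print(f"Route 7 below standard on {metric}: {action}")
```

Tying each standard to a predefined action is what makes the evaluation actionable rather than just a reporting exercise, which is the respondent's point.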
Develop and use them to improve service rather than just because Title VI requires you to have them.

Make sure the board of directors know how the standards are going to be applied -- routes that don't meet standards may be modified or even eliminated.

Decisions are based on actual performance data and not political or "gut" feelings.

Our standards are the result of many months of careful consideration. In the beginning there were a number of data elements that, while interesting, had little value in evaluating the effectiveness of the system's operation. The lesson learned is that just because something can be measured does not necessarily mean the time invested to collect that data will yield anything of use to improve the overall system.

Not applicable.

The agency has a long way to go in learning the benefit of having evaluation and service performance measurement as well established - and equally important - as planning and scheduling. Without performance evaluation the process is not complete.

As all transit agencies know, there is more than just the data. Use the data and the standards as a starting point, then delve deeper. The service deemed "below standards" might actually be a lifeline to a subset of your customers. Rather than canceling service, service levels can be adjusted. We had a line with poor boardings per vehicle service hour and identified it for cancellation. During the public outreach process we learned it was a key route serving certain schools, so we decreased service levels by more than half and still maintained the same ridership but increased the boardings per vehicle service hour.

Please keep your service standards updated on a regular basis if they are shared with the public. Standards become truly easy to pick apart - and become a social media article overnight - if your agency states that its standards are one thing but is doing something different.

The service evaluation process would be simpler and more honest if it were purely advisory. It is natural for a certain number of routes to be below the agency's performance standard, for a number of reasons. No single metric can capture all the different variables that play into a route's worth. Performance indicators should be understood as a form of triage, not as definitive. Also, make sure you don't reinvent the wheel. Even within what are essentially small public agencies, there may be parallel and redundant reporting processes (e.g., one coming from finance, one coming from planning). Budget time to educate a new administration on existing dashboards and performance standards; otherwise, there is a risk that new ones will be erected in parallel to existing ones.

Obtain buy-in from all stakeholders. Lay out clear and simple rules and standards with very few exceptions. Make frequent reports of compliance and performance.

Identify the agency's mission and goals. Develop a few principal objectives that measure these goals and show your management, board and customers how your individual services are meeting them (preferably annually) or will lead to investigation of changes.
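Several responses above center on boardings per vehicle service hour as the core productivity measure. A minimal sketch of that calculation is below; the route data and the 10.0 minimum standard are invented for illustration, not drawn from the survey.

```python
# Boardings per vehicle service hour, flagged against a hypothetical
# minimum standard (the 10.0 threshold is illustrative only).
MIN_BOARDINGS_PER_HOUR = 10.0

routes = [
    # (route_id, monthly_boardings, monthly_vehicle_service_hours)
    ("12", 18_400, 1_150),
    ("47", 6_200, 980),
]

for route_id, boardings, service_hours in routes:
    productivity = boardings / service_hours
    status = ("meets standard" if productivity >= MIN_BOARDINGS_PER_HOUR
              else "below standard")
    print(f"Route {route_id}: {productivity:.1f} boardings/hour ({status})")
```

As the anecdote above illustrates, a "below standard" flag from a calculation like this should trigger deeper review (public outreach, service-level adjustment), not automatic cancellation.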
Develop other specific criteria and measurements (standards) that support the principal objectives and provide specific and clearly defined support for service development. Employ verified and understandable data and information that directly supports all the above. Be data driven. Of course, manage the process and communications with all those interested.

If you consistently apply your service standards to requests for new service, your transit system will be efficient.

Determine the weight evaluation standards actually hold in the decision-making process. Then design evaluation standards based on that weight.

In general, benchmarking with other authorities is very helpful. It is important that participants are transparent about their results and that there is no "gaming" of the rules set in place.

Guidelines and not hard and fast rules; they should not preclude certain initiatives.

I would encourage any service standards to be Board adopted, as it gives more credibility to those whose job it is to "implement" the service, especially as it applies to elimination of underperforming routes that some group has a strong affinity for.

When we initially introduced the rail headway standards, we proposed headways from the terminal. This was seen as a cut in service and got lots of negative coverage in the press. The Board ended up adopting combined headways where applicable (e.g., 6 minutes in the core, 12 minutes outside the core). - Given that the rail standards were ignored in the last service change (budget cuts forced the change in service), we need to do a better job of showing the impact of those potential changes. We're showing crowding, but in a lagging performance report. And we should also show what that means for customer travel time and, if we could, what we expect in terms of ridership impact. - Look to other agencies and learn from them (your report will help on this!). We just had staff from another agency over for a visit after TRB and we were blown away by their service standards and how well aligned they were with the external and internal performance reporting.

Don't assume someone knows what a service standard is used for or what it means. Conduct as much outreach and education as possible. Collect and share feedback using innovative methods.

Good data is the foundation for developing and implementing service evaluation standards, but do not ignore the human side of transit. Ask yourself, what are the needs of the riding population that may not be captured within the numbers? Service standards must be balanced with public engagement.

Take time to brief policy makers regarding the service standards and how they are calculated. This should be done regularly as new board members start. Understanding productivity, reliability, and efficiency measures and how they interplay is important when making decisions. Doing this will make it easier to make changes to service, as they will understand why the changes are being made.

Tying improvement initiatives to improved performance is essential. Make sure data is readily available to answer the question.

Keep up with changes to the operation (e.g., gradually fewer seats on buses means that load standards based on a percentage of seats change over time).

Don't rush to put guidelines into formal policy documents that don't rise to the policy level (i.e., operating guidelines). Design some flexibility/room for judgment.
One lesson learned early on is to have some connection between the service evaluation standards of a local/supplemental transit service and the service evaluation standards governed by the regional service provider where routes operate on the same corridors. This will make it possible to show the public comparisons of how service is provided by both agencies.

Avoid making things too black-and-white when it comes to outcomes. We set out three measures and found that some routes could never meet all three. As we realized the measures looked at different data points, we began to articulate that some routes will perform better in some categories than in others. Our message then became: routes that fail all three need a serious redo, routes that fail two need an improvement plan, and routes that fail only one metric are normal.

DATA SOURCES AND COLLECTION PROTOCOLS

14. How does your agency collect data for performance measures?

Automated data collection: 13 (28.9%)
Manual data collection: 1 (2.2%)
Both automated and manual data collection: 31 (68.9%)

15. What manual data collection methods are used to collect data for performance measures? Check all that apply. (For those responding "manual data collection" in Question 14)

Traffic checkers: 1 (100.0%)
Bus operators via the farebox: 0 (0.0%)
On-board surveys: 1 (100.0%)
None: 0 (0.0%)

16. What automated data collection methods are used to collect data for performance measures? Check all that apply.

Automated passenger counters (APCs): 38 (86.4%)
Automated vehicle location system (AVL): 38 (86.4%)
Automated fare collection system (AFC): 35 (79.6%)
Other (please specify): 6 (13.6%)

Other includes:

GFI boarding data
CAD, Incident Management
We have some APC data but it is not very reliable. We will be implementing a new system within the next 1-2 years.
Ride surveys conducted by drivers
Paratransit data is collected through the reservation software.
The AFC is just for our regional fare collection system and does not account for all boardings. It is a fairly good indicator of trends.

17. What percentage of your agency's fleet is equipped with the devices selected in the previous question (APCs, AVL, or other)?

APC - 10%; AVL/AFC - 100%
100% for AFC and AVL except for private contracted services. Approaching 75% for APC.
100%

Transit Agency Survey Results 101 It will be 100% in Feb 2018. Right now, it’s about 60%. 95% 100% 100% AVL for both subway and bus. 100% 100% 100% 100% 100% AVL, about 50% APC All fixed route buses have automatic fare collection systems. 100% 100% AVL--100%, APC--75%, AFC--100% 100% as of May 2017 100% 100% Bus: CAD-AVL: 100% APC: 70% AFC: 100% Light Rail: CAD-AVL: 0%* APC: 100% AFC: n/a (platforms have TVMs) *Currently in the middle of a CAD- AVL project that replaces legacy CAD/AVL bus and introduces CAD-AVL in Light Rail. 100% 100% 30% APC, 100% AVL, 100% AFC 100% 100% of bus fleet has APC and AVL. Smart card covers about 5-10% of ridership 100% 65% Almost 100% are equipped with at least VMS, all full size buses with AFC, about 75% with APC APCs - 100% (but not all functional/reliable) AVL - 100% AFC - 100%

100%
Approximately 50% contain APC units, 100% contain AVL units and 100% contain fareboxes.
100%
100%
100%
All automated on buses. Railcars don't have APCs, so we are developing a model to estimate that (previously measured via checkers).
100%
100% for AVL
90%
100%
100%
AVL = 100%; APC = 92% of buses, 50% of LRVs
100% of our agency's fleet has AVL, APC, and AFC equipment. However, due to the low reporting of the APC data, it is used for ridership at a stop level. The AFC equipment captures ridership at a route level and the agency uses that data for reporting purposes.
100% for all of the above (but APCs have a much lower accuracy result because of interlines and end-of-line idiosyncrasy)

18. At what level are performance data collected? Check all that apply.

Systemwide: 42 (93.3%)
Route level: 43 (95.6%)
Day type level (e.g., weekday-Saturday-Sunday): 44 (97.8%)
Time-of-day level: 37 (82.2%)
Stop level: 38 (84.4%)
Trip level: 38 (84.4%)
Vehicle level: 30 (66.7%)
Operator level: 23 (51.1%)

Other (please specify): 8 (17.8%)

Other includes:

Performance data is collected down to the individual transaction/boarding/timepoint crossing, but is generally not analyzed at this level of granularity.

Block level

We have access to data by time of day, at the stop and trip level; however, the software we use isn't user friendly so we look at that data less frequently.

Manual surveys with on-board ride checks, road monitors, supervisors.

Note: we report out annually on stop level data for understanding needs of bus stops and transit centers, but these are not performance measures. For specific issues, we do time-of-day analysis. For maintenance, vehicle level data is tracked for road calls and vehicle performance.

Note: we *collect* data at the operator level, but have not typically analyzed it at this level. Vehicle level data is typically only analyzed in our Vehicle Maintenance section. Regarding time of year, *collection* processes occur without regard to time of year. Our data *analysis* processes do, when warranted, take time of year into account.

On-board manual ridership counts using transit checkers boarding the bus and collecting the boarding and alighting information at each stop system wide (all trips, all routes, all services - weekday, Saturday, Sunday, almost 24 hours/day)

19. For performance measures included in your agency's service evaluation standards, how often are data collected, summarized, and reported?

Annually
Most data is measured monthly, quarterly, and annually.
Depends on the metric. Data collected daily. Summarized and reported depending on metric and needs - monthly, by pick, or annually.
Daily
Monthly, annual, quarterly reports
It depends, but at least every three years for Title VI.
Quarterly
Monthly, semi-annually, annually
Collected daily; reported monthly and annually

Crowding is reported to the Board quarterly based on model data.
Monthly
Data can be collected for any time period needed for the analysis.
On a monthly basis; however, some data points more frequently.
Each trimester most of our evaluation metrics are collected.
Daily
Monthly
Monthly
Annual report for principal measures. Quarterly or monthly for many others.
Some data is mostly monthly.
Quarterly
Monthly
Depends on measures. Monthly, quarterly and annually.
Monthly
Monthly
Once a year
Collected daily; typically reported monthly but can accommodate on-demand reporting
Monthly
3 times per year (there are three markups: Spring, Summer, Fall)
Monthly
Continuously
Monthly
At least annually but very often more frequently on an as-needed basis for planning purposes.
Monthly
Either monthly, annually, or by service change (3x/year), depending on the metric
Quarterly
Monthly
Data is reviewed monthly on a route level basis.

Collected daily and summarized/reported monthly
Data is collected daily, reported internally daily, weekly, and monthly, and reported publicly monthly.
Monthly
Quarterly for some; annually for all
Monthly
Daily, monthly, quarterly, and annually
Annual
Quarterly (operator pick)

20. Please describe any sampling techniques your agency uses to collect data if you do not use a 100 percent sample.

Use FTA guidelines for NTD reporting
For our APC samples, when a trip is not collected over an entire rating, we will look to a previous rating to get that trip's data.
The NTD sampling plan
FTA standards
For passenger environment measures of stations, subway cars, and buses, the agency randomly samples stations and vehicles, using survey personnel to collect the data. Results are statistically valid quarterly or annually, depending on the measure and the level of detail required.
N/A
Random sample
About half of the fleet is equipped with automatic passenger counters (APCs). These coaches serve all routes and are rotated throughout the scheduled service. The sample of data is stratified to proportionately represent the annual schedule of trips.
We create a random sample.
We sample route boardings and alightings manually to calibrate APCs.
N/A
N/A
Stratified random sampling
N/A
We use an alternate sampling plan for NTD passenger miles.

None
Except for stop level data, all data is based on a 100% sample. Depending on the need, APC data for stops is either broken into average trips by day type and then aggregated across the trips for stops, or APC data shares for stops are calculated and then applied to other 100% collections.
N/A
Sample/expand from 95% coverage (valid data) to 100%
We collect 100% of passenger rides.
This is either straightforward or complicated. Rotate APC-equipped buses to cover every trip of every route for at least one week, twice a year.
NTD template-based sampling plan using grouping, 100% UPT, PPMT method. IT Quality Assurance process, and in-the-field verification. Obviously, some data (APC for instance) has a statistically acceptable margin of error.
Not applicable
NTD Passenger Miles Sampling (every 3 years)
APC software does sampling.
Max load checks. Targeted sampling for LRVs. Daily ridership counts on commuter rail (by conductors).
Currently we have sampling data collected for the annual measure of passenger miles traveled for the NTD. This is done by way of a consultant conducting ride checks on the system's bus routes.

21. Is time of year considered in the data collection process for performance metrics?

Yes: 22 (48.9%)
No: 21 (46.7%)
Unsure: 2 (4.4%)

22. Please describe how time of year is considered in the data collection process.

Seasonal variations. We would want to

One example would be ridership seasonality. The August service change sets the service levels for the time period between late August and late January. Between late August and mid-November, some routes have higher ridership due to school children than in late November through December. The result is that indicators such as passenger boardings per revenue hour and per revenue mile are higher in late August, September, October, and mid-November than in late November and December. But the decline in late November and December is not indicative of poorer performance but rather of a known seasonal change. As a result, evaluations such as the route ranking model incorporate twelve months of data to effectively factor out the impacts of seasonality.

We generally don't do passenger count readings for planning during the holidays or other school breaks, except summer.

The annual report uses an annual average; with academic and non-academic service so different, on quarterly reports we provide a breakout.

The agency generally compares "like with like" (e.g., compared to the same month in the prior year) to eliminate seasonality. The agency also uses 12-month rolling averages to address seasonality.

Seasonal goals are set when appropriate for use in our Transit Operating Performance Scorecard (TOPS); for example, separate quarterly goals are set for On-Time Performance as well as an annual goal.

We just recognize that transit, especially the commuter bus mode, has its peak seasons, so comparing certain months to certain months is not necessarily apples to apples.

Peak ridership periods are April and October; therefore, they are optimal times for data collection and analysis.

Summer data is not considered for maximum service level and run time decisions.

We compare performance data year over year to evaluate trends and patterns for deviations. Even when growth or decline occurs, we evaluate for similarities in ridership patterns.

Seasonal variation is taken into consideration; we moved from a spring count to an autumn count due to no school breaks (two board periods available for data collection with no school breaks or public holiday fluctuation).

Most data except APC data is collected on an ongoing basis. It is compared year-over-year rather than by other measures to account for seasonality. On APC measures, we try to always collect the same set of data for the same two months every year so that seasonality doesn't impact the data comparisons.
Some routes are dependent on whether or not school is in session. Winter months will indicate lower service on some routes that serve more of the senior citizen areas.

Some (not all) stats are presented as trailing twelve months.

Winter months and holidays

Seasonal effects on ridership and on-time performance exist, so YOY comparisons at the month level are most often used for trend analysis.

Seasonal due to increased tourism, conventions, school schedules.

While we collect data throughout the year, we realize trends exist in ridership, on-time performance and other metrics. Winter weather is a factor that can greatly impact ridership.

The Spring or Fall timeframe is the best time of year to collect ridership data. Other performance metrics are collected throughout the year.

Ridership changes during the summer months, as well as holidays.

Service is usually evaluated for peak ridership. The holiday season and summer are not used unless looking at annual trends.

On-time performance goals are seasonalized.

Ridership evaluation (i.e., for service reductions) is based on typical or higher ridership periods (avoiding school breaks, construction).

23. Who is primarily responsible for or oversees data collection regarding performance measures at your agency?

Operations Planning Department: 5 (11.1%)
Planning Department: 20 (44.4%)
Operations Department: 1 (2.2%)
Finance Department: 2 (4.4%)
Department dedicated to performance analysis: 5 (11.1%)
Contractor: 0 (0.0%)
Other (please specify): 12 (26.7%)

Other includes:

Currently within the Transit Bureau, there are two planners, with one focused on service analysis and the other working on service development and evaluation (based upon the work performed by the service analysis planner).

There is not a primary - it depends on the audience (NTD reporting, MPO level reporting, agency KPIs, etc.)

Performance Management Office

IT and Finance

Combination of Scheduling (essentially operations planning), Planning, and Service Quality.

The scheduling supervisor collects the data, distributing it to Operations, Planning, and Finance.

A Data Management group that, along with the Planning, Scheduling and Infrastructure groups, forms the Service Development section.

A mix of Operations and Finance do the data collection and official records. Planning staff do more detailed and targeted data collection and analysis.

We only have 6 employees that do contractor oversight; we all look at the data.

Transit Agency Survey Results 109 Small agency => general manager We are a small transit system. So, it’s done by a person. The Office of Management and Budget (finance) reports the information but it is collected in multiple departments. 24. How would you describe your agency's initial development of performance standards? Check all that apply. Board/governing body initiative 10 22.7% Senior management initiative 25 56.8% Department-level initiative 22 50.0% Inspired by other agencies’ standards 11 25.0% Inspired by media attention 1 2.3% Other (please specify) 9 20.5% Other includes: Handed down by Moses (Performance standards predate our knowledge) Consultants Title VI We had an opportunity to update the service standards in conjunction with the route redesign. Our most recent iteration (2012) was part of a comprehensive operational analysis. In house initiated (based on my previous transit work experience I proposed to form the Data Management group that will take care of the data collection/analysis/reports required as an input to the planner's projects/initiatives); also proof that other agencies also have the data collection groups established, however the reports and analysis are completed by planners (TTC example used) Required by Title VI When we went to update our service standards in 2011, we contacted several transit properties regarding some of the numbers used in our 1999 service standards. To our surprise, we were informed that they had gotten those numbers from us previously. So the initial development pre-dates my 25 years here and may have been a department-level initiative. Service standards were initially developed by the regional planning commission. Performance measures were developed by the transit agency and the local DOT.

25. Has your agency received any protests in recent fixed-route bus service procurements?

Yes: 6 (16.2%)
No: 25 (67.6%)
Not for the most recent procurement; unsure about past procurements: 6 (16.2%)

CONTRACT/OPERATING AGREEMENT STRUCTURE

26. What is the length in years of the initial term of award?

One year: 1 (3.2%)
Two years: 1 (3.2%)
Three years: 12 (38.7%)
Four years: 4 (12.9%)
Five years: 10 (32.3%)
More than five years: 3 (9.7%)

27. Does the contract have an option to extend the award?

Yes: 27 (87.1%)
No: 4 (12.9%)

28. How many years are included in the option?

One year: 2 (6.3%)
Up to two years: 11 (34.4%)
Up to three years: 7 (21.9%)
Up to more than three years: 12 (37.5%)

29. What is the payment basis for your agency's current fixed-route contract? Check all that apply.

Cost plus fixed fee: 12 (32.4%)
Revenue miles: 4 (10.8%)

Revenue hours: 24 (64.9%)
Passengers: 1 (2.7%)
Total vehicle miles: 3 (8.1%)
Total vehicle hours: 3 (8.1%)
Fuel costs reimbursed separately: 7 (18.9%)
Other (please specify): 16 (43.2%)

Other includes:

Both fuel and maintenance costs are "passed through" the contractor to our agency. All of the assets, including buildings (owned or leased), vehicles and equipment, remain the responsibility of our agency.

Fixed cost plus variable cost based on revenue hours

Rate per revenue hour, with replacement costs, fuel, and some maintenance supplies paid directly by the agency.

Training hours

Two components - revenue hours for variable costs and then a monthly fixed fee to cover fixed costs. We use this structure on the two fixed route contracts operated by private entities. In our contract with the public agency we pay on revenue hours.

A combination of fixed and variable rates based on vehicle service hours

Reimbursement for safety bonus and performance bond. Monthly fee for dispatch services. Monthly fee for management services. Penalties. Plus an hourly cost for maintaining bus stops and transit centers, and a direct pass-through cost for major engine overhauls on the fleet.

Fixed monthly fee

Maintenance

Some expenses are passed through with no markup in cost.

The contract includes a fuel cost hedge based on a base rate, so this clause can benefit either the contractor or the agency depending on fuel prices.

Fixed fee

Fixed fee and revenue miles and revenue hours. We pay fuel costs for CNG buses and electricity costs for electric buses.

Variable costs are billed based on a cost per hour; fixed costs are billed as a monthly fixed fee.

Fixed route is based on revenue hours; paratransit is based on a per-trip fee.

30. How does your agency handle fare revenues under the current contract?

Fare revenues given to agency: 28 (75.7%)
Fare revenues kept by contractor: 1 (2.7%)
Direct offset to cost: 2 (5.4%)
Other (please specify): 6 (16.2%)

Other includes:

We are a fareless system.

The system is free to ride.

Cash is currently kept by the contractor; pass revenue is kept by the agency. A future cashless system is expected with the next fare system, and all fare revenue is expected to be handled by the agency.

Fixed-route: the agency collects the revenue. These are County-owned buses and fareboxes. Paratransit: the contractor deducts the revenue from the bill. The contractor owns those vehicles.

Fare revenues are collected and deposited by the contractor into the contractor's account, then deducted from the invoice to the agency for service.

Electronic fares (via a smartcard) and cash fares are collected directly by our agency. In one case, cash fares are collected via a local approved credit union under a vendor agreement.

31. Does the current contract include contractor performance provisions (incentives, penalties, or liquidated damages)?

Yes: 30 (81.1%)
No: 7 (18.9%)

32. What contractor performance provisions are included in this contract? Check all that apply.

Performance incentives: 14 (46.7%)
Liquidated damages: 20 (66.7%)
Performance penalties, but not liquidated damages: 12 (40.0%)
Other (please specify): 2 (6.7%)

Other includes:

Note: we did not include penalties with this contract. In our experience, proposers include potential penalties in their proposed cost, so it increases the contract cost. FYI, per FTA, the term is now "contract deductions."

33. What are the percentage or dollar amounts of the incentives or penalties?

The percentage/dollar amounts are very low for the current contract. The new contract (scheduled to be approved by the Board tomorrow, Jan. 24, 2017) includes significant penalties, such as the hourly fixed route rate for late or missed trips, or $500 for incidents/accidents not reported within 24 hours, etc.

Incentive/disincentive is based on agency key performance indicators (accidents, road calls, on-time performance, complaints). Penalties can be up to 2% of the invoice. The incentive can be up to 1% of the invoice. We also have "performance deficiency credits" (PDCs) for failing to do specific things. Examples include late PM inspections, improper uniform, late pullouts, etc. PDCs range from $50 to $1,000 per occurrence.

There are 6 performance standard categories in which contractors have an opportunity to earn an incentive. The incentive payments range from $1,500 - $5,000. Liquidated damages range from $100 - $2,500.

Liquidated damages are $100, $200, $300, $500. We haven't charged them yet. Issues abound. $100 for running a red light, and no ADA announcements.

Incentives are ~1.5% for the parent company and 1.725% for contract employees assigned to our contract. It's impossible to estimate the percentage of penalties - theoretically, they could exceed the dollar value of the contract.

The percentage has gone up with the new contract starting July 1, 2016. That is partially because technology is being leveraged to measure performance. However, because that performance measure has been adjusted and the contractor is managing performance levels more proactively, the service has improved greatly over the last 6 months.
Because the amount of liquidated damages has increased greatly with the new contract, I have been slowly adding in new damages to be applied, and I give the contractor a date on which the next damage we are going to focus on will be applied. There were a lot of performance issues when I got here, but applying damages in a progressive manner meant the contractor could bring performance back into line in a reasonable manner. At the end of the day, LDs are not a "gotcha" tool. They are really to ensure that customers are getting a baseline reasonable acceptable service level and that the contractor is doing what is expected in terms of the operations and maintenance of your service.

Meet scheduled service incentive based on amount of scheduled service met: $250 - $750/month. Operator overtime less than 6%: $250/month. Maintenance overtime less than 6%: $250/month. Meet agency-set vehicle maintenance budget line item: $5,000 annually. Actual service hours less than 99.10%: $750 damages/month. Operator overtime greater than 10%: $250/month. Maintenance overtime greater than 10%: $250/month. Vehicle maintenance budget line item 10% over adopted rate: $2,000 damages annually.

Small stuff, $50 to $500 per occurrence for LDs, similar for incentive bonus.

No penalties; up to 2% of the monthly contract can be earned in incentives. (They have been earning 0.75%.)

Varies, but around $3,000 per month.

Driver incentives/penalties - Penalties: missed trips, 2 or more: $2,650 per event; operator complaints, 25 or more per month: $1,500 per month. Incentives: missed trips, 0 in a month: $2,650; 15 or fewer operator complaints: $100 per event, maximum $1,500.

They are negotiated.

Less than 1%.

Liquidated damages and performance incentives are less than 1%.

Penalties for missed trips due to driver/dispatch error of $250 each, not wearing uniform $50, preventable accidents minimum $500 and maximum $5,000. Twice the per-trip rate for dropped trips.

Insignificant, really. The biggest incentive is $500/month if the PMs are done on time. Penalties are assessed after a letter is sent describing the problem and demanding a plan to rectify the problem within a time limit. If the problem recurs the next month, then the penalty would be assessed. I've only had to send one letter, and the problem was corrected. This approach does not increase costs. However, the standard approach (see the problem, ding the contractor) does, and that's how I'll respond to #36.

Variable descriptions of violations and penalties:
1) Early Trips: $500 per occurrence.
2) Late Trips (>10 minutes): $50 per occurrence.
3) Missed Trips (>20 minutes): cost of trip + $200.
4) More than 5 Verified Complaints per Month: $50 per additional complaint.
5) Failure to Submit Reports: $50 per report.
6) Falsification of Reports: $1,000.
7) Heating or Air Conditioning Failure in Service: $50.
8) Unsafe Operation of Vehicle: $100.
9) Misuse of Marin Transit Vehicle: $1,000.
10) Use of Cell Phone during Vehicle Operation: $1,000.
11) Operator Discourtesy: $50.
12) Operator not Wearing Seatbelt during Vehicle Operation: $100.
13) Operators not Adequately Trained or Failing to Properly Operate Fareboxes or Destination Signs: $100.
14) Schedules or Complaint Cards not Available on Vehicles: $50.
15) Rider Alerts/Posters not Posted on Vehicles: $50.
16) Radio Communication Not Maintained: $100 per occurrence after two warnings.
17) ADA-Related Operator Error (e.g., Failure to Announce Stops, Failure to Properly Secure Wheelchair): $50.
18) Failure to Complete Operator Daily Pre- and Post-Trip Inspection: $100.
19) Negligence of Contractor Staff Resulting in Serious Injury to Passengers: $500.

Varies by performance measure.

Operations - Fixed Route and Route Deviation Performance Criteria (Standard; Incentives/Damages)

Missed Trips: No bus will operate behind schedule by more than ½ the headway - e.g., for a route that runs every half hour, a missed trip will be any run that leaves its starting point more than 15 minutes late. Damages = $100 per missed trip. In addition, payment for vehicle hours corresponding to all missed trips will be deducted from the monthly invoice.

On-time Departures: Buses will depart from all designated time points no later than five minutes after their scheduled time. Damages = $1,500 if on-time performance is at or below 89% on average for the month.

Late Yard Pull Out: All buses will leave the yard at the designated time. Damages = $100 per late yard pull out.

Trips Operated Ahead of Schedule: No bus will leave any time point prior to its scheduled departure time. Damages = $50 per incident.

- Late Route Deviation Pickups: Passengers who have reserved trips will be picked up no more than 20 minutes after the promised time. Incentive = $1,500 per month if 98% or more of pickups are on time. Damages = $1,500 per month if less than 92% of pickups are on time.
- Deviation Denial: A minimum of two deviations will be made available per trip. Damages = $50 per deviation not provided within guidelines.
- Missed Route Deviation Passenger Trips: All scheduled passenger trips will be served unless cancelled by the customer. A trip is considered missed if the bus is more than 20 minutes late and the passenger cancels or does not show for the trip. Damages = $50 per missed trip.
- Hold Times: Calls for customer information, deviations, or DAR services will be answered either with no hold time or with less than two minutes of hold time. Incentive = $1,500 per month if 95% or more of calls are answered with less than two minutes of hold time. Damages = $1,500 per month if less than 90% of calls are answered with less than two minutes of hold time.

Operations—Dial-a-Ride

- Late Pickups: Passengers must be picked up no more than 20 minutes after the promised time. Incentive = $1,500 per month if 98% or more of pickups are on time. Damages = $50 for each late pickup exceeding 8% each month.
- Missed Trips: All scheduled passenger trips will be served unless cancelled by the customer. A trip is considered missed if the bus does not arrive or is more than 20 minutes late and the passenger cancels or does not show for the trip. Damages = $50 per missed trip.

Operations—General

- Preventable Accidents: 70,000–90,000 average in-service miles between preventable accidents. The contract may be terminated for failure to operate a safe service (i.e., having an accident record higher than industry norms). Incentive = $1,500 per month if the average miles between preventable accidents exceeds 90,000 total miles. Damages = $1,500 per month if the average miles between preventable accidents falls below 70,000 total miles.
- Customer Complaints: The number of valid customer complaints will not exceed one complaint per 10,000 passengers. The agency will determine validity. Incentive = $500 per month if less than one complaint per 10,000 riders during a month. Damages = $500 per month if complaints exceed one per 10,000 riders in the month.
- Staffing Vacancies: No vacancy over 30 calendar days for any position included in the cost proposal. A vacancy is defined as not having a person employed full time on-site in the position. Credit to monthly invoice for the position's salary and benefits.
- Key Personnel: Unauthorized substitution of key personnel. Damages = $10,000 per occurrence.
- Dress Code: Compliance with uniform/dress code while operating a bus in revenue service. Damages = $25 per infraction.
- Road Supervision: Road supervision must be available at all times a revenue vehicle is in operation. Road supervisors will respond to any incident/accident within a maximum of 20 minutes of the call during revenue operating hours. Damages = $50 per occasion that a road supervisor does not respond within 20 minutes.
- Management Reports: Provide RTD with reports as defined in the Scope of Work. Reports submitted more than 10 days after the due date will be subject to damages. Damages = $10 per day for the first violation; $20 per day for the second violation; $30 per day for the third violation; $100 per day for additional violations.
- Cell Phones: Operators are never to use a cell phone while operating an RTD-owned vehicle. Damages = $100 per documented occurrence.

Maintenance

- Vehicle Accessibility: Lifts and securement equipment are fully operational on any revenue vehicle placed into service. Damages = $50 per infraction.
- Vehicle Appearance/Cleanliness: Vehicles leaving the yard shall be cleaned as defined in the "Cleaning of Buses" section in the Scope of Work. Damages = $50 per day, per vehicle, until the vehicle is inspected and approved by agency staff.
- CHP Maintenance Facility Inspection: Achieve a satisfactory rating in all categories in the CHP Safety Compliance Report. Damages = $10,000 for any less-than-satisfactory rating.
- CHP Revenue Vehicle Safety Inspection: Achieve a satisfactory rating in all categories in the CHP Safety Compliance Report. Damages = $10,000 for any less-than-satisfactory rating.
- Vehicle Maintenance and Inspection: Periodic maintenance and inspections shall be completed on or before the scheduled intervals (mileages, hours, and days) identified in the Scope of Work. Damages = $50 for any preventive maintenance or inspection not completed as required.

- Service Interruptions Due to Road Calls: The average combined revenue vehicle miles per mechanical road call is more than 15,000 miles. Incentive = $500.00 per month the average miles between road calls exceeds 15,000. Damages = $500.00 per month the average miles between road calls falls below 15,000.
- Accident Repairs: All vehicles and equipment used in this Contract with accident damage shall be repaired within 30 days of the accident. Damages = $50 for any infraction left uncorrected after 30 days.
- Vandalism Repairs: All vehicles and equipment used in this Contract with vandalism damage shall be repaired within 30 days of the incident. Damages = $50 for any infraction left uncorrected after 1 day.
- Vehicle Availability: Sufficient vehicles meeting all standards must be available for every scheduled pullout. Damages = $100 per trip not operated or missed due to insufficient vehicle availability. (Applies only if the maintenance contractor does not also perform operations.)
- Adherence to Procedures: Follow agency-recommended procedures. Damages = $100 for every occurrence found of not following agency-recommended procedures.
- Care and Use of Agency Equipment: Equipment must be properly maintained and appropriately used. Damages = $500 per incident of negligence, misuse, and/or abuse of agency equipment.
- Staffing: No vacancy over 30 calendar days for any position included in the cost proposal. A vacancy is defined as not having a person employed full time on-site in the position. Credit to monthly invoice for the position's salary and benefits. Damages = $50 per day per vacancy in required positions exceeding 30 days.
- Key Personnel: Unauthorized substitution of key personnel. Damages = $10,000 per occurrence.
- Down Vehicles—Average Number of Days: Vehicles shall not be out of service for any maintenance issue for longer than five days except as specified otherwise in the Scope of Work. Incentive = $500 for average down-vehicle days fewer than three in a month. Damages = $500 for average down-vehicle days greater than six in a month.
- Management Reports: Provide RTD with reports as defined in the Scope of Work. Reports submitted more than 10 days after the due date will be subject to damages. Specific records must be provided as requested. Damages = $10 per day for the first violation; $20 per day for the second violation; $30 per day for the third violation; $100 per day for additional violations. $100 per day for a specific records request not completed within 10 days.

- Maximum LDs assessed in the first year is $500,000. The cap increases 3% each year. Liquidated damages are assessed daily based on service performed the previous day. The LDs range from $50 per incident to $200 per incident. LDs are also assessed quarterly based on performance measures for the quarter (5 different measures). These LDs range between $10,000 and $15,000 per quarter. Incentives are calculated quarterly based on performance measures for the quarter (5 different measures). The incentives are $5,000 per quarter.
- The performance penalty amounts are specific to the KPIs in each service contract (as well as confidential). However, we generally have not had to impose performance penalties.

34. In your agency's experience, does inclusion of liquidated damages or performance penalties increase the cost of the proposals received?

Yes: 13 (43.3%)
No: 4 (13.3%)
Unsure: 13 (43.3%)

35. Has your agency ever assessed liquidated damages under the current contract?

Yes, more than three times: 15 (50.0%)
Yes, three times or fewer: 1 (3.3%)
No: 14 (46.7%)
Unsure: 0 (0.0%)
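The escalating management-report damages quoted above are a simple tiered schedule: a per-day rate that depends on how many violations the contractor has accumulated. As an illustration only (not any agency's actual tooling; the function name and the assumption that the per-day rate applies to days past the grace period are the author's), the schedule can be sketched as:

```python
# Illustrative sketch of the tiered late-report damages schedule quoted
# above: $10/day for the 1st violation, $20/day for the 2nd, $30/day for
# the 3rd, and $100/day for every violation after that.
RATE_BY_VIOLATION = {1: 10, 2: 20, 3: 30}  # dollars per day
ADDITIONAL_RATE = 100                       # 4th violation and beyond

def late_report_damages(violation_number: int, days_late: int) -> int:
    """Damages for one late report, given its position in the sequence
    of violations and how many days late it was (an assumption here:
    days counted past the 10-day grace period)."""
    if days_late <= 0:
        return 0
    rate = RATE_BY_VIOLATION.get(violation_number, ADDITIONAL_RATE)
    return rate * days_late

# A contractor's 1st, 2nd, and 4th late reports, each 5 days late:
print(late_report_damages(1, 5))  # 50
print(late_report_damages(2, 5))  # 100
print(late_report_damages(4, 5))  # 500
```

The steep jump after the third violation is what gives the schedule its teeth: repeated lateness quickly becomes far more expensive than the first offense.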

36. Who provides the buses under this contract?

Transit agency: 36 (97.3%)
Contractor: 0 (0.0%)
Transit agency, but contractor provides emergency spares: 1 (2.7%)
Other: 0 (0.0%)

37. What equipment/facilities are provided by the contractor? Check all that apply.

Non-revenue/support vehicles: 26 (70.3%)
Bus storage facilities: 15 (40.5%)
Bus maintenance facilities: 16 (43.2%)
Bus maintenance equipment: 16 (43.2%)
Scheduling software: 10 (27.0%)
None – all provided by transit agency: 7 (18.9%)
Other: 4 (10.8%)

TRANSITION ISSUES

38. Has your agency changed contractors within the past three years?

Yes: 8 (21.6%)
No: 29 (78.4%)

39. Please characterize the transition.

Smooth transition – no issues: 1 (11.1%)
Acceptable transition – a few minor problems easily resolved: 6 (66.7%)
Difficult transition – major problems and/or many minor problems: 2 (22.2%)

40. What was the nature of the transition problems? Check all that apply.

Labor: 3 (42.9%)
Equipment: 3 (42.9%)

Facility: 1 (14.3%)
Pensions: 0 (0.0%)
Other financial: 1 (14.3%)
Retention of records: 2 (28.6%)
General lack of cooperation between old and new contractor: 3 (42.9%)
Other (please specify): 1 (14.3%)

Other includes:

- The transition to this new contractor has been challenging, as the current provider has not provided all necessary documents in advance of the transition. This has delayed the awarded vendor's efforts to re-hire and screen current operators.

41. Please describe the most serious transition problem and how it was resolved.

- Transferring of insurance documents, due to the leasing agreement already in place and the third-party insurance company.
- The cooperation between the two corporate entities was not smooth. Local folks were often charged with being the "man in the middle." The agency often had to get in the middle to handle it.
- Acquisition and training of new employees, most of whom were formerly employed by the previous contractor.
- The outgoing vendor may skimp on vehicle maintenance once they know they have not been awarded the new contract. The new vendor always says the outgoing vendor did poor maintenance and wants to be held harmless for vehicle issues. In the past few transitions we hired an inspector to conduct a DOT-style inspection to identify all issues that went beyond normal wear and tear. We then held the outgoing contractor responsible for repairing the vehicles before transition. The inspector then conducted a follow-up to ensure issues had been resolved.
- Ensuring clarity on acceptable fleet, facility, and equipment condition at turnover. This must be clear in the contract documents (for contract closeout and contract startup).
- The employee/labor transition was the most public (employees came to board meetings to speak out), but the vehicle transition was the most painful. We eventually met with attorneys from both sides to settle the amount owed for outstanding maintenance with the outgoing contractor.
- Still in the transition process... TBD.

LABOR ISSUES

42. Has your agency experienced any labor issues related to contracting?

Yes: 14 (37.8%)
No: 21 (56.8%)
Unsure: 2 (5.4%)

43. Please describe the nature of the labor issues. Check all that apply.

Continuity of employment: 6 (42.9%)
Employees' right of first refusal: 0 (0.0%)
Wage levels: 7 (50.0%)
Benefit levels: 6 (42.9%)
Pensions: 3 (21.4%)
Collective bargaining agreements: 10 (71.4%)
Work requirements (length of shifts, work hours, etc.): 2 (14.3%)
Other (please specify): 2 (14.3%)

Other includes:

- Current operators have not received clear direction from the current prime vendor on the continuity of benefits, the contract end date, and the expected transition period and related activities.
- Several years ago, some employees conducted a wildcat strike on behalf of a fired supervisor.

44. Please describe the most serious labor issue and how it was resolved.

- Drivers and mechanics at one of our service contractors (a municipal operator) went on strike last year after failing to reach a new collective bargaining agreement. This contractor supplies nearly all of our transit service in that municipality; however, because those routes are primarily branded as our services, many customers did not understand and either went to wait for the bus anyway (which never showed up) or thought the strike was our fault. Because the bargaining agreement is between the municipal operator and its employees, we did not have a role in resolving the labor issue.
- Employee retention. If the agency recruits a contractor's employee, the contractor receives monetary reimbursement for training the employee.
- Difficulty attracting and retaining qualified staff.

- Some returned to work and the rest were fired and replaced. The NLRB upheld the position of the contractor.
- The labor issues are connected to the contractors' employees, splitting the service, and the threat of a work stoppage. The labor issue was resolved by the contractor and their employees through a new CBA.
- Threatened strike five years ago. The agency worked with the contractor to have backup operators in place.
- Lousy private-sector benefits. There is no resolution unless we have enough funding to mandate a certain premium level of benefits comparable to the public sector.
- The current most problematic issue is operators being able to call out without advance notice with little, if any, consequence, per the collective bargaining agreement.
- Threatened strike, more talk than action; a 3-year CBA was eventually settled.
- CBA not renewed yet, and not resolved yet.
- We are including a minimum hourly wage clause in the scope of work for the first time.
- Due to low wages, the drivers and dispatchers unionized. Negotiation took more than a year and almost resulted in a strike.
- Labor issues were the primary driver behind the push to contract out. When 100% contracting was achieved, a difficult negotiation occurred surrounding the freezing of the public employee pension in which the employees had previously participated.
- TBD

45. Has your agency ever had to respond to a Section 13c complaint?

Yes, in response to a formal 13c complaint: 3 (8.8%)
Yes, not to a formal complaint but in response to Section 13c issues raised during negotiations: 2 (5.9%)
No: 29 (85.3%)

OVERSIGHT

46. Does your agency monitor contracted services?

Yes: 36 (97.3%)
No: 1 (2.7%)

47. Please check the areas that you monitor. Check all that apply.

Workers' comp and related administration costs: 3 (8.3%)
Liability costs and related administration costs: 3 (8.3%)
Maintenance: 32 (88.9%)
Depreciation of operating facilities: 13 (36.1%)
Accounts payable and payroll: 5 (13.9%)
Cash counting and farebox maintenance: 26 (72.2%)
Human resources and recruiting costs: 1 (2.8%)
Contract administration: 27 (75.0%)
Third-party vehicle inspection: 15 (41.7%)
Internal audit: 13 (36.1%)
Driver training: 19 (52.8%)
Verification of NTD and other data: 28 (77.8%)
Street supervision: 17 (47.2%)
Dispatch: 18 (50.0%)
Background checks: 11 (30.6%)
Drug and alcohol testing: 31 (86.1%)
Operations department management: 23 (63.9%)
Operator training and safety: 22 (61.1%)
Other (please specify): 4 (11.1%)

Other includes:

- DBE
- Accessibility: announcements, customer service to persons with disabilities, functioning lifts/ramps. We use a "secret shopper" type program. Looking forward to the results of this study.
- We have a database of customer comments categorized by issue type. Vehicle complaints are sent to the maintenance manager, safety issues to the safety manager, compliments to the GM, and operations issues to the operations manager. I am cc'd on everything. We monitor resolution of all issues.
- The agency conducts on-the-road dispatching. The contractor conducts window dispatching.

48. Does your agency have a specific unit or specific staff members with the responsibility of monitoring the performance of fixed-route contracted services?

Yes, a specific unit: 10 (27.8%)
Yes, specific staff members but not a specific unit: 22 (61.1%)
No: 4 (11.1%)

49. How many agency employees (in full-time equivalents) are involved in contractor oversight?

- 1 from operating agency, 1-2 from parent agency
- 5
- 15
- 5
- 3
- 3
- 14 for the fixed-route service contract
- One employee is dedicated to contractor operations. Other employees are involved in oversight of contract terms and contractor invoices.
- 2
- 4
- 1
- Less than 1 FTE
- 0.3
- 1
- 4
- 4
- 41 FTEs
- 3
- 1
- 3 FTE

- 3
- 2
- All 2.25 of us! We have introduced AVL/CAD and paratransit scheduling software that greatly enhances the ability of our micro-staff to monitor contractor performance, but it's still an issue.
- ~6
- 2
- 2
- 3 FTEs for fixed-route monitoring, but many areas of the agency as a whole assist informally.
- 1
- 2
- 8
- 12 FTEs for bus and paratransit contracts are SOLELY dedicated to contract oversight. Another 6 or so are regularly involved in some fashion.
- 0. All five of the Transit staff perform some type of oversight of the contractors and their staffs'/subcontractors' performance throughout our daily operations.

50. How frequently does your agency communicate with your contractor?

Daily: 28 (77.8%)
Several days a week: 5 (13.9%)
Weekly: 0 (0.0%)
Two or three times a month: 2 (5.6%)
Monthly: 0 (0.0%)
Other (please specify): 1 (2.8%)

Other: Hourly if necessary; we are in the same facility.

51. How would you rate the quality of communication with the contractor?

Very good: 15 (41.7%)
Good: 16 (44.4%)
Fair: 3 (8.3%)

Poor: 1 (2.8%)
Very poor: 0 (0.0%)
Multiple contractors; depends on the contractor: 1 (2.8%)

52. Who has responsibility for collecting operating data, including NTD data?

Contractor: 14 (40.0%)
Transit agency: 13 (37.1%)
Other (please specify): 8 (22.9%)

Other includes:

- They provide some manpower, but we oversee the data processing.
- Both; depends on the data.
- We share responsibility for NTD data.
- Mixture: the County does financial data; a consortium does passenger miles through a contract with a consultant.
- Scheduled hours and miles are entered into a transit agency web portal, and the contractor is required to enter any deviations from schedule directly into the same website. The contractor also enters road call, customer contact, and accident information into the web portal. The transit agency works with the raw data to prepare NTD reporting.
- Both, but primarily the agency.
- The agency works closely with the contractor to ensure the necessary data is collected. It is a shared responsibility.
- Both. The contractor and the agency collect different sets of data used for NTD submission.

53. How often is operating/NTD data reported for contracted services?

Daily: 7 (20.0%)
Weekly: 0 (0.0%)
Monthly: 23 (65.7%)
Annually: 4 (11.4%)
Other (please specify): 1 (2.9%)

Other: As required by the FTA.

54. Is the operating/NTD data publicly available?

Yes: 19 (52.8%)
Some is, some is not: 11 (30.6%)
No: 6 (16.7%)

55. Please describe the type of data that is publicly available.

- We publish a Transit Service Performance Review every year that reports metrics such as boardings, passenger loads, on-time performance, bus bunching, speed, and service costs for each of our routes (directly operated and contract), based primarily on APC and GPS data. However, the data quality varies. Some of our contract services are operated with vehicles (owned by us) that do not have APCs, and GPS was only installed recently, because those contractors are in remote areas (e.g., an island) and those components are difficult to service.
- Whatever is published to NTD.
- Ridership, on-time performance, service hours, passenger miles.
- NTD reports.
- Monthly reports are given to our Board of Directors showing passengers, revenue hours, and miles. This information is available on the transit agency's website.
- NTD
- Unlinked passenger trips.
- Data published in the annual report.
- Any data gathered as part of service delivery that is not specifically protected can be requested by the public (State Data Practices Act).
- Ridership is the only data that we report to our board and the public.
- Monthly ridership is reported at the MPO meetings in monthly reports.
- As a public agency, all data is available upon request (public knowledge).

56. How is the data made available to the public? Check all that apply.

Periodic posting of performance reports on the agency website: 6 (20.0%)
Printed reports that are available to anyone on request: 5 (16.7%)
In response to information requests under local public records law: 21 (70.0%)

Other (please specify): 12 (40.0%)

Other includes:

- Monthly ridership reports provided to the MPO.
- Operating statistics are provided in monthly reports to our board of directors.
- Planning documents.
- Public presentation to the board.
- NTD website.
- CUTA Canadian Transit Fact Book and reports to Council.
- Agenda packets for the board, available to the public online.
- NTD website.
- Monthly board packet.
- Reports issued by NTD.
- NTD website.
- Contractor NTD data is combined with the agency NTD data.

57. Are there issues with fixed-route service integration (either between directly operated and contracted service or between different contractors)?

A single contractor operates all of our fixed-route service: 18 (51.4%)
Yes, there are ongoing issues: 3 (8.6%)
Yes, there are occasional issues: 4 (11.4%)
No: 10 (28.6%)

58. How does your agency resolve disputes with its contractor? Check all that apply.

Discussed and resolved at regular meetings: 28 (80.0%)
Discussed and resolved at ad hoc meetings addressing specific issues: 23 (65.7%)
Performance penalties/liquidated damages assessed: 13 (37.1%)
Other (please specify): 2 (5.7%)

Other includes:

- Usually via email/phone.
- The previous response needs explanation. All fixed-route services are with one contractor. However, the County also pays the regional transit agency to provide bus services that go through the County. Occasionally, issues arise between their bus operators and ours.

59. How does your agency evaluate service performance for contracted fixed-route service? Check all that apply.

Agency-wide performance standards: 24 (68.6%)
Performance standards for contracted service: 21 (60.0%)
Customer feedback via surveys: 19 (54.3%)
Customer feedback informally: 25 (71.4%)
Qualitatively: 10 (28.6%)
Other (please specify): 7 (20.0%)

Other includes:

- Record reviews (training, hiring, maintenance, DOT hours), pull-out checks, vehicle inspections, site visits, dispatch reports, maintenance reports.
- Secret rider program, review of on-bus video, operations software reports, documented and verified customer complaints.
- Our customer comments system accrues comments received via the website, through the call center, and by email.
- We have agency performance standards that were primarily developed for our directly operated services. We have been adapting these to monitor contracted routes, which only as part of the new contract starting in July will have some of the technologies (APC, AVL) that enable use of certain standards.
- Review of APC/AVL report data.
- We monitor the AVL system daily to ensure service is operating on time. We also review camera footage randomly and in association with incidents, and conduct periodic ride-alongs.
- We do have a formal customer comment process.

60. What is the most important issue regarding agency oversight and how is this addressed?

- Monthly reporting by the contractor being received in a timely manner is a common problem with the current contractor. This is one of the ongoing challenges that prompted support for entering into a new RFP for a new service provider.
- We are most sensitive to the quality of vehicle maintenance, since we own the vehicles. We employ full-time inspectors to monitor fleet and maintenance programs and use periodic third-party assistance to inspect the fleet.
- Ensuring service standards meet or exceed agency requirements. The solution is constant and consistent (scheduled and random) inspection of work.
- Safety, which is addressed through direct communication with the contractor.
- Customer service. We discuss customer service at every meeting and talk about examples.
- Getting a qualified staff member who understands the universal elements of transit.
- Customer service and OTP; constant communication.
- Compliance, through consistent monitoring.
- Operator training. We observe operators within the first week after being released and have incorporated several technologies to supplement on-board evaluation.
- I would say that as an agency we have been working over the last year to do a better job on oversight: to work with the contractor to make sure expectations are set for what we expect, and to keep an open process in terms of contract oversight. Again (I know I keep saying this), communication is important to this, so that in the end the two work together to maintain good service for the customers on the street.
- I could use one FTE just to monitor the contractor all the time, but there is no money.
- Not sure what you are asking...
- Quality of service; regular bi-weekly meetings with senior staff and daily interaction with mid-level staff. We are all in the same facility, which makes communication much easier.
- Could use additional management staff to provide regular oversight, on site at our garages.
- Ensuring consistent contract compliance.
- Ensuring that the contractor is in compliance with the agreed-upon terms and conditions of the contract.
- Acquisition of good NTD data. This is addressed through defined data collection methodologies and constant communication with the contractor when there are changes to the system.
- Detailed customer complaint process with resolution within an established timeframe. Regular leadership team meetings. Co-location of agency and contractor.
- Knowing what service is running and how people are using it. Get CAD/AVL on buses and APCs.
- Lack of administrative staff time available for this purpose. We added staff in 2013, but we could use more.
- Safety is always the most important. The contractor investigates and applies progressive discipline, including retraining, as needed.

- Safety. Accidents are investigated and/or reviewed; unsafe actions are reported to the contractor and monitored for compliance.
- For us, the most important issue is not having a direct connection with operations. We address this by having AVL data so that we can see how the buses are performing on route, hiring a third-party contractor to inspect our fleet to ensure good maintenance practices, having a web portal where contractors can enter data about service, and holding standing monthly meetings with our contractors on operations and issues.
- On-time performance. Running reports and analyzing and assessing causes (whether driver-caused).
- Ensuring that contracted employees are providing the level of service expected from our agency, so that directly operated and contracted services are provided to the customer as seamlessly as possible. This is accomplished through oversight of the contractor, communication, and review of the contractor's OTP.
- Communication.
- One of the challenges is receiving information in a timely manner. We address this issue at the weekly meeting.
- Safety and federal reporting.
- Just getting out into the system to perform oversight. Making sure the contractor is aware of what and how much oversight is being performed.
- Contract management responsibilities have been distributed across many separate departments and even subsidiaries in our enterprise. In order to improve our oversight, we are in the process of centralizing contract management within one department.
- Regular monitoring.

61. Is there any other information you would like to share that could benefit other transit systems that contract fixed-route service?

- Collect and report performance data on a regular basis, discuss this data with contractors on a regular basis, track trends, and don't let a problem go unresolved; address it quickly. Once contractor performance deteriorates, it's very difficult to get it back to an acceptable level.
- No
- No
- We find that the partnership model tends to be more effective with this type of service delivery.
- In 2006, our agency switched from a management contract approach to a service contract approach. We found it to be less expensive, with better control over finances, more responsive to customers and the community, and more transparent.
- We may be unlike other agencies in that we have three fixed-route contractors, a paratransit contractor, and two yellow bus contractors.
- Having someone on your review team who was on the private contractor side is invaluable. If I'd not had that experience, I would have missed and/or been unaware of any number of things.

- Just hope all my responses got recorded. Had a hiccup with SurveyMonkey a third of the way through.
- No
- FYI, most of this information relates to our new contract that starts in July for the next 4+2 years, since the RFP is fresh in my mind. We are also looking at other service contracting opportunities, such as rail replacement shuttles due to construction, and contracting of new or currently directly operated routes.
- Ensuring the contractor's local management team is qualified to deliver the contractual expectations, and that contract oversight begins at the start of the contract.
- Can't think of anything at the moment.
- I spent many of my decades in the transit industry on the contractor side, so my understanding of that side of the business, I think, helps to create a more positive environment. This is the third stop in the last 16 years, all managing contracted-out fixed-route and paratransit services. I am sure there are other insights I can offer.
- There are pros and cons to both models. I think an agency has to think about its goals and challenges and which model can best suit it. I could teach a course.
- When doing an RFP, try to minimize the built-in advantages that a particular bidder may have over other bidders.
- Oversight. Oversight. Oversight. And collaboration. I think we need to do more as an industry to link agency staff who deal with contracted services so they can support each other with best practices and lessons learned.
- Transit agencies should be clear, specific, and unequivocal regarding the type of demonstrated skills and experience (years of experience) they would like their GM and operations and safety managers to have. The current contract has placed managers and GMs with no transit experience in leadership positions because our contract did not detail the specific experience required.

62. Would you be willing to participate further as a case example, involving a telephone interview going into further detail on your agency's experience, if selected by the TCRP panel for this project?

Yes: 31 (86.1%)
No: 5 (13.9%)

TRB’s Transit Cooperative Research Program (TCRP) Synthesis 139: Transit Service Evaluation Standards provides an overview of the purpose, use, and application of performance measures, service evaluation standards, and data collection methods at North American transit agencies.

The report addresses the service evaluation process, from the selection of appropriate metrics through development of service evaluation standards and data collection and analysis to the identification of actions to improve service and implementation.

The report also documents effective practices in the development and use of service evaluation standards. The report includes an analysis of the state of the practice of the service evaluation process in agencies of different sizes, geographic locations, and modes.

Appendix D contains performance evaluation standards and guidelines provided by 23 agencies.
