Guidebook for Measuring Performance of Automated People Mover Systems at Airports (2012)

Chapter 5 - Performance Measures for APM Systems at Airports: Recommended Approach


Performance measures for APM systems at airports are organized into a set of seven metrics (shown in Table 2) and are reported, along with system and service descriptive characteristics (shown in Table 3), at monthly intervals on Form A and Form B, which are provided in Exhibit A. The measures are developed from the passengers' perspective and therefore generally do not exclude events typically excused (1) under force majeure contract clauses or (2) when outside the control of the system operator. This section provides the definition, data requirements, data sources, and data collection techniques for the airport APM performance measures, and begins with the definition of the system and service descriptive characteristics that are critical to providing context to the performance measures when the measures are used for comparison purposes among reporting airport APM systems.

Table 2. Airport APM performance measures.
• Service Availability (one of three approaches to be selected)
• Safety Incidents per 1,000 Vehicle Service Miles
• O&M Expense per Vehicle Service Mile
• Actual and Scheduled Capacity (Peak Versus All Other)
• Passenger Satisfaction
• Missed Stations per 1,000 Station Stops
• Unintended Stops per 1,000 Interstations

Table 3. Airport APM system and service descriptive characteristics.
System descriptive characteristics:
• Single Lane Feet of Guideway, Mainline
• Single Lane Feet of Guideway, Other
• Routes Operated in Maximum Service
• Trip Time in Maximum Service
• Stations
• Vehicles in Total Fleet
Service descriptive characteristics:
• Passenger Trips
• Vehicle Service Miles
• Vehicles Operated in Maximum Service
• Vehicles Available for Maximum Service
• Headway in Maximum Service

5.1 System Descriptive Characteristics

System descriptive characteristics of airport APM systems are descriptors that provide a general understanding of airport APM system size and help put into perspective the performance measures of such systems when they are used to compare performance among other airport APM systems. System descriptive characteristics are likely to remain the same from one reporting period to the next unless a system expansion or reduction has taken place since the last reporting period. The following system descriptive characteristics, to be reported on Form A, are defined in this section:

• Single Lane Feet of Guideway, Mainline
• Single Lane Feet of Guideway, Other
• Routes Operated in Maximum Service
• Trip Time in Maximum Service
• Stations
• Vehicles in Total Fleet

5.1.1 Single Lane Feet of Guideway, Mainline

Single Lane Feet of Guideway, Mainline is defined as the total length of track in the passenger-carrying portion of the system, regardless of direction, and is expressed in single lane (track) feet. For example, if a system contains one mile of dual-lane mainline guideway, this would be expressed as 10,560 single-lane feet (slf) of mainline guideway (5,280 ft/mile × 2 lanes).

This system characteristic does not include guideway in the non-passenger-carrying portion of the system, such as mainline pocket tracks, storage tracks beyond terminals, turnback/switchback tracks, and yard tracks.

Single Lane Feet of Guideway, Mainline plus Single Lane Feet of Guideway, Other is the quantification of all track in the system.

5.1.2 Single Lane Feet of Guideway, Other

Single Lane Feet of Guideway, Other is defined as the total length of track in the non-passenger-carrying portion of the system, regardless of direction, and is expressed in single lane (track) feet.

This system characteristic includes all guideway used in the non-passenger-carrying portion of the system, such as mainline pocket tracks, storage tracks beyond terminal stations, turnback/switchback tracks, yard maintenance and storage tracks, and related shop approach tracks. Single Lane Feet of Guideway, Other also includes track lengths located indoors in the non-passenger-carrying portion of the system, such as vehicle storage and/or maintenance shop tracks.

Single Lane Feet of Guideway, Other plus Single Lane Feet of Guideway, Mainline is the quantification of all track in the system.
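The conversion above is simple arithmetic, but keeping it in one place helps reports stay consistent. Below is a minimal Python sketch of the single-lane-feet conversion; the function name and input format are illustrative assumptions, not part of the guidebook.

```python
FEET_PER_MILE = 5280

def single_lane_feet(segments):
    """Total single lane (track) feet from (miles, lane_count) segments.

    Example from Section 5.1.1: one mile of dual-lane guideway is
    5,280 ft/mile x 2 lanes = 10,560 slf.
    """
    return sum(miles * FEET_PER_MILE * lanes for miles, lanes in segments)

# One mile of dual-lane mainline guideway, as in the example above.
print(single_lane_feet([(1.0, 2)]))  # -> 10560.0
```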

5.1.3 Routes Operated in Maximum Service

Routes Operated in Maximum Service is defined as the number of routes operated in the system during the peak period of the day that maximum service is provided during the reporting period, with a route being the unique path a train follows and station stops a train makes from its terminal of departure to its terminal of arrival before changing directions (or in the case of single- or dual-lane loop systems, before beginning the same route again). For example, the Routes Operated in Maximum Service for a dual-lane pinched-loop system would typically be two; for a dual-lane shuttle system, four; for a dual-lane loop system, two; and so on.

Routes Operated in Maximum Service excludes routes implemented to address atypical, failure, or special-event service during the reporting period.

5.1.4 Trip Time in Maximum Service

Trip Time in Maximum Service is defined as the trip time in the system, by route, during the peak period of the day that maximum service is provided during the reporting period, beginning upon the start of the door closing sequence at the originating terminal and ending once all doors are open at the destination terminal (or in the case of single- or dual-lane loop systems, ending once all doors are open at the originating terminal).

This system characteristic includes interstation travel times and dwell times at intermediate stations and excludes dwell times at the origin and destination terminals, as well as atypical and failure-related events and operations, such as wayside speed restrictions and onboard vehicle speed limitations.

5.1.5 Stations

The Stations system characteristic is defined as the number of stations in the APM system at which APM trains stop and dwell to carry out the passenger exchange, with a station being the general locale at which passengers can board and alight APM trains, regardless of the configuration or number of platforms at the station. For example, a station with two side platforms separated by a dual-lane guideway is counted as one station.

5.1.6 Vehicles in Total Fleet

Vehicles in Total Fleet is defined as the number of vehicles in the system that are either currently operable or capable of being operable once the appropriate maintenance, cleaning, or other action has been undertaken. For the purpose of this system characteristic, "vehicle" is defined and distinguished as follows:

Car. An individual passenger-carrying unit that cannot operate individually but must be connected to and share equipment with other cars to form a vehicle. A car is not a vehicle.

Vehicle. The smallest passenger-carrying unit that can operate individually. This may be a single unit or a permanently coupled set of dependent cars. A vehicle can also be coupled with one or more other vehicles to form a train.

Train. A set of one or more system vehicles coupled together and operated as a single unit.

Vehicles in Total Fleet does not include heavily damaged vehicles in need of extensive repair, decommissioned vehicles awaiting disposal, and other similar permanently blocked-up vehicles that would require major repair or refurbishment efforts.

5.2 Service Descriptive Characteristics

Service descriptive characteristics of airport APM systems are descriptors that provide a general understanding of the service and operational aspects of airport APM systems and help put into perspective the performance measures of such systems when they are used to compare performance among other airport APM systems.
Service descriptive characteristics are likely to change from one reporting period to the next. The following service descriptive characteristics, to be reported on Form A, are defined in this section:

• Passenger Trips
• Vehicle Service Miles
• Vehicles Operated in Maximum Service
• Vehicles Available for Maximum Service
• Headway in Maximum Service

5.2.1 Passenger Trips

Passenger Trips is defined as the number of passenger trips taken on the system during the reporting period, with a passenger trip being made by any individual who uses the APM system to get from one station to another in the system.

It is recognized that there is variability in how, and even if, passenger trips are quantified in airport APM systems. Some systems use automatic passenger counting systems, either installed above train or platform doors or at a common location where passengers enter and exit stations. Because of the open nature of airport APM systems, these automatic counting technologies count global ins and outs and do not track individual passenger movements like fare collection systems in closed transit systems that use turnstiles and tickets. This makes these automatic counting technologies, although convenient, generally less precise than those in closed transit systems.

Other airport APM systems may not use automatic passenger counting systems, but instead estimate APM passenger trips based on actual or forecast parking or airline passenger data. Still other airport APM systems may quantify APM passenger trips by physically counting passengers on a semi-regular basis using employees or consultants posted on APM station platforms.

It is recognized that some airport APM systems may not be able to report the Passenger Trips service descriptive characteristic, or may not be able to report it accurately. These systems are encouraged to report at least order-of-magnitude Passenger Trips where possible.

5.2.2 Vehicle Service Miles

Vehicle Service Miles is defined as the total miles traveled by all in-service vehicles in the system during the reporting period, with a vehicle being in service when located in the passenger-carrying portion of the system and when passengers are able to use it for transport. For example, if an in-service train is composed of three vehicles and the distance the train travels is 4 miles, then the number of Vehicle Service Miles is 12.

Vehicle Service Miles performed during the reporting period can be obtained directly from vehicle hub-o-meter or odometer readings, or possibly from the automatic train supervision (ATS) system (control center computer system). Caution needs to be exercised so that miles performed in the non-passenger-carrying portions of the system, or while a vehicle is out of service, are subtracted from totals that include such mileage.

5.2.3 Vehicles Operated in Maximum Service

Vehicles Operated in Maximum Service is defined as the number of in-service vehicles operated at once in the system during the peak period of the day that maximum service is provided during the reporting period, with a vehicle being in service when located in the passenger-carrying portion of the system and when passengers are able to use it for transport. Vehicles staged as hot-standby or operational spares, regardless of location, are not included.

For example, if during maximum service an airport APM system uses five in-service trains composed of three vehicles each and one train composed of two vehicles in standby at a terminal station, the number of Vehicles Operated in Maximum Service is 15.

Vehicles Operated in Maximum Service excludes vehicles used to address atypical, failure, or special-event service during the reporting period.

5.2.4 Vehicles Available for Maximum Service

Vehicles Available for Maximum Service is defined as the number of vehicles available to be in service at once in the system during the peak period of the day that maximum service is provided during the reporting period, with a vehicle being available when it can be placed in service after no more than a departure test, for example, and without first requiring any maintenance, cleaning, or other similar action. A vehicle is in service when located in the passenger-carrying portion of the system and when passengers are able to use it for transport.

For example, if during maximum service an airport APM system has 29 vehicles in its total fleet, and:

• 20 are in service;
• One is in the maintenance shop but capable of being in service;
• One is in the maintenance shop and being repaired due to failure;
• Three are coupled together on the vehicle storage tracks at the maintenance and storage facility (M&SF), with one of those three in failure; and
• Four are coupled together on the vehicle storage tracks at the M&SF, with none of the four in failure;

then the number of Vehicles Available for Maximum Service is 24. Vehicles located in the maintenance shop, regardless of their status, as well as operable vehicles coupled to failed vehicles at the M&SF, are not considered available.
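To make the availability rules above concrete, here is a minimal counting sketch in Python. It encodes only the rules stated in this section (shop vehicles are never available; operable vehicles coupled to a failed vehicle at the M&SF are unavailable); the data model and field names are illustrative assumptions, not part of the guidebook.

```python
from dataclasses import dataclass

@dataclass
class VehicleGroup:
    """A set of vehicles in one location or coupled consist (illustrative)."""
    count: int          # vehicles in the group
    location: str       # "in_service", "shop", or "storage"
    failed: int = 0     # failed vehicles within the group

def vehicles_available_for_max_service(groups):
    available = 0
    for g in groups:
        if g.location == "shop":
            continue      # shop vehicles never count, per Section 5.2.4
        if g.location == "storage" and g.failed > 0:
            continue      # operable vehicles coupled to failed ones don't count
        available += g.count - g.failed
    return available

# The 29-vehicle example from Section 5.2.4:
fleet = [
    VehicleGroup(20, "in_service"),
    VehicleGroup(1, "shop"),               # capable of service, but in the shop
    VehicleGroup(1, "shop", failed=1),     # being repaired due to failure
    VehicleGroup(3, "storage", failed=1),  # coupled set with one failure
    VehicleGroup(4, "storage"),            # coupled set, no failures
]
print(vehicles_available_for_max_service(fleet))  # -> 24
```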

5.2.5 Headway in Maximum Service

Headway in Maximum Service is defined as the most frequent headway operated in the system during the peak period of the day that maximum service is provided during the reporting period, with headway being the elapsed time between the same part of consecutive, in-service trains operating in the same direction on the same guideway.

For example, as described in Figure 6, if during maximum service trains operate on separate routes between terminals A and D, between terminals B and D, and between terminals C and D, the Headway in Maximum Service would occur on the guideway with the most frequent headway, or between terminals C and D.

[Figure 6. Example route structure on an airport APM system (terminals A, B, C, and D).]

Headway in Maximum Service excludes headways that involve in-service vehicles used to address atypical, failure, or special-event service during the reporting period.

5.3 Airport APM Performance Measures

Airport APM performance measures are the metrics used to track and compare the performance of airport APM systems as seen from the passengers' perspective. There are seven recommended measures, described by title in Table 4, that are to be reported on Form B, which is provided in Exhibit A.

Table 4. Airport APM performance measures.
1. Service Availability (choose one of the three approaches below)
   A. Tier A Approach
   B. Tier B Approach
   C. Tier C Approach
2. Safety Incidents per 1,000 Vehicle Service Miles
3. O&M Expense per Vehicle Service Mile
4. Passenger Satisfaction
5. Actual and Scheduled Capacity (Peak Versus All Other)
6. Missed Stations per 1,000 Station Stops
7. Unintended Braking Applications per 1,000 Interstations

For each measure in this section, the following is provided:

• A definition,
• Data requirements and sources for the measure, and
• Data collection techniques and the calculating and recording of the measure.

To aid with the tracking and calculation of the measures, a Microsoft Excel workbook file containing several spreadsheets (one for each measure) has been provided and made available for download at the summary page for ACRP Report 37A, which can be found at http://www.trb.org/Main/Blurbs/166387.aspx. The file allows the user to simply input daily data, and the measures are automatically calculated for the day, month-to-date, and year-to-date.

The forms provided in Exhibit A, which are to be used in reporting the measures and descriptive characteristics discussed in Sections 5.1 and 5.2, have also been made available for download from the ACRP Report 37A summary page, and can be completed electronically for easier distribution and tracking.

All of the measures described in Table 4 should be implemented. The airport APM performance measures are defined in detail in the following sections, beginning with Airport APM Performance Measure #1, which includes three approaches for determining Service Availability. The Tier A approach is the least complex and least comprehensive of the three approaches, whereas the Tier C approach is the most complex and most comprehensive of the approaches. For Airport APM Performance Measure #1: Service Availability, choose only one of the three approaches described in Sections 5.3.1, 5.3.2, and 5.3.3.

5.3.1 Airport APM Performance Measure #1: Service Availability (Tier A Approach)

5.3.1.1 Definition

Service Availability (Tier A Approach) is the percentage of time service has been available on the airport APM system, as defined herein. Recognizing that headway regularity is of significant importance to airport APM users, Service Availability (Tier A Approach) is based largely on headway performance. In an effort to maintain the simplicity and usability of the measure, it deliberately does not attempt to capture all system events that an airport APM user could perceive as a loss of availability. Service availability approaches in subsequent sections become more comprehensive in nature by capturing a greater share of those events, and carry with them a greater level of sophistication as well.

Service Availability (Tier A Approach) is defined as:

Daily: $SA_A = \dfrac{AOT}{SOT} \times 100$

Monthly: $SA_A = \dfrac{\sum_{d=1}^{m} AOT_d}{\sum_{d=1}^{m} SOT_d} \times 100$

Yearly: $SA_A = \dfrac{\sum_{d=1}^{y} AOT_d}{\sum_{d=1}^{y} SOT_d} \times 100$

Where:

• SA_A = Service Availability (Tier A Approach).
• AOT = Actual operating time. The total time, in seconds, that the system was operating, calculated by subtracting downtime from scheduled operating time (SOT - D).
• SOT = Scheduled operating time. The total time, in seconds, that the system was scheduled to provide service.
• D = Downtime. The total time, in seconds, of all downtime events.
• A downtime event is any of the following:
  – When the actual headway of in-service trains exceeds the scheduled headway by more than 20 sec during the time when the system is scheduled to provide service. This downtime event begins at the departure time of the in-service train that produced the last on-time headway on the scheduled route before the event; it ends at the departure time of the in-service train that produces the first on-time headway on the scheduled route after the event.
  – When any in-service train has an incomplete trip on a scheduled route during the time when the system is scheduled to provide service. This downtime event begins at the departure time of the in-service train that produced the last on-time headway on the scheduled route before the departure time of the train having the incomplete trip; it ends at the departure time of the in-service train that produces the first on-time headway on the scheduled route after the departure time of the train having the incomplete trip.
  – When the first daily departure of an in-service train from the terminal on each scheduled route fails to occur within the time of one scheduled headway during the time when the system is scheduled to provide service. This downtime event begins at the scheduled opening time and ends at the time of the first departure of an in-service train from the terminal on the scheduled route.

  If any of these downtime events occur at the same time or overlap one another, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.
• Headway is the elapsed time between the same part of consecutive, in-service trains operating in the same direction on the same guideway.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• Incomplete trip is the trip of an in-service train that fails to make a station stop on the scheduled route or that fails to finish the trip on the scheduled route.
• On-time headway is a headway that does not exceed the scheduled headway by more than 20 sec.
• d = Day of the month or year, as applicable.
• m = Days in the month.
• y = Days in the year.

Deliberately employing operating strategies to eliminate or stop the accumulation of downtime by exploiting the intent of the rules herein, especially when those strategies do not benefit the APM user, is not permitted in the context of this system of evaluation (e.g., using a schedule that provides for less frequent scheduled headways than the actual service headways). Inserting additional trains to recover from a downtime event is permitted, but operating additional trains as a routine course over and above what the schedule requires is not. In such a case, the schedule should be modified to reflect the actual operation.

All downtime is to be quantified and assigned to one of the following predefined causal categories:

• Weather-induced. Downtime caused by the weather, such as lightning striking the guideway, or a snow or ice storm.
• Passenger-induced. Downtime caused by a passenger, such as a passenger holding the vehicle doors open or a passenger pulling an emergency evacuation handle on an in-service train.
• System equipment-induced. Downtime caused by system equipment, such as a broken axle on an in-service train, or train control system equipment that fails while in service.
• Facilities-induced. Downtime caused by the facilities, such as a station roof leaking water onto the floor immediately in front of one side of the station sliding platform doors, requiring a bypass of that side of the station, or a crack in a guideway pier that limits the number of trains in an area.
• Utility-induced. Downtime caused by a utility service provider, such as the loss of an incoming electrical feed to the APM system.
• O&M-induced. Downtime caused by personnel affiliated with the O&M organization, such as the mis-operation of the system from the control center or the failure of a maintenance technician to properly isolate from the active system operation a piece of equipment on which he or she is working.
• Other. Downtime caused by other issues, such as a terrorist threat or a delay due to the transport of a VIP.

There are no provisions for partial service credit in this measure, no penalties for line capacity reductions (i.e., shorter trains), no allowances for grace periods (other than the 20-sec duration defined previously), and no penalties for unscheduled stops of trains outside stations. Nor are there exclusions for downtime events. This maintains the simplicity and usability of the measure while providing a measure most reflective of the perspective of the airport APM user.
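The rules above reduce to interval arithmetic: collect downtime intervals, merge overlapping events using the earliest start and latest end, and divide the remaining operating time by the scheduled operating time. The following Python sketch illustrates that calculation; the input format and function names are illustrative assumptions, and the 24-hour service day in the example is hypothetical.

```python
def merge_intervals(events):
    """Merge overlapping downtime events given as (start, end) in seconds.

    Overlapping or simultaneous events combine into one interval using the
    earliest start and latest end, per the Tier A rules above.
    """
    merged = []
    for start, end in sorted(events):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def daily_sa_tier_a(sot_seconds, downtime_events):
    """Daily SA_A = (AOT / SOT) x 100, with AOT = SOT - D."""
    d = sum(end - start for start, end in merge_intervals(downtime_events))
    return 100.0 * (sot_seconds - d) / sot_seconds

# A 24-hour service day (86,400 s) with two overlapping downtime events,
# in seconds from scheduled opening; they merge into one 7,200-s event.
events = [(3600, 9000), (8000, 10800)]
print(round(daily_sa_tier_a(86400, events), 2))  # -> 91.67
```

Note that the monthly and yearly values are the same ratio computed over summed AOT and SOT, not an average of the daily percentages.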

5.3.1.2 Data Requirements and Sources

The data and sources required to calculate Service Availability (Tier A Approach) are provided in Table 5.

Table 5. Data requirements and sources, Airport APM Performance Measure #1: Service Availability (Tier A Approach).
1. Actual departure times, by train number, of in-service trains from the terminal station of each route in the system. Source: ATS subsystem of the ATC system; typically recorded by the control center computer system (CCCS).
2. Scheduled headways, by period, and opening and closing times of the system. Source: ATS, CCCS.
3. Location, time, and train number of trains that fail to dwell at stations on a scheduled route. Sources: control center logbooks; incident reports; work orders; ATS, CCCS.
4. Location, time, and train number of trips not finished on a scheduled route. Sources: control center logbooks; incident reports; work orders; ATS, CCCS.
5. Cause of downtime events. Sources: control center logbooks; incident reports; work orders.

The location in the system where the departure times will be used as the basis for calculating Service Availability (Tier A Approach) should be where the Headway in Maximum Service occurs, as defined in Section 5.2.5. It should specifically be at a terminal station, where possible.

5.3.1.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Service Availability (Tier A Approach) performance measure be accomplished daily, since the measure will serve a useful purpose in describing performance when reported daily within an organization.

For this measure, most of the data will typically be collected from records and systems in the control center. In some cases, the control center computer system (CCCS) that is part of the ATS subsystem will have the functionality to allow user-defined reports and/or performance measures to be generated based on custom rules set by the user, and from output data generated by the airport APM system itself. After the one-time setup of the performance measure in the CCCS, most of what is needed thereafter are the incidental updates of the causes of particular events, and perhaps not much more, depending on the sophistication of the CCCS and the output data generated by the airport APM system. Control center personnel usually perform these updates after each downtime event or before their shifts are complete. In many cases, this allows reports to be automatically generated (usually daily, monthly, and/or yearly) directly by the CCCS. If this functionality exists within the CCCS, it is recommended that it be used, since it could save time and effort.

Some CCCSs do not have the capability described previously but instead can dump the raw output data acquired from the airport APM system automatically to a batch file or to some other network location connected to the CCCS. This is typically done at the end of the operating day or shortly thereafter. In this case it may be easiest to import the data into a spreadsheet application having a file specifically developed to calculate and track this performance measure. The application and file could be installed on a personal computer in the control center so that staff there would have the same ability to keep the data current on each shift.

It is assumed for the purpose of this guidebook and this performance measure that airport APM systems at least have the capability to retrieve departure times (with train numbers), scheduled opening and closing times, and incomplete trip information in an electronic file format from the CCCS.

Regardless of how the data are collected, some manual updates will need to be undertaken in the application for each downtime event to ensure that the measures are recorded and reported accurately. Specifically, a cause for each downtime event will need to be assigned. These causes are defined and discussed in Section 5.3.1.1. There can be one or more causes assigned to a single downtime event. For example, there may be one downtime event for the day, which was initially caused by weather, but the recovery of the system was delayed further by a technician who forgot to remove his or her red tags and locks from the breakers that allow the system to be energized and restored. This oversight extended the delay by 1 hour. If the total downtime for the event was 2 hours, then half of the downtime (3,600 sec) would be assigned to "weather" and the other half (3,600 sec) to "O&M."

To track performance over time, it is recommended that Service Availability (Tier A Approach) be calculated for the day, month, and year, with all of those measures reported daily to the hundredth of a percent (see Section 5.3.1.1). The measures reported for the month and year are always cumulative-to-date, and they reset upon the start of the new month or new year. For example, if the daily report is being issued for the 10th of February for a particular year, the reported daily measure would be for the 10th of February, the reported monthly measure would be the cumulative availability of days one through 10 of February, and the reported yearly measure would be the cumulative availability of the days from January 1st through February 10th. Downtime event causes could be reported similarly.

An example of how Service Availability (Tier A Approach) performance measures could be reported for the day of February 10, 2010, and the associated assignment of downtime, is provided in Table 6, which represents a more comprehensive level of reporting for this measure. The minimum data to be reported for this measure would be as found on Form B in Exhibit A.

Table 6. Example reporting of Airport APM Performance Measure #1: Service Availability (Tier A Approach), February 10, 2010.

Service Availability (Tier A Approach): Day 91.67%; Month-to-Date 98.00%; Year-to-Date 98.89%.

Downtime, by category (Day / Month-to-Date / Year-to-Date):
• Weather: 3,600 sec / 4,320 sec / 4,320 sec
• Passenger: - / 4,320 sec / 12,833 sec
• System equipment: - / 4,320 sec / 9,803 sec
• Facilities: - / - / -
• Utility: - / - / -
• O&M: 3,600 sec / 4,320 sec / 4,500 sec
• Other: - / - / 7,864 sec
• Total: 7,200 sec / 17,280 sec / 39,320 sec

5.3.2 Airport APM Performance Measure #1: Service Availability (Tier B Approach)

5.3.2.1 Definition

Service Availability (Tier B Approach) is the percentage of time service has been available on the airport APM system, as defined herein. In an effort to limit the complexity of the measure and provide an alternate means of calculating service availability, the measure deliberately does not attempt to capture all system events that an airport APM user could perceive as a loss of availability. For example, actual line capacity as compared to the scheduled line capacity (in terms of train consist sizes) is not addressed by this measure. Another, headway performance, is not directly captured by this measure but may be reflected in the Service Availability (Tier B Approach) measure if a failure occurs. The service availability approaches in this and subsequent sections become more comprehensive than the Service Availability (Tier A Approach) measure by capturing a greater share of those events, and carry with them a greater level of sophistication as well.

Because service reliability (MTBF) and service maintainability (MTTR) are key components of the Service Availability (Tier B Approach) measure, they have been included in this section. Service reliability (MTBF) is the mean amount of time that the system has operated before experiencing a failure. Service maintainability (MTTR) is the mean amount of time that it has taken to restore service on the system once a failure has occurred.

Service Availability (Tier B Approach), service reliability (MTBF), and service maintainability (MTTR) are defined as:

$SA_B = \dfrac{MTBF_p}{MTBF_p + MTTR_p} \times 100$

$MTBF_p = \dfrac{SOT_p}{NF_p}$

$MTTR_p = \dfrac{\sum_{F=1}^{NF_p} TTR_F}{NF_p}$

Where:

• SA_B = Service Availability (Tier B Approach).
• MTBF = Mean time between failure = service reliability.
• MTTR = Mean time to restore = service maintainability.
• p = The represented period of time, typically the day, month-to-date, or year-to-date.
• SOT = Scheduled operating time. The total time, in seconds, that the system was scheduled to provide service.
• F = Failure. A failure is any of the following:
  – When any in-service train has an unscheduled stoppage during the time when the system is scheduled to provide service.
  – When any in-service train has an incomplete trip on a scheduled route during the time when the system is scheduled to provide service.
  – When any vehicle or station platform door blocks any portion of the nominal doorway opening that passengers use to board and alight trains dwelling in the station during the time when the system is scheduled to provide service.
• NF_p = Number of failures for the period. The total number of all failures in the period. (Multiple failures occurring at the same time, during the same incident, or due to the same malfunction are to be counted as one failure.)
• TTR = Total time to restore. The total time to restore service after a failure, calculated as follows:
  – For unscheduled stoppages not occurring in conjunction with a station dwell, the total time to restore begins when the train reaches zero speed and ends when the train restarts (in automatic or via manual operation).
  – For unscheduled stoppages occurring in conjunction with a station dwell, the total time to restore begins at the end of the scheduled dwell time and ends when the train departs the station. Where the unscheduled stoppage occurs during a dwell at a terminal station, and the train is taken out of service at the terminal station, the total time to restore ends when all doors of the train are closed and locked at the completion of the dwell.
  – For incomplete trips where a train fails to make a station stop on its route before arriving at its destination terminal, the total time to restore begins at the moment the train bypasses a station and ends at the start of the next station dwell for the same train.
  – For incomplete trips where a train fails to finish a trip on the scheduled route, the total time to restore begins at the moment the train ceases its trip on the route and ends at the scheduled arrival time of the trip for the scheduled destination terminal on the route.
  – For vehicle or station platform doors that block any portion of the nominal doorway opening that passengers use to board and alight trains dwelling in station, the total time to restore begins at the moment a door blocks any portion of the nominal doorway opening during the dwell and ends when the train departs the station. Where blockage of the nominal doorway opening occurs during a dwell at a terminal station, and the train is taken out of service at the terminal station, the total time to restore ends when all doors of the train are closed and locked at the completion of the dwell.

  When multiple failures occur at the same time, during the same incident, or due to the same malfunction, the total time to restore begins at the earliest start time of the failures and ends at the latest end time of the failures.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• Incomplete trip is the trip of an in-service train that fails to make a station stop on the scheduled route or that fails to finish the trip on the scheduled route.
• Unscheduled stoppage is the unscheduled stopping of any in-service train that is not dwelling in a station or the unscheduled stopping of any in-service train that remains in a station longer than the scheduled dwell time.

Deliberately employing operating strategies to eliminate or stop the accumulation of downtime by exploiting the intent of the rules herein, especially when those strategies do not benefit the APM user, is not permitted in the context of this system of evaluation.

All failures and total restoration times are to be quantified and assigned to one of the following predefined causal categories:

• Weather-induced. Failures caused by the weather, such as lightning striking the guideway, or a snow or ice storm.
• Passenger-induced. Failures caused by a passenger, such as a passenger holding the vehicle doors open or a passenger pulling an emergency evacuation handle on an in-service train.
• System equipment-induced. Failures caused by system equipment, such as a broken axle on an in-service train, or train control system equipment that fails while in service.
• Facilities-induced. Failures caused by the facilities, such as a station roof leaking water onto the floor immediately in front of one side of the station sliding platform doors, requiring a bypass of that side of the station, or a crack in a guideway pier that limits the number of trains in an area.
• Utility-induced. Failures caused by a utility service provider, such as the loss of an incoming electrical feed to the APM system.
• O&M-induced. Failures caused by personnel affiliated with the O&M organization, such as the mis-operation of the system from the control center or the failure of a maintenance technician to properly isolate from the active system operation a piece of equipment on which he or she is working.
• Other. Failures caused by other issues, such as a terrorist threat or a delay due to the transport of a VIP.

There are no provisions for partial service credit in this measure, no penalties for line capacity reductions (i.e., shorter trains), no allowances for grace periods, and no exclusions for failures. This maintains the simplicity and usability of the measure while providing a measure most reflective of the perspective of the airport APM user.

5.3.2.2 Data Requirements and Sources

The data and sources required to calculate Service Availability (Tier B Approach), service reliability (MTBF), and service maintainability (MTTR) are provided in Table 7.

Table 7. Data requirements and sources, Airport APM Performance Measure #1: Service Availability (Tier B Approach).
1. Scheduled arrival and departure times, by train number, of in-service trains at the terminal stations of each route in the system. Source: ATS subsystem of the ATC system; typically recorded by the CCCS.
2. Actual arrival and departure times, by train number, of in-service trains at every station stop in the system. Source: ATS, CCCS.
3. Scheduled opening and closing times of the system. Source: ATS, CCCS.
4. Actual dwell start and end times, by train number, of all in-service trains at every station stop in the system. Source: ATS, CCCS.
5. Location, time, and train number of trips not finished on a scheduled route. Sources: control center logbooks; incident reports; ATS, CCCS.
6. Times of all zero speed and non-zero speed indications for all in-service trains, by train number and location. Source: ATS, CCCS.
7. Times and locations of in-service trains taken out of service, by train number. Sources: control center logbooks; incident reports; ATS, CCCS.
8. Times of all vehicle and station platform doors' closed and locked status in the system, by train number and terminal station location. Source: ATS, CCCS.
9. Times of vehicle and station platform door opening faults, by train number and station location. Sources: control center logbooks; incident reports; ATS, CCCS.
10. Number of failures. Sources: control center logbooks; incident reports; ATS, CCCS.
11. Cause of failures. Sources: control center logbooks; incident reports; work orders; ATS, CCCS.

5.3.2.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Service Availability (Tier B Approach), service reliability (MTBF), and service maintainability (MTTR) performance measures be accomplished daily, since the measures will serve a useful purpose in describing performance when reported daily within an organization.

For these measures, most of the data will typically be collected from records and systems in the control center. In some cases, the CCCS that is part of the ATS subsystem will have the functionality to allow user-defined reports and/or performance measures to be generated based on custom rules set by the user and on output data generated by the airport APM system itself. After the one-time setup of the performance measures in the CCCS, most of what is needed thereafter are the incidental updates of the causes and numbers of particular failures, and perhaps not much more, depending on the sophistication of the CCCS and the output data generated by the airport APM system. Control center personnel usually perform these updates after each downtime event or before their shifts are complete. In many cases, this allows reports to be automatically generated (usually daily, monthly, and/or yearly) directly by the CCCS. If this functionality exists within the CCCS, it is recommended that it be used, since it could save time and effort.

Some CCCSs do not have the capability described previously, but instead can dump the raw output data acquired from the airport APM system automatically to a batch file or to some other network location connected to the CCCS, for example. This is typically done at the end of the operating day or shortly thereafter. In this case it may be easiest to import the data into a spreadsheet application having a file specifically developed to calculate and track this performance measure. The application and file could be installed on a personal computer in the control center so that staff there would have the same ability to keep the data current on each shift.
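As a complement to the spreadsheet approach described above, the Tier B arithmetic is straightforward to script once failures have been consolidated (simultaneous or same-malfunction failures counted as one). A minimal Python sketch follows; the input format is an illustrative assumption, and the 20-hour scheduled operating time in the example is inferred from the Day column of Table 8 (an MTBF of 10:00:00 with two failures), for illustration only.

```python
def tier_b_measures(sot_seconds, ttr_seconds_by_failure):
    """Compute MTBF, MTTR (seconds), and SA_B (percent) for one period.

    sot_seconds: scheduled operating time for the period.
    ttr_seconds_by_failure: one total-time-to-restore value per consolidated
    failure (simultaneous/same-malfunction failures already merged into one).
    """
    nf = len(ttr_seconds_by_failure)
    if nf == 0:
        return float("inf"), 0.0, 100.0  # no failures: fully available
    mtbf = sot_seconds / nf
    mttr = sum(ttr_seconds_by_failure) / nf
    sa_b = 100.0 * mtbf / (mtbf + mttr)
    return mtbf, mttr, sa_b

# A day with an assumed 20-hour SOT and two failures (6 min and 4 min to
# restore), consistent with the Day column of Table 8 below.
mtbf, mttr, sa_b = tier_b_measures(20 * 3600, [360, 240])
print(mtbf, mttr, round(sa_b, 2))  # -> 36000.0 300.0 99.17
```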

It is assumed for the purpose of this guidebook and this performance measure that airport APM systems at least have the capability to retrieve the data described in numbers 1 through 10 of Table 7 electronically from the CCCS. If not, and if the control center manually logs all actions or events that occur throughout the operating day, then whatever information cannot be obtained electronically from the CCCS will have to be mined manually from the logbook and other locations as described in Table 7.

Regardless of how the data are collected, some manual updates will need to be undertaken in the application for each failure to ensure that the measures are recorded and reported accurately. Specifically, a cause for each failure will need to be assigned. These causes are defined and discussed in Section 5.3.2.1. There can be one or more causes assigned to a single failure. For example, there may be one failure for the day, which was initially caused by weather, but the recovery of the system was delayed further by a technician who forgot to remove his or her red tags and locks from the breakers that allow the system to be energized and restored. This oversight extended the delay by 1 hour. If the total time to restore for the event was 2 hours, then half of it (3,600 sec) would be assigned to "weather" and the other half (3,600 sec) to "O&M."

Similarly, the number of failures may need to be manually updated. For example, if a train has a failed door and there is a door opening fault at every station at which the train dwells, this would need to be reflected as one failure in the application rather than as multiple failures coincident with the door opening faults at each of the stations.

To track performance over time, it is recommended that Service Availability (Tier B Approach), service reliability (MTBF), and service maintainability (MTTR) be calculated for the day, month, and year, with Service Availability (Tier B Approach) reported daily to the hundredth of a percent, and service reliability (MTBF) and service maintainability (MTTR) reported daily in hours, minutes, and seconds. The measures reported for the month and year are always cumulative-to-date, and they reset upon the start of the new month or new year. For example, if the daily report is being issued for the 10th of February for a particular year, the reported daily measures would be for the 10th of February, the reported monthly measures would be the cumulative availability of days one through 10 of February, and the reported yearly measures would be the cumulative availability of the days from January 1st through February 10th. The number of failures and total time to restore could be reported similarly, by causal category.

The example provided in Table 6 reflected downtime for the Service Availability (Tier A Approach) performance measure being reported in seconds. That could be replicated for the reporting of total time to restore under this Service Availability (Tier B Approach) performance measure, but total time to restore is reported in Table 8 in hours, minutes, and seconds for the sake of comparing the two reporting units.

An example of how the Service Availability (Tier B Approach), service reliability (MTBF), and service maintainability (MTTR) performance measures could be reported for the day of February 10, 2010, and the associated assignment of number of failures and total restoration time, is provided in Table 8, which represents a more comprehensive level of reporting for this measure. The minimum data to be reported for this measure would be as found on Form B in Exhibit A.

Table 8. Example reporting of Airport APM Performance Measure #1: Service Availability (Tier B Approach), February 10, 2010.

Summary (Day / Month-to-Date / Year-to-Date):
• Service Availability (Tier B Approach): 99.17% / 99.88% / 98.50%
• Service reliability (MTBF): 10:00:00 / 80:00:00 / 06:34:00
• Service maintainability (MTTR): 00:05:00 / 00:06:00 / 00:06:10

Failures and total time to restore (TTR), by category (Day failures, TTR / Month-to-Date failures, TTR / Year-to-Date failures, TTR):
• Weather: -, - / -, - / 5, 03:00:00
• Passenger: 1, 00:06:00 / 10, 01:26:00 / 75, 02:45:00
• System equipment: 1, 00:04:00 / 8, 00:30:00 / 30, 01:30:00
• Facilities: -, - / 1, 00:20:00 / 4, 00:30:00
• Utility: -, - / 2, 00:04:00 / 4, 04:00:00
• O&M: -, - / 4, 00:10:00 / 6, 00:20:00
• Other: -, - / -, - / 1, 00:46:00
• Total: 2, 00:10:00 / 25, 02:30:00 / 125, 12:51:00
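Table 8 mixes percent and hours:minutes:seconds units, and the month and year columns are cumulative-to-date rather than averages of daily values. A small formatting and accumulation sketch in Python, with illustrative input data (not from the guidebook):

```python
def hms(seconds):
    """Format seconds as HH:MM:SS for MTBF/MTTR reporting."""
    s = int(round(seconds))
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

def cumulative_tier_b(daily_records):
    """Cumulative-to-date Tier B values from per-day (SOT, NF, total TTR).

    Per the cumulative-to-date rule above, MTBF and MTTR are recomputed
    from the summed inputs, not averaged from the daily results.
    """
    sot = sum(r[0] for r in daily_records)
    nf = sum(r[1] for r in daily_records)
    ttr = sum(r[2] for r in daily_records)
    if nf == 0:
        return hms(0), hms(0), 100.0  # no failures to date
    mtbf, mttr = sot / nf, ttr / nf
    return hms(mtbf), hms(mttr), round(100.0 * mtbf / (mtbf + mttr), 2)

# Two hypothetical days: (SOT seconds, failures, total TTR seconds).
print(cumulative_tier_b([(72000, 2, 600), (72000, 3, 900)]))
# -> ('08:00:00', '00:05:00', 98.97)
```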

5.3.3 Airport APM Performance Measure #1: Service Availability (Tier C Approach)

5.3.3.1 Definition

Service Availability (Tier C Approach) is the percentage of time service has been available on the airport APM system, as defined herein. This availability measure, as compared to the Tier A and Tier B availability measures, is the most comprehensive among the three tiers, and also the most complex. It generally captures all events that an airport APM user could perceive as a loss of availability.

Because service mode availability, fleet availability, and station platform door availability are key components of the Service Availability (Tier C Approach) measure, they have been included in this section. Service mode availability is the fraction of the entire time the service mode has been available on the system, as defined herein. Fleet availability is the fraction of the entire time the fleet has been available in the system, as defined herein. And station platform door availability is the fraction of the entire time the station platform doors have been available in the system, as defined herein.

Service Availability (Tier C Approach), service mode availability, fleet availability, and station platform door availability are defined as:

$SA_C = \dfrac{\sum_{p=1}^{n} SA_{TF_p}}{\sum_{p=1}^{n} ST_p} \times 100$

$SA_{TF} = ST \times A_{SM} \times A_F \times A_{SPD}$

$A_{SM} = \dfrac{AMOT}{SMOT}$

$A_F = \dfrac{ACOT}{SCOT}$

$A_{SPD} = \dfrac{APDOT}{SPDOT}$

Where:

Service Availability (Tier C Approach)
• SA_C = Service Availability (Tier C Approach).
• SA_TF = Time-factored service availability value for each period.
• ST = Service time of each service period, in hours.
• p = Service period.
• n = Number of service periods.
• A_SM = Service mode availability.
• A_F = Fleet availability.
• A_SPD = Station platform door availability.

Service Mode Availability
• AMOT = Actual mode operating time. The total time, in seconds, that the system was operating in the scheduled operating mode, calculated by subtracting mode downtime from scheduled mode operating time (SMOT - MD).
• SMOT = Scheduled mode operating time. The total time, in seconds, that the system was scheduled to provide service in the specific operating mode.
• MD = Mode downtime. The total time, in seconds, of all mode downtime events.
• A mode downtime event is any of the following:
  – When any in-service train has an unscheduled stoppage during the time when the system is scheduled to provide service. For unscheduled stoppages not occurring in conjunction with a station dwell, the downtime begins when the train reaches zero speed and ends when the train restarts (in automatic or via manual operation). For unscheduled stoppages occurring in conjunction with a station dwell, the downtime begins at the end of the scheduled dwell time and ends when the train departs the station. Where the unscheduled stoppage occurs during a dwell at a terminal station, and the train is taken out of service at the terminal station, the downtime ends when all doors of the train are closed and locked at the completion of the dwell.
  – When any in-service train has an incomplete trip on a scheduled route during the time when the system is scheduled to provide service. For incomplete trips where a train fails to make a station stop on its route before arriving at its destination terminal, the downtime begins at the moment the train bypasses a station and ends at the start of the next station dwell for the same train. For incomplete trips where a train fails to finish a trip on the scheduled route, the downtime begins at the moment the train ceases its trip on the route and ends at the scheduled arrival time of the trip for the scheduled destination terminal on the route.

  If any of these downtime events occur at the same time or overlap one another, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• Incomplete trip is the trip of an in-service train that fails to make a station stop on the scheduled route or that fails to finish the trip on the scheduled route.
• Unscheduled stoppage is the unscheduled stopping of any in-service train that is not dwelling in a station or the unscheduled stopping of any in-service train that remains in a station longer than the scheduled dwell time.

Fleet Availability
• ACOT = Actual car operating time. The total cumulative time, in seconds, of cars that operated within in-service trains. ACOT is calculated by subtracting car downtime from scheduled car operating time (SCOT - CD). The actual number of cars is not to exceed the scheduled number of cars for the time operated, either in the aggregate or in any vehicle/train.
• SCOT = Scheduled car operating time. The total cumulative time, in seconds, of all cars scheduled to operate within in-service trains. SCOT is calculated by multiplying the total number of cars scheduled to operate in the specific operating mode by the time, in seconds, scheduled for that mode.
• CD = Car downtime. The total time, in seconds, of all car downtime events.
• A car downtime event is any of the following:
  – When a car of any in-service train is not fully functional during the time when the system is scheduled to provide service. This car downtime event begins upon discovery of the event or anomaly causing a car to not be fully functional and ends when the anomaly is corrected or the train is removed from service.
  – When a car of any in-service train is not in service during the time when the system is scheduled to provide service. This car downtime event begins when the car is not able to be used for passenger transport and ends when the car is able to be used for passenger transport or when the train is removed from service.
  – When an in-service train operates with fewer than the scheduled number of cars during the time when the system is scheduled to provide service. This car downtime event begins at the time when a train with a deficient number of cars is placed in service against a schedule that requires more cars per train; it ends either when the train is removed from service or when the schedule is automatically reduced due to an operating period transition, allowing the previously deficient number of cars in the train to be sufficient.
  – When the system operates with fewer in-service trains than required by the schedule during the time when the system is scheduled to provide service. This car downtime event begins at the time the system is operated with fewer trains than required by the schedule; it ends either when the scheduled number of trains are placed into service or when the schedule is automatically reduced due to an operating period transition, allowing the previously deficient number of trains to be sufficient.

  If any of these downtime events occur at the same time or overlap one another, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.
• Fully functional car. An in-service car without any anomaly that could be noticed by a passenger. For example, the failure of an HVAC unit on a hot day, restricted speed of a train due to low tire pressure, spilled coffee on the car floor, or graffiti etched into a window would all prevent a car from being fully functional. A car with an out-of-service coupler on the end of the train, a failed smoke detector, and a failed hub-o-meter are examples of anomalies that would likely go unnoticed by passengers and therefore not prevent a car from being fully functional.
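Numerically, SA_C is a service-time-weighted average of the product of the three component availabilities across operating periods (for example, peak and off-peak modes). A minimal Python sketch under that reading, with illustrative period names and data (not from the guidebook):

```python
def tier_c_availability(periods):
    """SA_C from per-period service time and component availabilities.

    periods: list of (ST hours, A_SM, A_F, A_SPD), one entry per service
    period p. SA_TF = ST * A_SM * A_F * A_SPD, and
    SA_C = (sum of SA_TF / sum of ST) * 100.
    """
    sa_tf_total = sum(st * a_sm * a_f * a_spd
                      for st, a_sm, a_f, a_spd in periods)
    st_total = sum(st for st, *_ in periods)
    return 100.0 * sa_tf_total / st_total

# Hypothetical day: a 4-hour peak period and a 16-hour off-peak period.
periods = [
    (4.0, 0.995, 0.98, 0.999),   # peak: slight fleet shortfall
    (16.0, 0.999, 1.00, 1.000),  # off-peak: near-perfect operation
]
print(round(tier_c_availability(periods), 2))  # -> about 99.4
```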
For incomplete trips where a train fails to make a station stop on its route before arriving at its destination terminal, the downtime begins at the moment the train bypasses a station and ends at the start of the next station dwell for the same train. For incomplete trips where a train fails to finish a trip on the scheduled route, the downtime begins at the moment the train ceases its trip on the route and ends at the scheduled arrival time of the trip for the scheduled destination terminal on the route.

If any of these downtime events occur at the same time or overlap one another, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.

• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• Incomplete trip is the trip of an in-service train that fails to make a station stop on the scheduled route or that fails to finish the trip on the scheduled route.

• Unscheduled stoppage is the unscheduled stopping of any in-service train that is not dwelling in station or the unscheduled stopping of any in-service train that remains in station longer than the scheduled dwell time.

Fleet Availability
• ACOT = Actual car operating time. The total cumulative time, in seconds, of cars that operated within in-service trains. ACOT is calculated by subtracting car downtime from scheduled car operating time (SCOT – CD). The actual number of cars is not to exceed the scheduled number of cars for the time operated, either in the aggregate or in any vehicle/train.
• SCOT = Scheduled car operating time. The total cumulative time, in seconds, of all cars scheduled to operate within in-service trains. SCOT is calculated by multiplying the total number of cars scheduled to operate in the specific operating mode by the time, in seconds, scheduled for that mode.
• CD = Car downtime. The total time, in seconds, of all car downtime events.
• Car downtime event is any of the following:
– When the car of any in-service train is not fully functional during the time when the system is scheduled to provide service. This car downtime event begins upon discovery of the event or anomaly causing a car to not be fully functional and ends when the anomaly is corrected or the train is removed from service.
– When the car of any in-service train is not in service during the time when the system is scheduled to provide service. This car downtime event begins when the car is not able to be used for passenger transport and ends when the car is able to be used for passenger transport or when the train is removed from service.
– When an in-service train operates with fewer than the scheduled number of cars during the time when the system is scheduled to provide service. This car downtime event begins at the time when a train with a deficient number of cars is placed in service against a schedule that requires more cars per train; it ends either when the train is removed from service or when the schedule is automatically reduced due to an operating period transition, allowing the previously deficient number of cars in the train to be sufficient.
– When the system operates with fewer in-service trains than required by the schedule during the time when the system is scheduled to provide service. This car downtime event begins at the time the system is operated with fewer trains than required by the schedule; it ends either when the scheduled number of trains are placed into service or when the schedule is automatically reduced due to an operating period transition, allowing the previously deficient number of trains to be sufficient.

If any of these downtime events occur at the same time or overlap one another, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.

• Fully functional car. An in-service car without any anomaly that could be noticed by a passenger. For example, the failure of an HVAC unit on a hot day, restricted speed of a train due to low tire pressure, spilled coffee on the car floor, or graffiti etched into a window would all prevent a car from being fully functional. A car with an out-of-service coupler on the end of the train, a failed smoke detector, and a failed hub-o-meter are examples of anomalies that would likely go unnoticed by passengers and therefore not prevent a car from being fully functional.
• In-service car is a car located in the passenger-carrying portion of the system that passengers are able to use for transport. Where individual cars are not provided, the language in this section is to apply to vehicles. See Section 5.1.6 for definitions and discussion of car, vehicle, and train.

Station Platform Door Availability
• APDOT = Actual station platform door operating time. The total time, in seconds, that station platform doors were in service, calculated by subtracting door downtime from scheduled station platform door operating time (SPDOT – DD).
• SPDOT = Scheduled station platform door operating time. The total time, in seconds, that the station platform doors were scheduled to be in service, calculated by multiplying the scheduled number of platform doors to be in service by the time, in seconds, they were scheduled to be in service.
• DD = Door downtime. The total time, in seconds, of all door downtime events.
• Door downtime event is when any station platform door does not fully open upon the arrival of an in-service train. This event begins when any station platform door does not fully open upon the arrival of an in-service train and ends when the in-service train departs. For door downtime events occurring at the same or separate platforms at the same time, the earliest start time and the latest end time of the events, as defined by the rules herein, are to be used in determining downtime.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.

Deliberately employing operating strategies to eliminate or stop the accumulation of downtime by exploiting the intent of the rules herein, especially when those strategies do not benefit the APM user, is not permitted in the context of this system of evaluation.
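To make the three component availabilities concrete, the following is a minimal Python sketch computing A_SM, A_F, and A_SPD from scheduled times and downtimes. All values, in seconds, are invented for illustration; a real implementation would draw them from the CCCS or equivalent records.

```python
# A minimal sketch of the component availability ratios defined above.

def availability(scheduled_seconds: float, downtime_seconds: float) -> float:
    """Generic (scheduled - downtime) / scheduled ratio, e.g., AMOT / SMOT."""
    return (scheduled_seconds - downtime_seconds) / scheduled_seconds

SMOT, MD = 72_000, 300      # scheduled mode operating time, mode downtime
SCOT, CD = 288_000, 1_100   # scheduled car operating time, car downtime
SPDOT, DD = 144_000, 0      # scheduled platform door time, door downtime

A_SM = availability(SMOT, MD)    # service mode availability
A_F = availability(SCOT, CD)     # fleet availability
A_SPD = availability(SPDOT, DD)  # station platform door availability
print(f"A_SM={A_SM:.4f}  A_F={A_F:.4f}  A_SPD={A_SPD:.4f}")
# -> A_SM=0.9958  A_F=0.9962  A_SPD=1.0000
```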

All downtimes are to be quantified and assigned to one of the following predefined causal categories:

• Weather-induced. Failures caused by the weather, such as lightning striking the guideway, or a snow or ice storm.
• Passenger-induced. Failures caused by a passenger, such as a passenger holding the vehicle doors open or a passenger pulling an emergency evacuation handle on an in-service train.
• System equipment-induced. Failures caused by system equipment, such as a broken axle on an in-service train or train control system equipment that fails while in service.
• Facilities-induced. Failures caused by the facilities, such as a station roof leaking water onto the floor immediately in front of one side of the station sliding platform doors, requiring a bypass of that side of the station, or a crack in a guideway pier that limits the number of trains in an area.
• Utility-induced. Failures caused by a utility service provider, such as the loss of an incoming electrical feed to the APM system.
• O&M-induced. Failures caused by personnel affiliated with the O&M organization, such as the mis-operation of the system from the control center or the failure of a maintenance technician to properly isolate a piece of equipment on which he or she is working from the active system operation.
• Other. Failures caused by other issues, such as a terrorist threat or delay due to the transport of a VIP.

There are no provisions for partial service credit in this measure, no allowances for grace periods, and no exclusions for failures. This provides a measure most reflective of the perspective of the airport APM user.

5.3.3.2 Data Requirements and Sources

The data and sources required to calculate Service Availability (Tier C Approach) are provided in Table 9.

5.3.3.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Service Availability (Tier C Approach) performance measure be accomplished daily, since the measure will serve a useful purpose in describing performance when reported daily within an organization.

For this measure, most of the data will typically be collected from records and systems in the control center. In some cases, the CCCS that is part of the ATS subsystem will have the functionality to allow user-defined reports and/or performance measures to be generated based on custom rules set by the user, and output data can be generated by the airport APM system itself. After the one-time setup of the performance measure in the CCCS, most of what is needed thereafter are the incidental updates of the causes of particular events, and perhaps not much more, depending on the sophistication of the CCCS and the output data generated by the airport APM system. Control center personnel usually perform these updates after each downtime event or before their shifts are complete. In many cases, this allows reports to be automatically generated (usually daily, monthly, and/or yearly) directly by the CCCS. If this functionality exists within the CCCS, it is recommended that it be used, since it could save time and effort.

Some CCCSs do not have the capability described previously but instead can dump the raw output data acquired from the airport APM system automatically to a batch file or to some other network location connected to the CCCS. This is typically done at the end of the operating day or shortly thereafter.
In this case it may be easiest to import the data into a spreadsheet application with a file specifically developed to calculate and track this performance measure. The application and file could be installed on a personal computer in the control center so that staff there would have the same ability to keep the data current on each shift. It is assumed for the purpose of this guidebook and this performance measure that airport APM systems at least have the capability to retrieve departure times (with train numbers), scheduled opening and closing times, and incomplete trip information in an electronic file format from the CCCS.

Regardless of how the data are collected, some manual updates will need to be undertaken in the application for each downtime event to ensure that the measures are recorded and reported accurately. Specifically, a cause for each downtime event will need to be assigned. These causes are defined and discussed in Section 5.3.3.1. There can be one or more causes assigned to a single downtime event. For example, there may be one downtime event for the day, which was initially caused by weather, but the recovery of the system was delayed further by a technician who forgot to remove his or her red tags and locks from the breakers that allow the system to be energized and restored. This oversight extended the delay by 1 hour. If the total downtime for the event was 2 hours, then half of the downtime (3,600 sec) would be assigned to "weather" and the other half (3,600 sec) to "O&M" (see the sketch below).
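The following is a minimal sketch of how the causal assignment in the example above might be recorded, with the 2-hour event split evenly between "weather" and "O&M." The event record structure is hypothetical; only the causal category names come from Section 5.3.3.1.

```python
# A minimal sketch of splitting one downtime event's seconds across causes.

downtime_by_cause = {"weather": 0, "passenger": 0, "system equipment": 0,
                     "facilities": 0, "utility": 0, "o&m": 0, "other": 0}

event = {"total_seconds": 7_200,                          # 2-hour event
         "allocation": {"weather": 3_600, "o&m": 3_600}}  # 1 hour each

# The allocated seconds must account for the whole event.
assert sum(event["allocation"].values()) == event["total_seconds"]
for cause, seconds in event["allocation"].items():
    downtime_by_cause[cause] += seconds

print(downtime_by_cause["weather"], downtime_by_cause["o&m"])  # -> 3600 3600
```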

To track performance over time, it is recommended that Service Availability (Tier C Approach) be calculated for the day, month, and year, with all of those measures reported daily to the hundredth of a percent. The measures reported for the month and the year are always cumulative-to-date, and they reset upon the start of the new month or new year. For example, if the daily report is being issued for the 10th of February for a particular year, the reported daily measure would be for the 10th of February, the reported monthly measure would be the cumulative availability of days one through 10 of February, and the reported yearly measure would be the cumulative availability of the days from January 1st through February 10th. Downtime event causes could be reported similarly.

Table 9. Data requirements and sources, Airport APM Performance Measure #1: Service Availability (Tier C Approach).

| | Data Requirement | Source |
|---|---|---|
| 1 | Scheduled arrival and departure times, by car, vehicle, and train number, of in-service trains at the terminal stations of each route in the system | ATS subsystem of the ATC system; typically recorded by the CCCS |
| 2 | Actual arrival and departure times, by car, vehicle, and train number, of in-service trains at every station stop in the system | ATS, CCCS |
| 3 | System schedule, including scheduled opening and closing times of the system, scheduled start/end times of service periods, scheduled number of trains and cars or vehicles per train to be in service, and scheduled headway or departure times | ATS, CCCS |
| 4 | Actual dwell start and end times, by car, vehicle, and train number, of all in-service trains at every station stop in the system | ATS, CCCS |
| 5 | Location, time, and car, vehicle, and train number of trips not finished on a scheduled route | Control center logbooks; incident reports; ATS, CCCS |
| 6 | Times of all zero-speed and non-zero-speed indications for all in-service trains, by car, vehicle, and train number and location | ATS, CCCS |
| 7 | Times and locations of in-service trains placed into and taken out of service, by car, vehicle, and train number | Control center logbooks; incident reports; ATS, CCCS |
| 8 | Number of automatic station platform doors in the system | System description manual; schedule |
| 9 | Times of all train and station platform doors' closed and locked status in the system, by car, vehicle, and train number, and terminal station location | ATS, CCCS |
| 10 | Times of train and station platform door opening faults, by car, vehicle, and train number, and station location | Control center logbooks; incident reports; ATS, CCCS |
| 11 | Cause of failures | Control center logbooks; incident reports; work orders; ATS, CCCS |

Service Availability (Tier C Approach) is calculated as follows (a calculation sketch follows this list):

• First, service mode availability (A_SM), fleet availability (A_F), and station platform door availability (A_SPD) are calculated for each service period (p).
• Second, the time-factored service availability values (SA_TF) are calculated for each service period.
• Third, the time-factored service availability values (SA_TF) for all service periods to be reported upon are summed.
• Fourth, the service times (ST) of all service periods are summed.
• Fifth, Service Availability (Tier C Approach) is calculated by dividing the sum of time-factored service availability values for all service periods by the sum of service times for all service periods (and multiplying by 100 to express the result as a percentage).
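The following is a minimal Python sketch of the five steps above; the two service periods and their availability values are hypothetical.

```python
# A minimal sketch of the five-step Service Availability (Tier C) calculation.
# Each service period carries its service time (ST, hours) and the three
# component availabilities; all numbers are invented for illustration.

periods = [
    {"ST": 4.0, "A_SM": 0.9990, "A_F": 0.9950, "A_SPD": 1.0000},   # peak
    {"ST": 16.0, "A_SM": 0.9995, "A_F": 0.9990, "A_SPD": 0.9985},  # all other
]

# Steps 1-2: per-period availability product, time-factored by ST.
sa_tf = [p["ST"] * p["A_SM"] * p["A_F"] * p["A_SPD"] for p in periods]

# Steps 3-5: sum the time-factored values, sum the service times,
# divide, and scale to a percentage.
sa_c = sum(sa_tf) / sum(p["ST"] for p in periods) * 100
print(f"Service Availability (Tier C Approach): {sa_c:.2f}%")  # -> 99.64%
```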

An example of how Service Availability (Tier C Approach) performance measures could be reported for the day of February 10, 2010, and the associated assignment of downtime, is provided in Table 10, which represents a more comprehensive level of reporting for this measure. The minimum data to be reported for this measure would be as found on Form B in Exhibit A.

Table 10. Example reporting of Airport APM Performance Measure #1: Service Availability (Tier C Approach).

| February 10, 2010 | Day | Month-to-Date | Year-to-Date |
|---|---|---|---|
| Service Availability (Tier C Approach) | 99.27% | 98.77% | 99.75% |
| Service mode availability | 98.65% | 99.74% | 99.33% |
| Fleet availability | 98.55% | 97.44% | 98.65% |
| Station platform door availability | 98.44% | 99.77% | 99.67% |

Downtime, by Availability and Category, for February 10, 2010 (in seconds)

| Category | Day: Mode | Day: Fleet | Day: Door | MTD: Mode | MTD: Fleet | MTD: Door | YTD: Mode | YTD: Fleet | YTD: Door |
|---|---|---|---|---|---|---|---|---|---|
| Weather | 300 | — | — | 400 | — | — | 600 | — | — |
| Passenger | — | 1,100 | — | — | — | — | 2,000 | 1,100 | — |
| System equipment | — | — | 2,000 | — | — | — | — | — | — |
| Facilities | — | — | — | — | — | — | — | — | — |
| Utility | — | — | — | 5,000 | — | — | 5,000 | — | — |
| O&M | — | — | — | — | — | 500 | — | — | 500 |
| Other | — | — | — | — | — | — | — | — | — |

5.3.4 Airport APM Performance Measure #2: Safety Incidents per 1,000 Vehicle Service Miles

5.3.4.1 Definition

Safety Incidents per 1,000 Vehicle Service Miles is the rate at which safety incidents have occurred in the airport APM system. It is defined as:

$$\text{Monthly } SI_{1kvsm} = \frac{\sum_{d=1}^{m} SI}{\sum_{d=1}^{m} VSM} \times 1{,}000$$

$$\text{Yearly } SI_{1kvsm} = \frac{\sum_{d=1}^{y} SI}{\sum_{d=1}^{y} VSM} \times 1{,}000$$

Where:

• SI_1kvsm = Safety Incidents per 1,000 Vehicle Service Miles.
• SI = Number of safety incidents.
• Safety incident is an unintentional event defined as:
– The evacuation of passengers from a train, APM station, or other public or non-public area of the APM system, regardless of whether the evacuation was attended or directed by system or life safety personnel. The removal of passengers from trains or stations for routine operations or maintenance purposes does not constitute an evacuation.
– A mainline derailment. Mainline is defined as the APM guideway in the passenger-carrying portion of the system but not including mainline pocket tracks and storage and turnback/switchback tracks beyond terminals where passengers are prohibited.
– Any incident involving damage to APM system property wherein safety was compromised during the incident or the damage compromises safety going forward. APM system property is defined as any APM system equipment within the APM system or any APM facilities and related facilities equipment within the system, such as the guideway, traction power substations, APM stations, station escalators and elevators, other APM equipment rooms, and the M&SF.
– Any verified incident involving any person on APM system property (e.g., on a train, in an APM equipment room, in an APM station, on the guideway, at the M&SF, along the right-of-way) that resulted in injury or could have resulted in injury. Injury is defined as an incident that requires any medical attention, including first aid treatment.
– Application of the emergency brake(s) on a moving in-service train in the passenger-carrying portion of the system, but not including mainline pocket tracks and storage and turnback/switchback tracks beyond terminals where passengers are prohibited.
– The fatality of any person on APM system property (e.g., on a train, in an APM equipment room, in an APM station, on the guideway, at the M&SF, along the right-of-way).

• VSM = Vehicle service miles. Vehicle service miles is defined as the total miles traveled by all in-service vehicles in the system, with a vehicle being in service when located in the passenger-carrying portion of the system and when passengers are able to use it for transport (see Section 5.2.2 for further clarification).
• d = Day of the month or year, as applicable.
• m = Days in the month.
• y = Days in the year.

Safety incidents should not be double counted. For example, if all trains are evacuated as a result of a total loss of incoming power from the utility service provider, this would be recorded as one safety incident, as opposed to one safety incident per train evacuation. When more than one of the events that define a safety incident (as described previously) occur during the same incident, the order of precedence in classifying the safety incident is as follows: (1) fatality, (2) injury, (3) evacuation, (4) mainline derailment, and (5) property damage. For example, a mainline train derails and, as a result, three passengers are transported to the hospital for treatment of their injuries. This is defined as a safety incident because of the mainline derailment, because of the injuries involved, and possibly because of property damage. In this case, one safety incident would be recorded as a result of the injury event.

In addition to all safety incidents being classified according to the event by which they are defined, safety incidents are also to be assigned to one of the following predefined causal categories:

• Weather-induced. Safety incidents caused by the weather, such as lightning striking the guideway, or a snow or ice storm.
• Passenger-induced. Safety incidents caused by a passenger, such as a passenger pulling an emergency evacuation handle on an in-service train.
• System equipment-induced. Safety incidents caused by system equipment, such as a broken axle on an in-service train.
• Facilities-induced. Safety incidents caused by the facilities, such as a station roof leaking water onto the floor immediately in front of the station sliding platform doors.
• Utility-induced. Safety incidents caused by a utility service provider, such as the loss of one or more incoming electrical feeds to the APM system.
• O&M-induced. Safety incidents caused by personnel affiliated with the operations and/or maintenance organization, such as the mis-operation of the system from the control center or the failure of a maintenance technician to properly isolate a piece of equipment on which he or she is working from the active system operation.
• Other. Safety incidents caused as a result of other reasons.
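The following is a minimal Python sketch of the rate calculation and the single-count precedence rule defined above. The incident records and daily mileage are hypothetical; an emergency brake application occurring alone would simply be classified as such, since the precedence list only arbitrates among coinciding events.

```python
# A minimal sketch of Safety Incidents per 1,000 Vehicle Service Miles.

PRECEDENCE = ["fatality", "injury", "evacuation",
              "mainline derailment", "property damage"]

def classify(events_in_incident: list[str]) -> str:
    """Return one classification per incident, per the precedence order."""
    for event_type in PRECEDENCE:
        if event_type in events_in_incident:
            return event_type
    return events_in_incident[0]  # e.g., an E.B. application occurring alone

incidents = [["mainline derailment", "injury"]]  # one incident, counted once
daily_vsm = [1_200.0] * 28                       # miles per day for the month

si_1kvsm = len(incidents) / sum(daily_vsm) * 1_000
print(classify(incidents[0]), f"{si_1kvsm:.3f}")  # -> injury 0.030
```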
5.3.4.2 Data Requirements and Sources

The data and sources required to calculate Safety Incidents per 1,000 Vehicle Service Miles are provided in Table 11.

Table 11. Data requirements and sources, Airport APM Performance Measure #2: Safety Incidents per 1,000 Vehicle Service Miles.

| | Data Requirement | Source |
|---|---|---|
| 1 | Number of safety incidents and the event classifications by which they are defined | Control center logbooks; incident reports; work orders; reports and/or records of life-safety agencies |
| 2 | Vehicle service miles | ATS, CCCS; vehicle maintenance records |
| 3 | Cause of safety incidents | Control center logbooks; incident reports; work orders |

5.3.4.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Safety Incidents per 1,000 Vehicle Service Miles performance measure be accomplished daily but be reported no more frequently than monthly, since safety incidents in airport APM systems are relatively rare. In addition, the numeric value of the measure, if reported daily, could be misinterpreted to be high because the underlying basis is only 1 day's worth of vehicle service miles, as opposed to 30 days' worth of vehicle service miles in a monthly reported measure.

For this measure, most of the data will typically be collected from records and systems in the control center. Where the functionality of the CCCS and the specificity of the APM system output data allow, it may be possible to collect data for the Safety Incidents per 1,000 Vehicle Service Miles performance measure directly from the CCCS. After the one-time setup of the performance measure in the CCCS, all that may be needed thereafter are the incidental updates of classifying incidents as safety incidents, where appropriate, and categorizing safety incidents by cause.

Control center personnel usually perform these updates after each incident or before their shifts are complete. In many cases, this allows reports to be automatically generated directly by the CCCS. It is recommended that this process be instituted if the systems and data will support it.

Other airport APM systems may have to rely on a process that is separate from the CCCS for this performance measure. For example, vehicle service miles may have to be obtained from the vehicle maintenance department, and safety incidents may have to be determined from a separate incident reporting system. If this is the case, then it is recommended that the data required for this performance measure be manually collected daily and entered in a file of a spreadsheet application containing the necessary formulas and placeholders to calculate the measure.

In some cases, the information required to determine whether an incident is a safety incident may not be readily available from data residing at the airport APM system. For example, if a passenger sustains an injury while in the system but leaves the system and then requests emergency medical assistance from airport life safety agencies while still at the airport, that information either may never be known or may only become known at a later date. In such cases, the O&M organization should have a protocol in place to be automatically alerted to this type of information, when available, from life safety agencies at the airport.

Regardless of how the data are collected, the safety incident will need to be classified into one of the definitions described in Section 5.3.4.1, and the cause of the safety incident will need to be assigned. As mentioned previously, this is often accomplished through manual updates, regardless of the process employed.

To track performance over time, it is recommended that Safety Incidents per 1,000 Vehicle Service Miles be calculated for the month and year, with those measures reported monthly to the thousandths. The measure reported for the year is always cumulative-to-date, and it resets upon the start of the new year. For example, if the monthly report is being issued for the month of February for a particular year, the reported monthly measure would be for the entire month of February, and the reported yearly measure would be the cumulative measure of Safety Incidents per 1,000 Vehicle Service Miles from January 1st through February 28th of that year.

An example of how Safety Incidents per 1,000 Vehicle Service Miles performance measures could be reported for the month of February 2010, and the associated classifications and categories of those incidents, is provided in Table 12, which represents a more comprehensive level of reporting for this measure. The minimum data to be reported for this measure would be as found on Form B in Exhibit A.

Table 12. Example reporting of Airport APM Performance Measure #2: Safety Incidents per 1,000 Vehicle Service Miles.

| February 2010 | Month | Year-to-Date |
|---|---|---|
| SI/1kVSM | 0.003 | 0.002 |

Safety Incidents, by Category and Classification, for February 2010 (Month/YTD)

| Category | Fatalities | Injuries | Evacuations | Mainline Derailments | Property Damage | E.B. Applications |
|---|---|---|---|---|---|---|
| Weather | —/— | —/— | —/— | —/— | —/— | —/— |
| Passenger | —/— | 1/1 | —/— | —/— | —/— | —/— |
| Sys. eqp. | —/— | —/— | —/— | —/— | —/— | —/— |
| Facilities | —/— | —/— | —/— | —/— | —/— | —/— |
| Utility | —/— | —/— | 1/1 | —/— | —/— | —/— |
| O&M | —/— | —/— | —/— | —/— | —/— | —/— |
| Other | —/— | —/— | —/— | —/— | —/— | —/— |

5.3.5 Airport APM Performance Measure #3: O&M Expense per Vehicle Service Mile

5.3.5.1 Definition

O&M Expense per Vehicle Service Mile is the operations and maintenance expense for an airport APM system per vehicle service mile performed. It is defined as:

$$\text{Monthly } OME_{vsm} = \frac{\sum_{d=1}^{m} OME}{\sum_{d=1}^{m} VSM}$$

$$\text{Yearly } OME_{vsm} = \frac{\sum_{d=1}^{y} OME}{\sum_{d=1}^{y} VSM}$$

Where:

• OME_vsm = O&M Expense per Vehicle Service Mile.
• OME = O&M expense.
O&M expense consists of the expenses associated with the operation and maintenance of the airport APM system that typically have a useful life of less than 1 year or an acquisition cost that equals the lesser of (1) $5,000 or (2) the capitalization level established by the owner in accordance with its financial accounting practices.

O&M expense includes expenses for:

– Salaries, wages, and fringe benefits. The salaries, wages, and fringe benefits expenses for all operations, maintenance, and general and administrative personnel employed to manage, operate, or maintain the airport APM system. Fringe benefits are expenses for FICA; pension plans; hospital, medical, and surgical plans; dental plans; life insurance plans; short-term disability plans; unemployment insurance; worker's compensation insurance; sick leave, holiday, vacation, and other paid absence pay; uniform and work clothing allowance; and other similar salaries, wages, and fringe benefits expenses.
– Services. The services expenses for managing, operating, or maintaining the airport APM system, or the services expenses for supporting the operation and maintenance of the system, which include expenses for management service fees; advertising fees, where applicable; professional and technical services; temporary help; contract operation and/or maintenance services; custodial services; security services; and expenses for other services.
– Materials and supplies. The materials and supplies expenses required to manage, operate, or maintain the airport APM system, which include expenses for vehicle and non-vehicle maintenance materials and supplies, administrative supplies, and expenses for other materials and supplies to operate or maintain the airport APM system.
– Utilities. The utilities expenses for operating or maintaining the airport APM system, including expenses for propulsion and system power and all other utilities expenses. Expenses for utilities not related to the operation or maintenance of the airport APM system are not to be included.
– Other. The other expenses for operating or maintaining the airport APM system, such as expenses for casualty and liability, taxes, interest expense, leases and rentals, depreciation, purchase lease payments, expense transfers, and miscellaneous expenses such as dues and subscriptions, charitable donations, and travel and meetings.

• VSM = Vehicle service miles. Vehicle service miles is defined as the total miles traveled by all in-service vehicles in the system, with a vehicle being in service when located in the passenger-carrying portion of the system and when passengers are able to use it for transport (see Section 5.2.2 for further clarification).
• d = Day of the month or year, as applicable.
• m = Days in the month.
• y = Days in the year.

5.3.5.2 Data Requirements and Sources

The data and sources required to calculate O&M Expense per Vehicle Service Mile are provided in Table 13.

5.3.5.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the O&M Expense per Vehicle Service Mile performance measure be accomplished monthly and be reported no more frequently than monthly. As with previous measures, vehicle service miles data may be able to be obtained automatically from the CCCS or, depending on the level of sophistication of the CCCS and/or the output data from the airport APM system, may need to be obtained from the vehicle maintenance department.
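The following is a minimal Python sketch of the monthly calculation, reporting the measure both with and without utilities expenses (a practice recommended later in this section for systems with dedicated utility meters). All dollar and mileage figures are hypothetical.

```python
# A minimal sketch of O&M Expense per Vehicle Service Mile for one month.

daily_expense_with_utilities = [28_000.0] * 28  # $/day, utilities included
daily_utilities_expense = [6_000.0] * 28        # $/day, utilities only
daily_vsm = [1_200.0] * 28                      # vehicle service miles per day

total_vsm = sum(daily_vsm)
ome_with = sum(daily_expense_with_utilities) / total_vsm
ome_without = (sum(daily_expense_with_utilities)
               - sum(daily_utilities_expense)) / total_vsm
print(f"w/ utilities: ${ome_with:.2f}/VSM  w/o utilities: ${ome_without:.2f}/VSM")
# -> w/ utilities: $23.33/VSM  w/o utilities: $18.33/VSM
```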
Airport APM O&M expense data will likely be obtained from the owner of the airport APM system or the O&M services provider, and may in certain cases incorrectly include expense data not associated with the operation or maintenance of the airport APM system. The best example of this may be expenses for utility power, which may include expenses for power not associated with the operation or maintenance of the airport APM system. For example, the meters on some services serving the APM may not be dedicated to only the APM services, but rather may be on a common upstream feed that branches off to the APM service and another non-APM service.

Table 13. Data requirements and sources, Airport APM Performance Measure #3: O&M Expense per Vehicle Service Mile.

| | Data Requirement | Source |
|---|---|---|
| 1 | Airport APM system O&M expenses, with and without utilities expenses, required for operating and maintaining the system | Airport APM system owner's financial records and/or financial statements in its computerized accounting system; utilities service providers' billing statements; operations and/or maintenance services provider's financial records and/or financial statements in its computerized accounting system |
| 2 | Vehicle service miles | ATS, CCCS; vehicle maintenance records |

In such a case, it may be difficult, if not impossible, to determine the expense for the APM service versus that of the non-APM service. In anticipation of this potential problem, it is recommended that, for those systems that have dedicated meters/services, total O&M expenses be reported both with and without utilities expenses. For those systems that share services with other non-APM services, total O&M expenses should be reported without utilities expenses only.

To track performance over time, it is recommended that O&M Expense per Vehicle Service Mile be calculated for the month and year, with those measures reported monthly to the cent. The measure reported for the year is always cumulative-to-date, and it resets upon the start of the new year. For example, if the monthly report is being issued for the month of February for a particular year, the reported monthly measure would be for the entire month of February, and the reported yearly measure would be the cumulative measure of O&M Expense per Vehicle Service Mile from January 1st through February 28th of that year.

An example of how O&M Expense per Vehicle Service Mile performance measures could be reported for the month of February 2010 is provided in Table 14, and the Airport APM Performance Measures reporting form can be found in Exhibit A as Form B.

Table 14. Example reporting of Airport APM Performance Measure #3: O&M Expense per Vehicle Service Mile.

| February 2010 | Month: w/Utilities Expense | Month: w/o Utilities Expense | Year-to-Date: w/Utilities Expense | Year-to-Date: w/o Utilities Expense |
|---|---|---|---|---|
| O&M Expense per Vehicle Service Mile | $24.05 | $18.15 | $20.45 | $17.50 |

5.3.6 Airport APM Performance Measure #4: Actual and Scheduled Capacity (Peak Versus All Other)

5.3.6.1 Definition

Actual and Scheduled Capacity (Peak Versus All Other) is the comparison of actual cumulative line capacity to scheduled cumulative line capacity for peak periods versus all other periods in an airport APM system. Actual and Scheduled Capacity (Peak Versus All Other) is defined as:

$$SC_P = \sum_{x=A}^{n} SC_{P,x} \qquad SC_{AO} = \sum_{x=A}^{n} SC_{AO,x}$$

$$AC_P = \sum_{x=A}^{n} AC_{P,x} \qquad AC_{AO} = \sum_{x=A}^{n} AC_{AO,x}$$

$$SC_{P,x} = SNT_{P,x} \times CDC \times SNCT_{P,x} \qquad SC_{AO,x} = SNT_{AO,x} \times CDC \times SNCT_{AO,x}$$

$$AC_{P,x} = ANT_{P,x} \times CDC \times ANCT_{P,x} \qquad AC_{AO,x} = ANT_{AO,x} \times CDC \times ANCT_{AO,x}$$

Where:

• SC = Scheduled capacity. The scheduled cumulative line capacity.
• AC = Actual capacity. The actual cumulative line capacity.
• P = Peak periods.
• AO = All other periods.
• x = Train consist type (e.g., type A for a two-car train; type B for a three-car train).
• n = Number of train consist types.
• Trip = The departure of an in-service train from the scheduled originating system terminal and arrival at the scheduled destination system terminal.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• SNT = Scheduled number of trips. The scheduled number of trips departing from the busiest system terminal.
• ANT = Actual number of trips. The actual number of trips departing from the busiest system terminal.
• CDC = Car design capacity. The originally specified design capacity of the car, expressed as the number of passengers per car. If this information is unavailable, then CDC is the sum of:
– The number of passengers that can be accommodated in seats in a car (not including flip-up and stowable seats), plus
– The number of standee passengers that can be accommodated in a car based on one standing passenger for each 2.7 ft² of standee floor area. Standee floor area is defined as the area available to standing passengers, including the area occupied by flip-up and stowable seats (all non-fixed seats).
• SNCT = Scheduled number of cars per trip.
• ANCT = Actual number of cars per trip.

Where individual cars are not provided, the language in this section is to apply to vehicles. See Section 5.1.6 for definitions and discussion of car, vehicle, and train.

5.3.6.2 Data Requirements and Sources

The data and sources required to calculate Actual and Scheduled Capacity (Peak Versus All Other) are provided in Table 15.

5.3.6.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Actual and Scheduled Capacity (Peak Versus All Other) performance measure be accomplished daily, since the measure may serve a useful purpose in describing performance when reported daily within an organization. This measure might also be interesting when compared with the same measure for other airport APM systems, which can provide perspective on the sizes of such systems.

For this measure, most of the data will typically be collected from records and systems in the control center. In some cases, the CCCS that is part of the ATS subsystem will have the functionality to allow user-defined reports and/or performance measures to be generated based on custom rules set by the user, and output data can be generated by the airport APM system itself. After the one-time setup of the performance measure in the CCCS, most of what is needed thereafter are the incidental updates of eliminating incomplete trips, and perhaps not much more, depending on the sophistication of the CCCS and the output data generated by the airport APM system. Control center personnel usually perform these updates after each downtime event or before their shifts are complete. In many cases, this allows reports to be automatically generated, usually daily, monthly, and/or yearly, directly by the CCCS. If this functionality exists within the CCCS, it is recommended that it be used, since it could save time and effort.

Some CCCSs do not have the capability described previously but instead can dump the raw output data acquired from the airport APM system automatically to a batch file or to some other network location connected to the CCCS. This is typically done at the end of the operating day or shortly thereafter. In this case it may be easiest to import the data into a spreadsheet application with a file specifically developed to calculate and track this performance measure. The application and file could be installed on a personal computer in the control center so that staff there would have the same ability to keep the data current on each shift.

It is assumed for the purpose of this guidebook and this performance measure that airport APM systems have the capability to provide the data requirements at least through the export of data files for use by control center and/or other personnel in their analysis and calculations of the performance measure discussed herein.
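Given the definitions in Section 5.3.6.1, the following is a minimal Python sketch of the capacity calculation for the peak period, including the fallback car design capacity rule. All consist data are hypothetical; CDC would normally come from the conformed systems contract documents.

```python
# A minimal sketch of scheduled and actual peak-period capacity,
# summed over train consist types x.

def cdc_fallback(fixed_seats: int, standee_area_sqft: float) -> int:
    """Fallback CDC when the specified value is unavailable:
    fixed seats plus one standee per 2.7 sq ft of standee floor area."""
    return fixed_seats + int(standee_area_sqft / 2.7)

consists = [  # one entry per consist type x; all values are invented
    {"CDC": 100, "SNT": 40, "SNCT": 2, "ANT": 40, "ANCT": 2},  # two-car trains
    {"CDC": 100, "SNT": 40, "SNCT": 3, "ANT": 39, "ANCT": 3},  # three-car trains
]

SC_P = sum(c["SNT"] * c["CDC"] * c["SNCT"] for c in consists)  # scheduled
AC_P = sum(c["ANT"] * c["CDC"] * c["ANCT"] for c in consists)  # actual
print(SC_P, AC_P, cdc_fallback(10, 250.0))  # -> 20000 19700 102
```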
Table 15. Data requirements and sources, Airport APM Performance Measure #4: Actual and Scheduled Capacity (Peak Versus All Other).

| | Data Requirement | Source |
|---|---|---|
| 1 | Scheduled arrival and departure times, by car, vehicle, and train number, of in-service trains at the terminal stations of each route in the system | ATS subsystem of the ATC system; typically recorded by the CCCS |
| 2 | Actual arrival and departure times, by car, vehicle, and train number, of in-service trains at terminal stations of each route in the system | ATS, CCCS |
| 3 | System schedule, including scheduled opening and closing times of the system, scheduled start/end times of service periods, scheduled number of trains and cars or vehicles per train to be in service, and scheduled headway or departure times | ATS, CCCS |
| 4 | Car design capacity (design loading) | Conformed systems contract documents; authorized official from the operating organization |

To track performance over time, it is recommended that Actual and Scheduled Capacity (Peak Versus All Other) be calculated for the day, month, and year, with all of those measures reported daily and rounded to the hundreds. The measures reported for the month and the year are always cumulative-to-date, and they reset upon the start of the new month or new year. For example, if the daily report is being issued for the 10th of February for a particular year, the reported daily measure would be for the 10th of February, the reported monthly measure would be the cumulative measure for days one through 10 of February, and the reported yearly measure would be the cumulative measure for the days from January 1st through February 10th.

An example of how the Actual and Scheduled Capacity (Peak Versus All Other) performance measures could be reported for the day of February 10, 2010, is provided in Table 16, and the Airport APM Performance Measures reporting form can be found in Exhibit A as Form B.

Table 16. Example reporting of Airport APM Performance Measure #4: Actual and Scheduled Capacity (Peak Versus All Other) for February 10, 2010 (in total passengers).

| | Day: Peak | Day: All Other | Month-to-Date: Peak | Month-to-Date: All Other | Year-to-Date: Peak | Year-to-Date: All Other |
|---|---|---|---|---|---|---|
| Scheduled capacity | 16,000 | 24,000 | 160,000 | 240,000 | 656,000 | 984,000 |
| Actual capacity | 15,900 | 22,300 | 156,200 | 220,100 | 648,700 | 909,600 |

5.3.7 Airport APM Performance Measure #5: Passenger Satisfaction

5.3.7.1 Definition

Passenger Satisfaction is the degree or level of contentment of passengers using the airport APM system and is defined as:

$$PS = \frac{\sum_{SE=1}^{NSE} MPS_{SE}}{NSE}$$

$$MPS_{SE} = \frac{\sum_{S=1}^{NS} PS_{SE}}{NS}$$

Where:

• PS = Passenger Satisfaction.
• SE = Survey element. The particular topic in the passenger satisfaction survey about which airport APM passengers are questioned, as follows:
– 1: System availability/wait time
– 2: Convenience/trip time
– 3: Comfort/ride quality and cleanliness
– 4: Ease of use/wayfinding
– 5: Informational/announcements
– 6: Helpfulness of staff
– 7: Responsiveness to complaints
• NSE = Number of survey elements. The number of survey elements in the passenger satisfaction survey with a mean passenger satisfaction (MPS_SE) greater than 0. If any MPS_SE equals 0, it is not to be included in the count of NSE.
• PS_SE = Passenger satisfaction per survey element. The passenger satisfaction rating of a survey element on a passenger satisfaction survey.
• MPS_SE = Mean passenger satisfaction per survey element. The mean passenger satisfaction rating of a survey element across NS passenger satisfaction surveys.
• S = Survey. A completed passenger satisfaction survey.
• NS = Number of surveys. The number of completed passenger satisfaction surveys for a particular survey element. "Completed" means that a survey element has been given a numerical rating of 1 to 5. If a survey element has not been answered or has been answered as "N/A" or "0," then the survey element is considered incomplete and is not to be included in the count of NS.

5.3.7.2 Data Requirements and Sources

The data and sources required to calculate Passenger Satisfaction are provided in Table 17. The sources of data for the Passenger Satisfaction measure will likely be the passenger satisfaction survey, an example of which is provided in Exhibit A, and passenger satisfaction surveyor records, which are discussed in more detail in the next section.

5.3.7.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Passenger Satisfaction performance measure be accomplished throughout the month, with reporting of the measure upon closeout of each month.
For this measure, data can be collected using one or more of the following methods:

• Forms. Survey forms, as provided in Exhibit A, should be reduced to a postcard size and be readily available to airport APM passengers on the trains, in the stations, and in other high-circulation areas where airport APM passengers will pass by or wait. Secure drop boxes should be well placed in stations or similar areas to allow passengers to return the completed surveys with ease. Addresses should be preprinted on the opposite side of the card to allow passengers to return the surveys by U.S. postal mail as well. Survey forms may be the most cost-effective method to obtain data for the Passenger Satisfaction measure, and therefore are likely to be the most commonly used.
• Face-to-face contact. Employing a subcontractor, or employees of the airport or the APM O&M organization, is another method that can be used to obtain Passenger Satisfaction data. The surveyor(s) can be posted at a station and ask alighting (rather than boarding) airport APM passengers to rate their satisfaction for the survey elements provided in Exhibit A. This data collection method can be costly if performed on a regular basis but is likely the best way to obtain objective feedback. It may also be the best way to obtain the greatest quantities of feedback, thereby providing greater confidence in the overall Passenger Satisfaction performance measure.
• Phone-in. The phone-in survey method could be a convenient way of collecting data from passengers because of the popularity and common use of cellular telephones. A toll-free number could be posted throughout the airport APM system inviting passengers to phone in their perceptions of their experience using the system. The survey elements could be collected via a system where passengers listen to the questions and then provide their single-digit numerical rating when prompted. At the end of the survey, passengers could leave a voicemail, if desired. This method, although convenient for both passengers and the organization collecting the data, could be costly, at least for the initial investment in the telephone/computer system that would manage this effort.
• Email/Internet. The email/Internet survey method is similar to the phone-in survey method in that an email or web address could be posted at various locations in the airport APM system inviting passengers to complete the survey form via email or directly in a web browser via the Internet. Passengers could again likely use their cell phones with this method, which makes it convenient for both the passenger and the organization collecting the data. It could even prove to increase the response rate as compared to other methods. In addition, this method would likely be appreciably more cost effective than the phone-in or face-to-face methods, making it one of the more attractive options.

The primary difference between these data collection methods is that feedback occurs only upon the initiative of the passenger under the forms, phone-in, and email/Internet methods, whereas for the face-to-face method, feedback occurs upon the initiative of the data collection organization through interactive, one-on-one contact.

Table 17. Data requirements and sources, Airport APM Performance Measure #5: Passenger Satisfaction.

| | Data Requirement | Source |
|---|---|---|
| 1 | Date of passenger feedback | Passenger satisfaction surveys; passenger satisfaction surveyor records |
| 2 | Number of survey elements | Passenger satisfaction surveys; passenger satisfaction surveyor records |
| 3 | Number of surveys | Passenger satisfaction surveys; passenger satisfaction surveyor records |
| 4 | Numerical rating of each survey element | Passenger satisfaction surveys; passenger satisfaction surveyor records |
If an organization relies solely on obtaining feedback via methods dependent only on the passengers' initiative, the Passenger Satisfaction measure could be more representative of passenger dissatisfaction, since passengers might be more motivated to provide feedback after a bad experience than after an expected or good experience. Under this premise, it is recommended that organizations at a minimum collect 10 surveys per month (approximately 2 to 3 per week) using an employee, for example, to elicit and record responses to survey elements via face-to-face contact with passengers. It is also recommended that a permanent, continuous data collection method be implemented using one of the other methods listed previously to acquire as much feedback on passenger satisfaction performance for the airport APM system as possible. The more surveys that have been completed for the month via both of these methods, the more confidence there can be that the Passenger Satisfaction performance measure is generally representative of overall passenger satisfaction.

The calculation of the Passenger Satisfaction measure is to be accomplished using the numerical ratings of the survey elements assigned by the passengers. The numerical rating value of survey element number one, for example, is summed across all surveys and then divided by the number of those surveys to obtain the mean passenger satisfaction for that survey element. The same is done for all other survey elements. Then, the mean passenger satisfaction values for the survey elements are summed and divided by the number of survey elements to obtain the representative value for the Passenger Satisfaction measure. When any of the survey element ratings or mean passenger satisfaction values are 0, they are not included in the number of surveys or survey elements in the divisor. This avoids an undue penalty for a question left blank or a "do not know" response. A sketch of this calculation follows Table 18.

To track performance over time, it is recommended that Passenger Satisfaction be calculated for the month and year, with those measures reported monthly to two decimal places. Passenger Satisfaction for the month would reset at the conclusion of each month, and Passenger Satisfaction for the year would be reported monthly as year-to-date and reset at the conclusion of the year. Care should be exercised in reporting Passenger Satisfaction measures more frequently than monthly (i.e., month-to-date in conjunction with a daily report, for example) unless at least 10 surveys have been collected via face-to-face contact. The goal is ultimately to provide reporting on the measure that reflects a significant enough survey sample size without overburdening the organization responsible for collecting that data.

The data collected for the Passenger Satisfaction measure could be further analyzed by calculating standard deviations or plotting histograms, which can identify problems not recognized through monthly reporting of a single numerical value, such as the mean passenger satisfaction measure set forth herein.

An example of how Passenger Satisfaction and mean passenger satisfaction for each survey element could be reported for the month of February 2010 is provided in Table 18, which represents a more comprehensive level of reporting for this measure. The minimum data to be reported for this measure would be as found on Form B in Exhibit A.

Table 18. Example reporting of Airport APM Performance Measure #5: Passenger Satisfaction.

| Passenger Satisfaction | Month | Year-to-Date |
|---|---|---|
| February 2010 | 4.30 = High | 4.00 = High |

Mean Passenger Satisfaction, by Survey Element, February 2010

| Survey Element | Month | Year-to-Date |
|---|---|---|
| System availability/wait time | 4.00 | 3.25 |
| Convenience/trip time | 3.50 | 3.25 |
| Comfort/ride quality/cleanliness | 4.00 | 4.00 |
| Ease of use/wayfinding | 5.00 | 4.25 |
| Informational announcements | 5.00 | 4.25 |
| Helpfulness of staff | 0.00 | 5.00 |
| Responsiveness to complaints | 0.00 | 0.00 |

Key: 0–2.44 = Low passenger satisfaction; 2.45–3.44 = Medium passenger satisfaction; 3.45–5.00 = High passenger satisfaction.
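The following is a minimal Python sketch of this calculation, including the rule that 0 or unanswered ratings drop out of both divisors. The survey ratings are invented, but they reproduce the February 2010 month column of Table 18 (five elements rated, two unanswered, overall 4.30).

```python
# A minimal sketch of the Passenger Satisfaction calculation. Each row is
# one completed survey's seven element ratings; 0 means unanswered or "N/A".

surveys = [
    [4, 3, 4, 5, 5, 0, 0],
    [4, 4, 4, 5, 5, 0, 0],
]

num_elements = len(surveys[0])
mps = []  # mean passenger satisfaction per survey element (MPS_SE)
for element in range(num_elements):
    ratings = [s[element] for s in surveys if s[element] > 0]  # exclude 0s
    mps.append(sum(ratings) / len(ratings) if ratings else 0.0)

# Elements with MPS_SE of 0 are excluded from the divisor (NSE).
ps = sum(m for m in mps if m > 0) / sum(1 for m in mps if m > 0)
print([round(m, 2) for m in mps], round(ps, 2))
# -> [4.0, 3.5, 4.0, 5.0, 5.0, 0.0, 0.0] 4.3
```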
For example, if due to Passenger Satisfaction Month Year-To-Date February 2010 4.30 = High 4.00 = High Mean Passenger Satisfaction, by Survey Element February 2010 Month Year-To-Date System availability/wait time 4.00 3.25 Convenience/trip time 3.50 3.25 Comfort/ride quality/cleanliness 4.00 4.00 Ease of use/wayfinding 5.00 4.25 Informational announcements 5.00 4.25 Helpfulness of staff 0.00 5.00 Responsiveness to complaints 0.00 0.00 0–2.44 = Low passenger satisfaction 2.45–3.44 = Medium passenger satisfaction 3.45–5.00 = High passenger satisfaction Table 18. Example reporting of Airport APM Performance Measure #5: Passenger Satisfaction.

• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• d = Day of the month or year, as applicable.
• m = Days in the month.
• y = Days in the year.

5.3.8.2 Data Requirements and Sources

The data and sources required to calculate Missed Stations per 1,000 Station Stops are provided in Table 19.

5.3.8.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Missed Stations per 1,000 Station Stops performance measure be accomplished daily but be reported no more frequently than monthly, since missed stations in airport APM systems are relatively rare. In addition, the numeric value of the measure, if reported daily, could be misinterpreted to be high because the underlying basis is only 1 day's worth of station stops, as opposed to 30 days' worth of station stops in a monthly reported measure.

For this measure, most of the data will typically be collected from records and systems in the control center. Where the functionality of the CCCS and the specificity of the APM system output data allow, it may be possible to collect data for the Missed Stations per 1,000 Station Stops performance measure directly from the CCCS. After the one-time setup of the performance measure in the CCCS, all that may be needed thereafter are the incidental updates of classifying incidents as missed stations, where appropriate. Control center personnel usually perform these updates after each incident or before their shifts are complete. In many cases, this allows reports to be automatically generated directly by the CCCS. It is recommended that this process be instituted if the systems and data will support it.

Other airport APM systems may have to rely on a process that is separate from the CCCS for this performance measure. For example, missed stations may have to be obtained from a logbook or station stops manually determined from the operating schedule. If this is the case, then it is recommended that the data required for this performance measure be manually collected daily and entered in a file of a spreadsheet application containing the necessary formulas and placeholders to calculate the measure.

To track performance over time, it is recommended that Missed Stations per 1,000 Station Stops be calculated for the month and year, with those measures reported monthly to the thousandths. The measure reported for the year is always cumulative-to-date, and it resets upon the start of the new year. For example, if the monthly report is being issued for the month of February for a particular year, the reported monthly measure would be for the entire month of February, and the reported yearly measure would be the cumulative measure of Missed Stations per 1,000 Station Stops from January 1st through February 28th of that year.
An example of how the Missed Stations per 1,000 Station Stops performance measure could be reported for the month of February 2010 is provided in Table 20, and the Airport APM Performance Measures reporting form can be found in Exhibit A as Form B.

Table 19. Data requirements and sources, Airport APM Performance Measure #6: Missed Stations per 1,000 Station Stops.

| | Data Requirement | Source |
|---|---|---|
| 1 | Number of missed stations | ATS, CCCS; control center logbooks; incident reports; work orders |
| 2 | Station stops, determined from individual station arrival and departure times, by train number, for example | ATS, CCCS; control center logbooks (start/stop time of failure modes); incident reports (start/stop time of failure modes) |

Table 20. Example reporting of Airport APM Performance Measure #6: Missed Stations per 1,000 Station Stops.

| February 2010 | Month | Year-to-Date |
|---|---|---|
| Missed Stations per 1,000 Station Stops | 0.003 | 0.002 |

5.3.9 Airport APM Performance Measure #7: Unintended Stops per 1,000 Interstations

5.3.9.1 Definition

Unintended Stops per 1,000 Interstations is the rate at which unintended stops have occurred outside of stations in the airport APM system. It is defined as:

$$\text{Monthly } US_{1ki} = \frac{\sum_{d=1}^{m} US}{\sum_{d=1}^{m} I} \times 1{,}000$$

$$\text{Yearly } US_{1ki} = \frac{\sum_{d=1}^{y} US}{\sum_{d=1}^{y} I} \times 1{,}000$$

Where:

• US_1ki = Unintended Stops per 1,000 Interstations.
• US = Number of unintended stops.
• Unintended stop is the stopping of an in-service train in a location outside of a station where the train does not normally stop as part of the nominal system operation.
• I = Interstations. Interstations is defined as the total number of interstations traveled by all in-service trains, with one interstation being the directional segment between two adjacent stations in the system. For example, a train operating from stations A to D in Figure 6 would travel two interstations (A to C and C to D); an in-service train operating a round trip on a two-station shuttle system would travel two interstations.
• In-service train is a train located in the passenger-carrying portion of the system that passengers are able to use for transport.
• d = Day of the month or year, as applicable.
• m = Days in the month.
• y = Days in the year.

5.3.9.2 Data Requirements and Sources

The data and sources required to calculate Unintended Stops per 1,000 Interstations are provided in Table 21.

5.3.9.3 Data Collection Techniques and Calculating and Recording the Measure

It is recommended that the collection of data for the Unintended Stops per 1,000 Interstations performance measure be accomplished daily but be reported no more frequently than monthly, since unintended stops in airport APM systems may not occur frequently. In addition, the numeric value of the measure, if reported daily, could be misinterpreted to be high because the underlying basis is only 1 day's worth of interstations traveled, as opposed to 30 days' worth of interstations traveled in a monthly reported measure.

For this measure, most of the data will typically be collected from records and systems in the control center. Where the functionality of the CCCS and the specificity of the APM system output data allow, it may be possible to collect data for the Unintended Stops per 1,000 Interstations performance measure directly from the CCCS. In many cases, this allows reports to be automatically generated directly by the CCCS. It is recommended that this process be instituted if the systems and data will support it.

Other airport APM systems may have to rely on a process that is separate from the CCCS for this performance measure and/or on the export and analysis of data from the CCCS. If this is the case, then it is recommended that the data required for this performance measure be manually or automatically collected daily and entered in a file of a spreadsheet application containing the necessary formulas and placeholders to calculate the measure.

To track performance over time, it is recommended that Unintended Stops per 1,000 Interstations be calculated for the month and year, with those measures reported monthly to the thousandths. The measure reported for the year is always cumulative-to-date, and it resets upon the start of the new year.
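The following is a minimal Python sketch of counting interstations from each in-service train's ordered stop sequence, using the two examples in the definition above; the stop logs and the zero unintended stops are hypothetical.

```python
# A minimal sketch of Unintended Stops per 1,000 Interstations.

trips = [  # ordered station stops per in-service train trip (invented logs)
    ["A", "C", "D"],  # A to D via C: two interstations (A-C and C-D)
    ["A", "B", "A"],  # round trip on a two-station shuttle: two interstations
]

# One interstation per directional segment between adjacent stops.
interstations = sum(len(stops) - 1 for stops in trips)

unintended_stops = 0  # from CCCS logs or control center logbooks
us_1ki = (unintended_stops / interstations) * 1_000 if interstations else 0.0
print(interstations, f"{us_1ki:.3f}")  # -> 4 0.000
```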
Table 21. Data requirements and sources, Airport APM Performance Measure #7: Unintended Stops per 1,000 Interstations.

    Data Requirement                                    Source
    1. Number of unintended stops                       ATS, CCCS; control center logbooks;
                                                        incident reports
    2. Interstations, determined from origin            ATS, CCCS
       terminal departure times and destination
       terminal arrival times, by train number,
       for example

For example, if the monthly report is being issued for the month of February of a particular year, the reported monthly measure would be for the entire month of February, and the reported yearly measure would be the cumulative measure of Unintended Stops per 1,000 Interstations from January 1st through February 28th of that year.

An example of how the Unintended Stops per 1,000 Interstations performance measure could be reported for the month of February 2010 is provided in Table 22, and the Airport APM Performance Measures reporting form can be found in Exhibit A as Form B.

Table 22. Example reporting of Airport APM Performance Measure #7: Unintended Stops per 1,000 Interstations.

    February 2010                                   Month      Year-to-Date
    Unintended Stops per 1,000 Interstations:       0.003      0.002
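The month versus cumulative year-to-date logic described above could be sketched as follows; the dates, counts, and the us_per_1000 helper are hypothetical illustrations rather than anything specified by the guidebook:

```python
# A minimal sketch of month and cumulative year-to-date reporting, assuming a
# hypothetical daily log (real entries would come from the Table 21 sources).
# The year-to-date accumulation restarts each January 1st.
from datetime import date

daily_log = {
    date(2010, 1, 15): (1, 18_400),  # (unintended stops, interstations) that day
    date(2010, 2, 10): (0, 18_250),
    # ... one entry per operating day
}

def us_per_1000(records):
    us = sum(u for u, _ in records)
    i = sum(n for _, n in records)
    return (us / i) * 1_000

report = date(2010, 2, 28)  # reporting period: February 2010
month_vals = [v for d, v in daily_log.items()
              if (d.year, d.month) == (report.year, report.month)]
ytd_vals = [v for d, v in daily_log.items()
            if d.year == report.year and d <= report]  # January 1st onward

print(f"Month: {us_per_1000(month_vals):.3f}  "
      f"Year-to-Date: {us_per_1000(ytd_vals):.3f}")
```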

Next: Chapter 6 - Other Airport APM System Performance Measures »

TRB’s Airport Cooperative Research Program (ACRP) Report 37A: Guidebook for Measuring Performance of Automated People Mover Systems at Airports is designed to help measure the performance of automated people mover (APM) systems at airports.

The guidebook identifies, defines, and demonstrates application of a broad range of performance measures encompassing service availability, safety, operations and maintenance expense, capacity utilization, user satisfaction, and reliability.

The project that produced ACRP Report 37A also developed the set of forms below, which are designed to help periodically compile the data needed as input to the overall performance measurement process.

Form A: System and Service Descriptive Characteristics

Form B: Airport APM Performance Measures Page 1 of 2

Form B: Airport APM Performance Measures Page 2 of 2

Passenger Satisfaction Survey

The project also developed an interactive Excel model containing spreadsheets that can be used to help track and calculate system-wide performance and service characteristics.

The set of forms and Excel model are only available electronically.

ACRP Report 37A is a companion to ACRP Report 37: Guidebook for Planning and Implementing Automated People Mover Systems at Airports, which includes guidance for planning and developing APM systems at airports.

In June 2012, TRB released ACRP Report 67: Airport Passenger Conveyance Systems Planning Guidebook, which offers guidance on the planning and implementation of passenger conveyance systems at airports.

