National Academies Press: OpenBook
« Previous: Exhibit A - Form A, Form B, and Passenger Satisfaction Survey
Page 63
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 63
Page 64
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 64
Page 65
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 65
Page 66
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 66
Page 67
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 67
Page 68
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 68
Page 69
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 69
Page 70
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 70
Page 71
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 71
Page 72
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 72
Page 73
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 73
Page 74
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 74
Page 75
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 75
Page 76
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 76
Page 77
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 77
Page 78
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 78
Page 79
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 79
Page 80
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 80
Page 81
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 81
Page 82
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 82
Page 83
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 83
Page 84
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 84
Page 85
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 85
Page 86
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 86
Page 87
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 87
Page 88
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 88
Page 89
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 89
Page 90
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 90
Page 91
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 91
Page 92
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 92
Page 93
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 93
Page 94
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 94
Page 95
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 95
Page 96
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 96
Page 97
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 97
Page 98
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 98
Page 99
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 99
Page 100
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 100
Page 101
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 101
Page 102
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 102
Page 103
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 103
Page 104
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 104
Page 105
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 105
Page 106
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 106
Page 107
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 107
Page 108
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 108
Page 109
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 109
Page 110
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 110
Page 111
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 111
Page 112
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 112
Page 113
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 113
Page 114
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 114
Page 115
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 115
Page 116
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 116
Page 117
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 117
Page 118
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 118
Page 119
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 119
Page 120
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 120
Page 121
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 121
Page 122
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 122
Page 123
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 123
Page 124
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 124
Page 125
Suggested Citation:"Appendix A." National Academies of Sciences, Engineering, and Medicine. 2012. Guidebook for Measuring Performance of Automated People Mover Systems at Airports. Washington, DC: The National Academies Press. doi: 10.17226/14606.
×
Page 125

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

63 a p p e n d i x a Contents 66 Section 1 Preface 67 Section 2 Research for Performance Measurement 67 2.1 Research for Performance Measurement—In General 67 2.1.1 Balanced Scoreboard 68 2.1.2 Data Envelopment Analysis 68 2.1.3 Network Models 68 2.1.4 Summary 68 2.2 APM Performance Measurement 68 2.2.1 Applied Methods 69 2.2.1.1 System Dependability Method 69 2.2.1.2 Contract Service Dependability Method 70 2.2.1.3 System Service Availability Method 71 2.2.1.4 Paris Airport Authority Method 71 2.2.2 Theoretical Methods 71 2.2.2.1 Airport APM Performance Measurement: Network Configuration and Service Availability 71 2.2.2.2 Defining and Measuring Service Availability for Complex Transportation Networks 72 2.2.2.3 RAM: Reliability, Availability and Maintainability of APM Systems 72 2.3 Public Transit Performance Measurement 72 2.3.1 Historical Development 73 2.3.2 Concentrated Efforts 73 2.3.2.1 TCRP Report 88 73 2.3.2.2 Conferences on Transportation Performance Measures 74 2.3.2.3 National Transit Database 75 2.3.3 International Practices 75 2.4 Airline Performace Measurement 76 2.4.1 Government-Monitored Measures 77 2.4.1.1 BTS-Monitored Measures 77 2.4.2 Airport Operator/Airline Measures 78 2.4.3 Other Airport Agency Measures 78 2.4.4 Design Recommendations, Standards, and Levels of Service

64 78 2.5 Highway Performance Measurement 78 2.5.1 FHWA Performance Measurement Program 80 2.5.1.1 90th or 95th Percentile Travel Time 80 2.5.1.2 Buffer and Planning Time Indices 81 2.5.2 National Transportation Operations Coalition 81 2.5.2.1 Customer Satisfaction 81 2.5.2.2 Incident Duration 81 2.5.2.3 Throughout 82 2.5.3 Freeway Performance Measurement: NCHRP Project 3-68 82 2.5.3.1 Vehicle Miles of Travel 82 2.5.3.2 Safety 82 2.5.3.3 Fuel Consumption per Vehicle Mile Traveled 82 2.6 Conclusion 84 Section 3 Appendix A Bibliography 84 3.1 Performance Measurement—In General 84 3.2 APMs 85 3.3 Public Transit 85 3.4 Airlines 86 3.5 Highways 87 Section 4 Survey Plan and Instrument (Task 3) 87 4.1 Survey Sites and Site Visits 87 4.1.1 Identify Candidate APM Systems to Survey 87 4.1.2 Select Final APM Systems to Survey 87 4.1.3 Select APM Systems for Site Visits 89 4.2 Survey Instrument 89 4.2.1 Develop Survey Instrument 89 4.2.2 Conduct Site Visits 90 4.2.3 Finalize Survey Instrument 90 4.3 Survey Plan 90 4.3.1 Step 1: Distribute Introductory Letter 90 4.3.2 Step 2: Determine Willingness to Participate in Survey 90 4.3.3 Step 3: Report to ACRP Panel on Participation Ratio 91 4.3.4 Step 4: Distribute Survey 91 4.3.5 Step 5: Verify Receipt of Survey 91 4.3.6 Step 6: Receive Survey Responses 91 4.3.7 Step 7: Survey Follow-Up 91 4.3.8 Step 8: Report to ACRP Panel on Response Ratio 91 4.3.9 Step 9: Compile Data and Clarify Responses 91 4.3.10 Step 10: Transmit Thank-You Letters to Respondents 92 Section 5 Survey Implementation and Data Analysis (Task 4) 92 5.1 Survey Implementation 92 5.1.1 Section 1: General Information 92 5.1.2 Section 2: Performance Measures 93 5.1.3 Section 3: Data Collection 93 5.1.4 Section 4: Suggestions for Improving APM Performance Measures 94 5.1.5 Section 5: System and Operating Characteristics 95 5.1.6 Section 6: Cost 95 5.1.7 Section 7: Other

65 95 5.2 Survey Response Data, Compilation, and Analysis 95 5.2.1 Age of Airport APM Systems Surveyed 95 5.2.2 System and Operating Characteristics 100 5.2.3 O&M Cost 100 5.2.4 Performance Measures 105 5.2.5 Data Collection 105 5.2.6 Suggestions for Improving Airport APM Performance Measures 106 Section 6 Airport APM Survey

66 The objective of ACRP Project 03-07 was to provide a user- friendly guidebook for measuring performance of APM systems at airports. Specifically, the guidebook identifies a set of perfor- mance measures and associated data requirements for airport APM operators to assess and improve performance, compare APM systems, and plan and design future APM systems. The performance measures address the efficiency, effectiveness, and quality of APM systems at airports, particularly focusing on impacts on APM passengers and on airport performance. Throughout the course of the project, research was con- ducted and work developed that contributed in shaping the guidebook. This research and work, however, are not directly germane to the objective of the guidebook and therefore not appropriate to be incorporated within the main body of that document. As a result, this appendix is provided to document the relevant historical work accomplished on the project that helps form the basis of the main body of the guidebook, including a more detailed summary of the research conducted on performance measurement, generally and as applied to APMs, public transit, airlines, and highways. The appendix also contains information about the survey conducted on the project, including the survey plan specifics, a copy of the survey, and the survey response data and analysis undertaken as part of the research effort. By providing some of the underlying details of the project’s research and work efforts in the appendix, the end user can gain a more thorough understanding of the resulting guidebook and approach provided. S e c t i o n 1 Preface

67 2.1 Research for Performance Measurement—In General Performance measurement is a type of assessment. It is the ongoing monitoring and reporting of system or pro- gram accomplishments, particularly of progress toward pre- established goals [2.1.5—Numbers in brackets throughout this appendix refer to numbered items in the appendix’s bibliography.] A key aspect of a successful performance measurement sys- tem is that it makes up a balanced set of a few vital measures. Performance measures may address the type or level of program or system activities conducted (process), the direct products and services delivered by a program or system (outputs), or the results of those products and services (outcomes) [2.1.5]. In any case, measures should: • Be meaningful; • Describe how well the goals and objectives are being met; • Be simple, understandable, logical, and repeatable; • Show a trend; • Be unambiguously defined; • Allow for economical data collection; and • Be timely and sensitive [2.1.12]. There is an extensive body of research on performance mea- surement in general, and in particular as applied to public tran- sit, which is described later. For example, the scope of transit performances measures has expanded from simple or limited indicators such as cost [2.1.8] to comprehensive indices such as regularity index [2.1.6] or total productivity index [2.1.13]. There is also a wide range of performance measurement methodologies, including: • Balanced Scorecard [2.1.7], • Data envelopment analysis (DEA) [2.1.2], • Multi-criteria multimodal network analyses [2.1.16], • Traffic-based, • Mobility-based, and • Accessibility-based. Litman [2.1.9] has suggested that the accessibility-based approach is the best since accessibility is the ultimate goal of most transportation. It is not possible to describe all performance measures, approaches, or methodologies here since the length of this appendix and scope of the project is limited; however, a few methodologies are briefly documented for potential reference when developing performance measures for APM systems at airports. 2.1.1 Balanced Scoreboard The Balanced Scorecard approach is a strategic manage- ment approach introduced in 1992 by Robert S. Kaplan and David P. Norton of the Harvard Business School [2.1.7]. This management approach galvanized and revolutionized the field. It not only enables organizations to clarify their vision and strategy and translate them into action, but it also pro- vides feedback about both the internal business processes and external outcomes in order to continuously improve strategic performance and results. The Balanced Scorecard approach gained wide use and acclaim in the private sector as a way to build customer and employee data to measure and ensure better performance outcomes. It thus transformed the way private-sector companies could achieve and analyze high levels of performance and was critical in revitalizing such companies as Federal Express, Corning, and Sears [2.1.11]. The Balanced Scorecard approach found its way to the public sector in the later 1990s. Phillips [2.1.14] suggested the application of the Balanced Scorecard to transit performance measures by developing a comprehensive list of constructs and measures for public transit performance assessment (i.e., a shopping list of measures for managers to choose S e c t i o n 2 Research for Performance Measurement

68 corridor [2.1.10], the level of service can be very different when transfer penalties are included in the analysis, which can only be accomplished via analysis of network models. In models for public transit usage, the factor representing transit service most often involves the proximity to transit stops, either using walking distance buffers around transit routes or more detailed land use information. These approaches are insufficient to examine the effect transit service has on a person’s travel mode decision. In work for the Delaware Transportation Institute [2.1.15], factors for transit level of service were developed using ArcInfo network models that more realisti- cally estimate level of service between specified origins and destinations taking into account walking distances, transfers, wait times, and park and rides. Methods discussed for travel time and distance estimates are applicable for other travel modes as well. 2.1.4 Summary In short, performance measurement focuses on whether a system or program has achieved its objectives, expressed as measurable performance standards. Because of its ongoing nature, it can serve as an early warning system to management, as a vehicle for improving accountability to the public [2.1.5], as a method to document accomplishments, and as a way to compare similar programs and systems. 2.2 APM Performance Measurement This study has determined from research of numerous industry documents that current performance measurement of both airport and non-airport APMs is primarily focused on traditional measures of operating system performance (i.e., reliability, maintainability, and availability). Other APM performance measures related to economic efficiency, comfort, and convenience, among others, have received sig- nificantly less attention in the literature, if any at all. Some of these measures are applied in other industries, however, and are considered in this appendix for their application to an airport APM. As the research progresses, we anticipate that site visits and surveys will yield more information about these measures from APM system owners and operators. The documented methods of system performance measure- ment for airport APMs can be broadly divided into two classes: applied methods and theoretical methods. These classifications are described in the following sub-sections. 2.2.1 Applied Methods In general, there are four applied methods used in airport APM performance measurement: the System Dependability from in constructing their own scorecard). The original four metrics used by this approach for for-profit assessments (financial, internal business, customer, and innovation and learning) were adjusted to fit the unique requirements of a nonprofit, public service. The Balanced Scorecard approach for public use would consider three perspectives: efficiency, effectiveness, and impact. Phillips then details the elements that go into each of the three perspectives and creates constructs for each. 2.1.2 Data Envelopment Analysis Data envelopment analysis, first put forward by Charnes, Cooper, and Rhodes in 1978 [2.1.2], is a methodology widely used for measuring the relative efficiency of decision-making units, which can be business units, government agencies, police departments, hospitals, educational institutions, and even people. The underlining assumption is fairly simple even though the mathematics can get complex. DEA is a multi- criteria approach, capable of handling multiple inputs and outputs that are expressed in different measurement units. This type of analysis cannot be done with classical statistical methods. Furthermore, because DEA is not a statistical method, one is not constrained by the type and the relations of the data used, as in regression techniques. Inputs and outputs can be anything, including qualitative measurements. Chu and Fielding [2.1.3] applied the DEA technique in transit performance measurement by developing DEA models for relative efficiency and effectiveness. DEA can measure effectiveness by using consumed service as the output and produced services along with selected external environmental variables as inputs. Chang et al. [2.1.1] extended the model for measuring the relative effectiveness and changes in effec- tiveness of an organization by merging it with the Malmquist Productivity Approach. The DEA technique may be applied to APM performance measures since APM systems have simi- lar input and output variables, even though the quantity and configurations may be very different. 2.1.3 Network Models Transportation network models were originally developed to forecast travel demand and simulate traffic conditions. However, model outputs can often be used as comprehen- sive performance measures for the entire network, individual mode, selected areas, or corridors. The flexibility afforded by transit network analysis often provides the most powerful indicators for measuring the efficiency and effectiveness of transportation networks. Such comprehensive evaluation is extremely important since performance measures of a single mode or isolated sections may distort the results. As demon- strated in an early study of the Hudson–Bergen Light Rail

69 implemented at some APM systems. The method is very similar to the System Dependability Method in that it incor- porates the same three performance measures: reliability, maintainability, and availability. While the older literature revealed that this method previously relied on three sets of the RAM measures (one set called “absolute” RAM, one called “design/operational” RAM, and another called “con- tract service” RAM), it has today generally evolved into two measure sets—one RAM set where all failures are taken into account and another RAM set where failures that are consid- ered exceptions are not taken into account. This method gen- erally allows for a grace period of 3 min or less for downtime resulting from incidents/failures. Concerning the method’s treatment of partial service credit and capacity considerations, most of the examples of this method revealed that these are not incorporated as part of the method. There were two exceptions, however. The first exception is the method as applied to the pinched loop APM system at Chicago’s O’Hare International Airport. There, the system has the capability to operate around many types of failures because its numerous switches and routing combi- nations, as well as its bidirectional capability, provide a high degree of flexibility. As such, partial service credit is allowed, and system capacity is considered only so far as to make the calculation of the credit [2.2.3 and 2.3.4]. The formula is complicated by the fact that the Chicago system can operate with various train lengths, which forces the consideration of both transportation capacity and headway as well as a cor- responding set of specific rules. This is not so in the case of the Paris Airport Authority Method that will be discussed later. The second exception is the method as applied to the shut- tle APM at Orlando International Airport [2.2.10]. Although Method, the Contract Service Dependability Method, the System Service Availability Method, and the Paris Airport Authority Method. These methods are primarily distinguished from one another by the number of factors measured, grace period durations, whether credit is allowed for partial service operations during failures, and whether capacity is a consider- ation in any of the measures. The methods and characteristics of each are summarized in Table A-1. Each of the applied methods is described in detail in the following. 2.2.1.1 System Dependability Method The classical measurement of performance for systems in general, as well as in the APM industry, is the System Dependability Method, as presented in ASCE 21-05, Ameri- can Society of Civil Engineers, Automated People Mover Standards – Part 1, Chapter 4 [2.2.1]. This method incorporates three measures of overall system performance: reliability, or mean time between failure; maintainability, or mean time to restore; and availability (the ratio of MTBF to the sum of MTBF and MTTR). This method allows for the consideration of grace periods for downtime incurred as a result of an inci- dent or failure, and it also allows for downtime credit during partial service operations. Capacity is not considered as part of this method. 2.2.1.2 Contract Service Dependability Method The Contract Service Dependability Method was devel- oped by U.S. consulting firm JKH Mobility and has been Method/Measures No. ofMeasures Grace Period Partial Service Credit Capacity Considered System Dependability Method Reliability 3 yes optional no Maintainability Availability Contract Service Dependability Method Contract service reliability 3 3 min no* no* Contract service maintainability Contract service availability System Service Availability Method Service mode reliability 6 1 head- way yes yes Service mode maintainability Service mode availability Fleet availability Station platform door availability System service availability Paris Airport Authority Method Contract service availability 1 no yes yes** *In most cases of the literature reviewed **During degraded mode operations Table A-1. APM performance measurement, applied methods.

70 where each of the above terms has a precise technical definition, and is subject to recordation of actual conditions. A Mode Downtime Event is an event in which one or more Operating System-related problems cause an interruption of the normal service provided by the desired operating mode. When such an interruption occurs, downtime for the event shall include all the time from the beginning of the interruption until all trains stopped on the guideway are restarted and normal operation in the scheduled mode is restored (i.e., continuously and normally equal train spacing). Downtime events of a duration that are less than one operational headway shall not be counted in the calcula- tion of service mode availability, but shall be counted for down- time limits purposes. A train stopping on the guideway or failing to depart from a station shall be considered a mode downtime event. Stoppages resulting from causes listed in the following as exclusions shall not be counted as mode downtime events. Exclusions. The following events are not attributable to the Operating System itself and are not mode downtime events. Delays due to these exclusions are not to be used in determin- ing service mode availability, and shall result in the entire period affected by them being deleted from consideration in calculating service mode availability (i.e., Scheduled Mode Operating Hours is reduced), but not from data collection and storage. All data collection means shall include all periods of time. 1. The time period to transition from one scheduled operating mode to another scheduled operating mode or adjusting sched- uled fleet size. Valid transition periods shall not be counted in calculating Am. Time in excess of allowable transition time by more than one operational headway, as operated during the peak period for the route, shall not be excluded, but the availability achieved during that period shall be adjusted by a “K” factor. The time to change into and out of a lesser, nonscheduled, operating mode due to a failure of the scheduled, or higher- order backup, operating mode shall not be excluded, but shall be counted as the lower of the operating modes. 2. Passenger-induced interruptions or delays. 3. Interruptions caused by intrusions of unauthorized persons or of animate or inanimate objects into non-public areas of the Operating System. 4. Interruptions caused by non-Operating System induced loss of service, e.g., loss of utility service, electrical power provided outside the nominal range, vehicle diversion resulting from intended security responses, force majeure, and acts or omis- sions of the Owner or its agents or contractors. 5. Periods of scheduled operating times when the specified envi- ronmental limits are exceeded. 6. Periods when the Fixed Facilities are not available, unless their unavailability is attributable to the Contractor or its vehicles/ subsystems. 7. Operational delays induced by the ATC system to regulate train operations, maintain schedules and for anti-bunching; where such delays do not exceed the operational headway for the route. K-Factor. Used to calculate partial Operating System mode availability during failure mode operations. When the Operating System is not in any failure mode, K factor shall be equal to one. If a downtime event occurs and service is not restored within the time specified to that scheduled for the Operating System, but rather a lesser service mode is operated for failure management, then the entire time period for operating in any failure mode shall be counted as partial mode downtime. To determine A(m), capacity is normally not considered in this exception at all, a type of partial service credit is allowed in one specific case— during a failure of the scheduled single train operation when the standby train is undergoing maintenance and is unavailable. 2.2.1.3 System Service Availability Method The System Service Availability Method has been advo- cated and used by U.S. consulting firm Lea+Elliott since 1994. As a result, it is in wide usage at airport APMs world- wide. The method is distinguished from the other methods by measures that record the performance of subsystems that are most likely to affect passengers. Because the other methods concentrate only on performance as affected by inter- ruptions to system service (i.e., train stoppages), other fail- ures that affect passenger service without interrupting system service are not captured. For example, station platform door failures that deny passengers access to the vehicles affect passenger service and may not be reflected in the measures of the other methods. This method incorporates measures of service mode availability, fleet availability, station plat- form door availability, and system service availability. The additional availability measures related to fleet and station platform doors ensure that all failures affecting passengers (not just those that interrupt service) are reflected in the overall service availability measure. The System Service Availability Method also tracks the number of service mode failures, or downtime events, which allows measures of service mode reliability and maintainabil- ity to be easily made. These two measures, along with the four availability measures described previously, make up the six measures unique to this method. The method allows an equivalent of one headway dura- tion or less for a grace period for both incidents/failures and schedule transitions. It also allows for the consideration of downtime credit for partial service operations provided during failures, and it considers capacity as part of its normal measure set rather than during partial service credit only. The System Service Availability Method, as excerpted in the following, discusses the four measures of availability. Measures of service mode reliability and maintainability pre- viously discussed in this subsection are not presented in the excerpt for the sake of brevity [2.2.6]. System Service Availability Method Described Service Mode Availability For each time period and service mode of operation, Service Mode Availability is measured as: A m Scheduled Mode Operating Hours Mode Do( ) = − wntime Hours Scheduled Mode Operating Hours

71 different, the partial service credit incentive concept is very similar to the approach used for the Chicago O’Hare APM discussed previously. One significant difference is that whereas the Chicago APM must deal with a specific approach due to its variable length trains, the CDG APM system does not (it has fixed-length trains). Both of these APM systems, Roissy Charles-de-Gaulle and Chicago O’Hare, are supplied by the same APM system manufacturer, Siemens, which may explain why the methods are similar, at least in the context of partial service allowances. 2.2.2 Theoretical Methods The literature review and analysis have also revealed that APM performance measurement has been studied and reported on theoretically. Three papers in particular have been presented in this area. 2.2.2.1 Airport APM Performance Measurement: Network Configuration and Service Availability The first paper, “Airport APM Performance Measurement: Network Configuration and Service Availability,” was pre- sented in 2007 at the 11th International ASCE APM Confer- ence in Vienna [2.2.5]. It was written by Wayne D. Cottrell, Associate Professor at California State Polytechnic University, and Yuko J. Nakanishi, President of Nakanishi Research and Consulting, LLC. The paper examines service availability and reliability and how they are affected by airport automated people mover network configurations and other system parameters. The paper affirms the importance of availability and reliability measurements in the APM industry that was discussed pre- viously in Section 2.2.1: Applied Methods. The paper suggests that detailed measures of headway regularity would be useful in an empirical study of AAPM reliability performance. Measures that are set forth include headway adherence, service regularity, headway ratio, headway regularity index, and headway deviation. The ultimate conclusion is that network configuration affects the reliability and availability of airport APMs, albeit in a lim- ited way due to the limited variety of airport APM networks. Other system parameters such as consist size and the number of in-service trains also affect reliability and availability. 2.2.2.2 Defining and Measuring Service Availability for Complex Transportation Networks The second paper, “Defining and Measuring Service Avail- ability for Complex Transportation Networks,” was presented for the time period the Operating System operates in failure mode, the appropriate K factor shall be used. Fleet Availability For each time period and service mode of operation, Fleet Availability is measured as: A f Actual Car Hours Scheduled Car Hours ( ) = where each of the above terms has a precise technical definition and is subject to recordation of actual conditions. Station Platform Door Availability For each time period and service mode of operation, Station Platform Door Availability is measured as: A s Actual Platform Door Hours Scheduled Pl ( ) = atform Door Hours where each of the above terms has a precise technical definition and is subject to recordation of actual conditions. System Service Availability For each time period (I), System Service Availability is mea- sured as: A I Am I Af I As I( ) = ( )× ( )× ( ) where Am(I), Af(I) and As(I) are measured as described above. 2.2.1.4 Paris Airport Authority Method The Paris Airport Authority (Aeroports de Paris) Method is a variation on the System Dependability Method (ASCE) discussed previously. It was introduced by Aeroports de Paris and ALONEX for Line 1 of the APM system at Roissy Charles-de-Gaulle Airport [2.2.8]. Unlike the other methods, it calculates contracted service availability on the basis of service unavailability, and in so doing, eliminates from the calculation any need to consider grace periods or the down- time exclusions common to the other methods. (The other methods are similar in that they exclude the consideration of downtime caused by external sources, such as passenger- induced delays, interruptions caused by intrusions of un - authorized persons or objects, and other external sources beyond the control of the system or operating entity.) Working from the perspective of service unavailability, the goal of this method is to take into account the transporta- tion capacity of the system during periods of degraded mode operations. Providing the ability to earn this partial service credit, and tying the contracted service availability to payment, is an incentive to the operator to provide the best possible transportation capacity during failures. Although the path to calculating the contracted service availability number is

72 2.3.1 Historical Development There is a large literature base that exists today about performance measures in the transit industry. However, two or three decades ago, the landscape on transit performance measures was fairly similar to that of airport APM systems today. That is, no systematic approach existed. Some early documents on transit performance evaluations can be traced back to the early 1980s [2.3.1, 2.3.2, 2.3.6, and 2.3.22]. Based on the fact that most obstacles to the compara- tive evaluation of transit performance lie chiefly in the non- conformity and inaccuracy of the data and the inadequate coverage of the local operating characteristics, studies around this period tended to address the need for data collection and systematic analysis of the data collected. After May 1981, however, this first obstacle was overcome with the publication of the annual reports required by Section 15 of the Urban Mass Transportation Act of 1964 [2.3.1]. However, the data analysis techniques were limited to rudimentary statistical correlation and regression analysis [2.3.2], and performance measures were still in the stages of infancy [2.3.6]. This Section 15 reporting by U.S. transit agencies continues but is known today as the National Transit Database [2.3.12]. The NTD program is discussed in more detail later in this section. Another trend in the early development of transit perfor- mance measures was the mode-specific approach, which is still practiced today and may have potential to be applied to airport APM systems. Topp [2.3.18] conducted an extensive study of Toronto Light Rail services to identify potential prob- lems, practical performance measures, and policies linked with performance evaluations. Rouphail [2.3.15] examined performance evaluation of bus priority measured in the Chicago Loop. And Lauritzen [2.3.8] examined the first year operation of the Chicago Transit Authority’s special services, using performance measures tailored to those services. An early framework for transit performance concepts was presented by Fielding [2.3.3], where cost efficiency, service effectiveness, and cost effectiveness were the terms used to describe the three dimensions of transit performance. Other studies [2.3.4 and 3.3.5] applied this framework for performance evaluation. Lew, Li, and Wachs [2.3.9] carried the framework of transit performance measures one step further by defining several categories of common indicators. They identified three critical limitations to commonly used performance indicators and proposed a new set of intermodal performance indicators. The new proposed indicators overcame the limitations of single- mode indicators by incorporating mechanisms for comparison of one mode to another, for rating the performance of systems that include multiple modes, and by incorporating both capital and operating costs. in 1996 at the International Conference on Personal Rapid Transit (PRT) and Other Emerging Transportation Systems in Minneapolis [2.2.7]. It was written by Charles P. Elms, a former Principal of U.S. consulting firm Lea+Elliott, Inc. The paper first defines measures of service availability in current use and analyzes exact and approximation methods for data collection and computation. It then postulates and explores classical and new definitions of service availability applicable for complex networks such as PRT. Insight is pro- vided for choosing a suitable definition based on the type of transportation network. The methodology in the paper is based on the classical approach of service mode availability [MTBF/(MTBF + MTTR)], and adjusts for fleet availability and station plat- form door availability. Ultimately, the methodology outlined in the paper aligns with the System Service Availability Method discussed previously. 2.2.2.3 RAM: Reliability, Availability and Maintainability of APM Systems The third paper, “RAM: Reliability, Availability and Main- tainability of Automated People Movers,” was presented in 1989 by John K. Howell of U.S. consulting firm JKH Mobility at the Second International Conference on APMs in Miami [2.2.11]. The paper discusses in detail reliability theory in particu- lar, as well as the factors that influence reliability (MTBF), maintainability (MTTR), and availability in an APM system. It also describes approaches to specifying contract service requirements based on classical definitions of MTBF, MTTR, and availability. The paper ends with a discussion of RAM monitoring and accountability. The methodology in the paper is generally based on the classical approach of availability [MTBF/(MTBF + MTTR)] and aligns with the Contract Service Availability Method dis- cussed previously. 2.3 Public Transit Performance Measurement This section is a summary of the key findings of perfor- mance measurement used in the public transit industry. It contains subsections with discussions in three areas: the historical development of public transit measures, provid- ing a brief overview of the history and current practices of perfor mance measurement in the public transit industry; concentrated efforts in the area of public transit performance measurement, focusing on two efforts of performance mea- surement in the transit industry; and international practices, containing examples of measuring transit performance in the international arena.

73 uses of various indicators. The study also focused on a number of case studies of various transit systems to gather effective performance measures. As a result of the study TCRP Report 88: A Guidebook for Developing a Transit Performance-Measurement System was produced [2.3.19]. This guidebook assists transit system managers in developing a performance-measurement system or program that uses traditional and nontraditional perfor- mance measures to address customer and community issues. The guidebook contains six chapters, each covering a different aspect of developing a performance-measurement program: • Chapter 1 describes how to use the guidebook. • Chapter 2 makes the case for why agencies should measure their performance. • Chapter 3 presents 12 case studies of successful programs. • Chapter 4 provides an eight-step process for implementing, using, and periodically updating a program. • Chapter 5 describes resources available to agencies devel- oping or updating a program. • Chapter 6 contains 130 summaries describing more than 400 individual performance measures and a series of selection menus to help users quickly identify measures appropriate to particular agency goals and resources. In addition to the report, a computer program was devel- oped for the purpose of gathering transit performance mea- sures. Using Florida Transit evaluation data as the foundation, the program can be used to process National Transit Data- base (also known as Section 15) data. The software is useful to compare transit performances of various peer agencies. As one of the concentrated efforts in developing transit performance measures, the TCRP report compiled a fairly comprehensive database and explored an effective process. The process is useful for the development of airport APM performance measures even though the indicators may be different. 2.3.2.2 Conferences on Transportation Performance Measures There have been two conferences on performance measures held in the past several years, the topics of which go back to the growing interest and debates that surrounded the develop- ment and use of performance measures to guide investment decisions at all levels of government. The first national conference on performance measures, held in Irvine, California, in 2000 [2.3.20], brought together a group of government, academic, and business leaders who had experience in transportation system performance measures and performance-based planning and programming. The Advancement in systematic transit performance measures was documented in a study by Kopp, Moriarty, and Pitstick [2.3.7]. Past transit performance measures typically focused on attributes of service supply such as capacity, passenger loading, frequency, and reliability. These measures were effective in describing the quality of transit service available at a given location, but they did not describe how well transit serves actual passenger trips from that location to potential destinations. Kopp and his coauthors developed a methodology to evaluate the relative attractiveness of travel by public transit and personal automobile on a sample of origin–destination pairs throughout the Chicago metropolitan region. Transit attractiveness was computed by using a logit mode choice framework that compared the utility of travel by transit, auto, and park-and-ride for various components of travel time and travel cost. 2.3.2 Concentrated Efforts Many more performance measures have been developed and used in a variety of ways in response to differing tran- sit system goals and objectives. What is currently missing is a rigorous process for determining the most appropriate performance measures to be used by a transit organization. Furthermore, traditional service efficiency indicators, such as operating expense per vehicle revenue mile and/or hour, and cost-effectiveness indicators, such as operating expense per passenger mile and/or passenger trip, are not always linked to customer-oriented and community issues. There have been two major efforts in recent years to develop a process that transit systems can use to prepare a performance- measurement program sensitive to customer-oriented and community issues and showcase the potential examples and lessons learned. The first concentrated effort was TCRP Project G-06, the results of which were documented in TCRP Report 88: A Guidebook for Developing a Transit Performance- Measurement System [2.3.19]. The second effort was a series of conferences on performance measures to improve trans- portation systems and agency operations held in Irvine, California, in 2000 and 2004 [2.3.20 and 2.3.21]. Even though there are a number of studies and conferences related to the subject of transit performance measures, we have highlighted the content of these two efforts in an effort to narrow the focus of the topic. 2.3.2.1 TCRP Report 88 The objectives of this TCRP research were to provide a framework by which to select and apply appropriate perfor- mance measures integral to transit-system decision making. The study explored various subjects directly related to transit performance measures, such as purpose, characteristics, and

74 The NTD is the system through which FTA collects uniform data needed by the Secretary of Transportation to adminis- ter department programs. The data consist of selected finan- cial and operating data that describe public transportation characteristics for all types of transit modes, including but not limited to bus, heavy rail, light rail, monorail, automated guideway transit (AGT), ferry, inclined plane, and vanpool. These data include performance measures in the areas of service efficiency, cost effectiveness, and service effectiveness. Specifically, they include the following measures: • Service efficiency – Operating expenses per vehicle revenue mile – Operating expenses per vehicle revenue hour • Cost effectiveness – Operating expenses per passenger mile – Operating expenses per unlinked passenger trip • Service effectiveness – Unlinked passenger trips per vehicle revenue mile – Unlinked passenger trips per vehicle revenue hour [2.3.12] These measures may be applicable to airport APM systems and merit further analysis. For those measures involving a cost/expense component, the type of expense information applied in the measure would have to be studied carefully to ensure the same or similar information would be included. The measures containing a vehicle-hour component, how- ever, may not be appropriately applied to the APM industry in general since vehicle hours are closely tied in traditional public transit to vehicle operator expenses and time, which, because of the fully automated nature of the systems, do not exist in the APM industry. A list of safety and security data collected by the NTD program is provided as follows. • Safety – Collisions – Derailments – Fires – Fatalities/injuries – Evacuation – Suicides (attempting/committing/others affected) • Security – Part I offenses b Robbery b Aggravated assault b Burglary b Larceny/theft offenses b Motor vehicle theft b Arson conference included transit but was not limited to it. The con- ference’s purpose was to address organizational approaches, implementation experiences, customer perspectives, and tech- nical issues related to performance measures of transportation systems. Covering performance measures of multimodal transpor- tation systems, the conference was organized around four main topics: 1. Linking performance measures with decision making; 2. Implementing transportation system performance measures in agencies; 3. Selecting measures, data needs, and analytical issues; and 4. Connecting system performance measures to broader goals. The second national performance measure conference, held in Irvine, California, in 2004 [2.3.21], served as a milestone to define the state of the practice and acknowledge recent work in the use of performance measures, share experiences and resources, and identify key areas that needed further research or additional peer exchange. Designed to maximize the exchange of information and perspectives among the participants, the second conference on performance measures commissioned a series of resource papers on the five themes discussed during the conference: 1. Performance measures—state of the practice; 2. Impact of performance measures on internal and external relationships; 3. Tying together performance-based program development and delivery; 4. Data and tools required to support decision making; and 5. Measuring performance in difficult-to-measure areas. Individual papers or presentations from these conferences may be applicable to the development of performance mea- sures for airport APM systems. 2.3.2.3 National Transit Database The National Transit Database is the FTA’s primary national database for statistics on the transit industry. Recipients of FTA Urbanized Area Formula Program grants (§5307) and Nonurbanized Area Formula Program grants (§5311) are required by statute to submit data to the NTD. Over 650 transit agencies and authorities file annual reports to FTA through the internet-based reporting system. Each year, NTD perfor- mance data are used to apportion over $4 billion of FTA funds to transit agencies in urbanized areas. Annual NTD reports are submitted to Congress summarizing transit service and safety data [2.3.12].

75 ment can be used in many different settings. It is also interesting to observe what measures and processes were similar to those found in the United States. Rystam and Renolen [2.3.16] developed a guidebook for evaluating measures in public transit systems based on experi- ences from the evaluations of several public transit projects in Norway and Sweden. The guidelines may be used by planners, consultants, and municipalities. The guideline is a general document so that it can be used as a basis when evaluating minor as well as major public transit systems. Another international example of transit performance mea- sures came from Thailand [2.3.17]. This study dem onstrated that the performance indicator analysis technique can be used as a diagnostic tool to identify operational in efficiency and ineffectiveness at the route level of transit operation. Applying the technique to 14 bus routes of the Bangkok Mass Transit Authority, the research revealed the inter-route differ- ences in operational efficiency and effectiveness. The authors selected 20 performance indicators related to costs of services, fuel consumption, staff ratio, crew productivity, fleet utiliza- tion, service output per bus, daily revenues, and so forth to represent the resource efficiency, resource effectiveness, and service effectiveness of the bus system. Results of the analyses revealed that considerable variations existed across the routes against many of these 20 selected indicators. Light rail transit (LRT) is the focus of another inter national application of transit performance measures [2.3.10], which may have direct implications for developing airport APM performance measures due to the common characteristics of the modes. Conducted by the Urban Transport Group of the European Conference of Ministers of Transport, this study was based on case studies and national overviews pro- vided by the six participating countries: France, Germany, the Netherlands, Switzerland, the United Kingdom, and the United States. The research traced the development of LRT; reviewed policy, managerial, and technological trends; and analyzed comparative cost-effectiveness. The standardized framework developed for the project allowed consistent comparisons of the international systems. 2.4 Airline Performance Measurement This section is a summary of the key findings of performance measurement as it relates to the airline industry. Four airline performance measurement areas are discussed in this section: government-monitored measures, airport operator/airline measures, other airport agency measures, and measures result- ing from design recommendations, standards, and levels of service. Performance measures in the airline industry generally take two forms: financial and nonfinancial. Financial performance – Part II offenses b Fare evasion (citations) b Other assaults (arrests) b Trespassing (arrests) b Vandalism (arrests) – Other security issues b Bomb threats b Nonviolent civil disturbances [2.3.13] Some of these safety and security incidents that one would see in an urban transit system, or even in an urban APM sys- tem, might be different from what is seen in an airport APM system. Because of this, these data should be studied further to determine their applicability to airport APM systems and how they might best be developed into effective performance measures. 2.3.3 International Practices In order to identify the use of performance measures in different institutional and cultural contexts, Meyer [2.3.11] examined the use of performance measures in three coun- tries: Australia, Japan, and New Zealand. This work repre- sented an international review on performance measures and was sponsored by the FHWA and AASHTO. After discussing the organizational context for the use of performance mea- sures, identifying key performance measures, and making observations on aspects of the performance-based planning approach used, the author highlighted performance mea- sures related to safety, congestion, and freight movement. The paper noted the following common characteristics of each case: • The use of a common framework for performance measurement; • The importance of collaboration among different agencies for performance categories that transcend one policy area; • The use of performance measures at different levels of planning and decision making; • The vertical integration of information flow in agencies; • The distinction between outcomes and outputs, the importance of data-collection capability, and the use of information technologies; • The importance of performance measurement as a means of providing greater accountability and visibility to the public; and • The need for top management leadership and commitment. The Meyer publication brought home performance mea- surement experience taking place in three different institutional and cultural contexts. The common characteristics of each provide an important understanding of how such measure-

76 passengers voluntarily remain on the platform, such as when they have plenty of time and do not want to squeeze into an occupied vehicle. Both of these types of denied boardings could provide useful information regarding the performance of an airport APM system. However, collecting such data may prove to be difficult. The third measure on the list is the rate of mishandled passenger baggage. The airlines report the number of incidents for lost, damaged, delayed, or pilfered baggage. The airlines that are required to report these statistics are ranked in the Air Travel Consumer Report based on lowest to highest rate of mishandled baggage. This statistic is not directly translatable to airport APM systems because passengers handle their own baggage. While there may be a need in the future to collect such statistics for airport APM systems, the need for mis- handled baggage statistics is not expected in the foreseeable future. The fourth measure on the list concerns consumer com- plaints. The number and type of complaints filed with the U.S. DOT are collected and reported in a variety of formats. The types of reports filed include flight problems, baggage, customer service, oversales, and disabilities. Similar data could be collected for airport APM systems to determine the types of problems encountered by passengers. These statistics could be used by the system operator to improve the level of service provided to passengers. The fifth measure on the list is collected by the NTSB and reported to the FAA regarding accidents and incidents. The report lists all accidents and incidents, including those resulting in fatalities, by aircraft type. Summary data are pre- sented in both preliminary and final reports. More detailed accident and incident data are also collected and reported by the NTSB. Accident and incident data could also be collected for airport APM systems to measure the safety of the system operation, both with regard to passengers and vehicles. These data could then be reviewed to determine when problems exist and corrective actions should be taken. The final measure concerns runway incursions and is collected by the FAA. A runway incursion is “any occurrence in the airport runway environment involving an aircraft, vehicle, person, or object on the ground that creates a colli- sion hazard or results in a loss of required separation with an aircraft taking off, intending to take off, landing, or intend- ing to land” [2.4.6]. A similar measure for incursions on an APM guideway should normally not be necessary given the security and precautions that are taken during design, instal- lation, and operation of an APM system. However, there are instances where incursions, in the form of objects and/or pas- sengers rather than between vehicles, can interfere with APM operations. For example, an open pedestrian bridge cross- ing above an APM at a particular airport has contributed to objects being dropped on the guideway and causing service measures relate some function of cost to an individual unit such as aircraft or originating passenger. Nonfinancial perfor- mance measures usually have no component of cost associ- ated with them. Both types of performance measures could be applied to airport APM systems. In this section, performance measures found in the airline industry will be discussed along with the possible application to airport APM systems. 2.4.1 Government-Monitored Measures The most widely reported nonfinancial performance mea- sures of airlines are collected by government agencies such as the FAA, the United States Department of Transportation (U.S. DOT), and the National Transportation Safety Board (NTSB). These performance measures are used to assess and compare airline performance across the industry. The performance measures collected by the government agencies include: 1. On-time performance, 2. Oversales, 3. Mishandled baggage, 4. Consumer complaints, 5. Accidents and incidents, and 6. Runway incursions. These statistics and others are presented in the Air Travel Consumer Report [2.4.1], which is presented on the U.S. DOT’s website, http://airconsumer.ost.dot.gov/reports/index.htm. On-time performance measures are collected for both arriving and departing aircraft. When aircraft are not con- sidered on time, the reasons for the delays are recorded. The percent of on-time arrivals by airline is just one of the statistics presented in the Air Travel Consumer Report. These statistics can be translated to airport APM systems. However, since APM systems typically operate on a headway basis rather than a schedule basis, a statistic measuring of the frequency of trains may be more useful. The second measure is the number of oversales, or the number of passengers who hold confirmed reservations and are denied boarding on a flight because it is oversold. These include both voluntary and involuntary denied boardings. The first accounts for those passengers who voluntarily give up their seats in exchange for compensation. The second accounts from those passengers who did not volunteer to give up their seats but were denied boarding (bumped) and who may have received compensation. Similar statistics could be collected for airport APM systems to facilitate the operation of the APM. For example, the number of denied boardings due to trains being at capacity could be collected to deter- mine when additional vehicles need to be put into operation to satisfy increased demand. There are also occasions when

77 The majority of the employment and traffic measures listed previously relate to employees. Perhaps the most meaningful statistic with regard to airport APM systems is full-time equiv- alent employees per aircraft. A similar statistic could provide airport APM system operators with a measure of employees per vehicle or employees per vehicle revenue mile that could be useful when comparing systems or when considering expansion plans. 2.4.2 Airport Operator/Airline Measures The performance measures discussed in the preceding paragraphs are collected and reported by government agen- cies and not by the airports or airlines. While those perfor- mance measures are useful to the airports and airlines, there are additional performance measures that airports and air- lines use to gauge and monitor performance. The perfor- mance measures discussed in the following can be considered internal measures, used to monitor employee and process performance. Many airports and airlines have established performance measures for wait time in queue and baggage delivery time to baggage carousel. The following list is an example of the types of processes that are monitored: • First bag delivery to baggage claim, • Last bag delivery to baggage claim, • Curbside check-in time, • Ticket counter check-in time, • Security checkpoint wait time, • Gate check-in time, and • Personal space allocation in queues and waiting areas. The standards against which performance is gauged are not universally defined for these measures but instead are set by individual airports or airlines. For example, one airport operator set a standard of a maximum of 15 min after aircraft arrival for the first bag to be delivered to baggage claim, while another airport operator has a maximum of 5 min for inter- national baggage and 10 min for domestic baggage. Similarly, one airport operator has a standard of a maximum of 20 min to reach a counter for ticket counter check-in, while another requires that 95% of passengers be served with 12 min. While baggage delivery times may not be relevant to airport APM systems, the wait time and personal space allocation measures may be. Airport APM systems should be operated such that passenger demand is satisfied and that adequate personal space is provided. However, as demand grows or as airlines change their schedules, airport APM systems have to adapt to continue to provide passengers with a high level of service. Wait times for trains should be considered so that passengers do not become anxious that they may not make interruptions to either retrieve the object or recover from failures induced by the object. Similarly, a voluntary evacua- tion from a stopped vehicle on the guideway provides access by passengers to the secure (guideway) side of the system, thereby causing a system shutdown. Given the impact that these incursions can have on an airport APM system, up to and including a temporary suspension of service across the entire line, it may be useful to develop performance measures in this area. 2.4.1.1 BTS-Monitored Measures Financial, employment, and traffic performance measures for airlines are collected and reported by the Bureau of Trans- portation Statistics (BTS). These performance measures rep- resent standard airline industry units of production, various output measurements, and output valuations. There are four financial measures and seven employment and traffic measures. Financial Measures. Financial measures collected and reported by the BTS are: 1. System operating profit/loss per originating passenger, 2. System operating expenses per originating passenger, 3. System operating expenses per aircraft, and 4. Passenger revenue per originating passenger [2.4.2]. With regard to the financial measures, since airport APM systems provide nonrevenue service to passengers, data regard- ing profit, loss, and revenues are not relevant. However, oper- ating expenses are relevant. Statistics regarding operating expenses per passenger or vehicle revenue mile may be useful in evaluating the APM system or making comparisons between different system technologies. Therefore, financial perfor- mance measures similar to those used in the airline industry may be relevant to airport APM systems. Employment and Traffic Measures. Employment and traffic measures collected by the BTS are: 1. Full-time equivalent employees per aircraft, 2. Average monthly available seat-miles per full-time equiva- lent employee, 3. Average monthly revenue aircraft minutes per full-time equivalent employee, 4. Average monthly originating passengers per full-time equivalent employee, 5. Fuel cost per originating passenger;, 6. Average full-time equivalent employee compensation per originating passenger, and 7. Average annual full-time equivalent employee compensa- tion [2.4.2].

78 and anxious. Maximum wait time as a measure of passenger service for airport APM systems is a reasonable extension of the IATA wait-time guidelines. Similarly, IATA recommends level-of-service and space stan- dards for many airport facilities, such as check-in and baggage claim areas. Level-of-service standards can be defined for air- port APM platforms and vehicles as a means of comparing the level of service provided by different systems or for determining when additional trains are required. In addition, John J. Fruin’s Pedestrian Planning and Design [2.4.7] provides recommendations for personal space alloca- tion and has become a handbook for designers and planners. Fruin’s level-of-service standards are based on a six-point scale, with “A” being the best level of service and “F” being unacceptable service. 2.5 Highway Performance Measurement This section is a summary of the key findings of perfor- mance measurement as it relates to the highway industry on the national level. Three areas are discussed in this section: per- formance measurement activities of the FHWA, the National Transportation Operations Coalition, and a review of NCHRP Project 3-68, “Guide to Effective Freeway Performance Mea- surement” (the final report for which was produced as NCHRP Web-Only Document 97: Guide to Effective Freeway Performance Measurement) [2.5.6]. 2.5.1 FHWA Performance Measurement Program The FHWA’s Office of Operations supports a performance measurement program focused on system (highway) perfor- mance as it relates to mitigation of congestion. The program measures the sources and consequences of congestion, and the effectiveness of operations strategies to reduce that congestion [2.5.5]. Some examples of congestion performance measures that can be used by highway agencies to monitor trends are provided in Table A-2. While many of these metrics may be useful in the highway industry, one of the principles that the FHWA has established for monitoring congestion is that meaningful congestion performance measures must be based on the measurement of travel time. The reason for this is that travel time is easily understood by a wide variety of audiences—both technical and nontechnical—and it can be used from both a user and owner/agency perspective. In particular, the FHWA has identified travel time reliability and its associated measures as the most effective measures of (highway) system perfor- mance from the user’s perspective [2.5.5]. As a result, this their connecting flights. Similarly, passengers prefer adequate personal space on platforms and trains. Periodic monitoring of the wait times and personal space allocations may be use- ful to be sure that passengers have a comfortable experience using the APM. 2.4.3 Other Airport Agency Measures Agencies operating at airports, such as immigration ser- vices, have performance criteria they use to determine proper staffing levels and space requirements. International airport organizations also have defined standards for performance to be used in the planning and monitoring of the immigration services function. The International Air Transport Association (IATA) defines level-of-service space requirements for passport control [2.4.8], while the International Civil Aviation Organization (ICAO) has standards for the processing of passengers from an arriving aircraft [2.4.9]. Immigration agencies have also set standards for how long passengers wait for immigration processing and how much space they should be allocated. As discussed in the previous section, the wait time and personal space allocation measures are relevant to airport APM systems. Just as the immigration services consider the passenger wait times and space allocation in monitoring per- formance, airport APM system operators may benefit from applying these same types of performance measures. 2.4.4 Design Recommendations, Standards, and Levels of Service Finally, there are level-of-service recommendations used by airport developers, planners, and designers that, while not expressly intended as performance measures, can be used to gauge and compare the efficiency and performance of airport facilities. For example, the Airport Development Reference Manual published by IATA is frequently used as a guide for those planning new or expanding existing airport facilities. While the manual presents recommendations, they have, in fact, come to be viewed as standards [2.4.8]. Perhaps the most relevant recommendations presented in the manual for airport APM applications are those pertain- ing to walk distances, wait times, and space occupancy. For example, IATA recommends that the maximum unassisted walking distance between major airport functions be 300 m. In assessing airport APM system designs or comparing one system to another, the maximum walk distance can be used as a measure of the level of passenger service provided. The waiting time guidelines recommended by IATA for various airport facilities are akin to passenger wait time for an APM train. The idea is to set a maximum wait time as a standard so that passengers do not become uncomfortable

79 Performance Metric Definition/Comments Throughput Vehicle miles of travel (VMT) Vehicle miles of travel is the number of vehicles on the system times the length of highway they travel. Person miles of travel is used to adjust for the fact that some vehicles carry more than a driver. Truck-vehicle miles of travel Person miles of travel Average Congestion Conditions Average travel speed The average speed of vehicles measured between two points. Travel time The time it takes for vehicles to travel between two points. Both travel time and average travel speed are good measures for specific trips or within a corridor. Number and percent of trips with travel times > (1.5 × average travel time) Thresholds of 1.5 and 2.0 times the average may be adjusted to local conditions; additional thresholds may also be defined. Number and percent of trips with travel times > (2.0 × average travel time) Travel time index Ratio of actual travel time to an ideal (free-flow) travel time. Free-flow conditions on freeways are travel times at a speed of 60 mph. Total delay (vehicle hours and person hours) Delay is the number of hours spent in traffic beyond what would normally occur if travel could be done at the ideal speed. Determining delay by source of congestion requires detailed information on the nature and extent of events (incidents, weather, and work zones) as well as measured travel conditions. Bottleneck (recurring) delay (vehicle hours) Traffic incident delay (vehicle hours) Work zone delay (vehicle hours) Weather delay (vehicle hours) Ramp delay (vehicle hours and person hours; where ramp metering exists) Delay per person Delay per person and delay per vehicle require knowledge of how many vehicles and persons are using the roadway. Delay per vehicle Percent of VMT with average speeds <45 mph Percent of VMT with average speeds <30 mph Percent of day with average speeds <45 mph These measures capture the duration of congestion. Percent of day with average speeds <30 mph Reliability Planning time (computed for actual travel time and the travel time index) The 95th percentile of a distribution is the number above which only 5% of the total distribution remains. That is, only 5% of the observations exceed the 95th percentile. For commuters, this means that for 19 out of 20 workdays in a month, their trips will take no more than the planning time. Planning time index (computed for actual travel time and the travel time index) Ratio of the 95th percentile (planning time) to the ideal or free-flow travel time (the travel time that occurs when very light traffic is present—about 60 mph on most freeways). Buffer index Represents the extra time (buffer) most travelers add to their average travel time when planning trips. For a specific road section and time period: Buffer index (%) = 95th percentile travel time – average travel time average travel time Source: [2.5.1] Table A-2. Examples of congestion performance measures used by highway agencies.

80 cal conditions. By its very nature, roadway performance is highly variable and unpredictable, in that on any given day, unusual circumstances such as vehicle accidents can dra- matically change the performance of the roadway, affecting both travel speeds and throughput volumes. Because travel conditions are so unreliable on congested highways, travel- ers must plan for these problems by leaving early to avoid being late [2.5.1]. In terms of an agency/operator using this as an internal measure for an APM system, it may not be meaningful because airport APM systems tend to have a high level of availability and do not have wide variances in the travel times found on the roadways (i.e., whereas the 90th or 95th percentile travel times for a highway will provide a measurable level of travel time and unavailability over a month’s time, an airport APM system may not). 2.5.1.2 Buffer and Planning Time Indices Another effective travel time reliability measure in the highway industry is the buffer index. The buffer index, pre- sented as a percentage, represents the extra time travelers must add to their average travel time when planning trips in order to ensure an on-time arrival most of the time. It is expressed as a percentage, and its value increases as reli- ability gets worse. For example, if the average travel time is 30 min and the buffer index is 40%, a traveler should allow an additional 12 min to rely on an on-time arrival 95% of the time. The extra 12 min is called the buffer time. The buffer index is computed as the difference between the 95th per- centile travel time and the average travel time, divided by the average travel time. The planning time index measure represents the total travel time that should be planned for a trip and includes an adequate buffer time to rely on an on-time arrival of 95%. For the previous example, the planning index would be 1.40. The planning time index differs from the buffer index in that it includes typical delay as well as unexpected delay. In addition, where the buffer index is used in deter- mining the additional time necessary to make the trip, the planning index is used in determining the total travel time necessary for the trip. The planning time index is computed as the 95th percentile travel time divided by the average travel time. In the context of the airport APM industry, it may not be expected that passengers would be traveling daily on an air- port APM, nor that the travel time on an airport APM would vary to a degree great enough or frequent enough to make these measures meaningful for use by the public or an APM owner/operator. The additional time that must be allowed to make a trip within an established time, or the total time to be allowed for planning a trip within such a time, are mea- section will focus on travel time reliability and the measures that quantify it. Travel time reliability is the consistency or dependability in travel times, as measured from day to day and/or across different times of the day. It better represents a commuter’s experience than a simple average travel time measurement. It is important because most travelers are less tolerant of un expected delays (nonrecurring congestion) than of every- day congestion, and they also tend to remember bad traffic days over an average daily travel time throughout the year. The recommended measures used to quantify travel time reliability are 90th or 95th percentile travel time, buffer index, and planning time index [2.5.4]. 2.5.1.1 90th or 95th Percentile Travel Time The 90th or 95th percentile travel time measure is sim- ply an estimate in minutes of how bad delays will be on certain routes during the heaviest traffic days. The 90th or 95th percentile represents those days in the month that are the heaviest traffic days causing the greatest congestion and longest travel times; it is the near-worst-case travel time. This method requires continuous tracking of travel times in order to provide an accurate estimate. State departments of trans- portation employ this method for use by the public online. A traveler can, for example, determine on a website that the 95% reliable travel time for a particular route is 59 min, which means that if the traveler allows 59 min for the trip on that route, he or she would be on time 19 out of 20 weekdays in the month [2.5.4]. This measure is ideally suited for traveler information in that it provides a gauge of how many times in a month, for example, the travel time can be relied upon. It does not obviously predict the day(s) when the 90th or 95th travel time may occur, but used in conjunction with other mea- sures to be described in the following subsections, a reason- able probability of arriving on time can be computed. From the agency’s view, this measure may be useful in that it can track the creep in the average travel time over a period of time (i.e., the 59-min travel time may creep to 63 min over time). This would be useful in planning for the mitiga- tion of congestion, whether in the form of providing additional infrastructure or employing other techniques such as HOV lanes, directional lanes by time of day, or other operational strategies. As applied to the airport APM industry, providing this measure to the traveling public may not be as useful as in the highway industry since APM travel times tend to be relatively short, thereby making the measure less meaningful. In addi- tion, the high availability of airport APM systems (99%+) provides more dependable transportation than a highway system, where there is a great variation from average or typi-

81 Of these 12, there are four that may be appropriately used as performance measures in the airport APM industry: cus- tomer satisfaction, incident duration, throughput – person, and throughput – vehicle. Each of these is briefly discussed in the following paragraphs with regard to applicability to airport APMs. 2.5.2.1 Customer Satisfaction Customer satisfaction is a measure in the NTOC Perfor- mance Measurement Initiative final report that applies spe- cifically to highway management and operations; however, it is a measure that can be applied in the airport APM industry as well. It can be measured in different ways, one of which is by assigning values to survey responses and tracking those values over time. While the literature review of APM-related material did not specifically yield customer satisfaction as a measure used in the airport APM industry, it is a measure that merits further exploration and may possibly be found to be in use at airport APM properties today. 2.5.2.2 Incident Duration The second NTOC performance measure that may be applicable to the airport APM industry is the incident dura- tion measured in median minutes per incident. While this measure has specific definition and meaning in the NTOC report as applied to the highway industry, a similar measure is in use today in the airport APM industry, as described earlier in this appendix. MTTR, or mean time to restore, is a measure that similarly gauges the time elapsed from the beginning until the end of an incident or failure and is used in the overall calculation of system availability. Where this measure is used to evaluate the effectiveness of emergency responders on incident duration in the highway industry, it can and is similarly used to evaluate the maintainability and effectiveness of maintenance technicians on failures in the airport APM industry. 2.5.2.3 Throughout The remaining two NTOC performance measures that may be applicable to the APM industry are the measures of person and vehicle throughput. Both are measures of capacity and are currently used in the design and operations of airport APM systems. For the most part, they are very well defined in the APM industry. As discussed earlier in this appendix, capac- ity is taken into account during revenue operations of airport APM systems in general and as a way to credit operators who provide the highest capacity during degraded mode opera- tions. It will be useful to further explore the potential use of capacity as a performance measure for airport APM systems. sures that attempt to provide a level of predictability for a traveler using a (highway) system that presents a significant level of unreliability in travel times. Airport APM systems provide a relatively high level of travel time reliability that may make other measures better candidates for measuring performance. 2.5.2 National Transportation Operations Coalition In 1999 the FHWA initiated the National Dialogue on Transportation Operations to encourage discussions on roadway operations issues and advocate for a stronger focus on operating the nation’s transportation system. This resulted in several major initiatives and evolved into the creation of the National Transportation Operations Coali- tion (NTOC). The NTOC is supported by the FHWA and serves as an important foundation for institutionalizing manage- ment and operations in the transportation industry. It is an alliance of national associations, practitioners, and private-sector groups that represent the collective interests of stakeholders at state, local, and regional levels who have a wide range of experience in operations, planning, and public safety. The mission of the NTOC is “to improve management and operation of the nation’s existing trans- portation system so that its performance will exceed customer expectations” [2.5.8]. The Performance Measurement and Reporting sub- committee of the NTOC is one of a number of subcommittees and action teams working to promote operations strategies and benefits to stakeholders. In July 2005 this subcommittee issued a final report on its Performance Measurement Initia- tive, which identified 12 performance measures commonly agreed upon by federal, state, and local transportation offi- cials to be the basis for a national set of performance mea- sures. The measures may be used for internal management, external communications, and comparative measurement. The measures are: • Customer satisfaction • Extent of congestion – spatial • Extent of congestion – temporal • Incident duration • Nonrecurring delay • Recurring delay • Speed • Throughput – person • Throughput – vehicle • Travel time – link • Travel time – reliability (buffer time) • Travel time – trip

82 team that airport APMs, although intrinsically very safe, may result in passenger injury when the system is not used correctly. Such isolated instances occur when trains perform an emergency braking while passengers are not either seated or holding onto a stanchion, handrail, or strap, and when platform or vehicle doors close on passengers that attempt to enter or exit a train after the warning chime has sounded and the doors have begun to move. These instances are the reasons most often cited when litigation is involved. Some safety measures that may be interesting to track in these contexts are the number of emergency brakings per thousand interstation runs performed and the number of door closing alarms per thousand dwell sequences. These safety-related measures may be useful in gauging the risk of exposure to passenger injuries for these isolated instances. Theoretically, the higher the value of the measure, the greater the risk of exposure. The lower the value of the measure, the lower the risk of exposure. 2.5.3.3 Fuel Consumption per Vehicle Mile Traveled The report recommends a performance measure of fuel consumption per VMT, which is calculated based on the modeled gallons of fuel consumed on a freeway divided by the freeway VMT. A variation of this measure may be use- ful in the APM industry. Because APM systems use electrical energy rather than fuel (gas/diesel) energy, the correspond- ing measure would be electrical energy consumption per vehicle mile traveled. Designers today use this measure as part of the process of estimating O&M costs for APM sys- tems. It remains to be seen, however, if this measure would be useful beyond that. This measure may be relevant in the measurement of freeway performance because roadway vehicles often can be standing still or creeping in bumper- to-bumper congestion. As congestion gets worse, the mea- sure would theoretically reflect that (i.e., the gallons of fuel consumed would increase while VMT decreases). Although APM systems consume electrical energy when standing still, the systems are not susceptible to the type of congestion and delays seen on freeways, and because of this, it is not expected that this measure as applied to airport APM systems would be meaningful. 2.6 Conclusion It is undeniable that automated people movers are playing a vital role in various airports and activity centers around the world. APM systems transport people from their origin to destination with a high degree of reliability, comfort, and speed. Given their importance, it is essential for APM operators 2.5.3 Freeway Performance Measurement: NCHRP Project 3-68 The final area of highway performance measurement is discussed in this section and concerns the final report and guidebook of NCHRP Project 3-68, “Guide to Effective Freeway Performance Measurement,” produced as NCHRP Web-Only Document 97: Guide to Effective Freeway Perfor- mance Measurement [2.5.6]. The report recommends a total of 47 measures in 12 cate- gories as core freeway performance measures, and an addi- tional 78 measures in nine categories as supplemental freeway performance measures. The research team focused on the core performance mea- sures and found several that were the same as or similar to measures previously discussed in this section of the appendix. As such, those are not reconsidered here. However, there are three measures that were found to merit exploration in terms of their applicability to airport APMs: vehicle miles of travel (VMT), safety, and energy consumption. 2.5.3.1 Vehicle Miles of Travel Vehicle miles of travel is the product of the number of vehicles traveling over a length of freeway times the length of the freeway. This measure is also used in the APM industry, except that it is based on the distance the vehicles travel over the length of the revenue areas of the system guideway. The numbers of fleet miles and/or average annual vehicle miles are used regularly in the development of fleet sizes, maintenance and storage facility sizes, preventive maintenance schedules, and cost estimates for operations and maintenance of APM systems. Although the research of airport APM literature performed for this memorandum did not reveal performance measures employing a vehicle- or fleet-mile component, it may nevertheless be useful in comparing airport APM systems. For example, annual fleet mileage may be useful as a stand- alone measure to describe an APM system as compared to other APM systems. A measure incorporating vehicle mileage could also be useful, such as the number of platform door failures per vehicle mile traveled. 2.5.3.2 Safety The core measures listed for safety (quality of service) in the report for NCHRP Project 3-68 specifically apply to crashes (i.e., total crashes, fatal crashes, overall crash rate, fatality crash rate, and secondary crashes). Any measure- ment related to crashes would not be applicable to the APM industry since crashes almost never occur. However, there are some safety measures that may merit further exploration for airport APMs. It has been the experience of the research

83 in covering efficiency, effectiveness, and customer perspectives. On the other hand, performance measurement methodolo- gies are still evolving, and there is no uniformly compiled framework for various agencies due to the diversity of systems and their respective service areas. Nevertheless, the material reviewed in preparation of this appendix served as a road map for the project team to develop the guidebook for perfor- mance measures of airport APM systems. and decision makers to evaluate and manage their systems using a set of performance measures. The base of literature on transportation performance measures of other modes is large, especially when compared with that of APM systems. With the long history of devel- oping, improving, and applying performance indicators in the transportation planning and operations fields, transit performance measures, for example, are fairly comprehensive

84 3.1 Performance Measurement— In General [2.1.1] Chang, H. W., R. D. Banker, and W. W. Cooper. “Simulation Studies of Efficiency and Returns to Scale with Nonlinear Func- tions in DEA,” Annals of Operations Research, 1995. [2.1.2] Charnes, A., W. Cooper, and E. Rhodes. “Measuring the Effi- ciency of Decision-Making Units.” European Journal of Operational Research, 1978. [2.1.3] Chu, X., G. J. Fielding, and B. W. Lamar, “Measuring Transit Performance Using Data Envelopment Analysis.” California Uni- versity, Irvine, 1990. [2.1.4] Federal Highway Administration. Office of Operations. Performance Measurement Fundamentals. http://ops.fhwa.dot. gov/perf_measurement/fundamentals.htm. Accessed October 2007. [2.1.5] Government Accountability Office. United States of America. Performance Measurement and Evaluation: Definitions and Rela- tionships. GAO-05-739SP. May 2005. [2.1.6] Henderson, G., P. Kwong, and H. Adkins. “Regularity Indices for Evaluating Transit Performance.” Transportation Research Record 1297. TRB, National Research Council, Washington, D.C., 1991, pp. 3–9. [2.1.7] Kaplan, R., and D. Norton. “The Balanced Scorecard: Measures that Drive Performance.” Harvard Business Review. Vol. 70, Issue 1, Jan/Feb 1992, pp. 71–79. [2.1.8] Lee, D. B., “Transit Cost and Performance Measurement.” Transport Reviews, Vol. 9, No. 2. Taylor & Francis Limited, 1989. [2.1.9] Litman, T. 2005. “Measuring Transportation: Traffic, Mobility and Accessibility.” The ITE Journal, Vol. 73, No. 10, October 2003, pp. 28–32. [2.1.10] Liu, R., R. Pendyala, and S. Polzin. “A Simulation of the Effects of Intermodal Transfer Penalties on Transit Use.” Transportation Research Record 1623. TRB, National Research Council, Washing- ton, D.C., 1998, pp. 88–95. [2.1.11] National Partnership for Reinventing Government. Balancing Measures: Best Practices in Performance Management. August 1999. http://govinfo.library.unt.edu/npr/library/papers/bkgrd/bal measure.html. Accessed October 2007. [2.1.12] National Performance Review. Serving the American Public: Best Practices in Performance Measurement. Benchmarking Study Report. June 1997. http://govinfo.library.unt.edu/npr/library/papers/bench mrk/nprbook.html. Accessed October 2007. [2.1.13] Obeng, K. “Total Productive Measurement in Bus Transit: Comment.” Journal of Transportation Engineering, Vol. 117, No. 6. American Society of Civil Engineers. 1991. [2.1.14] Phillips, J. K. “An Application of the Balanced Scorecard to Public Transit System Performance Assessment.” Transportation Journal, Vol. 43, No. 1. American Society of Transportation and Logistics, 2004. [2.1.15] Racca, D. P. “Transportation Network Models to Accurately Estimate Transit Level of Service (Paper 1921),” Proceedings of the Twenty-Fourth Annual ESRI User Conference. 2004. [2.1.16] Takyi, I. K. “A Multi-Dimensional Methodology for Evaluating Public Transportation Services.” Transportation Research. Part A: Policy and Practice, Vol. 27, No. 5., Pergamon Press plc; Elsevier, 1993. 3.2 APMs [2.2.1] American Society of Civil Engineers. Automated People Mover Standards—Part 1. American Society of Civil Engineers, Reston, VA, 2006. [2.2.2] Anderson, J. E. “Dependability as a Measure of On-Time Per- formance of Personal Rapid Transit Systems.” Journal of Advanced Transportation, 26, 1992, Institute for Transportation, Inc. [2.2.3] Chicago O’Hare International Airport. AGT System Technical Provisions. Undated. [2.2.4] Chicago O’Hare International Airport. How to Count the Avail- ability. 1993. [2.2.5] Cottrell, W. D., and Y. J. Nakanishi. “Airport APM Performance Measurement: Network Configuration and Service Availability.” 11th International Conference on Automated People Movers, Apr. 2007, American Society of Civil Engineers. Vienna. [2.2.6] Dallas–Fort Worth International Airport. APM System Oper- ations and Maintenance General Conditions (Conformed), November 2000. [2.2.7] Elms, C. P. “Defining and Measuring Service Availability for Com- plex Transportation Networks.” International Conference on PRT & Other Emerging Transportation Systems, Nov. 1996. Minneapolis. [2.2.8] Frey, H., and A. Levy. “An Incentive Approach to Contractual Availability for Downgraded Operation.” 9th International Con- ference on Automated People Movers, Sept. 2003, ALONEX/Aero- ports de Paris. Singapore. [2.2.9] Gosling, G. D., and L. Xiao-Yun. “Modeling Measures to Improve Intermodal Connectivity At Airports.” University of California at Berkeley, 2006. S e c t i o n 3 Appendix A Bibliography

85 [2.3.14] National Transit Database Program. Welcome to NTD Data. http://www.ntdprogram.gov/ntdprogram/pubs/national_ profile/2005NationalProfile.pdf. Accessed October 2007. [2.3.15] Rouphail, N. M., “Performance Evaluation of Bus Priority Measures in the Chicago Loop.” Proceedings of Metropolitan Con- ference on Public Transportation Research. Sponsored by Center for Urban Research and Policy Studies, 1986. [2.3.16] Rystam, A., and H. Renolen. “Evaluation of Public Transport Measures: Guidelines for Evaluation on a Common Basis.” Swedish Transport and Communications Research Board, 1998. [2.3.17] Tanaboriboon, Y., A. S. M. Abdul Quium, and C. Chang singha. “Performance Indicator Analysis: A Management Tool for the Improvement of Bus Transit Operations in Bangkok.” Journal of Advanced Transportation, Vol. 27, No. 2. Institute for Transporta- tion, 1993. [2.3.18] Topp, R. M., “Improving Light Rail Transit Performance in Street Operations: Toronto Case Study.” In State of the Art Report 2. TRB, National Research Council, Washington, D.C., 1985, pp. 227–233. [2.3.19] Transportation Research Board, Kittelson and Associates, Incor- porated. TCRP Report 88: A Guidebook for Developing a Transit Per- formance Measurement System. Transportation Research Board of the National Academies, Washington, D.C., 2003. [2.3.20] Committee for the Conference on Performance Measures to Improve Transportation Systems and Agency Operations. Con- ference Proceedings 26: Performance Measures to Improve Trans- portation Systems and Agency Operation: Report of a Conference. Transportation Research Board of the National Academies, Wash- ington, D.C., 2001. [2.3.21] Turnbull, K. F. Conference Proceedings 36: Performance Measures to Improve Transportation Systems: Summary of the Second National Conference. Transportation Research Board of the National Acad- emies, Washington, D.C., 2005. [2.3.22] Vaziri, M., and J. A. Deacon. “Peer Comparisons in Transit Performance Evaluation.” Transportation Research Record 961. TRB, National Research Council, Washington, D.C., 1984, pp. 13–21. 3.4 Airlines [2.4.1] Aviation Consumer Protection Division, Office of Aviation Enforcement and Proceedings, US Department of Transporta- tion. Air Travel Consumer Report. http://airconsumer.ost.dot.gov/ reports/index.htm. Accessed September 2007. [2.4.2] Bureau of Transportation Statistics. Performance Measures in the Airline Industry. http://www.bts.gov/programs/airline_infor- mation/performance_measures_in_the_airline_industry. Accessed September 2007 [2.4.3] Federal Aviation Administration. Preliminary Accident and Incident Reports. http://www.faa.gov/data_statistics. Accessed September 2007. [2.4.5] Federal Aviation Administration. Runway Safety, Historical Data and Statistics. http://www.faa.gov/runwaysafety/stats.cfm. Accessed September 2007. [2.4.6] Federal Aviation Administration. Runway Safety. http://www. faa.gov/runwaysafety. Accessed September 2007. [2.4.7] Fruin, J. J. Pedestrian Planning and Design. Revised edition. Mobile: Elevator World, Inc., 1987. [2.4.8] International Air Transport Association. Airport Development Reference Manual. 9th ed. Montreal: IATA, 2004. [2.2.10] Greater Orlando Aviation Authority, Orlando International Airport. Phase II Expansion, Automated Guideway Transit System, Part 4—O&M Contract Provisions. 1987. [2.2.11] Howell, J. K. “RAM: Reliability, Availability, and Maintainabil- ity of APM Systems.” 2nd International Conference on Automated People Movers, March 1989, American Society of Civil Engineers. Miami. [2.2.12] Lede, N. W., and L. Yu. A Manual for Evaluating Personal- ized Public Transit Systems. Federal Transit Administration, Texas Southern University, and Dallas Area Rapid Transit, 1992. 3.3 Public Transit [2.3.1] American Public Transit Association. “Transit Performance and Productivity 1975–1980: Improvements Through the Inter- Governmental Partnership.” American Public Transit Association, 1985 [2.3.2] Dzurik, A. A., and W. T. Olsen. “Development of Transit System Productivity Measures Based on Section 15 and Urban Area Envi- ronment Data.” Florida State University, Tallahassee; Urban Mass Transportation Administration, 1985. [2.3.3] Fielding, G. J. Managing Public Transit Strategically. Jossey-Bass, San Francisco, 1987. [2.3.4] Hensher, D. A., and R. Daniels. “Productivity Measurement in the Urban Bus Sector.” Transp. Policy, 2-3, 1995, pp. 179–194. [2.3.5] Hooper, P. G., and D. A. Hensher. “Measuring Total Factor Pro- ductivity of Airports: An Index Number Approach.” Transp. Res. Part E, 33-4, 1997, pp. 249–259. [2.3.6] Hobeika, A. G., C. Kanok-Kantapong, and T. K. Tran. “A Methodology for Comparative Transit Performance Evaluation with UMTA Section 15 Data.” Transportation Research Record 961. TRB, National Research Council, Washington, D.C., 1984, pp. 36–43. [2.3.7] Kopp, J. C., J. A. Moriarty, and M. E. Pitstick. “Transit Attractive- ness: Systematic Approach to Transit Performance Measurement.” Transportation Research Record 961. TRB, National Research Coun- cil, Washington, D.C., 1986, pp. 11–16. [2.3.8] Lauritzen, T. “Chicago Transit Authority Special Service Perfor- mance Measures for First Year of Service.” Proceedings of Metro- politan Conference on Public Transportation Research. Sponsored by Center for Urban Research and Policy Studies, 1987. [2.3.9] Lew, L. L., J.-L. Li, and M. Wachs. “Comprehensive Transit Per- formance Indicators.” Working paper. Institute of Transportation Studies, School of Public Policy and Social Research, University of California at Los Angeles, 1994. [2.3.10] Lyons, W. M., E. Weiner, and P. Shadle. “Comparative Evalu- ation of Performance of International Light Rail Systems,” Trans- portation Research Record 1433. TRB, National Research Council, Washington, D.C., 1994. pp. 115–122. [2.3.11] Meyer, M. D. “Use of Performance Measures for Surface Transportation in Different Institutional and Cultural Contexts: Cases of Australia, Japan, and New Zealand.” Transportation Research Record: Journal of the Transportation Research Board, No. 1924. Transportation Research Board of the National Acad- emies, Washington, D.C., 2005, pp. 163–174. [2.3.12] National Transit Database Program. Welcome to the National Transit Database. http://www.ntdprogram.gov/ntdprogram/ntd. htm. Accessed October 2007. [2.3.13] National Transit Database Program. 2007 Safety and Security Reporting Manual. http://www.ntdprogram.gov/ntdprogram/pubs/ safetyRM/2007/HTML/2007_Safety_and_Security_Reporting_ Manual_TOC.htm. Accessed October 2007.

86 [2.5.4] Federal Highway Administration. Travel Time Reliability: Mak- ing it There on Time, All the Time. Cambridge Systematics, Inc., and Texas Transportation Institute. http://www.ops.fhwa.dot.gov/ publications/tt_reliability/TTR_Report.htm. Accessed February 2006. [2.5.5] Federal Highway Administration. Office of Operations. Opera- tions Performance Measurement. http://www.fhwa.dot.gov/aboutus/ one_pagers/perf_measurement.pdf. FHWA-OP-04-039. Accessed April 2004. [2.5.6] Cambridge Systematics, Inc., Texas Transportation Institute, University of Washington, and Dowling Associates. NCHRP Web- Only Document 97: Guide to Effective Freeway Performance Mea- surement: Final Report and Guidebook. Transportation Research Board of the National Academies, Washington, D.C., 2006. http:// onlinepubs.trb.org/onlinepubs/nchrp/nchrp_w97.pdf. Accessed August 2006. [2.5.7] National Transportation Operations Coalition. Performance Measurement Initiative: Final Report. Accessed July 2005. [2.5.8] National Transportation Operations Coalition. Performance Measures. http://www.ntoctalks.com. Accessed October 2007. [2.4.9] International Civil Aviation Organization, International Stan- dards and Recommended Practices for Facilitation, Annex 9. 9th edi- tion. ICAO, 2006. [2.4.10] Massachusetts Port Authority (Massport). Press release, 2/15/2001. http://www.massport.com/about/press01/press_news_ performance.html. Accessed September 2007 3.5 Highways [2.5.1] Federal Highway Administration. Traffic Congestion and Reli- ability: Trends and Advanced Strategies for Congestion Mitigation: Final Report. Cambridge Systematics, Inc., and Texas Transporta- tion Institute. September 2005. [2.5.2] Federal Highway Administration. Office of Operations. National Transportation Operations Coalition. http://www.ops.fhwa.dot. gov/nat_dialogue.htm. Accessed October 2007. [2.5.3] Federal Highway Administration. Office of Operations. The National Transportation Operations Coalition. http://www.ops. fhwa.dot.gov/aboutus/one_pagers/ntoc.htm. Accessed October 2007.

87 Task 3 of the research requires the following work: Develop a detailed plan for a survey to investigate the cur- rent practice of performance measurement by APM systems at airports. The survey should (1) address the characteristics of the APM systems, (2) determine the performance measures used, (3) identify the data-collection practices associated with the performance measures, (4) request performance data for the most recent year, and (5) request suggestions for improving data-collection and performance-measurement practices. The detailed plan should include a survey instrument and a list of airports to be included in the survey. The plan is not to be executed until the panel has reviewed and approved the final survey plan. The objectives of the survey plan were to: • Describe the process of developing the survey instrument; • Describe the steps taken to implement the survey; • Transmit the draft survey instrument; and • Obtain ACRP and panel approval of the survey plan and instrument prior to its implementation. The following sub-sections document the results of the work undertaken for Task 3 of the research. 4.1 Survey Sites and Site Visits In conjunction with developing the first version of the survey instrument, the team identified APM systems to be surveyed and those APM sites that would be visited, in accor- dance with the scope of work and panel comments and in support of finalizing the draft of the survey instrument. 4.1.1 Identify Candidate APM Systems to Survey The scope of work for ACRP Project 03-07 originally envisioned surveying all airport APM systems worldwide. Prior to contract award and in consideration of project bud- get and schedule constraints, ACRP limited the survey to North American airport APM systems. In addition, proposal review comment number 5 provided by the panel on March 12, 2007, recommended that the survey “at a minimum, include all air- port APMs in North America and several non-airport APMs to obtain data from at least 10 airports and two non-airport APMs.” With this in mind, the research team began the process of identifying the APM systems to be surveyed as well as those to be visited during development of the survey instrument. The Airfront.21 information clearinghouse maintains a list of all APM systems operating throughout the world. Systems are classified as airport, airfront, leisure, institutional, line transit, and local transit. The list of North American APM systems in Table A-3 is excerpted from Airfront.21 and provides a starting point for the current research project. 4.1.2 Select Final APM Systems to Survey The research team analyzed the list of North American APM systems in Table A-3 and, based in part on the scope of work, selected 31 systems to survey. All North American airport APM systems were selected to be surveyed. Nine non-airport APM systems were also selected based on those systems’ size and operation, a cross section of technologies, and a number likely to result in receiving the minimum two responses requested by the panel. The research team generally did not include low-volume, leisure, and institutional systems in its selection of non-airport systems. The final lists of airport and non-airport APM systems selected to be surveyed are provided in Tables A-4 and A-5, respectively. 4.1.3 Select APM Systems for Site Visits After finalizing the list of APM systems to be surveyed, the research team analyzed the characteristics and geographic S e c t i o n 4 Survey Plan and Instrument (Task 3)

88 Table A-3. North American APM systems. Airport Leisure Atlanta Bellagio-Monte Carlo APM – Las Vegas, NV Chicago Bronx Zoo Monorail – New York City, NY Cincinnati CalExpo Dallas–Fort Worth Circus-Circus – Reno, NV Denver Hershey Park – PA Detroit Mandalay Bay/Excalibur Tram – Las Vegas, NV Houston (2) Mud Island River Park – Memphis, TN Las Vegas Miami Metrozoo Monorail – FL Mexico City Minneapolis Zoo Monorail – MN Miami Mirage/Treasure Island Tram – Las Vegas, NV Minneapolis (2) Primm Valley UniTrak – NV Newark Primm Valley Shuttle System (Whiskey Pete) – NV New York City – JFK Airport Institutional Orlando Clarian People Mover – Indianapolis, IN Pittsburgh Duke Hospital – Raleigh, NC San Francisco Getty Center Tram – Los Angeles, CA Seattle-Tacoma Huntsville Hospital Tram System – AL Tampa (2) Las Colinas APT – Irving, TX Toronto Morgantown PRT – WV Local Transit Pearlridge Center Monorail – Honolulu, HI Detroit, MI PeopleMover Senate Subway – Washington, DC Jacksonville, FL Skyway Line Transit Las Vegas Monorail – NV Vancouver SkyTrain – British Columbia, Canada Miami, FL Metromover Scarborough RT – Toronto, Canada Source: Airfront.21 at http://www.airfront.us/PDFs/Count07.pdf Airport System Name Technology Propulsion Wheel/RailInterface 1 Atlanta (ATL) Concourse People Mover People mover (large) Onboard Rubber/concrete 2 Chicago (ORD) Airport Transit System People mover (large) Onboard Rubber/steel 3 Cincinnati (CVG) n/a People mover (medium) Cable Hovair/concrete 4 Dallas–Fort Worth (DFW) Skylink People mover (large) Onboard Rubber/concrete 5 Denver (DEN) Auto. Gdwy Transit Sys. People mover (large) Onboard Rubber/concrete 6 Detroit (DTW) Express Tram APM People mover (medium) Cable Hovair/concrete 7 Houston (IAH) TerminaLink People mover (large) Onboard Rubber/concrete 8 Houston (IAH) Inter-Terminal Train People mover (small) LIM Polypropylene/steel 9 Las Vegas (LAS) C and D Gates Tram People mover (large) Onboard Rubber/concrete 10 Mexico City (MEX) n/a People mover (medium) Cable Rubber/steel 11 Miami (MIA) Concourse E Shuttle People mover (large) Onboard Rubber/concrete 12 Minneapolis (MSP) Concourse Tram People mover (medium) Cable Steel/steel 13 Minneapolis (MSP) Hub Tram People mover (medium) Cable Hovair/concrete 14 New York City (JFK) AirTrain JFK Rapid Rail (medium) LIM Steel/steel 15 Newark (EWR) AirTrain Newark Monorail (small) Onboard Rubber/steel 16 Orlando (MCO) n/a People mover (large) Onboard Rubber/concrete 17 Pittsburgh (PIT) n/a People mover (large) Onboard Rubber/concrete 18 San Francisco (SFO) AirTrain People mover (large) Onboard Rubber/concrete 19 Seattle-Tacoma (SEA) Satellite Transit System People mover (large) Onboard Rubber/concrete 20 Tampa (TPA) n/a People mover (large) Onboard Rubber/concrete 21 Tampa (TPA) Garage Monorail Monorail (small) Onboard Rubber/steel 22 Toronto (YYZ) The LINK People mover (medium) Cable Rubber/steel Note: Table A-4 was updated and edited by Lea+Elliott, Inc. for the purposes of this report. Table A-4. North American airport APM Systems to be surveyed.

89 City System Name Technology Propulsion Wheel/RailInterface 1 Detroit, MI Detroit People Mover Rapid rail (small) LIM Steel/steel 2 Indianapolis, IN Clarian People Mover People mover (small) Onboard Rubber/concrete 3 Jacksonville, FL Skyway Monorail (small) Onboard Rubber/concrete 4 Las Vegas, NV Las Vegas Monorail Monorail (medium) Onboard Rubber/concrete 5 Las Vegas, NV Mandalay Bay–Excalibur Tram People mover (medium) Cable Rubber/steel 6 Miami, FL Metromover People mover (large) Onboard Rubber/concrete 7 Morgantown, WV Morgantown PRT PRT Onboard Rubber/concrete 8 Vancouver, BC SkyTrain Rapid rail (medium) LIM Steel/steel 9 Washington, DC US Senate Subway People mover (small) LIM Polypropylene/steel Table A-5. North American non-airport APM systems to be surveyed. APM System Type Layout Toronto (YYZ) Airport Dual-lane shuttle Detroit (DTW) Airport Single lane bypassing shuttle Detroit, MI Non airport Single-lane loop Chicago (ORD) Airport Pinched loop Newark (EWR) Airport Pinched loop New York City (JFK) Airport Pinched loop (trunk/branches) and single loop Vancouver, BC Non airport Pinched loop (trunk/branches) Seattle-Tacoma (SEA) Airport Single-lane loop (2) and single-lane shuttle Seattle, WA* Non airport Dual-lane shuttle Dallas–Fort Worth (DFW)** Airport Dual-lane loop *The Seattle Center Monorail, although a manually operated people mover, was selected for a site visit since the research team was also visiting the APM at Seattle–Tacoma International Airport. **The Dallas–Fort Worth Skylink APM, although not visited specifically for the purpose of this project, is listed here as a site visit since one of the research team members is very familiar with the system, having worked on the implementation of that APM over a multiyear period. Table A-6. APM systems at which to conduct site visits. taining APM systems. The draft survey instrument was com- pleted in advance of the site visits and contained questions organized in the following five areas: performance measures, data collection, suggestions for improving airport APM per- formance measures, general information, and system and operating characteristics. 4.2.2 Conduct Site Visits Concurrent with the development of the draft survey instrument, the research team scheduled visits at the APM systems listed in Table A-6. While coordinating the schedule, the team corresponded with the hosts to explain the purpose of the visits and the objectives they wished to accomplish while there, which included a system tour and discussion with the hosts about aspects of their system and performance measurement practices. The team conducted its site visits in two trips, as reflected in Tables A-7 and A-8. During the site visits, the research team provided a copy of the team’s scope of work (the ACRP RFP) to the hosts. The team also provided a copy of the draft survey instrument for the purposes of obtaining comments on the instrument from the owner/operator perspective. These comments, in conjunction with the information gleaned from the site locations of the systems in the list to determine the preferred properties for conducting site visits. The general purpose of the site visits was to gain a better understanding of the dif- ferent APM systems, thereby helping to structure the survey instrument. The research team’s goal was to visit, in the fewest number of trips, as many diverse systems as possible within the constraints of the project budget. Table A-6 shows systems that were selected for site visits. 4.2 Survey Instrument Concurrent with identifying sites to be surveyed and visited, the team developed the survey instrument and presented an early form of it to the APM systems where site visits took place. The instrument was then refined based on feedback from those visits and other information observed/obtained as a result of the visits. 4.2.1 Develop Survey Instrument In preparing for the site visits, the research team developed a draft survey instrument based on the project scope of work, the tasks performed on the project up to that point, and its own experience planning, designing, operating, and main-

90 4.3.1 Step 1: Distribute Introductory Letter In the first step of the survey, the research team sent a letter to the chief executive of each system to introduce the research team, the project’s scope of work, and the objectives of the project and survey. The letter informed the recipients that a member of the team would be calling them within the fol- lowing week to inquire as to their willingness to participate in the survey. 4.3.2 Step 2: Determine Willingness to Participate in Survey In the second step of the survey, the research team contacted by telephone the chief executive of each system to discuss the introductory letter and any questions they may have had and determine whether they were willing to complete a survey of their system. For those that agreed to participate, the team obtained the participant’s email address to which the survey could be distributed. The research team tracked the systems that agreed to participate and proceeded with subsequent steps once all responses were obtained. 4.3.3 Step 3: Report to ACRP Panel on Participation Ratio In the third step of the survey, the research team notified the ACRP Senior Program Officer (SPO) and panel of the participation ratio on the survey. Once it was known that at least 50% of the systems were willing to participate in the survey, the team immediately visits themselves, assisted the team in developing the survey instrument. The research team rode the systems and toured the main- tenance shops and control centers during each of the visits. The team was also able to obtain a large amount of information from the owners/operators through their comprehensive presentation of the systems as well as their answers to the many questions asked by the research team members. In some cases, the hosts provided preliminary performance measurement reports and data for the team to take with them. 4.2.3 Finalize Survey Instrument Based on the information collected during the visits and from the host’s comments obtained on the instrument, the research team updated the draft survey at the conclusion of the site visits. It became apparent during development of the draft survey and site visits that a separate survey instrument for airport and non-airport APM systems would be more appropriate since certain questions for an airport APM system may not apply to a non-airport APM system. While the differences between the surveys were not expected to be substantial, the team decided to implement different surveys, which also enabled the team to provide a clearer explanation to the non- airport APM systems as to the reasons for their inclusion in a project about airport APM performance measurement. 4.3 Survey Plan The plan for implementing the survey of airport and non- airport APM systems is described in the following steps. System Location Type Name Date Visited Host* Toronto (YYZ) Airport The LINK 12-03-07, a.m. Mr. M. Riseborough, GTAA Detroit (DTW) Airport Express Tram 12-03-07, p.m. Mr. D. Farmer, NWA Detroit, MI Non-airport Detroit People Mover 12-04-07, a.m. Ms. B. Hansen, DTC Chicago (ORD) Airport Airport Transit System 12-04-07, p.m. Mr. R. Rambhajan, OATS Newark (EWR) Airport AirTrain Newark 12-05-07, p.m. Ms. J. Giobbie, PANY&NJ New York City (JFK) Airport AirTrain JFK 12-06-07, a.m. Mr. H. McCann, PANY&NJ *GTAA = Greater Toronto Airports Authority; NWA = Northwest Airlines; DTC = Detroit Transportation Corporation; OATS = O’Hare Airport Transit System; PANY&NJ = Port Authority of New York & New Jersey Table A-7. Trip #1 site visits. System Location Type Name Date Visited Host* Vancouver, BC Non-airport SkyTrain 12-10-07, a.m. Mr. C. Morris, BC Rapid Transit Seattle–Tacoma (SEA) Airport Satellite Transit System 12-11-07, a.m. Mr. T. O’Day, Port of Seattle Seattle, WA Non-airport Seattle Center Monorail 12-11-07, p.m. Mr. G. Barney, Seattle Monorail Svcs *BC Rapid Transit = British Columbia Rapid Transit Company Table A-8. Trip #2 site visits.

91 4.3.7 Step 7: Survey Follow-Up In the seventh step, the research team contacted by telephone the survey participants that had not yet provided responses to ensure that they had not encountered any difficulties with the survey. The call was made by the end of the third week from the date the survey was distributed. The team also contacted survey participants by telephone and email with any outstanding questions resulting from the responses that were received. These contacts were made as was necessary once responses were received and questions had arisen. Where participants were having problems answering questions, the team assisted by providing clarification(s) and sharing responses from other owners/operators. In cases where there were multiple questions and/or time constraints and a face-to-face meeting was more efficient, the team coordinated with the participant in answering survey questions on site. 4.3.8 Step 8: Report to ACRP Panel on Response Ratio In the eighth step of the survey, the research team notified the SPO and panel of the response ratio on the survey. 4.3.9 Step 9: Compile Data and Clarify Responses In the ninth step of the survey, the research team continued to compile survey responses and other data and contacted participants to clarify responses where necessary. 4.3.10 Step 10: Transmit Thank-You Letters to Respondents In the tenth and final step of the survey, the research team distributed thank-you letters to the participants that provided responses to the survey. notified the SPO and panel that the participation ratio had reached that level. The team then proceeded with Step 4. 4.3.4 Step 4: Distribute Survey In the fourth step of the survey, the research team dis- tributed via email the survey instrument attached to a letter containing a synopsis of the survey categories, instructions on how and where participants could forward their survey responses and other material, the date that responses were desired, and other information relevant to completing the survey. The survey instrument was in an electronic format that the participants could complete on their computers and send back via email attachment or print out and complete by hand. In the survey cover letter, participants were given the option of returning responses and accompanying material via email, upload to an FTP site, or delivery service to the research team at the project’s cost. 4.3.5 Step 5: Verify Receipt of Survey In the fifth step of the survey, the research team contacted the survey participants by telephone approximately 4 days after distribution of the survey to verify that the survey had been received. During the call the team inquired if the recipients had had the opportunity to review the survey, if there were any questions, and if they were still intending to participate. This provided an opportunity to determine the recipients’ reaction to the survey and make adjustments early in the process, if necessary. 4.3.6 Step 6: Receive Survey Responses In the sixth step of the survey, the research team began to receive responses from participants, organize the data, and perform analysis by identifying the similarities and differences in the data.

92 Task 4 of the research required the following work: Conduct the survey developed in Task 3 and compile the survey findings. In addition to compiling the survey results, the researchers should identify similarities and differences in defining performance measures and data among APM systems at airports. For example, APM systems at airports currently define grace periods and downtime in different ways. The following subsections document the results of the work undertaken for Task 4 of the research. 5.1 Survey Implementation The research team surveyed 31 North American APM systems (all 22 airport systems in North America and nine non-airport systems) in accordance with the survey plan described in Section 4. Fourteen of the 22 airport APM systems returned survey responses, and four of the nine non-airport APM systems returned survey responses. This represents participation rates of 64% for airport APM systems, 44% for non-airport APM systems, and an overall participation rate of 58%, exceeding the 50% participation rate desired by the ACRP panel. With 14 airport APM systems and four non-airport APM systems participating, the minimum quantity of APM systems participating also exceeded the panel-recommended numbers of 10 airport APM systems and three non-airport APM systems. In summary, the surveys contained the following questions, organized in the five sections described in the following and provided in Section 6 of this Appendix. 5.1.1 Section 1: General Information 1. What is the name of your APM system? 2. What is the location of your system? 3. Who is the owner of the system? S e c t i o n 5 Survey Implementation and Data Analysis (Task 4) 4. Who is the operator of the system, contracted or otherwise? What was the basis for their selection? 5. Who is the maintainer of the system, contracted or other- wise? What was the basis for their selection? 6. Who is the supplier of the system elements (e.g., the vehicles, automatic train control equipment, guide- way running surfaces)? What was the basis for their selection? 7. What functions at your system are contracted by the owner (and what functions are subcontracted by the contracted system operator or maintainer, if applicable)? 8. What is the number of operations and maintenance per- sonnel required to operate and maintain the system? 9. When did your system first open to the public? 10. Who can we contact with questions about your survey responses? Please provide a name, title, and contact infor- mation (telephone and email address). 5.1.2 Section 2: Performance Measures 1. What performance measure(s) do you use to judge over- all performance of your system? Please describe each measure, including their names, how they are calculated, and their associated definitions, rules, and formulas. Please attach, upload, or send material as necessary to explain this answer. 2. What performance measure(s) do you use for contrac- tual compliance purposes? Please describe each mea- sure, including their names, how they are calculated, and their associated definitions, rules, formulas, and interpretations, including how rigorous the contract is followed and any modifications of the contract that may have been made. Please attach, upload or send material, including any applicable contract sections, to explain this answer.

93 the passenger’s perspective (i.e., what measure is best at representing impacts on your system’s passengers)? 13. How does your system affect overall airport perfor- mance? In response to this question, please consider the following: A. Is your system the only form of transportation from which passengers can choose, or is an alternative form/mode of transportation available while your system is operating (such as walking, automobiles/ taxis, buses)? B. How disruptive to airport performance is it when your system is unavailable? For example, would the airport continue to perform fairly well during a shutdown because passengers can get to areas served by your system by other means (such as walking), or would the loss of your system have a major adverse impact on airport performance (because buses would have to be called in, for example)? Please describe if such an outage affects only part of the airport (e.g., one concourse) or if it affects the entire airport (e.g., all terminals, parking facilities, rental cars). 14. What operating strategies do you employ to improve the performance of your system? 15. What equipment capabilities or configurations that do not exist in your system today would improve its perfor- mance if they were implemented? Please describe how these would improve performance. 5.1.3 Section 3: Data Collection 1. What methods do you use to collect and report data for the performance measures provided previously? Please attach, upload, or send any procedures that may describe collection and reporting of data at your system. 2. Please attach, upload, or append examples of the daily, monthly, and annual data collection forms and reports that you currently use. 3. Please provide quantitative data that describes the per- formance of your system in 2007, using the performance measures you have described previously. Please attach, upload, or append material as necessary to explain this answer. 5.1.4 Section 4: Suggestions for Improving APM Performance Measures 1. Please provide any plans you may have regarding ways to improve your own data collection and performance mea- suring process. Please attach, upload, or append material as necessary to explain this answer. 3. Please describe every instance in which you allow a grace period (e.g., at schedule transitions, during incidents, for late trains), the duration of each grace period, and its effect on calculation of performance measures such as system availability. Please attach, upload, or send material as necessary to explain this answer. 4. Please describe the instances in which you allow credit for partial service, including how it is calculated and its associated definitions, rules, and formulas. Please attach, upload, or send material as necessary to explain this answer. 5. What performance measure(s) do you use to judge sub- system and/or component performance? Please describe each measure, including their names, how they are calcu- lated, and their associated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. 6. What safety-related performance measure(s) do you track? Please attach, upload, or append material as necessary to explain this answer. 7. What security-related performance measure(s) do you track? Please attach, upload, or append material as neces- sary to explain this answer. 8. What performance measure(s) do you use to judge efficiency and/or effectiveness in general, and in par- ticular, the economic efficiency and/or effectiveness at your system? Please describe each measure, including their names, how they are calculated, and their asso- ciated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. 9. What performance measure(s) do you use to judge the passenger experience at your system? Please describe each measure, including their names, how they are calculated, their associated definitions, rules, and formulas, and the frequency of collection from passengers. Please attach, upload, or append material as necessary to explain this answer. 10. What other performance measure(s) is in use at your system that you have not already provided? Please describe each measure, including their names, how they are calculated, and their associated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. 11. What data do you collect (that is not already collected for the measures you have provided previously) that could be used as a performance measure? Please append material as necessary to explain this answer. 12. Which of the measures that you have provided previ- ously best represents your system’s performance from

94 C. Number of vehicle door failures preventing passen- gers from using a vehicle. D. Total number of vehicle door failures. 12. How many stations and platforms does your system have? What is the average dwell time? 13. What type of propulsion system does your system use? A. Nominal operating voltage and current? 14. What type of vehicle conveyance and track running equipment is employed at your system? 15. What locations are monitored by a video surveillance system (closed circuit television equipment), and how is the information managed? 16. How many cars make up the smallest independently operated vehicle that can be operated in your system? For example, a married pair would be considered two cars for the purposes of this survey. 17. What is the configuration of the operating fleet used in typical daily operations? 18. What is the number of peak operating vehicles and trains required for an average operating weekday at your system? 19. What is the total number of vehicles in your fleet? 20. How is the coupling/uncoupling function managed at your system? 21. What is the location of the vehicle maintenance facility at your system? If possible, please provide a drawing, plan, or map of the maintenance facility and yard area, as well as its location in relation to your overall system. 22. How many hours is your system scheduled to operate? How many hours did your system operate in 2007? (If an on-demand system, how many hours per day is the system scheduled to be available for use?) 23. How many on-guideway maintenance and recovery vehicles (MRVs) does your system have? How many incidents have occurred in your system where an in-service vehicle needed to be pushed/pulled by the MRV or another vehicle? 24. What is your system’s average weekday operating schedule? 25. What is the passenger capacity of a vehicle at your sys- tem? Please describe how this is determined (i.e., if it is obtained from the initial vehicle specification or tech- nical proposal, through visual observation or passenger counts during peak periods, or some other method). Please provide as much detail as possible about the capacity of your vehicles. 26. Do passengers use luggage carts (e.g., Smarte Carte) on board your vehicles? If so, does the capacity number pro- vided in question 25 take this into account? [This ques- tion was left blank for the non-airport APM survey.] A. How does the use of luggage carts on your vehicles affect your vehicle capacity? Please quantify this if possible. 2. Please provide any suggestions you may have for improving data collection and performance measurement practices among airport APM systems, including what you see as being the best set of performance measures common to all or most (airport or non-airport) APM systems. Please attach, upload, or append material as necessary to explain this answer. 3. Please provide your thoughts on any measures you consider to be poor measures of (airport or non-airport) APM per- formance and why. 5.1.5 Section 5: System and Operating Characteristics 1. What is the configuration/layout of your system? 2. Does your system have bidirectional functionality (i.e., can a vehicle operate in full automatic mode in both directions)? On the mainline? In the yard/storage areas? 3. Please attach, upload, or send the following information if available: A. System or route map B. Track map 4. What type of passengers does your APM transport? [This question was left blank for the non-airport APM survey.] 5. In what type of environment does your system operate? 6. What areas does your system serve? [This question was left blank for the non-airport APM survey.] 7. What are the operating modes of your system? 8. What is the length of your system’s guideway? (total without maintenance and storage/yard guideway and total of maintenance and storage/yard guideway length) 9. Is it possible at your system for passengers to initiate their own exit from a train and egress onto the guideway or walkway without the assistance of O&M staff or other life safety personnel? If no, why not? 10. How many evacuations has your system experienced? Please count the number of evacuations by incident. For example, if multiple trains are evacuated during the same incident, that would be considered one evacuation. Please indicate the numbers of each of the following for both 2007 and the last 3 years: A. Passenger-initiated evacuations. B. Operator-initiated evacuations. C. Total evacuations. D. Total unsupervised evacuations. 11. Does your system have automatic sliding platform doors at stations? Please indicate the numbers of each of the following for both 2007 and the last 3 years: A. Number of platform door failures preventing passen- gers from using a vehicle. B. Total number of platform door failures.

95 the planning stages reveals that five new airport APM systems in North America will be inaugurated in the next three years (Trans.21, “Current APM Implementations: Fall, 2008.” APM Guide Online). The four participating non-airport APM systems surveyed opened in 1986, 1987, 1997, and 2003. 5.2.2 System and Operating Characteristics Response data from the system and operating character- istics portion of the survey were distilled and are reported on the following pages. As can be seen from the reported data, a majority of the airport APM systems are pinched loop systems, have four or more stations, are less than 3 miles in length, operate outdoors, operate 24 hours per day, operate in a continuous mode, operate on a peak headway of less than 300 sec, transport non-secure passengers, use rubber- tire vehicles running on concrete or steel, and are designed to operate at a propulsion power supply voltage and current of 600VAC. Half of the airport APM survey participants provided ridership data for 2007. Of those seven, six based their passenger counts on airline data, parking data, random sampling, or some other estimate. Only one airport APM system had counts taken from an automatic passenger counting system installed within the APM system. The remaining seven airport APM systems participating in the survey provided no passenger counts. The research indi- cates that passenger counting for airport APM systems is mostly performed manually, when that data is collected. Annual passenger trips (2007) for the seven airport APM systems participating in the survey ranged from about 3,000 to about 15 million. All four of the non-airport APM systems participating in the survey have in place a means to count passengers, with three of them using automatic collection methods. The 2007 annual ridership for these properties ranged from about 620,000 to 2.3 million. In comparison to the characteristics of the airport APM survey data, the non-airport APM survey data for the system and operating characteristics showed that a large majority of those systems have eight or more stations, are greater than 2½ miles in length, operate 17 to 19 hours daily, operate on a peak headway of less than 200 sec, use rubber running tires on concrete, and operate in a continu- ous mode. All of the surveyed non-airport APM systems responded that the systems are outdoors and have average dwell times of 30 sec or less. Figures A-1 through A-8 provide a synthesis of the sur- vey response data received by the research team from both the airport and non-airport APM systems participating in the survey. 27. What is the capacity of your system? 28. How is the system operation and operating schedule managed at your system? 29. What is the total number of vehicle (fleet) miles accrued at your system in 2007? 30. What is the total number of in-service vehicle (fleet) hours accrued at your system in 2007? 31. What is the total number of passenger trips taken on your system in 2007? Please indicate how this number is obtained (e.g., through fare gate information, ticket sales, or other passenger counting systems, or if it is an estimate based on parking data, airline passenger data, or other external data). 32. Are passengers charged a fare to use your system? If yes, please indicate the fare and basis. 5.1.6 Section 6: Cost 1. What are the costs to operate and maintain your system? 5.1.7 Section 7: Other 1. Please provide any additional information about your sys- tem or (airport or non-airport) APM performance measures that might not have been covered by the previous survey questions that you believe could be useful to our research. 5.2 Survey Response Data, Compilation, and Analysis The data received from survey participants were compre- hensive in total. The research team generated a simple cat- egorical overview of responses based largely on the structure of the survey questionnaire, summarized in Section 5.1 and provided in Chapter 6. Since some responses to survey ques- tions are either not complete or are missing, some data are not reported here but otherwise provide the reader with a substantial understanding of airport APM systems and their associated performance measurement systems. Data within the report are referred to by generic APM system IDs rather than by an identifiable APM system name. The treatment of data in this manner (as confidential) was ensured in order to obtain the highest rate of participation in the surveys and the most comprehensive levels of data. 5.2.1 Age of Airport APM Systems Surveyed Of the 14 airport APM system participants responding to the survey, two have been in service since the early 1970s and are the earliest implementations of APM systems in North America. The other systems reflect opening dates in the 1990s and after. A quick scan of projects under construction or in

System Configuration APM Passenger Type APM Environment System Operating Mode 1 Dual-lane shuttle Single-lane shuttle Single-lane bypassing Shuttle Dual-lane shuttle Double Loop Pinched Loop Single-lane shuttle & Single loop Secure Non-Secure Combination Non-secure Outdoors Continuous Single-lane shuttle; 2 single loops Combination Indoors Continuous 2 3 Pinched loop Non-secure Indoors Continuous 4 Pinched loop Non-secure Outdoors Continuous 5 Pinched loop Combination Indoors Continuous 6 Pinched loop Non-secure Outdoors Continuous AIRPORT APM SYSTEM IDs 7 Pinched loop Non-secure Outdoors Continuous Dual-lane shuttle Non-secure Indoors Combined 8 Single-lane bypassing shuttle 9 Secure Indoors Continuous 10 Double loop Non-secure Outdoors Continuous 11 Single-lane by passing shuttle Secure Outdoors Combined 12 Double loop Secure Outdoors Continuous Dual-lane shuttle Non-secure Outdoors Continuous 13 Single-lane shuttle Non-secure Outdoors Continuous 14 Indoors Outdoors Combined Continuous 36% 14% 22% 7% 7% 14% 22% 14% 14% 64% 64% 36% 86% Figure A-1. Airport APM systems characteristics #1.

Guideway Length (miles) and No. of Stations Average Dwell Time (sec) Propulsion System Supply Voltage and Current Vehicle Conveyance and Track Running Equipment 1 1.62 miles 16 stations 37 600VAC 600VAC 480VAC 750VAC Rubber tire on concrete Rubber tire on steel Hovair over concrete Steel wheel on steel rail Rubber tire on concrete; onboard motor(s) 1.87 miles 6 stations 600VAC Rubber tire on concrete; onboard motor(s) 2 0.55 miles 8 stations 42 Data missing Rubber tire 3 480VAC on steel; onboard motor(s) 4 6.30 miles 5 stations 44 750VDC Rubber tire on steel; onboard motor(s) 5 2.17 miles 4 stations 30 600VAC Rubber tire on concrete; onboard motor(s) 6 Data missing 8 stations 45 to 60 600VAC Rubber tire on steel; onboard motor(s) AIRPORT APM SYSTTEM IDs 7 3.36 miles 4 stations Data missing 600VAC Rubber tire on concrete; onboard motor(s) 0.21 miles 2 stations 480VAC Hovair over concrete; w m 8 32 ayside otor(s) H 9 0.72 miles 3 stations 30 600VAC ovair over concrete; wayside motor(s) 10 7.00 miles 9 stations 30 600VAC Rubber tire on concrete; onboard motor(s) 11 0.50 miles 4 stations 28 480VAC Steel wheel on steel rail; wayside motor(s) 12 5.42 miles 10 stations 32 750VDC Rubber tire on concrete; onboard motor(s) 0.92 miles 3 stations 600VAC Rubber tire on steel; wayside motor(s) 13 1.88 miles 2 stations 36 Rubber tire 14 60 600VAC on steel; wayside motor(s) 22% 14% 14% 7% 43% 36% 64% 70 60 50 40 30 20 10 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 0 2 4 6 8 10 12 1 Giveaway Length (miles) No. of Stations 2 3 4 5 6 7 8 9 10 11 12 13 14 Figure A-2. Airport APM systems characteristics #2.

Operating Fleet Configuration Total Fleet Daily Hours of Operation PeakHeadway (sec) 1 16 24 24 hours per day 17 to 19 hours per day data missing 90 Mixed 2 22 24 180 3 Singles 6 24 30 4 Entrained 15 24 310 5 Entrained 31 Data missing 20 6 Singles 18 24 45 to 60 Entrained 24 Data missing 7 Entrained 12 8 4 24 114 9 Entrained 4 17 to 19 210 10 Singles 38 Data missing 150 11 Entrained 4 17 to 19 213 12 Singles 32 24 149 Singles 13 2 24 240 14 Singles 1 17 to 19 670 Entrained Singles Mixed AIRPORT APM SYSTEM IDs 40 30 20 10 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 700 800 600 500 400 300 200 100 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 22% 14% 64% 50% 7% 43% Figure A-3. Airport APM systems characteristics #3.

Vehicle Capacity (no. of passengers per vehicle) Line Capacity (pphpd) Annual Fleet Miles (2007) 1 100 5,600 992,000 2 92 7,452 717,087 3 17 680 146,000 4 57 to 85 2,962 1,152,913 7,000 1,633,415 5 96 Data missing 6 78 1,560 Data missing Data missing 7 100 8 60 to 97 7,578 93,888 9 218 3,737 301,148 10 40 3,600 1,500,000 11 47 to 67 3,177 98,184 69 to 99 6,624 3,031,245 185,310 13 150 2,250 14 100 537 28,828 AIRPORT APM SYSTEM IDs 12 7,000 8,000 6,000 5,000 4,000 3,000 2,000 1,000 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 2,000,000 1,500,000 1,000,000 500,000 - 1 2 3 4 5 6 7 8 9 10 11 12 13 14 0 20 40 60 80 100 120 140 160 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Figure A-4. Airport APM systems characteristics #4.

100 System Configuration Total Fleet 40 35 30 25 20 15 10 5 0 1 2 3 4 APM Environment System Operating Mode 1 Double loop Dual-lane shuttle Double Loop Pinched Loop Single Loop Indoors Outdoors Combined Continuous 29 Outdoors Continuous 2 Single loop 12 Outdoors Continuous Pinched loop 5 Outdoors Continuous Dual-lane Outdoors Combined 4 shuttle 2 NON-AIRPORT APM SYSTEM IDs 3 Figure A-5. Non-airport APM systems characteristics #1. 5.2.3 O&M Cost O&M costs were reported by eight of the 14 airport APM survey participants and ranged from $1.02M to $15.39M annu- ally. Although the survey requested specific breakdowns for cost data, many properties reported rolled-up costs. Table A-9 shows the annual O&M costs reported by the airport APM survey par- ticipants: O&M costs were reported by two of the four non-airport APM survey participants. Annual O&M costs for the two non-airport APM systems that did not report such data were obtained from the National Transit Database. Table A-10 shows the annual O&M costs for the non-airport APM sur- vey participants: 5.2.4 Performance Measures Survey response data collected from the performance measures section of the surveys are provided in Tables A-11 and A-12 for airport and non-airport APM survey partici- pants, respectively, and categorized as follows: • Overall performance measures (including contractual per- formance measures), • Safety and security performance measures, • Efficiency/effectiveness performance measures, • Passenger experience performance measures, • Other performance measures, • Grace periods, • Partial service, • Other data collected as performance measures, and • Best performance measures from the passenger’s per- spective. The data show that availability is overwhelmingly the most common performance measure used to judge system perfor- mance, being used by 13 of the 14 airport APM systems and by all four of the non-airport APM systems. The data also reflect that grace periods and partial service credit are used in determining availability for almost all of the airport APM systems but are much less prevalent for the non-airport APM systems.

Guideway Length (miles) and No. of Stations Average Dwell Time (sec) Propulsion System Supply Voltage and Current Vehicle Conveyance and Track Running Equipment 1 9.03 miles 21 stations 20 600VAC Rubber tireRubber tire on concrete Steel wheel on steel rail on concrete onboard motor(s) 2 3.40 miles 13 stations 30 600VAC Steel wheel on steel rail; linear induction motor(s) 2.80 miles 8 stations 14 480VAC Rubber tire on concrete; linear induction motor(s) 1.40 miles 3 stations 480VAC Rubber tire on concrete; onboard 4 30 motor(s) NON-AIRPORT APM SYSTEM IDs 3 25 20 15 10 5 0 1 35 30 25 20 15 10 5 0 1 2 3 4 600VAC 480VAC Series 2Series 1 2 3 4 Figure A-6. Non-airport APM systems characteristics #2. Operating Fleet Configuration Daily Hours of Operation Peak Headway ( sec) 1 Mixed 17 to 19 180 2 Entrained 17 to 19 150 Singles 17 to 19 1 3 90 4 Singles 24 450 NON-AIRPORT APM SYSTEM IDs Entrained Singles 24 hours per day 17 to 19 hours per day 500 450 400 350 300 250 200 150 100 50 0 1 2 3 4 Mixed Figure A-7. Non-airport APM systems characteristics #3.

102 Vehic Line C Annu le Capacity (no apacity (pphp al Fleet Miles (2 . of passenger d) 007) s per vehicle) 1 100 Data missing 934,906 NON-AIRPOR 2 78 4,200 692,309 T APM SYSTE 3 56 500 252,94 M IDs 4 8 1,2 4 63, 1 96 931 Figure A-8. Non-airport APM systems characteristics #4. Airport APM System Annual O&M Cost Remarks FY 07/08 Budgeted FY 06/07 Actual A $4.4M — power not included B $10.93M $10.18M C $8.5M $8.0M D $2.23M $2.17M E $1.05M $1.02M F $1.81M $1.49M power not included G $2.01M $1.88M power not included H $15.39M $13.95M includes all O&M costs Table A-9. Airport APM systems annual O&M costs. Non-Airport APM System Annual O&M Cost Remarks FY07/08 Budgeted FY06/07 Actual A — $4.61M includes all O&M costs B — $21M includes all O&M costs C $19.04M $21.29M includes all O&M costs D — $0.97M power not included Table A-10. Non-airport APM systems annual O&M costs.

AIRPORT APM SYSTEM IDs 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Overall performance measures System availability Availability; days without service interruption System availability System availability System availability System availability; DT events System availability; DT events Reliability; maintainability; availability System availability; DT events System availability; DT events System availability; DT events Reliability; maintainability; availability Reliability; maintainability; availability Safety and security performance measures — Occurrences tracked (safety) and logged (security) — Employee and passenger injuries Occurrences logged OSHA occurrences tracked; Security occurrences logged Fires and injuries tracked Emergency stops logged Occurrences logged Occurrences reported Emergency stops logged Safety occurrences monitored Occurrences reported Occurrences reported Efficiency/effectiveness performance measures Cost/pax trip KWH per vehicle mile Cost/pax trip System availability — — Operating hours per vehicle; riders per month Operating miles/ lane/headways; operating hours — — Operating miles/ lane/headways; operating hours Pax/car-hour; cost/pax; cost/car-mile Maintainability Cost per pax trip Passenger experience performance measures Cleanliness; friendliness of staff; ride quality and comfort; complaints; responsiveness to complaints; convenience Availability Cleanliness; friendliness of staff; ride quality and comfort; complaints; responsiveness to complaints; convenience — Comments cards received from passengers Station and train announcements Cleanliness; friendliness of staff; responsiveness to events; station and train announcements; complaints Complaint cards received from passengers Comments and feedback from passengers and employees passenger surveys; complaints; observations; audits Complaint cards received from passengers Wait time; trip time; in-service inspections — — Other performance measures DT events (when exceeded, affect availability) Occurrences of interruptions and incidents DT events (when exceeded, affect availability) Ridership — System availability; DT events; QA audits Maintenance completion; staff training Round trip time; station availability — PM schedule inspection adherence Round trip time; station availability Cost/car-hour; car-hour per employee; op. hours/$1 cost PM/CM completions; breakdowns PM/CM completions; breakdowns Grace periods Yes, for stoppages equal to or less than 1 min and 10 alarms or less of 1 min in duration or less — Yes, extensive and specific grace periods allowed relative to delays and downtime events Yes, for late trains up to 21 sec and for schedule transitions up to 15 min Yes, for service interruptions up to 3 min Yes, for delays up to 5 min; for delays and failures up to 1 min (fleet and platform doors) Yes, no more than 2 min during schedule transitions Yes, for operational transitions up to 2 min plus one headway Yes, for downtime up to one round trip time Yes, up to 1 headway (mode); up to 2X headway (fleet); up to 1 min (platform) Yes, for operational transitions up to 2 min plus one headway Yes, up to 1 headway (mode); up to 2X headway (fleet); up to 1 min (platform) Yes, for first 10 unrelated malfunctions, up to 1 headway; up to one service day for one door blockage Yes, for first 10 unrelated malfunctions, up to 1 headway Partial service credit Yes, for vehicle HVAC unit(s) OTS; car LED graphics OTS; doors blocked; and 1-car of a 2- car train blocked — Yes, for scheduled train(s) OTS, station(s) OTS, and vehicle HVAC unit(s) OTS Yes, if actual capacity >½ the scheduled capacity and the actual headways <2 X scheduled headway — Yes, K factors applied depending on actual mode operated vs. mode scheduled Yes, K factor if time allotted for schedule transition is exceeded Yes, K factors applied depending on actual mode operated vs. mode scheduled Yes, for liquidated damages; monetary penalty factored according to failure type Yes, K factors applied when exceeding schedule transition time; and for lesser mode operated Yes, K factors applied depending on actual mode operated vs. mode scheduled Yes, K factors applied when exceeding schedule transition time; and for lesser mode operated Yes, proportional to capacity provided vs. scheduled capacity Yes, proportional to capacity provided vs. scheduled capacity Other data collected as performance measures — Car mileage — — — — — Average headways/miles Vehicle trips; mileage — Average headways/miles PM schedule tracking Stoppages and DT Shutdowns; ridership Best measures from passenger’s perspective Convenience Availability Convenience System Availability Comments card DT events and durations System availability DT events Maintainability System availability DT events DT events Availability Shutdowns *Pax = passengers, DT = downtime, QA = quality assurance, KWH = kilowatt hours, K = partial service factor, PM/CM = preventive maintenance/corrective maintenance, OTS = out of service Table A-11. Airport APM systems performance measures.*

104 types of safety and security incidents that can be associated with public transit in urban environments. In terms of economic efficiency and/or effectiveness measures, four of the 14 airport APM system survey partici- pants reported tracking an economic measure of this type. The limited use of this type of measure could be explained by the distributed nature of O&M cost responsibility for airport APM systems. Often a contractor is responsible for one function or aspect of the APM system (operations and/ or maintenance of systems and/or facilities), whereas the owner or other contractors are responsible for other func- tions or aspects. Similarly, traction power and other APM utilities costs can often be rolled up in other owner utili- ties costs, making it more difficult to ascertain the costs allocated to the APM. Collecting and allocating airport APM-only O&M costs can therefore become somewhat challenging. For the airport APM system survey participants, safety and security performance measures exist in large part via occur- rences logged and reported but not tracked. This is different for the non-airport APM system participants that are FTA- mandated reporters. They log, report, and track all safety and security incidents as required for the National Transit Data- base. The differences in the approaches can be explained not only by the mandated reporting aspect for FTA properties, but also by the frequency and nature of the safety and security incidents that occur in airport versus non-airport APM sys- tems. For the airport APM system survey participants, safety incidents are generally limited to slips, falls, and/or other minor injuries that are reported to and handled by the own- er’s risk management department. Security incidents are also somewhat minor, being limited to crimes such as vandalism, which are reported to and handled by the local airport or city police. This is contrasted with the more frequent and diverse NON-AIRPORT APM SYSTEM IDs 1 2 3 4 Overall performance measures System availability; reliability (time and distance); peak vehicles System availability Train availability; schedule adherence System availability Safety and security performance measures Occurrences tracked Occurrences tracked Efficiency/effectiveness performance measures Cost/mile; cost/hour; cost/trip; cost/pax mile; etc. Operating hours per employee Passenger experience performance measures Complaints per month Number of complaints Cleanliness; elevator and escalator in service; and complaints Other performance measures Subsystem failures graphed; scheduled vs. unscheduled maintenance; inactive vehicles/ scheduled peak vehicles Grace periods Yes, train may be late for up to one min Yes, delays up to 5 min Partial service credit Credit given for what is operated Other data collected as performance measures Daily ridership data System shutdowns and service interruptions Best measures from passenger's perspective Fleet reliability and peak vehicles Service interruptions Table A-12. Non-airport APM systems performance measures.

105 formance measures and data collection in that effort. The suggestions included: • Use nonproprietary systems for data collection; • Standardize training and certification of personnel work- ing in airport APM systems; • Improve availability calculation since it is too vague; • Improve communication with passengers; • Improve definitions of “fully functioning vehicle” and when a door is “available”; • Improve the user interface for the proprietary reporting tool; • Improve fault tracking database; • Incorporate automatic passenger counting systems into stations during initial construction of the system; • Ensure automatic passenger counting is reported directly to control center; • Use on-time performance to measure overall performance and reserve availability to refer to the fleet; • Have all airport APM systems use service availability and the same formula [MTBF/(MTBF+MTTR)]; • Use fleet, mode, and station door availability, as well as duration and number of downtime events to measure performance; • Use a train control software package in the industry that creates performance data automatically. Passenger-experience performance measures appeared to be in large part tracked according to passenger complaints and/or comments for both the airport APM and non-airport APM survey participants. 5.2.5 Data Collection The collection of data for airport APM system survey participants is largely accomplished automatically using data received via supervisory control and data acquisition (SCADA) and control center computer system equipment. Manual collection of data is typically involved when passen- ger surveys and counts are required or when data not collected automatically by the system, such as narrative descriptions of incidents, need to be entered for reporting purposes. The reported quantitative availability performance data for the airport APM survey participants are reported in Figure A-9. The same data for the non-airport APM survey participants were provided by two of the four participants and reported at 92% and 99%. 5.2.6 Suggestions for Improving Airport APM Performance Measures Both airport and non-airport APM survey participants provided suggestions on how to improve airport APM per- 97.50% 1 2 3 4 5 6 Airport APM System ID 7 8 9 10 11 12 13 14 98.00% 98.50% 99.00% 99.50% 100.00% 100.50% Figure A-9. Airport APM systems availability values.

106 Survey Section 1: General Information Question 1.1 What is the name of your APM system? Question 1.2 What is the location of your system? Airport Name: City, State/Province: Question 1.3 Who is the owner of the system? Owner: Address: City, State/Province, Zip: Contact Name: Office Phone: email: Question 1.4 Who is the operator of the system, contracted or otherwise? What was the basis for their selection? Operator: Address: City, State/Province, Zip: Contact Name: Office Phone: email: Basis for Selection: S e c t i o n 6 Airport APM Survey

107 Question 1.5 Who is the maintainer of the system, contracted or otherwise? What was the basis for their selection? Maintainer: Address: City, State/Province, Zip: Contact Name: Office Phone: email: Basis for Selection: Question 1.6 Who is the supplier of the system elements (e.g., the vehicles, automatic train control equipment, guideway running surfaces)? What was the basis for their selection? Subsystem Supplier 1. Vehicle: _____________________ 2. Automatic Train Control System: _____________________ 3. Power Distribution System: _____________________ 4. Guideway Running/Guidance Eqp.: _____________________ Basis for Selection: Question 1.7 What functions at your system are contracted by the owner (and what functions are subcontracted by the contracted system operator or maintainer, if applicable)? Examples of contracted or subcontracted functions include the following: Operations Vehicle maintenance Wayside maintenance All maintenance Engineering Vehicle cleaning Security Station attendants Facilities maintenance Other

108 Question 1.8 What is the number of operations and maintenance personnel required to operate and maintain your system? Bargaining Unit Non-Bargaining Unit Personnel (FTEs) Personnel (FTEs) 1. Management: ______________ ______________ 2. Administration: ______________ ______________ 3. Ops—Control Center: ______________ ______________ 4. Ops—Service Agts: ______________ ______________ 5. Ops—Other: ______________ ______________ 6. Maint.—Vehicle: ______________ ______________ 7. Maint.—Track: ______________ ______________ 8. Maint.—ATC/Power: ______________ ______________ 9. Maint.—Facilities/Plant: ______________ ______________ 10. Warehouse/Stores: ______________ ______________ 11. Engineering: ______________ ______________ 12. Other: ______________ ______________ 13. ______________ ______________ 14. ______________ ______________ 15. ______________ ______________ Note: FTE = full-time equivalent Question 1.9 When did your system first open to the public? Question 1.10 Who can we contact with questions about your survey responses? Please provide a name, title, telephone number, and email address.

109 Survey Section 2: Performance Measures Question 2.1 What performance measure(s) do you use to judge overall performance of your system? Please describe each measure, including their names, how they are calculated, and their associated definitions, rules, and formulas. Please attach, upload, or send material as necessary to explain this answer. Examples of these measures include but are not limited to: System reliability Travel time System maintainability Wait time System availability Trip time System dependability Round trip time Operational headway Line capacity Platform headway Punctuality Station dwell time Missed stations Missed trips On-time performance Question 2.2 What performance measure(s) do you use for contractual compliance purposes? Please describe each measure, including their names, how they are calculated, and their associated definitions, rules, formulas, and interpretations, including how rigorous the contract is followed and any modifications of the contract that may have been made. Please attach, upload, or send material, including any applicable contract sections, to explain this answer.

110 Question 2.3 Please describe every instance in which you allow a grace period (e.g., at schedule transitions, during incidents, for late trains), the duration of each grace period, and its effect on calculation of performance measures such as system availability. Please attach, upload, or send material as necessary to explain this answer. A grace period is generally described as the period of time when the system can be late or even unavailable to passengers and not be taken into account in the calculation of a particular system performance measure, such as system dependability, as long as the system is restored to its normal operating configuration and headway within that grace period. Question 2.4 Please describe the instances in which you allow credit for partial service, including how it is cal- culated and its associated definitions, rules, and formulas. Please attach, upload, or send material as necessary to explain this answer. Partial service credit is generally described as the credit given (usually in terms of the amount of time) to system unavailability when only partial or alternate service is provided during periods of downtime.

111 Question 2.5 What performance measure(s) do you use to judge subsystem and/or component performance? Please describe each measure, including their names, how they are calculated, and their associ- ated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. Examples of these measures include but are not limited to: Subsystem and/or component reliability [mean time, distance, or cycles between failure (MTBF)] Fleet reliability; Platform door reliability; Switch reliability; Elevator, escalator, moving walkway reliability; Automatic Train Control system reliability. Subsystem and/or component maintainability [mean time to repair or restore (MTTR)] Fleet maintainability; Platform door reliability; Power distribution equipment maintainability; Elevator, escalator, moving walkway maintainability; Automatic Train Control system maintainability. Subsystem and/or component availability (MTBF/MTBF + MTTR) Fleet availability; Platform door availability; Central control computer system availability; Elevator, escalator, moving walkway availability; Automatic train control system availability. Question 2.6 What safety-related performance measure(s) do you track? Please attach, upload, or append material as necessary to explain this answer. Examples of these measures include but are not limited to: Occurrences and/or rates of: Collisions Emergency brakings Derailments Fatalities Fires Suicides Injuries Train evacuations Slips/trips/falls Supervised Caught in platform/vehicle doors Unsupervised

112 Question 2.7 What security-related performance measure(s) do you track? Please attach, upload, or append material as necessary to explain this answer. Examples of these measures include but are not limited to: Occurrences and/or rates of: Fare evasion (as applicable) Assault (aggravated and other) Trespassing Vandalism (including graffiti) Bomb threats Civil disturbances Robbery Larceny/theft Burglary Question 2.8 What performance measure(s) do you use to judge efficiency and/or effectiveness in general, and in particular, the economic efficiency and/or effectiveness at your system? Please describe each measure, including their names, how they are calculated, and their associated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. Examples of these measures include but are not limited to: Passenger trips per vehicle service mile Passenger trips per vehicle service hour Cost per vehicle service mile Cost per vehicle service hour Cost per train service mile Cost per passenger mile Cost per passenger trip Operating hours per vehicle Operating hours per employee Maintainability (mean time to repair or restore)

113 Question 2.9 What performance measure(s) do you use to judge the passenger experience at your system? Please describe each measure, including their names, how they are calculated, their associated definitions, rules, and formulas, and the frequency of collection from passengers. Please attach, upload, or append material as necessary to explain this answer. Examples of these measures include but are not limited to the following, and could be solicited in part via a passenger satisfaction survey: Cleanliness Friendliness of staff Ride quality/comfort Complaints Responsiveness to complaints Convenience Wait time Ease of wayfinding Trip time Elevator, escalator, moving walkway performance Question 2.10 What other performance measure(s) is in use at your system that you have not already provided? Please describe each measure, including their names, how they are calculated, and their associ- ated definitions, rules, and formulas. Please attach, upload, or append material as necessary to explain this answer. Question 2.11 What data do you collect (that is not already collected for the measures you have provided pre- viously) that could be used as a performance measure? Please append material as necessary to explain this answer. Examples of this data include but are not limited to: System shutdowns Vehicle station stopping accuracy Peak period vehicles or trains Trips completed

114 Question 2.12 Which of the measures that you have provided previously best represents your system’s perfor- mance from the passenger’s perspective (i.e., what measure is best at representing impacts on your system’s passengers)? Question 2.13 How does your system affect overall airport performance? In response to this question, please consider the following: A. Is your system the only form of transportation from which passengers can choose, or is an alternative form/mode of transportation available while your system is operating (such as walking, automobiles/taxis, buses)? B. How disruptive to airport performance is it when your system is unavailable? For example, would the airport continue to perform fairly well during a shutdown because passengers can get to areas served by your system by other means (such as walking), or would the loss of your system have a major adverse impact on airport performance (because buses would have to be called in, for example)? Please describe if such an outage affects only part of the airport (e.g., one concourse) or if it affects the entire airport (e.g., all terminals, parking facilities, rental cars). Question 2.14 What operating strategies do you employ to improve the performance of your system? Examples of these strategies include but are not limited to: Operating entrained vehicles as opposed to independent vehicles Operating in a pinched loop instead of on a shuttle or vice versa Placing agents or attendants on board trains or at stations

115 Question 2.15 What equipment capabilities or configurations that do not exist in your system today would improve its performance if they were implemented? Please describe how these would improve performance. Examples of these capabilities or configurations include but are not limited to: Entrained vehicle operability Pinched loop or shuttle configuration More switches Additional platforms at current stations Platform screen doors Redundant onboard control units Platooning Stub-end shop or storage tracks versus flow-through tracks Survey Section 3: Data Collection Question 3.1 What methods do you use to collect and report data for the performance measures provided previously? Please attach, upload, or send any procedures that may describe collection and reporting of data at your system. Examples of these methods include but are not limited to: Automatic collection by passenger counting systems Automatic collection and reporting by the central control computer system Manual collection by hand-filled passenger surveys Manual collection by staff surveys/observations (owner or operator/maintainer) Question 3.2 Please attach, upload, or append examples of the daily, monthly, and annual data collection forms and reports that you currently use.

116 Question 3.3 Please provide quantitative data that describes the performance of your system in 2007, using the performance measures you have described previously. Please attach, upload, or append material as necessary to explain this answer. Survey Section 4: Suggestions for Improving Airport APM Performance Measures Question 4.1 Please provide any plans you may have regarding ways to improve your own data collection and performance measuring processes. Please attach, upload, or append material as necessary to explain this answer. Question 4.2 Please provide any suggestions you may have for improving data collection and performance measurement practices among airport APM systems, including what you see as being the best set of performance measures common to all or most airport APM systems. Please attach, upload, or append material as necessary to explain this answer. Question 4.3 Please provide your thoughts on any measures you consider to be poor measures of airport APM performance and why.

117 Section 5: System and Operating Characteristics Some questions in this section may be more easily answered using pictures and/or diagrams. Please attach, upload, or send this information to expand upon your system’s characteristics as necessary. Question 5.1 What is the configuration/layout of your system? A. Single-lane shuttle B. Single-lane bypassing shuttle C. Dual-lane shuttle D. Single loop E. Double loop F. Pinched loop G. Multiple pinched loop (trunk/branch arrangement) H. Other (please describe) Question 5.2 Does your system have bidirectional functionality (i.e., can a vehicle operate in full automatic mode in both directions)? A. On the mainline: YES NO B. In the yard/storage areas: YES NO Question 5.3 Please attach, upload, or send the following information if available: A. System or route map (An example could be the map provided to the traveling public in a pamphlet or at an online website.) B. Track map [An example could be the map on the train location screen in the control center, which reflects, in part, track layout and switch locations. (Hand sketches are perfectly acceptable.)] Question 5.4 What type of passengers does your APM transport? A. Secure B. Unsecure C. Sterile D. Combination (please describe)

118 Question 5.5 In what type of environment does your system operate? Please indicate all that apply. A. The system operates primarily or exclusively in an enclosed environment such as a building or tunnel. B. The system operates primarily or exclusively outdoors. C. The system is exposed to snow and ice during the winter months. D. The system is exposed to 100°(F) days (or higher) during the summer months. Question 5.6 What areas does your system serve? A. Airline terminal(s) and/or concourse(s) B. Rental car facilities C. Public and/or employee parking D. Public transit E. Multimodal center F. Other (please describe) Question 5.7 What are the operating modes of your system? A. Continuous B. On demand C. Combined D. Other (please describe) Question 5.8 What is the length of your system’s guideway? Please state in single-lane units (feet or meters). A. Total guideway length without maintenance and storage/yard guideway B. Total maintenance and storage/yard guideway length Question 5.9 Is it possible at your system for passengers to initiate their own exit from a train and egress onto the guideway or walkway without the assistance of O&M staff or other life safety personnel? If no, why not?

119 Question 5.10 How many evacuations has your system experienced? Please count the number of evacuations by incident. For example, if multiple trains are evacuated during the same incident, that would be considered one evacuation. In 2007 In the Last 3 Years Passenger-initiated evacuations Operator-initiated evacuations Total evacuations Total unsupervised evacuations Question 5.11 Does your system have automatic sliding platform doors at stations? A. YES B. NO In 2007 In the Last 3 Years Number of platform door failures preventing passengers from using a vehicle Total number of platform door failures Number of vehicle door failures preventing passengers from using a vehicle Total number of vehicle door failures Question 5.12 How many stations and platforms does your system have? What is the average dwell time? A. Number of side platform stations: _______________ B. Number of center platform stations: _______________ C. Number of double platform stations: _______________ (platforms on both sides of a vehicle) D. Number of triple platform stations: _______________ E. Other (please describe) _______________ F. Total number of stations: _______________ G. Total number of platforms: _______________ H. What is the average dwell time: _______________

120 Question 5.13 What type of propulsion system does your system use? Nominal operating voltage: _______________ Current: AC or DC A. Onboard propulsion—traction motors (please indicate motor type) AC or DC B. Wayside propulsion—cable (please indicate grip type) Fixed grip or Detachable grip C. Linear induction motors D. Other (please describe) Question 5.14 What type of vehicle conveyance and track running equipment is employed at your system? A. Steel wheel on steel rail B. Rubber tire on concrete running surface C. Rubber tire on steel running surface D. Other (please describe) Question 5.15 What locations are monitored by a video surveillance system (closed circuit television equip- ment), and how is the information managed? Please check the boxes that apply. Real-Time Monitoring Capability Record/Playback Capability Maintenance/control facility Yard area Station platforms Escalators/elevators Vehicle interiors Question 5.16 How many cars make up the smallest independently operated vehicle that can be operated in your system? A married pair would be considered two cars for the purposes of this survey. A. One B. Two C. Other (please describe)

121 Question 5.17 What is the configuration of the operating fleet used in typical daily operations? A. Independently operated vehicles (including independently operated married pairs) B. Entrained vehicles (including entrained married pairs) C. Mixed (independently operated vehicles and entrained vehicles) D. Other (please describe) Question 5.18 What is the number of peak operating vehicles and trains required for an average operating weekday at your system? A. Vehicles: _______________ B. Trains: _______________ Question 5.19 What is the total number of vehicles in your fleet? A. Active vehicles in fleet: _______________ B. Inactive* vehicles in fleet: _______________ (*not usable for revenue service for reasons other than typical maintenance) C. Total vehicles in fleet: _______________ Question 5.20 How is the coupling/uncoupling function managed at your system? Please provide additional information if these choices do not fully describe this function at your system. A. Vehicle coupling and uncoupling are fully automated B. Vehicle coupling is fully automated but uncoupling is performed manually C. Vehicle uncoupling is fully automated but coupling is performed manually D. Vehicle coupling and uncoupling are performed manually E. Vehicle coupling and uncoupling are not performed Question 5.21 What is the location of the vehicle maintenance facility at your system? If possible, please provide a drawing, plan, or map of the maintenance facility and yard area and its location in relation to your overall system. A. Our vehicle maintenance facility is located online (i.e., at a station platform). B. Our vehicle maintenance facility is located offline. C. Other (please describe). Question 5.22 How many hours per day is your system scheduled to operate? How many hours did your system operate in 2007? (If an on-demand system, how many hours per day is the system scheduled to be available for use?) A. Scheduled average weekday operating hours: _______________ B. Scheduled average weekend day operating hours: _______________ C. Scheduled annual operating hours for 2007: _______________ D. Total hours actually operated in 2007: _______________

122 Question 5.23 How many on-guideway maintenance and recovery vehicles does your system have? How many incidents have occurred in your system where an in-service vehicle needed to be pushed/pulled by the MRV or another vehicle? A. Number of MRVs: ______________ B. In-service vehicle push/pull incidents: In 2007 In the Last 3 Years ______________ ______________ Question 5.24 What is your system’s average weekday operating schedule? Please complete the following table and attach, upload, or send material as necessary to explain this answer. ROUTE 1 Route Round Trip Time: Period Begin End No. of Vehicles No. of Trains Hdway (sec) System Config* Mode** ROUTE 2 Route Round Trip Time: Period Begin End No. of Vehicles No. of Trains Hdway (sec) System Config* Mode** ROUTE 3 Route Round Trip Time: Period Begin End No. of Vehicles No. of Trains Hdway (sec) System Config* Mode** *Pinched loop, shuttle, single loop, etc. **Continuous, on demand, etc.

123 Question 5.25 What is the passenger capacity of a vehicle in your system? Please describe how this is determined (i.e., if it is obtained from the initial vehicle specification or technical proposal, through visual observation or passenger counts during peak periods, or by some other method). Please provide as much detail as possible about the capacity of your vehicles. Question 5.26 Do passengers use luggage carts (e.g., Smarte Carte) on board your vehicles? If so, does the capac- ity number provided in Question 5.25 take this into account? A. Do passengers use luggage carts (e.g., Smarte Carte) on board your vehicles? YES NO (if yes, please answer B and C) B. Is the use of onboard luggage carts taken into account for the answer YES NO provided to Question 5.25? C. How does the use of luggage carts on your vehicles affect your vehicle capacity? Please quantify this if possible. Question 5.27 What is the capacity of your system? Scheduled Capacity During Peak Period* Ultimate Capacity** Route 1: _________________ _________________ Route 2: _________________ _________________ Route 3: _________________ _________________ Route 4: _________________ _________________ *Scheduled capacity during peak period can be calculated, for the purpose of this survey, as (No. of passengers that one vehicle can accommodate) × (No. of vehicles per train during the peak period) × (60/peak period directional headway in minutes). **Ultimate capacity can be calculated, for the purpose of this survey, using the same formula, but under the following assumption: operating maximum-consist trains at the shortest headway without trains stopping outside stations, regardless of whether there are enough vehicles in the fleet to support such an operation. Question 5.28 How is the system operation and operating schedule managed at your system? A. The system is controlled based on a departure (trip) schedule, and our timetable is presented to the public as a headway schedule. B. The system is controlled based on a headway schedule, and our timetable is presented to the public as a headway schedule. C. Other (please describe)

124 Question 5.29 What is the total number of vehicle (fleet) miles accrued at your system in 2007? A. In-service vehicle miles accrued in 2007: _______________ B. Other vehicle miles accrued in 2007: _______________ C. Total vehicle miles accrued in 2007: _______________ D. Average miles per vehicle accrued in 2007: _______________ Question 5.30 What is the total number of in-service vehicle (fleet) hours accrued at your system in 2007? A. In-service vehicle hours accrued in 2007: _______________ B. Other vehicle hours accrued in 2007: _______________ C. Total vehicle hours accrued in 2007: _______________ D. Average hours per vehicle accrued in 2007: _______________ Question 5.31 What is the total number of passenger trips taken on your system in 2007? Please indicate how this number is obtained (e.g., through fare gate information, ticket sales, or other passenger counting systems, or if it is an estimate based on parking data, airline passenger data, or other external data). Question 5.32 Are passengers charged a fare to use your system? If yes, please indicate the fare and basis. A. YES Amount: _______________ per _______________ B. NO C. Other (please describe)

125 Survey Section 6: Cost Question 6.1 What are the costs to operate and maintain your system? FY 2007–2008 FY 2006–2007 (budgeted USD) (USD) Labor Management: _____________ ____________ Administration: _____________ ____________ Operations: _____________ ____________ Maintenance—vehicles: _____________ ____________ Maintenance—ATC/communication/power/track: _____________ ____________ Maintenance—facilities/plant: _____________ ____________ Materials: _____________ ____________ Utilities Propulsion power: _____________ ____________ All other: _____________ ____________ Services/contracts Supplier technical support: _____________ ____________ Security: _____________ ____________ All other: _____________ ____________ Profit and G&A: _____________ ____________ Total: _____________ ____________ Electrical cost per KWH: _____________ ____________ Survey Section 7: Other Question 7.1 Please provide any additional information about your system or airport APM performance measures that has not been covered by the previous survey questions and that you believe could be useful to our research.

Guidebook for Measuring Performance of Automated People Mover Systems at Airports Get This Book
×
MyNAP members save 10% online.
Login or Register to save!
Download Free PDF

TRB’s Airport Cooperative Research Program (ACRP) Report 37A: Guidebook for Measuring Performance of Automated People Mover Systems at Airports is designed to help measure the performance of automated people mover (APM) systems at airports.

The guidebook identifies, defines, and demonstrates application of a broad range of performance measures encompassing service availability, safety, operations and maintenance expense, capacity utilization, user satisfaction, and reliability.

The project that developed ACRP Report 37A developed the set of forms below that are designed to help periodically compile the necessary data for input into the overall performance measurement process.

Form A: System and Service Descriptive Characteristics

Form B: Airport APM Performance Measures Page 1 of 2

Form B: Airport APM Performance Measures Page 2 of 2

Passenger Satisfaction Survey

The project also developed an interactive Excel model containing spreadsheets that can be used to help track and calculate system-wide performance and service characteristics.

The set of forms and Excel model are only available electronically.

ACRP Report 37A is a companion to ACRP Report 37: Guidebook for Planning and Implementing Automated People Mover Systems at Airports, which includes guidance for planning and developing APM systems at airports.

In June 2012, TRB released ACRP Report 67: Airport Passenger Conveyance Systems Planning Guidebook that offers guidance on the planning and implementation of passenger conveyance systems at airports.

Disclaimer: The software linked to from this page is offered as is, without warranty or promise of support of any kind either expressed or implied. Under no circumstance will the National Academy of Sciences or the Transportation Research Board (collectively “TRB") be liable for any loss or damage caused by the installation or operation of this product. TRB makes no representation or warranty of any kind, expressed or implied, in fact or in law, including without limitation, the warranty of merchantability or the warranty of fitness for a particular purpose, and shall not in any case be liable for any consequential or special damages.

  1. ×

    Welcome to OpenBook!

    You're looking at OpenBook, NAP.edu's online reading room since 1999. Based on feedback from you, our users, we've made some improvements that make it easier than ever to read thousands of publications on our website.

    Do you want to take a quick tour of the OpenBook's features?

    No Thanks Take a Tour »
  2. ×

    Show this book's table of contents, where you can jump to any chapter by name.

    « Back Next »
  3. ×

    ...or use these buttons to go back to the previous chapter or skip to the next one.

    « Back Next »
  4. ×

    Jump up to the previous page or down to the next one. Also, you can type in a page number and press Enter to go directly to that page in the book.

    « Back Next »
  5. ×

    To search the entire text of this book, type in your search term here and press Enter.

    « Back Next »
  6. ×

    Share a link to this book page on your preferred social network or via email.

    « Back Next »
  7. ×

    View our suggested citation for this chapter.

    « Back Next »
  8. ×

    Ready to take your reading offline? Click here to buy this book in print or download it as a free PDF, if available.

    « Back Next »
Stay Connected!