CHAPTER 2

Literature Review Summary

Review Approach

A review was performed to identify materials published in the last decade describing (1) approaches for measuring transit quality of service (QoS), (2) examples in which the relationship between condition and quality of service has been quantified, and (3) data required to support model development. The review was supplemented with targeted searches for data on key model parameters. As a first step, the research team reviewed the following resources that synthesize much of the available literature in these areas:

• NCHRP Report 616: Multimodal Level of Service Analysis for Urban Streets (Dowling et al. 2008) reviews literature related to transit QoS calculation and presents a QoS calculation approach.
• TCRP Report 165: Transit Capacity and Quality of Service Manual (Kittelson & Associates, Inc., et al. 2013) defines transit quality of service and related concepts and describes specific numerical approaches developed for calculating capacity and quality of service. This report incorporates the QoS calculation approach detailed in Dowling et al. (2008).
• TCRP Synthesis 92: Transit Asset Condition Reporting (McCollom and Berrang 2011) summarizes the state of the practice as of 2011 with respect to measuring asset condition.
• TCRP Report 157: State of Good Repair: Prioritizing the Rehabilitation and Replacement of Existing Capital Assets and Evaluating the Implications for Transit (Spy Pond Partners et al. 2012) details literature pertaining to transit state of good repair (SGR), including models and approaches for predicting and prioritizing SGR investment needs. Appendix E of this report details models for predicting vehicle failures as a function of accumulated mileage and describes models for other assets that are time-based or condition-based and use defaults derived from the FTA's Transit Economic Requirements Model (TERM).
• TCRP Report 172: Guidance for Developing the State of Good Repair Prioritization Framework and Tools: Research Report (Robert et al. 2014a) updates the review in TCRP Report 157 (Spy Pond Partners et al. 2012).
• Rail Guideway Performance and Facility Condition Measures: Review Report (Spy Pond Partners et al. 2015) reviews approaches for measuring facility condition and guideway performance restrictions; it was prepared as part of a project for FTA to recommend new SGR-related measures.
• The FTA Asset Management Guide (Parsons Brinckerhoff 2012) includes a set of case studies summarizing current transit asset management practices. This report includes a supplement describing lifecycle management practices, condition assessment/performance monitoring approaches, and relevant industry standards by asset class.

The research team supplemented these materials with review of resources found through literature searches and web searches. In performing literature searches in WorldCat and the TRB
Transportation Research Database (TRID), the research team focused on materials published in the last decade related to "transit quality of service" and on work linking asset condition to quality of service, supplementing the recent reviews of materials related to transit asset condition and SGR analysis (Spy Pond Partners et al. 2012; Robert et al. 2014b; McCollom and Berrang 2011; Spy Pond Partners et al. 2015).

The review uncovered papers, reports, and data sources relevant to the research, including multiple reviews of approaches to calculating transit quality of service published since completion of TCRP Report 165, as well as information on data from individual transit agencies that may support analysis. Relevant findings from the review are discussed in the following sections.

Review Findings

The following subsections discuss literature review findings, grouped into the following categories:

• Measuring Transit Service Quality
• Approaches and Parameters for Relating Asset Condition to Service Quality
• Data Needed for Relating Asset Condition and Service Quality

Measuring Transit Service Quality

Several different approaches have been developed for describing transit service quality. Approaches reviewed include:

• TCRP Report 165: Transit Capacity and Quality of Service Manual (TCQSM) (KAI et al. 2013), which in turn references TCRP Report 88 (KAI et al. 2003) for information on performance measurement and the multimodal level of service model detailed in Dowling et al. (2008). Specifically, Section 5 of TCRP Report 165 lists service quality measures for two categories of service quality: (1) availability and (2) comfort and convenience.
• European standards developed by the European Committee for Standardization (CEN, one of three standards organizations recognized by the European Union) (CEN 2002).
The Organisation for Economic Co-operation and Development (OECD) report Measuring and Valuing Convenience and Service Quality (Anderson et al. 2013) describes the eight aspects of transit service quality defined in this standard.
• TCRP Report 47: Handbook for Measuring Customer Satisfaction and Service Quality (Morpace International, Inc., and Cambridge Systematics, Inc., 1999) describes how transit agencies can measure customer satisfaction and relate it to service quality. It defines ten dimensions of service quality using the framework developed by Parasuraman, Zeithaml, and Berry (1985).
• The Victoria Transport Policy Institute (VTPI) report Evaluating Public Transit Benefits and Costs (Litman 2015) defines several aspects of service quality. In general, the factors presented in this reference are similar to those in the U.S. and European examples described above. However, the framework presented in this document is specific to transit and clearly delineates the different aspects of transit service quality.
• The New Zealand Transport Agency (NZTA) Economic Evaluation Manual (EEM) (2016) details processes for evaluating road- and transit-related options for investing in transport infrastructure. Specific sections address investment in improvements to transit infrastructure and services, whether new services or upgrades to existing services. The manual includes standard models for benefit calculations to allow consistent analysis and comparison of different investment projects across the country.
• Transport for London (TfL)/London Underground (LU) has developed a set of models for predicting a journey time metric (JTM), as well as for predicting Lost Customer Hours (LCH).
This approach is described in Spy Pond Partners et al. (2015) and Anderson et al. (2013) and detailed further in an internal LU document (TfL 2014).
• All of the service quality approaches reviewed, with the exception of the LU JTM, include narrative descriptions of basic attributes of service quality. The specific attributes listed vary among the approaches and have been integrated into the service quality framework presented in the next chapter. Attributes include measures directly related to the time required by passengers to reach their destinations (such as service frequency and reliability) and other factors related to passenger perceptions of service (such as comfort and convenience). The LU JTM differs from the other approaches reviewed in that it is a metric that incorporates many of the attributes of service quality but is not accompanied by a narrative description of service quality attributes. Figure 2-1, adapted from data shown in the LU JTM, illustrates typical journey time components as percentages of total journey time.

Two of the resources reviewed (TCRP Report 165 and NZTA's EEM) include formal approaches for quantifying service quality. The multimodal level of service (LOS) model presented in TCRP Report 165 does not vary by asset condition, but it shows how to combine various aspects of service quality into a single metric, summarizing overall LOS with an A through F grade.

Approaches and Parameters for Relating Asset Condition to Service Quality

The literature review did not yield a general model for relating asset condition to service quality as contemplated by the current research effort. However, it did yield examples of quantitative approaches that are directly relevant to the research.
Notable recent efforts relevant to modeling the relationship between asset condition and service quality include the following:

• Section 4.4 and Appendix 18 of the EEM (NZTA 2016) detail how to calculate a range of user benefits resulting from a transportation project, including benefits related to reliability, service frequency, infrastructure, vehicles, and other factors.
• The model in TCRP Report 165 (KAI et al. 2013), based on earlier work described in NCHRP Report 616 (Dowling et al. 2008), adjusts the value of time to account for crowding.
• Reddy et al. (2014) describe the development of a set of key performance indicators (KPIs) for Metropolitan Transportation Authority New York City Transit (MTA-NYCT). They document the weight placed on the KPIs for factors such as delay and discuss how the results of a market research study were incorporated into KPI development.

[Figure 2-1. Sample journey time components using the LU JTM: In-Vehicle Time, 49%; Wait Time, 19%; Fare Collection, 2%; Station Conveyance, 28%; Closures, 2%.]
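The JTM's approach of weighting journey time components by perception factors can be illustrated with a short calculation. The sketch below is not LU's actual implementation; it simply applies perception weights of the kind discussed later in this chapter (e.g., 2.0 for wait time, per TfL 2014) to a hypothetical set of journey components.

```python
# Illustrative sketch of a perceived journey time calculation in the
# spirit of the LU Journey Time Metric (JTM). Each component of actual
# journey time is weighted by a perception factor (TfL 2014 applies 2.0
# to wait time, for example). Component minutes here are hypothetical.

# component name -> (minutes, perception weight)
components = {
    "in_vehicle": (24.5, 1.0),  # in-vehicle time (baseline weight)
    "wait":       (9.5, 2.0),   # wait time is perceived as twice as long
    "walk":       (3.0, 2.0),   # walking within the station
    "escalator":  (1.0, 1.5),   # riding an escalator
}

actual = sum(minutes for minutes, _ in components.values())
perceived = sum(minutes * weight for minutes, weight in components.values())

print(f"Actual journey time:    {actual:.1f} min")    # 38.0 min
print(f"Perceived journey time: {perceived:.1f} min")  # 51.0 min
```

The gap between actual and perceived time is what makes a metric of this kind sensitive to asset condition: failures that convert in-vehicle time into wait time increase perceived journey time even when total elapsed time is unchanged.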
• Paterson and Vautin (2015) describe an effort performed for the Metropolitan Transportation Commission (MTC) to evaluate the user benefits of transit SGR investments in the San Francisco Bay Area. They describe the use of models detailed in TCRP Report 157 (Spy Pond Partners et al. 2012) to predict the additional user delay that would result from failing to maintain the assets of Bay Area transit systems in SGR. The predicted increase in delay per passenger trip is fed into MTC's regional travel demand model, and the changes in travel times and ridership are estimated. These model results are used to estimate the benefit/cost ratio for investments to achieve SGR (Paterson and Vautin 2015).
• Recent research performed by the Volpe National Transportation Systems Center (VNTSC) for FTA develops relationships that show the consequences of asset maintenance levels for transit service quality (VNTSC 2015). The amount of maintenance carried out on assets is linked to their condition; if insufficient maintenance is carried out, asset condition typically will decline more quickly. The work also explores the relationship between asset performance and the customer experience.

The following additional resources detail specific parameters that may be relevant for modeling transit service quality.

• Asset Condition-Related Parameters. The Transit Asset Prioritization Tool (TAPT), a spreadsheet model for prioritizing SGR investments based on the models from TCRP Report 157 (Spy Pond Partners et al. 2012; Robert et al. 2014b), includes default rates for growth in vehicle failures as a function of mileage derived from National Transit Database (NTD) data, as well as default asset deterioration rates derived from the FTA TERM.
These models provide defaults for predicting changes in failure rates as a function of asset mileage (for vehicles) or age (for vehicles or fixed assets). Further, FTA recommends default values for vehicle useful life (FTA 2017). These values are intended for use in establishing the percentage of vehicles exceeding an agency's useful life benchmark (ULB), a measure U.S. transit agencies are required to include in their Transit Asset Management Plans. Melo et al. (2011) detail research on the causes of delay on urban rail systems, analyzing data from 42 metro lines in 15 systems. They report that a 10% increase in line age is predicted to increase the frequency of delay incidents by approximately 2.6%.
• Customer Perceptions of Asset Condition. The NZTA EEM (2016) includes valuations, in terms of in-vehicle time (IVT), for factors including cleanliness, availability of information, type of seating, comfort, stop/shelter condition and amenities, and station amenities. In NZTA Report 565, Douglas (2016) reviews 13 studies in which time-based or willingness-to-pay values are established for qualitative aspects of bus or rail service. Most of these are stated preference studies in which passengers are asked to value various types of service improvements. Some of those reviewed are revealed preference studies in which a value is estimated based on changes in ridership or revenue following a service improvement. The author finds that the valuation for improved vehicles, translated into an equivalent time per trip, ranges from 2.4 minutes to 32 minutes. Two notable studies included in the review are a study of willingness-to-pay for improved buses in Wellington (Steer Davies Gleave 1991) and a study by Wardman and Whelan (2001) reviewing approaches to valuing improved rail cars from the customer's perspective.
The Wellington study reports a willingness-to-pay value equivalent to 3 minutes per trip for a new bus relative to an old bus, which equates to a 20% increase assuming a typical trip duration of 15 minutes. Wardman and Whelan conclude, based on a revealed preference study of British rail passengers, that a transition to new rail cars should be valued at 1 to 2% of IVT, a much lower value than that suggested by the Wellington study and others in the Douglas review.
• Valuing Components of Journey Time. Litman (2015) recommends default values for adjusting the value of time to account for excess wait time, congestion, and other factors. The
LU JTM (TfL 2014) includes adjustments to journey time to account for customer perceptions. A factor of 2.0 is applied to wait time (in other words, the value of time is doubled for wait time). Separate factors are established for walking (2.0), walking upstairs (4.0), riding an elevator or escalator (1.5), and other journey components. Several reports describe using the standard deviation of travel time to characterize buffer time or travel time reliability and recommend adjustment factors to apply in valuing this time. TRL Report 593 (Balcombe 2004) recommends valuing the standard deviation of travel time in the same manner as wait time or IVT to quantify reliability. NCHRP Report 431 (Small et al. 1999) uses the results of a stated preference survey and an extensive literature review to recommend multiplying the standard deviation of travel time by a "reliability ratio" of 1.3 to account for traveler perceptions of reliability.

Data Needed for Relating Asset Condition and Service Quality

This section summarizes the data required for relating condition and service quality, using the following broad categories of data sources:

• Asset Inventory and Condition: basic identifying information on a transit agency's assets, including their extent and age. May include observations of the physical condition of assets, typically established through visual inspections.
• Maintenance Data: details on maintenance activities performed, often including information on asset failures and failure causes. May be stored in a transit agency's enterprise asset management (EAM) system or in another format.
• Operations Data: details concerning vehicle mileage, on-time performance, incidents, and ridership. May include data from automated fare collection (AFC) systems, automatic passenger counters (APCs), and automatic vehicle location (AVL) devices.
• Customer Service Data: customer satisfaction surveys, market research, and/or complaint data.
For each of these categories, the following paragraphs discuss current practice and available data established through the review, followed by a summary of issues and implications for the research.

Asset Inventory and Condition Data

U.S. transit agencies must keep basic inventory data on their transit assets to support NTD reporting, as well as other reporting requirements. Regarding NTD requirements, in the past, transit agencies reported inventory data by vehicle subfleet for revenue vehicles (e.g., number of vehicles in the subfleet, vehicle type, age, and mileage), but relatively little asset data was required for NTD reporting for non-vehicle assets and non-revenue vehicles. Moving forward, transit agencies will have to collect and report more detailed asset inventory data with the implementation of the new NTD Asset Inventory Module. These requirements are detailed in the FTA's Asset Inventory Module Reporting Manual (2017). Under the new requirements, larger agencies report data for non-revenue vehicles similar to that previously required for revenue vehicles, additional data on facilities (such as square footage and replacement cost by facility), and additional data on guideway (such as the distribution of guideway by decade of construction).

Although the NTD requirements set basic inventory reporting requirements (and determine what data are consistently available at a national level), most transit agencies maintain more detailed inventory data than required for NTD reporting. Many have supplemented the data required at a subfleet or facility level with significant additional detail about major systems or components within a vehicle or facility. Where more detailed inventory data are maintained, these data often are managed using an agency's EAM system, discussed further in the next section.
The situation is somewhat different with respect to data on asset condition. At present, little condition-related data is reported to the NTD, and practices vary widely among agencies concerning condition assessment approaches. For revenue vehicles, it is common practice to use vehicle age, mileage, or some combination of the two as a proxy for asset condition. Both age and mileage are reported to the NTD, as is a count of major mechanical failures that can be used to calculate mean distance between failures (MDBF). Some transit agencies, such as the Denver Regional Transportation District (RTD), have established condition assessment programs for characterizing vehicle condition by major vehicle component, using the 5-point scale established in the FTA TERM (BAH 2010) to assess conditions. However, formal condition assessment programs for revenue vehicles (conducted in addition to routine preventive maintenance inspections) appear to be the exception rather than the rule.

The results of the recent review performed for FTA (Spy Pond Partners et al. 2015) are applicable regarding condition assessment approaches for non-vehicle assets. The review concluded that several U.S. transit agencies have implemented approaches based on use of the TERM condition scale for inspection of passenger facilities, maintenance facilities, and/or administrative facilities. However, condition assessment approaches are inconsistent in their details among transit agencies, even in cases where the same scale is used. For other fixed assets, such as structures, tunnels, and other forms of guideway, various approaches are used for condition assessment, and frequently the result of a condition assessment is a set of defects to be addressed rather than an overall condition rating.
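The MDBF calculation mentioned above is straightforward. As a minimal sketch (with hypothetical figures, since reported mileage and failure counts vary by agency and subfleet):

```python
# Minimal sketch of the mean distance between failures (MDBF)
# calculation: total vehicle miles operated divided by the count of
# major mechanical failures, both of which are reported to the NTD.
# The figures below are hypothetical.

def mdbf(total_vehicle_miles: float, major_failures: int) -> float:
    """Mean distance between failures, in miles per failure."""
    if major_failures == 0:
        return float("inf")  # no failures observed in the period
    return total_vehicle_miles / major_failures

# A hypothetical bus subfleet: 1.2 million annual miles, 150 failures.
print(f"MDBF: {mdbf(1_200_000, 150):,.0f} miles")  # MDBF: 8,000 miles
```

Because the NTD failure count includes only major mechanical failures, an MDBF computed this way overstates reliability relative to one computed from a more complete failure log such as an EAM system.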
A notable recent development regarding transit asset condition data is that FTA's recently adopted transit asset management rule (49 CFR Part 625) requires transit agencies to periodically assess the condition of their assets. The rule also requires agencies to report conditions and set targets for the following measures:

• For vehicles, the percentage of vehicles exceeding their useful life benchmark (ULB);
• For guideway, the percentage of directional route mileage operating under a performance restriction; and
• For facilities, the percentage of facilities rated below 3.0 on the 5-point TERM scale.

The recent review performed for FTA (Spy Pond Partners et al. 2015) examined current practice with respect to collecting condition data and tracking performance restrictions for fixed guideway. Implications of the review with respect to asset inventory and condition data include the following:

• Condition data are typically not available for vehicles or guideway. Ideally, any models developed for these assets should rely on readily available data on asset age or mileage (in the case of vehicles) and should not require supplemental visual inspection data.
• Models developed for facilities should use the 5-point TERM scale, but care should be exercised in using existing transit agency data, given the variation in condition assessment practices among agencies.
• FTA's existing models for predicting condition on the 5-point TERM scale based on age offer a set of defaults that can be used for the research effort as needed.

Maintenance Data

Maintenance data include data on the maintenance activities performed on an asset, as well as material and labor costs. Many, but by no means all, U.S. transit agencies have implemented or are implementing EAM systems that track maintenance activity and that can be used to manage an asset inventory.
EAM systems are most commonly used for fleet management but increasingly are being enhanced to support maintenance and management of fixed assets. The FTA Asset Management Guide (Parsons Brinckerhoff 2012) and the recent review of condition measures for FTA (Spy Pond Partners et al. 2015) discuss current practices regarding EAM systems. Basic issues to consider in using EAM data are as follows:
• Where available, EAM data provide a valuable resource for linking maintenance activities to specific assets and viewing trends in maintenance needs as an asset ages. When an EAM system is used to track the major mechanical failures reported to the NTD, it will typically record additional data on failures of various types and causes, including those reported to the NTD as well as other, more frequently occurring failures that do not result in the vehicle missing a trip (and thus are not reported to the NTD) but that may nonetheless result in passenger delay.
• The representation of assets in an EAM system is often more detailed than that used for condition assessment, extending beyond the level of a vehicle or facility to a component or subcomponent level (and in some cases further). As documented by Spy Pond Partners et al. (2015), some transit agencies have implemented their condition assessment approach outside of their EAM system for this reason, among others.
• Practices for classifying assets and maintenance activities vary significantly among different EAM system implementation efforts. It is therefore unlikely that generic models can be developed to rely directly on EAM system data.

Operations Data

The operations-related data relevant to the research include overall statistics on vehicle mileage, records of vehicle on-time performance, data on vehicle incidents, and ridership estimates. The NTD contains summary operations data by mode that can be used for the research. However, it will likely be necessary to analyze more detailed data by line or route to try to relate performance to specific assets. In a recent paper, Hendren et al. (2015) discuss sources of operations data and challenges in using these data.
The paper profiles the systems for tracking performance used by the Washington Metropolitan Area Transit Authority (WMATA) and describes an approach to using the available data to compute a customer-focused measure of system reliability. The paper concludes that there is a disconnect between vehicle-focused measures (such as train delay and headway adherence) and what customers ultimately care about (how long it will take to get to their destination). It recommends an alternative measure that is customer focused and incorporates consideration of reliability: the percentage of customers on a route whose travel time exceeds a specified threshold.

The review suggests the following regarding the availability of operations-related data for the research:

• All transit agencies have some form of operations data. They use these data for various purposes, including calculating on-time performance and ridership and supporting NTD reporting.
• Sources of operations data are generally not well integrated with asset inventory and condition data. For instance, it may be readily feasible to determine the on-time performance of a given bus route, but another matter entirely to determine how specific vehicles performed on the route over time.
• Of particular value for the research are data on the extent and sources of delay. However, practices vary considerably among transit agencies regarding how, and how much, data they collect in this regard. Figure 2-2 is an example from the literature showing the availability of incident data for a cross section of 22 European rail agencies. In this context, data were classified as "available" if they were captured by the agency, regardless of whether the data were made public. As shown in the figure, all of the agencies tracked numbers of incidents and could calculate measures such as MDBF, but few could quantify the service effects of a failure, and only one could calculate the resulting passenger delay.
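The customer-focused reliability measure recommended by Hendren et al. (2015) can be sketched as follows. The function and trip times below are hypothetical illustrations, not WMATA's actual implementation.

```python
# Rough sketch of the customer-focused reliability measure described by
# Hendren et al. (2015): the percentage of customers on a route whose
# travel time exceeds a specified threshold. The function name and the
# trip times below are hypothetical.

def pct_over_threshold(trip_times_min, threshold_min):
    """Share of customer trips exceeding the travel time threshold (%)."""
    over = sum(1 for t in trip_times_min if t > threshold_min)
    return 100.0 * over / len(trip_times_min)

# Hypothetical door-to-door trip times (minutes) on one route, e.g.,
# reconstructed from automated fare collection (AFC) entry/exit records.
trips = [22, 25, 24, 31, 27, 45, 23, 26, 38, 24]
print(f"{pct_over_threshold(trips, 30):.0f}% of trips over 30 minutes")  # 30%
```

A measure of this form requires trip-level travel time data (such as AFC entry/exit records), which is one reason the integration of operations data with asset data matters for the research.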
Customer Service Data

Many transit agencies have established programs for measuring customer satisfaction and/or tracking customer-reported issues. TCRP Report 47 details how to measure customer satisfaction and service quality (Morpace International, Inc., and Cambridge Systematics, Inc., 1999).
This handbook describes determinants of service quality, provides a set of 48 different quality measures, and details how to compile survey results. Although approaches for measuring customer satisfaction vary among transit agencies, many appear to have patterned their measurement programs on this report or other similar guidance. The customer service information of greatest value for the research includes customer perceptions of system appearance and customer perceptions concerning delays.

A basic challenge in using customer satisfaction data is that such data tend not to be localized. Thus, it is not feasible to correlate customer perceptions with specific lines or groups of assets, only with a transit agency's overall system. Also, in some cases, the questions relating to perceptions of asset condition may conflate multiple factors, such as cleanliness, availability of desired amenities, and perceptions of condition. Inconsistent customer satisfaction data underscore the need for time series data to view trends over time. However, even with extensive historical data, questions about overall customer perceptions of a transit system are simply too broad to attempt to link to asset condition. On the other hand, complaint data often can be tied to a specific time and location.

An example of a comprehensive approach to customer service measurement is that of Bay Area Rapid Transit (BART). Every 2 years since 1996, BART has conducted an extensive customer satisfaction survey. Table 2-1, prepared using data from the 2014 report, shows the mean scores given by BART customers to various aspects of BART's service on a 7-point scale. Items are shown in decreasing order of customer satisfaction.
Several of the attributes listed in the table could relate to asset condition, including those related to reliability (e.g., of elevators and escalators), condition, and on-time performance (to the extent this may be compromised by asset failures). However, even with so many attributes listed, in many cases it may be a challenge to decouple perceptions of condition from general cleanliness or other factors.

[Figure 2-2. Percentage of selected European rail transit agencies with incident data, by data type: Incidents; MDBF; Incident Cause; Duration of Degraded Service; Duration of Stopped Service; Total Impact on Trains; Total Impact on Passengers.]
Service Aspect  Mean Customer Satisfaction
Clipper cards  5.80
Availability of maps and schedules  5.71
BART tickets  5.50
On-time performance  5.46
Timeliness of connections between trains  5.36
BART website  5.30
Timely information about service disruptions  5.26
Reliability of ticket vending machines  5.17
Train interior kept free of graffiti  5.17
Access for people with disabilities  5.13
Reliability of fare gates  5.12
Frequency of train service  5.11
Signs with transfer, platform and/or exit directions  5.06
Length of lines at exit gates  5.04
Availability of bicycle parking  5.01
Hours of operation  4.98
Lighting in parking lots  4.94
Timeliness of connections with buses  4.85
Comfort of seats on trains  4.84
Helpfulness and courtesy of station agents  4.79
Stations kept free of graffiti  4.76
Availability of station agents  4.73
Availability of standing room on trains  4.61
Appearance of train exterior  4.59
Elevator availability and reliability  4.58
Escalator availability and reliability  4.58
Overall station condition  4.57
Personal security in BART system  4.49
Enforcement against fare evasion  4.47
Appearance of landscaping  4.42
Comfortable temperature aboard trains  4.41
Availability of car parking  4.41
Leadership solving regional transportation problems  4.35
Condition/cleanliness of windows on trains  4.32
Train interior cleanliness  4.28
Clarity of P.A. announcements  4.21
Presence of BART Police in stations  4.19
Availability of seats on trains  4.18
Station cleanliness  4.11
Noise levels on trains  4.08
Condition/cleanliness of seats on train  4.07
Availability of space for luggage, bicycles, etc.  4.06
Condition/cleanliness of floors on trains  4.05
Enforcement of no eating and drinking policy  4.05
Presence of BART police in parking lots  3.95
Elevator cleanliness  3.88
Presence of BART police on trains  3.65
Restroom cleanliness  3.52

Table 2-1. Mean customer service scores in BART's 2014 satisfaction survey (BART and Corey, Canapary & Galanis Research 2014).