service problem on customer satisfaction by the percentage of customers experiencing the service problem. The resulting score gives the expected change in the customer satisfaction index for the operator.

The steps to developing an impact score system are as follows:

1. Identify the attributes with the most impact on overall customer satisfaction. Compute gap scores.
2. Identify the percent of customers who experienced the problem.
3. Create a composite index by multiplying the gap score by the incidence rate. The result is the attribute impact score.

Example: The overall satisfaction rating for attribute 1 is 6.5 for those experiencing the problem in the past 30 days and 8.5 for those with no problem in the past 30 days. The gap score is 2.0 (8.5 - 6.5). If 50% of customers report having the problem, then the composite impact score is 2.0 * 50%, or 1.00.

A Guidebook for Developing a Transit Performance Measurement System

Although focused on implementing and applying a transit performance-measurement program, the Guidebook for Developing a Transit Performance Measurement System provides useful information on more than 400 transit performance measures (including some for which levels of service have been developed) and on various means of measuring transit performance. The processes of developing customer satisfaction surveys and passenger environment surveys (a "secret shopper" approach to evaluating comfort-and-convenience factors) are summarized. Performance measures discussed in the guidebook cover the passenger, agency, community, and driver/vehicle points of view. Twelve case studies are presented in the guidebook on how agencies measure performance; 18 additional case studies are presented in a background document provided on an accompanying CD-ROM.

Application of Transit QOS Measures in Florida

Perk and Foreman (2001) evaluated the process and results of the first year's application of the quality of service measures contained in the TCQSM by 17 metropolitan planning organizations (MPOs) in the state of Florida. Each MPO evaluated transit LOS in terms of service coverage, service frequency, hours of service, transit travel time versus auto travel time, passenger loading, and reliability.

The evaluation procedure balanced comprehensiveness (covering as much of the area as possible) with cost. Service coverage and transit-auto travel time were evaluated for the system. For the remaining measures, 6 to 10 major activity centers within the region were selected, resulting in 30 to 90 combinations of trips between activity centers. Service frequency and hours of service were evaluated for all origin-destination (O-D) combinations. Passenger load and on-time performance data were collected for the 15 O-D combinations that had the highest volumes (total of all modes), as determined from the local transportation planning model. Transit travel times, hours of service, and frequencies were obtained from local transit schedules. The travel demand between centers was obtained from the local travel model. Field measurements were required to obtain reliability data and passenger loading data.

The authors point out that there were issues with the selection of major activity centers, including a general bias toward selecting for analysis those activity centers with the best existing transit service. The authors also found that the activity center selection method resulted in work ends of trips being over-represented and home ends of trips being under-represented.

There were also various issues with the difficulty and cost of data collection (e.g., the validity of mixing field data on passenger loads and transit travel times with model estimates of travel times and demand for the computation of some of the level of service measures). Training to improve consistency and reduce wasted effort was also necessary. There was a strong concern about the costs of collecting and processing the data without receiving additional state funding to cover those costs. MPO-estimated costs ranged from "negligible" to $50,000, with most in the $4,000-$5,000 range. The $50,000 cost reflects an MPO that waited until the last minute to start the work and ended up contracting the work out.

3.3 Bicyclist Perceptions of LOS

Researchers have used various methods to measure bicyclist satisfaction with the street environment. Methods have included field surveys (e.g., having volunteers ride a designated course), video laboratories, and web-based stated preference surveys. One researcher intercepted bicycle riders in the middle of their trip in the field. Petritsch et al. compared video lab ratings with field ratings of segment LOS and found they were similar. Some researchers have asked bicyclists which factors are most important to the perception of quality of service. Other
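The gap-score and impact-score calculation described in the impact-score steps above can be sketched in a few lines of Python. This is a minimal illustration only; the function and variable names are ours, not from the report.

```python
def attribute_impact_score(rating_no_problem, rating_with_problem, incidence_rate):
    """Composite impact score for one service attribute.

    Gap score = mean satisfaction of customers who did not experience
    the problem minus mean satisfaction of those who did.
    Impact score = gap score x share of customers reporting the problem.
    """
    gap = rating_no_problem - rating_with_problem
    return gap * incidence_rate

# Worked example from the text: ratings of 8.5 (no problem in past
# 30 days) and 6.5 (problem in past 30 days), with 50% of customers
# reporting the problem: gap = 2.0, impact score = 2.0 * 0.50 = 1.00.
score = attribute_impact_score(8.5, 6.5, 0.50)
print(score)  # 1.0
```

Attributes can then be ranked by this score, so that an attribute with a large satisfaction gap but low incidence does not crowd out a moderate gap that affects many more customers.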