allow greater leveraging of ITS data. Developing an enterprise data strategy requires resources, planning, time, and expertise on the front end, but it will result in overall savings over time by providing a sustainable system that allows more people to access better data without forcing practitioners to become expert users or hiring more technology staff to support daily efforts.

Ideally, the departments within a transit organization would work with their IT department to enlist the expertise needed to construct an enterprise data strategy. The overall goal in presenting an overview of enterprise data management has been to sketch a set of general guidelines and to outline the tasks and issues involved. End user departments will need to assist in specifying the relevant data and uses for that data. Typically, success requires a joint effort of users and IT professionals.

ITS Data Validation

To ensure accuracy and integrity, ITS data recovered from on-board systems must be validated before being forwarded to the enterprise data system. An essential task in this process involves matching vehicles' AVL data records to their schedules and to the base map of stops and time points associated with assigned work. Selected event data recorded at locations other than those represented in the base map must also be assigned to base map locations. Examples of such events include instances where a vehicle drops off or picks up passengers between scheduled stops, records for temporary re-routes, the record of a vehicle's maximum speed between stops, or a non-stop event record reported by a mobile data terminal or electronic registering farebox. Data are also screened to identify extreme values, which may indicate a malfunctioning unit. Records with extreme values are retained but flagged to indicate that the data are suspect and may be unusable.

Validation of passenger loads and boarding and alighting count data from APCs is especially important, given the need for accuracy of these data in meeting external and internal reporting needs. For passenger loads, a procedure must be in place to "zero out" the totals at defined service junctures, ranging from the conclusion of a vehicle's daily assigned work to the completion of each trip. A routine must then be implemented to proportionately adjust boarding and alighting data to be consistent with the passenger load zeroing adjustment (Furth et al. 2006).

It is worthwhile to verify the accuracy of the actual boarding and alighting counts recorded by APCs, both in the system acceptance process and periodically thereafter. Some properties test accuracy by comparing APC counts against those recorded by ride checkers. The maintained assumption that ride checker counts are error-free is a strong one, however. An alternative to reliance on ride checker counts was used by Kimpel et al. (2003), who compared boarding and alighting counts recorded by TriMet's APCs against counts recorded by on-board cameras directed at vehicles' doors.

Given verification of the accuracy of APCs, this system can then be used to assess the accuracy of other on-board systems. For example, the CTA intends to check transaction summaries recorded by its smart card, magnetic card, and electronic registering fareboxes against corresponding APC counts. Operator-keyed data from electronic registering fareboxes and mobile data terminals are probably subject to the greatest amount of error among on-board systems in terms of documenting the actual incidence of represented events. The transit literature does not report attempts to validate event data, although analysts at the case study properties noted that they were wary of interpretations where event data are treated as "ground truth."

There are several possible ways of assessing the validity of operator-keyed event data. One would be to assign "silent shoppers" to a sample of trips to record selected events and then compare the manually recorded data against the operator-keyed data. Another approach that would likely improve validity would be to develop a data screening process to identify instances where multiple specific events are keyed at the same time or location, recognizing that in some cases (e.g., fare evasions) multiple events can be valid. However, the operator training process probably offers the best opportunity for ensuring the validity of event data. In this setting, it can be emphasized that recording events is not simply a part of the job; rather, it provides information that can often be used to improve the conditions of operators' assignments, including their safety and security.

In the customer service area, ITS data can be applied beyond its common use in validating customer complaints. For example, actual arrival time data from AVL can be compared to arrival times predicted by real-time arrival software to assess the accuracy and reliability of the predictions (Crout 2007). Such analysis can also contribute to determining how far in advance arrival time predictions can be made accurately and reliably.

Reporting and Analysis Tools

Reporting and analysis tools drawing on archived ITS data can be grouped into three categories: those developed by the ITS vendor, those developed in-house, and those developed by a third-party software vendor.

ITS vendor-developed reporting software is available for ticket vending machines, electronic registering fareboxes, and AVL and APC systems. The software for ticket vending machines and fareboxes reports transactions data, as described in Chapter 2 of the Guidebook. Farebox software can also report operator-keyed events for systems that include this functionality.
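The load-zeroing and proportional count adjustment described above can be sketched as follows. This is a minimal illustration of the general approach attributed to Furth et al. (2006), not an agency's actual routine; the function name and the choice to split the imbalance evenly between boardings and alightings are assumptions for the sketch.

```python
def balance_trip_counts(ons, offs):
    """Proportionately adjust APC boarding (ons) and alighting (offs)
    counts for one trip so the running passenger load returns to zero
    at the end of the trip.

    Illustrative sketch only: splits the on/off imbalance evenly, which
    is one plausible reading of the proportional adjustment described
    in the text.
    """
    total_ons, total_offs = sum(ons), sum(offs)
    if total_ons == 0 or total_offs == 0:
        # Nothing to balance; such trips would be flagged for review.
        return list(ons), list(offs)
    # Common target so adjusted boardings equal adjusted alightings.
    target = (total_ons + total_offs) / 2.0
    adj_ons = [b * target / total_ons for b in ons]
    adj_offs = [a * target / total_offs for a in offs]
    return adj_ons, adj_offs
```

With raw counts of 15 boardings but only 12 alightings, both series are scaled to a common total of 13.5, so the cumulative load ends the trip at zero.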
Furth et al. (2006) conclude that farebox reporting software is very inflexible, and that properties desiring to use farebox data to monitor ridership must first export data from the farebox system to a database developed in-house in order to structure reports.

Vendor-developed performance reporting software for AVL-APC systems is a fairly recent addition to the transit industry. This software is in use among properties of varying size, but appears to be especially welcomed by smaller properties with limited IT resources. Previously, vendor-provided AVL reporting software was limited to data "playback" routines that were useful for investigating incidents and customer complaints, but had very limited capability to support offline performance reporting and analysis.

More generally, the development of reporting software by ITS vendors reflects a change in their role in the technology life cycle. As Furth et al. (2006: 71) observed, in the initial phase of advanced technology deployment in the industry, technology vendors viewed their principal role as being providers of hardware, and "their job ended when they handed the transit agency the data." Whether the data could be rationalized and transformed into desired reports was an issue usually left to the transit property to resolve. Consequently, in the 1990s era of ITS deployment, many properties struggled to produce useful performance reports (Casey 2000).

The most important of the vendor-developed reporting packages covers data from AVL and APC systems. An example summarizing weekday service delivery performance on the Denver Regional Transportation District's South Broadway route in October 2005 is shown in Figure 4-3. The report provides information that operations managers usually track in assessing service delivery, including on-time performance, boardings per mile and hour of revenue service, and actual average speed compared to schedule speed. Passenger activity is also reported by service period, and data are sorted to identify the most heavily used stops. Sorting by performance category (boardings per revenue hour in this example) can produce rankings among scheduled trips. Finally, performance for selected trips is reported.

Vendor-developed reporting software is very useful for communicating performance information at the managerial levels of the organization. Beyond this important function, however, its data querying capabilities are limited. Thus, some properties have developed more flexible performance reporting systems in-house. It should be noted that in-house reporting systems were often originally developed out of necessity and were mostly confined to larger properties where staff with the necessary advanced database querying skills were in place.

The main advantage of reporting systems developed in-house is that they can be structured to produce performance indicators of specific interest to the agency. Moreover, as interests in specific aspects of performance evolve, the reporting system can be readily modified to evolve in tandem. Generally, the data queries programmed within in-house reporting systems can be structured to address virtually any question represented in the space-time-customer dimensions of the ITS database.

TriMet's experience with its in-house reporting system is approaching the 10-year mark. The system is considered to be among the most comprehensive in the transit industry (Furth et al. 2006), and its periodic performance reports include indicators designed to correspond to the service delivery attributes that the agency's satisfaction surveys have found to be important to customers. An example is shown in Table 4-1.

TriMet surveys found that reliability issues were the second most important source of dissatisfaction among bus riders (after frequency-of-service issues). Table 4-1 provides information on three alternative reliability measures. The first, on-time performance, is the transit industry's traditional measure of reliability. The second, headway adherence, reports the percentage of trips that maintain an actual headway within 50% of the scheduled headway; this proxy measures the spacing or regularity of service. The third reliability proxy, excess wait, measures the additional time a typical rider would spend waiting for a bus given the actual headway deviations documented in the AVL data.

The three measures of reliability often correspond, but not always. For example, Route 64-Marquam Hill achieves high marks for on-time performance and headway adherence, but fares much worse on the excess wait measure. The same is true for Route 51-Vista. It could be argued that the excess wait measure best represents a rider's view of reliability. However, the choice of indicator could just as well be informed empirically by relating the variation of each indicator over the routes in the system to the route-level variation in surveyed satisfaction. In this case, the "best" customer-oriented measure would be the one that most closely corresponds to riders' reported satisfaction.

Reporting and analysis software developed by third-party vendors ranges from fairly elementary packages that document Web and automated telephone system activity to more advanced packages that support statistical and spatial analysis of data extracted from an enterprise database. Data recovered by Web and automated telephone system monitoring software are described in Chapter 2 of the Guidebook.

In their most elementary applications, statistical software packages allow researchers to logically summarize ITS data and report patterns and trends. More advanced applications involve estimation of systematic relationships among ITS data elements within user-defined contexts. An important feature of a statistical software package is its ability to easily
extract data from an agency's ITS database. At TriMet, researchers in the operations division use statistical analysis software (SAS) for advanced analysis of ITS data. The main advantage of this package is that its programming features allow analysts to directly query the Oracle data tables in the agency's enterprise data system and create data records that can then be easily imported for statistical analysis. Another advantage of this software package is its extensive graphing capability, which supports more effective communication of trends and patterns.

[Figure 4-3. Summary route performance report from vendor-developed software.]

An example report from TriMet illustrating SAS graphing features is shown in Figure 4-4. This report is produced for
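The pattern described here, where analysis software queries the enterprise database directly and pulls back records shaped for analysis, can be illustrated in miniature. The report describes SAS against Oracle at TriMet; in this sketch an in-memory SQLite database stands in for the enterprise system, and the table and column names are invented for illustration.

```python
import sqlite3

# Stand-in for the enterprise database (the real case is Oracle at TriMet).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stop_events (route TEXT, ons INTEGER, offs INTEGER)")
# Hypothetical stop-level APC records.
conn.executemany(
    "INSERT INTO stop_events VALUES (?, ?, ?)",
    [("64", 12, 3), ("64", 5, 9), ("51", 7, 2)])

# Query the tables directly and return route-level records ready for
# statistical analysis -- the step the text attributes to SAS programs.
rows = conn.execute(
    "SELECT route, SUM(ons), SUM(offs) FROM stop_events "
    "GROUP BY route ORDER BY route").fetchall()
```

The design point is the same one the text makes: keeping the query inside the analysis environment avoids a separate manual export step between the database and the statistical package.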