or schedulers prepare a weekly list of blocks to be sampled and transmit the list to the operations department.

TABLE 9
Percentage of Bus Fleet Equipped with APCs

                          Agencies Responding
Percentage Range            No.        %
100                          12       26.7
50 to 99                      5       11.1
20 to 49                     11       24.4
10 to 19                     11       24.4
1 to 9                        6       13.3
Total responding             45      100.0

Some agencies have automated the assignment process using APC system software to identify blocks for which no APC data have been collected. One agency noted that the operations department assigns APC buses randomly for the first half of a pick, and the software takes over this function for the second half to ensure that all blocks are sampled. However, only 16% of respondents reported use of an automated assignment process.

On average, 80% of daily APC assignments are completed as scheduled. This percentage varies from 40% to 100% among respondents. Anecdotal evidence suggests that the percentage of successfully completed APC assignments increases over time, as departments adjust to the new procedures.

The number of times each trip is surveyed in a given time frame varies with the percentage of APC-equipped buses. Clearly, APCs provide a richer ridership and travel time database at a finer level of detail than farebox or manual counts, even for agencies with only a few APCs. The increased number of observations lends greater confidence to decisions regarding changes in service levels.

Being rich in data provides clear benefits but can create its own challenges. How agencies process and manage the increased amount of ridership data is addressed in the next section.

AUTOMATIC PASSENGER COUNTING DATA: PROCESSING, VALIDATING, AND REPORTING

This section examines survey responses related to APC data processing, validation, and reporting. The first step in data processing is to transfer the data from the APC unit. Table 10 indicates that data retrieval at the garage without a physical connection and real-time dynamic or periodic remote retrieval of APC data are the most common methods. Some agencies in the "other" category use a combination of methods or use different methods by mode.

TABLE 10
Means of APC Data Retrieval

                                                  Agencies Responding
Means of Retrieval                                  No.        %
At garage without a physical connection              23       48.9
Real-time dynamic or periodic remote retrieval       13       27.7
Direct download (probe) of APCs with a
  physical connection                                 4        8.5
Other                                                 7       14.9
Total responding                                     47      100.0

An important step in APC implementation is to ensure that the data meet the specified level of accuracy. Most respondents reported a threshold for acceptance of the APCs at the 90% or 95% level of accuracy. Some were more specific, for example, requiring a confidence level of 90% that the observations were within 10% of actual boardings and alightings. A few agencies were even more specific:

•  Total boardings and alightings for a trip: maximum error of 10% for load along a trip and maximum error of 10% on no more than 10% of observations.

•  Passenger load accuracy should be +/- 5% at each stop and in 95% overall concurrence with manual passenger counts. The system is to identify the correct stop location 95% of the time, with 100% of APC-generated stops being within +/- 1% of manually observed stops.

•  Stop by stop: for 85% of stops, the ON or OFF count shall be correct; within +/- 1 person 90% of the time; and within +/- 2 persons 97% of the time. Overall, total ons and offs within +/- 5%.

•  For stops with 1 to 5 boardings, the APC count was to have an absolute error of 0 in 80% of the cases, of 1 in 90% of the cases, and of 2 in 95% of the cases. For stops with 6 to 10 boardings, the APC count was to have an absolute error of 0 in 50% of the cases, of 1 in 75% of the cases, and of 2 in 90% of the cases. For stops with 11 or more boardings, the absolute error was to be within 10% in at least 90% of the cases.
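The last of these specifications lends itself to an automated acceptance check once APC counts are paired with manual ride-check counts for the same stops. The following is a minimal sketch of such a check, assuming a paired-count record layout invented for this example; it illustrates the quoted error bands and is not any agency's actual test procedure.

    from dataclasses import dataclass

    @dataclass
    class StopObservation:
        apc_ons: int     # boardings counted by the APC unit
        manual_ons: int  # boardings from the manual ride check

    def share_within(observations, max_abs_error):
        """Fraction of observations whose APC count is within
        max_abs_error of the manual count."""
        if not observations:
            return 1.0
        ok = sum(1 for o in observations
                 if abs(o.apc_ons - o.manual_ons) <= max_abs_error)
        return ok / len(observations)

    def passes_acceptance(observations):
        """Apply the banded stop-level criteria quoted above:
        1-5 boardings: error 0 in 80%, <=1 in 90%, <=2 in 95% of cases;
        6-10 boardings: error 0 in 50%, <=1 in 75%, <=2 in 90% of cases;
        11+ boardings: error within 10% in at least 90% of cases."""
        low = [o for o in observations if 1 <= o.manual_ons <= 5]
        mid = [o for o in observations if 6 <= o.manual_ons <= 10]
        high = [o for o in observations if o.manual_ons >= 11]
        high_ok = (sum(1 for o in high
                       if abs(o.apc_ons - o.manual_ons) <= 0.10 * o.manual_ons)
                   / len(high)) if high else 1.0
        return (share_within(low, 0) >= 0.80 and
                share_within(low, 1) >= 0.90 and
                share_within(low, 2) >= 0.95 and
                share_within(mid, 0) >= 0.50 and
                share_within(mid, 1) >= 0.75 and
                share_within(mid, 2) >= 0.90 and
                high_ok >= 0.90)

The same structure extends naturally to the trip-level and load-based criteria in the other examples.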
Almost three-quarters of respondents (71%) indicated that they use their accuracy requirements on an ongoing basis. An FTA report notes passenger count accuracy in the 2% to 3% error range using APCs (18).

An important question regarding data accuracy is, Compared to what? Manual counts are typically used as the basis of comparison, as noted in the preceding examples, although APC vendors and some users correctly note that manual counts themselves are not 100% accurate. Table 11 shows that comparison with manual counts is the most common technique used to edit and validate APC data. However, agencies use a variety of methods to edit and validate data, and many validation techniques look for internal inconsistencies without reference to another data source as a basis of comparison.

TABLE 11
Methods to Edit and Validate APC Ridership Data

                                                            Agencies Responding
Method                                          Purpose        No.       %
Compare with manual counts                      Accuracy        32      69.6
Look for unexplained variance across trips      Validation      27      58.7
Compare ridership totals across days for
  reasonableness                                Validation      25      54.3
Rely on professional judgment of planners
  and schedulers                                Validation      24      52.2
Use an automated program to analyze APC data    Validation      24      52.2
Compare ridership and revenue totals            Accuracy        18      39.1
Compare on/off totals by trip and adjust
  as needed                                     Validation      14      30.4
Other                                           Varies           7      15.2
Total responding                                                46     100.0

Automated data validation programs can make life much simpler for data analysts. These programs are provided by the APC vendor, developed in-house, or purchased from a third party. Agencies using third-party programs noted that they feature up to 36 validation routines with adjustable thresholds. Vendor software is not always transparent to the user, and it is important to understand how the validation checks work.

The most common test is to compare boardings and alightings. As Table 12 shows, agencies reported various thresholds for determining validity at the block or trip level. The table presents reported examples of validation tests, thresholds, and actions.

TABLE 12
Examples of Automated Validation Program Rules

Test                              Threshold               Action
Boardings vs. alightings by       5%, 10%, 20%, or 30%    Discard block or trip data
  block or by trip                                          if threshold exceeded
Loads                             Less than 0             Adjust boardings/alightings
                                                            at heaviest-use stops
Bus stop location                 Within 200 feet of      Flag stop data if threshold
                                    actual bus stop         exceeded
Actual vs. scheduled block        10%                     Discard data if threshold
  miles/kilometers                                          exceeded
Actual vs. scheduled block        30 minutes or           Discard data if threshold
  pullout/pull-in times             20 minutes              exceeded
Actual vs. scheduled trip         "Significantly          Discard data if threshold
  start/end times                   off-schedule"           exceeded
Observed vs. "expected" results   Not specified           Assign quality code to data
  at the route, block, trip,
  and stop levels
Geographic information vs.        Look for match          Assign probable route/block
  computerized scheduling
  software data
Block data                        No data                 Discard block data
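To make the first Table 12 rule concrete, the sketch below compares boardings with alightings for a block and discards the block when the imbalance exceeds a configurable threshold. The record layout and the 10% default are assumptions chosen for illustration, not vendor code.

    def block_imbalance(ons_total, offs_total):
        """Relative imbalance between boardings and alightings.
        Returns 0.0 for a perfectly balanced block."""
        denominator = max(ons_total, offs_total)
        if denominator == 0:
            return 0.0
        return abs(ons_total - offs_total) / denominator

    def validate_block(stop_records, threshold=0.10):
        """Discard (return None) if block boardings and alightings
        differ by more than the threshold; otherwise keep the data.
        stop_records: list of (ons, offs) tuples, one per stop served."""
        ons_total = sum(ons for ons, _ in stop_records)
        offs_total = sum(offs for _, offs in stop_records)
        if block_imbalance(ons_total, offs_total) > threshold:
            return None  # action from Table 12: discard block data
        return stop_records

For example, a block with 480 total ons and 520 total offs has an imbalance of about 7.7% and would pass at the 10% threshold but fail at 5%.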
The following annotated version of one of the more detailed descriptions of APC data editing and validation in the survey is presented as an example:

   The data collected by the APC buses and then matched to the stop/schedule data is stored in a DB2 database. We use a SAS [Statistical Analysis System] program to process this data, screen out "bad" data, and store the validated data in SAS datasets. The following tests are used to screen out "bad" data:

   1. If the ratio of ons/offs for a bus-date-block is less than 0.7 or greater than 1.3, then data for all trips operated by that bus-date-block are declared invalid.

   2. If 0 ons and 0 offs are counted for a complete bus-date-block, then the data are declared invalid.

   3. Multiple measurements for the same stop (when a bus opens its doors more than once at a stop, for example) are aggregated into a single record.

   4. Data for trips measured by more than one bus on a single day (e.g., when an APC bus is traded off with another APC bus) are merged.

   5. At route terminals at which layovers occur, all offs measured for the arriving trip and for the departing trip are assigned to the last stop of the arriving trip. All ons measured for the arriving trip and the departing trip are assigned to the first stop of the departing trip.

   6. Leaving loads are calculated for each stop. Special rules are used for open-loop routes where passengers ride through a terminal.

   7. Trip samples that have a mismatch between date and schedule type (as a result of stop-matching errors) are eliminated from the database. This happens rarely, mainly on the first day of a booking that falls on the same day as a switch between standard time and daylight saving time.

   8. Trip samples that have large imbalances between total ons and offs are eliminated from the database.
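A compact sketch of how screening rules 1, 2, and 6 might be expressed follows. The agency quoted above works in SAS against a DB2 database; this illustration instead uses Python and an assumed per-stop record layout, so the names and structure are inventions for the example.

    def screen_bus_date_block(records):
        """Apply rules 1, 2, and 6 to one bus-date-block.
        records: list of dicts with 'ons' and 'offs' keys, in stop order.
        Returns records with leaving loads attached, or None if invalid."""
        total_ons = sum(r["ons"] for r in records)
        total_offs = sum(r["offs"] for r in records)

        # Rule 2: a block with no counted activity at all is invalid.
        if total_ons == 0 and total_offs == 0:
            return None

        # Rule 1: an ons/offs ratio outside [0.7, 1.3] invalidates the block.
        if total_offs == 0 or not 0.7 <= total_ons / total_offs <= 1.3:
            return None

        # Rule 6: leaving load at each stop is the running sum of ons minus offs.
        load = 0
        for r in records:
            load += r["ons"] - r["offs"]
            r["leaving_load"] = load
        return records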

Some agencies use APC data for NTD purposes. FTA requires manual checks annually to validate APC data for NTD submittal. The concept of manual validation of APC data as a one-time or periodic (every 3 years, for example) exercise is of interest to agencies as they become more confident in the accuracy of APC data.

The survey included a question on the percentage of raw APC data that is converted into useful information for service planners, schedulers, and others. The overall average is 74%, comparable to findings from 10 years ago, with a median value of 80%.

Processing APC data often requires changes to existing data systems. The majority of respondents reported the need to identify GPS coordinates for stops and to create or maintain a bus stop inventory. Several agencies had already done this for implementation of AVL or automated passenger announcements. A few agencies noted the establishment of defined interfaces between computerized scheduling software packages and APC or AVL systems. Only one-quarter of all respondents indicated no changes to existing data systems.

For data storage and analysis, the most common changes were the addition of servers for data storage and new database software for analysis. Software development is discussed in greater detail later. More than 85% of respondents indicated that they archive APC data. The average and median length of time that agencies keep APC data is 5 years, although four agencies indicated that they plan to keep archived data indefinitely.

Table 13 describes the type and frequency of routine APC reports. The most common type of report is boardings and alightings by stop, but all types of reports are generated by a majority of agencies that use APCs. Detailed (segment/stop-level) ridership and scheduling-related reports are most likely to be generated as needed.

Table 14 indicates a variety of sources for data processing and report generation software. A majority of agencies rely on the hardware vendor, but several indicated in-house software development, and it was not uncommon for agencies to use an outside vendor other than the hardware vendor. More than 70% of agencies that used an outside vendor indicated that the process involved customization of the software to meet the agency's specific needs.

More than 90% of responding agencies indicated a capability to generate nonstandardized reports from the APC system. The most common method was for end users to generate these reports, but one-quarter of agencies with this capability rely on the outside vendor for specialized report generation.
TABLE 13
Types of Reports Routinely Generated from APC Data

                          No. Agencies   Frequency of Reports (percentage of agencies responding)
Type                       Responding    Annually  Quarterly  Monthly  Weekly  Daily  As Needed
Stop-level boardings
  and alightings               41            2        15         10       2      10      61
Route-level ridership          38            5        26         16       3      18      32
Route segment ridership        38           11        11          5       3       8      63
System ridership               33            9        15         27      --      21      27
Performance measures           33            6        27         12       3      12      39
Schedule adherence             32           --        16         13      --      16      56
Running times                  31           --        13          3       3      13      68
Total responding               42
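As an illustration of the most common report type in Table 13, a stop-level boardings-and-alightings summary can be produced by aggregating validated APC records. The sketch below assumes a simple (route, stop_id, ons, offs) record layout invented for this example rather than any particular agency's schema.

    from collections import defaultdict

    def stop_level_report(records):
        """Aggregate validated APC records into total ons and offs per stop.
        records: iterable of (route, stop_id, ons, offs) tuples."""
        totals = defaultdict(lambda: [0, 0])
        for route, stop_id, ons, offs in records:
            totals[(route, stop_id)][0] += ons
            totals[(route, stop_id)][1] += offs
        # Print one line per stop, highest boardings first.
        for (route, stop_id), (ons, offs) in sorted(
                totals.items(), key=lambda item: -item[1][0]):
            print(f"{route:>8} {stop_id:>10} {ons:>8} {offs:>8}")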