CHAPTER 8
Passenger Count Processing and Accuracy

Accuracy of automated passenger counts may be reduced by many types of errors, including counting error, location error, attribution error (i.e., attributing counts to the wrong trip), modeling error (e.g., assumptions about loops), and sampling error. It is also important to distinguish between the accuracy of raw counts and that of screened and corrected counts, and between the accuracy of directly measured items (ons and offs by stop) and aggregate measures such as load, passenger-miles, and trip-level boardings.

Finally, for any type of error, it is important to distinguish between bias (systematic error) and random error. While random error, like sampling error, shrinks with increased sample size, correcting for bias is usually impractical; therefore, controlling bias becomes far more important than controlling random error.

To a large extent, this chapter and the following one repeat material originally published in Furth et al. (7).

8.1 Raw Count Accuracy

The accuracy of raw counts tends to be the focus of vendors and many buyers. Kimpel et al. recently studied Tri-Met counts and found statistically significant bias for one of the two bus types tested (bus type affects how sensors are mounted)--an average overcount of 4.24% for ons and 5.37% for offs (37). Overall, standard deviation of random count error was found to be rather large--about 0.5 passenger per stop for both ons and offs, for a coefficient of variation (cv) of 0.37. This value is surprisingly large; the researchers suspect newer systems are more precise.

Kimpel et al. suggest applying correction factors to overcome biases. However, few agencies can afford the research needed to establish the level of systematic over- or undercount. They need counts whose biases are small enough to live with. Less onerous are bias corrections established by the APC vendor. One vendor includes in its processing software correction factors for counts of 1, 2, and 3+ passengers, yielding non-integer corrected counts.

Test criteria for APC equipment often fail to distinguish between random and systematic error. For example, the criterion "the count should be correct at 97% of stops" does not consider whether there might be a tendency to over- or undercount. Another weakness of this criterion is that many stops may have zero ons and offs, which is rather easy to count correctly.

To control bias, tests should require that the ratio of total counted ons to total "true" ons be close to 1. Using Tri-Met's random stop-level error cv of 0.37, the hypothesis that the on counts have no systematic error can be accepted at the 95% confidence level if this ratio is in the range 1 ± 0.72/√n, where n equals the number of stops contributing to the test total. A less stringent test would allow a small degree of bias, for example, 2% (partly in recognition that the "true" count may itself contain errors); then the acceptance range becomes 1 ± (0.02 + 0.72/√n), which, with n equal to 5000, is the range 1 ± 0.03.

One of STM's tests is that, at the trip level, the average absolute deviation between automated and manual counts of boardings should be less than 5% of average trip boardings. Because it uses absolute deviations, this test masks systematic error. However, the strict criterion of 5% effectively forces both random and systematic error to be small.

Acceptance tests should specify the screening criteria and the maximum percentage of trips (or blocks of trips) rejected, and then apply accuracy criteria to the remaining data. STM provides a good example: it rejects trips with an imbalance of 5 or more passengers, requires that no more than 85% of trips be rejected, and applies accuracy criteria to the remaining trips.
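To make the arithmetic above concrete, the following minimal Python sketch applies an STM-style screen (rejecting trips whose ons and offs are out of balance by 5 or more passengers) and then performs the bias acceptance test, checking whether the ratio of total counted ons to total "true" ons lies within 1 ± (b + 1.96·cv/√n), where b is the allowed bias (0 for the strict test, 0.02 for the less stringent one). The function names, the trip data structure, and the reuse of the 0.37 cv from the Tri-Met study as the error parameter are illustrative assumptions, not part of the report.

```python
import math

CV_STOP = 0.37   # stop-level random-error cv from the Tri-Met study (Kimpel et al.)
Z_95 = 1.96      # two-sided 95% multiplier; 1.96 * 0.37 gives the 0.72 used in the text


def screen_trips(trips, max_imbalance=5):
    """Drop trips whose total ons and offs differ by max_imbalance or more,
    mirroring the STM screening rule cited above. Each trip is assumed to be
    a dict with per-stop lists under the keys "ons" and "offs"."""
    return [t for t in trips
            if abs(sum(t["ons"]) - sum(t["offs"])) < max_imbalance]


def bias_acceptance(counted_ons, true_ons, allowed_bias=0.0):
    """Check whether total counted ons are consistent with zero systematic error.

    counted_ons, true_ons -- per-stop APC and ground-truth boardings.
    allowed_bias          -- tolerated bias, e.g., 0.02 for the less stringent test.
    Returns (ratio, half_width, passed).
    """
    n = len(true_ons)                          # stops contributing to the test total
    ratio = sum(counted_ons) / sum(true_ons)   # should be close to 1
    half_width = allowed_bias + Z_95 * CV_STOP / math.sqrt(n)
    return ratio, half_width, abs(ratio - 1.0) <= half_width


# Worked check of the figures quoted in the text: n = 5000 stops, 2% allowed bias.
print(round(0.02 + Z_95 * CV_STOP / math.sqrt(5000), 3))   # 0.03, i.e., range 1 +/- 0.03
```

A fuller acceptance procedure would also verify, before applying this test, that the screen does not reject more than the maximum share of trips specified in the contract.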
8.1.1 Measuring Ground Truth

One problem in testing APC accuracy is the difficulty of observing ground truth. Conventional manual counts can have greater counting error than a good APC. One vendor insists that clients use video cameras, at least one per door, in