
Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability (2008)

Chapter: Chapter 5 - Identification of Deficiencies

Suggested Citation:"Chapter 5 - Identification of Deficiencies." National Academies of Sciences, Engineering, and Medicine. 2008. Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability. Washington, DC: The National Academies Press. doi: 10.17226/14167.


5.1 Introduction

This chapter describes how to use travel time and delay to identify real performance deficiencies in the transportation system and how to distinguish these deficiencies from random variations in the data. A diagnosis chart is provided to help analysts identify the likely root causes of identified travel time, delay, and variability deficiencies.

The guidance in this chapter is designed to be applied after the analyst has identified the agency's performance standards and collected the data (or forecasts) on system performance. This chapter provides limited guidance on the inclusion of uncertainty in the treatment of forecasted travel time and delay, based on a limited set of data from California. Ideally, agencies will be able to develop their own data on variability and apply it in lieu of the default values provided here.

The chapter starts by reiterating the key considerations involved in defining agency performance standards and collecting data for the purpose of deficiency assessments. Readers are referred to the appropriate chapters for additional background information and guidance on the selection of performance measures, the setting of performance standards, data collection, and the forecasting of travel time and delay. Guidance is then provided on the statistical tests needed to distinguish between apparent violations of agency standards (due to sampling error) and actual violations. Additional guidance is provided on the incorporation of uncertainty into the use of forecasted system performance for assessing deficiencies. Finally, the chapter provides a diagnosis chart for identifying the likely root causes of travel time, delay, and reliability deficiencies.

5.2 Quantifying Agency Standards

To know whether a patient is sick, you need established methods for measuring health (such as body temperature) and, for each measure, standards that distinguish between healthy and sick.
Similarly, an agency must establish what it considers to be good health for its transportation system or its vehicle fleet operations. Acceptable levels for transportation system performance measures must usually be determined based on the agency's experience of what constitutes acceptable performance for its decision makers and the constituents they report to. Chapter 2 provides a discussion of the selection of appropriate measures and the setting of acceptable values for each measure.

Note that when assessing deficiencies using field measurements, the agency performance standard for the facility or the trip must be more precise than simply "Level of Service D." The standard must state over how long a period the measurement is taken and whether or not brief violations can be tolerated.

5.3 Data Collection

The development of a data collection plan and the determination of the required sample size for measurements are discussed in Chapter 3. If travel time and delay data cannot be measured directly, they must be estimated using the methods in Chapter 6.

5.4 Comparing Field Data to Performance Standards

Analysts must take great care to ensure that they have measured performance in the field using a method consistent with the performance standard set by the agency. For example, an agency may have a peak-hour LOS standard of "D" for traffic signals. HCM (5) defines the threshold for LOS "D" as no more than 55 seconds of control delay averaged over the worst contiguous 15 minutes of the peak hour. Thus, there may be individual signal cycles where the average delay for vehicles is greater than 55 seconds, but

if the average for the highest-volume 15-minute period is less than that, then the signal operates at LOS "D" or better. Indeed, almost half of the vehicles during the worst 15 minutes may experience delays greater than 55 seconds and the intersection would still be at LOS "D."

Analysts also must exclude delay measurements not related to the performance standard from their computations. For example, HCM (5) excludes from its LOS standards delays caused by accidents, poor weather, etc. Only delay caused by the signal control (control delay) is included in the performance measurement for establishing signal LOS. Analysts also will need to determine whether holidays, weekends, and days with special events are to be excluded from the comparison to the agency performance standard. Other performance measures described in Chapter 2, such as the TTI, include nonrecurring delay from incidents and the other causes mentioned above, but they can also be calculated excluding such events. If the situation and analytical framework call for consideration of nonrecurring delay in the identification of deficiencies and testing of solutions, these measurements should be left in the data computations and the appropriate performance measures used in the analysis.

5.4.1 Taking Luck into Account in Field Measurements

Once the standards have been set and the performance data have been gathered, the next task is to determine whether one or more of the performance standards have been violated. With field data, this is more difficult than simply comparing the results to the agency standards. There is usually a great deal of day-to-day, hour-to-hour, and even minute-by-minute fluctuation in travel times, and especially in delays, for a transportation system component. So the analyst must assess the degree to which "luck" played a part in meeting or failing to meet the performance standards.
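Before turning to the statistics, the worst-15-minute screening from the HCM example above can be sketched in code. This is a minimal sketch: the per-minute control-delay series is fabricated for illustration, and the helper name is invented here, not taken from the report.

```python
# Average control delay over the worst contiguous 15-minute window of
# the peak hour, per the LOS "D" screening discussed in Section 5.4.
# The peak-hour data below are fabricated for illustration.

def worst_window_mean(delays, window=15):
    """Return the highest mean over any `window` consecutive measurements."""
    if len(delays) < window:
        raise ValueError("need at least one full window of data")
    return max(
        sum(delays[i:i + window]) / window
        for i in range(len(delays) - window + 1)
    )

# 60 one-minute average control delays (seconds) across the peak hour:
peak_hour = [40.0] * 20 + [58.0] * 15 + [45.0] * 25
worst = worst_window_mean(peak_hour)   # 58.0 seconds
at_los_d_or_better = worst <= 55.0     # False: worse than LOS "D"
```

Note that individual minutes (or cycles) above 55 seconds do not by themselves fail the standard; only the worst contiguous 15-minute average is compared to the threshold.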
Statistical hypothesis testing provides the tool for ruling out luck as a contributor to meeting or failing to meet the agency's performance standards. To determine whether you have gathered sufficient evidence to establish that the agency is meeting or failing to meet its transportation system performance standards, it is necessary to perform a statistical hypothesis test of the difference between the mean of your field measurements and the agency's performance standard.

To perform a statistical test, analysts must adopt a baseline (null) hypothesis that they can then reject if the test is successful. The null hypothesis can be either:

1. The actual performance in the field violates the agency's performance standards, or
2. The actual performance in the field meets the agency's performance standards.

Any statistical test is subject to two types of error. Rejecting the null hypothesis when it is actually true is a Type I error; its probability, called "alpha" in the equations below, is chosen by the analyst and is usually set quite small (e.g., 5 percent for a 95 percent confidence level test). Failing to reject the null hypothesis when it is actually false is a Type II error; its probability varies with the difference between the true mean and the standard, the standard deviation, and the sample size. (Analysts should consult standard statistical textbooks for tables of the Type II error associated with different confidence levels and sample sizes.) The analyst has less control over this second type of error, and its probability can be quite a bit larger than that of a Type I error.

The usual approach is therefore to frame the test so that the error with the most serious consequences for the agency is the one whose probability the analyst directly controls through alpha. This can result in the apparently perverse approach of adopting as your null hypothesis the very condition you do not want to be true (e.g., that the actual performance violates agency standards).
Analysts who wish to be very sure they do not declare a deficiency when in reality there is none will adopt the first null hypothesis above (i.e., "everything is not fine") and apply the test in Section 5.4.2. That test has a low probability, completely controlled by the analyst through alpha, of concluding there is a deficiency when in reality there is no problem.

Conversely, analysts who wish to be very sure they do not say that everything is fine when in reality there is a problem will adopt the second null hypothesis (i.e., "everything is fine") and apply the test in Section 5.4.3. Again, the test will have a low probability of concluding there is no problem when in fact there is one.

An example of the first situation might be one where the risk or opportunity cost of mistakenly identifying a problem when none exists is very high (e.g., condemning property to expand a facility when the benefits of the expansion are not statistically significant). An example of the opposite situation might involve public safety (e.g., failing to identify a statistically significant increase in accidents at a given location).

For each null hypothesis the test is as follows.

5.4.2 First Null Hypothesis (Don't Cry Wolf Needlessly)

The analyst will reject the null hypothesis that the system fails to meet agency standards (with confidence level equal to 1 − α) if the following equation is true:

x̄ < q + t(1−α; n−1) · s/√n    (Eq. 5.1)

where

x̄ = the mean of the performance measure as measured in the field;
q = the maximum acceptable value for the performance measure;
s = the standard deviation of the performance measure as measured in the field;
n = the number of measurements of the performance measure made in the field; and
t = the Student's t value for a confidence level of (1 − α) and (n − 1) degrees of freedom (see a standard statistics textbook, a spreadsheet function, or Exhibit 5.1 below for values to use).

Exhibit 5.1. Student's t values.

            Confidence Level
n        50%    85%    90%    95%    99%
5        0.74   1.78   2.13   2.78   4.60
10       0.70   1.57   1.83   2.26   3.25
50       0.68   1.46   1.68   2.01   2.68
100      0.68   1.45   1.66   1.98   2.63
1,000    0.67   1.44   1.65   1.96   2.58

5.4.3 Second Null Hypothesis (Cry Fire at the First Hint of Smoke)

Reject the null hypothesis that the system meets agency standards (with confidence level equal to 1 − α) if the following equation is true:

x̄ > q − t(1−α; n−1) · s/√n    (Eq. 5.2)

where all variables are as explained for the previous equation.

5.5 Comparing Forecasted Performance to Performance Standards

Generally, the degree of uncertainty present in forecasts or estimates of travel time or delay is not known. Common practice is to completely ignore any uncertainty in the forecasts, which tends to result in agencies "painting themselves into corners" by planning very precisely for ultimately uncertain future conditions.

For a first attempt to introduce the concept of uncertainty into the forecasting process, the analyst can use the known and measured uncertainty of direct field measurements of travel time and delay. It is assumed that the variance of the forecasts is at least equal to, if not greater than, that measured in the field, since in addition to all the other uncertainties present in the field, forecasts have uncertainty as to the actual number of vehicles present.
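The two decision rules can be sketched in Python as below. The travel-time data, the 20-minute standard, and the function names are illustrative assumptions invented for this sketch; only the t value (2.26 for 95 percent confidence and n = 10) comes from Exhibit 5.1.

```python
# Sketch of the Eq. 5.1 and Eq. 5.2 decision rules. The field data and
# the 20-minute standard are fabricated for illustration.
from math import sqrt
from statistics import mean, stdev

def eq_5_1_rejects_deficiency_null(x_bar, q, s, n, t):
    """Eq. 5.1: reject the null "system fails to meet the standard"
    (don't cry wolf) when x_bar < q + t * s / sqrt(n)."""
    return x_bar < q + t * s / sqrt(n)

def eq_5_2_rejects_meets_null(x_bar, q, s, n, t):
    """Eq. 5.2: reject the null "system meets the standard"
    (cry fire early) when x_bar > q - t * s / sqrt(n)."""
    return x_bar > q - t * s / sqrt(n)

# Ten field-measured travel times (minutes) against a q = 20-minute
# standard; Exhibit 5.1 gives t = 2.26 for 95% confidence and n = 10.
runs = [19.2, 21.5, 18.8, 20.4, 19.9, 22.1, 18.5, 20.8, 19.4, 21.0]
x_bar, s, n, t = mean(runs), stdev(runs), len(runs), 2.26

no_provable_deficiency = eq_5_1_rejects_deficiency_null(x_bar, 20.0, s, n, t)
possible_deficiency = eq_5_2_rejects_meets_null(x_bar, 20.0, s, n, t)

# Section 5.5 screening of a forecast: substitute the forecasted value
# for the field mean, keeping the field-measured s and n:
forecast_flagged = eq_5_2_rejects_meets_null(21.0, 20.0, s, n, t)
```

With this sample the mean (20.16 minutes) lies within the margin t · s/√n (about 0.86 minutes) of the standard, so the conservative test finds no provable deficiency while the cautious test flags one. This illustrates the choice of burden of proof discussed above: for means inside the margin, the conclusion depends on which null hypothesis the analyst adopts.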
So as a first approximation, the analyst might use the field-measured variance in travel time and delay (if available) and perform the hypothesis tests described above for field-measured data. One merely substitutes the forecasted values for the field-measured mean values in the equations and uses the standard deviation of the field-measured values for the standard deviation in the equations. The effect of introducing the above-described hypothesis tests into the assessment of future deficiencies is to provide for a margin of error in planning for the future.

5.6 Diagnosing the Causes

Once one or more deficiencies have been identified, it is valuable to be able to assign a primary cause to each deficiency. This will aid the analyst later in generating alternative improvement strategies to mitigate the problem. Exhibit 5.2 below provides some initial suggestions for identifying the root causes of travel time, delay, and variability deficiencies. Other significant resource documents are available for this purpose, and the reader is referred to Section 7.3 for several references that cover both highway and transit modes.

Exhibit 5.2. Diagnosis chart for travel time, delay, and variability deficiencies.

Deficiency: Travel time is excessive, but there are no significant delay or reliability deficiencies.
  Proximate causes: Free-flow speeds are too low or travel distances are too great.
  Likely root causes: Low-speed facility, perhaps due to inadequate design speed (not a freeway). Road system does not provide a straight-line path between origin and destination (such as in mountainous terrain).

Deficiency: Delay is regular but excessive. (There may be excessive variability in travel time, but delay recurs regularly.)
  Proximate causes: Inadequate capacity when compared to demand.
  Likely root causes: Insufficient number of lanes. Inadequate design. Poor signal timing. Too much demand. Lack of alternative routes or modes for travelers.

Deficiency: Excessive variability in delay.
  Proximate causes: Facility is prone to incidents and/or response to incidents is inadequate. There may be surges in demand.
  Likely root causes: Facility is accident-prone due to poor design. Frequent days of poor weather. Incident detection and response is poorly managed or nonexistent. Travelers are not provided with timely information to avoid segments with problems. There are unmetered surges of demand (often from large special generators).
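For automated screening across many facilities, the chart can be encoded as a simple lookup table. The symptom keys below are hypothetical labels invented for this sketch, and the cause text paraphrases Exhibit 5.2; an agency would substitute its own taxonomy.

```python
# Illustrative encoding of the Exhibit 5.2 diagnosis chart. The symptom
# keys are hypothetical labels; the cause text paraphrases the exhibit.
DIAGNOSIS_CHART = {
    "excessive_travel_time": {
        "proximate": "Free-flow speeds too low or travel distances too great.",
        "root": ["Inadequate design speed (not a freeway)",
                 "No straight-line path between origin and destination"],
    },
    "regular_excessive_delay": {
        "proximate": "Inadequate capacity compared to demand.",
        "root": ["Insufficient number of lanes", "Inadequate design",
                 "Poor signal timing", "Too much demand",
                 "Lack of alternative routes or modes"],
    },
    "excessive_delay_variability": {
        "proximate": "Incident-prone facility, weak incident response, "
                     "or surges in demand.",
        "root": ["Accident-prone design", "Frequent poor weather",
                 "Poor or nonexistent incident detection and response",
                 "No timely traveler information",
                 "Unmetered demand surges from large special generators"],
    },
}

def likely_root_causes(symptom):
    """Return the Exhibit 5.2 root-cause list for a deficiency symptom."""
    return DIAGNOSIS_CHART[symptom]["root"]
```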


TRB's National Cooperative Highway Research Program (NCHRP) Report 618: Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability explores a framework and methods to predict, measure, and report travel time, delay, and reliability from a customer-oriented perspective.
