The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
CHAPTER 6

Measuring Travel Time Reliability

Previous chapters discussed data sources; the next step is to characterize the impact of incidents on travel time reliability. This chapter introduces the development of travel time reliability performance measures. A set of specific, measurable, achievable, results-oriented, and timely (SMART) performance measures needs to be developed. The measures should provide valuable information for a traveler's decision making and facilitate the management of transportation systems.

Travel time reliability is a measure of the stability of travel time and is subject to fluctuations in flow and capacity. Typical quantifiable measures of travel time reliability that are widely used include the 90th or 95th percentile travel time (the amount of travel time required on the heaviest travel days), the buffer index (the extra time required by motorists to reach their destination on time), the planning time index (the total travel time that is planned, including the buffer time), and the frequency with which congestion exceeds some expected threshold. These measures are typically applied in cases of recurring congestion. The same measures can be applied in cases of nonrecurring congestion. For example, on an urban freeway, traffic backups may not be strictly caused by a recurring bottleneck (e.g., a lane drop) but in many cases may be caused by nonrecurring bottlenecks (e.g., incidents).

Literature Review

Literature on modeling the relationship between crash reduction and travel time reliability improvement is scant. However, there are significant publications that deal with how to measure travel time reliability, as well as how to estimate travel time delays caused by incidents. This section provides an overview of these studies.

Research on Modeling Travel Time Reliability

Existing travel time reliability measures have been created by different agencies. Based on the way the measurements are calculated, travel time reliability measures can be classified as empirical and practical or as mathematical and theoretical (1).

Empirical and Practical Measures

FHWA recommended four travel time reliability measures: the 90th or 95th percentile travel time; the buffer index (the buffer time that most travelers add to their average travel time, expressed as a percentage and calculated as the difference between the 95th percentile and average travel times divided by the average travel time); the planning time index (calculated as the 95th percentile travel time divided by the free-flow travel time); and the frequency with which congestion exceeds some expected threshold (2).

In the FHWA report Monitoring Urban Freeways in 2003: Current Conditions and Trends from Archived Operations Data (3), several reliability measures were discussed (e.g., statistical range measures, buffer measures, and tardy-trip indicator measures). The measures recommended in that report include the percentage of variation (the amount of variation expressed as a percentage of the average travel time), the misery index (the length of delay of only the worst trips), and the buffer time index (the amount of extra time needed to be on time for 95% of trips).

Florida's Mobility Performance Measures Program proposed using the Florida Reliability Method. This method was derived from Florida DOT's definition of the reliability of a highway system as the percentage of travel on a corridor that takes no longer than the expected travel time (the median travel time) plus a certain acceptable additional time (a percentage of the expected travel time) (4).

Monitoring and Predicting Freeway Travel Time Reliability Using Width and Skew of Day-to-Day Travel Time Distribution (5) examined travel time data from a 6.5-km eastbound carriageway of the A20 freeway in the Netherlands between 6 a.m. and 8 p.m. for the entire year of 2002. Data were binned into 15-min intervals.
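The empirical measures defined above lend themselves to direct computation from a day-to-day sample of travel times on one route. The following Python sketch is illustrative only and is not from the report: the function names are our own, and the percentile routine uses simple linear interpolation, whereas agencies may follow other conventions.

```python
import statistics

def percentile(values, p):
    """p-th percentile with linear interpolation between order statistics."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, frac = int(k), k - int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * frac

def reliability_measures(travel_times, free_flow_time):
    """Empirical reliability measures for one route, given repeated
    travel time observations (e.g., one per weekday for a fixed
    departure period) and the free-flow travel time, all in minutes."""
    mean_tt = statistics.mean(travel_times)
    t95 = percentile(travel_times, 95)
    return {
        "mean": mean_tt,
        "95th_percentile": t95,
        # Buffer index: extra time beyond the average, as a fraction
        # of the average travel time (2).
        "buffer_index": (t95 - mean_tt) / mean_tt,
        # Planning time index: total planned time relative to free flow.
        "planning_time_index": t95 / free_flow_time,
    }
```

For example, for travel times of [10, 10, 10, 10, 20] min and a free-flow time of 8 min, the 95th percentile is 18 min, giving a buffer index of 0.5 and a planning time index of 2.25.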

Two reliability metrics, the skew and the width of the travel time distribution, were created as reliability measures:

skew = (T90 - T50)/(T50 - T10) and var = (T90 - T10)/T50,

where Txx denotes the xxth percentile travel time. Plots of these two measures showed behavior analogous to the relationship between traffic stream flow and density, although the authors argued that the general pattern may change if the binning size is set larger (greater than 45 min). The preliminary conclusion is that for skew ≈ 1 and var ≈ 0.1 (free flow is expected most of the time), travel time is reliable; for skew << 1 and var >> 0.1 (congested), longer travel times can be expected in most cases, and the larger the var, the more unreliable the travel times may be classified. For skew >> 1 and var ≈ 0.1, congestion may set in or dissipate, meaning that both free-flow and high travel times can be expected; the larger the skew, the more unreliable the travel times may be classified. An indicator UIr for unreliability, standardized by the length of a roadway, was proposed in this research using skew and var. A reliability map was created for each index (skew, var, and UIr), as well as for a commonly used index, UIr,alternative, calculated as the standard deviation divided by the mean. It was found that UIr flags both congestion and transient periods as periods with unreliable travel times, whereas skew and var each cover only one of these situations and UIr,alternative shows much less detail. The author incorporated the indices to predict long-term travel times using a Bayesian regularization algorithm. The results showed travel time predictions comparable to the observed travel times.

Mathematical and Theoretical Measures

The report Using Real-Life Dual-Loop Detector Data to Develop New Methodology for Estimating Freeway Travel Time Reliability (6) used real-life dual-loop detector data collected on I-4 in Orlando, Florida, on weekdays in October 2003 to fit four different distributions (lognormal, Weibull, normal, and exponential) to travel times for seven segments. The Anderson-Darling goodness-of-fit statistic and error percentages were used to evaluate the distributions, and the lognormal produced the best fit to the data. The fitted lognormal distribution was used to estimate segment and corridor travel time reliabilities. Reliability in this paper is defined as follows: a roadway segment is considered 100% reliable if its travel time is less than or equal to the travel time at the posted speed limit. This definition differs from many existing travel time reliability definitions in that it puts more emphasis on the user's perspective. The results showed that segment travel time reliability was sensitive to geographic location: the congested segments had a higher variation in reliability.

New Methodology for Estimating Reliability in Transportation Networks (7) defined link travel time reliability as the probability that the expected travel time at degraded capacity is less than the free-flow link travel time plus an acceptable tolerance. The authors suggest that the reliability for a network is equal to the product of the reliabilities of its links. As a result, the reliability of a series system is always less than the reliability of the least reliable link. Therefore, the multipath network system should have a reliability calculated as

Rs = 1 - ∏_{J=1}^{W} (1 - Rpath,J)

where J is the path label and W is the number of possible paths in the transportation network. The authors independently tested different degraded link capacities for a hypothetical network (with four nodes and five links), with capacity reductions of 0%, 10%, 20%, 30%, 40%, and 50%. It was found that a small reduction in link capacity caused only a small or no variation in travel time reliability.

In A Game Theory Approach to Measuring the Performance Reliability of Transport Networks (8), network reliability is defined in two dimensions: connectivity reliability and performance reliability. According to the author, measuring network reliability is a complex issue because it involves the infrastructure as well as the behavioral responses of users. He defines the reliability of a network as acceptable expected trip costs even for users who are extremely pessimistic. A game-theoretic approach is described in the paper to assess network reliability. A sample network composed of nine nodes and 12 links, with six possible paths from one origin to one destination, was used. A nonparametric, noncooperative model was developed. The model assumes that network users do not know with certainty which of a set of possible link costs they will encounter, and that the evil entity imposing link costs on the network users does not know which route the users will choose. The problem was formulated as a linear program with path choice probabilities as the primal variables and link-based scenario probabilities as the dual variables. Because this formulation requires path enumeration, it is not feasible for large networks. Alternatively, a simple iterative scheme based on the method of successive averages (MSA) was proposed. The author believes that the expected trip cost for pessimistic trip makers offers a quality measure for network reliability.

Trip Travel-Time Reliability: Issues and Proposed Solutions (9) proposed five methods for estimating a path's travel time variance from the travel time variances of its component segments, in order to estimate travel time reliability measures. To test these five methods and the assumption of travel time normality, field data collected on a section of I-35 running through San Antonio, Texas, were used. The field data included 4 months of automatic vehicle identification (AVI) data obtained from the TransGuide System at all 54 AVI stations from June 11, 1998, to December 6, 1998. Travel times for detected vehicles were estimated and invalid data were filtered out.
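The series and multipath relationships described in study (7) above can be sketched in a few lines. The snippet below is our illustrative reading of that formulation, not code from the paper: a path is treated as a series system (the product of its link reliabilities), and the network fails only if every path fails.

```python
from math import prod

def path_reliability(link_reliabilities):
    # Series system: a path works only if every one of its links does,
    # so its reliability can never exceed that of its weakest link.
    return prod(link_reliabilities)

def network_reliability(paths):
    """Rs = 1 - prod_{J=1..W} (1 - Rpath,J), where each element of
    `paths` is the list of link reliabilities along one path."""
    return 1.0 - prod(1.0 - path_reliability(p) for p in paths)
```

Two paths with link reliabilities [0.9, 0.9] and [0.8] give path reliabilities 0.81 and 0.8, so Rs = 1 - (0.19 x 0.2) = 0.962; adding a path can only raise Rs, matching the parallel-system intuition.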

Besides the San Antonio network, a fictitious freeway network was created and modeled using INTEGRATION simulation software. The analysis found that, under steady-state conditions, a lognormal distribution describes the travel times better than other distributions. To evaluate the performance of the five proposed methods, AVI data over a 20-day period were analyzed in addition to the simulated data. The results were consistent for the real-world and simulated data. The method that computes the trip coefficient of variation (CV) as the mean CV over all segment realizations (Method 3) outperformed the other methods using the field data, whereas the method that estimates the median trip CV over all realizations j (Method 4) and the method that estimates the path CV as the midpoint between the maximum and minimum CV over all realizations j (Method 5) produced the best results. With the simulated data, Method 4 performed the best. When some localized bottlenecks were introduced to the system, Method 1 performed well and Methods 3 and 4 generated reasonable results.

Research on Estimating Delays from Incidents

Travel-Time Reliability as a Measure of Service (10) used travel time data and incident records collected from a 20-mi corridor along I-5 in Los Angeles, California, to describe the travel time variation caused by incidents. It was found that both the standard deviation and the median of travel time were larger when there were incidents. The research suggested that the 90th percentile travel time is a meaningful way to combine the average travel time and its variability.

The I-880 Field Experiment Analysis of Incident Data (11) evaluated the effectiveness of the Freeway Service Patrol (FSP) implemented on a particular freeway section. The data sources were field observations made by probe-vehicle drivers traveling the freeway section with an average headway of 7 min, as well as incident data collected by the California Highway Patrol computer-aided dispatch system, officers' records, tow-truck company logs, and FSP records. In this field study, incident response times, clearance times, and durations depended on the incident type and severity and on the availability of incident management measures. Average response time decreased after the implementation of FSPs.

New Model for Predicting Freeway Incidents and Incident Delays (12) constructed a new model called IMPACT, using incident data from six metropolitan areas, to predict incidents and delays based on aggregate freeway segment characteristics and traffic volumes. There are four submodels in IMPACT: an incident rate submodel to estimate the annual number of peak and off-peak incidents by type; an incident severity submodel to estimate the likelihood that incidents block one or more lanes; an incident duration submodel to estimate how long it takes to clear the incident; and a delay submodel to estimate the delays caused by the incidents. Seven standard incident types were adopted and studied. The peak and off-peak incident rates as a function of average annual daily traffic over capacity (AADT/C) were similar across all incident types. The magnitudes of the peak period rates are sensitive to the degree of congestion: some rates decline with increasing AADT/C, whereas others have a U-shaped relationship. Based on the findings of previous studies, IMPACT estimates the capacity lost because of incidents.

Estimating Magnitude and Duration of Incident Delays (13) developed two regression models for estimating freeway incident congestion and a third model for predicting incident duration. According to the authors, the factors that affect the impact of nonrecurring congestion on freeway operations include incident duration, reduction in capacity, and demand rate. Two sets of data were collected in this study for 1 month before (February 16, 1993, to March 19, 1993) and 1 month after (September 27, 1993, to October 29, 1993) implementing FSP. The first set covered incident characteristics such as type, severity, vehicles involved, and location. The second set covered traffic characteristics such as 30-s speed, flow, and occupancy at freeway mainline stations and at on- and off-ramp stations upstream and downstream of the incident location. Two models were developed to estimate incident delay. The first depicts incident delay as a function of incident duration, traffic demand, capacity reduction, and number of vehicles involved. The second predicts cumulative incident delay as a function of incident duration, number of lanes affected, and number of vehicles involved. Model 1 outperformed Model 2. The incident duration prediction model uses a log transformation of duration. This model can predict 81% of incident durations in natural log format as a function of six variables: number of lanes affected, number of vehicles involved, a dummy variable for truck involvement, a dummy variable for time of day, the log of police response time, and a dummy variable for weather.

Quantifying Incident-Induced Travel Delays on Freeways Using Traffic Sensor Data (14) applied a modified deterministic queuing theory to estimate incident-induced delays using 1-min aggregated loop detector data. The delay was computed using a dynamic traffic volume-based background profile, which is considered a more accurate representation of prevailing traffic conditions. Using traffic counts collected by loop detectors upstream and downstream of the accident location, the research team developed curves of arrival and departure rates for a specific location. The area between the two curves was used to compute the total system delay. To validate the algorithm, VISSIM software was used to construct some incident scenarios. Before conducting the simulation analysis, the model parameters were calibrated by matching in-field loop detector and simulated traffic counts. Data collected on SR-520 at the Evergreen Point Bridge were fed to