CHAPTER 6

Measuring Travel Time Reliability

Previous chapters discussed data sources; the next step is to characterize the impact of incidents on travel time reliability. This chapter introduces the development of travel time reliability performance measures. A set of specific, measurable, achievable, results-oriented, and timely (SMART) performance measures needs to be developed. The measures should provide valuable information for a traveler's decision making and facilitate the management of transportation systems.

Travel time reliability is a measure of the stability of travel time and is subject to fluctuations in flow and capacity. Typical quantifiable measures of travel time reliability that are widely used include the 90th or 95th percentile travel times (the amount of travel time required on the heaviest travel days), the buffer index (extra time required by motorists to reach their destination on time), the planning time index (total travel time that is planned, including the buffer time), and the frequency with which congestion exceeds some expected threshold. These measures are typically applied in cases of recurring congestion. The same measures can be applied in cases of nonrecurring congestion. For example, on an urban freeway, traffic backups may not be strictly caused by a recurring bottleneck (e.g., lane drop) but in many cases may be caused by nonrecurring bottlenecks (e.g., incidents).

Literature Review

Literature on modeling the relationship between crash reduction and travel time reliability improvement is scant. However, there are significant publications that deal with how to measure travel time reliability, as well as how to estimate travel time delays caused by incidents. This section provides an overview of these studies.

Research on Modeling Travel Time Reliability

Existing travel time reliability measures have been created by different agencies.
Based on the ways the measurements were calculated, travel time reliability measures can be classified as empirical and practical or as mathematical and theoretical (1).

Empirical and Practical Measures

FHWA recommended four travel time reliability measures: the 90th or 95th percentile travel time; a buffer index (the buffer time that most travelers add to their average travel time, expressed as a percentage and calculated as the difference between the 95th percentile and the average travel time divided by the average travel time); the planning time index (calculated as the 95th percentile travel time divided by the free-flow travel time); and the frequency that congestion exceeds some expected threshold (2).

In the FHWA report Monitoring Urban Freeways in 2003: Current Conditions and Trends from Archived Operations Data (3), several reliability measures were discussed (e.g., statistical range measures, buffer measures, and tardy-trip indicator measures). The recommended measures in this report include percentage of variation (the amount of variation expressed in relation to the average travel time as a percentage), the misery index (the length of delay of only the worst trips), and the buffer time index (the amount of extra time needed to be on time for 95% of the trips).

Florida's Mobility Performance Measures Program proposed using the Florida Reliability Method. This method was derived from Florida DOT's definition of reliability of a highway system as the percentage of travel on a corridor that takes no longer than the expected travel time (the median travel time) plus a certain acceptable additional time (a percentage of the expected travel time) (4).

Monitoring and Predicting Freeway Travel Time Reliability Using Width and Skew of Day-to-Day Travel Time Distribution (5) examined travel time data from a 6.5-km eastbound carriageway of the A20 freeway in the Netherlands between 6 a.m. and 8 p.m. for the entire year of 2002. Data were binned into 15-min intervals.
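These empirical measures can be computed directly from a sample of observed travel times. The sketch below assumes travel times and the free-flow travel time are in the same units; the misery index uses one common formulation (mean of the worst 20% of trips relative to the overall mean), and exact definitions vary across agencies.

```python
import statistics

def percentile(values, p):
    """Percentile (0-100) of a list of numbers, with linear interpolation."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def reliability_measures(travel_times, free_flow_time):
    """FHWA-style reliability measures from a sample of travel times."""
    mean_tt = statistics.mean(travel_times)
    p95 = percentile(travel_times, 95)
    p80 = percentile(travel_times, 80)
    # misery index: mean of the worst (slowest) 20% of trips over the mean
    worst = [t for t in travel_times if t >= p80]
    return {
        "95th_percentile": p95,
        "buffer_index": (p95 - mean_tt) / mean_tt,
        "planning_time_index": p95 / free_flow_time,
        "misery_index": statistics.mean(worst) / mean_tt,
    }
```

A segment with mostly 10-minute trips and an occasional 30-minute trip, for instance, shows a small mean but a large buffer index, which is exactly the unreliability these indices are meant to expose.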
Two reliability metrics (skew and width of the travel time distribution) were created as reliability measures: λskew = (T90 − T50)/(T50 − T10) and λvar = (T90 − T10)/T50, where TXX denotes the XXth percentile travel time. Plots of these two measures showed behavior analogous to the relationship between traffic stream flow and density, although the authors argued that the general pattern may change if the binning size is set larger (greater than 45 min). The preliminary conclusion is that for λskew ≈ 1 and λvar ≤ 0.1 (free flow is expected most of the time), travel time is reliable; for λskew << 1 and λvar >> 0.1 (congested), longer travel times can be expected in most cases, and the larger the λvar, the more unreliable the travel times. For λskew >> 1 and λvar ≥ 0.1, congestion may set in or dissipate, meaning that both free-flow and high travel times can be expected; the larger the λskew, the more unreliable the travel times. An indicator UIr for unreliability, standardized by the length of a roadway, was proposed in this research using λskew and λvar. A reliability map was created for each index: λskew, λvar, and UIr, as well as a commonly used index UIralternative, calculated as the standard deviation divided by the mean. It was found that UIr treats both congestion and transient periods as periods with unreliable travel times, whereas λskew and λvar each cover one situation and UIralternative shows much less detail. The author incorporated the indices to predict long-term travel times using a Bayesian regularization algorithm. The predicted travel times were comparable to the observed travel times.
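Both indicators depend only on three percentiles of the day-to-day travel time distribution, so they are simple to compute. A minimal sketch follows; the percentile estimator (linear interpolation) is an implementation choice not specified in the source.

```python
def skew_width_indicators(travel_times):
    """Skew (lambda_skew) and width (lambda_var) of a travel time sample,
    based on the 10th, 50th, and 90th percentile travel times."""
    xs = sorted(travel_times)

    def pct(p):
        # percentile by linear interpolation between order statistics
        k = (len(xs) - 1) * p / 100
        lo = int(k)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    t10, t50, t90 = pct(10), pct(50), pct(90)
    lam_skew = (t90 - t50) / (t50 - t10)
    lam_var = (t90 - t10) / t50
    return lam_skew, lam_var
```

For a symmetric distribution, λskew comes out near 1, and λvar grows with the spread of the distribution relative to the median, matching the interpretation above.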
Mathematical and Theoretical Measures

The report Using Real-Life Dual-Loop Detector Data to Develop New Methodology for Estimating Freeway Travel Time Reliability (6) used dual-loop detector data collected on I-4 in Orlando, Florida, on weekdays in October 2003 to fit four different distributions (lognormal, Weibull, normal, and exponential) to travel times for seven segments. The Anderson-Darling goodness-of-fit statistic and error percentages were used to evaluate the distributions, and the lognormal produced the best fit to the data. The fitted lognormal distribution was used to estimate segment and corridor travel time reliabilities. Reliability in this paper is defined as follows: a roadway segment is considered 100% reliable if its travel time is less than or equal to the travel time at the posted speed limit. This definition differs from many existing travel time reliability definitions in that it puts more emphasis on the user's perspective. The results showed that segment travel time reliability was sensitive to geographic location, with congested segments having a higher variation in reliability.

New Methodology for Estimating Reliability in Transportation Networks (7) defined link travel time reliability as the probability that the expected travel time at degraded capacity is less than the free-flow link travel time plus an acceptable tolerance. The authors suggest that the reliability of a path is equal to the product of the reliabilities of its links. As a result, the reliability of a series system is always less than the reliability of the least reliable link. The multipath network system then has a reliability calculated as

R_path = 1 − ∏(J=1 to W) (1 − R_J)

where J is the path label, R_J is the reliability of path J, and W is the number of possible paths in the transportation network.
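Under this definition, the series-path and multipath calculations can be sketched as below, assuming independent links and paths; the function and variable names are ours, not the paper's.

```python
from math import prod

def path_reliability(link_reliabilities):
    """A path works only if every link on it works, so path reliability is
    the product of the link reliabilities; it can never exceed the
    reliability of the least reliable link."""
    return prod(link_reliabilities)

def network_reliability(paths):
    """A multipath network fails only if every path fails:
    R = 1 - product over paths J of (1 - R_J)."""
    return 1 - prod(1 - path_reliability(links) for links in paths)
```

For example, two parallel single-link paths with reliabilities 0.9 and 0.8 give a network reliability of 1 − 0.1 × 0.2 = 0.98, higher than either path alone, while chaining the same two links in series gives only 0.72.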
The authors independently tested different degraded link capacities for a hypothetical network (with four nodes and five links) with capacity reductions of 0%, 10%, 20%, 30%, 40%, and 50%. It was found that a small reduction in link capacity caused only a small or no variation in travel time reliability.

A Game Theory Approach to Measuring the Performance Reliability of Transport Networks (8) defines network reliability in two dimensions: connectivity reliability and performance reliability. According to the author, measuring network reliability is a complex issue because it involves the infrastructure as well as the behavioral responses of users. He defines a network as reliable if expected trip costs are acceptable even for users who are extremely pessimistic. A game-theoretical approach is described in the paper to assess network reliability. A sample network composed of nine nodes and 12 links with six possible paths from one origin to one destination was used. A nonparametric and noncooperative model was developed. The model assumes that network users do not know with certainty which of a set of possible link costs they will encounter, and the evil entity imposing link costs on the network users does not know which route the users will choose. The problem was formulated as a linear program with path choice probabilities as the primal variables and link-based scenario probabilities as the dual variables. Because this formulation requires path enumeration, it is not feasible for large networks. Alternatively, a simple iterative scheme based on the method of successive averages (MSA) was proposed. The author believes that the expected trip cost for pessimistic trip makers offers a quality measure of network reliability.

Trip Travel-Time Reliability: Issues and Proposed Solutions (9) proposed five methods for estimating a path's travel time variance from its component segment travel time variances in order to estimate travel time reliability measures.
To test these five methods and the assumption of travel time normality, field data collected on a section of I-35 running through San Antonio, Texas, were used. The field data included 4 months of automatic vehicle identification (AVI) data obtained from the TransGuide system at all 54 AVI stations from June 11, 1998, to December 6, 1998. Travel times for detected vehicles were estimated, and invalid data were filtered out. Besides the
San Antonio network, a fictitious freeway network was created and modeled using INTEGRATION simulation software. The analysis found that, under steady-state conditions, a lognormal distribution describes the travel times better than other distributions. To evaluate the performance of the five proposed methods, AVI data over a 20-day period were analyzed in addition to the simulated data. The results were consistent for real-world and simulated data. The method that computes the trip coefficient of variation (CV) as the mean CV over all segment realizations (Method 3) outperformed the other methods using the field data, whereas the method that estimates the median trip CV over all realizations (Method 4) and the method that estimates the path CV as the midpoint between the maximum and minimum CV over all realizations (Method 5) produced the best results. With the simulated data, Method 4 performed best. When some localized bottlenecks were introduced to the system, Method 1 performed well and Methods 3 and 4 generated reasonable results.

Research on Estimating Delays from Incidents

Travel-Time Reliability as a Measure of Service (10) used travel time data and incident records collected from a 20-mi corridor along I-5 in Los Angeles, California, to describe the travel time variation caused by incidents. It was found that both the standard deviation and the median of travel time were larger when there were incidents. The research suggested that the 90th percentile travel time is a meaningful way to combine the average travel time and its variability.

The I-880 Field Experiment Analysis of Incident Data (11) evaluated the effectiveness of the Freeway Service Patrol (FSP) implemented at a particular freeway section.
The data sources were field observations made by probe-vehicle drivers traveling the freeway section with an average headway of 7 min, as well as incident data collected by the California Highway Patrol computer-aided dispatch system, officers' records, tow-truck company logs, and FSP records. In this field study, incident response times, clearance times, and durations depended on the incident type and severity and on the availability of incident management measures. Average response time decreased after the implementation of FSPs.

New Model for Predicting Freeway Incidents and Incident Delays (12) constructed a new model called IMPACT using incident data from six metropolitan areas to predict incidents and delays based on aggregate freeway segment characteristics and traffic volumes. There are four submodels in IMPACT: an incident rate submodel to estimate the annual number of peak and off-peak incidents by type; an incident severity submodel to estimate the likelihood that incidents block one or more lanes; an incident duration submodel to estimate how long it takes to clear an incident; and a delay submodel to estimate the delays caused by incidents. Seven standard incident types were adopted and studied. The peak and off-peak incident rates for average annual daily traffic over capacity (AADT/C) ≤ 7 were similar across all incident types. Magnitudes of the peak period rates are sensitive to the degree of congestion. Some rates decline with increasing AADT/C, whereas others have a U-shaped relationship. Based on the findings of previous studies, IMPACT estimates the capacity lost because of incidents.

Estimating Magnitude and Duration of Incident Delays (13) developed two regression models for estimating freeway incident congestion and a third model for predicting incident duration. According to the authors, the factors that affect the impact of nonrecurring congestion on freeway operation include incident duration, reduction in capacity, and demand rate.
Two sets of data were collected in this study for 1 month before (February 16, 1993, to March 19, 1993) and 1 month after (September 27, 1993, to October 29, 1993) implementing FSP. The first set covered incident characteristics such as type, severity, vehicles involved, and location. The second set covered traffic characteristics such as 30-s speed, flow, and occupancy at freeway mainline stations and at on- and off-ramp stations upstream and downstream of the incident location. Two models were developed to estimate incident delay. The first depicts incident delay as a function of incident duration, traffic demand, capacity reduction, and number of vehicles involved. The second predicts the cumulative incident delay as a function of incident duration, number of lanes affected, and number of vehicles involved. Model 1 outperformed Model 2. The incident duration prediction model uses a log transformation of duration. This model can predict 81% of incident durations in natural log format as a function of six variables: number of lanes affected, number of vehicles involved, a dummy variable for truck involvement, a dummy variable for time of day, the log of police response time, and a dummy variable for weather.

Quantifying Incident-Induced Travel Delays on Freeways Using Traffic Sensor Data (14) applied a modified deterministic queuing theory to estimate incident-induced delays using 1-min aggregated loop detector data. The delay was computed against a dynamic traffic volume-based background profile, which is considered a more accurate representation of prevailing traffic conditions. Using traffic counts collected by loop detectors upstream and downstream of the incident location, the research team developed curves of arrival and departure rates for a specific location. The area between the two curves was used to compute the total system delay. To validate the algorithm, VISSIM software was used to construct incident scenarios.
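The cumulative-curve idea can be sketched in a few lines. This is a simplified rendition of plain deterministic queuing from per-interval detector counts, not the authors' modified algorithm with its dynamic background profile; the function name and interface are ours.

```python
def incident_delay(arrivals, departures, interval_min=1.0):
    """Total vehicle delay (vehicle-minutes) as the area between the
    cumulative arrival and cumulative departure curves, given per-interval
    vehicle counts from upstream and downstream detectors."""
    if len(arrivals) != len(departures):
        raise ValueError("count series must cover the same intervals")
    cum_arr = cum_dep = 0.0
    delay = 0.0
    for a, d in zip(arrivals, departures):
        cum_arr += a
        cum_dep += d
        # queue at the end of this interval: vehicles that have arrived
        # upstream but not yet passed the downstream detector
        queue = max(cum_arr - cum_dep, 0.0)
        delay += queue * interval_min
    return delay
```

With arrivals held at 10 veh/min while departures drop to 5 veh/min for two minutes and then recover, the queue builds and drains, and the summed queue-minutes approximate the area between the two curves.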
Before conducting the simulation analysis, the model parameters were calibrated by matching in-field loop detector counts and simulated traffic counts. Data collected on SR-520 at the Evergreen Point Bridge were fed to
VISSIM software to simulate the incident-induced delay (IID). Even though most of the IIDs estimated by the algorithm were smaller than the IIDs obtained from the simulation models, they were reasonably close, with an average difference of 15.3%. The proposed algorithm was applied to two sample corridors: the eastbound section of SR-520 (Evergreen Point Bridge) and the I-405 northbound section between mileposts 1.68 and 15.75. The results validated the algorithm, and the estimated delay was comparable to field data.

Proposed Modeling Methodology

Although the existing measures introduced above attempt to quantify travel time reliability, they fail to distinguish between congested and noncongested conditions. Consequently, a more sophisticated model is needed to quantitatively measure travel time reliability and, at the same time, reflect the underlying traffic conditions that affect travel time reliability. To achieve these objectives, the team proposes a novel multistate travel time reliability modeling framework to model travel times under complex traffic conditions (15). This chapter provides an overview of the proposed approach.

According to the model, traffic can operate in either a congested state (caused by recurrent or nonrecurrent events) or an uncongested state. Travel time variability in the uncongested state is primarily determined by individual driver preferences and the speed limit of the roadway segment. Travel time in the congested state (recurring or nonrecurring), by contrast, is expected to be longer and to have larger variability than in the free-flow, uncongested state. The multistate model is used in this research to quantitatively assess the probability of each traffic state and the characteristics of the travel time distribution within each state.
A finite multistate model with K component distributions has the density function shown in Equation 3:

f(T | λ, θ) = Σ(k=1 to K) λk fk(T | θk)    (3)

where T is the travel time; f(T | λ, θ) is the density function of the distribution for T, representing the overall distribution of travel time; λ = (λ1, λ2, . . . , λK) is a vector of mixture coefficients with Σ(k=1 to K) λk = 1; θ = (θ1, . . . , θK) is a matrix of model parameters for the component distributions; θk = (θk1, . . . , θkI) is a vector of model parameters that determines the characteristics of the kth component distribution; and fk(T | θk) is the density function of the kth component distribution, corresponding to a specific traffic condition. Depending on the nature of the data, the component distributions fk(.) can be modeled using normal, lognormal, or Weibull distributions.

The distribution of travel time under a certain traffic condition corresponds to a specific component distribution in Equation 3. For instance, travel time in free-flow conditions can reasonably be assumed to be generated from a single-mode distribution. For a given period, multiple traffic conditions might exist, and the overall distribution of travel time for this period will follow a mixture distribution. The multiple states could be a result of traffic conditions that differ spatially (local bottlenecks at different sections), temporally (during the peak buildup or decay), or both. The multistate model has the advantage of better model-fitting in these multiple states and provides a novel approach for interpreting the results. The kth component fk(T | θk) in Equation 3 represents the distribution of travel time corresponding to a specific traffic condition, and the parameter vector θk determines the characteristics of that component distribution.
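Equation 3 can be evaluated directly once the components are chosen. A minimal sketch with normal components (one of the options named in the text) follows; the function names are ours.

```python
import math

def normal_pdf(t, mu, sigma):
    """Density of a normal distribution with mean mu and std. dev. sigma at t."""
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(t, lambdas, components):
    """Equation 3: f(T | lambda, theta) = sum_k lambda_k * f_k(T | theta_k),
    with each component given as a (mean, std. dev.) pair."""
    if abs(sum(lambdas) - 1.0) > 1e-9:
        raise ValueError("mixture coefficients must sum to 1")
    return sum(lam * normal_pdf(t, mu, sd)
               for lam, (mu, sd) in zip(lambdas, components))
```

The density curves of Figure 6.1 correspond to mixture_pdf(t, [lam, 1 - lam], [(10, 5), (35, 10)]) for different values of lam.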
The parameter λk represents the probability of each state and has significant implications for travel time reliability reporting, which is discussed later.

A specific example is the two-component normal distribution shown in Equation 4:

f(T | λ, μ1, μ2, σ1, σ2) = λ (1/(σ1√(2π))) exp(−(T − μ1)²/(2σ1²)) + (1 − λ) (1/(σ2√(2π))) exp(−(T − μ2)²/(2σ2²))    (4)

where λ is the mixture coefficient for the first component distribution, which is a normal distribution with mean μ1 and standard deviation σ1; the probability for the second component distribution is 1 − λ, and the parameters of the second normal distribution are μ2 and σ2. With different combinations of means and variances, the model can theoretically generate any form of distribution that fits specific traffic conditions and travel time distributions. Figure 6.1 (15) shows the density curves of a two-component normal mixture distribution with parameters μ1 = 10, σ1 = 5 and μ2 = 35, σ2 = 10. The plot shows the variation in the mixture distribution as a function of variations in λ. The model can accommodate the multiple modes commonly observed in travel time data and is flexible enough to capture a wide range of patterns; in theory, a mixture distribution can approximate any density function. The mixture model is calibrated using the expectation-maximization (EM) method instead of direct maximum likelihood methods because the data have multiple modes.

To verify the multistate distribution of travel time proposed above, the team randomly examined data from 10 drivers in the 100-Car Study data set. The home and work addresses provided by drivers were geocoded to a geographic information system (GIS) road network database; all the trips made by that
Figure 6.1. Mixture distribution and multimode travel time distribution.

driver were mapped to this database to visualize the trips. Figure 6.2 shows the home-to-work trips made by one participant. Travel times were then extracted from the relational database and plotted in histograms. As shown in Figure 6.3, the distribution of travel times for work-to-home trips by that participant is bimodal, which is in accordance with the assumption of the travel time model proposed by the team in the previous section. The start and end points of the trips have been eliminated from the figure to follow IRB rules.

Model Interpretation and Travel Time Reliability Reporting

The multistate model provides a platform for relating the parameters of the mixture distribution to the underlying traffic conditions. In particular, the mixture parameter λk in Equation 3 represents the probability that a particular travel time follows the kth component distribution, which corresponds to a particular traffic condition, as discussed earlier. This provides a mechanism for travel time reliability reporting. A novel two-step travel time reliability reporting method is thus proposed. The first step is to report the probability of each state as indicated by the mixture parameter λk. From a statistical standpoint, λk represents the mixture probability of each component distribution. The interpretation of this probability from the transportation perspective depends on the sampling mechanism, which refers to how trips were selected for analysis. Two types of sampling schemes, proportional sampling and fixed-size sampling, could be used, as discussed in this section.
Figure 6.2. Home-to-work trip visualization.

Figure 6.3. Home-to-work travel time histogram.

The number of travel time observations for a given period depends on traffic conditions. Typically, the number of trips per unit time is larger in congested periods than in a free-flow state. In a proportional sampling scheme, the number of trips sampled is proportional to the total number of trips in any given interval; for example, in a 10% proportional sampling approach, 10 trips are selected from every 100 trips. For proportional sampling, the probability λk can be interpreted from both macro- and micro-level perspectives. From the macro-level perspective, it corresponds to the percentage of vehicles in traffic state k; for example, the percentage of drivers that experience congested traffic conditions. This interpretation can be used to quantitatively assess system performance from a traffic management perspective. The λk can also be interpreted from a micro-level perspective: because a relative frequency (percentage) can also be interpreted as a probability for an individual, λk also represents the probability that a particular traveler will travel in state k in a given period. This is most useful for personal trip prediction.

In a fixed-size sampling scheme, a fixed number of trips is sampled for a given period regardless of the total number of trips during that period; for example, 30 trips sampled every 10 min. The λk under a fixed-size scheme represents the proportion of the total duration during which traffic is in the kth condition. For example, a λ value of 80% for the congested component implies that, out of 60 min, the traffic is in
a congested state for a total of 0.8 × 60 min = 48 min. The fixed-size sampling scheme also provides useful information for individual travelers, such as the proportion of time traffic will be in a congested condition.

The multistate model provides a convenient travel time reliability analog to the well-accepted weather forecasting example. The general population is familiar with the two-step weather forecasting approach (e.g., "the probability of rain tomorrow is 80%, with an expected precipitation of 2 in. per hour"). The same method can be used in travel reliability forecasting (e.g., "the probability of encountering congestion in the morning peak along a roadway segment is 67%, with an expected travel time of 30 min"). Travel time under each state can be reported using well-accepted indices such as percentiles and the misery index, which can be readily calculated from each component distribution.

This two-step reporting scheme provides rich information for both travelers and traffic management agencies. By knowing the probability of a congested or incident state and the expected travel time in each state, an individual traveler can make better travel decisions. For instance, for an important trip in which the traveler must arrive at his or her destination at a specific time, the traveler can plan for the worst-case scenario and estimate the required starting time from that scenario. For a flexible trip, the traveler can pick a starting time with a lower probability of encountering a congested state. For traffic management agencies, the proportion of trips in a congested state and the travel time difference between the congested state and the free-flow state provide critical information on the efficiency of the overall transportation system. This also provides an opportunity to quantitatively evaluate the effects of congestion alleviation measures.
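As an illustration of the calibration and the two-step report, the sketch below fits a two-component normal mixture with a hand-rolled EM loop and formats a forecast-style summary. The function names, initialization scheme, and report wording are our own, and a production fit would use a vetted mixture-model library.

```python
import math

def _npdf(t, mu, sigma):
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_two_state_em(times, iters=300):
    """EM calibration of a two-component normal mixture.
    Returns (lam, mu1, sd1, mu2, sd2); lam is the weight of component 1."""
    xs = sorted(times)
    n = len(xs)
    # crude but serviceable initialization from the sorted sample
    lam, mu1, mu2 = 0.5, xs[n // 4], xs[3 * n // 4]
    sd1 = sd2 = max((xs[-1] - xs[0]) / 4.0, 1e-6)
    for _ in range(iters):
        # E-step: responsibility of component 1 for each travel time
        resp = []
        for t in xs:
            p1 = lam * _npdf(t, mu1, sd1)
            p2 = (1.0 - lam) * _npdf(t, mu2, sd2)
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        w1 = sum(resp)
        w2 = n - w1
        lam = w1 / n
        mu1 = sum(r * t for r, t in zip(resp, xs)) / w1
        mu2 = sum((1 - r) * t for r, t in zip(resp, xs)) / w2
        sd1 = math.sqrt(sum(r * (t - mu1) ** 2 for r, t in zip(resp, xs)) / w1) + 1e-9
        sd2 = math.sqrt(sum((1 - r) * (t - mu2) ** 2 for r, t in zip(resp, xs)) / w2) + 1e-9
    return lam, mu1, sd1, mu2, sd2

def two_step_report(lam, mu1, sd1, mu2, sd2):
    """Weather-forecast-style report: each state's probability plus the
    state's 90th percentile travel time (z = 1.2816 for a normal component)."""
    z90 = 1.2816
    states = sorted([(lam, mu1, sd1), (1.0 - lam, mu2, sd2)], key=lambda s: s[1])
    lines = []
    for name, (p, mu, sd) in zip(("free-flow", "congested"), states):
        lines.append(f"P({name}) = {p:.0%}; if so, 90% of travel times fall below {mu + z90 * sd:.0f} s")
    return "\n".join(lines)
```

Feeding bimodal travel times (in seconds) to these two functions yields output in the spirit of the reporting rows of Table 6.1 below.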
Model Testing

To demonstrate and validate the interpretation of the multistate mixture model, a simulation was conducted using INTEGRATION software along a 16-mi expressway corridor (I-66) in northern Virginia. The validation was not conducted using the in-vehicle data for several reasons. First, the time stamps in the 100-Car data set were not GPS times and had errors of up to several hours. Second, the in-vehicle data do not provide ground truth conditions, given that the data are available only for the subject vehicle and not for all vehicles in the system. The simulation environment, however, provides similar probe data with additional information on the performance of the entire system and detailed information on any incidents that are introduced.

An origin-destination (O-D) matrix representing the number of trips traveled between each O-D pair was developed using field loop detector traffic volume measurements. The goal of the simulation effort was twofold. The first objective was to demonstrate that the estimated model parameters are comparable to the characteristics of each traffic state. The second objective was to demonstrate model interpretation under two alternative sampling schemes: proportional sampling and fixed-size sampling. Two O-D demands, a congested state and an uncongested state (scaled down from the congested matrix), were used to simulate temporal variations. Specifically, a database of 1,000 simulation runs, 500 with high demand and 500 with low demand, was constructed to cover various travel demand levels. The mixture scenarios were generated by sampling from the 1,000 time units of simulation output. The simulated travel times were mixed at fixed mixture levels of 10%, 25%, 50%, and 75% of time units in a congested state. The mixed travel times were fitted to the two-state model, a mixture of two normal distributions.
The fitting results demonstrated that the two-state model provides a better match to the simulated travel times than a unimodal model, as shown in Figure 6.4 (16). The results also showed that for proportional sampling, in which the number of trips sampled in a given period is proportional to the total number of trips in that period, the model underestimates the true proportion and overestimates the variance of the travel time in the free-flow state under high-congestion scenarios (75% of time units in the congested state). The reason for the bias is that a single normal distribution cannot sufficiently model the travel time in the congested state when the congested percentage is high. This problem can be resolved by introducing a third component or by using alternative component distributions (e.g., lognormal or gamma). For fixed-size sampling, in which a fixed number of trips is sampled for any given period, the model does reflect the characteristics of travel time under different traffic conditions. The parameters of the component distributions can be estimated satisfactorily, and the interpretation of the mixture parameters depends on the sampling scheme.

The multistate model was then applied to a data set collected from I-35 near San Antonio, Texas. The traffic volume near downtown San Antonio varied between 89,000 and 157,000 vehicles per day. The most heavily traveled sections were near the interchange with I-37, with average daily traffic counts between 141,000 and 169,000 vehicles, and between the southern and northern junctions with the Loop 410 freeway, with average daily traffic counts between 144,000 and 157,000. Vehicles were tagged with radio frequency (RF) sensors, and the travel time for each tagged vehicle was recorded whenever it passed a pair of AVI stations. A two-component model and a three-component model were fitted to the morning peak travel times. The Akaike information criterion (AIC) was used to compare these models.
The smaller the AIC value, the better the model fits. The results, shown in Table 6.1,
Figure 6.4. Comparison between unimodal and bimodal models (density of travel time in seconds).

Table 6.1. Mixture Normal Model-Fitting for Morning Peak Travel Time

                   Two-Component Model                Three-Component Model
                   Proportion  Mean  Std. Dev.        Proportion  Mean  Std. Dev.
Comp. 1               33%       588      38              0.33      588      38
Comp. 2               67%      1089     393              0.59      981     230
Comp. 3               NA         NA      NA              0.08     1958     223
Log likelihood            −3567                               −3503
AIC                        7144                                7020

Travel time reliability reporting, two-component model:
1. The probability of encountering congestion is 67%. If congestion is encountered, there is a 90% probability that travel time is less than 1,592 s.
2. The probability of encountering a free-flow state is 33%. In this case, there is a 90% probability that travel time is less than 637 s.

Travel time reliability reporting, three-component model:
1. There is a 59% probability of encountering congestion. If congestion is encountered, there is a 90% probability that travel time is less than 1,276 s.
2. There is a 33% probability of encountering free-flow conditions. In this case, there is a 90% probability that travel time is less than 637 s.
3. The probability of encountering an incident is 8%. In this case, there is a 90% probability that travel time is less than 2,244 s.

demonstrate that the three-component model provides substantially better model-fitting than the two-component model. The travel time reliability reporting listed in Table 6.1 clearly expresses the travel time information needed by travelers and decision makers: it reports not only the probability of encountering a congested state but also the expected travel time under that state. The multistate travel time reliability model is more flexible and provides superior fitting to travel time data than traditional single-mode models, and it provides a direct connection between the model parameters and the underlying traffic conditions.
It can also be directly linked to the probability of incidents and thus can capture the impact of nonrecurring congestion on travel time reliability.
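The reliability reporting above follows mechanically from the fitted parameters: within a normal component with mean m and standard deviation s, the 90th-percentile travel time is m + z(0.90) * s, where z(0.90) is approximately 1.28, and AIC = 2k - 2 log L, where k is the number of free parameters. The following minimal Python sketch is a consistency check on the values reported in Table 6.1, not the original estimation code:

```python
from statistics import NormalDist

# z-score for the 90th percentile of a standard normal (~1.2816)
Z90 = NormalDist().inv_cdf(0.90)

def pct90(mean: float, sd: float) -> float:
    """90th-percentile travel time (s) within one normal mixture component."""
    return mean + Z90 * sd

# Three-component parameters (mean, standard deviation) from Table 6.1
components = {
    "free flow (33%)": (588, 38),
    "congestion (59%)": (981, 230),
    "incident (8%)": (1958, 223),
}
for state, (mean, sd) in components.items():
    print(f"{state}: 90% of travel times below {pct90(mean, sd):,.0f} s")

# AIC = 2k - 2*logL; the two-component normal mixture has k = 5 free
# parameters (2 means, 2 standard deviations, 1 mixing proportion)
aic_two_component = 2 * 5 - 2 * (-3567)
print(aic_two_component)  # 7144, as reported in Table 6.1
```

The same arithmetic reproduces the two-component reporting as well; for example, 1089 + 1.28 x 393 gives roughly 1,592 s for the congested state.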
Conclusions and Discussion

The proposed multistate model provides a better fit to field data than traditional unimodal travel time models; the distribution of field travel times was tested and found to be multimodal. As demonstrated in the last row of Table 6.1, the reliability measures generated from the proposed model are specific and measurable. Specifically, the two model parameters, the probability of encountering congestion and the expected travel time under that state, are both specific and measurable. The proposed travel time reliability reporting is achievable because the model can be developed using in-vehicle, loop detector, video, or other surveillance technologies. Running the model is not time-consuming, so it can provide timely information. Consequently, the proposed model provides valuable information to assist travelers in their decision making and facilitates the management of transportation systems.

Travel time reliability can be enhanced by modifying driver behavior to reduce incidents. The proposed model is designed to characterize travel time reliability and congestion before and after incident-induced congestion. The events have been reviewed by data reductionists and designated as correctable or preventable by modifying driver behavior. Ideally, the data will also incorporate a data set with sufficient peak-hour travel time observations with and without the influence of safety-related events. It is relatively easy to capture correctable driver behavior with the aid of the data reduction tool developed by VTTI; the challenge is to collect travel time data before and after these events. The original plan was to use the in-vehicle naturalistic data collected in the candidate studies by VTTI and to relate them to external data using in-vehicle time and location information, but using such data is much more complicated and infeasible in most cases.
To develop travel time reliability models, a large data set was required, preferably with numerous trips sharing a common origin and destination. The team planned to extract work-to-home trips from the 100-Car data set but realized that this plan had to be abandoned. The 100-Car Study, like most naturalistic driving studies, provided incentives to participants. As stated in the 100-Car Study report,

One hundred drivers who commuted into or out of the Northern Virginia/Washington, DC metropolitan area were initially recruited as primary drivers to have their vehicles instrumented or receive a leased vehicle for this study. Drivers were recruited by placing flyers on vehicles as well as by placing newspaper announcements in the classified section. Drivers who had their private vehicles instrumented received $125 per month and a bonus at the end of the study for completing necessary paperwork. Drivers who received a leased vehicle received free use of the vehicle, including standard maintenance, and the same bonus at the end of the study for completing necessary paperwork. Drivers of leased vehicles were insured under the Commonwealth of Virginia policy. (17)

Because participants could receive monetary compensation or free use of leased vehicles, they were, to some extent, self-selected. Data reduction revealed that a relatively large portion of the subjects were students, hourly workers, or members of other low-income populations. Consequently, relatively few home-to-work or work-to-home trips were collected, resulting in a limited number of trips at regular peak hours.

There are other limitations as well. For example, the instrumented cars were sometimes driven by drivers other than the participant who signed up for the data collection. Consequently, the trips collected by GPS reflect multiple drivers' travel patterns.
Instead of regular trips sharing starting and ending points, some data sets showed a rather complicated travel pattern in which the recorded trips are relatively scattered, covering an expanded road network. Another limitation is that the 100-Car Study used computer time rather than a synchronized time source, which introduced errors in the time stamps. Consequently, even though the team does have access to a high-quality travel time database collected and maintained by the state of Virginia, it is hard to link the in-vehicle data with such external travel time data.

Because of these limitations of the candidate data sources, the statistical model proposed in this chapter used other travel time data. If future data collection is carefully designed following the recommendations the team proposes (discussed in Chapter 8 of this report), the data will be significantly improved to serve this research goal.

References

1. Chalumuri, R. S., T. Kitazawa, J. Tanabe, Y. Suga, and Y. Asakura. Examining Travel Time Reliability on Han-Shin Expressway Network. Eastern Asia Society for Transportation Studies, Vol. 7, 2007, pp. 2274-2288.
2. Office of Operations, Federal Highway Administration. Travel Time Reliability: Making It There On Time, All The Time. http://ops.fhwa.dot.gov/publications/tt_reliability/brochure/index.htm. Accessed May 17, 2011.
3. Lomax, T., D. Schrank, S. Turner, and R. Margiotta. Selecting Travel Time Reliability Features. Texas Transportation Institute and Cambridge Systematics, Inc., May 2003. http://tti.tamu.edu/documents/474360-1.pdf. Accessed May 17, 2011.
4. Florida Department of Transportation. Florida Mobility Measures. www.dot.state.fl.us/planning/statistics/mobilitymeasures. Accessed May 17, 2011.
5. Van Lint, J. W. C., and H. J. Van Zuylen. Monitoring and Predicting Freeway Travel Time Reliability: Using Width and Skew of Day-to-Day Travel Time Distribution.
Transportation Research Record: Journal of the Transportation Research Board, No. 1917, Transportation Research Board of the National Academies, Washington, D.C., 2005, pp. 54-62. http://trb.metapress.com/content/n76607qk003v26l1/fulltext.pdf. Accessed May 17, 2011.
6. Emam, E. B., and H. Al Deek. Using Real-Life Dual-Loop Detector Data to Develop New Methodology for Estimating Freeway Travel Time Reliability. Transportation Research Record: Journal of the Transportation Research Board, No. 1959, Transportation Research Board of the National Academies, Washington, D.C., 2006, pp. 140-150. http://trb.metapress.com/content/m0lv677211m12710/fulltext.pdf. Accessed May 17, 2011.
7. Al-Deek, H., and E. B. Emam. New Methodology for Estimating Reliability in Transportation Networks with Degraded Link Capacities. Journal of Intelligent Transportation Systems, Vol. 10, No. 3, 2006, pp. 117-129.
8. Bell, M. G. H. A Game Theory Approach to Measuring the Performance Reliability of Transport Networks. Transportation Research Part B, Vol. 34, 2000, pp. 533-545.
9. Rakha, H., I. El Shawarby, and M. Arafeh. Trip Travel-Time Reliability: Issues and Proposed Solutions. Journal of Intelligent Transportation Systems, Vol. 14, No. 4, 2010, pp. 232-250.
10. Chen, C., A. Skabardonis, and P. Varaiya. Travel-Time Reliability as a Measure of Service. Transportation Research Record: Journal of the Transportation Research Board, No. 1855, Transportation Research Board of the National Academies, Washington, D.C., 2003, pp. 74-79. http://trb.metapress.com/content/74t6691220058954/fulltext.pdf. Accessed May 17, 2011.
11. Skabardonis, A., K. Petty, R. Bertini, P. Varaiya, H. Noeimi, and D. Rydzewski. I-880 Field Experiment: Analysis of Incident Data. Transportation Research Record 1603, TRB, National Research Council, Washington, D.C., 1997, pp. 72-79. http://trb.metapress.com/content/0356m4h4v853681q/fulltext.pdf. Accessed May 17, 2011.
12. Sullivan, E. New Model for Predicting Freeway Incidents and Incident Delays. Journal of Transportation Engineering, Vol. 123, No. 4, 1997, pp. 267-275.
13. Garib, A., A. E. Radwan, and H. Al Deek. Estimating Magnitude and Duration of Incident Delays. Journal of Transportation Engineering, Vol. 123, No. 6, 1997, pp. 459-466.
14. Wang, Y., M. Hallenbeck, and P. Cheevarunothai. Quantifying Incident-Induced Travel Delays on Freeways Using Traffic Sensor Data. Washington Department of Transportation, 2008.
15. Guo, F., H. Rakha, and S. Park. Multistate Model for Travel Time Reliability. Transportation Research Record: Journal of the Transportation Research Board, No. 2188, Transportation Research Board of the National Academies, Washington, D.C., 2010, pp. 46-54. http://trb.metapress.com/content/87m631594x815745/fulltext.pdf. Accessed May 17, 2011.
16. Park, S., H. Rakha, and F. Guo. Calibration Issues for Multistate Model of Travel Time Reliability. Transportation Research Record: Journal of the Transportation Research Board, No. 2188, Transportation Research Board of the National Academies, Washington, D.C., 2010, pp. 74-84. http://trb.metapress.com/content/a6447w6uv7684011/fulltext.pdf. Accessed May 17, 2011.
17. Dingus, T. A., S. G. Klauer, V. L. Neale, A. Petersen, S. E. Lee, J. Sudweeks, M. A. Perez, J. Hankey, D. Ramsey, S. Gupta, C. Bucher, Z. R. Doerzaph, J. Jermeland, and R. R. Knipling. The 100-Car Naturalistic Driving Study, Phase II: Results of the 100-Car Field Experiment. Report DOT HS 810 593. NHTSA, 2006.