Estimating the Value of Truck Travel Time Reliability

CHAPTER 5

Reliability Valuation Framework

5.1 Introduction

The proliferation of mobile connected technologies has generated a plethora of data on how people and freight move throughout cities. Increasing cooperation between public agencies and technology developers has made some of these data available to transportation planners. A prominent example is the National Performance Management Research Data Set (NPMRDS), developed by the Federal Highway Administration (FHWA). This data set reports travel times throughout the National Highway System for passenger vehicles and trucks, describing traffic conditions at a much higher level of detail than was previously available. Other commercial travel time data sets that provide even higher resolution and greater network coverage have also become available.

This type of data has the potential to improve dramatically how planners assess roadway performance and prioritize investments. However, robust methodologies are required to make sense of the data and interpret them in useful ways. The Reliability Valuation Framework described in this chapter accomplishes this by describing methodologies for interpreting travel time data from an economics perspective, using the VOR and VOT parameters estimated in the previous chapter. This allows the framework to develop convincing answers to the following critical freight planning questions:

• Is the trucking performance of the roadway system improving?
• What roadway segments are causing bottlenecks to trucking?
• What are the freight reliability benefits of specific roadway projects, and should they be funded?

The framework answers these questions using consistent assumptions and definitions, which is important in developing robust planning recommendations that tell a cohesive story. The two main inputs of the framework are the VOR and VOT estimates and the NPMRDS travel time data (or other similar data). These inputs make the framework easy to use and flexible enough to work in a wide range of circumstances. The framework's methodologies include the following:

• How to clean and process travel time data,
• How to distinguish between different types of travel time variability and isolate the variability that causes unreliability in shipment delivery schedules,
• How to estimate route travel times from link-level data, and
• How to model the impacts that a project would have on reliability and traffic conditions in general.

While these methodologies were developed for trucking, they could be extended to passenger transportation with minor adjustments.
5.1.1 Organization

An overview of the Reliability Valuation Framework is shown in Figure 5-1. Each box corresponds to a section in this chapter. The top third of the figure describes how to measure reliability in a roadway network, including a brief discussion of the travel time data currently available (focusing on NPMRDS) and recommendations on how to use link-level data to estimate route-level travel times. The middle third covers methods for modeling the impact of projects on reliability. This is an emerging area of research with many proposed approaches; one approach that is compatible with the rest of the framework is recommended. The bottom third shows how to use the valuation parameters estimated in the previous chapter to answer common freight planning questions.

5.1.2 Reliability Definition Adopted

The Reliability Valuation Framework defines truck reliability from the perspective of system users. Like truck operators, this framework defines reliability as the lack of uncertainty in setting delivery schedules. The possibility that some deliveries may arrive late because of unforeseen traffic conditions generates uncertainty and costs that are different in nature from deliveries that predictably take longer. Late deliveries can result in production disruptions, missed intermodal transfers, and upset customers. This definition follows the spirit of the second Strategic Highway Research Program (SHRP 2) research on reliability, which defined it as the lack of travel time variability for a given trip (see Section 2.1.1 for an expanded discussion). The key to operationalizing this definition is to distinguish between travel time variation that is anticipated by truck operators and variation that is random and causes uncertainty.
[Figure 5-1. Reliability Valuation Framework. (Flowchart in three tiers: Measurement, covering travel time data, data cleaning and preprocessing, link-to-route aggregation, and 95th percentile delay; Modeling, covering travel patterns, roadway characteristics, proposed projects and improvements, and impact on reliability; and Valuation, covering the NCHRP 07-24 VOR and VOT estimates, system performance, bottleneck identification, and benefit-cost analysis.)]

5.1.3 Types of Travel Time Variation

The variation observed in travel time data sets such as the NPMRDS can be categorized as shown in Figure 5-2. Some variation is systematic because it can be anticipated by truck operators; examples include the differences between weekends and weekdays or between peak and off-peak times of the day. This variation is typically predicted reasonably well through traffic applications, historical data, and experience. Variation can also be idiosyncratic, in that it arises from the unique behavior of different drivers. Truck operators also anticipate this variation through experience or individualized metrics, and, therefore, it does not cause uncertainty in delivery schedules. Planning studies often do not account for this type of variation in the analysis of travel time data, so their unreliability measurements are potentially inflated. The approach proposed for measuring unreliability in this framework controls for both systematic and idiosyncratic variation, because neither causes uncertainty for freight users and therefore neither should be part of unreliability measurement.

Travel time variation can also be random. Not only is roadway travel fundamentally a stochastic process, but it occurs in an open system that can be affected by a variety of unforeseeable events. In the literature, this is often called "day-to-day" variability. It is useful to further categorize random variation by severity. Exceptional variation encompasses rare but major disruptions that cause substantial delays, including infrastructure failure, unexpected natural disasters, and civil unrest. Truck operators typically do not consider these possibilities because they are highly unlikely to occur during any given trip and have systemwide impacts that are difficult to avoid. However, most truck operators do take precautions against unanticipated variation, which is caused by smaller but more common events such as crashes, unannounced work zones, and severe weather. Table 5-1 provides additional examples for each of these types of variation. The Reliability Valuation Framework focuses on random unanticipated variation in travel times.
The framework does not consider exceptional variation, not because exceptional variation is unimportant, but because the available data sources do not capture exceptional events with any useful degree of precision. Moreover, these events can have widespread impacts and costs that are difficult to capture with the methods in this study. Connectivity and resiliency frameworks are more appropriate for analyzing these types of disruptions.

The definition of reliability adopted in this report also agrees with SHRP 2 in that it focuses on travel time and does not consider other sources of uncertainty that might affect freight users, such as fuel price volatility, mechanical failure, labor strikes, and variable tolls. Many other factors could introduce uncertainty in shipping decisions; however, only those that affect roadway travel time were considered. The time spent loading and unloading trucks also was not considered.

[Figure 5-2. Sources of travel time variation. (Tree diagram: variation in travel times divides into systematic, idiosyncratic, and random/unexpected variation; systematic and idiosyncratic variation are expected by freight users, while random variation divides into unanticipated variation, the focus of this study, and exceptional variation.)]
5.2 Measurement

5.2.1 Travel Time Data

5.2.1.1 National Performance Management Research Data Set

The NPMRDS reports average travel times throughout the National Highway System at 5-minute intervals for the past several years. Records are only available for times of the day when GPS-instrumented vehicles took measurements. The NPMRDS includes data for both trucks and passenger vehicles, and its coverage has been expanding in recent years, providing an increasingly complete picture of roadway performance throughout the country.

Before 2017, the NPMRDS was compiled by HERE Technologies, which used truck GPS data from the American Transportation Research Institute (ATRI). Since then, the data have been compiled by INRIX, which uses different sources of information. Because of this change, it is recommended that data published before 2017 not be combined or compared with data published afterward. The current version of the NPMRDS includes historical data back to January 2017. It also reports the level of "data density" for each record, indicating whether the record came from an average of 1–4 trucks, 5–9 trucks, or 10 or more trucks.

5.2.1.2 Commercial Data

INRIX sells customized travel time data with greater geospatial coverage and time resolution than the NPMRDS. These data can cover roads outside of the National Highway System or provide data for smaller analysis segments. HERE also sells a range of travel time data that can be customized for specific needs and provide greater coverage and detail than the NPMRDS.
Table 5-1. Type and cause of travel time variability by user perspective. (Each cause is annotated with its primary impact: supply or demand.)

Expected by freight users
- Systematic: large construction work zones (supply); commuting patterns and recurring congestion (demand); large special events (demand)
- Idiosyncratic: driver preferences (demand); truck speed governors (demand); hours-of-service regulations (demand)

Unexpected by freight users; typically not considered in decision-making
- Exceptional: infrastructure failure (supply); natural disaster (demand and supply); demonstrations and strikes (demand and supply)

Unexpected by freight users; typically considered in decision-making
- Unanticipated: traffic accidents (supply); severe weather (supply); small or unannounced construction work zones (supply); random fluctuation in demand (demand); small or unannounced special events (demand); traffic control device malfunctions (supply); problems with alternative modes (demand)
Despite no longer being used for the NPMRDS, ATRI's GPS data continue to be used in a wide range of truck planning and congestion studies nationwide. These data are said to come from approximately 800,000 instrumented trucks traveling around the country.

5.2.2 Data Cleaning and Preprocessing

Travel time data need to be cleaned and preprocessed to measure reliability according to the definition proposed in Section 5.1.2. This involves

• Removing systematic and idiosyncratic sources of variation to ensure that reliability measurement reflects actual delivery uncertainty and
• Removing exceptional sources of variation that are outside the scope of this study.

Several actions that accomplish this purpose are recommended; however, their implementation ultimately depends on the context of the analysis, the availability of data, and the questions being answered. For many applications, only the simplest cleaning actions may be necessary. All of these actions assume that travel time data are only available at the link level, which is currently the case in the United States. Route travel time data could potentially be available through ATRI or another source that uses GPS probes; however, confidentiality concerns have generally prevented these data from being available to planners.

5.2.2.1 Exclude Idiosyncratic Variation

Idiosyncratic variation is partially controlled for in the NPMRDS because it averages records at 5-minute intervals. The effect of any individual driver is averaged with other records during that 5-minute period. However, when only one measurement is available per period, as is common on low-volume roads, idiosyncratic variation can inflate unreliability estimates. Either or both of the following actions can be used to reduce this type of bias:

• Aggregate records at 15-minute intervals. Travel times can be averaged at 15-minute intervals to reduce the impact that any given truck can have on the results.
Averages can be weighted by the data density to improve precision. This action is recommended in virtually all cases, as it improves reliability measurement at no cost.
• Consider excluding records with a data density of 1–4 trucks. If enough data are available, ignoring records that have low data density can reduce idiosyncratic bias. Travel time records estimated on the basis of more than four trucks will be more stable across the population and less susceptible to the habits of individual drivers. However, this gain in accuracy should be weighed against the downside of reduced data coverage, particularly on roads with a limited number of records. The adequacy of this action depends on the focus of the study (urban high-volume roads versus lower-volume roads) and the amount of data available.

5.2.2.2 Exclude Systematic Variation

Systematic sources of variation, which are easily anticipated by truck operators, should also be cleaned from the data. This can involve the following actions:

• Exclude weekends and holidays. Traffic during weekends and holidays is predictably different from regular weekday traffic. These systematic differences do not cause uncertainty for truck operators and, therefore, should not be included in reliability measurements. Excluding weekends and holidays from the data is commonplace because traffic engineering traditionally focuses on weekday travel. However, it is possible for unreliability to be higher on weekends than on weekdays, because truck operators could be expecting free-flow conditions when faced with unforeseen traffic events. In these cases, the reliability of weekends and holidays can be analyzed separately, in recognition that traffic patterns are different.
• Potentially exclude major known events. Truck drivers are typically aware of major sporting events, festivals, or demonstrations that affect traffic. Records that coincide with these events can be excluded from the data, depending on the type of event, the likelihood that truck operators knew about it ahead of time, and other local factors.
• Exclude roads affected by construction activities. Systematic variability can also be caused by roadway construction projects. Large projects that add lanes or change geometrics are likely to reduce capacity during construction. Reduced roadway capacity often results from speeds being lowered or lane widths being decreased, which, under periods of high demand, can lead to congestion. Most truck operators consider this when making travel plans, particularly for large projects that take several months to complete. Many popular trip-routing applications display work zone information, which helps truck operators anticipate this type of congestion. Whether affected travel time data need to be removed depends on the ultimate purpose of the analysis. Analyses that seek to identify long-run performance issues should exclude data that coincide with construction projects because, once the projects are finished, conditions are likely to improve considerably. On the other hand, if the purpose of the analysis is to assess reliability in the present, then construction projects should not be excluded, as they can certainly cause unreliability. Some projects even evolve over time and thereby generate further uncertainty in traffic conditions. Either way, smaller projects that last less than 3 months are unlikely to be noticed by the majority of truck operators and should not be excluded.
Intraday systematic variation caused by morning and evening peak-period congestion also needs to be excluded; however, this is better handled through the formulation of the metrics presented in the following sections.

5.2.2.3 Exclude Exceptional Variation

As mentioned before, exceptional variability should also be excluded because the data and tools used in this study do not capture it adequately. This can involve the following actions:

• Exclude outliers. Excluding records with implausible travel speeds will improve the realism of the data. Local knowledge is required to set appropriate cut-offs; however, removing records with speeds higher than 90 mph is a reasonable starting point. These exclusions are needed because GPS-derived speed data are not perfect and sometimes include records that are physically impossible. Setting a lower bound can also help exclude situations in which exceptional events might be causing standstill traffic or a road closure, especially if entire parts of the roadway network are affected. This action is best implemented before aggregating to 15-minute intervals.
• Exclude major disruptions. Analysts should also exclude data in and around exceptional events such as natural disasters (e.g., severe earthquakes, hurricanes, wildfires), demonstrations, strikes, and infrastructure failures. This information is not included in traditional transportation databases; however, it should be easy to identify from firsthand experience or news reports.

5.2.2.4 Summary

The recommended data cleaning and preprocessing actions are summarized in Table 5-2. These actions should be considered to help remove sources of variation that are likely to bias reliability measurement. Analysts should consider local conditions and context when implementing these exclusions. The availability of historical information on construction work zones or major disruptions could limit the actions that can be implemented.
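The cleaning actions above can be chained into a single preprocessing step. The sketch below uses pandas with illustrative column names ('timestamp', 'link_id', 'speed_mph', 'travel_time_s', 'density'); these are assumptions for this example, not the official NPMRDS schema, and the letter coding of data density is likewise assumed.

```python
import pandas as pd

def clean_travel_times(records, holidays=(), max_speed_mph=90, min_density="B"):
    """Apply the cleaning actions above to link-level travel time records.

    Column names are illustrative, not the official NPMRDS schema:
    'timestamp' (datetime), 'link_id', 'speed_mph', 'travel_time_s',
    and 'density', assumed coded 'A' (1-4 trucks), 'B' (5-9),
    'C' (10 or more).
    """
    df = records.copy()
    # Systematic variation: drop weekends and any listed holidays.
    df = df[df["timestamp"].dt.dayofweek < 5]
    df = df[~df["timestamp"].dt.date.isin(set(holidays))]
    # Exceptional variation: drop physically implausible speeds.
    df = df[df["speed_mph"] <= max_speed_mph]
    # Idiosyncratic variation: optionally drop low-density records,
    # then average what remains at 15-minute intervals per link.
    if min_density is not None:
        df = df[df["density"] >= min_density]
    return (df.set_index("timestamp")
              .groupby("link_id")
              .resample("15min")["travel_time_s"]
              .mean()
              .dropna()
              .reset_index())
```

Exclusions for known major events or construction-affected links would be additional filters on top of this skeleton, driven by local records rather than anything in the data itself.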
Table 5-2. Recommended cleaning and preprocessing actions for NPMRDS.

Reduce idiosyncratic variability
- Average data at 15-minute intervals. Scope: in all cases.
- Exclude records based on 1–4 trucks. Scope: only for high-volume roads where data loss is unlikely to be an issue.

Reduce systematic variability
- Exclude weekends, holidays, or any other day that might have particular traffic conditions. Scope: in all cases; consider local factors to expand exclusions, and note that a separate weekend/holiday analysis might be needed, depending on location.
- Exclude significant roadway construction projects. Scope: in all cases where data are available and the objective is a long-term view of system performance.

Reduce exceptional/unexpected variability
- Exclude records implying unrealistic speeds. Scope: in all cases.
- Exclude records coinciding with known major system disruptions. Scope: in all cases where data are available.

5.2.3 Route Travel Times from Link Data

While travel time data are typically only available at the link level, the modeling of transportation decisions, particularly how users are affected by reliability, needs to occur at the trip level. However, estimating the distribution of travel times on a route from link data has historically been challenging, for several reasons. Foremost, link travel times are spatially correlated, because traffic conditions on one link are frequently indicative of conditions on adjacent links. Figure 5-3 shows the correlations between links in a typical highway corridor.

[Figure 5-3. Heat map of correlations between links on a corridor of I-235. Source: Lu (2017).]
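A correlation matrix of the kind visualized in Figure 5-3 can be computed directly from cleaned link records. The sketch below assumes the same hypothetical long-format columns used earlier ('timestamp', 'link_id', 'travel_time_s'); it is an illustration, not the procedure Lu (2017) used.

```python
import pandas as pd

def link_correlations(records):
    """Link-by-link Pearson correlation matrix of travel times, the
    quantity shown in heat maps like Figure 5-3. 'records' is a
    long-format DataFrame with hypothetical columns 'timestamp',
    'link_id', and 'travel_time_s'.
    """
    # Pivot to one column per link and one row per observation time,
    # then take pairwise correlations (pandas ignores unmatched gaps).
    wide = records.pivot_table(index="timestamp", columns="link_id",
                               values="travel_time_s")
    return wide.corr()
```

Values near 1 flag pairs of links whose congestion rises and falls together; values near 0 flag links whose conditions move independently.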
The level of spatial correlation has been found to be influenced by many factors, including the degree of commonality of traffic flows, the configuration of the roadway network, and the level of congestion (Gupta et al. 2018). This implies that as congestion patterns change throughout the day, so will travel time correlations (Rachtan et al. 2013). So-called temporal compatibility (the time at which a vehicle enters a link is determined by the travel time of the preceding link) implies that congestion leads vehicles to enter links later, causing correlation patterns to change. The distance between links also has a strong effect on correlations, as links that are farther apart will exhibit fewer similarities in traffic conditions.

To illustrate the effect that correlations have on route reliability, assume that there are two identical adjacent links, each with a travel time standard deviation of σ. If their travel times are assumed to be independent, the combined standard deviation would be √2·σ (equivalent to adding the variances). However, if travel times are assumed to be perfectly correlated, the combined standard deviation would be 2σ (equivalent to adding the standard deviations). In a practical sense, assuming links are independent provides a lower bound on route-level reliability, while assuming they are perfectly correlated provides an upper bound. In this simple example, the lower bound estimate is about 70 percent of the upper bound. For a route with 50 idealized links, however, the lower bound estimate (√50·σ) would be just 14 percent of the upper bound (50σ), indicating that as the length of a route increases, so does the difference between the two extreme assumptions. Therefore, to measure the reliability of a corridor, it is critical to characterize the correlation patterns between links. Ultimately, the reliability of a route depends as much on link-to-link correlations as on link-level variation.
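The two bounding assumptions can be made concrete with a few lines of numpy; the helper below is a minimal sketch, not part of the framework itself.

```python
import numpy as np

def route_sigma_bounds(link_sigmas):
    """Bounds on a route's travel time standard deviation.

    Independent links: variances add, so the route sigma is
    sqrt(sum of sigma_s^2) (lower bound). Perfectly correlated links:
    standard deviations add, so the route sigma is sum of sigma_s
    (upper bound).
    """
    s = np.asarray(link_sigmas, dtype=float)
    return float(np.sqrt(np.sum(s ** 2))), float(np.sum(s))
```

For two identical links the lower/upper ratio is 1/√2, about 70 percent; for 50 identical links it is 1/√50, about 14 percent, matching the figures above.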
Planning analyses that focus only on link-level variation (e.g., through travel time indices) miss a critical aspect of reliability.

This section presents several strategies for overcoming these challenges and deriving route-level reliability measures that work within the Reliability Valuation Framework. To describe these strategies, it is useful first to define the travel time data typically available (after implementing the cleaning and preprocessing actions described previously). Assume that the data set aggregates travel times into H periods of the day, indexed by h = 1, 2, . . . , H. Also assume that the days for which data are available can be indexed by i = 1, 2, . . . , N. With this notation, the travel time on a link s can be described by t_{i,h}^{s}. The links along a particular route between origin o and destination d are defined as belonging to a set S, and the travel time of that route, starting at period h on day i, is defined as t_{i,h}.

The discussion below assumes that travel times need to be quantified for a discrete number of routes, corresponding to a truck trip origin–destination matrix T_{od}, moving Q_{od} tons of commodities per year. The routes taken are assumed to be known to the analyst, either from routed commodity flow data (such as IHS's Transearch data set) or from traffic assignment models. Once routes have been assigned to trips, they are assumed to be fixed. A more complex analysis could allow routes to change in response to unreliability, but that would require assignment models beyond the scope of this study.

If an analyst has complete travel time data, that is, a record for every combination of s, i, and h, calculating the travel time of a route is relatively simple. One can trace the travel times along the route so that the start times of links line up approximately with the end times of the preceding links, which naturally captures actual correlation patterns.
However, this is almost never possible because travel time data sets are never complete, with the possible exception of high-volume roads during certain times of the day. Analysts will typically have data with gaps, requiring approximate methods for estimating route travel times. Some of the methods that could be used are described below.
5.2.3.1 Comonotonicity Assumption

This approach, first proposed by Dhaene et al. (2002), conveniently assumes that the 95th percentile travel time on a route is equal to the sum of the 95th percentile travel times on the links along the route, as shown in Equation 19. Other route travel time percentiles can be approximated in the same way.

t_h^{95\%} = \sum_{s \in S} t_{hs}^{95\%}    (19)

List et al. (2014) found that this assumption performed reasonably well for a short highway corridor in California, although that study also found that congestion decreased its accuracy. This approximation assumes that links are perfectly correlated, which is unrealistic in most cases. If it is assumed for simplicity that link travel times are normally distributed, then the 95th percentile travel time can be calculated as t^{95\%} = \bar{t} + k\sigma, where k ≈ 1.65. Substituting this relationship on both sides of Equation 19 leads to Equation 20, which can be simplified to Equation 21 by noting that the average route travel time is equal to the sum of the average link travel times.

\bar{t}_h + k \sigma_h = \sum_{s \in S} (\bar{t}_{hs} + k \sigma_{hs}) = \sum_{s \in S} \bar{t}_{hs} + k \sum_{s \in S} \sigma_{hs}    (20)

\sigma_h = \sum_{s \in S} \sigma_{hs}    (21)

Application. This approach assumes links are highly correlated, so it is only appropriate for short routes on access-controlled highways. Long routes or routes that travel through arterials will exhibit much lower levels of link correlation. This approach can serve as a first-order approximation, provided it is recognized that it yields an upper bound estimate of route unreliability that could be many times higher than the actual value. The main strength of this approach is its simplicity.

5.2.3.2 Simulation

The most common approach in the literature involves characterizing the distribution of link travel times, considering link-to-link correlation, and then using simulation to approximate the route-level distribution. Caceres et al. (2016) estimated a probabilistic conditional distribution on historical link travel times and used Monte Carlo simulation to approximate the distribution of route travel times. Dong et al. (2016) estimated lognormal distributions on link travel times, considering correlation between adjacent links, and then extracted corridor-level reliability measures. Racca and Brown (2012) used an algorithm developed by Ruscio and Kaczetow (2008) to draw random samples directly from travel time data, considering the correlations between each grouping of three consecutive links. While this study only considered a limited set of correlations, the results closely approximated the actual distribution of trip times.

List et al. (2014) described a simplified version of this approach, first proposed by Hu (2011). In this approach, an incidence matrix is created that shows the conditional frequency of the travel times on two consecutive links (approximating the conditional probability function). The matrix could be 10 by 10, showing 10 travel rate bins (seconds per mile) for one link and 10 travel rate bins for the other. First, a random travel time is drawn for the first link. Then, the duration on the first link is used to determine when the vehicle enters the following link. At this point, a second draw is made from the incidence matrix conditional on the travel rate of the first link, providing the travel rate for the second link. This process can be repeated for as many links as needed. Monte Carlo simulation can then be used to sample the route travel time distribution.
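The incidence-matrix idea of Hu (2011) can be sketched for a two-link route as follows. This is a loose illustration with hypothetical inputs, not the implementation described by List et al. (2014): instead of tabulating an explicit 10-by-10 matrix, it draws link B's rate from historical observations whose paired link A rate falls in the same bin, which plays the role of a row of that matrix.

```python
import numpy as np

def simulate_route_times(rates_a, rates_b, lengths, n_draws=10_000,
                         n_bins=10, seed=None):
    """Monte Carlo sketch of the incidence-matrix approach for two links.

    rates_a, rates_b: paired historical travel rates (seconds per mile)
    observed on two consecutive links at matching times (hypothetical
    inputs). lengths: (miles_a, miles_b). Returns sampled route times
    in seconds.
    """
    rng = np.random.default_rng(seed)
    ra, rb = np.asarray(rates_a, float), np.asarray(rates_b, float)
    # Discretize link A's travel rates into quantile bins.
    edges = np.quantile(ra, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, ra, side="right") - 1,
                   0, n_bins - 1)
    times = np.empty(n_draws)
    for d in range(n_draws):
        i = rng.integers(len(ra))                # draw link A's rate
        peers = np.flatnonzero(bins == bins[i])  # same incidence-matrix row
        j = peers[rng.integers(len(peers))]      # conditional draw for link B
        times[d] = ra[i] * lengths[0] + rb[j] * lengths[1]
    return times
```

Chaining more links repeats the conditional draw, each link conditioned on the bin of the one before it; nonadjacent correlations are ignored, which is exactly the weakness noted below.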
Application. This approach works well with the density of data typically available in the NPMRDS. The methodologies used by Caceres et al. (2016) and Racca and Brown (2012) are likely to provide more robust results; however, they require the estimation of statistical models. In contrast, the approach suggested by Hu (2011) is significantly easier to implement; however, its results are less accurate, as the approach coarsely discretizes the probability distributions and does not consider correlations between nonadjacent links. It is recommended that Hu (2011) be implemented only if Caceres et al. (2016) or Racca and Brown (2012) are deemed impractical for the specific application.

5.2.3.3 Modeling

Another approach involves modeling traffic conditions throughout the corridor or roadway network. Such models can represent link correlations directly or can model traffic behavior at a level of detail that reveals these correlations. Gupta et al. (2018) developed an approach that models correlations directly by blending the two extreme assumptions discussed above: assuming links are perfectly correlated (upper bound) and assuming links are independent (lower bound). The blending weights are estimated statistically as a function of trip length, proportion of freeway travel, and demand commonality (the proportion of demand common across the route). Estimating the model with data from the Phoenix region showed that blending weights ranged from 60 to 70 percent for the lower bound model and 30 to 40 percent for the upper bound model. This approach is particularly useful when the travel time data are sparse, even when data are not available for all links. Similarities in traffic conditions and roadway configuration are used to develop average correlation patterns across routes. Analysts are encouraged to estimate their own blending model on local data; however, the estimates provided by Gupta et al. (2018) could provide a first-order approximation of the degree of correlation along certain routes.

Lu (2017) developed a more detailed model, called the Spatial-Correlated Travel Time Estimation (SCTTE) model, which considers even more factors in explaining link correlations. This approach uses queuing models to represent traffic conditions on a corridor and develops a ground-up understanding of the causes of correlation. Lu (2017) found that this model fit probe data from INRIX. Because of the level of detail captured in the model, this approach is complicated to implement. However, it can provide more insight into the causes of link correlations and travel time unreliability than other approaches.

Application. Models work best when travel time data are sparse and additional information is available about demand patterns and network characteristics (needed to estimate the model). This is especially true for the model of Gupta et al. (2018), because fewer data inputs are required. A single model can be estimated to represent the correlations of multiple routes that have common features. The main drawback of this approach is the effort required to estimate a model that successfully replicates conditions on the ground. This could be time consuming and beyond the scope of most reliability analyses. Gupta et al. (2018) originally developed their approach for incorporating reliability into travel demand models, particularly for traffic assignment.

5.2.3.4 Gap Approximation and Simulation

In this approach, the analyst simply traces travel times along a route (with consistent link start times), and once a link without data is reached, the analyst approximates the missing value by taking the speed of an adjacent, similar link with comparable traffic patterns. If no such adjacent link exists, the analyst can look to other days for representative values.
Simulation can then be used to trace enough travel times along a route to characterize its distribution.

Application. This approach only works when missing records are rare, which in the NPMRDS is sometimes the case for high-volume roads during busy times of the day.
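Once bounds are in hand, the blending approach of Gupta et al. (2018) discussed above reduces to a weighted average of the two. A minimal sketch follows; the default weight of 0.65 is illustrative only, taken from the middle of the 60 to 70 percent range reported for the Phoenix region, not a universal constant.

```python
import math

def blended_route_sigma(link_sigmas, w_lower=0.65):
    """Weighted blend of the independence (lower) and perfect-correlation
    (upper) bounds on a route's travel time standard deviation, after
    Gupta et al. (2018). The 0.65 default is a hypothetical midpoint of
    the 60 to 70 percent lower-bound weights reported for Phoenix;
    analysts should estimate weights on local data.
    """
    lower = math.sqrt(sum(s * s for s in link_sigmas))  # independent links
    upper = float(sum(link_sigmas))                     # perfectly correlated
    return w_lower * lower + (1.0 - w_lower) * upper
```

In the full approach the weight itself is a statistical function of trip length, proportion of freeway travel, and demand commonality rather than a fixed number.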
Reliability Valuation Framework 49

5.2.3.4 Recommendations

Ultimately, the best approach for a reliability analysis depends on the amount of travel time data available, the desired accuracy of the results, and the willingness to estimate complex models. Figure 5-4 describes how the approaches described above fall within these criteria.

[Figure 5-4. Trip travel time estimation methods. The figure arranges the methods by required data density (vertical axis) and accuracy (horizontal axis), from easier to more complex implementation: Comonotonicity (Racca and Brown 2012; Hu 2011), Gap Approximation, Caceres et al. (2016), Lu (2017), and Gupta et al. (2018).]

5.2.4 Reliability Measure

Depending on the analysis, it might be necessary to measure reliability along one route or a collection of routes. A corridor study could focus on a single route; however, a regional study could need reliability to be measured for hundreds if not thousands of routes (for large regions, it might be necessary to consider only the main routes connecting important freight origins and destinations to reduce the number of computations). Either way, any of the approaches discussed above can be used to approximate route travel time distributions. With this information, the 95th percentile delay reliability measure can be calculated through

\[ \varphi_h = 95\%(t_h) - \bar{t}_h \qquad (22) \]

where

\[ \bar{t}_h = \frac{1}{N} \sum_{i=1}^{N} t_{i,h} \qquad (23) \]

Because idiosyncratic, systematic, and exceptional variation have been excluded from the data in the cleaning and preprocessing steps, the calculated 95th percentile delay measure captures the level of uncertainty faced by truck operators when setting delivery schedules.

5.3 Reliability Modeling

In some cases, planners need to be able to predict how the reliability of a roadway will change in response to a project or policy. To evaluate the benefits of a project, for example, it is necessary to know not only how average travel times would change, but also how improvements in traffic conditions would reduce unreliability. As mentioned previously, this is critical for projects that have significant freight benefits, because many shippers and motor carriers
50 Estimating the Value of Truck Travel Time Reliability

care most about reliability. This section describes methodologies for modeling roadway reliability in ways that are compatible with the rest of the Reliability Valuation Framework.

5.3.1 Previous Approaches

Two approaches have been used in the literature to model roadway reliability: (1) statistical relationships based on travel time data and (2) event simulation models. SHRP 2 Project L03 was one of the first studies to develop statistical relationships explaining how the Travel Time Index (at the 95th, 80th, and 50th percentile levels) varied as a function of the demand-to-capacity (D/C) ratio, hours of rainfall per year, and lane-hours lost because of incidents or work zones. SHRP 2 Project L07 improved these statistical relationships and included additional explanatory variables, such as snowfall.

SHRP 2 Project C11 incorporated these statistical relationships into a tractable framework for analysts and planners to use in predicting changes in roadway reliability as a function of exogenous factors, with the objective of evaluating projects. This framework first predicts changes in the mean Travel Time Index as an analytical function of roadway capacity, demand, traffic mix, terrain, incident rates, and incident durations, among other factors. This index is defined to consider both recurring congestion and average delay caused by incidents. Once the mean Travel Time Index has been estimated, analysts can use the statistical relationships provided to predict the corresponding 95th percentile Travel Time Index (as well as indices at other percentile levels). These statistical relationships were recently estimated again by SHRP 2 Project L38, which used more detailed travel time data from NPMRDS. Table 5-3 compares these and other approaches that have been used in the literature.
Study | Model Type | Input Data | Reliability Metrics | Model Calculations
SHRP 2 L03 | Statistical model | Traffic and travel time; incident; work zone; weather; geometric; operating; improvement data | Mean, standard deviation, median, mode, minimum, and percentiles (10th, 80th, 95th, and 99th) of travel time | D/C ratio; lane hours lost; hours of rainfall >= 0.05 inches
SHRP 2 L07 | Statistical model | Traffic and travel time; incident; work zone; weather; geometric; operating; improvement data | Four TTI percentiles (10th, 80th, 95th, and 99th) | D/C ratio; lane hours lost; hours of rainfall >= 0.05 inches; hours of snowfall >= 0.01 inches
SHRP 2 L04 | Simulation model | Exogenous and endogenous sources of unreliability | Mean travel time; standard deviation of travel time | Vehicle trajectories
SHRP 2 L35B | Simulation model | Travel time series (time-ordered data set); VOT | Average travel time; 95th percentile travel time | VOR; reliability ratio
SHRP 2 C11 | Analytical framework using statistical model | Highway types; projected traffic volumes; speed; lanes; capacities; incidents | 95th percentile TTI; 80th percentile TTI; 50th percentile TTI; mean TTI | Average delay; buffer time; cost of delay; traffic variability factors
Maryland Department of Transportation, State Highway Administration (2018)^a | Statistical model | Automatic Traffic Recorder (ATR) data and NPMRDS | 95th percentile TTI; 80th percentile TTI; 50th percentile TTI; mean TTI | Average delay; buffer time; cost of delay; traffic variability factors

Note: TTI = Travel Time Index.
a SHRP 2 L38.

Table 5-3. Literature in predictive modeling of reliability.
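The statistical-relationship idea underlying several of these studies can be made concrete by fitting a simple linear relationship between the mean and 95th percentile Travel Time Index. The observations below are synthetic and for illustration only; the actual studies fit richer functional forms to large probe data sets.

```python
import numpy as np

# Synthetic link-level observations (illustrative values only).
tti_m = np.array([1.05, 1.10, 1.30, 1.60, 2.00, 2.50])   # mean TTI
tti_95 = np.array([1.12, 1.25, 1.70, 2.30, 3.10, 4.00])  # 95th percentile TTI

# Least-squares fit of TTI95 = a * TTIm + b, the kind of piecewise-linear
# segment reported in SHRP 2 L38-style relationships.
A = np.vstack([tti_m, np.ones_like(tti_m)]).T
(a, b), *_ = np.linalg.lstsq(A, tti_95, rcond=None)

# Predict the 95th percentile TTI for a link with a mean TTI of 1.40.
predicted_tti95 = a * 1.40 + b
```

Once estimated from local data, such a relationship lets an analyst move from a predicted change in the mean index to a predicted change in the 95th percentile index.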
5.3.1.1 Limitations of Statistical Relationships

Relying on statistical relationships for predicting reliability has several shortcomings. Foremost, these relationships are specific to local conditions, such as demand patterns, roadway configuration, occurrence of special events, and weather. Future research should evaluate the stability of these relationships across a wide range of circumstances and geographies. It is also unclear whether the statistical relationships were estimated with data that were cleaned of predictable sources of variation (systematic and idiosyncratic). If not, the relationships will overpredict unreliability.

A more fundamental issue with this approach is that it assumes that recurring delay and crashes have the same net effect on unreliability, because both are added to calculate the mean Travel Time Index. A corridor with a high number of crashes and low recurring congestion would have the same 95th percentile Travel Time Index as a corridor with high recurring congestion and no crashes, as long as the mean Travel Time Index is the same. This is unrealistic, because recurring congestion is predictable and does not increase the uncertainty faced by truck operators, which is the basis of reliability measurement.

5.3.1.2 Limitations of Event Simulation Models

Roadway reliability can also be modeled through the simulation of discrete events. One of the most comprehensive efforts to date was SHRP 2 Project L04, which developed a multimodule simulation of endogenous and exogenous causes of unreliability, including special events, crashes, work zones, adverse weather, and closures of alternative modes, among other causes. The simulation considered heterogeneity in routes, responses, vehicles, car-following behavior, traffic control, and crash types. This detailed traffic simulation was run long enough to characterize travel time distributions and calculate reliability metrics.
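A stylized version of this event-simulation idea can be written as a short Monte Carlo loop. All rates and magnitudes below are invented for illustration; SHRP 2 L04's actual model draws events from calibrated distributions inside a full traffic simulation.

```python
import random

random.seed(7)
BASE_MIN = 30.0  # assumed free-flow corridor travel time, minutes

def simulate_trip():
    """One simulated trip: recurring congestion plus random discrete events."""
    t = random.gauss(BASE_MIN * 1.1, 2.0)     # recurring congestion noise
    if random.random() < 0.05:                # incident on ~5% of trips (assumed)
        t += random.expovariate(1 / 15.0)     # incident delay, mean 15 minutes
    if random.random() < 0.10:                # adverse weather (assumed rate)
        t *= 1.2
    return t

# Run long enough to characterize the travel time distribution.
times = sorted(simulate_trip() for _ in range(10_000))
mean_t = sum(times) / len(times)
p95 = times[int(0.95 * len(times))]
delay_95 = p95 - mean_t  # 95th percentile delay of the simulated corridor
```

The gap between `p95` and `mean_t` comes almost entirely from the rare events, which is precisely the nonrecurring unreliability such simulations are built to capture.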
While the resulting model describes clearly how different factors contribute to unreliability, this approach is difficult to implement because it requires vast amounts of data and many assumptions about the distribution of exogenous inputs and the nature of endogenous relationships. The model is also computationally complex, and coding and running the simulation requires significant computing resources.

5.3.2 Recommendations

Discrete event simulations provide the most detailed way of modeling reliability. However, this approach is unlikely to be practical in most situations because it requires too many data inputs and too much model development effort. Eventually these types of simulation models could be built for regions, just like travel demand models, but at the moment they are impractical in most cases. In the meantime, the statistical relationships in SHRP 2 Project L38 can be used to estimate the 95th percentile Travel Time Index from average travel times, which are easy to estimate for different scenarios.

5.3.2.1 Predicting Changes in Average Travel Times

Average travel times can be modeled through a variety of approaches, including traffic simulation software, traffic assignment models, and analytical equations. The most appropriate approach depends on various factors; however, emphasis should be placed on using the approach that best represents local conditions and that is proportional to the magnitude of the impact being considered. Analytical equations could be useful for modeling a small change in an isolated corridor; however, for a new construction project on a major road, detailed traffic models are needed that consider local traffic dynamics and regional travel patterns.
SHRP 2 Project C11 Analytical Equations. Simple analyses could rely on the analytical equations described in SHRP 2 Project C11. This study defined the mean Travel Time Index (TTIm) as

\[ \text{TTI}_m = \frac{\bar{t}}{t_f} \qquad (24) \]

where t̄ is the average travel time of a roadway segment and t_f is the free-flow travel time, which is calculated as a function of the speed limit. This index represents how much longer it takes on average to traverse a roadway segment relative to free-flow conditions. This study then developed analytical equations for modeling changes in TTIm as a function of various traffic factors. This can be summarized by

\[ \text{TTI}_m = 1 + \text{FFS} \times (\text{recurring delay rate} + \text{incident delay rate}) \qquad (25) \]

where
FFS = free-flow speed,
recurring delay rate = extra time required to cover a mile on average because of recurring congestion, and
incident delay rate = extra time required to cover a mile on average because of incidents.

Analytical equations are provided that can be used to estimate the recurring delay rate as a function of volume, capacity, and free-flow speed. Capacity is in turn modeled as a function of the number of lanes, type of roadway, proportion of heavy vehicles, grade, and effective green light per cycle. These equations use common traffic engineering relationships that have been tested in many applications. SHRP 2 Project C11 also provides equations for approximating changes in the incident delay rate as a function of incident frequency and duration.

It is recommended that analysts use the analytical equations in SHRP 2 Project C11 only when the project being considered is expected to have a small impact or no traffic data are available locally. It is also recommended that, if available, the 10th percentile travel time be used to calculate the free-flow speed instead of the speed limit functions provided. This will make the analysis more representative of local conditions.

Travel Demand Models. It is recommended that analysts use travel demand models for large projects that are likely to have substantial effects on traffic. These models will also be able to capture induced demand effects.

Traffic Simulation and Safety Analyses. Traffic simulation software could also be used to predict changes in average travel times, assuming that demand is fixed. Dong et al. (2016) simulated traffic conditions using Vissim to approximate corridor-level reliability measures. If the change in t̄ is expected to come from a reduction of incident rates, a detailed safety analysis is warranted. Reductions in incident delay can be combined with improvements in traffic operations to estimate the total impact on average travel times.

5.3.2.2 Predicting Changes in Reliability

Once the changes in average travel times have been predicted, the statistical relationships estimated in SHRP 2 Project L38 can be used to predict the corresponding changes in reliability. These relationships are summarized in Table 5-4 and Table 5-5. TTI95 should be calculated for all roadway segments in the study area, not just those directly affected by the project. These calculations should be performed with and without the project.
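The first step in these with- and without-project calculations is the mean Travel Time Index of Equations 24 and 25, which can be scripted directly. The inputs below (a 60 mph free-flow speed and the two delay rates) are illustrative assumptions; in an actual C11 application they come from the study's volume, capacity, and incident equations.

```python
def mean_tti(ffs_mph, recurring_delay_rate, incident_delay_rate):
    """Mean Travel Time Index per Equation 25.

    ffs_mph: free-flow speed in miles per hour.
    Delay rates: extra hours required per mile, on average, because of
    recurring congestion and incidents, respectively.
    """
    return 1.0 + ffs_mph * (recurring_delay_rate + incident_delay_rate)

# Illustrative inputs: 60 mph free-flow speed, 0.005 h/mi recurring delay,
# 0.002 h/mi incident delay.
tti_m = mean_tti(60.0, 0.005, 0.002)  # 1 + 60 * 0.007 = 1.42
```

Running the same calculation with the with-project delay rates gives the second TTIm needed for the scaling equations that follow.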
While the statistical relationships were estimated for Maryland, they are likely to hold for a wide range of traffic conditions. It is possible for analysts with high-quality travel time data to reestimate these relationships for local roads (only the TTI95 equations would be needed).

[Table 5-4. Freeway reliability relationships by Travel Time Index level. For each Travel Time Index percentile level, the table gives a formula in TTIm and its range of applicability. Source: Maryland Department of Transportation, State Highway Administration. 2018. Implementation of SHRP 2 Reliability Data and Analysis Tools (L38) in Maryland. Unpublished final report.]

[Table 5-5. Arterial reliability relationships by Travel Time Index level. For each Travel Time Index percentile level, the table gives a formula in TTIm and its range of applicability. Source: Maryland Department of Transportation, State Highway Administration. 2018. Implementation of SHRP 2 Reliability Data and Analysis Tools (L38) in Maryland. Unpublished final report.]

The calculated TTI95 and TTIm can now be used to estimate how φ would change because of a project. Given that TTI95 and TTIm are calculated at the link level and φ is defined over routes, aggregation assumptions are required. The most straightforward approach is to invoke the comonotonicity assumption discussed previously by summing the 95th percentile travel times of all links on the route. With Equations 21 and 24, the 95th percentile delay of a link can be found by using (TTI95 − TTIm) t_f. Subsequently, the 95th percentile delay φ for a trip S can be found by summing over the links s ∈ S. For consistency, it is recommended that the predicted change
in 95th percentile delays be used to scale up or down the observed 95th percentile delays without the project. This process is summarized by

\[ \varphi^{1} = \varphi^{0} \, \frac{\sum_{s \in S} \left( \text{TTI}_{95,s}^{1} - \text{TTI}_{m,s}^{1} \right)}{\sum_{s \in S} \left( \text{TTI}_{95,s}^{0} - \text{TTI}_{m,s}^{0} \right)} \qquad (26) \]

\[ \Delta\varphi = \varphi^{0} \left( \frac{\sum_{s \in S} \left( \text{TTI}_{95,s}^{1} - \text{TTI}_{m,s}^{1} \right)}{\sum_{s \in S} \left( \text{TTI}_{95,s}^{0} - \text{TTI}_{m,s}^{0} \right)} - 1 \right) \qquad (27) \]

where
φ1 = predicted 95th percentile delay after the project,
φ0 = observed 95th percentile delay without the project, and
Δφ = the change caused by the project.

The comonotonicity assumption is less restrictive in Equations 26 and 27 because only relative differences matter. It essentially assumes that the project does not affect link-to-link correlation patterns. This assumption is less appropriate for large projects, in which case one of the more accurate methods described in Section 5.2.3 can be used. Following a similar logic, the changes in average travel times can be calculated by

\[ \bar{t}^{1} = \bar{t}^{0} \, \frac{\sum_{s \in S} \text{TTI}_{m,s}^{1}}{\sum_{s \in S} \text{TTI}_{m,s}^{0}} \qquad (28) \]

\[ \Delta\bar{t} = \bar{t}^{0} \left( \frac{\sum_{s \in S} \text{TTI}_{m,s}^{1}}{\sum_{s \in S} \text{TTI}_{m,s}^{0}} - 1 \right) \qquad (29) \]

Reliability should be modeled for each time period of the day h. TTIm and TTI95 can be calculated for every time period, and the equations above can be used to quantify the impact the project would have on travel throughout the day.

5.4 Valuation

5.4.1 VOR and VOT Estimates

The VOR estimates recommended for planning analyses are summarized in Table 4-12. These values were estimated using ML models with random parameter coefficients for time and reliability. Additionally, to improve the representativeness of the results, the sample was reweighted by the commodity and distance shares of truck shipments in the United States. Only statistically significant values are shown. This table shows estimates in two units: $/shipment-hour and $/ton-hour. The $/shipment-hour values should be used when analyzing truck trips, whereas the $/ton-hour values should be used when analyzing commodity flows. Any of the estimates provided in Table 4-12 could be used in analyses; however, the VOR estimate of $160/hour for the whole sample is most appropriate for general analyses. To improve valuation, analyses could consider shipment distance, company size, and shipment type to select the most appropriate VORs from this table.

The VOT was not found to be statistically significant for the whole sample; however, more detailed models showed VOT values ranging from $14.5/hour to $411.6/hour. Overall, there was a weaker effect for VOT than for VOR, suggesting that the sample respondents in the present
study cared most about reliability. For planning analyses, it is recommended that analysts use the marginal cost of trucking estimated by ATRI. Their most recent study estimated that trucking costs around the United States averaged $1.69/mile in 2017 (Hooper and Murray 2018). When ATRI's estimates of the commercial speed of trucks are used, these costs translate to $66.7/hour. ATRI's estimates do not consider the overall costs and uncertainty caused by unreliability (particularly the costs accrued to shippers), which allows its estimates to be used in conjunction with the VORs in Table 4-12.

5.4.2 Illustrative Examples

An idealized example could be helpful in gaining intuition about the magnitude of the VOR estimates. Assume that a shipper makes a 400-mile shipment frequently throughout the year and, by observing arrival times, notices that 1 out of 20 shipments is late by 2 hours (the 95th percentile delay). Some shipments over the year are late by more than 2 hours, but the majority arrive on time. By using ATRI's cost estimate, the average trucking cost per shipment can be estimated to be $676. Using the VOR of $160/hour, the average unreliability cost per shipment can be estimated to be $320. This calculation does not imply that every shipment incurs $320 of unreliability costs, but rather that, over 100 shipments, some arriving on time and others arriving late by a varying number of hours, the costs related to unreliability will likely total $32,000.

To explore this issue further, consider another example. Assume that a shipment to a production facility is on time if it arrives within 10 hours of schedule, which historically happens 99 percent of the time. However, when the shipment is late, assume it causes the assembly line to stop, which leads to $50,000 of costs in missed revenues and restart expenses.
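The arithmetic of the first example can be reproduced directly, using the ATRI cost of $1.69/mile and the whole-sample VOR of $160/hour cited above:

```python
MILES = 400
COST_PER_MILE = 1.69      # ATRI average marginal cost, 2017 ($/mile)
VOR = 160.0               # whole-sample VOR estimate ($/shipment-hour)
DELAY_95TH = 2.0          # observed 95th percentile delay (hours)

trucking_cost = MILES * COST_PER_MILE               # about $676 per shipment
unreliability_cost = DELAY_95TH * VOR               # $320 per shipment, on average
total_unreliability_100 = 100 * unreliability_cost  # roughly $32,000 over 100 shipments
```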
In this example, it can be said that every shipment has an average unreliability cost of $500/shipment (1% probability of being late times $50,000 in cost), just considering the potential for production disruptions. In this example, the 99th percentile delay of this shipment is 10 hours, and the 95th percentile delay could be calculated from the distribution of trip times. Assume that the 95th percentile delay is 5 hours. Therefore, the unreliability cost per shipment can be calculated as $50 per hour of 99th percentile delay or, equivalently in this case, $100 per hour of 95th percentile delay. Delays of this magnitude could also affect other aspects of shippers' operations and even affect motor carriers as well, requiring all these other costs to be considered. The VOR estimates represent the combined effect of all these costs.

5.5 Planning Applications

5.5.1 Benefit-Cost Project Evaluation

Benefit-cost analysis is one of the most important tools in transportation planning, yet it almost never considers reliability explicitly. This omission is particularly important for freight projects, because reliability is one of the most important variables for freight users, potentially even the most important. In ignoring reliability, current practice systematically underestimates the benefits of freight projects, leading them to be given lower priority than they should have relative to nonfreight projects. This section describes approaches for bridging this gap and considering reliability in freight benefit-cost analysis.

5.5.1.1 Tonnage Perspective

The VOT and VOR parameter estimates can be used to monetize the project-induced changes in reliability and average travel time predicted with the methods in Section 5.3. The monetary
benefits of these changes, for truck trips traveling between origin o and destination d, can be expressed as

\[ B = -\sum_{od} \text{VOR} \cdot \Delta\varphi_{od} \cdot Q_{od} - \sum_{od} \text{VOT} \cdot \Delta\bar{t}_{od} \cdot Q_{od} \qquad (30) \]

where Q_od is the freight tonnage and VOT and VOR are specified per ton moved. This formulation assumes that freight demand and routing do not change in response to the project, which is reasonable for medium to small projects. For larger projects, induced demand could be considered by calculating the change in consumer surplus under the demand curve from the shift in supply (the costs of roadway travel decrease). Induced demand can also be considered in Δφ_od, as improvements in traffic conditions are partially offset by additional traffic.

5.5.1.2 Time of Day Perspective

Particularly in urban areas, the benefits of projects should be calculated for different periods of the day. This becomes critical if freight demand varies considerably throughout the day, so that the change Δφ_od might not be representative of how trucks are affected. Time-of-day demand information is typically available from travel demand models or can be generated by disaggregating commodity flow information by using time-of-day factors (see Section 5.5.1.4). With this information, it is possible to calculate the benefits as

\[ B = -\sum_{h}\sum_{od} \text{VOR} \cdot \Delta\varphi_{h,od} \cdot Q_{h,od} - \sum_{h}\sum_{od} \text{VOT} \cdot \Delta\bar{t}_{h,od} \cdot Q_{h,od} \qquad (31) \]

5.5.1.3 Link Approximation

Equations 30 and 31 calculate the benefits of a project at the trip level. In some cases, however, the approaches recommended in Section 5.2.3 cannot be implemented to estimate route travel times. In these cases, it might be tempting to describe the benefits through

\[ B = -\sum_{h}\sum_{s} \text{VOR} \cdot \Delta\varphi_{h,s} \cdot T_{h,s} - \sum_{h}\sum_{s} \text{VOT} \cdot \Delta\bar{t}_{h,s} \cdot T_{h,s} \qquad (32) \]

where Δφ_h,s and Δt̄_h,s represent the changes in unreliability and average travel times for individual links and T_h,s is the truck volume at a certain time of the day h. While easy to calculate, this approach could lead to a significant overestimation of the benefits of projects, because it is only accurate when links are perfectly correlated. On long routes, this could lead to estimates of unreliability that are several times higher than in reality.

One potential remedy could involve weighting the benefits of each link calculated through Equation 32 by the marginal contribution of that link to route unreliability. Gupta et al. (2018) suggested that this marginal contribution could be approximated as being proportional to the variance of the link and inversely proportional to the standard deviation of the route. Horowitz and Granato (2012) presented a more generalized formulation that considers link-to-link correlations; however, additional research is required to assess the adequacy of these methods for benefit-cost analysis.

5.5.1.4 Hourly Truck Volumes

NCHRP Research Report 854 described several approaches for obtaining truck volume data for different hours of the day (Ahanotu et al. 2017). Data from automatic traffic recorders that distinguish between different vehicle classes would provide the best information for characterizing time-of-day patterns. The second best approach would be to rely on short-term volume
counts, which are often collected by using tube detectors or other methods. These counts should be adjusted seasonally following the guidance provided by AASHTO (2009). If no data are available for certain locations, assumptions could be made on the basis of regional patterns to approximate hourly volumes.

5.5.2 System-Level Performance Measurement

5.5.2.1 Truck Perspective

The performance of the roadway system can also be assessed with metrics that have a consistent economic interpretation. Assume that a truck trip table T_h,od is available with time-of-day detail. The vehicle miles traveled (VMT) by trucks can be calculated as

\[ \text{VMT} = \sum_{h}\sum_{od} l_{od} \, T_{h,od} \qquad (33) \]

where l_od is the distance of different trips. The average vehicle hours of travel (VHT) can be calculated as

\[ \text{VHT} = \sum_{h}\sum_{od} \bar{t}_{h,od} \, T_{h,od} \qquad (34) \]

and the vehicle hours of unreliability (VHU) can be calculated as

\[ \text{VHU} = \sum_{h}\sum_{od} \varphi_{h,od} \, T_{h,od} \qquad (35) \]

This last equation sums all the hours of unreliability faced by truck operators in the region, measured by the 95th percentile delay. The average speed of all trucks can be calculated as

\[ \bar{V} = \frac{\text{VMT}}{\text{VHT}} \qquad (36) \]

and the average miles of travel per hour of unreliability can be calculated as

\[ \bar{U} = \frac{\text{VMT}}{\text{VHU}} \qquad (37) \]

This last measure indicates how far, on average, trucks can travel per hour of 95th percentile delay. This measure of accessibility could be useful to compare the performance of different cities, freight markets, or corridors and to track progress toward achieving performance objectives.

5.5.2.2 Commodity Perspective

Assume that commodity tonnages are available instead of time-of-day detail. The ton-miles (TM) of freight moved by truck can be calculated as

\[ \text{TM} = \sum_{c}\sum_{od} l_{od} \, Q_{odc} \qquad (38) \]

and the ton-hours (TH) can be calculated as

\[ \text{TH} = \sum_{c}\sum_{od} \bar{t}_{od} \, Q_{odc} \qquad (39) \]
and the ton-hours of unreliability (THU) can be calculated as

\[ \text{THU} = \sum_{c}\sum_{od} \varphi_{od} \, Q_{odc} \qquad (40) \]

By using these measures, the average ton-velocity (TV) can be calculated through

\[ \text{TV} = \frac{\text{TM}}{\text{TH}} \qquad (41) \]

and the average ton-miles of travel per hour of unreliability (TU) can be calculated through

\[ \text{TU} = \frac{\text{TM}}{\text{THU}} \qquad (42) \]

This measure, like Equation 37, captures how many ton-miles of goods movement the system is able to handle per hour of unreliability accumulated.

5.5.2.3 Cost Perspective

Finally, the performance of the system can also be assessed from a cost perspective. The average time cost per ton-mile (TCTM) of goods movement can be calculated as

\[ \text{TCTM} = \frac{\sum_{c}\sum_{od} \text{VOT}_{c} \, \bar{t}_{od} \, Q_{odc}}{\text{TM}} \qquad (43) \]

and the average unreliability cost per ton-mile (UCTM) can be calculated as

\[ \text{UCTM} = \frac{\sum_{c}\sum_{od} \text{VOR}_{c} \, \varphi_{od} \, Q_{odc}}{\text{TM}} \qquad (44) \]

The TCTM and UCTM provide a practical way of measuring how freight users are being affected by roadway performance. This formulation takes into account secular changes in demand, changes in the commodity composition, and the costs associated with different levels of unreliability. Because it was not possible to estimate VOT and VOR for all commodities, Equations 43 and 44 need to collapse commodity detail until more detailed valuations are available.

From the vehicle perspective, the average time cost per mile (TCM) can be calculated as

\[ \text{TCM} = \frac{\text{VOT}}{\bar{V}} \qquad (45) \]

and the average unreliability cost per mile (UCM) can be calculated as

\[ \text{UCM} = \frac{\text{VOR}}{\bar{U}} \qquad (46) \]

where VOT and VOR are measured on a shipment basis. No summations are required in these formulations because commodity detail is not considered; however, it could be considered if needed. The total costs could be calculated by adding the time costs and unreliability costs.

5.5.3 Bottleneck Identification

The identification of truck bottlenecks has received much interest recently at the state and national levels. The FHWA Transportation Performance Management program requires that state departments of transportation (DOTs) identify freight bottlenecks on a recurring basis and describe how they are being addressed. This program covers bottlenecks that are caused by
congestion (recurring and nonrecurring) as well as infrastructure restrictions that affect truck operations (causing inconvenient routing and other inefficiencies). The approach described in this section could be used to identify bottlenecks caused by congestion, particularly those from nonrecurring sources. The FHWA Truck Freight Bottleneck Reporting Guidebook (2018) provides general recommendations for how state DOTs can meet the federal reporting requirement, but it does not recommend that a prescriptive methodology be followed. This allows states to implement their preferred methodology, as long as it aligns with state planning efforts, particularly their most recent freight plan.

The main challenge in measuring the reliability performance of roads is that reliability is best measured at the route level, not the link level. As discussed in Section 5.2.3, variability in an individual link could cause a lot of variability for truck trips or very little, depending on the correlation patterns along the route. This section proposes an approach for estimating the costs of traversing different links in a network, ignoring for the moment issues of link correlations, as a way of identifying truck bottlenecks. The costs of traversing roadway link s can be described by

\[ c_s = \sum_{h} \text{VOR} \left( 95\%(t_{i,h}) - \bar{t}_h \right) T_h + \sum_{h} \text{VOT} \, \bar{t}_h \, T_h \qquad (47) \]

where VOT and VOR are measured at the shipment level. This equation adds the costs from unreliability to the costs related to average travel time. The 95th percentile delay of links is expressed as 95%(t_{i,h}) − t̄_h. If hourly volume data are not available, analysts should consult Section 5.5.1.4 for ways of approximating this information. Equation 47 can be rewritten to normalize the cost by segment distance to obtain a rate of cost accrual per segment length (either lane miles or centerline miles could be used, depending on the application).

After calculating the travel costs, the top 5 percent of bottlenecks can be identified through

\[ B_s = \begin{cases} 1 & \text{if } c_s \geq c^{95\%} \\ 0 & \text{else} \end{cases} \qquad (48) \]

where c^95% is the 95th percentile of c_s across all links.

A key simplification of Equation 47 is that the VOT and VOR are applied at the segment level, despite being estimated at the trip level. Therefore, the estimates of c_s should not be interpreted as the real-life costs that truck operators face, because these estimates ignore travel time correlation between roadway links. In fact, Equation 47 is more accurate at describing actual costs when the correlation between links is higher (potentially occurring in access-controlled highways with few on-ramps or off-ramps). Nonetheless, this formulation will have the tendency to overestimate the unreliability costs of any given segment. One way to counteract this bias is to estimate bottlenecks caused by unreliability separately from bottlenecks caused by slow travel speeds. The top 5 percent in each measure could be identified as a bottleneck. This would lead the biases in reliability cost estimates to not influence the selection of bottlenecks through average speeds.
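Equations 47 and 48 translate into a short screening script. The link records below are invented for illustration; an actual analysis would draw hourly mean travel times, 95th percentile travel times, and truck volumes from NPMRDS and count data.

```python
VOT = 66.7   # $/hour, ATRI-based estimate from Section 5.4.1
VOR = 160.0  # $/hour, whole-sample VOR

# Per link: hourly tuples of (mean time h, 95th percentile time h, truck volume).
links = {
    "A": [(0.10, 0.18, 120), (0.12, 0.30, 200)],
    "B": [(0.05, 0.06, 300), (0.05, 0.07, 280)],
    "C": [(0.20, 0.55, 150), (0.25, 0.70, 180)],
}

def link_cost(hourly):
    """Equation 47: unreliability costs plus average travel time costs."""
    return sum(VOR * (t95 - tbar) * vol + VOT * tbar * vol
               for tbar, t95, vol in hourly)

costs = {s: link_cost(records) for s, records in links.items()}

# Equation 48: flag the costliest links. With only three links, the top
# 5 percent reduces to the single most costly one.
threshold = max(costs.values())
bottlenecks = {s for s, c in costs.items() if c >= threshold}
```

Because the calculation ignores link-to-link correlation, flagged links should be read as screening candidates rather than precise cost estimates, as the text cautions.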