Validation of Urban Freeway Models (2014)

Appendix A - Review of L03 and Related Models

Suggested Citation: "Appendix A - Review of L03 and Related Models." National Academies of Sciences, Engineering, and Medicine. 2014. Validation of Urban Freeway Models. Washington, DC: The National Academies Press. doi: 10.17226/22282.

Purpose

The purpose of this background technical memorandum, the deliverable for Task 1 of the second Strategic Highway Research Program (SHRP 2) L33 project, is to identify important findings and lessons learned that will help validate and extend the L03 predictive reliability models. To meet this objective, following this introduction the document is divided into four main sections. The first section conducts a review of SHRP 2 L03, including the modeling concept, data collection and processing procedures, and calibration and validation results. While the L33 research team had knowledge of the L03 project before conducting this literature survey, the team felt that it was critical to fully document the major components of the L03 process in order to identify opportunities for enhancement in data and modeling-related techniques. The second section conducts a survey of other predictive reliability models developed through the SHRP 2 Reliability program. The third section presents other reliability research conducted through SHRP 2 and other initiatives that can provide value to the L33 process. The final conclusions section summarizes the lessons learned from the background material for consideration in the L33 data collection, model validation, and model enhancement tasks.

Review of SHRP 2 L03

Overview

The purpose of the SHRP 2 L03 project was to develop analytical procedures to determine the impacts of reliability mitigation strategies. The project team explored this issue through the following analyses:

• Congestion by Source. This part of the project used data collected in Seattle to assess methodologies for assigning delay to the causes of congestion.
• Before/After Studies. The project team identified 17 improvements, categorized into the following, that could be analyzed with the project's continuously collected traffic data. These before/after reliability metrics were used to produce reliability adjustment factors that agencies can apply to various improvement scenarios to estimate the impact on travel time reliability.
– Ramp metering;
– Freeway service patrol implementation;
– Bottleneck improvement;
– General capacity increases;
– Aggressive incident clearance program; and
– High occupancy/toll (HOT) lane conversion.
• Cross-Sectional Statistical Modeling. Because only a limited number of before/after studies were observable in the project data sets, this portion of the project developed macroscale cross-sectional models that predict the overall travel time characteristics of a highway section. Two model forms were developed: data-rich and data-poor. These models are described and assessed in further sections of this appendix.

The remainder of this L03 project overview details the cross-sectional statistical modeling that ultimately produced the data-rich and data-poor models that will be validated and enhanced in the L33 project. It explains the modeling concept, the data collection and data processing methodologies used to generate travel time reliability statistics and explanatory factors, the estimation of independent variables, the final analysis data set, the model calibration and validation results, and the application guidelines. The summary concludes with a list of recommendations suggested by the L03 project team for further analysis.

Modeling Concept

The Phase 1 report of the L03 project proposed two model forms to focus on in the remainder of the project:
1. “A detailed deterministic model that uses all of the data being collected to a maximum degree (Data-Rich Model)”; and

2. “A simpler model based on the fact that many of the applications [Highway Capacity Manual (HCM) and travel demand forecasting models] work in an environment with limited data (Data-Poor Model).”

Figure A.1 shows the conceptual form of the data-rich model, which is composed of tiers of causal mechanisms that influence each other and, ultimately, travel time reliability. At a first level, the model conceives travel time reliability as a function of (1) the number of lanes; (2) the demand-to-capacity ratio; (3) primary incident capacity-hours lost; (4) secondary incident capacity-hours lost; (5) work zone capacity-hours lost; (6) weather factors; (7) traffic fluctuation; (8) active control; and (9) opposite direction incident-hours. All variables in the first tier, except for the number of lanes, are functions of further explanatory variables. For example, work zone capacity-hours lost is a function of lane-hours lost and shoulder-hours lost, which are functions of the work zone type and the work zone duration (which is a function of an agency's work zone policy). This model form allows high-level variables to be estimated from roadway characteristics and agency operational policies, which gives the model the power to estimate reliability improvements from capacity and demand-related interventions.

The data-poor model was first envisioned to take advantage of commonly available independent variables (such as annual collisions per million vehicle miles traveled, speed limit, and yearly demand profiles). However, exploratory analysis showed promising relationships between the mean travel time and all selected reliability metrics. Because the mean travel time is a ready output from planning and operational tools such as travel demand and simulation models, this relationship became the focus of the data-poor model development.

Data Collection

The L03 project team developed a site selection design plan to collect data in metropolitan areas and along segments that meet a broad range of different criteria. These criteria are shown in Table A.1. Ultimately the project team elected to collect data in eight metropolitan areas that had mature data collection programs that could be leveraged in this project: (1) Atlanta, Georgia; (2) Houston, Texas; (3) Jacksonville, Florida; (4) Los Angeles, California; (5) Minneapolis, Minnesota; (6) San Diego, California; (7) San Francisco, California; and (8) Seattle, Washington. Details of these metropolitan areas and the types of data collected are shown in Table A.2.

The remainder of this section describes the data collection process for the key types of data collected in L03: (1) traffic, (2) incidents and work zones, (3) weather, and (4) capacity.

Traffic Data

Urban freeway traffic data were largely assembled from traffic management centers (TMCs) that have a history of maintaining quality traffic data. All of the study sections outside of Houston were monitored by fixed-point detectors that report volume as well as occupancy and/or speed. In Houston, the research team collected travel times from toll tag matches. In the San Francisco Bay area, data were collected from fixed-point sensors and toll tag matches.

A key piece of the traffic data collection was to select the segments to monitor and model.
The L03 Phase 2 report states that "based on previous analyses conducted by the research team, such as those for the Federal Highway Administration's (FHWA's) Mobility Monitoring Program, the section length for urban freeways has generally been set at a length between two to five miles." Figure A.2 shows the distribution of segment lengths studied in the L03 project. The segments had an average length of 5 miles, though some much longer segments were selected for the before-and-after analysis. Appendix G of the L03 final report states that sections should have the following characteristics:

1. Be relatively homogeneous in terms of traffic and geometric conditions;
2. Represent portions of trips taken by travelers; and
3. Have no mid-section freeway-to-freeway interchanges.

Incidents and Work Zones

For most sites, incident and work zone data were obtained from the private vendor Traffic.com. Traffic.com gathers incident data from a variety of sources and standardizes them into information on traffic incidents, special events, construction, severe weather, and other potentially traffic-influencing events. Each incident is either reported or confirmed and indicates the number of travel lanes blocked and the incident start and end time. In some regions (Jacksonville, Atlanta, and Seattle), TMC-reported incidents were used as the primary incident data set.

Weather

Hourly weather data from weather stations in the study region were obtained from the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA). The hourly data contained information on the sky condition, visibility, obstructions to visibility, type and intensity of precipitation, precipitation accumulation, temperature, and wind characteristics.
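These hourly records feed directly into the weather variables used later in the models (for example, the number of hours with rainfall exceeding 0.05 in.). A minimal sketch of that tabulation is shown below; the record layout and field names are illustrative assumptions, not the NCDC schema.

```python
from dataclasses import dataclass

@dataclass
class HourlyWeatherRecord:
    """One hourly observation; fields are illustrative, not the NCDC layout."""
    timestamp: str          # e.g., "2007-03-14 17:00"
    precip_in: float        # liquid precipitation accumulation for the hour (inches)
    frozen: bool            # True if precipitation fell as snow or ice

def hours_exceeding(records, threshold_in=0.05):
    """Count hours whose precipitation accumulation exceeds a threshold (inches)."""
    return sum(1 for r in records if r.precip_in > threshold_in)

def frozen_precip_hours(records):
    """Count hours with any frozen precipitation."""
    return sum(1 for r in records if r.frozen and r.precip_in > 0.0)

# Example: a handful of hourly records for one study region.
records = [
    HourlyWeatherRecord("2007-01-05 08:00", 0.12, False),
    HourlyWeatherRecord("2007-01-05 09:00", 0.03, False),
    HourlyWeatherRecord("2007-02-11 07:00", 0.20, True),
]
print(hours_exceeding(records, 0.05))   # -> 2
print(frozen_precip_hours(records))     # -> 1
```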

Figure A.1. Data-rich modeling concept.
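To make the tiered structure of Figure A.1 concrete, the sketch below collects the nine first-tier inputs into a single record, with work zone capacity-hours lost expanded one tier down as the text describes; the field names and the simple summation are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class WorkZoneCapacityLoss:
    """Second-tier inputs behind work zone capacity-hours lost."""
    lane_hours_lost: float
    shoulder_hours_lost: float

    def capacity_hours_lost(self) -> float:
        # Simplified: in the L03 concept these are functions of work zone
        # type and duration; here they are just summed for illustration.
        return self.lane_hours_lost + self.shoulder_hours_lost

@dataclass
class FirstTierInputs:
    """First-tier variables of the data-rich modeling concept (Figure A.1)."""
    num_lanes: int
    demand_to_capacity: float
    primary_incident_capacity_hours_lost: float
    secondary_incident_capacity_hours_lost: float
    work_zone: WorkZoneCapacityLoss
    weather_factors: float
    traffic_fluctuation: float
    active_control: float
    opposite_direction_incident_hours: float
```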

Table A.1. Site Selection Design Criteria

The design covered three highway types (urban freeways, urban signalized arterials, and rural freeways) across the following factors and levels:

• Area size: small, medium; large, very large
• Base congestion: low (AADT/C (a) < 7); moderate (AADT/C ~ 9); severe (AADT/C ~ 12)
• Number of lanes: 4; 6; 8+
• Base crash rate (b): low; high
• Trucks (%): <10%; >10%
• Traffic variability (c): low; high
• Traffic signal density: <2/mile; 2–5/mile; >5/mile
• Proximity to major bottleneck: <1 mile downstream from segment; >5 miles downstream from segment
• Improvement type: incident management; work zone management; weather management (d); traffic device control (e); demand management; special event management; traveler information; physical expansion and/or changes

(a) AADT/C is annual average daily traffic-to-capacity ratio (specifically, two-way hourly capacity).
(b) Categories were based on comparison to each state's average crash rate by type of highway.
(c) For urban highways, traffic variability was determined based on the coefficient of variation (CV) of weekday peak period travel. For rural highways, the CV of the 24-hour volume was used.
(d) Weather management depended on what was being covered in other research activities, such as FHWA's Road Weather Research and Development Program.
(e) Ramp meter control on freeways; signal control on signalized arterials.

Table A.2. SHRP 2 L03 Study Sites

City | Number of Sections | Traffic Data | Incident/Work Zone Data | Weather Data
Houston | 13 | Toll Tag | Traffic.com | NCDC/NOAA
Minneapolis | 16 | Fixed Point | Traffic.com | NCDC/NOAA
Los Angeles | 3 | Fixed Point | Traffic.com | NCDC/NOAA
San Francisco Bay | 4 | Toll Tag/Fixed Point | Traffic.com | NCDC/NOAA
San Diego | 6 | Fixed Point | Traffic.com | NCDC/NOAA
Atlanta | 10 | Fixed Point, AirSage | GDOT (NaviGAtor) | NCDC/NOAA
Jacksonville | 8 | Fixed Point | TMC | NCDC/NOAA
Seattle | 21 | Fixed Point | TMC & CAD | NCDC/NOAA

Capacity

The project team also collected information to calculate the capacity of study segments. Geometric data were obtained from satellite photographs and 2007 Highway Performance Monitoring data. Relevant operating and improvement data were obtained from the state departments of transportation (DOTs).

Incident Management Activities

Incident management information was collected from the traffic incident management (TIM) self-assessment procedure, developed by the FHWA to capture the sophistication of incident management policies for modeling. The process results in a single numeric score. The TIM self-assessment score ended up being available in only a few of the study locations, so it was ultimately not used in the statistical models.

Data Processing

Data were assembled for 81 urban freeway study segments. The ultimate statistical analysis data set summarizes reliability metrics for every study section over an entire year by peak hour, peak period, midday (weekdays 11:00 a.m. to 3:00 p.m.), weekday (all hours), and weekend and holiday. It consists of information in the categories listed in Table A.3 (intended to be illustrative, not exhaustive). A number of computational steps were required to transform the raw data sets into the final cross-sectional analysis data set listed in Table A.3.

Figure A.2. Distribution of L03 segment lengths.

Table A.3. L03 Final Analysis Data Set

Reliability Metrics:
• Mean, standard deviation, median, mode, minimum, and percentile travel times and travel time indices (TTIs)
• Buffer indices, planning time index, skew statistics, and misery index
• On-time percentages

Area Operations Characteristics:
• Number of service patrol trucks
• Service patrol trucks per mile
• Quick clearance law?
• Number of ramp meters, dynamic message signs, and closed-circuit televisions (CCTVs)

Service Patrols:
• Number of service patrol trucks covering section
• Percentage of time periods when trucks are active

Capacity and Volume Characteristics:
• Start and end times of peak hour and peak period
• Calculated and imputed vehicle miles traveled
• Average of demand-to-capacity ratio on all section links
• Highest demand-to-capacity ratio of all links on the section

Incident Characteristics:
• Number of incidents
• Incident rate per 100 million vehicle miles
• Incident lane-hours lost
• Incident shoulder-hours lost
• Mean, standard deviation, and 95th percentile of incident duration

Event Characteristics:
• Number of work zones
• Work zone lane-hours lost
• Work zone shoulder-hours lost
• Mean, standard deviation, and 95th percentile of work zone duration

Weather Characteristics:
• Number of hours with precipitation amounts exceeding various thresholds
• Number of hours with measurable snow
• Number of hours with frozen precipitation
• Number of hours with fog

The key steps described in this section are (1) quality control, (2) calculating speed, (3) calculating the travel time index, (4) defining the peak hour and peak period, (5) calculating demand in oversaturated conditions, and (6) associating incidents with segments.

Quality Control

The L03 final report states, "The processing began with quality control of the data as received from the TMCs. The data quality checks used were those developed for FHWA." The FHWA report cited is Quality Control Procedures for Archived Operations Traffic Data: Synthesis of Practice and Recommendations: Final Report (Texas Transportation Institute 2007).

Calculating Speed

The calculation of speed is a necessary processing step for data collected by single loop detectors. The L03 team did not have to do any of this processing because all of the collected data already supplied speeds that were either directly measured by the detector or were estimated in an upstream processing module. For example, in the case of the San Francisco, Los Angeles, and San Diego sites, traffic data were obtained from the Freeway Performance Measurement System (PeMS), which computes speeds based on 5-min measurements of volume and occupancy using a lane-, day of week-, and time-of-day specific g-factor (estimate of the average vehicle length).

Calculating Travel Time Index

All collected detector data were first aggregated to the 5-min level. At the 5-min level, volume and speed data were spatially aggregated across all lanes in a given direction then turned into vehicle miles traveled (VMT) and vehicle hours traveled (VHT), where

• VMT = volume × detector zone length; and
• VHT = VMT/(Min(FreeFlowSpeed, Speed)).

The detector zone length spans the distance between the current detector and halfway to its nearest neighboring detectors in the upstream and downstream directions. When aggregating to the section level, VMT and VHT were marked as missing if less than half of the detectors reported valid data for each of the 5-min periods. Otherwise, VMT and VHT were summed across all detectors on the segment, weighting by segment length. From these 5-min, segment VMT and VHT, TTI was computed through the following equations:

• SpaceMeanSpeed = VMT/VHT;
• TravelRate = 1/SpaceMeanSpeed; and
• TTI = MAX(1.0, [TravelRate/(1/FreeFlowSpeed)]).

In L03, the urban freeway free-flow speed was set to 60 mph. Under this computational framework, the TTI can never be lower than 1. The ultimate outputs of this processing are 5-min TTIs by segment.

Defining Peak Hour and Peak Period

Both the data-rich and data-poor models were structured to predict reliability within the peak hour, peak period, midday, and weekend time periods. Of these time periods, the peak hour and peak period are allowed to vary from segment to segment. L03 defined the peak hour as the continuous 60-min period during which the space mean speed is less than 45 mph. For segments where this condition occurs for longer than 60 min, the peak hour is selected by comparing the following criteria among adjacent 60-min periods:

• Low space mean speed;
• High vehicle hours of travel; and
• High vehicle miles of travel.

Ultimately, the analyst selects the peak hour based on comparing observed data with local knowledge on conditions. The peak period is defined as a continuous time period of at least 75 min during which the space mean speed is less than 45 mph. The distribution of the durations of the peak periods for the L03 segments is shown in Figure A.3.

Calculating Demand in Oversaturated Conditions

Demand is a critical explanatory variable in the L03 models. Since roadway detectors measure volume, not demand, the L03 research team created a methodology for computing the demand during the oversaturated conditions that all selected study corridors experience during the peak hour and peak period.

The methodology takes inputs of 5-min link volumes and speeds. For any 5-min speed that falls below a defined threshold (35, 40, or 45 mph), the link is assumed to be in congestion, and the measured volume is not considered representative of the demand. Single 5-min periods when the speed increases above the defined threshold then decreases below the threshold in the subsequent 5-min period are also assumed to be congested.

Figure A.3. L03 freeway segment peak period duration distributions.
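Before continuing with the demand calculation, the 5-min TTI computation defined above (VMT and VHT aggregated across a section's detectors, a 60-mph free-flow speed, the half-valid-detector rule, and the floor of 1.0) can be sketched as follows; the data structures and variable names are illustrative, not the L03 code.

```python
FREE_FLOW_MPH = 60.0  # L03 urban freeway free-flow speed

def section_tti(detectors):
    """Compute one 5-min section TTI from per-detector volume, speed, and zone length.

    Returns None if fewer than half of the detectors report valid data,
    mirroring the L03 missing-data rule.
    """
    valid = [d for d in detectors if d["volume"] is not None and d["speed_mph"]]
    if len(valid) < len(detectors) / 2.0:
        return None

    vmt = sum(d["volume"] * d["zone_len_mi"] for d in valid)
    vht = sum(d["volume"] * d["zone_len_mi"] / min(FREE_FLOW_MPH, d["speed_mph"])
              for d in valid)
    if vmt == 0.0:
        return 1.0  # no travel observed; treat as free flow

    space_mean_speed = vmt / vht          # mph
    travel_rate = 1.0 / space_mean_speed  # hours per mile
    return max(1.0, travel_rate / (1.0 / FREE_FLOW_MPH))

# Example: three detectors on a section during one 5-min period.
detectors = [
    {"volume": 480, "speed_mph": 55.0, "zone_len_mi": 0.6},
    {"volume": 510, "speed_mph": 32.0, "zone_len_mi": 0.5},
    {"volume": None, "speed_mph": None, "zone_len_mi": 0.4},  # failed detector
]
print(section_tti(detectors))
```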

Once the congested time period has been defined, it is split into two halves. The demand in the first half of congestion is assumed to be equal to the average volume measured in the two 5-min periods before the start of congestion. The demand in the second half of congestion is set such that the cumulative volume measured over the congested period is equal to the estimated cumulative demand. An illustration of the congestion definition methodology, applied to a single loop detector in San Diego, is shown in Figure A.4.

The L03 final report states that the two 5-min periods after the termination of congestion need to be checked to ensure that the estimated demand curve fits smoothly to the observed cumulative volume curve. Additionally, the observed 5-min volume should not be significantly higher than the estimated demand for the second half of congestion. If necessary, the congested period can be extended to ensure a smooth transition. The importance of these steps is illustrated in Figure A.5, which shows the application of the demand-estimation process on a single day at six vehicle detector stations (VDSs) in Orange County, California. For VDSs 1201292 and 1202105, the estimation process appears to produce reasonable results. For VDSs 1201348 and 1201839, the estimated demands for the second half of the congested period are significantly lower than the measured volumes immediately following the congestion. For VDSs 1201419 and 1217710, the estimated second-half demands are higher than those estimated for the first half of congestion.

Associating Incidents with Segments

Spatially, incidents were assigned to segments if the incident's linear referencing information indicated that it occurred on the segment. Temporally, for the peak hour, peak period, and midday models, an incident was assigned to a time slice if it began in or 15 min before the time slice, ended in the time slice, or spanned the time slice.

Estimation of Independent Variables

The final data-rich models contained a combination of up to three independent variables:

• The demand-to-capacity ratio (critical or average);
• Incident lane-hours lost (ILHL); and
• Hours of precipitation exceeding 0.05 in.

This section describes how each independent variable was calculated from the processed data sets.

Calculating the Demand-to-Capacity Ratio

The demand-to-capacity ratio is a critical input into all forms of the data-rich model. The previous section describes the process for calculating 5-min demand values from link-level measured volumes. Two forms of demand-to-capacity ratio were computed, stored, and used in the data-rich model:

• Critical demand-to-capacity ratio. The critical demand of a section is calculated as the highest 99th-percentile demand measured on a link on the segment during the given time period (peak hour or peak period) over a year.
• Average demand-to-capacity ratio. The average demand is calculated as the average demand measured on all links on the segment during the given time period (peak hour or peak period) over a year.

Figure A.4. L03 demand calculation concept.
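A minimal sketch of the demand-estimation rule illustrated in Figure A.4, under one reading of the procedure described above: flag congested 5-min periods with a speed threshold (including single above-threshold dips), set first-half demand to the average of the two 5-min volumes preceding congestion, and scale second-half demand so that cumulative demand matches the counted volume.

```python
def flag_congestion(speeds_mph, threshold_mph=45.0):
    """Mark 5-min periods as congested: below the threshold, or a single-period
    recovery above the threshold that immediately drops back below it."""
    congested = [s < threshold_mph for s in speeds_mph]
    for i in range(1, len(congested) - 1):
        if not congested[i] and congested[i - 1] and congested[i + 1]:
            congested[i] = True
    return congested

def estimate_demand(volumes, speeds_mph, threshold_mph=45.0):
    """Return 5-min demand estimates; outside congestion, demand = counted volume."""
    congested = flag_congestion(speeds_mph, threshold_mph)
    demand = list(volumes)
    i = 0
    while i < len(volumes):
        if not congested[i]:
            i += 1
            continue
        j = i
        while j < len(volumes) and congested[j]:
            j += 1                      # congestion spans periods i..j-1
        pre = volumes[max(0, i - 2):i]  # two periods before congestion starts
        first_half = sum(pre) / len(pre) if pre else volumes[i]
        mid = i + (j - i) // 2
        for k in range(i, mid):
            demand[k] = first_half
        counted = sum(volumes[i:j])     # conserve vehicles over the congested span
        remaining = counted - first_half * (mid - i)
        second_half = remaining / (j - mid) if j > mid else 0.0
        for k in range(mid, j):
            demand[k] = second_half
        i = j
    return demand
```

The smoothness checks on the two 5-min periods after congestion, and any extension of the congested span, would be layered on top of this basic bookkeeping.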

Figure A.5. Demand-estimation methodology applied to detector data in Orange County, California.

The capacity used in both ratios is the hourly capacity according to HCM methods.

Calculating Lane-Hours Lost

The lane-hours lost term in the model is meant to be the sum of lane-hours lost because of incidents and lane-hours lost because of work zones. The L03 project team only considered incidents in its developed models. Over a year, ILHL is calculated as follows:

ILHL = number of incidents × lanes blocked × incident duration

Through exploratory analysis, the L03 team developed the following guidelines for estimating the above parameters:

• If the incident rate is unavailable, it can be estimated by multiplying the crash rate by 4.545.
• If lanes blocked per incident is unavailable, it can be estimated as follows:
– 0.476 if a usable shoulder is present and the agency moves lane-blocking incidents to the shoulder as quickly as possible;
– 0.580 if lane-blocking incidents are not moved to the shoulder; and
– 1.140 if usable shoulders are unavailable.

The L03 team concluded that while they had hoped to develop a statistical relationship between incident management policies and average incident duration, sufficient data were not available. The final report contains average incident durations in all of the study locations for use by practitioners.

Since the models are used to predict reliability measures within defined time periods (like the peak hour or peak period), the lane-hours lost because of a particular incident have to be assigned to these time periods. In L03, the total lane-hours lost caused by an incident were calculated and attributed to time periods based on the percentage of the active incident time spent in the time period. For example, if an incident that causes 10 lane-hours lost lasts from 8:00 a.m. to 9:00 a.m. on a section that has a peak period from 6:00 a.m. to 10:00 a.m. and a peak hour from 7:30 a.m. to 8:30 a.m., 10 lane-hours lost are contributed to the peak period and 5 lane-hours lost are contributed to the peak hour.

Calculating Precipitation

Hourly weather data from the National Weather Service (NWS) were used to compute the number of hours that had precipitation exceeding defined thresholds (ultimately, the number of hours where rainfall exceeded 0.05 in. was included in the data-rich model).

Final Analysis Data Set

The final analysis data set summarizes segment travel time reliability, demand, capacity, incidents, and weather conditions over an entire year. For TTI, the distribution and moments were computed as the volume-weighted average of all of the 5-min TTIs in the given time period over the year. This is a critical piece of the analysis chain, as it means that the ultimate travel time distributions and results are weighted toward the time periods that are the most heavily traveled. This is in contrast to a facility-level perspective, which treats each measurement equally regardless of how many vehicles experienced it.

Model Calibration

Data-Rich

The data-rich model contains three independent variables that predict travel time reliability over a year:

• The critical and average demand-to-capacity ratio;
• The ILHL; and
• The number of hours when precipitation exceeded 0.05 in.

Equations were fit for the peak hour, peak period, midday, and weekday time periods, and were developed to predict the mean and 10th, 50th, 80th, 95th, and 99th percentile TTIs for urban freeway sections. The equations are all listed in the attachment of this document.
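Of the three independent variables just listed, the ILHL term involves the most bookkeeping. The sketch below reproduces the time-period attribution example from the Calculating Lane-Hours Lost discussion above (a 10 lane-hour incident from 8:00 to 9:00 a.m.); the function names are illustrative.

```python
from datetime import datetime, timedelta

def overlap_fraction(inc_start, inc_end, period_start, period_end):
    """Fraction of the incident's active time that falls inside a time period."""
    latest_start = max(inc_start, period_start)
    earliest_end = min(inc_end, period_end)
    overlap = max(timedelta(0), earliest_end - latest_start)
    return overlap / (inc_end - inc_start)

def lane_hours_lost_in_period(total_lane_hours_lost, inc_start, inc_end,
                              period_start, period_end):
    """Attribute an incident's lane-hours lost to a period by time overlap."""
    return total_lane_hours_lost * overlap_fraction(inc_start, inc_end,
                                                    period_start, period_end)

day = datetime(2007, 5, 1)
inc_start, inc_end = day.replace(hour=8), day.replace(hour=9)
peak_period = (day.replace(hour=6), day.replace(hour=10))
peak_hour = (day.replace(hour=7, minute=30), day.replace(hour=8, minute=30))

print(lane_hours_lost_in_period(10.0, inc_start, inc_end, *peak_period))  # 10.0
print(lane_hours_lost_in_period(10.0, inc_start, inc_end, *peak_hour))    # 5.0
```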
Figure A.6 shows which of the independent variables (icons) were used in the models for the different TTI moments (columns) and time periods (rows). The colors of the table show the root mean square error (RMSE) for each model during the calibration process. While the critical demand-to-capacity ratio and the ILHL were used in all of the peak hour and peak period equations, the hours of precipitation term was only used in six of the time period/moment equations. During the midday period, reliability is predicted only by the critical demand-to-capacity ratio. The RMSE generally increases with the higher TTI moments, likely because there is significantly more variability in the higher-percentile TTIs than in the mean, 10th-, and 50th-percentile TTIs across different segments.

Data-Poor

The data-poor model has only one independent variable: the mean TTI. Unlike the data-rich model, the data-poor predictive equations are not calibrated to specific time periods. Similar to the data-rich model, equations were developed to predict specific reliability metrics: the 80th-, 90th-, and 95th-percentile TTIs; the standard deviation TTI; the percentage of on-time trips made within 1.1 and 1.25 times the median TTI; and the percentage of on-time trips with 30-, 45-, and 50-mph speed thresholds.

Two sets of data-poor equations are presented in the L03 final report and have been included in the attachment of this document. The equations presented in the main body of the final report use an exponential form to relate the mean TTI with reliability. Appendix H, which supersedes the models in the body of the L03 report, presents a revised set of equations to account for the fact that the exponential form does not do well at estimating TTIs that exceed 2.0 (which are common in planning applications). The revised equations use the following forms:

• Natural log relationship for the percentile predictions;
• Exponential relationship with revised coefficients for the standard deviation prediction;
• Negative exponential form for the on-time measures for 45 and 50 mph; and
• Sigmoidal form for the on-time measure for 30 mph.

No revised equations were presented for the percentage of on-time trips made within 1.1 and 1.25 times the median TTI. Figure A.7 shows the RMSE for each equation during the calibration process. No calibration results were presented for the revised equations in the final report.

Model Validation

The data-rich and data-poor models were both validated on 26 urban freeway sections in Seattle. The L03 final report presents validation errors (measured in percent difference between the actual and predicted values) for the following equations:

• Data-rich: mean, 80th-percentile, and 95th-percentile TTIs during the peak period and weekday (all 24-h) time periods; and
• Data-poor: 80th- and 95th-percentile TTIs.

The validation errors are shown in Figure A.8. The solid colors indicate sections on which the model overpredicted the TTI (thus predicting that the segment is less reliable than it actually is), and the striped colors indicate sections on which the model underpredicted the TTI (thus predicting that the segment is more reliable than it actually is).

As noted by the L03 project team, the models tend to underpredict the weekday TTIs in the Seattle region. The final report authors speculate that this may be because of the lack of a rain variable in the weekday models; rain is an important factor in Seattle congestion.

Figure A.6. RMSE of data-rich model calibration.
Figure A.7. Calibration root mean square error, data-poor equations.

The data-poor model exhibits the same underprediction trend, particularly with the 95th-percentile equation. The L03 project team recommended further validation of the models to address these high errors.

Application Guidelines

Chapter 8 of the L03 final report contains application guidelines for using the project findings, including the data-rich and data-poor models, to estimate the reliability impacts of various improvement scenarios. With respect to the models, it concludes that the data-poor models can be used to generate reliability statistics for many planning-level applications. Since the overall TTI from planning models includes only recurrent congestion, analysts must figure out how to incorporate nonrecurrent events into an overall mean TTI for use in the models. L03 provides an adjustment factor for doing this. For the data-rich models, the application guidelines include tables to link improvement actions with changes in the independent variables.

Figure A.8. Validation errors.
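As a purely illustrative sketch of that planning workflow: inflate a recurrent-only mean TTI from a planning model into an overall mean TTI, then evaluate a data-poor percentile equation of the natural-log form used in the revised (Appendix H) models. Both the way the adjustment is applied and the coefficients below are placeholders, not the calibrated L03 values, which are tabulated in the L03 report.

```python
import math

def overall_mean_tti(recurrent_mean_tti, adjustment_factor=1.15):
    """Inflate a recurrent-only mean TTI to an overall mean TTI.
    Placeholder functional form and factor; L03 supplies the actual adjustment."""
    return 1.0 + adjustment_factor * (recurrent_mean_tti - 1.0)

def percentile_tti(mean_tti, a=1.0, b=2.1):
    """Data-poor percentile prediction of the natural-log form used in the revised
    L03 equations: TTI_p = a + b * ln(mean TTI). Coefficients are placeholders."""
    return a + b * math.log(mean_tti)

mean_from_planning_model = 1.30        # recurrent congestion only
mean_overall = overall_mean_tti(mean_from_planning_model)
print(round(mean_overall, 3))          # 1.345 with the placeholder factor
print(round(percentile_tti(mean_overall), 3))
```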

Recommendations

The L33 project team reviewed the L03 final report and final technical expert task group (TETG) presentation and communicated with the L03 principal investigator to assess the lessons learned and final conclusions from that project. The major findings and opportunities for L33 to further explore are as follows:

Geographic Scope

One issue with the L03 model validation and calibration steps is that winter weather was a factor in only one of the seven cities (Minneapolis). Additionally, all of the regions studied had well-developed incident management programs and other real-time operational activities. Further research should include more winter weather locations as well as more operationally diverse metropolitan areas.

Section Characteristics

All of the study sections shared two key characteristics: (1) all had three or more lanes per direction of travel, and (2) all regularly experienced severe congestion. Further work should consider sections with more diverse cross-sections and levels of congestion. These may be important factors because the impact of a lane blockage increases when there are fewer available lanes. Additionally, on severely congested segments, the relative impact of incidents, work zones, and inclement weather is less than on segments with less recurrent delay.

Additionally, L03 had some concerns about the performance of the data-poor models during extremely congested conditions. This is why revised models were included in the appendix of the final L03 report. It is recommended to validate and potentially recalibrate on sections that experience extremely congested conditions.

A further consideration is how capacity-restricting events like incidents and work zones are assigned to roadway sections. L03, as well as the SHRP 2 L08 project, Incorporation of Travel Time Reliability into the Highway Capacity Manual, assigned incidents to the section that they occur on. However, incidents that occur on one section often have impacts off of the section that are not captured in the L03 models. Similarly, incidents at the upstream end of a section may improve operations further downstream because of metering. These assumptions could be explored in further work.

Modification of Independent Variables

The L03 project team identified some opportunities for improving the estimation of independent variables in the models as well as potentially modifying them to produce better results. On the data-poor side, the team recommends a more rigorous approach for translating a recurring mean TTI into an overall mean TTI, ideally one that uses section-level incident and weather characteristics. On the data-rich side, results may be improved by altering the lane-hours-lost variable such that it is normalized by the total number of lanes at the location. Further exploration is also needed to figure out why this variable was not significant during the off-peak hours.

Additional Independent Variables

The two main identified opportunities for additional independent variables are (1) a representation of the number of lanes along a segment, and (2) a snowfall term.

Predictive Reliability Models

This section reviews the development and usage of other predictive travel time reliability models within the SHRP 2 Reliability program. The SHRP 2 L05 project created a framework for how the various predictive reliability outputs of the SHRP 2 program can support different levels of analysis. Their findings, presented in Table A.4, identify three projects besides L03 that developed predictive travel time reliability models: L07, L04, and L08.

Table A.4. Analysis Supported by SHRP 2 Reliability Predictive Models

Analysis Type/Scale | Supporting Tools
Sketch planning | L03 reliability prediction equations
Project planning | L07 hybrid method where data inputs are limited; L08 multiscenario methods where additional data are available and more resolution in results is desired
Facility performance | L08 multiscenario methods most directly applicable; L04 preprocessor (simulation manager) and postprocessor (trajectory processor) could be used, then the performance of an individual facility can be isolated
Travel demand forecasting | L03 reliability prediction equations and L07 method can be adapted as postprocessors; L08 multiscenario methods could be used to develop custom functions for postprocessing
Traffic simulation | L04 preprocessor (simulation manager) and postprocessor (trajectory processor) most appropriate; L08 scenario generator can be adapted
Their findings, pre- sented in Table A.4, identify three projects besides L03 that Table A.4. Analysis Supported By SHRP 2 Reliability Predictive Models Analysis Type/Scale Supporting Tools Sketch planning L03 reliability prediction equations Project planning L07 hybrid method where data inputs are limited L08 multiscenario methods where additional data are available and more resolution in results are desired Facility performance L08 multiscenario methods most directly applicable L04 preprocessor (simulation manager) and postprocessor (trajectory processor) could be used, then the performance of an individual facility can be isolated Travel demand forecasting L03 reliability prediction equations and L07 method can be adapted as postprocessors L08 multiscenario methods could be used to develop custom functions for postprocessing Traffic simulation L04 preprocessor (simulation manager) and postprocessor (trajectory processor) most appropriate L08 scenario generator can be adapted

Together the four projects support analyses at the sketch planning, project planning, facility performance, travel demand forecasting, and traffic simulation levels.

SHRP 2 L07: Evaluating Cost-Effectiveness of Highway Design Features

The purpose of the L07 project, which is still active, is to assess the role of various treatments in reducing nonrecurrent congestion. The output of the project is a spreadsheet-based tool that allows users to input specific roadway information and view the predicted reliability curve based on the L03 equations. The tool lets users compare an untreated TTI curve with treated TTI curves to view the effect of each treatment on reliability on a particular section of roadway.

The L07 project team made some revisions to the L03 models to adapt them to their spreadsheet application. The main motivation for the adaptations was to improve their applicability to sections on which congestion is dominated by nonrecurrent events. This largely applies to rural areas and small and medium urban areas (relevant to L33). The revised models have been submitted in a draft final report and are awaiting approval by the L07 TETG.

Revisions were made only to the data-rich models. The L07 team focused on the peak hour data-rich model from L03 but generalized it such that it could be applied to any hour of any day. The draft equations are presented in the attachment of this appendix. The first major change is that L07 split the data-rich model into two separate equations: to be applied to sections and hours with a critical demand-to-capacity ratio of less than or more than 0.8. This change was made to allow for better results on lower demand sections, given that the L03 peak hour equation focused on heavily congested segments. The equations use the same independent variables as the L03 peak hour model but also include a snowfall term that measures the number of hours during the time period when snowfall exceeded 0.01 in. Both equations also split the predicted TTI into two components, the nonprecipitation portion of the predicted TTI (which is exponential, with independent variables lane-hours lost and critical demand-to-capacity ratio) and the precipitation TTI. For the low-demand model, the coefficients are continuous, so the data-poor model can be used to calculate a continuous TTI density function. For the data-rich model, coefficients were developed to predict the 10th-, 50th-, 80th-, 95th-, and 99th-percentile TTIs.

The L07 revised equations were calibrated using data from Minnesota, processed by the L07 project team. According to the project team, calibration errors were not calculated; rather, the reasonableness of the values output by the spreadsheet was assessed and deemed to be acceptable for the spreadsheet application.

Both the L03 and L07 project teams noted that the predictive equations are not optimal for predicting the travel time impacts of extremely rare events that affect the highest percentiles, because these events are so rare over the time frame of one year. As such, the L07 team is also developing a way to account for the reliability impacts of multihour incidents to directly manipulate the TTI curve after the predictive equations have been implemented.
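The structure of the revised L07 equations described above can be sketched as follows: pick one of two equation sets depending on whether the critical demand-to-capacity ratio is below 0.8, and add a nonprecipitation component (exponential in lane-hours lost and the demand-to-capacity ratio) to a precipitation component driven by rain and snow hours. The functional details and all coefficients below are placeholders; the calibrated equations are in the L07 draft report.

```python
import math

# Placeholder coefficients for the two demand regimes (not the L07 values).
COEFFS = {
    "low_demand":  {"a": 0.05, "b": 0.50, "c": 0.002, "d": 0.004},
    "high_demand": {"a": 0.10, "b": 0.90, "c": 0.003, "d": 0.006},
}

def predicted_tti(critical_dc_ratio, lane_hours_lost, rain_hours, snow_hours):
    """Illustrative form: TTI = exp(a*ILHL + b*(d/c)) + precipitation component.

    rain_hours and snow_hours are the hours exceeding the 0.05-in. rain and
    0.01-in. snowfall thresholds described in the text.
    """
    regime = "low_demand" if critical_dc_ratio < 0.8 else "high_demand"
    k = COEFFS[regime]
    nonprecip = math.exp(k["a"] * lane_hours_lost + k["b"] * critical_dc_ratio)
    precip = k["c"] * rain_hours + k["d"] * snow_hours
    return nonprecip + precip

print(round(predicted_tti(0.72, 3.0, 40, 10), 3))   # low-demand regime
print(round(predicted_tti(1.05, 8.0, 40, 10), 3))   # high-demand regime
```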
SHRP 2 L04: Incorporating Reliability Performance Measures in Operations and Planning Modeling Tools

The purpose of the L04 project, which was completed in March 2013, was to develop software to apply simulation models in a way that more fully accounts for the factors that cause nonrecurrent congestion. The software consists of two modules: (1) a scenario generator that produces random inputs of incidents, work zones, weather, and other nonrecurrent congestion factors for the simulation; and (2) a trajectory processor that generates travel time distributions. The L33 project team reviewed the SHRP 2 L04 Task 7 Report to understand the overlap between the two projects.

Similar to the L03 team, the L04 team chose to focus their reliability analysis on the relationship between the mean travel time and measures of reliability (in the case of L04, the standard deviation of travel time). In the exploratory analysis phase of the project, the L04 team tested three possible relationships between the mean travel time per mile and the standard deviation of travel time per mile: (1) linear, (2) square root, and (3) quadratic. Since no real-world trajectory data were readily available, the relationships were tested using simulated trajectory data at the network, origin–destination, path, and link levels in Irvine, California; Baltimore, Maryland/Washington, D.C.; and New York City. Travel time variability was considered in two ways: (1) the variation among vehicle travel times departing at the same time (origin–destination, path, and link levels); and (2) the variation by time of day (network level). Ultimately, the quadratic model had the best goodness-of-fit (R-squared), but some of its coefficients had high p-values and violated accepted theory. The linear regression model was selected as the best model because it generally exhibited higher R-squared values than the quadratic model. In all models, the network level had the highest slope (standard deviation increases faster with mean travel time) and the network-level the smallest slope. The linear relationship was validated using GPS probe data collected near Puget Sound. The validation yielded the following R-squared values:

• Origin–destination: 0.5770;
• Path: 0.3861; and
• Link: 0.6675.
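A minimal sketch of the linear relationship described above: fit the standard deviation of travel time per mile against the mean travel time per mile by ordinary least squares and report the R-squared. The observations below are made up purely for illustration.

```python
import numpy as np

# Illustrative (made-up) observations: mean and standard deviation of travel
# time per mile, in minutes per mile, for several links or paths.
mean_tt = np.array([1.0, 1.2, 1.5, 2.0, 2.6, 3.1])
std_tt  = np.array([0.10, 0.18, 0.30, 0.55, 0.80, 1.05])

# Ordinary least squares fit: std = slope * mean + intercept.
slope, intercept = np.polyfit(mean_tt, std_tt, 1)

predicted = slope * mean_tt + intercept
ss_res = np.sum((std_tt - predicted) ** 2)
ss_tot = np.sum((std_tt - std_tt.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r_squared:.3f}")
```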

These relationships were ultimately used to validate the travel times output by the project's mesoscopic model. The L04 project team fit a linear regression model to the mean travel time per mile and standard deviation of travel time per mile output by their mesoscopic model and compared the coefficients with those obtained from fitting the linear model to 4 hours of GPS trajectory data in New York City purchased from TomTom. The magnitudes of the coefficients were deemed comparable to those obtained for the simulated data, except at the network level.

SHRP 2 L08: Incorporation of Travel Time Reliability into the Highway Capacity Manual

The SHRP 2 L08 project began in 2011 and is anticipated to end in the spring of 2013. The purpose of the L08 project is to develop analytic methods for potential incorporation of travel time reliability into the HCM. According to the draft final report, the project had two objectives: (1) to incorporate nonrecurring congestion impacts into the HCM and (2) to expand the HCM analysis horizon from a single study period to several weeks or months to assess variability. The project's methodology for freeways contains three components: (1) a data depository; (2) a scenario generator; and (3) a computational processor, each of which is described in turn.

The data depository contains required inputs to the scenario generator. At a segment-specific level, this includes segment geometries, free-flow speeds, lane patterns, segment types, and demand [which can be directly measured from field sensors over a sample of days or estimated from projections of annual average daily traffic (AADT)]. The depository also includes information about nonrecurrent congestion, such as the varying impacts it can have on traffic (for an incident, a shoulder closure versus a one-lane closure versus a two-lane closure); the probability of its occurrence during a particular time period; its duration; and the impact that it has on free-flow speed, demand, and capacity. These nonrecurrent congestion inputs have default values for cases where local data collection is not feasible. These inputs are fed into the scenario generator.

The freeway scenario generator (FSG) develops operational scenarios that a freeway facility may experience and the probability that they may occur during a particular time period. These scenarios are based on the nonrecurrent congestion inputs in the data depository. The methodology assumes that events are independent (thus, the probability that an incident and precipitation occur at the same time is equal to the product of their individual probabilities). The scenarios are ultimately expressed as demand and capacity parameters and fed into the core computation engine, which is an extension of the freeway evaluation tool (called FREEVAL-RL). The FREEVAL-RL tool extended past methodologies and was developed to generate a reliability report that characterizes the travel time distribution of a particular scenario.

Other Reliability Research

This section describes other recent research and implementation efforts into other aspects of understanding travel time reliability. It details predictive models in practice, best practices in data processing techniques, current research on the optimal reliability metrics, and recently developed methodologies for understanding the relationship between nonrecurrent congestion and reliability in the SHRP 2 program.
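Before turning to that other work, the scenario-probability bookkeeping in the L08 freeway scenario generator described above can be made concrete with a short sketch. With events treated as independent, each scenario's probability is the product of its demand-pattern, weather, and incident probabilities; the category labels and probabilities below are illustrative, not L08 defaults.

```python
from itertools import product

# Illustrative category probabilities for one analysis period (not L08 defaults).
demand_patterns = {"typical weekday": 0.7, "high-demand weekday": 0.3}
weather = {"normal": 0.90, "medium rain": 0.08, "heavy rain": 0.02}
incidents = {"no incident": 0.85, "shoulder closure": 0.10, "one-lane closure": 0.05}

scenarios = []
for (d, p_d), (w, p_w), (i, p_i) in product(
        demand_patterns.items(), weather.items(), incidents.items()):
    # Independence assumption: joint probability is the product of the marginals.
    scenarios.append(((d, w, i), p_d * p_w * p_i))

total = sum(p for _, p in scenarios)
print(len(scenarios), round(total, 6))   # 18 scenarios whose probabilities sum to 1.0
```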
Predictive Models in Practice

The Florida Department of Transportation's (FDOT) Reliability Model was featured in the SHRP 2 L05 final report as a best practice example of using reliability performance measures in planning and programming. FDOT's preferred reliability statistic is the percentage of trips that arrive on time, defined as within 10 mph of the free-flow speed (the posted speed limit plus 5 mph) of the section. FDOT's predictive model calculates expected travel times for a set of predefined scenarios, along with the probability of each scenario occurring. Each scenario assumes some set of conditions including congestion level, weather, incidents, and work zones. For a particular section of road, the estimated travel times for each scenario are combined with the expected frequency of the scenario to create the travel time distribution for the section. This methodology is applied to the entire state freeway system, regardless of instrumentation. Each freeway is divided into sections, and the model is applied to each of the 24 hours in a day for each segment direction.

The model uses four major causes of congestion: recurring, incidents, weather, and work zones. Data inputs include hourly demand-to-capacity ratios derived from AADT and hourly and directional distributions of traffic. Travel times are determined for each segment for each scenario according to the following:

• Recurring congestion component. Determined through HCM planning applications and CORSIM (corridor traffic simulation software) travel time estimations.
• Incident component. Impact determined by capacity reduction. Probability determined from FDOT crash data during different work zone and precipitation conditions, and an assumed ratio of non-blocking to lane-blocking events.
• Weather component. Impact assumed to be a 6% speed reduction for light rain and 12% for heavy rain. Probability of clear weather (<0.01 in./h), light rain (0.01 to 0.5 in./h), and heavy rain (>0.5 in./h) determined from Weather Underground data.

• Work Zone component. Impact determined by capacity reduction. No data available, so constant probabilities assumed during particular times of day (3% overnight, 1% otherwise).

The data produced by this model are used for systemwide reporting and to set project priorities.

Data Processing

Applying best practices of traffic data processing techniques is an important component of the L33 project; how the data is quality controlled, aggregated, and turned into travel times as input into the model calibration ultimately affects the validity and applicability of the final results.

The SHRP 2 project that most fully addressed traffic data processing is L02, Establishing Monitoring Programs for Mobility and Travel Time Reliability. Chapters 3, 4, and 6 of the draft final report document methodologies for identifying and imputing bad traffic data from point detectors and filtering unrepresentative travel times from automated vehicle identification (AVI) and automated vehicle location (AVL) data sources. Figure A.9 illustrates these processing steps to show the computations that need to be performed on each type of data.

Figure A.9. L02 data processing by technology.

Many of the findings in the L02 final documents directly relate to the required data processing needed to validate the L03 data-rich and data-poor models, including

1. Filtering detector data to remove samples with poor data quality;
2. Filtering AVI travel times to remove unrepresentative travel times;
3. Calculating segment and route travel times from time-mean speeds; and
4. Estimating individual vehicle travel time probability density functions (PDFs) from facility-average travel times.

Reliability Metrics

Many of the SHRP 2 Reliability projects have performed user surveys and analysis to determine the optimal measures for summarizing reliability for different audiences. The L03 project explored measures commonly used in the United States and Europe to identify the set of measures to use in projects. Exploratory analysis showed that index measures (like the buffer index and planning time index) are not optimal for tracking reliability improvements because some improvements can make the mean (or median) travel time improve more than the 95th percentile travel time, thus showing a worsening in reliability. General consensus among the SHRP 2 Reliability projects and other research is that the best measures are those that provide information on the underlying travel time distributions. The L03 reliability models predict the mean, median, and 10th-, 80th-, 90th-, and 95th-percentile travel times. From these values, skew can be computed. The L07 extended-L03 models generate continuous probability density functions in the low-demand equation of their predictive model.

While the goal of reliability monitoring and prediction is to provide a full PDF of travel time conditions, the PDF can be developed in different ways. In an ideal monitoring environment, and one that will be possible in the future, travel times can be collected from every individual vehicle traversing a segment. In this case, a PDF of 5-min-level travel times can be assembled in one of two ways: (1) an individual traveler-level PDF, which is based on the full set of travel times measured across all vehicles that made the trip during that 5-min period over a year, or (2) a facility-level PDF, in which the individual vehicle travel times are averaged within each 5-min period, and the PDF is composed of the 5-min average travel times across the year.
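The two constructions just described differ only in whether individual vehicle travel times are pooled directly or first averaged within each 5-min period, as in the following sketch (the per-vehicle travel times are illustrative).

```python
import numpy as np

def traveler_level_samples(times_by_period):
    """Pool every individual vehicle travel time across all 5-min periods."""
    return np.concatenate([np.asarray(t) for t in times_by_period if len(t)])

def facility_level_samples(times_by_period):
    """Keep one value per 5-min period: the average vehicle travel time."""
    return np.array([np.mean(t) for t in times_by_period if len(t)])

# Illustrative data: per-vehicle travel times (minutes) in three 5-min periods.
times_by_period = [
    [5.1, 5.3, 5.0, 5.2],       # light traffic
    [7.8, 8.4, 9.1, 7.5, 8.0],  # congested
    [5.0, 5.1],                 # light traffic
]
traveler = traveler_level_samples(times_by_period)
facility = facility_level_samples(times_by_period)
print(np.percentile(traveler, 95), np.percentile(facility, 95))
```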

The L02 project developed methodologies and guidance on developing both types of PDFs from different detection technologies. The L03 project produced a different PDF; the average 5-min travel times across a year were put into travel time bins, then each bin was weighted by the number of travel times in the bin as well as the average volume on the segment across all the time periods that experienced that travel time. This is similar to the individual traveler-level PDF in that it weights travel times by the number of vehicles that experienced them, but different in that it does not capture the variability of travel times within a single 5-min period.

Nonrecurrent Congestion Methodologies

A major piece of the SHRP 2 Reliability program is figuring out how the factors of nonrecurrent congestion affect travel time reliability. FHWA identified seven sources of nonrecurrent congestion: (1) incidents, (2) weather, (3) work zones, (4) fluctuation in demand, (5) special events, (6) traffic control devices, and (7) inadequate base capacity. This section addresses the outputs from the SHRP 2 program that seek to quantify the relationship between nonrecurrent congestion and reliability.

SHRP 2 L03

Outside of the predictive reliability model research, the L03 team performed a detailed congestion-by-source analysis using the collected Seattle data. The work was performed by associating disruptions with travel times and delay. The analysts used 5-min delay and travel time data and data on the sources of congestion to assign influence variables to time periods affected by disruptions, grouped into incidents, incidents involving lane closures, vehicle crashes, active construction events, bad weather, and rubbernecking (delay in the opposite direction of travel of the incident). Methodologies were also developed to relate off-segment congestion influences to the segment being studied. Performance during the disruptions was compared with the segment's baseline performance. The results of the analysis ultimately summarize the percentage of delay caused by the different types of disruptions.

SHRP 2 L02

The L02 project performed similar analyses but related the sources of congestion to the underlying travel time PDFs. The project's guidebook recommends six steps for assessing the reliability impacts of influencing factors:

1. Select the region or facilities of interest.
2. Select a timeframe of interest.
3. Assemble travel rate data for each facility.
4. Generate PDFs for each facility.
5. Understand variations in reliability as a result of congestion.
6. Develop cumulative distribution functions (CDFs) for each combination of recurring congestion level and nonrecurring event.

An example of the final step is shown in Figure A.10.

Figure A.10. Influencing factor CDF, L02.

SHRP 2 L08

The L08 project has performed significant analysis into quantifying the reliability impact of nonrecurrent congestion events to feed into the HCM update. In the L08 project, variability in demand, weather, and incidents are the nonrecurrent congestion factors that affect travel time reliability.

SHRP 2 L08

The L08 project has performed significant analysis into quantifying the reliability impact of nonrecurrent congestion events to feed into the HCM update. In the L08 project, variability in demand, weather, and incidents are the nonrecurrent congestion factors that affect travel time reliability. The L08 methodology incorporates these factors by discretizing each factor into categories and estimating the probability that each category will occur during a particular time period. Demand is categorized into demand patterns that are facility-specific and organized by day of week and month; the probability of each demand pattern occurring within a particular time period is then easily computed from the frequency of that demand pattern by day of week and month in the study period. Weather is categorized into the HCM categories shown to impact travel time: medium rain, heavy rain, light snow, light-medium snow, medium-heavy snow, heavy snow, very low visibility, minimal visibility, and normal weather. The frequency of these categories can easily be estimated from hourly weather data. Incidents are grouped into six categories based on their severity or capacity impacts: no incident, shoulder closure, one-lane closure, two-lane closure, three-lane closure, and four-lane closure. Probabilities can be computed from empirical data.
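Because the L08 approach reduces each reliability factor to categories with empirically estimated probabilities, a short sketch may help make the mechanics concrete. The Python function below estimates weather-category probabilities from hourly records; the simplified three-category scheme, the thresholds, and the column names are illustrative assumptions and are much coarser than the HCM weather categories listed above.

# Sketch: estimating scenario-category probabilities from empirical records, in
# the spirit of the L08 approach (discretize a factor, then use observed
# frequencies within the study time slice as probabilities). The columns,
# thresholds, and three-category weather scheme are illustrative assumptions.
import pandas as pd

def weather_category(row):
    # Assumed hourly weather record with 'rain_in' and 'snow_in' columns.
    if row['snow_in'] > 0.0:
        return 'snow'
    if row['rain_in'] >= 0.1:
        return 'rain'
    return 'normal'

def category_probabilities(hourly_weather, months, hours):
    # Restrict to the study time slice (e.g., weekday peak hours in given
    # months), then use relative frequencies as the category probabilities.
    ts = hourly_weather.index
    in_slice = ts.month.isin(months) & ts.hour.isin(hours)
    cats = hourly_weather.loc[in_slice].apply(weather_category, axis=1)
    return cats.value_counts(normalize=True).to_dict()

# Example use, assuming 'wx' is an hourly DataFrame indexed by timestamp:
# probs = category_probabilities(wx, months=[1, 2, 12], hours=range(16, 19))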
Conclusions

This final section summarizes lessons learned from the background review and details potential opportunities for further exploration in the L33 project.

Validating in Multiple Regions with Diverse Characteristics

The L03 data-rich and data-poor models were validated in only one location, Seattle, a metropolitan area that had vastly different weather patterns from any of the calibration locations. As such, it is critical that the L33 project perform the validation at multiple sites with a wide range of climates and operational policies.

Finding Sufficiently Detailed Data Sets

The L03 team experienced significant challenges in acquiring data sets with the level of detail needed to calibrate the predictive models, particularly with regard to disruptions like incidents and lane closures. In one example, the L03 team thought that the Traffic.com data it had purchased in most of the study areas contained lane closure data but, on further investigation, determined that lane closure data were infrequently and inconsistently reported. In another example, the L03 team had planned to incorporate an agency's incident clearance policies into the data-rich predictive model by using agency-reported TIM scores, but subsequently learned that only a few of the study areas had reported TIM scores. These experiences highlight the importance of seeking out data sources, guaranteeing their availability, and making sure that together they can be used to estimate all of the desired model variables.

Implementing Best Practices of Data Processing

A major portion of L03 resources was spent on quality controlling and processing the collected traffic and incident data, a necessary effort for ensuring valid model results. In the time since the L03 analysis was conducted, the L02 project, which focused on monitoring travel time reliability, has been completed and published and is in the process of being implemented. The timing of the L33 project is such that it is well positioned to take advantage of the best practices in traffic and nonrecurrent congestion source data processing established by the SHRP 2 program.

Accurately Capturing Demand

SHRP 2 research has found the complex interaction between demand and capacity to be a critical determinant of travel time reliability. The L03 project established an empirical approach for estimating demand for every 5-min period, but exploratory analysis performed by the L33 team has shown that this can produce inaccurate results in a number of cases. This estimation process appears to be an opportunity for improvement in the L33 project.

Capturing the Right Independent Variables

The L03 project found the demand-to-capacity ratio, the ILHL, and the number of hours with precipitation exceeding 0.05 in. to be the key predictors of reliability in a data-rich environment. However, the L03 team recommended that further investigation consider modifying the ILHL variable to account for the total number of lanes at the location. In extending the L03 data-rich models, the L07 project team modified the precipitation variable to also include hours of snowfall exceeding 0.01 in. A major focus of the L33 project will be to assess whether the modification of existing independent variables or the addition of further independent variables reduces the calibration and validation errors.

Further Investigation of the Relationship Between the Mean Travel Time and Reliability

Both the L03 and L04 projects independently arrived at the conclusion that measures of travel time reliability can be predicted with reasonable accuracy from the mean travel time. However, when the L03 team validated the data-poor model in Seattle, it noted that the model significantly underpredicted high-percentile travel times, even though it had strong goodness of fit in the calibration stage. The L33 team plans to investigate whether this relationship is the ideal form for a predictive data-poor model.
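As context for that investigation, the data-poor relationship itself is compact enough to state directly. The short Python sketch below applies the Chapter 7 data-poor equations reproduced in the attachment to this appendix, which express percentile TTIs as power functions of the mean TTI; the exponents are transcribed from Equations 33 through 37, while the example mean TTI value is arbitrary.

# Sketch: L03 data-poor relationships (Chapter 7 form, reproduced in the
# attachment) predict percentile TTIs as power functions of the mean TTI.
# The exponents below are taken from Equations 33-37; the input is arbitrary.
DATA_POOR_EXPONENTS = {
    '95th': 1.8834,
    '90th': 1.6424,
    '80th': 1.3650,
    '50th': 0.8601,
    '10th': 0.1524,
}

def data_poor_percentiles(mean_tti):
    return {pct: mean_tti ** exp for pct, exp in DATA_POOR_EXPONENTS.items()}

print(data_poor_percentiles(1.30))
# For example, the 95th-percentile TTI estimate is 1.30 ** 1.8834, or about 1.64.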

Measuring the Travel Time Probability Density Function

In measuring the travel time probability density functions of each study section for calibration and validation of the data-rich and data-poor models, the L03 project team weighted each measured travel time bin by the frequency with which it occurred as well as the average volume on the segment across the 5-min time periods that experienced that travel time. According to the L03 principal investigator, this was done to ensure that the models will still be applicable in the future, when it is possible to directly measure travel times from every vehicle traversing a segment. However, the PDF approximated in L03 is still fundamentally different from one assembled from individual vehicle travel times, which accounts for the variability of travel times between different vehicles traversing the same segment at the same time. In L33, the project team wants to explore the PDF assumption made in L03 and assess the value of providing models to predict other PDF forms.

Predicting a Travel Time Probability Density Function

Recent reliability research concludes that the closer one can get to measuring or predicting the full travel time probability density function, the better the resulting understanding of reliability will be. A full PDF can support the computation of any travel time reliability measure. The L07 team has already extended the L03 models such that, for low-demand conditions, they can predict a continuous PDF. The L33 team plans to evaluate revised equations and methodologies that can provide a fuller picture of segment-level travel time reliability.

References

Barkley, T., R. Hranac, and K. Petty. Relating Travel Time Reliability and Nonrecurrent Congestion with Multistate Models. In Transportation Research Record: Journal of the Transportation Research Board, No. 2278, Transportation Research Board of the National Academies, Washington, D.C., 2012.

Cambridge Systematics, Inc. Final Report. SHRP 2 Report S2-L05-RW-1: Incorporating Reliability Performance Measures into the Transportation Planning and Programming Processes. Transportation Research Board of the National Academies, Washington, D.C., 2014.

Cambridge Systematics, Inc. Technical Reference. SHRP 2 Report S2-L05-RW-1: Incorporating Reliability Performance Measures into the Transportation Planning and Programming Processes. Transportation Research Board of the National Academies, Washington, D.C., 2014.

Cambridge Systematics, Inc.; Texas Transportation Institute; University of Washington; Dowling Associates; Street Smarts; H. Levinson; and H. Rakha. Final Report. SHRP 2 Report S2-L03-RR-1: Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies. Transportation Research Board of the National Academies, Washington, D.C., 2013.

Cambridge Systematics, Inc.; Texas Transportation Institute; University of Washington; Dowling Associates; Street Smarts; H. Levinson; and H. Rakha. Phase 1 Report. SHRP 2 L03 Project, Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies. Transportation Research Board of the National Academies, Washington, D.C., 2009.

Cambridge Systematics, Inc.; Texas Transportation Institute; University of Washington; Dowling Associates; Street Smarts; H. Levinson; and H. Rakha. Phase 2 Report. SHRP 2 L03 Project, Analytical Procedures for Determining the Impacts of Reliability Mitigation Strategies. Transportation Research Board of the National Academies, Washington, D.C., 2009.
Delcan, Northwestern University, and Parsons Brinckerhoff. Draft Task 7 Report. SHRP 2 Report S2-L04-RW-2: Incorporating Reliability Performance Measures in Operations and Planning Modeling Tools. Transportation Research Board of the National Academies, Washington, D.C., 2013.

ITRE, Iteris, Kittelson & Associates, National Institute of Statistical Sciences, University of Utah, Rensselaer Polytechnic University, and A. Khattak of Planitek. Final Report. SHRP 2 Report S2-L02-RR-1: Establishing Monitoring Programs for Travel Time Reliability. Transportation Research Board of the National Academies, Washington, D.C., 2012.

Kittelson & Associates, Inc.; Cambridge Systematics; ITRE; and Texas A&M Research Foundation. Draft Final Report. SHRP 2 Report S2-L08-RW-1: Incorporation of Travel Time Reliability into the Highway Capacity Manual. Transportation Research Board of the National Academies, Washington, D.C., 2013.

Kwon, J., T. Barkley, R. Hranac, K. Petty, and N. Compin. Decomposition of Travel Time Reliability into Various Sources: Incidents, Weather, Work Zones, Special Events, and Base Capacity. In Transportation Research Record: Journal of the Transportation Research Board, No. 2229, Transportation Research Board of the National Academies, Washington, D.C., 2011.

Kwon, J., M. Mauch, and P. Varaiya. Components of Congestion. In Transportation Research Record: Journal of the Transportation Research Board, No. 1959, Transportation Research Board of the National Academies, Washington, D.C., 2006.

Mahmassani, H., T. Hou, and H. Dong. Characterizing Travel Time Reliability in Vehicular Networks: Deriving a Robust Relation for Reliability Analysis. In Transportation Research Record: Journal of the Transportation Research Board, No. 2315, Transportation Research Board of the National Academies, Washington, D.C., 2012.

McLeod, D., L. Elefteriadou, and L. Jin. Travel Time Reliability as a Performance Measure: Applying Florida's Predictive Model on the State's Freeway System. Submitted for Presentation and Publication to the TRB 2012 Annual Meeting. October 2011.

MRI Global. Draft Final Report. SHRP 2 Report S2-L07-RR-1: Identification and Evaluation of the Cost-Effectiveness of Highway Design Features to Reduce Nonrecurrent Congestion. Transportation Research Board of the National Academies, Washington, D.C., 2013.

Pu, W. Analytic Relationships Between Travel Time Reliability Measures. In Transportation Research Record: Journal of the Transportation Research Board, No. 2254, Transportation Research Board of the National Academies, Washington, D.C., 2011.

Texas Transportation Institute. Quality Control Procedures for Archived Operations Traffic Data: Synthesis of Practice and Recommendations: Final Report. Federal Highway Administration, U.S. Department of Transportation, Washington, D.C., 2007.

Van Lint, J. W. C., and H. J. van Zuylen. Monitoring and Predictive Travel Time Reliability. In Transportation Research Record: Journal of the Transportation Research Board, No. 1917, Transportation Research Board of the National Academies, Washington, D.C., 2005.

Appendix A Attachment

This attachment lists the equations for the data-rich and data-poor models from the L03 and L07 projects, which are discussed in Appendix A.

L03 Data-Rich Equations, Chapter 7 of Final Report

Peak Period

Mean TTI = exp[0.09677 (d/c)_crit + 0.00862 ILHL + 0.00904 Rain05Hrs]   (1)
RMSE = 18.8%; alpha levels of coefficients: <0.0001, <0.0001, 0.0189 (in order of appearance in the equation)

99th-percentile TTI = exp[0.33477 (d/c)_crit + 0.012350 ILHL + 0.025315 Rain05Hrs]   (2)
RMSE = 39.8%; alpha levels of coefficients: <0.0001, 0.0002, 0.0022

95th-percentile TTI = exp[0.23233 (d/c)_crit + 0.01222 ILHL + 0.01777 Rain05Hrs]   (3)
RMSE = 32.3%; alpha levels of coefficients: <0.0001, <0.0001, 0.0078

80th-percentile TTI = exp[0.13992 (d/c)_crit + 0.01118 ILHL + 0.01271 Rain05Hrs]   (4)
RMSE = 25.8%; alpha levels of coefficients: <0.0001, <0.0001, 0.0163

50th-percentile TTI = exp[0.09335 (d/c)_crit + 0.00932 ILHL]   (5)
RMSE = 20.5%; alpha levels of coefficients: <0.0001, <0.0001

10th-percentile TTI = exp[0.01180 (d/c)_crit + 0.00145 ILHL]   (6)
RMSE = 6.7%; alpha levels of coefficients: 0.0169, 0.0060

Peak Hour

Mean TTI = exp[0.27886 (d/c)_crit + 0.01089 ILHL + 0.02935 Rain05Hrs]   (7)
RMSE = 26.4%; alpha levels of coefficients: 0.0008, 0.0094, 0.0838

99th-percentile TTI = exp[1.13062 (d/c)_crit + 0.01242 ILHL]   (8)
RMSE = 41.3%; alpha levels of coefficients: <0.0001, 0.0477

95th-percentile TTI = exp[0.63071 (d/c)_crit + 0.01219 ILHL + 0.04744 Rain05Hrs]   (9)
RMSE = 38.3%; alpha levels of coefficients: <0.0001, 0.0436, 0.0553

80th-percentile TTI = exp[0.52013 (d/c)_crit + 0.01544 ILHL]   (10)
RMSE = 34.1%; alpha levels of coefficients: <0.0001, 0.0031

50th-percentile TTI = exp[0.29097 (d/c)_crit + 0.01380 ILHL]   (11)
RMSE = 28.3%; alpha levels of coefficients: <0.0001, 0.0015

10th-percentile TTI = exp[0.07643 (d/c)_crit + 0.00405 ILHL]   (12)
RMSE = 15.2%; alpha levels of coefficients: 0.0081, 0.0748

Midday (11:00 a.m. to 2:00 p.m., Weekdays)

Mean TTI = exp[0.02599 (d/c)_crit]   (13)
RMSE = 7.5%; alpha level of coefficient: <0.0001

99th-percentile TTI = exp[0.19167 (d/c)_crit]   (14)
RMSE = 33.4%; alpha level of coefficient: <0.0001

95th-percentile TTI = exp[0.07812 (d/c)_crit]   (15)
RMSE = 21.8%; alpha level of coefficient: <0.0001

80th-percentile TTI = exp[0.02612 (d/c)_crit]   (16)
RMSE = 9.2%; alpha level of coefficient: <0.0001

50th-percentile TTI = exp[0.01134 (d/c)_crit]   (17)
RMSE = 21.8%; alpha level of coefficient: <0.0001

10th-percentile TTI = exp[0.00389 (d/c)_crit]   (18)
RMSE = 5.1%; alpha level of coefficient: <0.0016

Weekday

Mean TTI = exp[0.00949 (d/c)_average + 0.00067 ILHL]   (19)
RMSE = 29.3%; alpha levels of coefficients: <0.0001, 0.0051

99th-percentile TTI = exp[0.07028 (d/c)_average + 0.00222 ILHL]   (20)
RMSE = 38.9%; alpha levels of coefficients: <0.0001, 0.0261

95th-percentile TTI = exp[0.03632 (d/c)_average + 0.00282 ILHL]   (21)
RMSE = 31.8%; alpha levels of coefficients: <0.0001, 0.0007

80th-percentile TTI = exp[0.00842 (d/c)_average + 0.00117 ILHL]   (22)
RMSE = 14.7%; alpha levels of coefficients: 0.0004, 0.0023

50th-percentile TTI = exp[0.0021 (d/c)_average]   (23)
RMSE = 4.7%; alpha level of coefficient: <0.0001

10th-percentile TTI = exp[0.00047 (d/c)_average]   (24)
RMSE = 2.0%; alpha level of coefficient: 0.0121

L03 Data-Poor Equations, Appendix H of Final Report

Mean TTI = 1.0274 × (recurring mean TTI)^1.2204   (25)

More work remains to be done to make this adjustment more sensitive to the effect of disruptions. Revised section-level equations are as follows:

95th-percentile TTI = 1 + 3.6700 ln(mean TTI)   (26)

90th-percentile TTI = 1 + 2.7809 ln(mean TTI)   (27)

80th-percentile TTI = 1 + 2.1406 ln(mean TTI)   (28)

StdDevTTI = 0.71 (mean TTI − 1)^0.56   (29)

PctTripsOnTime50mph = e^(−2.0570 [mean TTI − 1])   (30)

PctTripsOnTime45mph = e^(−1.5115 [mean TTI − 1])   (31)

PctTripsOnTime30mph = 0.333 + 0.672 / (1 + e^(5.0366 [mean TTI − 1.8256]))   (32)

L03 Data-Poor Equations, Chapter 7 of Final Report

95th-percentile TTI = (mean TTI)^1.8834   (33)
RMSE = 15.7%; alpha level of coefficient: <0.0001

90th-percentile TTI = (mean TTI)^1.6424   (34)
RMSE = 9.4%; alpha level of coefficient: <0.0001

80th-percentile TTI = (mean TTI)^1.365   (35)
RMSE = 4.5%; alpha level of coefficient: <0.0001

Median TTI = (mean TTI)^0.8601   (36)
RMSE = 6.3%; alpha level of coefficient: <0.0001

10th-percentile TTI = (mean TTI)^0.1524   (37)
RMSE = 5.4%; alpha level of coefficient: <0.0001

PctTripsOnTime10 = 1 − 0.4396 (mean TTI − 1)^0.4361   (38)
RMSE = 8.4%
where PctTripsOnTime10 is the percentage of trips that occur below the threshold of 1.1 × median TTI.

PctTripsOnTime25 = 1 − 0.2861 (mean TTI − 1)^0.5251   (39)
RMSE = 7.5%
where PctTripsOnTime25 is the percentage of trips that occur below the threshold of 1.25 × median TTI.

PctTripsOnTime50mph = 1 − 0.8985 (mean TTI − 1)^0.6387   (40)
RMSE = 18.0%
where PctTripsOnTime50mph is the percentage of trips that occur at space mean speeds above the threshold of 50 mph.

PctTripsOnTime45mph = 1 − 0.8203 (mean TTI − 1)^0.7692   (41)
RMSE = 14.0%

where PctTripsOnTime45mph is the percentage of trips that occur at space mean speeds above the threshold of 45 mph.

PctTripsOnTime30mph = 1 − 0.4139 (mean TTI − 1)^1.5527   (42)
RMSE = 4.4%
where PctTripsOnTime30mph is the percentage of trips that occur at space mean speeds above the threshold of 30 mph.

Standard deviation = 0.6182 (mean TTI − 1)^0.5404   (43)
R² = 0.781; alpha level of coefficient: <0.0001

L07 Model Equations, Chapter 4 of Final Report

TTI_n = TTI_NP,n × e^(c_n R05″ + d_n S01″)   for D/C ≤ 0.8   (44)

For D/C > 0.8, Equation 44 instead computes TTI_n as the frequency-weighted combination of the nonprecipitation, rain, and snow conditions,

TTI_n = (N_NP / N_days) TTI_NP,n + (R05″ / N_days) TTI_rain,n + (S01″ / N_days) TTI_snow,n   for D/C > 0.8

in which the rain and snow terms (TTI_rain,n and TTI_snow,n) adjust the nonprecipitation index using the free-flow speed V_FF and the coefficients c1_n, c2_n and d1_n, d2_n, respectively; the full form of these terms is given in Chapter 4 of the L07 final report.

where
TTI_n = the predicted nth-percentile travel time index;
TTI_NP,n = the nonprecipitation portion of TTI_n = e^(a_n D/C + b_n LHL);
LHL = lane-hours lost due to incidents and work zones (see L07 final report, Chapter 4);
D/C = demand-to-capacity ratio (see L07 final report, Section 4.2.2);
R05″ = number of hours in the time slice with rain exceeding 0.05 in. (see L07 final report, Chapter 4);
S01″ = number of hours in the time slice with snow exceeding 0.01 in. (see L07 final report, Chapter 4);
N_days = number of hours in the time slice (365);
N_NP = number of hours in the time slice with no precipitation = N_days − R05″ − S01″;
V_FF = free-flow speed on the segment, mph;
a_n, b_n = nth-percentile coefficients for the nonprecipitation components (D/C and LHL) (see L07 final report, Table 4.5);
c_n, d_n = nth-percentile coefficients for the rain and snow components, respectively (D/C ≤ 0.8) (see L07 final report, Table 4.5);
c1_n, c2_n = nth-percentile coefficients for the rain component (D/C > 0.8) (see L07 final report, Table 4.5); and
d1_n, d2_n = nth-percentile coefficients for the snow component (D/C > 0.8) (see L07 final report, Table 4.5).

For the D/C ≤ 0.8 models, the four coefficients (a_n, b_n, c_n, d_n) were developed as continuous functions of the TTI percentile (n), allowing prediction of any percentile value (the entire cumulative TTI curve), not just the five percentiles shown in Table A.5. These coefficient functions are built with subcoefficients, as shown in the equation below (with values in Table A.6):

coeff_n = w·n + x·y^(z(n − 1))   (45)

where
coeff_n = one of the four coefficients in the TTI_n formula (a_n, b_n, c_n, or d_n);
n = percentile (scaled between 0 and 1.0); and
w, x, y, z = subcoefficients (shown in Table A.6).

Table A.5. TTI Prediction Model Coefficients

                          D/C ≤ 0.8 (a)                               D/C > 0.8
n (percentile)   a_n       b_n       c_n       d_n       a_n       b_n       c1_n    c2_n     d1_n    d2_n
10               0.01400   0.00099   0.00015   0.00037   0.07643   0.00405   1.364   -28.34   0.178   15.55
50               0.07000   0.00495   0.00075   0.00184   0.29097   0.01380   0.966    -6.74   0.345    3.27
80               0.11214   0.00793   0.00120   0.00310   0.52013   0.01544   0.630     6.89   0.233    5.24
95               0.19763   0.01557   0.00197   0.01056   0.63071   0.01219   0.639     5.04   0.286    1.67
99               0.47282   0.04170   0.00300   0.02293   1.13062   0.01242   0.607     5.27   0.341   -0.55

(a) Coefficients for D/C ≤ 0.8 are continuous functions of n; see the text above for more description.

Table A.6. Subcoefficient Values for TTI Prediction Model (D/C ≤ 0.8)

coeff_n      w         x         y     z
a_n          0.14      0.504     96    9
b_n          0.0099    0.0481    96    9
c_n          0.00149   0.0197    68    6
d_n          0.00367   0.0248    36    7
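Because the continuous-coefficient form makes the low-demand L07 model easy to apply at any percentile, a short computational sketch may be useful. The Python code below evaluates Equation 45 and the D/C ≤ 0.8 branch of Equation 44 as reconstructed above; the subcoefficients are those of Table A.6, and the example inputs (D/C, LHL, and hours of rain and snow) are arbitrary illustrative values rather than data from any study section.

# Sketch: evaluating the L07 low-demand (D/C <= 0.8) reliability model using the
# continuous coefficient functions of Equation 45 and the subcoefficients of
# Table A.6. The example input values are arbitrary; the equation forms follow
# the reconstruction given above.
import math

SUBCOEFFICIENTS = {          # w, x, y, z from Table A.6
    'a': (0.14,    0.504,  96, 9),
    'b': (0.0099,  0.0481, 96, 9),
    'c': (0.00149, 0.0197, 68, 6),
    'd': (0.00367, 0.0248, 36, 7),
}

def coeff(name, n):
    """Equation 45: coeff_n = w*n + x*y**(z*(n - 1)), with n scaled 0-1."""
    w, x, y, z = SUBCOEFFICIENTS[name]
    return w * n + x * y ** (z * (n - 1.0))

def tti_percentile(n, dc, lhl, rain05_hrs, snow01_hrs):
    """D/C <= 0.8 branch of Equation 44: TTI_n = TTI_NP,n * exp(c_n*R05 + d_n*S01)."""
    tti_np = math.exp(coeff('a', n) * dc + coeff('b', n) * lhl)
    return tti_np * math.exp(coeff('c', n) * rain05_hrs + coeff('d', n) * snow01_hrs)

# Example: the 95th-percentile TTI for an illustrative section. As a check,
# coeff('a', 0.95) reproduces the tabulated a_95 = 0.19763 to within rounding.
print(round(coeff('a', 0.95), 5))
print(tti_percentile(0.95, dc=0.7, lhl=2.0, rain05_hrs=40, snow01_hrs=5))

The same two functions can be called at any percentile between 0 and 1, which is the practical benefit of the continuous-coefficient formulation over the five tabulated percentiles.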
