Pilot Testing of SHRP 2 Reliability Data and Analytical Products: Washington (2014)

CHAPTER 7: Pilot Testing and Analysis on SHRP 2 L08 Product

7.1 Introduction

SHRP 2 L08 develops methods for incorporating travel time reliability into the HCM's analytical procedures. A guide provides step-by-step processes for predicting travel time reliability for freeway and urban street facilities. The methodology is built around the nonrecurrent congestion factors that cause travel time unreliability. A scenario generator accepts user input on the specifics of each scenario (e.g., weather, time of day, lane closure, and incident duration), so that the HCM's full range of performance measures can be generated and the impacts of variability on facility performance over the course of a year can be estimated. Excel-based HCM computational engines (FREEVAL for freeways and STREETVAL for urban streets) automate the generation of reliability scenarios and the calculation of reliability results. Figure 7.1 illustrates the components of the methodology developed in SHRP 2 L08.

Figure 7.1. Methodology components in SHRP 2 L08 (Kittelson & Associates, Inc. 2013).

7.2 Tool Operability

Both of the L08 reliability tools, STREETVAL and FREEVAL, were tested on Windows 7 and Windows 8 operating systems as well as on a Mac computer running the most current operating system, OS X 10.9.

The specifications of the computers tested—operating system, system type, and version of MS Office installed—are listed in Table 7.1.

Table 7.1. Specifications of Computers Used in Installation Tests
Operating System     Windows 8         Windows 7         Windows 7         OS X 10.9
System type          64-bit            32-bit            64-bit            N/A
MS Office version    MS Office 2010    MS Office 2010    MS Office 2010    Office 2011 for Mac

Both STREETVAL and FREEVAL ran successfully on the Windows 7 operating system for both the 32- and 64-bit system types. When attempting to run STREETVAL on Windows 8, the program returned the error message shown in Figure 7.2. FREEVAL, on the other hand, ran on Windows 8 with no problems.

Figure 7.2. Compilation error message for Windows 8 test for STREETVAL.

Neither STREETVAL nor FREEVAL was able to run on the Mac computer. When attempting to run FREEVAL, the interface was responsive, enabling the user to enter the name and general project information for Step 1; however, when the user progressed to Step 2, the program would crash. The results of running STREETVAL were equally disappointing: the Urban Streets Computational Engine (USCE) macro buttons were unresponsive to the user's actions. The research team believes these errors stem from compatibility issues between the Mac operating system, which is UNIX based, and the Windows operating system for which the software was created. Given that the vast majority of computers used today are Windows based, this incompatibility is not a major concern.

7.3 FREEVAL Introduction and Interface

Learning to use the FREEVAL tool is challenging because of the complexity of the tool itself and the lack of clear instruction on where help information can be obtained. Although a FREEVAL user manual exists, it assumes knowledge drawn from several other chapters of the HCM, which may not be available when using the tool. Use of the tool can be broken down into five steps that a user must follow to conduct a reliability assessment of a freeway section:

• Step 1: Enter project summary information;
• Step 2: Create seed file;
• Step 3: Manage scenarios;
• Step 4: Create FREEVAL input file; and
• Step 5: Generate scenarios and results.

Step 1 is straightforward; the user enters his or her name and gives a brief summary of the project for informational purposes. In Step 2, the user must enter the study period, start and end times of the reliability reporting period, the demand seed day, the number of HCM segments, terrain type, and whether there is ramp meter control in the study section. It should be noted that when specifying the number of HCM segments, the user must select three or more for the tool to work. If the user selects two segments (as shown in Figure 7.3), the program will appear to run normally; however, once the user reaches the last step, an error message will appear and the user will have to start over. Also, if the user forgets to specify the ramp meter control (as shown in Figure 7.4), the program does not warn the user that something is wrong until the last step. Fixing these issues would make the tool much more user friendly.

Figure 7.3. FREEVAL segment number selection.

To finish creating the seed file in Step 2, the user must enter the hourly demand volumes for each 15-minute increment across the entire study period of the specified seed day. In addition to demand data, the user must also specify the percentage of trucks on the study section and the length of each HCM segment. The demands must be entered manually in multiple Excel spreadsheets; there is one sheet for every 15-minute increment in the study period. If the specified study period is 6 hours, the user must input data into 24 separate spreadsheets, which can be very time consuming. Consolidating these multiple spreadsheets would streamline the data entry process and allow the user to copy and paste demand values into the form.
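As a rough illustration of the consolidation suggested above, the sketch below (hypothetical file and sheet names; assumes the pandas and openpyxl libraries, not the actual FREEVAL workbook layout) reads one sheet per 15-minute increment from a seed workbook and stacks them into a single long-format table that could be filled by copy and paste:

```python
# A minimal sketch of consolidating per-interval demand sheets into one table.
# File name, sheet naming pattern, and column layout are assumptions, not the
# actual FREEVAL workbook structure.
import pandas as pd

SEED_FILE = "freeval_seed.xlsx"   # hypothetical seed workbook
N_INTERVALS = 24                  # e.g., a 6-hour study period at 15-minute increments

frames = []
for i in range(N_INTERVALS):
    # Assume one sheet per 15-minute increment, named "Demand_1" ... "Demand_24"
    sheet = pd.read_excel(SEED_FILE, sheet_name=f"Demand_{i + 1}")
    sheet["interval"] = i + 1     # tag rows with their 15-minute interval
    frames.append(sheet)

# One long-format table: segment-level demands for every interval in one place
demand = pd.concat(frames, ignore_index=True)
demand.to_csv("consolidated_demand.csv", index=False)
```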

Figure 7.4. FREEVAL ramp metering option selection.

Step 3: In this step, the user opens a separate macro program, the scenario generator, and loads into it the seed file created in Step 2. Next, the user must enter the demand ratios for the different times of the year to describe how daily demand fluctuates across the year (as shown in Figure 7.5), and must specify the number of demand patterns to describe how travel behavior changes throughout the year and between days of the week. Weather data must also be entered: the user may manually enter the probability of occurrence of the 11 different weather events, if known, or use the weather data generated from the built-in historical weather database, which includes 10 years of weather data from a multitude of U.S. cities. Finally, the user must enter the incident data. This part of the data entry is very flexible: it can accept detailed inputs in data-rich areas, and it also includes a prediction model that will estimate incident probabilities if crash data are unavailable.
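Where continuous count data are available, the month and day-of-week demand ratios can be approximated as the ratio of the average daily volume in each category to the overall average daily volume. The sketch below is one simple way to do this (hypothetical file and column names; assumes pandas), not the procedure prescribed by FREEVAL:

```python
# A minimal sketch of deriving demand ratios (demand multipliers) from a year of
# daily volume counts. File and column names ("date", "volume") are assumptions.
import pandas as pd

daily = pd.read_csv("daily_volumes.csv", parse_dates=["date"])
overall_mean = daily["volume"].mean()

# Ratio of each month's average daily volume to the annual average daily volume
month_ratio = daily.groupby(daily["date"].dt.month)["volume"].mean() / overall_mean

# Ratio of each day-of-week's average daily volume to the annual average
dow_ratio = daily.groupby(daily["date"].dt.day_name())["volume"].mean() / overall_mean

print(month_ratio.round(2))
print(dow_ratio.round(2))
```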

Figure 7.5. FREEVAL demand multiplier.

Step 4: The user selects a minimum probability threshold to eliminate unwanted low-probability scenarios and generates the list of scenarios. After generating the list of all scenarios, the user can change the probability threshold to include more or fewer scenarios or, if satisfied, click "Create FREEVAL input file" to create the input file.

Step 5: The final step involves loading the input file created in Step 4 back into the original FREEVAL macro and evaluating the scenarios by clicking "Generate scenarios." This part of the program takes the longest to complete; each scenario in the input file may take 20–60 seconds to evaluate.

The primary issues identified with tool use were those described in Step 2; warning messages displayed by the program would alert users to their mistakes so that they could fix them immediately. Also, consolidating the demand input sheets would definitely streamline the data entry process, which can easily take several hours depending on the length of the study period and the number of HCM segments.

One issue not addressed in any of the literature regarding FREEVAL is how long a study section should be for a particular reliability test. It would seem intuitive that for urban areas with more access ramps, longer study sections would be preferred, and for more rural areas, a shorter study section might suffice.

More guidance on selecting an appropriate study section length would be helpful. In addition, the software does not address causes of congestion that occur outside the study section; a weaving section located upstream of the test site might be a source of recurring congestion yet would be ignored in an analysis. Because of this, the results of the reliability test may be skewed.

7.4 Performance Test for FREEVAL

Tests were completed to determine the accuracy of the FREEVAL reliability software by comparing the travel time reliability output by the software to the actual travel time reliability computed from historical dual-loop detector data. The tests were conducted for two separate study locations in Seattle, Washington, which are circled on the map (Figure 7.6). The green circle shows the I-5 study site, which runs from the Northgate Mall to Shoreline (roughly 3 miles long), and the red circle indicates the I-405 study site (roughly 2 miles long), which is located just outside the city in a less urban environment.

Map data © 2014 Google
Figure 7.6. Map of two study locations (pin located at Northgate Mall).

7.4.1 Test 1: I-405 Facility, Seattle (Mileposts 27–29) on Test Site B

The I-405 facility was selected as a study location because it has relatively good dual-loop detector data, and it is also known to be one of the most congested facilities in Washington State, which makes it more interesting to study from a reliability point of view. The chosen study location is about 2 miles long and stretches from milepost 27 to milepost 29 on I-405. Volume data were obtained from the loop detectors to satisfy the demand data requirements of the software, and the demand ratios were calculated accordingly. The supplied default values were used for the demand profile data. The Highway Economic Requirements System (HERS) prediction model, built into the software, was used to predict the frequency of incidents along the facility. The FREEVAL software generated a total of 454 scenarios for the analysis, including 360 different incident scenarios and 94 different weather scenarios. The details of this reliability test, including the study period and the reliability reporting period, are given in Table 7.2.

Table 7.2. Summary of Reliability Test on I-405
Reliability Test 1 Summary
Study section                  Interstate 405 (miles 27–29)
Study period                   2:00 p.m.–8:00 p.m.
Reliability reporting period   All weekdays in 2011 (~260 days)

The reliability outputs of the software were compared to the ground truth reliability for consistency. The ground truth reliability was calculated using speed data collected from dual-loop detectors located on the facility. A sample of the dual-loop data is shown in Figure 7.7; a flag value of 0 indicates that the loop is malfunctioning. Comparisons were conducted only with data obtained from loop stations in good condition. The WSDOT Gray Notebook procedure for calculating travel time reliability was used to determine the distribution of travel times for the facility. Figure 7.8 illustrates the calculated distribution of travel times along the 2-mile facility. This is considered the ground truth reliability.
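As a rough illustration only (a simplified stand-in, not the actual WSDOT Gray Notebook procedure, whose details are not reproduced here), the sketch below estimates facility travel times from 5-minute loop-detector speeds and summarizes them with the reliability measures reported later in this chapter. Station names, segment lengths, free-flow travel time, and column names are assumptions:

```python
# A simplified stand-in for the ground-truth reliability calculation: estimate a
# facility travel time for each 5-minute interval from spot speeds, then summarize.
# Station list, segment lengths (miles), free-flow time, and column names are assumptions.
import pandas as pd

SEGMENT_MILES = {"ES-405:27": 0.7, "ES-405:28": 0.7, "ES-405:29": 0.6}  # ~2 miles total
FREE_FLOW_TT = 2.0  # minutes, assumed free-flow travel time for the facility

loops = pd.read_csv("i405_loops_5min.csv", parse_dates=["timestamp"])
loops = loops[loops["flag"] != 0]  # drop records from malfunctioning loops

# Per-interval segment travel time (min) = 60 * length / speed, summed over segments
# (assumes each interval has a record from every station)
loops["seg_tt_min"] = 60.0 * loops["station"].map(SEGMENT_MILES) / loops["speed_mph"]
facility_tt = loops.groupby("timestamp")["seg_tt_min"].sum()

tti = facility_tt / FREE_FLOW_TT
print("Mean TTI:", round(tti.mean(), 2))
print("80th percentile TTI:", round(tti.quantile(0.80), 2))
print("95th percentile TTI:", round(tti.quantile(0.95), 2))

# Semi-standard deviation: deviation measured about the free-flow travel time
semi_std = ((facility_tt - FREE_FLOW_TT).pow(2).mean()) ** 0.5
print("Semi-standard deviation (min):", round(semi_std, 2))
```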

Figure 7.7. Sample of dual-loop data on I-405.

Figure 7.8. Distribution of travel times for I-405 study site.

Figure 7.9 compares the cumulative distributions between the ground truth data (b) and the generated software output (a).

Figure 7.9. Comparison of cumulative distributions for TTI on I-405: (a) FREEVAL output; (b) dual-loop data.

Table 7.3 clearly shows that the FREEVAL estimate of reliability tends to be overly optimistic; its TTI values are almost all smaller than the ground truth. The semi-standard deviation (the standard deviation taken about the free-flow travel time instead of the mean) estimated by FREEVAL is more or less the same as the ground truth value, while the 80th percentile and 95th percentile TTI values for the ground truth data are larger than the FREEVAL outputs.

Table 7.3. Performance Measure Comparison
Performance Measure    FREEVAL     Ground Truth
Mean TTI               1.11        1.16
50th percentile TTI    1.08        1.03
80th percentile TTI    1.14        1.30
95th percentile TTI    1.25        1.55
Semi-standard dev.     0.45 min    0.46 min

7.4.2 Test 2: I-5 Facility, Seattle (Mileposts 173–176) on Test Site A

The second study facility is I-5 near the Northgate Mall. This site was chosen because it is a well-known congested section of roadway, and it has a high density of access ramps, which makes it different from the I-405 test site, which had no access ramps. The on-ramps and off-ramps for the Northgate Mall are located along the facility, and the mall traffic causes this section of roadway to be rather chaotic.

Volume data were collected in a manner similar to Test 1 in order to satisfy the demand data requirements of FREEVAL. The incident data were predicted using the HERS model, and the default values were used for the demand profile values. A summary of this test is shown in Table 7.4.

Table 7.4. Summary of Reliability Test on I-5
Reliability Test 2 Summary
Study section                  Interstate 5 (miles 173–176)
Study period                   2:00 p.m.–8:00 p.m.
Reliability reporting period   All weekdays in 2012 (~260 days)

The ground truth reliability was calculated as in Test 1, using dual-loop detector data for the facility travel time reliability calculations. The distribution of travel times calculated using the WSDOT Gray Notebook procedure for the approximately 3-mile-long study section is shown in Figure 7.10.

Figure 7.10. Distribution of travel times for I-5 study site.

Figure 7.11 compares the cumulative probability distributions of the ground truth data (b) and the generated software output (a). Similar to the I-405 test results, FREEVAL tends to be optimistic when estimating travel time reliability and often predicts smaller TTI values than the ground truth data, as shown in Table 7.5. The exception is the 50th percentile TTI, for which the ground truth value is smaller. FREEVAL also predicts much smaller variability in travel times, as noted by the difference in the semi-standard deviation values.

Figure 7.11. Comparison of cumulative distributions for TTI on I-5: (a) FREEVAL output; (b) dual-loop data.

Table 7.5. Performance Measure Comparison on I-5
Performance Measure    FREEVAL     Ground Truth
Mean TTI               1.12        1.25
50th percentile TTI    1.06        1.00
80th percentile TTI    1.10        1.53
95th percentile TTI    1.23        2.13
Semi-standard dev.     0.19 min    1.97 min

7.5 Precision Testing for FREEVAL

One of the primary steps in completing a reliability analysis with FREEVAL is inputting the demand data for the specified seed day. A convenient facet of the seed day is that it requires the user to enter data for only one day rather than for every day in the reliability reporting period. The caveat is that, depending on the particular traffic demand occurring on the seed day, the results of FREEVAL may change drastically. To address this issue, it is relevant to determine the sensitivity of a given test run to the selection of the seed day. An additional test run was completed on the I-405 study site using demand data from a new seed day while keeping all other data inputs the same. The TTI curves from each of these tests are shown in Figure 7.12 for comparison.

Figure 7.12. Comparison of cumulative distributions for TTI on different seed days: (a) 4-18-2012, Wednesday; (b) 2-22-2012, Tuesday.

A comparison of the reliability performance measures output for each of the two trial runs is shown in Table 7.6.

Table 7.6. Test Result Comparison between Seed Days
Performance Measures         Run 1 (4-18-2012, Wednesday)    Run 2 (2-22-2012, Tuesday)
Mean TTI                     1.11                            1.12
50th percentile TTI          1.08                            1.08
80th percentile TTI          1.14                            1.15
95th percentile TTI (PTI)    1.25                            1.27
Misery index                 1.90                            1.96
Semi-standard deviation      0.45                            0.45
Reliability rating           95.75%                          90.30%
Percent VMT at TTI > 2       1.04%                           1.27%

Figure 7.12 and Table 7.6 show that the difference in MOEs between the two trial runs is not large; nonetheless, the selection of the seed day can affect the results. It is therefore not sufficient to complete only one trial run, which may grossly misrepresent the actual reliability of a facility. Multiple runs must be completed, and the results must be analyzed statistically in order to be confident in the results of a FREEVAL reliability test.
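One simple way to carry out that statistical check is to tabulate each MOE across several seed-day runs and examine its spread. The sketch below is one such summary, illustrated with the two runs from Table 7.6; it is not a procedure prescribed by the L08 documents:

```python
# A minimal sketch of summarizing FREEVAL MOEs across multiple seed-day runs.
# The values below are the two runs from Table 7.6; additional runs would be
# appended to each list in the same way.
import statistics

runs = {
    "Mean TTI":            [1.11, 1.12],
    "80th percentile TTI": [1.14, 1.15],
    "95th percentile TTI": [1.25, 1.27],
    "Misery index":        [1.90, 1.96],
}

for moe, values in runs.items():
    mean = statistics.mean(values)
    spread = max(values) - min(values)
    # Relative spread flags MOEs that are sensitive to the choice of seed day
    print(f"{moe:<22} mean={mean:.2f}  range={spread:.2f}  ({100 * spread / mean:.1f}% of mean)")
```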

The only instructions given to the user for selecting the seed day are that the seed day should fall within the reliability reporting period and that it should be a day on which no special events, such as major sports games, are occurring. There is no indication that multiple runs should be completed using demand data from several different seed days in order for a test to be reliable. This should be clearly addressed in the L08 documents.

7.6 Test Conclusions for FREEVAL

In summary, although it is impossible to evaluate the accuracy of FREEVAL based on the results of only two tests, it is fair to say that the reliability estimates of the software seem reasonable compared to the ground truth reliability determined from the dual-loop detector data. Overall, FREEVAL tends to be overoptimistic in its estimates and produced consistently smaller TTI values and smaller semi-standard deviations.

7.7 STREETVAL Introduction and Interface

The Urban Streets Reliability Engine tool (STREETVAL) was developed to assess long-term travel time reliability along a signalized arterial. To predict long-term travel time reliability, two specific methodologies are implemented; they are referred to in the literature as the reliability methodology and the HCM methodology. These two methodologies are described briefly below; a more detailed description can be found in the STREETVAL user guide.

1. The reliability methodology uses a random statistical procedure, guided by an input base data set, to simulate the traffic demand, weather, and incident conditions over each of many small time periods (analysis periods) within the study period and for each day in the reliability reporting period. This process is also referred to in the literature as the scenario generation procedure (a toy sketch of this idea appears after the list of actions below).

2. The HCM methodology predicts the travel times on the specified corridor, given the predetermined traffic, weather, and incident conditions from the reliability methodology, for each of the analysis periods within the study period and for each day in the reliability reporting period. Note that this methodology includes procedures for estimating travel times during work zones and special events.

The flow chart shown in Figure 7.13 illustrates how these two methodologies interact to perform a reliability assessment. To further elaborate on the STREETVAL reliability procedure from a software analyst's perspective, use of the tool has been broken down into five main actions:

Action 1. Selection of project purpose, location, and scope;
Action 2. HCM input data file creation;
Action 3. Scenario generation;
Action 4. Scenario evaluation; and
Action 5. Result interpretation.
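The toy sketch below illustrates the scenario-generation idea only: for every analysis period on every day in the reliability reporting period, a weather state, an incident state, and a demand adjustment are drawn at random. The probabilities, categories, and demand range are invented for illustration and are not the L08 defaults:

```python
# A toy sketch of the scenario-generation idea (assumed probabilities, not L08 defaults):
# each analysis period is assigned a sampled weather state, incident state, and demand factor.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

WEATHER = {"clear": 0.80, "rain": 0.15, "snow": 0.05}          # assumed probabilities
INCIDENT = {"none": 0.93, "shoulder": 0.04, "one-lane": 0.03}  # assumed probabilities

def sample(states):
    return random.choices(list(states), weights=list(states.values()))[0]

def generate_scenarios(days=260, periods_per_day=20):
    scenarios = []
    for day in range(days):
        demand_factor = random.uniform(0.9, 1.1)  # assumed day-to-day demand variation
        for period in range(periods_per_day):
            scenarios.append({
                "day": day,
                "period": period,
                "weather": sample(WEATHER),
                "incident": sample(INCIDENT),
                "demand_factor": round(demand_factor, 3),
            })
    return scenarios

print(len(generate_scenarios()), "scenarios generated")  # 260 days x 20 periods = 5200
```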

7.7.1 Step 1: Project Purpose

Before beginning an analysis, it is recommended that the user have a clear idea of what is to be gained from the analysis, namely the project purpose. There are many possible motivations for using STREETVAL, which are discussed in the literature. These include the following:

• Evaluating potential improvements (e.g., signal retiming, infrastructure improvements);
• Determining key sources of travel time unreliability; and
• Quantifying problems.

Figure 7.13. STREETVAL methodology flowchart.

A manageable project scope must be selected, consisting of the project study site and the temporal scope. In selecting the study location, the user is constrained in the length of roadway that can be evaluated: the study location must contain no more than nine signalized intersections (eight analysis segments). For the temporal scope, the user must specify three parameters: analysis period, study period, and reliability reporting period. These parameters are briefly defined below.

7.7.1.1 Study Period

It is recommended that the study period for a given project be no more than 6 hours and no less than 1 full hour. The study period should be selected such that the first analysis period within the study period is uncongested.

7.7.1.2 Analysis Period

The analysis period essentially defines the resolution of the analysis that will be performed by the software. This period can range from 15 minutes to 1 hour. For operational analyses, however, a 15-minute period is recommended. The selection of a longer period may cause incident and weather events lasting only a short time (such as a brief hard hailstorm) to be ignored.

7.7.1.3 Reliability Reporting Period

The reliability reporting period should be relatively long (not less than 200 days). The analyst may choose which days of the week to include (e.g., exclude weekends or all Mondays).

7.7.2 Step 2: HCM Input Data File Creation

This step requires the user to enter the required input data into the USCE program (see the screen shot in Figure 7.14), which is an Excel macro, in order to create a .txt input file that can be read by the Urban Streets Reliability Engine (USRE). The necessary input data include the following:

• Demand data for each intersection and access point along the study section;
• Study section roadway geometric data; and
• Signal timing data for each intersection.

These sources of data are discussed further in the following section. The USCE divides the study location into analysis segments, bounded on either end by a signalized intersection, and allocates an Excel sheet for each analysis segment as well as one sheet for the first segment intersection (as shown in Figure 7.15). The three previously listed types of data must be entered for each individual segment along the study section.
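Before building the input file in Step 2, it may help to sanity-check the chosen temporal scope against the guidance in Sections 7.7.1.1 through 7.7.1.3. The sketch below is purely illustrative (the check itself is not part of STREETVAL); the thresholds come from those sections:

```python
# A minimal pre-check of a proposed temporal scope against the guidance in
# Sections 7.7.1.1-7.7.1.3. The function is illustrative and not part of STREETVAL.
def check_temporal_scope(study_period_h, analysis_period_h, reporting_days):
    problems = []
    if not 1 <= study_period_h <= 6:
        problems.append("study period should be between 1 and 6 hours")
    if not 0.25 <= analysis_period_h <= 1:
        problems.append("analysis period should be between 15 minutes and 1 hour")
    if reporting_days < 200:
        problems.append("reliability reporting period should be at least 200 days")
    return problems

# Example: the scope used later in the SR 522 test (5 h, 0.25 h, 228 days)
issues = check_temporal_scope(5, 0.25, 228)
print("OK" if not issues else issues)
```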

Figure 7.14. Urban Streets Computational Engine (USCE).

Figure 7.15. STREETVAL segment schematic.

After entering all the necessary data, the user writes the data to a file, which is saved to a user-specified directory. He or she is then prepared for the next step.

7.7.3 Step 3: Scenario Generation

The scenario generation step is carried out using the USRE, which is also an Excel macro program.

Figure 7.16. USRE 2010 HCM.

The user must first upload the HCM input file created in Step 2, specify the time and date of the seed demand data given in the input file (1 hour of collected volume data), and enter the three temporal scope variables for the project. These values are entered in the "Set Up" layer of the tool. In addition, the user supplies crash data; peak hour factors (PHF) for traffic (if using 15-minute analysis periods and wishing to randomize demand within them); and work zone and special event input files if they are deemed necessary and relevant. As previously mentioned, scenario generation is a stochastic process, and it relies on the selection of user-defined seed values. Three random seed values must be defined, one for each of the three stochastic variables: weather, incident, and demand. It is the combination of the weather, demand, and incidents occurring during a given analysis period that makes up a given scenario.

After coding in the necessary inputs, the scenarios are generated by clicking the "Start Calculations" button. The generation process takes several minutes to complete and varies depending on the number of scenarios being evaluated. This process generates one scenario per analysis period in the reliability reporting period. For example, given a 0.25-hour analysis period, a 3-hour study period, and a 365-day reliability reporting period, there will be 3/0.25 × 365 = 4,380 scenarios. For each scenario generated, one 8-KB .txt file is created and saved to the directory. It should be noted that these files can quickly become a nuisance, as a user may want to run several trials for a given test with different random seeds; the files add up quickly and take up hard-disk space (4,380 files/test × 8 KB/file ≈ 35 MB/test). An improvement would be to generate one output file for all the scenarios in a test. A screen shot of the tool illustrating the main input variables is shown in Figure 7.17. Figure 7.18 shows a supplemental input screen for random seed numbers and PHF.

Figure 7.17. Principal inputs for scenario generation.

Figure 7.18. Random seed numbers and PHF (peak hour factor).
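A simple workaround for the file-proliferation issue noted above is to merge the per-scenario text files after a run. The sketch below assumes a hypothetical output directory and file naming pattern ("scenario_*.txt"), not the actual USRE output names:

```python
# A minimal sketch of the consolidation suggested above: merge the per-scenario
# .txt files from one test into a single file. The directory layout and file
# naming pattern are assumptions, not the actual USRE output format.
from pathlib import Path

out_dir = Path("usre_output")          # hypothetical output directory
merged = Path("usre_output_merged.txt")

scenario_files = sorted(out_dir.glob("scenario_*.txt"))
with merged.open("w") as out:
    for scenario_file in scenario_files:
        out.write(f"### {scenario_file.name}\n")  # record which scenario each block came from
        out.write(scenario_file.read_text())
        out.write("\n")

print(f"Merged {len(scenario_files)} scenario files into {merged}")
```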

7.7.4 Step 4: Scenario Evaluation

This is the start of the HCM methodology, and it consists of evaluating the scenarios that were previously generated. To evaluate each scenario, the scenario engine is used. The scenario engine, which has not yet been mentioned in this report, is a .zip file containing the operational procedures, based on previously conducted research, used to estimate travel time performance measures for a given scenario. This step is the second most computationally intensive step after scenario generation and typically takes 3–6 minutes, depending on the number of scenarios being evaluated. An evaluation interval parameter gives the user the choice of evaluating either all of the generated scenarios or a subset of them. This can greatly reduce the required computation time, although at the cost of a smaller overall sample size. Figure 7.19 shows a screen shot of the scenario evaluation sheet of the USRE. Two inputs are entered on this sheet: the engine path, which is the location of the scenario engine .zip file, and the evaluation interval just discussed.

Figure 7.19. STREETVAL scenario generation.

7.7.5 Step 5: Result Interpretation

In this step, the program outputs the findings of the scenario evaluation step in an easy, user-friendly fashion. The program allows the user to choose from a list of performance measures, including:

• Travel time;
• Travel speed;
• Stopping rate;
• Through delay; and
• Total delay.

The user can also select whether they would like to see results for the entire facility or only for a particular segment. Figure 7.20 shows a screen shot of the performance summary sheet of the software. The user may select a different performance measure, direction of travel, or system component by clicking on the drop-down menu and selecting the appropriate item.

Figure 7.20. Tool testing results.

A histogram is created as a friendly visual illustration of the results, and a table summarizing certain statistical properties of the histogram, such as the average, variance, and 80th and 95th percentiles, is also displayed. A list showing the individual performance measures for each of the evaluated scenarios is also displayed and can be copied and pasted into another data file for additional analyses.

Figure 7.21 shows the output list of each scenario, its date and time, and its corresponding performance measure.

Figure 7.21. Tool results: list of performance measures for each evaluated scenario.

7.8 Overall Evaluation of Tool Interface

It is worth noting that, from an operator's perspective, this tool is far from perfect. The interface is sloppy, with many stray numbers floating in space on the spreadsheet (as shown in Figure 7.22). This is distracting from a user's point of view and undermines the integrity of the tool. Users may be unsure whether they accidentally entered these values or whether the numbers are somehow part of the program. Although this may be a small flaw compared to the overall performance, further improvements to the aesthetics of this tool should definitely be considered. Another distracting glitch was the buttons, which would shrink every time they were pressed. Figure 7.23 shows a shrunken button from the USRE tool.

Figure 7.22. Distracting floating numbers.

Figure 7.23. Malfunctioning button circled in red.

7.9 Input Data Requirements for STREETVAL

The data requirements for this tool are extensive and include demand data, incident data, signal timing data, roadway geometric data, and data on work zones and special events, if any are present during the period being analyzed (the reliability reporting period).

For the demand data requirements, the user must enter the traffic volumes for each approach of each intersection located along the study segment. In many instances, however, such a thorough data set for a given corridor does not exist, which makes any kind of retrospective analysis difficult. If no demand data for the segment exist, a traffic count study must be conducted along the corridor. In addition to the demand requirements for intersections, demand data must also be collected for each access point along the study corridor. What exactly qualifies as an access point is, however, highly subjective and based on the analyst's judgment. According to the HCM 2010, an access point is any unsignalized entryway located along a corridor that receives enough traffic volume to influence travelers along the main arterial. This raises the question of what volumes would require an analyst to define an entryway as an access point for which demand data will need to be collected. If multiple smaller access points are located along the corridor, the tool recommends combining them into one single access point that is located at the average distance of the smaller access points from the upstream intersection and that receives the combined volumes of the smaller access points.
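The aggregation rule just described (one equivalent access point at the average distance from the upstream intersection, carrying the combined volume) amounts to a short calculation. The sketch below is illustrative only; the distances and volumes are invented:

```python
# A minimal sketch of combining several minor access points into one equivalent
# access point, per the rule described above: average distance from the upstream
# intersection, combined entering volume. The example values are invented.

# (distance from upstream intersection in feet, approach volume in veh/h)
minor_access_points = [(150, 25), (320, 40), (510, 15)]

combined_distance = sum(d for d, _ in minor_access_points) / len(minor_access_points)
combined_volume = sum(v for _, v in minor_access_points)

print(f"Equivalent access point: {combined_distance:.0f} ft downstream, {combined_volume} veh/h")
# -> Equivalent access point: 327 ft downstream, 80 veh/h
```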

In most cases where access point demand data are unavailable, a traffic count study is required, and this process is labor intensive and costly to an agency. One improvement to the tool might be to provide a method for estimating access point demand along an urban arterial based on built-environment factors that are likely to be influential, such as land use type, population density, parking lot size, time of day, and distance from the central business district.

For the incident data requirements, crash frequencies must be specified for each intersection and each segment. Crashes are considered intersection-related if they occur within the bounds of the intersection itself, if they occur as a result of a queue formed by the intersection bottleneck, or if they are caused by a traffic signal controller malfunction. If an incident cannot be classified as intersection-related, it is classified as segment-related. In most cases, the cause of the incident can be used to deduce the type of crash (intersection-related or segment-related). The user manual suggests two methods to calculate the crash frequencies (the expected number of crashes at a given location, in crashes/year). The first method requires the user to have access to at least 3 years' worth of crash data; the crash frequency is then calculated as the average crash frequency over the 3 years of collected data. The second method involves using the 2010 Highway Safety Manual methodology, which is described in Chapter 12 of that manual.

Signal timing data must be acquired for each of the traffic signals located along the study corridor and are crucial in the estimation of segment-level travel times. The STREETVAL software can accommodate both pretimed control and actuated/semi-actuated control operating under coordinated conditions, where several adjacent intersections are synchronized to a master controller, or under isolated control, where adjacent intersections have no communication with one another and act as independent entities.

In addition to the previously described data types, STREETVAL also requires weather data for the given study location, including average monthly rainfall, days with rainfall greater than 0.01 inches, average monthly snowfall, and average monthly temperatures. The STREETVAL tool contains a large databank with 10 years of weather data for many prominent U.S. cities and towns. This eliminates the need to acquire weather data separately and streamlines the overall reliability testing procedure.

Before collecting and gathering these data (signal timing, demand, crash, and weather data) from multiple sources, the user must first determine the best time to collect them. The analyst must be certain that the demand data (collected for the 1-hour seed period within the study period) are collected at an appropriate time. Before this can be done, the analyst must appropriately define the temporal scope of the project. In STREETVAL, the temporal scope is defined in the same way as for FREEVAL: the user must choose the study period, analysis period, and reliability reporting period. These three parameters were defined previously.

7.10 Performance Test for STREETVAL

Test Site Location: To test the accuracy of this tool, a test was conducted on an urban arterial using real traffic data. The test location selected is a roughly 1-mile stretch of SR 522, an urban arterial located in Kenmore, Washington, just outside Seattle. This site, shown on the map in Figure 7.24, provides travelers access to both I-5 and I-405 and serves as a major route around Lake Washington for those commuting into the city from the neighboring suburbs. This particular location was selected because it is a major daily commute route for intercity travel and because of the abundance of sensor infrastructure currently installed along it, including ALPR cameras, which collect very accurate travel time data. The travel time data gathered from these cameras served as the ground truth basis for assessing the accuracy of STREETVAL.

Map data © 2014 Google
Figure 7.24. Study site location along SR 522, Kenmore, Washington.

As mentioned previously, testing the reliability of a corridor using the STREETVAL tool requires that a large amount of data first be gathered. Even though the research team has access to all of the loop detector data and live video feeds from several gantry-mounted video cameras along the study site, as well as a range of travel time data obtained from Bluetooth sensors and ALPR cameras, the demand data requirements of the tool could not be satisfied using data collected from the existing sensors. The reason is that the tool requires demand data for each movement of each intersection, and the rich sensor infrastructure installed along the study site gave the researchers complete demand data only for the main SR 522 arterial and not for the side streets.

When the research team became aware of this problem, members had one of two choices: (1) try to find another urban arterial with more complete demand data, or (2) manually collect the missing demand data for SR 522. After some debate, it was decided that SR 522 would remain the test site and that the missing data for the other intersection approaches would be collected manually. There were two primary reasons for this decision. The first is that the researchers were not able to find complete demand data for all signalized intersection approaches on any other urban arterial. The second is that SR 522 was the only known arterial in Seattle with accessible ground truth travel time data.

Volume Data Collection: Because of limited resources for the manual data collection, the originally proposed study site of roughly 4 miles in length was shrunk down to a manageable 1-mile section stretching along SR 522 from 68th Avenue to 83rd Place (see Figure 7.25).

Map data © 2014 Google
Figure 7.25. Study site location.

To satisfy the traffic volume data input requirements of the tool, 1-hour traffic volumes were simultaneously captured at all five intersections located along the study site using seven tripod-mounted video cameras for two 1-hour periods. The complexity of the intersections and the high rate of vehicle arrivals made it necessary to capture the volume data with cameras. The camera footage was later viewed at a slower, more convenient pace, and the traffic volumes for each direction were obtained. Images captured from each of the seven tripod-mounted cameras are shown in Figure 7.26. These cameras were situated so that all traffic on each individual approach could be observed.

Figure 7.26. Camera-captured images of studied intersections: (a) 68th Avenue, four-way intersection, EB/WB (upper figure) and NB/SB (lower figure) approaches; (b) 73rd Avenue, four-way intersection, EB/WB (upper figure) and NB/SB (lower figure) approaches; (c) 77th Avenue T-intersection; (d) 80th Avenue T-intersection; and (e) 83rd Place T-intersection.

To further aid the traffic counting process, a software program called Traffic Counter (shown in Figure 7.27) was developed by STAR Lab members; the program allows users to count traffic on the computer by pressing the appropriate key for a given direction. The advantage of this software is that users can touch-type the keys and thereby avoid taking their eyes off the video and risking a missed count. This was crucial because traffic often runs at or near the saturation flow rate at the start of the green phase, which requires a high level of visual attention to count.

Volume data were also collected via manual counting at all of the major access points along the study site. To aid with the data collection, a team of 10 volunteers was needed. Each volunteer was given a particular task: either manually counting cars at an access point or filming vehicles passing through an intersection using a tripod-mounted camcorder. In total, approximately 67 man-hours were spent collecting data at the site and counting vehicles from the recorded videos. This is worth mentioning because any agency planning to use this tool in the future will want to consider the potential costs of collecting the data needed to use it. The cost of 67 hours of labor is not trivial, and that is without considering the opportunity cost of sending 10 trained engineers from an agency or consulting firm to count cars.
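As a bare-bones, terminal-based illustration of the keystroke-counting idea behind Traffic Counter (this is not the STAR Lab program itself, and the key-to-movement mapping is invented):

```python
# A bare-bones illustration of keystroke-based traffic counting (not the actual
# STAR Lab Traffic Counter). The key-to-movement mapping is invented; each
# keypress followed by Enter adds one vehicle to that movement's tally.
from collections import Counter

KEYS = {"a": "EB through", "s": "EB left", "d": "WB through", "f": "WB left"}

counts = Counter()
print("Keys:", ", ".join(f"{k}={v}" for k, v in KEYS.items()), "| q to quit")

while True:
    key = input().strip().lower()
    if key == "q":
        break
    if key in KEYS:
        counts[KEYS[key]] += 1

for movement, n in counts.items():
    print(f"{movement}: {n} vehicles")
```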

Figure 7.27. Traffic Counter software user interface.

7.10.1 Weather Data

As mentioned previously, it was not necessary to gather weather data along this corridor because the tool contains a built-in weather databank with 10 years of historical weather data for many prominent cities, including Seattle.

7.10.2 Incident Data

The incident data used in this study were obtained from the WITS database. Since the tool requires that a minimum of only 3 years of incident data be collected, the researchers more than met the data requirements. After querying the database, it was determined that zero incidents had been reported along the study section since 2002. This is not surprising, given that incidents are rare events and the length of the study section was only 1 mile.

7.10.3 Traffic Signal Timing Data

Current traffic signal timing plans were obtained from WSDOT for each of the five intersections located along the study site. All five intersections operate under coordinated actuated control, which is supported by STREETVAL, and the coordination plan selection is based on the time of day. It was crucial for the study that the signal timing plans were current and that no signal retiming had occurred during the selected reliability reporting period; otherwise, the results of the test might be skewed.

In this case, the plans had not been modified since July 2012, well before the first day in the reliability reporting period.

To summarize the previous section of this report, data were collected from a myriad of sources to satisfy the requirements of STREETVAL. Complete demand data were unavailable, so a manual data collection was conducted at the test site. Despite the challenges of gathering all of the data, all of the necessary data requirements were successfully fulfilled.

7.10.4 Testing Results

Before running the software, it was first necessary to define the temporal scope of the test. The temporal scope parameters chosen are listed below:

• Analysis period: 0.25 hour;
• Study period: 7:00 a.m.–12:00 p.m.;
• Reliability reporting period: 228 days (8/16/13–3/31/14); and
• Days considered: Monday–Friday.

An analysis period of 0.25 hour was chosen because it is the shortest possible analysis period and gives the highest-resolution test result possible. STREETVAL will ignore any incident or weather event that is shorter in duration than the selected period, so choosing the shortest period minimizes the chance that an intense but brief weather event, which might impact arterial travel times, goes unnoticed. The study period was selected as a 5-hour period that overlaps the morning peak commute. There was no specific reason for selecting 5 hours other than that it is a medium length of time: not so short that it would fail to test the software's ability to predict reliability across many hours in a day, and not so long as to be excessive and irrelevant. The only constraint described in the user guide for selecting the study period is that it must include the hour of day of the specified seed volume. In this specific case, the seed volumes were manually collected for two different 1-hour periods during the same day, 10:00–11:00 a.m. and 1:00–2:00 p.m.; selecting 7:00 a.m.–12:00 p.m. allowed the research team to satisfy this constraint.

To assess the accuracy of the STREETVAL software, the software reliability outputs were compared to the ground truth reliability of the corridor, which was calculated using real historical travel time data collected from ALPRs. For this analysis, ALPR travel time data were used to approximate the ground truth travel times on the corridor. Although no current studies have physically verified the accuracy of travel times obtained using ALPR technology, it is widely accepted in the industry that these data are highly reliable. The technology has therefore been deemed a good estimator of the ground truth travel time data. The ALPR data were queried for the travel link closest to the study site, and the researchers were interested only in the data within the previously defined temporal scope. For this travel link, the travel time is measured from milepost 7.21 to milepost 8.18, which lines up reasonably closely with the origin and destination of the selected study site (mileposts 7.21 to 8.15).

It should, however, be noted that because the destinations differ by 0.03 mile between the selected study site and the ALPR link, the comparison is slightly biased.

Before the ALPR travel time data were used, they were first cleaned to eliminate outliers and unreasonable data points using the recommended data quality control procedure discussed in Chapter 3 of this report. The ALPR data are aggregated into 5-minute periods, and for each 5-minute period an average travel time value for a given travel link is reported. Because STREETVAL produces 15-minute average travel time values, it was necessary to convert the cleaned 5-minute average ALPR travel time values into 15-minute average travel time values. This step was very important for providing reliable and sound test results, because a histogram of 5-minute average travel times will have an inherently larger variance than a histogram of 15-minute average travel times. The distribution of 15-minute average travel times obtained from these data is shown in Figure 7.28.

Given that STREETVAL is simulation software and is sensitive to the selection of random seed values, three separate trial tests were conducted using three distinct sets of random seed values. Each trial test produced one travel time value for each generated scenario. The total number of scenarios evaluated in each trial, given a reliability reporting period of 228 days, a study period of 5 hours, an analysis period of 15 minutes, and an evaluation interval of 2 (scenarios generated for every other day), was 2,280 scenarios (5 hours/day × 4 analysis periods/hour × 228 days / 2). Given the 15-minute analysis period, each scenario travel time value represents the average travel time for a specific 15-minute period. The test results from each of the three trial tests were combined into one large data file comprising 6,840 average travel time values. A histogram of these 6,840 average travel time values was then generated for comparison to the ground truth travel time distribution.
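The 5-minute-to-15-minute conversion described above can be done with a simple resampling step. The sketch below (hypothetical file and column names; assumes pandas, not the actual ALPR export format) averages the cleaned 5-minute link travel times into 15-minute values so they are directly comparable with the STREETVAL output:

```python
# A minimal sketch of aggregating cleaned 5-minute ALPR link travel times into
# 15-minute averages. File and column names are assumptions, not the actual
# ALPR export format.
import pandas as pd

alpr = pd.read_csv("alpr_sr522_5min.csv", parse_dates=["timestamp"])
alpr = alpr.set_index("timestamp").sort_index()

# Three 5-minute averages -> one 15-minute average travel time (seconds)
tt_15min = alpr["travel_time_s"].resample("15min").mean().dropna()

print(tt_15min.describe())  # mean, spread, and percentiles of the 15-minute values
print("95th percentile:", round(tt_15min.quantile(0.95), 1), "s")
```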

Figure 7.28. Ground truth data distribution of travel times. (Note: This graph shows the distribution of 15-minute average travel times as calculated from the ALPR data.)

The two histograms in Figure 7.29 illustrate the distribution of travel times from the test trial runs compared to the ground truth ALPR travel time distribution. It should be noted that the histogram of the ground truth reliability is much more dispersed than that of the test results. However, despite the drastic difference in the widths of the distributions, the mean and median values of the two distributions are quite similar, as can be seen from the graphs.

Figure 7.29. Distribution of travel times from STREETVAL (gold) and ALPR (purple).

To further illustrate these results, Figure 7.30 shows the cumulative distribution of travel times for the ground truth (shown in purple) compared to the test results (shown in gold). From this graph, it can clearly be seen that the test results tend to overpredict travel times in the lower probability range and underpredict travel times in the higher probability range (0.9 and greater).

In addition, the steepness of each curve is a good indicator of travel time reliability. In this case, the slope of the test result curve (gold) is much steeper than that of the ground truth curve (purple), which indicates a significantly greater predicted reliability than the actual reliability. These results indicate that STREETVAL provides an overoptimistic prediction of reliability.

Figure 7.30. Cumulative distribution of travel times from STREETVAL (gold) and ALPR (purple).

Figure 7.31 compares several common reliability performance measures derived from the travel time distributions of the ground truth travel time data and the predicted test results.

Figure 7.31. Comparison of reliability performance measures between ground truth and test results.
Performance Measure      Ground Truth    Test Results
5th percentile           90.3            110.8
10th percentile          93.0            112.2
80th percentile          117.7           123.0
85th percentile          121.3           124.4
95th percentile          133.7           127.6
Mean                     107.7           118.3
Standard deviation       13.6            5.2
Median                   105.0           117.5

From the results presented, it is clear that there is a large disparity between the reliability predicted by STREETVAL and the actual reliability obtained from the ALPR data. There are many potential explanations for this disparity; however, the researchers believe that the error is most likely the result of a bias in the estimation of the travel demand for each scenario. In STREETVAL, the travel demand is estimated for each scenario using two main sources of information: (1) AADT volume factors for each month, day of week, and hour of day and (2) the 1-hour seed volumes. It is possible that the demand from the seed day is not representative of the average demand on a given day, and this may introduce a small to very large bias in the software's prediction. Another possibility for the large discrepancy is that there is an additional factor that has not been accounted for, which, if included, would significantly decrease the prediction error. It is possible that better accounting for unpredictable driver behavior, such as variability in driver speed caused by the presence of traffic lights or by sun glare through the windshield, would improve the prediction accuracy. It is also worth noting that this software was originally tested and shown to work well for traffic in North Carolina; Seattle traffic and its drivers may be very different. Additional model calibration may be necessary to see whether, for example, adjusting the average headway or driver acceleration would significantly improve the results and help explain the discrepancy.

7.11 Test Conclusion for STREETVAL

Based on the test results, STREETVAL was unable to provide a reasonable travel time reliability prediction for the urban arterial test site. The difference in variance and width between the ground truth travel time distribution and the predicted travel time distribution from STREETVAL is significant. Although the assessment of the software is biased because of the 0.03-mile difference in the lengths of the travel time links between the ground truth data and the STREETVAL results, a margin of error of only about 3% is not sufficient to explain this large a discrepancy. The error is likely a result of both inaccurate demand prediction and failure to account for some principal factor influencing travel times. A redeeming quality of the software is that it was able to provide a reasonable prediction of the mean and median travel times, differing by less than 10%.
