Guide to Establishing Monitoring Programs for Travel Time Reliability (2014)

APPENDIX A: MONITORING SYSTEM ARCHITECTURE

OVERVIEW

This appendix supports the discussion presented in Chapter 2 and provides more detailed examples of the five general types of tables commonly used for storing and managing data in a travel time reliability monitoring system (TTRMS):

1. Configuration information;
2. Raw data;
3. Travel time information by sensor type;
4. Travel time density functions for the segment and route regimes; and
5. Reliability summaries.

Database design usually takes the form of a schema that formally describes the database structure, including the tables, their relationships, and constraints on data value types and lengths. Rather than defining an implementable schema, this chapter presents example tables that can store information generated during all steps of the reliability monitoring computation process, from the raw data to the travel time density functions and reliability metrics. The exact tables, fields, and relationships are flexible to the needs of the agency, the data available, and the desired reporting capabilities.

CONFIGURATION INFORMATION

The configuration information stored by a reliability monitoring system must include certain details about the freeways and arterials that underlie the routes being monitored. The most important information associates sensors with specific locations on a facility (usually marked by postmiles), so that the physical distance between sensors can be computed. In addition to the geographic location, it is important to know which lanes a sensor monitors and whether the lane is a mainline or managed facility lane.

After information has been established for key freeways and arterials, routes must be designed in the configuration tables. In this context, a route is composed of contiguous segments of facilities and can include both freeway segments and arterial segments. The travel times for these contiguous segments are ultimately aggregated to produce a total route travel time. Table A.1 shows an example of the database table for route configuration information, and Table A.2 shows an example of the definition of route segments.

TABLE A.1. ROUTE CONFIGURATION TABLE
Column | Field | Description
1 | Route ID | Unique identifier for the route.
2 | Route name | Name of the route.
3 | Length | Length of the route.
4 | Region ID | Unique identifier of the route's region.

TABLE A.2. ROUTE SEGMENT CONFIGURATION TABLE
Column | Field | Description
1 | Route ID | Unique identifier for the route.
2 | Route segment ID | Unique identifier for the route segment. This is the relative position of the segment down a route.
3 | Facility ID | Unique identifier for the facility and direction that the route is on. Links to tables with detector or sensor configuration information for the facility.
4 | Managed facility? | Whether the route segment has managed facility lanes.
5 | Start point | Description of the route segment's start point.
6 | Start point postmiles | Absolute postmiles of the route segment's start point.
7 | End point | Description of the route segment's end point.
8 | End point postmiles | Absolute postmiles of the route segment's end point.

RAW DATA

The form of the raw data, and thus the structure of the raw data tables in the database, depends on whether the data come from an infrastructure-based detector, an automated vehicle identification (AVI) sensor, or automated vehicle location (AVL) technology. Infrastructure-based detectors usually transmit data every 30 seconds, and the data consist of some combination of flow, occupancy, and speed values. In the raw database table, each record represents the 30-second data summary. AVI sensors transmit data every time a vehicle equipped for sampling passes by. In the raw database table, each record represents a vehicle passing a sensor. AVL systems provide similar data if virtual monuments are defined that function the same as AVI tag readers. The raw AVL data are message packets containing the latitude, longitude, speed, and heading of the vehicle at some sampling rate, often every few seconds. Because the data from the three technology types are different, each requires its own table in the database. Tables A.3, A.4, and A.5 show sample tables for raw infrastructure-based data, time stamps for raw AVI and AVL data, and raw AVL data, respectively.

TABLE A.3. RAW INFRASTRUCTURE-BASED DATA TABLE
Column | Field | Description
1 | Time ID | Time stamp for the 30-s period.
2 | Detector ID | Unique identifier for the reporting detector. Links to information about the detector's facility (direction, location, and lane number).
3 | 30-s flow | Vehicles counted in the 30-s period.
4 | 30-s occupancy | Average occupancy over the 30-s period.
5 | 30-s speed* | Average speed over the 30-s period.
* Where observed.

TABLE A.4. RAW AVI AND AVL DATA TABLE: TIME STAMPS AT SPECIFIC LOCATIONS
Column | Field | Description
1 | Time ID | Time stamp of the vehicle's arrival at the sensor or monument location.
2 | Sensor ID | Unique identifier for the sensor or monument. Links to information about the sensor or monument's facility (direction, location, and lane number).
3 | Vehicle ID | Unique identifier for the reported vehicle (e.g., tag ID, vehicle ID, or Bluetooth address with changes to protect privacy).

TABLE A.5. RAW AVL DATA TABLE
Column | Field | Description
1 | Time ID | Time stamp of the vehicle polling.
2 | Vehicle ID | Unique identifier for the vehicle being polled.
3 | Longitude | Vehicle's longitude.
4 | Latitude | Vehicle's latitude.
5 | Speed | Vehicle's speed.
6 | Bearing | Vehicle's bearing.
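The example tables above map naturally onto a relational schema. The sketch below, using Python's built-in sqlite3 module, creates hypothetical versions of the route configuration, route segment, and raw infrastructure-based data tables; every table and column name is an assumption chosen for illustration rather than a schema prescribed by the Guide.

```python
import sqlite3

# Minimal sketch: example DDL loosely mirroring Tables A.1, A.2, and A.3.
# Table and column names are illustrative assumptions, not a prescribed schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE route (
    route_id    INTEGER PRIMARY KEY,
    route_name  TEXT NOT NULL,
    length_mi   REAL,
    region_id   INTEGER
);

CREATE TABLE route_segment (
    route_id          INTEGER REFERENCES route(route_id),
    route_segment_id  INTEGER,            -- relative position along the route
    facility_id       INTEGER,            -- links to facility/detector configuration
    managed_facility  INTEGER,            -- 0/1 flag for managed lanes
    start_point       TEXT,
    start_postmile    REAL,
    end_point         TEXT,
    end_postmile      REAL,
    PRIMARY KEY (route_id, route_segment_id)
);

CREATE TABLE raw_infrastructure_30s (
    time_id       TEXT,                   -- time stamp of the 30-s period
    detector_id   INTEGER,
    flow_30s      INTEGER,                -- vehicles counted in the period
    occupancy_30s REAL,                   -- average occupancy (0-1)
    speed_30s     REAL,                   -- average speed, where observed
    PRIMARY KEY (time_id, detector_id)
);
""")

# Example: register a hypothetical two-segment route.
conn.execute("INSERT INTO route VALUES (1, 'I-40 EB: NC-54 to I-440', 8.2, 7)")
conn.executemany(
    "INSERT INTO route_segment VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [(1, 1, 4001, 0, 'NC-54', 273.1, 'Wade Ave', 276.4),
     (1, 2, 4001, 0, 'Wade Ave', 276.4, 'I-440', 281.3)],
)
conn.commit()
```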

TRAVEL TIME INFORMATION

The information in the raw data tables needs to be processed so that travel time information can be developed for each segment, route, and time period. For infrastructure-based sensors, estimating travel times requires imputing missing data values, computing speeds from volume and occupancy values, and extrapolating point speeds over spatial segments to derive travel time information. For AVI systems, estimating travel times requires computing travel times for matched vehicles, filtering out bad travel times that are likely representative of longer trip times, and calculating travel time information from the good samples. For AVL technologies, estimating travel times requires matching the raw data to segment or route end points and calculating travel time information from the observed values.

When agencies blend these technologies to develop travel time information for a given segment or route, data fusion can provide more accurate travel time information. From a database standpoint, data fusion requires the travel time information derived from each individual technology type to be aggregated up to a common temporal and spatial level so that fusing of the data can occur. The examples in this section assume that travel time information is aggregated at the 5-minute level and spatially aggregated to the segment and route level.

This section illustrates sample database tables for each technology type. The flow of tables reflects the computational process used to turn raw data into final travel time information. Finally, it presents data tables that can store fused 5-minute travel time information and hourly travel time information summaries from which reliability statistics can be computed.

Infrastructure-Based Detectors

This section shows four sample database tables for infrastructure-based detectors, which store data produced by the steps used to compute route-level, 5-minute travel time information. Table A.6 stores the results following imputation of missing or bad 30-second flow and occupancy raw data samples. Table A.7 shows the results of aggregating all data to the 5-minute level and computing speeds from the 5-minute flow and occupancy values; it also includes a field for storing the percentage observed, a measure of the data validity. Table A.8 stores the results of extrapolating the 5-minute detector speed data over a defined segment to compute segment-level travel time information. In this table, the travel time information represents travel times for vehicles ending their travel on that segment during the 5-minute period. Finally, Table A.9 stores the final infrastructure-based travel time information for each defined route. The route-level travel time data are computed by "walking the travel time field" along each segment in the route.
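The middle of that computational chain, turning aggregated flow and occupancy into a speed and then into a segment travel time, can be sketched as follows. The effective-vehicle-length (g-factor) value and the unit conventions below are assumptions for illustration; in practice the g-factor is calibrated per detector, as discussed in the PeMS section at the end of this appendix.

```python
def estimate_speed_mph(flow_veh_per_5min, occupancy_fraction, g_factor_ft=20.0):
    """Estimate speed from single-loop flow and occupancy.

    Simplified illustration: speed ~ flow * effective vehicle length / occupancy.
    Flow is converted to veh/h; g_factor_ft (effective vehicle length, in feet)
    is an assumed calibration value that in practice varies by detector,
    lane, and time of day.
    """
    if occupancy_fraction <= 0:
        return None  # no vehicles observed in this period
    flow_vph = flow_veh_per_5min * 12.0
    speed_ft_per_h = flow_vph * g_factor_ft / occupancy_fraction
    return speed_ft_per_h / 5280.0  # mph


def segment_travel_time_min(segment_length_mi, speed_mph):
    """Extrapolate a point speed over a segment to get a travel time in minutes."""
    return 60.0 * segment_length_mi / speed_mph


# Example: 150 vehicles counted in 5 min at 12% occupancy on a 1.8-mile segment.
speed = estimate_speed_mph(150, 0.12)
print(round(speed, 1), "mph ->", round(segment_travel_time_min(1.8, speed), 2), "min")
```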

TABLE A.6. 30-SECOND INFRASTRUCTURE-BASED DATA WITH IMPUTATION
Column | Field | Description
1 | Time ID | Time stamp for the 30-s period.
2 | Detector ID | Unique identifier for the reporting detector. Links to information about the detector's facility (direction, location, and lane number).
3 | 30-s Flow | 30-s volume count.
4 | 30-s Occupancy | 30-s average occupancy.
5 | Imputed? | Whether the data values are observed or imputed.

TABLE A.7. 5-MINUTE INFRASTRUCTURE-BASED DATA WITH SPEEDS
Column | Field | Description
1 | Time ID | Time stamp for the 5-min period.
2 | Detector ID | Unique identifier for the reporting detector. Links to information about the detector's facility (direction, location, and lane number).
3 | 5-min Flow | Vehicles counted in the 5-min period.
4 | 5-min Occupancy | Average occupancy over the 5-min period.
5 | 5-min Speed | Average speed over the 5-min period. Computed from the flow, occupancy, stored g-factor for that detector, time of day, and day of week.
6 | Percentage observed | Percentage of data points directly observed (as opposed to imputed) from the detector.

TABLE A.8. 5-MINUTE INFRASTRUCTURE-BASED SEGMENT TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp for the 5-min period.
2 | Segment ID | Unique identifier for the segment.
3 | Travel time | Average travel time for the route segment.
4 | Operative regime | Regime that was operative during the 5-min period.
5 | Percentage observed | Percentage of data points directly observed (as opposed to imputed) on the route segment.

TABLE A.9. 5-MINUTE INFRASTRUCTURE-BASED ROUTE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp for the 5-min period.
2 | Route ID | Unique identifier for the route.
3 | Travel time | Average travel time for the route.
4 | Operative regime | Regime that was operative during the 5-min period.
5 | Percentage observed | Percentage of data points directly observed (as opposed to imputed) on the route.
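Walking the travel time field along a route, as used to populate Table A.9, amounts to rolling the segment records for one 5-minute period up to the route level. A minimal sketch follows; the length-weighted treatment of percentage observed, and the simple summation that ignores the time offset a vehicle accumulates as it moves down the route, are simplifying assumptions for illustration only.

```python
def walk_route(segment_records):
    """Roll 5-minute segment travel times up to a route travel time.

    segment_records: list of dicts with keys 'length_mi', 'travel_time_min',
    and 'pct_observed' for the segments of one route, in order, for one
    5-minute period. Returns (route travel time, length-weighted pct observed).
    """
    total_tt = sum(s["travel_time_min"] for s in segment_records)
    total_len = sum(s["length_mi"] for s in segment_records)
    pct_obs = sum(s["pct_observed"] * s["length_mi"] for s in segment_records) / total_len
    return total_tt, pct_obs


segments = [
    {"length_mi": 1.8, "travel_time_min": 1.9, "pct_observed": 100.0},
    {"length_mi": 2.4, "travel_time_min": 3.1, "pct_observed": 66.7},
    {"length_mi": 1.1, "travel_time_min": 1.2, "pct_observed": 100.0},
]
route_tt, route_pct = walk_route(segments)
print(round(route_tt, 1), "min;", round(route_pct, 1), "% observed")
```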

Automated Vehicle Identification Data

This section shows three sample database tables designed to store AVI information derived during the processing of trip time information. The database tables would contain travel times that have been extracted from raw trip times collected by AVI sensors. Table A.10 stores travel times for all matched vehicles between specific sensor pairs. In some instances the starting (Sensor ID 1) and ending (Sensor ID 2) sensors represent a single network segment. In other instances, they represent a segment sequence or an entire route. Table A.11 filters the data in Table A.10 to describe travel times for routes that have been defined in the system. It also stores whether the travel time is considered valid after the filtering process. Table A.12 stores the final results of the AVI-based travel time computations as 5-minute travel time information for each segment and route.

TABLE A.10. AVI VEHICLE TRAVEL TIMES FOR SEGMENTS AND SENSOR-TO-SENSOR PAIRS
Column | Field | Description
1 | Time ID | Time stamp of the vehicle's arrival at the first sensor.
2 | Sensor ID 1 | Unique identifier for the first sensor.
3 | Sensor ID 2 | Unique identifier for the second sensor at which the vehicle is matched.
4 | Vehicle ID | Unique identifier for the reported vehicle (e.g., tag ID or Bluetooth address with changes to protect privacy).
5 | Travel time | Travel time between the two sensors.
6 | Valid? | Whether the travel time is considered valid after filtering.

TABLE A.11. AVI VEHICLE ROUTE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp of the vehicle's arrival at the first sensor on the route.
2 | Route ID | Unique identifier for the route.
3 | Vehicle ID | Unique identifier for the reported vehicle (e.g., tag ID or Bluetooth address with changes to protect privacy).
4 | Travel time | Travel time between the first and last sensors on the route.
5 | Valid? | Whether the travel time is considered valid after filtering.
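The matching and filtering steps behind Tables A.10 and A.11 can be illustrated with a small sketch. The code pairs reads of the same anonymized vehicle ID at an upstream and a downstream sensor and flags travel times that look like detours or stops; the median-ratio filter is a stand-in assumption, not the Guide's prescribed filtering algorithm.

```python
from statistics import median

def match_avi_reads(upstream, downstream, max_ratio=3.0):
    """Pair AVI reads by vehicle ID and flag implausible travel times.

    upstream, downstream: lists of (timestamp_seconds, vehicle_id) tuples from
    two sensors bounding a segment. A record is flagged invalid if its travel
    time exceeds max_ratio times the median of all matched times (a stand-in
    for a real filtering algorithm).
    """
    first_seen = {vid: t for t, vid in upstream}
    matches = []
    for t2, vid in downstream:
        t1 = first_seen.get(vid)
        if t1 is not None and t2 > t1:
            matches.append({"vehicle_id": vid, "time_id": t1, "travel_time_s": t2 - t1})
    if not matches:
        return matches
    med = median(m["travel_time_s"] for m in matches)
    for m in matches:
        m["valid"] = m["travel_time_s"] <= max_ratio * med
    return matches


up = [(0, "a1"), (5, "b2"), (9, "c3")]
down = [(130, "a1"), (138, "b2"), (2400, "c3")]  # c3 likely stopped along the way
for rec in match_avi_reads(up, down):
    print(rec)
```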

TABLE A.12. 5-MINUTE AVI-BASED REPRESENTATIVE SEGMENT OR ROUTE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp of the 5-min period.
2 | Segment or route ID | Unique identifier for the segment or route.
3 | Number of valid samples | Number of valid vehicle travel times observed.
4 | Average travel time | Average travel time measured from the valid AVI matches for the segment.
5 | Operative regime | Regime that was operative on the segment or route during the 5-min period.

Automated Vehicle Location Technologies

This section shows two database tables used to store AVL-based travel time information. Table A.13 stores the results of matching the raw data to a route in the system and calculating route travel times for individual vehicles. It also contains a field indicating whether each vehicle's travel time is considered valid after the filtering process. Table A.14 stores the final results of the computation process for AVL data: a representative travel time for the 5-minute period computed from all valid vehicle samples collected in the time period. Similar to the final AVI travel time table, this table includes a field to store the travel time variability among individual travelers, as well as the error inherent in the representative travel time estimate.

TABLE A.13. AVL VEHICLE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp of the vehicle's arrival at the beginning of the segment or route.
2 | Segment or route ID | Unique identifier for the segment or route.
3 | Vehicle ID | Unique identifier for the vehicle being observed (may need to be encrypted to protect privacy).
4 | Travel time | Route travel time for the vehicle being observed.
5 | Valid? | Whether the travel time is considered valid after filtering.

TABLE A.14. 5-MINUTE AVL-BASED SEGMENT OR ROUTE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp of the 5-min period.
2 | Segment or route ID | Unique identifier for the segment or route.
3 | Number of valid samples | Number of valid vehicle travel times observed.
4 | Median travel time | Median travel time on the segment or route measured from the valid AVL samples.
5 | Operative regime | Regime that was operative on the segment or route during the 5-minute period.
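The reduction from per-vehicle AVL travel times to the 5-minute medians stored in Table A.14 is a simple grouped aggregation. A short sketch under assumed field names:

```python
from collections import defaultdict
from statistics import median

def summarize_avl(records, bin_seconds=300):
    """Aggregate valid per-vehicle travel times into 5-minute medians.

    records: iterable of dicts with 'time_id' (epoch seconds at route entry),
    'route_id', 'travel_time_s', and 'valid'. Returns one summary row per
    (5-minute bin, route), mirroring the layout of Table A.14.
    """
    bins = defaultdict(list)
    for r in records:
        if r["valid"]:
            key = (r["time_id"] // bin_seconds * bin_seconds, r["route_id"])
            bins[key].append(r["travel_time_s"])
    return [
        {"time_id": t, "route_id": rid, "n_valid": len(v), "median_tt_s": median(v)}
        for (t, rid), v in sorted(bins.items())
    ]


sample = [
    {"time_id": 1010, "route_id": 7, "travel_time_s": 410, "valid": True},
    {"time_id": 1100, "route_id": 7, "travel_time_s": 395, "valid": True},
    {"time_id": 1150, "route_id": 7, "travel_time_s": 2200, "valid": False},
]
print(summarize_avl(sample))
```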

Data Fusion

For routes monitored by more than one technology type, data fusion can be used to improve the accuracy of the travel time information derived from a single technology. Data fusion requires travel time data from the individual sensor types to be aggregated to the same temporal and spatial level. These technology-specific data can be combined using weighting factors to produce enhanced information. Table A.15 shows a sample database table that contains the final, fused travel time information. It includes fields indicating the time of day and day of week to facilitate specific query requests. It also includes fields indicating which technologies contributed data to the final information. Table A.16 shows the same data aggregated to the hourly level to facilitate more high-level travel time and reliability analysis.

TABLE A.15. FUSED 5-MINUTE SEGMENT OR ROUTE TRAVEL TIMES
Column | Field | Description
1 | Time ID | Time stamp of the 5-minute period.
2 | Time of day | 5-minute period (without the day).
3 | Day of week | Day of the week.
4 | Segment or route ID | Unique identifier for the segment or route.
5 | Average travel time | Average segment or route travel time during the 5-minute period.
6 | Includes infrastructure-based estimate? | Whether infrastructure-based data are included in the average travel time.
7 | Includes AVI-based estimate? | Whether AVI-based data are included in the average travel time.
8 | Includes AVL-based estimate? | Whether AVL-based data are included in the average travel time.
9 | Operative regime | Regime that was operative on the segment or route during the 5-minute period.
10 | Error | An estimate of error based on the percentage observed (infrastructure-based) and sample sizes (AVI or AVL, or both).
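One plausible way to implement the weighting factors is an inverse-variance weighted average, so the estimate with the smallest uncertainty contributes most to the fused value. The Guide does not prescribe a particular scheme; the sketch below is one illustrative choice.

```python
def fuse_travel_times(estimates):
    """Fuse technology-specific travel time estimates for one 5-minute period.

    estimates: list of (travel_time_min, std_error_min) pairs, e.g. one each
    from infrastructure-based, AVI, and AVL processing. Uses inverse-variance
    weights; returns the fused travel time and its approximate standard error.
    """
    weights = [1.0 / (se * se) for _, se in estimates]
    total_w = sum(weights)
    fused = sum(w * tt for w, (tt, _) in zip(weights, estimates)) / total_w
    fused_se = (1.0 / total_w) ** 0.5
    return fused, fused_se


# Here the infrastructure-based estimate is noisier than the AVI estimate.
print(fuse_travel_times([(6.4, 0.9), (5.8, 0.4), (6.1, 0.6)]))
```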

TABLE A.16. FUSED HOURLY SEGMENT OR ROUTE TRAVEL TIME SUMMARIES
Column | Field | Description
1 | Time ID | Time stamp of the hourly period.
2 | Time of day | Hourly period (without the day).
3 | Day of week | Day of the week.
4 | Segment or route ID | Unique identifier for the segment or route.
5 | Average travel time | Average segment or route travel time during the hour.
6 | Includes infrastructure-based data? | Whether infrastructure-based data are included in the summary.
7 | Includes AVI-based data? | Whether AVI-based data are included in the summary.
8 | Includes AVL-based data? | Whether AVL-based data are included in the summary.
9 | Operative regime | Regime that was operative on the segment or route during the hour.
10 | Error | An estimate of error based on the percentage observed (infrastructure-based) and sample sizes (AVI or AVL, or both).

TRAVEL TIME DENSITY FUNCTIONS

Determination of the travel time reliability regime for each segment or route and 5-minute time period is based on a matching process in which real-time observations of segment or route travel times are compared against nonparametric probability density functions that represent the regimes in which the segment or route has been found to operate.

This section describes the data tables in which the nonparametric probability density functions that describe the various regimes are stored. The methodology described in Chapter 3 is based on the assumption that these regimes can be described by the percentiles of the distribution. Table A.17 stores the percentile values.

TABLE A.17. NONPARAMETRIC DENSITY FUNCTION SUMMARY
Column | Field | Description
1 | Segment or route ID | Unique identifier for the segment or route.
2 | Regime | Unique identifier for the specific regime to which the information in the record pertains.
3–102* | Percentile value | Percentiles of the nonparametric density function from 0% to 100%, one value for each percentile.
* May be reduced with larger percentile bins.
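Regime identification can then be framed as comparing recent observations against each regime's stored percentile curve. The sketch below scores each candidate regime by the average gap between the observed travel times and the regime's values at the matching empirical percentiles; this scoring rule is an illustrative assumption, not the matching procedure defined in Chapter 3.

```python
def closest_regime(observations, regime_percentiles):
    """Pick the regime whose stored percentile curve best matches recent data.

    observations: recent segment travel times (minutes).
    regime_percentiles: dict mapping regime name -> list of 101 travel time
    values for percentiles 0..100 (the layout of Table A.17).
    Scores each regime by the mean absolute gap between each sorted observation
    and the regime value at the matching empirical percentile.
    """
    obs = sorted(observations)
    scores = {}
    for regime, pct in regime_percentiles.items():
        gaps = []
        for i, x in enumerate(obs):
            p = int(round(100 * (i + 0.5) / len(obs)))  # empirical percentile of x
            gaps.append(abs(x - pct[p]))
        scores[regime] = sum(gaps) / len(gaps)
    return min(scores, key=scores.get), scores


# Two toy regimes: free flow centered near 6 min, congested near 10 min.
free = [5.0 + 2.0 * p / 100 for p in range(101)]
congested = [8.0 + 4.0 * p / 100 for p in range(101)]
best, s = closest_regime([5.9, 6.2, 6.4, 6.8], {"free_flow": free, "congested": congested})
print(best, {k: round(v, 2) for k, v in s.items()})
```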

RELIABILITY SUMMARIES

Some summary database tables are useful for storing highly aggregated reliability measures. In terms of temporally aggregated measures, these are tables that store, for a given route, reliability information for a single calendar month, a quarter, or a year. This aggregation is useful for users who want to examine route-specific reliability trends over time for selected days of the week or times of the day.

Table A.18 illustrates an example database table that summarizes reliability information for each route by month. To create this table, travel time distributions are reviewed for each time of day (5-minute period or hourly) and day of the week, and reliability measures (e.g., the operative regime) are reviewed and tabulated. The same table can also be generated for quarterly and yearly time periods. These types of tables support queries letting users investigate other measures, such as the buffer time for all weekdays over a month or the planning time on Sundays at 5:00 p.m. over one year.

Reliability information can also be aggregated in the spatial dimension. An example would be the storage of quarterly reliability summaries for each region within the network. This allows high-level comparisons of performance between time periods and across regions. Table A.19 shows an example of the storage of quarterly reliability information by region.

TABLE A.18. MONTHLY SEGMENT- OR ROUTE-LEVEL RELIABILITY SUMMARY TABLE
Column | Field | Description
1 | Time stamp | Month and year.
2 | Segment or route ID | Unique identifier for the segment or route.
3 | Day of week | Day of the week.
4 | Time of day | Time of day (can be a 5-min period or higher aggregation, such as an hour).
5 | Regime | Regime to which the record pertains.
6 | Percentage time | Percentage of time during which the regime specified in Column 5 was operative for the time period and segment or route to which this record corresponds.
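The buffer time and planning time queries mentioned above reduce to percentile arithmetic over the stored travel time values. As a reminder of the computation, here is a short sketch using the common buffer-index and planning-time definitions; the exact metric definitions an agency adopts may differ.

```python
def percentile(values, p):
    """Linear-interpolation percentile of a list of travel times (p in 0-100)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)


def reliability_indices(travel_times_min):
    """Common reliability measures from the travel times for one route,
    time of day, and day of week (e.g., all Sundays at 5:00 p.m. in a year).
    Uses the usual buffer-time and planning-time formulations."""
    avg = sum(travel_times_min) / len(travel_times_min)
    p95 = percentile(travel_times_min, 95)
    return {
        "average_min": avg,
        "buffer_time_min": p95 - avg,          # extra time to budget
        "buffer_index": (p95 - avg) / avg,     # buffer time relative to the average
        "planning_time_min": p95,              # 95th percentile travel time
    }


print(reliability_indices([11.8, 12.4, 12.9, 13.3, 14.0, 16.2, 21.5]))
```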

TABLE A.19. QUARTERLY REGIONAL-LEVEL RELIABILITY SUMMARY TABLE
Column | Field | Description
1 | Time stamp | Quarter and year.
2 | Region ID | Unique identifier for the region.
3 | Weekday? | Whether the reliability summary is for weekdays or weekends.
4 | Time of day | Time of day (can be a 5-minute period or higher aggregation, such as an hour).
5 | Average buffer time index | Median travel time over the quarter for that time of day and day of the week.
6 | Average planning time index | 95th percentile travel time over the quarter for that time of day and day of the week.

SUMMARY

The final database design for a reliability monitoring system must reflect the technologies used to collect data and the processes used to derive travel time estimates from the raw data. In general, five types of table are needed to fully describe the monitoring system and its outputs. Configuration tables are needed to define the routes and route segments for which travel times are to be computed, including their starting and ending point; where their detectors are located; and what type of detection they have. Raw data tables are needed to store the unaltered inputs from the various detection types. Travel time tables and travel time density function tables are needed to store the representative travel times calculated for each route segment, time period, and detection type, as well as the intermediary information generated during the computation. Finally, reliability tables can be used to store high-level monthly, quarterly, and yearly summaries of reliability statistics for individual routes or higher spatial aggregations.

HIGH-LEVEL FUNCTIONAL REQUIREMENTS LIST

This high-level functional requirements list summarizes the functional requirements described in the Guide. For agency staff, this list represents an overview of the types of capabilities that should be required for a TTRMS. These items are organized into the same categories used in the Guide to describe the components of a TTRMS.

Data Collection

• A defined plan for collecting traffic data, including detector types and locations, detector spacing, and the frequency of data collection;
• Communication hardware in place to collect data and transmit it to a central hub;
• A defined plan for collecting incident information, including the data source, the types of data needed, how they will be obtained, and the frequency of data collection;
• A defined plan for collecting weather information, including the data source, the types of data needed, how they will be obtained, and the frequency of data collection;
• A defined plan for collecting work zone information, including the data source, the types of data needed, how they will be obtained, and the frequency of data collection;
• A defined plan for collecting special event information, including the data source, the types of data needed, how they will be obtained, and the frequency of data collection;
• A defined plan for collecting traffic control information, including the data source, the types of data needed, how they will be obtained, and the frequency of data collection;
• A defined plan for measuring or estimating demand and demand fluctuations;
• A defined plan for measuring capacity and determining if it is inadequate;
• A defined plan for collecting data on exogenous events (optional);
• A defined plan for collecting transit-specific data from AVL-equipped vehicles (optional);
• Interagency agreements in place for sharing data in real time (optional); and
• Agreements with private data distributors to obtain needed data in real time (optional).

Data Management

• Use of an industry standard database;
• A defined data warehouse model;
• A database architecture that supports the storage of both traffic and nontraffic data;
• The capability of storing all raw sensor data if privacy issues do not disallow it, or to irreversibly encrypt sensitive data;
• The intention to store data for every sensor at the lowest level of granularity possible, if it cannot be stored in raw form;
• The intention to store both raw data and imputed data in parallel and never replace raw data with imputed data;
• Clearly specified filtering methods and algorithms for removing bad data from malfunctioning infrastructure-based sensors;
• Clearly specified filtering methods and algorithms for removing unrepresentative travel time data from AVI sensors;
• Clearly specified algorithms for map matching vehicle-based data to specific routes;
• Defined thresholds for the vehicle sampling rates required for valid data;
• Clearly specified methods and algorithms for imputing missing or damaged data;
• Clearly specified methods for tracking imputed data points;
• Clearly specified methods for tracking the imputation measure used on a data point;
• Clearly specified methods for storing metadata on what percentage of data points have been imputed and how they have been imputed to evaluate the statistical validity of reliability estimates; and
• Clearly specified methods for imputing reliability measures in locations that lack detection technologies (optional).

Computation Engine

• Defined methodologies for calculating speeds and travel times from infrastructure-based sensors;
• Calculations for deriving travel times from vehicle-based sensors and calculating the standard error of estimates;
• Methods for fusing travel times from different data sources that account for validity differences between different detection technologies and travel time estimation techniques;
• Performance metrics that report the statistical validity of reliability estimates to account for the uncertainties inherent in data aggregation and fusion;
• Clearly defined reliability metrics and corresponding equations;
• Defined spatial and temporal aggregations that the system will perform and the corresponding methodologies for performing them;
• Spatial aggregation capabilities to compute and store reliability measures for links, routes, subareas, and regions;
• Temporal aggregation capabilities to compute and store reliability measures for minute-to-minute, hourly, daily, weekly, monthly, quarterly, and yearly time periods;
• Defined algorithms for predicting travel times from historical and current data; and
• Defined algorithms for linking travel time variability with each relevant source of unreliability (seven sources or a locally appropriate subset).

Report Generation

• A fully defined list of reports that the system can deliver, including the reliability measures that they will convey and the format they will take.
• For each report, a description of the user it is intended to serve, the steps the user will need to take to create the report, and what the report will look like.
• A defined maximum amount of time that it should take to generate a report.
• User flexibility to choose the spatial and temporal aggregation levels in reports.

Systems Interactions

• A list of applicable intelligent transportation system (ITS) standards to which the system must adhere, with clearly specified methods for adhering to these standards (optional);
• A clearly defined plan for adhering to the applicable regional architecture;
• A method of transferring data between regional systems using the Traffic Management Data Dictionary (TMDD) (optional); and
• A method of transferring reliability measures to other user interfaces in real time using the Data Dictionary for Advanced Traveler Information Systems (ATIS) (optional).

SUMMARY OF APPLICABLE ITS STANDARDS AND GUIDES

Tables A.20 and A.21 provide summaries of ITS standards and guides that the practitioner can use in developing and using a TTRMS.

ITS STANDARDS GLOSSARY

Center: A connection point on a network (ranging in size from a single laptop to a complex multicomputer environment) that is capable of exchanging messages with other centers.

Data Dictionary: An organized listing of dialogs, messages, data frames, data elements, and their properties that are required so that both the user and the system developer have a common understanding of input, output, components of storage, and intermediate calculations.

Data Element: A syntactically formal representation of some single unit of information of interest (such as a fact, proposition, or observation) with a singular instance value at any point in time, about some entity of interest (e.g., a person, place, process, property, object, concept, association, state, or event). A data element is considered indivisible in a certain context.

Data Type: A classification of the collection of letters, digits, or symbols (or a combination of these) used to encode values of a data element, based on the operations that can be performed on the data element.

Dialog: A sequence of messages.

Event: Broadly defined to include any set of travel circumstances an agency may wish to report, such as incidents, descriptions of road and traffic conditions, weather conditions, construction, and special events. Events can be current or forecasted.

Interchangeability: Reflects the capability to exchange devices of the same type on the same communications channel and have those devices interact with other devices of the same type using standards-based functions.

Interoperability: Allows system components from different vendors to communicate with each other to provide system functions and work together as a whole system.

Message: Groupings of data elements that include information about how the data elements are combined and used to convey information among ITS centers and systems. Messages are abstract descriptions using a message set template, not specific instances of transmissions.

TABLE A.20. SUMMARY OF APPLICABLE ITS STANDARDS AND GUIDES
Standard | Author Organization and Contact | Summary
National ITS Architecture | U.S. Department of Transportation | A flexible framework for planning, defining, and integrating ITS systems. Defines the logical (processes and information flows) and physical (transportation agencies and communications) architectures that govern ITS systems.
Real-Time System Management Information Program | FHWA; http://ops.fhwa.dot.gov/ | Focuses on providing traveler information in real time to decrease congestion; includes an Information Sharing Specifications and Data Exchange Formats guide that standardizes the communication interface for exchanging traffic data and event information.
Travel Time Reliability: Making It There On Time, All the Time | FHWA; http://ops.fhwa.dot.gov/ | A general guide to measuring and distributing travel time reliability information.
Traffic Management Data Dictionary (TMDD) and Message Sets for External Traffic Management Center Communications | ITE–AASHTO; http://www.ite.org/standards/TMDD/ | Defines data elements for roadway links, incidents, traffic-disruptive events, traffic control, ramp metering, traffic modeling, video camera control, parking management, weather forecasting, detectors, actuated signal controllers, vehicle probes, and changeable message signs. Defines message sets for communications between traffic management centers and other ITS centers.
TMDD Guide | ITE–AASHTO | Describes how to use the TMDD and provides context for the systems engineering process.
Data Dictionary for Advanced Traveler Information Systems (ATIS) | SAE | Provides a set of core data elements needed by information service providers for ATIS. The data dictionary provides the foundation for ATIS message sets for all stages of travel (pretrip and en route), all types of travelers, all categories of information, and all platforms for delivery of information (e.g., in vehicle, portable devices, kiosks).
Traffic Message Channel Segmentation | NavTeq and TeleAtlas | Defines standardized roadway segments for major arterials and freeways.
Note: FHWA = Federal Highway Administration; ITE–AASHTO = Institute of Transportation Engineers and American Association of State Highway and Transportation Officials.

TABLE A.21. ADDITIONAL ITS STANDARDS
Standard | Author Organization and Location | Summary*
Standard Specifications for Archiving ITS-Generated Traffic Monitoring Data | ASTM; available for purchase from the ASTM website | Specifies a data dictionary for archiving traffic data, including conventional traffic monitoring data, data collected directly from ITS systems, and travel time data from probe vehicles.
Standard Practice for Metadata to Support Archived Data Management Systems | ASTM; available for purchase from the ASTM website | Describes a hierarchical outline of sections and elements to be used in developing metadata to support archived data management systems.
NTCIP 1201: Global Object Definitions | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Defines those pieces of data likely to be used in multiple device types, such as actuated signal controllers and dynamic message signs. Examples of these data include time, report generation, and scheduling concepts.
NTCIP 1206: Object Definitions for Data Collection and Monitoring (DCM) Devices | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Specifies object definitions that may be supported by data collection and monitoring devices, such as roadway loop detectors.
NTCIP 1209: Data Element Definitions for Transportation Sensor Systems (TSS) | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Provides object definitions that guide the data exchange content between advanced sensors and other devices in an NTCIP network, including video-based detection sensors, inductive loop detectors, sonic detectors, infrared detectors, and microwave or radar detectors.
NTCIP 1204: Object Definitions for Environmental Sensor Stations | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Defines objects specific to environmental sensor stations.
NTCIP 2202: Internet (TCP/IP and UDP/IP) Transport Profile | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Defines a set of transport and network layer protocols to provide connectionless and connection-oriented transport services.
NTCIP 8003: Profile Framework | ITE/AASHTO/NEMA; http://www.ntcip.org/library/documents/ | Defines a framework and classification scheme for developing combinations or sets of protocols related to communication in the ITS application environment.
Standard for Traffic Incident Management Message Sets for Use by Emergency Management Centers | IEEE; available for purchase from the IEEE website | Enables consistent standardized communications among incident management centers, fleet and freight management centers, information service providers, emergency management centers, planning subsystems, traffic management centers, and transit management centers.
Standard for Common Incident Management Message Sets for Use by Emergency Management Centers | IEEE; available for purchase from the IEEE website | Provides standards describing the form and content of the incident management message sets from emergency management systems (EMS) to traffic management systems (TMS) and from EMS to the emergency telephone system (ETS or E911).
Standard for Message Sets for Vehicle/Roadside Communications | IEEE; available for purchase from the IEEE website | Standard messages for commercial vehicle, electronic toll, and traffic management applications.
ISP-Vehicle Location Referencing Standard | SAE; available for purchase from the SAE website | For the communication of spatial data references between central sites and mobile vehicles on roads. References can be communicated from central sites to vehicles or from vehicles to central sites. May be used when appropriate by other ITS applications requiring location references between data sets.
Note: ASTM = American Society for Testing and Materials; NTCIP = National Transportation Communications for ITS Protocol; NEMA = National Electrical Manufacturers Association; IEEE = Institute of Electrical and Electronics Engineers.
* Summaries are taken from http://www.standards.its.dot.gov/StdsSummary.asp?ID=372.

APPLICABLE ITS MARKET PACKAGES

Table A.22 describes various ITS market packages that the practitioner can use to establish, characterize, and monitor travel time reliability.

TABLE A.22. APPLICABLE ITS MARKET PACKAGES
Market Package | Description | Applicability to Travel Time Reliability
Network Surveillance | Includes traffic detectors and other surveillance equipment that transmit data back to the traffic management subsystem through fixed-point to fixed-point communications. | Freeway and arterial traffic data collection from infrastructure-based sensors; incident detection.
Traffic Probe Surveillance | Supports wireless communications between the vehicle and the center or dedicated short-range communications between passing vehicles and the roadside for traffic data collection. | Traffic data collection from AVI and AVL technologies.
Weather Information Processing and Distribution | Processes and distributes the environmental information collected from the Road Weather Data Collection market package. | Correlation of travel time reliability with weather conditions.
Maintenance and Construction Activity Coordination | Supports the dissemination of maintenance and construction activity to centers. | Correlation of travel time reliability with lane closures and construction activities.
ISP-Based Trip Planning and Route Guidance | Offers the user trip planning and en route guidance services. | User output of travel time reliability monitoring process.
Broadcast Traveler Information | Collects traffic conditions and other information and broadcasts the information to travelers using technologies such as FM subcarrier, satellite radio, cellular data broadcasts, and Internet webcasts. | User output of travel time reliability monitoring process.
Interactive Traveler Information | Provides tailored information in response to a traveler request. | User output of travel time reliability monitoring process.
Dynamic Route Guidance | Offers advanced route planning and guidance that is responsive to current conditions. | User output of travel time reliability monitoring process.
ITS Data Warehouse | Includes all the capabilities outlined in ITS Data Mart and adds the functionality and interface definitions that allow the collection of data from multiple agencies and data sources across modal and jurisdictional boundaries. | Archiving reliability data.
ITS Virtual Data Warehouse | Provides the same broad access to multimodal, multidimensional data from varied data sources as in the ITS Data Warehouse market package, but provides this access using increased interoperability between physically distributed ITS archives that are each locally managed. | Archiving reliability data.
Note: ISP = information service provider.

APPLICABLE TRAFFIC MANAGEMENT DATA DICTIONARY SECTIONS

Volume 1: Concept of Operations

Section 2.3.4: Need to Share Event Information
2.3.4.1 Need for an Index of Events
2.3.4.2 Need to Correlate an Event with Another Event
2.3.4.3 Need to Provide Free Form Event Descriptions
2.3.4.4 Need to Provide Free Form Event Names
2.3.4.6 Need for Current Event Information
2.3.4.7 Need for Planned Event Information
2.3.4.8 Need for Forecast Event Information
2.3.4.9 Need to Share the Log of a Current Event
2.3.4.10 Need to Reference a URL
2.3.4.11 Need to Filter Events
2.3.4.11.1 Need to Filter Event Recaps
2.3.4.11.2 Need to Filter Event Updates

Section 2.3.5: Need to Provide Roadway Network Data
2.3.5.1 Need for Roadway Network Inventory
2.3.5.1.1 Need for Node Inventory
2.3.5.1.2 Need for Link Inventory
2.3.5.1.3 Need for Route Inventory
2.3.5.2 Need to Share Node, Link, and Route Status

2.3.5.2.1 Need to Share Node State
2.3.5.2.2 Need to Share Link State
2.3.5.2.3 Need to Share Route State
2.3.5.3 Need to Share Link Data
2.3.5.4 Need to Share Route Data

Section 2.3.6: Need to Provide Control of Devices
2.3.6.1 Need to Share Detector Inventory
2.3.6.1.2 Need Updated Detector Inventory
2.3.6.1.3 Need to Share Detector Status
2.3.6.1.4 Need for Detector Metadata
2.3.6.1.5 Need for Detector Data Correlation
2.3.6.1.6 Need for Detector Data Sharing
2.3.6.1.7 Need for Detector History
2.3.6.5 Need to Share Environment Sensor Station (ESS) Data
2.3.6.5.1 Need to Share ESS Inventory
2.3.6.5.2 Need to Share Updated ESS Inventory
2.3.6.5.3 Need to Share ESS Device Status
2.3.6.5.4 Need to Share ESS Environmental Observations
2.3.6.5.5 Need to Share ESS Environmental Observation Metadata
2.3.6.5.6 Need to Receive a Qualified ESS Report
2.3.6.5.7 Need to Share ESS Organizational Metadata
2.3.6.6 Need to Share Lane Closure Gate Control
2.3.6.6.1 Need to Share Gate Inventory
2.3.6.6.2 Need to Share Updated Gate Inventory
2.3.6.6.3 Need to Share Gate Status
2.3.6.6.7 Need to Share Gate Control Schedule
2.3.6.8 Need to Share Lane Control and Status
2.3.6.8.1 Need to Share Controllable Lanes Inventory
2.3.6.8.7 Need to Share Controllable Lanes Schedule
2.3.6.9 Need to Share Ramp Meter Status and Control
2.3.6.9.1 Need to Share Ramp Meter Inventory
2.3.6.9.2 Need to Share Updated Ramp Meter Inventory
2.3.6.9.3 Need to Share Ramp Meter Status
2.3.6.9.8 Need to Share Ramp Metering Schedule
2.3.6.9.9 Need to Share Ramp Metering Plans
2.3.6.10 Need to Share Traffic Signal Control and Status
2.3.6.10.1 Need to Share Signal System Inventory
2.3.6.10.2 Need to Share Updated Signal System Inventory
2.3.6.10.3 Need to Share Intersection Status
2.3.6.10.8 Need to Share Controller Timing Patterns
2.3.6.10.9 Need to Filter Controller Timing Patterns
2.3.6.10.10 Need to Share Controller Schedule
2.3.6.10.11 Need to Share Turning Movement and Intersection Data

Section 2.3.7: Need to Share Data for Archiving
2.3.7.1 Need for Traffic Monitoring Data

Volume 1: Requirements

Section 3.3.4: Events Information Sharing
3.3.4.3 Subscribe to Event Information
3.3.4.4 Contents of Event Information Request
3.3.4.6 Required Event Information Content
3.3.4.7 Optional Event Information Content
3.3.4.8 Action Logs
3.3.4.9 Event Index

Section 3.3.5: Provide Roadway Network Data
3.3.5.1 Share Traffic Network Information
3.3.5.2 Share Node Information
3.3.5.3 Share Link Information
3.3.5.4 Share Route Information

Section 3.3.6: Provide Device Inventory, Status, and Control
3.3.6.1 Generic Devices
3.3.6.2 Traffic Detectors
3.3.6.6 Environment Sensors
3.3.6.7 Lane Closure Gates
3.3.6.9 Lane Control Signals
3.3.6.10 Ramp Meter
3.3.6.11 Traffic Signal Controllers

Section 3.3.7: Share Archive Data
3.3.7.1 Share Traffic Monitoring Data for Data Archiving
3.3.7.2 Share Processing Documentation Metadata

Volume 2: Design Content

Section 3.0: TMDD ISO 14817 ASN.1 and XML Data Concept Definitions

Section 3.1: Dialogs
3.1.1 Archived Data Class Dialogs
3.1.3 Connection Management Class Dialogs
3.1.4 Detector Class Dialogs
3.1.5 Device Class Dialogs
3.1.7 Environmental Sensor Station (ESS) Class Dialogs
3.1.8 Event Class Dialogs
3.1.9 Gate Class Dialogs
3.1.11 Intersection Signal Class Dialogs
3.1.12 Lane Control Status (LCS) Dialogs
3.1.13 Link Class Dialogs

3.1.14 Node Class Dialogs
3.1.16 Ramp Meter Class Dialogs
3.1.17 Route Class Dialogs
3.1.18 Section Class Dialogs
3.1.19 Transportation Network Class Dialogs

Section 3.2: Messages
3.2.1 Archived Data Class Messages
3.2.3 Connection Management Class Messages
3.2.4 Detector Class Messages
3.2.5 Device Class Messages
3.2.7 ESS Class Messages
3.2.8 Event Class Messages
3.2.9 Gate Class Messages
3.2.11 Intersection Signal Class Messages
3.2.12 LCS Class Messages
3.2.13 Link Class Messages
3.2.14 Node Class Messages
3.2.16 Ramp Meter Class Messages
3.2.17 Route Class Messages
3.2.18 Section Class Messages
3.2.19 Transportation Network Class Messages

Section 3.3: Data Frames
3.3.1 Archived Data Class Data Frames
3.3.3 Connection Management Class Data Frames
3.3.4 Detector Class Data Frames
3.3.5 Device Class Data Frames
3.3.7 ESS Class Data Frames
3.3.8 Event Class Data Frames
3.3.9 Gate Class Data Frames
3.3.11 Intersection Signal Class Data Frames
3.3.12 LCS Class Data Frames
3.3.13 Link Class Data Frames
3.3.14 Node Class Data Frames
3.3.16 Ramp Meter Class Data Frames
3.3.17 Route Class Data Frames
3.3.18 Section Class Data Frames
3.3.19 Transportation Network Class Data Frames

Section 3.4: Data Elements
3.4.1 Archived Data Class Data Elements
3.4.3 Connection Management Class Data Elements
3.4.4 Detector Class Data Elements
3.4.5 Device Class Data Elements

3.4.7 ESS Class Data Elements
3.4.8 Event Class Data Elements
3.4.9 Gate Class Data Elements
3.4.11 Intersection Signal Class Data Elements
3.4.12 LCS Class Data Elements
3.4.13 Link Class Data Elements
3.4.14 Node Class Data Elements
3.4.16 Ramp Meter Class Data Elements
3.4.17 Route Class Data Elements
3.4.18 Section Class Data Elements
3.4.19 Transportation Network Class Data Elements

Section 3.5: Object Classes
3.5.1 Archived Data
3.5.3 Connection Management
3.5.4 Detector
3.5.5 Device
3.5.7 ESS
3.5.8 Event
3.5.9 External Center
3.5.10 Gate
3.5.13 Intersection Signal
3.5.14 LCS
3.5.15 Link
3.5.16 Node
3.5.19 Ramp Meter
3.5.20 Route
3.5.21 Section
3.5.22 Transportation Network

APPLICABLE SECTIONS OF DATA DICTIONARY FOR ADVANCED TRAVELER INFORMATION SYSTEMS

6.43 Information request, linkTravelTime
6.99 Estimate of travel time returned to the traveler based upon route
6.100 Estimate of travel time between way points or from-to origin-destination and way point

DETECTOR DIAGNOSTIC ALGORITHM

The most pervasive data quality problem inherent to infrastructure sensors is malfunctioning equipment. For example, on an average day in California, only about 70% of the freeway loop detectors statewide are transmitting good data; the remaining 30% are either transmitting bad data or no data. The travel time data management component must recognize when reported speed, occupancy, or flow values are inaccurate so that they can be imputed. In the wider academic literature, algorithms have been developed to determine when detectors are bad so that processing can remove the data that they report. Such an algorithm is described below as an example of best practices for error checking.

Rather than making a diagnostic decision based on an individual sample, the algorithm uses the time series of flow and occupancy measurements to determine the health of a detector. It is designed to make an end-of-day decision on whether a detector was good or bad during that day. Detectors are considered bad for a day if their data outputs fall into one of the following four error types:

• Mostly zero occupancy and flow;
• Nonzero occupancy and zero flow;
• Very high occupancy; or
• Constant occupancy and flow.

A daily statistics algorithm recognizes the above four error types. The algorithm takes as inputs the time series of 30-second detector measurements q(d, t) and k(d, t), where q is flow, k is occupancy, d is the index of the day, and t = 0, 1, 2, . . . is the 30-second sample number. It outputs the diagnosis for the dth day, where Δd = 0 if the loop is good and Δd = 1 if the loop is bad. The detailed equations for this algorithm are given in Equations A.1 through A.5 below. Because the algorithm is less reliable at low traffic levels, the threshold levels that trigger a bad detector diagnosis are based on samples collected between 5 a.m. and 10 p.m. Meeting one of the following four criteria triggers a detector to be diagnosed as bad:

• More than 1,200 data samples in which occupancy = 0;
• More than 50 data samples in which occupancy > 0 and flow = 0;
• More than 200 data samples in which occupancy > 0.35; or
• An entropy (i.e., randomness) statistic below a threshold of 4, indicating nearly constant values.

This principle and a modified version of the above algorithm have been applied in practice in California's Performance Measurement System (PeMS). Table A.23 displays the PeMS diagnostic tests that are run at the end of each day on each detector's data. When a detector is diagnosed as bad, all of its data will be removed and imputed on the following day. (For Tests 3, 4, and 5, the exact threshold percentages are not shown because they are adjusted from time to time by PeMS operators.)

TABLE A.23. PEMS DIAGNOSTIC TESTS

Test 1 (ML, ramp): No data samples received. Data used: 30-s.
• Condition: None of the detectors on the same communication line are reporting data. Diagnostic test: number of samples received for all detectors on the same communication line = 0. Diagnostic state: Line down.
• Condition: None of the detectors attached to the same controller are reporting data. Diagnostic test: number of samples received for all detectors attached to the controller = 0. Diagnostic state: Controller down.
• Condition: Individual detector not reporting data while other detectors on the same controller are sending samples. Diagnostic test: number of samples received = 0 while other detectors on the same controller report data. Diagnostic state: No data.

Test 2 (ML, ramp): Too few data samples received. Data used: 30-s.
• Condition: Not enough samples received to perform diagnostic tests; other detectors reported more samples. Diagnostic test: number of samples received < 60% of the maximum collected samples during the test period. Diagnostic state: Insufficient data.

Test 3 (ML, ramp): High values. Data used: 30-s.
• Condition: Too many samples with occupancy > 0% (ML) or flow > 1 vehicle/20 s (RM). Diagnostic test: ML, number of high-occupancy samples > X% of the maximum collected samples during the test period; RM, number of high-flow samples > Y% of the maximum collected samples during the test period. Diagnostic state: High value.

Test 4 (ML, ramp): Zero occupancy or flow. Data used: 30-s.
• Condition: Too many samples with an occupancy (ML) or flow (RM) of zero. Diagnostic test: ML, number of zero-occupancy samples > X% of the maximum collected samples during the test period; RM, number of zero-flow samples > Y% of the maximum collected samples during the test period. Diagnostic state: Card off.

Test 5 (ML): Flow–occupancy mismatch. Data used: 30-s.
• Condition: Too many samples for which flow is zero and occupancy is nonzero. Diagnostic test: number of flow–occupancy mismatch samples > X% of the maximum collected samples during the test period. Diagnostic state: Intermittent.

Test 6 (ML): Constant occupancy. Data used: 5-min.
• Condition: The same occupancy value is reported repeatedly. Diagnostic test: number of repeated occupancy values exceeds a threshold, evaluated on 5-minute samples. Diagnostic state: Constant.

Note: ML = mainline; RM = ramp. For Tests 3, 4, and 5, the exact threshold percentages are not shown because they are adjusted from time to time by PeMS operators.
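The end-of-day assignment of these diagnostic states can be pictured as a simple decision cascade. The Python sketch below only illustrates Table A.23 and is not the PeMS implementation: the evaluation order is an assumption, and the pct_high, pct_zero, pct_mismatch, and max_repeats parameters stand in for the operator-tuned thresholds that are not published.

    def end_of_day_state(n_line, n_controller, n_samples, max_samples,
                         n_high, n_zero, n_mismatch, n_repeated,
                         pct_high, pct_zero, pct_mismatch, max_repeats):
        """Map one detector-day of sample counts to a Table A.23 state.

        n_line / n_controller: samples received across the communication
        line / controller; n_samples: samples from this detector;
        max_samples: most samples collected by any comparable detector in
        the test period; n_high, n_zero, n_mismatch, n_repeated: counts of
        high, zero, flow-occupancy-mismatch, and repeated-value samples.
        Threshold arguments are placeholders for unpublished PeMS values.
        """
        if n_line == 0:
            return "Line down"                      # Test 1
        if n_controller == 0:
            return "Controller down"                # Test 1
        if n_samples == 0:
            return "No data"                        # Test 1
        if n_samples < 0.60 * max_samples:
            return "Insufficient data"              # Test 2
        if n_zero > pct_zero * max_samples:
            return "Card off"                       # Test 4
        if n_high > pct_high * max_samples:
            return "High value"                     # Test 3
        if n_mismatch > pct_mismatch * max_samples:
            return "Intermittent"                   # Test 5
        if n_repeated > max_repeats:
            return "Constant"                       # Test 6
        return "Good"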

The exact algorithms employed for monitoring detector health in a reliability monitoring system will depend on the location and facility type being monitored. For example, rural routes may have lower traffic volumes and thus require a higher threshold of zero-flow samples before a detector is considered bad. In addition, many arterial detectors are located just upstream of the stop bar and can therefore report 100% occupancy for the duration of a red light; these locations would require a different threshold for high-occupancy values than would freeways. A TTRMS should thus employ different detector health tests for facilities in different locations and with different characteristics.

Equations A.1 through A.5 present the diagnostic algorithm:

S_1(i,d) = \sum_{a \le t \le b} \mathbf{1}\left[ k(d,t) = 0 \right]  (A.1)

S_2(i,d) = \sum_{a \le t \le b} \mathbf{1}\left[ k(d,t) > 0 \text{ and } q(d,t) = 0 \right]  (A.2)

S_3(i,d) = \sum_{a \le t \le b} \mathbf{1}\left[ k(d,t) > k^* \right], \quad k^* = 0.35  (A.3)

S_4(i,d) = -\sum_{x:\, \hat{p}(x) > 0} \hat{p}(x) \log \hat{p}(x)  (A.4)

\hat{p}(x) = \frac{\sum_{a \le t \le b} \mathbf{1}\left[ k(d,t) = x \right]}{\sum_{a \le t \le b} 1}  (A.5)

where d is the day index, t is the 30-second sample number, k is occupancy, q is flow, a and b index the first and last samples of the 5 a.m. to 10 p.m. window, 1[·] is the indicator function, and p̂(x) is the empirical frequency of occupancy value x.

With thresholds S_1^* = 1{,}200, S_2^* = 50, S_3^* = 200, and S_4^* = 4, the detector is diagnosed as malfunctioning on day d (Δd = 1) if

S_1(i,d) > S_1^* \quad \text{or} \quad S_2(i,d) > S_2^* \quad \text{or} \quad S_3(i,d) > S_3^* \quad \text{or} \quad S_4(i,d) > S_4^*.
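As a concrete illustration, the daily statistics check can be written directly from Equations A.1 through A.5. The Python sketch below assumes the 30-second occupancy and flow samples for the 5 a.m. to 10 p.m. window have already been pulled into arrays; the function and variable names are illustrative, and the entropy statistic is compared against the threshold of 4 stated above.

    import numpy as np

    # Thresholds quoted in the text: S1*, S2*, S3*, S4*.
    S1_MAX, S2_MAX, S3_MAX, S4_MAX = 1200, 50, 200, 4
    K_STAR = 0.35  # high-occupancy cutoff from Equation A.3

    def diagnose_detector_day(k, q):
        """Return 1 if the detector-day is diagnosed bad, 0 if good.

        k, q: arrays of 30-second occupancy and flow samples collected
        between 5 a.m. and 10 p.m. (the a <= t <= b window).
        """
        k = np.asarray(k, dtype=float)
        q = np.asarray(q, dtype=float)

        s1 = np.sum(k == 0)              # Equation A.1: zero occupancy
        s2 = np.sum((k > 0) & (q == 0))  # Equation A.2: occupancy but no flow
        s3 = np.sum(k > K_STAR)          # Equation A.3: very high occupancy

        # Equations A.4 and A.5: entropy of the empirical occupancy distribution.
        _, counts = np.unique(k, return_counts=True)
        p_hat = counts / counts.sum()
        s4 = -np.sum(p_hat * np.log(p_hat))

        bad = (s1 > S1_MAX) or (s2 > S2_MAX) or (s3 > S3_MAX) or (s4 > S4_MAX)
        return int(bad)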

PEMS CALCULATIONS

PeMS Speed Calculation

PeMS uses a g-factor, which represents the effective length of a vehicle, to calculate speed from flow and occupancy detector outputs. The g-factor combines the average length of the vehicles in the traffic stream and the tuning of the loop detector. Traditionally, a constant value for the g-factor is used, but this practice leads to inaccurate speed estimates because the g-factor varies by lane, time of day, and loop sensitivity. PeMS estimates a g-factor for each loop for every 5-minute period of an average week to provide accurate speed estimates.

The algorithm implemented in PeMS is adapted from van Zwet et al. (1). The steps for estimating speed from that paper are as follows:

1. Assume that the free-flow speed on the freeway is known and constant.
   a. Free flow is defined as occupancy below a certain threshold.
   b. The free-flow speed is a function only of the type of freeway (i.e., the total number of lanes) and the particular lane the detector is in.
2. Using this assumption, work backward for each loop and compute the g-factor at a number of points during a number of days.
3. Smooth the g-factors using a robust adaptive regression method to obtain a g-factor for each loop in the system over a typical week.
4. Use the g-factor to compute an initial estimate of speed for each loop in real time.
5. Pass the initial estimate through an exponential filter with weights that vary as a function of flow. When the flow at the loop is low, the smoothing is severe; when the flow is high, there is little smoothing. This weighting allows the estimate to adapt quickly to periods of congestion while remaining stable when there is very little data (such as in the middle of the night). The filtered value is the speed estimate.

Table A.24 shows the free-flow speeds assumed in the g-factor calculation. These speeds were taken from double-loop detector data in the San Francisco Bay Area of California.
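The core relationship behind the g-factor is speed = flow × g / occupancy, where g is the effective vehicle length. The Python sketch below illustrates the back-calculation of g under an assumed free-flow speed (such as those in Table A.24) and the flow-weighted exponential filter. It simplifies the method: a median replaces the robust adaptive regression, the free-flow occupancy cutoff is assumed, and all names and thresholds are illustrative only.

    import numpy as np

    def estimate_g_factor(q, k, v_free, free_flow_occ=0.10):
        """Back-calculate a g-factor (effective vehicle length) for one loop.

        q: flow in vehicles per hour; k: occupancy (fraction); v_free: assumed
        free-flow speed for this lane (e.g., Table A.24, mph). free_flow_occ
        is an assumed occupancy cutoff defining free-flow samples.
        """
        free = (k > 0) & (q > 0) & (k < free_flow_occ)
        # During free flow, speed = q * g / k = v_free, so g = v_free * k / q.
        g_samples = v_free * k[free] / q[free]
        # The paper fits a robust adaptive regression per 5-minute period of a
        # typical week; a median over free-flow samples is a crude stand-in.
        return float(np.median(g_samples))

    def filtered_speed(q, k, g, prev_speed, v_free, q_max):
        """One real-time update: raw g-factor speed plus a flow-weighted filter."""
        raw = v_free if k <= 0 else q * g / k   # initial speed estimate
        w = min(q / q_max, 1.0)                 # low flow -> heavy smoothing
        return w * raw + (1.0 - w) * prev_speed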

TABLE A.24. FREE-FLOW SPEEDS FOR g-FACTOR CALCULATION

Lane Type   No. of Lanes   Lane 1   Lane 2   Lane 3   Lane 4   Lane 5   Lane 6   Lane 7
HOV         1              65.0
HOV         2              65.0     65.0
ML          1              65.0
ML          2              71.2     65.1
ML          3              71.9     69.7     62.7
ML          4              74.8     70.9     67.4     62.8
ML          5              76.5     74.0     72.0     69.2     64.5
ML          6              76.5     74.0     72.0     69.2     64.5     64.5
ML          7              76.5     74.0     72.0     69.2     64.5     64.5     64.5

Note: HOV = high-occupancy vehicle; ML = mainline. Speeds are in mph.

PeMS Linkage Between Delay and Events

PeMS uses a congestion pie algorithm to assign delay on a freeway to one of three categories:

1. Collisions: based on the set of all accidents that take place on the freeway system.
2. Bottlenecks: based on anything caught by the PeMS bottleneck identification algorithm. The cause of a bottleneck on any one day is not determined.
3. Miscellaneous: all delay that cannot be assigned to either of the two previous categories.

Delay is assigned a cause on a quarterly basis. The steps used to assign delay to its cause are as follows:

1. Compute total delay, Dtot.
   a. Calculate delay with respect to a 60-mph reference speed for each county–freeway–direction in the quarter.
2. Compute delay due to collisions, Dcol.
   a. Extract the number of collisions per day from the incident data set provided by the agency.
   b. Fit a straight-line linear regression relating the delay on each day to the number of collisions. The intercept, alpha, is the average daily delay for a collision-free day.
   c. Compute the delay due to collisions, Dcol = Dtot − alpha (bounded below by zero).

3. Compute delay due to bottlenecks, Dbn.
   a. Take the recurrent bottleneck locations that are active on more than 20% of the days in the quarter.
   b. For these bottlenecks, estimate the average daily delay due to the bottleneck, Dbn, from the results of the bottleneck identification algorithm.
   c. Limit Dbn such that Dbn + Dcol ≤ Dtot.
4. Compute the miscellaneous delay, Dmisc.
   a. Dmisc is any delay that cannot be assigned to either bottlenecks or collisions.
5. Subdivide Dbn into potential delay savings, Dpot, and excess delay, Dexcess.
   a. For each corridor with bottlenecks, compute the potential savings that would result from running an ideal ramp-metering algorithm at each bottleneck. An ideal ramp-metering algorithm is a strategy that restricts the rate at which vehicles enter the freeway so that traffic at that location operates at capacity, where capacity is defined as the maximum observed 15-minute flow at the location.
   b. Restricting demand at the bottleneck means that the freeway operates at free-flow conditions and the delay on the freeway is reduced to zero.
   c. The side effect is that the vehicles waiting at the ramps incur delay. This is the excess delay, Dexcess.

There are several limitations to the delay-assignment strategy outlined above:

• The method is applicable only to freeways.
• No delay is attributed to weather, lane closures, or special events.
• Incident data supplied by local agencies can be incomplete or incorrect.
• Some sections of freeway are not covered by detectors but do have reported incidents, which leads to a mismatch between the area covered by fixed measurement devices and the area covered by incident data collection.
• The ideal ramp-metering algorithm relies on many idealized assumptions, including the following:
   – The ramps at each location have enough storage capacity to hold all metered vehicles.
   – It is politically feasible to run the algorithm.
   – Drivers do not take any detours or diversions.

Thus, the potential delay savings should be interpreted with care, as an estimate of the maximum savings possible rather than of the realized savings.
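The quarterly decomposition can be summarized numerically. The Python sketch below is one plausible reading of Steps 1 through 4, not the PeMS source code: daily delays are regressed on daily collision counts, the intercept is treated as the collision-free baseline, and the bottleneck share is capped so the pieces never exceed the total. All function and argument names are illustrative.

    import numpy as np

    def congestion_pie(daily_delay, daily_collisions, bottleneck_delay):
        """Split quarterly delay into collision, bottleneck, and misc. shares.

        daily_delay: vehicle-hours of delay (vs. a 60-mph reference) per day
        for one county-freeway-direction; daily_collisions: collisions per
        day; bottleneck_delay: delay attributed to recurrent bottlenecks
        (those active on more than 20% of days), summed over the quarter.
        """
        daily_delay = np.asarray(daily_delay, dtype=float)
        d_tot = float(daily_delay.sum())

        # Straight-line regression of daily delay on daily collisions; the
        # intercept alpha is the expected delay on a collision-free day.
        slope, alpha = np.polyfit(daily_collisions, daily_delay, 1)

        # Delay above the collision-free baseline, attributed to collisions,
        # bounded between zero and the total.
        d_col = float(np.clip(d_tot - alpha * len(daily_delay), 0.0, d_tot))

        # Bottleneck delay, limited so that Dbn + Dcol <= Dtot.
        d_bn = min(float(bottleneck_delay), d_tot - d_col)

        d_misc = d_tot - d_col - d_bn
        return {"total": d_tot, "collisions": d_col,
                "bottlenecks": d_bn, "miscellaneous": d_misc}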

Travel Time Predictions

PeMS graphically displays a prediction of the travel time for a selected route from the selected time through the rest of the day. The prediction is made by examining the collection of historical travel times for the route and choosing the days with the three closest travel time profiles; the distance between profiles is measured with a weighted vector over the last few samples. The prediction is the median of those three closest profiles, plotted for the rest of the day. The user’s plot shows the measured travel time up to the current time and then the prediction for the rest of the day.

EXISTING AVI FILTERING ALGORITHMS

TransGuide: San Antonio, Texas

Summary

TransGuide filters AVI travel time estimates by defining a set of valid recorded travel times during each 2-minute evaluation period, based on those that are within 20% of the travel time estimated between the same two points for the previous 2-minute period.

Algorithm

Equations A.6 and A.7 show the algorithm developed by the Southwest Research Institute:

Stt_{AB_i} = \left\{ t_{B_i} - t_{A_i} : t - t_w \le t_{B_i} \le t, \; (1 - l_{th})\, tt'_{AB_i} \le t_{B_i} - t_{A_i} \le (1 + l_{th})\, tt'_{AB_i} \right\}  (A.6)

tt_{AB_i} = \frac{1}{\left| Stt_{AB_i} \right|} \sum_{i \in Stt_{AB_i}} \left( t_{B_i} - t_{A_i} \right)  (A.7)

where

Stt_{AB_i} = set of valid recorded travel times used at each evaluation time to estimate the current average travel time between two AVI readers, A and B;
t_{A_i} and t_{B_i} = detection times of vehicle i at Readers A and B;
t = time at which the estimation takes place;
tt'_{AB_i} = previously estimated travel time from Reader A to Reader B;
t_w = rolling-average window, which determines the period of time considered when estimating the current average travel time (TransGuide uses 2 minutes);
l_{th} = link threshold, used to identify and remove outlier observations (TransGuide uses 0.20); and
tt_{AB_i} = estimated average travel time for the time period.
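A direct transcription of Equations A.6 and A.7 into Python is shown below. The carry-forward behavior when no valid reads fall inside the window is an assumption (the equations leave that case undefined), and the function and argument names are illustrative.

    def transguide_estimate(matches, t_now, prev_estimate,
                            window_s=120.0, link_threshold=0.20):
        """TransGuide-style rolling filter (Equations A.6 and A.7).

        matches: (t_A, t_B) detection-time pairs for vehicles read at both
        stations; t_now: evaluation time t; prev_estimate: previous estimate
        tt'; window_s: rolling window t_w (2 minutes); link_threshold: l_th.
        """
        lo = (1.0 - link_threshold) * prev_estimate
        hi = (1.0 + link_threshold) * prev_estimate

        valid = [t_b - t_a for (t_a, t_b) in matches
                 if t_now - window_s <= t_b <= t_now    # inside the window
                 and lo <= (t_b - t_a) <= hi]           # within 20% of tt'

        if not valid:
            return prev_estimate   # assumption: carry the old estimate forward
        return sum(valid) / len(valid)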

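Returning to the PeMS travel time prediction described above, the idea of matching the last few measured samples against historical daily profiles and taking the median of the three closest days can be sketched as follows. The number of trailing samples compared and the unweighted distance are simplifying assumptions; the weighting PeMS applies is not specified here.

    import numpy as np

    def predict_rest_of_day(today_so_far, history, n_recent=5, n_neighbors=3):
        """Median of the closest historical profiles, used as the prediction.

        today_so_far: travel times measured so far today, one per time bin;
        history: 2-D array with one row per past day on the same time bins.
        Returns predicted travel times for the remaining bins of the day.
        """
        history = np.asarray(history, dtype=float)
        today = np.asarray(today_so_far, dtype=float)
        m = len(today)

        # Distance between profiles over the last few samples measured so far.
        recent = today[max(m - n_recent, 0):m]
        hist_recent = history[:, max(m - n_recent, 0):m]
        dists = np.sqrt(((hist_recent - recent) ** 2).sum(axis=1))

        # Median of the closest days' remaining profiles.
        nearest = np.argsort(dists)[:n_neighbors]
        return np.median(history[nearest, m:], axis=0)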
TranStar: Houston, Texas

Summary

TranStar filters AVI travel time estimates by defining a set of valid recorded travel times during each 30-second evaluation period, based on those that are within 20% of the travel time estimated between the same two points for the previous 30-second period.

Algorithm

TranStar’s algorithm was also developed by the Southwest Research Institute and is the same as that used by TransGuide, except that travel times are updated each time new travel time information is obtained from a vehicle rather than at fixed intervals. Like TransGuide, TranStar uses a link threshold parameter of 0.2, but it uses a shorter rolling-average window of 30 seconds.

TRANSMIT: New York City Metropolitan Area

Summary

TRANSMIT filters AVI travel times by averaging link travel times over a 15-minute observation interval. An equation is then used to smooth the estimated travel time against historical data from the same 15-minute interval on the same day of the previous week, yielding an updated historical average travel time that is used as the current estimate.

Algorithm

Equations A.8 and A.9 show the TRANSMIT algorithm:

tt_{AB_k} = \frac{1}{n_k} \sum_{i=1}^{n_k} \left( t_{B_i} - t_{A_i} \right)  (A.8)

tth''_{AB_k} = \alpha \cdot tt_{AB_k} + (1 - \alpha) \cdot tth''_{AB_{k-1}}  (A.9)

where

tt_{AB_k} = estimated current average travel time for interval k;
n_k = number of link travel times collected for interval k, up to a maximum of 200 observations;
tth_{AB_k} = historical smoothed travel time for the kth sampling interval;
tth''_{AB_k} and tth''_{AB_{k-1}} = updated historical smoothed travel times for the current (k) and previous (k − 1) sampling intervals; and
α = smoothing factor (set at 0% when an incident is detected and at 10% otherwise).
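A minimal Python sketch of the TRANSMIT update in Equations A.8 and A.9 follows. Treating the current interval's average as the quantity blended into the previous smoothed value, and carrying the previous value forward when an interval has no matches, are readings of the description above rather than documented TRANSMIT behavior; the names are illustrative.

    def transmit_update(link_times, prev_smoothed, incident=False):
        """One 15-minute TRANSMIT update (Equations A.8 and A.9).

        link_times: matched AVI link travel times observed in the interval
        (TRANSMIT caps these at 200 observations); prev_smoothed: updated
        historical value for the previous interval.
        """
        alpha = 0.0 if incident else 0.10        # smoothing factor
        obs = list(link_times)[:200]
        if obs:
            current = sum(obs) / len(obs)        # Equation A.8
        else:
            current = prev_smoothed              # assumption for empty intervals
        return alpha * current + (1.0 - alpha) * prev_smoothed   # Equation A.9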

Dion and Rakha Algorithm

An AVI filtering algorithm was developed by Dion and Rakha to provide travel time estimates in areas with a low market penetration of vehicles equipped with AVI tags (2).

Summary

The algorithm is designed to handle both stable and unstable traffic conditions, to provide accurate travel time estimates in areas where there are few AVI sensors, and to work for both freeway and signalized arterial roadways. It does this by applying filters based on the following:

1. The expected average trip time and trip time variability in a future time interval;
2. The number of consecutive intervals without any readings since the last recorded trip time;
3. The number of consecutive data points either below or above the validity range; and
4. The variability of travel times within an analysis interval.

Algorithm

The filtering algorithm first calculates the expected smoothed average travel time and smoothed travel time variance between Readers A and B for a given sampling interval using a low-pass smoothing filter. The algorithm then applies a robust data-filtering process that identifies valid data within a dynamically varying validity window. The size of the validity window varies as a function of the number of observations within the current sampling interval, the number of observations in previous intervals, and the number of consecutive observations outside the validity window.

REFERENCES

1. van Zwet, E., C. Chen, Z. Jia, and J. Kwon. A Statistical Method for Estimating Speed from Single Loop Detectors. University of California, Berkeley, 2003.
2. Dion, F., and H. Rakha. Estimating Dynamic Roadway Travel Times Using Automatic Vehicle Identification Data for Low Sampling Rates. Transportation Research Part B, Vol. 40, No. 9, 2006, pp. 745–766.
