Table 4.1 Data Required by the Pavement Design Guide Software

| Required Data | Source for Data (a) |
|---|---|
| AADTi for up to 13 VCs (1, 2, and 3A) (b) | Continuous classification counts, or short-duration classification counts adjusted for day of week and season |
| AADT and Percent Trucks (3B) | Short-duration volume counts adjusted for day of week and season, and state estimates of truck percentages (from a combination of short and continuous classification counts) |
| Truck Traffic Classification Group (3B) | Judgment |
| Monthly Traffic Distribution Factors | Continuous classification counts by VC |
| Axle-Load Distribution Factors, Site Specific (1) | Weigh-in-motion data collection |
| Axle-Load Distribution Factors, Regional (2) | Weigh-in-motion data collection (statewide program) |
| Axle-Load Distribution Factors, Statewide (3) | Weigh-in-motion data collection (statewide program) |
| Linear or Exponential Growth Rate | Various sources |
| Directional Distribution Factor | Set to 1.0, except for Level 3B analyses |
| Axle Groups per Vehicle (for each VC) | Weigh-in-motion data collection |
| Hourly Distribution Factors | Continuous classification counts or short-duration classification counts |

(a) AADTi is AADT by VC.
(b) Numbers in parentheses identify the input levels for which the data are used.

4.2 Data Analysis

Once data are collected from the field, the data must be analyzed. This process consists of

- Quality control review of the collected data (to ensure that the equipment operated correctly);
- Summarization of the data into statistics and record formats that can be readily used by others inside and outside the state highway department; and
- Storage of summary statistics in a form that permits ready retrieval and use by other analysis tools.
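Table 4.1 amounts to a checklist of the traffic inputs the design software needs for each analysis section. Purely as an illustration of how those inputs might be organized for processing (the names and structure below are invented for this sketch and are not the Design Guide's actual input format), they can be collected into a single record per design section:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DesignTrafficInputs:
    """Illustrative container for the Table 4.1 inputs (field names are invented)."""
    aadt_by_class: Dict[int, float]                  # AADTi for up to 13 vehicle classes
    monthly_distribution: Dict[int, List[float]]     # 12 monthly factors per VC
    hourly_distribution: Dict[int, List[float]]      # 24 hourly factors per VC
    axle_load_spectra: Dict[str, List[float]]        # axle-load distribution by axle group type
    axle_groups_per_vehicle: Dict[int, Dict[str, float]]  # avg. singles/tandems/tridems/quads per VC
    growth_rate: float                               # linear or exponential growth rate
    growth_is_exponential: bool = False
    directional_factor: float = 1.0                  # set to 1.0 except for Level 3B analyses
```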

Table 4.2 Data-Collection Elements for TrafLoad

| Type of Traffic Data Collection | Data Produced for TrafLoad |
|---|---|
| Short-Duration Volume Counts | Provides a "counted" measure of average daily traffic (ADT), which serves as an input to the computation of AADT (Class Level 3) |
| Continuous Traffic Counts | Used to compute the seasonal and day-of-week adjustment factors necessary to compute AADT from ADT values |
| Short-Duration Vehicle Classification Counts | Actual truck volumes (by type of truck) on the road for which the measurement was made (Level 1 or 2 class data); TOD distribution factors by VC |
| Continuous Vehicle Classification Counts | Day-of-week and seasonal adjustment factors for trucks; actual truck volumes for Level 1 (class) sites; monthly traffic distribution factors by VC; trend measurements used when forecasting future truck volumes |
| Short-Duration WIM Measurements | Current load spectra datasets (Weight Level 1) (if a well-calibrated site); used in the computation of Level 2 (weight) regional axle-load spectra by Truck Weight Road Group (TWRG) and Level 3 statewide axle-load spectra; used to correctly assign a specific roadway to a specific TWRG |
| Continuous WIM Measurements | Seasonal and current load spectra datasets (Weight Level 1); day-of-week and seasonal adjustments for load spectra datasets developed from short-duration WIM measurements; used in the computation of Level 2 (weight) regional axle-load spectra by TWRG and Level 3 statewide average axle-load spectra; also used for continuous classification data |

Data that are not reviewed, summarized, and stored for easy use simply waste the available data-collection resources.

Mechanistic pavement design does not require that state highway agencies perform these tasks in a particular manner. It does require that specific output reports be made available from the collected data. It also requires that effective quality assurance procedures be adopted and followed in order to maintain the quality of the data being used as input to the design process. The key components of this process are discussed below.
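As Table 4.2 indicates, continuous counts supply the day-of-week and seasonal adjustment factors used to expand a short-duration count (ADT) into an AADT estimate. A minimal sketch of that expansion step follows; the names and values are illustrative, and agencies differ on whether factors are defined for multiplication or division, so the local convention should be checked before reusing this form.

```python
def estimate_aadt(counted_adt: float,
                  day_of_week_factor: float,
                  seasonal_factor: float) -> float:
    """Expand a short-duration count to an AADT estimate.

    counted_adt        -- average daily traffic measured during the short count
    day_of_week_factor -- adjustment for the day(s) of week counted, from continuous counts
    seasonal_factor    -- adjustment for the month or season counted, from continuous counts

    Factors here are expressed as ratios of annual-average conditions to the
    counted condition, so the counted value is multiplied by both factors.
    Some agencies define the factors the other way around and divide instead.
    """
    return counted_adt * day_of_week_factor * seasonal_factor


# Example: a Tuesday count of 1,240 trucks in July, where Tuesdays run about
# 5 percent above the weekly average and July runs about 10 percent above the
# annual average for this road group (both figures invented for illustration).
aadt_estimate = estimate_aadt(1240, 1 / 1.05, 1 / 1.10)
```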

Quality Control

Data-collection equipment does not always work as intended. Sensors fail, come loose, or are improperly installed. Settings can be inappropriate. The equipment may not be properly calibrated, or the calibration may drift over time as environmental conditions change. In some cases, operating conditions may not allow the equipment to function as designed.

Data from equipment that is not operating correctly yield inaccurate measurements of traffic loads, which in turn result in poorly designed pavement depths. Quality control programs are intended to identify malfunctioning or poorly calibrated equipment and to remove data collected by that equipment from the analysis process. In some cases, this means that additional data must be collected to replace the invalid data. In other cases, alternative data may be available (e.g., loss of 2 weeks of data from a continuous-count location is not serious). Performing quality checks quickly allows repair or recalibration efforts to be undertaken promptly, which in turn prevents loss of a large volume of data. Quality control is particularly important for weigh-in-motion data, as many WIM scales are subject to calibration drift. Calibration drift of as little as 10 percent can result in errors of up to 40 percent in the estimates of pavement damage.2

For these reasons, each data-collection agency should have a quality assurance process that checks incoming data for errors. This can be a significant task, depending on the type of data collection being performed, the volume of data being collected, and the amount of automation present in the traffic data processing system operated by the state highway agency. A pooled-fund study led by the Minnesota DOT developed a knowledge-based system for performing data quality checks for volume, classification, and weight data.3 Other projects, such as FHWA's Long-Term Pavement Performance project, have also developed and published basic quality assurance procedures.4 A summary of the most common data quality checks is provided in Section 5.5 of a companion report.5

2 WIM Calibration, a Vital Activity, FHWA Publication No. FHWA-RD-98-104, July 1998.
3 Intelligent Decision Technologies, Ltd., Traffic Data Quality Procedures, Pooled-Fund Study, Expert Knowledge Base, Interim Task A3 Report, prepared for the Minnesota DOT, November 1997.
4 FHWA, LTPP Division, Data Collection Guide for Long-Term Pavement Performance Studies, Operational Guide No. SHRP-LTPP-OG-001, revised October 1993.
5 Cambridge Systematics, Inc., and Washington State Transportation Center, Equipment for Collecting Traffic Load Data, prepared under NCHRP Project 1-39, June 2003, available online at http://trb.org/news/blurb_detail.asp?id=4403.

All quality check procedures compare measured traffic characteristics with a set of known values. Known values are drawn either from previous data-collection experience for that location or from independently measured sources. (For example, to determine if the clock on a data-collection device correctly distinguishes daytime from nighttime, 1:00 a.m. and 1:00 p.m. volumes might be compared using the known fact that 1:00 p.m. volumes normally exceed 1:00 a.m. volumes.)
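The daytime/nighttime clock check described in the example above is straightforward to automate. The sketch below is illustrative only (the record structure is assumed, not taken from any particular system); it flags days on which the 1:00 a.m. volume is not below the 1:00 p.m. volume so that an analyst can review them.

```python
from typing import Dict, List, Tuple


def flag_clock_suspects(hourly_volumes: Dict[Tuple[str, int], int]) -> List[str]:
    """Return dates whose 1:00 a.m. volume is not below the 1:00 p.m. volume.

    hourly_volumes maps (date, hour) -> vehicle count for one count location.
    The check relies on the "known value" that 1:00 p.m. volumes normally exceed
    1:00 a.m. volumes; as noted in the text, that assumption must itself be
    verified for the location before the check is applied.
    """
    suspects = []
    dates = {date for (date, _hour) in hourly_volumes}
    for date in sorted(dates):
        am = hourly_volumes.get((date, 1))
        pm = hourly_volumes.get((date, 13))
        if am is None or pm is None:
            continue  # incomplete day; handled by a separate completeness check
        if am >= pm:
            suspects.append(date)
    return suspects
```

A production quality assurance system would apply many such checks (data completeness, class shares, axle-weight patterns, and so on), but the compare-to-known-value structure is the same.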

A key to the quality assurance effort is to make sure the known values against which collected data are compared are accurate measures of the expected traffic patterns. For example, the freeway connecting Los Angeles and Las Vegas often carries 1:00 a.m. volumes large enough to exceed its 1:00 p.m. volumes. Thus, the check described above is not an appropriate quality control check for this location, even though it is quite applicable to most other roadways in the nation.

This same key point is important when known values are used for automatically adjusting the calibration of data-collection equipment such as WIM scales. Such algorithms can work, but only when the known values are correct and when a sufficient number of vehicles cross the scale in the time period observed. If any of the key assumptions used for auto-calibration are incorrect, the auto-calibration system will not work effectively and can actually decrease the accuracy of the data collected. Auto-calibration problems may exist if the average axle weights of either passenger cars or Class 9 truck steering axles are not known, if either of these averages varies over time, or if adequate samples of these two vehicle types are not observed during any calibration period.

Periodic collection of independent data is required to confirm that the values used for quality assurance checks are correct. These independent tests include (1) the calibration of WIM and classifier systems when they are first installed and used at a site and (2) visual confirmation that portable classifiers are correctly functioning when they are placed on a roadway. Once initial equipment operation has been verified, datasets can be collected and used for determining the known traffic patterns against which new data are compared.

This type of quality control procedure is designed to identify suspect data (i.e., data that do not fit expected patterns). If unexpected patterns are observed, additional forensic work is required. In some cases, it is readily apparent that equipment or sensors have failed. For permanent data-collection sites, such failures indicate that repairs are needed as quickly as practical. In the case of short-duration data collection, the affected data must be discarded and, usually, replaced by new data.

In other cases, the unusual data are plausible but unexpected (for example, the Los Angeles/Las Vegas TOD patterns mentioned above). In these cases, additional data should be collected to confirm or invalidate the unusual data. For these second-chance data-collection efforts, particular attention should be paid to setting up and calibrating the equipment to ensure that the confirmation dataset is accurate. If the new data support the unexpected traffic pattern, then the known value for this site must be updated to reflect the new information.
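The auto-calibration caveats discussed above (a correct, site-specific reference value and an adequate sample during each calibration period) can be made concrete with a small sketch. The reference weight, sample-size threshold, and step limit below are illustrative values, not recommendations; real systems would use reference values established during the site's initial calibration and re-verified over time.

```python
def updated_wim_calibration(current_factor: float,
                            class9_steer_weights_kips: list,
                            reference_mean_kips: float = 10.5,
                            min_sample: int = 50,
                            max_step: float = 0.05) -> float:
    """Nudge a WIM calibration factor toward a site-specific reference value.

    class9_steer_weights_kips -- steering-axle weights measured for Class 9 trucks
                                 during the calibration period
    reference_mean_kips       -- expected mean steering-axle weight for this site
                                 (illustrative value; must be established per site)

    The factor is left unchanged if too few trucks were observed, and each
    adjustment is capped so one bad sample cannot swing the calibration sharply.
    """
    if len(class9_steer_weights_kips) < min_sample:
        return current_factor
    observed_mean = sum(class9_steer_weights_kips) / len(class9_steer_weights_kips)
    ratio = reference_mean_kips / observed_mean
    ratio = max(1.0 - max_step, min(1.0 + max_step, ratio))
    return current_factor * ratio
```
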
Data Summarization

Once the collected traffic data have successfully passed through the quality assurance process, an efficient mechanism is needed for storing and summarizing the data so that they can be used when needed for pavement design. Most states have existing programs that collect and store both volume and classification data on a section-by-section or count-by-count basis. In many states, these data can be retrieved by section through the state highway agency's geographic information system.

Changes in existing data summarization procedures that may be required to support mechanistic pavement design include the creation of some additional summary statistics that not all states currently compute and store. These statistics are intended to provide better site-specific traffic loading estimates and thus provide for better pavement designs. Among the statistics that are computed by TrafLoad for use by the pavement design software are

- Seasonal (monthly) patterns of truck volumes;
- TOD distributions for truck volumes;
- Load spectra for different roads and roadway groups; and
- Numbers of axles, by type of axle, for each class of trucks.

The last two statistics have been discussed in some detail in Chapter 2.0, and other needed statistics (including the first two) have been discussed in Chapter 3.0.
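The first two statistics in the list above can be computed directly from hourly classification records that have passed quality control. The sketch below is illustrative only; the record structure is assumed, and the normalization conventions (whether monthly factors average 1.0, hourly factors sum to 1.0, and so on) should follow the definitions used by TrafLoad and the design software rather than the choices made here.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# One record per station/direction/hour: (month 1-12, hour 0-23, vehicle_class, count).
Record = Tuple[int, int, int, int]


def distribution_factors(records: Iterable[Record],
                         vc: int) -> Tuple[Dict[int, float], Dict[int, float]]:
    """Compute illustrative monthly and hourly distribution factors for one vehicle class.

    Monthly factors are scaled so the 12 values average 1.0 (assumes a full year of
    data); hourly factors are the share of total volume in each hour (summing to 1.0).
    """
    by_month = defaultdict(float)
    by_hour = defaultdict(float)
    for month, hour, vehicle_class, count in records:
        if vehicle_class != vc:
            continue
        by_month[month] += count
        by_hour[hour] += count

    total = sum(by_month.values())
    monthly = {m: (by_month[m] / (total / 12.0)) if total else 0.0 for m in range(1, 13)}
    hourly = {h: (by_hour[h] / total) if total else 0.0 for h in range(24)}
    return monthly, hourly
```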

Use of TrafLoad

If TrafLoad is used to process traffic data and generate traffic data inputs for the Pavement Design Guide software, then the required data must be loaded into the system. There are two primary forms of data to be entered:

- Hourly vehicle classification records from specific count locations and
- Axle-load data by vehicle for specific sites.

Hourly vehicle classification records are assumed to be available in the FHWA C-card (or four-card) record format. Data from both short-duration and continuous sites should be supplied in this format. While all state highway agencies can currently create C-card records easily, considerable change may be needed within current data processing systems in order to make hourly classification data available to pavement designers. Many states only provide access to summary statistics such as average daily traffic and overall percent trucks. Making hourly records available to TrafLoad may require modification to current systems or changes in administrative procedures used to store, request, and report traffic data.

Axle-load data for individual vehicles are assumed to be available in the FHWA W-card (or seven-card) record format. As in the case of C-card records, making the W-card records available to TrafLoad may require modification to current systems.

It is expected that some software development work will be required at most state highway agencies to simplify the extraction of data items from existing traffic databases and to make the appropriate files accessible to TrafLoad. In most cases, these development efforts should be modest.
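Once per-vehicle axle records can be extracted from the W-card files (or an equivalent source), building axle-load distributions is largely a binning exercise. The sketch below assumes the records have already been parsed into simple (vehicle class, axle group, weight) tuples; the W-card layout itself is not reproduced here, and the bin width shown is illustrative rather than the bins the design software actually uses.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# Parsed axle record: (vehicle_class, axle_group, weight_kips), where axle_group is
# "single", "tandem", "tridem", or "quad". Parsing of W-card records into this
# form is assumed to have been done elsewhere.
AxleRecord = Tuple[int, str, float]


def load_spectrum(records: Iterable[AxleRecord],
                  vehicle_class: int,
                  axle_group: str,
                  bin_width_kips: float = 1.0) -> Dict[float, float]:
    """Return the fraction of axles of one class and group falling in each weight bin.

    The bin width is illustrative; the pavement design software defines its own
    bins for each axle group type.
    """
    counts = defaultdict(int)
    total = 0
    for vc, group, weight in records:
        if vc != vehicle_class or group != axle_group:
            continue
        lower_edge = int(weight / bin_width_kips) * bin_width_kips
        counts[lower_edge] += 1
        total += 1
    return {edge: n / total for edge, n in sorted(counts.items())} if total else {}
```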