
The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
…ment deterioration, surface types, and operating speed. The sites are surveyed manually at 50-m intervals to determine the reference values (Figure 23). The rut depth is determined by taking manual transverse profile measurements in each wheel path at 10-m intervals using a 2-m straight edge. The longitudinal profile and IRI in each wheel path are obtained using a Digital Profilite 300, which is a Class 1 profiler.

Following the reference value determination, the service provider and BCMoT personnel conduct an on-site review where they compare the semi-automated survey results with the results of the manual survey. They walk over the site comparing the results to resolve ambiguities and, if necessary, adjust the rating procedures and/or revise the manual ratings.

The service provider's ability to accurately and repeatably rate pavement distress is assessed by completing a series of five runs over each site, generating ratings at 50-m intervals, and comparing the results for each run with the manual survey. The distress comparisons are based on (1) a combined Pavement Distress Index (PDI) to assess accuracy and repeatability, and (2) severity and density rating totals for each distress type present over the entire site to highlight possible discrepancies. The accuracy criterion is ±1 PDI point (on a 10-point scale) of the manual survey value, and the repeatability criterion is a maximum standard deviation of 1 PDI point over the five runs. Landers et al. (86) reported that the range of PDI errors was 0.0 to 0.6 between 1999 and 2001, and that the standard deviation (from five runs) was 0.2 for all the initial test sites.

The service provider's smoothness and rut depth measurements are also compared for the 50-m segments in each wheel path and for the 500-m test site. The IRI criteria establish that the measurements must be within ±10% of the Class 1 profile survey for each wheel path (accuracy), using 100- and 500-m integration intervals, and must have a maximum repeatability of 0.1 m/km standard deviation for five runs. The rut depth measurements must have an accuracy of ±3 mm of the manual survey and a repeatability of 3 mm standard deviation for five runs.

Production Survey Quality Acceptance

During production surveys, quality acceptance is primarily done using blind sites situated along various highways in each region. These sites are manually surveyed in advance using the same procedure described for the initial checks, and their location is unknown to the service provider. For larger surveys, the initial test sites are also resurveyed periodically. Blind sites are generally scheduled once every three days during the surveys.

Each day during the production surveys, the service provider is required to contact and update the BCMoT representative as to their progress. At this time, the service provider is informed that they have passed over a site on the previous day and is provided with the site location, whereupon he or she immediately submits by fax the surface distress survey ratings, smoothness, and rut depth measurements (at 50-m intervals) for that section. Because of possible referencing differences, the service provider is required to submit 1.0 km of data, with 250 m on either side of the blind site. The acceptance criteria are the same as for the initial test. The service provider is authorized to continue with the production surveys upon satisfactorily completing the blind site quality acceptance test (99). These criteria are being reviewed, and consideration is being given to the use of Cohen's weighted kappa statistic to compare individual distress types and give more weight to those that have the most effect on PMS decisions (86).

Submitted Data Quality Acceptance

The last step in the quality acceptance process is the assessment of the submitted data, which is conducted using a three-step process that involves both manual and system checks. The first step consists of a thorough manual review of the submitted data files that verifies that data exist for all road segments, that the data file structure is correct, that segment location and definition are correct, and that data are within acceptable ranges. The initial quality acceptance results are summarized and provided to the service provider for correction. The second step involves comparing the current year's submitted survey data to previously collected data to determine whether there are any significant variations from cycle to cycle. The third and final step involves uploading the distress, smoothness, and rut depth data to the PMS, which conducts internal standardized and user-defined verification tests. The PMS generates a log report listing all discrepancies, which can be reviewed, confirmed, or corrected and reloaded as required.

FIGURE 23 Reference value determination in British Columbia, Canada (99). (Panels: Class 1 Roughness Survey; Manual Distress Survey; Rut Depth Survey.)
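The numeric acceptance criteria for the initial test (±1 PDI point, IRI within 10% of the reference with at most 0.1 m/km repeatability, rut depth within 3 mm) lend themselves to a simple automated check. The sketch below is illustrative only, not BCMoT software; the thresholds come from the text, while the function name and data layout are assumptions:

```python
from statistics import pstdev

def initial_test_passes(pdi_runs, pdi_manual, iri_runs, iri_reference,
                        rut_runs, rut_manual):
    """Evaluate the initial-test acceptance criteria described in the text.

    pdi_runs: PDI values from the five runs over a test site (10-point scale)
    iri_runs: IRI values (m/km) from the five runs, for one wheel path
    rut_runs: rut depths (mm) from the five runs
    """
    return {
        # Accuracy: mean PDI within +/-1 point of the manual survey value
        "pdi_accuracy": abs(sum(pdi_runs) / len(pdi_runs) - pdi_manual) <= 1.0,
        # Repeatability: standard deviation of the five PDI runs within 1 point
        "pdi_repeatability": pstdev(pdi_runs) <= 1.0,
        # Each run's IRI within 10% of the Class 1 profiler reference
        "iri_accuracy": all(abs(i - iri_reference) <= 0.10 * iri_reference
                            for i in iri_runs),
        # IRI repeatability: at most 0.1 m/km standard deviation over five runs
        "iri_repeatability": pstdev(iri_runs) <= 0.1,
        # Each run's rut depth within +/-3 mm of the manual measurement
        "rut_accuracy": all(abs(r - rut_manual) <= 3.0 for r in rut_runs),
        # Rut repeatability: at most 3 mm standard deviation over five runs
        "rut_repeatability": pstdev(rut_runs) <= 3.0,
    }
```

Returning the individual pass/fail flags, rather than a single boolean, mirrors the practice described in the text of reporting which criterion a run failed so the service provider can correct it.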

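Cohen's weighted kappa, the statistic under consideration for comparing individual distress types, down-weights small severity disagreements relative to large ones. A minimal self-contained sketch with the common quadratic weighting (the function and its interface are illustrative, not an agency tool):

```python
def weighted_kappa(rater_a, rater_b, n_levels):
    """Cohen's weighted kappa with quadratic weights for two raters who
    assign ordinal levels 0..n_levels-1 to the same set of samples.
    Assumes the raters use more than one level (so the denominator > 0)."""
    n = len(rater_a)
    # Observed joint proportions and marginal proportions for each rater
    observed = [[0.0] * n_levels for _ in range(n_levels)]
    for i, j in zip(rater_a, rater_b):
        observed[i][j] += 1.0 / n
    marg_a = [sum(row) for row in observed]
    marg_b = [sum(observed[i][j] for i in range(n_levels))
              for j in range(n_levels)]
    # Quadratic disagreement weights: near-misses cost less than large errors
    num = den = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            w = (i - j) ** 2
            num += w * observed[i][j]          # observed disagreement
            den += w * marg_a[i] * marg_b[j]   # chance-expected disagreement
    return 1.0 - num / den
```

With quadratic weights, a rater who is consistently one severity level off is penalized far less than one who confuses low and high severities, which matches the stated goal of giving more weight to the distress comparisons that most affect PMS decisions.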
OCR for page 49
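The first two steps of the submitted-data assessment (the completeness/range review and the cycle-to-cycle comparison) can be sketched as a generic validation pass. All names, fields, and thresholds here are hypothetical placeholders, not BCMoT's actual checks:

```python
def review_submission(records, expected_segments, ranges, previous, max_delta):
    """Sketch of the first two submitted-data checks described in the text:
    (1) every expected road segment present and each value in range,
    (2) change from the previous survey cycle within a plausible bound.

    records / previous: {segment_id: {"iri": ..., ...}} for this/last cycle
    ranges: {"iri": (lo, hi), ...};  max_delta: {"iri": ..., ...}
    """
    issues = []
    for seg in expected_segments:
        if seg not in records:
            issues.append((seg, "missing segment"))
            continue
        for field, (lo, hi) in ranges.items():
            value = records[seg][field]
            if not lo <= value <= hi:
                issues.append((seg, f"{field}={value} outside [{lo}, {hi}]"))
            # Flag implausible cycle-to-cycle jumps for manual review
            if seg in previous and abs(value - previous[seg][field]) > max_delta[field]:
                issues.append((seg, f"{field} changed by more than {max_delta[field]}"))
    return issues
```

As in the process described above, the output is a discrepancy log for review rather than an automatic rejection: each flagged item can be confirmed as a genuine change or corrected and resubmitted.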
contract most of the network-level pavement condition data collection with data collection service providers.

The first case study reviewed the MDSHA experience using an in-house automated system to measure smoothness, rutting, cracking, and other data. Its quality control plan includes checks to verify that all fields are processed, reviews of section-level data in search of abnormalities, and checks to verify that the data have been saved. The quality control program also monitors data repeatability and the accuracy of the test equipment using control sections. Quality acceptance is conducted by a quality assurance auditor, who is not the operator. The auditor checks the data management spreadsheet, verifies that the data are complete and have been saved and backed up, and re-checks a random sample of 10% of the data collected. Time-series comparisons of the percentage of the network in acceptable condition by route, county, district, and for the entire state are used to flag potential data quality problems.

The second case study covers VDOT's most recent experience using a data collection service provider. It highlights two interesting approaches for comparing time-history pavement condition data and presents an example of a service provider-supplied quality control process that includes independent validation and verification. Among other criteria, the acceptance plan requires that 95% of the data checked fall within plus or minus 10 index points of the data collected by a third-party validation and verification rater. The third party evaluates a 10% random sample of the pavement deliverables.

The third case study summarizes the ODOT experience using a data collection service provider to collect network-level sensor, geometric, and distress data by automated data collection techniques. The quality control plan developed by the data collection service provider includes quality control checks at all stages of the data collection, processing, reduction, and delivery processes. The quality acceptance procedure includes testing of known control and verification sections, checks of distress ratings on batches of submitted data using a modified version of the service provider's distress rating software, and automatic data quality assurance checks using specially developed software. ODOT has also recently begun using GIS to complement the agency's quality acceptance procedures.

The final example reviews the experience of the BCMoT. The network-level surveys are conducted by contracted service providers that collect surface distress, rutting, and smoothness data using automated equipment. The quality management procedures consist of three levels of testing: (1) initial tests completed by the service provider before the surveys, (2) blind site monitoring during the production surveys, and (3) final assessment of the submitted data files. The initial quality tests compare the results of five runs of the service provider's equipment with reference measurements on four 500-m test sites. These sites are also resurveyed periodically for quality control. Production quality acceptance is primarily done using blind sites situated along various highways in each region, which are manually surveyed in advance using the same procedure described for the initial checks. The final step in the quality acceptance process is the assessment of the submitted data using manual reviews, automated software time-series comparisons, and standardized and user-defined verification tests after the data have been entered into the pavement management database.
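The VDOT acceptance criterion described above (95% of checked data within plus or minus 10 index points of the third-party rater) reduces to a simple proportion test. A hedged sketch; `acceptance_check` and its defaults are illustrative, not VDOT's software:

```python
def acceptance_check(provider_scores, validator_scores,
                     tolerance=10, required=0.95):
    """Compare a provider's condition index values against a third-party
    rater's values for the same sample of sections; pass when the required
    fraction agrees to within the tolerance (index points)."""
    pairs = list(zip(provider_scores, validator_scores))
    within = sum(1 for p, v in pairs if abs(p - v) <= tolerance)
    fraction = within / len(pairs)
    return fraction, fraction >= required
```

In practice the two score lists would come from the 10% random sample of pavement deliverables that the third-party validation and verification rater re-evaluates.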