
Quality Management of Pavement Condition Data Collection (2009)

Chapter Five: Case Studies



This chapter documents the data management practices of a select group of transportation agencies. The case studies include one agency that conducts most of its data collection in-house and three agencies that contract most of their network-level pavement condition data collection to data collection service providers, each using a different quality acceptance approach.

MARYLAND

The Maryland State Highway Administration (MDSHA) uses an in-house automated system to measure smoothness, rutting, and cracking, in addition to other data such as right-of-way images, longitudinal and transverse slopes, and GPS coordinates (66). The network-level data collection process includes: (1) data management, (2) pre-processing, (3) processing, (4) quality control, (5) quality acceptance (denoted as quality assurance), (6) classification and rating, and (7) data reduction. The quality control and acceptance procedures for automated crack detection are discussed by Groeger et al. (66).

Quality Control

As discussed in chapter four, the quality control plan includes checks to verify that all fields are processed, reviews of section-level data in search of abnormalities, and checks to verify that all the data have been saved. The reviewer then inputs a subjective evaluation of the crack detection process (good, fair, or poor).

Section-Level Review

The section-level review is conducted by examining the total quantity of cracking by station and searching for abnormalities, for example, a road segment with many spikes. The operator reviews the segments with abnormalities by manually superimposing the detected cracks on the actual pictures. At the time the plan was published, this process was applied to approximately 50% of the pictures, and the goal was to recognize 80% of the cracks. The operator also looks at the last rehabilitation date and verifies that the amount of cracking is consistent with the age of the surface; for example, a recently rehabilitated pavement would exhibit little cracking.

Subjective Rating

In the last step, the operator verifies that the data have been saved and inputs a subjective evaluation of the crack detection process, following a set of recommendations summarized in Table 5. For example, if more than 90% of the stations reviewed pass the crack detection criterion and all the data were saved to the hard drive and the network, the batch is given a "good" rating.

TABLE 5
EXAMPLE OF QUALITY CONTROL RATING MATRIX USED BY MDSHA

QC Procedure                              Good             Fair               Poor
Stations Processed                        100%             —                  <100%
Criterion 1: Detected >80% of Cracking    >90% Stations    70%–90% Stations   <70% Stations
Data Saved to Hard Drive                  Yes              —                  No
Data Saved to Network                     Yes              —                  No

Source: Groeger et al. (66).

Repeatability and Accuracy Examination

MDSHA has also implemented a quality control program to monitor data repeatability and the accuracy of the test equipment using a test loop; the experiences of Virginia and other states have been incorporated into the program. The test loop is measured 20 times at the beginning of each data collection season and then run once every three weeks during the season. To analyze the data accuracy of a particular test-loop run, the moving average of all the previous runs, including the initial 10-run results, is used as the reference value for that run. Each run is also compared with the previous one to check data repeatability.
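A minimal sketch of this moving-average check is shown below in Python. It is an illustration only: the use of section-average IRI values and the 5%/3% tolerances are assumptions, not MDSHA's published criteria.

```python
# Illustrative sketch of test-loop monitoring: compare a new run against the
# moving average of all previous runs (accuracy) and against the immediately
# preceding run (repeatability). Tolerances here are assumed values.

def check_test_loop_run(previous_runs, new_run,
                        accuracy_tol=0.05, repeatability_tol=0.03):
    """previous_runs: section-average IRI values from all earlier runs.
    new_run: section-average IRI for the run being evaluated."""
    reference = sum(previous_runs) / len(previous_runs)  # moving average
    accuracy_error = abs(new_run - reference) / reference
    repeatability_error = abs(new_run - previous_runs[-1]) / previous_runs[-1]
    return {
        "reference": round(reference, 2),
        "accuracy_ok": accuracy_error <= accuracy_tol,
        "repeatability_ok": repeatability_error <= repeatability_tol,
    }

# Example: ten prior runs near 95 in./mi; the new run reads 103 in./mi.
print(check_test_loop_run([94, 96, 95, 95, 97, 94, 96, 95, 95, 96], 103))
# {'reference': 95.3, 'accuracy_ok': False, 'repeatability_ok': False}
```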
Quality Acceptance

Quality acceptance is performed by a quality assurance auditor who is not the equipment operator. This process verifies that the data collection and quality control processes were conducted properly. The independent auditor checks the data management spreadsheet, verifies that the data are complete, verifies that the data have been saved and backed up, and re-checks a random sample of 10% of the collected data files using the same procedure used for quality control. The sample also includes any files with out-of-the-ordinary comments. If there is one discrepancy in a file, it is noted on the data management form. If more than two discrepancies are detected, 50% of the file is reviewed to determine whether there is a systematic error. If more than 10% of the quality acceptance samples have discrepancies, consideration is given to repeating the crack detection process (66). All data are backed up on the server daily and copied to tape once a week.
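This sampling-and-escalation logic can be expressed compactly. The sketch below is a hypothetical Python rendering: the record layout and the review_file callback are assumptions, while the 10% sample, the two-discrepancy escalation, and the 10% batch limit follow the text.

```python
import random

# Illustrative sketch of the acceptance sampling described above.

def acceptance_review(files, review_file, sample_rate=0.10):
    """files: list of per-file records (dicts). review_file(f) re-checks a
    file with the quality control procedure and returns a discrepancy count."""
    flagged = [f for f in files if f.get("unusual_comments")]
    pool = [f for f in files if not f.get("unusual_comments")]
    n_random = max(1, int(sample_rate * len(files)))
    sample = flagged + random.sample(pool, min(n_random, len(pool)))

    files_with_discrepancies = 0
    for f in sample:
        discrepancies = review_file(f)
        if discrepancies > 0:
            files_with_discrepancies += 1  # noted on the data management form
        if discrepancies > 2:
            # More than two discrepancies: escalate to a 50% review of the
            # file to look for a systematic error.
            f["needs_systematic_review"] = True

    # If more than 10% of the sampled files show discrepancies, consider
    # repeating the crack detection process for the batch.
    return files_with_discrepancies / len(sample) > 0.10
```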

Classification and Rating

After the data have been processed for crack detection, quality control, and quality acceptance, the next step is to classify and rate the cracks using an automated process. Cracks are classified as longitudinal or transverse; their locations in the pavement (outside wheel path, inside wheel path, center, left edge, or right edge) are determined; and their severity is rated as low, medium, or high using the AASHTO cracking protocol definitions and the crack width determined by the system.

Data Reduction

The cracking data are reduced to a condition rating of 0 to 100 and assigned a condition state of very good, good, fair, mediocre, or poor using a software program known as the MDSHA Automated Distress Analysis Tool. This tool also performs the final quality check through a suite of logic, range, and trend checks on the data and generates a progress report to document the pace of data collection and data processing. If the checks detect any problems, the file is flagged and a note is written to an error log. If the file passes all the checks, the data are converted to U.S. units and reformatted to match PMS specifications, and the pavement condition index (PCI) is calculated.

Time-Series Comparisons

MDSHA also monitors network-level time-series data with the help of a software tool developed and implemented in 2004. This quality acceptance tool checks the reasonableness of the data trend. The tool is routinely used to test each data submission and includes two steps: data trend monitoring and data quality investigation.

Data Trend Monitoring

MDSHA uses the percentage of the network in acceptable condition, called the acceptable rate, to monitor network-level pavement condition. A software program summarizes the acceptable rates for IRI, rutting, cracking, and friction individually by route, county, district, or statewide. A table is prepared for each pavement condition indicator along with the values obtained for the last five years. As an example, Table 6 shows 2 years of the 5-year comparison of IRI for a sample of routes. For each year, columns 1 through 5 indicate the percentage of pavements in each condition state, and the last column shows the percentage of road in acceptable condition (states 1 through 3).

TABLE 6
IRI TREND MONITORING BY ROUTE

                         2007                     |                  2006
Route     1      2      3      4     5    Acc.Rate|   1      2      3      4     5    Acc.Rate
IS68E   41.04  36.57  18.16  3.23  1      95.77   | 30.6   37.81  25.87  4.23  1.49   94.28
IS68W   33.83  41.79  20.65  3.48  0.25   96.27   | 24.38  43.78  27.11  2.99  1.74   95.27
MD35N    0     33.33  58.33  4.17  4.17   91.67   |  0     33.33  58.33  8.33  0      91.67
MD35S    0     33.33  58.33  4.17  4.17   91.67   |  0     41.67  50     4.17  4.17   91.67
MD36N   12.97  40.61  29.35  7.85  9.22   82.94   |  9.62  39.52  34.02  9.28  7.56   83.16
MD36S   12.24  42.52  28.91  7.48  8.84   83.67   | 11.6   41.98  32.08  7.85  6.48   85.67
MD47N    0     11.76  58.82 23.53  5.88   70.59   |  0     17.65  58.82 23.53  0      76.47
MD47S    0      5.88  76.47 17.65  0      82.35   |  0     11.76  70.59 17.65  0      82.35
MD49E    0      9.09  72.73  9.09  9.09   81.82   |  0     10     80    10     0      90
MD49W    0     10     90     0     0     100      |  0     10     90     0     0     100

Source: W. Xiong, personal communication, 2008.
Note: Columns 1–5 give the percentage of pavements in each condition state; the last column gives the percentage of road in acceptable condition. Routes where a pavement condition indicator differs by more than 2% from the previous year (here MD47N and MD49E) are flagged for investigation. Acc. = acceptable.
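The 2% trend flag illustrated in the table reduces to a few lines of code. The sketch below is a hypothetical Python illustration; the data layout is an assumption, while the acceptable-rate definition (states 1 through 3) and the 2% threshold follow the text.

```python
# Illustrative sketch of trend monitoring: compute the acceptable rate for
# each route and flag year-to-year changes greater than 2%.

def acceptable_rate(state_percentages):
    """state_percentages: percentages in condition states 1-5; states 1-3
    count as acceptable."""
    return sum(state_percentages[:3])

def flag_suspicious_routes(current, previous, threshold=2.0):
    """current/previous: dicts mapping route -> [pct in states 1..5]."""
    flagged = []
    for route, states in current.items():
        rate_now = acceptable_rate(states)
        rate_before = acceptable_rate(previous[route])
        if abs(rate_now - rate_before) > threshold:
            flagged.append((route, rate_before, rate_now))
    return flagged

# Example using two routes from Table 6 (2007 vs. 2006):
current = {"MD47N": [0, 11.76, 58.82, 23.53, 5.88],
           "MD49W": [0, 10, 90, 0, 0]}
previous = {"MD47N": [0, 17.65, 58.82, 23.53, 0],
            "MD49W": [0, 10, 90, 0, 0]}
print(flag_suspicious_routes(current, previous))
# MD47N is flagged (76.47 -> ~70.58); MD49W is unchanged.
```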

Similar summaries are prepared by county, by district, and for the entire state. If the acceptable rate for any of the pavement condition indicators differs by more than 2% from the previous year (routes MD47N and MD49E in the example), the record is highlighted for further investigation.

Data Quality Investigation

A data quality investigation is required for those sections in which the trend analysis indicates a potential data quality problem. This investigation aims to identify the reason for the suspicious rate change, determine whether there was a problem, and, if there was one, find a solution to fix it or prevent it from happening again. Historic treatment information, test and equipment event records, pavement images, and weather conditions during testing are collected and analyzed to determine which factors may have contributed to the suspicious condition variation. If test operation or equipment condition is identified as a concern, a notice is sent to the data collection staff requesting that the data be re-collected or suggesting modifications to the data collection procedures.

Data Collection Equipment Comparison

After replacing its automated data collection equipment, MDSHA conducted a data comparison study to evaluate the consistency of the data collected by the old and new devices. The two systems were used to collect data on a 250-mile loop, and the smoothness (IRI), cracking, and rutting values for a sample of one hundred 0.1-mile segments were compared. The comparison showed that the two systems produced similar IRI data but statistically different rutting and cracking measurements. Cracking data were collected from pavement images using a proprietary automated crack detection software tool. To resolve the cracking data consistency problem, MDSHA initiated a study to compare the results of the two systems with reference ratings determined visually from the same pictures collected with the data collection systems for the same 100 segments. This ground truth determination is critical for hardware and software calibration to improve data accuracy.

VIRGINIA

The Virginia Department of Transportation (VDOT) has used different pavement distress data collection methodologies over the past 15 years. These changes have resulted in a continuous improvement process through which the department has gained significant experience and developed sophisticated quality control and assurance procedures. VDOT collects data over 0.1-mile (161-m) management units.

Background

Larson et al. (96) present some interesting approaches for comparing time-history pavement condition data. Figure 20 shows the comparison of the overall PCI and IRI for 1996 and 1997 after removing all sections that received preservation treatments.

[Figure 20: VDOT yearly pavement condition comparisons (96): (a) Pavement Condition Index; (b) Smoothness (IRI, in./mi).]

The PCI plot pointed out a deficiency in the rating procedure used in 1997, which overestimated the PCI of pavements in poor condition. The IRI plot also suggested a problem, because the reported smoothness was lower in 1997 than in 1996; this was attributed to the switch from ultrasonic sensors to laser sensors. The network-level comparison prompted a review of the pavement data collection approach, which helped enhance data quality requirements in successive years and establish formal quality assurance/quality control processes. Most significantly, VDOT defined the following vision statement for data collection: "to collect pavement condition data with sufficient detail and accuracy to model deterioration and perform multiyear planning with the PMS.
Data variability for each data element must be smaller than the year-to-year change in that element." The study also prompted the agency to require calibration of the smoothness measuring equipment against a reference device and its verification against VDOT equipment, as well as pilot testing of a sub-network during the data collection contract inception phase. It also provided the data used to develop precision (±12%) and bias (±5%) criteria for the PCI.
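Precision and bias checks of this kind are straightforward to compute. The sketch below is a hypothetical Python illustration; defining precision as the coefficient of variation and bias as the mean relative error is an assumption, since the source does not give VDOT's exact formulas.

```python
import statistics

# Illustrative sketch: evaluate repeated PCI measurements on a control
# section against a reference value using precision (12%) and bias (5%)
# limits. The statistical definitions used here are assumptions.

def check_precision_and_bias(runs, reference,
                             precision_limit=0.12, bias_limit=0.05):
    mean = statistics.mean(runs)
    precision = statistics.stdev(runs) / mean  # coefficient of variation
    bias = (mean - reference) / reference      # mean relative error
    return {
        "precision_ok": precision <= precision_limit,
        "bias_ok": abs(bias) <= bias_limit,
    }

# Example: five runs on a control section whose reference PCI is 82.
print(check_precision_and_bias([80, 84, 81, 83, 79], reference=82))
# {'precision_ok': True, 'bias_ok': True}
```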

Current Data Collection Practice

In 2005, after a formal solicitation process, VDOT contracted with a service provider to collect, process, and deliver network-level pavement condition data (51). The specified equipment included digital pavement imaging with a resolution of at least 2 mm, laser measurements of longitudinal and transverse profiles, and automated or semi-automated distress quantification. Potential service providers were required to provide documentation of their quality control plans for all aspects of the project, ranging from equipment calibration through data delivery. The selected service provider had an established quality control plan but, for this project, added an outside third party to provide independent verification and validation of the data before delivery to VDOT. The service provider-supplied quality process flow diagram (Figure 21) outlined the data collection, data processing, quality control, independent validation and verification, and data acceptance processes.

[Figure 21: VDOT quality process flow diagram, after Shekharan et al. (51). The diagram traces start-up testing on control sites, production data collection (verification sites, image quality, field QC), data processing and internal QA, independent validation and verification (5% data review, data completeness, index limits), delivery to VDOT, and batch acceptance into the PMS, AMS, and video databases.]

Initial Calibration

The calibration of the service provider's longitudinal profile, transverse profile, and pavement distress measurement processes was done using 13 known-location control sections. The control sites varied in length, smoothness, and distress conditions. Data collected by VDOT were used as reference values. The sites were used to establish the service provider's precision and bias, which in turn were compared with those required in the RFP.

For calibration of the pavement distress measurements, the service provider used an automated crack detection rating process and semi-automated ratings of the additional distresses. The reference distress surveys were conducted by VDOT staff and the independent third party using the images collected by the equipment. This effort also served to train all distress raters, unify criteria, and make the necessary adjustments to the process. Comparisons were made based on the overall pavement condition index; the allowable difference was ±10 points.

Independent Verification and Validation

The verification and validation of the pavement distress data by an independent quality auditor was performed after the service provider had completed all in-house quality control reviews and believed the data were ready for submittal to VDOT. The acceptance criteria require that 95% of the data checked fall within ±10 index points of the third-party data. The third party evaluated a 10% random sample of the pavement deliverables.
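This acceptance rule lends itself to a short script. The sketch below is a hypothetical Python illustration: the data structures are assumptions, while the 10% sample, the ±10-point tolerance, and the 95% pass requirement follow the text.

```python
import random

# Illustrative sketch of the IV&V acceptance rule: sample 10% of delivered
# sections and require 95% of sampled PCI values to fall within +/-10 index
# points of the third-party (reference) ratings.

def iv_and_v_accept(delivered_pci, reference_pci,
                    sample_rate=0.10, tolerance=10, required_pass=0.95):
    """delivered_pci / reference_pci: dicts mapping section id -> PCI."""
    n_sample = max(1, int(sample_rate * len(delivered_pci)))
    sample_ids = random.sample(sorted(delivered_pci), n_sample)
    within = sum(1 for sid in sample_ids
                 if abs(delivered_pci[sid] - reference_pci[sid]) <= tolerance)
    pass_rate = within / n_sample
    return pass_rate >= required_pass, pass_rate
```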

This third-party review provided a high-level check of the deliverable tables to verify data completeness and data reasonableness, as well as a direct pavement distress comparison between automated/semi-automated ratings and manual ratings from experienced pavement raters.

The verification and validation also helped identify random and systematic errors. Because several systematic errors (e.g., erroneous classification of a particular distress type) were identified, the service provider had to adjust the process and reanalyze specific types of sections. For example, the process identified problems with the rating of patches in jointed reinforced concrete pavement sections and with the classification of cracks in asphalt pavements. These problems required adjustments to the data analysis criteria. The adjustments had a very significant effect on the classification of pavements into the various condition categories, the number of deficient pavements, and the subsequent estimated budget needs, as discussed previously in this synthesis.

In the latest fully audited survey available (2006), using a sample of 5% of each deliverable, the independent verification found that the percentage of distress data meeting the tolerance requirements varied between 93% and 98% across the deliverables. The independent quality auditor also compared the repeatability of each vehicle used by the service provider and the reproducibility between the two service provider devices and VDOT's profiler, and performed a high-level data review for reasonableness and completeness (98).

OKLAHOMA

As the Oklahoma DOT (ODOT) started implementing a PMS, quality pavement condition data were identified as a key component. The agency recognized the importance of checking the quality of data before they are used for important management decisions and has implemented detailed quality control and acceptance processes.

Pavement Data Collection

ODOT established a 4-year contract with a data collection service provider to collect network-level sensor, geometric, and distress data using automated data collection techniques. The data are processed using a combination of automated and semi-automated techniques. Data on roughly half of the network are collected in each year of the contract. The contract includes sensor data (IRI, rutting, faulting, and macrotexture), distress ratings (type and severity) based on visual analysis of pavement video, and geometric data (longitudinal slope, crossfall, horizontal curve radii, and GPS coordinates). Data are collected over the entire length of each section (i.e., sampling is not used) and reported in 0.01-mile (16-m) increments (52).

Quality Control

The quality control plan was developed by the data collection service provider and includes quality control checks at all stages of the data collection, processing, reduction, and delivery processes. Some of the quality control steps include control and verification site testing, inter-rater consistency testing, and numerous checks of data quality and completeness.
Quality Acceptance

ODOT initially instituted additional quality acceptance checks, which are applied to the data submitted by the contractor and include the following:

• Control site testing to help identify factors that could affect the accuracy and repeatability of sensor data measurements and to evaluate the quality of the collected video.
• Checks of distress ratings on batches of submitted data using a modified version of the service provider's distress rating software. Because these distress rating checks proved to be very time-consuming and labor-intensive, ODOT contracted the review of the distress ratings for the third year of collection to a consultant.
• Additional data quality assurance checks of every data element in the pavement condition database.

After 3 years of consistently instituting more checks, the agency developed an automated procedure to rapidly and efficiently check the data delivered by the service provider. Figure 22 presents the main screen of the Visual Basic quality acceptance tool developed within the Access database. The software tool automates the following four groups of checks (a sketch of such checks follows this list):

• Preliminary checks verify a variety of essential "general" information included in the condition database. This step checks the district number, the type of data entered in each field (e.g., integer versus character), general section identification data, GPS values, pavement type, events (bridges, etc.), geometric values, and missing data, among others.
• Sensor checks, covering all sensor-related data elements (i.e., those collected using lasers or sensors to determine properties of the pavement section), look for duplicate records in adjacent sections, the date, the number of sensors used for rutting, and out-of-range values for IRI, rutting, faulting, and macrotexture.
• Distress checks verify the specific distresses for a given surface type to confirm that they are in accordance with ODOT distress rating protocols and within the expected values, not only individually but also when various distresses are considered in combination with one another.
• Special checks cover more specific elements such as maximum asphalt concrete patch length, the number of railroad crossings and bridges, and nonmatching distress types (e.g., an asphalt concrete distress assigned to a concrete pavement).
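The promised sketch appears below in Python. The field names, value ranges, and distress catalogs are illustrative assumptions rather than ODOT's actual protocol values; the structure mirrors the sensor, distress, and special checks described above.

```python
# Illustrative sketch of automated acceptance checks on one reporting
# increment. Ranges and distress catalogs below are assumed for illustration.

IRI_RANGE = (20, 500)    # in./mi, assumed plausible bounds
RUT_RANGE = (0.0, 2.0)   # in., assumed plausible bounds
ASPHALT_DISTRESSES = {"fatigue_cracking", "transverse_cracking", "patching"}
CONCRETE_DISTRESSES = {"faulting", "joint_spalling", "corner_breaks"}

def check_record(rec, previous_rec=None):
    """rec: dict for one reporting increment. Returns a list of findings."""
    findings = []
    # Sensor checks: out-of-range values and duplicates of the prior record.
    if not IRI_RANGE[0] <= rec["iri"] <= IRI_RANGE[1]:
        findings.append("IRI out of range")
    if not RUT_RANGE[0] <= rec["rut_depth"] <= RUT_RANGE[1]:
        findings.append("rut depth out of range")
    if previous_rec and (rec["iri"] == previous_rec["iri"]
                         and rec["rut_depth"] == previous_rec["rut_depth"]):
        findings.append("possible duplicate of adjacent record")
    # Distress/special checks: distress types must match the surface type.
    allowed = (ASPHALT_DISTRESSES if rec["surface"] == "asphalt"
               else CONCRETE_DISTRESSES)
    for d in rec["distresses"]:
        if d not in allowed:
            findings.append(f"distress '{d}' does not match surface type")
    return findings

rec = {"iri": 95, "rut_depth": 0.3, "surface": "concrete",
       "distresses": ["fatigue_cracking"]}
print(check_record(rec))  # flags the asphalt distress on a concrete section
```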

ODOT has found that these checks provide a wealth of information that has been helpful in evaluating the data provided by the data collection service provider. The software also provides a useful interface for accessing and changing the data in the database.

[Figure 22: Main menu of the ODOT pavement data quality assurance software (52).]

Use of Geographic Information System in Quality Acceptance Checks

ODOT has recently begun using GIS to complement the agency's quality acceptance procedures. The visualization and spatial analysis tools available in GIS can be very useful for detecting missing sections, inconsistencies in the location of some sections, and unexpected changes in pavement condition.

BRITISH COLUMBIA

The British Columbia Ministry of Transportation's (BCMoT) Pavement Surface Condition Rating Manual (99), originally released in 1994 and updated in 2002, includes a detailed quality assurance section. The Manual's rating methodology was designed to be applicable to both automated and manual surveys so that it can be used for network- and project-level analysis. The surveys and data post-processing are guided by quality management procedures to ensure that the data are collected accurately and repeatably from year to year.

Data Collection Practices

At the network level, the survey includes surface distress, rutting, and smoothness (in addition to crack sealing, patching rating, and right-of-way pictures), which the Ministry believes provides sufficiently accurate and consistent information for network-level analyses. The surveys are conducted by service providers using automated road testing vehicles.

The project-level surveys consist of manual surface distress surveys conducted during the detailed evaluations carried out for candidate rehabilitation projects. In addition to distress surveys, this evaluation can include geotechnical investigations, strength testing, coring, and laboratory testing. The distresses are evaluated every 20 m and plotted on a map.

The quality acceptance procedures consist of three levels of testing: (1) initial tests completed by the service provider before the surveys, (2) blind site monitoring during the production surveys, and (3) final assessment of the submitted data files.

Initial Quality Acceptance Tests

The initial tests verify the service provider's application of the BCMoT rating system and the operation of the smoothness and transverse profile instrumentation. The service provider is required to pass all checks before starting production data collection.

The agency selects four 500-m-long test sites that exhibit a variety of distress types, levels of pavement deterioration, surface types, and operating speeds. The sites are surveyed manually at 50-m intervals to determine the reference values (Figure 23). The rut depth is determined by taking manual transverse profile measurements in each wheel path at 10-m intervals using a 2-m straightedge. The longitudinal profile and IRI in each wheel path are obtained using a Digital Profilite 300, which is a Class 1 profiler.

[Figure 23: Reference value determination in British Columbia, Canada (99). Panels show the Class 1 roughness survey, the manual distress survey, and the rut depth survey.]

Following the reference value determination, the service provider and BCMoT personnel conduct an on-site review in which they compare the semi-automated survey results with the results of the manual survey. They walk over the site comparing the results to resolve ambiguities and, if necessary, adjust the rating procedures and/or revise the manual ratings.

The service provider's ability to rate pavement distress accurately and repeatably is assessed by completing a series of five runs over each site, generating ratings at 50-m intervals, and comparing the results of each run with the manual survey. The distress comparisons are based on (1) a combined Pavement Distress Index (PDI) to assess accuracy and repeatability, and (2) severity and density rating totals for each distress type present over the entire site to highlight possible discrepancies. The accuracy criterion is ±1 PDI point (on a 10-point scale) relative to the manual survey, and the repeatability criterion is a standard deviation of the PDI values over the five runs of no more than 1 point. Landers et al. (86) reported that the range of PDI errors was 0.0 to 0.6 between 1999 and 2001, and that the standard deviation (from five runs) was 0.2 for all the initial test sites.

The service provider's smoothness and rut depth measurements are also compared for the 50-m segments in each wheel path and for the full 500-m test site. The IRI criteria specify that the measurements must be within 10% of the Class 1 profile survey for each wheel path (accuracy) using 100- and 500-m integration intervals, with a maximum repeatability of 0.1 m/km standard deviation over five runs. The rut depth measurements must have an accuracy of ±3 mm relative to the manual survey and a repeatability of ±3 mm standard deviation over five runs.

Production Survey Quality Acceptance

During production surveys, quality acceptance is primarily done using blind sites situated along various highways in each region. These sites are manually surveyed in advance using the same procedure described for the initial checks, and their locations are unknown to the service provider. For larger surveys, the initial test sites are also resurveyed periodically. Blind sites are generally scheduled once every three days during the surveys. Each day during the production surveys, the service provider is required to contact the BCMoT representative and report progress. At this time, the service provider is informed that they passed over a site on the previous day and is given the site location, whereupon the provider immediately submits by fax the surface distress ratings, smoothness, and rut depth measurements (at 50-m intervals) for that section. Because of possible referencing differences, the service provider is required to submit 1.0 km of data, with 250 m on either side of the blind site. The acceptance criteria are the same as for the initial tests. The service provider is authorized to continue with the production surveys upon satisfactorily completing the blind site quality acceptance test (99). These criteria are being reviewed, and consideration is being given to using Cohen's weighted kappa statistic to compare individual distress types and give more weight to those that have the most effect on PMS decisions (86).
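The five-run acceptance criteria translate directly into a small check. The sketch below is a hypothetical Python illustration of the PDI, IRI, and rut depth criteria; the data layout is assumed, and reading the repeatability limits as caps on the five-run standard deviation is an interpretation of the text.

```python
import statistics

# Illustrative sketch of the BCMoT-style five-run site evaluation.

def evaluate_site(pdi_runs, pdi_ref, iri_runs, iri_ref, rut_runs, rut_ref):
    """Each *_runs argument is a list of five site-level values; *_ref values
    come from the manual/Class 1 reference surveys."""
    results = {
        # PDI: within +/-1 point of the manual survey; five-run standard
        # deviation no greater than 1 point.
        "pdi_accuracy": all(abs(v - pdi_ref) <= 1.0 for v in pdi_runs),
        "pdi_repeatability": statistics.stdev(pdi_runs) <= 1.0,
        # IRI: within 10% of the Class 1 profile; standard deviation no
        # greater than 0.1 m/km.
        "iri_accuracy": all(abs(v - iri_ref) / iri_ref <= 0.10
                            for v in iri_runs),
        "iri_repeatability": statistics.stdev(iri_runs) <= 0.1,
        # Rut depth: within +/-3 mm of the manual survey; standard deviation
        # no greater than 3 mm.
        "rut_accuracy": all(abs(v - rut_ref) <= 3.0 for v in rut_runs),
        "rut_repeatability": statistics.stdev(rut_runs) <= 3.0,
    }
    results["site_pass"] = all(results.values())
    return results

# Example: five runs on one 500-m test site (all criteria pass).
print(evaluate_site(pdi_runs=[6.1, 6.3, 6.0, 6.2, 6.4], pdi_ref=6.0,
                    iri_runs=[1.52, 1.48, 1.55, 1.50, 1.49], iri_ref=1.50,
                    rut_runs=[8.0, 9.0, 7.5, 8.5, 8.0], rut_ref=8.0))
```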
Submitted Data Quality Acceptance

The last step in the quality acceptance process is the assessment of the submitted data, which is conducted using a three-step process involving both manual and system checks. The first step is a thorough manual review of the submitted data files verifying that data exist for all road segments, that the data file structure is correct, that segment locations and definitions are correct, and that the data are within acceptable ranges. The initial quality acceptance results are summarized and provided to the service provider for correction. The second step compares the current year's survey data with previously collected data to determine whether there are any significant variations from cycle to cycle. The third and final step uploads the distress, smoothness, and rut depth data to the PMS, which conducts internal standardized and user-defined verification tests. The PMS generates a log report listing all discrepancies, which can then be reviewed and confirmed, or the input data corrected and reloaded as required.
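The cycle-to-cycle comparison in the second step can be sketched as follows. This is a hypothetical Python illustration: the segment keys, the IRI thresholds, and the rule that condition should not improve without a recorded treatment are all assumptions.

```python
# Illustrative sketch of a cycle-to-cycle comparison: flag segments whose
# condition changed implausibly since the previous survey cycle.

def compare_cycles(current, previous, treated, max_iri_increase=0.5):
    """current/previous: dicts mapping segment id -> IRI (m/km).
    treated: set of segment ids rehabilitated between the two cycles."""
    findings = []
    for seg, iri_now in current.items():
        if seg not in previous:
            findings.append((seg, "no prior record"))
            continue
        change = iri_now - previous[seg]
        if change > max_iri_increase:
            findings.append((seg, f"IRI rose {change:.2f} m/km in one cycle"))
        elif change < -0.2 and seg not in treated:
            findings.append((seg, "IRI improved with no recorded treatment"))
    return findings

current = {"A1": 2.9, "A2": 1.1, "A3": 2.0}
previous = {"A1": 2.2, "A2": 1.9}
print(compare_cycles(current, previous, treated={"A2"}))
# [('A1', 'IRI rose 0.70 m/km in one cycle'), ('A3', 'no prior record')]
```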

SUMMARY

This chapter documented the data management practices of four DOTs. The review included one agency that conducts most of its data collection in-house and three agencies that contract most of their network-level pavement condition data collection to data collection service providers.

The first case study reviewed the MDSHA experience using an in-house automated system to measure smoothness, rutting, cracking, and other data. Its quality control plan includes checks to verify that all fields are processed, reviews of section-level data in search of abnormalities, and checks to verify that the data have been saved. The quality control program also monitors data repeatability and the accuracy of the test equipment using control sections. Quality acceptance is conducted by a quality assurance auditor, who is not the operator. The auditor checks the data management spreadsheet, verifies that the data are complete and have been saved and backed up, and re-checks a random sample of 10% of the collected data. Time-series comparisons of the percentage of the network in acceptable condition by route, county, district, and for the entire state are used to flag potential data quality problems.

The second case study covers VDOT's most recent experience using a data collection service provider. It highlights two interesting approaches for comparing time-history pavement condition data and presents an example of a service provider-supplied quality control process that includes independent validation and verification. Among other criteria, the acceptance plan requires that 95% of the data checked fall within ±10 index points of the data collected by a third-party validation and verification rater. The third party evaluates a 10% random sample of the pavement deliverables.

The third case study summarizes the ODOT experience using a data collection service provider to collect network-level sensor, geometric, and distress data by automated data collection techniques. The quality control plan developed by the service provider includes quality control checks at all stages of the data collection, processing, reduction, and delivery processes. The quality acceptance procedure includes testing of known control and verification sections, checks of distress ratings on batches of submitted data using a modified version of the service provider's distress rating software, and automated data quality assurance checks using specially developed software. ODOT has also recently begun using GIS to complement the agency's quality acceptance procedures.

The final example reviews the experience of the BCMoT. The network-level surveys are conducted by contracted service providers that collect surface distress, rutting, and smoothness data using automated equipment. The quality management procedures consist of three levels of testing: (1) initial tests completed by the service provider before the surveys, (2) blind site monitoring during the production surveys, and (3) final assessment of the submitted data files. The initial quality tests compare the results of five runs of the service provider's equipment with reference measurements on four 500-m test sites. These sites are also resurveyed periodically for quality control. Production quality acceptance is primarily done using blind sites situated along various highways in each region, which are manually surveyed in advance using the same procedure described for the initial checks.
The final step in the quality acceptance process is the assessment of the submitted data using manual reviews, automated software time-series comparisons, and standardized and user-defined verification tests after the data have been entered into the pavement management database.
