
Automated Pavement Condition Surveys (2019)

Chapter: Chapter 5 - Case Examples

Suggested Citation:"Chapter 5 - Case Examples." National Academies of Sciences, Engineering, and Medicine. 2019. Automated Pavement Condition Surveys. Washington, DC: The National Academies Press. doi: 10.17226/25513.


In order to showcase agency practices for conducting automated pavement condition surveys, three case examples were developed from agency-provided DQMPs and documents. The three case examples represent two SHAs and one Canadian provincial government; two of these agencies conduct automated condition surveys through vendor contracts, and one conducts data collection and analysis using agency equipment and staff. The three case examples also illustrate one agency that uses fully automated analysis, one agency that uses fully automated analysis with two manually determined distresses, and one agency that uses both semi- and fully automated analysis methods. The three case examples include the British Columbia MoTI, the Pennsylvania DOT, and the North Dakota DOT.

Case Example 1: British Columbia Ministry of Transportation and Infrastructure

Introduction

The British Columbia MoTI contracts with a vendor for a fully automated pavement condition survey of asphalt pavements. The survey is conducted on approximately 5,590 mi (9,000 km) each year. Condition and distress types collected and analyzed are summarized in Table 41. The MoTI quality management process consists of a three-tier approach that includes initial QA testing, monitoring during data collection, and assessment of the submitted data.

Initial Quality Assurance Testing

Initial QA testing is conducted to ensure that the contractor's equipment for determining IRI and rut depth is operating properly and that it identifies surface distresses in accordance with the MoTI condition rating manual. Prior to production testing, the MoTI assesses IRI, rut depth, and surface distress on four 1,640-ft (500-m) test sites. At each test site, MoTI pavement raters conduct a walking survey to assess existing surface distress (type, severity, and extent), summarizing results in 164-ft (50-m) segments.
A walking profiler (Class 1) is used to measure the longitudinal profile of each wheel path for determining IRI, and a straightedge, applied at 33-ft (10-m) intervals, is used to determine rut depth. The vendor is required to complete five runs to assess both the accuracy and the repeatability of its measurements. IRI is determined by averaging the data from the outside wheel path, and rut depth is based on averaging the results for both wheel paths. Distress type, severity, and extent are used to determine the pavement distress index (PDI) for comparison with the MoTI results. When compared with the MoTI results, the vendor's data must meet the criteria summarized in Table 42.

Monitoring During Data Collection

During data collection, the vendor is required to send weekly progress reports to the MoTI. The progress report includes the total kilometers collected for each data type and the percentage complete to date. In addition, the vendor is required to retest the initial QA sites for IRI and rut depth after 50% and 100% of the network has been surveyed. For monitoring surface distress (PDI only), vendor PDI results from blind site testing (sites manually rated by the MoTI, with locations unknown to the vendor) are compared with the MoTI ratings and must meet the criteria shown in Table 42. In addition, MoTI test sites 3.1 mi (5 km) in length, selected based on current condition and prior-cycle distress condition, are compared with the vendor's results and must meet the criteria shown in Table 42.

For the 2017 condition survey, the British Columbia MoTI hired an independent consultant to perform QA services on the initial QA test, the blind site testing, and the retest of the initial QA sites for IRI and rut depth (at 50% and 100% completion). To reflect the MoTI-conducted testing, the independent consultant determined IRI using a MoTI-supplied Class 1 walking profiler, measured rut depth using a straightedge, and performed manual distress ratings in accordance with the agency's condition rating manual on the initial QA sites, the blind sites, and the retested initial QA sites. Although all sites met MoTI requirements, the independent consultant noted a number of issues and observations (OPUS Consultants Limited 2017):

• The alligator cracking algorithm has a minimum area of 10.8 ft² (1 m²), so many of the smaller areas identified by the manual survey were missed by the automated crack detection.
Table 41. British Columbia MoTI condition and distress data collection (British Columbia MoTI 2016).

Category | Condition or Distress Type | Protocol
Cracking | Alligator cracking, longitudinal joint cracking, longitudinal wheel path cracking, meandering longitudinal cracking, pavement edge cracking, and transverse cracking | Agency rating manual
Defects | Bleeding and potholes | Agency rating manual
Rutting | Rut depth | AASHTO PP 70 (2017a)
Roughness | IRI | ASTM E950 (2018a), ASTM E1926 (2015a), and AASHTO M 328 (2018a)
Images | Pavement surface and ROW | Data collection contract

Table 42. British Columbia MoTI initial QA criteria (adapted from British Columbia MoTI 2016).

Category | Criteria | Acceptance Criteria Value
Surface distress | Measure | PDI value (range 0 to 10)
Surface distress | Calculation | 1,640-ft (500-m) average based on 164-ft (50-m) values
Surface distress | Accuracy | ±1 PDI value of manual survey
Surface distress | Repeatability | PDI ±1 std dev (5 runs)
Roughness | Measure | IRI
Roughness | Calculation | 1,640-ft (500-m) average based on 164-ft (50-m) values
Roughness | Accuracy | Within 10% of Class 1 profile survey
Roughness | Repeatability | ±6.3 in./mi (0.1 mm/m) std dev (5 runs)
Rutting | Measure | Rut depth (mm)
Rutting | Calculation | 1,640-ft (500-m) average based on 164-ft (50-m) values
Rutting | Accuracy | ±0.12 in. (3 mm) of manual survey
Rutting | Repeatability | Std dev ±0.12 in. (3 mm) (5 runs)
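To make the Table 42 criteria concrete, here is a minimal sketch of how a site's five vendor runs might be checked against the agency reference values. The function names and data layout are illustrative assumptions, not the MoTI's actual QA software:

```python
from statistics import mean, stdev

def pdi_site_ok(vendor_runs, manual_pdi):
    """Table 42 surface distress criteria: the site-average PDI must be
    within +/-1 of the manual survey, and the run-to-run standard
    deviation must be within 1 PDI (five runs)."""
    return (abs(mean(vendor_runs) - manual_pdi) <= 1.0
            and stdev(vendor_runs) <= 1.0)

def iri_site_ok(vendor_runs, reference_iri):
    """Table 42 roughness criteria: the site-average IRI must be within
    10% of the Class 1 profile survey, and the run-to-run standard
    deviation must be within 6.3 in/mi (five runs)."""
    return (abs(mean(vendor_runs) - reference_iri) <= 0.10 * reference_iri
            and stdev(vendor_runs) <= 6.3)

def rut_site_ok(vendor_runs_mm, manual_rut_mm):
    """Table 42 rutting criteria: site-average rut depth within 3 mm of
    the manual (straightedge) survey, standard deviation within 3 mm."""
    return (abs(mean(vendor_runs_mm) - manual_rut_mm) <= 3.0
            and stdev(vendor_runs_mm) <= 3.0)
```

Each function pairs an accuracy test (average against the agency reference) with a repeatability test (spread across the five runs), mirroring the two-part structure of each Table 42 row.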

Although the algorithm can be adjusted to allow for smaller areas, based on a MoTI validation site it was recommended that the minimum area not be adjusted, to minimize the likelihood of overrating.
• During the manual survey, several cracks were classified as transverse cracks; however, because the orientation of each crack was more than 10 degrees from transverse, the automated crack detection did not classify these cracks as transverse cracks. In addition, several low-severity transverse cracks were on the borderline of detection by the automated survey but were noted in the manual survey. At this time, it was recommended that the MoTI not make any adjustments to the transverse cracking algorithm.
• A discrepancy was noted in the reported crack width between the manual and automated surveys. On further review, it was found that the manual survey did not use as accurate a measure of crack width as possible. It was noted that in all cases, the severities of the raw crack widths reported by the crack detection algorithm were correctly calculated.
• Although the 2017 condition survey was only the second year of use of the fully automated crack detection technology (i.e., 3D), the results of the independent review showed the crack ratings to be sufficiently accurate.
• Based on the knowledge gained, it was recommended that the MoTI discontinue the requirement for blind site testing. Testing of the initial QA sites is sufficient as long as the sites contain well-represented distress types. In addition, it was recommended that the number of initial QA sites be increased from four to five.

Acceptance

The third step is to assess the final quality of the submitted data, where both manual and system checks are performed. Items checked include data completeness, correct file structure, and screening of the data for null or negative values and values outside established tolerances.
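The screening step described above can be sketched as a simple pass over the submitted records. The field names and tolerance bounds here are hypothetical placeholders, not the MoTI's actual schema or limits:

```python
# Sketch of the acceptance screening step: flag null, negative, and
# out-of-tolerance values in submitted condition records.
# Field names and tolerance ranges are illustrative assumptions.

TOLERANCES = {
    "iri_in_per_mi": (0.0, 500.0),   # plausible IRI range (assumed)
    "rut_depth_in": (0.0, 2.0),      # plausible rut depth range (assumed)
}

def screen_record(record):
    """Return a list of issues found in one data record."""
    issues = []
    for field, (low, high) in TOLERANCES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"{field}: null value")
        elif value < 0:
            issues.append(f"{field}: negative value ({value})")
        elif not (low <= value <= high):
            issues.append(f"{field}: outside tolerance ({value})")
    return issues

def screen_batch(records):
    """Map record index -> issues, for records that fail any check."""
    return {i: iss for i, rec in enumerate(records)
            if (iss := screen_record(rec))}
```

In practice, the resulting issue list would feed the discrepancy report that is returned to the vendor for correction.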
IRI, rut depth, and distress data are uploaded into the MoTI's pavement management system, where additional system verification testing is conducted. After pavement management system verification testing, a list of discrepancies is generated, reviewed, and provided to the vendor for correction if needed.

Case Example 2: Pennsylvania Department of Transportation

Introduction

Pennsylvania DOT contracts with a vendor to conduct automated pavement condition surveys. Pavement condition data are collected on over 28,000 mi (45,061 km) of state-owned roadways, in addition to about 2,000 mi (3,219 km) of local federal-aid roadways, each year. Data collection is conducted annually on 100% of the length of Interstate and NHS roads (approximately 4,900 mi [7,886 km]) and every 2 years on the remaining network.

Deliverables

Pennsylvania DOT data deliverables are summarized in Table 43.

Quality Control

Pennsylvania DOT has used a three-tier QC process since 2008. The three tiers include distress calibration and roughness verification sites, blind verification sites, and random 2.5% data selection.

Table 43. Summary of deliverables (adapted from Pennsylvania DOT 2018).

Deliverable | Protocols | Resolution | Accuracy | Repeatability (3 runs)
Roughness (IRI) | AASHTO M 328 (2018a), AASHTO R 43 (2017c), AASHTO R 56 (2018c), AASHTO R 57 (2018b) | N/A | ±10% of walking profiler | ±5%
Rut depth | AASHTO R 48 (2013a), AASHTO PP 70 (2017a) | 0.01 in. (0.25 mm) | ±10% of agency survey | ±5%

JPCP:
Faulting | AASHTO R 36 (2013a), AASHTO R 57 (2018b) | 0.01 in. (0.25 mm) | ±10% of agency survey | ±5%
Broken slabs | AASHTO PP 68 (2017e), AASHTO R 55 (2013b), AASHTO PP 67 (2017f) | 1% | ±10% of agency survey | ±5%
Transverse joint spalling | Agency Manual¹ | N/A | ±10% of agency survey | ±5%
Transverse cracking | AASHTO PP 68 (2017e), AASHTO R 55 (2013b), AASHTO PP 67 (2017f) | 1% | ±10% of agency survey | ±5%
Bituminous patching | Agency Manual¹ | N/A | ±10% of agency survey | ±5%

Bituminous Pavements:
Fatigue cracking | AASHTO PP 68 (2017e), AASHTO R 55 (2013b), AASHTO PP 67 (2017f) | 1% | ±10% of agency survey | ±5%
Transverse cracking | Agency Manual¹ | N/A | ±10% of agency survey | ±5%
Miscellaneous cracking | Agency Manual¹ | N/A | ±10% of agency survey | ±5%
Edge deterioration | Agency Manual¹ | N/A | ±10% of agency survey | ±5%
Left-edge joint | Agency Manual¹ | N/A | ±10% of agency survey | ±5%
Patching | Agency Manual¹ | N/A | ±10% of agency survey | ±5%

¹ Automated Pavement Condition Survey Field Manual (Pennsylvania DOT 2018b).

Distress Calibration and Roughness Verification Sites

Distress calibration sites include six sites that are manually rated by three DOT raters performing at least two ratings each (a total of six distress ratings per calibration site) to establish the distress reference values. The vendor conducts distress data collection, three to five runs, before network data collection and monthly during the annual survey for correlation testing. If the collected data fail the percentage and absolute value thresholds summarized in Table 44, the test vehicle fails certification.

The DOT maintains two roughness verification sites: one bituminous site with IRI values ranging from 40 to 55 in./mi (0.63 to 0.87 m/km) and one jointed concrete pavement site with IRI values ranging from 70 to 85 in./mi (1.10 to 1.34 m/km). Each site is 528 ft (161 m) long with 1,056-ft (322-m) lead-in and lead-out sections. The DOT uses a walking profiler to assess IRI annually at both verification sites. Testing requirements for each site include the following:

• For the concrete site, eight passes of the walking profiler are performed in each wheel path. If the standard deviation of the five most repeatable runs is within 3% of the mean, the DOT makes five passes with the DOT inertial profilers. If the IRI standard deviation of the five inertial profiler runs is within 3% of the mean and within 3% of the IRI mean from the walking profiler, then the average IRI value from the walking profiler serves as the reference value.
• For the asphalt site, four DOT inertial profilers using five sensor pairs make 10 passes each. The values from each pass are averaged for each wheel path and for each sensor pair. If the standard deviation of the roughness values of the sensor pairs is within 3% of the mean roughness value, then the mean of the sensor pairs for each wheel path is used as the reference value.
• Vendor IRI results must be within 3% of the DOT reference values and within 7% run-to-run repeatability.

Blind Verification Sites

Blind verification sites include 125 DOT-selected pavement segments. Pavement distress is evaluated by the DOT using pavement images and the same evaluation procedure used for the distress calibration sites. The vendor's distress results must be within 10% of the average DOT distress ratings and within 50 in./mi (0.79 m/km) of the DOT-determined IRI values.

Random 2.5% Data Selection

The third and final tier of the QC process is the selection of a random 2.5% sample of each batch for data acceptance. The DOT uses the collected images to rate each segment and compares the results to the vendor's results.

Table 44. Distress calibration requirements (adapted from Pennsylvania DOT 2018a).

Data Type | Percent | Absolute Value | Unit
IRI | 5 | 1.000 | inch per mile
Rutting | 5 | 0.050 | inch
Macrotexture MPD | 10 | 0.080 | inch
Macrotexture root mean square | 10 | 0.050 | inch
Cross fall | 5 | 0.500 | percent slope
Roll | 5 | 0.500 | percent slope
Grade | 5 | 1.000 | percent slope
Pitch | 5 | 1.000 | percent slope
Fault height | 5 | 0.039 | inch
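As an illustration, the Table 44 thresholds can be applied in code. The sketch below assumes, as one reading of the certification text above, that a measurement fails calibration only when it exceeds both the percentage threshold and the absolute-value threshold for its data type; the names and data layout are illustrative:

```python
# Hypothetical sketch of the Table 44 distress calibration check.
# Thresholds: (percent, absolute value) per data type, from Table 44
# (a subset of rows shown; units follow the table).
THRESHOLDS = {
    "iri": (5.0, 1.000),           # inch per mile
    "rutting": (5.0, 0.050),       # inch
    "fault_height": (5.0, 0.039),  # inch
}

def passes_calibration(data_type, measured, reference):
    """A measurement passes if it is within the percentage threshold
    OR the absolute-value threshold (assumed interpretation of
    'fail the percentage and absolute value thresholds')."""
    pct, abs_limit = THRESHOLDS[data_type]
    diff = abs(measured - reference)
    within_pct = diff <= (pct / 100.0) * abs(reference)
    within_abs = diff <= abs_limit
    return within_pct or within_abs

def vehicle_certified(results):
    """results: list of (data_type, measured, reference) tuples.
    The test vehicle fails certification if any measurement fails."""
    return all(passes_calibration(d, m, r) for d, m, r in results)
```

The combined percent-or-absolute form keeps small-magnitude quantities (such as fault height) from failing on a percentage that corresponds to a physically negligible difference.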

Table 45 summarizes the acceptance criteria and corrective actions for the random 2.5% data selection sites.

Table 45. Acceptance criteria for 2.5 percent data selection sites (Pennsylvania DOT 2018a).

Reported Value | Initial Criteria | PWL | Action if Criteria Not Met
IRI | ±25% | 95 | Reject deliverable
Individual distress severity combination | ±30% | 90 | Feedback on potential bias or drift in ratings; retraining on definitions
Total fatigue cracking | ±20% | 90 | Reject deliverable
Total nonfatigue cracking | ±20% | 90 | Reject deliverable
Total joint spalling | ±20% | 90 | Reject deliverable
Transverse cracking, JPCP | ±20% | 90 | Reject deliverable
Location, segment and offset | Correct segment | 100 | Return deliverable for correction
Location, section begin | ±40 ft (12 m) | 95 | Return deliverable for correction and system check
Panoramic images | Legible signs | 80 | Report problem; reject subsequent deliverable

Data Upload

The final activity is to upload the accepted data into the DOT's roadway data warehouse (Roadway Management System). In the Roadway Management System, the data are subjected to several error-processing steps that check, for example, survey date, invalid LRS keys, pavement surface type, segment length, and condition data length. Data errors are flagged and returned to the vendor for review, correction, and resubmission.

Case Example 3: North Dakota Department of Transportation

Introduction

The third case example covers agency data collection procedures, QC, and acceptance for conducting in-house automated pavement condition surveys. It is based on the procedures and practices of the North Dakota DOT (North Dakota DOT 2018). The DOT has been conducting semiautomated pavement condition surveys since 1991 using agency-owned collection equipment. In 2013, the DOT DCV was updated to include a 3D system for fully automated crack detection and analysis. The DOT collects data on approximately 8,500 mi (13,679 km) annually. Data are collected on 100% of the network length and are reported every 0.1 mi (0.16 km) for HPMS and on preset segment lengths for the DOT's pavement management system.

Deliverables

The deliverables for the DOT's pavement condition data collection are summarized in Table 46.

Table 46. North Dakota DOT pavement condition survey deliverables.

Deliverable | Protocols | Resolution | Accuracy¹ | Repeatability²
Longitudinal profile | AASHTO M 328 (2010a), AASHTO PP 70 (2017a), AASHTO R 56 (2018c), AASHTO R 57 (2018b), ASTM E950 (2018a) | 0.002 in. (0.05 mm) | ±5% | ±5%
IRI (left, right, and average) | AASHTO M 328 (2018a), AASHTO R 43 (2017c), AASHTO R 57 (2018b), ASTM E1926 (2015a) | 1 in./mi (0.016 m/km) | ±5% | ±5%
Rut depth (average and maximum) | AASHTO PP 69 (2010c, 2017d), AASHTO PP 70 (2017a), AASHTO R 48 (2013a) | 0.01 in. (0.25 mm) | ±0.019 in. (0.48 mm) | 0.06 in. (1.5 mm)
Faulting (average) | AASHTO R 36 (2013c) | 0.01 in. (0.25 mm) | 0.06 in. (1.5 mm) | 0.06 in. (1.5 mm)
Distress identification and rating | AASHTO PP 67 (2010d, 2014), AASHTO PP 68 (2010e, 2017e), AASHTO R 55 (2013b), ASTM E1656 (2016), LTPP Distress Identification Manual, Agency Distress Scoring Guide | Varies | ±20% | N/A
GPS (latitude and longitude) | N/A | Submeter (static) | Submeter (static) | N/A
Perspective and ROW images | N/A | 2,500 x 2,000 | Signs legible, proper exposure and color balance | N/A
Pavement images | N/A | N/A | 0.08-in. (2-mm) cracking visible and detected | N/A

¹ Compared with agency-established reference values.
² Three repeat runs.

Quality Control

The DCV calibration process starts with IRI calibration using the bounce test and the block test. The block test is used to calibrate the wheel path lasers such that the measured block thickness is within ±0.005 in. (0.13 mm). The bounce test is used to verify that all accelerometers are functioning properly relative to the wheel path lasers. Afterward, the vehicle is taken to the MnROAD facility for IRI certification. To pass certification, at least five IRI runs must be performed on both asphalt and concrete pavements and must meet the following criteria:

• Average IRI values must be within 5% of MnROAD's IRI values.
• The coefficient of variation must be within 3%.
• The average cross-correlation of the runs must be at least 90%.
• Each individual run's cross-correlation must be at least 85%.
• The measured length, as determined by the DMI, must be within 0.2% of the actual test section length.

Each week, the DCV traverses a verification site that is 1,000 ft (305 m) long. The verification sites are selected to ensure that no maintenance or rehabilitation is conducted on them during the collection cycle. Initial IRI and rut depth values are obtained and serve as the baseline for all subsequent weekly verification testing. QC activities are summarized in Table 47.

Acceptance

The criteria for accepting the collected data involve several checks. All pavement segments are visually reviewed at a workstation to verify the start and end locations of each record. Image quality is checked while patching is manually rated. Manual distress surveys are conducted on 2% to 3% of all pavement types (1-mi- [1.6-km-] long sections) and compared with the automated results; differences greater than 20% are considered unacceptable. In addition, sections with distress score differences of 9 or greater are reviewed in more detail. Random samples are selected for additional review, with half of the segments repeated from previous years and the other half newly selected. During the review, segments with an overall distress score 12 points less than or 6 points greater than the previous year's overall distress score are reviewed in more detail. All seal coats and thin-lift overlays are manually rated.

If the DCV is upgraded or replaced, 5% of all pavement types will be subjected to review for the first 2 years. These data will also be compared with the most recent condition data from before the replacement or upgrade to determine whether any changes to the analysis algorithms are needed. Table 48 summarizes the DOT acceptance criteria.

Reporting

The data collection team is responsible for filing daily journals documenting equipment calibration and maintenance and the results of all verification, control, and blind site testing. In addition, all encountered problems not previously documented, and the corrective actions taken, are reported by the data collection team. The pavement management engineer is responsible for discussing the annual collection with the data collection team and evaluating ways to make the collection more efficient.
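The MnROAD certification criteria listed under Quality Control above lend themselves to a straightforward check. The sketch below assumes per-run IRI values, per-run cross-correlations against the reference profile, and a DMI-measured length are already available; it is an illustration, not the DOT's actual certification software:

```python
from statistics import mean, stdev

def iri_certification_ok(run_iri, run_xcorr, measured_len, actual_len,
                         reference_iri):
    """Check the MnROAD IRI certification criteria for one surface:
    at least 5 runs, average IRI within 5% of the reference,
    coefficient of variation within 3%, average cross-correlation
    at least 90%, each run's cross-correlation at least 85%, and
    DMI length within 0.2% of the actual section length."""
    if len(run_iri) < 5:
        return False
    avg = mean(run_iri)
    cov = stdev(run_iri) / avg * 100.0  # coefficient of variation, %
    return (abs(avg - reference_iri) <= 0.05 * reference_iri
            and cov <= 3.0
            and mean(run_xcorr) >= 90.0
            and min(run_xcorr) >= 85.0
            and abs(measured_len - actual_len) <= 0.002 * actual_len)
```

The per-run minimum on cross-correlation catches a single bad pass that a high average would otherwise hide, which is the point of stating both an average and an individual criterion.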
Table 47. Summary of quality control activities.

Deliverable: IRI, DMI
Quality expectation: 95% compliance with standards
QC activities and frequency:
• Equipment configuration, calibration, and verification: precollection (annually)
• Equipment checks and real-time monitoring: daily
• Control, blind, or verification testing: weekly
• Inspection of uploaded data samples: weekly
• Inspection of processed data: during manual QC
• Final data review: prior to RIMS¹ upload

Deliverable: Rut depth, faulting, GPS coordinates, longitudinal grade
Quality expectation: 95% compliance with standards
QC activities and frequency:
• Equipment configuration, calibration, and verification: precollection (at time of equipment purchase)
• Equipment checks and real-time monitoring: daily
• Control, blind, or verification testing: weekly
• Inspection of uploaded data samples: weekly
• Inspection of processed data: during manual QC
• Final data review: prior to upload

Deliverable: Distress rating
Quality expectation: 80% match between manual and automated ratings
QC activities and frequency:
• Rater training: precollection (as needed)
• Intra-rater checks: during manual QC
• Final data review: prior to upload

Deliverable: Perspective, ROW, and pavement images
Quality expectation: 98% compliance with standards for each control section, with fewer than 5 consecutive images failing to meet criteria
QC activities and frequency:
• Startup checks, real-time monitoring, and field review: daily
• Review of uploaded samples: weekly
• Final review: prior to processing

¹ Roadway Information Management System.

Table 48. Acceptance criteria.

Deliverable: IRI, rut depth, faulting, GPS coordinates, longitudinal grade
Acceptance: 95% compliance with standards
Acceptance testing and frequency: Weekly verification testing; global database check for range, completeness, and logic, with inspection of all suspect data; daily monitoring of data completeness during collection.
Action if criteria not met: Recalibration and possible recollection.

Deliverable: Distress rating
Acceptance: 80% match (±6 points) between manual and automated results
Acceptance testing and frequency: At the end of the annual collection, check the accuracy of automated crack detection and compare a preset percentage of automated distress scores with manual distress scores.
Action if criteria not met: Contact the vendor to discuss correction of the crack detection software.

Deliverable: Perspective, ROW, and pavement images
Acceptance: 98% compliance with standards for each control section, with fewer than 5 consecutive images failing to meet criteria
Acceptance testing and frequency: Weekly verification testing; daily monitoring during collection for clarity and brightness and for the absence of bugs or raindrops.
Action if criteria not met: Clean the camera; contact the vendor if issues cannot be resolved. Possible recollection.
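The "Distress rating" row of Table 48 (80% match, ±6 points) can be read as a population-level check over the manually rated comparison segments. A hedged sketch under that reading, with an assumed data layout of paired (manual, automated) distress scores:

```python
def distress_scores_accepted(pairs, point_tol=6.0, required_match_pct=80.0):
    """Accept the automated distress ratings when at least 80% of the
    compared segments have automated scores within +/-6 points of the
    manual scores (one reading of Table 48's 'Distress rating' row).
    pairs: iterable of (manual_score, automated_score) tuples."""
    pairs = list(pairs)
    matches = sum(abs(auto - manual) <= point_tol for manual, auto in pairs)
    return 100.0 * matches / len(pairs) >= required_match_pct
```

Note that per Table 48 a failure here does not reject the data outright; the corrective action is to contact the vendor about the crack detection software.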


TRB’s National Cooperative Highway Research Program (NCHRP) Synthesis 531 documents agency practices, challenges, and successes in conducting automated pavement condition surveys.

The report also includes three case examples that provide additional information on agency practices for conducting automated pavement surveys.

Pavement condition data are a critical component of pavement management systems in state departments of transportation (DOTs). The data are used to establish budget needs, support asset management, select projects for maintenance and preservation, and more.

Data collection technology has advanced rapidly over the last decade, and many DOTs now use automated data collection systems.
