Automated Pavement Condition Surveys (2019)

Chapter 4 - Summary of Agency Data Quality Procedures

The timing of this synthesis provided an opportunity to obtain and summarize SHA DQMPs prepared in response to the federal requirements. As previously discussed, 23 CFR 490 required agencies to develop and submit DQMPs to the FHWA by May 18, 2018 (Code of Federal Regulations 2017). During the follow-up questions, agencies were asked to provide their DQMPs, most of which were received by June 30, 2018. Note that some of the agency DQMPs may not yet have received FHWA approval at the time of the follow-up interview. In total, 29 SHAs provided their DQMPs, and four Canadian provinces (Alberta, British Columbia, Quebec, and Saskatchewan) provided similar documentation. Figure 19 illustrates the agency DQMPs (or other quality management documents) received and summarized in this chapter. The following sections summarize various components of the DQMPs, including standards and protocols; condition and distress types included in the plan; QC requirements; control-site, verification-site, and blind-site requirements; and acceptance requirements.

Data Quality Process

The data quality process is how the agency determines the quality of the collected and submitted condition data. This process can span the entire condition survey effort, from predeployment to acceptance.

Figure 20 illustrates the quality process for Virginia DOT. The process starts with control sites, where the collected data must meet certain criteria before full-production data collection can begin. After data processing, the vendor must perform and pass an internal QA check. If the data fail, they are reprocessed; if they pass, the data are subjected to an independent review of about 5% of the total data. Next, the data are analyzed by the agency for final acceptance. At any point, if the data fail to meet the quality check criteria, they are returned to the vendor for reprocessing. Once all of the data have been accepted, they are loaded into the applicable databases.

Figure 21 shows the data quality process for Illinois DOT. The process is similar to Virginia DOT's in that the DCV(s) must meet certain criteria before full-production data collection can begin. The collected data are processed by the vendor, at which point the data and images must meet the contract specifications for data quality. The data are then broken into 0.1-mi (0.16-km) segments and analyzed for completeness and consistency. Data not meeting these checks are returned to the vendor to fix and resubmit. After all data have been accepted, they are uploaded into the Illinois Roadway Information System and reported to HPMS.
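Both processes amount to an iterative accept-or-reprocess loop over fixed-length segments. The sketch below illustrates how the segment-level completeness and consistency screening described above (e.g., Illinois DOT's 0.1-mi checks) might be automated; the record layout, field names, and numeric bounds are illustrative assumptions, not taken from any agency's published plan.

```python
"""Minimal sketch of segment-level QC screening. The Segment layout and
the bounds used here are hypothetical, not from any agency DQMP."""

from dataclasses import dataclass

SEGMENT_LEN_MI = 0.1  # fixed segment length, as in the Illinois process


@dataclass
class Segment:
    route: str
    begin_mp: float      # beginning milepoint
    end_mp: float        # ending milepoint
    iri: float | None    # in./mi
    rut: float | None    # in.


def check_segment(seg: Segment) -> list[str]:
    """Return a list of QC flags; an empty list means the segment passes."""
    flags = []
    # Completeness: no null sensor values.
    if seg.iri is None or seg.rut is None:
        flags.append("missing sensor data")
    # Consistency: reported length matches the fixed segment length.
    if abs((seg.end_mp - seg.begin_mp) - SEGMENT_LEN_MI) > 0.005:
        flags.append("segment length mismatch")
    # Range checks (illustrative bounds only).
    if seg.iri is not None and not (0 < seg.iri <= 800):
        flags.append("IRI out of range")
    if seg.rut is not None and not (0 <= seg.rut <= 2.0):
        flags.append("rut out of range")
    return flags


def screen(segments: list[Segment]):
    """Split a delivery into accepted segments and ones returned to the
    vendor for reprocessing, mirroring the accept-or-reprocess loop."""
    accepted, returned = [], []
    for seg in segments:
        flags = check_segment(seg)
        if flags:
            returned.append((seg, flags))
        else:
            accepted.append(seg)
    return accepted, returned
```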

[Figure 19. Data quality plans received from SHAs (U.S. map; figure content not reproduced).]
[Figure 20. Virginia DOT quality process flow diagram (Flintsch and McGhee 2009, as adapted by Shekharan et al. 2007).]

Condition and Distress Types

The FAST Act requires agencies to include quality management procedures for IRI, rut depth, faulting, and cracking (see Table 5); however, a number of agencies have applied these practices to other agency-collected distress types. Tables 25 through 27 list distress types included in agency-provided DQMPs for asphalt pavements, JPCP, and CRCP, respectively. For asphalt pavements, the majority of agencies include percent cracking (FAST Act reporting), alligator cracking, longitudinal cracking, transverse cracking, block cracking, patching, potholes, and raveling in their DQMPs. For JPCP, the majority of agencies include cracked slabs, transverse cracking, longitudinal cracking, corner cracking, patching, multiple-crack slabs (broken or shattered slabs), and joint spalling. For CRCP, the majority of agencies include longitudinal cracking, transverse cracking, punchouts (count), and patching.

[Figure 21. Illinois DOT quality process flow diagram (Illinois DOT 2018). Note: PM2 = Performance Measure Number 2 (FHWA designation); IRIS = Illinois Roadway Inventory System.]

Standards and Protocols

Table 24 summarizes the data collection standards and protocols required in the agencies' DQMPs. The table is organized by data category (e.g., condition manual, equipment, sensor measurements), standard or protocol, and a description of the standard or protocol.

Table 24. Agency data collection standards and protocols (total agencies = 57).

Condition manual:
• HPMS Field Manual (FHWA 2016): standards for condition assessment on NHS roadways (24 agencies)
• Agency manuals: agency-specific distress identification manuals (14)
• LTPP Manual (Miller and Bellinger 2014): pavement distress rating manual for the LTPP program (6)
• ASTM D6433 (2018c): determine Pavement Condition Index from visual condition surveys (1)

Profile equipment:
• AASHTO R 56 (2018c): longitudinal profile equipment (22)
• AASHTO M 328 (2010a, 2018a): hardware and software for inertial profilers (18)
• AASHTO R 57 (2018b): operating and calibrating inertial profilers (17)

Faulting:
• AASHTO R 36 (2013c): method for quantifying faulting (18)

Roughness:
• AASHTO R 43 (2017c): method for quantifying IRI (17)
• ASTM E1926 (2015a): method for quantifying IRI (4)
• AASHTO PP 37 (2004): method for quantifying IRI (2)
• ASTM E1489 (2013a): method for quantifying ride number (1)

Measuring profile:
• AASHTO PP 70 (2010b, 2017a): collecting transverse profile (automated) (16)
• ASTM E950 (2018a): measuring and recording profile using an inertial profiler (15)
• ASTM E1656 (2016): collecting profiles and cracking at posted speed (4)
• ASTM E2133 (2013b): measuring profile using a walking profiler (1)

Rutting/deformation:
• AASHTO R 48 (2013a): method for quantifying rut depth (> five-point system) (12)
• ASTM E1703 (2015b): method for quantifying rut depth with a straightedge (3)
• AASHTO PP 38 (2005): method for quantifying rut depth (> five-point system) (2)
• AASHTO PP 69 (2010c, 2017d): method for quantifying deformation parameters (13)

Asphalt cracking:
• AASHTO R 55 (2013b): method for quantifying cracking (manual or automated) (8)
• AASHTO PP 67 (2010d, 2014, 2017f): method for quantifying cracking (automated) (6)

Images:
• AASHTO PP 68 (2010e, 2017e): collecting surface images (automated) (6)

Macrotexture:
• ASTM E1845 (2015c): calculating macrotexture MPD (2)

Precision and bias:
• ASTM C670 (2015d): developing precision and bias statements (1)
• ASTM C802 (2014a): determining test-method precision (1)

Table 25. Agency asphalt pavement distress types included in DQMPs (total agencies = 25). The 25 agencies are Alaska, Alberta, Arkansas, British Columbia, California, Connecticut, Delaware, Illinois, Maryland, Minnesota, New Hampshire, New Mexico, New York, North Carolina, North Dakota, Oregon, Pennsylvania, Quebec, Saskatchewan, Tennessee, Texas, Utah, Vermont, West Virginia, and Washington. Distress types, with the number of agencies including each shown in parentheses: percent cracking (HPMS) (15), PSR (1), alligator cracking (18), longitudinal cracking (19), transverse cracking (19), block cracking (9), miscellaneous cracking (5), edge cracking (5), longitudinal joint cracking (5), shoulder cracking (1), dips/bumps (2), bleeding (8), patching (12), potholes (8), porosity (1), and raveling (8). [Per-agency entries are not recoverable from the extracted text.]

Control, Verification, and Blind Site Testing

Control, verification, and blind site testing are used for monitoring and ensuring the quality of the collected pavement condition data before and during data collection.

Control site testing is conducted by the agency before production testing to certify and calibrate the data collection equipment and to verify that it meets the agency-specified quality standards. This testing is often used to establish reference values (or ground truth) for the condition and distress types collected by the agency. Control sites are typically located in the vicinity of the SHA office responsible for pavement condition data collection. They are representative of network pavement condition and are typically reused year to year until major rehabilitation is performed, in which case they are removed and replaced by a different control site location.

Verification sites are typically spread across the entire highway network and, as with control sites, their pavement condition is representative of highway pavement conditions.

Table 26. Agency JPCP distress types included in DQMPs (total agencies = 17). The 17 agencies are Arkansas, California, Delaware, Illinois, Maryland, Minnesota, New Mexico, New York, North Carolina, North Dakota, Oregon, Pennsylvania, Tennessee, Texas, Utah, West Virginia, and Washington. Distress types, with the number of agencies including each shown in parentheses: cracked slabs (HPMS) (11), transverse cracking (11), longitudinal cracking (11), corner cracking (8), ASR/D-cracking (2), joint seal damage (3), patching (10), multiple cracking (8), and joint spalling (9). [Per-agency entries are not recoverable from the extracted text.]

Table 27. Agency CRCP distress types included in DQMPs (total agencies = 6). The six agencies are Arkansas, California, New Mexico, North Dakota, Oregon, and Texas. Distress types, with the number of agencies including each shown in parentheses: cracking (HPMS) (1), longitudinal cracking (5), transverse cracking (3), punchouts (5), patching (5), pumping (1), multiple cracking (1), and spalling (2). [Per-agency entries are not recoverable from the extracted text.]

Pavement condition assessment at verification sites is conducted by the highway agency and typically is not used to establish reference values. Verification site locations are typically known to the data collection team and are often traversed multiple times during the data collection effort. Each time the DCV traverses a verification site, data are collected and submitted for review and analysis (e.g., to ensure image clarity and to verify precision and bias).

Blind sites are also typically located across the entire highway network, but their locations are unknown to the data collection team. Pavement condition on blind sites has been determined by the SHA. As with verification sites, once a DCV traverses a blind site, the data are reviewed and analyzed for compliance.

Rater Training

Rater training, in its simplest form, ensures that the raters who identify and measure pavement distresses are doing so in the correct manner. This is especially important for vendor-based analysis because raters may review data collected for several different agencies, and each agency might define distress types, severities, and extents differently. Examples of agency pavement-rater training include the following:
• California DOT (Caltrans 2018). Before a production survey, Caltrans requires all staff involved with the pavement condition data effort to participate in a 1-week training course. The intent of the course is to minimize discrepancies in crack identification and classification between the vendor and the agency QA team.
• New Hampshire DOT (New Hampshire DOT 2018). New Hampshire DOT requires personnel certification for the assessment and review of cracking data. Fifteen certification sections were developed from data and images collected in 2009 and 2010. Each section is 0.3 mi (0.5 km) long and represents a wide range of distress types. Personnel are required to rate the certification sections to a satisfactory level (benchmarked against experienced pavement condition rating technicians) before rating production survey data.
• Pennsylvania DOT (Pennsylvania DOT 2018a). Pennsylvania DOT requires the vendor to train all pavement condition rating technicians. After training is completed, raters are required to evaluate six distress calibration sites, and their results must meet agency accuracy and repeatability requirements before network data reporting.
• Texas DOT (Texas DOT 2018a). All staff involved with postprocessing surface distress data from collected images must be certified annually by attending surface distress rating classes. Certification requires the successful completion of a written test (scoring 70% or higher).

Quality Control Requirements

QC is defined as the activities conducted by the data collection team (agency or vendor) to ensure that the collected data (and images) are free of errors. These activities range from equipment checks and calibrations to assessing the validity of the collected data. Table 28 provides examples of common vendor QC activities, and Table 29 summarizes the QC requirements from agency DQMPs.

Control, Verification, and Blind Site Requirements

As previously discussed, control, verification, and blind sites are used by the agency to determine the quality of the collected data and resulting outputs. Some agencies require only a single data collection run per site, while others require multiple runs. The collected data are checked for accuracy and repeatability: a repeatability check compares repeated runs of the same equipment against one another, while an accuracy check compares the collected values against agency-established reference values. Table 30 summarizes examples of the requirements for data collected at agency control, verification, and blind sites.
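Accuracy and repeatability checks of this kind reduce to a few summary statistics over repeated runs. The sketch below shows one plausible implementation; the thresholds (10% accuracy tolerance, 5% repeatability) mirror values that appear in several agency plans but are used here only as illustrative defaults, not as any specific agency's criteria.

```python
"""Minimal sketch of control-site accuracy and repeatability checks.
Thresholds are illustrative defaults, not a specific agency's criteria."""

from statistics import mean, stdev


def repeatability_ok(runs: list[float], max_cov: float = 0.05) -> bool:
    """Pass if the coefficient of variation of repeated runs is within max_cov."""
    avg = mean(runs)
    return avg > 0 and (stdev(runs) / avg) <= max_cov


def accuracy_ok(runs: list[float], reference: float, max_bias: float = 0.10) -> bool:
    """Pass if the mean of the runs is within max_bias of the agency reference value."""
    return abs(mean(runs) - reference) / reference <= max_bias


# Example: five repeated IRI runs (in./mi) on a control site, with a
# reference value measured by the agency's Class 1 profiler.
iri_runs = [96.2, 94.8, 97.1, 95.5, 96.0]
reference_iri = 95.0
print(repeatability_ok(iri_runs), accuracy_ok(iri_runs, reference_iri))
```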

Table 28. Examples of vendor QC checks (adapted from Vermont Agency of Transportation 2018).

Data completeness:
• Total length matches expected length.
• Total number of sections matches expected number of sections.
• No data have been previously rejected.
• No section delivered without a valid exception.
• Sections shorter than expected length have a valid exception.

Locator information:
• No blank or null values.
• Combined locator values are the sum of their component reported parts.
• All locator values match agency values.
• All locator hierarchical relationships are maintained.

Length:
• All rubber-banded segments match expected length.
• Validate rubber-banded segments with adjustment > 20%.
• Validate rechained adjustments > 5%.
• No segments with zero or negative length.
• Validate segment lengths ≥ 5% different from historical length.

Linear reference system (LRS):
• No blank or null values.
• Direction and chainage match agency records.
• Direction values meet agency specification.
• Chainage flows within contiguous sections.
• No overlapping chainage.
• No duplicate mile points.

GPS:
• No blank or null values.
• GPS coverage is within tolerance.
• Latitude, longitude, and elevation are within expected boundaries.
• Latitude and longitude are within the agency location definition.
• Elevation, latitude, and longitude are within agency tolerance of historical data.

Speed:
• No blank, null, zero, or negative values.

Date:
• No blank or null values.
• Date format matches specification.
• Date of collection is within the data collection period.

Road geometry:
• No blank or null values.
• Validate values outside of typical tolerances.
• Use images and a geographic information system (GIS) map to ensure algorithms are detecting features.
• Reprocess exception data caused by vehicle deviation or required lane changes.
• Compare features in the opposite direction and reprocess to ensure optimal curve representation in both directions.

IRI:
• No negative, blank, or null values.
• Values within expected ranges.
• Validate large discrepancies between wheel paths; check images for potential cause.
• Review sections with > 5% improvement without rehabilitation and sections with > 15% deterioration compared with historical data.

Faulting:
• Number of faults less than number of rated joints.
• Faulting on jointed pavements only.
• Determine causes of values outside the expected range.
• Review sections with > 5% improvement without rehabilitation and sections with > 15% deterioration compared with historical data.

Rutting:
• Ensure sufficient valid transverse profiles to represent the section.
• Determine causes of values outside the expected range.
• No negative, blank, or null values.
• Review sections with > 5% improvement without rehabilitation and sections with > 15% deterioration compared with historical data.

Distress:
• Check quantity before and after segmentation to ensure no missing data.
• Identify and correct errors resulting from segmentation.
• Check section length and begin/end discrepancies, partial collections, and missing sections without explanation.
• Check that pavement-specific distresses match the pavement type.
• Check that distresses have been rated properly according to agency requirements.
• Check distress ratings against minimum and maximum thresholds.
• Check that rated lane widths and pavement widths are present and accurate.
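Several of the checks above compare current values against historical data. The following sketch illustrates that comparison for IRI; the 5% and 15% thresholds come from Table 28, but the flag logic, record layout, and field names are assumptions for illustration only.

```python
"""Sketch of the historical-comparison QC check from Table 28: flag
sections that improve > 5% without rehabilitation or deteriorate > 15%
relative to the prior survey. The record layout is hypothetical."""


def flag_iri_change(current: float, previous: float, rehabilitated: bool):
    """Return a flag string, or None if the section passes the check.
    Lower IRI means a smoother (improved) pavement."""
    change = (current - previous) / previous
    if change < -0.05 and not rehabilitated:
        return "improved > 5% without rehabilitation"
    if change > 0.15:
        return "deteriorated > 15% vs. historical"
    return None


sections = [
    # (section_id, current IRI, previous IRI, rehabilitated since last survey)
    ("A-001", 88.0, 112.0, False),   # improved with no recorded treatment
    ("A-002", 145.0, 120.0, False),  # large deterioration
    ("A-003", 101.0, 99.0, False),   # normal year-to-year drift
]
for sid, cur, prev, rehab in sections:
    flag = flag_iri_change(cur, prev, rehab)
    if flag:
        print(f"{sid}: {flag}")
```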

Table 29. Agency QC requirements.

Alaska (Alaska DOT 2018):
• Equipment calibrated and certified
• Profiler
  o Repeatability ≥ 95%
  o Accuracy ≥ 90%
  o Bounce test ≤ 1%
  o Block check ± 0.01 in. (0.25 mm)
  o Crack measurement system height
  o Image quality
• Distance measurement instrument (DMI) pulse ≤ 0.1 in. (2.5 mm) (5 runs)
• LRS ≤ 0.15% compared with wheel or tape
• Cracking distress (validation sites)
  o ± 5% of agency values (10 runs)
• Data reduction review
  o Image quality
  o Crack measurement for anomalies
  o Route begin and end points
  o Data completeness
  o Null and invalid data
  o Data consistency
  o Automated distress algorithms

Arkansas (Arkansas DOT 2018):
• Preproduction survey
  o Define and verify equipment configuration
  o Equipment calibration
  o Personnel certification
• During production survey
  o Data completeness
  o Subsystem checks
  o Real-time monitoring
  o End-of-day verification
  o DMI calibration

California (Caltrans 2018):
• Vehicle configuration checks
• Profiler
  o Repeatability ± 5% (3 runs)
  o Accuracy ± 10% of agency value
  o Bounce test ≤ 8 in./mi (0.5 mm/km)
  o Block check ± 0.1 in. (2.5 mm)
• Crack measurement system height comparable to previous day
• Imagery focus, color, and luminance quality
• DMI pulse ≤ 0.1% difference (3 runs)
• LRS ≤ 30 ft (9 m) of wheel or tape
• IRI: std ≤ 5% (3 runs) and ± 10% of agency value
• Rut: std ≤ 0.06 in. (1.5 mm) (3 runs) and ± 0.06 in. (1.5 mm) of agency value
• Fault: std ≤ 15% (multiple runs and/or historical avg)

Connecticut (Connecticut DOT 2018):
• Calibration checks
• Validation testing
• Daily equipment checks
• Real-time monitoring
• End-of-day review

Delaware (Delaware DOT 2018):
• Vendor certification of equipment and methods
• Approval of vendor QMP
• Initial and monthly equipment calibration
• Ongoing discrepancy monitoring
• Bounds and format checking (monthly)
  o IRI 30–400 in./mi (1.9–25.3 mm/km); wheel paths differ ≤ 50 in./mi (3.2 mm/km)
  o Rut ≤ 1.0 in. (25 mm); wheel paths differ ≤ 0.25 in. (6.4 mm)
  o Crack area ≤ 100%
  o Fault ≤ 1.0 in. (25 mm); > 0 when joints are present
• Compare with previous year and flag > 10% difference
• Images (monthly)
  o Random sample of 10 images
  o Confirm distress data accuracy
• Distance and location (monthly)
  o Random sample of 10 sections
  o Compare GPS accuracy with base map
• Final data review
  o Data coverage > 99%
  o Data within bounds > 99%

Illinois (Illinois DOT 2018):
• Preproduction (sample route, 10 runs)
  o IRI: current runs < 10% std of last validation and current std < 5%
  o Rutting: current runs < 0.08 in. (2 mm) std of last validation and current std < 0.04 in. (1 mm)
  o Cracking: current runs < 15% std of last validation and current std < 15%
• Repeatability sections (monthly)
  o IRI ± 10% of baseline
  o Rut ± 0.08 in. (2 mm)
• Image and data quality checks before submittal to agency

Maine (Maine DOT 2018):
• Daily field operations
  o Diagnostic checks
  o Random tests to verify reasonableness
  o Monitor systems for errors
  o Random review at end of day
• Office postprocessing
  o 100% review: identify missing data; check crack type and severity

Maryland (Maryland DOT 2018):
• Calibration and quality checks
  o DMI runs < 1.0 pulse/ft (0.3 pulse/m)
  o Block check (AASHTO R 57 [2018b])
  o Bounce test: max IRI < 0.1 in./mi (0.006 mm/km) static; bounce < 0.5 in./mi (0.03 mm/km); avg < 0.4 in./mi (0.025 mm/km)
  o Verify roll, pitch, heading ± 0.4%
• Daily
  o Confirm subcomponent functionality
  o Confirm weather conditions
  o Conduct safety check
  o Clean apertures and lenses
  o Check data elements collected
  o Check right-of-way (ROW) and pavement images
  o Check IRI measurements

Minnesota (Minnesota DOT 2018):
• Equipment calibration
  o Water pan test (3D laser)
  o Block and bounce test (before, and monthly during, data collection)
  o DMI
• Equipment and operator certification
• Daily checks (e.g., tires, cameras, lasers)
• During data collection (e.g., image quality, DMI measurements, road closures)
• End of day (e.g., view images, review records, transfer data to portable hard drive)

New Hampshire (New Hampshire DOT 2018):
• Preproduction
  o Equipment verification and calibration
  o Camera check
  o Block check and bounce test
• During data collection
  o Evaluate image quality
  o Compare surveyed with planned length
  o Monitor sensor output
  o Monitor image quality
• Data and image checks (100%)
  o GPS points
  o Collected matches planned roadway
  o Line scan, ROW, side, and rearview images collected properly
  o IRI reported as expected
  o Laser rut measurement system sensor data displaying correctly

New Mexico (New Mexico DOT 2018):
• Preproduction
  o System requirements and checks
  o Block check ± 0.01 in. (0.25 mm)
  o Bounce test 8 in./mi (0.5 mm/km)
  o Profiler: repeatability ≥ 92%; accuracy ≥ 90%
  o Image focus, color, luminance
  o DMI pulse ≤ 0.1 (5 runs)
  o LRS ≤ 15% of wheel or steel tape
  o IRI (10 runs of 0.1 mi [0.16 km]): std ≤ 5%; std ≤ 10% (historical avg); symmetrical appearance
  o Rut (10 runs of 0.1 mi [0.16 km]): std ≤ 0.40 in. (10 mm); std ≤ 0.40 in. (10 mm) (historical avg)
  o Distress (10 runs of 0.1 mi [0.16 km]): std ≤ 15% of total length; std ≤ 15% (historical avg)
• During production
  o GPS accuracy ≤ 9.8 ft (3 m)
  o Image quality and lane placement
  o Monitor collection system errors
  o Data completeness
• Data reduction
  o Sample image quality and coverage
  o Review crack measurement
  o Confirm route begin/end
  o Confirm data completeness
  o Confirm roadway features
  o Review distress data for consistency
• Data delivery
  o IRI 30–400 in./mi (1.9–25 mm/km); wheel paths differ ≤ 50 in./mi (3.2 mm/km)
  o Rut ≤ 0.35 in. (8.9 mm); wheel paths differ ≤ 0.25 in. (6.4 mm)
  o HPMS percent cracking: asphalt ≤ 50%; JPCP and CRCP ≤ 100%
  o Percent cracking (AASHTO PP 67 [2017f]) ≤ 100%
  o Faulting ≤ 1.0 in. (25 mm); > 0 when joints are present
  o Accurate description items (100%)
  o ≤ 10 consecutive fixed segments (500 ft [152 m]) with missing data

New York (New York DOT 2018):
• Presurvey
  o Equipment calibration and certification
  o Precision and bias testing
• During collection (monthly)
  o Precision and bias testing

North Dakota (North Dakota DOT 2018):
• IRI and DMI
  o > 95% compliant with standards: equipment configuration, calibration, and verification; daily equipment checks and real-time monitoring; inspection of uploaded data samples; inspection of processed data; final data review
• Rut, fault, GPS, and grade
  o > 95% compliant with standards: initial equipment configuration, calibration, and verification; daily equipment checks and real-time monitoring; inspection of uploaded data samples; inspection of processed data; final data review
• Distress rating
  o > 80% match with manual survey: initial rater training; intra-rater checks; final data review
• Images
  o > 98% compliant with standards for each control section; < 5 consecutive images failing to meet criteria: startup checks, real-time monitoring, and field review; uploaded sample review; final review

Oregon (Oregon DOT 2018):
• Preproduction
  o System requirements and checks
  o Profiler: repeatability ≥ 92%; accuracy ≥ 90%
  o Image focus, color, luminance
  o DMI pulse ≤ 0.1 difference
  o IRI: block check ± 0.01 in. (0.25 mm); bounce test ≤ 3 in./mi (0.2 mm/km) static and ≤ 8 in./mi (0.5 mm/km); ProVAL cross-correlation repeatability score ≥ 0.92 (5 runs)
  o Rut (3 runs) ± 0.05 in. (1.3 mm)
  o Fault (3 runs) ± 0.06 in. (1.5 mm)
  o Distress (3 runs or historical avg) std ≤ 10%
• During production
  o GPS accuracy ≤ 9.8 ft (3 m)
  o Image quality and lane placement
  o Monitor collection system errors
  o Data completeness
• Data reduction
  o Sample image quality
  o Review sample of crack measurement system output for anomalies
  o Confirm route begin/end
  o Confirm data completeness
  o Confirm placement of roadway features
  o Manually review and correct automated results when image analysis is in error
  o Review distress data for consistency
  o Perform data reasonableness checks
• Data delivery
  o Confirm LRS coding and lane
  o Milepoint ± 0.03 mi (0.05 km) of actual
  o Confirm correct pavement type
  o Confirm image quality
  o Confirm events marked as required
  o No missing values without valid exclusion and reason codes
  o IRI: 20–800 in./mi (1.3–51 mm/km)
  o Rut: < 2.0 in. (51 mm)
  o Fault: < 1.0 in. (25 mm)
  o Patching: ≤ 6,336 ft² (205 m²)
  o Asphalt (0.1-mi [0.16-km] segments): fatigue cracks ≤ 1,056 ft (190 m); longitudinal cracks ≤ 1,584 ft (285 m); potholes ≥ 6 in. (152 mm) wide and count ≤ 44/0.1 mi (26/0.1 km); raveling ≤ 1,584 ft (285 m); transverse cracks ≥ 6 ft (1.8 m) long and count ≤ 44/0.1 mi (26/0.1 km)
  o JPCP (0.1-mi [0.16-km] segments): corner breaks ≤ 36/0.1 mi (21/0.1 km); longitudinal cracks (non-wheel path) ≤ 1,584 ft (285 m); longitudinal cracks (wheel path) ≤ 1,056 ft (190 m); shattered slabs ≤ 36/0.1 mi (21/0.1 km); transverse crack count < number of slabs
  o CRCP (0.1-mi [0.16-km] segments): longitudinal cracks (non-wheel path) ≤ 1,584 ft (285 m); longitudinal cracks (wheel path) ≤ 1,056 ft (190 m); punchouts ≤ 36/0.1 mi (21/0.1 km)

Pennsylvania (Pennsylvania DOT 2018a):
• Equipment calibration and certification
  o Block calibration and test
  o Roughness calibration and bounce test
  o DMI calibration
  o Laser Crack Measuring System (LCMS)™ testing and calibration
  o Image alignment and quality testing
• Extended section road test
  o 4–25 mi (6.4–40.2 km) in length
  o Conducted at 55 mph (89 km/h): confirm systems are working correctly; confirm data/images are properly collected
• Data analysis
  o Completeness
  o Location information
  o Section length
  o Linear reference
  o Sensor data (e.g., IRI, fault, rut)
• Corrective action plan

Saskatchewan (Saskatchewan Ministry of Highways and Infrastructure 2017):
• Equipment checks on profiler, LCMS™, GPS, and DMI
• Image quality is clear and properly stitched
• All distress is visible
• Distress correctly identified and quantified

Tennessee (Tennessee DOT 2018):
• Before production testing
  o Equipment calibration 2 months before production testing
  o Control site testing after completion of calibration
• During production testing
  o Control site testing monthly
  o Control site repeatability (weekly)
• Data checks
  o Format and completeness
  o Sensor data: check for large differences between wheel paths for IRI and rut
  o Distress data: check that results are within expected ranges
  o Image quality (e.g., viewing path, minimal or no debris, legible signs)

Texas (Texas DOT 2018a):
• Implementation schedule
• Logical sequence of tasks and deliverables
• Clear definition of tasks and deliverables
• Staffing by task and deliverable
• Target completion date for each task and deliverable
• Strategies and processes to promote quality
• Procedures for measuring and reporting quality performance
• Controls to assure quality and consistency
• Personnel certification training
• Validation of equipment accuracy and precision, and daily and ongoing QC procedures

Vermont (Vermont Agency of Transportation 2018):
• Data collection personnel training and certification
• Equipment configuration, setup, and calibration
• Control site testing and verification
• Daily system checks
• Real-time data checks
• Data processing personnel training and certification
• Data processing, review, and analysis
• Project reporting
• Corrective action

Virginia (Virginia DOT 2015):
• Personnel certification training
• Equipment accuracy and precision
  o < 5% of sensor data items not documented by vendor
• Daily and ongoing QC procedures
• Establish appropriate variation limits for each data item
• Weekly equipment calibration schedule; maintain calibration records

West Virginia (West Virginia DOT 2018):
• Preproduction
  o Equipment calibration and verification
  o Rater training
  o Validate site rating calibration
  o Image checks and monitoring
• Daily
  o Equipment checks
  o End-of-day review
  o Inspect uploaded data samples
  o Inspect processed data
  o Mileage review
• Weekly
  o Compare location with shape file
  o Uploaded image sample review
• Prior to data submittal
  o Final data review
  o Final distress rating review
  o Final segment location review
  o Final image review

Wyoming (Wyoming DOT 2018):
• Preproduction
  o DCV certified at Texas A&M Transportation Institute, MnROAD (a pavement test track owned and operated by Minnesota DOT), National Center for Asphalt Technology, or a vendor certification center
• Control site precision and accuracy
  o Before, midway through, and on completion of production survey
  o Faulting, texture, rutting, and IRI

Table 30. Examples of agency control, verification, and blind site requirements.

Alaska (Alaska DOT 2018): one control site for profiler and DMI certification and six verification sites. Criteria (10 runs):
• IRI: std < 5% of Class 1 profiler
• Alligator cracking: std < 15% of agency value
• Rut: std < 0.04 in. (1.02 mm) of Class 1 profiler, dipstick, or straightedge
• Crack length: std < 15% of agency value
• Images: minimal skipped images; uniform and consistent illumination, color balance, exposure, clarity, and stitching; resolution sufficient to identify a 0.125-in. (0.32-mm) crack at 60 mph (97 km/h)

Alberta (Alberta Transportation n.d.):
• IRI: accuracy (5 runs) ± 10% of Class 1 profiler; repeatability (5 runs) each run ± 5% of the 5-run avg
• Rut: accuracy (5 runs) avg ± 0.08 in. (2 mm) of Class 1 profiler; repeatability (5 runs) each run avg ± 10% of the 5-run avg

British Columbia¹ (British Columbia MoTI 2016): four control sites, 1,640 ft (500 m) in length.
• IRI: accuracy avg ± 10% of Class 1 profiler; repeatability (5 runs) std ± 3.9 in./mi (0.25 mm/km)
• Rut: accuracy avg ± 0.12 in. (3 mm) of straightedge; repeatability (5 runs) std ± 0.12 in. (3 mm)
• Distress: accuracy avg ± 1 pavement distress index (PDI) of manual survey; repeatability (5 runs) std PDI ± 1

California (Caltrans 2018): eight control sites (4 asphalt and 4 concrete). Criteria (3 runs):
• IRI: std ± 5% of Class 1 profiler
• Rut: std ± 0.06 in. (1.52 mm) of Class 1 profiler
• Fault: std ± 0.06 in. (1.52 mm) of manual survey
• Distress: ± 10% of manual survey
• Images: displayable and clear, continuous, correctly stitched with no missing or overlapping images, synchronized with geographic locations and associated attributes; ≤ 10 images/mi (16 images/km) or ≤ 2 consecutive images/mi (3 images/km) with poor quality; 0.125-in. (3.175-mm) wide cracks are visible

Delaware (Delaware DOT 2018):
• Initial calibration (10 runs per site)
  o 9 asphalt, 7 composite, 8 surface-treated, and 6 concrete sites
  o ≥ 90 percent within limits (PWL) and ≤ 5% failing multiple criteria
  o IRI, rut, and fault: reference value is the avg of repeat runs
    - Fault: ± 50 count and std < 5 count
    - IRI: ± 10 in./mi (0.6 mm/km) and std < 5 in./mi (0.3 mm/km)
  o Distress/condition evaluated:
    - Bleeding: ± 50 ft² (4.6 m²) and std < 5%
    - Block and fatigue cracking: ± 50 ft² (4.6 m²) and std < 5 ft² (0.5 m²)
    - Joint reflection cracking, patch deterioration, and potholes: ± 5 ft² (0.5 m²) and std < 5 ft² (0.5 m²)
    - Unclassified cracking: < 5% area and std < 5%
    - Crown, cross slope, edge and non-wheel path longitudinal cracking, raveling, surface defects, crack length: ± 50 ft (15 m) and std < 5 ft (1.5 m)
    - Joint spalling, joint reflection and map cracking, joint seal damage, alkali-silica reactivity, slab count, and joint count: ± 5 count and std < 5 count
    - Slab and transverse cracking: ± 50 ft (15 m) and std < 5 ft (1.5 m); ± 5 count and std < 5 count
    - Joint spacing: ± 5 ft (1.5 m) and std < 5 ft (1.5 m)
    - Rut: ± 50 ft (15 m) and std < 5 ft (1.5 m); ± 0.05 in. (1.3 mm) and std < 0.05 in. (1.3 mm)
  o Compare with previous reference value to check that values are within acceptable limits
• Verification (monthly)
  o Re-run initial calibration sites (5 runs)
  o ≥ 90% of sections within limits and ≤ 5% failing multiple criteria
• Ongoing discrepancy monitoring
  o Difference between observed and LCMS™ values
  o ≥ 90% of sections within all bounds and ≤ 5% failing multiple bounds criteria
• Independent bounds and format check
  o ≥ 90% of sections within all bounds and ≤ 5% failing multiple bounds criteria (see initial calibration site requirements)
• Independent image sample check (random sample of 10 images)
  o 100% of samples free of major problems
• Independent distance and location check (random sample of 10 sections)
  o ≥ 90% of sections within limits and ≤ 5% failing multiple criteria
• Final data review
  o Data coverage > 99% within bounds
  o Data within bounds checks > 99%
  o Rut depth: pass water tray test; avg ± 0.10 in. (2.5 mm) on a fabricated beam with 0.5-in. (12.7-mm) ruts
  o Faulting: avg ± 0.05 in. (1.3 mm) of fault meter
  o Cracking: manually crack-mapped; > 90% of cracks shown on crack map

Iowa (Iowa DOT 2018):
• Control sites
  o 4 asphalt and 4 concrete sites
  o 1,500 ft (457 m) in length
  o IRI, fault, and rut: preproduction and monthly testing
  o Distress: preproduction
• Criteria
  o IRI: ± 10% of agency-measured value; 3 replicate runs std < 5%
  o Fault: ± 0.10 in. (2.5 mm) of agency-measured value; 3 replicate runs std < 5%
  o Rut: ± 0.10 in. (2.5 mm) of agency-measured value; 3 replicate runs std < 5%
  o Distress: ± 10% of manual review of 3D scans/images
  o Downward images: minimal skipped images; illumination, color, exposure, clarity, stitching, and synchronization with windshield images

Maryland (Maryland DOT 2018):
• Test loop (monthly)
  o 45 sections, 13.1 mi (21.1 km)
  o 3 runs every 3 weeks
  o Compare repeatability with 10-run QC test loop results
• DMI
  o Two 1-mi (1.6-km) sections
  o Calibrate every 3 weeks

Minnesota (Minnesota DOT 2018):
• Equipment certification
  o 1 asphalt and 1 concrete section
  o IRI (5 repeat runs): reference values from walking profiler; avg ± 5% of reference profile
• Operator certification
  o Successful completion of Pavement Surface Smoothness Specification Training
• Verification sites
  o Weekly IRI and rut depth verification (1 site, 1,000 ft [305 m] in length, 1 run): baseline avg IRI and rut based on 10 repeat runs; IRI ± 5% of baseline; rut ± 0.05 in. (1.3 mm) of baseline; compare wheel path IRI and rut, grade, heading, and cross slope with baseline for acceptable data
  o Upload images and data to the office workstation, then verify: image completeness and begin/end images; sensor data complete for left and right IRI; run the analysis process and verify that all sensor data are present for IRI, half-car roughness index, rutting, texture, faulting, and gyro
  o New van: establish baseline values from 5 repeat runs
  o Bimonthly: check consistency of IRI, rutting, faulting, and compass heading against baseline plot

New York (New York DOT 2018):
• 1 control site: presurvey, then tested 5 times per month
  o IRI precision and bias: avg of 5 runs ± 1 in./mi (0.06 mm/km); avg of each run ± 3 in./mi (0.19 mm/km); 0.1-mi (0.16-km) individual runs within band of historical values
• Rut, fault, and distress consistent and representative of DOT records at these locations

North Carolina (North Carolina DOT 2018):
• No more than 20 DOT-selected sites
• Determine precision and bias limits and corrective actions
  o Vendor-determined value or ≤ 5, whichever is lower
• Verify camera angles and coverage, data calculation methods, and standard operating procedures

North Dakota (North Dakota DOT 2018):
• Annual vehicle calibration conducted before production survey (5 runs)
  o IRI: ± 5% of reference profiler (SurPRO); COV ± 3%; mean cross-correlation of runs > 90%; individual cross-correlation of each run > 85%
  o DMI: distance of each run ± 0.2% of agency value

Oregon (Oregon DOT 2018):
• 1 control site for IRI and 1 control site for rutting
  o Preproduction testing, monthly verification testing, and postsurvey exit controls
• Criteria
  o LRS: correct code and lane; location ± 0.03 mi (0.05 km) of agency location
  o IRI: ProVAL cross-correlation repeatability score ≥ 0.92 (5 runs) and accuracy score ≥ 0.90 (5 runs) compared with Oregon DOT SurPRO
  o Rut: ± 0.05 in. (1.3 mm) run to run (3 runs); ± 0.10 in. (2.5 mm) compared with agency survey

Pennsylvania (Pennsylvania DOT 2018a):
• Control sites
  o 4 asphalt, 2 JPCP, and 2 additional sites for IRI verification
  o Accuracy and repeatability (3 runs):
    - IRI: ± 10% of walking profiler; ± 5% run to run
    - Rut: ± 10% of agency value (inertial profiler); ± 5% run to run
    - Fault: ± 10% of agency value (inertial profiler); ± 5% run to run
    - Distress: ± 10% of agency value (3 or more agency raters, 2 ratings per rater); ± 5% run to run
• Blind verification sites (125 segments)
  o Minimum 3 agency raters, at least 2 ratings per site
  o Vendor's data ± 10% of avg agency ratings
• Vendor data compared with data from the previous 2 years; flag:
  o Distress > ± 10%
  o IRI > ± 50 in./mi (3.2 mm/km)

Quebec (Quebec MoT 2017):
• 1 control site, 1,312 ft (400 m) in length
  o Tested at beginning and end of survey
  o 10 runs, compared with SurPRO
  o IRI (each wheel path): avg bias ≤ 1.25; repeatability ≤ 0.38
  o Cracking qualified using artificially simulated cracks
• 5 verification sites, 3,281 ft (1,000 m) long
  o Reference values (10 runs, at least 4 survey vehicles): lowest- and highest-variability runs removed; median of 3 runs from each vehicle; 5 remaining runs used for verification
• Verification standards
  o IRI: bias and repeatability < 5%; bias of 75% of sections < 10% and 90% of sections < 15%; deviation between 2 devices < 10%³
  o Rut: bias < 0.04 in. (1.0 mm) and repeatability < 1.0%; bias of 75% of sections < 0.45 in. (1.5 mm) and 90% < 1 in. (25 mm)
  o Longitudinal cracking per zone²: 80% of sections ± 16 ft (5 m) and 97.5% ± 33 ft (10 m); bias ± 16 ft (5 m)
  o Longitudinal cracking by severity³: 3 severities or more must be ± 16 ft (5 m) in > 75% of cases; 3 severities or more must be ± 33 ft (10 m) in > 90% of cases; bias ± 16 ft (5 m)
  o Overall cracking index: > 95% of sections ± 10 points of reference values; > 80% of sections ± 5 points of reference values; bias and repeatability ± 2

Saskatchewan (Saskatchewan Ministry of Highways and Infrastructure 2017):
• 1 control site, 492 ft (150 m) in length
  o 5 runs
  o IRI > 92% compared with Class 1 profiler
  o Cracking > 90% compared with manual survey¹,²
  o Texture > 90% compared with agency survey
• 2 verification sites, 656 ft (200 m) in length
  o Test every 3,100 ln-mi (5,000 ln-km)
  o IRI > 92% compared with agency
  o Rut > 90% of average rut depth
  o Cracking > 90% by type and width¹,²
  o Texture > 90% by type and affected percent

Tennessee (Tennessee DOT 2018):
• Control sites
  o 15 sites
  o 5 runs (agency and vendor)
  o Paired t-test of average values to determine differences in the collected data
• Verification sites (weekly)
  o Check repeatability and time series
  o Compare with historical values: ∆IRI -10 to 30 in./mi (-0.6 to 1.9 mm/km); ∆rut < 0.2 in. (5.1 mm)

Texas (Texas DOT 2018a):
• 5 control sites
  o IRI ± 6 in./mi (0.38 mm/km) of agency profiler
  o Rut ± 0.06 in. (1.5 mm) of agency profiler
  o Distress score ± 15 points (scale of 0 to 100 points)
• 37 verification sites, at least 1 in each district (compared with agency vehicle)
  o Every week, every vehicle: IRI ± 6 in./mi (0.38 mm/km); rut ± 0.06 in. (1.5 mm); distress (6% random sample) score ± 15 points

Utah (Utah DOT 2018):
• Compare with random-sample ground truth data
  o Distress: 90% of data ± 15%
  o IRI: 95% of data ± 5%
  o Rut: 95% of data ± 0.1 in. (2.5 mm)
  o GPS location: 95% of data ± 5 ft (1.5 m)
  o LRS location: 95% of data ± 0.005 mi (0.008 km)

Vermont (Vermont Agency of Transportation 2018):
• 4 control sites
  o Calibrate distress rating process
  o IRI and rut precision and bias > 95% compliance with standards
• 5 verification sites (weekly)
  o Ten 0.05-mi (0.08-km) sections
  o > 95% compliance with standards

Virginia (Virginia DOT 2015):
• No more than 20 agency-selected sites
• Determine precision and bias
• Verify calibration procedures, camera angles, coverage, data calculation methods, and standard operating procedures

Washington State (Washington State DOT 2018):
• 2 IRI sites, 1,000 ft (305 m)
• 1 weekly site for presurvey IRI, faulting, and rutting
• Accuracy and repeatability compared with historical values (< 5% variation)
  o IRI: presurvey (5 runs) and weekly (3 runs), compared with SurPRO
  o Rut: weekly, 3 runs, compared with SurPRO
• Check DMI monthly

West Virginia (West Virginia DOT 2018):
• Vendor profile equipment and operators certified by the National Center for Asphalt Technology or an agency-approved method
• Agency staff must obtain West Virginia DOT profiler operator certification
• Agency profile equipment calibrated and certified by an agency-approved process
• Calibration sites
  o Established by vendor
  o Vendor and agency equipment calibration
• Verification sites
  o 2 asphalt and 1 concrete site
  o Compare automated data with agency-conducted manual surveys and agency-automated data collection

Wyoming (Wyoming DOT 2018):
• 2 asphalt sites
  o Verify IRI, rutting, texture, geometrics, positioning, and 3D crack detection
• 1 concrete site
  o Verify IRI, rutting, texture, faulting, geometrics, positioning, and 3D crack detection
• Compare data with previously collected data
• Verify image quality through spot checks

Note: avg = average; std = standard deviation.
¹ Sum of all severities within the same zone (right wheel path, center of lane, and left wheel path).
² Sum of longitudinal cracks, all zones, by severity (very low, low, medium, or high).
³ % deviation = [(manual measurement - test vehicle measurement) / manual measurement] × 100%.
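Several of the profile criteria in Table 30 (e.g., Oregon's and North Dakota's) are expressed as cross-correlation scores between a candidate run and a reference profile. The sketch below shows the basic computation on synthetic data; it is a simplified stand-in for the ProVAL cross-correlation method, which additionally applies IRI filtering and offset alignment before correlating.

```python
"""Simplified profile cross-correlation, illustrating the kind of score
behind criteria such as "cross-correlation repeatability score >= 0.92."
ProVAL's actual method also IRI-filters and offset-aligns the profiles;
this is a bare normalized correlation on synthetic data."""

import numpy as np


def profile_correlation(run: np.ndarray, reference: np.ndarray) -> float:
    """Normalized (Pearson) correlation between two equally sampled profiles."""
    run = run - run.mean()
    reference = reference - reference.mean()
    return float(np.dot(run, reference) /
                 (np.linalg.norm(run) * np.linalg.norm(reference)))


# Synthetic reference profile and a repeat run with small sensor noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 161, 1000)  # ~0.1 mi (161 m) of longitudinal distance
reference = np.sin(0.8 * x) + 0.3 * np.sin(2.9 * x)
repeat_run = reference + rng.normal(0, 0.05, x.size)

score = profile_correlation(repeat_run, reference)
print(f"cross-correlation score: {score:.3f}")  # compare against, e.g., 0.92
```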

Acceptance Requirements

Agency acceptance requirements are activities performed to assess the quality of the submitted condition data. The requirements can vary widely depending on agency needs. Table 31 summarizes agency acceptance requirements. To illustrate an agency's process for acceptance testing, the process used by Caltrans is described below in more detail.

Caltrans acceptance testing is conducted by in-house staff reviewing 5% to 10% (2,500 to 5,000 mi [4,023 to 8,046 km]) of the submitted data and images. The Caltrans acceptance process includes field verification, QA field review, and office review.

Field Verification

Field verification of IRI is conducted using Caltrans-certified profilers. Field verification sites are located across California (14 sites per district, 168 sites statewide), are selected based on pavement type (asphalt and concrete) and range of IRI values, and are approximately 5 mi (8 km) in length. Caltrans and vendor IRI are calculated on 0.1-mi (0.16-km) segments and compared. If the Caltrans and vendor IRI values for a 0.1-mi (0.16-km) segment of a field verification site are within 10% of each other, the segment passes. The vendor data are accepted if 85% of the field verification site segments pass the criteria.

For the LRS, the linear and georeferenced locations of images are randomly reviewed at selected bridges, county lines, and intersections and compared with Caltrans survey data. Location data are accepted if 95% of the landmarks are within 30 ft (9.1 m) of the Caltrans locations.

QA Field Review

The Caltrans QA field review is intended to validate roadway segments based on the presence of distress and the category of treatment (e.g., preservation, minor rehabilitation, major rehabilitation) for asphalt pavements and JPCP. The QA field crews (two members per team) conduct manual reviews of 0.1-mi (0.16-km) segments with safe shoulder access on each field site.

The following details apply to asphalt pavements:
• A "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate 30% or more alligator "B" cracking. Alligator "B" cracking is defined as "interconnected or interlaced cracks in the wheel path, forming a series of small polygons (generally less than 1 ft [0.3 m] on each side). The cracking resembles the appearance of alligator skin or chicken wire" (Caltrans 2015).
• For all other project types, a "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate less than 30% alligator cracking.
• The vendor's results are accepted if more than 85% of the 0.1-mi (0.16-km) segments meet the "pass" criteria. If these criteria are not met, the vendor is notified for corrective measures.

The following details apply to JPCP:
• A "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate 10% or more of panels cracked into three or more pieces.
• For all other project types, a "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate less than 10% of panels cracked into three or more pieces.
• The vendor's results are accepted if more than 85% of the 0.1-mi (0.16-km) segments meet the "pass" criteria. If these criteria are not met, the vendor is notified for corrective measures.

An example of the reporting process is shown in Table 32.
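The Caltrans field verification rule reduces to a per-segment comparison followed by an overall pass-rate test. Here is one way it might look in code; only the 10% and 85% thresholds come from the description above, while the data layout and function names are hypothetical.

```python
"""Sketch of the Caltrans-style IRI field verification rule: a segment
passes if vendor and agency IRI agree within 10%, and the delivery is
accepted if at least 85% of segments pass. Data layout is hypothetical."""


def segment_passes(agency_iri: float, vendor_iri: float, tol: float = 0.10) -> bool:
    """Vendor IRI within tol (10%) of the agency-measured IRI for the segment."""
    return abs(vendor_iri - agency_iri) / agency_iri <= tol


def delivery_accepted(pairs: list, min_pass_rate: float = 0.85) -> bool:
    """Accept if the share of passing 0.1-mi segments meets min_pass_rate."""
    passes = sum(segment_passes(a, v) for a, v in pairs)
    return passes / len(pairs) >= min_pass_rate


# (agency IRI, vendor IRI) for each 0.1-mi segment of a verification site
segments = [(95.0, 98.1), (120.3, 118.0), (88.7, 104.2), (101.5, 103.0)]
print(delivery_accepted(segments))  # False: only 3 of 4 segments pass (75%)
```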

Table 31. Agency acceptance requirements.

Alaska (Alaska DOT 2018):
• Data
  o > 98% complete
  o > 98% populated with required data elements
  o 100% description information
  o > 98% with < 500 ft (152 m) of consecutive missing segments
• IRI, rut, and cracking 95% compliant with verification testing
• Distress ratings 95% compliant with protocol requirements and quality expectations
• Location information 100% accurate and complete (database check)
• Pavement image review: 20% random sample, 100% compliant with verification testing requirements

Arkansas (Arkansas DOT 2018):
• Data and images
  o 100% correct data type and format
  o > 98% data and image completeness
  o > 98% accurate location information
  o > 98% correct data for surface type
  o > 98% data within acceptable range
  o 100% correct lane marking and joint location (100% manual review of images and 5% independent sample)
  o > 98% correct crack detection (5% sample); flag when cracking misdetection exceeds 20%
• IRI
  o > 98% collected at > 40 mph (64 km/h)
  o 30–400 in./mi (1.9–25.3 mm/km)
  o < 30% difference between wheel paths
• Rut: > 98% with ∆rut < 30% between wheel paths; 0 to 1 in. (25 mm)
• Fault: > 98% with fault value < 1 in. (25 mm) for any wheel path; 0 to 1 in. (25 mm)
• Curve data: > 98% classified correctly
• Percent cracking (HPMS): > 98% within expected ranges
• 5% random sample (compared with historical values)
  o > 95% of IRI data ± 10%
  o > 95% of rut depth ± 0.05 in. (1.27 mm)
  o > 95% of faulting ± 0.05 in. (1.27 mm)
  o > 95% of distress data ± 20%
  o > 95% of geometric properties ± 15%
  o > 98% quality ROW and pavement images

California (Caltrans 2018):
• 5% to 10% random sample
• Review vendor reports, data, and images for completeness
• Conduct field verification
• Verify images and vendor results
• Confirm upload into pavement management system
• Conduct year-by-year consistency checks
• IRI > 95% ± 10% of agency value
• Rut > 95% ± 0.06 in. (1.5 mm) of agency value
• Fault > 95% ± 0.06 in. (1.5 mm) of agency value
• MPD > 95% ± 0.06 in. (1.5 mm) of agency value
• Cracking > 85% ± 10% of agency value
• Major rehabilitation segments: > 85% of segments ± 10% area of agency value
• Element review > 85% ± 10% of agency value
• 100% data completeness
• LRS > 95% ± 30 ft (9.1 m)
• Downward and ROW images > 95% meeting criteria
• 100% data upload

Connecticut (Connecticut DOT 2018):
• Reproducibility between vehicles
  o IRI ± 10 in./mi (0.63 mm/km)
  o Rut ± 0.06 in. (1.52 mm)
  o Asphalt cracking: total < 10% COV; longitudinal, transverse, or area cracking < 20% COV; total wheel path cracking < 40% COV; total non-wheel path cracking < 60% COV
  o Cross-slope difference ± 5%
  o Longitudinal grade difference ± 0.1%
• Repeatability (5 runs)
  o IRI: each run ± 5% of the 5-run avg
  o Rut: each run ± 0.06 in. (1.52 mm) of the 5-run avg
  o Asphalt cracking: total < 10% COV; longitudinal, transverse, or area cracking < 15% COV; total wheel path cracking < 30% COV; total non-wheel path cracking < 50% COV
  o Cross slope std ± 0.05%
  o Longitudinal grade std ± 0.1%
• Data range requirements, 0.1-mi (0.16-km) segments
  o IRI: CTDOT 99% within 40–450 in./mi (2.5–28.5 mm/km); HPMS 30–400 in./mi (1.9–25.3 mm/km)
  o Rut: CTDOT 99% ≤ 0.5 in. (12.7 mm); HPMS ≤ 1.0 in. (25.4 mm)
  o Fault: CTDOT 99% ≤ 0.5 in. (12.7 mm); HPMS ≤ 1.0 in. (25.4 mm)
  o Asphalt cracking: CTDOT 99% ≤ 300 ft/image (91 m/image); HPMS 0–54% area
  o Concrete cracking: CTDOT to be determined; HPMS 0–100% cracked slabs
  o Cross slope: CTDOT 100% ≤ 10%; HPMS N/A
  o Longitudinal grade: CTDOT 99% ≤ 16%; HPMS N/A
  o Begin/end locations: CTDOT 99% with < 1 mismatched segment per 10 mi (16.1 km); HPMS N/A
  o Pavement images: CTDOT 99% with < 1 missing image per 0.062 mi (0.01 km)

Maryland (Maryland DOT 2018)
• IRI
o Completeness > 85%
o Speed check > 35 mph (56 km/h)
o Settings check as expected
o Flag if IRI(measured) − 0.21 × IRI(calculated) < 0
• LCMS™ (100% review)
o Image of acceptable quality
o Reasonableness of crack length (asphalt pavements): crack has minimal zero values; lane width > 0; crack detection > 50% of length
• Transverse profile and rut (100% review)
o Visual inspection of graphs and longitudinal plots
• IRI change from speed adjustment (100% review)
o If < 8% of unadjusted IRI value, original value reported
o If speed > 15 mph (24 km/h) and change > 8% of original IRI, report adjusted IRI
• Concrete pavements (5% manual check)
o Surface and distress type
o Missing data
• HPMS (100% review); flag for evaluation
o Missing data
o > 1% change in rating groups from previous year
o > 2% change in statewide avg for IRI, cracking percent, rutting, and faulting
o Total lane-mi (km) > 1% change from previous year
o Total lane-mi (km) ± 10 mi (16 km) of previous year
• Images (100% review); flag for evaluation
o Missing ROW or pavement images
o Abnormalities (e.g., lighting, spots)
o Start/end points > 22.2 ft (6.8 m) from GPS coordinate
o Start/end section coordinates > 21 ft (6.4 m) from historical inventory
• Review updated Business Plan
o Total lane-mi (km) < 50 mi (80 km) different from previous year's mileage
o Total treated section length as expected, compared to last year's treated lane-mi (km) and the current year's allocated budget

Minnesota (Minnesota DOT 2018)
• Check pavement type
• 10% segment review
o Manual review of images
o Compare to automated results
• Final checks and data formatting
o Error check (e.g., out of range, mismatched distress, high rut or fault)
• Load into pavement management
o Compare to last year's data for reasonable trend
o Compare overall percent good and poor to last year's data

Illinois (Illinois DOT 2018)
• Image quality
• LRS accuracy
• Correct route collected and reported
• Correct begin/end point
• Correct segment length
• Value date recorded (month and year)
• Acceptance
o IRI, rut, fault, cracking > 90% accuracy (random sample)
o LRS 100% compared to GIS and agency (100% review)
o Images 100% checked for clarity and ratability

Iowa (Iowa DOT 2018)
• Deliver > 98% of collectable miles
• Missing < 500 ft (152 m) consecutive fixed segments
• 100% description items populated and accurate
• > 95% of segments
o IRI ± 10% of agency value
o Fault ± 0.05 in. (1.27 mm) of agency value
o Rut ± 0.05 in. (1.27 mm) of agency value
o Distress ± 10% (manual review of 3D scans/images)

Maine (Maine DOT 2018)
• 10–15% sample
• Review image quality
• Review distress data (field verify if needed)
• Upload into pavement management system
o Data completeness check
o Expected ranges check

Table 31. (Continued).

New York (New York DOT 2018)
• 10% random sample (collected by DOT and compared to vendor results)
o IRI ± 10%
o Rut max ± 0.2 in. (5.1 mm)
o Rut avg ± 0.13 in. (3.3 mm)
o Fault count −1
o Fault sum ± 10%
o Fault avg ± 0.05 in. (1.27 mm)
• Verify segment GPS coordinates
• Distress (image review)
o 5% random sample
o < 5 missed distress types or severities per 0.1-mi (0.16-km) segment
o < 1 missed high-severity distress per 0.1-mi (0.16-km) segment
• Check image quality
• Historical comparison
o Avg IRI ± 10%
o Rut max ± 0.2 in. (5.1 mm)
o Rut avg ± 0.13 in. (3.3 mm)
o Fault count −1
o Fault sum ± 10%
o Fault avg ± 0.05 in. (1.27 mm)
o Area crack (per zone) ± 10%
o Wt. avg crack width ± 20%
o Total crack length ± 10%
o % crack asphalt ± 10%
o % crack concrete ± 10%

North Carolina (North Carolina DOT 2018)
• State routes
o Independent consultant review
o 1.28% random sample
o > 90% of indices or distress within ± 2 std (alligator, transverse, longitudinal, and lane joint cracking, patching, bleeding, concrete patching, corner break, spalling)
• Images
o 5% random sample
o < 5 out of 100 continuous images with inferior quality

North Dakota (North Dakota DOT 2018)
• 2–3% random sample review
o < 20% difference in deduct values
o > 9% deduct difference requires detailed review
o Year-to-year comparison: overall minimum distress score < 12 points from previous year; overall distress score > 6 points from previous year
• 100% review of start/end points for all segments
• 100% review of all images
• Manually assess all segments with a microsurfacing, slurry seal, or chip seal
• Manually assign a patch score to all thin-lift asphalt overlays
• Verify distresses with substantial variations
• IRI, rut, faulting, GPS, grade
o > 95% compliance with standards; weekly verification testing; global database checks for range, consistency, logic, and completeness, and inspection of suspect data
• Distress rating
o > 80% with < 6-point difference, automated to manual rating
• Images
o > 98% compliance with standards; each control section < 5 consecutive images failing to meet criteria (clarity, brightness, no objects on lens)

New Mexico (New Mexico DOT 2018)
• Data completeness
o > 98% of total network miles tested
o 100% accurately populated with description information
o > 98% populated with required data elements
o > 98% with < 10 consecutive missing segments (< 500 ft [152 m])
• IRI, rut, and fault > 95% compliant with requirements
• Distress rating > 95% compliant with requirements
• Location 100% accurate and complete
• Images 100% compliant with requirements (20% random sample)

Oregon (Oregon DOT 2018)
• Route, lane, direction, LRS 100%
• Images: < 5 of 100 consecutive images with inferior quality
• Pavement type 100%
• Data completeness
o Total collection length (excludes inaccessible locations) > 99%
o No blank distress data fields without exclusion code and reason: 100%
o No data outside allowable range: 100%
o Bridge, construction detours, and lane deviations marked correctly: > 90%
• IRI, rut, and fault
o Compliant with control and verification test requirements: 100%
o Data within expected values (from previous survey) > 95%: IRI ± 10%; rut ± 0.10 in. (2.5 mm); fault ± 0.05 in. (1.3 mm)
o Compared to agency DCV: IRI ± 20%; rut ± 0.20 in. (5.1 mm)
• Distress ratings
o Compliant with control site test requirements: 100%
o Interstate > 95%; non-interstate > 90%; all routes < 10% of 0.1-mi (0.16-km) segments rated incorrectly
o Year-to-year comparison: flag changes in good/fair/poor categories, overall index changes > +5 or < −15 points, and windshield rating differences of ± 10 points

Table 31. (Continued).

Pennsylvania (Pennsylvania DOT 2018a)
• 2.5% random sample
• Minimum of 3 agency raters perform at least 2 ratings per site
• Analysis of historical data: plot 3 years of condition data, summed and normalized for all segments in the batch by pavement type; differences are checked and sent to the vendor for review and resubmission as needed
• Image brightness, clarity, focus
• Check the reported location of the images for all interstates and 4 or 5 routes from each county in each batch
• Cross-check pavement surface type with agency maintenance and construction work history
• Check upload of data into Roadway Management System
• Average for each distress and severity on each site is used to evaluate the vendor's results
o IRI: ± 25% of avg agency value; 95% within limits
o Distress: ± 20% of avg agency value; 90% within limits
o Location: correct segment surveyed; 100% within limits
o Section begin: ± 40 ft (12 m) of agency value; 95% within limits
o ROW images: legible signs; 80% within limits

Quebec (Quebec MoT 2017)
• Drift of measurements
o 328-ft (100-m) segments
o IRI, rut, and cracking: 95% of measurements within x̄ ± 3σ
• Comparison of 2 surveys
o Avg annual difference (e) between current year and previous year survey measurements: IRI −0.2 ≤ e ≤ 0.4; rut −1 ≤ e ≤ 2.25; cracking −0.2 ≤ e ≤ 1.38

Tennessee (Tennessee DOT 2018)
• Vendor's quality management report
o Equipment certification
o Collection procedures and protocols
o Personnel training information
o Equipment calibration, checks, maintenance, equipment issues, and corrective actions
o Verification test results
o Data format, sensor data, distress, image, and time-series checks
• HPMS submittal
• Upload into pavement management system

Texas (Texas DOT 2018a)
• IRI and rut: 100%
o Delivery verification testing
o Daily equipment checks
o Weekly verification site testing
• Distress ratings: 90%
o Location and format
o 6% sample tested for distress rating agreement

Utah (Utah DOT 2018)
• Review data and images for completeness
• Data within expected values
• Check data for consistency and compare to historical values

Vermont (Vermont Agency of Trans 2018)
• Data completeness
• Check for invalid condition assessments
• Video/condition assessment correct at first and last mile of each road segment
• Fatigue cracking and miscellaneous cracking ≤ 100%
• 5% random sample of lowest and highest percentile
o Evaluate IRI, rut, and cracking from ROW and pavement images
• Images
o Completeness
o Invalid images
o Alignment and begin/end
o Start/stop measures
o Clarity and brightness (random sample)
• Upload into pavement management
o Evaluate import errors
o Evaluate missing data

Table 31. (Continued).

Virginia (Virginia DOT 2015)
• Bridge start and end locations ± 0.001 mi (0.0016 km)
• LRS: > 90% of landmarks ± 0.01 mi (0.016 km) for sections < 1 mi (1.6 km) and ± 0.05 mi (0.08 km) for sections > 1 mi (1.6 km)
• Initial data screening: < 10% of length failing
o IRI: reject < 0 and > 500 in./mi (32 mm/km); investigate < 30 in./mi (1.9 mm/km) and > 300 in./mi (19 mm/km)
o Rut: reject < 0 and > 2.5 in. (64 mm); investigate > 1 in. (25 mm)
o Speed follows vendor's procedure
• Images: < 5 of 100 continuous images shall have inferior quality
o Downward image: resolution to identify a 0.125-in.- (3.175-mm-) wide crack; illumination to provide sufficient contrast and crack shadows
o Forward image: resolution to identify a 0.125-in. (3.175-mm) wide crack
o ROW image: sufficiently clear to identify roadway assets
o Images and data located ± 5.28 ft (1.61 m) or better
• Distress indices: > 90% of randomly selected sections within ± 10 points

Washington State (Washington State DOT 2018)
• 5% random sample of NHS and 5% random sample of local NHS
o DMI: accuracy ± 3 ft (0.9 m); repeatability COV < 10% (3 runs)
o IRI (annual): accuracy 95% ProVAL; repeatability 92% ProVAL
o Rut (annual): accuracy ± 0.08 in. (2 mm) versus manual survey; repeatability COV < 10% (3 runs)
o Fault (annual): accuracy ± 0.08 in. (2 mm) versus manual survey; repeatability COV < 10% (3 runs)
o IRI (weekly): accuracy moving avg ± 20 in./mi (1.3 mm/km); repeatability std < 8.5 in./mi (0.54 mm/km)
o Rut (weekly): accuracy moving avg ± 0.08 in. (2.0 mm); repeatability std < 0.025 in. (0.6 mm)
o Distress (NHS): accuracy 90% by t-test and R² (2 raters); repeatability N/A
o Distress (local NHS): accuracy > 90% of manual survey; repeatability N/A

Wyoming (Wyoming DOT 2018)
• Quality images
• Correct section (milepost) representation
• Proper reference identification
• Accurate crack detection on concrete pavements and bridge decks
• DMI ± 500 ft (152 m)
• Flag all construction areas
• Plot GPS data to locate anomalies
• Data checks
o IRI: check for zero, null, and > 400 in./mi (25 mm/km) values
o Rut: check for zero and > 0.5 in. (13 mm)
o Re-evaluate HPMS concrete cracking > 70% for > 2.0 mi (3.2 km)
• Distress data checks on segment condition that varies from year to year
o Inter-rater and intra-rater evaluation
o 100% review of all rated roads
o Accuracy target: > 80% on concrete pavements; > 90% on asphalt pavements

West Virginia (West Virginia DOT 2018)
• Weekly control site (up to 10% sample)
o IRI, rutting, faulting, GPS coordinates, % cracking, cross slope, grade, horizontal and vertical curves > 95% compliant
• Database consistency and completeness
o Distress ratings: > 95% compliant
o GPS and location: 100% compliant
• Imagery
o 20% sample check upon delivery
o > 98% compliant

Note: Avg = average; Std = standard deviation; COV = coefficient of variation.
Table 31. (Continued).
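Many of the criteria in Table 31 reduce to the same pattern: compare each segment's measured value against a reference within an absolute or percentage tolerance, then require a minimum percent compliance. The sketch below illustrates that pattern; the thresholds in the example are taken from the Iowa entry, while the data structures and function names are illustrative assumptions rather than any agency's software.

```python
from dataclasses import dataclass

@dataclass
class Tolerance:
    absolute: float | None = None   # e.g., rut within ± 0.05 in. of agency value
    relative: float | None = None   # e.g., IRI within ± 10% of agency value

    def within(self, reference: float, measured: float) -> bool:
        """True if the measured value is within this tolerance of the reference."""
        limit = self.absolute if self.absolute is not None else self.relative * abs(reference)
        return abs(measured - reference) <= limit

def percent_compliant(pairs, tol: Tolerance) -> float:
    """Percent of (reference, measured) pairs meeting the tolerance."""
    hits = sum(tol.within(r, m) for r, m in pairs)
    return 100.0 * hits / len(pairs)

# Example: an Iowa-style check requires > 95% of segments with IRI ± 10% of agency value.
iri_pairs = [(95.0, 98.0), (120.0, 110.0), (80.0, 81.5), (150.0, 149.0)]
accepted = percent_compliant(iri_pairs, Tolerance(relative=0.10)) > 95.0  # True here
```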

In addition, an element-level review of asphalt pavements and JPCP is conducted. An element is defined as 26.4 ft (8 m) for asphalt pavements and a single slab for JPCP. The element-level review serves as a spot check in the event that a more detailed review is warranted. The element-level review also allows for direct comparison of vendor-collected data with the office image quality review (see the section below for details). For asphalt pavements, a manual survey with a maximum length of 0.05 mi (0.08 km) is conducted. The QA field crews measure and record the length of alligator "A" and alligator "B" cracking. Alligator "A" cracking is defined as "a single or double unconnected cracks in the wheel path parallel to the centerline" (Caltrans 2015). For JPCP, the QA field crews record the total number of transverse, longitudinal, and corner cracks (crack count only). The criteria for the element-level review are the same as described above.

Office Review

The Caltrans office review consists of an image quality assessment of the downward-facing images. Images are viewed for clarity, stitching, and synchronization with geographic locations and attributes; the office reviews from 5% to 10% of all submitted images. The office review is conducted on 0.1-mi (0.16-km) segments of up to 20 consecutive images. The amount of cracking is recorded and compared to the vendor's value, which shall be within 10% of the agency value. The vendor's image quality is accepted if 85% or more of the reviewed segments "pass" the criteria. Table 33 illustrates an example of the office review report.

Error Resolution

During agency acceptance, there is a possibility that portions of the collected data (or images) do not meet agency criteria. It is important to establish the actions to be taken should any of the collected data fail to meet the established requirements. Three agency examples are shown in Table 34 through Table 36, which summarize Illinois DOT's, New Mexico DOT's, and Oregon DOT's corrective actions, respectively.

Site Location Information
• Total Segments: Passed / Pass Rating
• Major Rehab: Passed / Pass Rating
• Non-Major Rehab: Passed / Pass Rating
(Blank template; repeated for each reviewed site.)
Table 32. Example of Caltrans QA field review report.

Site A: absolute difference readings (%) 6, 7, 4, 5, 0, 8, 10, 3, 9, 11, 5, 9, 5, 6, 13, 8, 9, 8, 6, 3; acceptable 18/20 = 90%; action: Accept
Site B: absolute difference readings (%) 5, 0, 3, 11, 9, 5, 7, 4, 12, 3, 15, 8, 9, 11, 6, 8, 7, 16, 3, 2; acceptable 15/20 = 75%; action: Reject
Note: Readings with an absolute difference greater than 10% (shown bold and underlined in the original report) fail the criterion.
Table 33. Example of Caltrans office review report.
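The office review tallies in Table 33 can be reproduced with a few lines of code. The following is a minimal sketch, assuming each site's per-image absolute percent differences have already been computed; the 10% reading criterion and the 85% acceptance threshold come from the office review description above.

```python
def office_review(abs_diff_percent: list[float],
                  reading_limit: float = 10.0,
                  accept_rate: float = 85.0) -> tuple[float, str]:
    """Return (% acceptable readings, Accept/Reject) per the Caltrans office review."""
    ok = sum(d <= reading_limit for d in abs_diff_percent)
    pct = 100.0 * ok / len(abs_diff_percent)
    return pct, ("Accept" if pct >= accept_rate else "Reject")

# Site A from Table 33: 18 of 20 readings are within 10% (only 11 and 13 fail).
site_a = [6, 7, 4, 5, 0, 8, 10, 3, 9, 11, 5, 9, 5, 6, 13, 8, 9, 8, 6, 3]
print(office_review(site_a))  # (90.0, 'Accept')
```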

Independent Review

Of the responding agencies, North Carolina DOT and Texas DOT provided documentation related to independent review of vendor-conducted pavement condition surveys. The following sections provide a summary of agency requirements and results for the independent reviews.

North Carolina DOT

North Carolina DOT hired an independent QA consultant to rate, evaluate, and compare pavement distress data collected by the vendor. During the evaluation process for the 2015–2016 data collection cycle, the QA consultant identified the following challenges and resolutions for conducting the independent review (North Carolina DOT 2018):
• Converting multiple severity types per distress into one value. The representative quantity was determined by assigning different weights to severity levels for each distress type (alligator, transverse, longitudinal, and lane joint cracking, patching, and bleeding). The lowest severity was assigned a weighting value of 1, moderate severity a value of 1.5, and high severity a value of 2. Weighting values were multiplied by the total quantity of distress for each severity category and summed to represent each distress quantity.
• Averaging the total quantity of each distress type over the sample length. Using the distress severity weighted quantities, all area-based distress quantities were divided by the maximum sample area. For longitudinal and lane joint cracking, the weighted quantity was divided by the total sample length (5,280 ft [1,609 m]). For transverse cracking, the total weighted length was summed and divided by an assumed full-lane width (12 ft [3.7 m]), resulting in a total number of transverse cracks (maximum 5,280 cracks per mi [3,280 cracks per km]).
• Determining reasonable control limits, by distress type, to minimize the variability between the QA consultant and the vendor ratings. Data from the 2015 survey, consisting of a 1.28% (943 segments) random sample of asphalt pavements on the state highway network, were evaluated. Sample ratings were carefully inspected to ensure accuracy of the distress types, severities, and quantities. A representative quantity and weighted percent were determined for each distress type on each sample for both the QA consultant and vendor ratings. The difference between the weighted percent of each distress type was calculated for all samples, and the standard deviation was used to create a unique set of control limits for each distress type. Line-of-equality graphs were plotted to assist in identifying the presence of any bias and trends in over- or underrating distresses.

Predeployment, daily start-up, and monthly testing (performed by vendor)
• Product: IRI, rut, faulting, DMI, image quality, LCMS™ crack identification
• QA/QC by agency: certification letter provided by vendor at start and monthly; 100% review and set baseline values; monthly collection compared to baseline
• Error resolution: production survey cannot proceed
Post collection
• IRI: compared to historical data; random sample review; test section ± 10% of initial collection; acceptance: pass reasonability check of random sample; resolution: reprocess data; re-collection at discretion of agency
• Rut: compared to historical data; random sample review; test section ± 0.08 in. (2 mm) of initial collection; acceptance: pass reasonability check of random sample; resolution: reprocess data; re-collection at discretion of agency
• Faulting: compared to historical data; random sample review; acceptance: pass reasonability check of random sample; resolution: reprocess data; re-collection at discretion of agency
• Cracking (CRCP, JRCP, and asphalt): random sample review; acceptance: pass reasonability check of random sample; resolution: reprocess data; re-collection at discretion of agency
• LRS: 100% review; compared to GIS; begin/end station compared to Illinois Roadway Inventory System; acceptance: 100%; resolution: contact vendor for LRS reprocessing and alignment; re-collection at discretion of agency
• Images: quality check for clarity and ratability; acceptance: 100%; resolution: contact vendor for image enhancement solution; re-collection at discretion of agency
Table 34. Illinois DOT corrective action (Illinois DOT 2018).

Data completeness
• > 98% of total network miles (excludes areas closed due to construction); if not met, return deliverable for re-collection
• 100% of delivered data accurately populated with description information (system, route, direction, and begin and end latitude/longitude); if not met, return deliverable for correction
• > 98% of delivered data accurately populated with required data elements, excluding areas with expected limitations (e.g., IRI in low-speed areas); if not met, return deliverable for correction
• > 98% of delivered data with < 10 consecutive fixed missing segments (500 ft [152 m] total); if not met, return deliverable for correction
IRI, rut depth, and faulting
• > 95% compliant with the verification testing requirements; if not met, return deliverable for re-collection
Distress ratings
• > 95% compliant with the verification testing requirements; if not met, return deliverable for re-collection
Route no., direction, begin/end, GPS coordinates, district, and date collected
• 100% database check of accuracy and completeness; if not met, return deliverable for correction
Photolog and pavement images
• 100%; 20% random sample compliant with verification requirements; if not met, return deliverable for re-collection
Table 35. New Mexico DOT corrective action (New Mexico DOT 2018).
Figure 22 and Figure 23 illustrate example equality plots of vendor versus QA consultant assessment for alligator and transverse cracking, respectively.
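The severity-weighting and normalization steps described in the North Carolina bullets above are mechanical enough to sketch in code. The weights (1, 1.5, 2), the 5,280-ft sample length, and the 12-ft assumed lane width come from the consultant's description; the function and field names are illustrative assumptions rather than the consultant's actual implementation.

```python
SEVERITY_WEIGHT = {"low": 1.0, "moderate": 1.5, "high": 2.0}

def weighted_quantity(qty_by_severity: dict) -> float:
    """Collapse per-severity quantities for one distress into a single value."""
    return sum(SEVERITY_WEIGHT[sev] * qty for sev, qty in qty_by_severity.items())

def normalized_transverse(qty_by_severity: dict, lane_width_ft: float = 12.0) -> float:
    """Convert weighted transverse crack length (ft) to an equivalent crack count,
    dividing by an assumed full-lane width of 12 ft."""
    return weighted_quantity(qty_by_severity) / lane_width_ft

def normalized_longitudinal(qty_by_severity: dict, sample_length_ft: float = 5280.0) -> float:
    """Normalize weighted longitudinal/lane-joint crack length by the sample length."""
    return weighted_quantity(qty_by_severity) / sample_length_ft

# Example: 100 ft low, 50 ft moderate, and 10 ft high-severity transverse cracking
# gives (100 + 75 + 20) / 12 = 16.25 equivalent transverse cracks.
print(normalized_transverse({"low": 100, "moderate": 50, "high": 10}))
```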

Route, lane, direction, LRS: 100%
• Check: review previous week's images for correct location information
• If criteria not met: reject deliverable and re-collect route
Images: < 5 consecutive images with inferior quality
• Check: review previous week's images for coverage and quality (lighting, exposure, obstructions, focus)
• If criteria not met: reject deliverable and re-collect route
Pavement type: 100%
• Check: compare to provided type; < 2 incorrect segments per 10
• If criteria not met: resolve all discrepancies prior to final distress rating
Data completeness (by district)
• 99% total collection miles; if not met, reject deliverable and re-collect route
• 100% no blank fields without exclusion code and reason; if not met, return deliverable for correction
• 100% no data outside the allowable ranges
• 90% bridge events, construction detours, and lane deviations marked correctly
Sensor data: IRI, rut, and faulting (by district)
• 100% compliant with control site and verification testing requirements; if not met, reject all data since last passing verification, re-calibrate DCV, and re-collect affected routes
• 95% of data within expected values based on year-to-year time series (IRI ± 10%; rut ± 0.10 in. [2.5 mm]; fault ± 0.05 in. [1.3 mm]); if not met, flag discrepancies and investigate; re-collect if wet weather or traffic congestion create issues that can reasonably be avoided; accept data on a case-by-case basis if differences are due to construction, maintenance, or deterioration more than expected, or where data appear reasonable based on visual observation of the road surface
• 95% compared with ODOT's DCV on a sample of routes (IRI ± 20%; rut ± 0.20 in. [5.1 mm]); if not met, flag discrepancies and investigate; approve data on a case-by-case basis if differences can be reasonably explained; when significant differences exist and the cause cannot be reasonably determined, verify calibrations for all DCVs, review data for systematic errors, and re-collect if equipment issues are found
Distress ratings (by district)
• 100% compliant with control-site testing requirements; if not met, return deliverable for reevaluation
• Interstate 95%; non-Interstate 90%; all routes < 10% of 0.1-mi (0.16-km) segments rated incorrectly
• Compare current year versus previous year (considering recent construction and maintenance) and flag good/fair/poor category changes and sections where the current year overall index difference exceeds +5 or −15 points; compare overall index with windshield rating and flag sections with a ± 10-point difference; if criteria not met, flag discrepancies and investigate; compare distress quantities and review severities; check that distresses are within lane limits; check distress length and area measurements marked on pavement images and summarized in the shell table; report incorrect distress ratings and return deliverable for correction; accept the data if the current year distress ratings appear valid, regardless of the previous year's ratings
Table 36. Oregon DOT corrective action (Oregon DOT 2018).
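Oregon's year-to-year screening in Table 36 (flagging good/fair/poor category changes and overall index swings beyond +5 or −15 points) is a simple rule set to automate. A minimal sketch follows; the good/fair/poor breakpoints and the record layout are illustrative assumptions, not ODOT's definitions.

```python
def condition_category(index: float) -> str:
    """Map an overall index to good/fair/poor. Breakpoints are assumed for illustration."""
    if index >= 80:
        return "good"
    if index >= 60:
        return "fair"
    return "poor"

def flag_section(prev_index: float, curr_index: float) -> list[str]:
    """Flag a section per the Oregon-style year-to-year comparison in Table 36."""
    flags = []
    if condition_category(prev_index) != condition_category(curr_index):
        flags.append("good/fair/poor category change")
    diff = curr_index - prev_index
    if diff > 5 or diff < -15:
        flags.append(f"overall index change {diff:+.1f} outside +5/-15")
    return flags

# Example: a drop from 82 to 64 triggers both flags.
print(flag_section(82.0, 64.0))
```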

Figure 22. Line of equality plot for alligator cracking data: vendor versus independent QA alligator cracking percent, with lower and upper control limits, line of equality, and linear regression (y = 1.1097x + 0.853, R² = 0.8538) (adapted from North Carolina DOT 2018).

Figure 23. Line of equality plot for transverse cracking data: vendor versus independent QA transverse cracking percent, with lower and upper control limits, line of equality, and linear regression (y = 1.3243x + 0.1165, R² = 0.842) (adapted from North Carolina DOT 2018).

As shown in Figure 22, there may be a difference in ratings when the vendor noted alligator cracking percentages greater than 50%; however, additional sampling may be warranted to confirm this trend (North Carolina DOT 2018). As shown in Figure 23, there appears to be a positive bias in which the vendor identified more, and higher-severity, transverse cracking (North Carolina DOT 2018).

Texas DOT

Texas DOT conducts a statewide audit of the vendor pavement condition survey (Texas DOT 2018b). The audit includes the identification of a 6% sample of pavement segments, representing each county's centerline length and pavement type (asphalt pavement, JPCP, CRCP). Since some counties do not construct all three pavement types, the 6% sample is based on the available pavement types. Each sampled segment is 2 mi (3.2 km) long and has a previous fiscal year rated distress score between 60 and 90 (score range of 0 to 100). Once each sample segment has been manually rated by Texas DOT, the distress score is determined and compared to the vendor-collected data. If discrepancies exist between the manual and vendor results, Texas DOT will first review the vendor images and determine whether a follow-up site visit is required. Acceptance criteria for the annual audit are the same as those used during the acceptance of the vendor survey (see Table 31).

Data Integration, Storage, and Retention Requirements

Data integration, storage, and retention requirements pose challenges for automated data collection because of the volume of data and images produced (e.g., images and distress data can be collected every 26 ft [7.9 m]). The following sections provide a summary of agency responses regarding data integration, data and image storage requirements, and data and image retention schedules.

Integration

Data integration is the "process of combining or linking two or more data sets from different sources to facilitate data sharing, promote effective data gathering and analysis, and support overall information management activities in an organization" (FHWA 2010). For example, pavement management systems typically require data from LRS (or GIS), traffic, construction history, maintenance and rehabilitation activities, and pavement condition, all or some of which may be managed by different offices within a given agency. Table 37 provides a summary of agency responses describing their pavement condition data integration processes, issues, and resolutions.

The data integration process varies by agency, but in general, many agencies provide a spatial file for data collection, and it is returned by the data collection team with populated data and images for incorporation into the pavement management system (with or without additional processing). Agencies reported a number of challenges with the automated data collection process, including the following:

• Matching LRS locations (four agencies);
• Software formats and systems (two agencies);
• Comparison of manual and automated cracking results not matching (one agency);
• Information technology support (one agency);
• Image storage (one agency);
• Data consistency (one agency);
• New algorithms and verification protocols required, with impacts on performance metrics and the strategy decision process (one agency);
• Changing technologies (one agency); and
• LiDAR system to locate assets, which changed the project scope (one agency).

Arizona
• Integration process: agency provides spatial file; vendor delivers data in GIS format; data stored in SQL database
• Data issues: 2017 comparison of manual and automated cracking did not match; challenges with converting from milepost to measurement system
• Issue resolution: compare manual to automated and, if no correlation, use recent automated data; training and revising GIS database to accommodate new data

British Columbia
• Integration process: load data directly into pavement management system
• Data issues: matching with other agency referencing systems
• Issue resolution: use GPS coordinates to match data

California
• Integration process: GIS to develop an interactive map
• Data issues: information technology support
• Issue resolution: hire a GIS expert

Connecticut
• Integration process: working on integrating LRS into pavement management database
• Data issues: 2 LRSs make it very labor intensive to migrate data into the pavement management system
• Issue resolution: not provided

Georgia
• Integration process: not fully developed at this time
• Data issues: locating segments; software format, LRS, and network change propagation
• Issue resolution: standardize segmentation; address through software developers

Illinois
• Integration process: LRS joined with other collected data; linked to a roadway database and a structure database
• Data issues: changes made to the LRS create data alignment problems
• Issue resolution: reduce the number of changes to the LRS; coordinate and document so staff can adjust the data based on LRS changes

New Hampshire
• Integration process: joined through HPMS coordinator
• Data issues: numerous and subtle; one was data consistency
• Issue resolution: snapshotting data

North Dakota
• Integration process: data exported to mainframe and averaged per segment
• Data issues: mainframe system in need of replacement
• Issue resolution: looking into software solutions to possibly replace the mainframe system

Oklahoma
• Integration process: vendor submits data; agency checks integrity, conducts database queries, and reviews manual survey
• Data issues: updating antiquated proprietary software; staffing with qualified and experienced personnel
• Issue resolution: working through issues with vendor and agency personnel

Ontario
• Integration process: LCMS™ data processed and aggregated; export to internally developed software to determine pavement metrics and indices; import to pavement management software; pavement management for additional analysis
• Data issues: large volume of data and differences in metrics required development of new algorithms and verification protocols; image storage; revisions required on how new metrics contribute to performance metrics and the strategy decision process
• Issue resolution: recruited staff with big data experience to develop solutions for handling the data set; redeveloped asset management system to integrate the new data set and used artificial intelligence for distress categorization and quantification of LCMS™ data

Pennsylvania
• Integration process: condition data delivered by flat file; data summarized by roadway segments; load into Roadway Management System
• Data issues: Roadway Management System is an old mainframe dating back to the mid-1980s
• Issue resolution: funding for a new system; many of the agency systems "don't speak the same language"

Wyoming
• Integration process: import data into pavement management system
• Data issues: overlapping sections due to equations and phantom "over-runs"
• Issue resolution: manually fix the equation problem; the phantom "over-run" sections are deleted

Utah
• Integration process: not provided
• Data issues: changing technologies; LiDAR to locate "assets" and lost control of pavement condition collection process
• Issue resolution: not provided

Table 37. Agency pavement condition data integration process, issues, and resolutions.

Storage

As previously discussed, data and images collected from the pavement condition survey can quickly amount to terabytes of storage. Therefore, agencies were asked to report the types of data (and images) stored, and their formats, from the pavement condition survey (Table 38). Based on the results from the 16 responding agencies, the information stored includes images (16 agencies), raw data (14 agencies), condition indices (10 agencies), 0.1-mi (0.16-km) data (2 agencies), and correspondence and sign/striping inventories (1 agency each). Data are stored in a database (7 agencies), a database and spreadsheets (3 agencies), and a database and TXT format or a database and native format (1 agency each); images are stored in JPEG format (6 agencies); and data and images are accessed via a vendor-hosted site (3 agencies).

Retention Schedule

A retention schedule documents the type of information and the length of time the information is retained. Table 39 provides a summary of agency responses related to data and image retention schedules.

Of the 16 responding agencies, 5 agencies retain data and images indefinitely, and 3 agencies retain only the data indefinitely. Two agencies retain all data and images for 4 years. Two agencies retain all data and images for 10 or more years, while one agency retains all data for only 10 years. One agency retains all data and images for 20 years and condition ratings indefinitely. One agency's retention schedule plan is to retain all data for 30 or more years.

Costs of Data Collection, Processing, Quality Control, and Acceptance

Agencies were asked to provide costs associated with data collection, processing, QC, and acceptance. However, separating costs according to activity was difficult (or not possible) for the responding agencies. For example, the vendor contracting process may be based on a lump sum rather than line items, making it difficult to associate costs with individual activities. In addition, agencies typically do not track employee hours by specific task, making it difficult to associate the level of effort with each activity.

Another challenge for vendor-based surveys is comparing costs based on economy of scale (i.e., potential cost savings with increased network length). Conceivably, agencies with smaller pavement networks would incur higher per-mile (per-kilometer) costs than agencies with larger networks. However, the duration of data collection may also affect costs. It is also challenging to compare agency- and vendor-conducted surveys, since vendor costs reflect all costs associated with data collection and analysis (e.g., building costs, computers, equipment costs, employee benefits). Finally, agencies require distress analysis that is semiautomated, fully automated, or a combination of both, and not all agencies require assessment of the same distress types. Agencies requiring more semiautomated analysis may incur higher costs than those requiring primarily fully automated analysis. Therefore, direct comparison of costs should be done cautiously.

Arizona
• Information stored: photo and video log, LCMS™ images; raw distress data; good-fair-poor rating; sign and striping inventory
• Format: data in SQL database; photo, video log, and LCMS™ images viewable online from vendor-hosted site

British Columbia
• Information stored: raw data; images
• Format: Oracle database; photolog application

California
• Information stored: images; elemental data (26.4 ft [8.0 m]); condition data and indices
• Format: database

Connecticut
• Information stored: raw data; indices; images
• Format: database for indices and condition data; images in JPEG; Excel, Access, and 2 databases

Georgia
• Information stored: video images; road surface and raw sensor data; sensor data on 0.1-mi (0.16-km) segments; 3D cracking data being evaluated
• Format: video in JPEG format; sensor readings in database

Illinois
• Information stored: all data; all images
• Format: sensor data stored in tabular format; images in JPG

Kansas
• Information stored: images; profile data; processed data; summary indices
• Format: all collected data and processed indices stored in native format; summarized data stored in a database

New Hampshire
• Information stored: all collected data and images (approximately 25 MB/year)
• Format: database

North Dakota
• Information stored: images; raw data until processed
• Format: images in JPG format; data in TXT format

Oklahoma
• Information stored: images; tabular and testing data; correspondence
• Format: JPEG; database and spreadsheets; PDF

Ontario
• Information stored: raw data (approximately 17 TB/year); pavement distress data; pavement performance metrics; images
• Format: stored in a database that interfaces with various software to extract, display, and report information in various formats

Oregon
• Information stored: 0.1-mi (0.16-km) sensor and distress data; images; raw data
• Format: database; raw data and images stored on USB drives

Pennsylvania
• Information stored: images; pavement indices
• Format: images in JPG format; condition data in text and database files

Rhode Island
• Information stored: distress data; pavement condition scores; images
• Format: distress data in Access; condition scores in Access or Excel; images stored on hard drives

Wyoming
• Information stored: images; raw data
• Format: vendor website

Utah
• Information stored: data files (< 1 TB); forward images (4–8 TB); pavement images (4–8 TB)
• Format: shape files and spreadsheets in 0.1-mi (0.16-km) segments

Table 38. Agency data and image storage.

Arizona: all data (since 2014); retained indefinitely
British Columbia: all data; retained indefinitely
California: all data and images; 10 or more years
Connecticut: all data (since 2001); indefinitely, currently under review
Georgia: all data and images; 10 years for data, images to be determined
Illinois: tabular sensor data and images (since 2007) for 20 years; condition rating results (IRI, rutting, faulting, identified distress, and condition rating) indefinitely
Kansas: all data and images; indefinitely
New Hampshire: all processed data and images (since 2009); all images and processed data indefinitely, high-resolution images stored separately
North Dakota: all images and processed data; 4 years
Oklahoma: control and verification site testing data, video log, reports, database, correspondence, etc.; 4 years for video log (older images stored on external drives and archived), all other data indefinitely
Ontario: data (last 6 years) on external drives; network (with mirror redundancy) hosts previous years' processed data and current year field data; > 30 years (planned)
Oregon: 0.1-mi (0.16-km) sensor and distress data, images, and raw data; indefinitely
Pennsylvania: all data and images (since 1997); indefinitely
Rhode Island: all distress data and pavement condition scores; indefinitely
Utah: all images; forward images indefinitely, pavement images until new ones are collected, marked-up pavement images kept indefinitely for comparison
Wyoming: all data; 10 years
Table 39. Agency data and image retention schedule.

Agencies were asked to provide costs for conducting pavement condition surveys. Since only one agency was able to provide estimated costs for acceptance testing, the costs summarized in Table 40 exclude agency acceptance costs. Table 40 is arranged according to network length (small to extra large); cost per mile (kilometer); whether distress is assessed using semiautomated analysis, fully automated analysis, or a combination of both; whether the agency or the vendor collects and analyzes the data and images; and the distress types collected and analyzed. The last column of Table 40 represents only distress data, since sensor-based data (e.g., IRI, faulting, rutting) are collected and analyzed automatically by the majority of agencies.

As shown in Table 40, one of the responding agencies conducts fully automated analysis on asphalt pavements only and on a small network length for $43/mi ($27/km). Four of the responding agencies conduct semiautomated analysis, with costs ranging from $34 to $101/mi ($21 to $63/km). Semiautomated costs, in general, show a potential effect of economy of scale, where larger networks result in lower costs and shorter networks result in higher costs. For agencies that require the vendor to conduct both semi- and fully automated distress assessments, costs range from $28 to $115/mi ($17 to $71/km). As with semiautomated analysis, a longer network length, in general, corresponds to a lower cost than shorter network lengths.

Network length¹; cost per mi (km); semi-, fully, or both automated; collects/analyzes²; distress types collected and analyzed³:
• Medium; $199 ($165); fully automated; agency; cracking and potholes (asphalt pavements); cracking, durability, joint seal damage, and broken slabs (JPCP)
• Small; $159 ($99); semiautomated; agency; cracking (asphalt pavements)
• Large; $82 ($51); semiautomated; vendor/agency; cracking, potholes, and durability (asphalt pavement, JPCP, and CRCP)
• Extra large; $34 ($21); semiautomated; vendor/agency; cracking (asphalt pavement, JPCP, and CRCP)
• Extra large; $50 ($31)⁴; semiautomated; vendor/agency; cracking, surface characteristics, and punchouts (asphalt pavement, JPCP, and CRCP)
• Extra large; $58 ($36); both; vendor/agency; full: potholes and raveling (asphalt pavement), durability, cracking, and blowups (JPCP, CRCP); semi: faulting, polishing, broken slabs (JPCP), blowups, durability, punchouts, spalling (CRCP), texture, and patching (JPCP, CRCP)
• Small; $115 ($71); both; vendor; full: alligator, block, edge, longitudinal, and transverse cracking (asphalt pavement only); semi: bleeding, patching, and potholes (asphalt pavement only)
• Extra large; $101 ($63); semiautomated; vendor; cracking, potholes, punchouts (asphalt pavement, JPCP, and CRCP)
• Extra large; $76 ($47); semiautomated; vendor; cracking, patching, raveling, weathering, and joint deterioration (asphalt pavements); cracking, joint seal damage, patching, and spalling (JPCP)
• Small; $75 ($47); both; vendor; full: cracking (asphalt, JPCP, and CRCP); semi: bleeding, patching, potholes, raveling (asphalt pavement), cracking, faulting, joint seal damage, patching, spalling, and shattered slabs (JPCP), and patching, punchouts, and lane/shoulder condition (CRCP)
• Medium; $65 ($40); both; vendor; full: cracking (asphalt pavement and JPCP); semi: bleeding, patching, potholes, raveling, spalling (asphalt pavement and JPCP)
• Medium; $43 ($27); fully automated; vendor; cracking and bleeding (asphalt pavement)
• Large; $28 ($17); both; vendor; full: cracking, potholes, and raveling (asphalt pavement); semi: cracking, durability, and polishing (JPCP)
1 Small < 5,000 mi (8,000 km); medium 5,000–10,000 mi (8,000–16,000 km); large 10,000–15,000 mi (16,000–24,000 km); and extra large ≥ 15,000 mi (24,000 km).
2 "Agency" = agency collects and analyzes data; "vendor" = vendor collects and analyzes data; "vendor/agency" = vendor collects data and agency analyzes data.
3 Excludes fully automated sensor data and includes only general distress categories (i.e., cracking includes multiple crack types).
4 Includes only data collection costs.
Table 40. Summary of pavement condition survey costs.

Accomplishments and Challenges of Automated Condition Surveys

As a follow-up to the survey, agencies were asked to provide information related to their successes and challenges with automated condition surveys. The following provides a summary of responses.

Accomplishments
• Safer, faster, and more efficient and consistent pavement condition data collection compared to manual surveys.
• Automated crack detection performs to the agencies' satisfaction in identifying crack type and severity.
• Pavement condition data and images can be used by various agency users, for example, extracting guardrail asset data from photo logs and using automated crack detection results to evaluate the performance of different pavement treatment types.
• Information from the automated pavement condition survey has been one of the greatest tools for assisting with identifying projects for the Statewide Transportation Improvement Program.

Challenges
• Determining data quality tolerances that are reasonable in relation to equipment capabilities and pavement management requirements.
• Identifying a method to accurately determine ground truth values for pavement distress (e.g., cracking) that is efficient and less labor intensive.
• Quantifying pavement distress (e.g., cracking, patching, potholes, punchouts, raveling, bleeding, and joint condition) is difficult without a standardized method.
• Measuring consistent rut depth using different data collection equipment.
• Developing protocols and distress detection algorithms for new data sets and performance metrics.
• Assembling all the required resources, from data collection to delivery.
• Generating meaningful reports, performance trends, and project assessments from the processed data.
• Maintaining consistent distress ratings year to year and vendor to vendor.

Next: Chapter 5 - Case Examples »