Automated Pavement Condition Surveys

CHAPTER 4. Summary of Agency Data Quality Procedures

The timing of this synthesis provided an opportunity to obtain and summarize SHA DQMPs prepared in response to the federal requirements. As previously discussed, 23 CFR 490 required agencies to develop and submit DQMPs to the FHWA by May 18, 2018 (Code of Federal Regulations 2017). During the follow-up questions, agencies were asked to provide their DQMPs, most of which were received by June 30, 2018. It should be noted that some of the agency DQMPs may not have received FHWA approval at the time of the follow-up interview. In total, 29 SHAs provided their DQMPs, and four Canadian provinces (Alberta, British Columbia, Quebec, and Saskatchewan) provided similar documentation. Figure 19 illustrates the agency DQMPs (or other quality management documents) received and summarized in this chapter. The following sections summarize the various components of the DQMPs, including standards and protocols, condition and distress types included in the plan, QC requirements, control-site, verification-site, and blind-site requirements, and acceptance requirements.

Data Quality Process

The data quality process is how the agency determines the quality of the collected and submitted condition data. This process can span the entire condition survey, from predeployment to acceptance. Figure 20 illustrates the quality process for Virginia DOT. The process starts with control sites, where the collected data must meet certain criteria before full-production data collection can begin. After data processing, the vendor must perform and pass an internal QA check. If the data fail, they are reprocessed; if they pass, the data are then subjected to an independent review of about 5% of the total data. Next, the data are analyzed by the agency for final acceptance. At any point, if the data fail to meet the quality-check criteria, they are returned to the vendor for reprocessing. Once all of the data have been accepted, they are loaded into the various applicable databases.
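The gated accept-or-reprocess pattern described above can be expressed as a simple loop. The following is a minimal sketch; the check function, the reprocessing stand-in, and the record layout are hypothetical illustrations, not Virginia DOT's actual implementation.

```python
def run_quality_pipeline(sections, qa_check, reprocess, max_cycles=3):
    """Accept-or-reprocess loop: sections failing the QA check are sent
    back (to the vendor, in the process described above) for reprocessing;
    passing sections move on toward acceptance."""
    accepted = []
    pending = list(sections)
    for _ in range(max_cycles):
        if not pending:
            break
        still_failing = []
        for section in pending:
            if qa_check(section):
                accepted.append(section)
            else:
                still_failing.append(reprocess(section))
        pending = still_failing
    return accepted, pending  # pending = unresolved after the cycle limit

# Hypothetical check: IRI must be positive and below an upper bound.
def iri_in_bounds(section):
    return 0 < section["iri"] <= 400

# Stand-in for vendor reprocessing (here, just repairing a sign error).
def fix_sign(section):
    return {**section, "iri": abs(section["iri"])}

data = [{"id": 1, "iri": 95.0}, {"id": 2, "iri": -80.0}, {"id": 3, "iri": 120.0}]
accepted, unresolved = run_quality_pipeline(data, iri_in_bounds, fix_sign)
print(len(accepted), len(unresolved))  # all 3 accepted after one reprocessing cycle
```

The cycle limit mirrors the practical point that data cannot loop between agency and vendor indefinitely; anything still failing after the limit needs manual resolution.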
Figure 19. Data quality plans received from SHAs.

Figure 20. Virginia DOT quality process flow diagram (Flintsch and McGhee 2009, as adapted by Shekharan et al. 2007).

Figure 21 shows the data quality process for Illinois DOT. The process is similar to the one used by Virginia DOT in that the DCV(s) must meet certain criteria before full-production data collection can begin. The collected data are processed by the vendor, at which point the data and images must meet the contract specifications for data quality. These data are then broken down into 0.1-mi (0.16-km) segments and analyzed for completeness and consistency. Data not meeting these checks are returned to the vendor to fix and resubmit. After all data have been accepted, they are uploaded into the Illinois Roadway Information System and reported to HPMS.

Note: PM2 = Performance Measure Number 2 (FHWA designation); IRIS = Illinois Roadway Information System.
Figure 21. Illinois DOT quality process flow diagram (Illinois DOT 2018).

Standards and Protocols

Table 24 summarizes the data collection standards and protocols required in the agencies' DQMPs. Table 24 is organized by data category (e.g., condition manual, equipment, sensor measurements), standard or protocol, and description of the standard or protocol.

Condition and Distress Types

The FAST Act requires agencies to include quality management procedures for IRI, rut depth, faulting, and cracking (see Table 5); however, a number of agencies have applied these practices to other agency-collected distress types. Tables 25 through 27 list the distress types included in agency-provided DQMPs for asphalt pavements, JPCP, and CRCP, respectively. For asphalt pavements, the majority of agencies include percent cracking (FAST Act reporting), alligator cracking, longitudinal cracking, transverse cracking, block cracking, patching, potholes, and raveling in their DQMPs. For JPCP, the majority of agencies include cracked slabs, transverse cracking, longitudinal cracking, corner cracking, patching, multiple-crack slabs (broken or shattered slabs), and joint spalling. For CRCP, the majority of agencies include longitudinal cracking, transverse cracking, punchouts (count), and patching.
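For the FAST Act cracking metrics, the reported quantities reduce to simple ratios: percent cracked slabs for JPCP and percent cracked area for asphalt and CRCP. The following sketch uses those common definitions; individual agency DQMPs may define the numerator (e.g., which crack types count) differently.

```python
def percent_cracked_slabs(cracked_slabs, total_slabs):
    """JPCP: percentage of slabs in the section exhibiting cracking."""
    return 100.0 * cracked_slabs / total_slabs

def percent_cracked_area(cracked_area_sqft, section_area_sqft):
    """Asphalt/CRCP: percentage of section area exhibiting cracking
    (illustrative; the qualifying area is agency-defined)."""
    return 100.0 * cracked_area_sqft / section_area_sqft

print(percent_cracked_slabs(7, 35))                       # 20.0
print(round(percent_cracked_area(950.0, 6336.0), 1))      # 15.0
```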
Automated Pavement Condition Surveys

Table 24. Agency data collection standards and protocols (total agencies = 57).

Category | Standard/Protocol | Description | Number of Agencies
Condition manual | HPMS Field Manual (FHWA 2016) | Standards for condition assessment on NHS roadways | 24
Condition manual | Agency manuals | Agency-specific distress identification manual | 14
Condition manual | LTPP Manual (Miller and Bellinger 2014) | Pavement distress rating manual for the LTPP program | 6
Condition manual | ASTM D6433 (2018c) | Determine Pavement Condition Index from visual condition surveys | 1
Profile equipment | AASHTO R 56 (2018c) | Longitudinal profile equipment | 22
Profile equipment | AASHTO M 328 (2010a, 2018a) | Hardware and software for inertial profilers | 18
Profile equipment | AASHTO R 57 (2018b) | Operating and calibrating inertial profiler | 17
Faulting | AASHTO R 36 (2013c) | Method for quantifying faulting | 18
Roughness | AASHTO R 43 (2017c) | Method for quantifying IRI | 17
Roughness | ASTM E1926 (2015a) | Method for quantifying IRI | 4
Roughness | AASHTO PP 37 (2004) | Method for quantifying IRI | 2
Roughness | ASTM E1489 (2013a) | Method for quantifying ride number | 1
Measuring profile | AASHTO PP 70 (2010b, 2017a) | Collecting transverse profile (automated) | 16
Measuring profile | ASTM E950 (2018a) | Measuring and recording profile using an inertial profiler | 15
Measuring profile | ASTM E1656 (2016) | Collect profiles and cracking at posted speed | 4
Measuring profile | ASTM E2133 (2013b) | Measure profile using walking profiler | 1
Rutting/Deformation | AASHTO R 48 (2013a) | Method for quantifying rut depth (> five-point system) | 12
Rutting/Deformation | ASTM E1703 (2015b) | Method for quantifying rut depth with a straightedge | 3
Rutting/Deformation | AASHTO PP 38 (2005) | Method for quantifying rut depth (> five-point system) | 2
Rutting/Deformation | AASHTO PP 69 (2010c, 2017d) | Method for quantifying deformation parameters | 13
Asphalt cracking | AASHTO R 55 (2013b) | Method for quantifying cracking (manual or automated) | 8
Asphalt cracking | AASHTO PP 67 (2010d, 2014, 2017f) | Method for quantifying cracking (automated) | 6
Images | AASHTO PP 68 (2010e, 2017e) | Collect surface images (automated) | 6
Macrotexture | ASTM E1845 (2015c) | Calculate macrotexture MPD | 2
Precision and bias | ASTM C670 (2015d) | Develop precision and bias statements | 1
Precision and bias | ASTM C802 (2014a) | Determine test-method precision | 1
Table 25. Agency asphalt pavement distress types included in DQMP (total agencies = 25).
Agencies: Alaska, Alberta, Arkansas, British Columbia, California, Connecticut, Delaware, Illinois, Maryland, Minnesota, New Hampshire, New Mexico, New York, North Carolina, North Dakota, Oregon, Pennsylvania, Quebec, Saskatchewan, Tennessee, Texas, Utah, Vermont, Washington, West Virginia.
Column totals (number of agencies per distress type): percent cracking (HPMS) 15, PSR 1, alligator cracking 18, longitudinal cracking 19, transverse cracking 19, block cracking 9, miscellaneous cracking 5, edge cracking 5, longitudinal joint cracking 5, shoulder cracking 1, dips/bumps 2, bleeding 8, patching 12, potholes 8, porosity 1, raveling 8.
[Per-agency entries of this table did not survive the machine reading; in the original, a dash indicates N/A.]
Table 26. Agency JPCP distress types included in DQMP (total agencies = 17).
Agencies: Arkansas, California, Delaware, Illinois, Maryland, Minnesota, New Mexico, New York, North Carolina, North Dakota, Oregon, Pennsylvania, Tennessee, Texas, Utah, Washington, West Virginia.
Column totals (number of agencies per distress type): cracked slabs (HPMS) 11, transverse cracking 11, longitudinal cracking 11, corner cracking 8, ASR/D-cracking 2, joint seal damage 3, patching 10, multiple cracking 8, joint spalling 9.
[Per-agency entries of this table did not survive the machine reading; in the original, a dash indicates N/A.]

Table 27. Agency CRCP distress types included in DQMP (total agencies = 6).
Agencies: Arkansas, California, New Mexico, North Dakota, Oregon, Texas.
Column totals (number of agencies per distress type): cracking (HPMS) 1, longitudinal cracking 5, transverse cracking 3, punchouts 5, patching 5, pumping 1, multiple cracking 1, spalling 2.
[Per-agency entries of this table did not survive the machine reading; in the original, a dash indicates N/A.]

Control, Verification, and Blind Site Testing

Control, verification, and blind site testing are used to monitor and ensure the quality of the collected pavement condition data before and during data collection. Control site testing is conducted by the agency before production testing to certify and calibrate data collection equipment and to verify that it meets the agency-specified quality standards. This testing is often used to establish reference values (ground truth) for the condition and distress types collected by the agency. Control sites are typically located in the vicinity of the SHA office responsible for pavement condition data collection. They are representative of network pavement condition and typically remain the same from year to year until major rehabilitation is performed, in which case they are removed and replaced with a different control site location. Verification sites are typically spread across the entire highway network, and as with control sites, pavement condition is
representative of highway pavement conditions. Pavement condition assessment at verification sites is conducted by the highway agency and typically is not used to establish reference values. Verification site locations are typically known to the data collection team and are often traversed multiple times during the data collection effort. Each time the DCV traverses a verification site, data are collected and submitted for review and analysis (e.g., to ensure image clarity and verify precision and bias). Blind sites are also typically located across the entire highway network, but their locations are unknown to the data collection team. Pavement condition at blind sites has been determined by the SHA. As with verification sites, once a DCV traverses a blind site, the data are reviewed and analyzed for compliance.

Rater Training

Rater training, at its simplest, ensures that the raters who identify and measure pavement distresses are doing so correctly. This is particularly important for vendor-based analysis because raters may review data collected for several different agencies, and each agency may define distress types, severities, and extents differently. Examples of agency pavement-rater training include the following:
• California DOT (Caltrans 2018). Before a production survey, Caltrans requires all staff involved with the pavement condition data effort to participate in a 1-week training course. The intent of the course is to minimize discrepancies in crack identification and classification between the vendor and the agency QA team.
• New Hampshire DOT (New Hampshire DOT 2018). New Hampshire DOT requires personnel certification for the assessment and review of cracking data. Fifteen certification sections were developed from data and images collected in 2009 and 2010. Each section is 0.3 mi (0.5 km) long, representing a wide range of distress types.
Personnel are required to rate the certification sections to a satisfactory level (based on experienced pavement condition rating technicians) before rating production survey data.
• Pennsylvania DOT (Pennsylvania DOT 2018a). Pennsylvania DOT requires the vendor to train all pavement condition rating technicians. After training is completed, raters are required to evaluate six distress calibration sites. The pavement condition assessment results must meet agency accuracy and repeatability requirements before network data reporting.
• Texas DOT (Texas DOT 2018a). All staff involved with postprocessing surface distress data from collected images must be certified annually by attending surface distress rating classes. Certification requires the successful completion of a written test (scoring 70% or higher).

Quality Control Requirements

QC is defined as the activities conducted by the data collection team (agency or vendor) to ensure the collected data (and images) are free of errors. These activities range from equipment checks and calibrations to assessing the validity of the collected data. Table 28 provides examples of common vendor QC activities, and Table 29 summarizes the QC requirements from agency DQMPs.

Control, Verification, and Blind Site Requirements

As previously discussed, control, verification, and blind sites are used by the agency to determine the quality of the collected data and resulting outputs. Some agencies require only a single data collection run per site, while others require multiple runs. The collected data are checked for accuracy and repeatability. Table 30 summarizes examples of agency requirements for data collected at control, verification, and blind sites.
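The two quantities checked at these sites are accuracy (agreement of the run average with the agency-established reference value) and repeatability (agreement among repeat runs). A generic sketch with illustrative tolerances follows; the limits actually used vary by agency, as Table 30 shows.

```python
from statistics import mean, stdev

def site_accuracy_ok(runs, reference, tol_pct=10.0):
    """Accuracy: mean of the repeat runs falls within a percentage
    of the agency reference value (tolerance is illustrative)."""
    return abs(mean(runs) - reference) <= reference * tol_pct / 100.0

def site_repeatability_ok(runs, max_cov_pct=5.0):
    """Repeatability: standard deviation of the repeat runs is within
    a percentage of their mean (coefficient of variation)."""
    return stdev(runs) <= mean(runs) * max_cov_pct / 100.0

# Five hypothetical IRI runs (in./mi) over the same control site,
# with an assumed agency reference value of 100 in./mi.
iri_runs = [98.0, 101.5, 99.0, 102.0, 100.5]
print(site_accuracy_ok(iri_runs, 100.0), site_repeatability_ok(iri_runs))
```

Expressing repeatability as a coefficient of variation keeps the check scale-free; agencies that instead specify absolute limits (e.g., a rut-depth standard deviation in inches) would compare `stdev(runs)` directly against that limit.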
Table 28. Examples of vendor QC checks (adapted from Vermont Agency of Transportation 2018).

Data completeness:
• Total length matches expected length.
• Total number of sections matches expected number of sections.
• No data have been previously rejected.
• No section delivered without a valid exception.
• Sections shorter than expected length have a valid exception.

Locator information:
• No blank or null values.
• Combined locator values are the sum of their component reported parts.
• All locator values match agency values.
• All locator hierarchical relationships are maintained.

Length:
• All rubber-banded segments match expected length.
• Validate rubber-banded segments with adjustment > 20%.
• Validate rechained adjustment > 5%.
• No segments with zero or negative length.
• Validate segment lengths ≥ 5% different from historical length.

Linear reference system (LRS):
• No blank or null values.
• Direction and chainage match agency records.
• Direction values meet agency specification.
• Chainage flows within contiguous sections.
• No overlapping chainage.
• No duplicate mile points.

GPS:
• No blank or null values.
• GPS coverage is within tolerance.
• Latitude, longitude, and elevation are within expected boundaries.
• Latitude and longitude are within agency location definition.
• Elevation, latitude, and longitude are within agency tolerance of historical data.

Speed:
• No blank, null, zero, or negative values.

Date:
• No blank or null values.
• Date format matches specification.
• Date of collection is within the data collection period.

Road geometry:
• No blank or null values.
• Validate values outside of typical tolerances.
• Use images and a geographic information system (GIS) map to ensure algorithms are detecting features.
• Reprocess exception data caused by vehicle deviation or required lane changes.
• Compare features in the opposite direction and reprocess to ensure optimal curve representation in both directions.

IRI:
• No negative, blank, or null values.
• Values within expected ranges.
• Validate large discrepancies between wheel paths; check images for potential cause.
• Review sections with > 5% improvement without rehabilitation, and sections with > 15% deterioration compared with historical data.

Faulting:
• Number of faults less than number of rated joints.
• Faulting on jointed pavements only.
• Determine causes of values outside the expected range.
• Review sections with > 5% improvement without rehabilitation, and sections with > 15% deterioration compared with historical data.

Rutting:
• Ensure sufficient valid transverse profiles to represent the section.
• Determine causes of values outside the expected range.
• No negative, blank, or null values.
• Review sections with > 5% improvement without rehabilitation, and sections with > 15% deterioration compared with historical data.

Distress:
• Check quantity before and after segmentation to ensure no missing data.
• Identify and correct errors resulting from segmentation.
• Check section length and begin/end discrepancies, partial collections, and missing sections without explanation.
• Check that pavement-specific distresses match the pavement type.
• Check that distresses have been rated properly according to agency requirements.
• Check distress ratings' minimum and maximum thresholds.
• Check that rated lane widths and pavement widths are present and accurate.
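Several of the sensor-data checks in Table 28 compare the current cycle against historical data, flagging improbable improvement (> 5% without rehabilitation) or deterioration (> 15%). A sketch of that flag for lower-is-better measures such as IRI or rut depth; the function shape and record handling are illustrative, with only the thresholds taken from the table.

```python
def flag_against_history(current, historical, rehabbed=False,
                         improve_pct=5.0, deteriorate_pct=15.0):
    """Flag a section whose value improved by more than 5% without
    rehabilitation, or deteriorated by more than 15%, relative to the
    historical value (for measures where lower is better)."""
    change_pct = (current - historical) / historical * 100.0
    if change_pct <= -improve_pct and not rehabbed:
        return "improvement_without_rehab"
    if change_pct >= deteriorate_pct:
        return "excessive_deterioration"
    return None  # within expected year-to-year variation

print(flag_against_history(85.0, 100.0))                 # 15% better, no rehab
print(flag_against_history(120.0, 100.0))                # 20% worse
print(flag_against_history(85.0, 100.0, rehabbed=True))  # expected after rehab
```

Flagged sections are not automatically rejected; per the table, they are reviewed (e.g., against images) to determine the cause.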
Table 29. Agency QC requirements.

Alaska (Alaska DOT 2018):
• Equipment calibrated and certified
• Profiler
  o Repeatability ≥ 95%
  o Accuracy ≥ 90%
  o Bounce test ≤ 1%
  o Block check ± 0.01 in. (0.25 mm)
  o Crack measurement system height
  o Image quality
• Distance measurement instrument (DMI) pulse ≤ 0.1 in. (2.5 mm) (5 runs)
• LRS ≤ 0.15% compared with wheel or tape
• Cracking distress (validation sites): ± 5% of agency values (10 runs)
• Data reduction review
  o Image quality
  o Crack measurement for anomalies
  o Route begin and end points
  o Data completeness
  o Null and invalid data
  o Data consistency
  o Automated distress algorithms

Arkansas (Arkansas DOT 2018):
• Preproduction survey
  o Define and verify equipment configuration
  o Equipment calibration
  o Personnel certification
• During production survey
  o Data completeness
  o Subsystem checks
  o Real-time monitoring
  o End-of-day verification
  o DMI calibration

California (Caltrans 2018):
• Vehicle configuration checks
• Profiler
  o Repeatability ± 5% (3 runs)
  o Accuracy ± 10% of agency value
  o Bounce test ≤ 8 in./mi (0.5 mm/km)
  o Block check ± 0.1 in. (2.5 mm)
• Crack measurement system height comparable to previous day
• Imagery focus, color, and luminance quality
• DMI pulse ≤ 0.1% difference (3 runs)
• LRS ≤ 30 ft (9 m) of wheel or tape
• IRI: std ≤ 5% (1.5 mm) (3 runs) and ± 10% of agency value
• Rut: std ≤ 0.06 in. (1.5 mm) (3 runs) and ± 0.06 in. (1.5 mm) of agency value
• Fault: std ≤ 15% (multiple runs and/or historical avg)

Connecticut (Connecticut DOT 2018):
• Calibration checks
• Validation testing
• Daily equipment checks
• Real-time monitoring
• End-of-day review

Delaware (Delaware DOT 2018):
• Vendor certification of equipment and methods
• Approval of vendor QMP
• Initial and monthly equipment calibration
• Ongoing discrepancy monitoring
• Bounds and format checking (monthly)
  o IRI 30-400 in./mi (1.9-25.3 mm/km); wheel paths differ ≤ 50 in./mi (3.2 mm/km)
  o Rut ≤ 1.0 in. (25 mm); wheel paths differ ≤ 0.25 in. (6.4 mm)
  o Crack: area ≤ 100%
  o Fault ≤ 1.0 in. (25 mm); > 0 when joints are present
• Compare with previous year and flag > 10% difference
• Images (monthly)
  o Random sample of 10 images
  o Confirm distress data accuracy
• Distance and location (monthly)
  o Random sample of 10 sections
  o Compare GPS accuracy with base map
• Final data review
  o Data coverage > 99%
  o Data within bounds > 99%

Illinois (Illinois DOT 2018):
• Preproduction (sample route, 10 runs)
  o IRI: current runs < 10% std of last validation and current std < 5%
  o Rutting: current runs < 0.08 in. (2 mm) std of last validation and current std < 0.04 in. (1 mm)
  o Cracking: current runs < 15% std of last validation and current std < 15%
• Repeatability sections (monthly)
  o IRI ± 10% of baseline
  o Rut ± 0.08 in. (2 mm)
• Image and data quality checks before submittal to agency

Maine (Maine DOT 2018):
• Daily field operations
  o Diagnostic checks
  o Random tests to verify reasonableness
  o Monitor systems for errors
  o Random review at end of day
• Office postprocessing
  o 100% review: identify missing data; check crack type and severity
Maryland (Maryland DOT 2018):
• Calibration and quality checks
  o DMI runs < 1.0 pulse/ft (0.3 pulse/m)
  o Block check (AASHTO R 57 [2018b])
  o Bounce test: max IRI < 0.1 in./mi (0.006 mm/km) static; bounce < 0.5 in./mi (0.03 mm/km); avg < 0.4 in./mi (0.025 mm/km)
  o Verify roll, pitch, heading ± 0.4%
• Daily
  o Confirm subcomponent functionality
  o Confirm weather conditions
  o Conduct safety check
  o Clean apertures and lenses
  o Check data elements collected
  o Check right-of-way (ROW) and pavement images
  o Check IRI measurements

Minnesota (Minnesota DOT 2018):
• Equipment calibration
  o Water pan test (3D laser)
  o Block and bounce test (before and monthly during data collection)
  o DMI
• Equipment and operator certification
• Daily checks (e.g., tires, cameras, lasers)
• During data collection (e.g., image quality, DMI measurements, road closures)
• End of day (e.g., view images, review records, transfer data to portable hard drive)

New Hampshire (New Hampshire DOT 2018):
• Preproduction
  o Equipment verification and calibration
  o Camera check
  o Block check and bounce test
• During data collection
  o Evaluate image quality
  o Compare surveyed with planned length
  o Monitor sensor output
  o Monitor image quality
• Data and image checks (100%)
  o GPS points
  o Collected roadway matches planned roadway
  o Line scan, ROW, side, and rearview images collected properly
  o IRI values are reported as expected
  o Laser rut measuring system sensor data displaying correctly

New Mexico (New Mexico DOT 2018):
• Preproduction
  o System requirements and checks
  o Block check ± 0.01 in. (0.25 mm)
  o Bounce test 8 in./mi (0.5 mm/km)
  o Profiler: repeatability ≥ 92%; accuracy ≥ 90%
  o Image focus, color, luminance
  o DMI pulse ≤ 0.1 (5 runs)
  o LRS ≤ 15% of wheel or steel tape
  o IRI (10 0.1-mi [0.16-km] runs): std ≤ 5%; std ≤ 10% (historical avg); symmetrical appearance
  o Rut (10 0.1-mi [0.16-km] runs): std ≤ 0.40 in. (10 mm); std ≤ 0.40 in. (10 mm) (historical avg)
  o Distress (10 0.1-mi [0.16-km] runs): std ≤ 15% of total length; std ≤ 15% (historical avg)
• During production
  o GPS accuracy ≤ 9.8 ft (3 m)
  o Image quality and lane placement
  o Monitor collection system errors
  o Data completeness
• Data reduction
  o Sample image quality and coverage
  o Review crack measurement
  o Confirm route begin/end
  o Confirm data completeness
  o Confirm roadway features
  o Review distress data for consistency
• Data delivery
  o IRI 30-400 in./mi (1.9-25 mm/km); wheel paths differ ≤ 50 in./mi (3.2 mm/km)
  o Rut ≤ 0.35 in. (8.9 mm); wheel paths differ ≤ 0.25 in. (6.4 mm)
  o HPMS crack %: asphalt ≤ 50%; JPCP and CRCP ≤ 100%
  o Crack % (AASHTO PP 67 [2017f]) ≤ 100%
  o Faulting ≤ 1.0 in. (25 mm); > 0 when joints are present
  o Accurate description items (100%)
  o ≤ 10 consecutive fixed segments (500 ft [152 m]) with missing data

New York (New York DOT 2018):
• Presurvey
  o Equipment calibration and certification
  o Precision and bias testing
• During collection (monthly)
  o Precision and bias testing
North Dakota (North Dakota DOT 2018):
• IRI and DMI: > 95% compliant with standards
  o Equipment configuration, calibration, verification
  o Daily equipment checks and real-time monitoring
  o Inspect uploaded data samples
  o Inspect processed data
  o Final data review
• Rut, fault, GPS, and grade: > 95% compliant with standards
  o Initial equipment configuration, calibration, verification
  o Daily equipment checks and real-time monitoring
  o Inspect uploaded data samples
  o Inspect processed data
  o Final data review
• Distress rating: > 80% match with manual survey
  o Initial rater training
  o Intra-rater checks
  o Final data review
• Images: > 98% compliant with standards for each control section; < 5 consecutive images failing to meet criteria
  o Startup checks, real-time monitoring, field review
  o Uploaded sample review
  o Final review

Oregon (Oregon DOT 2018):
• Preproduction
  o System requirements and checks
  o Profiler: repeatability ≥ 92%; accuracy ≥ 90%
  o Image focus, color, luminance
  o DMI pulse ≤ 0.1 difference
  o IRI: block check ± 0.01 in. (0.25 mm); bounce test ≤ 3 in./mi (0.2 mm/km) static and ≤ 8 in./mi (0.5 mm/km); ProVAL cross-correlation repeatability score ≥ 0.92 (5 runs)
  o Rut (3 runs) ± 0.05 in. (1.3 mm)
  o Fault (3 runs) ± 0.06 in. (1.5 mm)
  o Distress (3 runs or historical avg) std ≤ 10%
• During production
  o GPS accuracy ≤ 9.8 ft (3 m)
  o Image quality and lane placement
  o Monitor collection system errors
  o Data completeness
• Data reduction
  o Sample image quality
  o Review sample of crack measurement system output for anomalies
  o Confirm route begin/end
  o Confirm data completeness
  o Confirm placement of roadway features
  o Manual review and correction of automated results when image analysis is in error
  o Review distress data for consistency
  o Perform data reasonableness checks
• Data delivery
  o Confirm LRS coding and lane
  o Milepoint ± 0.03 mi (0.05 km) of actual
  o Confirm correct pavement type
  o Confirm image quality
  o Confirm events marked as required
  o No missing values without valid exclusion and reason codes
  o IRI: 20-800 in./mi (1.3-51 mm/km)
  o Rut: < 2.0 in. (51 mm)
  o Fault: < 1.0 in. (25 mm)
  o Patching: ≤ 6,336 ft² (205 m²)
  o Asphalt (0.1-mi [0.16-km] segments): fatigue cracks ≤ 1,056 ft (190 m); longitudinal cracks ≤ 1,584 ft (285 m); potholes ≥ 6 in. (152 mm) wide and count ≤ 44/0.1 mi (26/0.1 km); raveling ≤ 1,584 ft (285 m); transverse cracks ≥ 6 ft (1.8 m) long and count ≤ 44/0.1 mi (26/0.1 km)
  o JPCP (0.1-mi [0.16-km] segments): corner breaks ≤ 36/0.1 mi (21/0.1 km); longitudinal cracks (non-wheel path) ≤ 1,584 ft (285 m); longitudinal cracks (wheel path) ≤ 1,056 ft (190 m); shattered slabs ≤ 36/0.1 mi (21/0.1 km); transverse crack count < number of slabs
  o CRCP (0.1-mi [0.16-km] segments): longitudinal cracks (non-wheel path) ≤ 1,584 ft (285 m); longitudinal cracks (wheel path) ≤ 1,056 ft (190 m); punchouts ≤ 36/0.1 mi (21/0.1 km)
Pennsylvania (Pennsylvania DOT 2018a):
• Equipment calibration and certification
  o Block calibration and test
  o Roughness calibration and bounce test
  o DMI calibration
  o Laser Crack Measuring System (LCMS) testing and calibration
  o Image alignment and quality testing
• Extended section road test
  o 4-25 mi (6.4-40.2 km) in length
  o Conducted at 55 mph (89 km/h): confirm systems working correctly; data/images are properly collected
• Data analysis
  o Completeness
  o Location information
  o Section length
  o Linear reference
  o Sensor data (e.g., IRI, fault, rut)
• Corrective action plan

Saskatchewan (Saskatchewan Ministry of Highways and Infrastructure 2017):
• Equipment checks on profiler, LCMS, GPS, and DMI
• Image quality is clear and properly stitched
• All distress is visible
• Distress correctly identified and quantified

Tennessee (Tennessee DOT 2018):
• Before production testing
  o Equipment calibration 2 months before production testing
  o Control site testing after completion of calibration
• During production testing
  o Control site testing monthly
  o Control site repeatability (weekly)
• Data checks
  o Format and completeness
  o Sensor data: check for large differences between wheel paths for IRI and rut
  o Distress data: check that results are within expected ranges
  o Image quality (e.g., clear viewing path, minimal or no debris, legible signs)

Texas (Texas DOT 2018a):
• Implementation schedule
• Logical sequence of tasks and deliverables
• Clear definition of tasks and deliverables
• Staffing by task and deliverable
• Target completion date for each task and deliverable
• Strategies and processes to promote quality
• Procedures for measuring and reporting quality performance
• Controls to assure quality and consistency
• Personnel certification training
• Validation of equipment accuracy and precision, and daily and ongoing QC procedures

Vermont (Vermont Agency of Transportation 2018):
• Data collection personnel training and certification
• Equipment configuration, setup, and calibration
• Control site testing and verification
• Daily system checks
• Real-time data checks
• Data processing personnel training and certification
• Data processing, review, and analysis
• Project reporting
• Corrective action

Virginia (Virginia DOT 2015):
• Personnel certification training
• Equipment accuracy and precision
  o < 5% of sensor data items not documented by vendor
• Daily and ongoing QC procedures
• Establish appropriate variation limits for each data item
• Weekly equipment calibration schedule; maintain calibration records

West Virginia (West Virginia DOT 2018):
• Preproduction
  o Equipment calibration and verification
  o Rater training
  o Validate site rating calibration
  o Image checks and monitoring
• Daily
  o Equipment checks
  o End-of-day review
  o Inspect uploaded data samples
  o Inspect processed data
  o Mileage review
• Weekly
  o Compare location with shape file
  o Uploaded image sample review
• Prior to data submittal
  o Final data review
  o Final distress rating review
  o Final segment location review
  o Final image review

Wyoming (Wyoming DOT 2018):
• Preproduction
  o DCV certified at the Texas A&M Transportation Institute, MnROAD,¹ the National Center for Asphalt Technology, or a vendor certification center
• Control site precision and accuracy
  o Before, midway through, and on completion of the production survey
  o Faulting, texture, rutting, and IRI

¹ Pavement test track owned and operated by Minnesota DOT.
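Many of the data-delivery checks in Table 29 are simple range and cross-wheel-path bounds (for example, the New Mexico and Delaware IRI limits). A minimal sketch follows; the thresholds are taken from those rows, but the function shape is otherwise hypothetical.

```python
def delivery_bounds_ok(left_iri, right_iri,
                       iri_min=30.0, iri_max=400.0, max_wp_diff=50.0):
    """Data-delivery bounds check: each wheel-path IRI within
    [30, 400] in./mi and the two wheel paths differing by no more
    than 50 in./mi (limits from the New Mexico/Delaware rows)."""
    in_range = all(iri_min <= v <= iri_max for v in (left_iri, right_iri))
    return in_range and abs(left_iri - right_iri) <= max_wp_diff

print(delivery_bounds_ok(95.0, 110.0))   # within bounds and wheel paths agree
print(delivery_bounds_ok(95.0, 160.0))   # wheel paths differ by 65 in./mi
```

Checking the wheel-path difference as well as the absolute range catches a common sensor failure mode in which one profiler channel drifts while the other remains plausible.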
Table 30. Examples of agency control, verification, and blind site requirements.

Alaska (Alaska DOT 2018): one control site for profiler and DMI certification and six verification sites. Criteria (10 runs):
• IRI: std < 5% of Class 1 profiler
• Alligator cracking: std < 15% of agency value
• Rut: std < 0.04 in. (1.02 mm) of Class 1 profiler, Dipstick, or straightedge
• Crack length: std < 15% of agency value
• Images: minimal skipped images; uniform and consistent illumination, color balance, exposure, clarity, and stitching; resolution sufficient to identify a 0.125-in. (0.32-mm) crack at 60 mph (97 km/h)

Alberta (Alberta Transportation n.d.):
• IRI: accuracy (5 runs) ± 10% of Class 1 profiler; repeatability (5 runs) each run ± 5% of the 5-run avg
• Rut: accuracy (5 runs) avg ± 0.08 in. (2 mm) of Class 1 profiler; repeatability (5 runs) each run ± 10% of the 5-run avg

British Columbia¹ (British Columbia MoTI 2016): four control sites, 1,640 ft (500 m) in length.
• IRI: accuracy avg ± 10% of Class 1 profiler; repeatability (5 runs) std ± 3.9 in./mi (0.25 mm/km)
• Rut: accuracy avg ± 0.12 in. (3 mm) of straightedge; repeatability (5 runs) std ± 0.12 in. (3 mm)
• Distress: accuracy avg ± 1 pavement distress index (PDI) of manual survey; repeatability (5 runs) std PDI ± 1

California (Caltrans 2018): eight control sites (four asphalt and four concrete). Criteria (3 runs):
• IRI: std ± 5% of Class 1 profiler
• Rut: std ± 0.06 in. (1.52 mm) of Class 1 profiler
• Fault: std ± 0.06 in. (1.52 mm) of manual survey
• Distress: ± 10% of manual survey
• Images: displayable and clear; continuous and correctly stitched with no missing or overlapping images; synchronized with geographic locations and associated attributes; ≤ 10 poor-quality images/mi (16 images/km) and ≤ 2 consecutive poor-quality images/mi (3 images/km); 0.125-in. (3.175-mm) wide cracks are visible

Delaware (Delaware DOT 2018):
• Initial calibration (10 runs per site)
  o 9 asphalt, 7 composite, 8 surface-treated, and 6 concrete sites
  o ≥ 90 percent within limits (PWL) and ≤ 5% failing multiple criteria
  o IRI, rut, and fault: reference value is the avg of repeat runs
    - IRI: ± 10 in./mi (0.6 mm/km) and std < 5 in./mi (0.3 mm/km)
    - Fault: ± 50 count and std < 5 count
  o Distress/condition evaluated:
    - Bleeding: ± 50 ft² (4.6 m²) and std < 5%
    - Block and fatigue cracking: ± 50 ft² (4.6 m²) and std < 5 ft² (0.5 m²)
    - Joint reflection cracking, patch deterioration, and potholes: ± 5 ft² (0.5 m²) and std < 5 ft² (0.5 m²)
    - Unclassified cracking: < 5% area and std < 5%
    - Crown, cross slope, edge and non-wheel-path longitudinal cracking, raveling, surface defects, crack length: ± 50 ft (15 m) and std < 5 ft (1.5 m)
    - Joint spalling, joint reflection and map cracking, joint seal damage, alkali-silica reactivity, slab count, and joint count: ± 5 count and std < 5 count
    - Slab and transverse cracking: ± 50 ft (15 m) and std < 5 ft (1.5 m); ± 5 count and std < 5 count
  o Compare with the previous reference value to check within acceptable limits
• Verification (monthly)
  o Re-run initial calibration sites (5 runs)
  o ≥ 90% of sections within limits and ≤ 5% failing multiple criteria
• Ongoing discrepancy monitoring
  o Difference between observed and LCMS values
  o ≥ 90% of sections within all bounds and ≤ 5% failing multiple bounds criteria
• Independent bounds and format check
  o ≥ 90% of sections within all bounds and ≤ 5% failing multiple bounds criteria (see initial calibration site requirements)
• Independent image sample check (random sample of 10 images)
  o 100% of samples free of major problems
• Independent distance and location check (random sample of 10 sections)
  o ≥ 90% of sections within limits and ≤ 5% failing multiple criteria
54 Automated Pavement Condition Surveys

Iowa (Iowa DOT 2018)
• Control sites
  o 4 asphalt and 4 concrete sites
  o 1,500 ft (457 m) in length
  o IRI, fault, and rut: preproduction and monthly testing
  o Distress: preproduction
• Criteria:
  o IRI: ± 10% of agency-measured value; 3 replicate runs std < 5%
  o Fault: ± 0.10 in. (2.5 mm) of agency-measured value; 3 replicate runs std < 5%
  o Rut: ± 0.10 in. (2.5 mm) of agency-measured value; 3 replicate runs std < 5%
  o Distress: ± 10% of manual review of 3D scans/images
  o Downward images: minimal skipped; illumination, color, exposure, clarity, stitching, synchronization with windshield images

Maryland (Maryland DOT 2018)
• Test loop (monthly)
  o 45 sections, 13.1 mi (21.1 km)
  o 3 runs every 3 weeks
  o Compare repeatability to 10-run QC test loop results
• DMI
  o 2 1-mi (1.6-km) sections
  o Calibrate every 3 weeks

Minnesota (Minnesota DOT 2018)
• Equipment certification
  o 1 asphalt and 1 concrete section
  o IRI (5 repeat runs): reference values from walking profiler; avg ± 5% of reference profile
• Operator certification
  o Successful completion of Pavement Surface Smoothness Specification Training
• Verification sites
  o New van: establish baseline values from 5 repeat runs
  o Bi-monthly: check consistency of IRI, rutting, faulting, and compass heading to baseline plot

Delaware (continued)
• Joint spacing: ± 5 ft (1.5 m) and std < 5 ft (1.5 m)
• Rut: ± 50 ft (15 m) and std < 5 ft (1.5 m); ± 0.05 in. (1.3 mm) and std < 0.05 in. (1.3 mm)
• Final data review
  o Data coverage > 99% within bounds
  o Data within bound checks > 99%
  o Rut depth: pass water tray test; avg ± 0.10 in. (2.5 mm) of fabricated beam with 0.5-in. (12.7-mm) ruts
  o Faulting: avg ± 0.05 in. (1.3 mm) of fault meter
  o Cracking: manual crack map; > 90% of cracks shown on crack map

New York (New York DOT 2018)
• 1 control site: presurvey and tested 5 times per month
  o IRI precision and bias: avg of 5 runs ± 1 in./mi (0.06 mm/km); avg of each run ± 3 in./mi (0.19 mm/km); 0.1-mi (0.16-km) individual runs within band of historical values
• Rut, fault, and distress consistent and representative of DOT records at these locations

North Carolina (North Carolina DOT 2018)
• No more than 20 DOT-selected sites
• Determine precision and bias limits and corrective actions
  o Vendor-determined value or ≤ 5, whichever is lower
• Verify camera angles and coverage, data calculation methods, and standard operating procedures

North Dakota (North Dakota DOT 2018)
• Annual vehicle calibration conducted before production survey (5 runs)
  o IRI: ± 5% of reference profiler (SurPRO); COV ± 3%; mean cross-correlation of runs > 90%; individual run cross-correlations > 85%
  o DMI: distance of each run ± 0.2% of agency value
Oregon (Oregon DOT 2018)
• 1 control site for IRI and 1 control site for rutting
  o Preproduction testing, monthly verification testing, and postsurvey exit controls
• LRS: correct code and lane; location ± 0.03 mi (0.05 km) of agency location
• IRI: ProVAL cross-correlation repeatability score ≥ 0.92 (5 runs) and accuracy score ≥ 0.90 (5 runs) compared to Oregon DOT SurPRO
• Rut: ± 0.05 in. (1.3 mm) run to run (3 runs); ± 0.10 in. (2.5 mm) compared to agency survey

Pennsylvania (Pennsylvania DOT 2018a)
• Control sites
  o 4 asphalt, 2 JPCP, and 2 additional sites for IRI verification
• Weekly IRI and rut depth verification (1 site, 1,000 ft [305 m] in length, 1 run)
  o Baseline for avg IRI and rut based on 10 repeat runs
  o IRI ± 5% of baseline
  o Rut ± 0.05 in. (1.3 mm) of baseline
  o Compare wheel path IRI and rut, grade, heading, and cross slope to baseline for acceptable data
• Upload images and data to the office workstation, then verify
  o Image completeness and begin/end image
  o Sensor data complete for left and right IRI
  o Run analysis process and verify all sensor data present for IRI, half-car roughness index, rutting, texture, faulting, and gyro
• Accuracy: IRI ± 10% of walking profiler; rut ± 10% of agency value (inertial profiler); fault ± 10% of agency value (inertial profiler); distress ± 10% of agency value (3 or more agency raters, 2 ratings per rater)
• Repeatability (3 runs): IRI, rut, fault, and distress each ± 5% run to run
• Blind verification sites (125 segments)
  o Minimum of 3 agency raters, at least 2 ratings per site
  o Vendor's data ± 10% of avg agency ratings
• Vendor data compared to data from previous 2 years; flag distress > ±10% and IRI > ±50 in./mi (3.2 mm/km)

Quebec (Quebec MoT 2017)
• 1 control site, 1,312 ft (400 m) in length
  o Conducted at the beginning and end of the survey
  o 10 runs, compared to SurPRO
  o IRI (each wheel path): avg bias ≤ 1.25; repeatability ≤ 0.38
  o Cracking qualified using artificially simulated cracks
• 5 verification sites, 3,281 ft (1,000 m) long
  o Reference values (10 runs, at least 4 survey vehicles): lowest and highest variable runs removed; median of 3 runs from each vehicle; 5 remaining runs used for verification
• Verification standards
  o IRI: bias and repeatability < 5%; bias 75% of sections < 10% and 90% of sections < 15%; deviation between 2 devices < 10%3
  o Rut: bias < 0.04 in. (1.0 mm) and repeatability < 1.0%; bias 75% of sections < 0.06 in. (1.5 mm) and 90% < 1 in. (25 mm)
  o Longitudinal cracking per zone2: 80% of sections ± 16 ft (5 m) and 97.5% ± 33 ft (10 m); bias ± 16 ft (5 m)
  o Longitudinal cracking by severity3: 3 severities or more must be ± 16 ft (5 m) in > 75% of cases; 3 severities or more must be ± 33 ft (10 m) in > 90% of cases; bias ± 16 ft (5 m)
  o Overall cracking index: > 95% of sections ± 10 points of reference values; > 80% of sections ± 5 points of reference values; bias and repeatability ± 2
Saskatchewan (Saskatchewan Ministry of Highways and Infrastructure 2017)
• 1 control site, 492 ft (150 m) in length
  o 5 runs
  o IRI > 92% compared to Class 1 profiler
  o Crack > 90% compared to manual survey1,2
  o Texture > 90% compared to agency survey
• 2 verification sites, 656 ft (200 m) in length
  o Test every 3,100 ln-mi (5,000 ln-km)
  o IRI > 92% compared to agency
  o Rut > 90% of average rut depth
  o Crack > 90% type and width1,2
  o Texture > 90% type and affected percent

Tennessee (Tennessee DOT 2018)
• Control sites
  o 15 sites
  o 5 runs (agency and vendor)
  o Paired t-test of average values to determine difference of collected data
• Verification sites (weekly)
  o Check repeatability and time series
  o Compare with historical values: IRI −10 to 30 in./mi (−0.6 to 1.9 mm/km); rut < 0.2 in. (5.1 mm)

Texas (Texas DOT 2018a)
• 5 control sites
  o IRI ± 6 in./mi (0.38 mm/km) of agency profiler
  o Rut ± 0.06 in. (1.5 mm) of agency profiler
  o Distress score ± 15 points (scale of 0 to 100 points)
• 37 verification sites, at least 1 in each district (compare to agency vehicle)
  o Every week, every vehicle: IRI ± 6 in./mi (0.38 mm/km); rut ± 0.06 in. (1.5 mm); distress (6% random sample) score ± 15 points

Utah (Utah DOT 2018)
• Compare to random sample of ground truth data
  o Distress: 90% of data ± 15%
  o IRI: 95% of data ± 5%
  o Rut: 95% of data ± 0.1 in. (2.5 mm)
  o GPS location: 95% of data ± 5 ft (1.5 m)
  o LRS location: 95% of data ± 0.005 mi (0.008 km)

Vermont (Vermont Agency of Transportation 2018)
• 4 control sites
  o Calibrate distress rating process
  o IRI and rut precision and bias > 95% compliance with standards
• 5 verification sites (weekly)
  o 10 0.05-mi (0.08-km) sections
  o > 95% compliance with standards

Virginia (Virginia DOT 2015)
• No more than 20 agency-selected sites
• Determine precision and bias
• Verify calibration procedures, camera angles, coverage, data calculation methods, and standard operating procedures

Washington State (Washington State DOT 2018)
• 2 IRI sites, 1,000 ft (305 m)
• 1 weekly site for presurvey IRI, faulting, and rutting
• Accuracy and repeatability, compared to historical values (< 5% variation)
  o IRI presurvey (5 runs) and weekly (3 runs) compared to SurPRO
  o Rut weekly (3 runs) compared to SurPRO
• Check DMI monthly

West Virginia (West Virginia DOT 2018)
• Vendor profile equipment and operators certified by the National Center for Asphalt Technology or an agency-approved method
• Agency staff must obtain West Virginia DOT profiler operator certification
• Agency profile equipment calibrated and certified by an agency-approved process
• Calibration sites: established by vendor; vendor and agency equipment calibration
• Verification sites: 2 asphalt and 1 concrete site; compare automated data to agency-conducted manual surveys and agency-automated data collection

Wyoming (Wyoming DOT 2018)
• 2 asphalt sites: verify IRI, rutting, texture, geometrics, positioning, and 3D crack detection
• 1 concrete site: verify IRI, rutting, texture, faulting, geometrics, positioning, and 3D crack detection
• Compare data to previously collected data
• Verify image quality through spot checks

Note: Avg = average; Std = standard deviation.
1 Sum of all severities within the same zone (right wheel path, center of lane, and left wheel path).
2 Sum of longitudinal cracks in all zones by severity (very low, low, medium, or high).
3 % deviation = (manual measurement − test vehicle measurement) / manual measurement × 100%.
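The percent deviation defined in note 3 above is straightforward to compute; the following is a minimal sketch (the function name and sample measurements are illustrative, not from any agency's DQMP):

```python
def percent_deviation(manual: float, test_vehicle: float) -> float:
    """Note 3: (manual measurement - test vehicle measurement)
    / manual measurement x 100%."""
    return (manual - test_vehicle) / manual * 100.0

# Hypothetical example: manual IRI of 120 in./mi vs. a vehicle-measured 114 in./mi
print(round(percent_deviation(120.0, 114.0), 1))  # 5.0
```

A positive result indicates the test vehicle under-measured relative to the manual reference; a negative result indicates over-measurement.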
Acceptance Requirements

Agency acceptance requirements are activities performed to assess the quality of the submitted condition data. Acceptance requirements vary widely depending on agency needs. Table 31 summarizes agency acceptance requirements. To illustrate an agency's process for acceptance testing, the process used by Caltrans is described below in more detail. Caltrans acceptance testing is conducted by in-house staff, who review 5% to 10% (2,500 to 5,000 mi [4,023 to 8,046 km]) of the submitted data and images. The Caltrans acceptance process includes field verification, QA field review, and office review.

Field Verification

Field verification of IRI is conducted using Caltrans-certified profilers. Field verification sites are located across California (14 sites per district, 168 sites statewide), are selected based on pavement type (asphalt and concrete) and range of IRI values, and are approximately 5 mi (8 km) in length. Caltrans and vendor IRI are calculated on 0.1-mi (0.16-km) segments and compared. If the Caltrans and vendor IRI values for a 0.1-mi (0.16-km) segment of a field verification site are within 10%, the segment passes. The vendor data are accepted if 85% of the field verification site segments pass this criterion. For the LRS, the linear and georeferenced locations of images are randomly reviewed at selected bridges, county lines, and intersections and compared with Caltrans survey data. Location data are accepted if 95% of the landmarks are within 30 ft (9.1 m) of the Caltrans locations.

QA Field Review

The Caltrans QA field review is intended to validate roadway segments based on the presence of distress and the category of treatment (e.g., preservation, minor rehabilitation, major rehabilitation) for asphalt pavements and JPCP.
The QA field crews (two members per team) conduct manual reviews of 0.1-mi (0.16-km) segments with safe shoulder access on each field site. The following details apply to asphalt pavements:
• A "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate 30% or more alligator "B" cracking. Alligator "B" cracking is defined as "interconnected or interlaced cracks in the wheel path, forming a series of small polygons (generally less than 1 ft [0.3 m] on each side). The cracking resembles the appearance of alligator skin or chicken wire" (Caltrans 2015).
• For all other project types, a "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate less than 30% alligator cracking.
• The vendor's results are accepted if more than 85% of the 0.1-mi (0.16-km) segments meet the "pass" criteria. If these criteria are not met, the vendor is notified for corrective measures.
The following details apply to JPCP:
• A "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate 10% or more of panels cracked into three or more pieces.
• For all other project types, a "pass" rating is applied if both the vendor and agency ratings of a 0.1-mi (0.16-km) segment indicate less than 10% of panels cracked into three or more pieces.
• The vendor's results are accepted if more than 85% of the 0.1-mi (0.16-km) segments meet the "pass" criteria. If these criteria are not met, the vendor is notified for corrective measures.
An example of the reporting process is shown in Table 32.
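The pass/accept logic above can be sketched in a few lines; this is an illustrative reconstruction (function and variable names are not Caltrans'), shown for the asphalt 30% alligator "B" threshold, with the segment ratings being hypothetical:

```python
def segment_passes(vendor_pct: float, agency_pct: float, threshold: float = 30.0) -> bool:
    """A 0.1-mi segment 'passes' when the vendor and agency ratings fall on the
    same side of the distress threshold (30% alligator 'B' cracking for asphalt;
    10% of panels cracked into three or more pieces would be used for JPCP)."""
    return (vendor_pct >= threshold) == (agency_pct >= threshold)

def site_accepted(pairs, threshold: float = 30.0, required: float = 0.85) -> bool:
    """Accept the vendor's results when more than 85% of segments pass."""
    passed = sum(segment_passes(v, a, threshold) for v, a in pairs)
    return passed / len(pairs) > required

# Hypothetical (vendor %, agency %) alligator 'B' cracking per 0.1-mi segment
ratings = [(32, 35), (5, 4), (0, 2), (28, 31), (40, 38), (10, 8), (3, 3), (45, 47)]
print(site_accepted(ratings))  # True (7 of 8 segments agree -> 87.5% > 85%)
```

Note that agreement is categorical: a vendor rating of 28% against an agency rating of 31% fails the segment even though the values differ by only 3 points, because they straddle the threshold.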
Table 31. Agency acceptance requirements.

Alaska (Alaska DOT 2018)
• Data
  o > 98% complete
  o > 98% populated with required data elements
  o 100% description information
  o > 98% with < 500 ft (152 m) of consecutive missing segments
• IRI, rut, and cracking 95% compliant with verification testing
• Distress ratings 95% compliant with protocol requirements and quality expectations
• Location information 100% accurate and complete (database check)
• Pavement images: review 20% random sample; 100% compliant with verification testing requirements

Arkansas (Arkansas DOT 2018)
• Data and images
  o 100% correct data type and format
  o > 98% data and image completeness
  o > 98% accurate location information
  o > 98% correct data for surface type
  o > 98% data within acceptable range
  o 100% correct lane marking and joint location (100% manual review of images and 5% independent sample)
  o > 98% correct crack detection (5% sample); flag when more than 20% cracking misdetection
• IRI
  o > 98% collected at > 40 mph (64 km/h)
  o 30–400 in./mi (1.9–25.3 mm/km)
  o < 30% difference between wheel paths
• Rut: > 98% with rut difference < 30% between wheel paths, 0 to 1 in. (25 mm)
• Fault: > 98% with fault value < 1 in. (25 mm) for any wheel path, 0 to 1 in. (25 mm)
• Curve data: > 98% classified correctly
• Percent cracking (HPMS): > 98% within expected ranges
• 5% random sample (compare to historical values)
  o > 95% IRI data ± 10%
  o > 95% rut depth ± 0.05 in. (1.27 mm)
  o > 95% faulting ± 0.05 in. (1.27 mm)
  o > 95% distress data ± 20%
  o > 95% geometric properties ± 15%
  o > 98% quality ROW and pavement images

California (Caltrans 2018)
• 5 to 10% random sample
• Review vendor reports, data, and images for completeness
• Conduct field verification
• Verify images and vendor results
• Confirm upload into pavement management system
• Conduct year-by-year consistency checks
• IRI > 95% ± 10% of agency value
• Rut > 95% ± 0.06 in. (1.5 mm) of agency value
• Fault > 95% ± 0.06 in. (1.5 mm) of agency value
• MPD > 95% ± 0.06 in. (1.5 mm) of agency value
• Cracking > 85% ± 10% of agency value
• Major rehabilitation segments: > 85% of segments ± 10% area of agency value
• Element review > 85% ± 10% of agency value
• 100% data completeness
• LRS > 95% ± 30 ft (9.1 m)
• Downward and ROW images > 95% meet criteria
• 100% data upload

Connecticut (Connecticut DOT 2018)
• Reproducibility between vehicles
  o IRI ± 10 in./mi (0.63 mm/km)
  o Rut ± 0.06 in. (1.52 mm)
  o Asphalt cracking: total < 10% COV; longitudinal, transverse, or area cracking < 20% COV; total wheel path cracking < 40% COV; total non-wheel path cracking < 60% COV
  o Cross-slope difference ± 5%
  o Longitudinal grade difference ± 0.1%
• Repeatability (5 runs)
  o IRI each run ± 5% of avg of 5 runs
  o Rut each run ± 0.06 in. (1.52 mm) of avg of 5 runs
  o Asphalt cracking: total < 10% COV; longitudinal, transverse, or area cracking < 15% COV; total wheel path cracking < 30% COV
• Data range checks
  o IRI: HPMS 30–400 in./mi (1.9–25.3 mm/km)
  o Rut: CTDOT 99% ≤ 0.5 in. (12.7 mm); HPMS ≤ 1.0 in. (25.4 mm)
  o Fault: CTDOT 99% ≤ 0.5 in. (12.7 mm); HPMS ≤ 1.0 in. (25.4 mm)
  o Asphalt cracking: DOT 99% ≤ 300 ft/image (91 m/image); HPMS 0–54% area
  o Concrete cracking: DOT to be determined; HPMS 0–100% cracked slabs
  o Cross slope: DOT 100% ≤ 10%; HPMS N/A
  o Longitudinal grade: DOT 99% ≤ 16%; HPMS N/A
Maryland (Maryland DOT 2018)
• IRI
  o Completeness > 85%
  o Speed check > 35 mph (56 km/h)
  o Settings check as expected
  o Flag if IRImeasured − 0.21 × IRIcalculated < 0
• LCMS™ (100% review)
  o Image of acceptable quality
  o Reasonableness of crack length (asphalt pavements): crack has minimal zero values; lane width > 0; crack detection > 50% of length
• Transverse profile and rut (100% review)
  o Visual inspection of graphs and longitudinal plots
• IRI change-in-speed adjustment (100% review)
  o If < 8% of unadjusted IRI value, original value reported
  o If speed > 15 mph (24 km/h) and > 8% of original IRI, report adjusted IRI
• Concrete pavements (5% manual check)
  o Surface and distress type
  o Missing data
• HPMS (100% review), flag for evaluation
  o Missing data
  o > 1% change in rating groups from previous year
  o > 2% change in statewide avg for IRI, cracking percent, rutting, and faulting
  o Total lane-mi (km) > 1% change from previous year
  o Total lane-mi (km) ± 10 mi (16 km) of previous year
• Images (100% review), flag for evaluation
  o Missing ROW or pavement images
  o Abnormalities (e.g., lighting, spots)
  o Start/end points > 22.2 ft (6.8 m) from GPS coordinate
  o Start/end section coordinates > 21 ft (6.4 m) from historical inventory
• Review updated Business Plan
  o Total lane-mi (km) < 50 mi (80 km) different from previous year's mileage
  o Total treated section length as expected, compared to last year's treated lane-mi (km) and the current year's allocated budget

Minnesota (Minnesota DOT 2018)
• Check pavement type
• 10% segment review
  o Manual review of images
  o Compare to automated results
• Final checks and data formatting
  o Error check (e.g., out of range, mismatched distress, high rut or fault)
• Load into pavement management
  o Compare to last year's data for reasonable trend
  o Compare overall percent good and poor to last year's data

Connecticut (continued)
• Repeatability (5 runs), continued
  o Total non-wheel path cracking < 50% COV
  o Cross slope std ± 0.05%
  o Longitudinal grade std ± 0.1%
• Data range requirements, 0.1-mi (0.16-km) segments
  o IRI: 99% within 40–450 in./mi (2.5–28.5 mm/km); HPMS 30–400 in./mi (1.9–25.3 mm/km)
  o Begin/end locations: DOT 99% with < 1 mismatched segment per 10 mi (16.1 km); HPMS N/A
  o Pavement images: DOT 99% with < 1 missing image per 0.062 mi (0.01 km)

Illinois (Illinois DOT 2018)
• Image quality
• LRS accuracy
• Correct route collected and reported
• Correct begin/end point
• Correct segment length
• Valid date recorded (month and year)
• Acceptance
  o IRI, rut, fault, cracking > 90% accuracy (random sample)
  o LRS 100% compared to GIS and agency (100% review)
  o Images 100% checked for clarity and ratability

Iowa (Iowa DOT 2018)
• Deliver > 98% of collectable miles
• Missing < 500 ft (152 m) of consecutive fixed segments
• 100% description items populated and accurate
• > 95% of segments
  o IRI ± 10% of agency value
  o Fault ± 0.05 in. (1.27 mm) of agency value
  o Rut ± 0.05 in. (1.27 mm) of agency value
  o Distress ± 10% of manual review of 3D scans/images

Maine (Maine DOT 2018)
• 10–15% sample
• Review image quality
• Review distress data (field verify if needed)
• Upload into pavement management system
  o Data completeness check
  o Expected ranges check
New York (New York DOT 2018)
• 10% random sample (collected by DOT and compared to vendor results)
  o IRI ± 10%
  o Rut max ± 0.2 in. (5.1 mm)
  o Rut avg ± 0.13 in. (3.3 mm)
  o Fault count −1
  o Fault sum ± 10%
  o Fault avg ± 0.05 in. (1.27 mm)
• Verify segment GPS coordinates
• Distress (image review)
  o 5% random sample
  o < 5 missed distress types or severities per 0.1-mi (0.16-km) segment
  o < 1 missed high-severity distress per 0.1-mi (0.16-km) segment
• Check image quality
• Historical comparison
  o Avg IRI ± 10%
  o Rut max ± 0.2 in. (5.1 mm)
  o Rut avg ± 0.13 in. (3.3 mm)
  o Fault count −1
  o Fault sum ± 10%
  o Fault avg ± 0.05 in. (1.27 mm)
  o Area crack (per zone) ± 10%
  o Weighted avg crack width ± 20%
  o Total crack length ± 10%
  o % crack asphalt ± 10%
  o % crack concrete ± 10%

North Carolina (North Carolina DOT 2018)
• State routes
  o Independent consultant review
  o 1.28% random sample
  o > 90% of indices or distresses ± 2 std (alligator, transverse, longitudinal, and lane joint cracking, patching, bleeding, concrete patching, corner break, spalling)
• Images
  o 5% random sample
  o < 5 out of 100 continuous images with inferior quality

North Dakota (North Dakota DOT 2018)
• 2–3% random sample review
  o < 20% difference in deduct values
  o > 9% deduct difference requires detailed review
  o Year-to-year comparison: overall minimum distress score < 12 points from previous year; overall distress score > 6 points from previous year
• 100% review of start/end points of all segments
• 100% review of all images
• Manually assess all segments with a microsurfacing, slurry seal, or chip seal
• Manually assign all thin-lift asphalt overlays a patch score
• Verify distresses with substantial variations
• IRI, rut, faulting, GPS, grade
  o > 95% compliance with standards
  o Weekly verification testing
  o Global database checks for range, consistency, logic, completeness, and inspection of suspect data
• Distress rating
  o > 80% with < 6-point difference, automated to manual rating
• Images
  o > 98% compliance with standards; each control section < 5 consecutive images failing to meet criteria (clarity, brightness, no objects on lens)

New Mexico (New Mexico DOT 2018)
• Data completeness
  o > 98% of total network miles tested
  o 100% accurately populated with description information
  o > 98% populated with required data elements
  o > 98% with < 10 consecutive missing segments (< 500 ft [152 m])
• IRI, rut, and fault > 95% compliant with requirements
• Distress rating > 95% compliant with requirements
• Location 100% accurate and complete
• Images 100% compliant with requirements (20% random sample)

Oregon (Oregon DOT 2018)
• Route, lane, direction, LRS: 100%
• Images: < 5 of 100 consecutive images with inferior quality
• Pavement type: 100%
• Data completeness
  o Total collection length (excludes inaccessible locations) > 99%
  o No blank distress data fields without exclusion code and reason: 100%
  o Compared to agency DCV: IRI ± 20%; rut ± 0.20 in. (5.1 mm)
• Distress ratings
  o Compliant with control site test requirements: 100%
  o Year-to-year comparison: flag changes in good/fair/poor categories, overall index > 5 or < 15
Pennsylvania (Pennsylvania DOT 2018a)
• 2.5% random sample
• Minimum of 3 agency raters perform at least 2 ratings per site
• Analysis of historical data: plot 3 years of condition data, summed and normalized for all segments in a batch by pavement type; differences are checked and sent to the vendor for review and resubmission as needed
• Image brightness, clarity, focus
• Check the reported location of the images for all interstates and 4 or 5 routes from each county in each batch
• Cross-check pavement surface type with agency maintenance and construction work history
• Check upload of data into Roadway Management System
• Average for each distress and severity on each site is used to evaluate the vendor's results
• Criteria (percent within limits)
  o IRI: ± 25% of avg agency value (95 PWL)
  o Distress: ± 20% of avg agency value (90 PWL)
  o Location: correct segment surveyed (100 PWL)
  o Section begin: ± 40 ft (12 m) of agency value (95 PWL)
  o ROW images: legible signs (80 PWL)

Quebec (Quebec MoT 2017)
• Drift of measurements
  o 328-ft (100-m) segments
  o IRI, rut, and cracking: 95% of values within x̄ ± 3σ
• Comparison of 2 surveys
  o Avg annual difference (e) between current-year and previous-year survey measurements: IRI −0.2 ≤ e ≤ 0.4; rut −1 ≤ e ≤ 2.25; cracking −0.2 ≤ e ≤ 1.38

Oregon (continued)
• Data completeness (continued)
  o No data outside allowable range: 100%
  o Bridge, construction detours, and lane deviations marked correctly: > 90%
• IRI, rut, and fault
  o Compliant with control and verification test requirements: 100%
  o Data within expected values (from previous survey) > 95%: IRI ± 10%; rut ± 0.10 in. (2.5 mm); fault ± 0.05 in. (1.3 mm)
• Distress: windshield rating ± 10-point difference; interstate > 95%; non-interstate > 90%; all routes < 10% of 0.1-mi (0.16-km) segments rated incorrectly

Tennessee (Tennessee DOT 2018)
• Vendor's quality management report
  o Equipment certification
  o Collection procedures and protocols
  o Personnel training information
  o Equipment calibration, checks, maintenance, equipment issues, and corrective actions
  o Verification test results
  o Data format, sensor data, distress, image, and time-series checks
• HPMS submittal
• Upload into pavement management system

Texas (Texas DOT 2018a)
• IRI and rut: 100%
  o Delivery verification testing
  o Daily equipment checks
  o Weekly verification site testing
• Distress ratings: 90%
  o Location and format
  o 6% sample tested for distress rating agreement

Utah (Utah DOT 2018)
• Review data and images for completeness
• Data within expected values
• Check data for consistency and compare to historical values

Vermont (Vermont Agency of Transportation 2018)
• Data completeness
• Check for invalid condition assessments
• Video/condition assessment correct at first and last mile of each road segment
• Fatigue cracking and miscellaneous cracking ≤ 100%
• 5% random sample of lowest and highest percentiles
  o Evaluate IRI, rut, and cracking from ROW and pavement images
• Images
  o Completeness
  o Invalid images
  o Alignment and begin/end
  o Start/stop measures
  o Clarity and brightness (random sample)
• Upload into pavement management
  o Evaluate import errors
  o Evaluate missing data
Virginia (Virginia DOT 2015)
• Bridge start and end locations ± 0.001 mi (0.0016 km)
• LRS: > 90% of landmarks ± 0.01 mi (0.016 km) for sections < 1 mi (1.6 km) and ± 0.05 mi (0.08 km) for sections > 1 mi (1.6 km)
• Initial data screening: < 10% of length failing
  o IRI: reject < 0 and > 500 in./mi (32 mm/km); investigate < 30 in./mi (1.9 mm/km) and > 300 in./mi (19 mm/km)
  o Rut: reject < 0 and > 2.5 in. (64 mm); investigate > 1 in. (25 mm)
  o Speed follows vendor's procedure
• Images: < 5 of 100 continuous images shall have inferior quality
  o Downward image: resolution to identify a 0.125-in. (3.175-mm) wide crack; illumination to provide sufficient contrast and crack shadows
  o Forward image: resolution to identify a 0.125-in. (3.175-mm) wide crack
  o ROW image: sufficiently clear to identify roadway assets
  o Images and data located ± 5.28 ft (1.61 m) or better
• Distress indices: > 90% of randomly selected sections ± 10 points

Washington State (Washington State DOT 2018)
• 5% random sample of NHS and 5% random sample of local NHS
• DMI: accuracy ± 3 ft (0.9 m); repeatability COV < 10% (3 runs)
• IRI (annual): accuracy 95% ProVAL; repeatability 92% ProVAL
• Rut (annual): accuracy ± 0.08 in. (2 mm) of manual survey; repeatability COV < 10% (3 runs)
• Fault (annual): accuracy ± 0.08 in. (2 mm) of manual survey; repeatability COV < 10% (3 runs)
• IRI (weekly): accuracy moving avg ± 20 in./mi (1.3 mm/km); repeatability std < 8.5 in./mi (0.54 mm/km)
• Rut (weekly): accuracy moving avg ± 0.08 in. (2.0 mm); repeatability std < 0.025 in. (0.6 mm)
• Distress (NHS): 90% of t-test and R2 (2 raters)
• Distress (local NHS): > 90% of manual survey

Wyoming (Wyoming DOT 2018)
• Quality images
• Correct section (milepost) representation
• Proper reference identification
• Accurate crack detection on concrete pavement and bridge decks
• DMI ± 500 ft (152 m)
• Flag all construction areas
• Plot GPS data to locate anomalies
• Data checks
  o IRI: check for zero, null, and > 400 in./mi (25 mm/km) values
  o Rut: check for zero and > 0.5 in. (13 mm)
  o Re-evaluate HPMS concrete cracking > 70% for > 2.0 mi (3.2 km)
• Distress data checks on segment condition that varies from year to year
  o Inter-rater and intra-rater evaluation
  o 100% review of all rated roads
  o Accuracy target: > 80% on concrete pavements; > 90% on asphalt pavements

West Virginia (West Virginia DOT 2018)
• Weekly control site (up to 10% sample)
  o IRI, rutting, faulting, GPS coordinates, % cracking, cross slope, grade, horizontal and vertical curves > 95% compliant
• Database consistency and completeness
  o Distress ratings > 95% compliant
  o GPS and location 100% compliant
• Imagery
  o 20% sample check upon delivery
  o > 98% compliant

Note: Avg = average; Std = standard deviation; COV = coefficient of variation.
In addition, an element-level review of asphalt pavements and JPCP is conducted. An element is defined as 26.4 ft (8 m) for asphalt pavements and a single slab for JPCP. The element-level review serves as a spot check in the event that a more detailed review is warranted. The element-level review also allows for direct comparison of vendor-collected data with the office image quality review (see the section below for details). For asphalt pavements, a manual survey, with a maximum length of 0.05 mi (0.08 km), is conducted. The QA field crews measure and record the length of alligator "A" and alligator "B" cracking. Alligator "A" cracking is defined as "single or double unconnected cracks in the wheel path parallel to the centerline" (Caltrans 2015). For JPCP, the QA field crews record the total number of transverse, longitudinal, and corner cracks (crack count only). The criteria for the element-level review are the same as described above.

Office Review

The Caltrans office review consists of an image quality assessment of the downward-facing images. Images are reviewed for clarity, stitching, and synchronization with geographic locations and attributes; the office reviews 5% to 10% of all submitted images. The office review is conducted on 0.1-mi (0.16-km) segments for up to 20 consecutive images. The amount of cracking is recorded and compared to the vendor's value, which shall be within 10% of the agency value. The vendor's image quality is accepted if 85% or more of the reviewed segments "pass" the criteria. Table 33 illustrates an example of the office review report.

Error Resolution

During agency acceptance, there is a possibility that portions of the collected data (or images) do not meet agency criteria. It is important to establish the actions to be taken should any of the collected data fail to meet the established requirements.
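The office-review arithmetic can be reproduced directly from the per-segment absolute-difference readings; the sketch below applies the 10%-difference and 85%-acceptance criteria described above (the function name is illustrative, and the readings are the Site A and Site B values reported in Table 33):

```python
def office_review(readings, max_diff: float = 10.0, required: float = 0.85):
    """Each reading is the absolute % difference between the vendor's and the
    agency's cracking value for one reviewed segment. Readings <= 10% are
    acceptable; image quality is accepted when >= 85% of segments pass."""
    acceptable = sum(r <= max_diff for r in readings)
    rate = acceptable / len(readings)
    return rate, "Accept" if rate >= required else "Reject"

site_a = [6, 7, 4, 5, 0, 8, 10, 3, 9, 11, 5, 9, 5, 6, 13, 8, 9, 8, 6, 3]
site_b = [5, 0, 3, 11, 9, 5, 7, 4, 12, 3, 15, 8, 9, 11, 6, 8, 7, 16, 3, 2]
print(office_review(site_a))  # (0.9, 'Accept')
print(office_review(site_b))  # (0.75, 'Reject')
```

Site A has two readings above 10% (18/20 = 90% acceptable), while Site B has five (15/20 = 75%), matching the accept and reject actions in Table 33.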
Table 32. Example of Caltrans QA field review report.
For each of three sites, the report records the site location information and, for the total segments, the major rehab segments, and the non-major rehab segments, the number of segments passed and the resulting pass rating.

Table 33. Example of Caltrans office review report.
• Site A: absolute difference readings (%) of 6, 7, 4, 5, 0, 8, 10, 3, 9, 11, 5, 9, 5, 6, 13, 8, 9, 8, 6, 3; 18/20 = 90% acceptable; action: Accept
• Site B: absolute difference readings (%) of 5, 0, 3, 11, 9, 5, 7, 4, 12, 3, 15, 8, 9, 11, 6, 8, 7, 16, 3, 2; 15/20 = 75% acceptable; action: Reject
Note: Readings with an absolute difference greater than 10% do not count as acceptable.

Three agency examples are shown in
64 Automated Pavement Condition Surveys

Table 34 through Table 36 summarize Illinois DOT's, New Mexico DOT's, and Oregon DOT's corrective actions, respectively.

Independent Review

Of the responding agencies, North Carolina DOT and Texas DOT provided documentation related to independent review of vendor-conducted pavement condition surveys. The following sections provide a summary of agency requirements and results for the independent review.

North Carolina DOT

North Carolina DOT hired an independent QA consultant to rate, evaluate, and compare pavement distress data collected by the vendor. During the evaluation process for the 2015–2016

Table 34. Illinois DOT corrective action (Illinois DOT 2018).
• Predeployment, daily start-up, and monthly testing (performed by vendor): IRI, rut, faulting, DMI, image quality, and LCMS™ crack identification. Acceptance criteria: certification letter provided by vendor at start and monthly; 100% review and set baseline values; monthly collection compared to baseline. Error resolution: production survey cannot proceed.
• Post collection, IRI. QA/QC by agency: compared to historical data; random sample review. Acceptance criteria: test section within ±10% of initial collection; pass reasonability check of random sample. Error resolution: reprocess data; re-collection at discretion of agency.
• Rut. QA/QC by agency: compared to historical data; random sample review. Acceptance criteria: test section within ±0.08 in. (2 mm) of initial collection; pass reasonability check of random sample. Error resolution: reprocess data; re-collection at discretion of agency.
• Faulting. QA/QC by agency: compared to historical data; random sample review. Acceptance criteria: pass reasonability check of random sample. Error resolution: reprocess data; re-collection at discretion of agency.
• Cracking (CRCP, JRCP, and asphalt). QA/QC by agency: random sample review. Acceptance criteria: pass reasonability check of random sample. Error resolution: reprocess data; re-collection at discretion of agency.
• LRS. QA/QC by agency: 100% review; compared to GIS; begin/end station compared to Illinois Roadway Inventory System. Acceptance criteria: 100%. Error resolution: contact vendor for LRS reprocessing and alignment; re-collection at discretion of agency.
• Images. QA/QC by agency: quality check for clarity and ratability. Acceptance criteria: 100%. Error resolution: contact vendor for image enhancement solution; re-collection at discretion of agency.
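The post-collection sensor checks in Table 34 reduce to tolerance tests against the initial collection at a test section. A minimal sketch, assuming the IRI tolerance is a percentage of the initial value and the rut tolerance is the absolute 0.08 in. (2 mm) figure from the table (function names are illustrative, not Illinois DOT's):

```python
# Tolerance checks in the spirit of Table 34's post-collection criteria.
# IRI: repeat run within +/-10% of the initial test-section value.
# Rut: repeat run within 0.08 in. (2 mm) of the initial value.

def iri_ok(initial, repeat, tol_pct=0.10):
    """True when the repeat IRI is within tol_pct of the initial IRI."""
    return abs(repeat - initial) <= tol_pct * initial

def rut_ok(initial, repeat, tol_in=0.08):
    """True when the repeat rut depth is within tol_in inches of the initial."""
    return abs(repeat - initial) <= tol_in

# Illustrative values (IRI in in./mi, rut in inches):
assert iri_ok(95.0, 102.0)      # ~7.4% difference -> pass
assert not iri_ok(95.0, 110.0)  # ~15.8% difference -> reprocess data
assert rut_ok(0.25, 0.31)       # 0.06 in. difference -> pass
```

A failed check would trigger the table's error resolution: reprocess the data, with re-collection at the agency's discretion.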
Summary of Agency Data Quality Procedures 65

Table 35. New Mexico DOT corrective action (New Mexico DOT 2018).
• Data completeness: > 98% of total network miles (excludes areas closed to construction). Action if criteria not met: return deliverable for re-collection.
• Description information: 100% of delivered data accurately populated with description information (system, route, direction, and begin and end latitude/longitude). Action: return deliverable for correction.
• Required data elements: > 98% of delivered data accurately populated with required data elements, excluding areas with expected limitations (e.g., IRI in low-speed areas). Action: return deliverable for correction.
• Missing segments: > 98% of delivered data with fewer than 10 consecutive fixed missing segments (500 ft [152 m] total). Action: return deliverable for correction.
• IRI, rut depth, and faulting: > 95% compliant with the verification testing requirements. Action: return deliverable for re-collection.
• Distress ratings: > 95% compliant with the verification testing requirements. Action: return deliverable for re-collection.
• Route no., direction, begin/end, GPS coordinates, district, and date collected: 100%; database check of accuracy and completeness. Action: return deliverable for correction.
• Photolog and pavement images: 100%; 20% random sample compliant with verification requirements. Action: return deliverable for re-collection.

data collection cycle, the QA consultant identified the following challenges and resolutions for conducting the independent review (North Carolina DOT 2018):
• Converting multiple severity types per distress into one value. The representative quantity was determined by assigning different weights to severity levels for each distress type (alligator, transverse, longitudinal, and lane joint cracking, patching, and bleeding). The lowest severity was assigned a weighting value of 1, moderate severity a value of 1.5, and high severity a value of 2. Weighting values were multiplied by the total quantity of distress for each severity category and summed to represent each distress quantity.
• Averaging total quantity of each distress type over the sample length. Using the distress severity weighted quantities, all area-based distress quantities were divided by the maximum sample area. For longitudinal and lane joint cracking, the weighted quantity was divided by the total sample length (5,280 ft [1,609 m]). For transverse cracking, the total weighted length was summed and divided by an assumed full-lane width (12 ft [3.7 m]), resulting in a total number of transverse cracks (maximum 5,280 cracks per mi [3,280 cracks per km]).
• Determining reasonable control limits, by distress type, to minimize the variability between the QA consultant and the vendor ratings. Data from the 2015 survey, consisting of a 1.28% random sample (943 segments) of asphalt pavements on the state highway network, were evaluated. Sample ratings were carefully inspected to ensure accuracy of the distress types, severities, and quantities. A representative quantity and weighted percent was determined for each distress type on each sample for both the QA consultant and vendor ratings. The difference between the weighted percent of each distress type was calculated for all samples, and the standard deviation was used to create a unique set of control limits for each distress type.

Line-of-equality graphs were plotted to assist in identifying the presence of any bias and trends in over- or underrating distresses. Figure 22 and Figure 23 illustrate example equality plots of vendor versus QA consultant assessment for alligator and transverse cracking, respectively.
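The severity-weighting and normalization steps described above can be sketched directly: low severity carries a weight of 1, moderate 1.5, and high 2, and the weighted sum is normalized by the sample dimension (area for area-based distresses, the 5,280-ft sample length for longitudinal and lane joint cracking). Function and variable names are illustrative, not from the QA consultant's report.

```python
# Sketch of the North Carolina QA consultant's severity-weighting scheme.

SEVERITY_WEIGHTS = {"low": 1.0, "moderate": 1.5, "high": 2.0}

def weighted_quantity(quantities_by_severity):
    """Collapse per-severity quantities into one representative quantity."""
    return sum(SEVERITY_WEIGHTS[sev] * qty
               for sev, qty in quantities_by_severity.items())

def weighted_percent(quantities_by_severity, sample_size):
    """Normalize the weighted quantity by sample area or sample length."""
    return 100.0 * weighted_quantity(quantities_by_severity) / sample_size

# Illustrative longitudinal cracking over a 5,280-ft (1-mi) sample, in feet:
q = {"low": 800.0, "moderate": 400.0, "high": 100.0}
print(weighted_percent(q, 5280.0))  # weighted quantity 1,600 ft over 5,280 ft
```

The same weighted percent, computed for both the consultant's and the vendor's ratings, is what feeds the control limits and the equality plots discussed next.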
Table 36. Oregon DOT corrective action (Oregon DOT 2018).
• Route, lane, direction, LRS. Acceptance: 100%. Checks performed: review previous week's images for correct location information. Action if criteria not met: reject deliverable and re-collect route.
• Images. Acceptance: < 5 consecutive images with inferior quality. Checks performed: review previous week's images for coverage and quality (lighting, exposure, obstructions, focus). Action: reject deliverable and re-collect route.
• Pavement type. Acceptance: 100% compared to provided type; < 2 incorrect segments per 10. Action: resolve all discrepancies prior to final distress rating.
• Data completeness (by district). Acceptance: 99% of total collection miles (action: reject deliverable, re-collect route); 100% no blank fields without exclusion code and reason (action: return deliverable for correction); 100% no data outside the allowable ranges; 90% bridge events, construction detours, and lane deviations marked correctly.
• Sensor data: IRI, rut, and faulting (by district).
  – 100% compliant with control site and verification testing requirements. Action: reject all data since last passing verification; re-calibrate DCV and re-collect affected routes.
  – 95% of data within expected values based on year-to-year time series (IRI ±10%, rut ±0.10 in. [2.5 mm], fault ±0.05 in. [1.3 mm]). Action: flag discrepancies and investigate; re-collect if wet weather or traffic congestion create issues that can reasonably be avoided; accept data on a case-by-case basis if differences are due to construction, maintenance, or greater-than-expected deterioration, or where data appear reasonable based on visual observation of the road surface.
  – 95% agreement when compared with ODOT's DCV on a sample of routes (IRI ±20%, rut ±0.20 in. [5.1 mm]). Action: flag discrepancies and investigate; approve data on a case-by-case basis if differences can be reasonably explained; when significant differences exist and the cause cannot be reasonably determined, verify calibrations for all DCVs, review data for systematic errors, and re-collect if equipment issues are found.
• Distress ratings (by district).
  – 100% compliant with control-site testing requirements. Action: return deliverable for reevaluation.
  – Interstate 95%, non-Interstate 90%; all routes < 10% of 0.1-mi (0.16-km) segments rated incorrectly. Checks performed: compare current year versus previous year (considering recent construction and maintenance) and flag good/fair/poor category changes and sections where the current-year overall index differs by more than +5 or −15 points from the previous year; compare the overall index with a windshield rating and flag sections with a ±10-point difference. Action: flag discrepancies and investigate; compare distress quantities and review severities; check that distresses are within lane limits; check distress length and area measurements marked on pavement images and summarized in the shell table; report incorrect distress ratings and return deliverable for correction; accept the data if the current-year distress ratings appear valid, regardless of the previous year's ratings.
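The year-to-year reasonableness screen in Table 36 can be sketched as a per-segment flagging routine. The tolerances (IRI ±10%, rut ±0.10 in., fault ±0.05 in.) come from the table; the data layout and function name are assumptions for illustration.

```python
# Year-over-year sensor screen in the spirit of Oregon DOT's Table 36:
# flag any segment whose change exceeds the expected tolerances so it can
# be investigated (not automatically rejected).

def flag_segment(prev, curr):
    """Return the list of sensor checks a segment fails between two years."""
    flags = []
    if abs(curr["iri"] - prev["iri"]) > 0.10 * prev["iri"]:  # IRI +/-10%
        flags.append("iri")
    if abs(curr["rut"] - prev["rut"]) > 0.10:                # rut +/-0.10 in.
        flags.append("rut")
    if abs(curr["fault"] - prev["fault"]) > 0.05:            # fault +/-0.05 in.
        flags.append("fault")
    return flags

prev = {"iri": 120.0, "rut": 0.20, "fault": 0.04}  # last year's values
curr = {"iri": 140.0, "rut": 0.25, "fault": 0.05}  # this year's values
print(flag_segment(prev, curr))  # ['iri'] -- a 16.7% IRI change exceeds 10%
```

Flagged segments would then be reviewed case by case, as the table describes, since construction, maintenance, or real deterioration can legitimately exceed the tolerances.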
Figure 22. Line of equality plot for alligator cracking data (adapted from North Carolina DOT 2018). The plot shows vendor versus independent QA alligator cracking percent (0 to 100), with lower and upper limits, the line of equality, and a linear regression of y = 1.1097x + 0.853 (R² = 0.8538).

Figure 23. Line of equality plot for transverse cracking data (adapted from North Carolina DOT 2018). The plot shows vendor versus independent QA transverse cracking percent (0 to 20), with lower and upper limits, the line of equality, and a linear regression of y = 1.3243x + 0.1165 (R² = 0.842).
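The equality plots in Figures 22 and 23 can be summarized numerically: fit the vendor ratings against the QA consultant's and read the regression slope as a bias indicator, with a slope above 1 suggesting the vendor rates higher. The sketch below uses ordinary least squares on invented data points for illustration; it is not the North Carolina data set.

```python
# Ordinary least squares fit of vendor vs. QA ratings; slope > 1 indicates
# the vendor tends to rate higher than the independent QA consultant.
# The sample points below are illustrative, not from North Carolina DOT.

def ols(x, y):
    """Return (slope, intercept) of the least-squares line y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

qa = [5.0, 10.0, 20.0, 40.0, 60.0]      # QA consultant percent
vendor = [6.0, 12.0, 23.0, 45.0, 68.0]  # vendor percent
slope, intercept = ols(qa, vendor)
print(round(slope, 3))  # slope above 1 mirrors the positive bias in Figure 23
```

The fitted slopes reported in Figures 22 and 23 (1.1097 and 1.3243) were produced the same way, on the actual sample ratings.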
As shown in Figure 22, there may be a difference in ratings when the vendor noted alligator cracking percentages greater than 50%; however, additional sampling may be warranted to confirm this trend (North Carolina DOT 2018). As shown in Figure 23, there appears to be a positive bias when the vendor identified more and higher severity transverse cracking (North Carolina DOT 2018).

Texas DOT

Texas DOT conducts a statewide audit of the vendor pavement condition survey (Texas DOT 2018b). The audit includes the identification of a 6% sample of pavement segments, representing each county's centerline length and pavement type (asphalt pavement, JPCP, CRCP). Since some counties do not construct all three pavement types, the 6% sample is based on the available pavement types. Each sampled segment is 2 mi (3.2 km) long and has a previous fiscal year rated distress score between 60 and 90 (score range of 0 to 100). Once each sample segment has been manually rated by Texas DOT, the distress score is determined and compared to the vendor-collected data. If discrepancies exist between the manual and vendor results, Texas DOT will first review the vendor images and determine whether or not a follow-up site visit is required. Acceptance criteria for the annual audit are the same as those used during the acceptance of the vendor survey (see Table 31).

Data Integration, Storage, and Retention Requirements

Data integration, storage, and retention requirements pose challenges with automated data collection because of the volume and speed of data and image collection (e.g., images and distress data can be collected every 26 ft [7.9 m]). The following sections provide a summary of agency responses regarding data integration, data and image storage requirements, and data and image retention schedules.
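The storage pressure behind these requirements is easy to estimate from the capture interval quoted above. This back-of-the-envelope sketch assumes a nominal 2 MB per image, which is an illustrative figure, not a value from the synthesis:

```python
# Rough image-volume estimate for a network surveyed with images captured
# every 26 ft (7.9 m). The 2 MB-per-image figure is an assumption chosen
# for illustration only.

FT_PER_MILE = 5280
CAPTURE_INTERVAL_FT = 26

def images_per_network(miles, mb_per_image=2.0):
    """Return (image_count, terabytes) for one collection cycle."""
    images = miles * FT_PER_MILE / CAPTURE_INTERVAL_FT
    return images, images * mb_per_image / 1e6  # MB -> TB

count, tb = images_per_network(15_000)  # an "extra large" network
print(f"{count:,.0f} images, ~{tb:.1f} TB per collection cycle")
```

Even with modest per-image sizes, a large network yields millions of images and multiple terabytes per cycle, which is consistent with the terabyte-scale figures agencies report in Table 38.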
Integration

Data integration is the "process of combining or linking two or more data sets from different sources to facilitate data sharing, promote effective data gathering and analysis, and support overall information management activities in an organization" (FHWA 2010). For example, pavement management systems typically require data from the LRS (or GIS), traffic, construction history, maintenance and rehabilitation activities, and pavement condition, all or some of which may be managed by different offices within a given agency. Table 37 provides a summary of agency responses describing their pavement condition data integration processes, issues, and resolutions. The data integration process varies by agency, but in general, many agencies provide a spatial file for data collection, which is returned by the data collection team with populated data and images for incorporation into the pavement management system (with or without additional processing). Agencies reported a number of challenges with the automated data collection process, including the following:
• Matching LRS locations (four agencies);
• Software formats and systems (two agencies);
• Manual and automated cracking results not matching (one agency);
• Information technology support (one agency);
• Image storage (one agency);
• Data consistency (one agency);
Table 37. Agency pavement condition data integration process, issues, and resolutions.
• Arizona. Process: agency provides spatial file; vendor delivers data in GIS format; data stored in SQL database. Issues: 2017 comparison of manual and automated cracking did not match; challenges with converting from milepost to measurement system. Resolution: compare manual to automated and, if no correlation, use recent automated data; training and revising the GIS database to accommodate new data.
• British Columbia. Process: load data directly into pavement management system. Issues: matching with other agency referencing systems. Resolution: use GPS coordinates to match data.
• California. Process: GIS to develop an interactive map. Issues: information technology support. Resolution: hire a GIS expert.
• Connecticut. Process: working on integrating LRS into pavement management database. Issues: two LRS make it very labor intensive to migrate data into the pavement management system. Resolution: not provided.
• Georgia. Process: not fully developed at this time. Issues: locating segments; software format, LRS, and network change propagation. Resolution: standardize segmentation; address through software developers.
• Illinois. Process: LRS joined with other collected data; linked to a roadway database and a structure database. Issues: changes made to the LRS create data alignment problems. Resolution: reduce the number of changes to the LRS; coordinate and document so staff can adjust the data based on LRS changes.
• New Hampshire. Process: joined through HPMS coordinator. Issues: numerous and subtle; one was data consistency. Resolution: snapshotting data.
• North Dakota. Process: data exported to mainframe and averaged per segment; pavement management for additional analysis. Issues: mainframe system in need of replacement; image storage. Resolution: looking into software solutions to possibly replace the mainframe system; import to pavement management software.
• Oklahoma. Process: vendor submits data; agency checks integrity, conducts database queries, and reviews manual survey. Issues: updating antiquated proprietary software; staffing with qualified and experienced personnel. Resolution: working through issues with vendor and agency personnel.
• Ontario. Process: LCMS™ data processed and aggregated; exported to internally developed software to determine pavement metrics and indices. Issues: large volume of data and difference in metrics required development of new algorithms and verification protocols; revisions required on how new metrics contribute to performance metrics and the strategy decision process. Resolution: recruited staff with big data experience to develop solutions for handling the data set; redeveloped the asset management system to integrate the new data set and used artificial intelligence for distress categorization and quantification of LCMS™ data.
• Pennsylvania. Process: condition data delivered by flat file; data summarized by roadway segments; loaded into the Roadway Management System. Issues: the Roadway Management System is an old mainframe dating back to the mid-80s; many of the agency systems "don't speak the same language." Resolution: funding for a new system.
• Utah. Process: not provided. Issues: changing technologies; LiDAR to locate "assets" and lost control of pavement condition collection process. Resolution: not provided.
• Wyoming. Process: import data into pavement management system. Issues: overlapping sections due to equations and phantom "over-runs." Resolution: manually fix the equation problem; the phantom "over-run" sections are deleted.
• New algorithms and verification protocols required, with impacts on performance metrics and the strategy decision process (one agency);
• Changing technologies (one agency); and
• LiDAR system to locate assets changed the project scope (one agency).

Storage

As previously discussed, data and images collected from the pavement condition survey can quickly amass to terabytes of storage. Therefore, agencies were asked to provide the types of data (and images) stored, and the format, from the pavement condition survey (Table 38). Based on the results of the 16 responding agencies, the information stored includes images (16 agencies), raw data (14 agencies), condition index (10 agencies), 0.1-mi (0.16-km) data (2 agencies), and correspondence and sign/striping inventory (1 agency each). Data are stored in a database (7 agencies), database and spreadsheets (3 agencies), and database and TXT format or database and native format (1 agency each); images are stored in JPEG format (6 agencies); and data and images are accessed via a vendor-hosted site (3 agencies).

Retention Schedule

A retention schedule documents the type of information and the length of time for retaining it. Table 39 provides a summary of agency responses related to data and image retention schedules. Of the 16 responding agencies, 5 retain data and images indefinitely, and 3 retain only the data indefinitely. Two agencies retain all data and images for 4 years. Two agencies retain all data and images for 10 or more years, while one agency retains all data for only 10 years. One agency retains all data and images for 20 years and condition ratings indefinitely. One agency's retention plan is to retain all data for 30 or more years.

Costs of Data Collection, Processing, Quality Control, and Acceptance

Agencies were asked to provide costs associated with data collection, processing, QC, and acceptance.
However, separating costs by activity was difficult (or not possible) for the responding agencies. For example, the vendor contracting process may be based on a lump sum rather than line items, making it difficult to attribute costs to individual activities. In addition, agencies typically do not track employee hours by specific task, making it difficult to gauge the level of effort by activity. Another challenge for vendor-based surveys is comparing costs given economies of scale (i.e., potential cost savings with increased network length). Conceivably, agencies with smaller pavement networks would incur higher per-mile (kilometer) costs than agencies with larger networks. However, the duration of data collection may also affect costs. It is also challenging to compare agency- and vendor-conducted surveys, since vendor costs reflect all costs associated with data collection and analysis (e.g., building costs, computers, equipment costs, employee benefits). Finally, agencies require distress analysis using semiautomated methods, fully automated methods, or a combination of both, and not all agencies require assessment of the same distress types. Agencies requiring more semiautomated analysis may incur higher costs than those requiring primarily fully automated analysis. Therefore, direct comparison of costs should be done cautiously.
Table 38. Agency data and image storage.
• Arizona. Stored: photo and video log, LCMS™ images; raw distress data; good-fair-poor rating; sign and striping inventory. Format: data in SQL database; photo, video log, and LCMS™ images viewable online from vendor-hosted site.
• British Columbia. Stored: raw data; images. Format: Oracle database; photolog application.
• California. Stored: images; elemental data (26.4 ft [8.0 m]); condition data and indices. Format: database.
• Connecticut. Stored: raw data; indices; images. Format: database for indices and condition data; images in JPEG; Excel, Access, and two databases.
• Georgia. Stored: video images; road surface and raw sensor data; sensor data on 0.1-mi (0.16-km) segments; 3D cracking data being evaluated. Format: video in JPEG format; sensor readings in database.
• Illinois. Stored: all data; all images. Format: sensor data stored in tabular format; images in JPG.
• Kansas. Stored: images; profile data; processed data; summary indices. Format: all collected data and processed indices stored in native format; summarized data stored in a database.
• New Hampshire. Stored: all collected data and images (approximately 25 MB/year). Format: database.
• North Dakota. Stored: images; raw data until processed. Format: images in JPG format; data in TXT format.
• Oklahoma. Stored: images; tabular and testing data; correspondence. Format: JPEG; database and spreadsheets; PDF.
• Ontario. Stored: raw data (approximately 17 TB/year); pavement distress data; pavement performance metrics; images. Format: stored in a database that interfaces with various software to extract, display, and report information in various formats.
• Oregon. Stored: 0.1-mi (0.16-km) sensor and distress data; images; raw data. Format: database; raw data and images stored on USB drives.
• Pennsylvania. Stored: images; pavement indices. Format: images in JPG format; condition data in text and database files.
• Rhode Island. Stored: distress data; pavement condition scores; images. Format: distress data in Access; condition scores in Access or Excel; images stored on hard drives.
• Utah. Stored: data files (< 1 TB); forward images (4–8 TB); pavement images (4–8 TB). Format: shape files and spreadsheets in 0.1-mi (0.16-km) segments.
• Wyoming. Stored: images; raw data. Format: vendor website.
Table 39. Agency data and image retention schedule.
• Arizona: all data (since 2014); retained indefinitely.
• British Columbia: all data; retained indefinitely.
• California: all data and images; 10 or more years.
• Connecticut: all data (since 2001); indefinitely, currently under review.
• Georgia: all data and images; 10 years for data, images to be determined.
• Illinois: tabular sensor data and images (since 2007) for 20 years; condition rating results (IRI, rutting, faulting, identified distress, and condition rating) indefinitely.
• Kansas: all data and images; indefinitely.
• New Hampshire: all processed data and images (since 2009); images and processed data retained indefinitely, with high-resolution images stored separately.
• North Dakota: all images and processed data; 4 years.
• Oklahoma: control and verification site testing data, video log, reports, database, correspondence, etc.; video log 4 years (older images stored on external drives and archived), all other data indefinitely.
• Ontario: data for the last 6 years on external drives; a network (with mirror redundancy) hosts previous years' processed data and current-year field data; more than 30 years (planned).
• Oregon: 0.1-mi (0.16-km) sensor and distress data, images, and raw data; indefinitely.
• Pennsylvania: all data and images (since 1997); indefinitely.
• Rhode Island: all distress data and pavement condition scores; indefinitely.
• Utah: all images; forward images indefinitely, pavement images until new ones are collected, marked-up pavement images kept indefinitely for comparison.
• Wyoming: all data; 10 years.

Agencies were asked to provide costs for conducting pavement condition surveys. Since only one agency was able to provide estimated costs for acceptance testing, the costs summarized in Table 40 exclude agency acceptance costs.
Table 40 is arranged according to network length (small to extra large); cost per mile (kilometer); whether distress is assessed using semiautomated analysis, fully automated analysis, or a combination of both; whether the agency or the vendor collects and analyzes the data and images; and the distress types collected and analyzed. The last column of Table 40 represents only distress data, since sensor-based data (e.g., IRI, faulting, rutting) are collected and analyzed automatically by the majority of agencies. As shown in Table 40, one of the responding agencies conducts fully automated analysis on asphalt pavements only and on a small network length for $43/mi ($27/km). Four of the responding agencies conduct semiautomated analysis, with costs ranging from $34 to $101/mi ($21 to $63/km). Semiautomated costs, in general, show a potential effect of economy of scale, where larger networks result in lower costs and shorter networks result in higher costs. For agencies that require the vendor to conduct both semi- and fully automated distress assessments, costs range from $28 to $115/mi ($17 to $71/km). As with semiautomated analysis, a longer network length, in general, relates to a lower cost compared with shorter network lengths.
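The paired per-mile and per-kilometer figures quoted above are simple unit conversions (divide by 1.609 km/mi). A quick sketch reproduces the quoted ranges:

```python
# Per-mile to per-kilometer cost conversion for the Table 40 figures.

KM_PER_MILE = 1.609344

def per_km(cost_per_mile):
    """Convert a $/mi cost to $/km."""
    return cost_per_mile / KM_PER_MILE

for cost in (34, 101, 28, 115):
    print(f"${cost}/mi -> ${per_km(cost):.0f}/km")
```

For example, $34/mi rounds to $21/km and $101/mi to $63/km, matching the semiautomated range in the text.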
Table 40. Summary of pavement condition survey costs.
• Medium network; $199 ($165); fully automated; agency collects/analyzes: cracking and potholes (asphalt pavements); cracking, durability, joint seal damage, and broken slabs (JPCP).
• Small network; $159 ($99); semiautomated; agency collects/analyzes: cracking (asphalt pavements).
• Large network; $82 ($51); semiautomated; vendor collects/agency analyzes: cracking, potholes, and durability (asphalt pavement, JPCP, and CRCP).
• Extra large network; $34 ($21); semiautomated; vendor collects/agency analyzes: cracking (asphalt pavement, JPCP, and CRCP).
• Extra large network; $50 ($31)⁴; semiautomated; vendor collects/agency analyzes: cracking, surface characteristics, and punchouts (asphalt pavement, JPCP, and CRCP).
• Extra large network; $58 ($36); both; vendor collects/agency analyzes. Fully automated: potholes and raveling (asphalt pavement); durability, cracking, and blowups (JPCP, CRCP). Semiautomated: faulting, polishing, broken slabs (JPCP); blowups, durability, punchouts, spalling (CRCP); texture and patching (JPCP, CRCP).
• Small network; $115 ($71); both; vendor collects/analyzes. Fully automated: alligator, block, edge, longitudinal, and transverse cracking (asphalt pavement only). Semiautomated: bleeding, patching, and potholes (asphalt pavement only).
• Extra large network; $101 ($63); semiautomated; vendor collects/analyzes: cracking, potholes, punchouts (asphalt pavement, JPCP, and CRCP).
• Extra large network; $76 ($47); semiautomated; vendor collects/analyzes: cracking, patching, raveling, weathering, and joint deterioration (asphalt pavements); cracking, joint seal damage, patching, and spalling (JPCP).
• Small network; $75 ($47); both; vendor collects/analyzes. Fully automated: cracking (asphalt, JPCP, and CRCP). Semiautomated: bleeding, patching, potholes, raveling (asphalt pavement); cracking, faulting, joint seal damage, patching, spalling, and shattered slabs (JPCP); patching, punchout, and lane/shoulder condition (CRCP).
• Medium network; $65 ($40); both; vendor collects/analyzes. Fully automated: cracking (asphalt pavement and JPCP). Semiautomated: bleeding, patching, potholes, raveling, spalling (asphalt pavement and JPCP).
• Medium network; $43 ($27); fully automated; vendor collects/analyzes: cracking and bleeding (asphalt pavement).
• Large network; $28 ($17); both; vendor collects/analyzes. Fully automated: cracking, potholes, and raveling (asphalt pavement). Semiautomated: cracking, durability, and polishing (JPCP).

Notes: (1) Small < 5,000 mi (8,000 km); medium 5,000–10,000 mi (8,000–16,000 km); large 10,000–15,000 mi (16,000–24,000 km); and extra large ≥ 15,000 mi (24,000 km). (2) "Agency" means the agency collects and analyzes data; "vendor" means the vendor collects and analyzes data; "vendor/agency" means the vendor collects data and the agency analyzes data. (3) Excludes fully automated sensor data and includes only general distress categories (i.e., cracking includes multiple crack types). (4) Includes only data collection costs.
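The network-length bins defined in note 1 of Table 40 can be sketched as a small classifier. The half-open boundary handling (e.g., exactly 10,000 mi falling into "large") is an assumption, since the footnote's ranges overlap at the bin edges:

```python
# Network-length bins from Table 40, note 1 (thresholds in miles).
# Boundary handling at exactly 5,000 / 10,000 / 15,000 mi is assumed.

def network_size(miles):
    if miles < 5_000:
        return "small"
    if miles < 10_000:
        return "medium"
    if miles < 15_000:
        return "large"
    return "extra large"

print(network_size(4_200))   # small
print(network_size(15_000))  # extra large
```

Such a binning is what allows the per-mile costs in Table 40 to be compared across agencies of broadly similar network size.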
Accomplishments and Challenges of Automated Condition Surveys

As a follow-up to the survey, agencies were asked to provide information related to their successes and challenges with automated condition surveys. The following provides a summary of responses.

Accomplishments
• Safer, faster, and more efficient and consistent pavement condition data collection compared to manual surveys.
• Automated crack detection performs to the agencies' satisfaction in identifying crack type and severity.
• Ability for various agency users to use the pavement condition data and images; for example, extracting guardrail asset data from photo logs and using automated crack detection results to evaluate the performance of different pavement treatment types.
• Information from the automated pavement condition survey has been one of the greatest tools for assisting with identifying projects for the Statewide Transportation Improvement Program.

Challenges
• Determining data quality tolerances that are reasonable in relation to equipment capabilities and pavement management requirements.
• Identifying a method to accurately determine ground truth values for pavement distress (e.g., cracking) that is efficient and less labor-intensive.
• Quantifying pavement distress (e.g., cracking, patching, potholes, punchouts, raveling, bleeding, and joint condition) is difficult without a standardized method.
• Measuring consistent rut depth using different data collection equipment.
• Developing protocols and distress detection algorithms for new data sets and performance metrics.
• Getting all the required resources together, from data collection to delivery.
• Generating meaningful reports, performance trends, and project assessments from the processed data.
• Maintaining consistent distress ratings year to year and vendor to vendor.