Automated Pavement Distress Collection Techniques (2004)

Suggested Citation: "Chapter Six - Quality Assurance." National Academies of Sciences, Engineering, and Medicine. 2004. Automated Pavement Distress Collection Techniques. Washington, DC: The National Academies Press. doi: 10.17226/23348.

CHAPTER SIX

QUALITY ASSURANCE
Relatively few agencies provided significant feedback on the QC and QA procedures used for pavement data collection and processing. However, some Canadian provinces are heavily involved in QC and QA, especially with sensor-related data, because they have found that those issues must be addressed if high-quality data are to be received from either contract or in-house data collection. Very few states indicated having gone to the extent that Canada has in applying statistical concepts to QM. Therefore, much of this chapter relates to the Canadian experience.

The NQI developed a glossary of highway QA terms that focused primarily on highway construction materials and processes (62). Three definitions deemed appropriate to pavement condition data collection and processing have been adapted from that publication.

• Quality management (QM)—QM is the umbrella term for the entire package of making sure that the quality of the product, process, etc., is what it should be.
• Quality control (QC)—QC is defined as those actions taken by the pavement data collector, either contract or in-house, to ensure that the equipment and processes involved are in control, such that high-quality results can be obtained.
• Quality assurance (QA)—QA is defined as those actions (reviews, tests, etc.) taken by the buyer or user of the data to ensure that the final product is in compliance with contract provisions or specifications. Note that this is a different definition than that used in the materials arena, where QA is defined as making sure that the quality of the product is what it should be. Thus, QM and QA are synonymous from a materials perspective.

These definitions are consistent with, but more specific than, standards issued by ASTM (63) and are philosophically consistent with concepts put forth by the International Organization for Standardization (64). They will be used throughout the remainder of this synthesis such that, for example, QM will refer to the overall process of obtaining high-quality data for a given element. However, it will be evident in some of the discussions that not all participants in pavement data collection follow the same definitions and that the delineations between QA, QC, and acceptance are not always clear.

A key feature of both in-house and contract data collection is the QM philosophy and the procedures applied. QM has clearly become a major issue with pavement condition data, as more and more agencies are collecting significant amounts of data and some have found that the quality is not what it should be. The approaches to QM being used by agencies and vendors are summarized and discussed in this chapter.

Some agencies use the guidelines put forth in the AASHTO provisional standards as the basis for their procedures (12). An example for asphalt pavement cracking is reproduced here (guidelines for other data elements are similar) (17):

Quality Assurance Plan—each agency shall develop an adequate quality assurance plan. Quality assurance includes survey personnel certification training, accuracy of equipment, daily quality control procedures, and periodic and ongoing control activities. The following guidelines are suggested for developing such a plan.

Qualification and Training—agencies are individually responsible for training and qualifying their survey personnel and/or qualifying contractors for proficiency in pavement rating or in operating equipment that must be used as a part of quality assurance.
Equipment—the basic output of any equipment used shall be checked or calibrated according to the equipment manufacturer’s recommendations. The equipment must operate within the manufacturer’s specifications. A regular maintenance and testing program must be established for the equipment in accordance with the manufacturer’s recommendations.

Validation Sections—sections shall be located with established cracking types and levels. These sections shall be surveyed on a monthly basis during the data collection season. Comparison of these surveys can provide information about the accuracy of results and give insight into which raters/operators need additional training. Validation sections shall be rotated or replaced on a regular basis in order to assure that raters/operators are not repeating known numbers from prior surveys. As an alternative to this procedure, up to 5% of the data may be audited and compared as a basis for a quality check.

Additional Checks—additional checks can be made by comparing the previous years’ survey summaries with current surveys. At locations where large changes occur, the data shall be further evaluated for reasonableness and consistency of trends.

Those general statements from AASHTO define a QM framework, but they provide few specifics, because those are left to the individual agencies.
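The AASHTO text above allows an audit of up to 5% of the data as an alternative to dedicated validation sections. As a minimal sketch of how such an audit sample might be drawn, the following selects a simple random sample of surveyed sections for independent re-rating; the function name, section identifiers, and data layout are hypothetical and not part of the AASHTO standard.

```python
import random

def select_audit_sample(section_ids, audit_fraction=0.05, seed=None):
    """Randomly choose up to audit_fraction of the surveyed sections
    for independent re-rating and comparison (the AASHTO 5% audit)."""
    rng = random.Random(seed)  # seeded for a reproducible audit record
    n_audit = max(1, int(len(section_ids) * audit_fraction))
    return rng.sample(section_ids, n_audit)

# Example: draw 5% of 2,000 surveyed sections for manual comparison.
sections = [f"SEC-{i:04d}" for i in range(2000)]
audit_sites = select_audit_sample(sections, seed=42)
print(len(audit_sites))  # 100 sections to be re-rated and compared
```

Seeding the random generator leaves a reproducible record of how the audit sample was chosen, which can be useful if the audit itself is later disputed.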

One specific concept was provided by Larson et al. (65) in defining a vision statement for PMS data collection by Virginia: “To collect pavement condition data with sufficient detail and accuracy to model deterioration and perform multi-year planning with the PMS. Data variability for each data element must be smaller than the year-to-year change in that element.” Although apparently self-evident, the statement is important because it is easy to overlook the implications of not meeting the implied requirements for data quality. If there is too much inherent variability in the data as a result of equipment, human involvement, or process components, it is entirely possible that there will be too much “noise” to permit meaningful year-to-year comparisons. Depending on the level of noise and whether the data are intended for project- or network-level use, the data may be of limited value.
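The vision statement implies a simple screening test: a data element supports trend analysis only if its measurement noise is smaller than the change the element is expected to show between surveys. The sketch below expresses that comparison for one section; the function name and the IRI values are hypothetical illustrations, not agency data.

```python
from statistics import mean, stdev

def trend_detectable(repeat_runs, annual_values):
    """True if measurement noise (SD of repeat runs on one section) is
    smaller than the mean year-to-year change observed on that section."""
    noise = stdev(repeat_runs)
    changes = [abs(b - a) for a, b in zip(annual_values, annual_values[1:])]
    return noise < mean(changes)

# Hypothetical IRI values (m/km): five repeat runs vs. four annual surveys.
repeats = [1.42, 1.45, 1.40, 1.44, 1.43]
annual = [1.20, 1.31, 1.45, 1.62]
print(trend_detectable(repeats, annual))  # True: noise well below change
```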
The elements of QM have been applied to pavement data collection and processing for only a relatively brief period. A major reason appears to be that contract data collection is relatively new to the pavement community. As long as agencies collected their own data, users tended to accept the data delivered as “gospel.” Once vendors became active and began to deliver large quantities of data, it became evident that QM was an important issue. It is now recognized as important regardless of who collects the data. The LTPP program has recognized that data variability is a critical issue and has released two major reports, one addressing manual and image surface distress (13) and one addressing profile data (10). These reports are discussed in more detail in the following sections.

QM of pavement data collection and processing has reached a point similar to that experienced in the past by those working with the QM of highway materials and construction processes. That is, there is no clear delineation between what is the responsibility of the data collector (agency or vendor) and what is the responsibility of the buyer or user of the data collected. The control of data quality can be viewed as the responsibility of the collector, because that entity produces the data and has the tools and resources to influence the quality of those data. On the other hand, the buyer or user is in the best position to assess the acceptability of the data provided, because that entity is the ultimate owner of the data. The different responsibilities typically would be reflected in two very different elements of the overall QM plan: the QC plan and the QA plan.

Morian et al. (66) make the point that collection of pavement data can be quite different from a production process:

While the principles of statistical quality assurance, including quality control, acceptance and independent assurance, are well developed, their application to the collection of pavement management data is quite different. In most cases, these statistical tools are applied to processes in which the desirable product is known and the purposes of the control measures are to ensure the efficient production of that product. However, in the case of pavement management data, the right product is not known. The product itself is data indicating the actual variability in the condition of the roadway. Thus, the control limits are not constant and are a function of the data itself. It is extremely important to identify the sources of variability in each form of data, and to isolate those that can be controlled in the process from those that must be reflected in the data.

Because automated pavement data collection and processing is both a relatively new and a rapidly evolving area, one of the difficulties with developing QM plans is that there are few usable data, especially for surface distress work. For example, the development of a realistic QM plan for the evaluation of surface distress from images would require at least minimal knowledge of several parameters that have not been addressed by most agencies or vendors. The inherent variabilities of those parameters include the following:

• The condition of the pavement when imaging takes place—How accurately does the condition of the pavement at the time of imaging reflect the “true” pavement condition? The many factors contributing to variability in this instance include the moisture and thermal conditions of the pavement, the surface texture, the degree of shading, and the angle of the sun.
• The imaging process itself—With what degree of accuracy does imaging characterize the roadway it represents? The variability no doubt depends on the type of imaging used, the characteristics of the cameras employed, the geometric configuration of the data collection vehicle, the lighting employed, and many other factors. For that reason, there will almost certainly be a different set of answers for each vehicle, even from the same vendor or manufacturer.
• The data reduction process—How accurately does the data reduction process from images reflect the true pavement conditions? Again, there is no doubt that numerous factors contribute to variability, not the least of which are image quality, the hardware and software used in the evaluation, the training of the operators (or raters), and the protocols used.

The literature does not reveal full treatment of those issues, or even complete identification of them, by the pavement community. Therefore, there are numerous areas of potentially fruitful research; in the meantime, the community is left to do the best it can without complete information. The various QM procedures discussed here may be viewed as interim procedures that will be revised as time passes and the needed information becomes available.

It should be noted that there has been more work in, and better quantification of, some of the variability issues for sensor-measured data: roughness, rut depths, and joint faulting. In general, the QM of sensor-collected data is much more straightforward than that of data collected from images. After all, the former are objective measurements, whereas the latter often are subjective ones.

Before discussing QA issues related to the various pavement distresses, it should be noted that some agencies have requirements for location referencing. Virginia, for example, has proposed a requirement that 90% of pavement management section locations reported by the data collection contractor must fall within 0.016 km (0.01 mi) of the logged locations on Virginia’s mile point system [Pavement Data Collection and Analysis Invitation for Bid (IFB), VDOT, 2002, unpublished].
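Virginia’s proposed location-reference requirement reduces to a simple tolerance count over paired positions. A minimal sketch, assuming contractor-reported and agency-logged positions are available as matched lists in kilometers (the function name and values are hypothetical):

```python
def location_compliance(reported_km, logged_km, tol_km=0.016):
    """Fraction of reported section locations falling within tol_km
    (0.016 km = 0.01 mi) of the logged mile-point locations."""
    hits = sum(abs(r - l) <= tol_km for r, l in zip(reported_km, logged_km))
    return hits / len(reported_km)

# Hypothetical paired (contractor-reported, agency-logged) positions in km.
reported = [12.482, 13.497, 14.531, 15.509, 16.502]
logged = [12.480, 13.500, 14.510, 15.500, 16.500]
rate = location_compliance(reported, logged)
print(f"{rate:.0%} within tolerance -> {'pass' if rate >= 0.90 else 'fail'}")
# 80% within tolerance -> fail (one pair misses the 16-m tolerance by 5 m)
```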

Because there are significant differences in the way that QC and QA issues are addressed for the various pavement data elements, the major approaches identified for those elements are discussed separately.

SURFACE DISTRESS

Several agencies have developed QA requirements for data reduced from images. Generally, the process is to have data collectors (contract or agency) do pilot runs on selected test sections before beginning production testing. After processing and data reduction, the results of these pilot runs are compared with manually collected data from the same sites. If acceptable, these comparisons establish the data collectors’ ability to do the work. Then, during production, a quality monitoring process is usually employed, typically in the form of a blind testing program whereby the collector’s data and the monitor’s data are compared and acceptance criteria are applied. What are needed are better definitions of what constitutes “in control” and what constitutes “acceptable” data quality.

Generally, agencies see the need to compare vendor-furnished distress data with the distresses actually appearing on the roadway. For example, Alabama reported that rather than performing a QA process directly on images, a rating team is sent to random roadway locations, and what the team observed at those locations is compared with the vendor’s ratings. No details of the process were provided.

LTPP Work

The LTPP work on distress data reported variability, precision, and bias studies involving comparisons of field manual distress ratings performed during rater training sessions with ratings made from black-and-white photographs of the same sections (13). Among the findings was that the level of variability in distress ratings from individuals was unacceptably high. The concern was the range of ratings obtained from individual raters, because that was deemed to reflect the likely variability in the ratings on LTPP sections. It was speculated that discrepancies observed in distress time histories may result from this high variability. Note that this finding is directly related to the Virginia vision statement mentioned earlier, that “variability for each data element must be smaller than the year-to-year change in that element.”

The same LTPP study showed that the overall variability of manual distress data is lower than that of data taken from film interpretations. Furthermore, the bias (average difference between manual and film interpretations on the same sections) was much higher for the film interpretation than for the manual surveys. However, there was a reasonable correlation between manual and film interpretation values for most pavement distresses. The general trend was that field-determined distress levels were higher than those from photographs, possibly reflecting the relative difficulty of discerning low-severity distress from film as compared with field observations. This finding suggests that it is generally more difficult to discern surface distresses from images than from field observations. It may also follow that surface distress variability needs additional research and quantification before realistic QA provisions can be incorporated in distress data collection contracts. As noted earlier, the LTPP work is of a research nature, such that the findings might not be directly applicable to network-level pavement management work.
Other Agencies

Virginia

In an effort to deal objectively with highly variable pavement distress data, Morian et al. (66) examined the sources of variability in distress and roughness data in Virginia and recommended an overall process scheme for QA. In addition, they emphasized the importance of basing control and acceptance limits on sample sizes greater than one. These researchers mentioned the following sources of variation in surface distress data:

• Variation in pavement condition linearly along a highway;
• Variation resulting from the method of data collection employed (sample rate and sample size are important considerations);
• Variation owing to a lack of uniformity in rating procedures over time;
• Variation in pavement condition over time;
• Variation between multiple raters; and
• Variation owing to data referencing, processing, and handling errors.

The authors continued their research by addressing each of these sources of variability in the development of both a collection process control plan and an acceptance plan that recognize the inherent variabilities. Their effort is summarized here:

The effect of these multiple, compounded sources of variability is that the “true” distress condition of a roadway is never known. How then can a reference for controlling pavement management data processing be developed? The answer is that a statistical evaluation of distress results must be established which includes all the potential sources of variability inherent in a particular process. Using this approach, it is possible to effectively define an acceptable range of variability, within which results should be maintained. A change in any of the conditions of a distress survey may adversely affect the reliability of the results. As an example, comparing field-collected information with distress interpreted from imaging is analogous to comparing apples and oranges. Each is a different process, and therefore can be expected to produce different results. Neither inherently represents the “truth” (66).

In earlier work, Stoeffels et al. (67) applied an analogy between pavement rating groups and laboratories testing materials, and applied the difference two standard deviation (D2S) criteria (68) to pavement condition indices in Virginia (46). Those criteria state that the difference between two laboratories running the same test on the same material should not exceed D2S more than 1 time out of 20, or 5% of the time (i.e., a 95% confidence limit) (68). In that relationship, S is the pooled standard deviation of all paired test results to be compared. In practice, it is possible to apply a similar approach either to process control or as an acceptance criterion. That is, for control purposes, one can have a QC rater who monitors the production and applies the D2S criteria to production versus QC work; no more than 1 rating in 20 should vary by more than D2S. Similarly, an acceptance team could randomly sample the production and apply the D2S criteria to production versus acceptance results. As applied, the process addressed pavement condition indices; however, it could as well be applied to the individual distresses making up the indices.
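A minimal sketch of the D2S check as described above, assuming that S, the pooled standard deviation of paired results, is available from a prior precision study and that the production and QC pools have rated the same sections. The 1.96·√2 multiplier (about 2.77) is the standard two-sided 95% limit on the difference of two results that each have standard deviation S; the function names, the value of S, and the ratings are hypothetical.

```python
import math

def d2s_limit(pooled_sd):
    """ASTM-style D2S limit: two results on the same material should
    differ by more than 1.96 * sqrt(2) * S only about 1 time in 20."""
    return 1.96 * math.sqrt(2) * pooled_sd

def d2s_in_control(production, qc, pooled_sd):
    """Apply the D2S criterion to paired production/QC ratings: no more
    than 5% of pairs may differ by more than the D2S limit."""
    limit = d2s_limit(pooled_sd)
    exceedances = sum(abs(p - q) > limit for p, q in zip(production, qc))
    return exceedances / len(production) <= 0.05

# Hypothetical: S = 1.5 index points from a prior precision study, and
# paired condition-index ratings of the same 10 sections by two pools.
prod = [72, 65, 80, 58, 91, 77, 69, 84, 62, 73]
qc = [70, 67, 78, 60, 90, 75, 71, 82, 64, 72]
print(d2s_in_control(prod, qc, pooled_sd=1.5))  # True -> in control
```

The same function would serve for acceptance by substituting the acceptance team’s ratings for the QC rater’s.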
To establish precision and bias statements for the Virginia rating procedure, the research team evaluated ratings from the production contractor and quality-monitoring rater pools. The D2S process was used to define the precision, and average results from the two individual rating pools were used to establish the acceptable process bias.

Although the details of the statistical approach applied by those researchers (66,67) are beyond the scope of this synthesis, it is clear that they have laid the groundwork for additional studies that could address the further use of applied statistics in the automated collection and processing of pavement condition data. Although there is no doubt that every rating procedure will involve a different set of statistical parameters, and it may never be possible to establish generally accepted limits of variability, a general framework for QM procedures needs to be established. Such a framework would provide defensible approaches both to process control and to the acceptance of automated pavement data.

Quebec

In Quebec’s 2002 cracking analysis contract, QA provisions state that the Ministry will select from 2% to 5% of the roadway images for analysis of data quality (60). The Ministry uses the same images as those used by the contractor and rates them according to its standard crack identification protocol. If the bias between the results of its ratings and the results presented by the contractor does not meet the requirements the Ministry stipulates, and no explanation can be furnished, the lot of 100 km is rejected. The Ministry reserves the right to return the lot to the contractor for re-evaluation. The Ministry criteria are as follows:

• Cracking index—the computed index must be within ±15% of the Ministry-measured index;
• Longitudinal cracking—±10 m/100 m in 100% of the cases and ±5 m/100 m in 80% of the cases; and
• Transverse cracking—±5 cracks/100 m in 100% of the cases and ±3 cracks/100 m in 80% of the cases.

Note that the Ministry recognizes variability and that there will not be a perfect match between Ministry and contractor evaluations of the same images. The acceptance criteria address both a cracking index and its major components. The transverse and longitudinal cracking criteria operate on two levels, with the 80-percentile criteria more stringent than the 100-percentile criteria. The structure of these criteria is reminiscent of the bell-shaped or normal curve, where the majority of the population is close to the mean, yet some results may vary by a relatively large amount from that mean.
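Quebec’s two-level criteria translate directly into a lot-acceptance test. The sketch below applies the longitudinal-cracking rule (±10 m/100 m in 100% of cases and ±5 m/100 m in 80% of cases) to paired Ministry and contractor results for the 100-m segments of a lot; the function name and the data are hypothetical.

```python
def lot_accepted(ministry, contractor, outer, inner, inner_share=0.80):
    """Quebec-style two-level lot check: every paired difference must be
    within +/-outer, and at least inner_share of the differences must
    fall within the tighter +/-inner band."""
    diffs = [abs(m - c) for m, c in zip(ministry, contractor)]
    within_outer = all(d <= outer for d in diffs)
    within_inner = sum(d <= inner for d in diffs) / len(diffs)
    return within_outer and within_inner >= inner_share

# Hypothetical longitudinal cracking (m/100 m) for ten 100-m segments.
ministry = [12.0, 3.5, 0.0, 8.2, 22.1, 5.0, 15.3, 0.0, 9.9, 30.4]
contractor = [13.5, 2.0, 0.0, 9.0, 28.0, 4.1, 16.0, 1.5, 9.0, 33.0]
print(lot_accepted(ministry, contractor, outer=10.0, inner=5.0))  # True
# For transverse cracking the same rule applies with outer=5, inner=3.
```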

British Columbia

The British Columbia Ministry of Transportation and Highways (BCMoTH) has contracted pavement condition data collection for many years and has gradually evolved a QA philosophy. Excerpts are quoted here:

Since 1993, BCMoTH has contracted out over 40,000 lane-km of automated network level, pavement surface condition surveys on its main highway network. The surveys include surface distress ratings, rut depth and roughness measurements in both wheel-paths, and video-logs of the right-of-way.

Because the Ministry is committed to open contracting, QA plays a critical role in ensuring that the data [are] accurately collected and repeatable from year to year. The Ministry has developed and implemented comprehensive QA procedures that consist of multiple levels of field-testing. BCMoTH has worked closely with its contractors in an open effort to ensure the testing is practical and representative of the intended end use of the data for pavement management. Both of these interrelated factors played a key role in the development of the QA procedures. The entire methodology is dependent upon the contractor having real-time processing capabilities with on-board computers to not only address and resolve any processing issues arising from the initial calibration process, but also to enable rapid response during the production QA at any time.

Practicality was important for two reasons: firstly, the process must provide a realistic test of the contractor’s capabilities and, secondly, the process should not present a huge burden to Ministry personnel to implement and monitor. A data QA program that cannot be effectively implemented provides little value in terms of agency understanding and thereby erodes the level of accuracy and acceptance of the survey results.

Similarly, the scope of the QA procedures was driven by the intended end use of the data. This is an important distinction and can sometimes be overlooked in the effort to collect accurate data. In the Ministry’s case, automated pavement surface condition data is collected with the clear understanding that it is to be used primarily for network level, pavement condition analyses. Hence, the degree of data accuracy and field-testing required is dictated by this fact.

The Ministry’s quality assurance program is divided into two phases: initial quality assurance, where the contractors’ methods and equipment are initially calibrated, and production quality assurance, where the survey is monitored to ensure continuing compliance (69).

The initial QA step serves to qualify the contractor on four QA sites chosen by the Ministry. First, using the standard Ministry rating manual (based in part on the LTPP distress evaluation manual), Ministry personnel conduct manual surface distress, roughness, and rut-depth surveys at the control sites. Then, for video-based surveys, the contractor is required to video record the four sites five times each and do pavement distress index (PDI, on a 0 to 10 scale) ratings for each run. The results are provided to the Ministry, where the multiple runs serve to test the accuracy and repeatability of the process. For acceptance, the contractor’s averages must meet criteria of ±1 PDI unit for accuracy and ±1 standard deviation of the PDI for the five runs for repeatability. The contractor may proceed with production work after the initial QA criteria are met.

For production QA, the contractor’s production is measured against blind QA sites randomly located throughout the system and evaluated by agency personnel. The same criteria as used in the initial QA also apply to production. When the contractor satisfactorily completes a blind site QA test, it is authorized to continue the production surveys. However, if the test results fail to meet the criteria, the contractor is required to review the videologs of the blind site, make equipment repairs or modifications, and, if necessary, repeat the surveys from the time of the last blind site test.

Finally, BCMoTH places QC responsibility on the contractor. That QC focuses on two areas: data integrity and data continuity. Data integrity relates to making sure that all data fields are complete and accurate and are delivered on time. Data continuity is concerned with ensuring that the data are correctly referenced and that there are no breaks in the data. The contractor is given criteria for establishing QC procedures that are reviewed by the Ministry. These criteria are as follows:

The contractor’s QC program should include, but not be limited to, on-board equipment/sensor confirmation tests, ensuring the correct contract quantities and lane configurations, checking the data for anomalies and reasonableness, cross checking all data with vehicle sensors, and a thorough review of the created file contents and format (70).

Mississippi

The Mississippi DOT provided some very general guidelines on the QA and QC program followed with pavement surface distress data (71). Surface distress data are checked by using an Image Processing Workstation and video logs for a 5% random sampling of the contractor’s work. Distresses checked are cracking, potholes, spalling, and punch-outs. The LTPP distress identification manual is used as the standard for type and severity of distresses. No specific acceptance criteria were given.

Still other agencies are known to do QA work on images, although few details are given in the questionnaire responses or the published literature.
Washington State, for example, noted that whereas the QA procedures used on its previous windshield surveys were “difficult and costly,” QA performed on images is a straightforward, routine process. Iowa is in the process of implementing a new image QA program in cooperation with the University of Iowa. Maryland remarked that QA and QC are “paramount” to producing high-quality cracking data (49); its QA process is discussed more fully in chapter eight.

South Dakota is still developing its QA process for distress surveys, as given in attachments to its distress survey manual (72,73). In that case, the agency is trying to balance the relevance, reliability, and affordability of the data, and it recognizes that greater reliability means increased costs. It, too, provided few details of its procedures. Vermont has also been actively developing QA procedures.

SENSOR DATA

Sensors measure either longitudinal or transverse pavement profile, and for that reason it is most convenient to discuss those data in the aggregate, such that roughness (IRI), rut-depth measurements, and joint-faulting measurements are all included in this section. Still, most of the emphasis is on roughness, because that is the parameter measured by almost every agency.

Because of the emphasis on roughness monitoring over the past decade, largely brought about by the HPMS program, a good deal of attention has been paid to the QA of those data. The HPMS field manual (5) recommends the QA plan of the AASHTO roughness quantification standard, which consists of several very general guidelines almost identical to those listed earlier for the asphalt cracking standard (28). Again, the guidelines are helpful in describing the steps to be taken, but they provide almost no details, which are left as an agency responsibility. However, some help is available in addressing those details, as described in the following sections.

Profiling Errors

Perera and Kohn (74) recently provided guidelines on profiling errors and how to avoid them. They noted that there are three major components to profiling—the height sensor, the accelerometer, and the distance measuring instrument—and that an error in any of these components will affect the quality of the profile data.

The authors listed the following procedures for ensuring that inertial profile data are error free:

• Calibrate height sensor(s), accelerometer(s), and distance measuring systems following manufacturers’ recommended procedures.
• Clean lenses in sensors and check tire pressure before profiling.
• Perform daily checks on the profiler—bounce test and static height sensor check.
• Set sensor spacing to the spacing specified in the smoothness specification.
• Collect profile data along the path specified in the smoothness specification. Follow a consistent path without lateral wander during profiling.
• Do not collect profile data outside the speed range that is specified for the profiler.
• Maintain a constant speed during data collection. Do not accelerate or decelerate during data collection. If you stop the profiler in the middle of a profiling run, discard the data for that run.
• Have an adequate lead-in distance prior to the test section to initialize data collection filters and come up to speed. Strictly follow manufacturers’ guidelines.
• Initiate data collection at the specified location. If the profiler is equipped with an automated method to initiate data collection (e.g., photocell), use it to initiate data collection.
• Do not profile wet pavements.
• Do not collect data on pavements that have surface contaminants (e.g., gravel, construction debris).
• Evaluate collected profile data for presence of spikes (74).

LTPP Work

The LTPP study of the quality and variability of IRI data in the LTPP database addressed all profiles collected between 1989 and 1997, after correction for obvious problems (10). Those studies are comprehensive and too voluminous to address fully in a synthesis. However, the profiles and IRI values analyzed represent five replicate runs on each test section for each visit of a profilometer. Although that degree of testing is needed in research work, it clearly would not be feasible for network-level surveys. Nevertheless, some of the major efforts and findings are applicable and are summarized here.

From the LTPP analysis of more than 2,000 test sections where profiles were collected with K.J. Law Model 690DNC optical sensor profilometers, “confidence limits were developed for expected variability between repeated profile testing runs and for the expected change in IRI between subsequent visits” (10). As noted earlier, if the testing variability exceeded the expected visit-to-visit change in IRI, time series data would be of diminished value.

From the same LTPP study it was found that the run-to-run IRI coefficient of variation is less than 2%. The study further reported significant seasonal impacts on IRI results, especially for PCC pavements. This effect must be quantified and considered when evaluating the significance of day-to-day variations in IRI measurements.

In addition to conducting and documenting the studies described, LTPP has provided a manual for profile measurements that covers all aspects of LTPP profile data collection (with the ICC MDR 4086L3 Road Profiler), including equipment calibration and reporting requirements (75). In addition to discussing profiler issues, the document provides guidelines on the use of the Face Company Dipstick, as well as on the use of rod and level surveys. Detailed guidelines are provided on field testing, calibration, and equipment maintenance and repair. The calibration section addresses distance measuring instruments, accelerometers, and lasers.
The field testing portion provides a procedure for evaluating the acceptability of the multiple runs on an LTPP site. This procedure employs profile QA software (ProQual) developed for LTPP. Briefly, the procedure is to obtain five error-free runs and then determine acceptability if the average IRI values of the left and right wheel paths satisfy the following criteria:

• The IRI values of three of the runs are within 1% of the mean IRI of the five runs, and
• The standard deviation of the five runs is within 2% of the mean IRI of the five runs (i.e., the coefficient of variation is less than 2%).

The two criteria ensure a reasonable degree of accuracy and precision, respectively. Again, five runs are not practical for network-level data collection; however, the LTPP procedures could serve as a starting point for other agencies to use in precision and bias evaluations. The LTPP document as a whole should be a useful guideline for agencies in establishing their own QA procedures.
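The two LTPP criteria can be coded compactly. The sketch below illustrates the stated rules for one wheel path; it is not a reproduction of the ProQual software, and the function name and IRI values are hypothetical.

```python
from statistics import mean, stdev

def ltpp_five_run_acceptance(iri_runs):
    """Apply the two LTPP/ProQual criteria to five error-free IRI runs:
    (1) at least three runs within 1% of the five-run mean, and
    (2) coefficient of variation (SD/mean) under 2%."""
    m = mean(iri_runs)
    within_1pct = sum(abs(r - m) / m <= 0.01 for r in iri_runs)
    cov = stdev(iri_runs) / m
    return within_1pct >= 3 and cov < 0.02

# Hypothetical left-wheel-path IRI values (m/km) for five runs.
runs = [1.251, 1.243, 1.259, 1.248, 1.286]
print(ltpp_five_run_acceptance(runs))  # True -> site visit accepted
```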

The Canadian provinces have been active and progressive in the QA of roughness data. An Ontario procedure for acceptance of calibration tests was described in the previous chapter. Three other provinces have made significant contributions to the QA of sensor-collected data; these are described briefly here.

British Columbia

The British Columbia QA process discussed earlier for surface distress is extended to sensor-collected data. For initial QA, the contractor does five profiler runs on the Ministry QA sites, and the approach used is as follows:

The roughness testing consists of validating the Contractor’s automated surveying equipment by field comparisons to the known longitudinal profile at each test site. The survey vehicle completes a series of five runs over each site in order to assess both accuracy and repeatability. The International Roughness Index (IRI) values for each wheel path are generated and compared to the manual values for each as per the acceptance criteria [presented in Table 14].

Because rut depth measurements are fully automated using a multi-laser rut bar, the rut depth QA tests are designed to validate the Contractor’s automated surveying equipment by field comparisons to known transverse profiles. The survey vehicle completes five runs over the site to measure the accuracy and repeatability of the rut bar measurements. The average rut depth value is calculated for each wheel path and compared to the manual values as per the acceptance criteria [presented in Table 14].

The QA acceptance criteria for the accuracy and repeatability of the surface distress, roughness, and rut-depth measurements were developed on the basis of Ministry experience and are presented in Table 14. Again, the contractor may do production testing once the criteria in Table 14 are met. At that time, the Ministry uses the blind sites described under surface distress QA, and the criteria in Table 14 again apply. The contractor’s QC philosophy as discussed applies to sensor data as well as to surface distress data.

Alberta

Alberta also uses a statistical QM approach for the initial evaluation of its sensor-collected data (76). The contractor is required to do on-site calibrations before beginning production work and again before leaving the province.

The IRI calibration consists of validating the Contractor’s automated surveying equipment by field comparisons to the known longitudinal profile at the calibration site. The survey vehicle will complete a series of 3 runs over the site, which is 500 meters in length. The IRI values for each wheel-path shall be calculated and compared to the manual values for each run. The IRI derived through automated data collection must be within 10% of the manual survey and will be considered repeatable if the IRI from each repeated run is within 5% of the mean for the 3 runs.

The rut depth calibration validates the Contractor’s automated surveying equipment by field comparisons to the known transverse profiles. The Contractor is required to conduct 3 runs over the site to measure the accuracy and repeatability of the rut bar measurements. This test is performed at one calibration site near Edmonton. The average rut depth over the 500-meter site derived through automated data collection must be within 3 mm of the average rut depth for the manual survey. The automated survey will be considered repeatable if the average rut depth over the 500 meter test section from each repeated run is within ±1 standard deviation of the mean for the 3 runs (76).

Alberta also requires the contractor to monitor data accuracy during production (the QC process) using verification sites established by the contractor. Generally, these sites are scheduled every 7 days or 2000 km (1,250 mi) of data collection. The contractor is responsible for submitting these data promptly to agency personnel. The department representative may then require the surveys to be halted if an acceptable level of accuracy is not provided. The agency’s terms of reference do not define that acceptable level.
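Alberta’s IRI calibration rule reduces to an accuracy test against the manual (Class I) value and a repeatability test against the three-run mean. The sketch below is one reading of the quoted requirement, applying the 10% accuracy test to each run; the function name and the values are hypothetical.

```python
from statistics import mean

def alberta_iri_calibration(runs, manual_iri):
    """Alberta-style IRI calibration: each run must be within 10% of the
    manual (Class I) survey value, and the result is repeatable if every
    run is within 5% of the 3-run mean."""
    m = mean(runs)
    accurate = all(abs(r - manual_iri) / manual_iri <= 0.10 for r in runs)
    repeatable = all(abs(r - m) / m <= 0.05 for r in runs)
    return accurate and repeatable

# Hypothetical 500-m calibration site, one wheel path (IRI in m/km).
print(alberta_iri_calibration([1.62, 1.58, 1.66], manual_iri=1.60))  # True
```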
Manitoba

Manitoba also applies QA provisions to its sensor-collected data (77). In that case, the contractor is required to satisfactorily complete a specified number (contract specific) of “repeat run” sites before the beginning of production work. These sites have been thoroughly analyzed by the province and are used to establish the contractor’s equipment capability. The same sites are retested at least once each 3000 km (1,875 mi) of production survey completed. The minimum acceptable equipment standards are given in Table 15.

During production, the province monitors contractor production through the use of blind sites. Immediately after a blind site is run, the contractor is requested to submit the site data to the province staff, where the data are compared with those originally found for the site. The tolerances given in Table 15 again apply. If those tolerances are not met, production is stopped and the contractor is required to recalibrate and to rerun the blind site until the tolerances are met.

During post-processing, the contractor is required to implement a QC process that includes at least verification of quantities and lane configuration, reasonableness of data, and a thorough review of the content and format of files. The contractor is also required to note any sections that were omitted from the evaluation program.

TABLE 14
BRITISH COLUMBIA ACCEPTANCE CRITERIA FOR SENSOR DATA

Parameter          Roughness                 Rut Depth
Measure            IRI                       Millimeter
Survey Interval    100 m                     10 m
Report Interval    500 m average             500 m average
Unit               Each wheel path           Each wheel path
Accuracy           10% of Class I survey     ±3 mm of manual survey
Repeatability      0.1 m/km SD for 5 runs    ±3 mm SD for 5 runs

Note: SD = standard deviation.

TABLE 15
MANITOBA MINIMUM ACCEPTABLE EQUIPMENT STANDARDS

Attribute           Equipment Standard
Chainage            Distance measuring instrument (±0.1% accuracy)
Roughness           FHWA Class II profiler (±10% accuracy)
Rutting, Faulting   Laser or ultrasonic sensors (±2 mm accuracy)

Mississippi

The Mississippi DOT (MDOT) provides some QA guidelines on sensor data collection (71). It provides for calibration sites to be set up in each district where contract data collection will take place. Asphalt, jointed concrete, and composite pavements are represented in those sites. MDOT’s profiler, rut bar, and Georgia Faultmeter are run on the calibration sites. Then, during production, the contractor calibrates its equipment on these sites at the beginning of each workday.

Baseline production sensor data are collected by MDOT on a 5% random sampling of sites from each pavement type a few weeks before or after the contractor’s data collection. Average IRI, rut-depth, and faulting data for each sample are noted and entered into a database to be used for comparison with the contractor’s work. For a 2001 contract, it was agreed with the contractor that a calibration site would be traversed at least once each day and that the following acceptance criteria would apply when comparing contract and agency data: IRI, ±0.30 mm/m; rut depth, ±0.09 in.; and faulting, ±0.07 in.

When data failed to meet those tolerances, the procedure agreed on with the contractor was to disregard any data collected between the failing site and the last passing site. Mississippi’s procedures are discussed in more detail in the case study in chapter eight.
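The agreed procedure of disregarding data collected between a failing calibration check and the last passing one is, in effect, a bracketing rule over a time-ordered survey log. The following sketch illustrates that rule; the data structures and function name are assumptions, not MDOT’s implementation.

```python
def flag_suspect_data(records, checks):
    """Mark survey records collected between the last passing calibration
    check and a failing one as suspect, per the MDOT 2001 agreement.
    `records` is a time-sorted list of (timestamp, payload) tuples and
    `checks` a time-sorted list of (timestamp, passed) tuples."""
    suspect_windows = []
    last_pass_time = None
    for t, passed in checks:
        if passed:
            last_pass_time = t
        elif last_pass_time is not None:
            suspect_windows.append((last_pass_time, t))
    return [rec for rec in records
            if any(a < rec[0] < b for a, b in suspect_windows)]

# Hypothetical daily log: checks at hours 0 (pass), 8 (fail), 16 (pass).
checks = [(0, True), (8, False), (16, True)]
records = [(h, f"segment-{h}") for h in range(1, 16)]
print(flag_suspect_data(records, checks))  # segments from hours 1-7 flagged
```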
Louisiana

The LADOT specifies a sensor data QC program for data collection contractors (58). This program requires the contractor to administer a plan that will ensure that data are collected accurately and that they reflect actual pavement condition within specified precisions. The contractor’s equipment is checked against an agency profiler and a Class I profiling instrument (Dipstick, etc.) before testing begins. During production, the contractor is required to use QC sections of known IRI, rutting, and faulting values. An interesting aspect is that the sites are permitted to “roll”; that is, the contractor is not required to use the same sites all the time. Rather, the contractor may, on a given day, test a site that was tested 1 week previously. These reruns are evaluated to determine whether the profiler is still in calibration. Such tests are documented in writing and delivered to the agency weekly. This feature is helpful in testing over a widespread area, because it is not necessary to do extensive backtracking to run the control sites. Although the questionnaire response provided little information on acceptance criteria, it did address the question of data reasonableness. For example, the maximum reporting value for IRI is given as 10 m/km (632 in./mi).

Other Agencies

As was the case with surface distresses, a few agencies addressed sensor QA issues but provided little information in questionnaire responses. New Mexico and Arizona, for example, cooperate in running IRI control sites, a number of which are listed in New Mexico’s response. The general procedure is that each agency runs the same sites and the results are compared. Although on average these comparisons are excellent, there are occasions on which the two sets of data vary widely. The agency did not address how those differences are handled.

Oklahoma requires that sensor-collected IRI be within 5% of measurements made by rod and level, Dipstick, or other Class I profiler (78). It further requires repeatability within 5% for three repeat runs. Oklahoma has a unique contract feature that requires the prime contractor to contract with a third party to provide Dipstick profiling of 0.80-km (0.50-mi) control sites to be used as “ground truth” for calibration of data collection vehicles.

SUMMARY

Some agencies have done extensive work in this area and have developed thorough QA requirements or practices. Some of the Canadian provinces have been exceptionally productive and have established procedures that could well provide the foundation for national or international approaches. There are several general conclusions concerning QA issues, including the following:

• Lower-severity distresses are more difficult to discern from images than from the roadway. Therefore, deduct-based indices produced from images often will be higher than those from the roadway.
• The determination of typical variability values for surface distresses (cracking and patching) may be an area needing future research. Work has been done by LTPP, primarily with manual surveys, but more is needed with automated data for network purposes.
• Furthermore, it is necessary to develop data on typical year-to-year changes in pavement distress quantities, as well as typical precision and bias statements, such that realistic and meaningful QA provisions can be incorporated into data collection and analysis protocols and contracts.
• There are no widely used acceptance criteria for pavement condition parameters. Because nearly all agencies deal with essentially the same kinds of distresses and data collection issues, more generally acceptable approaches and criteria would seem reasonable and desirable.
