APPENDIX B
Draft Guidelines to Improve the Quality of Element-Level Bridge Inspection Data
CONTENTS

1 Introduction
  1.1 Development of Element-Level Inspection
    1.1.1 Applications and Requirements
2 Purpose and Use of the Guidelines
  2.1.1 Objectives
  2.1.2 Scope and Applicability
  2.1.3 How to Use the Visual Guides in the MBEI
  2.1.4 Spatial Estimation Diagrams
3 Accuracy Requirements for Decision Making
  3.1 Relationship between Accuracy, Inspection Quality and Decision Making
  3.2 Procedure for Developing an Accuracy Requirement
  3.3 Methods for Improving and Measuring Inspection Quality
    3.3.1 Inspections Using the Control Bridge Model
    3.3.2 Improving the Quality of Inspection Data
  3.4 Sample Data Analysis for Assessing Inspection Accuracy
LIST OF FIGURES

Figure B-1. Visual standard for defect element 1080, Delamination/Spall/Patched Area.
Figure B-2. Example comparison image for estimating area (5% of area).
Figure B-3. Example probability density functions for normally distributed inspection results.
Figure B-4. Model for accuracy requirements showing different distributions of inspection results.
Figure B-5. Distribution of inspection results for a bridge deck.

LIST OF TABLES

Table B-1. Quantity of damage and suitable actions.
Table B-2. Recommended accuracy requirements for different decision intervals.
1 INTRODUCTION

The objective of this guideline is to improve the quality of element-level data collection for bridges on the National Highway System (NHS) with reference to the American Association of State Highway and Transportation Officials (AASHTO) Manual for Bridge Element Inspection (MBEI) (AASHTO 2013). The collection of element-level data for NHS bridges became a requirement for all agencies in 2014 in order to meet the requirements of the Moving Ahead for Progress in the 21st Century Act (MAP-21). This guideline includes tools and recommendations intended to improve consistency in the collection of data and the assessment of bridge element conditions. The guideline includes descriptions of how to use visual standards and spatial estimating guides that have been included in recent revisions to the MBEI. The guideline describes methods for calibrating and performance testing inspectors. The guideline also describes a process for establishing accuracy levels for element conditions and applicable defect quantities. The established accuracy levels are intended to support decision-making for bridge preservation, maintenance, and repair, and bridge management system deterioration forecasting.

The following section provides an overview of the development of element-level inspection in the United States. Chapter 2 describes the purpose and use of the guidelines, including a description of how to use visual guides and other tools. Chapter 3 describes how to develop accuracy requirements for implementing element-level data collection to support decision-making and bridge management. Visual standards and other tools to improve the consistency of element-level data collection that were originally part of this guideline have been incorporated into revisions to the MBEI and are therefore not reported herein.
1.1 Development of Element-Level Inspection

Prior to the implementation of the National Bridge Inspection Standards (NBIS), bridge inspection practices in the U.S. varied significantly between bridge owners, with only a few owners inspecting bridges on a regular basis. Bridge inspection transitioned from a function completed on an as-needed basis to a federal mandate following the collapse of the Silver Bridge on December 15, 1967 (NTSB 1970). Following this historic bridge collapse, the Federal Highway Administration (FHWA) developed requirements for bridge inspection, first for bridges on the Federal-aid system in 1971, later extended to all bridges on public roadways in 1978. Historically, NBIS inspections have been completed on a component-level basis that rated three primary components of a bridge: the deck, superstructure, and substructure. NBIS inspections also apply to culverts. Additional data on the condition, design characteristics, service level, and location of the structure are also recorded (FHWA 1995). Conventional component-level bridge inspection results are used by bridge managers to identify maintenance needs from a safety perspective. A subjective condition rating scale ranging from 0 (failed condition, out of service) to 9 (excellent condition) is used to characterize the condition of bridge components relative to the as-built condition. The purpose of the inspection is to determine the physical and functional condition of the bridge, document bridge condition and rate of deterioration, and form a basis for analysis. Priorities for maintenance, repair, and rehabilitation are also established. Bridge inspection has been evolving from the component-level approach toward a more detailed element-level approach that focuses more specifically on the materials and design characteristics of the bridge. Modern-day bridge element data collection began with FHWA's Demonstration Project 71 in October 1989 (O'Connor 1989).
This project was the beginning of the AASHTOWare Pontis Bridge Management Software and required bridge inspection data to have more granularity than the NBIS categories of deck, superstructure, substructure, and culverts. The report concluded that for an effective bridge management system, additional information would be required to support the management function. The report cited several states that had begun the process of adding sub-categories to the NBIS data, which inspectors would need to evaluate and record as part of a field inspection. Improvements for bridge management software based on element-level inspection data (Pontis 2.0) were developed in 1993 (FHWA 1993). The users of this software were required to supplement the NBIS data with the condition of the
bridge based on certain specific elements. The AASHTO Subcommittee on Bridges and Structures (SCOBS) developed a defined listing of Commonly Recognized Bridge Elements (CoRe) utilizing common engineering language (AASHTO 1997). In 1998, the FHWA formally adopted the CoRe elements. The CoRe guide was revised in 2002 and 2010 to improve the way inspection data was recorded. The CoRe element guide defined bridge elements and described different condition states (CSs) for each element. Different elements had different numbers of CSs, and for some elements, the quantity of damage was included in the description of the CS. For example, a concrete bridge deck had five different possible CSs depending on the quantity of damage in the deck (i.e., CS 1: less than 10% damage; CS 2: 10% to 25% damage; etc.), and the deck element was measured in units of each (ea). In contrast, a steel bridge member was assigned four different CSs if the member was unpainted, and five different CSs if the member was painted. The unit of measure for the steel girder was linear meters, and the descriptions of the different CSs did not include quantities of damage. During this period, the development of BRIDGIT bridge management software was sponsored by the National Cooperative Highway Research Program (NCHRP) (NCHRP 1999). This system required the inspector to collect the condition of the structural members and the protective system of those members. The philosophy of member definitions and CSs was similar to the CoRe definitions (i.e., up to five CSs within each element type or group). By the year 2000, a majority of the states had moved to collecting at least some element-level data based on the CoRe guide, BRIDGIT, or an agency-developed system.
The international scan tour, "Bridge Evaluation Quality Assurance in Europe," found that many European nations had added granularity to their element inspection processes, which provided additional information and improved data quality (Everett, Weykamp et al. 2008). In 2010, an effort was initiated by SCOBS Technical Committee 18 to improve the effectiveness of the CoRe and BRIDGIT element definitions. In 2011, the Guide Manual for Bridge Element Inspection (GMBEI) was adopted in order to provide improvements in the approach for element-level inspection. Key changes in the new guide manual included changing the units of measure for decks and slabs to provide increased precision in the recording of deck conditions. New elements describing wearing surfaces and protective coatings were also included. Importantly, the number of CSs for elements was standardized to four: CS 1 (Good), CS 2 (Fair), CS 3 (Poor), and CS 4 (Severe). An important change in the GMBEI was to incorporate specific defect descriptions for determining the appropriate assignment of CSs. This change incorporated element smart flags (defect flags) directly into the assessment of the elements. The GMBEI also defined certain subjective engineering terms such as "narrow," "small," "large," "sum," etc. For example, quantitative crack width descriptions were included for assigning CSs rather than more subjective terms such as "minor cracking" that had previously been used. These improvements were made to better capture the condition of the elements by reconfiguring the element language using specific defects within the defined CSs. In this way, a single element could be affected by different defects, and these defects might have different deterioration characteristics. The defect elements provide data for developing future deterioration models that more explicitly consider the active deterioration mechanisms in a bridge.
For example, deterioration of a concrete deck may be due to spalling of concrete, cracking of concrete, or both. Using the defect elements provided in the GMBEI, it was possible to record the specific quantity of each defect present on the deck. In this way, different deterioration rates could be assigned depending on whether the distress path was spalling or cracking. The GMBEI was later adopted as the AASHTO Manual for Bridge Element Inspection (MBEI). The first edition was published in 2013; the First Edition with 2015 Interims was subsequently issued and was the current manual at the time of this research (AASHTO 2013). A revised version of the Manual that incorporates visual standards developed as part of this guideline is currently in publication. The engineering aim of element-level inspection is to achieve consistent, data-driven systematic investment in preventive maintenance, rehabilitation, and capital investment. The administrative benefits of element-level inspection include better funding schemes and performance measurements for asset
management. To achieve these outcomes, the element-level data collected during routine bridge inspections needs to be of adequate quality to support decision-making for maintenance and repair, and to support effective deterioration modeling. This guideline is intended to improve the quality of data collected during routine inspections by providing tools for the inspector to better identify the element condition (i.e., assign the CS) and estimate the quantities necessary for element-level data collection.

1.1.1 Applications and Requirements

Currently, element-level data must be collected for all bridges on the NHS. Section 1111 of MAP-21 modified the NBIS to require each State and appropriate Federal agency to report bridge element-level data to the FHWA. This requirement to collect element-level data on a national scale has required many states that had never collected element-level data to modify existing programs. The federal data reporting requirements focus on the National Bridge Elements (NBEs) and certain Bridge Management Elements (BMEs). The required NBEs and BMEs are documented in the FHWA "Specification for the National Bridge Inventory Bridge Elements." The Guidelines to Improve the Quality of Element-Level Bridge Inspection Data apply to element-level data collection intended to meet the FHWA requirements.

2 PURPOSE AND USE OF THE GUIDELINES

The purpose of this guideline is to provide a tool for improving the quality of element-level data collection. The guideline is intended for use by bridge inspectors, inspection program managers, and other individuals concerned with the quality of element-level data. This chapter describes the scope and objectives of the guideline and provides direction on how to use certain visual guides that have been implemented in the MBEI (Appendix C).
Chapter 3 of the guideline provides a model for developing accuracy requirements for field data collection and describes methods for improving the quality of element-level inspection data.

2.1.1 Objectives

The objective of the guideline is to improve the quality of element-level data collection. To meet this objective, the guideline includes visual guides, recommendations for determining accuracy requirements for field data collection, and procedures for improving the quality of data.

2.1.2 Scope and Applicability

The scope of the manual applies to the routine inspection of bridges on the NHS using element-level inspection according to the MBEI. The guideline can be applied to improve the collection of element-level data in the field and to determine the accuracy requirements for element-level data.

2.1.3 How to Use the Visual Guides in the MBEI

This section of the report describes how to use the visual guide for identifying the appropriate CS and estimating quantities of damage. The visual guide includes visual standards and spatial estimation diagrams. Visual standards are photographs that represent the text description contained in the MBEI for a given defect element. The purpose of the photograph is to define a standard representation of the text description. Defects observed in the field can then be compared with the standard photograph to determine the appropriate CS assignment. The visual standards are organized according to element material in the visual guide. The materials currently included are reinforced concrete, prestressed concrete, and steel. Steel protective coating, movable bearings, and joints are also included. The material-based defect elements shown in the visual standard are applicable to any element formed from that material. For example, a concrete spall has the same description, and the same photographic standard, whether the defect is in a concrete deck, girder, column, pier, or abutment.
Figure B-1 shows an example from the visual guide for defect 1080, Delamination/Spall/Patched Area, with three relevant CSs: CS 1, CS 2, and CS 3. Characteristics of the typical page are shown in this figure. The number and name of the element are shown in the upper left corner of the page. The top row of the table shows the different CSs, with CS 1 shown in green, CS 2 in yellow, CS 3 in a darker tone of yellow, and CS 4, if relevant, in red. The text description of each CS is shown. If there are relevant quantitative data from the commentary, these appear below the primary description in smaller italicized text. The boundary images between CSs are shown in the lower portion of the page, where appropriate. The visual guide is used to determine the appropriate CS for a given defect in the relevant parent material. The boundary images are intended to describe the boundary between two different CSs. For example, for the defect shown in Figure B-1, the definition of CS 2 is a spall 1 in. or less deep or 6 in. or less in diameter; the description for CS 3 is a spall greater than 1 in. deep or 6 in. in diameter. The boundary image between CS 2 and CS 3 shows a spall that is 6 in. in diameter and approximately 1 in. deep. Therefore, any spalling defect that is larger (or deeper) than the spall shown in the image should be assigned to CS 3, while any spalling defect that is smaller (or less deep) should be assigned to CS 2. In this way, the photograph represents the boundary, i.e., a boundary image.

Figure B-1. Visual standard for defect element 1080, Delamination/Spall/Patched Area.

2.1.4 Spatial Estimation Diagrams

The visual guide includes diagrams intended to assist in making accurate quantity estimates in the field based on visual inspection. Figure B-2 shows an example of a spatial estimating diagram for use on bridge elements recorded in units of sq ft.
The diagram is used by comparing the area shown in the diagram with the appearance of damage in the bridge element being assessed. The diagram depicts damage of 5% of the area. Different possible distributions of damage are shown. For example, the top diagram depicts 5% of the area, with damage distributed throughout the element. The bottom diagram shows 5% of the area with the damage depicted as a single large area. Linear estimating diagrams for comparison to elements with units of ft are also included. A guide for estimating the area (sq ft) affected by pattern cracking is included
in the guide, as well as diagrams of different crack spacing. The scaled diagrams for elements with units of sq ft represent an area 40 ft x 100 ft (4,000 sq ft). The scaled diagrams for elements with units of ft represent a length of 100 ft.

Figure B-2. Example comparison image for estimating area (5% of area).

3 ACCURACY REQUIREMENTS FOR DECISION MAKING

Accuracy requirements represent the acceptable tolerance or variability in the results of an inspection. For example, an accuracy requirement for the inspection of a bridge deck could be described in terms of the percentage of the deck assigned to CS 3 being within +/-5% of the deck area actually in CS 3. Such an accuracy requirement affects decision-making because it affects how frequently an incorrect decision would be made; if the tolerance is very large, incorrect decisions might be made frequently. If the tolerance is very small, incorrect decisions would be made less frequently. However, the inspection procedure used to estimate the damage may or may not be capable of providing results that meet a particular accuracy requirement. Therefore, the capabilities of the inspection procedure must be considered when establishing an accuracy requirement. The accuracy of the inspection result also affects the effectiveness of deterioration modeling, because inaccurate results will produce inaccurate forecasts from the model. This can affect long-term planning and decision-making. This section of the guideline presents a simple method for identifying a suitable accuracy requirement with a statistical basis. In this model, it is assumed that a normal statistical distribution describes the variation in inspection results. The actual variation in the inspection results is not considered in the model itself.
The actual variation in inspection results can be compared with the results of the model to determine if the capabilities of the inspection process are adequate to meet the desired accuracy requirement. The actual variation in inspection results needs to be measured from field results, and methods for collecting these data are included in the guidelines. If the capabilities of the inspection process are not adequate to meet the needs of the accuracy requirement, the inspection procedure or accuracy requirement may need to be adjusted. The analysis of data for the purpose of evaluating whether the inspection results meet the accuracy requirements is also described.
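The normal-distribution model described above can be sketched in a few lines of code. The sketch below is illustrative (the function names are assumptions, and Python is used for convenience); it computes the fraction of inspection results expected to fall within a given tolerance of the actual quantity, assuming results are normally distributed about that quantity:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fraction_within_tolerance(tolerance: float, sigma: float) -> float:
    """Fraction of normally distributed inspection results (mean equal to
    the actual quantity, standard deviation sigma) falling within
    +/- tolerance of the actual quantity."""
    z = tolerance / sigma
    return phi(z) - phi(-z)

# With sigma = 5% of deck area, about 68% of results fall within +/-5%
# of the actual quantity and about 95% within +/-10%, matching the
# familiar 1-sigma / 2-sigma coverage of the normal distribution.
print(round(fraction_within_tolerance(5.0, 5.0), 3))
print(round(fraction_within_tolerance(10.0, 5.0), 3))
```

A measured standard deviation from field data (Section 3.3) can be substituted for `sigma` to check whether an inspection procedure can meet a proposed tolerance.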
3.1 Relationship between Accuracy, Inspection Quality and Decision Making

This portion of the guideline describes the relationship between accuracy requirements, inspection quality, and decision-making. Accuracy requirements are typically in the form of a tolerance on the inspection result, such as the total quantity of a deck assigned to CS 3 being within +/-5% of the actual quantity of deck in CS 3. This requirement is related to inspection quality because it indicates an expectation that inspection results will provide a quantity within +/-5% of the actual quantity. If it is assumed that the variation in inspection results fits a normal distribution, the tolerance of +/-5% could be represented as the expected standard deviation, σ, of inspection results from a group of inspectors. Statistically, this indicates that at least 68% of the inspection results from that group of inspectors should fall within +/-5% of the actual quantity. In this analysis, it is assumed that the mean value of normally distributed inspection results is equal to the actual quantity of damage. To illustrate this relationship, consider that decision-making based on element-level inspection results is described in terms of the quantity of an element with damage (i.e., in CS 2 or CS 3). For example, a decision of preservation, maintenance, or replacement of a concrete bridge deck may be based on the quantity of the deck with damage, as shown in Table B-1. For bridge decks with less than 10% damage, preservation actions will be implemented. For bridge decks with greater than 30% damage, replacement or major rehabilitation actions will be taken, and for bridge decks with damage between 10% and 30%, maintenance actions will be taken. In this way, 10% and 30% are boundaries for decision-making. (Note: The values and actions presented in Table B-1 are hypothetical and presented for illustration.)

Table B-1. Quantity of damage and suitable actions.
Damage (%)     Action
<10%           Preservation
10% to 30%     Maintenance
>30%           Replacement

The decision boundaries form a "decision interval" that is bounded by the high and low values that will result in a different decision. For example, maintenance will be recommended for bridge decks with 10% or more, and 30% or less, damage. Therefore, the decision interval is 20%, extending from 10% to 30% as shown in Figure B-3. Figure B-3 shows these boundaries, as well as normal distributions of inspection results represented on the graph. The normal distributions represented in the figure have mean values of 20% and 30%, with σ = 5% representing the variation in inspection results. Considering the distribution with a mean of 20%, the normal distribution curve indicates that 50% of inspections would report a quantity greater than 20%, and 50% of inspections would report a quantity less than 20%. Statistically, 68% of inspections would report a quantity between 15% and 25% (the mean value +/- 1σ), and 95% of inspections would report a quantity between 10% and 30% (+/- 2σ). If the actual condition (mean) of the deck were 30% damaged, then the normal distribution would shift to the right, as shown in Figure B-3. In this case, 50% of the inspection results would exceed the 30% threshold and therefore correctly indicate replacement. This example illustrates the considerations for determining an accuracy requirement. The accuracy requirement should weigh the threshold values for decision making against the variability of the inspection results. The effect of an accuracy requirement on decision making is to change the probability of a correct decision. Decision thresholds may need to be changed if the importance of a correct decision is high. For example, assume it is desirable that an incorrect decision to implement maintenance be very infrequent when the damage in the deck is actually 30% or greater, such that the decision of replacement should be made.
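The replacement-threshold scenario just described can be checked numerically. Assuming normally distributed results with the mean equal to the actual damage quantity (here a 30% damaged deck with σ = 5%), the probability that an inspection result exceeds a candidate threshold follows from the normal cumulative distribution function. The sketch below uses illustrative names and only the Python standard library:

```python
import math

def prob_exceeds(threshold: float, mean: float, sigma: float) -> float:
    """P(inspection result > threshold) for normally distributed
    inspection results with the given mean and standard deviation."""
    z = (threshold - mean) / sigma
    # 1 - Phi(z), where Phi is the standard normal CDF
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Deck actually 30% damaged, inspection variability sigma = 5%:
print(round(prob_exceeds(30.0, 30.0, 5.0), 2))  # threshold at 30% catches half
print(round(prob_exceeds(25.0, 30.0, 5.0), 2))  # threshold at 25% catches most
```

Moving the threshold from 30% down to 25% raises the probability of correctly flagging a 30% damaged deck for replacement from 50% to about 84%, which is the motivation for adjusting decision thresholds when a correct decision is important.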
In other words, it is desirable that all, or almost all, decks that have 30% damage receive replacement, and not maintenance. If the threshold were set at 30%, then a correct decision would result from only 50% of the inspections when the actual value was 30%, again assuming that the mean value of inspection results is the actual quantity of damage. This can be visualized from the distribution curve with
a mean of 30% shown in Figure B-3 (dashed line). However, if the threshold were set at 25% instead of 30%, then it would be more likely that a deck with 30% damage would be correctly identified for replacement. If the actual condition of the deck were 30% damaged, and the threshold of 25% were used, then 84% of the inspections would result in the correct decision of replacement. Again, this can be visualized by moving the mean of the normal distribution to 30% and observing that 84% of the distribution would exceed the 25% threshold (-1σ). In this way, most of the bridge decks with 30% damage would be identified for replacement.

Figure B-3. Example probability density functions for normally distributed inspection results.

This example illustrates the relationship between an accuracy requirement, inspection quality, and decision-making. The analysis assumes that the inspection procedure achieves the desired accuracy tolerance, i.e., that the variation in the inspection results is described by the standard deviation of a normal distribution and that standard deviation is 5% or less. However, the actual variation in the inspection results may not be known, or may need to be estimated based on experience and engineering judgment. The variation in inspection results can be analyzed using inspector performance testing or other quality assurance measures as described in Section 3.3.

3.2 Procedure for Developing an Accuracy Requirement

The following procedure can be used to develop an accuracy requirement for inspection in a rational manner, based on the desired interval for decision-making. First, assume that for the mean value of a given decision interval, it is acceptable for "typical" decisions that 68% of decisions are correct (i.e., +/- 1σ). For example, Figure B-4 shows two normal distributions with mean values of 15%. The boundaries for decision-making shown in the figure are 10% and 20%, such that the decision interval is 10%.
If the accuracy requirement is equal to the decision interval divided by 2, +/-5% in this example, then 68% of inspections would lie within the decision interval when the mean value is 15%. For an "important decision," where a higher degree of accuracy is required, it is assumed that it is desirable to have 95% of decisions be correct. In this case, the accuracy requirement is equal to the interval divided by 4 (+/- 2σ), again assuming the mean value is the actual quantity. Figure B-4 illustrates these accuracy requirements graphically. For a "typical" decision, an accuracy tolerance of +/- 1σ can be used, in which case a portion (32%) of the expected results would lie outside the decision interval and could result in an incorrect decision. However, if the decision were "important," such that a higher level of accuracy was desired, the accuracy requirement would be +/- 2σ. For this accuracy requirement, very few inspection results (~5%) would lie outside the decision interval when the actual condition of the element is at the mean of the decision interval, as shown in Figure B-4.
Figure B-4. Model for accuracy requirements showing different distributions of inspection results.

Using this model, when the actual value is at the decision boundary, for example, 20%, then 50% of the inspection results would exceed 20%, and 50% would be less than 20%. If it were desirable to improve the likelihood of the correct decision being made, then the decision boundary could be redefined, as described above, or an inspection procedure with less variation in results should be implemented. Table B-2 shows accuracy requirements and the resulting percentage of correct results based on this model. As mentioned previously, these data indicate the percentage of correct decisions when the actual condition of the element is the mean of the inspection results. The σ values shown in the table indicate the number of standard deviations between the mean and the boundary. The preferred values are shown in bold in the table. These values consider the effect of the accuracy requirements on deterioration modeling and forecasting. The preferred values were determined based on an analysis of the impact of different accuracy requirements on future decisions, described in the report "Guidelines to Improve the Quality of Element-Level Bridge Inspection Data."

Table B-2. Recommended accuracy requirements for different decision intervals.

Decision Interval   Accuracy Requirement   Std. Dev.   % Correct
10%                 +/-2.5%                2σ          95
                    +/-5%                  1σ          68
20%                 +/-5%                  2σ          95
                    +/-10%                 1σ          68
30%                 +/-5%                  3σ          99.7
                    +/-10%                 1.5σ        87
                    +/-15%                 1σ          68

It should also be noted that the incorrect decision rate would increase if the actual condition of the deck were above or below the mean value of the decision interval, reaching a 50-50 likelihood at the decision boundaries. This model can be used for estimating a suitable accuracy requirement for element-level inspection when the decision boundaries are known.
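The "% Correct" column of Table B-2 follows directly from normal-distribution coverage: when the actual condition sits at the midpoint of a decision interval of width W and the accuracy requirement r is taken as one standard deviation of the inspection results, the decision boundaries lie (W/2)/r standard deviations from the mean. A short sketch (illustrative names, Python standard library only) reproduces the table entries:

```python
import math

def percent_correct(interval_width: float, accuracy_req: float) -> float:
    """Percentage of inspections landing inside a decision interval when
    the actual condition is at the interval midpoint, results are normally
    distributed, and the accuracy requirement equals one standard
    deviation of the inspection results."""
    k = (interval_width / 2.0) / accuracy_req  # boundaries at +/- k sigma
    return 100.0 * math.erf(k / math.sqrt(2.0))  # P(|Z| < k)

# Reproducing rows of Table B-2:
print(round(percent_correct(10.0, 2.5)))      # boundaries at 2 sigma -> 95
print(round(percent_correct(10.0, 5.0)))      # boundaries at 1 sigma -> 68
print(round(percent_correct(30.0, 5.0), 1))   # boundaries at 3 sigma -> 99.7
print(round(percent_correct(30.0, 10.0)))     # boundaries at 1.5 sigma -> 87
```

The same function can be used to screen a candidate accuracy requirement for any decision interval an agency adopts.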
However, this method does not consider whether a given inspection procedure or practice can achieve the desired accuracy requirement. The following section provides
information regarding methods for testing the inspection procedure to determine whether the procedure is adequate for a given accuracy requirement. These methods are also applicable to inspector training and assessment for improving the quality of element-level data.

3.3 Methods for Improving and Measuring Inspection Quality

This portion of the guideline describes methods for measuring and improving quality in element-level bridge inspection. The purposes of quality measurements are to characterize the variation in inspection results, to provide a basis for rational accuracy requirements, and to improve the consistency of inspection results. Section 3.3.1 describes the use of a control bridge as a tool for evaluating the quality of element-level inspections. Section 3.3.2 describes inspector calibration exercises that can be used as a training tool to improve the consistency of inspection results within a bridge inspection program and for inspector certification.

3.3.1 Inspections Using the Control Bridge Model

The Control Bridge Model (CBM) is a method of performance testing bridge inspectors for the purpose of assessing variation in bridge inspection results. The method consists of having multiple inspectors or inspection teams complete inspections of the same bridge. The inspection results are compared to a standard to evaluate the accuracy and variability of the results. The CBM can be used as a training tool, for the certification of inspectors, or to assess quality in an inspection program. When used for training, the CBM can train inspectors in the proper implementation of element-level inspection procedures and practices in order to "calibrate" inspection results in terms of the standard. When used for certification, the CBM can assess the performance of an individual inspector relative to the standard. When used as a quality tool, the CBM can assess the overall variation among a group of inspectors or inspection teams.
The standard is typically provided by an expert or expert group that performs a "control inspection" documenting the desired inspection results. The control inspection should implement the appropriate procedures and practices to properly identify defects, assign condition states (CSs), and determine quantities, recording the actual condition of the elements as accurately as possible. In this way, it can be determined whether the results from the inspectors are similar to the control inspection result, i.e., the actual condition of the element.

Results from implementing the CBM can be used to evaluate the performance of individual inspectors or inspection teams in order to identify sources of inconsistency in their inspection results. Comparing individual results with the standard (i.e., the control inspection) provides a means to calibrate inspection results to the standard, to identify training needs for individual inspectors or teams, and to correct improper implementation of procedures and practices. The inspection results can also be analyzed to quantify variability for the purpose of confirming whether accuracy requirements are achievable. If the variation in the inspection results is too large to achieve the required accuracy, additional training may be needed to improve the quality of the inspection, the inspection procedures may need to be modified to improve consistency, or the accuracy requirement itself may need to be revised.

3.3.1.1 Control Inspection

Program managers or a team of experienced inspectors typically provide the control inspection. The control inspection should represent the proper application of inspection procedures and practices in order to properly identify defects, assign CSs, and evaluate damage quantities.
The control inspection may include quantitative measurements of damaged areas in order to ensure the accuracy of the quantities assigned to the different CSs. The control inspection is typically completed prior to the inspection exercise and provides an "answer key" for comparison with the inspection results from individual inspectors or inspection teams.
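The answer-key comparison might be implemented as in the following sketch; the element quantities and the 10% tolerance are illustrative assumptions, not values from the guideline:

```python
# Control-inspection ("answer key") CS quantities in sq ft for one element,
# and one inspector's results for the same element. All values hypothetical.
control = {"CS1": 5400, "CS2": 450, "CS3": 150}
inspector = {"CS1": 5300, "CS2": 520, "CS3": 180}

results = {}
for cs, control_qty in control.items():
    deviation = inspector[cs] - control_qty
    # Flag for review when the deviation exceeds an assumed tolerance
    # of 10% of the control quantity for that condition state.
    tolerance = 0.10 * control_qty
    results[cs] = "OK" if abs(deviation) <= tolerance else "REVIEW"
    print(f"{cs}: deviation {deviation:+d} sq ft "
          f"(tolerance +/- {tolerance:.0f}) -> {results[cs]}")
```

A per-CS tolerance like this highlights disagreement in the small, decision-critical quantities (CS 2 and CS 3) that a tolerance based on total element area would mask.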
3.3.1.2 Selection of Control Bridges

Control bridges are normally highway bridges of common design with common elements, such as reinforced concrete decks and steel or concrete superstructures. Factors to consider in selecting a control bridge include the following:

• Does the bridge represent a common bridge in the inventory?
• Is there suitable and safe access to the bridge for inspection?
• Does the bridge have adequate damage to allow different CSs and quantity estimates to be assessed?
• Is the bridge of a reasonable size that allows for the timely completion of typical inspection procedures?

Control bridges are typically of common design; however, a bridge can also be selected for the purpose of highlighting a particular element. For example, a truss bridge may be selected to assess the variation in inspection results for gusset plates.

3.3.2 Improving the Quality of Inspection Data

The bridge inspection process is generally subjective, such that maintaining consistent training and qualification of inspectors is important for obtaining consistent results. Variability in inspection results can occur even between nominally qualified inspectors. Inspectors may implement procedures and practices with differing levels of thoroughness and care, may interpret their findings differently, and may have different interpretations of certain defect and element CS descriptions. Differences in methods for estimating damage quantities and in the interpretation of the appropriate assignment of CSs can also occur. Human factors likewise affect the implementation of procedures and the interpretation of results: inspectors differ in experience, work habits, and education, and inspection results may be affected by the past experience of a particular inspector. As a result, the consistency of inspection results cannot be assumed among any group of nominally qualified inspectors.
Calibration exercises can help mitigate these factors and adjust the performance of inspectors to achieve the desired result of improved quality. The purpose of inspector calibration exercises is to improve the quality of element-level inspection by training inspectors on the proper implementation of procedures and practices, such that inspection results have greater consistency, i.e., quality. The CBM can be used for the calibration of inspectors by providing a basis for comparing individual inspection results to the standard, or desired result, represented by the control inspection. Differences between an individual inspector's results and the standard (i.e., the control inspection) can be identified and corrected. In this way, inspector performance is calibrated to more closely achieve the desired outcome of the inspection, as defined by the control inspection: properly identifying defects, assigning appropriate CSs, and making accurate quantity estimates.

3.3.2.1 Inspector Certification

The CBM can be used to confirm the qualifications of individual inspectors for the purpose of establishing certification. The results from individual inspectors can be compared with the control inspection to assess whether the inspection is being properly conducted according to existing procedures and practices. This provides a means of ensuring that required training has resulted in the desired outcome, namely that the inspector can properly identify defects, assign CSs, and accurately estimate quantities. Acceptable tolerances for variation between the control inspection and the individual inspection results are needed in order to establish the level of performance required for certification.

3.3.2.2 Use of Contractors

Many bridge owners use contractors for routine element-level bridge inspection.
The use of contractors can make it more difficult to ensure the consistency of inspection results because different contractors may have different methods or practices for implementing element-level inspection. Contractors may work with different bridge agencies that have different specific procedures and practices for element-level inspection, and different contractors may have different levels of knowledge and experience in implementing element-level bridge inspection. The CBM can be used to calibrate contractor inspections to ensure that inspections are conducted with the suitable level of quality for a given agency. The CBM can also be used to certify that an individual is properly implementing the desired procedures and practices in order to properly identify defects, assign CSs, and estimate quantities.

3.4 Sample Data Analysis for Assessing Inspection Accuracy

Data analysis for evaluating the relationship between inspection results and the accuracy requirement includes developing suitable bridge inspection results from control bridge testing or another suitable performance test. These studies should include bridges of typical design with significant damage in the CS from which decisions on preservation and maintenance will be made. Typically, this is damage described by CS 3, but it could be damage described by CS 2. Determining the standard deviation of the data is all that is needed to assess whether a given accuracy requirement is achievable from the field inspections. For example, Figure B-5 illustrates the results from a control inspection in which 15 inspectors assessed the same bridge deck and assigned the CS 2 quantities shown in the figure. The standard deviation, σ, for these data is 57 sq ft, such that on average the inspectors identified the area of deck within +/- 57 sq ft of the mean. For this bridge deck, with a total area of 6,000 sq ft, the standard deviation expressed as a percentage is less than +/- 1%. These statistics can be calculated using any spreadsheet software. A sample size of at least 20 inspections is recommended for determining the sample standard deviation used to assess whether a given accuracy requirement is met by the inspection procedure. It is also necessary to verify the assumption that the mean of the inspection results represents the actual condition.
This is where a control inspection by an expert team can play a role in the analysis: confirming that the mean of the inspection results is consistent with the expert opinion regarding the actual amount of damage in the bridge deck.

Figure B-5. Distribution of inspection results for a bridge deck.
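The calculation described above (sample standard deviation, expressed as a percentage of deck area and compared against an accuracy requirement) can be reproduced in a few lines. The 15 quantity values and the 2% requirement below are invented for illustration; the actual Figure B-5 data are not reproduced here:

```python
import statistics

# Hypothetical CS 2 quantities (sq ft) reported by 15 inspectors
# for the same 6000 sq ft bridge deck.
cs2_quantities = [610, 650, 590, 700, 640, 575, 665, 620,
                  605, 580, 690, 615, 660, 595, 630]

deck_area = 6000          # total deck area, sq ft
required_accuracy = 0.02  # assumed requirement: +/- 2% of deck area

sigma = statistics.stdev(cs2_quantities)  # sample standard deviation
sigma_pct = sigma / deck_area

print(f"sigma = {sigma:.1f} sq ft ({sigma_pct:.2%} of deck area)")
print("requirement achievable" if sigma_pct <= required_accuracy
      else "requirement not achievable")
```

The same workflow applies whether the data come from a control bridge exercise or another performance test: if σ as a percentage of the element quantity exceeds the requirement, the procedure, training, or requirement itself needs revisiting.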