Chapter 4. STREAM in Application: Bridge Deck Evaluation

Researchers at the Georgia Institute of Technology are developing a novel technology that would facilitate close monitoring of [bridge] structures for strain, stress, and early formation of cracks. Their approach uses wireless sensors that are low cost, require no power, can be implemented on tough yet flexible polymer substrates, and can identify structural problems at a very early stage. The only electronic component in the sensor is an inexpensive radio frequency identification (RFID) chip. Moreover, these sensor designs can be inkjet printed on various substrates, using methods that optimize them for operation at radio frequency. The result would be low-cost, weather-resistant devices that could be affixed by the thousands to various kinds of structures.29

This chapter presents an application of STREAM to illustrate how each step could be applied to bridge deck inspection and monitoring. This allows us to discuss STREAM more fully while keeping the treatment rooted in practical application. This chapter presents only a cursory analysis of the various technologies associated with bridge inspection and monitoring. The research team selected the function of bridge deck inspection and monitoring for several reasons:

• There are many different alternative technology approaches to performing these functions, with no clear indication that one dominates the others. Some are emerging, while others (e.g., GPR) have been available for some time. This presents an instance where many technology choices are present at differing states of maturity, but there has been no clear movement toward any one of them, or indeed beyond the state of practice current for most of the 20th century.
• These technologies represent a wide range of different technological sectors and approaches. This allows us to examine how they can be placed in a common field for assessment.
• This example is relevant to many stakeholders in that it spans planning (and long perspectives) and operations (and shorter term concerns). All participants in the technology assessment and adoption decision at the agency level need to find sufficient value in STREAM for it to be accepted for wider utilization.
• The research team possessed some prior exposure to this area.

Step 1. Framing Bridge Deck Evaluation for STREAM

The United States has approximately 600,000 bridges (AASHTO, 2008). This represents the largest inventory of bridges in the history of the world and one that has had a remarkable safety record. However, the inventory is aging during a time of unprecedented cost increases and limited resources for making repairs. The 2008 AASHTO report, Bridging the Gap, points out that 50 percent of the nation's bridge capacity, measured in terms of deck area, is between 35 and 55 years old, the age range during which structural repair needs increase (p. 11). The same report notes that "[b]ridge rehabilitation needs dwarf the amount of funds currently available and compel states to remain in a 'triage' mode of managing deficiencies as best they can for the next foreseeable decades" (p. 30). Sustaining this bridge inventory is a critical issue for the nation (p. 13).

29 "Wireless 'smart skin' sensors to provide remote monitoring of infrastructure," R&D website, 17 April 2013 (http://www.rdmag.com/news/2013/04/wireless-smart-skin-sensors-provide-remote-monitoring-infrastructure?et_cid=3201021&et_rid=524660799&linkid=http%3a%2f%2fwww.rdmag.com%2fnews%2f2013%2f04%2fwireless-smart-skin-sensors-provide-remote-monitoring-infrastructure. Last accessed 1 May 2013.)

Motivating the Use of STREAM for Bridge Deck Evaluation

Bridge inspection is the basis for identifying and prioritizing maintenance. The most frequently used form of inspection
is by human vision. However, the subjectivity of inspections and the absence of coordination among inspection, maintenance, and bridge design are seen as significant technical barriers to effective bridge management (Oh et al., 2009; Aktan et al., 1996; ASCE-SEI, 2008; Graybeal et al., 2002). One study notes:

There is uncertainty in measuring bridge performance as it is not well defined, understood or documented. It relies too heavily on expert opinion and not on objective data, and it is based on significant assumption or generalization based on very simplistic understanding of bridge behavior. (ASCE-SEI, 2008, p. 21)

Accurate data on bridge performance has been noted as particularly important in facilitating an integrated approach (ASCE-SEI, 2008). Coupled with the growing need for bridge repairs and declining resources available to carry them out, this suggests that improving inspection methods and "encouraging a circular design process that better integrates design, construction, inspection, maintenance and research" (Spy Pond Partners, 2010, p. 55, referencing ASCE-SEI, 2008) are matters of urgency.

These considerations provide an explicit framing for STREAM. We focus on the technological possibilities of improving the function of nondestructive evaluation of bridge decks as carried out by transportation agencies.30 As emerged through discussions with experts and DOTs, this function addresses transportation agency mission goals of preservation and mobility.31

Definition of Metrics for Bridge Deck Evaluation

First, a metric for each of the mission objectives of preservation and mobility needs to be developed.
One could try to measure these in their end-units, e.g., preservation in terms of the cost of maintaining bridges or mobility in terms of the number of vehicle-hours of delay; however, these measures are difficult, if not impossible, to estimate, as they are functions of many uncertain factors (e.g., traffic volumes) that vary from location to location and over time. In practice, it is effective and appropriate to consider more directly measurable differences in the technologies. For this example, we measure preservation in terms of a technology's ability to distinguish the condition state of the bridge, and we measure mobility in terms of the number of hours a bridge lane must be closed in order to use the technology for inspection. For each metric, performance thresholds on a 1-4 metric value scale have been developed, which allow comparison of performance across metrics with different natural units.

Metric for Preservation

In this example of bridge deck inspection, preservation is measured in terms of how well each of the technology alternatives can determine the condition state of the bridge deck. The AASHTO Bridge Guide Manual referenced above defines four condition states for several types of defects (e.g., cracks, spalls, delaminations). The higher the condition state, the greater the underlying damage and so the greater the need for action. For each condition state, the Manual recommends different feasible actions, as shown in Table 4-1. For Condition States 1 and 2, either nothing is done or some prophylactic measures are taken. For Condition State 3, repair and rehabilitation are considered. For Condition State 4, protection and repair are not even considered, and, in addition to rehabilitation, replacement is considered.
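The mapping from condition state to the Manual's feasible actions is a small lookup. The sketch below is illustrative only; the dictionary name and structure are ours, with the action lists taken from Table 4-1.

```python
# Feasible actions per AASHTO condition state (Table 4-1).
# The dictionary name and structure are illustrative, not part of STREAM.
FEASIBLE_ACTIONS = {
    1: ["Do Nothing", "Protect"],
    2: ["Do Nothing", "Protect"],
    3: ["Protect", "Repair", "Rehabilitate"],
    4: ["Rehabilitate", "Replace"],
}

# Misclassifying Condition State 4 as 2 would drop all repair-related actions:
print(FEASIBLE_ACTIONS[4])  # ['Rehabilitate', 'Replace']
print(FEASIBLE_ACTIONS[2])  # ['Do Nothing', 'Protect']
```

The lookup makes concrete why misidentifying a high condition state matters: the recommended action sets for States 3 and 4 share no entries with those for States 1 and 2 except protection.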
30 We illustrate the method with the example of bridge deck inspection. However, bridge decks/slabs are one of 12 "national bridge elements," and complete bridge inspection requires determining the condition state of each of these elements, as well as 5 "bridge management elements" (e.g., joints and approach slabs). Carrying out a STREAM exercise for all of the elements that exist on a specific bridge or bridge type would identify the best technology alternatives for each element and would elucidate any synergies that might make a particular technology attractive when the entire bridge is considered.

31 STREAM focuses decisionmakers' attention on a function's main mission goals, though a function may have many secondary goals. Bridge deck inspection also affects safety, to the extent that cracks in the bridge deck create unsafe driving conditions and inspection is a safety hazard for inspectors. It also affects environmental sustainability, e.g., through added emissions from delays on a bridge undergoing inspection. These minor mission goals for bridge deck inspection are ignored so as to concentrate on the major mission goals.

Table 4-1. AASHTO guide manual condition states for bridge decks.
  Condition State 1: Do Nothing; Protect
  Condition State 2: Do Nothing; Protect
  Condition State 3: Protect; Repair; Rehabilitate
  Condition State 4: Rehabilitate; Replace

The value of an inspection technology resides in its ability to distinguish between condition states, and especially in its ability to distinguish Condition States 3 and 4 from Condition States 1 and 2, as well as from each other, because the actions to be considered differ significantly in the last three condition states. Thus, failure to correctly identify Condition State 3 or 4 may lead to not recognizing the necessity of repair, rehabilitation,
or replacement, with potential negative influence on the goal of preservation. Preservation metrics are defined in Table 4-2.

Metric for Mobility

Effects on mobility of bridge deck inspection will primarily result from bridge closures and the associated traffic jams. This metric is based on the extent to which lane closures are required to complete an inspection of the bridge deck, with the metric values provided in Table 4-3.32

Step 2. Identify Technology Applications for Bridge Deck Evaluation

Current State of Practice

This step includes establishing the current state of practice in the function of concern. This forms the baseline against which other technologies are characterized and compared in the Characterize and Compare steps. Accepted practice for bridge deck NDE in the United States is for an inspector to walk the bridge deck and perform a visual inspection. The inspector sometimes augments visual inspection with audible inspection methods, such as chain dragging or hammer sounding. Visual inspection has been shown to be relatively inaccurate. In one controlled study, multiple inspectors were asked to rate the same bridge deck. The study found that only 68% of inspectors agreed to within one ranking out of ten, and that important defects are likely to go undetected (Graybeal et al., 2002, p. 82). Visual inspection also requires one or more lanes of traffic to be closed during the inspection, which can take from 1½ hours (for visual inspection) to as much as 12 hours (if audible inspection methods are used).

Current and Prospective Technology Alternatives

The next step is to scan and inventory current and prospective technology alternatives. Many diverse technologies for bridge deck inspection and monitoring exist, and Table 4-4, drawing on studies from the Minnesota DOT (Gastineau et al., 2009) and a workshop on bridge performance (ASCE-SEI, 2008), provides an initial inventory. These technologies vary along several functional dimensions.
They can be used for short- or long-term monitoring, to assess local or global bridge features and performance, and can be continuous or triggered by specific events (ASCE-SEI, 2008, p. 52). They also monitor different properties, such as strain or corrosion. They further vary in their technological basis (e.g., radar and optics), technological and market maturity, and whether they have been used in practice. An initial scan might note these various features in the inventory. The research team noted whether, during review of the literature, there was evidence that a particular technology is being used for bridge inspection and monitoring.33

Table 4-2. Preservation metric for bridge deck evaluation technologies.
  Metric Value 1: Inability to distinguish Condition State 3 or 4 from Condition State 1 or 2.
  Metric Value 2: Ability to distinguish Condition State 3 or 4 from Condition State 1 or 2, but inability to distinguish Condition State 4 from Condition States 1, 2, or 3.
  Metric Value 3: Ability to distinguish Condition State 3 or 4 from Condition State 1 or 2, plus ability to distinguish Condition State 4 from Condition States 1, 2, or 3.
  Metric Value 4: Same as Metric Value 3, plus ability to characterize bridge element condition sufficiently to provide quantitative input to bridge design models.

Table 4-3. Mobility metric for bridge deck evaluation technologies.
  Metric Value 1: Lane closure required for more than 10 hours.
  Metric Value 2: Lane closure required for 5-10 hours.
  Metric Value 3: Lane closure required for more than 1 but less than 5 hours.
  Metric Value 4: If lane closure is required, duration is less than 1 hour.

32 Other thresholds for both of these metrics are possible (e.g., lane closure for less than 30 minutes, 30 minutes to 2 hours, etc.). These may be tuned to bridge type, bridge usage, and local conditions.

33 That no evidence was found in the review to date does not necessarily mean that a given technology is not being used.
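The two scales in Tables 4-2 and 4-3 can be written as simple scoring functions. The sketch below is a minimal reading of the thresholds above; the function names and the boolean capability flags are ours, not part of STREAM.

```python
def mobility_metric(closure_hours):
    """Score lane-closure duration on the 1-4 mobility scale of Table 4-3."""
    if closure_hours < 1:
        return 4  # closure, if required at all, lasts under 1 hour
    if closure_hours < 5:
        return 3  # more than 1 but less than 5 hours
    if closure_hours <= 10:
        return 2  # 5-10 hours
    return 1      # more than 10 hours


def preservation_metric(separates_34_from_12, separates_4_from_123,
                        quantitative_design_input):
    """Score detection capability on the 1-4 preservation scale of Table 4-2."""
    if not separates_34_from_12:
        return 1
    if not separates_4_from_123:
        return 2
    return 4 if quantitative_design_input else 3


# Visual plus audible inspection: closures up to 12 hours, and subsurface
# information sufficient only to separate States 3/4 from States 1/2.
print(mobility_metric(12.0))                    # 1
print(preservation_metric(True, False, False))  # 2
```

The example scores reproduce the values assigned to visual-plus-audible inspection later in the chapter (preservation 2, mobility 1).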
The list in Table 4-4 is not exhaustive. These technologies can assist or augment the functional capabilities of bridge inspection to

• Enable inspectors to measure properties more accurately than could be done otherwise (e.g., using visual techniques);
• Enable inspectors to measure new properties that are not routinely part of current inspection practice (e.g., such new properties might be related to modeling, design, or operation of the bridge); and
• Reduce the need for visual inspection by continuously collecting data that instead could be analyzed by the inspector or others in order to provide information about whether or not the bridge is operating according to design models and within safety margins.

Sufficiently revolutionary changes in technology can also change agency functions. A STREAM analysis could inform agencies on how the functions may themselves change in the future and what, if anything, they should be doing to prepare. For example, while today's NDE technologies are used for periodic inspection, structural health monitoring, in which sensors embedded on the bridge continuously monitor it, may well be the future. Thus, the function of "bridge inspection" may itself become obsolete, having been replaced by the function of "bridge monitoring."34 In this case, agencies might choose to undertake pilot studies and processes to facilitate structural health monitoring. In any case, STREAM is flexible enough to detect when embedded sensors may evolve to the point at which they begin to emerge from the analysis as an important alternative to then-current technologies.

The research team examined the use of fiber-optic sensors (FOS) for structural health monitoring (SHM) as an example of how technologies may change the function under consideration or even eliminate it.35 Many projects have used forms of FOS technology applications for research and demonstration purposes to advance the state of practice.
Yet there is evidence that in application these may also affect the monitoring function in fundamental ways.

Table 4-4. Bridge inspection and monitoring technologies.
  1. 3-D Laser Scanning
  2. Accelerometers
  3. Acoustic Emission (AE)
  4. Automated Laser Total Station
  5. Chain Dragging (evidence of use: Gastineau et al., 2009)
  6. Concrete Resistivity
  7. Digital Image Correlation (DIC)
  8. Electrochemical Fatigue Sensing System
  9. Electrical Impedance (Post-Tensioning Tendons)
  10. Electrical Resistance Strain Gauges
  11. Fatigue Life Indicator
  12. Fiber Optics
  13. Global Positioning System (GPS)
  14. GPR (evidence of use: ASCE-SEI, 2008)
  15. Impact Echo (evidence of use: ASCE-SEI, 2008)
  16. Infrared Thermography (evidence of use: ASCE-SEI, 2008)
  17. Linear Polarization Resistance (LPR)
  18. Linear Potentiometer (String Pots)
  19. Linear Variable Differential Transformer
  20. Macrocell Corrosion Rate Monitoring
  21. Potential Measurements/Chloride Content
  22. Scour Devices (evidence of use: Gastineau et al., 2009)
  23. Tiltmeters/Inclinometers
  24. Ultrasonic C-Scan
  25. Vibrating Wire Strain Gauge
  26. Radiography (evidence of use: ASCE-SEI, 2008)
  27. Dye Penetrant (evidence of use: ASCE-SEI, 2008)
  28. Magnetic Particle (evidence of use: ASCE-SEI, 2008)
  29. Eddy Current (evidence of use: ASCE-SEI, 2008)
  30. Magnetic Flux Leakage (evidence of use: ASCE-SEI, 2008)
  31. Hammer Sounding (evidence of use: ASCE-SEI, 2008)
  32. Ultrasonic Pulse Velocity (evidence of use: ASCE-SEI, 2008)
  33. Spectral Analysis of Surface Waves (evidence of use: ASCE-SEI, 2008)
  34. Ultrasonic Acoustic Emissions (evidence of use: ASCE-SEI, 2008)

34 This would also require changes in the federal regulations to which local and state transportation agencies must respond.

35 Examples of FOS projects are listed in Table D-1 in Appendix D.

A survey of 40 SHM projects from the last 15 years suggests that SHM is increasingly being
used to extend bridge life and confirm designs, and has moved beyond the demonstration stage (Inaudi et al., 2009a).

For this illustration of STREAM, the research team identified six technology bundles to be examined in the next steps of the assessment. The current baseline in all agencies consists of using visual inspection with or without the assistance of audible inspection. These provide the first two alternatives. The other four were selected to provide a range of technology applications, levels of maturity, and prospective benefit:

• GPR. GPR sends a microwave signal that penetrates the bridge deck and then analyzes the return signal to highlight cracks, delaminations, and other defects at or below the surface, including those that may not have visible effects on the surface of the bridge deck. It can be performed from a moving vehicle, so it does not require lane closures during inspection.
• NDE Suite. This represents a suite of methods and tools chosen to allow comprehensive evaluation of the bridge deck surface and subsurface that can be performed from a moving vehicle, so it does not require lane closures during inspection. Such a suite might include GPR, ultrasonic inspection, and ultraviolet (UV) inspection. By combining microwaves, sound, and UV, the suite of NDE methods provides a means to image defects that might be missed by any one of the individual methods.
• Robotic Inspection. This bundle would use any of the other NDE methods, most likely an NDE suite, via an unmanned vehicle that could be operated during traffic flow and would not require lane closures.
• FOS Systems. FOS systems differ from the other NDE methods in that they facilitate SHM, rather than periodic inspection. FOS are embedded in the bridge deck, either when the bridge is constructed or repaired, or placed on the surface of the bridge in places not subject to traffic.
FOS can collect data continuously or at desired intervals and can identify surface or subsurface damage through changes in the optical signal.

Step 3. Characterize

The Characterize step applies the measures from the Frame step to the candidate technologies identified in the Identify step. We illustrate this in two ways. First, we provide an example of an analytical reasoning process, based on literature review and interviews, that would lead to characterization of the six alternative approaches we have selected. For simplicity, we focus here on only the value metrics. Second, we describe a survey exercise to gather input from subject matter experts to gain an aggregate assessment of the same six technology applications. Here we consider POSI scores and cost estimates, in addition to value metrics. In each case, resources and efforts were sufficient only to provide an illustration and are intended to be neither as conclusive nor as detailed as would be the case in actual application within a transportation agency or in a collaborative effort by several agencies and other bodies. Before applying the measures, however, we first describe how the POSI, introduced in Chapter 3, would be estimated in the case of bridge deck inspection.

Illustration of Analytical Characterization

Table 4-5 summarizes the metric value characterization based on the analysis of each technology's technical capabilities and use cases. As noted above, the current methods of inspection introduce uncertainty into the determination of the condition state and are likely to miss important defects. However, depending on the specific characteristics of the bridge and the damage, visual inspection may or may not be able to distinguish Condition States 3 and 4 from Condition States 1 and 2.
Visual inspection requires closing at least one traffic lane for more than an hour, which corresponds to a value of 3 for mobility.36 Supplementing visual inspection with audible inspection methods will provide subsurface information that would improve the chances of distinguishing Condition States 3 and 4 from Condition States 1 and 2. Therefore, we assign a value of 2 for preservation to visual inspection plus audible inspection. With audible inspection, lane closure can be more than 12 hours, which corresponds to a value of 1 for the mobility value measure.

Table 4-5. Analytical characterization of value metrics for bridge deck evaluation technology alternatives.
  Technology Application: Preservation Value (max 4) / Mobility Value (max 4)
  Visual: 1 / 3
  Visual + Audible: 2 / 1
  FOS: 3 / 4
  GPR: 3 / 4
  NDE Suite: 4 / 4
  Robotic: 4 / 4

36 Throughout this analytical characterization, only whole numbers (1, 2, 3, and 4) are used to characterize value metrics. However, analysts could use real numbers (e.g., "3.2") to indicate that performance may fall between the defined performance levels. Analysts could also use ranges of values to reflect uncertainty in judgments.

Because they provide data on the full substructure of the bridge, both GPR and FOS should be able to distinguish Condition State 3 or 4 from Condition States 1 and 2, and also distinguish Condition State 4 from Condition States 1,
2, and 3. However, each method uses only one type of probing excitation (microwave for GPR and optical for FOS), so they may miss some defects. We would therefore assign an equal preservation metric value of 3 for GPR and FOS. Neither method requires lane closures, given that GPR can be performed from a moving vehicle and FOS are embedded in the structure itself. Thus we assign a value of 4 for mobility to both.

Use of a suite of NDE methods allows the combination of several types of probing excitations (e.g., microwave, acoustic, and UV), which should allow for a complete subsurface characterization of the bridge deck to definitively identify the condition state. Thus the research team assigned a value for preservation of 4. The suite of NDE methods can be performed from a moving vehicle, not requiring lane closure, so the research team assigned a metric value for mobility of 4.

The research team assumed that a robotic inspection system would use a suite of NDE methods and would be performed from a moving vehicle, the difference being that the vehicle would not require a driver but could be operated remotely. This would eliminate the need for one or more staff (e.g., driver or sensor operator) during the inspection, but would not change the basic information obtained or how it is analyzed. Thus the robotic inspection will have the same value for the preservation measure as the suite of NDE methods. Assuming that the vehicle can be operated in a manner that does not require lane closure, the research team assigned a value for mobility of 4.

Illustration of Survey-Based Characterization

One can also use surveys of experts to characterize technologies instead of, or in addition to, analytical characterizations as shown above. On 12 April 2012, the research team visited the Minnesota DOT (MnDOT) at their headquarters in St. Paul, Minnesota.
During a full day, the research team conducted several sessions with staff and administrators on the work of the project. For the final session of the day, the research team asked the 10 participants from headquarters and field offices and planning and operations staffs, who were selected for their familiarity with bridge inspection, to provide their own ratings of the benefits, barriers, and costs of the six alternative technology approaches for bridge deck evaluation that had been discussed. This exercise was more than just a field test of STREAM itself. Because of the level of experience and familiarity with specific technology alternatives gained by MnDOT through studies they performed following the collapse of the I-35W bridge over the Mississippi River, these results would serve as a proxy for the results that would be produced by an expert panel or a survey of experts in the field.

Table 4-6 shows the result of this proxy expert survey. The single-view assessment of value metrics provided in Table 4-5 is largely consistent with these results. In this table we report the mean of the expert inputs for each of the three measures, along with the standard deviation across the survey responses in brackets.38

Table 4-6. Characterization of bridge deck evaluation technology alternatives, expert survey result.37
  Technology Application: Preservation Value (max 4) [std. dev.] / Mobility Value (max 4) [std. dev.] / POSI (max 4) [std. dev.]
  Visual: 1.1 [0.33] / 1.3 [0.50] / 3.8 [0.05]
  Visual + Audible: 2.0 [0.71] / 1.2 [0.44] / 3.6 [0.12]
  FOS: 2.2 [1.09] / 2.7 [0.87] / 1.7 [0.35]
  GPR: 3.1 [0.33] / 2.7 [1.00] / 2.6 [0.40]
  NDE Suite: 3.1 [0.33] / 3.1 [0.60] / 2.2 [0.36]
  Robotic: 2.9 [0.97] / 2.9 [0.78] / 1.5 [0.34]

37 In Appendix D we provide Table D-2, giving the coefficient of variation for each technology, to provide a more intuitive feel for the degree of uncertainty or difference of opinion these survey scores represent.

38 In the case of the POSI measure, we asked each individual to assess each of the nine underlying barrier issues that constitute the POSI index. We then used the mean score for the ratings from each respondent across the nine categories. The score reported in Table 4-6 is the mean of those means. We did not apply different weights to the nine different potential barrier areas. This is in accord with the findings from the investigation of obstacles to innovation by transportation agencies that any significantly large barrier may be sufficient to affect POSI seriously and that it is not possible to point to any one of these areas as being consistently a dominating problem. Individual agencies may choose to apply a weighting system.

Characterization of Costs for Alternatives

Cost characterizations were framed according to the fixed and variable cost structure described in the previous chapter and based on the review of the literature of each of these
technologies.39 Table 4-7 shows the breakdown of net fixed and recurring costs for each technology during the initial 5-year period of its use.

Table 4-7. Fixed and recurring costs for candidate bridge inspection technology applications over the 5-year period following implementation.
  Columns: Visual / Visual + Audible / GPR / Fiber Optics (New) / Sensor Suite (GPR + IR) / Sensor Suite on Robotic Platform
  Net fixed costs:
    Acquisition (net value of redundant equipment): $87,285 / $87,285 / $129,463 / $4,404,041 / $217,311 / $471,246
    Taxes/Penalties/Fees (net TPF no longer required): $0 / $0 / $0 / $0 / $0 / $0
    Training (net training no longer required): $0 / $0 / $43,200 / $10,000 / $97,200 / $97,200
    Licenses, Royalties, etc. (net): $0 / $0 / $15,000 / $6,186 / $24,405 / $61,518
  Net recurring costs:
    O&M (net O&M of equipment made redundant): $278,588 / $278,588 / $269,813 / $278,588 / $282,975 / $282,975
    Training (net training no longer required): $52,647 / $52,647 / $115,822 / $67,271 / $194,792 / $194,792
    Taxes/Penalties/Fees (net TPF no longer required): $0 / $0 / $0 / $0 / $0 / $0
    Licenses, Royalties, etc. (net): $0 / $0 / $0 / $0 / $0 / $0
    Personnel (net personnel made redundant): $1,491,652 / $2,680,586 / $2,024,867 / $1,491,652 / $2,249,852 / $2,249,852
  Total: $1,910,172 / $3,099,106 / $2,598,165 / $6,257,738 / $3,066,535 / $3,357,583

39 Key base cost assumptions are provided in Table D-3 in the appendix.

Weighing Indirect Effects

Adoption of different technologies may have indirect effects in addition to the anticipated direct costs and benefits. Assessing unintended consequences is important but, by definition, not easy. Specifically, consideration must be given to the questions:

• What are the potential positive or negative indirect effects of using this technology?
• How could positive effects be enhanced and negative effects mitigated?

Although the literature review and interviews did not reveal specific unintended consequences, the research team can anticipate several potential issues. On the one hand, bridge preservation and mobility may be key desired effects, but bridge inspection technologies may also have positive effects on safety, environmental goals, and improved bridge designs. On the other hand, the research team can hypothesize several negative unintended results from employing some technologies. There could be overreliance on sensors or complacency, though there is not yet evidence of this. The opposite might also be true: too much information may lead to a type of hypochondria, especially if some bridges have advanced sensors and others do not:

• False positives may lead to needless concern or repairs that would, in themselves, have indirect consequences for mobility, budgets, and highway operations.
• Highly sensitive sensor systems could lead to discovery of previously imperceptible anomalies that either may never manifest as problems or for which amelioration may be beyond current financial or technical means.

For these reasons, the research team advocates the use of STREAM as a framework and as a platform for discussion and analysis to occur, but not as a substitute for agency-level decision making. STREAM is designed to make assumptions explicit and to allow comparison among alternatives on a level playing field of assessment.

Step 4. Compare

The next step helps decisionmakers compare technologies across multiple dimensions, all of which will play a role in choosing an ultimate course of action. The approach uses the best available data to compare explicitly how well each technology alternative (including the current approach employed by DOTs and MPOs) allows the function being assessed to meet mission objectives. The research team draws on the previously defined metrics for mission values as well as the POSI metric that measures the relative difficulty of implementing each technology alternative. Exposing the distribution and range of uncertainty of these values is a central feature of the method. Explicit representation of uncertainties can help decisionmakers clarify their understanding of the technology alternatives (as well as gain insight into what may have led to differing expert assessments), debate potential contributions to achieving mission goals, and identify the most important issues associated with successful implementation.

Value and Implementation of Current Methods and Technology Alternatives

The research team uses the values found in Table 4-8 to illustrate the Compare step for bridge deck evaluation technologies. (The values in the first two data columns of Table 4-8 reproduce the means found in the first two columns of Table 4-6, while the fourth data column of Table 4-8 is the third column of Table 4-6.)
First, the research team can compare the NDE options by creating an overall Value Metric that provides an aggregate measure of the benefit across all relevant agency goals (in this case, preservation and mobility). This value is the product of the preservation metric and the mobility metric and is listed in the third column in Table 4-8.40 Second, the research team can multiply this overall Value Metric by the POSI and obtain an expectation of the actual benefit from application of the technology alternative when implemented, which the research team calls the Expected Value of the technology alter- native.41 These last two values are shown in the fourth and final columns of Table 4-8, respectively. Table 4-8 illustrates the tradeoffs involved in implement- ing each of the technology alternatives, as well as the tradeoff involved in supplementing visual inspection with audible inspection methods. For example, while the addition of audi- ble methods improves the determination of the bridge deck condition state (reflected in a higher preservation value), it has a negative effect on mobility. So the decision depends on how much lane closure an agency is willing to endure for the Technology Application Preservation Value (Max 4) Mobility Value (Max 4) Value Metric = Preservation x Mobility (Max 16) POSI (Max 4) Expected Value = VM x POSI (Max 64) Visual 1.1 1.3 1.5 3.8 5.6 Visual + Audible 2.0 1.2 2.4 3.6 8.8 FOS 2.2 2.7 5.9 1.7 10.1 GPR 3.1 2.7 8.4 2.6 21.8 NDE Suite 3.1 3.1 9.7 2.2 21.3 Robotic 2.9 2.9 8.3 1.5 12.8 Table 4-8. Value-based comparison for bridge deck evaluation technology alternatives. 40 The statistical reasoning behind this simple approach is more sophisticated than might first appear. Each metric value can be regarded as a random variable measuring how each technology alternative may affect each of these outcomes. 
The distribution of this random variable depends on how the technology is implemented and used, as well as on the specific characteristics of the bridge to which it is applied. We could then provide an estimate of the Value Metric of the overall effect of implementing the technology alternative as the product of these distributions. Distributions of such observable variables often take the form of a log-normal distribution. Because the observable variable appears in the exponential of such a distribution, the product distribution gives the combined effect. The distribution of the resulting product, in this case the Value Metric, also takes the form of a log-normal distribution.

41 Both the Preservation Value and Mobility Value in Table 4-8 carry an implicit presumption that the weight given to each mission-specific value is equal. This is a factor that could be modified at the level of the individual transportation agency; in some settings, the factor of mobility may be given a different weight than that for preservation. In this example, these implicit weights are set equal to 1.
benefit of audible methods that may better determine the condition state. All of the technology alternatives improve the ability to determine condition state compared to the current visual or visual plus audible approaches. Of these, GPR appears to be the easiest to implement because it requires less of a change from current practice and has a well-documented set of demonstrations. It achieves the highest Expected Value calculation as a result.

Figure 4-1 shows the Value Metric (the product of the preservation and mobility metric values, as shown in the third column of Table 4-8) on the vertical axis against the qualitative measure of POSI (the fourth column of Table 4-8) on the horizontal axis. Each point plots the mean values, while the "cross hairs" show the revealed uncertainty in each dimension as measured by the dispersion of expert opinion. This shows that GPR, the NDE suite, and the robotic inspection have very close value metrics, but that GPR is judged the most likely to be successfully implemented. Additionally, there is much less uncertainty in the implementation judgments than in the value metric judgments, with the robotic inspection system having a high degree of uncertainty in terms of meeting agency goals.

The plot also shows a tradeoff between the Value Metric and POSI: technologies that significantly improve mobility or preservation (high Value Metric) may be more difficult to implement (low POSI) because they reflect a greater departure from the state of practice. This tradeoff is captured by the expected value measure in Table 4-8. To facilitate comparison, the research team plotted equivalency curves, which identify points on the graph that have the same expected value. For example, the expected value for GPR is 21.8.42 It is the highest such value among the six technologies. Any point on the equal-value curve that passes through the GPR point would have the same expected value of 21.8.
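The two aggregate scores used here are simple products of the tabulated means. A minimal sketch in Python, using the mean scores from Table 4-8 (small differences from the published products reflect rounding of the underlying means):

```python
# Mean scores from Table 4-8: (preservation, mobility, POSI), each on a 0-4 scale.
alternatives = {
    "Visual":           (1.1, 1.3, 3.8),
    "Visual + Audible": (2.0, 1.2, 3.6),
    "FOS":              (2.2, 2.7, 1.7),
    "GPR":              (3.1, 2.7, 2.6),
    "NDE Suite":        (3.1, 3.1, 2.2),
    "Robotic":          (2.9, 2.9, 1.5),
}

for name, (preservation, mobility, posi) in alternatives.items():
    value_metric = preservation * mobility   # aggregate benefit, max 16
    expected_value = value_metric * posi     # benefit weighted by POSI, max 64
    print(f"{name:17s} VM={value_metric:5.2f}  EV={expected_value:5.2f}")
```

For GPR, for instance, this gives a Value Metric near 8.4 and an Expected Value near 21.8, matching the table.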
The low equivalency curve passes through the point with the lowest expected value. This is visual inspection, with an expected value of 5.6. The middle curve captures the points on the graph that have the middle expected value (between the highest and the lowest, in this case (5.6 + 21.8) / 2 = 13.7). These curves help identify technologies that have different expected impacts on agency function.43

A view such as the one in Figure 4-1 makes it possible to present complicated information clearly: the consensus expert opinion, the range of uncertainty among experts, and the tradeoffs between the characteristics (e.g., assessed ability to enhance disparate agency mission goals and likelihood of obstacles of various heterogeneous types) of a group of technologies now placed on the same scale. It does have limitations in that it presents only a 2-dimensional slice through a space with multiple dimensions. But this shortcoming can be addressed by providing other 2-dimensional slices that may serve to illuminate more clearly the relevant tradeoffs. This may be done easily, as may examining the consequences of different weightings and preferences among mission goal effects and of different ways of treating uncertainty.44

One such view is shown in Figure 4-2. The logic is similar to that of Figure 4-1. In this case, the two principal mission values, preservation and mobility, are placed on the two axes and the characteristics of the six candidate technology applications are shown. This view helps explain the placement of the technologies along the value metric (vertical) axis in Figure 4-1. The current approaches, visual and audible inspection, score the least well, with the use of both together providing some additional benefit on the preservation axis.
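Because the Expected Value is the product of Value Metric and POSI, each equivalency curve in Figure 4-1 is simply the hyperbola VM = EV / POSI for a fixed EV. A minimal sketch of the three curves just described, using the Table 4-8 values:

```python
high_ev = 21.8                    # GPR, the highest expected value
low_ev = 5.6                      # visual inspection, the lowest
mid_ev = (high_ev + low_ev) / 2   # the middle curve, 13.7

# Sample each hyperbola VM = EV / POSI across the POSI axis; every point
# on a given curve has the same expected value VM * POSI.
for posi in (1.0, 2.0, 3.0, 4.0):
    row = {label: ev / posi for label, ev in
           [("low", low_ev), ("mid", mid_ev), ("high", high_ev)]}
    print(f"POSI={posi:.1f}  " +
          "  ".join(f"{k}: VM={v:5.2f}" for k, v in row.items()))
```

Any technology whose (POSI, VM) point lies above a given curve has a higher expected value than every point on that curve.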
In the case of Visual + Audible, however, it shows how the addition of chain dragging or hammer sounding represents a tradeoff: improved performance for one mission value (preservation) at the cost of a deterioration in another (mobility). The three alternatives of GPR, the NDE suite, and the robotic platform suite cluster together at the highest level, with uncertainty ranges that cause them to be largely indistinguishable.

These visuals were shared with the MnDOT experts group. In the discussion that followed, the reaction was favorable in two ways. First, the results seemed to capture the actual relationship among these technological alternatives as understood individually and collectively by this group. Second, even

Figure 4-1. Value measure (preservation × mobility values) compared to implementation barriers.

42 The Value Metric is 3.1 (preservation) multiplied by 2.7 (mobility), or 8.4, as shown on the vertical scale. As shown in the figure, the POSI is 2.6. Multiplying this number by 8.4 yields 21.8.

43 This figure and others of this type presented in this report were generated by a simple Excel-based model developed for this purpose. This tool allows the entry of many differing expert opinions (and allows for two rounds of voting to support a Delphi-like expert panel process) and then converts these inputs into a series of plots similar to that shown in Figure 4-1 and the others found in this report. This tool is available on the TRB website and can be found by searching on "NCHRP Report 750, Volume 3".

44 In this example we merely plotted the high and low responses, and thus the full range of responses. The dotted lines of uncertainty could instead reflect standard deviations, coefficients of variation, or other representations of uncertainty and of the distribution of opinion.
though not surprising (given that the outputs were generated from inputs by the same group), the results provided insights that were considered valuable. They provided a comprehensive means to represent disparate information in a manner that could be understood and could (and did) provoke additional rounds of discussion that could then be most usefully focused toward specific agency purposes. There was appreciation of the fact that the goal was not to cause the computer to spit out "the answer" deus ex machina, but rather to provide a platform to facilitate the weighing of alternatives within a group of technical staff, managers, and decisionmakers.

Cost of Current Methods and Technology Alternatives

The analysis presented so far deals only with the value of different NDE options in meeting mission objectives and the likelihood that they can be successfully implemented. To finish the comparison, the research team looked at the cost of implementing such alternatives. The costs can be represented in the Compare step in STREAM in views similar to those shown in Figures 4-1 and 4-2: one could retain the same axes but report the mean (as well as other measures that might be deemed useful, such as range) of the cost scores for each technology alternative in text associated with the points representing the technology alternatives in the resulting plane (see Figure 4-3).

Another approach would be to produce a view similar to these figures, retaining the benefit score on the vertical axis but replacing the POSI score with the cost score on the horizontal axis. Both views can be of value in presenting the choices and tradeoffs presented by a particular set of technology alternatives.
However, in this case, one cannot then also include the level-set contours showing equivalent values: this would imply a cost-benefit tradeoff that the analysis developed by the current STREAM method would not represent

Figure 4-2. Preservation values compared to mobility values.

Figure 4-3. Value measure (preservation × mobility values) compared to implementation barriers, with notations on equivalent costs.
accurately.45 It is one thing to strive for simplicity; it is another to go beyond a firm grounding in decision theory and statistics, or to imply such a foundation when it is not present. In designing STREAM the research team strove to create simplicity without violating sound analytical practice.

Step 5. Decide

The previous step, Compare, should yield a set of tradeoffs and other comparative information about different actions that the agency may take in response to different technologies, rather than any one simple answer. The step should also have generated considerably more interaction among those on whom the ultimate decision rests, as well as more meaningful interaction with the body of knowledge that exists on each alternative, than is supported by most current practices in transportation agencies. In the final step, both these formal assessments via STREAM and the informal interactions that have occurred during the conduct of the STREAM process are used to come to a decision. In particular, the cost analysis coupled with the value metrics suggests that a key question is whether the benefit of improved condition state determination, together with a reasonable likelihood of successful implementation, is commensurate with cost.

A comparison of alternatives in Figure 4-3 suggests that the NDE sensor suite and GPR are lead contenders for improvement over the state of practice: they have the highest expected values of all the new options and have lower cost than the FOS and robotic systems. Between these two, GPR has a higher POSI score, while the sensor suite has a higher value metric score. GPR costs less as well: $2.6 million compared to $3.1 million.

The research team did not implement a formal, quantitative cost-benefit assessment for the reasons discussed in the prior sections. The same data can have different implications for different agencies.
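The dimension-by-dimension comparison that the Decide step relies on, deliberately stopping short of collapsing cost and benefit into a single ratio, can be sketched as follows (scores from Table 4-8 and the costs quoted above; the field names are illustrative, not from the report):

```python
# Lead contenders compared attribute by attribute rather than by a
# single cost-benefit score, consistent with the discussion above.
contenders = {
    "GPR":       {"value_metric": 8.4, "posi": 2.6,
                  "expected_value": 21.8, "cost_millions": 2.6},
    "NDE Suite": {"value_metric": 9.7, "posi": 2.2,
                  "expected_value": 21.3, "cost_millions": 3.1},
}

for attribute in ("value_metric", "posi", "expected_value", "cost_millions"):
    gpr = contenders["GPR"][attribute]
    nde = contenders["NDE Suite"][attribute]
    # Lower is better only for cost; higher is better for the value scores.
    favors = "GPR" if (gpr < nde if attribute == "cost_millions" else gpr > nde) else "NDE Suite"
    print(f"{attribute:14s} GPR={gpr:5.1f}  NDE Suite={nde:5.1f}  favors {favors}")
```

Each attribute is reported separately so an agency can apply its own weights and risk preferences rather than accept a prebuilt ranking.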
With respect to the data given here, an agency prioritizing cost concerns, or one that seeks to avoid higher-risk options, might prefer GPR over all others: it is lower cost than the other alternatives but could still substantially improve the state of practice. In meetings with MnDOT, participants suggested that such results would lend weight to arguments for investing in research or taking administrative measures to improve the POSI of GPR, thereby increasing its expected value and reducing uncertainty around its performance and other characteristics.

Alternatively, an agency with an interest in pushing the technology boundaries might pursue an NDE suite, or even experiment with robotic systems. As one pathway for preparing for major shifts in bridge deck inspection, agencies could monitor advancements in structural health monitoring and FOS through working groups, fund research, or undertake pilot studies. Other agencies might choose to do none of the above, preferring to wait until benefits are further proven or costs decrease further. This, too, is a valuable result from the STREAM process, because the choice is made explicitly and supported by analysis, rather than implicitly due to lack of information.

Decisions such as these depend on the specific agency context. Different agencies face different barriers to adoption and implementation, cost concerns, cost-versus-benefit preferences, and so forth. The value that STREAM provides is greater clarity about the characteristics of most importance and the bases on which decisions should be made. It also allows for drilling down to understand on what explicit bases and assumptions the apparent results depend.

Other Applications of STREAM

The STREAM application for bridge deck inspection served well during the period in which the research team was developing STREAM.
However, the research team also wanted to make certain that the method as it evolved was not affected by the particular transportation agency function the research team chose to examine. The research team wanted to ascertain that STREAM is, indeed, a generalizable approach to expediting technology adoption in transportation. The research team could not preclude the possibility that different technology applications and different agency functions might lead to problems in applying STREAM at the agency level.

The research team also recognized that it would be desirable to have a wider, more disparate set of examples. Doing so could increase the chances of conveying the value of STREAM to potential users in transportation agencies and national bodies, while also perhaps suggesting new applications. In this sense, having more examples will enhance the outreach portion of the project.

For these reasons the research team conducted two further STREAM applications. The first is intended to be firmly in the realm of operations: it examines alternative approaches to road de-icing and ice prevention, and is provided in Appendix B. The second, a focused look at one aspect of ITS that applies STREAM to alternative means for providing real-time traffic information to drivers, is provided in Appendix C.

45 See footnote 25 for an explanation. It is tempting to present a three-dimensional representation, similar to Figures 4-1 and 4-2, that retains the same x-axis and y-axis but projects a third, z-axis, for cost, resulting in a cube rather than a plane. We do not do so because of the great possibility of misunderstanding. This would in effect place cost and POSI on the same basis from a decision-theoretic perspective when, as we have discussed above, their interpretation must be different.