National Academies Press: OpenBook

Guidelines to Improve the Quality of Element-Level Bridge Inspection Data (2019)

Chapter: Appendix E. Field Exercise Results

Page 177
Suggested Citation:"Appendix E. Field Exercise Results." National Academies of Sciences, Engineering, and Medicine. 2019. Guidelines to Improve the Quality of Element-Level Bridge Inspection Data. Washington, DC: The National Academies Press. doi: 10.17226/25397.

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

APPENDIX E
Field Exercise Results

Contents

E.1  FIELD EXERCISE OVERVIEW
  E.1.1  Field Exercise Plans
  E.1.2  Training on Use of the Guidelines
E.2  FIELD EXERCISES IN INDIANA
  E.2.1  Introduction and Testing
  E.2.2  Pretest Questionnaire
  E.2.3  S-BRITE Center Exercises
  E.2.4  Task S-BRITE6 Truss and Gusset Plate Elements
  E.2.5  Bridge I1 & I2 Inspection Exercise Tasks
E.3  MICHIGAN FIELD TEST RESULTS
  E.3.1  Pre-test Questionnaire
  E.3.2  Spatial Area Estimation
  E.3.3  Bridge Inspection Exercise Results
  E.3.4  Post-test Questionnaire
  E.3.5  Inspection Times
E.4  DISCUSSION
E.5  CONCLUSIONS
E.6  RECOMMENDATIONS

List of Figures

Figure E-1. Image of a Snellen eye test chart.
Figure E-2. Typical plate used to assess color blindness.
Figure E-3. Photograph of plate girder showing inspectors assessing simulated areas of damage.
Figure E-4. A) Example of a page test for estimating area based on percentage, showing 5% (top) and 13% (bottom); B) Participants performing the page estimate test.
Figure E-5. Results from S-BRITE Task 2 (A) and Task 3 (B) showing area estimates provided by TGA and TGB.
Figure E-6. Results from estimating the area of simulated damage on the webs of two plate girders.
Figure E-7. Results of S-BRITE Tasks 4 (A) and 5 (B) for length estimates from TGA and TGB.
Figure E-8. Results of S-BRITE Tasks 4 and 5 estimating simulated damage in units of ft.
Figure E-9. Summary of task times for S-BRITE Tasks 2 through 5.
Figure E-10. Inspectors conducting an assessment of the truss during the field exercise.
Figure E-11. Examples of corrosion damage on the truss bridge showing gusset plate (A), side view of bottom chord (B), upper chord and diagonals (C), and underside of bottom chord (D).
Figure E-12. Inspector results for Defect 1000, corrosion damage in a truss.
Figure E-13. Photograph of test bridges I1 (ID #I65-179-05487 BNBL) and I2 (ID #I65-179-05487 BSBL).
Figure E-14. Photographs of bridges I1 (left) and I2 (right) showing general conditions of the beam elements with corrosion and coating damage.
Figure E-15. Photographs of typical areas of section loss observed on bridges I1 and I2.
Figure E-16. Individual inspection results for Element 107 - Open Steel Girder (ft).
Figure E-17. Photographs of the deck surface illustrating damage in the decks of I1 and I2.
Figure E-18. Diagrams of damage in the deck soffit for bridges I1 (A) and I2 (B).
Figure E-19. Individual inspection results for Element 12 - RC Deck (sq ft).
Figure E-20. Diagrams of damage in the wearing surface of bridges I1 (A) and I2 (B).
Figure E-21. Individual inspection results for the wearing surface of bridges I1 and I2 (sq ft).
Figure E-22. Photographs of typical conditions of compression seals for bridges I1 (A) and I2 (B).
Figure E-23. Photograph of two moveable bearings in bridge I1 showing corrosion damage.
Figure E-24. Photographs of bridges M1 (A), M2 (B), and typical damage (C and D).
Figure E-25. Inspection results for PS girder damage for bridges M1 (top) and M2 (bottom).
Figure E-26. Photograph of the overall condition of columns in bridge M1.
Figure E-27. Photographs of typical damage in columns showing spalling (A) and cracking (B).
Figure E-28. Photographs of typical conditions for the abutment (top) and bearings (bottom).
Figure E-29. Bridge M3 span 5 deck surface showing delamination/spall/patched area defects.
Figure E-30. Inspection results for RC deck M3 showing areas of damage in the deck.
Figure E-31. Bridge M4 top of deck showing delamination/spalls/patched areas on spans 4 and 6.
Figure E-32. Inspection results for RC deck M4 showing areas of damage in the deck.
Figure E-33. NBIS ratings provided for the deck (DK), superstructure (SS), and substructure (Sub) for test bridges in Michigan.
Figure E-34. Standard deviation (σ) as a function of damage for CS 3 (A) and CS 2 + CS 3 (B).
Figure E-35. Rate of detection for CS 3 as a function of the mean.

List of Tables

Table E-1. Results from the pretest questionnaire for the Indiana field exercise.
Table E-2. Summary of S-BRITE Center Tasks 1-5.
Table E-3. Areas of defect used for the spatial estimating task.
Table E-4. Results of estimating areas on guide sheets.
Table E-5. Results from S-BRITE Tasks 2 and 3 showing damage estimates and analysis results.
Table E-6. Results of S-BRITE Tasks 4 and 5 estimating damage by length (ft).
Table E-7. Error analysis of the combined results from TGA and TGB for Tasks 2-5.
Table E-8. Inspection results for steel truss corrosion (Defect 1000) reported as a percentage of the total quantity.
Table E-9. Inspection results for steel protective coating assessed in units of ft.
Table E-10. Inspection results for effectiveness of steel protective coating (sq ft) reported as a percentage of the total quantity.
Table E-11. Inspection results for gusset plates reported as a percentage of the total quantity.
Table E-12. Inspection results for Element 107 - Open Steel Girder.
Table E-13. Bridge I1 and I2 inspection results for Element 515 - Steel Protective Coating reported as a percentage of the total quantity (sq ft).
Table E-14. Inspection results for bridge I2 for steel protective coatings assigned as length (ft), shown as a percentage.
Table E-15. Results for Element 12 - RC Deck for bridges I1 and I2.
Table E-16. Results for Element 12 - RC Deck for bridges I1 and I2 with two outliers removed from TGB.
Table E-17. Results for Element 510 - Wearing Surface for bridges I1 and I2.
Table E-18. Assignment of defect elements 3210 (delamination) and 3220 (cracking) to Element 510 - Wearing Surface for bridges I1 and I2.
Table E-19. Results for Element 302 - Compression Joint Seal.
Table E-20. Defects for compression seals identified by inspectors with CS 2, 3, or 4 for bridges I1 and I2.
Table E-21. Results for Element 210 - Reinforced Concrete Pier Wall, bridge I1.
Table E-22. Results for Element 311 - Movable Bearings.
Table E-23. Frequency of defect assignment for the decks of bridges I1 and I2.
Table E-24. Frequency of defect element selection for concrete elements.
Table E-25. Condition ratings for bridges I1 and I2.
Table E-26. Qualifications of the bridge inspectors for the Michigan inspection exercise.
Table E-27. Simulated area estimation as a percentage of the total area of the sheet of paper.
Table E-28. Inspection results for Element 109 - Prestressed Concrete Girder/Beam for bridges M1 and M2 reported as a percentage of the total quantity.
Table E-29. Inspection results for Element 205 - Reinforced Concrete Column for bridges M1 and M2 inspected in units of (ea) and reported as a percentage of the total quantity.
Table E-30. Inspection results for columns of bridges M1 and M2 (ea).
Table E-31. Inspection results for Element 205/227 - RC Column or Pile for bridges M1 and M2 using units of ft, reported as a percentage of the total quantity.
Table E-32. Inspection results for Element 215 - RC Abutment and Element 234 - RC Pier Cap for bridges M1 and M2 reported as a percentage of the total quantity.
Table E-33. Inspection results for Element 313 - Fixed Bearing and Element 310 - Elastomeric Bearing for bridges M1 and M2 reported as a percentage of the total quantity.
Table E-34. Inspection results for Element 12 - RC Deck for bridge M3 reported as a percentage of the total quantity.

E-5 Table E-35. Inspection result for element 300 - Strip Seal for bridge M3 reported as a percentage of the total quantity. .......................................................................................................................................... E-65  Table E-36. Inspection result for element 12 - RC Deck for bridge M4 reported as a percentage of the total quantity ................................................................................................................................................... E-66  Table E-37. Inspection result for element 300 - Strip Seal for bridge M4 reported as a percentage of the quantity ................................................................................................................................................... E-68  Table E-38. Inspection result for element 301 - Pourable Joint Seal for bridge M4 reported as a percentage of the total quantity ................................................................................................................................. E-68  Table E-39. Statistical results from NBI ratings of bridges in Michigan. .............................................. E-69  Table E-40. Frequency table showing number of inspectors assigned defects for PSC girder for bridge M1 and M2. ................................................................................................................................................... E-70  Table E-41. Frequency table showing number of inspectors assigned defects for an element for bridge M1, M2, M3, and M4. .................................................................................................................................... E-71  Table E-42. Average time reported by TGA and TGB for routine bridge inspection exercises ............. E-73  Table E-43. COV values determined for TGA and TGB for the field exercises. ................................... E-74 

E.1 Field Exercise Overview

This appendix describes the results of the field exercises completed as part of the research. The objectives of the field exercises were to evaluate the effectiveness of the guideline developed through the research, to assess potential changes to the Manual for Bridge Element Inspection (MBEI), and to assess the overall quality of element-level inspections. The guidelines used in the field exercises consist primarily of a visual guide and several spatial estimating tools.

The experimental approach used for the field exercises was a Control Bridge Model (CBM), in which a group of inspectors assesses the same bridges ("control bridges") such that inspection results from different inspectors can be compared directly. In this way, the same bridge elements were inspected under the same test conditions, such as access, location, materials, and construction. Each participant in the study conducted a routine inspection and entered data in a workbook prepared for the study. The workbook documented the relevant elements to be assessed in each bridge and included the total quantities for each element. Each participant was issued a workbook marked with a unique number; the names of the inspectors completing each workbook were not documented.

The results of the inspections were analyzed to assess the variation in results between different inspectors and the effect of using the visual guide. The results were also compared with reference inspections performed by a "control" team that documented the damage in each bridge and the relevant quantities. The primary analysis completed for the inspection results was the evaluation of the variation between different inspectors, the mean inspection results in terms of quantities in different condition states (CSs), and the differences between inspector groups.
Test bridges in Indiana and Michigan were used as field test sites for evaluating the effect of using the newly developed guidelines (i.e., the visual guide) in the field. Several changes to the MBEI that were being considered as part of the larger study were also assessed using the field test sites. This included assessing the effect of changing the unit of measure for steel protective coatings from sq ft to ft, and changing the unit for concrete columns from ea to ft. Portions of the field exercises were completed at the Steel Bridge Research, Inspection, Training, and Engineering Center (S-BRITE) at Purdue University. Bridge inspectors from Indiana and Michigan participated in the study.

To provide data for comparison between use of the newly developed guide and existing inspection procedures, participating inspectors were divided into two groups. Test Group A (TGA) utilized the newly developed visual guide, while Test Group B (TGB) performed inspections according to existing practices. The results from these two groups of inspectors were compared to each other and to the control inspection. These data were used to assess and characterize the variation in inspection results stemming from routine element-level visual inspections. Inspection results based on the traditional National Bridge Inspection Standards (NBIS) were also assessed to complement the element-level assessment.

The first objective of the field exercises was to compare the quality of element-level data collected using the visual guide developed through the research with that collected using traditional approaches for element-level inspection. This included analyzing the accuracy of inspection in terms of spatial estimates and assignment of CSs, and comparing these data between different groups of inspectors, within groups of inspectors, and against a control inspection. A second objective of the field exercises was to capture data on the effect of making changes to the units of measure for certain elements in the MBEI.
The changes that were evaluated through the field exercise consisted of using ft instead of ea for columns and using ft instead of sq ft for steel bridge coatings. The third objective was to evaluate the overall quality of element-level inspection data. The field exercises provided data on the quality (consistency) of element-level data. These data are needed for rational assessment of tolerances (i.e., accuracy requirements) for inspection results used in preservation, maintenance, and repair decision-making.

E.1.1 Field Exercise Plans

This portion of the report provides a brief overview of the execution of the field exercises. Complete details on the test bridges and protocols used during the field exercises are included in Appendix D, which documents the field exercise plans.

There were a total of six bridges included in the field exercises. This included two pairs of twin bridges, i.e., bridges of the same design and construction date but with different element conditions. These bridges provided the opportunity to inspect similar bridge elements with different levels of damage. There were also two bridges where only the bridge deck was assessed. These decks were selected to provide test specimens with significant damage in CS 3 for the important element of bridge decks.

The field exercises were completed in the states of Indiana and Michigan. The population of inspectors included 14 inspectors in Indiana and 10 inspectors in Michigan. The characteristics of the inspectors that participated in the study are documented within this report. The inspectors participated in one day of field testing that generally consisted of the routine inspection of two highway bridges. Inspectors in Indiana also completed certain tasks at the S-BRITE Center.

E.1.2 Training on Use of the Guidelines

The visual guide developed as part of the research was designed in a format familiar to inspectors who have used the MBEI. Therefore, substantial training on the use of the guide was not thought to be necessary. However, some prior training and familiarity with the guide was provided through a webinar for inspectors participating in the field exercises. This webinar was approximately 60 minutes in length and included a presentation and a question-and-answer period. The webinar provided an overview of the objectives and purpose of the study, a review of the visual guide, and an overview of the inspection tasks to be completed in the field.
Inspectors that used the visual guide in the field exercise were provided with a printed color hardcopy of the visual guide as a stand-alone document at the beginning of the field exercise. Electronic copies of the guide were provided prior to the field exercise when practical. Logistical issues with identifying the inspectors that would participate in the field exercise precluded providing the guide prior to the field trials in certain cases.

E.2 Field Exercises in Indiana

This portion of the report describes the field exercise results in Indiana. The field exercise in Indiana included two primary activities. Exercises were completed at the S-BRITE Center at Purdue University that were intended to develop data on the accuracy of visual inspection for estimating quantities of damage. A pre-test questionnaire and vision testing of inspectors were also completed at the S-BRITE Center. Field exercises were then conducted that consisted of the routine inspection of twin highway bridges with steel superstructures and concrete decks.

This portion of the report describes the results from the field exercises in Indiana, beginning with an introduction that describes the preparation activities for the field exercises, which included an assessment of inspector vision and completion of a pre-test questionnaire. These preparation activities were the same in both Indiana and Michigan, and as such are only presented herein and not repeated for the Michigan portion of the report. This section also presents results from the S-BRITE Center tasks and the field exercise results from the two highway bridges.

E.2.1 Introduction and Testing

Upon arriving at the S-BRITE facility, the study participants (inspectors) were introduced to the field inspection exercises and provided a short refresher on use of the guideline. This task reviewed information previously provided in a webinar. The participants' vision was assessed using a standard vision test and a color blindness test.
The purpose of these tests was simply to characterize the population of inspectors

participating in the study relative to the overall population or to other groups of inspectors participating in future (or past) studies. The testing established whether the participants had "typical" vision characteristics, or whether there was some anomaly in their vision that should be noted. For example, individuals would typically be expected to have 20/20 vision with correction. If all of the participants in fact had 20/40 or worse, it would be desirable to know that information. A person with 20/40 vision sees at 20 ft with the same acuity that a person with 20/20 vision has at 40 ft.

The vision test that the participants completed used the conventional Snellen eye chart, a very common vision test familiar to participants. The eye exam used the chart shown in Figure E-1, which is observed from a distance of 20 ft.

Figure E-1. Image of a Snellen eye test chart.

The Snellen eye test was administered to each of the participants. If a participant wore corrective lenses (glasses), then the exam was administered with the corrective lenses. All inspectors were found to have 20/20 vision.

The color vision test completed by participants was the Ishihara test, a color perception test for red-green color deficiencies. A typical test plate from an Ishihara test is shown in Figure E-2; there can be up to 38 different color plates used for assessing color deficiencies. There were 14 test plates used during this study; the 14-plate test is typical for occupational settings. During the test, each plate is observed for 3 seconds and the participant is then asked to identify the number on the plate.

Figure E-2. Typical plate used to assess color blindness.

The color vision test was administered to each of the participants, and no color vision deficiencies were found from the testing.

The participants were divided into two test groups: Test Group A (TGA) and Test Group B (TGB). An effort was undertaken to make the distribution of bridge inspection experience approximately the same between the two groups. TGA was provided the visual guide for use in the testing, while TGB was not provided with the visual guide and used typical procedures during the inspection tasks.

E.2.2 Pretest Questionnaire

The pre-test questionnaire was used to capture information about the inspectors' level of education, bridge inspection training courses attended, and the portion of their job duties dedicated to bridge inspection. The inspectors were also asked to indicate whether they wear eyeglasses during inspection and whether they have any form of color blindness. Finally, there were some questions specific to the methods the individual used for inspecting the deck and superstructure of a bridge.

The results of the pre-test questionnaire indicated that the two test groups had similar experience, educational, and training backgrounds, as shown in Table E-1. TGA consisted of two members with a high school education, three members with a BS, and two members with an MS. Six of the seven members of TGA were qualified as team leaders. TGB consisted of a single member with a high school education and six members with a BS. Six of the seven members of TGB were qualified as team leaders. Both groups had limited experience with element-level inspection of 4 years or less, with TGA having an average experience of 2.5 yrs and TGB an average experience of 3 yrs.

The training experience for both groups was also similar. All members of both groups had attended NHI course 130055, Safety Inspection of In-Service Bridges, and four members of each group had attended NHI 130053, Bridge Inspection Refresher Training. All of TGA and six of the seven members of TGB had attended NHI 130078, Fracture Critical Inspection Techniques for Steel Bridges.
Five members of TGA had attended the FHWA Introduction to Element-Level Bridge Inspection, and four members of TGB had attended the FHWA training. Four members of TGA and three members of TGB indicated they had attended a state-specific refresher training. Overall, these data indicated that the training and experience of the two test groups were similar.

Table E-1. Results from pretest questionnaire for Indiana field exercise.

                                                                  # of Inspectors
Category                   Option                                  TGA    TGB
Education                  H.S.                                     2      1
                           B.S.                                     3      6
                           M.S.                                     2      0
Br. Insp. Experience       < 5 yrs                                  2      2
                           5-15 yrs                                 2      1
                           15+ yrs                                  3      4
Element-Level Insp. Exp.   1-2 yrs                                  3      1
                           3-4 yrs                                  4      6
Training                   NHI 130055, Safety Insp. of Bridges      7      7
                           FHWA Element Training                    5      4
Team Leader                                                         6      6

Several questions on the pre-test questionnaire sought to gain insight into the typical methods used by inspectors to determine quantities of damage in the field. The pre-test questionnaire sought to determine the methods used for estimating areas of damage for elements rated in sq ft (e.g., deck) and those rated in ft (e.g., superstructure members). Participants were provided with the following choices for estimating areas of damage:

(A) Draw each area of damage (spall, crack, etc.) on a diagram with estimated dimensions, and then tally up the total area (tallying).
(B) Estimate the total area of damage as a percentage, and then multiply by the total quantity for that element (percentage estimate).
(C) Measure each area of damage individually with a ruler, and then sum the total area (summing).
(D) Other (explain).

It was found that there were some differences between the two test groups. For estimating sq ft quantities for a bridge deck, TGA had four members that indicated they commonly estimate the area as a percentage and then multiply that percentage by the total element quantity. One member indicated that a diagram was made with estimated dimensions of damage, and those areas were then tallied to determine the total quantity. One member indicated that all three methods (A, B, and C) were used, and one member did not answer the question. In contrast, five members of TGB indicated that tallying was commonly used, and two members indicated that percentages were used. For estimating superstructure elements (ft), four members of TGA indicated that the percentage approach was used and one member indicated that a ruler was used to sum the length (tallying). Two members of TGA did not answer the question. For TGB, four members used the tallying approach, and three members indicated that a percentage was used.
Overall, these data indicated that most members of TGB commonly used the tallying approach, while most members of TGA commonly used the percentage approach, although there was clearly a mix in both groups. The effectiveness of these two approaches was examined during the field exercises, as described later in this report.

E.2.3 S-BRITE Center Exercises

This portion of the report describes the field exercises completed at the S-BRITE Center at Purdue University. The objectives of this portion of the field exercise included assessing the capabilities of inspectors for estimating quantities of damage and analyzing different methods of making quantity estimates. The exercise included a task to evaluate the variability of truss element inspections. Finally, tasks were completed to assess the effect of changing the unit of measure from sq ft to ft.

E.2.3.1 Task S-BRITE 1-5: Spatial Estimating Tasks

This section describes the results of testing conducted to evaluate the capability of an inspector to estimate quantities of damage using units of measure of area (sq ft) and length (ft). Two methods of making the quantity estimate were examined in the test: estimating the quantity based on a percentage of the total quantity, and estimating the quantity by tallying individual areas. The testing consisted of five individual tasks that were designed to compare the performance of inspectors using the visual guide (TGA) to the performance of inspectors not using the visual guide (TGB), as shown in Table E-2. The tasks were also intended to provide fundamental data on the capabilities of inspectors overall. Task S-BRITE 1 consisted of inspectors estimating the area of simulated damage on an 8.5 x 11 in. page. Tasks S-BRITE 2-5 consisted of making estimates of areas of simulated damage on the web of a plate girder.
Tasks S-BRITE 2 and 3 included estimation using units of area (sq ft), first by making an estimate based on the percentage of damaged areas, and second by tallying individual areas to make the estimate. Tasks S-BRITE 4 and 5 consisted of making estimates in length (ft), first by making a percentage

estimate and second by tallying lengths. Background that describes the motivation for these exercises is provided in the following section, as well as the results of the testing.

Table E-2. Summary of S-BRITE Center tasks 1-5.

Task        Activity                                              Test Group   Method   Time   Accuracy   Time
S-BRITE 1   Standard test of area estimation on a page            All          % area   -      x          20 min
S-BRITE 2   Use of visual guide for area estimate                 TGA          % area   x      x          30 min
            Ad hoc assessment of area                             TGB          % area   x      x
S-BRITE 3   Tallying individual areas based on estimates          TGA          sq ft    x      x          60 min
            Tallying individual areas using a measurement device  TGB          sq ft    x      x
S-BRITE 4   Use of visual guide for ft estimate                   TGA          ft       x      x          30 min
            Ad hoc estimate of ft                                 TGB          ft       x      x
S-BRITE 5   Tallying ft based on estimates                        TGA          ft       x      x          60 min
            Tallying ft using a measurement device                TGB          ft       x      x

E.2.3.1.1 Background

The survey conducted as part of the research indicated that agencies are using different approaches to estimating quantities of damage. In some cases, inspectors will estimate individual areas (e.g., sq ft or ft of damage) and subsequently tally the total area of damage. Measurement of each individual area of damage using a tape measure or other device may be part of making the estimate. Other agencies estimate the area of damage as a percentage and then multiply that percentage by the total quantity of the element to determine the damaged quantity. For example, a deck with an area of 1000 sq ft that is estimated to be 5% damaged yields a quantity of 50 sq ft, as compared with tallying each damage area. For many agencies, the method used in the field may not be known, or may vary between different inspectors or inspection teams. For agencies estimating the percentage of area and multiplying by the total quantity of the element, the visual guide includes spatial estimating diagrams intended to improve the accuracy of the estimate. Tallying individual areas is a more time-consuming task, which may or may not include physically measuring areas of damage.
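The two estimating approaches can be sketched as follows. This is an illustrative comparison only: the 1000 sq ft deck with 5% damage follows the example in the text, while the individual spall areas in the tallying call are hypothetical values chosen to sum to the same quantity.

```python
# Two common ways of arriving at a damage quantity for an element
# (illustrative sketch; the individual damage areas below are hypothetical).

def quantity_from_percentage(total_quantity, estimated_pct):
    """Estimate total damage as a percentage of the element,
    then multiply by the element's total quantity."""
    return total_quantity * estimated_pct / 100.0

def quantity_from_tally(damage_areas):
    """Tally individually estimated (or measured) areas of damage."""
    return sum(damage_areas)

# Deck example from the text: a 1000 sq ft deck judged 5% damaged.
print(quantity_from_percentage(1000, 5))        # 50.0 sq ft

# The same quantity reached by tallying hypothetical individual areas.
print(quantity_from_tally([12.5, 20.0, 17.5]))  # 50.0 sq ft
```

The percentage approach trades per-area precision for speed, while the tally accumulates whatever accuracy (or error) each individual estimate carries.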
Tallying individual areas may provide increased accuracy as compared with estimating based on a percentage, but also requires more time to complete the inspection. The objective of this task was to develop data on the effect of these different approaches on the accuracy and efficiency of element-level inspection. The tasks also studied the effect of changing the units of measure for elements such as element 515, Steel Protective Coating. This element is currently rated based on sq ft, but a typical parent element - 107 Steel Open Girder/Beam - is estimated based on linear feet (ft). Corrosion damage and other defects are applied based on linear feet, to match the parent element. The estimate of protective coatings based on area has the advantage of aligning with bridge management needs in terms of estimating the area of repair (e.g., recoating) needed. However, there are also several limitations. First, the area of an open section, which includes the surface area of the web and the top and bottom of the flange, can be difficult to estimate

efficiently. Second, not all surface areas of a member may be observable in the field. For example, the top of the bottom flange may not be available for inspection when inspections are conducted from the ground. Third, the estimate of area may be more time consuming than simply estimating the linear feet.

During this task, a comparison of the different methods for documenting the damage in a steel girder was evaluated. Through this testing, the difference in accuracy resulting from applying different units of measure could be assessed. This task also illustrated the significantly different values, in terms of the percentage of damage estimated, that resulted from using different units of measure.

Simulated damage in the form of white appliques was mounted on the surface of a steel plate girder to provide an idealized model of damage in a bridge member. The plate girders were 77 ft in length with a 6 ft tall web section, providing four available surfaces for testing, with each surface measuring 462 sq ft. Appliques of different sizes and shapes were used to simulate damage quantities of different amounts. Figure E-3 illustrates the test arrangement with inspectors participating in the testing at the S-BRITE Center.

The use of simulated areas of damage in the form of appliques applied to the surface of the plate girder allowed for the assessment of spatial estimating capabilities. The rationale for using these idealized representations of damage on the surface of the steel was to remove any inconsistencies that could be related to undefined boundaries of areas in different CSs, or to different inspectors assigning different CSs to areas of the element. In this way, the test approach isolated the inspector's task of estimating the quantity from the task of deciding the appropriate CS.

Figure E-3. Photograph of plate girder showing inspectors assessing simulated areas of damage.
As shown in the image, irregular shapes were attached to the steel girder, and inspectors were asked to estimate the quantity of the irregular shapes as a percentage of the web area of the plate girder. There were four separate tasks completed using this test arrangement. The areas that were mounted on the surface of the member during each task are shown in Table E-3. These areas were selected to be close to quantities relevant for decision-making: approximately 5%, 10%, 20%, and more than 30%. The specific values are linearly related with a slope of three to support the analysis of results. The slope was selected for convenience in selecting quantities close to typical decision-making thresholds. As shown in the table, the damage areas were varied during the execution of the tasks to ensure that the assessments completed by the inspectors were not biased by repeating the same estimates. Two estimating tasks were completed (S-BRITE 2 and S-BRITE 3), then the appliques were rearranged with different total quantities for the two other tasks (S-BRITE 4 and S-BRITE 5).

Table E-3. Areas of defect used for the spatial estimating task.

Tasks S-BRITE 2 and 3
Side   Areas   Area (sq ft)
1      6%      28
2      12%     56
3      18%     83
4      36%     166

Tasks S-BRITE 4 and 5
Side   Areas   Area (sq ft)
1      21%     97
2      15%     69
3      30%     139
4      9%      42

Task S-BRITE 1: Visual Estimation of Area-Page Test

This task consisted of each inspector making a visual estimate of area percentage based on areas printed on an 8.5 x 11 in. sheet of paper. The objective of this task was to provide fundamental data on the capability of an individual to make a visual estimate of an area on a percentage basis. This measurement also provided data on the fundamental capabilities of each test group, TGA and TGB. An example of a visual standard is provided in Figure E-4A to illustrate the process. In this figure, irregular shapes comprise different portions of the page, with the top showing an area of 5% and the bottom showing an area of 13%. Figure E-4B shows participants in Indiana estimating areas on the page and recording their results in the test booklet provided as part of the study.

Outcome

The results of the page test are shown in Table E-4. The data are presented in the table in the following way. The results from the inspectors in group TGA were analyzed separately from the results for group TGB. The mean value was calculated as the average value from all of the responses in the particular group (TGA or TGB). The combined group of inspectors (TGA + TGB) was also analyzed to provide measures of the overall population of inspectors in the study. This was done because of the relatively small number of samples provided within each test group. The error was calculated by subtracting the actual value from the mean value.
The error was analyzed by determining the average normalized error according to the equation:

Normalized Error = | 1 - x_m / x_a |

where x_a is the actual value and x_m is the average inspector estimate. In this way, the difference between the estimated value (in %) and the actual value is presented as a fraction (%) of the actual value, which adjusts the data considering the magnitude of the actual value. For example, if the actual value was 12%,

and the average (mean) value from both test groups combined was 8%, the error was calculated as -4% and the normalized error as 4/12 = 33%.

Figure E-4. A) Example of a page test for estimating area based on percentage showing 5% (top) and 13% (bottom), B) Participants performing page estimate test.

The sample standard deviation, σ, is also presented. The σ value provides a statistical estimate of the variation in the results; +/- 1σ from the mean represents 68% of the responses, based on an assumed normal distribution. Lower σ values denote less scatter in the results. It should be noted that other statistical distributions, such as log-normal, were analyzed as part of the research. It was found that the normal distribution provided the most suitable match with the test data provided from the field exercises.

Finally, the Coefficient of Variation (COV) is presented. The COV is calculated by dividing σ by the mean value. The COV is useful for understanding the variation expressed by σ as a fraction of the mean value. For example, if the mean value is 3% and σ is 2%, then the COV would be 66%. If the mean were 50% and σ is 2%, then the COV is only 4%. These data are useful for determining the magnitude of variation in the inspection results. The COV was utilized because it was found through the analysis of the data that the σ values commonly increased as the quantity increased; therefore, the COV value provided a means of normalizing the results for comparison.

Also shown in the table are the average normalized error and the average of the individual COVs calculated from each page assessed during the test. This average of the COV values is useful for summarizing the general magnitude of the COVs from the individual test pages. It should be noted that the average of the COV values does not represent the average COV.
The data presented is simply the linear average of the COV values, intended to illustrate the general magnitude of the results found in the study. Data from the other tasks in the field exercises were analyzed in a similar manner.

Analyzing the data in this way indicated that the average normalized error was slightly smaller for TGA than for TGB, indicating a slight increase in accuracy for TGA when using the visual guide. However, this difference was small, and the page test images were very similar to the spatial estimating diagrams provided to inspectors in the visual guide. The variation of the data from TGA was generally larger than for TGB, as shown by the average of the calculated COVs, which was greater for TGA than for TGB. Consequently, it would be difficult to conclude that the visual guide provided a measurable improvement in the quality of results, although there was an increase in accuracy for TGA relative to TGB based on the mean values. These results indicate that the visual guidance provided to TGA did not result in more consistent estimates, as had been expected. These data may indicate that the TGB participants were simply better at estimating areas than the participants in TGA.
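The statistics described above (error of the mean estimate, normalized error, σ, and COV) can be sketched as follows. This is an illustrative implementation, not the study's own code; the function name and the inspector estimates are hypothetical, and the worked example reuses the 12%-actual page discussed earlier.

```python
from statistics import mean, stdev

def error_stats(estimates, actual):
    """Return (error, normalized error, sigma, COV) for a set of estimates, in percent units."""
    m = mean(estimates)
    error = m - actual                       # signed error of the mean estimate
    norm_error = abs(error) / actual * 100   # error as a fraction of the actual value
    sigma = stdev(estimates)                 # sample standard deviation
    cov = sigma / m * 100                    # coefficient of variation

    return error, norm_error, sigma, cov

# Worked example from the text: actual value 12%; hypothetical inspector
# estimates averaging 8% give an error of -4% and a normalized error of 33%.
err, norm, sigma, cov = error_stats([6, 8, 10], actual=12)
```

Note that the COV is computed from the mean of the estimates, so the same σ yields a much larger COV for a small quantity than for a large one, which is why the COV was used to compare scatter across pages of different magnitudes.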

Table E-4. Results of estimating areas on guide sheets.

Plate  Actual sim.          TGA Results (%)                        TGB Results (%)
No.    damage (%)   Mean  Error  Norm.Err   σ    COV      Mean  Error  Norm.Err   σ    COV
1        6            3    -3       55       1    51        8    +2       29       3    45
2       18           16    -2       13       7    47       18     0        2       6    35
3        1            1   -0.07      7       0    20        1     0        0       0    29
4       48           44    -4        8      15    35       46    -2        4       7    15
5       12           10    -2       14       4    36       17    +5       43       7    44
6       36           40    +4       11      13    31       51   +15       43      17    34
Avg.                                18            37                      20            33

Task S-BRITE 2 and 3: Estimating Areas Based on Percentage or Tallying Areas. During S-BRITE Tasks 2 and 3, inspectors made estimates of the damaged area using the idealized model consisting of irregular shapes mounted to the web of a plate girder, as shown in Figure E-3. During Task S-BRITE 2, the inspectors were asked to estimate, in terms of percentage, the area of damage represented by the white appliques applied to the member. TGA was asked to use the area estimating guides provided in the visual guide to assist in making the estimate; TGB provided an estimate of the total area of damage on an ad hoc basis (i.e., without the use of the guide). In Task S-BRITE 3, participants were asked to estimate the area of damage by making a diagram of the damage and tallying the quantity for each area of damage. TGB participants were allowed to use measuring devices in making the estimate, and several different devices were provided, including measuring wheels and tape measures; however, none of the participants used any measuring devices. These tasks were timed to determine how quickly an area estimate was made by each test group.

The results of Tasks S-BRITE 2 and 3 are shown in Figure E-5, showing the area estimates provided by the participants. The results from tallying the individual areas (on a sq ft basis) were converted to a percentage for presentation in the figure. The actual value of the area is also shown in the figure, with a dashed line connecting the points to highlight these data.
This figure illustrates that there was a high degree of scatter in the area estimates. For example, in Figure E-5A, side #4, an inspector from TGA estimated the area of damage to be 70% while an inspector from TGB estimated the area to be 10%. It can also be observed that in Task 3, during which the areas were tallied, the inspectors tended to overestimate the area of damage, as shown in Figure E-5B. Qualitatively, there tended to be less scatter in the results from Task 3 as compared with Task 2. It can also be observed that the difference between the actual value and the inspectors' estimates increases as the actual value becomes larger.
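As noted above, the tallied areas (sq ft) were converted to a percentage of the element for presentation. A minimal sketch of that conversion; the patch areas and web area below are hypothetical values, not data from the exercise:

```python
# Individual tallied damage patches for one girder web side (hypothetical, sq ft):
tallied_areas = [2.5, 4.0, 1.5]
web_area = 100.0   # hypothetical total area of the web face, sq ft

# Convert the summed tally to a percentage of the element for plotting:
percent_damage = sum(tallied_areas) * 100 / web_area   # -> 8% of the element
```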

To characterize the scatter shown in Figure E-5, the results were analyzed statistically to provide the mean (average) results for each of the two inspector groups; the σ and COV are also presented to summarize the test results efficiently and express the variation in the data. In this way, the scatter in the data is characterized by the σ value, and this value is expressed as a percentage of the mean by the COV value, as previously described.

Figure E-5. Results from S-BRITE Task 2 (A) and Task 3 (B) showing area estimates provided by TGA and TGB.

Figure E-6 presents, in graphical form, the results based on the mean values from Figure E-5. In the figure, the mean (average) values for TGA and TGB are plotted along with the actual value. There are several interesting results illustrated in the figure. First, the area estimate based on percentage (shown as unfilled markers) was generally more accurate than tallying individual areas. Second, tallying individual areas generally overestimated the amount of damage. The practice of "squaring off" the irregular shapes representing damage may contribute to this overestimate. The tendency to overestimate the actual area when using the tallying approach could also be observed qualitatively in Figure E-5.

Figure E-6. Results from estimating the area of simulated damage on the webs of two plate girders.

The detailed results from Tasks 2 and 3 are shown in Table E-5. The table shows the mean, σ, and COV for each of the girder sides, for TGA and TGB. It should be noted that the percentages shown in the table are rounded to two significant figures for presentation. The percentage error and normalized percentage error were calculated from the actual values; consequently, the values are sometimes slightly different than would be calculated from the data presented in the tables, due to rounding.

The results showed that TGA, which was able to use the spatial estimating guides in making the percentage area estimates, had greater variation in results than did TGB. For Task 2, the average COV for TGA was 57% and the average COV for TGB was 50%, showing that there was actually slightly less variation in the test group that did not use the spatial estimating guides. In terms of accuracy, the normalized error was lower for TGB (11%) as compared with TGA (25%). For Task 3, in which estimates were made by tallying the damage areas, there was also less variation in TGB as compared with TGA: the average COV for TGB was only 28%, as compared with 36% for TGA. Significantly, the normalized error was much less for TGB (26%) as compared with TGA (56%) when tallying individual areas. These data may indicate that the inspectors in TGB were simply more accurate in their assessments generally than those in TGA.

These results indicate that the variation in the estimates when using a percentage was larger as compared with tallying individual areas. Combining all the inspectors into a single population, the average COV was 54% when estimating the areas as a percentage, but only 35% when tallying individual areas. The normalized error values for Task 2 were smaller, on average, than the normalized error in Task 3.
It can also be seen that the individual error values were typically negative for Task 2, meaning the area was underestimated, and always positive for Task 3, meaning that the areas were overestimated. It can also be observed that the average of the magnitude of the COV values was smaller for Task 3 than for Task 2, as mentioned. These data can be interpreted in the following way: in S-BRITE Tasks 2 and 3, estimating the area by percentage was more accurate than estimating the area by tally, when examining the mean values. However, there was less variation between individual inspectors when tallying areas as compared with estimating percentage. In other words, there was more consistency between inspectors when tallying areas than when estimating as a percentage.

Table E-5. Results from S-BRITE Tasks 2 and 3 showing damage estimates and analysis results.

Web     Actual sim.          TGA Results (%)                         TGB Results (%)
Side    damage (%)   Mean   Error  Norm.Err   σ     COV      Mean   Error  Norm.Err   σ     COV

S-BRITE Task 2 - Estimating areas by percentage
1         6.0         3.7   -2.2      38      2.5    67       5.4   -0.57     10      1.5    28
2        12           7.6   -4.4      37      4.2    56      10     -1.4      12      6.8    65
3        18          13     -4.4      25      8.0    59      19      1.5       9     11      60
4        36          36      0.43      1.2   17      47      31     -4.8      13     14      46
Avg.                                  25             57                       11             50

S-BRITE Task 3 - Estimating areas by tally
1         6.0        10      4.7      70      4.3    42       7.7    1.7      28      1.2    16
2        12          19      7.4      62      7.2    37      16      4.2      35      5.1    32
3        18          28     10        59     10      37      23      5.6      31      3.8    16
4        36          47     12        33     13      28      39      3.1       9     18      46
Avg.                                  56             36                       26             28

Task S-BRITE 4 and 5: Estimating Damage Using Linear Feet. The objective of these tasks was to provide data on the efficiency, accuracy, and quality of using ft as the unit of measure for an element to estimate damage, as compared to using sq ft. Prior to the execution of this task, the damage areas on the surface of the plate girder were adjusted as indicated in Table E-3. During Task 4, the inspectors made estimates of the length (ft) of damage based on an overall percentage, and in Task 5 the inspectors were asked to tally the individual lengths (ft) and provide an estimate. Measuring tools, including a tape measure and measuring wheels, were available but were not used by any of the inspectors.

The results from Tasks 4 and 5 are shown in Figure E-7, showing length estimates provided by the participants in TGA and TGB. These data show the length estimates for both tasks in terms of percentage. It can be observed qualitatively that there is less scatter in these data as compared with the data shown in Figure E-5, in which estimates were made in sq ft. It can also be observed qualitatively that there is much less scatter in the results for Task 5 (Figure E-7B), during which tallying was used to make the estimate, as compared with Task 4 (Figure E-7A), during which estimates were made by percentage.

Figure E-7. Results of S-BRITE Tasks 4 (A) and 5 (B) for length estimates from TGA and TGB.

The results of these two tasks are summarized in Figure E-8, which shows the mean estimates of the damage quantity based on length (ft), expressed in percent length for illustration. The actual value for damage based on percent length is shown on the figure, highlighted with a dashed line. The actual area

based on sq ft and presented as a percentage is also shown for illustrative purposes. Both the percentage estimation and tallying methods demonstrated a large overestimate as compared with the actual percentage based on sq ft, which is not surprising given the test arrangement; it simply illustrates that the unit of length (ft) did not represent the area of damage (sq ft). Similar to the results from Tasks 2 and 3, tallying of lengths resulted in a larger estimate than simply making a percentage estimate. The results also showed that tallying individual lengths was more consistent with the actual value in terms of linear ft.

The numerical results from Tasks 4 and 5 are shown in Table E-6. This table shows the average lengths (ft) estimated by TGA and TGB, by estimating the percentage of length (Task 4) and by tallying lengths (Task 5). The table also presents the σ and COV values for TGA and TGB. The results showed that TGA had a larger normalized error as compared to TGB when estimating the damage by percentage, but a smaller normalized error when tallying lengths. The results also showed that the COV values were smaller when tallying was used to estimate the length, consistent with Tasks 2 and 3. Combining all the inspectors into a single population, the average COV was 34% when estimating the length as a percentage, but only 16% when tallying individual lengths.

The data indicate that there was less variation in the inspector results when the unit of ft was used as compared with the unit of sq ft used for Tasks 2 and 3, in which the COV values for the combined group were 54% and 35% for percentage and tally estimates, respectively. This may not be surprising, since estimating the length requires an estimate in only one dimension, as compared with estimating a two-dimensional area. Consequently, there was generally less variation in the results.

Figure E-8. Results of S-BRITE Tasks 4 and 5 estimating simulated damage using units of ft.

Table E-6. Results of S-BRITE Tasks 4 and 5 estimating damage by length (ft).

Web      Damage            TGA Results (%)                         TGB Results (%)
Side     length (%)  Mean   Error  Norm.Err   σ     COV      Mean   Error  Norm.Err   σ     COV

S-BRITE Task 4 - Estimating ft by percentage
Side-1     29         16    -12       43      5.6    34       24    -4.3      29     11      46
Side-2     55         29    -25       46     10      36       49    -5.1      28     18      37
Side-3     78         62    -16       20     11      18       80     2.5       8.5   18      23
Side-4     78         83      4.9      6.3    9.5    11       75    -2.5       1.6   18      24
Avg.                                  29             25                        7.7           32

S-BRITE Task 5 - Estimating ft by tally
Side-1     29         32      3.5     12      6.0    19       32     3.6      13      4.6    14
Side-2     55         55      0.4      0.7    8.5    15       57     2.5       4.6    9.3    16
Side-3     78         84      6.7      8.6    4.9     5.8     82     4.3       5.5    9.4    11
Side-4     78         82      4.3      5.5    3.7     4.5     91    13        17     28      31
Avg.                                   6.8           11                       10             18

Error Analysis

To analyze the error in the inspector estimates, the data from the two test groups were combined to form a larger data set and provide a more generalized result, as shown in Table E-7. For Tasks 2 and 3, the analysis indicated that estimating areas by percentage resulted in a smaller normalized error of 16%, as compared with tallying individual areas, which resulted in a greater normalized error of 41%. For Tasks 4 and 5, the analysis indicated that the normalized error when estimating the length as a percentage was 17%, but only 8% when lengths were tallied. These data indicate that the area estimates and length estimates had similar accuracy when the quantity was estimated as a percentage, but when tallying was used, the length estimate was significantly more accurate than the area estimate.

The time required to complete the different methods of estimating quantities for Tasks 2 through 5 was documented during the testing and is summarized in Figure E-9, which shows a bar graph of the time consumed by TGA and TGB for each task. As shown in the figure, the time required to tally individual areas (Task 3) was significantly greater than that required to make a percentage estimate (Task 2).
It was also notable that tallying lengths to estimate the damage (Task 5) did not take significantly longer than estimating length by percentage (Task 4).
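The error analysis behind Table E-7 pools TGA and TGB into a single population before computing the mean, error, and normalized error for each girder side. A sketch of that calculation, using hypothetical estimates rather than the study's data:

```python
from statistics import mean

def combined_error(tga_estimates, tgb_estimates, actual):
    """Pool both groups' estimates, then compute mean, error, and normalized error (%)."""
    pooled = tga_estimates + tgb_estimates   # single combined population
    m = mean(pooled)
    error = m - actual                       # signed error of the combined mean
    norm_error = abs(error) / actual * 100   # error normalized by the actual value

    return m, error, norm_error

# Hypothetical estimates for one girder side with an actual value of 6%:
m, err, norm = combined_error([5, 7], [3, 5], actual=6)
```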

Table E-7. Error analysis of the combined results from TGA and TGB for Tasks 2-5.

Web       Actual sim.              TGA and TGB Combined Results (%)
Side      damage (%)     Mean    Error   Norm.Err      Mean    Error   Norm.Err

                                 Task 2                        Task 3
Side-1      6             4.6    -1.4      24            8.9     2.9     49
Side-2     12             9.1    -2.9      24           18       5.8     48
Side-3     18            17      -1.4       7.9         26       8.2     45
Side-4     36            34      -2.2       6.2         43       7.4     21
Avg.                                       16                            41

                                 Task 4                        Task 5
Side-1     29            20      -8.2      29           32       3.6     13
Side-2     55            39     -15        28           56       1.4      2.6
Side-3     78            71      -6.6       8.5         83       5.5      7.0
Side-4     78            79       1.2       1.6         87       8.6     11
Avg.                                       17                             8

Figure E-9. Summary of task times for S-BRITE Tasks 2 through 5.

E.2.4 Task S-BRITE 6: Truss and Gusset Plate Elements

This task consisted of inspection of the decommissioned steel truss erected at the S-BRITE center, shown in Figure E-10. As shown in the figure, the single-span through-truss has corrosion damage and protective coating damage throughout. The truss (Element 120-Steel Truss) was assessed based on the linear feet of truss panel measured longitudinally along the travel way (182 ft); the steel protective coating (Element 515) was assessed based on square feet (3440 sq ft); and the gusset plates (Element 162) were assessed based on units of each (72 ea). Therefore, this task included each of the element measurement units included in the MBEI (i.e., sq ft, ft, and ea). The protective coating was also assessed based on units of ft to provide a comparison of results with the conventional approach of using sq ft. The test bridge had unobstructed access to the truss members, as shown in the figure. Examples of the corrosion damage in the truss element are shown in Figure E-11, which shows a gusset plate with severe section loss resulting in holes (Figure E-11A) and other truss members (Figure E-11B, C, and D).

During this task, TGA used the visual guide, including images of the corrosion damage element and linear estimating guides, and assessed the steel protective coating in terms of linear feet of truss. TGB did not use the visual guide and used the conventional units of sq ft for the steel protective coating. The time required to complete the inspection was recorded.

Figure E-10. Inspectors conducting an assessment of the truss during the field exercise.

Figure E-11. Examples of corrosion damage on the truss bridge showing gusset plate (A), side view of bottom chord (B), upper chord and diagonals (C), and underside of bottom chord (D).

The data from the inspection were analyzed to evaluate the accuracy and variation in the inspection results, which was one of the primary objectives of the field exercises. This section provides an overview of how data from the element-level routine inspection tasks were analyzed and describes the results typically provided from the routine inspection tasks. The inspection results are presented as mean (average) quantities in the relevant CSs, the corresponding σ value, and COV values, as previously described. To obtain these values, the data were analyzed in the following way.

The results from the inspection are reported on a percentage basis to normalize the results, regardless of the original unit of measure (ft, sq ft, or ea). The table includes the result provided by the control team and the mean values calculated from the results reported by TGA and TGB. The CS quantities reported by the inspectors that were assigned to different defect elements were combined to provide a single quantity for a given CS. For example, if an inspector assigned 10 ft to CS 2 for the defect of cracking and 10 ft to CS 2 for the defect of corrosion damage, the data were analyzed as 20 ft in CS 2. Analysis of the variation in defect element assignment by inspectors was completed separately, as appropriate. This was necessary because of the diversity of defect elements assigned in most cases, which resulted in too few quantities assigned to any particular defect element to calculate meaningful averages separately by defect. The mean values were calculated as the average value among inspectors that reported a quantity in a particular CS.
If two or fewer inspectors reported a quantity in a particular CS, the mean value was disregarded, unless otherwise noted in the text. This occurred infrequently among the primary elements included in this report.
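The aggregation rules described above (combining quantities assigned to different defects within a CS, then averaging only over inspectors who reported a quantity in that CS, and disregarding any CS reported by two or fewer inspectors) can be sketched as follows. This is our own illustrative implementation, not the study's code; the function name and example reports are hypothetical.

```python
from statistics import mean

def cs_means(inspections, min_reports=3):
    """inspections: one dict per inspector mapping (defect, cs) -> quantity (e.g., ft)."""
    per_inspector = []                                  # list of {cs: combined quantity}
    for report in inspections:
        combined = {}
        for (defect, cs), qty in report.items():
            combined[cs] = combined.get(cs, 0) + qty    # merge defects within each CS
        per_inspector.append(combined)

    means = {}
    for cs in {c for rep in per_inspector for c in rep}:
        values = [rep[cs] for rep in per_inspector if cs in rep]
        if len(values) >= min_reports:                  # disregard CSs with <= 2 reports
            means[cs] = mean(values)                    # average over reporting inspectors only
    return means

# Example: three inspectors; cracking and corrosion quantities merge within CS 2.
reports = [
    {("cracking", 2): 10, ("corrosion", 2): 10},   # -> 20 ft in CS 2
    {("corrosion", 2): 30},
    {("corrosion", 2): 10, ("corrosion", 3): 5},
]
print(cs_means(reports))   # CS 2 mean = 20; CS 3 dropped (only one report)
```

Because each CS mean is taken only over the inspectors who reported that CS, the means for all CSs can sum to more than 100%, as noted in the text.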

It was more common for CS 4, where only one or two inspectors may have assigned CS 4. Consequently, mean values for CS 4 are not typically included in the tables; these data are addressed in the text. The mean value was calculated by summing the quantities reported by the inspectors and dividing by the number of inspectors that had assigned quantities in that CS. For example, if only five of the seven inspectors assigned a quantity in CS 3, these quantities were summed and divided by five. Also, because the mean values of inspectors assigning a quantity in a particular CS are presented, the sum of the mean values for all CSs may exceed 100%. Data were also analyzed for the "combined" group to provide an overall measure of variation that included the entire population of inspectors participating in the study.

Results are typically provided for CS 2 and CS 3. Additionally, the reported quantities in CS 2 and CS 3 were summed to provide a measure of the total damage reported by the inspectors (CS 2 + CS 3), given the large variation found in the reported results. It should be noted that the number of samples, as compared with the variability of the data overall, was limited for statistical analysis. Error analyses such as those presented for Tasks 1-5 are not included, because the actual values are not known with certainty, and the variation in the results was found to be too large to provide meaningful analysis. The control inspection value was used to provide qualitative comparisons in some cases.

E.2.4.1.1 Element 120-Steel Truss

Table E-8 shows the results from the inspection of Element 120-Steel Truss. As shown in the table, both groups assigned more quantity of the element to CS 2 and less to CS 3 as compared to the control inspection. The COV values indicate the very high degree of scatter in the inspection results; the smallest COV value found when examining CS 2 and CS 3 separately was 56%.
An important feature revealed in these data is the agreement between both inspection groups and the control inspection that the total length of damage was 100%, or close to 100%, of the truss. The average for combined CS 2 and CS 3 was 91% for TGA and 100% for TGB, while the control inspection indicated 100%. The data were also analyzed by combining all inspectors into a single group, which resulted in an average value of 95% with a COV of 15%, as shown in the table. These data indicate that there was general agreement on the total amount of damage in the truss, although the amounts rated in CS 2 and CS 3 by the combined group had relatively high variation, as revealed by COV values of 61% and 82% for CS 2 and CS 3, respectively.

Table E-8. Inspection results for steel truss corrosion (Defect 1000) reported as percentage of the total quantity.

            Control            TGA Results (%)          TGB Results (%)         Combined Group (%)
CS          Result (%)    Mean    σ     COV        Mean    σ     COV        Mean    σ     COV
CS 2          11           52    37      72         58    32      56         55    34      61
CS 3          89           39    38      97         50    37      75         44    37      82
CS 2 + 3     100           91    20      22        100     1.0     1.0       95    15      15

The scatter in the results for CS 2 and CS 3 is illustrated in Figure E-12, in which the results from individual inspectors are plotted for CS 2, CS 3, and the combined CS 2 + CS 3. The values in the figure are quantities in ft recorded by the inspectors, in order to illustrate the reported values in practical terms. In this figure, it can be observed that the variation in the quantity assigned to CS 2 and CS 3 is very large, as indicated by the large COV values shown in Table E-8, while the total amount of damage (CS 2 + 3) has much less variation. It can also be noted that there is a single inspector whose result for CS 2 + CS 3 is much different from the others; neglecting that outlier, there is agreement regarding the amount of damage in the truss.

Neglecting this outlier, the COV for the total amount of damage for the combined group would be reduced from the 15% shown in Table E-8 to only 2%. These data illustrate that the quantities of damage assigned to CS 2 and CS 3 varied significantly in both TGA and TGB, but there was agreement in the total amount of damage in the truss (CS 2 + CS 3).

Figure E-12. Inspector results for Defect 1000, corrosion damage in a truss.

E.2.4.1.2 Element 515-Steel Protective Coating

The protective coating on the truss bridge was evaluated using the unit of linear ft of truss panel by TGA and the unit of sq ft by TGB. The control inspection indicated that, measured by length (ft), 100% of the length of the truss panels had some severe damage to the protective coating, and consequently 100% of the element was rated in CS 4. Three of the seven inspectors in TGA reported that 100% of the length was in CS 4, and five of the seven rated 100% in CS 3 or CS 4. These data indicated that there was general agreement between the control inspection and the TGA assessment that the majority of the coating was in poor or severe condition. In terms of variation in the results, the COV value for the combined CS 3 + CS 4 was only 25%, and the mean value was 88% of the length of the truss, as shown in Table E-9. For total damage (CS 2, 3, and 4), the mean value was 98% with a COV of only 4.2%.

Table E-9. Inspection results for steel protective coating assessed in units of ft.

              Control           TGA Results (%)
CS            Result (%)   Mean    σ      COV
CS 2             0          -      -       -
CS 3             0          46    40      48
CS 4           100          61    75      67
CS 3 + CS 4    100          88    40      25
CS 2-4         100          98     4.2     4.2

TGB rated the protective coating system in the conventional manner of sq ft. Six of the seven inspectors in TGB provided an assessment of the protective coating, and each of these inspectors assigned quantities to CS 2, 3, and 4; one inspector did not provide data for the protective coating, for unknown reasons. The inspection results using sq ft from TGB are shown in Table E-10. In this table, the results for CS 2, 3, and 4 are shown, as well as the combined results for CS 3 + CS 4 and the total amount of damage (CS 2, 3, and 4 combined). From these results it can be observed that there is relatively large scatter in the results regardless of how the data are divided. The COV values were typically on the order of 50% for each CS, and the smallest COV is found when combining all CSs to show the total amount of damage recorded (CS 2, 3, and 4). In this case, the COV was 33%, showing that the total amount of damage rated by the inspectors was on average 72% of the total area of coating, with a σ of 24%.

Table E-10. Inspection results for effectiveness of steel protective coating (sq ft) reported as percentage of the total quantity.

              Control           TGB Results (%)
CS            Result (%)   Mean    σ      COV
CS 2            25          41    26      64
CS 3            50          18     7      43
CS 4            25          13     9      65
CS 3 + CS 4     75          31    13      43
CS 2-4         100          72    24      33

Comparing the results from TGA, assigning CSs by units of ft, with those from TGB, assigning CSs by units of sq ft, it can be observed that the COV values for CS 3 and CS 4 were very similar. The amount of damage in each CS and the total amount of damage increased when using units of ft as compared with sq ft. For example, for CS 4, the mean value for TGA was 61% but only 13% for TGB. This is not surprising, given the different units of measure used by each group. The total amount of damage (CS 2-4) was much larger for TGA (98%) as compared with TGB (72%), and the COV value was much smaller: only 4% for TGA, as compared with 33% for TGB.
These data indicate that there was more consistency in assessing the total amount of damage when units of ft were used, although the COV values for CS 3 and CS 4 were very similar. However, the quantity assigned to each CS is greater when units of ft are used.

E.2.4.1.3 Element 162-Gusset Plate

The gusset plates of the truss were assessed by inspectors in both groups using units of ea. Inspectors were asked to rate the 72 gusset plates in the two trusses individually; this was intended to provide additional data for analysis, as compared with rating the pair of gusset plates at each connection as one value. The truss bridge has a number of gusset plates with severe corrosion damage resulting in section loss. Table E-11 shows the results from the inspection for TGA and TGB, and for the control inspection.

Important results from this task include the variation in the rating of gusset plates in CS 4, which indicates a condition requiring structural review. The control inspection rated 14 gusset plates in CS 4. Generally, inspectors in TGA and TGB rated between 2 and 5 gusset plates in CS 4; one member of TGA indicated that 22 gusset plates were in CS 4. In total, fourteen inspectors provided a rating for this element, and five (about one-third) of these inspectors did not indicate any gusset plate in CS 4. The variation in the inspection results for CS 4 is reflected in the COV value of 130% for the combined group. This result is important in illustrating that there was variation in the reporting of the need for structural review of damaged gusset plates. The data were also analyzed in terms of the assignment of CS 3 (poor) or CS 4 (severe) to the gusset plates, to assess the reporting of advanced deterioration. The assignment of CS 4 to indicate the need for structural

review is a subjective assessment, such that one inspector might consider the level of damage to require review while another might assign CS 3. These data indicated that for the combined group, the mean value was 39% with a COV of 56%. In practical terms, these data indicate that the mean number of gusset plates assigned either CS 3 or CS 4 was 28 plates (0.39 x 72), and typical values would range +/- 16 plates (0.22 x 72) (i.e., +/- 1σ). Similar values were found when TGA and TGB were treated as separate groups. The range of values for the number of gusset plates in CS 3 + CS 4 was from 4 to 48 plates.

The combined results for CS 2-CS 4 are presented in the table to provide a measure of the damage reported by the inspectors. It is notable from these data that there was some agreement that approximately three-quarters of the gusset plates were damaged (in CS 2, 3, or 4). It is also notable that there was some agreement between the mean estimates from the two groups in terms of CS 3, with means of approximately 35%.

Table E-11. Inspection results for gusset plates reported as percentage of the total quantity.

          CI              TGA Results (%)          TGB Results (%)          Combined (%)
CS        Result (%)  Mean    σ     COV       Mean    σ     COV       Mean    σ     COV
CS 2        35         32    17      54        49    12      24        41    21      52
CS 3        46         36    23      64        34    22      67        33    32      92
CS 4        19          8.6  12     143         4.9   2.4    49         6.7   9.0   130
CS 3+4      65         42    24      57        36    21      59        39    22      56
CS 2-4     100         74    31      43        83    15      17        78    17      31

The overall results from this task indicate a high degree of variation in assigning CSs to gusset plates, in particular for the assignment of CS 4.

The overall time used to complete Task 6 was measured to determine whether using different units of measure for the steel coating assessment would result in a significantly different amount of time required to complete the inspection. The mean time for TGA was 32 minutes; the mean time for TGB was 26.3 minutes.
These data indicated that it did not take significantly more time to rate the coating system in units of ft; in fact, the group (TGB) that rated the coating in sq ft took slightly less time on average than TGA, which rated the coating in ft. This may be explained by the fact that TGA was using a new visual guide with which the inspectors were unfamiliar.

E.2.5 Bridge I1 & I2 Inspection Exercise Tasks

This section of the report describes the individual tasks that were completed on bridges I1 and I2. A focus of the field exercise was the assessment of steel members for corrosion damage. The bridge deck and wearing surface, movable bearings, and seals were also recorded and analyzed. In addition, the use of ft for documenting the CS of the coating was compared with the conventional use of sq ft. Specific elements of the bridges are divided into separate tasks for reporting purposes. The participants conducted the inspection as a routine inspection.

Two twin bridges with steel superstructures were chosen for the field exercises in Indiana (ID # I65-179-05487 BNBL and BSBL, northbound and southbound). Both bridges are steel stringer bridges constructed in 1968 with damaged coatings and corrosion damage in the primary members. The rationale for selecting these bridges included the bridges' common design and age, common elements, and the element of protective coating (515), which was to be evaluated in terms of different measurement units (ft instead of sq ft). The bridges have nearly identical design and age, but have different levels of damage in the superstructure and deck. Consequently, the bridges provided good samples for collecting data on the

variation in inspection results, since the design characteristics and situational factors (e.g., access) are identical and therefore not a factor in any variation between the inspection results for the two bridges. For ease of reference in the experiment and data analysis, bridge ID # I65-179-05487 BNBL was identified as bridge I1, and bridge ID # I65-179-05487 BSBL as bridge I2. The bridges are 3-span continuous steel girder bridges with cast-in-place concrete decks over Burnett Creek. Figure E-13 shows photographs of the bridges. Both bridges have good access in the areas of the abutments. Full details and previous inspection results for these bridges were previously reported and are included herein for reference. Each bridge consisted of 816 ft of Element 107-Steel Open Girder/Beam and 6817 sq ft of coating.

Figure E-13. Photographs of test bridges I1 (ID # I65-179-05487 BNBL) and I2 (ID # I65-179-05487 BSBL).

The inspection of bridges I1 and I2 by both inspection groups occurred simultaneously on July 12, 2017, during the hours of 12:00 to 3:00 pm. The time allowed for the primary inspection was three hours, which was shorter than originally planned. The time available for inspecting the bridges was affected by local traffic control concerns that required work on the bridges to be stopped by 3:00 pm. Weather during the inspections was hot and very humid; thunderstorms occurred shortly after the completion of the inspection tasks.

E.2.5.1 Task I.1 Assessment of Open Steel Section Beam/Girders

Task I.1 included inspection of Element 107-Steel Open Girder/Beam. During this task, participants assigned CSs to the steel members, with anticipated defects of corrosion and coating damage. Both bridges have section loss in the area of the bearings, but in different amounts.
Based on previous inspection results, bridge I1 has about 3% of the steel member assigned a defect (CS 3), while bridge I2 has about 15% damage, mostly assigned in CS 2. This task evaluated the different teams' estimates of the quantity of corrosion damage and the variation in the assigned CS. The steel protective coating system was also assessed. Previous inspections have indicated that bridge I1 has 99% of its coating in CS 1, while bridge I2 has 99% of its coating in CS 2. Both bridges have some coating in CS 4, though in different amounts. It was anticipated that using the visual images available in the guidelines would yield a different distribution of the CSs for the coating damage element.

Figure E-14 illustrates the general conditions of test bridges I1 and I2. The figure shows an elevation view of the bridge where corrosion damage on the fascia beams can be observed (A); an expanded view of a portion of each fascia girder is shown in (B). The general conditions of the inner beams for both I1 and I2 are illustrated in Figure E-14C.

Figure E-14. Photographs of bridge I1 (left) and I2 (right) showing general conditions of the beam elements with corrosion and coating damage.

As part of this task, all inspectors were asked to provide a second rating of the protective coating, using linear feet instead of the area (sq ft). If the unit for protective coatings were changed from the current practice of using area to using linear feet, these data points would provide insight regarding the distribution of the results. Due to time constraints during the field testing, the linear ft estimate was only completed for one of the test bridges. The following section provides the results from these tasks.

E.2.5.1.1 Element 107-Steel Open Girder/Beam

The task included an assessment of element 107-Open Steel Girder, primarily focused on corrosion damage (defect element 1000). The test bridges were known to have section loss in the areas of the bearings. Typical section loss near the bearing is illustrated in Figure E-15, which shows a fascia girder with flaking corrosion product in Figure E-15A and the inside of a different fascia girder with section loss of the flange, as noted by a previous inspector's note, in Figure E-15B. It was expected that all of the inspectors would identify at least some of the area as CS 3 for section loss. The results are shown in Table E-12, which indicates the results for the control inspection, for each test group, and for the combined results considering all inspectors that participated in the testing. The results are reported in percentages in order

to normalize the results and assist in the analysis; the original test data were, of course, in the appropriate units for the given element. For example, corrosion damage in element 107 was reported in ft. These data have been converted to percentages throughout, as previously noted. It appears from the data that the control inspection assigned a larger portion of bridge I1 to CS 3 than either TGA or TGB. In general terms, bridge I2 had more damage assigned than bridge I1, based on the generally larger values assigned to CS 2 and CS 3 by the inspectors in each group. It was notable that each bridge had areas that were in poor condition (CS 3). For TGA, four of seven inspectors assigned CS 3 for bridge I1, and the same four of seven inspectors assigned CS 3 for bridge I2. For TGB, six of seven inspectors assigned some quantity to CS 3 for both bridges. One inspector in TGB did not assign CS 3 for either bridge. One inspector from TGB assigned 4 ft to CS 4 for both I1 and I2. In other words, one inspector assessed that the girder element required structural review, while another inspector assessed that girder to be in fair condition throughout. These data illustrate the variation between individual inspectors. The results were further analyzed by combining the quantities reported in CS 2 and CS 3, to provide data on the consistency of reporting some damage as compared with CS 1 (no damage). In this way, consistency in the overall reporting of damage could be analyzed. These results indicated that there was still inconsistency between the two groups, with TGA assigning 19% and 43% to bridges I1 and I2, respectively. TGB assigned values of 45% and 61% for bridges I1 and I2, respectively. It was also notable that the typical standard deviations, which represent the variation in the data assuming a normal distribution, were close to the averaged values reported, indicating the high variation in the reported results.
This variation is reflected in the COV value. For example, for bridge I1, TGA rated ~8% of the length in CS 3, with a standard deviation of ~7%. The coefficient of variation is ~7/8, or ~88%. For some cases studied, the COV was greater than 1, indicating a large variation in the inspection results among different inspectors. Combining the results from all of the inspectors increased the COV values in some cases. For the combined group of inspectors, results for bridge I1 indicated a mean value of 32% with a COV of 94% for total damage reported (CS 2 + CS 3). For bridge I2, the combined results had a mean value of 53% and a COV of 74%. These values are indicative of the high variation found in the inspection results for this element.

Figure E-15. Photographs of typical areas of section loss observed on bridges I1 and I2.
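The percentage normalization and COV computations described above can be sketched as follows; the helper names are hypothetical, and the worked quantities are taken from figures reported in this section (e.g., 326 ln ft of CS 3 on the 816-ft girder element is ~40%):

```python
import statistics

def to_percent(quantity, total):
    """Normalize a reported quantity to a percentage of the total element quantity."""
    return 100 * quantity / total

def cov_percent(values):
    """Coefficient of variation: sample standard deviation over the mean, as a percent."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample (n-1) standard deviation
    return 100 * sigma / mean

# 326 ln ft of CS 3 on an 816-ft girder element -> ~40% of the element
share = to_percent(326, 816)

# a COV near or above 100% signals very high inspector-to-inspector variation
spread = cov_percent([4.0, 8.0, 12.0])  # mean 8, sigma 4 -> COV 50%
```

Note that a COV is only meaningful when the mean is well away from zero; for the small CS 4 quantities reported here, a large COV can result from a few small absolute differences between inspectors.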

Table E-12. Inspection results for Element 107-Open steel girder.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 20  | 17, 27, 161  | 25, 75, 101 | 21, 26, 120
CS3     | 63  | 7.7, 6.7, 87 | 21, 39, 75  | 14, 14, 98
CS2+CS3 | 83  | 19, 25, 130  | 43, 96, 74  | 32, 30, 94

Bridge I2
CS2     | 80  | 32, 30, 91  | 87, 8.7, 10 | 60, 36, 60
CS3     | 20  | 15, 17, 111 | 13, 17, 132 | 14, 17, 115
CS2+CS3 | 100 | 45, 38, 84  | 61, 42, 69  | 53, 40, 74

Figure E-16 shows the individual inspection results for the steel girder element, presented in the original units of ft. These data illustrate the variation in inspection results from different inspectors. The overall range of values of CS 3 for bridge I1 was between 5 ln ft and 326 ln ft (<1% to 40%), while for bridge I2 the range was from 4 to 400 ln ft (<1% to 49%). The smallest value was provided by the same inspector for each bridge. The highest value was also assigned by the same inspector for each bridge. These results indicate how individual inspectors can have very different assessments of the quantities in a particular CS. Removing the inspector that provided the low values, and the inspector providing the high values, the range was between 27 ln ft and 258 ln ft (3% and 32%) for bridge I1 and between 24 ft and 280 ft (3% and 34%) for bridge I2. These data indicate that there was still high variation in the assignment of CS 3, even with outliers removed. Importantly, the majority of inspectors indicated the presence of poor condition (CS 3).

E.2.5.1.2 Element 515-Steel Protective Coating

Table E-13 presents the results for element 515-Steel Protective Coating for the steel girders. Element 515 has four defined CSs for the coating system, and as such, data were analyzed according to assignment of CS 2, CS 3, and CS 4, as well as the combined values for CS 3 and CS 4. The results from the control inspection, TGA, and TGB are shown in the table, as well as for the combined group.
Notable in the results is that the amount of coating area in poor or severe condition (CS 3 + CS 4) was reasonably consistent between TGA and TGB for both bridges; for bridge I1, TGA's average estimate was ~4% and TGB's average estimate was ~5%; for I2, TGA's average estimate was ~3% and TGB's estimate was ~7%. Although there are differences in the exact values between the two groups, both groups reported that only a small percentage of the coating (less than 10%) was in poor or severe condition. The mean values for both TGA and TGB were smaller than those reported by the control. Additional analysis of these data indicated that 3/14 participants (21%) did not rate the protective coating element. The reason for this omission was not known. Further, although both bridges included some area in CS 4, only 9/25 (36%) of inspectors reported any area in CS 4. In other words, the existence of any area in CS 4 was identified about 1 out of 3 times.

These data indicate that there is variation in the interpretation of CS 4 and that not all participants rated the coating element. It should be noted that the inspection sheet provided to the inspectors included a preprinted table for entering the values assigned for the coating (element 515) rating. In this way, the inspectors were directed by the inspection form to provide an assessment for coatings, but three inspectors chose not to complete this part of the form.

Table E-13. Bridge I1 and I2 inspection result for element 515-Steel Protective Coating reported as percentage of the total quantity (sq ft).

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 25  | 47, 51, 107   | 60, 42, 71    | 53, 44, 85
CS3     | 15  | 4.2, 4.4, 105 | 10, 14, 141   | 7, 10, 142
CS4     | 10  | 2.6, 2.6, 102 | 3.5, 0.83, 24 | 3.0, 1.7, 55
CS3+CS4 | 25  | 4.4, 6.0, 137 | 5.3, 4.1, 79  | 7.5, 9.7, 129

Bridge I2
CS2     | 85  | 62, 47, 76    | 35, 36, 1.0   | 47, 42, 88
CS3     | 10  | 2.5, 3.2, 126 | 7, 3.9, 59    | 4.7, 4.0, 85
CS4     | 5.0 | 3.5, 0.83, 24 | 1.3, 1.7, 132 | 2.2, 1.8, 81
CS3+CS4 | 15  | 3.3, 3.4, 105 | 7.2, 4.5, 62  | 5.2, 4.3, 82

Figure E-16. Individual inspection results for element 107-Open Steel Girder (ft).

As a part of this overall task, the participants were asked to rate the coatings for bridge I2 using the units of ln ft rather than the conventional unit of sq ft. This assessment was completed after the overall inspection had been completed. The purpose of this testing was to assess the time required to assign the rating and whether

the resulting distribution of results was greater than, or less than, the results from using a sq ft assessment. It was found that the overall quantities assigned to CS 3 and CS 4 were significantly larger when the unit of linear ft was used as compared with units of sq ft, as shown in Table E-14. When the results from all of the inspectors using units of ft were combined into a single analysis, the average quantity of CS 4 was 12%, close to the control value for CS 4 of 15%. It was also notable that the total damage, assessed as CS 3 + CS 4, was 40% as compared with only ~5% when assessed on a sq ft basis. It was also notable that when using ft for the unit, the mean result for CS 3 + CS 4 was the same (40%) for TGA and TGB. These data indicated that simply changing from sq ft to ft as a unit for assigning the CS to element 515 caused a significant increase in the quantity of damage reported. This indicates that a calibration would be required in order to change the units of measure for this particular element in the future. It was also found that the COV value for the combined group, considering the total damage (CS 3 + CS 4), was generally smaller when using ft for the units as compared with sq ft. The time required for making the assessment was an average of 8 minutes for both TGA and TGB.

Table E-14. Inspection results for bridge I2 for steel protective coatings assigned as length (ft), shown as percentage.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV
CS2     | 60 | 48, 34, 70   | 70, 36, 52   | 60, 35, 58
CS3     | 25 | 30, 26, 84   | 35, 26, 75   | 33, 24, 74
CS4     | 15 | 18¹, 22, 118 | 8.3, 13, 151 | 12, 15, 126
CS3+CS4 | 40 | 40, 26, 65   | 40, 35, 86   | 40, 30, 74
¹Mean value based on only 2 data points.

E.2.5.2 Task I.2 Assessment of Deck, Wearing Surface, and Joint

This task consisted of the evaluation of the concrete deck, wearing surface, joint, and the approach slab.
The test bridges were constructed with reinforced concrete decks with an overlay. Both decks had some damage in the overlay and damage in the deck itself, as could be observed on the soffit of the deck. Access to the driving surface of the deck was gained from a single shoulder on each deck (Figure E-17). Access to the soffit of the deck was from the ground below the structure. The deck comprised 6090 sq ft of surface area and the wearing surface comprised 5670 sq ft of surface area. The joint consisted of a compression seal 86 ft in length.

E.2.5.2.1 Element 12-Deck

The expected outcome of the tasks was different estimates of CS and spatial estimates. In particular, the crack defects in the deck could be rated CS 2 or CS 3. It was expected that different inspectors would yield different results for the crack defect because the visual guide images suggest that most or all of the cracking should be in CS 3. The deck itself also had observable areas of cracking on the soffit of the deck, as well as some areas of spalling and patching. Figure E-18 illustrates the type and extent of damage found in the soffit area of the bridge deck, as diagrammed during the control inspection. These diagrams illustrate that bridge I1 had significant areas of cracking, while deck I2 had areas of patching and spalling, with generally less extensive cracking as compared with I1.

Figure E-17. Photographs of the deck surface illustrating some damage in the decks of I1 and I2.

Figure E-18. Diagrams of damage in the deck soffit for bridges I1 (A) and I2 (B).

Deck I2 also had a portion of the deck soffit

concealed by a stay-in-place form, as indicated in Figure E-18. One member of TGB did not provide any results for the deck of I1. All inspectors provided results for the deck of I2. Overall, the control inspection and most of the inspectors identified two defects in the bridge decks: defect 1080-Delamination/Spall/Patched Area and defect 1130-Cracking (RC and Other). Three inspectors in TGA and one inspector in TGB also identified defect 1120-Efflorescence/Rust Staining in the deck. As noted previously, the CS ratings from different defects were combined in the analysis. The results from the inspection are shown in Table E-15. As shown in the table, both TGA and TGB reported portions of the decks in CS 3. For bridge I1, 92% (12/13) of the inspectors assigned some portion of the deck to CS 3. For bridge I2, 64% (9/14) of the inspectors assigned some portion of the deck to CS 3. Assuming that the control provides an accurate value for the damage in the RC deck, TGA provided more accurate quantities of damage (i.e., CS 2 + CS 3). The COV for TGA was also smaller as compared with TGB, with values of 51% and 84% for bridges I1 and I2, respectively. In contrast, TGB had COV values greater than 100% for both bridges I1 and I2 (173% and 123%). The data indicated that there was more variation in TGB for the deck element, although variation was very high in both cases.

Table E-15. Results for element 12-RC deck for bridges I1 and I2.
Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 7.0  | 5.3, 4.5, 85 | 22, 42, 187  | 12, 27, 217
CS3     | 0.13 | 4.9, 4.5, 92 | 3.1, 3.0, 96 | 4.0, 3.8, 94
CS2+CS3 | 7.1  | 9.5, 4.8, 51 | 21, 37, 173  | 15, 25, 167

Bridge I2
CS2     | 1.5 | 11, 10, 93    | 8.2, 4.8, 59 | 9.6, 7.8, 81
CS3     | 4.2 | 1.0, 0.94, 94 | 12, 23, 197  | 5.9, 15, 265
CS2+CS3 | 5.7 | 12, 9.8, 84   | 15, 18, 123  | 13, 14, 108

Examining the individual inspection results indicates that there were two outliers in the data that increased the variation significantly, as shown in Figure E-19. It was found that the same inspector was responsible for this large estimate of damage area, which appears in CS 2 for bridge I1 and CS 3 for bridge I2. Removing this outlier reduces the variation in the results of TGB. Because these data were so significantly different from the results of the other inspectors participating in the study, statistics were calculated with the outliers removed. These data are shown in Table E-16.

As shown in the table, the removal of this outlier significantly reduces the value of σ and the COV values, and improves the consistency in the mean results. With the outliers removed, the mean value for damage of the combined group drops from 15% to 7.7% for bridge I1, and from 13% to 10% for bridge I2. Importantly, the σ value dropped from 25% to 4.3% for bridge I1, and from 14% to 7.6% for I2. Finally, the COV values dropped from 167% to 55% for bridge I1 and from 108% to 76% for I2. Generally, outliers of this magnitude relative to the mean were not found in the data from the field exercises in either Indiana or Michigan. In this case, because of the large difference between a single quantity estimate and the mean, it was felt that removing these data was justified.

Table E-16. Results for element 12-RC deck for bridges I1 and I2 with two outliers removed from TGB.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 7.0  | 5.3, 4.5, 85 | 3.7, 1.92, 52 | 4.7, 3.7, 79
CS3     | 0.13 | 4.9, 4.5, 92 | 3.1, 3.0, 96  | 4.0, 3.8, 94
CS2+CS3 | 7.1  | 9.5, 4.8, 51 | 5.5, 2.3, 42  | 7.7, 4.3, 55

Bridge I2
CS2     | 1.5 | 11, 10, 93    | 8.2, 4.8, 59     | 9.6, 7.8, 81
CS3     | 4.2 | 1.0, 0.94, 94 | 0.20, 0.18, 0.93 | 0.70, 0.83, 118
CS2+CS3 | 5.7 | 12, 9.8, 84   | 8.2, 4.8, 58     | 10, 7.6, 76

Figure E-19. Individual inspection results for element 12-RC deck (sq ft).
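The effect of removing a single outlier on the summary statistics, as described above, can be sketched with illustrative (not actual) inspection quantities:

```python
import statistics

def summary(values):
    """Mean, sample standard deviation, and COV (%) for a set of inspector estimates."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)
    return mean, sigma, 100 * sigma / mean

# percent of deck reported as damaged by six hypothetical inspectors;
# the last entry is a single large outlier
damage = [5.0, 6.0, 4.0, 7.0, 5.0, 90.0]

with_outlier = summary(damage)        # one estimate inflates mean, sigma, and COV
without_outlier = summary(damage[:-1])
```

The sketch shows why a single extreme estimate dominates σ and the COV: the squared deviation of one far-off value can exceed the combined squared deviations of all other inspectors.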

E.2.5.2.2 Element 510-Wearing Surface

The riding surfaces of bridges I1 and I2 consisted of an overlay with cracking and areas of spalling and poor-condition patches. The inspection of the wearing surface was completed from the shoulder of the roadway, while the bridge was open to traffic. The deck overlay of bridge I2 had a significant number of transverse cracks throughout the deck, as shown in Figure E-20. The figure illustrates the damage in the wearing surface based on the results of the control inspection. The results for element 510-Wearing Surface are shown for both bridges in Table E-17. The assignment of mean damage was consistent between the two inspection groups, with only small differences in the mean amount of deck assigned CS 2 or 3. When considering the total damage quantity reported by TGA and TGB, i.e., CS 2 + CS 3, the mean values were 19% for TGA and 15% for TGB. However, the COV values are significant, indicating that there was significant variation between different inspectors.

Table E-17. Results for element 510-Wearing Surface for bridges I1 and I2.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 5.2 | 11, 9.4, 90  | 8.1, 14, 171 | 9.2, 11, 126
CS3     | 1.5 | 9.5, 14, 146 | 7.2, 6.4, 89 | 8.4, 11, 125
CS2+CS3 | 6.7 | 19, 14, 76   | 15, 18, 122  | 17, 16, 95

Bridge I2
CS2     | 4   | 11, 6.4, 58  | 7.5, 6.1, 81 | 9.2, 6.2, 68
CS3     | 3.1 | 1.6, 1.2, 74 | 5.9, 4.4, 74 | 3.8, 3.7, 99
CS2+CS3 | 7.1 | 12, 5.8, 49  | 11, 8.3, 76  | 11, 6.9, 61

Figure E-20. Diagrams of damage in the wearing surface of bridge I1 (A) and I2 (B).

All of the inspectors identified some portion of the wearing surface of bridge I1 as CS 3, and 8 out of 14 identified some portion of I2 as CS 3. Because of the role of material in poor condition (i.e., CS 3) in decision-making for preservation, maintenance, and repair activities, it is useful to examine more closely the results from the individual inspectors. Figure E-21 shows the results from individual inspectors for the wearing surface element for bridges I1 and I2. It can also be observed that the results are distributed across quantities that could fall on either side of a decision threshold. For example, if a decision boundary were 10% (~500 sq ft), 60% of the inspection results reported a value smaller than 10% and 40% indicated a value greater than 10% for bridge I1. For bridge I2, all inspectors reported less than 10% of the deck in CS 3. These results illustrate that thresholds for decision-making may be challenged by the variability of the inspection results. One inspector in TGA and two inspectors in TGB assigned CS 4 for a portion of the deck of bridge I1. The assignments were 1%, 2%, and 9% of the wearing surface of bridge I1. No inspectors assigned CS 4 to the wearing surface of bridge I2. The assessment of the wearing surface allowed for the examination of the defects that were assigned for the wearing surface element, which included a significant level of cracking in the deck of bridge I2. Table E-18 shows the results for element 510-Wearing Surface for bridges I1 and I2. The table shows the results for TGA and TGB, with the combined group treating the inspectors as a single population. Results are presented for the percentage of deck assigned to defect 3210, Delamination/spalling, and defect 3220, Cracking. All members of both groups captured these defects; one person from each group assigned defect 1190-Abrasion/Wear and defect 1090-Exposed Rebar to a portion of the wearing surface. Those results are not presented in the table.
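The decision-boundary tally described above can be sketched as follows; the function name and the sample fractions are illustrative, not the field data:

```python
def threshold_split(cs3_fractions, boundary=0.10):
    """Count inspection results below vs. at or above a decision boundary.

    cs3_fractions: fraction of the element reported in CS 3 by each inspector.
    """
    below = sum(1 for f in cs3_fractions if f < boundary)
    return below, len(cs3_fractions) - below

# hypothetical CS 3 fractions reported by five inspectors of the same element
below, at_or_above = threshold_split([0.05, 0.12, 0.08, 0.15, 0.02])
```

With variable results like those reported here, it is the share of inspections landing on each side of the boundary, rather than any single inspection, that determines how reliably a threshold-based decision would be triggered.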
For bridge I1, TGA assigned a higher portion of the damage to defect 3220, cracking, while TGB assigned a higher portion to defect 3210, delamination/spalling. In this way, the two groups were inconsistent in identifying the primary defect for the wearing surface. The control inspection identified cracking as the primary damage mode for this wearing surface, with 5% of the deck area damaged by cracking as compared with 1% from delamination/spalling. For bridge deck I2, both TGA and TGB identified defect 3220 (cracking) as the primary defect, as did the control inspection. When examining the overall results of combining TGA and TGB into a single population of inspectors, it was found that defect 3220 (cracking) was the primary defect assigned for each bridge, which was consistent with the control inspection.

Figure E-21. Individual inspection results for the wearing surface of bridges I1 and I2 (sq ft).

These data from bridges I1 and I2 indicate

that there was variability in the primary defect identified by the inspectors in different groups, but when all inspectors were combined into a single group, there was agreement that cracking was the primary defect affecting the deck.

Table E-18. Assignment of defect elements 3210 (delam) and 3220 (cracking) to element 510, wearing surface, for bridges I1 and I2.

Condition State (CS) | Control: 3210, 3220 | TGA Results (%): 3210, 3220 | TGB Results (%): 3210, 3220 | Combined: 3210, 3220

Bridge I1
CS2     | 0, 5 | 0, 13 | 13, 3 | 12, 8
CS3     | 1, 0 | 5, 30 | 4, 3  | 4, 10
CS2+CS3 | 1, 5 | 5, 17 | 11, 4 | 8, 11

Bridge I2
CS2     | 0, 4 | 4, 6 | 3, 9 | 3, 8
CS3     | 3, 0 | 3, 0 | 3, 5 | 3, 4
CS2+CS3 | 3, 4 | 4, 6 | 3, 9 | 3, 8

E.2.5.2.3 Element 302-Compression Seals

The compression seal at the end of the bridge was a total of 86 ft in length and was assessed by both teams.

Figure E-22. Photograph of typical conditions of compression seals for bridge I1 (A) and I2 (B).

The control inspection indicated that large portions of this seal were in CS 3, with the defect of

2310-Leakage. The condition of the compression seals at the ends of the bridge was fair to poor, as shown in Figure E-22. As illustrated in this figure, there was some debris impaction and damage to the concrete adjacent to the seal in different locations. Table E-19 below indicates the results from the inspection. It is useful to examine the combined length of damage identified by the two different test groups. For both bridges, the average combined quantity (CS 2 + CS 3) was 70% or greater for both groups. When all of the data from both test groups were combined to form a single group, the resulting COV values were 50% and 46% for bridges I1 and I2, respectively. In practical terms, these data indicate that there was high variation in the assignment of quantities of damage for the seal, with σ values of 35% or greater. There were also two members of TGA who assigned CS 4 for some or all of the seal for bridge I2. One inspector assigned all of the seal (86 ft) to CS 4 for defect element 2330-Seal damage, and another inspector assigned 12 ft to CS 4 for defect 2350-Debris impaction. No members of TGB assigned any quantity to CS 4 for bridge I2. No inspectors assigned CS 4 to the seal for bridge I1.

Table E-19. Results for Element 302-Compression Joint Seal.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV

Bridge I1
CS2     | 0   | 0, 0, 0    | 42, 35, 83 | 51, 39, 76
CS3     | 100 | 78, 36, 46 | 70, 38, 54 | 75, 35, 47
CS2+CS3 | 100 | 78, 36, 46 | 70, 40, 57 | 73, 37, 50

Bridge I2
CS2     | 0   | 39, 25, 63 | 66, 35, 53  | 52, 31, 60
CS3     | 100 | 66, 42, 64 | 47, 48, 102 | 57, 44, 77
CS2+CS3 | 100 | 81, 29, 36 | 71, 42, 58  | 76, 35, 46

The types of defects assigned for the compression seal were also analyzed, and it was found that there was significant variation in the types of defects identified by inspectors for the compression seal. Table E-20 records the damage and defects identified by the individual inspectors in TGA and TGB.
These data represent the quantity (ft) in CS 2, 3, or 4 identified by each inspector, according to the defect identified. In some cases, quantities in CS 2, 3, or 4 needed to be summed together to provide the data in Table E-20; in other words, the inspector identified one defect with several different CSs. The table illustrates the diversity of the types of defects identified by the inspectors overall. There were four different defects selected (out of eight possible defects for seals): 2310, Leakage; 2330, Seal damage; 2350, Debris impaction; and 2360, Adjacent deck or header damage. As shown in the table, there was very little consistency between the inspectors in terms of the defect identified in the field. For example, for bridge I1, TGB had one inspector identify defect 2310, four identify 2330, three identify 2350, and two identify 2360. Looking at those eight inspectors as a group (TGB + control), there is little consensus regarding the defects present in the bridge. The same was true for bridge I2, and for TGA. It can be noted that three members of TGA did not specify the type of defect for bridge I1; they recorded only the ft of damage. One of these inspectors did not provide a defect assignment for bridge I2. The reason for these omissions was not known.
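The per-defect summing described above can be sketched as follows; the record layout and function name are hypothetical:

```python
from collections import defaultdict

def total_by_defect(records):
    """Sum one inspector's damaged quantities (CS 2 or worse) by defect code.

    records: iterable of (defect_code, condition_state, quantity_ft) tuples.
    """
    totals = defaultdict(float)
    for defect, cs, qty in records:
        if cs >= 2:  # only damaged quantities count toward the tally
            totals[defect] += qty
    return dict(totals)

# e.g., one defect reported in two different CSs is summed into a single quantity
entries = [(2330, 2, 10.0), (2330, 3, 5.0), (2350, 3, 17.2)]
```

This kind of consolidation is what allows inspectors who split one defect across several CSs to be compared directly with inspectors who reported the defect as a single quantity.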

Table E-20. Defects for compression seal identified by inspectors with CS 2, 3, or 4 for bridges I1 and I2 (quantities in ft).

Insp. No. | TGA: 2310 Leakage, 2330 Damage, 2350 Debris, 2360 Adj. Deck | TGB: 2310 Leakage, 2330 Damage, 2350 Debris, 2360 Adj. Deck

Bridge I1
1       | -, -, -, -   | -, 86, -, -
2       | -, 85, -, 1  | -, 0.25, 17.2, -
3       | -, -, -, 86  | 86, -, -, -
4       | -, -, 15, -  | -, 20, 66, -
5       | -, -, -, -   | -, -, 3, 46
6       | -, -, -, -   | -, -, -, 10
7       | -, -, 86, -  | -, 10, -, -
Control | 86, -, -, -  | 86, -, -, -

Bridge I2
1       | -, 86, -, -  | -, -, -, 86
2       | -, 86, -, -  | -, -, 15, -
3       | -, -, 12, 52 | 86, -, -, -
4       | -, -, 22, -  | -, 12, 65, -
5       | -, -, -, -   | -, -, -, 76
6       | -, -, 86, -  | -, -, -, 4
7       | -, 5, 30, -  | -, 86, -, -
Control | 86, -, -, -  | 8, -, -, -

E.2.5.3 Task I.3 Assessment of Substructure and Bearing Elements

This task included inspection of element 210-Reinforced Concrete Pier Wall, element 215-Reinforced Concrete Abutment, and the bearing elements (313 and 311). The abutments had no reported defects, and the reinforced concrete pier walls had isolated damage in the form of delamination and cracking. The movable bearings differed in condition between the two bridges.

E.2.5.3.1 Element 215-RC Abutment

Bridges I1 and I2 each included 91 ft of Element 215-RC Abutment. The control inspection rated the total length of element 215-RC Abutment for Bridge I1 for defect 1130-Cracking in CS 1-Good. For Bridge I1, one member of TGA did not provide inspection results, and four members assigned CS 2 for this element, in quantities ranging from 3 to 91 ft. Five members of TGB assigned quantities ranging from 3 to 9 ft in CS 2. For Bridge I2, the control inspection assigned 1 ft in CS 2 for cracking. Four members of TGA assigned CS 2 to this element, in quantities ranging from 3 to 91 ft. Four members of TGB assigned quantities in CS 2, also ranging from 3 to 91 ft. For each group, several inspectors did not note any deficiencies in this element. No inspector in either group assigned CS 3 to any portion of the abutment in either bridge. Given the paucity of data for this element, statistics were not provided.

E.2.5.3.2 Element 210-Reinforced Concrete Pier Wall

Bridges I1 and I2 included 87 ft of pier wall in generally good condition. For bridge I1, the control inspection assigned 1 ft (~1%) to CS 2 and 5 ft (6%) to CS 3. Three members of TGA assigned CS 2 and three members assigned CS 3. Two members of TGA assigned CS 1 to the entire pier wall. Six members of TGB assigned CS 2 and three members assigned CS 3. All members of TGB assigned CS 2 or 3 to the wall. The statistics for the inspection results are shown in Table E-21. Overall, there was agreement that the quantity of damage was small. It was notable that 6/14 inspectors assigned CS 3 to some portion of the wall, while 2/14 assigned CS 1 throughout.

Table E-21. Results for Element 210-Reinforced Concrete Pier Wall, bridge I1.

Condition State (CS) | Control Quantity (%) | TGA Results (%): Mean, σ, COV | TGB Results (%): Mean, σ, COV | Combined (%): Mean, σ, COV
CS2     | 1.1 | 1.9, 0.7, 35 | 5.2, 2.8, 50  | 3.6, 2.8, 77
CS3     | 5.7 | 3.8, 1.3, 35 | 3.1, 0.66, 22 | 3.4, 1.0, 30
CS2+CS3 | 6.9 | 3.4, 1.6, 47 | 5.8, 3.1, 54  | 4.6, 2.8, 61

For Bridge I2, the control inspection indicated 2 ft of pier wall in CS 3. Four members of TGA assigned quantities to CS 2, in quantities of 2 or 3 ft. One member of TGA assigned 1 ft in CS 3. Three members of TGB assigned quantities to CS 2, in values ranging from 1 to 12 ft. One member of TGB assigned 3 ft to CS 3. Due to the paucity of data, statistics for I2 are not presented herein.

E.2.5.3.3 Element 313-Fixed Bearing

The test bridges included 6 fixed bearings in each bridge. The control inspection rated all 12 fixed bearings (Element 313-Fixed Bearing) for Bridges I1 and I2 in CS 1. Six of the seven inspectors in TGA rated all 12 fixed bearings in CS 1, and a single member of TGA rated the 12 bearings in CS 2. Four of the inspectors in TGB rated the 12 fixed bearings in CS 1, two inspectors rated the 12 bearings in CS 2, and one member of TGB rated the bearings in bridge I1 as CS 3 and the fixed bearings in bridge I2 in CS 2.
Overall, the predominant rating for the fixed bearings was CS 1, with about 70% of the inspectors assigning CS 1 to all of the fixed bearings. It was notable that 10/14 inspectors rated all of the fixed bearings in CS 1, 3/14 rated all bearings in CS 2, and 1/14 inspectors rated 6 in CS 2 and 6 in CS 3.

E.2.5.3.4 Element 311-Movable Bearings

The movable bearings in both bridges were identified as having corrosion damage, as illustrated in Figure E-23. In the figure, two movable bearings from bridge I1 are shown; each has corrosion damage that has resulted in widespread loss of the coating and corrosion product on the surface of the steel. There were 18 element 311-Movable Bearings in each of the two bridges. For bridge I1, the control inspection rated this element for defect 1000-Corrosion, with six bearings in CS 1 and the remaining 12 bearings in CS 2. All TGA and TGB members also assigned the corrosion defect for the movable bearings in bridge I1.

Figure E-23. Photograph of two moveable bearings in bridge I1 showing corrosion damage.

For Bridge I2, the movable bearings were rated for defect 1000-Corrosion, with a single bearing rated according to defect 2220-Alignment (CS 2) by the control inspection. Two members of TGA and two members of TGB also identified defect 2220-Alignment for this element. In TGA, one inspector rated two bearings in CS 3 and one inspector rated one of the bearings in CS 4 for the alignment defect. Within TGB, one inspector rated one of the bearings as CS 3, and one inspector rated 17 bearings in CS 2 and one bearing in CS 4 for the alignment defect. One member of TGA assigned defect 7000-Damage. In summary, for bridge I2, only 4/14 identified the defect of alignment, and 2/14 identified CS 4 for alignment. The inspection results for this element are shown in Table E-22. The data shown in Table E-22 were consolidated to illustrate the average ratings for CS 2, 3, and 4. Because a few inspectors indicated CS 4 for certain bearings, CS 2, 3, and 4 were grouped to illustrate the average results considering the inspector-recorded damage for the movable bearing elements. As shown in the table, TGB generally rated a larger percentage of the bearings in CS 3 than did TGA and also recorded a higher percentage of the bearings with some damage (CS 2 or greater).

E.2.5.4 Frequency of Defect Element Use

The inspection results from Bridges I1 and I2 were analyzed to determine how consistently the defect elements were assigned during the course of the inspection. For Element 107, Open Steel Girder, all inspectors assigned the defect of corrosion to the element. The frequency of defect assignment for the decks of I1 and I2 is shown in Table E-23. The data in the table represent all of the defects identified by each inspector; in some cases, more than one defect may be assigned to an element by a single inspector.
In some cases, inspectors identified CS 2 or greater but did not record the defect to which the CS was assigned. In such cases, it cannot be known whether the inspector observed a given defect; therefore, that inspector is not included in the reported results. The ratio shown in the table represents the number of inspectors identifying the specific defect over the number of inspectors that assigned defects (any defect) of at least CS 2 to that element, or assigned CS 1 without assigning a specific defect. For the latter case, the entry of CS 1 confirmed the inspector observed the element but did not find any defects. For example, two members of TGA assigned CS 2 but did not identify a specific defect for the deck of bridge I1. Therefore, it is not known what defect those inspectors observed when assigning the CS 2. If three inspectors positively identified a defect, the ratio 3/5 is assigned.

Table E-22. Results for Element 311-Movable Bearings.

                       Control      TGA Results (%)     TGB Results (%)     Combined (%)
Condition State (CS)   Quant. (%)   Mean   σ    COV     Mean   σ    COV     Mean   σ    COV
Bridge I1
CS2                    33           20     12   62      25     12   47      21     11   56
CS3                    67           49     12   25      74     19   26      63     20   32
CS2, 3 or 4            100          65     20   31      81     19   22      73     20   27
Bridge I2
CS2                    33           37     23   61      23     15   65      30     19   64
CS3                    67           56     15   26      67     11   17      61     13   22
CS2, 3 or 4            100          71     12   18      84     17   21      77     16   20

It can be observed in the table that a distribution of different defects was assigned to the deck element, and there is not agreement regarding what defects are present in the deck. For example, four members of TGA identified efflorescence, but only one member of TGB identified efflorescence. There were no defects that were identified by all of the inspectors in a group that assigned specific defects, but the majority of results included spalling/delamination and cracking.

Table E-23. Frequency of defect assignment for the decks of bridges I1 and I2.

                               TGA                                        TGB
Element No./Name               1080 Delam.  1120 Efflor.  1130 Cracking   1080 Delam.  1120 Efflor.  1130 Cracking
Bridge I1 (BNBL) 12-RC deck    3/5          4/5           3/5             5/6          1/6           5/6
Bridge I2 (BNBL) 12-RC deck    4/6          4/6           2/6             5/6          1/6           5/6

The defects identified for the primary concrete elements reported herein were also analyzed to determine how frequently different defect elements were selected, as shown in Table E-24. These data also indicated that there was variability in the defects selected by the inspectors. For example, for element 215-RC abutment of bridge I1, seven members of TGB assigned the defect 1130-Cracking, while only two members of TGA assigned defect 1130. The data indicated that there was inconsistency in the assignment of defect elements between different inspectors.
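The counting convention described above can be sketched in code. The record structure and values below are hypothetical, chosen only to illustrate the rule: inspectors who log CS 2 or greater without naming a defect are excluded from the denominator, while a bare CS 1 entry counts as a valid no-defect observation.

```python
# Hypothetical records for one element: each inspector's entry maps a defect
# number (or None when a CS was logged without naming a defect) to the CS assigned.
records = [
    {1080: 2, 1130: 2},  # two specific defects identified
    {1080: 2},
    {1130: 3},
    {None: 2},           # CS 2 with no defect named: excluded from the ratio
    {None: 1},           # CS 1 with no defect: element observed, no defects found
]

def defect_frequency(records, defect):
    """Inspectors naming `defect` over inspectors who either assigned a specific
    defect at CS 2 or greater, or recorded CS 1 without a specific defect."""
    denom = sum(
        1 for r in records
        if any(d is not None and cs >= 2 for d, cs in r.items()) or r == {None: 1}
    )
    num = sum(1 for r in records if defect in r)
    return num, denom

print(defect_frequency(records, 1080))  # (2, 4)
```

With these hypothetical records, defect 1080 is reported by two of four countable inspectors, mirroring the 3/5 example in the text.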
These data may be important for those hoping to integrate the type of defect identified during the inspection into deterioration and damage models based on the type of damage (e.g., spalling, cracking, etc.). These data also affect the accuracy of inspection results, since it appears that not all of the inspectors are making the same observations of damage in the field. However, the use of defects is relatively new for element-level inspection, and as more experience is gained, the consistency of defect assignments is likely to increase. Results presented later in this report from Michigan, where experience in element-level inspection is greater, generally had less inconsistency in the assignment of defects.

Table E-24. Frequency of defect element selection for concrete elements.

                     TGA                    TGB
Element No./Name     1080   1090   1130     1080   1090   1120   1130
Bridge I1 (BNBL)
210-RC Pier wall     3/6    -      1/6      6/7    2/7    -      2/7
215-RC abutment      1/6    -      2/6      1/7    -      -      7/7
Bridge I2 (BSBL)
210-RC Pier wall     -      1/7    4/7      2/6    3/7    -      2/7
215-RC abutment      1/6    -      3/6      -      -      1/7    5/7

Key: 1080 Delam./Spall/Patched area; 1090 Abrasion/wear; 1120 Efflorescence/Rust staining; 1130 Cracking (RC)

E.2.5.5 Routine NBIS Inspection Results

This section of the report documents the condition ratings assigned by TGA and TGB for test bridges I1 and I2. Table E-25 records the ratings provided by each inspector for items 58-Deck, 59-Superstructure, and 60-Substructure. The table also includes statistics based on these data showing the mean, σ, COV, and the range of values. The data analysis indicated that the largest σ value was 0.92, meaning that most inspection results would be within ±1 of the mean value. This is also shown in the range values provided: the largest range value was 3, but it occurred in only 1/6 ratings; the most common range was 2 ratings.

E.2.5.6 Post Test Questionnaire

A post-test questionnaire was administered to all participants following the field exercises. The post-test questionnaire included some questions that were common to TGA and TGB members and some questions that were specific to each group. The questions sought to obtain various information from the inspectors, including an evaluation of the newly developed visual guide, ways to improve the visual guide, the participants' previous experience with inspecting the bridges in the study or similar bridges, and the tools they used during the inspection.
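A minimal sketch of how the condition-rating statistics reported with Table E-25 can be derived, using the item 58-Deck ratings for bridge I1; the sample (n-1) standard deviation is assumed, which reproduces the published σ of 0.92.

```python
from statistics import mean, stdev

# Item 58-Deck ratings for bridge I1: TGA 01-07 followed by TGB 01-07 (Table E-25)
deck_i1 = [5, 4, 5, 4, 5, 5, 4, 7, 4, 5, 4, 5, 6, 6]

m = mean(deck_i1)                  # reported (rounded) as 5
s = stdev(deck_i1)                 # sample standard deviation, reported as 0.92
cov = s / m                        # coefficient of variation, reported as 0.19
rng = max(deck_i1) - min(deck_i1)  # range, reported as 3

print(round(m), round(s, 2), round(cov, 2), rng)  # 5 0.92 0.19 3
```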

Table E-25. Condition ratings for bridges I1 and I2.

          Bridge I1                   Bridge I2
          58 Deck  59 SS  60 Sub     58 Deck  59 SS  60 Sub
TGA 01    5        7      7          4        5      7
TGA 02    4        6      7          5        5      6
TGA 03    5        5      7          4        5      7
TGA 04    4        5      7          5        5      7
TGA 05    5        5      6          3        5      7
TGA 06    5        6      7          5        6      6
TGA 07    4        6      6          4        6      7
TGB 01    7        7      7          5        6      7
TGB 02    4        6      7          5        5      6
TGB 03    5        6      7          5        6      6
TGB 04    4        5      6          5        5      7
TGB 05    5        6      6          5        7      7
TGB 06    6        7      7          5        6      6
TGB 07    6        6      6          5        6      7
Mean      5        6      7          5        6      7
σ         0.92     0.73   0.50       0.63     0.65   0.50
COV       0.19     0.12   0.07       0.14     0.12   0.07
Range     3.00     2.00   1.00       2.00     2.00   1.00

One of the questions specific to TGA asked if use of the visual guide would assist in rating bridge elements in the field. All seven inspectors agreed on the usefulness of the visual guide, with responses ranging from "felt pictures were very helpful" to "limited instances". Some of the inspectors provided their opinion in detail, and some proposed changes. One inspector noted that the benefit of the visual guide would be more pronounced "if the rating was done in the field, not as much if rating were given later from notes/pictures in the office." Another inspector wrote that the visual guide effectiveness "may be on a less traveled highway, on the interstate, it was a big distraction." Another inspector wrote that the visual guide would work if "reworked and placed on an iPad like a pull-down menu. I have great difficulties with protective coating inspection with elements. I think we could develop overall picture to compare entire structure or span and get % of CS that way." Six members of TGA agreed in response to a question that asked if the visual guide would help in understanding the different condition states that should be assigned to an element. The extent of their agreement ranged from "yes, pictures always help" to "yes, very much so" to "somewhat". Another question sought suggestions from TGA members to improve the visual guide.
Four members answered this question, and their suggestions included "if it could be slightly more compact", "make it digital where it could be linked to online bridge inspection database", "titles of defects should be at the top and center of each page not at the bottom. Gather more appropriate pictures.", and "have multiple defect photos on the same page." Two other questions asked TGA members what they liked most and least about the visual guide. Answers to what they liked most showed that the inspectors "always like photos, less items", "the pictures more clearly define each condition state", "helped with speed once deficiencies/defects were identified," and "not too long, eventually should eliminate or make electronic. Got somewhat damaged after one day." Likewise, answers to what the inspectors liked least about the visual guide identified two concerns: 1) four inspectors said that it was "large size", "bulky", and "lengthy" to "carry it around", and one of them mentioned that "we typically condensed inspection guides for ease of use in the field"; 2) one inspector responded that "some items seem not to be addressed." The latter comment is believed to refer to the fact that not all defects are currently included in the guide.

Another question asked TGA members to rate the ease of use of the visual guide on a scale of 1 to 5, with 1 being difficult and 5 being easiest. The TGA member who rated this question as 1 also wrote: "Too many papers to carry. Once learned it would work." Three other members of TGA rated the ease of use of the visual guide as 3, and the other three members rated it as 4.

There were two questions specific to TGB (those who did not use the visual guide) in the post-test questionnaire. One question asked the inspectors to rate the ease of the new format of the Manual for Bridge Element Inspection (MBEI) attached at the end of the inspection workbook on a scale of 1 to 5, with 1 being "significantly more difficult" and 5 "significantly easier." This question referred to MBEI pages attached to the workbook that were organized based on material. Four of the TGB members rated the new format of the MBEI as 4, and the other three members rated it as 5. The other question asked the inspectors which format of the MBEI they would prefer to use in the future, the original MBEI format or the new format available in the workbook. All members of TGB responded that they would prefer to use the new format of the MBEI in the future.

The following are answers to questions that were asked of both TGA and TGB members in the questionnaire. Two people from TGA and one person from TGB had previously inspected the bridges chosen for the inspection exercises.
Also, except for one TGB inspector who indicated that the inspection exercise bridges were "somewhat" similar to the bridges he/she commonly inspects, all other participants had previous experience inspecting similar bridges. Answers to another question, which asked the participants whether they had inspected the bridges differently than they conduct a normal inspection, showed a wide range of responses. Four inspectors said that they felt rushed. Two other participants wrote, "Could not access entire deck due to traffic." Two members wrote that they have a file with plans and reports to update. Four inspectors said that they inspected the bridges differently due to use of the visual guide. One member wrote, "I don't add up all the deficient areas or note them, I will estimate using percentage." Another question for both groups asked about the tools the participants used during the inspection exercise. The answers showed that half of the inspectors used two or more tools such as a hammer, crack comparator, tape measure, pick/probe, and binoculars.

E.3 Michigan Field Test Results

This section of the report documents the results from the field exercises completed in Michigan. The field exercise in Michigan consisted of the inspection of superstructure and substructure elements of twin prestressed girder bridges. These adjacent structures were constructed at the same time and had similar design features. The inspection tasks on these twin bridges included assessment of Element 109-Prestressed Concrete Open Girder/Beam, Element 205-RC Columns, Element 234-RC Pier Cap, Element 215-RC Abutment, Element 313-Fixed Bearings, and Element 310-Elastomeric Bearings. The field exercise also included tasks for assessing the concrete decks of the two bridges. This included the assessment of Element 12-RC Deck and joint elements 300-Strip Seal and 301-Pourable Joint Seal.
A complete description of the bridges and their previous inspection results is included in Appendix D. The field exercises were completed over a two-day period, with five inspectors participating on the first day and five inspectors participating on the second day. This schedule was required to meet the availability of personnel to participate in the study.

E.3.1 Pre-test Questionnaire

A total of ten inspectors attended the inspection exercise, with five inspectors on each day. The inspectors were randomly divided into two groups: TGA (the group who used the newly developed visual guide) and TGB (the group who used their routine inspection practice). The group membership was randomly selected because no information was available prior to the field exercises regarding the experience level of the inspectors. The pre-test questionnaire was used to capture information about the inspectors' level of education, bridge inspection training courses attended, and the portion of their job dedicated to bridge inspection. The inspectors were also asked to indicate whether they wear eyeglasses during inspection and have any form of color blindness. Finally, there were some questions specific to the methods used by inspectors for inspection of the deck and superstructure.

Table E-26 shows the attendees' education level, bridge inspection experience, and training courses attended. The TGA group consisted of three inspectors with a Bachelor's degree, one inspector with a Master's degree, and one member who did not provide his education level. The educational level for TGB members included one inspector with a high school education, three inspectors with a Bachelor's degree, and one with a Master's degree.

Table E-26. Qualification of the bridge inspectors for the Michigan inspection exercise.

Category                   Option                          # of Inspectors
                                                           TGA    TGB
Education                  H.S.                            0      1
                           B.S.                            3      3
                           M.S.                            1      1
Br. Insp. Experience       < 5 yrs                         1      4
                           5-15 yrs                        2      0
                           15+ yrs                         2      1
Element-Level Insp. Exp.   1-2 yrs                         1      2
                           3-4 yrs                         0      2
                           10+ yrs                         3      1
Training                   NHI 130055, Safety Insp.
                           of Bridges                      5      3
                           FHWA Element Training           4      3
                           Team Leader                     4      4

Although the project invitation requested inspectors that meet the National Bridge Inspection Standards (NBIS) requirements for bridge inspection, one of the attendees did not appear to fulfill this requirement, and one other member did not provide this information. The member who did not provide this information was generally unwilling to complete the test questionnaire, indicating verbally that he was concerned with the possibility of being deposed in future litigation. The source of this concern was unknown. However, based on discussion with this apparently very experienced inspector, it was believed that he was qualified as a team leader. Therefore, it is believed that at least four out of five inspectors from each group were qualified team leaders. This feature was considered in the data analysis, and results were examined to learn if there appeared to be any effect from having at least one inspector who was not a qualified team leader. No effect could be found.

Information on training courses taken by inspection exercise attendees was collected and documented. The pre-test questionnaire contained questions about training provided by the National Highway Institute (NHI), the Federal Highway Administration (FHWA), and any state-specific bridge inspection training. Through this question it was found that four TGA members and one TGB member had attended NHI 130053 Bridge Inspection Refresher Training, four members of TGA and two members of TGB had taken NHI 130078 Fracture Critical Inspection Techniques for Steel Bridges, and four members of TGA and one member of TGB had taken state-specific bridge inspection refresher training. The percentage of job duties dedicated to bridge inspection was 100% for four inspectors, 50% for one inspector, 25% for two inspectors, and 10% for three inspectors. All participants had 20/20 vision based on the Snellen eye chart test. None of the inspectors reported any form of color blindness, which was verified by the Ishihara 14-plate test completed prior to the field exercises.

Two questions asked about common methods used by the inspection exercise attendees for measurement of the area of damage on a bridge deck and the length of damage on a superstructure. For measurement of the area of damage on a bridge deck, three participants responded that they draw each area of damage (spall, crack, etc.) on a diagram with estimated dimensions, and then tally the total area. Four other participants commonly estimate the total area of damage as a percentage, which is then multiplied by the total quantity for that element. Two participants indicated that they use both of the above methods, and each of these inspectors wrote a comment for clarification.
One of the inspectors wrote, "for structures with a large amount of defects, percentage is used." The other inspector mentioned that he writes notes and quantities on reports without drawing. One other inspector reported that he estimates the size of spalls and bad patches visually by span and sums them for the total area. This inspector also commented that for a busy roadway, where sounding is not possible, he estimates areas of delamination based on the visually observable spalls and patches. For measurement of the length of damage on a superstructure, three inspectors responded that they draw each length of damage on a diagram with estimated dimensions and then tally the total length. Six inspectors replied that they estimate the total length of damage as a percentage, and then multiply by the total quantity for that element. One of the six inspectors wrote that he uses "visual count, pacing and ruler as needed." One inspector measures each length of damage with a ruler, and then sums for the total length.

E.3.2 Spatial Area Estimation

After completion of the pre-test questionnaire, all inspectors were asked to complete area estimation of simulated damage printed on six sheets of 8.5 × 11 in. paper, as shown in Figure E-4 (i.e., the page test). The estimation results for this task are shown in Table E-27 for TGA and TGB. As seen in Table E-27, the estimation means reported by TGA and TGB are close to each other but are larger than the actual quantities of the simulated damage. In other words, on average the inspectors overestimated the simulated damage printed on the sheets of paper. Comparing the σ values between the two groups shows that quantities reported by TGA were less dispersed than quantities reported by TGB in 5/6 (83%) of the cases. It can also be noted that the σ values increase as the area percentage increases. The average COV values were the same for each group, suggesting that the variation in the estimates was very similar between the two groups. Examining the error values, the data showed that the average normalized error was slightly smaller for TGA as compared with TGB. This result was consistent with the results of the page testing in Indiana, in which the average normalized error was also slightly smaller for TGA. These data may suggest that the use of the visual guide by TGA improved the accuracy of the area estimates during the page test; however, given the high variation in the results, as represented by COV values greater than 30% in many cases, the significance of this result is not assured.
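The error columns of Table E-27 follow directly from the tabulated group means and the actual simulated damage; a minimal sketch, using plate 1 of the TGA results as the example:

```python
def page_test_stats(actual, group_mean, group_sigma):
    """Error columns of Table E-27: error, normalized error (%), and COV (%)."""
    error = group_mean - actual
    norm_error = 100.0 * error / actual
    cov = 100.0 * group_sigma / group_mean
    return error, norm_error, cov

# Plate 1: actual simulated damage 1.0%; TGA reported mean 2.1% with sigma 0.74
error, norm_error, cov = page_test_stats(1.0, 2.1, 0.74)
print(round(error, 1), round(norm_error), round(cov))  # 1.1 110 35
```

The same function reproduces the TGB entries for plate 1 (error 0.8, normalized error 80%, COV 47%).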

Table E-27. Simulated area estimation as a percentage of the total area of the sheet of paper.

          Actual Sim.   TGA Results (%)                        TGB Results (%)
Plate No. Damage (%)    Mean  Error  Norm Error  σ     COV     Mean  Error  Norm Error  σ     COV
1         1.0           2.1   1.1    110         0.74  35      1.8   0.8    80          0.84  47
2         6.0           6.8   0.8    13          4.6   68      9.4   3.4    57          3.8   40
3         12            13    0.6    5.0         4.9   39      15    3.0    25          5.9   39
4         18            24    6.0    33          5.5   23      22    4.4    24          7.2   32
5         36            39    3.0    8.3         7.4   19      42    6.0    17          9.1   22
6         48            57    9.0    19          11    19      65    17     35          14    22
Average                              31                34                   40                34

E.3.3 Bridge Inspection Exercise Results

This section reports the inspection results for bridges M1, M2, M3, and M4. Specific elements of the bridges are divided into separate tasks for reporting purposes. The participants conducted the inspection as a routine inspection. Bridges M1 and M2 are twin prestressed concrete girder bridges of the same size, and the inspection results for these bridges are reported in the same section. The data statistics for each element are reported for the control inspection, TGA, and TGB separately, as well as for a combined sample irrespective of the TGA and TGB group assignments. The data statistics for an element are reported as a percentage of the total quantity for that element regardless of the actual measurement unit of the element. Data were analyzed in the same manner as for the field exercises in Indiana.

E.3.3.1 Task 1 Bridge M1 and M2-Assessment of Prestressed Girders

Task 1 consisted of inspection of element 109-Prestressed Concrete Open Girder/Beam for each bridge, and the CS quantities for each bridge were reported in two separate inspection forms. During the inspection, TGA used the visual guide, which contains images of prestressed concrete element defects and spatial estimate diagrams, and TGB inspected this element normally. Damage in the prestressed girders consisted primarily of damage in the area of the bearings where beam-end damage had developed, some of which had been repaired.
Figure E-24 illustrates the overall condition of these bridges (Figure E-24 A and B), consisting of multi-girder construction with typical prestressed members. The type of damage typical in these prestressed girders is shown in Figure E-24 (C and D), consisting of spalling at beam ends with some cracking/spalling developing.

E.3.3.1.1 Element 109-Prestressed Girder

Figure E-24. Photographs of bridges M1 (A), M2 (B), and typical damage (C and D).

For bridge M1, the control inspection rated element 109 for defect 1080-Delamination/Spall/Patched Area and assigned 1% of this element to CS 2 and 1% to CS 4. The control inspection judged that the spalling damage required review (CS 4), but none of the participants reported a similar result. All five TGA members assigned defect 1080. In addition to defect 1080, one TGA member assigned defect 1130-Cracking (RC and Other). Three out of five TGB members reported defect 1080 for this element. The two members of TGB who did not report defect 1080 reported defect 1090-Exposed Rebar and defect 1110-Cracking (PSC) for this element. Two participants (one TGA member and one TGB member) reported in the post-test questionnaire that they used a crack comparator for determining the CS for this element.

For bridge M2, the control inspection rated the prestressed concrete open girder/beam for defect 1080-Delamination/Spall/Patched Area and defect 1090-Exposed Rebar in CS 2 and CS 4. Again, the assessment by the control inspection was that the spalling damage required review, but none of the participants reached a similar conclusion. The participants rated all damage as either CS 2 or CS 3. All members of TGA and TGB captured defect 1080 for this element. Two members of TGB captured defect 1090, and one other member assigned defect 1100-Exposed Prestressing in addition to defect 1080 for this element. None of the TGA members reported defect 1090 for this element. One TGA member assigned defect 1130-Cracking (RC and Other). The inspection results for element 109 for bridges M1 and M2 are shown in Table E-28. In the table, results are shown for TGA, TGB, and both groups combined into a single population. The combined damage quantity of CS 2, 3, and 4 is also reported. It should be noted that only the control inspection included a quantity in CS 4, such that for TGA and TGB only CS 2 and CS 3 quantities are included. The COV values reported in this table are typically greater than 50%, with most being equal to or close to 100%.

Table E-28. Inspection results for element 109-Prestressed Concrete Girder/Beam for bridges M1 and M2, reported as percentage of the total quantity.

Condition    CI Qua.   TGA Results (%)      TGB Results (%)      Combined Result (%)
State (CS)   (%)       Mean  σ     COV      Mean  σ     COV      Mean  σ     COV
Bridge M1
CS2          0.79      1.2   1.3   107      1.2   1.3   104      1.2   1.2   99
CS3          0         2.5   2.5   97       1.6   1.2   74       2.0   1.7   86
CS4          1.2       0     0     0        0     0     0        0     0     0
CS2-4        2.0       2.7   3.3   123      2.5   2.5   97       2.6   2.8   106
Bridge M2
CS2          1.3       1.4   1.1   77       1.1   1.0   94       1.2   1.0   80
CS3          0         2.1   2.0   95       1.6   1.2   76       1.8   1.5   80
CS4          1.9       0     0     0        0     0     0        0     0     0
CS2-4        3.1       2.6   2.2   84       2.4   1.5   63       2.5   1.8   72

The significant result from this task was that all participants indicated that the quantity of damage in the prestressed girders was relatively small. For bridge M1, the average quantity reported by both TGA and TGB was ~3%. For bridge M2, the average quantity for TGA was 2.4% and for TGB the average quantity was 2.6%, only slightly different. The range of values (in ft) was more revealing. Considering the combined results from both groups, the inspector ratings for damage length (CS 2 + CS 3) ranged from a minimum of 1 ft to a maximum of 63 ft for bridge M1 (range = 62 ft), and from a minimum of 5 ft to a maximum of 47 ft (range = 42 ft) for bridge M2. The control inspection documented damage lengths of 15 and 23.5 ft for bridges M1 and M2, respectively. These data indicate that the estimates for quantities of damage had some variation in practical terms, although on a percentage basis the average overall results were consistent and the quantity of damage was small.

The results from the individual inspectors are shown in Figure E-25 for damage in the PS girders. This figure shows the individual CS 2 and CS 3 quantities for each of the 10 inspectors in TGA and TGB, and the combined quantity for CS 2 + CS 3. Results are presented in units of ft in the figure. The data in the figure illustrate the scatter in the results for CS 2 and CS 3 in units of ft. For the combined CS 2 + CS 3 quantity, the total amount of damage recorded by the inspectors, it can be noted that one inspector in TGA and one inspector in TGB recorded significantly more damage for bridge M1 than the other eight inspectors. For bridge M2, there was also scatter in the combined CS 2 + CS 3 quantity.

It was also revealing to examine the individual inspector results for CS assignment. Considering bridge M1, three out of five TGA inspectors assigned a quantity to CS 3, and four out of five TGB inspectors assigned a quantity to CS 3. For bridge M2, the rate was exactly the same, and the same inspectors reported quantities in CS 3. In other words, two inspectors from TGA and one inspector from TGB did not report any CS 3 in either bridge. This may illustrate a difference in the interpretation of the CS definitions between inspectors: inspectors that recorded CS 3 for bridge M1 also recorded CS 3 for M2, and inspectors that did not record CS 3 in M1 also did not record CS 3 in M2. The range of values assigned to CS 3 for bridge M1 was from 0 ft to 38 ft. The range of values assigned to CS 3 for bridge M2 was from 0 ft to 33 ft.

Figure E-25. Inspection results for PS girder damage for bridges M1 (top) and M2 (bottom).

E.3.3.2 Task 2 Bridge M1 and M2 Substructure Elements

This task consisted of inspection of element 205/207-Reinforced Concrete Column for bridges M1 and M2. In this task, all inspectors were asked to assess the columns using units of each (ea); a later task asked the inspectors to rate the columns in ft. Each bridge included eight columns. Figure E-26 illustrates the overall condition of the reinforced concrete columns that were rated by the inspectors. The columns in bridges M1 and M2 were in very similar condition, with a limited amount of localized spalling and few cracks. Figure E-27 shows examples of the type of damage that was present in the columns, including localized spalling, rust staining, and cracking.

Figure E-26. Photograph of overall condition of columns in Bridge M1.

E.3.3.2.1 Element 205-RC Column

One member of TGA rated all columns in CS 1 for this element in bridge M1. The remaining four TGA inspectors and all five TGB inspectors rated the RC columns in CS 2 and/or CS 3, damaged by defect 1080-Delamination/Spall/Patched Area, and one member from each group assigned defect 1130-Cracking (RC and Other). Generally, inspectors in TGA and TGB rated the damage in at least two of the columns as CS 3, as compared to the control inspection, which rated the damage entirely in CS 2. For bridge M2, the control inspection rated (3/8) 37% of the columns in CS 2, damaged by defect 1080-Delamination/Spall/Patched Area and defect 1130-Cracking (RC and Other). From TGA, a single inspector rated all columns in CS 1. This was the same inspector who rated all of the columns in bridge M1 as being in CS 1. In terms of the defects identified by the inspectors, six of the ten inspectors assigned defect 1080 (spalling) and five assigned defect 1130 (cracking). Two inspectors assigned both of these defects. The inspection results for the reinforced concrete columns for bridges M1 and M2 are shown in Table E-29.

The data in Table E-29 show that, when looking at the averaged values presented in the table, there was consistency between TGA and TGB in estimating that about 50% of the columns were damaged in bridge M1; there was less consistency for bridge M2, where the mean value was 53% for TGA and 30% for TGB. There was some agreement between TGA and TGB in assigning CS 3 for bridge M1; TGA assigned 42% and TGB assigned 34% of the columns to CS 3. The COV values for CS 3 were 17 and 18% for TGA and TGB, respectively. When examining the combined group, the COV value was only 19%. For bridge M2, only 1/10 inspectors assigned any columns to CS 3. These data indicated that there was limited variation in the inspection results for the columns.

Table E-29. Inspection results for element 205-Reinforced Concrete Column for bridges M1 and M2, inspected using units of each (ea) and reported as percentage of the total quantity.

Condition     CI Qua.   TGA Results (%)      TGB Results (%)      Combined Result (%)
State (CS)    (%)       Mean  σ     COV      Mean  σ     COV      Mean  σ     COV
Bridge M1
CS 2          75        88¹   0     0        29    19    65       44    33    76
CS 3          0         42    7.2   17       34    6.3   18       38    7.2   19
CS 2+CS 3     75        53    24    45       45    11    25       49    17    35
Bridge M2
CS 2          38        63    45    72       30    14    48       42    31    74
CS 3          0         25¹   0     0        0     0     0        25    0     0
CS 2+CS 3     38        53    41    78       30    14    48       40    30    74
¹Only a single inspector reported damage in this CS.

Figure E-27. Photographs of typical damage in columns showing spalling (A) and cracking (B).
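The element-level statistics in Table E-29 can be reproduced from the per-inspector column counts in Table E-30. For example, the three TGA inspectors who reported CS 3 for bridge M1 recorded 3, 4, and 3 of the 8 columns; converting to percentages and taking the sample statistics recovers the tabulated mean of 42%, σ of 7.2, and COV of 17%:

```python
from statistics import mean, stdev

N_COLUMNS = 8           # each bridge included eight columns
cs3_counts = [3, 4, 3]  # columns in CS 3 per reporting TGA inspector, bridge M1 (Table E-30)

pct = [100.0 * c / N_COLUMNS for c in cs3_counts]  # 37.5, 50.0, 37.5
m = mean(pct)
s = stdev(pct)  # sample standard deviation
print(round(m), round(s, 1), round(100 * s / m))   # 42 7.2 17
```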

It is useful to examine the individual inspection results to better understand the data presented in Table E-29. Table E-30 shows the CS assignments from each of the 10 inspectors in TGA and TGB. Examining the results for bridge M1, CS 3, three inspectors in TGA assigned columns to this condition state. These inspectors assigned either three or four columns to CS 3, and the resulting σ and COV values in Table E-29 are 7.2% and 17%, respectively, which are relatively low values. For TGB, four inspectors reported columns in CS 3, with resulting σ and COV values in Table E-29 of 6.3% and 18%. These data indicate that, among the inspectors identifying CS 3, there was consistency.

Table E-30. Inspection results for columns of bridges M1 and M2, ea.

       Bridge M1 Columns     Bridge M2 Columns
       CS1   CS2   CS3       CS1   CS2   CS3
TGA    1     7     -         2     6     -
       5     -     3         7     1     -
       4     -     4         -     8     -
       8     -     -         8     -     -
       5     -     3         6     -     2
TGB    4     4     -         6     2     -
       3     2     3         4     4     -
       5     -     3         6     2     -
       5     1     2         7     1     -
       5     -     3         5     3     -

E.3.3.3 Task 4 Assessment of Columns Using Units of ft

This task consisted of inspection of element 205-RC Column using units of linear ft for bridges M1 and M2. The objective of this task was to assess how different the outcome of the inspection would be if the columns were assessed in units of ft rather than with the conventional methodology of using units of each. The assessment of the columns using units of ft was completed following the overall routine inspection of the superstructure and substructure of the bridges. The results of this task are reported before the results for Task 3 so that the results for column assessment by units of ft follow the results from the column assessment using units of each. The results of Task 3, which included the assessment of other substructure elements, follow this section.
E.3.3.3.1 Element 205-RC Column, by ft For bridge M1, the control inspection rated 11% of this element in CS2 damaged by defects 1080- Delamination/Spall/Patched Area, defect 1090-Exposed Rebar, and defect 1120-Efflorescence/Rust Staining. All TGA members captured defect 1080 for this element, but none of the members assigned the other two defects assigned by the control inspection. Likewise, all TGB members captured defect 1080 for this element, but only one TGB member captured defect 1120. One other member of TGB assigned defect 1130-cracking (RC and Other) in addition to defect 1080. For Bridge M2, the control inspection assigned 9% of the columns in CS2 damaged by defect 1080- Delamination/Spall/Patched Area, defect 1120-Efflorescence/Rust Staining, and defect 1130 - Cracking (RC and Other). All TGA members assigned defect 1080 and one member assigned defect 1130; defect 1120 was not assigned by any inspector in TGA. From TGB, two inspectors assigned defect 1080, three

inspectors captured defect 1130, and one inspector assigned defect 1120 for this element. These data indicate that different defects were assigned among the two test groups. The inspection result for the RC columns of bridges M1 and M2 measured in ft is shown in Table E-31. As shown in the table, there was agreement regarding the quantity of damage for each bridge. For example, for bridge M1, TGA estimated (on average) that 11% of the total column length was damaged (CS 2 + CS 3), and TGB estimated 9.3%. The combined group estimated 10% with a COV of 33%. In other words, based on the normal distribution assumption, 68% of results could be expected to lie roughly between 7% and 13% damage reported by inspectors. For bridge M2, TGA estimated 6.4% while TGB estimated 5% total damage (CS 2 + CS 3). In this case, the combined results showed a mean of 5.7% with a COV of 51%. It should be noted that only a limited number of inspectors reported CS 3 for bridge M2: two inspectors in TGA and a single inspector in TGB. For bridge M1, only two inspectors in TGA reported CS 3, while 4/5 members of TGB reported quantities in CS 3.
Table E-31. Inspection result for element 205/227 - RC Column or Pile for bridges M1 and M2 using units of ft, reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
Bridge M1
CS2                   11           8      3.7   46    6.8    2     30    7.5    3     40
CS3                   0            8¹     3.2   40    6.5    4.9   74    7      4.1   58
CS2+CS3               11           11     3.4   31    9.3    3.4   36    10     3.3   33
Bridge M2
CS2                   9            6.3    3.9   62    5.4    0.6   10    5.8    2.6   45
CS3                   0            3.4¹   0     0     3.4²   0     0     3.4    0     0
CS2+CS3               9            6.4    4.1   65    5      1     20    5.7    2.9   51
¹ Result from only 2 inspectors. ² Result from a single inspector.
In comparing the results of the inspection conducted using units of ft rather than each, there was a significant reduction in the percentage of damage that was recorded for each bridge.
Examining the results from the combined group, for bridge M1 the total estimated damage (CS 2 + CS 3) quantity was reduced from 49% to 10%, and for bridge M2 the estimate was reduced from 40% to 5.7%. There was agreement in both cases that there was more damage in bridge M1 than in bridge M2. The defects identified by the inspectors were analyzed to assess whether the defects were assigned consistently between the two different inspection tasks. Bridge M2 contained the crack shown in Figure E-24, and the reporting of the cracking defect (1130) was analyzed. When using units of ea, the control inspection assigned defect 1130-Cracking, CS 2, to one column. Five of the ten inspectors also assigned at least one column using defect 1130 and CS 2. When using units of ft, four of those five inspectors assigned some length to defect 1130. One inspector who had assigned a column to CS 2 due to cracking when using units of ea did not subsequently record any quantity of defect 1130 when using units of ft. Two of the five inspectors who had assigned defect 1130, CS 2, when using units of ea subsequently assigned defect 1130, CS 3, when using units of ft. The latter inspectors had not assigned any columns to CS 3 when using units of ea. This may suggest they perceived these defects as more severe when assessing them with units of ft. However, it should be noted that 50% (5/10) of the inspectors never noted defect 1130-Cracking when using either unit of measure. Again, these data indicate that there is variation in the assignment of defects and CSs.
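The "68% of results" interpretation used earlier in this section follows directly from the reported mean and COV. The following sketch is illustrative only (not part of the report); `one_sigma_band` is a hypothetical helper name, and a normal distribution of inspector results is assumed.

```python
def one_sigma_band(mean_pct, cov_pct):
    """Mean +/- one standard deviation implied by a reported mean and COV.
    Under a normal-distribution assumption, roughly 68% of individual
    inspector results would fall inside this band."""
    sigma = mean_pct * cov_pct / 100.0  # COV = 100 * sigma / mean
    return mean_pct - sigma, mean_pct + sigma

# Bridge M1 combined CS2+CS3 result (Table E-31): mean 10%, COV 33%
low, high = one_sigma_band(10.0, 33.0)  # ~ (6.7, 13.3): roughly 7% to 13%
```

The same arithmetic applied to the bridge M2 combined result (mean 5.7%, COV 51%) gives a wider relative band, reflecting the greater dispersion for that bridge.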

The time required to complete the assessment of the columns using units of ft was reported by the inspectors as part of this task. The average time that inspectors reported to complete the assessment of the columns using units of ft was 7 minutes per bridge. The routine inspection task, in which the inspectors completed the overall inspection of the superstructure and substructure, was completed in an average time of 37 minutes per bridge.
E.3.3.4 Task 3-Substructure Elements
This task consisted of inspection of element 215-Reinforced Concrete Abutment, element 234-RC Pier Cap, element 313-Fixed Bearing, and element 310-Elastomeric Bearing. The general condition of these elements is shown in Figure E-28. This figure shows photographs of the abutment of Bridge M1, a typical crack in the abutment (top), and typical bearings (bottom). Each abutment and pier cap was 89 ft in length. There were seven fixed bearings including steel plates and 28 elastomeric bearings in the bridge. The following text describes the results for each of these elements.
E.3.3.4.1 Element 215 - RC Abutments
For bridge M1, the control inspection rated element 215-RC Abutment in CS1 and CS2, damaged by defect 1130-Cracking (RC and Other). All TGA and TGB members captured the same defect and CS assigned by the control inspection for this element. One TGB member assigned defect 1080-Delamination/Spall/Patched Area for this element in addition to defect 1130. As shown in Figure E-28, vertical cracking in the abutments had occurred at a number of locations, and this defect was identified by all of the inspectors. The results shown in Table E-32 indicate that there was general agreement that the quantity of damage was relatively small, and that the severity of the damage was consistent with CS 2. Figure E-28. Photographs of typical conditions for the abutment (top) and bearings (bottom).

For Bridge M2, the control inspection rated the abutment in CS 2, damaged by defect 1130-Cracking. Four out of five TGA members captured the same defect and CS assigned by the control inspection, and one TGA member reported quantities without a defect assignment. All TGB members identified cracking and CS 2. One TGB member assigned defect 1120-Efflorescence/Rust Staining in addition to the cracking. The results for the RC abutments in bridges M1 and M2 are shown in Table E-32. There was agreement among almost all inspectors that the damage in the abutment should be rated in CS 2; a single inspector in TGA rated a single foot in CS 3, and these data were excluded from the data shown in Table E-32. There was agreement that the quantity of damage was relatively small, as shown in the table.
E.3.3.4.2 Element 234 - RC Pier Cap
The control inspection rated the entire 89 ft of element 234 - RC Pier Cap in bridge M1 in CS 1. One member of TGA also rated the entire element in CS 1. The remaining members of TGA identified from 3 to 11 ft of the element in CS2, damaged by either defect 1080-Delamination/Spall/Patched Area or defect 1130-Cracking (RC and Others). Two of the five inspectors in TGB identified damage in the pier cap using the same defects, while three of the five agreed with the control inspection that there was no damage to report. For bridge M2, the control inspection rated a small portion of element 234-RC Pier Cap in CS2, damaged by defect 1080-Delamination/Spall/Patched Area. All members of TGA and TGB identified the same defect. The inspection result for this element is shown in Table E-32. As shown in the table, CS 3 was assigned to portions of the pier cap for bridge M2. The inspection results indicated that two members of TGA and two members of TGB assessed a small amount of damage caused by defect 1080 to be CS 3. The amounts were quite small, between one and four ft assigned to CS 3.
Overall, there was agreement between TGA and TGB that the amount of damage in the pier cap was small. The average result for TGA was 5%, the average for TGB was 2.2%, and the combined group average was 3.6%.
E.3.3.4.3 Element 313 - Fixed Bearings
For bridge M1, the control inspection rated element 313-Fixed Bearing in CS2, damaged by defect 1000-Corrosion. Two of the five inspectors from TGA captured defect 1000, two other members from TGA only reported quantities without a defect assignment, and one member assigned defect 3440-Effectiveness (Steel Protective Coatings). From TGB, four of the five inspectors reported the same defect assigned by the control inspection for bridge M1. One member of TGB did not report any rating for this element. In terms of reporting quantities in a CS, three inspectors from TGA and two inspectors from TGB assigned quantities in CS2. For bridge M2, four of the five inspectors from TGA reported defect 1000-Corrosion and one member reported defect 3440-Effectiveness (Steel Protective Coatings) for this element. All five members of TGB reported defect 1000 for this element. In terms of reporting quantities, four inspectors from TGA and three inspectors from TGB reported some quantity in CS2. One of the inspectors in TGA assigned all bearings to CS 1. Three of the inspectors in TGA assigned CS 3 to between one and three bearings. For TGB, one member assigned all of the bearings to CS 1 and one inspector assigned all bearings to CS 3. Therefore, overall, two inspectors rated all the fixed bearings in bridge M2 as CS 1, one inspector rated all fixed bearings in M2 as CS 3, and six inspectors rated at least one bearing in CS 3. These data indicate that there was significant variation in the assessment of the fixed bearings. The inspection results for the fixed bearings of both bridge M1 and bridge M2 are shown in Table E-33.

Table E-32. Inspection result for element 215 - RC Abutment and element 234 - RC Pier Cap for bridges M1 and M2 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
Element 215-RC Abutment
Bridge M1
CS1                   92           92     1.1   1.2   88     9.1   10    90     7.4   8.2
CS2                   8            7.6    1.5   19    12     9.1   71    10     6.7   67
Bridge M2
CS1                   92           94     3.5   3.7   89     10    12    91     8     8
CS2                   8            6      3.5   62    11     10    88    8.8    8     92
Element 234 - RC Pier Cap
Bridge M1
CS1                   100          94     5.6   5.9   99     1.2   1.2   97     4.6   4.8
CS2                   0            7.3    5.2   72    1.7    0.8   47    5.4    5     92
Bridge M2
CS2                   2.2          4.8    4     84    2      1.4   54    3.4    3.1   93
CS3                   0            2.8    2.4   84    1.7    1.1   47    2.2    1.6   71
CS2+CS3               2.2          5      3.3   67    2.2    1.4   61    3.6    2.8   78
E.3.3.4.4 Element 310-Elastomeric Bearing
For Bridge M1, the control inspection rated element 310-Elastomeric Bearing in CS2 and CS3, damaged by defect 1000-Corrosion. Only one member from TGA captured defect 1000 assigned by the control inspection, and the other four members from TGA assigned defect 2230-Bulging, Splitting, or Tearing for this element. Similarly, three inspectors from TGB captured defect 1000, one other member reported defect 2230, and one other member did not report an inspection result for this element. The inspection result for the elastomeric bearings is shown in Table E-33. For bridge M1, the data indicated that there was good agreement regarding the total quantity of bearings with damage in either CS 2 or CS 3. TGA members estimated the damage as 79% of the bearings, and TGB estimated 84%. When the results from both groups were combined to form a single group of ten inspectors, the total quantity of damage was estimated at 81% of the bearings with a COV of 28%. This was a relatively low COV value as compared with many other elements evaluated in the study. For bridge M2, the control inspection rated element 310-Elastomeric Bearing in CS3, damaged by defect 1000-Corrosion.
One inspector from TGA and one inspector from TGB identified the same defect assigned by the control inspection. Four TGA members and three TGB members assigned defect 2230-Bulging, Splitting, or Tearing, and one other TGB member assigned defect 2210-Movement for this element. The inspection result for the elastomeric bearings of bridge M2 is shown in Table E-33. There was less consistency between the groups for bridge M2, with TGA estimating the total quantity of damage at 58% while TGB estimated the total amount at 73%. The combined group estimate was 65% with a COV of almost 50%, indicating considerable variation in the results from individual inspectors.

Table E-33. Inspection result for element 313-Fixed Bearing and element 310 - Elastomeric Bearing for bridges M1 and M2 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
Element 313-Fixed Bearing
Bridge M1
CS2                   100          81     33    41    64     51    79    74     36    49
Bridge M2
CS2                   100          64     37    58    95     4.1   4.3   78     31    40
Element 310 - Elastomeric Bearing
Bridge M1
CS2                   50           66     23    35    69     37    53    67     26    39
CS3                   50           30     2.5   8.3   64     10    16    47     20    43
CS2+CS3               100          79     30    39    84     12    14    81     23    28
Bridge M2
CS2                   0            50     40    81    76     30    39    60     37    62
CS3                   100          20     7.6   37    45     29    64    35     25    72
CS2+CS3               100          58     38    66    73     27    37    65     32    49
E.3.3.5 Task 5 Bridge M3 Deck
The assessment of bridge decks was conducted using two bridges at which inspectors completed primarily an inspection of the deck element. These bridges had low ADT and good access for inspection, and the inspections were completed without the need for lane closures. The results for these two bridges, bridge M3 and bridge M4, are reported in the following sections. As noted previously, the inspections were conducted over the course of two days, with five inspectors conducting inspections on day 1 and five inspectors conducting inspections on day 2. On day 2, marking of damage on the top surface of the bridge deck was apparent on bridge M4 due to a local contractor marking off areas for repair. There were no markings on the soffit of the bridge deck. Inspectors were asked to ignore the markings and conduct their inspection in the normal way. The data were analyzed to determine if this had a significant effect on the distribution of results, and there was no apparent effect in the data; the variation in results was similar to bridge M3 and some other elements inspected in the study. A portion of the deck of bridge M3 is shown in Figure E-29. As shown in the figure, this deck included a number of patches.
Some areas of asphalt patching were present, as well as some areas of concrete patching. The total area of the deck was 12,200 sq ft.
E.3.3.5.1 Element 12-Reinforced Concrete Deck
Element 12-Reinforced Concrete Deck was inspected under this task. The control inspection rated 4.0% of the RC deck in CS2, damaged by defect 1080-Delamination/Spall/Patched Area and defect 1130-Cracking (RC and Other). All TGA and TGB members assigned defect 1080, but only one TGA member and four TGB members identified defect 1130, cracking. One TGA and one TGB member assigned defect 1120-Efflorescence/Rust Staining for the RC deck.

Figure E-29. Bridge M3 span 5 deck surface showing Delamination/Spall/Patched Area defects. The inspection result for the RC deck is reported for defect 1080 and defect 1130 separately, in order to illustrate the differences in the primary defects and defect quantities identified by inspectors (Table E-34). The results are also reported as combined results, regardless of the defect assignment. As shown in this table, there was variation in the assigned defects between TGA, TGB, and the control inspection. For example, TGA assigned almost all of the damage to defect 1080, while TGB assigned an average of 1.8% to defect 1080 and 2.4% to defect 1130. The control inspection assigned 1% to defect 1080 and 2.7% to defect 1130, in close agreement with TGB. However, there was very little variation in the mean value of the total quantities of damage recorded. Looking at the combined results for the quantity of damage assigned in CS 2, the results were 3.7, 3.2, and 3.4% for the control, TGA, and TGB, respectively. The test groups identified some areas in CS 3, whereas the control did not. The total amount of damage (CS 2 + CS 3) was 3.7, 3.8, and 3.3% for the control, TGA, and TGB, respectively. These mean results were very consistent; the mean combined team result was 3.6%. The σ for the combined group was 1.9%, which indicates that there was good agreement among all of the inspectors that there was a limited amount of damage on the bridge deck in terms of the percentage of deck damaged. The inspection results were also analyzed considering the raw data shown in Figure E-30, in which the CS assigned by the ten inspectors is shown in units of sq ft. These data illustrate the scatter in the inspection results between different inspectors participating in the study. Considering the amount of deck assigned CS 3, the range of values reported was from 0 sq ft (0%) to 400 sq ft (3.2%).
If we discard the high (400 sq ft) and low (0 sq ft) values reported, assuming these are outliers, the range is between 11 sq ft (<1%) and 150 sq ft (1.2%) in CS 3. These data indicate that the variation in the results may not be that significant in terms of potential decision-making criteria, since there is consensus that the amount of damage in the deck is 5% or less. In other words, there was consensus among the inspection group that the deck had only a small percentage of its area in poor condition. Practically speaking, the combined group damage estimate was 3.6% +/- 1.9% (+/- σ), which illustrates good consistency among the inspection results. For the reported CS 2, again discarding the high and low values, the range was from 312 sq ft (2.6%) to 565 sq ft (4.6%). Examining the total damage assigned (CS 2 + CS 3), the range of reported values was from 55 sq ft (<1%) to 760 sq ft (6.2%). Again, if we assume that the high and low values are outliers, the range would be between 92 sq ft (<1%) and 630 sq ft (5.2%) damage in the deck. Again, there is variation among the inspection results in terms of the exact percentage, but there is consensus in the broader finding that there is only 5% or less damage in the deck. However, it is worth noting that the range of the values, when examining the raw data in sq ft, has a large scatter in practical terms. For example, there were

several inspectors that estimated less than 100 sq ft of total damage (CS 2 + CS 3), and several inspectors that estimated more than 600 sq ft of damage.
Table E-34. Inspection result for element 12 - RC Deck for bridge M3 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
Defect 1080-Delamination/Spall/Patched Area
CS2                   1            2.7    1     34    1.3    1.5   115   2      1.4   69
CS3                   0            1.4    1.6   117   0.5    0.4   71    0.8    1     122
CS2+CS3               1            3.5    0.8   22    1.8    1.4   79    2.6    1.4   53
Defect 1130-Cracking (RC and Others)
CS2                   2.7          0      0     0     2.4    1.6   69    2.1    1.5   70
CS3                   0            0      0     0     0      0     0     0.2    0     0
CS2+CS3               2.7          0      0     0     2.4    1.6   66    2.1    1.4   68
Combined result
CS2                   3.7          3.2    0.44  15    3.4    2.1   61    3.2    1.3   42
CS3                   0            1.4    1.6   117   0.5    0.4   80    0.9    1     120
CS2+CS3               3.7          3.8    1.4   37    3.3    2.4   74    3.6    1.9   54
Figure E-30. Inspection results for RC deck M3 showing areas of damage in the deck.
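The outlier screening described above (discarding the single highest and lowest reports and re-examining the remaining range) can be sketched as follows. This is an illustration only, not the report's code; `screened_range` is a hypothetical helper, and the interior values of the list are invented for illustration; only the endpoints (0, 11, 150, and 400 sq ft) come from the text.

```python
DECK_AREA_SQ_FT = 12_200  # total deck area of bridge M3

def screened_range(reports_sq_ft):
    """Drop the single highest and lowest reports (treated as outliers)
    and return the remaining (min, max) in sq ft and as % of deck area."""
    kept = sorted(reports_sq_ft)[1:-1]
    lo, hi = kept[0], kept[-1]
    return (lo, hi), (100.0 * lo / DECK_AREA_SQ_FT, 100.0 * hi / DECK_AREA_SQ_FT)

# Hypothetical CS 3 reports: only the endpoints are taken from the text.
cs3_reports = [0, 11, 40, 60, 90, 120, 150, 400]
(lo, hi), (lo_pct, hi_pct) = screened_range(cs3_reports)
# lo, hi -> 11, 150 sq ft; hi_pct -> ~1.2% of the deck area
```

After screening, the remaining CS 3 range (11 to 150 sq ft, or <1% to 1.2% of the deck) matches the figures quoted in the discussion above.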

E.3.3.6 Bridge M3 Task 2
E.3.3.6.1 Element 300-Strip Seal and Element 301-Pourable Joint Seal
This task consisted of inspection of element 300-Strip Seal and element 301-Pourable Joint Seal. There was 171 ft of strip seal on the deck that was assessed by inspectors. The control inspection rated 84% of the strip seal in CS2, damaged by defect 2350-Debris Impaction. All TGA and TGB members captured defect 2350 assigned by the control inspection. One member of TGB assigned defect 2360-Adjacent Deck or Header in addition to defect 2350. The inspection result for the strip seal is shown in Table E-35. The assessment of the strip seal was very consistent in terms of the total damage (CS 2 + CS 3).
Table E-35. Inspection result for element 300 - Strip Seal for bridge M3 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
CS2                   84           91     17    19    98     4.1   4     93     14    14
CS3                   0            11     5.4   50    60     48    80    40     43    107
CS2+CS3               84           95     11    11    95     12    12    95     11    11
The inspection result for element 301-Pourable Joint Seal showed that the control inspection assigned 100% of this element in CS2, damaged by defect 2330-Seal Damage (56 ft) and defect 2360-Adjacent Deck or Header (1 ft). The inspectors in group TGA did not assign these defects to the pourable joint, but rather simply provided CS ratings. Two members of TGA assigned CS 2 for the entire seal, two members assigned CS 1, and one member assigned 56 ft to CS 1 and 1 ft to CS 3. Two of the inspectors in TGB assigned defect 2360-Adjacent Deck or Header; one member assigned 60 ft and the other assigned 81 ft as CS 3.
E.3.3.7 Task 1 Bridge M4 Deck
E.3.3.7.1 Element 12-Reinforced Concrete Deck
Element 12-Reinforced Concrete Deck was inspected under this task. Photographs of the surface of the deck of bridge M4, spans 4 and 6, are shown in Figure E-31. The total area of the deck was 16,210 sq ft.
The control inspection rated 11% of this element in CS 2 and CS 3 damaged by defect 1080- Delamination/Spall/Patched Area, defect 1120-Efflorescence, and defect 1130-Cracking (RC and Other). This included 11% in CS 2 and 0.1% in CS 3.

Figure E-31. Bridge M4 top of deck showing delamination/spall/patched areas on spans 4 and 6. All TGA members assigned defect 1080 for some portion of the deck. One member also assigned defect 1130 to a portion of the deck. The assignment of defects was more diversified among TGB. Four members of TGB assigned defect 1080 (spalling), four members assigned defect 1130 (cracking), and one member assigned defect 1120 (efflorescence/rust staining) to portions of the deck. The inspection result for this element is shown in Table E-36. In this table the inspection result is reported for the control inspection, TGA, TGB, and a combined result that treats TGA and TGB together as one data set. Examining the results for damage (CS 2 + CS 3) assessed by the inspectors, there was again consistency in the average (mean) values between the groups; TGA assessed 8.8% and TGB assessed 10%. However, there was variation among the individual inspectors, as illustrated by the high COV values. For example, the mean value when all inspectors were placed into the same population (combined results) was 9.3% with a COV of 60%. These data indicate that the majority of assessments would range from ~4 to 15%. For CS 3, the COV values were greater than 100%, illustrating the variation in the inspection results between inspectors, although the mean value is small, less than 3% for the combined group.
Table E-36. Inspection result for element 12 - RC Deck for bridge M4 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
CS2                   11           6      2.7   45    11     6.5   60    8.1    5.1   63
CS3                   0            3.6    4     113   1.6    1.1   64    2.6    2.9   112
CS2+CS3               11           8.8    4.3   48    10     7.2   72    9.3    5.6   60
The inspection results were also analyzed considering the raw data shown in Figure E-32, which consists of the raw data provided by the ten inspectors that participated in the study.
These data illustrate the significant variation in the amount of deck assigned to CS 3. For example, two inspectors did not assign any of the deck to CS3. Among the inspectors that did assign CS 3, the range of values assigned was from 64 sq ft (<1%) to 1554 sq ft (almost 10%). If we discard the high (1554 sq ft) and low (one of the 0 sq ft values), assuming these are outliers, the range is between 0 sq ft (0%) and 460 sq ft (~3%) in CS 3. Examining the total damage assigned (CS 2 + CS 3), the range of reported values was from 460 sq ft (~3%) to 3451 sq ft (21%). Again, if we assume that the high and low values are outliers, the range would be between 790 sq ft (~5%)

and 2307 sq ft (14%) damage in the deck. These data indicate that there was significant variation between individual inspectors in terms of the area of bridge deck that was assessed as being damaged (CS 2 + CS 3) and in the area assessed to be in poor condition (CS 3), although the average values, when considered as a group, were more consistent. However, there was agreement between both groups that the area of damage in the deck was about 10%.
E.3.3.7.2 Element 300-Strip Seal and Element 301-Pourable Joint Seal
This task consisted of the inspection of element 300-Strip Seal and element 301-Pourable Joint Seal for bridge M4. The control inspection rated 100% of element 300-Strip Seal in CS3, damaged by defect 2350-Debris Impaction. Three TGA members assigned defect 2350, and two TGA members assigned defect 2330-Seal Damage for this element. Similarly, all TGB members captured defect 2350 assigned by the control inspection, and one TGB member assigned defect 2360-Adjacent Deck or Header in addition to defect 2350 for this element. The inspection result for this element is shown in Table E-37. These data indicated that there was consistency among the inspectors in terms of the amount of seal that was damaged. For example, the control inspection and TGA both assessed that 100% of the seal was damaged (CS 2 + CS 3), and TGB was, on average, only slightly less at 90% of the length of the seal. It was also notable that the dispersion in the data was relatively small, with COV values on the order of 10-15%. The pourable joint seal was also assessed. The control inspection rated 38% of element 301-Pourable Joint Seal in CS2 and CS3, damaged by defect 2330-Seal Damage and defect 2350-Debris Impaction. All members of TGA and TGB also assigned defect 2350 for this element.
One TGB member assigned defect 2360-Adjacent Deck or Header in addition to defect 2350 for this element. The inspection result for element 301 is shown in Table E-38. There was some agreement between TGA and TGB in terms of the total length of damage, with the mean values for TGA and TGB being 14% and 8.3%, respectively. However, the COV values indicated that there was wide dispersion in the results, with values on the order of 150% or more. Figure E-32. Inspection results for RC deck M4 showing areas of damage in the deck.

These values indicated that for the poured seal, where damage was less than 50%, the variation in results was much greater than for the strip seal, where damage was close to 100%.
Table E-37. Inspection result for element 300 - Strip Seal for bridge M4 reported as a percentage of the quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
CS2                   0            73     32    44    72     41    57    73     33    45
CS3                   100          44     30    69    58     39    67    52     33    64
CS2+CS3               100          100    0     0     90     14    15    95     11    11
Table E-38. Inspection result for element 301 - Pourable Joint Seal for bridge M4 reported as a percentage of the total quantity.
                                TGA Results (%)       TGB Results (%)       Combined Result (%)
Condition State (CS)  CI Qua. (%)  Mean   σ     COV   Mean   σ     COV   Mean   σ     COV
CS2                   37           7.6    11    142   3.9    5.2   133   5.8    7.9   136
CS3                   1            20     0     0     11     13    123   14     11    78
CS2+CS3               38           14     23    157   8.3    14    174   11     17    155
E.3.3.8 NBIS Inspection of Bridge Elements
All inspectors from the TGA and TGB groups and the control were asked to provide an NBIS rating for the bridge M1 and M2 superstructures and substructures and for the bridge M3 and M4 decks. Figure E-33 shows the results from the field testing for NBIS ratings of the bridges included in the inspection exercise. Examining the results from the condition ratings, there was some scatter in the results. For example, for the superstructure of bridge M1, the condition ratings assigned for this component spanned a range of 5, from a low rating of 4 to a high rating of 9. However, a single inspector was responsible for higher ratings than the group as a whole. Because there was at least one inspector that may not have been fully trained, and who also reported that bridge inspection was not a common activity, the results were analyzed by removing the high and low values. This reduced the range for all components to 2 or less. The resulting statistical data are shown in Table E-39.
As shown in the table, removing these outliers resulted in σ values that were less than 1 for all of the components analyzed.
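The screening applied to the NBIS ratings can be sketched as follows. This illustration is not from the report: `screened_nbi_stats` is a hypothetical helper, and the list of individual ratings is invented (the report tabulates only the summary statistics), chosen so that the 4-to-9 spread and the screened results (σ ≈ 0.74, range 2) mirror the M1 superstructure column of Table E-39.

```python
from statistics import mean, stdev

def screened_nbi_stats(ratings):
    """Remove the single highest and lowest NBI ratings, then return
    (mean, sample sigma, range) of the remaining ratings."""
    kept = sorted(ratings)[1:-1]
    return mean(kept), stdev(kept), max(kept) - min(kept)

# Hypothetical individual ratings for the M1 superstructure.
ratings = [4, 4, 5, 5, 5, 6, 6, 6, 6, 9]
m, s, r = screened_nbi_stats(ratings)  # range drops from 5 to 2; sigma < 1
```

Note that because two inspectors assigned the low rating in this hypothetical set, removing one low value still leaves a rating of 4, so the screened range is 2 rather than collapsing further.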

Table E-39. Statistical results from NBI ratings of bridges in Michigan.
Bridge (component)   M3 Dk   M4 Dk   M1 SS   M2 SS   M1 Sub.   M2 Sub.
All Inspectors in TGA and TGB
Mean                 6       5       6       5       6         6
σ                    0.79    1.00    1.43    1.16    0.74      0.84
COV                  0.14    0.20    0.26    0.22    0.12      0.13
Range                3       3       5       4       2         3
High and Low Removed
Mean                 6       5       5       5       6         6
σ                    0.71    0.69    0.74    0.64    0.64      0.52
COV                  0.12    0.14    0.14    0.13    0.10      0.08
Range                2       2       2       2       2         1
E.3.3.9 Frequency of Element Assignment
The selection of defect elements was analyzed for the test bridges in Michigan, and these data are presented in Table E-40 and Table E-41. Table E-40 includes the results for the prestressed concrete superstructures of bridges M1 and M2. This table includes entries for two Agency Defined Elements (ADEs): element 826-Beam end damage and element 845-Beam end support.
Figure E-33. NBIS ratings provided for the deck (DK), superstructure (SS), and substructure (Sub) for test bridges in Michigan.
There was some consistency in the reporting of defects in these data. For example, five members of TGA reported defect 1080 for bridge M2, and four members of TGB reported defect 1080 for bridge M2. It is notable that very few inspectors reported cracking in the PSC girders. In one case, the wrong defect was assigned: the inspector assigned defect 1130, which is cracking for reinforced concrete.
Table E-40. Frequency table showing the number of inspectors that assigned defects to the PSC girders of bridges M1 and M2.
                        TGA                              TGB
Element No./Name        1080   1130   1090   826   845   1080   1090   1110   826   845
109-PSC girder, M1      5/5    1/5    1/5    -     1/5   3/5    1/5    1/5    1/5   1/5
109-PSC girder, M2      5/5    1/5    -      1/5   1/5   4/5    2/5    -      1/5   -
1080 Delamination/Spall   1130 Cracking (RC)   1190 Abrasion/wear   1120 Efflorescence/rust stain
1110 Cracking (PSC)   826 ADE-Beam End Damage   1090 Exposed rebar   845 ADE-Beam end support
Table E-41 shows the defect frequency data for the reinforced concrete elements in Michigan. It is notable that there was more consistency in the assignment of defects, as compared with the data reported from the Indiana field exercises. For example, for element 210 of bridges M1 and M2, all members of each group identified the defect of cracking for the abutment. For bridge M3, all participants identified defect 1080-Delamination/spalling for the deck element. There was less consistency in the assignment of cracking in the deck, where four members of TGB assigned some portion of the deck to the defect of cracking, while only one member of TGA assigned that defect. Overall, consistency in the defects identified by the inspectors was more common in the Michigan field exercises than in the Indiana field exercises. This may be because the inspectors in Michigan generally have much longer experience with element-level inspection, the state having used element-level inspection for a long period of time.
In Indiana, element-level inspection has only been conducted since it became required in 2014. It was also found that, generally, inspectors in Michigan more commonly assigned defects in every element, whereas inspectors in Indiana sometimes provided only a CS. This may be additional evidence that the experience with element-level inspection in Michigan resulted in increased consistency in defect assignment as compared with Indiana. However, the test bridges were different between the two field tests, and the effect of the different situations, bridge designs, and bridge conditions is not known.
E.3.4 Post-test Questionnaire
The post-test questionnaire encompassed some questions that were common to TGA and TGB members and some questions that were specific to each group. The questions sought various information from the inspectors, including an evaluation of the newly developed visual guide, which was attached at the end of the inspection exercise booklet; ways to improve the visual guide; the participants' previous experience inspecting the bridges in the study or similar bridges; and the tools they used during the inspection. Answers to the questions specific to TGA and TGB are reported first in the following paragraphs, followed by the answers to questions that were the same for both groups.

Table E-41. Frequency table showing the number of inspectors assigning defects to an element for bridges M1, M2, M3, and M4.

Element No./Name     TGA: 1080 | 1120 | 1130     TGB: 1080 | 1120 | 1130
Bridge M1
210-RC Abutment      -    | -    | 5/5           1/5  | -    | 5/5
215-RC Column        5/5  | -    | 1/5           1/5  | -    | 5/5
234-RC Pier Cap      4/5  | -    | 1/5           -    | -    | 5/5
Bridge M2
210-RC Abutment      -    | -    | 5/5           -    | 1/5  | 5/5
215-RC Column        4/5  | -    | 2/5           3/5  | -    | 3/5
234-RC Pier Cap      5/5  | -    | -             5/5  | -    | -
Bridge M3
12-RC Deck           5/5  | 1/5  | 1/5           5/5  | 1/5  | 4/5
Bridge M4
12-RC Deck           5/5  | -    | 1/5           4/5  | 1/5  | 4/5

One of the questions specific to TGA (those who used the visual guide) asked if use of the visual guide would assist in rating bridge elements in the field. All five inspectors agreed on the usefulness of the visual guide. Two participants wrote that the newly developed visual guide is similar to the MDOT guide, and one of them added, "The quantity estimate pictures are very good." One of the other TGA participants wrote, "For deck, yes, I see an advantage," and another mentioned "for sq ft items it helps, lft items maybe." Four members of TGA agreed with a question that asked if use of the visual guide would help in understanding the different condition states that should be assigned to an element. The extent of their agreement ranged from "yes" to "same as MDOT book." One member did not provide an answer to this question. Another question sought suggestions from TGA members for improving the newly developed visual guide. Three members answered this question; their suggestions included "joint rating for strip seal should include cracking in the adjacent concrete," "add guide for elastomeric bearing," and make it a "smaller" size. One member did not answer this question, and another participant's answer was omitted due to ambiguity. Two other questions asked TGA members what they liked most and least about the visual guide. Answers regarding what they liked most included: "Easy to see the CS the element is in. It helped with the deck patching and cracking. It also helped with the abutments. The quantity estimate pictures are very good and knowing what amount looked like for each percentage." Answers to what the inspectors liked least identified two things: 1) the content of the visual guide, and 2) the size and format of the visual guide. Four inspectors responded to this question. One of the inspectors wrote, "For beams it did not seem to help as much, given the locations of the defects mainly at the beam ends. For cracking and bearing the guide was not that useful. It is tedious to find pages."

Another question asked TGA members to rate the ease of use of the visual guide on a scale of 1 to 5, with 1 being most difficult and 5 easiest. Three inspectors rated the visual guide as 4 (easier), one rated it as 3 (easy), and one rated it as 2 (moderately difficult). There were two questions specific to TGB (those who did not use the visual guide) in the post-test questionnaire. One asked the inspectors to rate the ease of the new format of the MBEI attached at the end of the inspection booklet, compared to the original MBEI, on a scale of 1 to 5, with 1 being "significantly more difficult" and 5 "significantly easier." One member of TGB rated the new format as 4 (easier), three members rated it as 3 (same as the original MBEI), and one member rated it as 2 (moderately difficult). The other question asked the inspectors which format of the MBEI they would prefer to use in the future, the original MBEI format or the new format available in the workbook. Three members of TGB preferred the new format of the MBEI, and the other two preferred the original MBEI manual. The following are answers to the questions that were common to both TGA and TGB members. One question asked the inspectors if they had previously inspected any of the bridges chosen for the field exercises. Seven inspectors answered this question, and just one member of the TGA group had previously inspected a bridge that was part of the study. Answers to another question, which asked the participants if they inspected the bridges differently than they would during a normal inspection, showed a wide range of responses. Five inspectors answered "yes" to this question.
Two of the inspectors wrote "I used the visual guide," one wrote "I did not take photos I would normally take or work from the previous report," and another wrote "I filled out different forms." Another question asked the field exercise participants about the tools they used during the inspection exercise. Answers showed that 8 of 10 (80%) of the inspectors used two or more tools, such as a hammer, crack comparator, tape measure, pick/probe, binoculars, sounding rod, and flashlight. Two other questions asked all participants how they quantified the damage in the prestressed concrete (PSC) beams and whether they used a crack comparator for determining the CS in PSC beams. All TGA members reported that they visually estimated the length of beam damage, and just one of them used a crack comparator for determining the CS for cracks in PSC. Likewise, four of five TGB members estimated the damaged length visually, and one member of the TGB group combined a visual estimate of the damaged length with typical flange lengths. One member of TGB used a crack comparator for determining the CS of cracks in PSC beams. An open-ended question seeking the inspectors' feedback about the inspection exercise concluded the questionnaire. Three of the participants provided an answer. One wrote "the visual guides may be most beneficial for sq ft items and for wall in lft such as abutments but for joints maybe not so much." Another commented "I like the element tables due to variety of possible defects based on element location.
It allows for more detail.", and a third wrote "My data should be given less weight than others as I don't normally inspect bridges." As mentioned previously, the data were examined considering that one member of the group was not qualified as a team leader, but no effect on the element-level inspection results could be found.

E.3.5 Inspection Times

The time required to complete the inspections was recorded to determine whether use of the visual guide required more or less time than conventional inspections. In most cases, TGA required more time to complete the inspections than TGB, as shown in Table E-42. This was likely due to the need to find the images of different defects in the visual guide and a lack of familiarity with the guide. It is likely that over time, as inspectors became more familiar with the use and application of the guide, the time required to use it in the field would diminish. It can also be noted that although the average

time to complete the tasks increased for TGA, the additional time required was not significant. On average, TGA completed inspections in 54 minutes, compared to 50 minutes for TGB.

Table E-42. Average time reported by TGA and TGB for routine bridge inspection exercises.

Tasks                             Average TGA Time (min)    Average TGB Time (min)
Bridge I1 routine inspection      57                        63
Bridge I2 routine inspection      62                        58
Bridge M1 routine inspection      48                        47
Bridge M2 routine inspection      45                        36
Bridge M3 routine inspection      60                        53
Bridge M4 routine inspection      50                        45
AVERAGE                           54                        50

E.4 Discussion

This study provided unique and important data on the variability found in element-level inspection data. One of the primary objectives of the field exercises was to assess the overall quality of element-level data by examining the distribution of spatial estimates and the consistency of condition state assignment. It was found that the variation in quantity estimates among different inspectors was sometimes very large. This variation was quantified using COV values, which normalize the magnitude of the standard deviation by the mean. This provided a measure of the variation in inspection results that could be compared between TGA and TGB. Table E-43 shows the COV values for the primary elements in the study. The table includes the averaged COV values for CS 2, CS 3, and CS 2 + CS 3, for the groupings TGA, TGB, and the combined TGA + TGB. These data include the primary superstructure, substructure, and deck elements that had CS assignments in CS 2 and CS 3 of sufficient quantity to provide a meaningful measure. These data do not include ancillary elements such as joint seals, bearings, or bridge railings. (Note: the linear average of the magnitudes of the COV values is shown for illustration, which is statistically different from the average COV value.)
It was found that overall, the average of the COV values was greater than 50%. In both the Indiana and Michigan field exercises, the averages of the COV values were very similar between TGA and TGB. In Indiana, the average of the COV values for TGB was slightly lower than for TGA, indicating more consistency in the results from TGB. The significant findings shown in this table are twofold: first, the averages of the COV values were somewhat high for both TGA and TGB; second, use of the visual guides did not produce an effect reflected in reduced variation of the data, as measured by the COV values. It can also be observed that the averages of the COV values were generally lower for Michigan than for Indiana. The fact that Michigan has been conducting element-level inspection for longer than Indiana may contribute to this difference. However, the two groups inspected different bridges with different levels of damage, so it is not possible to determine how the greater experience of the Michigan inspectors may have affected the results. Qualitatively, the inspectors in Michigan were more likely to record defects during the field exercises than inspectors participating in the field exercises in Indiana. This may be associated with the greater experience of the Michigan inspectors.

Table E-43. COV values determined for TGA and TGB for the field exercises.

Group            Average of the COV Values (%)
                 CS 2      CS 3      CS 2 + CS 3
IN - TGA         80        89        69
IN - TGB         70        79        59
MI - TGA         57        88        62
MI - TGB         74        62        67
IN Combined      82        96        69
MI Combined      74        83        70

It is also notable that the variation in the total damage, i.e., CS 2 + CS 3, was also relatively high, with a minimum of ~60% as shown in Table E-43. These data are significant because they indicate the variation in reported damage quantities overall, not just the differentiation between CS 2 and CS 3. However, the COV values require some context to place the results in practical terms. The COV is the ratio of σ to the mean value. If the mean damage quantity is small, then the variation in the inspection result is small when expressed as a percentage of the total quantity. For example, assume that the damage in a deck was an area equal to 10% of the bridge deck. If the COV were 50%, then the variation as expressed by the σ value is only 5% of the total deck area. This may be an acceptable variation. However, if the damage in the deck were 50%, the variation would be +/- 25% of the total deck area, which may not be acceptable. In other words, assuming the mean of the inspection result was equal to 50% of the deck, the range of inspection results would span from 25% to 75% of the deck area. To further illustrate the test results, Figure E-34 includes two graphs showing the measured σ values from the field exercises, for CS 3 (Figure E-34A) and for CS 2 + CS 3 (Figure E-34B). The figures include a trend line for the combined data set from Indiana and Michigan. As shown in these figures, the σ values increase as the mean amount of damage increases. In this way, when the quantity of damage is small, the variation in inspection results is small, but when the quantity is large, the variation is also large.
This is significant because it indicates that the more damaged a bridge element is, the lower the quality of the inspection data that will be obtained. Viewed from an overall decision-making and bridge management perspective, the data suggest that as bridge deterioration increases to the point that decisions regarding maintenance and repair actions are required, the quality of inspection data decreases. This is the opposite of what would be most desirable from a bridge management perspective.
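The deck-damage arithmetic in the example above can be checked with a short script; the 10% and 50% mean-damage levels and the 50% COV are the report's illustrative values:

```python
def sigma_from_cov(cov_pct, mean_pct):
    """Given COV = 100 * sigma / mean, recover sigma in the same units
    as the mean (here, percent of total deck area)."""
    return cov_pct / 100.0 * mean_pct

# Mean damage = 10% of the deck, COV = 50% -> sigma is 5% of the deck area
print(sigma_from_cov(50, 10))        # 5.0

# Mean damage = 50% of the deck, COV = 50% -> sigma is 25% of the deck area
sigma = sigma_from_cov(50, 50)
print(sigma)                         # 25.0

# One-sigma range around the 50% mean: 25% to 75% of the deck area
print(50 - sigma, 50 + sigma)        # 25.0 75.0
```

The same COV thus implies a much wider absolute spread when the mean damage quantity is large, which is the practical concern raised above.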

The results from Michigan and Indiana were also analyzed to determine how the assignment of CS 3 related to the quantity of an element in CS 3. Figure E-35 shows the relationship between the number of inspectors reporting CS 3 and the mean value of the quantity assigned to CS 3. A trend line on the figure shows the trend for results with mean values less than 10%. These data illustrate that when the mean value in CS 3 is less than 10%, there is variation in the number of inspectors who report any quantity in CS 3. However, the trend shows that as the amount of damage increases, the likelihood of an inspector reporting damage in CS 3 also increases. This illustrates that the assignment of CS 3 is not simply random, but in fact follows the trend one might expect: when the amount of damage in CS 3 is small, some inspectors assign CS 3 and some do not, but as the amount of damage increases, an increasing number of inspectors assign CS 3.

Figure E-34. Standard deviation (σ) as a function of damage for CS 3 (A) and CS 2 + CS 3 (B).

Figure E-35. Rate of detection for CS 3 as a function of the mean.

The data with mean values above 10% are probably too

sparse to assess effectively; more data are needed for situations where the quantity of damage in CS 3 is greater than 10%. One objective of the field exercise was to compare the use of the visual guide with the traditional inspection approach. The data were studied to determine whether there was a measurable improvement in the accuracy of spatial estimates as a result of using the visual guide. In the studies conducted at the S-BRITE Center, it was found that there was not a consistent pattern showing that use of the visual guide improved accuracy. Three tasks considered the inspectors' ability to estimate a quantity with the use of the visual guide: the page test, estimating the area of simulated damage on the web of a plate girder, and estimating the length of the simulated damage on the web of a plate girder. TGA, which used the visual guide, had a smaller normalized error in only one of the three tests, the page test. TGA also had a lower normalized error in the page test in Michigan. As such, of the four tests that examined the ability to estimate a quantity, without needing to also consider the appropriate condition state, TGA had a lower normalized error in two, and both were page tests. Given that the page test was very similar in appearance to the images in the visual guide, this result is not very significant. For the S-BRITE tests examining the area estimates of simulated damage placed on a plate girder, the averaged COV values were 57% and 50% for TGA and TGB, respectively. These data indicate that for any given quantity being estimated, the 1σ variation is approximately +/-50% of the mean estimate. Given this significant variation in the results, it is unlikely that the influence of using the guide could be effectively detected.
An additional explanation may be that more training and experience in using the spatial estimating guides is required than could be provided within the constraints of the field exercises. The results from the page test suggest that the guide may be helpful, if those results can be transferred to full-scale elements. In terms of the assignment of CSs, the variability of the results combined with the relatively small data set did not allow for an effective evaluation of the effect of using the visual guide as compared with not using it. Qualitatively, several participants noted in the post-test questionnaire that the visual guide assisted them in making assessments. It may be that additional training and experience in utilizing the visual guide are necessary to realize its full benefits. Several inspectors noted that the guide was difficult to use due to its size, the need to look up defects, and the fact that it was not computer-based. These limitations are easily resolved; the guide can be printed in a smaller format, and once inspectors are familiar with the content, finding defects would not be a limitation. Given the variation found in the assignment of defects and CSs in the study, it seems likely that the visual guide would ultimately be helpful. Part of the study evaluated the differences between making estimates based on a percentage as compared with tallying individual areas, and between making estimates in area (sq ft) as compared to length (ft). Exercises at the S-BRITE Center using simulated damage on the web of a plate girder were designed to examine these differences. For the estimation of area, it was found that the normalized error actually increased when tallying individual areas as compared with providing a percentage estimate of the areas.
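The two accuracy measures used throughout these comparisons, normalized error against a known quantity and COV across inspectors, can be sketched as below. The inspector estimates are hypothetical, and defining normalized error as the group-mean error relative to the true value is an assumption for illustration, not necessarily the study's exact formula:

```python
import statistics

def normalized_error(estimates, true_value):
    """Error of the group-mean estimate, as a percentage of the true quantity."""
    return 100.0 * abs(statistics.mean(estimates) - true_value) / true_value

def cov_percent(estimates):
    """Coefficient of variation across inspectors: 100 * sigma / mean."""
    return 100.0 * statistics.stdev(estimates) / statistics.mean(estimates)

# Hypothetical area estimates (sq ft) from five inspectors for a 100 sq ft defect
estimates = [80, 95, 110, 60, 130]
print(round(normalized_error(estimates, 100), 1))   # 5.0
print(round(cov_percent(estimates), 1))             # 28.3
```

Note how the group mean can land close to the true value (low normalized error) even when the inspector-to-inspector scatter (COV) is large, which is the pattern reported for the S-BRITE area tests.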
However, the variation in results from individual inspectors, as expressed by the COV, was reduced when tallying areas as compared with estimating areas as a percentage. For the evaluation of length (i.e., units of ft), different results were found: tallying individual lengths provided both the lowest normalized error and the lowest variation between inspectors. For example, when examining all of the participating inspectors as a single group, the normalized error was about the same, roughly 16%, for area (sq ft) and length (ft) when percentage estimates were used. When tallying individual areas (sq ft), the normalized error for the group was 41%, but when tallying individual lengths (ft), the normalized error was only 8%. It was also found that the quantity of damage recorded increased when units of length were used as compared with units of area. Exercises completed at the S-BRITE Center also included analyzing the effect of changing the units of steel protective coatings (Element 515) from sq ft to ft. Data were analyzed considering the total amount of damage identified by inspectors, i.e., CS 2, 3, or 4. It was found that the total quantity of damage was

increased when using units of ft (98%) as compared with sq ft (72%). The variation in results, as expressed by the COV value, was lower when using units of ft (4.2%) as compared with sq ft (33%). These data illustrate the reduced precision of using units of ft, where any damage within a linear foot of truss panel defines the rating for that foot, as compared to using units of sq ft. However, the consistency between different inspectors increased when using units of ft (i.e., lower COV). This result may have been affected by the quantity of damage in the truss, since almost 100% of the truss was damaged when considered in this way. Therefore, there was a limit on the overestimate that could be made in this exercise, because values greater than 100% were not possible. A different field exercise was used to examine the effect of changing the unit of the RC column element from ea to ft. In this task, inspectors rated eight columns in each of two bridges (M1 and M2). Again, there was more precision when using units of ft, and significantly less damage was reported using units of ft as compared with units of ea. This would be expected, since each column is being divided into 1 ft sections. Overall, the results showed that using units of length (ft) as compared to area (sq ft) resulted in a higher quantity of damage reported but reduced variation between inspectors. Using units of length (ft) as compared to ea resulted in decreased quantities of damage and decreased variation between inspectors. It is noted that statistical analysis of the data was conducted after the field exercises. The F-test was used to compare the dispersion of TGA results with TGB results. This analysis showed that there was not enough data to demonstrate a statistically significant improvement in the quality of results from TGA, which used the guide, as compared with TGB.
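The F-test on dispersion mentioned above can be sketched as follows. The estimates, group sizes, and the tabulated critical value are illustrative assumptions; with only five inspectors per group, a large variance ratio is required before a difference in dispersion is statistically significant:

```python
import statistics

# Upper 5% critical value of the F distribution for (4, 4) degrees of freedom
F_CRIT_4_4 = 6.39

def f_statistic(sample_a, sample_b):
    """Variance ratio for an F-test of dispersion: larger sample variance over smaller."""
    var_a = statistics.variance(sample_a)
    var_b = statistics.variance(sample_b)
    return max(var_a, var_b) / min(var_a, var_b)

# Hypothetical CS 2 quantity estimates (sq ft) from five TGA and five TGB inspectors
tga = [100, 140, 80, 180, 120]
tgb = [110, 150, 90, 170, 130]

f = f_statistic(tga, tgb)
print(f"F = {f:.2f}")   # F = 1.48
print("dispersion differs significantly" if f > F_CRIT_4_4 else "no significant difference")
```

With n = 5 per group, the variance of one group would need to exceed the other's by more than a factor of about 6 to reach significance at the 5% level, which illustrates why the field-exercise sample sizes could not resolve the effect of the guide.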
Analysis comparing the mean of the combined group with the control inspection using the t-test showed that there was not enough data to demonstrate statistically significant agreement between these results. Given the variation found in the results, a larger sample would be needed to test these hypotheses.

E.5 Conclusions

This section of the report presents the conclusions from the field exercises with regard to the primary objectives of the field exercises:

1. Compare the use of the visual guide with the traditional inspection approach

The field exercises did not show a decrease in inspection variability when using the visual guide as compared with not using it. The variation in the data from the field exercises did not allow for recognizable trends regarding the assignment of the appropriate CSs. It was also found that inspector groups using the visual guide tended to require more time than those not using it. Based on feedback from the post-test questionnaire, more training and experience with the visual guide is needed to make the guide more effective for improving the quality of element-level data. Qualitatively, inspectors indicated that the guide was helpful and assisted in identifying the correct assignment of CSs. Inspectors also indicated that the guide was relatively easy to use but could be improved if reformatted to be more suitable for field use.

Analysis: The results of the field exercises indicate that more training and experience with the use of a visual guide are needed to realize positive results. Given the short period of time that the inspectors had to work with the guide, it may have been unrealistic to expect a recognizable improvement. Analysis of the assignment of defect elements and CSs indicates that more training is needed in these areas.
Given the variability found in this area, the need for the visual guide appears clear; more training and experience are needed to realize its benefits.

2. Assess potential changes to the MBEI

The analysis of potential changes to the units of measure for elements such as protective coatings and columns indicated the following: Overall, the results showed that using units of length (ft) as compared to area (sq ft) resulted in a higher quantity of damage reported but reduced variation between inspectors.

Using units of length (ft) as compared to ea resulted in decreased quantities of damage and decreased variation between inspectors. The assignment of defects to seals was found to have high variability. The field exercises also evaluated different methods for assessing damage quantities in bridge elements using visual inspection techniques. For estimating area (sq ft), estimating the percentage of damage produced more accurate results than tallying individual areas, although tallying individual areas produced less variability in the inspection results. For length estimates, tallying produced both more accurate and less variable results.

3. Evaluate the quality of element-level inspection data

The results of the field exercises showed variability in the damage quantities determined from element-level inspections. Based on statistical analysis of the data, the variation was typically greater than 50% of the quantity being measured. The variation in the inspection data increased as the quantity of damage increased. It was also found that the likelihood of detecting CS 3, Poor, varied when quantities were less than 10% of the total element quantity but trended toward increased detection as the quantity in CS 3 increased. There was insufficient data to assess the assignment of CS 3 when quantities were greater than 10%. It was also found that there was variability in the assignment of CS 4 for gusset plate elements in the Indiana field exercises. A truss bridge containing 72 gusset plates was inspected by 14 inspectors. One-third of the inspectors did not report any gusset plates in CS 4, while two-thirds did report gusset plate elements in CS 4. Among the inspectors who identified CS 4, the number of gusset plates identified in CS 4 also varied.
This result is significant because assigning CS 4 indicates that a structural review is warranted for the condition observed by the inspector. The results from the field exercises also showed inconsistency in the assignment of defect elements. Different inspectors tended to report different defect elements for the same bridge element. For example, some inspectors reported only cracking in concrete, some reported only delamination/spalling, and some reported both, all for the same element. It was also found that there was variability in the methods used for estimating quantities for element-level inspections, with some inspectors basing estimates on percentages and some tallying. Results from tests on simulated damage indicated that for units of area, tallying did not improve the accuracy of results but did reduce variability between estimates. When units of length were used, tallying resulted in improved accuracy and reduced variability.

Analysis: The data showed an increase in variability as the quantity of damage increased, based on a statistical analysis of the data. The quantity of data is limited for such a statistical analysis, but this trend seems apparent and is consistent with what might be expected. In practical terms, the variation in the results may not be problematic when damage quantities are small. As damage quantities increase, the variation may become more problematic for decision-making and bridge management.

E.6 Recommendations

Based on the results of the field exercises, the following recommendations are made.

1. Increased training is needed to improve the quality of element-level inspection. This training should include the use of visual guides to clarify the correct assignment of CSs. It should also cover how to properly identify and record defects, if defects are to be used within a given bridge inspection program.
More consistent implementation of the methods for assigning defect quantities could also improve the quality of results.

2. Inspector calibration exercises should be considered to improve the quality of element-level inspections and ensure a uniform understanding of inspection procedures and practices. Calibration exercises can help ensure the proper application of procedures and practices for element-level inspection. As shown in the data from the field exercises, improvements are needed to reduce the variation in defect identification, CS assignment, and quantity estimation. Inspector calibration exercises, in which

groups of inspectors compare results to a standard, would increase the uniform understanding of CSs and quantity estimation. The exercises would also improve the consistency of defect element assignment.

TRB’s National Cooperative Highway Research Program (NCHRP) Web-Only Document 259: Guidelines to Improve the Quality of Element-Level Bridge Inspection Data is intended to assist inspectors in identifying defects and assigning the appropriate condition state for bridge elements.

To ensure the safety and serviceability of bridges, the guidelines include accuracy requirements designed to help promote consistency in the collection of element-level data for bridges on the National Highway System.

The quality of element-level bridge inspection data is critical for effective bridge management and asset management practices. Therefore, the guidelines also include a methodology developed to verify the impact of different accuracy requirements on deterioration models.

The American Association of State Highway and Transportation Officials’ Manual for Bridge Element Inspection includes elements of the guidelines in NCHRP Web-Only Document 259.
