CHAPTER 6 BENCHMARK CASES

INTRODUCTION

Chapter 4 described the recommended procedures for validating roadside safety simulations of crash tests and Chapter 5 discussed the comparison of time histories, the development of comparison metric acceptance criteria and the use of RSVVP. This chapter provides several examples of the validation process, including documentation of the capabilities of the vehicle and roadside hardware models in phenomena importance ranking tables (PIRTs) and the documentation of the comparison in the validation and verification report. The following four benchmark test cases are presented:

1. A C2500 pickup truck striking a strong-post w-beam guardrail (see Appendices C1, C6 and C8),
2. A C2500 pickup truck striking a strong-post w-beam guardrail with a six-inch curb (see Appendices C2, C6 and C8),
3. A Geo Metro striking a vertical concrete barrier (i.e., the ROBUST tests) (see Appendices C3 and C4) and
4. A tractor-trailer striking a concrete median barrier (see Appendices C5 and C7).

The complete PIRTs and V&V reports for each of these test cases are provided in Appendix C.

TEST CASE 1: PICKUP TRUCK STRIKING A GUARDRAIL

INTRODUCTION

The first test case involves a ¾-ton pickup truck impacting the most commonly installed strong-post w-beam guardrail system in the U.S. (i.e., the modified G4(1S) with wood blockouts). The purpose of the research leading to the development of this finite element model was to evaluate the effects of installing curbs in combination with guardrail systems.(141) Since there were no physical tests of a curb installed in combination with the G4(1S) at the time this model was developed, the research approach used in validating the model was to first develop a model of the guardrail, validate the model using existing full-scale crash test data of the system without a curb, and then modify the model by including various curbs placed in combination with the guardrail to evaluate their effects on crash performance based on the safety criteria in Report 350. This first test case, therefore, involves the validation of the original strong-post w-beam guardrail model without a curb in an impact with a pickup truck. This model was used to assess the effects of several types of curbs placed in combination with the guardrail system, as well as the lateral offset distance of the curb relative to the guardrail position. The results of these extrapolated simulations with the model were ultimately validated with subsequent full-scale crash tests of select curb-guardrail combinations. The validation of one of those extrapolated cases is the subject of Test Case 2 in this chapter.

The G4(1S) guardrail with routed wood blockouts is composed of 12-gauge w-beam rails supported by W150x13.5 steel posts with 150x200-mm wood blockouts, as shown in Figure 50. The posts are spaced at 1.905 m center-to-center. The w-beam rails are spliced together using eight 16-mm diameter bolts at each splice connection and the rails are connected to the posts and blockouts using a single bolt at each post location.

Figure 50. Modified G4(1S) guardrail with routed wood blockouts.

Figure 51. Finite element model of the G4(1S).

The complete guardrail model, shown in Figure 51, consists of 34.6 m of the guardrail system with thirteen 3.81-m sections of w-beam rail, twenty-six W150x13.5 steel posts spaced at 1.905 m, and twenty-six 150x200-mm wood blockouts. At the upstream end of the model a MELT guardrail terminal, validated in a previous study, was used to simulate the anchor system used in the test.(40, 89) The downstream anchor was modeled using nonlinear springs representative of the longitudinal stiffness of the MELT guardrail terminal model.

PIRT FOR THE G4(1S)

When a model is validated for a particular application it may not be appropriate for use in other applications that vary significantly from the original application. In many cases, models are developed and validated by one analyst and then obtained and used by others for entirely different purposes. Because a model was considered "valid" by its developer, subsequent users may inadvertently use the model inappropriately if they do not understand the modeler's original intent. It is always the user's responsibility to verify the results of the model. Therefore, the user must ensure that the various components of the model accurately simulate the phenomena that are important to their application. The PIRT provides a means of communicating to other users the specific phenomena for which the model was validated during its development. The PIRT lists all the physical tests that were used to validate the various components and subassemblies of the model and provides a quantified assessment of their validity.

The G4(1S) model was developed for use in crash simulations of Report 350 test 3-11 (e.g., a 2000-kg pickup truck impacting the guardrail at a speed of 100 km/hr and 25 degrees at a specified critical impact point along the length-of-need section of the guardrail). In the development of the model, several components were validated by simulating physical tests that were conducted on those components (or subassemblies of components) and comparing the results. The specific components of the guardrail model that were validated include the steel post, the splice connection, the rail-to-post connection, and the post-soil interaction/response. Table 20 lists the six laboratory tests that were used in validating specific phenomena of the model and indicates whether they were considered valid.

The validity of each phenomenon should be quantified in much the same way as validating the complete model against full-scale crash tests. That is, the time-history data collected from the simulation are compared to a physical test using RSVVP to calculate the validation metrics. If the Sprague and Geers metrics are less than 40 and the ANOVA metrics are less than 0.05 and 0.35 for the mean residual error and the standard deviation of the mean residual error, respectively, then the phenomenon is considered valid. Unfortunately, much of the electronic data that were collected from the physical tests in the validation of the G4(1S) are no longer accessible. In those cases, validity was determined based on a qualitative assessment of data presented in the project reports.
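The Sprague and Geers and ANOVA metrics referenced above can be computed directly from a pair of sampled time histories. The following Python sketch is a minimal illustration of those standard formulas, not the RSVVP implementation; the function names, argument names, and the single-channel acceptance wrapper are hypothetical, and both curves are assumed to be resampled to a common, evenly spaced time base.

```python
import numpy as np

def sprague_geers(true_curve, test_curve):
    """Sprague & Geers magnitude (M) and phase (P) metrics, in percent."""
    c = np.asarray(true_curve, dtype=float)   # measured (true) curve
    m = np.asarray(test_curve, dtype=float)   # computed (test) curve
    icc = np.sum(c * c)
    imm = np.sum(m * m)
    icm = np.sum(c * m)
    magnitude = np.sqrt(imm / icc) - 1.0
    phase = np.arccos(icm / np.sqrt(icc * imm)) / np.pi
    return 100.0 * magnitude, 100.0 * phase

def anova_metrics(true_curve, test_curve):
    """Mean residual and standard deviation of residuals, both normalized by
    the peak magnitude of the measured (true) curve."""
    c = np.asarray(true_curve, dtype=float)
    m = np.asarray(test_curve, dtype=float)
    residuals = (c - m) / np.max(np.abs(c))
    return residuals.mean(), residuals.std()

def phenomenon_valid(true_curve, test_curve,
                     sg_limit=40.0, mean_limit=0.05, std_limit=0.35):
    """Apply the acceptance criteria quoted in the text for a single channel."""
    M, P = sprague_geers(true_curve, test_curve)
    e_bar, sigma = anova_metrics(true_curve, test_curve)
    return (abs(M) <= sg_limit and abs(P) <= sg_limit
            and abs(e_bar) <= mean_limit and sigma <= std_limit)
```

The same functions apply to component-level PIRT checks such as the one in Figure 52, with the tighter limits noted there (M and P no greater than 20 and the residual standard deviation no greater than 0.25) passed in place of the defaults.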

Table 20. Phenomena Importance Ranking Table (PIRT) for the G4(1S).

Validated Phenomenon | Validated? Verified? Calibrated?
1. Three-point bend test of W150x13.5 post about weak axis | Validated
2. Load-to-rupture of splice connection under quasi-static axial loading | Qualitative validation
3. Pull-through of post-bolt-head connection to w-beam using axial load machine | Qualitative validation
4. Full-scale bogie impact tests of the W150x13.5 post embedded in 1,980 kg/m3 soil | Qualitative validation
5. Full-scale bogie impact tests of the W150x13.5 post embedded in 2,110 kg/m3 soil | Qualitative validation
6. Full-scale bogie impact tests of the W150x13.5 post embedded in 2,240 kg/m3 soil | Validated

An example of documenting the validity of a particular phenomenon is shown in Figure 52. In this case, phenomenon number one in Table 20 is documented in Figure 52. A weak-axis quasi-static bending test was performed on a W150x13.5 steel post in order to validate the finite element model of the guardrail post. The force-displacement response was measured in the physical test and compared to the prediction from the finite element model as shown in the lower left portion of Figure 52. The two curves, one from the physical experiment and the other from the finite element simulation, were compared using RSVVP and the results are tabulated in the upper portion of Figure 52. The comparison in this particular case was very good, resulting in satisfactory values for all the metrics. Since the curve comparisons were acceptable, the results of this aspect of the model can be considered validated such that users can be confident that the model will provide accurate results in similar applications.

Some of the phenomena in Table 20 could only be validated qualitatively because the original data are no longer available. In these cases, the experimental and finite element curves were compared visually but, since the data files were not available, the metrics could not be calculated. Sometimes physical tests are used to calibrate the model. This can also be indicated in the right column of the PIRT by entering "calibrated" rather than "validated." The complete PIRT report for the strong-post w-beam guardrail is included in Appendix C6.

PIRT FOR THE C2500R VEHICLE MODEL

The vehicle model used in the analysis was a modified version of the NCAC C2500R finite element model, which is a reduced element model of a 1995 Chevrolet 2500 pickup truck.(83, 144) The C2500R model has been used by several research organizations over the years and each organization has made changes/improvements to the model. As a result, the model has become very efficient and robust for use in crash analyses.

PHENOMENON # 1: Plastic deformation of guardrail posts due to bending about weak axis (Three-Point Bend Test of W150x13.5 Post About Weak Axis)

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
Channel | M | P | Pass?
Force-Displacement | 3.6 | 1.1 | Yes

ANOVA Metrics. List all the data channels to be compared in the rows below. Use RSVVP to calculate the ANOVA metrics and enter the values below. The following criteria must be met:
• The mean residual error must be less than or equal to five percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
Channel | Mean Residual | Standard Deviation of Residuals | Pass?
Force-Displacement | 0.03 | 0.03 | Yes

Figure 52. Example of a validation sheet from a roadside hardware PIRT.

The research team at WPI made several modifications to the model in order to improve its accuracy in simulating vehicle interaction with curbs, with particular emphasis on the suspension system.(83) A list of the modifications and the extent of verification, calibration, and validation of each component model is shown in Table 21. The development of a comprehensive PIRT for the vehicle model was not possible since the majority of information regarding model development is no longer available. In order to compute the quantitative comparison metrics, time-history data from both the physical test and simulation are needed. All validation assessments reported in Table 21 were based on qualitative comparison of test and simulation data, as

reported in the literature.(83) The vehicle PIRT for the C2500 pickup truck is shown in Appendix C8.

VALIDATION OF THE G4(1S) GUARDRAIL MODEL

Part I – Basic Information

The first step in the validation procedure is to document basic information about the test and simulation on the cover page and in Part I of the validation report as shown in Appendix C1. The full-scale crash test used for validation of the G4(1S) model was TTI test No. 405421-1, conducted by TTI on November 16, 1995.(143) The simulation was conducted by WPI in January 2002 with model No. TTI-405421-1_SIM-2002_01. The impact conditions for the simulation matched exactly those from the full-scale test (i.e., a 2000-kg pickup impacting the guardrail system at 101.5 km/hr at an angle of 25.5 degrees, at an impact point just upstream of post 12 in the guardrail system). These impact conditions correspond to NCHRP Report 350 test 3-11. The complete validation and verification report for this test case is shown in Appendix C1.

Table 21. Partial PIRT for the NCAC C2500R pickup truck.(83)

Phenomenon | Summary | Valid?
Front suspension coil springs | Properties calibrated with physical test data | Calibration
Front suspension dampers | Properties verified with physical test data obtained from external source and calibrated with laboratory tests conducted at WPI | Calibration
Suspension stops on front A-arms | Response verified through visual observation of computer model results | Verification
Stabilizer bar | Response verified through visual observation of computer model results | Verification
Rear leaf spring suspension | Spring properties for vertical stiffness calibrated with physical test data; lateral and torsional stiffness properties obtained analytically | Calibration
Steering system properties | Properties calibrated with physical tests | Calibration
Steer stops on steering system | Response verified through visual observation of computer model results | Verification
Inertial properties | Properties calibrated through data obtained from NHTSA and TTI | Calibration
Vertical front suspension response | Roll-off drop tests | Validation*
Vertical rear suspension response | Roll-off drop tests | Validation*
Front and rear suspension response | 90-degree curb traversal tests – 6-inch AASHTO type B curb | Validation*
Front and rear suspension response and steer response | 25-degree curb traversal tests – 6-inch AASHTO type B curb | Validation*

* Qualitative assessment only

A qualitative assessment of the vehicle behavior in the full-scale test and the numerical simulation was quite good, as shown in Figure 53. The model results replicated the basic timing and phenomenological events that occurred in the full-scale test. Sequential views of TTI Test 405421-1 and the simulation are shown in Figure 53 from a downstream perspective. In the past, the analyst may have been satisfied with a comparison of views like those shown in Figure 53, but the objective of the procedures discussed in Chapter 4 is to develop a much more quantitative procedure for assessing the validation.

Part II – Solution Verification

After documenting the basic information on the cover page and in Part I, the next step in the validation process is to perform global checks of the analysis to verify that the numerical solution is stable and is producing physical results (e.g., results conform to the basic laws of conservation). This analysis is modeled as a closed system, which means that energy is not being added or removed during the analysis. Consequently, the total energy should remain constant throughout the analysis and should be equal to the initial kinetic energy of the impacting vehicle. The one exception in this case is any kinetic energy generated due to the gravity load, which should be minimal during the short time period of the crash event relative to the initial kinetic energy of the vehicle. There are, however, several opportunities for non-physical energy to be added during the analysis, resulting from numerical inaccuracies in element formulation, contact definitions, mass-scaling, etc. It is typical to expect some error in the analysis due to these deficiencies; however, it is the responsibility of the analyst to ensure that the errors are sufficiently small so that they have minimal effect on the solution.

Table 22 shows a summary of the global verification assessment based on criteria recommended in the procedures for verification and validation discussed in Chapter 4. Figure 54 shows a plot of the global energy-time histories from the analysis. As shown in Table 22, all the solution verification parameters were satisfied, so the analyst can be reasonably sure that the solution represents a physically plausible impact event that obeys basic conservation laws. This is confirmed as well by Figure 54, which shows that the total energy remains fairly constant during the simulated event. The solution meets all the recommended global energy balance criteria and appears to be free of any major numerical problems. This does not indicate that the simulation is necessarily valid, only that the results adhere to the basic laws of physics and that the solution is numerically stable. The validation assessment for Part II was entered as "Yes" on the cover page of the validation report in Appendix C1.

Figure 53. Sequential views of TTI Test 405421-1 and the finite element simulation from a downstream perspective, test case 1. (146)

Table 22. Analysis solution verification table for test case 1.

Verification Evaluation Criteria | Change (%) | Pass?
Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. | 1.3 | YES
Hourglass energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. | 0 | YES
Hourglass energy of the analysis solution at the end of the run is less than ten percent of the total internal energy at the end of the run. | 0 | YES
The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. | 0 | YES
Mass added to the total model is less than five percent of the total model mass at the beginning of the run. | 0 | YES
The part/material with the most mass added had less than 10 percent of its initial mass added. | 0 | YES
The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. | 0 | YES
Are there shooting nodes in the solution? | No | YES
Are there solid elements with negative volumes? | No | YES

Figure 54. Plot of global energy-time histories for test case 1.
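A subset of these global checks can be automated once the solver's energy and mass time histories have been extracted. The Python sketch below is a minimal illustration under stated assumptions: the function name, argument names, and the made-up example arrays standing in for solver output are all hypothetical, and the thresholds are simply those quoted in Table 22.

```python
import numpy as np

def solution_verification(total_energy, hourglass_energy, added_mass,
                          total_mass, energy_tol=0.10, hg_tol=0.05, mass_tol=0.05):
    """Evaluate a subset of the Table 22 global checks from solver time histories.

    All inputs are 1-D arrays sampled at the same output times.
    """
    checks = {}
    e0 = total_energy[0]
    # Total energy must not vary more than 10% from its initial value.
    checks["total_energy_within_10pct"] = bool(
        np.all(np.abs(total_energy - e0) <= energy_tol * abs(e0)))
    # Hourglass energy at the end of the run below 5% of the initial total energy.
    checks["hourglass_below_5pct_initial"] = bool(
        hourglass_energy[-1] <= hg_tol * abs(e0))
    # Added (mass-scaled) mass below 5% of the initial model mass.
    checks["added_mass_below_5pct"] = bool(added_mass[-1] <= mass_tol * total_mass)
    return checks

# Example usage with illustrative arrays (not actual solver output):
t = np.linspace(0.0, 0.7, 8)
results = solution_verification(
    total_energy=np.full_like(t, 760e3) * (1 + 0.013 * t / t[-1]),  # ~1.3% drift
    hourglass_energy=np.zeros_like(t),
    added_mass=np.zeros_like(t),
    total_mass=2000.0,
)
print(results)  # all three checks True for these inputs
```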

Part III – Quantitative Evaluation

Next, the RSVVP computer program was used to compute the Sprague-Geers metrics and ANOVA metrics using time-history data from the physical test (i.e., the true curve) and analysis data (i.e., the test curve). The multi-channel option in RSVVP was used since this option computes metrics for each individual channel as well as for the weighted composite of the combined channels. The data from each of the six data channels, which were located at the center of gravity of the vehicle, were input into RSVVP. These data included the x-acceleration, y-acceleration, z-acceleration, roll-rate, pitch-rate and yaw-rate.

Chapter 4 recommended that the raw data be used as input into the program; however, only pre-filtered data were available for this test case. The data had been pre-filtered using an SAE class 180 filter. The data were then filtered in RSVVP using a CFC class 180 filter, which resulted in essentially no change to the curves. The shift and drift options in RSVVP were used for the physical test data (i.e., the true curves) for the x-, y-, and z-channels, but not for the roll-, pitch- and yaw-channels. From visual inspection, the physical test data appeared to show no initial offset of acceleration magnitude and experienced very little drift. Consequently, the shift and drift options had minimal effect on the shape of the curves, as illustrated in Figure 55, which shows the results of preprocessing of the x-channel data.

Figure 55. RSVVP preprocessing input and results for the x-channel data, test case 1.
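The CFC 180 filtering step can be approximated outside RSVVP with a zero-phase Butterworth filter. The sketch below is an approximation of the SAE J211 channel frequency class filter, not its exact two-pass difference-equation form; the -3 dB cutoff of roughly 1.65 times the CFC value and the second-order filtfilt implementation are the assumptions to note, and the example signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cfc_filter(signal, dt, cfc=180.0):
    """Approximate SAE J211 CFC filtering with a zero-phase Butterworth filter.

    Assumption: the -3 dB cutoff is taken as 1.65 * CFC (Hz), and a 2nd-order
    Butterworth applied forward and backward (filtfilt) stands in for the
    standard two-pass digital filter.
    """
    fs = 1.0 / dt                       # sampling frequency in Hz
    cutoff = 1.65 * cfc                 # approximate -3 dB frequency for the CFC
    b, a = butter(2, cutoff / (0.5 * fs))
    return filtfilt(b, a, signal)

# Example: filter a noisy 10 kHz acceleration trace with CFC 180.
dt = 1.0e-4
t = np.arange(0.0, 0.7, dt)
raw = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)
filtered = cfc_filter(raw, dt, cfc=180.0)
```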

Each of the curve pairs (i.e., true and test) was synchronized using the minimum absolute area of residuals option in RSVVP. For the metrics evaluation options in RSVVP, the default metrics were selected, which included the Sprague and Geers and the ANOVA metrics, as shown in Figure 56. The "Whole Time Window Only" option was also selected, which directed RSVVP to evaluate the curves over the complete time history of available data, which in this case was 0.7 seconds.

Figure 56. RSVVP metrics evaluation selection for test case 1.

The results of the evaluation for the individual channels are shown in Table 23. Figure 57 through Figure 62 show the time histories for each pair of data that were used to compute the metrics. Based on the Sprague & Geers metrics, the x-, roll- and yaw-channels indicated that the numerical analysis was in agreement with the test, while the y-, z-, and pitch-channels did not. The ANOVA metrics indicated that the simulation was in good agreement with the test for all channels except the pitch channel.
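The "minimum absolute area of residuals" synchronization used above can be sketched as a search over a time shift applied to the test curve. The brute-force version below is an illustrative interpretation of that option rather than the RSVVP algorithm itself; the function name and the search window are assumptions.

```python
import numpy as np

def synchronize(true_curve, test_curve, dt, max_shift_s=0.05):
    """Shift the test curve in time so that the absolute area of the residuals
    (sum of |true - test| * dt) is minimized.

    Returns the shifted test curve and the applied shift in seconds
    (positive shift delays the test curve).
    """
    n = len(true_curve)
    max_steps = int(max_shift_s / dt)
    best_shift, best_area = 0, np.inf
    for s in range(-max_steps, max_steps + 1):
        shifted = np.roll(test_curve, s)
        lo, hi = abs(s), n - abs(s)       # ignore the wrapped-around end samples
        area = np.sum(np.abs(true_curve[lo:hi] - shifted[lo:hi])) * dt
        if area < best_area:
            best_area, best_shift = area, s
    return np.roll(test_curve, best_shift), best_shift * dt
```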

Table 23. Roadside safety validation metrics rating table for test case 1 (single-channel option).

Evaluation criteria, time interval [0 sec; 0.7 sec].

Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable.
Channel | Filter Option | Sync. Option | Shift (True/Test) | Drift (True/Test) | M | P | Pass?
X acceleration | CFC 180 | Min. area of residuals | Y / N | Y / N | 21.5 | 33.3 | Y
Y acceleration | CFC 180 | Min. area of residuals | Y / N | Y / N | 43.9 | 35.7 | N
Z acceleration | CFC 180 | Min. area of residuals | Y / N | Y / N | 21.1 | 43.0 | N
Roll rate | CFC 180 | Min. area of residuals | N / N | N / N | 35.3 | 32.7 | Y
Pitch rate | CFC 180 | Min. area of residuals | N / N | N / N | 13.3 | 48.0 | N
Yaw rate | CFC 180 | Min. area of residuals | N / N | N / N | 11.7 | 8.7 | Y

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
Channel | Mean Residual | Standard Deviation of Residuals | Pass?
X acceleration/Peak | 0.02 | 0.34 | Y
Y acceleration/Peak | 0.05 | 0.27 | Y
Z acceleration/Peak | 0.02 | 0.32 | Y
Roll rate | 0.02 | 0.27 | Y
Pitch rate | 0.05 | 0.36 | N
Yaw rate | 0.04 | 0.12 | Y

Figure 57. X-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data, test case 1. (S&G mag. = 21.5, pass; S&G phase = 33.3, pass; Mean = 0.02, pass; St.D. = 0.34, pass)

Figure 58. Y-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data, test case 1. (S&G mag. = 43.9, fail; S&G phase = 35.7, pass; Mean = 0.05, pass; St.D. = 0.27, pass)

Figure 59. Z-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data, test case 1. (S&G mag. = 21.1, pass; S&G phase = 43.0, fail; Mean = 0.02, pass; St.D. = 0.32, pass)

Figure 60. Roll-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data, test case 1. (S&G mag. = 35.3, pass; S&G phase = 32.7, pass; Mean = 0.02, pass; St.D. = 0.27, pass)

Figure 61. Pitch-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data, test case 1. (S&G mag. = 13.3, pass; S&G phase = 48.0, fail; Mean = 0.05, pass; St.D. = 0.36, fail)

Figure 62. Yaw-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data, test case 1. (S&G mag. = 11.7, pass; S&G phase = 8.7, pass; Mean = 0.04, pass; St.D. = 0.12, pass)

Since the metrics computed for the individual data channels did not all satisfy the acceptance criteria, the multi-channel option in RSVVP was used to calculate the weighted Sprague-Geers and ANOVA metrics for the six channels of data. Two weighting methods were used in the multi-channel evaluation: (1) the Area II method and (2) the Inertia method. The Area (II) method is the default method used in RSVVP; it determines the weight for each channel based on a pseudo-momentum approach using the area under the curves. The Inertia method, on the other hand, determines the weight for each channel by computing the actual linear and rotational momentum from the six channels of data. The Inertia method provides the most accurate weight value for each channel but requires that the mass of the vehicle and the three angular inertia properties be input into RSVVP. As is typical, especially for the rotary moments of inertia, the exact inertial properties for the test vehicle were not known. It is also important that the data from the six channels be collected at the center of gravity of the vehicle. The inertia properties that were input into the RSVVP program were obtained from the NHTSA website* for a 1991 Chevrolet 1500 Silverado, where:

• Mass = 1999 kg
• Ixx = 705 kg*m2
• Iyy = 4802 kg*m2
• Izz = 4924 kg*m2

Table 24 and Table 25 show the results from RSVVP for the multi-channel option using the Area (II) method and the Inertia method, respectively. The resulting weight factors computed for each channel are shown in both tabular form and graphical form in the tables. The results from both methods indicate that the x-, y-, and yaw rate-channels dominate the kinematics of the impact event. A visual assessment of the magnitudes of the integrated acceleration-time history curves for the x-, y-, and z-channels shows that the velocity change in the z-direction is insignificant compared to the change in velocity in the x- and y-directions. Similarly, visual inspection of the integrated angular-rate channels indicates that the yaw-channel is much more important than the roll- and pitch-channels.

* http://www.nhtsa.dot.gov/staticfiles/DOT/NHTSA/NRD/Multimedia/PDFs/VRTC/ca/nhtsa_inertia_database_metric.pdf
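Based on the description above, the Inertia-method weights can be sketched as momentum-like magnitudes normalized to sum to one. The Python sketch below is an illustrative interpretation of that description, not a transcription of the RSVVP weighting formulas; the choice of peak momentum magnitudes (m·Δv from the integrated accelerations, I·ω for the rate channels) and all names are assumptions.

```python
import numpy as np

def inertia_method_weights(acc_xyz, rate_rpy, dt, mass, inertia_xyz):
    """Illustrative channel weights in the spirit of the Inertia method.

    acc_xyz:     three acceleration channels (x, y, z), m/s^2
    rate_rpy:    three angular-rate channels (roll, pitch, yaw), rad/s
    inertia_xyz: (Ixx, Iyy, Izz) in kg*m^2

    Assumption: each channel's weight is taken proportional to the peak
    magnitude of a momentum-like quantity (m * integral(a dt) for translations,
    I * omega for rotations), then normalized so the six weights sum to one.
    """
    momenta = []
    for a in acc_xyz:
        dv = np.cumsum(np.asarray(a, dtype=float)) * dt        # velocity change
        momenta.append(mass * np.max(np.abs(dv)))               # linear momentum
    for w, inertia in zip(rate_rpy, inertia_xyz):
        momenta.append(inertia * np.max(np.abs(np.asarray(w, dtype=float))))
    momenta = np.array(momenta)
    return momenta / momenta.sum()   # order: x, y, z, roll, pitch, yaw
```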

Table 24. Roadside safety validation metrics rating table for test case 1 (multi-channel option using Area II method).

Evaluation criteria, time interval [0 sec; 0.7 sec]. Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-Channel Weights (Area II method):
X channel – 0.255116
Y channel – 0.210572
Z channel – 0.034312
Yaw channel – 0.392648
Roll channel – 0.06581
Pitch channel – 0.041542

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 22.9 | P = 25 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
Mean Residual = 0.03 | Standard Deviation of Residuals = 0.24 | Pass? Y

Table 25. Roadside safety validation metrics rating table for test case 1 (multi-channel option using Inertia method).

Evaluation criteria, time interval [0 sec; 0.7 sec]. Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-Channel Weights (Inertia method):
X channel – 0.296345
Y channel – 0.227346
Z channel – 0.079612
Yaw channel – 0.242396
Roll channel – 0.030312
Pitch channel – 0.123988

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 23.6 | P = 30.4 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
Mean Residual = 0.04 | Standard Deviation of Residuals = 0.27 | Pass? Y

The two methods do not result in exactly the same weights for each channel, as shown graphically in Figure 63, although they are similar, especially considering that the inertial properties are not known exactly. Given that the inertial properties are not always known for the test vehicles, the Area (II) method appears to provide acceptable values for the weight factors in this case and should for most typical cases. The value for the Sprague and Geers magnitude was 22.9 for the Area II method compared to 23.6 for the Inertia method. The value for the Sprague and Geers phase was 25 for the Area II method compared to 30.4 for the Inertia method. The value for the mean residual error was 0.03 for the Area II method compared to 0.04 for the Inertia method. The value for the standard deviation of residual errors was 0.24 for the Area II method compared to 0.27 for the Inertia method. In general, unless the actual inertial properties are known from the crash test, the Area (II) method should be used.
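Once per-channel metrics and channel weights are available, the composite values reported in Tables 24 and 25 can be assembled as a weighted combination. The short sketch below assumes a simple weighted sum of the per-channel magnitudes, which reproduces the composite Sprague-Geers magnitude reported here, but it is an illustration rather than a verified transcription of RSVVP's internals; the function and variable names are hypothetical.

```python
def weighted_composite(per_channel_metrics, weights):
    """Combine per-channel metric values into a single composite value.

    per_channel_metrics: dict of channel name -> metric value (e.g., S&G M)
    weights:             dict of channel name -> weight, summing to ~1.0
    """
    return sum(weights[ch] * abs(per_channel_metrics[ch])
               for ch in per_channel_metrics)

# Example with the single-channel S&G magnitudes from Table 23 and the
# Area II weights from Table 24 (illustration only):
sg_magnitude = {"x": 21.5, "y": 43.9, "z": 21.1,
                "roll": 35.3, "pitch": 13.3, "yaw": 11.7}
area2_weights = {"x": 0.255116, "y": 0.210572, "z": 0.034312,
                 "roll": 0.06581, "pitch": 0.041542, "yaw": 0.392648}
print(weighted_composite(sg_magnitude, area2_weights))  # ~22.9, cf. Table 24
```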

Figure 63. Comparison of multi-channel weight values computed using the Area II and the Inertia methods, test case 1.

The weighted metrics computed in RSVVP using both the Area II and the Inertia methods in the multi-channel mode all satisfy the acceptance criteria, and therefore the time history comparison can be considered acceptable. The validation assessment for Part III should be entered as "Yes" on the cover page of the validation report.

Part IV – Validation of Crash Specific Phenomena

The last step in the validation procedure is to compare the phenomena observed in both the crash test and the numerical solution. Table 26 contains the Report 350 crash test criteria with the applicable test numbers. The criteria that apply to test 3-11 (i.e., corresponding to this particular test case) are marked in the table. These include criteria A, D, F, L and M. Table 27 through Table 29 contain an expanded list of these same criteria, including additional specific phenomena that were measured in the test and that could be directly compared to the numerical solution. Table 27 contains a comparison of phenomena related to structural adequacy, Table 28 contains a comparison of phenomena related to occupant risk, and Table 29 contains a comparison of phenomena related to vehicle trajectory.

Table 26. Evaluation criteria test applicability table for test case 1.

Evaluation Factors | Evaluation Criteria | Applicable Tests

Structural Adequacy
[A] Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. | 10, 11, 12, 20, 21, 22, 35, 36, 37, 38
B The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. | 60, 61, 70, 71, 80, 81
C Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. | 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53

Occupant Risk
[D] Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. | All
E Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) | 70, 71
[F] The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. | All except those listed in criterion G
G It is preferable, although not essential, that the vehicle remain upright during and after collision. | 12, 22 (for test level 1 – 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H Occupant impact velocities should satisfy the following occupant impact velocity limits (m/s):
  Longitudinal and lateral – preferred 9, maximum 12 | 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81
  Longitudinal – preferred 3, maximum 5 | 60, 61, 70, 71
I Occupant ridedown accelerations should satisfy the following occupant ridedown acceleration limits (g's):
  Longitudinal and lateral – preferred 15, maximum 20 | 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81
[L] The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ride-down acceleration in the longitudinal direction should not exceed 20 G's. | 11, 21, 35, 37, 38, 39

Vehicle Trajectory
[M] The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. | 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39
N Vehicle trajectory behind the test article is acceptable. | 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81

Note: The brackets around the letters indicate the criteria that are applicable to this test case.

174 Table 27. Structural adequacy phenomena for test case 1. Evaluation Criteria Known Result Analysis Result Difference Relative/ Absolute Agree? St ru ct ur al A de qu ac y A A1 Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) Yes Yes YES A2 Maximum dynamic deflection: - Relative difference is less than 20 percent or - Absolute difference is less than 0.15 m 1.0 m 0.985m 1.5% 0.02 m YES A3 Length of vehicle-barrier contact: - Relative difference is less than 20 percent or - Absolute difference is less than 2 m 0.691 s 0.690 s 0.1% YES A4 The relative difference in the number of broken or significantly bent posts is less than 20 percent. 3 3 0 YES A5 Did the rail element rupture or tear (Answer Yes or No) No No YES A6 Were there failures of connector elements (Answer Yes or No). Yes Yes YES A7 Was there significant snagging between the vehicle wheels and barrier elements (Answer Yes or No). Yes Yes YES A8 Was there significant snagging between vehicle body components and barrier elements (Answer Yes or No). No No YES

175 Table 28. Occupant risk phenomena for test case 1. Evaluation Criteria Known Result Analysis Result Difference Relative/ Absolute Agree? O cc up an t R is k D Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) Pass Pass YES F F1 The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Yes or No) Pass Pass YES F2 Maximum roll of the vehicle: - Relative difference is less than 20 percent or - Absolute difference is less than 5 degrees. -8.7 -10.1 16% 1.4 deg YES F3 Maximum pitch of the vehicle is: - Relative difference is less than 20 percent or - Absolute difference is less than 5 degrees. -3.3 -4.3 30% 1.0 deg YES F4 Maximum yaw of the vehicle is: - Relative difference is less than 20 percent or - Absolute difference is less than 5 degrees. 41 42.8 4% 1.8 deg YES L The occupant impact velocity in the longitudinal direction should not exceed 12 m/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G’s. L1 • Longitudinal OIV (m/s) 5.4 4.7 13% 0.7m/s YES L2 • Lateral OIV (m/s) 4.4 5.0 13.6% 0.6 m/s YES L3 • THIV (m/s) 6.3 6.4 1.6% 0.1 m/s YES L4 Occupant accelerations: - Relative difference is less than 20 percent or - Absolute difference is less than 4 g’s. L5 • Longitudinal ORA 7.9 8.9 12.7% 1.0 G YES L6 • Lateral ORA 8.4 10.0 19.0% 1.6 G YES L7 • PHD 12.1 13.2 9.1% 1.2 G YES • ASI 0.68 0.72 5.9% 0.04 YES
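The occupant impact velocity and ridedown acceleration values compared under criterion L above are obtained from the vehicle acceleration histories with the Report 350 flail-space model. As a rough illustration of how such values are produced, the following Python sketch computes a simplified longitudinal OIV and ORA; it assumes a uniformly sampled acceleration trace, treats only the longitudinal channel with the 0.6-m flail distance, and is not the TRAP or RSVVP implementation.

import numpy as np

def flail_space_oiv_ora(accel_g, dt, flail_dist=0.6):
    """Simplified longitudinal flail-space calculation.
    accel_g: vehicle CG acceleration in g's, uniformly sampled at interval dt (s).
    Returns (OIV in m/s, ORA in g's)."""
    a = np.asarray(accel_g, dtype=float) * 9.81          # vehicle acceleration, m/s^2
    dv = np.cumsum(a) * dt                                # vehicle velocity change, m/s
    rel_disp = np.abs(np.cumsum(dv) * dt)                 # occupant travel relative to vehicle, m
    idx = int(np.argmax(rel_disp >= flail_dist))          # first sample reaching the flail distance
    oiv = abs(dv[idx])                                    # occupant impact velocity, m/s
    win = max(1, int(round(0.010 / dt)))                  # 10-ms averaging window, in samples
    avg10 = np.convolve(a[idx:] / 9.81, np.ones(win) / win, mode="valid")
    ora = float(np.abs(avg10).max())                      # max 10-ms average ridedown acceleration, g's
    return oiv, ora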

Table 29. Vehicle trajectory phenomena for test case 1.
M1 The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. Known: 15.5° (61% of impact angle); Analysis: 17.3° (68% of impact angle); Agree: YES
M2 Exit angle at loss of contact: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 15.5°; Analysis: 17.3°; Difference: 11.6% / 1.8 deg; Agree: YES
M3 Exit velocity at loss of contact: relative difference is less than 20 percent or absolute difference is less than 10 m/s. Known: 55 km/h; Analysis: 62 km/h; Difference: 12.7% / 7.0 km/h; Agree: YES
M4 One or more vehicle tires failed or de-beaded during the collision event (Answer Yes or No). Known: Yes; Analysis: N.A.

All the applicable criteria in Table 27 through Table 29 agree (i.e., the relative difference between the numerical solution and the test was less than 20 percent) except for the comparison of maximum pitch angle, which resulted in a relative difference of 30 percent. However, the magnitude of pitch was relatively small for both the test (i.e., 3.3 degrees) and the numerical solution (i.e., 4.3 degrees) compared to the other angular components, and the discrepancy could therefore be considered negligible. Recall that the weight factor computed for the pitch channel in RSVVP, as shown in Table 2, also indicated that the effects of pitch were of little significance in the crash event. In general, as the magnitudes of the measured quantities become smaller, the use of the relative percent difference as a validation criterion may not be appropriate. For example, according to NCHRP Report 350 it is preferred that the occupant ridedown acceleration (ORA) remain below 15 G's, and it should not exceed 20 G's. If the ORA from the physical test were, for example, 1.0 G and the ORA from the numerical solution were 1.4 G, the relative difference would be 40 percent, yet from a qualitative viewpoint, based on crash test experience, there is essentially no difference between these two acceleration values. To account for this problem, the quantitative phenomena in Table 27 through Table 29 also include absolute acceptance criteria. For example, criterion F3 in Table 28 requires that the maximum pitch in the simulation either be within 20 percent of the pitch measured in the crash test or that the absolute difference between them be less than 5 degrees. The validation assessment for Part IV for Case 1 can be entered as "Yes" on the cover page of the validation report since all the phenomena agree.
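The two-part acceptance test applied throughout Table 27 through Table 29, in which a phenomenon agrees if either the relative difference is below 20 percent or the absolute difference is below a phenomenon-specific limit, can be expressed compactly. The sketch below is illustrative only; the example values are the pitch angles from Table 28 and the hypothetical 1.0-G and 1.4-G ridedown accelerations discussed above.

def phenomena_agree(known, analysis, rel_limit=0.20, abs_limit=None):
    """Accept if the relative difference (with respect to the known result) is
    below rel_limit, or if the absolute difference is below abs_limit."""
    abs_diff = abs(analysis - known)
    rel_diff = abs_diff / abs(known) if known != 0 else float("inf")
    if rel_diff < rel_limit:
        return True
    return abs_limit is not None and abs_diff < abs_limit

# Criterion F3: 30 percent relative difference but only 1.0 degree absolute difference.
print(phenomena_agree(known=-3.3, analysis=-4.3, abs_limit=5.0))   # True
# Hypothetical ridedown accelerations: 40 percent relative but only 0.4 G absolute.
print(phenomena_agree(known=1.0, analysis=1.4, abs_limit=4.0))     # True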

Since the model has been validated for each part of the validation procedure, it can now be considered valid for use in assessing the effects of incremental modifications to the system. The comparison documented in the verification and validation report (i.e., Appendix C1) shows that the finite element model accurately replicates the results of the baseline crash test. The analyst and policy maker can have confidence that the model will produce good predictions for crashes that are reasonably similar to the baseline crash. The analyst could now use this model to explore incremental variations of the roadside hardware system represented by the baseline crash. For example, in this case, the analyst could use the model to explore slightly different guardrail heights, different blockout configurations, or the effect of curbs placed in front of the guardrail. In fact, this particular model was used to explore the performance of a variety of curbs placed in front of the guardrail system, as described in NCHRP Report 537.(141) The research team in NCHRP Report 537 used the model to parametrically vary the location of the curb (i.e., three different offset locations), the type and height of the curb (i.e., five different types) and the speed and angle of the pickup truck when it left the road (i.e., three angles and three speeds). Altogether, simulations of 33 different combinations were performed to determine the likely Report 350 performance of the curb-guardrail combinations. Clearly, running 33 finite element simulations was far less expensive than performing 33 full-scale crash tests, which made establishing the validity of the model all the more important. In order to further validate the whole parametric range of the finite element simulations, seven full-scale crash tests were performed at critical points in the parameter space. A validation of one of those simulations against one of these crash tests is the subject of Case 2, presented in the next section.

TEST CASE 2: PICKUP TRUCK STRIKING A GUARDRAIL WITH A CURB

INTRODUCTION

The validated model of the G4(1S) described in Case 1 was used to assess the effects of installing various types of curbs in combination with the G4(1S) guardrail system. The objective of the study was to investigate the effects of curb type, curb placement and impact speed on the performance of the barrier system in terms of structural adequacy of the barrier, occupant risk (e.g., OIV, ORA, etc.) and vehicle trajectory.(141) The results from the parametric study were subsequently confirmed by conducting full-scale tests on a subset of the curb-barrier combination scenarios. One of those tests involved the same impact conditions that were used in one of the numerical simulations, i.e., a 2000-kg pickup impacting a G4(1S) guardrail with a 6-inch AASHTO type B curb placed underneath the rail.(145) The finite element model is shown in Figure 64, which illustrates the placement of the curb relative to the barrier system. In the following sections, the results from the numerical solution are compared to the full-scale test using the validation procedures presented in Chapter 4.

Figure 64. Finite element model for the analysis of the G4(1S) and AASHTO B curb.(146)

PIRT FOR THE CURB AND VEHICLE MODELS

The curbs were modeled with rigid material properties in the Worcester Polytechnic Institute (WPI) curb-guardrail combination study.(141) It was assumed that the curb does not deform or suffer damage during "tracking" impacts with passenger vehicles. The curb dimensions were consistent with those specified for the 6-inch tall AASHTO type B curb.(142) Since there were no tests used to validate the accuracy of the curb model, the only information included in the PIRT was verification of the curb dimensions. The PIRT for the guardrail used in Case 1 and shown in Appendix C6 was also used for this case. Similarly, the same vehicle model was used in Case 2 as in Case 1, so the same PIRT for the C2500 pickup truck shown in Appendix C8 applies to this case as well.

179 VALIDATION OF THE G4(1S) GUARDRAIL WITH CURB MODEL The basic information about the test and simulation was documented on the Cover Page and in Part I of the validation report as shown in Appendix C2. The full-scale crash test, test No. 52-2556-001, was conducted by E-TECH Testing Services, Inc. on June 6, 2003.(141, 145, 146) The simulation was conducted by WPI in July 2002 with model No. B-0m-85-FEA_2002-0708 and is documented in NCHRP Report 537.(141) The impact conditions for the test were similar to those used in the numerical simulation. The test vehicle was a 1998 GMC 3/4 –ton pickup with a gross mass of 1,993-kg. The impact speed and angle of the test vehicle were 85.6 km/hr and 25 degrees, respectively. The impact point in the test was 0.6 m upstream of post 14. The simulation vehicle was the modified C2500R with a total mass of 2,000 kg. The impact conditions for the numerical simulation were 85 km/hr and 25 degrees. The impact point in the numerical simulation was 0.49 m upstream of post 14. A summary of the test and simulation information is provided in Table 30. Sequential views of the full-scale test and simulation are shown in Figure 65 from a downstream perspective and Figure 66 from an over-head view. Table 30. Summary of the test and simulation impact conditions for Case 2. General Information Known Solution Analysis Solution Performing Organization ETECH WPI Test/Run Number: 52-2556-001 (6/5/2003) B-0m-85-FEA (7/8/2002) Vehicle: 1998 GMC 3/4-ton WPI modified (NCAC C2500R) Impact Conditions Vehicle Mass: 1,993 kg 2,000 kg Speed: 85.6 km/hr 85.0 km/hr Angle: 25 degrees 25 degrees Impact Point: 0.6 m upstream of post 14 0.49 m upstream of post 14

180 Figure 65. Sequential views from E-TECH test 52-2556-001 and simulation from a downstream perspective for Case 2.(146)

181 Figure 65. Sequential views from E-TECH test 52-2556-001 and simulation from a downstream perspective for Case 2. (continued).(146)

182 Figure 66. Sequential views from E-TECH test 52-2556-001 and simulation from an overhead perspective for Case 2.(146)

183 Figure 66. Sequential views from E-TECH test 52-2556-001 and simulation from an overhead perspective for Case 2. (continued).(146) Part I - Solution Verification The next step in the validation process is to perform global checks of the analysis to verify that the numerical solution is stable and is producing physical results. (e.g., results that conform to the basic laws of conservation). This analysis was modeled as a closed system so the total energy should remain constant throughout the analysis and should be equal to the initial

kinetic energy of the impacting vehicle. There are, however, several opportunities for non-physical energy to be added during the analysis as a result of numerical inaccuracies in element formulation, contact definitions, mass-scaling, etc. Some error of this kind is expected; however, it is necessary to ensure that the errors are sufficiently small that they have minimal effect on the solution.

Table 31. Summary of Global Energy Checks for Case 2.
Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. Change (%): 0.3; Pass: Yes
Hourglass energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. Change (%): 2.1; Pass: Yes
Hourglass energy of the analysis solution at the end of the run is less than ten percent of the total internal energy at the end of the run. Change (%): 5.5; Pass: Yes
The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. Change (%): 0.8; Pass: Yes
Mass added to the total model is less than five percent of the total model mass at the beginning of the run. Change (%): 3.4e-4; Pass: Yes
The part/material with the most mass added had less than 10 percent of its initial mass added. Change (%): 0.3; Pass: Yes
The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. Change (%): 0.006; Pass: Yes
Are there shooting nodes in the solution? Answer: No; Pass: Yes
Are there solid elements with negative volumes? Answer: No; Pass: Yes

The solution meets all the recommended global energy balance criteria and appears to be free of any major numerical problems. The validation assessment for Part I should be entered as "Yes" on the cover page of the validation report.

Part II - Quantitative Evaluation

Next, the RSVVP computer program was used to compute the Sprague-Geers metrics and ANOVA metrics using time-history data from the known (i.e., physical test) and analysis data. The multi-channel option in RSVVP was used since this option computes metrics for each individual channel as well as for the weighted composite of the combined channels. The raw data from each of the six data channels collected at the center of gravity of the vehicle were input into RSVVP. These data included the x-acceleration, y-acceleration, z-acceleration, roll-rate, pitch-rate and yaw-rate. The data were then filtered in RSVVP using a CFC class 180 filter. The shift and drift options in RSVVP were used for the true curves (i.e., physical test data) for the x-, y-,

185 and z-channels, but not used for the roll-, pitch- and yaw-channels. In general, numerical solutions are not subject to shift and drift since these are sensor (e.g., accelerometer) phenomena. Each of the curve pairs were synchronized using the minimum absolute area of residuals option in RSVVP. For the metrics evaluation options in RSVVP, the default metrics were selected, which included the Sprague and Geers and the ANOVA metrics. The Whole Time Window Only option was also selected which directed RSVVP to evaluate the curves over the complete time history of available data, which in this case was 1.22 seconds. The results of the evaluation for the individual channels are shown in Table 32. Figures 67 through 72 show the time histories for each pair of data that were used to compute the metrics. Based on the Sprague & Geers metrics, the pitch- and the yaw-channels from the numerical analysis were in agreement with the test, with the metrics for the x- and y-channels falling just outside the acceptable values. The z-channel and the roll-channel were not in good agreement. The ANOVA metrics indicated that the simulation was in good agreement with the test for all channels except the roll- and pitch-channels. The x-, y-, and yaw rate-channels dominate the kinematics of the impact event, which can be verified by simple inspection of the integrated time-history traces. A visual assessment of the magnitudes of the integrated acceleration-time history curves for the x-, y-, and z-channels, indicate that the velocity change in the z-direction is insignificant compared to the change in velocity in the x- and y-directions. Similarly, visual inspection of the integrated angular-rate channels indicates that the yaw-channel is much more important than the roll- and pitch- channels. Since the metrics computed for the individual data channels did not all satisfy the acceptance criteria, the multi-channel option in RSVVP was used to calculate the weighted Sprague-Geers and ANOVA metrics for the six channels of data. Two weighting methods were used in the multi-channel evaluation: 1) Area II method (default method) and 2) Inertial method. The Inertial method, as discussed previously in Test Case 1 and in Chapter 4, provides the most accurate weight value for each channel but requires that the mass of the vehicle and the three angular inertial properties be input into RSVVP. The inertial properties used in the multi-channel evaluation were the same as the properties used in Test Case 1 (see Test Case 1 for more details).
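The minimum absolute area of residuals option referred to above shifts one curve in time until the area between the pair of curves is smallest. A minimal sketch of that idea is given below; it assumes uniformly sampled curves of equal length, uses a simple circular shift, and is not the actual RSVVP synchronization algorithm.

import numpy as np

def sync_shift(true_curve, test_curve, dt, max_shift_s=0.05):
    """Return the time shift of test_curve that minimizes the absolute area of residuals."""
    true_curve = np.asarray(true_curve, dtype=float)
    test_curve = np.asarray(test_curve, dtype=float)
    max_steps = int(round(max_shift_s / dt))
    best_shift, best_area = 0, np.inf
    for s in range(-max_steps, max_steps + 1):
        shifted = np.roll(test_curve, s)      # circular shift, for illustration only
        area = np.trapz(np.abs(true_curve - shifted), dx=dt)
        if area < best_area:
            best_shift, best_area = s, area
    return best_shift * dt                    # shift in seconds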

186 Table 32. Roadside safety validation metrics rating table for Case 2– (single channel option). Evaluation Criteria Time interval [0 sec; 1.22 sec] O Sprague-Geer Metrics List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. Filter Option Sync. Option Shift Drift M P Pass? True Curve Test Curve True Curve Test Curve X acceleration CFC 180 Min. area of Residuals Y N Y N 1.2 41.6 N Y acceleration CFC 180 Min. area of Residuals Y N Y N 5.7 43.1 N Z acceleration CFC 180 Min. area of Residuals Y N Y N 0.5 48.6 N Roll rate CFC 180 Min. area of Residuals N N N N 1.5 44.5 N Pitch rate CFC 180 Min. area of Residuals N N N N 9.7 25.2 Y Yaw rate CFC 180 Min. area of Residuals N N N N 9.6 10.4 Y P ANOVA Metrics List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met: • The mean residual error must be less than five percent of the peak acceleration ( Peakae ⋅≤ 05.0 ) and • The standard deviation of the residuals must be less than 35 percent of the peak acceleration ( Peaka⋅≤ 35.0σ ) M ea n R es id ua l S ta nd ar d D ev ia tio n of R es id ua ls Pass? X acceleration/Peak 0.00 0.17 Y Y acceleration/Peak 0.02 0.17 Y Z acceleration/Peak 0.01 0.23 Y Roll rate 0.02 0.51 N Pitch rate 0.07 0.36 N Yaw rate 0.06 0.14 Y
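For reference, the Sprague & Geers magnitude (M) and phase (P) components and the ANOVA residual statistics reported in Table 32 can be computed from a pair of preprocessed, uniformly sampled curves along the following lines. The sketch uses the commonly published form of these metrics and is offered for illustration only; it is not the RSVVP source code.

import numpy as np

def sprague_geers(true_curve, test_curve):
    """Sprague & Geers magnitude and phase components, in percent."""
    m = np.asarray(true_curve, dtype=float)
    c = np.asarray(test_curve, dtype=float)
    imm, icc, imc = np.sum(m * m), np.sum(c * c), np.sum(m * c)
    magnitude = np.sqrt(icc / imm) - 1.0
    phase = np.arccos(np.clip(imc / np.sqrt(imm * icc), -1.0, 1.0)) / np.pi
    return 100.0 * magnitude, 100.0 * phase

def anova_residuals(true_curve, test_curve):
    """Mean and standard deviation of the residuals, normalized by the peak of the true curve."""
    m = np.asarray(true_curve, dtype=float)
    c = np.asarray(test_curve, dtype=float)
    residuals = (m - c) / np.max(np.abs(m))
    return float(residuals.mean()), float(residuals.std())

# Acceptance per the table above: |M| and |P| no greater than 40,
# mean residual no greater than 0.05 and standard deviation no greater than 0.35.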

Figure 67. X-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data for Case 2. (S&G magnitude = 1.2, pass; S&G phase = 41.6, fail; ANOVA mean residual = 0.00, pass; standard deviation of residuals = 0.17, pass.)

Figure 68. Y-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data for Case 2. (S&G magnitude = 5.7, pass; S&G phase = 43.1, fail; ANOVA mean residual = 0.02, pass; standard deviation of residuals = 0.17, pass.)

Figure 69. Z-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data for Case 2. (S&G magnitude = 0.5, pass; S&G phase = 48.6, fail; ANOVA mean residual = 0.01, pass; standard deviation of residuals = 0.23, pass.)

Figure 70. Roll-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data for Case 2. (S&G magnitude = 1.5, pass; S&G phase = 44.5, fail; ANOVA mean residual = 0.02, pass; standard deviation of residuals = 0.51, fail.)

Figure 71. Pitch-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data for Case 2. (S&G magnitude = 9.7, pass; S&G phase = 25.2, pass; ANOVA mean residual = 0.07, fail; standard deviation of residuals = 0.36, fail.)

Figure 72. Yaw-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data for Case 2. (S&G magnitude = 9.6, pass; S&G phase = 10.4, pass; ANOVA mean residual = 0.06, pass; standard deviation of residuals = 0.14, pass.)

Table 33 and Table 34 show the results from RSVVP for the multi-channel option using the Area II method and the Inertial method, respectively. The resulting weight factors computed for each channel are shown in both tabular and graphical form in the tables. The two methods disagree considerably on the weight given to the x-channel and the pitch-channel; however, all the other weight factors are very similar for the two methods, as shown in Figure 73. The end results, as shown in Table 33 and Table 34, agree that the comparison is adequate.

190 Table 33. Roadside safety validation metrics rating table for Case 2 – (multi-channel option using Area II method). Evaluation Criteria (time interval [0 sec; 1.22 sec]) Channels (Select which was used) X Acceleration Y Acceleration Z Acceleration Roll rate Pitch rate Yaw rate Multi-Channel Weights - Area II Method - X Channel – 0.268011 0 0.1 0.2 0.3 0.4 0.5 X acc Y acc Z acc Yaw rate Roll rate Pitch rate Y Channel – 0.145893 Z Channel – 0.086096 Yaw Channel – 0.446323 Roll Channel – 0.028886 Pitch Channel – 0.02479 O Sprague-Geer Metrics Values less or equal to 40 are acceptable. M P Pass? 5.7 28.2 Y P ANOVA Metrics Both of the following criteria must be met: • The mean residual error must be less than five percent of the peak acceleration ( Peakae ⋅≤ 05.0 ) • The standard deviation of the residuals must be less than 35 percent of the peak acceleration ( Peaka⋅≤ 35.0σ ) M ea n R es id ua l S ta nd ar d D ev ia tio n of R es id ua ls Pass? 0.03 0.18 Y The weighted metrics computed in RSVVP using both the Area II and the Inertial methods in the multi-channel mode all satisfy the acceptance criteria, and therefore the time history comparison can be considered acceptable. The validation assessment for Part II should be entered as “Yes” on the cover page of the validation report. Part III - Validation of Crash Specific Phenomena The last step in the validation procedure is to compare the phenomena observed in both the crash test and the numerical solution. Since, like Case 1, this case is an example of Report 350 Test 3-11, the same phenomena previously identified in Table 26 apply to this case as well, namely: criteria A, D, F, L and M.

191 Table 34. Roadside safety validation metrics rating table for Case 2 – (multi-channel option using Inertia method). Evaluation Criteria (time interval [0 sec; 1.22 sec]) Channels (Select which was used) X Acceleration Y Acceleration Z Acceleration Roll rate Pitch rate Yaw rate Multi-Channel Weights - Inertia Method - X Channel – 0.119486 0 0.1 0.2 0.3 0.4 0.5 0.6 X acc Y acc Z acc Yaw rate Roll rate Pitch rate Y Channel – 0.129217 Z Channel – 0.04426 Yaw Channel – 0.477606 Roll Channel – 0.034208 Pitch Channel – 0.195224 O Sprague-Geer Metrics Values less or equal to 40 are acceptable. M P Pass? 7.3 24.2 Y P ANOVA Metrics Both of the following criteria must be met: • The mean residual error must be less than five percent of the peak acceleration ( Peakae ⋅≤ 05.0 ) • The standard deviation of the residuals must be less than 35 percent of the peak acceleration ( Peaka⋅≤ 35.0σ ) M ea n R es id ua l S ta nd ar d D ev ia tio n of R es id ua ls Pass? 0.02 0.21 Y Figure 73. Comparison of multi-channel weight values computed using the Area II and the Inertial methods for Case 2.
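The Area II weighting used in Table 33 assigns each channel a share of the total weight in proportion to the area under its measured (true) curve, with the acceleration channels and the angular-rate channels treated as two groups. The sketch below shows one plausible implementation of that scheme and of the weighted composite metric; the even split between the two groups and the area normalization are assumptions based on the description given in this chapter, not a transcription of the RSVVP algorithm, and the Inertia method (which additionally requires the vehicle mass and moments of inertia) is not shown.

import numpy as np

def area_weights(acc_channels, rate_channels, dt):
    """Weights proportional to the area under each true curve, with half of the
    total weight given to the acceleration group and half to the rate group."""
    def group_weights(channels, group_share):
        areas = np.array([np.trapz(np.abs(ch), dx=dt) for ch in channels])
        return group_share * areas / areas.sum()
    return np.concatenate([group_weights(acc_channels, 0.5),
                           group_weights(rate_channels, 0.5)])

def weighted_composite(channel_metrics, weights):
    """Weighted composite of per-channel metric values (e.g., Sprague & Geers M)."""
    return float(np.dot(channel_metrics, weights))

With only one channel in the angular-rate group, that channel receives half of the total weight under this scheme, which is consistent with the 50 percent yaw-rate weight reported for Test 1 of Case 3 later in this chapter.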

192 Table 35 through Table 37 contain expanded lists of these same criteria including additional specific phenomena that were measured in the test and that could be directly compared to the numerical solution. Table 35 contains a comparison of phenomena related to structural adequacy, Table 36 contains a comparison of phenomena related to occupant risk, and Table 37 contains a comparison of phenomena related to vehicle trajectory. As shown in Table 35, the comparison between all the structural adequacy phenomena was acceptable both in relative and absolute comparison terms. For example, the dynamic deflection in the full-scale crash test was 0.5 m whereas it was 0.6 m in the finite element simulation. This corresponds to an absolute difference of 0.1 m and a relative difference of 20%. Since the absolute difference is less than 0.15 m, the comparison is judged to be good (i.e., even though the relative difference was 20 percent, which is at the limit, the absolute difference was well below the limit). The finite element simulation and full-scale test agreed exactly with respect to the number of bent/broken posts, number of rail elements detached from the post and the number of blockouts broken or detached. Table 35. Structural Adequacy Phenomena for Case 2. Evaluation Criteria Known Result Analysis Result Difference Relative/ Absolute Agree? St ru ct ur al A de qu ac y A A1 Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) Yes Yes YES A2 Maximum dynamic deflection: - Relative difference is less than 20 percent or - Absolute difference is less than 0.15 m 0.5 m 0.6 m 20% 0.1 m YES A3 Length of vehicle-barrier contact: - Relative difference is less than 20 percent or - Absolute difference is less than 2 m 6.32 m 6.19 m 2.1% 0.13 m YES A4 The relative difference in the number of broken or significantly bent posts is less than 20 percent. 2 2 YES A5 The rail element ruptured or failed (Answer Yes or No) No No YES A6 There were failures of connector elements (Answer Yes or No). Yes Yes YES A6a Number of detached posts from rail 2 2 YES A6b Number of detached blockouts from posts 1 1 YES As shown in Table 36, a comparison of the finite element and full-scale crash test results for all the occupant risk phenomena result in good comparisons except the lateral ridedown acceleration (i.e., criterion L6). The maximum lateral ORA predicted in the simulation was 4.3 g’s higher than the value measured in the test. The absolute limit for differences in acceleration is 4 g’s so the lateral ORA is just barely over the limit. Both the test and simulation agree that the ORA is lower than the suggested limits specified in Report 350 (i.e., < 12 G’s) and the

simulation is being overly conservative (i.e., predicting too high), so this result will be considered a pass, although a marginal one. An examination of criterion F3 in Table 36 illustrates the importance of using absolute as well as relative differences. The relative difference in the maximum pitch rotation was 25.5 percent (i.e., 12.8 versus 10.2 degrees), which is an unacceptably high relative difference. The absolute difference, however, is only 2.6 degrees which, given the relatively small maximum pitch, is considered acceptable since it is less than the 5-degree absolute limit.

Table 36. Occupant Risk Phenomena for Case 2.
D Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) Known: Pass; Analysis: Pass; Agree: YES
F1 The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Yes or No) Known: Pass; Analysis: Pass; Agree: YES
F2 Maximum roll of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 6.5; Analysis: 4.9; Difference: 24.6% / 1.6 deg; Agree: YES
F3 Maximum pitch of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 10.2; Analysis: 12.8; Difference: 25.5% / 2.6 deg; Agree: YES
F4 Maximum yaw of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 42.0; Analysis: 43.2; Difference: 2.9% / 1.2 deg; Agree: YES
L Occupant impact velocities: relative difference is less than 20 percent or absolute difference is less than 2 m/s.
L1 Longitudinal OIV (m/s). Known: 4.9; Analysis: 4.2; Difference: 14.3% / 0.7 m/s; Agree: YES
L2 Lateral OIV (m/s). Known: 4.7; Analysis: 4.1; Difference: 12.8% / 0.6 m/s; Agree: YES
L3 THIV (m/s). Known: 24.1; Analysis: 26.8; Difference: 11.2% / 2.7 m/s; Agree: YES
Occupant accelerations: relative difference is less than 20 percent or absolute difference is less than 4 g's.
L5 Longitudinal ORA. Known: 8.1; Analysis: 8.1; Difference: 0.0; Agree: YES
L6 Lateral ORA. Known: 6.3; Analysis: 10.6; Difference: 68.3% / 4.3 G; Agree: NO
ASI. Known: 0.7; Analysis: 0.67; Difference: 4.3% / 0.03; Agree: YES

Table 37. Vehicle Trajectory Phenomena for Case 2.
M1 The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. Known: Yes (14 deg); Analysis: No (16 deg); Agree: NO
M2 Exit angle at loss of contact: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 14 deg; Analysis: 16 deg; Difference: 5.1; Agree: YES
M3 Exit velocity at loss of contact: relative difference is less than 20 percent or absolute difference is less than 10 m/s. Known: 41.3* / 53.1** km/hr; Analysis: 56.7 km/hr; Difference: 37.3 / 6.8; Agree: YES
M4 One or more vehicle tires failed or de-beaded during the collision event (Answer Yes or No). Known: Yes; Analysis: N/A
M5 One or more tires separated from the vehicle (Answer Yes or No). Known: No; Analysis: N/A
* Velocity reported in the test report, computed by integrating the raw x-acceleration channel.
** Velocity computed by integrating the x-channel data processed by RSVVP (e.g., with drift and shift).

Criterion M1, shown in Table 37, which is related to the exit angle of the vehicle when it loses contact with the guardrail, did not agree, although the exit angle predicted in the simulation agreed very well with the yaw angle measured in the test. This criterion states that the exit angle of the vehicle should be less than 60 percent of the impact angle (i.e., less than 15 degrees). The exit angle computed in the simulation was one degree over this limit and the exit angle measured in the test was one degree below this limit, so the standard Report 350 criterion M falls right on the limit. As shown by criterion M2, however, the difference between the test and simulation was only two degrees, which is quite good. In this particular case, the "NO" result for criterion M1 can be ignored since the two results fall right on the limit and criterion M2 shows that the agreement in terms of the angles is quite good.

In general, the simulation agrees fairly well with the full-scale test results, with the small exceptions noted. Criteria L6 and M1 did not compare well, but on closer examination it is clear that the results are actually either conservative or right on the limit. The simulation, therefore, can be considered validated with these exceptions. The fact that this case was validated against crash tests gives the community more confidence that the results of the 33 simulations in Report 537 are, in fact, good predictions.(141)
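Criterion M1 above is a simple threshold test on the exit angle, and a short check of the kind below makes the borderline nature of this case apparent; the 25-degree impact angle and the 14- and 16-degree exit angles are the values discussed above.

def exit_angle_ok(exit_angle_deg, impact_angle_deg):
    """Report 350 criterion M: exit angle less than 60 percent of the impact angle."""
    return exit_angle_deg < 0.60 * impact_angle_deg

print(exit_angle_ok(14.0, 25.0))   # True: test exit angle is one degree under the 15-degree limit
print(exit_angle_ok(16.0, 25.0))   # False: simulated exit angle is one degree over the limit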

195 TEST CASE 3: GEO METRO STRIKING A RIGID BARRIER INTRODUCTION This test case involves a 900-kg small car impacting a rigid concrete barrier. The simulation was originally performed to reproduce the ten repeated full-scale crash tests performed during the ROBUST project.(43) The main purpose of this task of the ROBUST project was twofold: (1) assessing the typical scatter of results which characterizes experimental tests in roadside safety by performing the same full-scale crash test at different test houses and (2) reproducing numerically the test in order to assess the reliability of numerical simulations using the model of a generic small car. The simulation was performed using an improved version of the publicly available LSDYNA numerical model of the Geo Metro vehicle originally developed by the NCAC. Figure 74. Temporary vertical concrete barrier. The concrete barrier used in the Round Robin tests was a temporary vertical concrete barrier (Figure 74) produced by the British company Marshall RC Products Company and is usually employed as a modular lane divider. It is constructed with independent, precast concrete units. Each unit has a mass of 2,600 kg (5,732 lb) is 3.150 m (10.3 ft) long, 0.450 m (1.5 ft) wide and 0.816 m (2.7 ft) high. They were placed end to end on the track and were connected by two M24 bolts passing through holes in the vertical scarf-joints at the ends of the units. To prevent damage to the joints, a 5-mm thick non-cellular rubber gasket was placed in the gap between the ends of the units. The total length of the barrier installation during the test was 30 m.

196 The validation of the numerical model was carried out by comparing the simulation results against the outcomes of two different full-scale experimental tests performed using the same type of barrier and similar vehicles. In the following sections, the various steps of the validation process are separately described with respect to each of the two experimental tests, which will be referred to as Test 1 and Test 2 accordingly. The verification and validation reports for the comparisons using Tests 1 and 2 are contained in Appendices C3 and C4, respectively. The vehicles used in Test 1 and Test 2 were, respectively, a Fiat UNO (Figure 75a) and a Peugeot 206 (Figure 75b). In both the tests the vehicle complied with the requirements of the EN1317 standard for the 900-kg small car category. (a) (b) Figure 75. Vehicle used in (a) Test 1 and (b) Test 2. PIRT FOR THE GEO METRO VEHICLE MODEL The vehicle used in the numerical analysis was a modified version of the Geo Metro reduced finite element model originally developed by the NCAC. (77) Although this numerical model does not represent in full detail the actual vehicle used during the full-scale test (e.g., the model was based on a Geo Metro whereas neither test used that particular brand), it has similar geometrical and inertial properties and falls into the same category of small cars. The original vehicle model was modified by a team of researchers from Politecnico di Milano in Italy. (77) The primary changes and improvements involved both the front and rear suspensions and the steering system. For the details of the modifications made to these components of the model, refer to the ROBUST reports.(43, 77) Furthermore, other minor improvements were made to the original model such as re-meshing of the tires and the wheel rims. The total mass of the model was increased in order to comply with the specifications of the European standard EN 1317 for a small vehicle (i.e., a 900-kg car impacting barriers at a

197 speed of 100 km/hr and 20°). (11) Table 38 shows a list of the modifications made to the original model. As no data regarding any experimental component tests relative to the actual Geo Metro car were available at the time the model was improved, only a partial PIRT of the vehicle model could be filled out. Table 38. Partial PIRT for the Geo Metro model. (43) Phenomena Summary Valid? Front suspension system (independence from steering system) Response verified through visual observation of computer model results Verification Steering system properties i) Ackerman principle ii) Steep stop Response verified through visual observation of computer model results Verification Front suspensions and steer stability (curb response) Response verified through visual observation of computer model results Verification Rear suspension system i) Correct kinematics ii) Curb response Response verified through visual observation of computer model results Verification PIRT FOR THE CONCRETE BARRIER The model of the concrete barrier developed in the study is shown in Figure 76. Due to the simplicity of the geometry of the actual design and its strength, the barrier has been modeled as a monolithic wall made of solid elements (Figure 76). In order to improve the contact between the vehicle and the barrier model, the external surfaces of the solid elements facing the front side of the barrier model were overlapped by shell elements defined with the LSDYNA null material (i.e., Mat_009). As experienced during the various tests independently carried out by various test agencies, the barrier was always firmly held in its position by placing it against concrete parapets or massive concrete blocks so the corresponding model was rigidly anchored to the ground. The total length of the barrier model is 15 m. Considering the simple nature of the concrete barrier, which can be considered for its full extent a rigid wall, no tests were performed to validate any of its components (i.e., concrete blocks and connections). As a consequence, the PIRT table for the barrier model could not be filled out. VALIDATION OF THE GEO METRO MODEL This section decribes the steps followed in the validation process of the finite element model of the Geo Metro vehicle. The detailed validation report for this case can be found in Appendix C3 for the test using a Fiat (Test 1) and Appendix C4 for the test using a Peugout (Test 2).

198 Figure 76. Finite element model of the concrete barrier. Part I – Basic Information The first step of the validation procedure is to document basic information about the test and simulation on the Cover Page and in Part I of the validation report. The validation of the improved Geo Metro model was performed by comparing the outcomes of the experimental tests against the corresponding numerical results. In this case, two sets of experimental data were obtained from the full-scale crash test No S70 (Test 1) and ROU/ROB -02/664 (Test 2) performed by two of the test agencies involved in the Robust Project. (43) The tests were conducted according to the testing guidelines of the European standard EN 1317 for the test level TB11. Test 1 involved a 922-kg (2,033-lb) small car vehicle while in Test 2 the vehicle mass was 862 kg (1,900 lb). In both tests the vehicle impacted against a concrete barrier at 100 km/hr (62 mph) and an impact angle of 20 degrees. In particular, the test vehicle used in Test 1 was a Fiat UNO (2nd edition) and the test vehicle used in Test 2 was a Peugeot 206. The test article, which was of the same exact type in both the two tests, was a 0.816 m (32 inches) high concrete barrier with an installation length of 30 m (98 ft). The numerical model of the vehicle and the barrier used in the simulation were the modified version of the Geo Metro and the monolithic barrier model described in the previous sections. Table 39 summarizes the information about the vehicle and the test/simulations conditions.

199 Table 39. Vehicle type and impact conditions for the two tests in Case 3. General Information Known Solution (Test 1) Known Solution (Test 2) Analysis Solution Performing Organization Robust test agency 1 Robust test agency 2 Politecnico di Milano Test/Run Number: S 70 ROU/ROB-02/664 GM_R5 Vehicle: F i a t U N O Peugeot 205 Geo Metro (GM_R5) Impact Conditions Vehicle Mass: 922 kg (2,033 lb) 862 kg (1,900 lb) 860 kg (1,896 lb) Speed: 100.33 km/h (62 mph) 100.4 km/h (62 mph) 100 km/h (62 mph) Angle: 20deg 20deg 20 deg. Impact Point: 10 m (33 ft) from beginning 10.7 m (35 ft) from beginning 4.5 m (15 ft) from beginning The qualitative assessment of the vehicle response obtained from the numerical simulation compared well with both of the two full-scale tests. In both cases, the numerical model was able to replicate the general vehicle kinematics and the timing of the actual experimental test during the first phase of the impact. Figure 77 shows a sequential comparison of the vehicle behavior between Test 1 (left side of Figure 77) and Test 2 (right side of Figure 77), and the numerical analysis (middle column of Figure 77) . In the second phase of the impact, a different behavior in the vehicle kinematics occurred between the numerical solution and the two experimental tests. In the numerical model, the vehicle tended to remain parallel to the barrier during the entire test while in the actual tests it was redirected back toward the roadway. In Test 1, the vehicle eventually settled back parallel to the barrier. The reason for this difference in the vehicle trajectory of the numerical model is probably due to the turning of the right front wheel towards the barrier in the simulation caused by the failure of the steering system when the opposite wheel hits against the rigid barrier. Although suspension failure did not happen in these two particular experimental tests, this phenomenon is not unusual in such a small car impact. The comparison of the experimental and numerical curves was performed on a time interval smaller than the entire period which was simulated and is shown in the sequence in Figure 77.

Figure 77. Sequential views of the experimental tests (Test 1, left column; Test 2, right column) and the finite element simulation (center column) at 0.1, 0.2, 0.4, 0.5, 0.6 and 0.7 sec for Case 3.

201 Part II - Solution Verification The first step in the validation of the numerical analysis involves simple checks to ensure that the model is stable and capable of producing physically reasonable results. This first set of criteria serve to make sure that the model does not contain any numerical errors and it complies with the basic general physics laws; hence, passing these checks is considered a conditio sine qua non which it is necessary to comply with but not yet sufficient to consider the model as validated. In fact, these early controls are made independently from the results of the experimental tests. This step is performed by checking that the global energies and mass involved in the simulation vary within a reasonable range. From the energy point of view, as no external energy is added to the system, the total energy should remain essentially constant. The time histories of the global energies, normalized with respect to the initial energy, involved in the simulation are shown in Figure 78. As can be seen, the total initial energy stays constant for the duration of the simulation. At the beginning of the simulation, the total energy of the system is purely the initial kinetic energy of the vehicle. The decrease of the kinetic energy during the impact is compensated by an equivalent increase of the internal energy and the energy dissipated by the frictional forces. Also, the fictitious hourglass energy can be considered practically null. Similarly, the total mass of the system should stay constant and any increase of the total mass of the model due to the application of the mass-scaling technique during the simulation should be negligible with respect to the initial mass. Table 40 shows the verification of the energy and mass conservation performed according to the criteria recommended in the procedures for verification and validation proposed in this research.

202 Table 40. Analysis Solution Verification Table for Case 3. Verification Evaluation Criteria Change (%) Pass? Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. -1 YES Hourglass Energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES Mass added to the total model is less than five percent of the total model mass at the beginning of the run. 0 YES The part/material with the most mass added had less than 10 percent of its initial mass added. 0 YES The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. 0 YES There are shooting nodes in the solution? No YES There are solid elements with negative volumes? No YES Figure 78. Plot of normalized global energy time histories for Case 3.
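The global checks summarized in Table 40 (and in Table 31 for Case 2) are simple ratio tests on quantities available from the solver's energy and mass output. The sketch below shows how an analyst might script them; the argument names and the assumption that the quantities have already been extracted from the solver output are illustrative, while the thresholds are those listed in the table.

def verify_solution(total_energy, hourglass_energy_end, internal_energy_end,
                    max_part_hourglass, added_mass, total_mass,
                    moving_added_mass, moving_mass):
    """Global energy and mass balance checks patterned after Table 40.
    total_energy is a time history; the remaining arguments are scalars."""
    e0 = total_energy[0]
    return {
        "total energy varies less than 10%":
            max(abs(e - e0) for e in total_energy) / e0 < 0.10,
        "final hourglass energy less than 5% of initial energy":
            hourglass_energy_end / e0 < 0.05,
        "final hourglass energy less than 10% of final internal energy":
            hourglass_energy_end / internal_energy_end < 0.10,
        "worst part hourglass energy less than 5% of initial energy":
            max_part_hourglass / e0 < 0.05,
        "added mass less than 5% of total mass": added_mass / total_mass < 0.05,
        "added mass on moving parts less than 5% of moving mass":
            moving_added_mass / moving_mass < 0.05,
    }

# Entering "Yes" for the solution verification part of the report corresponds to
# all(verify_solution(...).values()) being True, together with the checks for
# shooting nodes and negative-volume solid elements.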

203 As all the criteria listed in Table 40 are considered acceptable, the simulation can be considered verified with respect to the conventional conservation laws. The assessment for Part I was entered as “Yes” on the cover page of the validation report for both Test 1 and Test 2. Part III - Quantitative Evaluation The next step of the validation procedure was the quantitative comparison of the time histories using the Sprague & Geers and ANOVA comparison metrics which were computed using the RSVVP computer program described in Chapter 5. The two full-scale crash tests used different instrumentation setups. Test 1 used X, Y and Z accelerometers and a Yaw rate gyro whereas Test 2 used three accelerometers (i.e., X, Y and Z) and three rate gyros (i.e., roll, pitch and yaw). The data were collected for both tests and the simulation at the vehicle center of gravity. According to the RSVVP nomenclature, the experimental curve is defined as the “true” curve while the numerical curve is the “test” curve. The original numerical input curves had already been initially filtered using an SAE 180 class filter. Before the metrics were calculated, both the experimental and numerical curves were preprocessed using the RSVVP preprocessing options. The units and, for certain channels also the sign, of the numerical time histories needed to be adjusted to be consistent with the experimental curves. In particular, the numerical acceleration channels were converted from mm/s2 to g’s and the sign of the Y, Z acceleration and the Yaw and Pitch rates (for Test 2) were inverted due to a different reference system between Report 350 and EN 1317. The experimental time histories from both Test 1 and Test 2 were manually trimmed as they were characterized by a considerably long flat head due to very early triggering and an excessively long flat tail. After the experimental curves were manually trimmed, RSVVP automatically shifted the time vector to the origin (i.e., the beginning of the impact event started at time zero). The tails of the simulation time histories were manually trimmed after 0.4 sec, in order to consider only the interval of impact. Eventually, both the experimental and numerical curves were re-filtered in RSVVP using the SAE J211 CFC180 Class filter option and each channel was synchronized using the minimization of the area of the residuals method in RSVVP. Figure 79 and Figure 80 show comparisons of the original and preprocessed curves for each of the input channels for Test 1. From the graph of the preprocessed yaw rate in Figure 80 it is clearly evident that the time interval on which the true and test curves were compared was large enough to adequately cover all the phases of the impact. In fact, the selected interval completely contained the curve since the velocity increased from zero until the time it was stabilized back to a null value. Similar conclusions can be drawn considering the six channels when using the time histories from Test 2. The comparisons between the original and preprocessed curves for the six channels considering Test 2 are shown in Figure 81 and Figure 82.
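The preprocessing described above (conversion of the simulation channels from mm/s2 to g's, sign changes to reconcile the Report 350 and EN 1317 reference systems, and trimming of the flat head before the impact) is mechanical and could be scripted along the following lines. The threshold used here to detect the start of the impact is an arbitrary illustrative choice, not a value prescribed by RSVVP or EN 1317.

import numpy as np

G_IN_MM_S2 = 9810.0   # one g expressed in mm/s^2

def preprocess_channel(time_s, accel_mm_s2, flip_sign=False, head_threshold_g=0.5):
    """Convert a simulation acceleration channel to g's, flip its sign if the
    reference systems differ, and trim the flat head before the impact event."""
    a = np.asarray(accel_mm_s2, dtype=float) / G_IN_MM_S2
    if flip_sign:
        a = -a
    t = np.asarray(time_s, dtype=float)
    start = int(np.argmax(np.abs(a) > head_threshold_g))   # first sample above the threshold
    return t[start:] - t[start], a[start:]                  # time vector shifted to the origin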

204 Original curves Preprocessed curves X acceleration Y acceleration Figure 79. Original and preprocessed acceleration curves for Case 3 with Test 1.

205 Original curves Preprocessed curves Yaw rate Figure 80. Original and preprocessed Yaw rate curve for Case 3 with Test 1. Original curves Preprocessed curves X acceleration Y acceleration Z acceleration Figure 81. Original and preprocessed acceleration curves for Case 3 with Test 2.

206 Original curves Preprocessed curves Yaw rate Roll rate Pitch rate Figure 82. Original and preprocessed rotational rate curves for Case 3 with Test 2. The time histories were compared over the complete time history of the preprocessed data, which was 0.4 seconds. The comparison metrics were computed for each individual channel using the single channel option in RSVVP and they were also computed using the multi- channel weighting option in RSVVP. In particular, the weights were assessed automatically by RSVVP based on the area of the time history of the experimental curve for each channel. The results of the evaluation for the individual channels are shown in Table 41. For Test 1, the only channel that was outside the acceptance criterion for both the Sprague & Geers and the ANOVA metrics was the acceleration along the vertical axis (i.e., Z). In particular, the magnitude component of the Sprague & Geers was significantly greater than the proposed criterion of 40 percent indicating that, for this channel, the numerical curve had a

207 considerable difference in magnitude respect to the corresponding experimental curve. The vertical acceleration in this type of redirectional crash test is likely negligible with respect to the other channels, in particular the lateral acceleration along the Y axis. All the other input channels (i.e., X and Y accelerations and Yaw rate) were acceptable according to the acceptance criteria. The comparison of the six individual channels in the case of Test 2 confirms the disagreement between the vertical acceleration time histories from the experimental test and the numerical analysis. As was the case for Test 1, the magnitude component of the Sprague & Geers metric is slightly above the acceptance value (i.e., 41.4 %) for the X channel and even higher for the Z channel. As for the rotational rates, the yaw rate time histories were a good match, but the less important pitch and roll rates did not result in good comparisons. A visual confirmation of the results obtained from the values of the comparison metrics from each input channel can be found in the analysis of the integrated time histories which are shown in Figure 83 and Figure 84. Considering the acceleration channels, the integral functions of the Z-acceleration time histories are clearly the ones with the worst match for both the cases with Test 1 and test 2. Furthermore, the comparison of the integral functions from the accelerations along the X axis denotes that the case with Test 1 has a slightly better agreement than with Test 2, especially the magnitude. This is in agreement with the fact that the value of the M component of the Sprague & Geers metric which is computed using the acceleration time histories, in the case with Test 2, presents a slightly worse magnitude correlation between the curves. As for the rotational rate channels, also in this instance, the visual inspection of the corresponding integral functions shows a better agreement in the comparison involving Test 1.
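The integral functions referred to above are simply the running integrals of each preprocessed channel, that is, the change in velocity for an acceleration channel or the change in angle for an angular-rate channel. They can be produced directly from the curve pairs, for example:

import numpy as np

def running_integral(signal, dt):
    """Cumulative trapezoidal integral of a uniformly sampled channel."""
    s = np.asarray(signal, dtype=float)
    return np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * dt)))

# Plotting running_integral(true_curve, dt) against running_integral(test_curve, dt)
# for each channel produces the comparisons shown in Figures 83 and 84.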

208 Table 41. Roadside safety validation metrics rating table for Case 3 (single channel option). Evaluation Criteria Time interval [0 sec; 0.4 sec] O Sprague-Geers Metrics List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. Simulation vs. Test 1 Simulation vs. Test 2 RSVVP Curve Preprocessing Options M [%] P [%] Pass? M [%] P [%] Pass? Filter Option Sync. Option Shift Drift True Curve Test Curve True Curve Test Curv e X acceleration CFC 180 Min. area of Residuals N N N N 7.7 36.8 Y 6.8 41.3 N Y acceleration CFC 180 Min. area of Residuals N N N N 24.5 38.5 Y 12.3 39.7 Y Z acceleration CFC 180 Min. area of Residuals N N N N 218 41.5 N 181 47.8 N Yaw rate N/A N/A N/A N/A N/A N/A 0.7 11.1 Y 16.4 12 Y Roll rate N/A N/A N/A N/A N/A N/A N/A N/A N/A 46.2 50.1 N Pitch rate N/A N/A N/A N/A N/A N/A N/A N/A N/A 38.7 40.2 N P ANOVA Metrics List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met: • The mean residual error must be less than five percent of the peak acceleration ( Peakae ⋅≤ 05.0 ) and • The standard deviation of the residuals must be less than 35 percent of the peak acceleration ( Peaka⋅≤ 35.0σ ) M ea n R es id ua l [ % ] S ta nd ar d D ev ia tio n of R es id ua ls [% ] Pass ? M ea n R es id ua l [ % ] S ta nd ar d D ev ia tio n of R es id ua ls [% ] Pass ? X acceleration/Peak 0.82 17.4 Y 0.9 16.7 Y Y acceleration/Peak -2.32 30.5 Y -1 20 Y Z acceleration/Peak -2.84 54.2 N -3 53 N Yaw rate 3.3 9.5 Y -11 11.8 N Roll rate N/A N/A N/A 6.2 36.7 N Pitch rate N/A N/A N/A -0.11 16.1 Y

209 Simulation vs. Test 1 Simulation vs. Test 2 X acceleration Y acceleration Z acceleration Figure 83. Comparison of the integral functions for each of the acceleration channels for Case 3.

210 Simulation vs. Test 1 Simulation vs. Test 2 Yaw rate N/A Roll rate N/A Pitch rate Figure 84. Comparison of the integral functions for each of the rotational rate channels for Case 3. Since not all the channels satisfied the single-channel acceptance criteria, Test 1 and Test 2 were further compared using the multi-channel option in RSVVP in order to calculate the weighted Sprague & Geers and ANOVA metrics. The weighting factors were automatically calculated based on the area of the true curves (i.e., method Area II in RSVVP).

211 Table 42. Roadside safety validation metrics rating table for the Case 3 (multi-channel option). Evaluation Criteria (time interval [0 sec; 0.4 sec]) Channels (Select which was used) Simulation vs. Test 1 X Acceleration Y Acceleration Z Acceleration Roll rate Pitch rate Yaw rate Multi-Channel Weights -Area (II) Method- X Channel – 0.16 Y Channel – 0.30 Z Channel – 0.04 Yaw rate Channel – 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 X acc Y acc Z acc Yaw rate Simulation vs. Test 1 X Acceleration Y Acceleration Z Acceleration Roll rate Pitch rate Yaw rate Multi-Channel Weights -Area (II) Method- X Channel – 0.17 Y Channel – 0.28 Z Channel – 0.05 Yaw rate Channel – 0.36 Roll rate Channel – 0.10 Pitch rate Channel – 0.04 Simulation vs. Test 1 Simulation vs. Test 2 O Sprague-Geer Metrics Values less or equal to 40 are acceptable. M [%] P [%] Pass? M [%] P [%] Pass? 17.6 24.7 Y 25.7 31.5 Y P ANOVA Metrics Both of the following criteria must be met: • The mean residual error must be less than five percent of the peak acceleration ( Peakae ⋅≤ 05.0 ) • The standard deviation of the residuals must be less than 35 percent of the peak acceleration ( Peaka⋅≤ 35.0σ ) M ea n R es id ua l S ta nd ar d D ev ia tio n of R es id ua ls Pass? M ea n R es id ua l S ta nd ar d D ev ia tio n of R es id ua ls Pass? 1% 18.8% Y -3.7 19.6 Y The weighted composite value of both the Sprague & Geers and the ANOVA metrics, and the weighting factors are shown in Table 42. In this case, all the values meet the acceptance criteria. As can be seen from the column diagrams of the weighting factors for the comparison

case with Test 1, the yaw rate channel is given 50 percent of the total weight. This happens because RSVVP assumes that the weights are equally distributed between the acceleration and the rotational rate groups and, in the case with Test 1, the yaw rate is the only channel in the rotational rate group, so it receives half of the total weight (i.e., 50 percent). Even in the case with Test 2, where all six channels were involved, the yaw rate channel was the one with the highest weighting factor, which is reasonable for this type of re-directional longitudinal barrier crash test. For both tests, the channel with the highest weight in the acceleration group is the Y acceleration (i.e., the lateral direction). This is also reasonable considering the type of impact: a 20-degree re-directional impact. In particular, the Z acceleration channel was considered negligible in the comparisons with both tests, as expected. In the case of Test 2, the roll- and pitch-rate channels are negligible as well.

Another confirmation of the low weighting factor assigned to the Z-acceleration channel can be obtained by analyzing the integrated acceleration time histories of the X, Y, and Z channels shown in Figure 83 for Case 3. It can be clearly seen that the change in velocity along the Z direction (i.e., the integral of the Z-acceleration curve) is insignificant with respect to the other two acceleration channels. Similar conclusions can be drawn for the roll- and pitch-rate channels for Test 2 (Figure 84). Considering the results obtained from the calculation of the comparison metrics in the multi-channel case, the quantitative evaluation of the time histories can be regarded as passed for the comparisons with both Test 1 and Test 2. Hence, the validation assessment for Part III was entered as "Yes" on the cover page of the respective validation report.

Part IV - Validation of Crash Specific Phenomena

This section describes the last part of the verification and validation report, the comparison of the phenomena characterizing both the experimental test and the numerical simulation. Referring to the phenomena selection table shown earlier in Table 26, the criteria that apply to this type of crash test (i.e., a Report 350 test 3-10) are A, D, F, H, I and M. Table 43 through Table 45 show, respectively, the phenomena related to structural adequacy, occupant risk and vehicle trajectory for the comparison with Test 1. In each of the three tables, the criteria applicable to this particular case are indicated with a red square. Similar results and conclusions were obtained in the corresponding tables for the comparison involving Test 2; for the sake of conciseness, those tables are not shown here, although they are included in Appendix C4.

As shown in Table 43, all the structural adequacy criteria agree with the exception of criterion A3. As explained earlier, the reason is that the vehicle in the simulation steered into the wall due to suspension damage, whereas in Test 1 the vehicle was re-directed away from the wall. The absence of agreement for criterion A3 points out the need to model the suspension accurately. In fact, the difference may well be due to the differences between the actual vehicles used in the tests (i.e., a Fiat and a Peugeot) and the vehicle model (i.e., a Geo Metro).

Table 43. Structural Adequacy Phenomena for Case 3 with Test 1. (Columns: criterion; known result; analysis result; relative difference; agree?)

Structural adequacy, criterion A:
- A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) Known: Yes; Analysis: Yes. Agree? YES
- A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. Known: 0; Analysis: 0; Difference: 0% / 0 m. Agree? YES
- A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. Known: 7 m; Analysis: 10 m; Difference: 30% / 3 m. Agree? NO
- A4. The relative difference in the number of broken or significantly bent posts is less than 20 percent. Known: 0; Analysis: 0; Difference: 0. Agree? YES
- A5. The rail element ruptured or failed. (Answer Yes or No) Known: No; Analysis: No. Agree? YES
- A6. There were failures of connector elements. (Answer Yes or No) Known: No; Analysis: No. Agree? YES
- A7. There was significant snagging between the vehicle wheels and barrier elements. (Answer Yes or No) Known: No; Analysis: No. Agree? YES
- A8. There was significant snagging between vehicle body components and barrier elements. (Answer Yes or No) Known: No; Analysis: No. Agree? YES

The occupant risk phenomena comparisons for Test 1 and the simulation are shown in Table 44. There is good agreement for all aspects of criterion H in both relative and absolute terms, but the lateral ORA and the PHD of criterion I both exceed the absolute and relative acceptance criteria. In both cases, the experimental response is stiffer than the simulated one. This may also be a result of the difference between the vehicles, but it is an issue that should be examined further by the analyst.

The severity indexes for criteria H and I were computed using exactly the same preprocessed curves used by RSVVP to calculate the comparison metrics. Initially, the preprocessed time histories were considered for their total length (i.e., [0 sec, 0.4 sec]), but this created an inconsistency in the time at which TRAP evaluated the maximum longitudinal Occupant Ridedown Acceleration (ORA) for the experimental and simulation results. Also, because of the different ORA times, the values of the longitudinal ORA had opposite signs. Figure 85 shows the 10-ms averaged time histories of the X-acceleration channel and the time of the maximum ORA for both the true (experimental) and test (simulation) curves. Although the longitudinal acceleration curves for the experimental and numerical cases were in good agreement over the time interval during which the impact occurred, the presence of some smaller peaks in the numerical curve, which did not appear in the actual test, affected the time at which the maximum ORA was calculated. Considering only the time interval during which the impact occurred (i.e., [0 sec, 0.2 sec]) allowed

TRAP to resolve this inconsistency, as shown in Figure 85b. Limiting the interval of the curves did not affect the value of any of the other severity indexes evaluated by TRAP.

Criterion M2 in Table 45 did not compare well. This is a result of the vehicle steering into the barrier in the simulation and away from it in both tests. Although the comparisons were acceptable for most of the criteria in Table 43 through Table 45, a few were not within the acceptance criteria, indicating that this result should not yet be considered valid. The analyst should re-examine the model and determine why the suspension components failed in so different a manner. Resolving the suspension failure issue should improve the failed results for criteria M2 and I.
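The sensitivity described above, in which a few extra peaks in the simulated curve shift the time at which the maximum is found, can be illustrated with a simple moving-average sketch. This is not TRAP's ORA algorithm (which is based on the flail-space model); it is only a hypothetical illustration of how restricting the evaluation interval changes where the peak of a 10-ms averaged trace occurs.

```python
import numpy as np

def max_of_10ms_average(time, accel, t_end=None):
    """Return the time and value of the peak magnitude of a 10-ms moving
    average of an acceleration trace, optionally restricted to t <= t_end.
    Assumes a uniformly sampled record; illustrative only."""
    time = np.asarray(time, dtype=float)
    accel = np.asarray(accel, dtype=float)
    if t_end is not None:
        keep = time <= t_end
        time, accel = time[keep], accel[keep]
    dt = time[1] - time[0]
    n = max(1, int(round(0.010 / dt)))          # samples in a 10-ms window
    avg = np.convolve(accel, np.ones(n) / n, mode="same")
    i = int(np.argmax(np.abs(avg)))
    return time[i], avg[i]

# Evaluating over [0, 0.4] s may lock onto a late spurious peak, while
# restricting the record to [0, 0.2] s keeps the peak inside the impact:
#   max_of_10ms_average(t, ax)   vs.   max_of_10ms_average(t, ax, t_end=0.2)
```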

Table 44. Occupant Risk Phenomena for Case 3 with Test 1. (Columns: criterion; known result; analysis result; relative difference; agree?)

Occupant risk:
- D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) Known: Pass; Analysis: Pass. Agree? YES
- F1. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Pass or Not pass) Known: Pass; Analysis: Pass. Agree? YES
- F2. The relative difference between the maximum roll of the vehicle is less than 20 percent. Known: ≈3° (1); Analysis: 2.5°; Difference: 16%. Agree? YES
- F3. The relative difference between the maximum pitch of the vehicle is less than 20 percent. Known: N/A; Analysis: N/A. Agree? N/A
- F4. The relative difference between the maximum yaw of the vehicle is less than 20 percent. Known: 16.8°; Analysis: 17.5°; Difference: 4%. Agree? YES
- H. Occupant impact velocities: relative difference is less than 20 percent or absolute difference is less than 2 m/s (2):
  - H1. Longitudinal OIV (m/s): Known: 4.5; Analysis: 3.3; Difference: -27% / 1.2 m/s. Agree? YES
  - H2. Lateral OIV (m/s): Known: -7.2; Analysis: -7.2; Difference: 0% / 0 m/s. Agree? YES
  - H3. THIV (m/s): Known: 7.9; Analysis: 7.6; Difference: -3.8% / 0.3 m/s. Agree? YES
- I. Occupant accelerations: relative difference is less than 20 percent or absolute difference is less than 4 g's (2):
  - Longitudinal ORA (g): Known: -5; Analysis: -3.5; Difference: 30% / 1.5 g's. Agree? YES
  - Lateral ORA (g): Known: 19.8; Analysis: 10; Difference: -49.5% / 9.8 g's. Agree? NO
  - PHD (g): Known: 20.4; Analysis: 11.2; Difference: -45% / 9.2 g's. Agree? NO
  - ASI: Known: 1.59; Analysis: 1.78; Difference: 11% / 0.2. Agree? YES

(1) The value was visually assessed from the image sequence of the test.
(2) The severity indexes were computed considering the curves preprocessed by RSVVP on the time interval [0 sec, 0.2 sec].

Figure 85. X-acceleration: time of the maximum longitudinal ORA considering the intervals (a) [0, 0.4] and (b) [0, 0.2] for Case 3 with Test 1.

Table 45. Vehicle Trajectory Phenomena for Case 3 with Test 1. (Columns: criterion; known result; analysis result; relative difference; agree?)

Vehicle trajectory, criterion M:
- M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. Known: ≈10° (1), Yes; Analysis: 0° (2), Yes. Agree? YES
- M2. The relative difference in the exit angle at loss of contact is less than 20 percent or 5 degrees. Known: 10°; Analysis: 0° (2); Difference: -100% / 10°. Agree? NO
- M3. The relative difference in the exit velocity at loss of contact is less than 20 percent or 10 km/hr. Known: 78.8 km/h; Analysis: 82 km/h (2, 3); Difference: 4% / 3.2 km/hr. Agree? YES
- M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No) Known: Yes; Analysis: Yes. Agree? YES
- M5. One or more tires separated from the vehicle. (Answer Yes or No) Known: No; Analysis: No. Agree? YES

(1) The value was visually assessed from the image sequence of the test.
(2) The vehicle slid along the whole length of the barrier and never lost contact.
(3) The exit velocity was considered at the same time the vehicle lost contact with the barrier in the experimental test (t = 0.35 sec).
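Criterion M1 in Table 45 is a simple threshold on the exit angle relative to the impact angle. A minimal sketch, assuming the 20-degree impact angle of this test, is shown below; the function name is illustrative.

```python
def exit_angle_acceptable(exit_angle_deg, impact_angle_deg):
    """Criterion M1: the exit angle should preferably be less than
    60 percent of the test impact angle."""
    return exit_angle_deg < 0.60 * impact_angle_deg

# Test 1 exited at roughly 10 degrees from a 20-degree impact (10 < 12),
# and the simulated vehicle never left the barrier (exit angle 0), so
# both satisfy the criterion.
print(exit_angle_acceptable(10.0, 20.0), exit_angle_acceptable(0.0, 20.0))
```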

TEST CASE 4: TRACTOR AND SEMITRAILER FE MODEL

INTRODUCTION

This fourth test case involves the NCAC/Battelle tractor-semitrailer finite element model impacting a "rigid" 42-inch tall median barrier at 80 km/hr and an impact angle of 15 degrees. The National Transportation Research Center, Inc. (NTRCI), in collaboration with the Turner-Fairbank Highway Research Center (TFHRC), sponsored the research team of Battelle, Oak Ridge National Laboratory (ORNL), and the University of Tennessee at Knoxville (UTK) to conduct a three-phase investigation to enhance and refine a finite element model (the beta version of the tractor model was developed by the NCAC) for simulating tractor-semitrailer crash events involving roadside safety barriers such as bridge rails and median barriers. A quantitative evaluation of the model results is presented here based on the procedures outlined in Chapter 4. The project is currently in its third phase and is not yet complete; therefore, the data presented in the following sections are tentative for this model.

The tractor-semitrailer model, shown in Figure 86, is based on a 1992 Freightliner FLD120 tractor with an integral sleeper cabin, and the semitrailer model is based on a 1990 Stoughton box semitrailer. The model consists of 417,550 nodes, 348,700 elements, and 541 parts. More details regarding the development of the model can be found in the project reports.(147, 148, 149)

Figure 86. NCAC/Battelle tractor-semitrailer model.(147)

PIRT FOR THE TRACTOR-SEMITRAILER MODEL

The use of this model has so far been limited to re-directive impacts into rigid barriers (e.g., Report 350 test 5-12, through about 1.70 seconds of impact). The research team believes that this tractor-semitrailer model is reasonably valid for this type of crash simulation and that it

will provide useful results in general barrier design evaluation work regarding impact loads and general vehicle-barrier interaction. The model has not been assessed for use in other applications, such as high-energy impacts (e.g., a full frontal impact with a bridge pier), vehicle dynamics (e.g., vehicle response to steering maneuvers), or vehicle-to-vehicle impacts, to name a few. The model user must critically assess the results obtained from the model in all cases, but especially for applications for which the model has not been validated.

In the development of the model, several components were validated by simulating physical tests that were conducted on those components (or subassemblies of components) and comparing the results. The components of the tractor that have been validated are all related to the front and rear suspension and are listed in the preliminary PIRT shown in Table 46.

Table 46. Phenomenon Importance Ranking Table for the Tractor-Semitrailer Model (Case 4). (Columns: No.; validated phenomenon; test description; status: validated, verified, or calibrated.)
1. Front leaf-spring suspension. Uniaxial load vs. displacement. Validated.
2. Suspension displacement limiter. Uniaxial load/unload vs. displacement. Validated.
3. Rear shock absorbers. Uniaxial sinusoidal displacement tests to measure load-velocity time history data at various displacement rates. Calibrated.
4. Front shock absorbers. Uniaxial sinusoidal displacement tests to measure load-velocity time history data at various displacement rates. Calibrated.
5. Rear "air-bag" suspension. Compression/extension tests at various load rates and bag pressures. Validated.
6. Front suspension U-bolts. Uniaxial load vs. displacement to failure. Calibrated.

For example, in the development of the front leaf-spring model, a leaf-spring assembly for a 1992 Freightliner FLD120 tractor was purchased from a local Freightliner dealer. A laboratory test was conducted to measure the force versus displacement response of the leaf-spring assembly using an MTS uniaxial test machine. The test and test setup are shown in Figure 87.

Figure 87. Laboratory test of a 1992 Freightliner FLD120 leaf-spring suspension (Case 4).

The modeled geometry of the components of the leaf-spring assembly was created by digitizing the physical components and generating a three-dimensional rendering of each part using Pro-Engineer™ CAD software, as shown in Figure 88. The leaf spring was then meshed using thin shell elements. The taper of the leaves was accounted for in the model in a piecewise manner, as shown in the exploded view in Figure 89. Each colored segment was defined as a separate part in the model, and a representative thickness was assigned to each segment based on the average thickness of that segment (measured from the physical component).

An analysis was conducted to validate the stiffness of the leaf-spring model based on a comparison with the laboratory test results. The boundary and loading conditions were modeled based on the test fixture used in the laboratory test, as shown in Figure 90.

Figure 88. Digitized three-dimensional geometry of the 1992 Freightliner FLD120 suspension (Case 4).

Figure 89. Exploded view of the leaf-spring thin shell model (Case 4).

Figure 90. Finite element model for validating the leaf-spring stiffness response.

The finite element model of the leaf spring was built with two different mesh densities for comparison: (1) a nominal element size of 20 mm and (2) a nominal element size of 10 mm. The load was applied dynamically in the finite element analysis, whereas it was applied at a quasi-static rate in the physical test. Consequently, the results of the simulation were somewhat "noisy" at the beginning of the analysis but tended to damp out as the load increased. A comparison of the model results to the experimental results is shown in Figure 91.

Figure 91. Force-displacement response of the leaf spring from test and FEA.

The RSVVP program was used to quantify the similarity between the test and the simulation. The results of the quantitative assessment of the leaf-spring suspension are shown in Table 47. The coarser 20-mm mesh (the element size used in the tractor model) exceeded the recommended maximum ANOVA mean residual error of 0.05, whereas the refined 10-mm mesh met all of the acceptance criteria. It is assumed that the difference in load rate between the test and the simulation significantly affected the ANOVA metrics, since the ANOVA metrics are sensitive to "noise." The overall response of the models was slightly stiffer than the response measured in the physical test. The more refined mesh (i.e., 10-mm elements) yielded better agreement with the test, and it is expected that the response would continue to approach that of the test with further mesh refinement. This indicates that the material properties, geometric dimensions, and element thicknesses were accurate and that the remaining error was primarily due to the level of mesh refinement.

Table 47. Metric Evaluation Table for Leaf Spring Response (Case 4 Vehicle PIRT).

Sprague & Geers metrics (M / P / Pass?):
- Force-displacement history (element size 20 mm*): 11.3 / 0.9 / Y
- Force-displacement history (element size 10 mm): 5.9 / 1.1 / Y

ANOVA metrics (mean / standard deviation / Pass?):
- Force-displacement history (element size 20 mm*): 0.06 / 0.04 / N
- Force-displacement history (element size 10 mm): 0.03 / 0.03 / Y

General comparisons (test / FEA / error):
- Stiffness (lb/in), element size 20 mm*: 1176 / 1317 / 12%
- Stiffness (lb/in), element size 10 mm: 1176 / 1262 / 7.3%

* Element size used in the tractor model.

The details of the component validation for the remaining phenomena listed in Table 46 can be found in the NTRCI project reports. A summary of the validation of each phenomenon in the tractor-semitrailer vehicle PIRT can be found in Appendix C7.
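For reference, the Sprague & Geers magnitude and phase metrics and the ANOVA residual statistics reported in Table 47 can be computed directly from a pair of sampled curves. The sketch below follows the standard definitions of these metrics used in this report; the function names are illustrative assumptions, and RSVVP's preprocessing (filtering, synchronization) is not reproduced here.

```python
import numpy as np

def sprague_geers(true_curve, test_curve):
    """Sprague & Geers magnitude (M) and phase (P) metrics, in percent.
    true_curve is the measured response, test_curve the computed one,
    both sampled at the same points."""
    m = np.asarray(true_curve, dtype=float)
    c = np.asarray(test_curve, dtype=float)
    mm, cc, mc = np.sum(m * m), np.sum(c * c), np.sum(m * c)
    M = (np.sqrt(cc / mm) - 1.0) * 100.0
    P = (np.arccos(mc / np.sqrt(mm * cc)) / np.pi) * 100.0
    return M, P

def anova_metrics(true_curve, test_curve):
    """Mean and standard deviation of the residuals, normalized by the
    peak magnitude of the measured curve (the form tabulated above);
    the sign convention of the residual is an assumption."""
    m = np.asarray(true_curve, dtype=float)
    c = np.asarray(test_curve, dtype=float)
    r = (m - c) / np.max(np.abs(m))
    return float(r.mean()), float(r.std())
```

With the measured and computed force-displacement histories as inputs, these two functions return values of the kind listed in Table 47 (M, P, mean residual, and residual standard deviation).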

PIRT FOR THE MEDIAN BARRIER MODEL

In this case, the objective was to validate the vehicle model. As such, the test selected for the validation was one that involved minimal or zero deflection of the barrier during impact, which ensured that any discrepancies between the simulation and test results were due to errors in the vehicle model and not the barrier model. Considering the simple nature of the rigid concrete barrier, no tests were performed to validate any of its components (i.e., concrete blocks and connections); the geometric dimensions of the barrier model, however, were verified to be consistent with those of the test article. As a consequence, the PIRT table for the barrier model could not be filled out. The top of the concrete barrier was not flat, but shaped as shown in Figure 93.

VALIDATION OF THE TRACTOR-SEMITRAILER FINITE ELEMENT MODEL

The performance of the tractor-semitrailer finite element model was evaluated by comparing simulation results to data obtained from full-scale crash test No. TL5CMB-2, conducted at the Midwest Roadside Safety Facility on July 12, 2007.(148) The crash test was conducted according to the testing guidelines of NCHRP Report 350 for Test Level 5 impact conditions. The test involved a 36,153-kg (79,705-lb) tractor-semitrailer vehicle impacting a concrete median barrier at 84.9 km/hr (52.7 mph) and an impact angle of 15.4 degrees. The test vehicle was a 1991 White GMC tractor with a 1988 Pines 14.6-m (48-ft) semitrailer. The test article was a 1.067-m (42-inch) tall concrete median barrier with an installation length of 60.9 m (200 ft).

The simulation was conducted by Battelle in May 2009 with model No. TT090518_RUN1_200ms-approach-SP. The impact conditions for the numerical analysis were consistent with those used in the full-scale crash test. The simulation used the tractor model Trac_Day_v1a_090506 and the semitrailer model SemiTrailer48_090520. A summary of the test and simulation information is provided in Table 48. This information was also documented on the cover page and in Part I of the validation report shown in Appendix C7.

Table 48. Summary of the test and simulation impact conditions for Case 4. (Known solution / analysis solution)

General information:
- Performing organization: MwRSF / WPI-Battelle
- Test/run number: TL5CMB-2 / TT090518_RUN1_200ms-approach-SP
- Vehicle: 1991 White/GMC tractor with 1988 Pines 48-ft trailer / 01aTrac_Day_v1a_090506.k with 02aSemiTrailer48_090520.k

Impact conditions:
- Vehicle mass: 36,154 kg / 36,200 kg
- Speed: 84.9 km/hr / 84.9 km/hr
- Angle: 15.5 degrees / 15.5 degrees

This version of the tractor model was modified from the original model. The geometry of the tractor finite element model was modified so that the wheelbase of the model matched the wheelbase of the test vehicle used in MwRSF test No. TL5CMB-2. In particular, a section of the sleeper cabin was removed to make the tractor model a day-cab style tractor, and the wheelbase length of the model was adjusted by removing a section of the frame rails (along with other components in this section of the model). The geometric and mass inertial properties of the modified tractor-semitrailer model are compared to those of the test vehicle in Figure 92. The most notable differences between the test vehicle and the modified finite element model are listed below:

• The length dimensions of the finite element model were all within two percent of the test vehicle dimensions, except for the distance from the front bumper to the center of the front wheel (dimension "B" in Figure 92), which was 13.5 percent shorter in the finite element model.
• The trailer floor in the finite element model was 148 mm (5.8 inches) higher than in the test vehicle (dimension "L" in Figure 92), and the top of the trailer in the finite element model was 169 mm (6.7 inches) lower than in the test vehicle (dimension "W" in Figure 92).
• The center of gravity of the ballast in the finite element model was located 600 mm (23.6 inches) rearward of and 188 mm (4.6 inches) higher than the c.g. location of the ballast in the test vehicle.
• The suspension system on the finite element trailer model was the Airide™ design, whereas the suspension on the trailer test vehicle was a leaf-spring design.

In the qualitative assessment, the general response of the modified finite element model compared well to test TL5CMB-2; the model results replicated the basic timing and magnitudes of the phenomenological events that occurred in the full-scale test. Figure 93 shows a comparison of sequential views of the test and simulation.

Figure 92. Comparison of finite element vehicle model dimensions to those of the test vehicle for Case 4.

Figure 93. Summary of Phenomenological Events that Occurred during Full-scale Test and Finite Element Model Simulation in Case 4 (frames at 0.4, 0.8, and 1.2 seconds).

Figure 93. Summary of Phenomenological Events that Occurred during Full-scale Test and Finite Element Model Simulation in Case 4 (continued; frames at 1.4 and 1.7 seconds).

Part I - Solution Verification

The next step in the validation process is to perform global checks of the analysis to verify that the numerical solution is stable and is producing physical results (e.g., results that conform to the basic laws of conservation). The analysis was modeled as a closed system; therefore, the total energy should remain constant throughout the analysis and should be equal to the initial kinetic energy of the impacting vehicle. It is typical to expect some error in the analysis due to numerical inaccuracies in element formulation, contact definitions, mass scaling, etc. It is therefore necessary to ensure that these errors are sufficiently small that they have minimal effect on the solution. Table 49 shows a summary of the global verification assessment based on the criteria recommended in the verification and validation procedures discussed in Chapter 4.
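The checks summarized in Table 49 are simple ratios of energy and mass quantities reported by the solver. A minimal sketch of how they might be scripted is shown below; the argument names are illustrative placeholders, not solver output keywords.

```python
def global_energy_checks(total_energy, hourglass_energy, internal_energy,
                         max_part_hourglass, added_mass, model_mass):
    """Global verification checks of the kind listed in Table 49.
    Energy arguments are assumed to be time series (index 0 = start of the
    run, index -1 = end of the run); the mass arguments are scalars."""
    e0 = total_energy[0]
    return {
        "total energy change within 10% of initial": abs(total_energy[-1] - e0) / e0 <= 0.10,
        "hourglass energy below 5% of initial total": hourglass_energy[-1] / e0 <= 0.05,
        "hourglass energy below 10% of final internal": hourglass_energy[-1] / internal_energy[-1] <= 0.10,
        "worst part hourglass below 5% of initial total": max_part_hourglass / e0 <= 0.05,
        "added mass below 5% of model mass": added_mass / model_mass <= 0.05,
    }
```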

Table 49. Summary of Global Energy Checks for Case 4. (Columns: verification evaluation criterion; change (%); pass?)

- Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. Change: 10%. Pass? YES (the sliding interface energy was the source of the increase in total energy).
- Hourglass energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. Change: 0.1%. Pass? YES
- Hourglass energy of the analysis solution at the end of the run is less than ten percent of the total internal energy at the end of the run. Change: 0.6%. Pass? YES
- The part/material with the highest amount of hourglass energy at any time during the run has less than five percent of the total initial energy at the beginning of the run. Change: 0.02%. Pass? YES
- Mass added to the total model is less than five percent of the total model mass at the beginning of the run. Change: 0.0%. Pass? YES
- The part/material with the most mass added had more than 10 percent of its initial mass added.* Change: 400% (weld elements connecting the trailer side panels to the vertical posts: 200 kg added on 50 kg of initial mass). Pass? NO*
- The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. Change: 0.0%. Pass? YES
- Are there shooting nodes in the solution? No. Pass? YES
- Are there solid elements with negative volumes? No. Pass? YES

* Part 7803 consists of weld elements used to connect the trailer's outer side panels to the vertical support posts. These connector elements are relatively "rigid," and the added mass is considered insignificant relative to the overall mass of the parts they connect. The additional mass added to the weld elements was considered acceptable by the model developers.

The solution meets all other recommended global energy balance criteria and appears to be free of any major numerical problems. Thus, the verification assessment for Part I was entered as "Yes" on the cover page of the validation report.

Part II - Quantitative Evaluation

Next, the RSVVP computer program was used to compute the Sprague & Geers metrics and the ANOVA metrics using time-history data from the known (i.e., physical test) and analysis results. The number, type, and location of the electronic data recorders (EDRs) vary from test to test. The most common EDR locations are: 1) inside the tractor cabin near the center of gravity

of the tractor, 2) on the tractor near the fifth wheel, 3) at the front of the trailer near the king-pin, 4) at the center of gravity of the trailer ballast, and 5) on the trailer near the trailer tandem axle. The translational accelerations and rotational velocities of the vehicle model were collected at 16 locations on the tractor-semitrailer. The test vehicle was instrumented with only two triaxial accelerometers: one located near the tractor fifth wheel and another located on the floor of the semitrailer near the trailer's tandem axle. Due to technical issues that caused the EDR on the tractor's fifth wheel to start recording prematurely, the only quantitative data recorded in the full-scale test came from the accelerometer set located at the rear of the trailer. This location is consistent with accelerometer No. 16 in the numerical model, as illustrated in Figure 94.

Figure 94. EDR locations and nomenclature used in the test report for Case 4. (Figure annotations: accelerometer 16; 11.42 m; 1.31 m.)

The multi-channel option in RSVVP was used to compute metrics for each individual channel as well as for the weighted composite of the combined channels. The raw acceleration data from each of the three data channels collected on the semitrailer were input into RSVVP. The data were then filtered in RSVVP using a CFC class 180 filter. The shift and drift options in RSVVP were not used. Each pair of curves was synchronized using the minimum absolute area of residuals option in RSVVP. For the metrics evaluation options in RSVVP, the default metrics (i.e., Sprague & Geers and ANOVA) were selected. The "Whole Time Window Only" option was also selected, which directed RSVVP to evaluate the curves over the complete time history of available data, which in this case was 1.54 seconds. The time-history data collected in the analysis actually covered a time period of 1.67 seconds, but due to the synchronization of the curves in RSVVP, the analysis time was shifted back slightly.
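The synchronization option mentioned above shifts one curve in time until the absolute area of the residuals between the two curves is minimized. A rough sketch of that idea is given below; it is not RSVVP's implementation (which handles resampling and end effects more carefully), and the wrap-around shift used here is a deliberate simplification.

```python
import numpy as np

def synchronize_by_residual_area(time, true_curve, test_curve, max_shift=0.05):
    """Find the time shift of the test (analysis) curve, within +/- max_shift
    seconds, that minimizes the absolute area of the residuals against the
    true (experimental) curve. Assumes uniform sampling; illustrative only."""
    t = np.asarray(time, dtype=float)
    m = np.asarray(true_curve, dtype=float)
    c = np.asarray(test_curve, dtype=float)
    dt = t[1] - t[0]
    n = int(round(max_shift / dt))
    best_shift, best_area = 0, np.inf
    for s in range(-n, n + 1):
        shifted = np.roll(c, s)              # crude shift; the ends wrap around
        area = np.trapz(np.abs(m - shifted), dx=dt)
        if area < best_area:
            best_area, best_shift = area, s
    return best_shift * dt                   # optimal shift in seconds
```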

Figure 95 through Figure 97 show the time histories that were used to compute the metrics and the 50-millisecond average acceleration-time history for each pair of data. The results of the evaluation for the individual channels are shown in Table 50. Based on the Sprague & Geers metrics, the X and Z accelerations were out of phase with the test, but the magnitudes of acceleration were in very good agreement with the test for all channels. The ANOVA metrics also indicated that the simulation was in good agreement with the test for all channels.

Table 50. Roadside safety validation metrics rating table for Case 4 (single-channel option). Evaluation criteria computed over the time interval [0 sec, 1.54 sec].

Curve preprocessing options used in RSVVP for all channels (true and test curves): CFC 180 filter; synchronization by minimum area of residuals; no shift or drift corrections.

O - Sprague & Geers metrics. List all the data channels being compared, calculate the M and P metrics using RSVVP, and enter the results. Values less than or equal to 40 are acceptable.
- X acceleration: M = 12.4, P = 48.5. Pass? N
- Y acceleration: M = 13.5, P = 31.4. Pass? Y
- Z acceleration: M = 12.8, P = 47.1. Pass? N

P - ANOVA metrics. List all the data channels being compared, calculate the ANOVA metrics using RSVVP, and enter the results. Both of the following criteria must be met: the mean residual error must be less than five percent of the peak acceleration (e̅ ≤ 0.05·a_peak), and the standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_peak).
- X acceleration/peak: mean residual = 0.02, standard deviation of residuals = 0.10. Pass? Y
- Y acceleration/peak: mean residual = 0.00, standard deviation of residuals = 0.08. Pass? Y
- Z acceleration/peak: mean residual = 0.00, standard deviation of residuals = 0.14. Pass? Y
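The pass/fail entries in Table 50 follow directly from the stated thresholds: the Sprague & Geers magnitude and phase must each be no greater than 40, the mean residual no greater than 5 percent of the peak acceleration, and the residual standard deviation no greater than 35 percent of the peak. A short sketch of that bookkeeping (with the residual statistics already peak-normalized, as in the table) is given below; the function name is illustrative.

```python
def channel_passes(M, P, mean_residual, std_residual):
    """Single-channel acceptance check. M and P are the Sprague & Geers
    metrics in percent; mean_residual and std_residual are already
    normalized by the peak acceleration of the measured channel."""
    sg_ok = abs(M) <= 40.0 and abs(P) <= 40.0
    anova_ok = abs(mean_residual) <= 0.05 and abs(std_residual) <= 0.35
    return sg_ok and anova_ok

# X channel from Table 50: the phase metric of 48.5 exceeds 40, so it fails.
print(channel_passes(12.4, 48.5, 0.02, 0.10))   # False
# Y channel passes all four thresholds.
print(channel_passes(13.5, 31.4, 0.00, 0.08))   # True
```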

Figure 95. X-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data for Case 4. (Annotations: S&G magnitude = 12.4, pass; S&G phase = 48.5, fail; mean = 0.02, pass; standard deviation = 0.10, pass. Axes: X-acceleration (G's) versus time (seconds).)

Figure 96. Y-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data for Case 4. (Annotations: S&G magnitude = 13.5, pass; S&G phase = 31.4, pass; mean = 0.00, pass; standard deviation = 0.08, pass. Axes: Y-acceleration (G's) versus time (seconds).)

Figure 97. Z-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data for Case 4. (Annotations: S&G magnitude = 12.8, pass; S&G phase = 47.1, fail; mean = 0.00, pass; standard deviation = 0.14, pass. Axes: Z-acceleration (G's) versus time (seconds).)

Since the metrics computed for the individual data channels did not all satisfy the acceptance criteria, the multi-channel option in RSVVP was used to calculate the weighted Sprague & Geers and ANOVA metrics for the three channels of data. Table 51 shows the results from RSVVP for the multi-channel option. RSVVP weights the relative importance of each channel based on the total area under the curve. The resulting weight factors computed for each channel are shown in both tabular and graphical form in Table 51. The Y channel dominates the kinematics of the impact event, which can be verified by comparing the acceleration magnitudes in the 50-ms average acceleration-time history plots.

Table 51. Roadside safety validation metrics rating table for Case 4 (multi-channel option). Evaluation criteria computed over the time interval [0 sec, 1.54 sec].

Channels used: X acceleration, Y acceleration, Z acceleration.
Multi-channel weights (Area (II) method): X channel 0.038; Y channel 0.640; Z channel 0.322.
[Column diagram of the channel weighting factors]

O - Sprague & Geers metrics (values less than or equal to 40 are acceptable): M = 13.2, P = 37.1. Pass? Y

P - ANOVA metrics. Both of the following criteria must be met: the mean residual error must be less than five percent of the peak acceleration (e̅ ≤ 0.05·a_peak), and the standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_peak). Mean residual = 0.00, standard deviation of residuals = 0.10. Pass? Y

The weighted metrics computed in RSVVP in the multi-channel mode all satisfy the acceptance criteria, and therefore the time-history comparison can be considered acceptable. The validation assessment for Part II should be entered as "Yes" on the cover page of the validation report.
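The Area (II) weighting used in Table 51 assigns each channel a weight proportional to the area under the magnitude of its measured curve (with the total weight split evenly between the acceleration and rotational-rate groups when both are present, as discussed for Case 3). A minimal sketch, assuming only the three acceleration channels available here, is shown below; the function names are illustrative, and this is not RSVVP's implementation.

```python
import numpy as np

def area_weights(channels, dt):
    """Weights proportional to the area under |curve| for each channel.
    `channels` maps channel names to measured time histories; with only
    acceleration channels present there is a single group, so the weights
    simply sum to one."""
    areas = {name: np.trapz(np.abs(np.asarray(c, dtype=float)), dx=dt)
             for name, c in channels.items()}
    total = sum(areas.values())
    return {name: a / total for name, a in areas.items()}

def weighted_composite(weights, channel_metric):
    """Weighted composite of a per-channel metric (e.g., the S&G magnitude)."""
    return sum(weights[name] * channel_metric[name] for name in weights)

# With the Table 51 weights, the composite S&G magnitude is roughly
# 0.038*12.4 + 0.640*13.5 + 0.322*12.8, i.e., about 13.2, which is
# consistent with the composite value reported in Table 51.
```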

Part III - Validation of Crash Specific Phenomena

The last step in the validation procedure is to compare the phenomena observed in both the crash test and the numerical solution. Table 52 contains the Report 350 crash test criteria with the applicable test numbers. The criteria that apply to test 5-12 (i.e., the test corresponding to this particular case) are indicated with a red square in the report form; they are criteria A, D, F, L and M.

Table 52. Evaluation Criteria Test Applicability Table for Case 4. (Columns: evaluation factor; evaluation criterion; applicable tests.)

Structural adequacy:
- A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38.
- B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. Applicable tests: 60, 61, 70, 71, 80, 81.
- C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53.

Occupant risk:
- D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. Applicable tests: all.
- E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) Applicable tests: 70, 71.
- F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. Applicable tests: all except those listed in criterion G.
- G. It is preferable, although not essential, that the vehicle remain upright during and after collision. Applicable tests: 12, 22 (for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44).
- H. Occupant impact velocities should satisfy the following limits: longitudinal and lateral, preferred 9 m/s, maximum 12 m/s (tests 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81); longitudinal, preferred 3 m/s, maximum 5 m/s (tests 60, 61, 70, 71).
- I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's. Applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81.
- L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 g's. Applicable tests: 11, 21, 35, 37, 38, 39.

Vehicle trajectory:
- M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39.
- N. Vehicle trajectory behind the test article is acceptable. Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81.

Angular-rate data were not collected in test TL5CMB-2. However, the high-speed videos from the full-scale test, TL-5 CMB-2 aos3.avi and TL-5 CMB-2 aos-4.avi, were used to measure the approximate roll-time history of the trailer at time intervals of 0.2 seconds. The roll-time history of the trailer in the simulation compares very well to the roll-time history measured from the high-speed test video, regarding both timing and magnitude, as shown in Figure 98. The

analysis terminated prematurely at 1.67 seconds of the impact event, but at the time of termination the simulation was showing approximately the same behavior (i.e., roll position and roll rate) of the trailer as was measured from the test videos.

Figure 98. Roll angle-time history plot for the tractor-semitrailer test case. (Axes: roll angle in degrees versus time in seconds; curves: TL5CMB-2 test and FEA.)

Table 53 through Table 55 contain a list of the crash test evaluation phenomena, including additional specific phenomena that were measured in the test and that could be directly compared to the numerical solution. Table 53 contains a comparison of phenomena related to structural adequacy. Table 54 addresses occupant risk and phenomena related to vehicle trajectory. Table 55 provides a list of other phenomenological events and their time of occurrence for both the full-scale crash test and the finite element simulation.

In this particular analysis, structural adequacy (i.e., the ability of the barrier to contain and redirect the vehicle) was not really an issue. The primary focus of the analysis was to assess the performance of the tractor model for simulating the impact load and kinematic response of the vehicle. The deflection of the barrier in the test was negligible; therefore, the barrier was modeled as rigid to ensure no deflection of the barrier during the crash event. Since the barrier was modeled as rigid, criterion D could not be assessed (i.e., occupant compartment penetration

and hazard to other traffic, pedestrians or personnel in a work zone caused by detached elements, fragments or other debris from the barrier). All the applicable criteria in Table 53 through Table 55 agree. The timing of the phenomenological events in Table 55 also compared very well between the simulation and the test, with only a few exceptions. The validation assessment for Part III should, therefore, be entered as "Yes" on the cover page of the validation report. Since the model has been validated for each part of the validation procedure, it can now be considered valid for use in assessing the performance of longitudinal barriers with regard to impact loads and general vehicle-barrier interaction.
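The relative differences listed for the event timings in Table 55 (shown later) can be reproduced, at least approximately, by comparing the test time against the FE output interval in which the event was observed. The sketch below assumes the interval endpoint closest to the test time is used as the reference, which appears to reproduce the rounded percentages in the table; that choice is an assumption, since the report does not state it.

```python
def timing_relative_difference(test_time, fe_interval):
    """Approximate relative timing difference (percent) between a test event
    time and an FE event reported as an output interval (t_lo, t_hi). The
    interval endpoint closest to the test time is used here by assumption."""
    fe_time = min(fe_interval, key=lambda t: abs(t - test_time))
    return abs(fe_time - test_time) / test_time * 100.0

# "Tractor was parallel to barrier": test at 0.394 s, FE between 0.32 and
# 0.33 s, giving |0.33 - 0.394| / 0.394, about 16 percent as in Table 55.
print(round(timing_relative_difference(0.394, (0.32, 0.33))))   # 16
```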

Table 53. Structural Adequacy Phenomena for Case 4. (Columns: criterion; known result; analysis result; relative difference; agree?)

Structural adequacy, criterion A:
- A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) Known: Yes; Analysis: Yes. Agree? YES
- A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. Known: 0; Analysis: 0; Difference: 0% / 0 m. Agree? YES
- A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. Known: full barrier; Analysis: N.A.*; Difference: N.A. Agree? N.A.
- A4. The relative difference in the number of broken or significantly bent posts is less than 20 percent. Known: N.A.; Analysis: N.A.; Difference: 0. Agree? N.A.
- A5. The rail element ruptured or failed. (Answer Yes or No) Known: N.A.; Analysis: N.A. Agree? N.A.
- A6. There were failures of connector elements. (Answer Yes or No) Known: N.A.; Analysis: N.A. Agree? N.A.
- A7. There was significant snagging between the vehicle wheels and barrier elements. (Answer Yes or No) Known: No; Analysis: No. Agree? YES
- A8. There was significant snagging between vehicle body components and barrier elements. (Answer Yes or No) Known: No; Analysis: No. Agree? YES

* The FE vehicle was still in contact with the barrier at the time of analysis termination.

Table 54. Occupant Risk Phenomena for Case 4. (Columns: criterion; known result; analysis result; difference, relative/absolute; agree?)

Occupant risk:
- D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) Known: Pass; Analysis: Pass. Agree? N.A.*
- G1. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Answer Yes or No) Known: Pass; Analysis: Pass. Agree? YES
- G2. Maximum roll of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 42°; Analysis: 42.8°; Difference: 2% / 0.8°. Agree? YES
- G3. Maximum pitch of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Not measured.
- G4. Maximum yaw of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 15.5°; Analysis: 15.5°; Difference: 0% / 0°. Agree? YES

Vehicle trajectory:
- M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. Known: Yes; Analysis: Yes. Agree? YES
- M2. Exit angle at loss of contact: relative difference is less than 20 percent or absolute difference is less than 5 degrees. Known: 15.5°; Analysis: 15.5°; Difference: 0% / 0°. Agree? YES
- M3. Exit velocity at loss of contact: relative difference is less than 20 percent or absolute difference is less than 10 m/s. Known: -; Analysis: -; Agree? -
- M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No) Known: Yes; Analysis: N/A. Agree? -
- M5. One or more tires separated from the vehicle. (Answer Yes or No) Known: No; Analysis: N/A. Agree? -

* The barrier was modeled as rigid; therefore, criterion D could not be assessed.

Table 55. Comparison of Phenomenological Events for Case 4. (Columns: event; test time in seconds; FE model time in seconds; relative difference in percent; agree?)

- Tractor begins to yaw: 0.024; 0.02-0.03; 0; YES
- U-bolt connecting front axle to right-side leaf spring broke: unknown; 0.09-0.10; -; -
- Left-front tire lifts off pavement: 0.144; 0.10-0.11; 24; NO
- Right-front corner of trailer contacted the top protrusion of the barrier: 0.186; 0.17-0.18; 3; YES
- Trailer started to roll toward the barrier: 0.190; 0.19-0.20; 0; YES
- Left-rear tires were lifted off the ground: 0.2; 0.21-0.22; 0; YES
- The right-front corner of the trailer was vertically coincident with the back face of the barrier: 0.260; 0.26-0.27; 0; YES
- Both left-rear trailer tires were lifted off the ground: 0.356; 0.25-0.26; 27; NO
- Tractor was parallel to barrier: 0.394; 0.32-0.33; 16; YES
- Tractor reached peak roll and began to roll back from the barrier: 0.290-0.364 (≈15 deg.); 0.29-0.30 (14.6 deg.); 0; YES
- Left-front tractor tires returned to roadway surface: 0.468; 0.34-0.35; 25; NO
- U-bolt connecting front axle to left-side leaf spring broke: unknown; 0.34-0.35; -; -
- Trailer was parallel to barrier: 0.648; 0.63-0.64; 1; YES
- Tractor rolled back to level position: 0.650; 0.74-0.75; 14; YES
- Rear trailer tandem contacts barrier: 0.656; 0.65-0.66; 0; YES
- Time of maximum impact force between trailer tandem and barrier: 0.72; 0.71-0.72; 0; YES
- Tractor started to roll toward the barrier: 0.776; 0.80-0.81; 3; YES
- Tractor left-front tire again lifted from the roadway: 0.956; N/A (U-bolts broken); -; -
- Trailer reached maximum roll and began to roll back from the barrier; all left-side tires were off the ground: 1.150 (≈42 deg.); 1.19-1.20 (42.8 deg.); 3; YES
- Tractor again reached peak (maximum) roll angle: 0.994 (≈19 deg.); 1.16-1.17 (23.8 deg.); 17; YES
- Left-front tire returned to the roadway surface: 1.294; N/A; -; -
- Tractor left-side tandems returned to roadway surface: 1.652; 1.52-1.53; 7; YES
- Analysis terminated: -; 1.67; -; -
- Trailer left-side tires returned to roadway surface: 1.800; -; -; -

CONCLUSION

The four test cases described in this chapter provide step-by-step examples of how to document the capabilities of roadside hardware and vehicle models in a PIRT and how to compare a full-scale test with a simulation and document the results of that comparison in a verification and validation report. These examples have also shown that comparing real tests to finite element simulations can sometimes be a challenge. Sometimes not all of the data from the full-scale test are available; sometimes the data were never collected or the instrumentation failed. In some cases, the actual vehicles used in the tests and the available vehicle models may simply be too different for a good comparison.

The purpose of performing the comparison is to develop a quantifiable assessment of the validity and utility of the models. Some of the comparisons described in this chapter were very good, whereas others could not be considered validated. For the cases where validation could not be established, the analyst can go back and re-examine the model knowing exactly which aspects of the model are not predicting the results of the crash test correctly. Improvements and modifications can then be made to the model, and a new comparison with the improved model can be performed. Similarly, sometimes the experimental data are incomplete or the post-processing methods are unclear. The analyst and the test engineer should work to resolve such differences if at all possible. It may sometimes be necessary to run a full-scale crash test in order to ensure that all the data needed for the comparison are collected.

The purpose of the procedures described in this report and the examples provided in this chapter is to establish a common language for discussing the degree of similarity between tests and numerical results. There will always be "grey" areas, phenomena, and criteria that require some engineering judgment to resolve, but these procedures place the comparison task on a firm foundation of quantifiable and objective criteria.
