APPENDIXES
(pp. 256–463)

Suggested Citation: "Appendixes." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 463
Page 464
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 464
Page 465
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 465
Page 466
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 466
Page 467
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 467
Page 468
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 468
Page 469
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 469
Page 470
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 470
Page 471
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 471
Page 472
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 472
Page 473
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 473
Page 474
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 474
Page 475
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 475
Page 476
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 476
Page 477
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 477
Page 478
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 478
Page 479
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 479
Page 480
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 480
Page 481
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 481
Page 482
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 482
Page 483
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 483
Page 484
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 484
Page 485
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 485
Page 486
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 486
Page 487
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 487
Page 488
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 488
Page 489
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 489
Page 490
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 490
Page 491
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 491
Page 492
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 492
Page 493
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 493
Page 494
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 494
Page 495
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 495
Page 496
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 496
Page 497
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 497
Page 498
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 498
Page 499
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 499
Page 500
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 500
Page 501
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 501
Page 502
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 502
Page 503
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 503
Page 504
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 504
Page 505
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 505
Page 506
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 506
Page 507
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 507
Page 508
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 508
Page 509
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 509
Page 510
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 510
Page 511
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 511
Page 512
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 512
Page 513
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 513
Page 514
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 514
Page 515
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 515
Page 516
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 516
Page 517
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 517
Page 518
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 518
Page 519
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 519
Page 520
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 520
Page 521
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 521
Page 522
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 522
Page 523
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 523
Page 524
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 524
Page 525
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 525
Page 526
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 526
Page 527
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 527
Page 528
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 528
Page 529
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 529
Page 530
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 530
Page 531
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 531
Page 532
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 532
Page 533
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 533
Page 534
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 534
Page 535
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 535
Page 536
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 536
Page 537
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 537
Page 538
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 538
Page 539
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 539
Page 540
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 540
Page 541
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 541
Page 542
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 542
Page 543
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 543
Page 544
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 544
Page 545
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 545
Page 546
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 546
Page 547
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 547
Page 548
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 548
Page 549
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 549
Page 550
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 550
Page 551
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 551
Page 552
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 552
Page 553
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 553
Page 554
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 554
Page 555
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 555
Page 556
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 556
Page 557
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 557
Page 558
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 558
Page 559
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 559
Page 560
Suggested Citation:"APPENDIXES ." National Academies of Sciences, Engineering, and Medicine. 2011. Procedures for Verification and Validation of Computer Simulations Used for Roadside Safety Applications. Washington, DC: The National Academies Press. doi: 10.17226/17647.
×
Page 560

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

Appendix A

Roadside Safety Verification and Validation Program (RSVVP)
User's Manual

December 2009 (Revision 1.4)

Mario Mongiardini
Malcolm H. Ray

CONTENTS

INTRODUCTION TO RSVVP
INSTALLATION
   System requirements
   Installation of the MATLAB Component Runtime
   Starting RSVVP
EVALUATION METHODS AND DATA ENTRY PROCEDURE
   General Discussion
   Format of input curves
   Copy of the original input curves
   Loading a configuration file
   Procedure for Selecting Evaluation Methods
   Procedure for Data Entry
   Procedure for Initial Preprocessing
PREPROCESSING
   Filtering
   Procedure for Filtering Data
   Shift/Drift controls
   Procedure for Applying Shift and Drift
   Curve Synchronization (Single-channel mode)
   Procedure for Applying Synchronization
   Procedure for defining input for Multiple Channels
   Procedure for Performing Additional Preprocessing in Multiple-Channel Mode
METRICS SELECTION
   Metrics selection
   Procedure for Metrics selection
   Time interval
   Procedure for Selecting Time Window
   Procedure for Compression of Image Files
METRICS EVALUATION
   Procedure for Metrics Evaluation
   Procedure for Defining the Whole-Time Window
   Procedure for Defining User-Defined-Time Window(s)
SCREEN OUTPUT
OUTPUT OF RESULTS
   Procedure for Exiting and Saving Results
   Table of results (Excel® worksheet)
   Graphs
   Time Histories Results
EXAMPLES
   Example 1: Single-Channel Comparison
      Analysis Type
      Data Entry and Preprocessing
      Metric selection and evaluation
      Save Results
   Example 2: Multiple-Channel Comparison
      Analysis Type
      Data Entry and Preprocessing
      Metric selection and evaluation
REFERENCES
APPENDIX A1: Comparison Metrics in RSVVP
APPENDIX A2: Multi-Channel Weight Factors

List of Figures

Figure A-1: Format of the test and true curves.
Figure A-2: Input of the test and true curves.
Figure A-3: Synchronization of the channel/resultant.
Figure A-4: Select the metric profile from the drop-down menu.
Figure A-5: Example of a metrics selection using the 'User selected metrics' profile.
Figure A-6: Time window(s) selection.
Figure A-7: Option to compress/uncompress the image files created by RSVVP.
Figure A-8: Press the 'Evaluate metrics' button to begin the metrics calculations.
Figure A-9: Pop-up window for saving the configuration file.
Figure A-10: Defining data range in the user defined time window.
Figure A-11: Screen output for the NCHRP 22-24 profile.
Figure A-12: Screen output for the 'All metrics' or 'User defined' profiles.
Figure A-13: Pop-up browse window for selecting output folder for RSVVP results.
Figure A-14: Message shown while RSVVP creates results folder.
Figure A-15: Excel table containing the metrics results for the various time intervals.
Figure A-16: Summary of preprocessing options and separate sheets for each input channel in the Excel file.

List of Tables

Table A1: Acceptance criteria suggested for the NCHRP 22-24 metrics profile.

INTRODUCTION TO RSVVP

The Roadside Safety Verification and Validation Program (RSVVP) quantitatively compares the similarity between two curves, or between multiple pairs of curves, by computing comparison metrics. Comparison metrics are objective, quantitative mathematical measures of the agreement between two curves. The comparison metrics calculated by RSVVP can be used to validate computer simulation results against experimental data, to verify the results of a simulation against the results of another simulation or analytical solution, or to assess the repeatability of a physical experiment. Although RSVVP was developed specifically to aid in the verification and validation of roadside safety computational models, it can be used to provide a quantitative comparison of essentially any pair of curves.

The comparison metrics calculated by RSVVP are deterministic, meaning they do not address the probabilistic variation of either experiments or calculations (i.e., the calculated results are the same every time given the same input). For a description of each metric calculated by RSVVP, see Appendix A1.

To ensure the most accurate comparison between the curves, RSVVP allows the user to select among several preprocessing tasks prior to calculating the metrics. The interactive graphical user interface of RSVVP was designed to be as intuitive as possible in order to facilitate the use of the program. Throughout each step of the program, RSVVP provides warnings to alert the user to possible mistakes in the data and to provide general guidance for making proper selections among the various options.

The interpretation of the results obtained using RSVVP is solely the responsibility of the user. RSVVP does not presuppose anything about the data; it simply processes the data and calculates the metrics. The user must verify that the data input into the program is appropriate for comparison and that the appropriate options in RSVVP are used for the specific case.
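Appendix A1 describes each metric RSVVP computes. As a concrete flavor of what a deterministic comparison metric is, the following Python sketch implements the classic Sprague-Geers magnitude and phase components (an illustration only, not RSVVP's code): given two curves sampled at the same instants, it returns two numbers that are both zero when the curves match exactly.

```python
import numpy as np

def sprague_geers(true, test):
    """Sprague-Geers magnitude (M) and phase (P) components, in percent.

    Both inputs are 1-D arrays sampled at the same instants, as RSVVP
    enforces during preprocessing. M = 0 and P = 0 indicate a perfect match.
    """
    true = np.asarray(true, dtype=float)
    test = np.asarray(test, dtype=float)
    itt = np.dot(true, true)   # integral of true^2 (the constant dt cancels)
    icc = np.dot(test, test)   # integral of test^2
    itc = np.dot(true, test)   # cross term
    magnitude = np.sqrt(icc / itt) - 1.0
    # Clip guards against round-off pushing the argument slightly outside [-1, 1].
    phase = np.arccos(np.clip(itc / np.sqrt(itt * icc), -1.0, 1.0)) / np.pi
    return 100.0 * magnitude, 100.0 * phase

# Identical curves score (0, 0); a pure sign flip scores (0, 100).
t = np.linspace(0.0, 0.1, 501)
m, p = sprague_geers(np.sin(50 * t), 1.1 * np.sin(50 * t))
print(m, p)   # ~10.0, ~0.0 (10% magnitude error, no phase error)
```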

INSTALLATION

SYSTEM REQUIREMENTS

RSVVP was written and compiled using Matlab®. To run RSVVP, either the full Matlab® software (version 2009a or higher) or the free, redistributable MATLAB Component Runtime (MCR 7.10) must be installed on the user's system. The minimum hardware requirements to run RSVVP are shown below in Table A1:

Table A1. Minimum hardware requirements for running RSVVP

              32-bit version                             64-bit version
  CPU         Intel® Pentium 4 (and above),              Intel® Pentium 4 (and above),
              Intel Celeron, Intel Xeon, Intel Core,     Intel Celeron, Intel Xeon, Intel Core,
              AMD Athlon 64, AMD Opteron, AMD Sempron    AMD64
  RAM         512 MB                                     1024 MB
  Disk space  510 MB (MATLAB® only)                      510 MB (MATLAB® only)

INSTALLATION OF THE MATLAB COMPONENT RUNTIME

The source code for RSVVP was written in Matlab® (version R2007b) and then compiled as an executable file for Windows® XP/Vista in order to create a standalone program that can be run on computers with or without Matlab® installed. However, before running RSVVP on a machine without Matlab®, it is first necessary to install the Matlab® Component Runtime (MCR 7.10), free software distributed by The MathWorks. MCR provides all the Matlab® functional support necessary to ensure proper execution of the RSVVP software. (Note: the MCR environment only has to be installed once.)

The latest version of RSVVP and the MCR environment can be downloaded from:

http://civil-ws2.wpi.edu/Documents/Roadsafe/NCHRP22-24/RSVVP/RSVVP_1_7.zip

To install MCR, perform the following steps:

1. Extract the contents of the RSVVP.zip file into the folder on your PC where you want to install RSVVP (for example: C:\RSVVP\).
2. Open the folder where you extracted the files and double-click on the Installer.bat file.
3. Follow the instructions of the installation wizard. It may take several minutes to complete the installation. This installs the free Matlab® MCR environment that is used in conjunction with RSVVP.
4. Reboot your PC.

At this point RSVVP should be installed on your computer.

STARTING RSVVP

After MCR and RSVVP have been installed, simply double-click the RSVVP.exe file located in the installation folder (e.g., C:\RSVVP\) to start the program. Once started, a series of graphical user interfaces guides the user through preprocessing, evaluation of the comparison metrics, and saving of the results. The following sections describe the features and use of the program.

EVALUATION METHODS AND DATA ENTRY PROCEDURE

GENERAL DISCUSSION

In RSVVP, the baseline or reference curve is called the "true curve," as it is assumed to be the correct response, whereas the curve that is to be verified or validated, say from a model or experiment, is called the "test curve." For example, in validating a computer simulation against a full-scale crash test, the time history data from the physical crash test would be input as the "true curve" in RSVVP and the computer simulation time history would be input as the "test curve."

Since the comparison metrics assess the degree of similarity between any pair of curves in general, the input curves may represent various physical quantities (e.g., acceleration time histories, force-deflection plots, stress-strain plots, etc.). RSVVP does not presuppose anything about the curves being compared, so it is the user's responsibility to ensure that, for example, the units are consistent. The only restriction on the input data is that the abscissa values must increase monotonically. Curves representing loading/unloading cycles or, in general, curves characterized by more than one data point with the same abscissa value cannot currently be handled by RSVVP. As a note of caution: when using RSVVP to compare force-deflection data or stress-strain data, the user must ensure that the abscissa data is monotonically increasing. It may be more appropriate to compare force-time history data and deflection-time history data separately to avoid this problem.

Comparison metrics provide an objective measure of how well two curves match each other and can thus be applied to essentially any pair of curves with monotonically increasing abscissa values. A typical application of the metrics evaluated by RSVVP is the validation of a numerical model by comparing the numerical results with the experimental results. Another application is to check the repeatability of an experiment by comparing the results obtained from several repetitions of the same experiment. Yet another is to verify the results of one numerical simulation against the results of another numerical simulation.

Two general types of comparison can be performed in RSVVP:

1. Single Channel - a single pair of curves is compared.
2. Multiple Channels - multiple pairs of curves are compared (i.e., up to three acceleration-time histories and/or three angular rate-time histories).

In the 'Single Channel' option, comparison metrics are based on the comparison of a single pair of input curves. In the 'Multiple Channel' option, the comparison metrics are computed by either (1) calculating the metrics for the individual channels (i.e., curve pairs) and then computing composite metrics as a weighted average of the individual channels, or (2) calculating the resultant of the various channels and then computing the comparison metrics on the resulting single curve pair. In either case, the 'Multiple Channel' option is intended to provide an overall assessment of multiple data channels by computing a single set of composite metrics.

The multiple channel option in RSVVP was created for the specific purpose of comparing numerical simulations of vehicle impacts into roadside barriers with the results of a full-scale crash test. An example might be a small sign support test where the longitudinal acceleration has a much greater influence on the results of the impact event than the lateral or vertical accelerations. The less important channels may not satisfy the criteria because they are essentially recording noise. The longitudinal channel in this example will probably be an order of magnitude greater than some of the other, less important channels, and the response is essentially completely determined by that one longitudinal channel. The weighting factors used to compute the composite metrics are based on the area under the true curve for each respective channel, and thereby account for the different levels of importance of the various channels.

FORMAT OF INPUT CURVES

The input curve files must be in ASCII format but can have any extension (or no extension) in the file name. The abscissa and ordinate data of the input curves must be tabulated in two columns, as shown in Figure A-1. Each line in the input file represents a single data point (e.g., time and the corresponding acceleration). If a data file includes a header, RSVVP will automatically detect and skip it. In that case, RSVVP will warn that a header was detected and ask the user to confirm the number of lines to be skipped before starting data entry.
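For illustration, here is a minimal Python sketch of a loader for this two-column format (shown in Figure A-1 below), including a simple form of the header auto-detection just described. It is a hypothetical reimplementation for readers who want to check their files programmatically, not RSVVP's actual parsing code.

```python
import numpy as np

def load_curve(path):
    """Read a two-column ASCII curve file, skipping any leading header lines.

    A line is treated as data once it parses as exactly two floats; everything
    before the first such line is assumed to be a header (loosely mimicking
    RSVVP's automatic header detection).
    """
    skip = 0
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            try:
                if len(parts) == 2:
                    float(parts[0]); float(parts[1])
                    break
            except ValueError:
                pass  # header line: not numeric
            skip += 1
    data = np.loadtxt(path, skiprows=skip, ndmin=2)
    x, y = data[:, 0], data[:, 1]
    # Enforce RSVVP's one restriction on input data.
    if np.any(np.diff(x) <= 0):
        raise ValueError("abscissa values must increase monotonically")
    return x, y
```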

    0.00000000    0.10000000
    0.02000000    0.09900000
    0.04000000    0.09800000
    0.06000000    0.09700000
    0.08000000    0.09600000
    0.10000000    0.09500000
    0.12000000    0.09400000
    0.14000000    0.09300000
    (abscissa)    (ordinate)

Figure A-1: Format of the test and true curves.

Although no limitation is imposed or assumed on the units of either the abscissa or ordinate columns, some preprocessing features, such as the SAE filtering option, may only make sense for time history data (i.e., where the first column represents time). It is the user's responsibility to ensure that the units of the input curves are consistent, especially when comparing multiple pairs of curves in Multichannel mode.

COPY OF THE ORIGINAL INPUT CURVES

A copy of the original input curves is automatically saved into the folder '\Input_curves' in both the main directory of RSVVP and the 'Result_XX' folder. Any file saved into the '\Input_curves' folder located in the main directory is deleted at the beginning of each new run of RSVVP.

LOADING A CONFIGURATION FILE

The user can also load a configuration file from a previous run of RSVVP. This configuration file contains all the information necessary to retrieve the files containing the original input curves, along with all the selected options for the preprocessing of the curves and the evaluation of the metrics. A configuration file can be loaded in two different ways:

• Run Completely mode, or
• Edit Curves/Preprocessing mode.

When Run Completely mode is selected, RSVVP reads the configuration file and automatically evaluates the comparison metrics using the options stored in the configuration file (e.g., preprocessing, metrics selection, time intervals, etc.). This option is a useful tool for providing documentary proof of the values of the comparison metrics obtained during the verification/validation process, or simply for re-running a previously saved session. In Run Completely mode, RSVVP gives the user three options:

1. Reproduce the comparison metrics using all the user time intervals from the original run,
2. Reproduce the comparison metrics for a portion of the original time intervals (with the constraint that the original sequence of the intervals is followed), or
3. Compute the comparison metrics on new user-defined time intervals.

The original configuration file can be updated with the new user-defined time intervals at the end of the calculation.

Likewise, in Edit Curves/Preprocessing mode, RSVVP loads the original input curves and automatically preprocesses them according to the options saved in the configuration file. In this mode, however, once the curves have been preprocessed, the user can go back and modify any of the preprocessing options or replace any of the original input curves. This option can be very useful when the analyst wants to assess, for example, how the various preprocessing options affect the values of the comparison metrics.

Procedure for Selecting Evaluation Methods

At the startup of RSVVP, first select a maximum re-sampling rate using the drop-down menu 'Re-sampling rate limit', as illustrated in Figure A-2. By default, RSVVP limits the rate at which the curves are re-sampled to a maximum of 10 kHz. If a higher limit is desired, the user can choose from the available options in the drop-down menu. Then choose between the 'Single Channel', 'Multiple Channel', or 'Load a Configuration File' options.

Figure A-2: Selection of the type of comparison and re-sampling limit (options: compare a single pair of curves, or compare multiple pairs of curves).

To load a configuration file, click the browse button (the button with three dots). This opens a browse window that can be used to search for and select the desired configuration file, as shown in Figure A-3. Once the configuration file has been loaded, the 'Proceed' button becomes active. Before proceeding, select the desired mode for running the configuration file (i.e., 'Run completely' or 'Edit curves/preprocessing'). The default option is to load the configuration file in Edit mode; to change to 'Run completely' mode, select the corresponding radio button.

Note: When a configuration file has been loaded in 'Run completely' mode, any selection made by the user to limit the re-sampling rate is overridden by the configuration file. In order to change the re-sampling limit, load the configuration file in 'Edit' mode.

Figure A-3: Selection of the configuration file.

Procedure for Data Entry

After the analysis options have been selected, RSVVP closes the window and opens another graphical user interface that is used for loading and preprocessing the input curves. Clicking the 'Load True Curve' and 'Load Test Curve' buttons opens a browse window that can be used to search for and select the corresponding curve files, as illustrated in Figure A-4. Recall from the discussion section that the 'True Curve' is the baseline or reference curve and is assumed to be the correct response; the 'Test Curve' is the data from a model or experiment that is to be verified or validated. After each input file is loaded, RSVVP shows a preview of the raw curves in the graphics area on the left side of the main window, as shown in Figure A-4.

Figure A-4: Input of the test and true curves.

Procedure for Initial Preprocessing

The user is given the option to perform initial adjustments of the data, including scaling, trimming, and translating the curves, prior to applying additional preprocessing options, as shown in Figure A-5. The radio button to scale the input curves and the checkboxes to activate the options to trim and/or translate the curves to the origin can be selected only after both the test and true curves have been input.

Figure A-5: Checkboxes for the manual trim and the translation of the raw curves.

Curve scaling

The 'scale' option allows the user to scale the original time histories using user-defined scale factors. The true and test curves can be scaled by separate factors. This option may be used, for example, to invert the sign of a time history or to convert units (e.g., accelerations can be converted from m/s^2 to g's). To scale the true curve, the test curve, or both, check the radio button 'Scale original curves' shown in Figure A-5 and input the scale factor for the true and/or test curve into the respective 'True' and 'Test' fields located beside the radio button. Each time a new scale factor is defined for either the true or the test curve (or the scaling option is deselected), the graphs are automatically updated.

Curve trimming

The 'trim' option allows the user to trim the beginning and/or the end of the raw data before preprocessing the curves. This option can be used, for example, to remove the pre- and post-impact data from the curves, ensuring that the comparison evaluation is applied only to the impact portion of the data. The 'trim' option can also be used, for example, to trim the input data at a point where the true and test curves start diverging, allowing for better synchronization of the curves in the preprocessing phase. Although it is possible to specify a user-defined time interval over which to evaluate the comparison metrics (see section Time Interval), it is advisable to trim input curves that have a 'null head' or 'null tail' in order to improve data synchronization during the preprocessing operations.

To trim the original data, check the box 'Trim original curves before preprocessing'. This action opens the pop-up window shown in Figure A-6. The 'trim' option is applied to the true and test curves independently. The 'Lower limit' and 'Upper limit' fields show the boundary values for the curve selected using the radio buttons for either the test or the true curve. Only one curve at a time can be selected, to allow independent trimming of each of the two curves. The curve selection is performed using the radio buttons located at the bottom left of the window. A solid line and a dotted line indicate the lower and upper limits, respectively, in the graph area. Both lines move according to the values input in the user fields (blue and green are used for the true and test curves, respectively). By default, both the test and true curves are shown in the graph area; however, RSVVP provides an option to show only the curve being trimmed, which is useful when the curves cannot easily be distinguished.

If the raw data curves are characterized by a high level of noise, the trim window also provides an option to filter the curves before performing the trim operation. The user can select the desired CFC value from the drop-down menu located in the 'Filter option' box.
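For readers who want to experiment outside the GUI, the following Python sketch approximates this kind of channel-frequency-class filtering with a zero-phase Butterworth low-pass filter. The -3 dB frequency of roughly 1.65 x CFC is an assumption made here for illustration; the exact SAE J211 algorithm prescribes specific difference-equation coefficients and is not reproduced by this sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cfc_filter(y, dt, cfc=60):
    """Zero-phase low-pass filter approximating an SAE J211 channel class.

    Illustration only: a 2nd-order Butterworth run forward and backward
    (4-pole, phaseless overall) with a -3 dB point of about 1.65 * CFC Hz,
    which is close to, but not identical to, the J211 specification.
    """
    f3db = 1.65 * cfc                  # approximate -3 dB frequency in Hz
    nyquist = 0.5 / dt
    b, a = butter(2, f3db / nyquist)   # cutoff normalized to Nyquist
    return filtfilt(b, a, y)

# Example: strip 2 kHz noise from a 50 Hz signal sampled at 10 kHz.
dt = 1.0e-4
t = np.arange(0.0, 0.1, dt)
noisy = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t)
clean = cfc_filter(noisy, dt, cfc=60)
```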

specifications different from the standard SAE J211 filter, user-defined filter parameters can be specified.

Note: If data is filtered during the trimming process, the user will not be allowed to change the filtering option during subsequent preprocessing operations. If a different filtering option is desired, it will be necessary to return to the ‘trimming’ box to make any change in the choice of filtering.

Figure A-6: Window for trimming input curves.

Curve translation

The ‘translate’ option allows the user to shift the input curves along the abscissa. This may be used, for example, to ensure that the abscissa vector starts at zero (e.g., if time histories are input, the time vector can be shifted to start at time zero). This option works for either positive or negative values. If the ‘trim’ option has been used, the curves are automatically translated to the origin, so there is no need to perform the ‘curve translation’ procedure; in fact, the checkbox to translate the original raw curves is not active when the ‘trim’ option has been selected. This option is useful whenever one or both of the original input curves are shifted with respect to the origin. A typical application is shown in Figure A-7.
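To make these three adjustments concrete, the following sketch shows one plausible implementation of the scale, trim and translate operations in Python/NumPy. The function name and signature are illustrative only (this manual does not document RSVVP's internals); the logic simply mirrors the operations described above.

```python
import numpy as np

def preprocess_curve(t, y, scale=1.0, t_min=None, t_max=None, translate=True):
    """Sketch of RSVVP-style initial preprocessing: scale, trim, translate.

    t, y      : abscissa (e.g., time) and ordinate arrays of one input curve
    scale     : user-defined scale factor (e.g., 1/9.81 converts m/s^2 to g's)
    t_min/max : optional lower/upper trim limits on the abscissa
    translate : shift the abscissa so the trimmed curve starts at zero
    """
    t = np.asarray(t, dtype=float)
    y = scale * np.asarray(y, dtype=float)          # 'Scale original curves'
    if t_min is not None or t_max is not None:      # 'Trim original curves'
        lo = t[0] if t_min is None else t_min
        hi = t[-1] if t_max is None else t_max
        keep = (t >= lo) & (t <= hi)
        t, y = t[keep], y[keep]
    if translate:                                   # 'Translate to origin'
        t = t - t[0]
    return t, y

# Example: convert a curve from m/s^2 to g's and trim off post-impact data.
t = np.linspace(0.0, 0.6, 601)
a = 50.0 * np.sin(20.0 * t)                         # synthetic acceleration signal
t_true, a_true = preprocess_curve(t, a, scale=1 / 9.81, t_max=0.4)
```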

Figure A-7: Shift of one of the two input curves to the origin (left: original input true and test curves; right: true and test curves after the translation to the origin).

Note: If the option to scale the original curves is changed or if the scaling factors are changed, RSVVP will automatically update the graph of the original input curves as well as the graph of the preprocessed curves.

Note: If the ‘trim’ option or the ‘translate’ option is changed, or if an input curve is changed, then all the preprocessing operations applied to the curves are reset by RSVVP.

Note: The copies of the original input curves (automatically saved by RSVVP) do not include any of these initial preprocessing results.

PREPROCESSING

RSVVP is now ready to perform some basic and necessary preprocessing operations on the input curves, as well as some optional preprocessing operations that can be selected by the user based on a qualitative visual assessment of the original data. In order to calculate the comparison metrics, all the curves must have the same sampling rate and the same number of data points. Because these operations are necessary for subsequent calculations, they are performed automatically by RSVVP and do not permit user control. When the ‘multiple channel’

option has been selected, RSVVP trims each individual channel of data based on the shortest curve in each curve pair; then, after all the data has been input and preprocessed, the curves are further trimmed to the length of the shortest channel. If the original sampling rate of one of the curves is larger than the ‘re-sampling rate limit’, the data will be re-sampled to the chosen limit value (see Figure A-2). Note that higher sampling rates result in more data points and will therefore increase computation time. When the ‘multiple channel’ option has been selected, the sampling rate determined for the first pair of curves is used for all subsequent data pairs. In order to proceed to the next step (i.e., metrics selection) it is necessary to press the ‘Preprocess curves’ button even if no optional preprocessing options have been selected.

RSVVP provides three optional preprocessing operations:

• Filtering,
• Shift/drift control and
• Synchronization.

Each of these three preprocessing operations is optional and can be selected independently of the others. After selecting the desired preprocessing options, press the ‘Preprocess curves’ button located immediately below the Preprocessing box to preview the results. If the results are not satisfactory, any of the previous options can be changed until satisfactory results are obtained.

Note: When the ‘multiple channel’ option has been selected, the synchronization option will not be active in the preprocessing window. For multiple channels, the option for data synchronization, as well as other preprocessing operations, will be made available in an additional/secondary preprocessing step.

FILTERING

RSVVP gives the user the option of filtering the two input curves. This option can be very useful when the original input curves are noisy (e.g., noise created by the transducer during the acquisition process of experimental curves or undesired high-frequency vibrations). In order to obtain a value of the comparison metrics that is as reliable as possible, it is very important to remove noise from both the test and true curves. While noise derives from different sources in

physical experiments and numerical simulations, the true and test curves should be filtered using the same filter to ensure that differences in the metric evaluation are not caused by differences in the frequency content of the true and test signals. The filter options in RSVVP are compliant with the SAE J211/1 specification. It is recommended that raw data be used whenever possible in the evaluation to avoid inconsistent processing of the two curves. It is also important that both the test and true curves are filtered in the same way to avoid errors due to different filtering techniques. Although there is no general limitation on the type of units used for input to RSVVP, the SAE filtering option presumes that the curves are time histories with time values expressed in seconds. In a future release of RSVVP, the option to use different units for the time vector of the time histories will be implemented.

The user can select between the following SAE J211 Channel Frequency Class (CFC) filters: 60, 180, 600 and 1000. Table 2 shows the specifications of each CFC value as defined by SAE J211/1.

Table 2: Specifications for typical CFC values.

CFC value    3 dB limit frequency [Hz]    Stop damping [dB]
60           100                          -30
180          300                          -30
600          1000                         -40
1000         1650                         -40

While it is not recommended, if the user wants to use filter specifications different from the standard SAE J211 filters, user-defined filter parameters can be specified.

Procedure for Filtering Data

By default RSVVP does NOT filter the input curves. To apply the filter option, click on the drop-down menu in the ‘Filter Options’ box (Figure A-8a) and select the desired CFC value.
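The SAE J211/1 specification defines the CFC filter as a second-order Butterworth low-pass filter applied once forward and once backward in time (a two-pass, phaseless filter). The sketch below follows that published formulation; the simplified start-up handling of the first two samples and the function name are assumptions of this illustration, not RSVVP code.

```python
import numpy as np

def sae_j211_filter(x, dt, cfc=60.0):
    """Two-pass (phaseless) Butterworth low-pass filter per SAE J211/1.

    x   : raw data samples
    dt  : sample period in seconds (J211 assumes the abscissa is time in s)
    cfc : Channel Frequency Class (e.g., 60, 180, 600, 1000)
    """
    wd = 2.0 * np.pi * cfc * 2.0775            # design angular frequency
    wa = np.tan(wd * dt / 2.0)                 # prewarped analog frequency
    den = 1.0 + np.sqrt(2.0) * wa + wa * wa
    a0 = wa * wa / den
    a1, a2 = 2.0 * a0, a0
    b1 = -2.0 * (wa * wa - 1.0) / den
    b2 = (-1.0 + np.sqrt(2.0) * wa - wa * wa) / den

    def one_pass(sig):
        y = sig.copy()                         # simplified start-up: keep first two samples
        for k in range(2, len(sig)):
            y[k] = (a0 * sig[k] + a1 * sig[k - 1] + a2 * sig[k - 2]
                    + b1 * y[k - 1] + b2 * y[k - 2])
        return y

    x = np.asarray(x, dtype=float)
    forward = one_pass(x)
    return one_pass(forward[::-1])[::-1]       # backward pass removes the phase lag
```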

If it is necessary to specify a CFC value that is not listed in the menu, select the option ‘User defined CFC…’ at the end of the list and input the desired CFC parameters in the ‘Optional user defined CFC’ field located directly below (Figure A-8b).

Note: This field is active only if the ‘User defined CFC’ option is selected from the drop-down menu.

Figure A-8: ‘Filter Options’ box - (a) drop-down menu and (b) ‘Optional user defined CFC’ field.

Note: If the original curves have already been filtered during the optional trimming process, the ‘Filter Options’ box will show the filtering option chosen at that time without allowing the user to make any change. If a different filtering option is desired, it is necessary to go back to the trimming box to change the previous choice.

SHIFT/DRIFT CONTROLS

Another preprocessing option supported by RSVVP is the ability to correct any initial shift and/or drift in the curves. Experimental data sometimes contain shift and/or drift effects due to a change of temperature immediately before or during the test. The shift effect is an initial vertical shift of the curve due to an increase of temperature after the measurement gauges have been zeroed, while the drift effect is a linear drift of the experimental curve typical of a temperature increase during the test. The shift and drift controls of RSVVP correct these effects and, therefore, can be very useful when one or both of the input curves have been recorded from experimental tests and exhibit either or both of these data acquisition problems. Because the initial shift or drift of the test and/or true curve is caused by incorrect acquisition of the experimental data, these preprocessing options are important for an accurate evaluation of the comparison metrics. In general, curves resulting

from numerical solutions should not need these options, since shift and drift arise from sensor characteristics in physical tests. The use of the shift and drift options is, therefore, not recommended for curves resulting from computer simulations.

Procedure for Applying Shift and Drift

Both the shift and drift controls can be activated independently of each other by checking the respective boxes. Once one or both have been checked, the user has the choice to apply the selected control(s) to the true curve, the test curve, or both (Figure A-9). By default these controls are inactive.

Figure A-9: Shift and Drift controls.

CURVE SYNCHRONIZATION (SINGLE-CHANNEL MODE)

RSVVP allows the user to optionally synchronize the two input curves before evaluating the comparison metrics. This option can be very useful if the original test and true curves were not acquired starting at exactly the same instant (e.g., the test and true curves represent, respectively, a numerical simulation and an experimental test of the same crash test, but the instant at which data collection was started is not the same). The synchronization of the two input curves is very important, as any initial shift in the time of acquisition between the test and true curves could seriously affect the final value of the comparison metrics. For example, two

identical input curves with an initial phase difference due to a different starting point in the acquisition process would likely lead to poor results for some of the comparison metrics.

Two different synchronization options are available in RSVVP: (1) the absolute area between the two curves (i.e., the area of the residuals) and (2) the squared error between the two curves. Both options are based on the minimization of a target function. Although these two methods are similar, they sometimes give slightly different results. Selecting one of these methods will result in the most probable pairing point for the two curves. Once the original curves have been preprocessed, the user is given the option to further refine the synchronization of the data.

Procedure for Applying Synchronization

By default RSVVP does NOT synchronize the input curves. To apply the synchronization option, click on the drop-down menu in the ‘Sync Options’ box, shown in Figure A-10, and select one of the two available synchronization methods: (1) Minimum absolute area of residuals or (2) Least Square error. Once the curves have been preprocessed by pressing the ‘Preprocess curves’ button, a pop-up window will ask the user to verify that the synchronization is satisfactory. If the ‘No’ button is selected, another pop-up window with a slider will appear, as illustrated in Figure A-11. Moving the slider changes the initial starting point of the minimization algorithm on which the synchronization process is based.

Note: As previously noted, when the ‘multiple channel’ option has been selected, the option for data synchronization, as well as other preprocessing operations, will be made available in an additional/secondary preprocessing step.
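The two target functions can be illustrated with a simple brute-force search over candidate time shifts. This sketch only illustrates the minimization described above; RSVVP's actual algorithm (and its slider-adjustable starting point) is not documented here, and the function name is an assumption.

```python
import numpy as np

def best_shift(true_y, test_y, max_shift=200, method="area"):
    """Find the sample shift of the test curve that best aligns it with the
    true curve by minimizing one of the two RSVVP target functions.

    method : 'area' -> absolute area of the residuals between the curves
             'lsq'  -> squared error between the curves
    """
    true_y = np.asarray(true_y, dtype=float)
    test_y = np.asarray(test_y, dtype=float)
    best, best_val = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(test_y, s)
        # consider only the overlapping region (ignore wrapped-around samples)
        sl = slice(max(s, 0), len(test_y) + min(s, 0))
        r = true_y[sl] - shifted[sl]
        val = np.abs(r).sum() if method == "area" else (r * r).sum()
        val /= max(len(r), 1)                  # normalize by the overlap length
        if val < best_val:
            best, best_val = s, val
    return best                                # shift in samples; positive delays the test curve
```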

Figure A-10: Drop-down menu of the ‘Sync Options’ box.

Figure A-11: Option for selecting a new starting point for synchronization.

Procedure for Defining Input for Multiple Channels

For the multiple channel option, selecting the ‘Next Ch.’ button located at the bottom of the screen advances the input selection to the next channel (note: the name of the current channel appears at the top of the window). If data is not available for a particular channel, the radio button ‘Skip this channel’ (located at the top of the window) may be used to skip any of the six available channels.

In the multichannel mode, six tabs are located at the bottom left corner of the GUI window, as shown in Figure A-12. The tab corresponding to the current channel’s input/preprocessing page is highlighted in red. If the user wants to return to a previous channel, for instance to change the input files or to modify preprocessing options, the user can simply select the corresponding tab and RSVVP will display the selected channel’s input/preprocessing page.

Figure A-12: Tabs linked to the input/preprocessing page for each channel.

Procedure for Performing Additional Preprocessing in Multiple-Channel Mode

RSVVP provides two methods for evaluating the multiple channels of data: 1) the weighting factors method and 2) the resultant method. The weighting factors method calculates the metrics for the individual channels (i.e., curve pairs) and then computes composite metrics based on a weighted average of the individual channels. The ‘resultant’ method, on the other hand, calculates the resultant of the various channels and then computes the comparison metrics based on the resulting single curve pair. In either case, the ‘Multiple Channel’ option is intended to provide an overall assessment of the multiple data channels by computing a single set of composite metrics. After the preprocessing has been completed for each data channel, press the button ‘Proceed to curves synchro.’ This opens a second window that will be used to select the Evaluation Method and synchronize the curves.
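The difference between the two evaluation methods can be sketched as follows: the resultant method collapses a group of channels into one curve before any metric is computed, while the weighting factors method computes a metric per channel and averages the values. The function names and the numerical weights in the example are hypothetical; RSVVP derives its actual weight factors as described in Appendix A2.

```python
import numpy as np

def resultant(channels):
    """'Resultant' method: collapse a group of channels (e.g., the x, y and z
    accelerations) into a single curve before computing the metrics."""
    stacked = np.vstack(channels)
    return np.sqrt((stacked ** 2).sum(axis=0))

def weighted_composite(metric_values, weights):
    """'Weighting factors' method: combine per-channel metric values into a
    single composite value via a weighted average."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(metric_values, dtype=float)
    return (w * m).sum() / w.sum()

# Example: composite of six per-channel magnitude metrics (values in percent).
per_channel = [9.0, 2.0, 14.0, 8.0, 44.0, 2.0]        # x, y, z, yaw, roll, pitch
weights = [0.35, 0.30, 0.02, 0.25, 0.04, 0.04]        # hypothetical weight factors
composite = weighted_composite(per_channel, weights)  # dominated by x, y and yaw
```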

Note: If the last channel is skipped, RSVVP will automatically proceed to this second GUI.

In the Evaluation method box, select the desired method for the evaluation of the multiple data channels using the drop-down menu, as illustrated in Figure A-13. The default method is to use ‘Weighting Factors.’ If this method is selected, the graph on the left side of the window will show the curves for the first available channel. To switch to the resultant method, click on the drop-down menu and select ‘Resultant’. Once the method has been changed, the button ‘Update’ becomes red (refer to Figure A-13). Press this button in order to update to the newly selected method. The graph will now show the resultant of the first three channels.

Figure A-13: Selection of the method for the computation of the multichannel metrics.

After the evaluation method has been selected, RSVVP is ready to synchronize the curves. To begin the synchronization process, select the checkbox ‘Synchronize the two curves’ located in the Synch options box on the left side of the GUI, as shown in Figure A-14 (Note: synchronization starts automatically). Synchronization of the curves is optional, and leaving the checkbox unselected will allow the user to skip this operation.

As in the single channel mode, two different synchronization methods are available: (1) minimum area of residuals and (2) least square error. Both options are based on the minimization of a target function. Although these two methods are similar, they sometimes give slightly different results. Selecting one of these methods will result in

the most probable pairing point for the two curves. However, if the user is not satisfied with the synchronization, there is the option of changing the initial starting point used in the minimization algorithms. To proceed to the next channel, press the button ‘Next Ch.’

Note: If the resultant method has been selected, pressing the ‘Next Ch.’ button then displays the resultant curves computed from the second group of channels (i.e., the angular rate channels).

Note: Each time the evaluation method is changed, it is necessary to select the ‘Update’ button to make the change effective.

Note: Changing the evaluation method resets all curve synchronizations.

When the last channel/resultant has been reached, the button ‘Proceed to metrics selection’ will become active. Pressing it will advance RSVVP to the next phase of the program.

Figure A-14: Synchronization of the channel/resultant.

METRICS SELECTION

The metrics computed in RSVVP provide mathematical measures that quantify the level of agreement between the shapes of two curves (e.g., time-history data obtained from numerical simulations and full-scale tests). There are currently fourteen metrics available in RSVVP for computing quantitative comparison measures; all are deterministic shape-comparison metrics and are classified into three main categories:

1. Magnitude Phase Composite (MPC) metrics
   a) Geers
   b) Geers CSA
   c) Sprague & Geers
   d) Russell
   e) Knowles & Gear
2. Single Value Metrics
   f) Whang’s inequality
   g) Theil’s inequality
   h) Zilliacus error
   i) RSS error
   j) Weighted Integrated Factor
   k) Regression coefficient
   l) Correlation Coefficient
   m) Correlation Coefficient (NARD)
3. Analysis of Variance (ANOVA)
   n) Ray

A description of each metric is provided in Appendix A1. The MPC metrics treat the magnitude and phase of the curves separately and combine them into a single-value comprehensive metric. The single-value metrics give a single numerical value that represents the agreement between two curves. The ANOVA metric is a statistical assessment of whether the variance between two curves can be attributed to random error.

The recommended metrics are the Sprague & Geers metrics and the ANOVA metrics. The Sprague & Geers metrics assess the magnitude and phase of two curves while the ANOVA

examines the differences of residual errors between them. Of the fourteen metrics available in RSVVP, the Sprague & Geers MPC metrics were found to be the most useful for assessing the similarity of magnitude and phase between curves, and the ANOVA metrics were found to be the best for examining the characteristics of the residual errors. For more details regarding the definitions of these metrics, refer to Appendix A1.

Procedure for Metrics Selection

Select the desired metric profile from the drop-down menu at the top of the metrics window, as illustrated in Figure A-15. There are three metrics profiles available:

1. NCHRP 22-24 (default),
2. All metrics, and
3. User selected metrics.

The ‘NCHRP 22-24’ profile is the default profile, and it is suggested that this profile be used when validating numerical simulations against full-scale crash tests (e.g., NCHRP Report 350 crash tests). The second profile, ‘All metrics’, automatically selects all fourteen comparison metrics available in RSVVP. If the ‘User selected metrics’ profile has been selected, the checkbox beside each available metric becomes active, allowing the user to select any number of the available metrics by checking the corresponding checkboxes, as shown in Figure A-16.

Figure A-15: Select the metric profile from the drop-down menu.

Figure A-16: Example of a metrics selection using the ‘User selected metrics’ profile.

TIME INTERVAL

In RSVVP, metrics can be evaluated over the complete length of the curve (i.e., the whole time interval) and/or over one or more user-defined time intervals.

Procedure for Selecting Time Window

From the drop-down menu in the Time window box shown in Figure A-17, select one of the three available options:

1) Whole time window and User defined time window,
2) Whole time window only and
3) User defined time window only.

Figure A-17: Time window(s) selection.

If the “Whole time window” option is selected, the metrics are computed using all the available data (i.e., the complete length of the curves). If the “User defined time window” option is selected, the metrics will be computed for one or more arbitrary user-defined intervals of data.
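Restricting the evaluation to a user-defined interval amounts to masking both curves to the chosen boundaries before the metrics are computed. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def window(t, y, t_lower, t_upper):
    """Restrict a curve to a user-defined time window [t_lower, t_upper]."""
    t = np.asarray(t, dtype=float)
    mask = (t >= t_lower) & (t <= t_upper)
    return t[mask], np.asarray(y, dtype=float)[mask]

# Example: evaluate the metrics only on the 0.05-0.15 s portion of a curve.
t = np.linspace(0.0, 0.5, 501)
y = np.sin(40.0 * t)
t_w, y_w = window(t, y, 0.05, 0.15)
```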

By default RSVVP evaluates the selected metrics on both the whole time interval and user-selected time interval(s). If this option is selected, RSVVP will first compute the comparison metrics over the ‘Whole Time interval’; then, after displaying the results, it will prompt the user to define an arbitrary ‘User Defined Time interval’ over which to calculate the metrics.

Procedure for Compression of Image Files

During the computation of the metrics, RSVVP creates several graphs and saves them as bitmap images (.bmp). Since the cumulative size of these image files may exceed several megabytes, the default option in RSVVP is to compress them in .zip format. RSVVP provides an option for overriding the file compression by unchecking the box ‘Compress plot files’ at the bottom of the window, as shown in Figure A-18.

Figure A-18: Option to compress/uncompress the image files created by RSVVP.

METRICS EVALUATION

Once the desired metrics have been selected, and the time intervals over which the metrics will be calculated have been defined by the user, RSVVP begins the metrics calculation process. In the multichannel mode, RSVVP first calculates the value of each metric for each individual channel (or channel resultant, if the resultant method was selected) and then computes a single metric value based on a weighted average of the results. For details regarding the weighting scheme, refer to Appendix A2.

Procedure for Metrics Evaluation

To start the metrics evaluation, press the ‘Evaluate metrics’ button located at the bottom of the window, as shown in Figure A-19.

Note: It is possible to go back to the main graphical interface to change any of the selected input curves and/or modify any of the preprocessing options by clicking the ‘Back’ button.

Figure A-19: Press the ‘Evaluate metrics’ button to begin the metrics calculations.

Before the metrics are evaluated, a pop-up window appears, as shown in Figure A-20, asking the user to indicate a location and file name for saving the configuration file. The

configuration file contains all the information that has been input in RSVVP, including all the preprocessing options as well as the metrics selection. Thus, the configuration file contains all the information necessary to repeat the analysis. By default, the configuration file is located in the “working” directory and is named ‘Configuration_Day-Month-Year.rsv’, where Day, Month and Year correspond to the date on which the file is created.

Figure A-20: Pop-up window for saving the configuration file.

Note: A copy of the configuration file is also saved in the subfolder .../Results_x that is created by RSVVP at the end of the run (see section Output of Results for more details about the results folder).

Note: The configuration file can be used, for example, (i) to quickly re-input a set of curves and configurations and then modify any of the previously selected options or (ii) to exactly repeat a previous run.

Procedure for Defining the Whole-Time Window

No action is needed to define the time interval for the ‘Whole time window’ option (i.e., options 1 and 2 from the time interval box), as RSVVP will automatically consider the maximum time interval possible for the data.

Procedure for Defining User-Defined Time Window(s)

If a ‘User defined time window’ option was selected (i.e., options 1 and 3 from the time interval box), RSVVP will prompt the user to select the upper and lower boundaries of the local time interval on which the comparison metrics will be evaluated. RSVVP shows a window with a graph of the test and true curves and two blank fields at the bottom which

are used to define, respectively, the time values of the lower and upper boundaries, as shown in Figure A-21. Fill in the desired values and press the ‘Evaluate metrics’ button to start the evaluation of the metrics on the defined interval.

Figure A-21: Defining the data range in the user defined time window.

When the limits are input into the fields, the upper and lower limits are shown as vertical lines in the graph. For multichannel input, a drop-down menu located at the bottom of the window allows the user to select the desired channel to use for defining the limits.

Note: The selected upper and lower boundaries do not change when a new channel is plotted, as the channels share the same interval in the multi-channel option.

It is possible to evaluate the metrics on as many user-defined time windows as desired; after the results of the user-defined time window have been shown, RSVVP will prompt

the user for a new user-defined time window. The results obtained for each time interval will be saved separately.

SCREEN OUTPUT

For each of the time intervals on which the comparison metrics were evaluated, RSVVP shows various screen outputs to present the results:

• Graph of the true curve and test curve,
• Graphs of the time-integration of the curves,
• Values of the comparison metrics,
• Graph of the residual time history,
• Graph of the residual histogram and
• Graph of the residual cumulative distribution.

Note: Comparison metrics are always computed using the curves shown in the graph of the true and test curves. The time-integrated curves are shown only to provide additional interpretation of the curves. For example, if acceleration data is being compared, it is often quite noisy and difficult to visually interpret. The time-integration of acceleration, however, yields a velocity-time history plot that is much easier for the user to interpret.

Figure A-22 and Figure A-23 show the typical output screen for the NCHRP 22-24 profile and the other two metric selection profiles, respectively (i.e., the ‘All metrics’ or ‘User defined’ profiles). If the NCHRP 22-24 profile was selected, only the Sprague & Geers and ANOVA metrics are shown. The word ‘Passed’ and a green square beside the value of each metric indicate that the metric value meets the NCHRP 22-24 acceptance criterion for that specific metric; the word ‘Not passed’ and a red square indicate that the value does not meet the suggested acceptance criterion. When either of the other two metrics profiles is selected, the results of all fourteen metrics are shown in the window and the word N/A appears beside any metrics that were not calculated (i.e., metrics not checked by the user in the ‘User defined’ profile). In these cases, no acceptance criteria have been defined and the user must use their own judgment regarding acceptable values. Also, only the graph of the true curve and test curve is shown.
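The three residual plots listed above can all be derived from the pointwise difference between the preprocessed test and true curves. The sketch below shows one way to compute them; the sign convention (test minus true) and the bin count are assumptions of this illustration.

```python
import numpy as np

def residual_summaries(true_y, test_y, bins=20):
    """Compute the quantities behind RSVVP's residual plots: the residual
    time history, its histogram and its cumulative distribution."""
    r = np.asarray(test_y, dtype=float) - np.asarray(true_y, dtype=float)
    counts, edges = np.histogram(r, bins=bins)       # residual histogram
    cdf = np.cumsum(counts) / counts.sum()           # cumulative distribution
    return r, (counts, edges), cdf
```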

Figure A-22: Screen output for the NCHRP 22-24 profile.

Figure A-23: Screen output for the ‘All metrics’ or ‘User defined’ profiles.

For multichannel input, if the weighting factors method has been selected, the user can view the results for any of the individual channels or the multi-channel weighted results by selecting the desired option from the drop-down menu beside the time-history graph. When ‘Multi-channel results’ is selected from the drop-down menu, a histogram of the weighting factors used to compute the metric values in the multichannel mode is plotted. This gives an immediate understanding of the weight of each input channel with respect to the others in the evaluation of the multichannel metrics.

Note: It may be necessary to wait a few seconds before the metric values and the graphs are updated to a newly selected channel.

The next step in RSVVP depends on whether or not the option for user time intervals was selected in the Metrics Selection GUI. If so, the user has the option to: (1) proceed to the evaluation of a new interval and/or (2) save the results and quit the program. Select the button corresponding to the desired action. If the option ‘whole and user defined time interval’ was selected, RSVVP requires the user to go through the process of defining at least one user-defined time interval before the option to save the results and quit RSVVP becomes available.

OUTPUT OF RESULTS

During the curve preprocessing and evaluation of the metrics, RSVVP generates several types of output, which are saved in the output-folder location defined by the user. If no output folder was selected, RSVVP automatically saves the results in a folder called ‘\Results_X’, where X is an incremental number (i.e., 1, 2, etc.). The folder ‘\Results_X’ is created in the folder from which RSVVP was executed. At the beginning of the run, RSVVP checks for a previous sequence of folders named “\Results_X” and creates a new Results folder with the suffix corresponding to the next number in the sequence. For example, if there is already a folder named ...\Results_3, the new output folder will be named ...\Results_4.

Procedure for Exiting and Saving Results

Pressing the button ‘Save results and Exit’ will open a browse window, as shown in Figure A-24, for the user to select where to save the results.
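The incremental naming scheme described above can be sketched in a few lines; this is an illustration of the behavior, not RSVVP source code.

```python
from pathlib import Path

def next_results_folder(base: Path) -> Path:
    """Create the next folder in the Results_1, Results_2, ... sequence."""
    n = 1
    while (base / f"Results_{n}").exists():
        n += 1
    out = base / f"Results_{n}"
    out.mkdir(parents=True)
    return out

# Example: if Results_3 already exists under the working directory,
# this call creates and returns .../Results_4.
# out_dir = next_results_folder(Path.cwd())
```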

Figure A-24: Pop-up browse window for selecting the output folder for RSVVP results.

The user has the option of creating a new folder by selecting the tab ‘Make New Folder’ in the browse window. If no selection has been made or if the cancel button has been pressed, RSVVP will automatically create a folder named ‘Results_X’ in the current directory.

Note: The process of saving the results may take a few minutes. During this period, RSVVP displays the message shown in Figure A-25.

Figure A-25: Message shown while RSVVP creates the results folder.

TABLE OF RESULTS (EXCEL® WORKSHEET)

The results of the comparison metrics are saved in the Excel file ‘Comparison Metrics.xls’. This spreadsheet contains the results for all the comparison metrics computed for

the whole time interval and all user defined time intervals, as shown in Figure A-26. The time interval used in each evaluation is indicated in the heading of each column.

                                 Whole time interval   User time interval #1   User time interval #2
                                 [0, 0.5474]           [0.08005, 0.19995]      [0.12005, 0.21995]
MPC Metrics                      Value [%]             Value [%]               Value [%]
Geers Magnitude                  7.1                   4.7                     10.5
Geers Phase                      23.9                  22.1                    21.4
Geers Comprehensive              24.9                  22.6                    23.8
Geers CSA Magnitude              N/A                   N/A                     N/A
Geers CSA Phase                  N/A                   N/A                     N/A
Geers CSA Comprehensive          N/A                   N/A                     N/A
Sprague-Geers Magnitude          N/A                   N/A                     N/A
Sprague-Geers Phase              N/A                   N/A                     N/A
Sprague-Geers Comprehensive      N/A                   N/A                     N/A
Russell Magnitude                5.6                   3.8                     7.9
Russell Phase                    22.5                  21.6                    21.2
Russell Comprehensive            20.5                  19.4                    20.1
Knowles-Gear Magnitude           58                    101.1                   1573.2
Knowles-Gear Phase               1.8                   0                       0
Knowles-Gear Comprehensive       53                    92.3                    1436.2
Single Value Metrics             Value [%]             Value [%]               Value [%]
Whang's inequality metric        38.5                  36.5                    38.1
Theil's inequality metric        N/A                   N/A                     N/A
Zilliacus error metric           76.8                  76.5                    85.9
RSS error metric                 N/A                   N/A                     N/A
WIFac_Error                      N/A                   N/A                     N/A
Regression Coefficient           66.7                  49.9                    65.2
Correlation Coefficient          N/A                   N/A                     N/A
Correlation Coefficient (NARD)   76.1                  77.9                    78.6
ANOVA Metrics                    Value                 Value                   Value
Average                          0.01                  0.04                    0.05
Std                              0.15                  0.25                    0.16
T-test                           7.21                  7.39                    14.43
T/T_c                            2.81                  2.88                    5.63

Figure A-26: Excel table containing the metrics results for the various time intervals.

A summary of the input files and preprocessing options for each channel is written at the end of the Excel file, as shown in Figure A-27. If RSVVP is run in multichannel mode using the weighting factors method, the weighting factors and the metrics values calculated for each separate channel are provided in the Excel file on separate sheets, as indicated in Figure A-27.

Figure A-27: Summary of pre-processing options and separate sheets for each input channel in the Excel file.

GRAPHS

RSVVP creates several graphs during the evaluation of the metrics and saves them as bitmap image files. For each time interval evaluated in RSVVP, the following graphs are created in the folder …/Results/Time-histories/:

a) Time histories of the true and test curves,
b) Time histories of the metrics and
c) Residual time histories, histogram and cumulative distribution.

For multichannel input, the time histories of the metrics represent the weighted average of the time histories of the metrics from each channel. Similarly, the residual time history, histogram and distribution are plotted using the weighted average of the residual histories of each channel. The graphs are saved in separate directories corresponding to each time interval.

TIME HISTORIES RESULTS

Time-history data generated by RSVVP is saved in a convenient format (ASCII or Excel) so that the user has ready access to the data. For example, the user may want to conduct additional post-processing of the data, or simply to recreate the graphs produced by RSVVP so that they can be reformatted for inclusion in a report. RSVVP generates time history files for the following:

a) Original input curves
b) Preprocessed curves
c) Calculated metrics

Each of the original input curves is saved as an ASCII file in the subfolder .../results_X/Input_curves. Likewise, the preprocessed curves used in the metrics calculations are saved as ASCII files in the subfolder …/Results/Preprocessed_curves. The time histories of the metrics are saved in Excel format; a separate metrics time-history file is created for each time interval evaluated (e.g., Metrics_histories_whole.xlsx).
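As an example of the kind of post-processing the saved ASCII files make possible, the sketch below reloads a pair of preprocessed curves and recreates the time-history plot for a report. The file names and the two-column (time, value) layout are assumptions for illustration; the actual file names are assigned by RSVVP.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file names; RSVVP writes one ASCII file per curve in the
# Preprocessed_curves subfolder (assumed here to hold time/value columns).
true = np.loadtxt("Results_1/Preprocessed_curves/true_curve.txt")
test = np.loadtxt("Results_1/Preprocessed_curves/test_curve.txt")

plt.plot(true[:, 0], true[:, 1], label="True curve")
plt.plot(test[:, 0], test[:, 1], label="Test curve")
plt.xlabel("Time [s]")
plt.ylabel("Acceleration [g]")
plt.legend()
plt.savefig("time_histories.png", dpi=200)
```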

EXAMPLES

Two examples are presented in the following sections to illustrate the step-by-step procedure for using RSVVP. In Example 1, an acceleration-time history from a full-scale crash test is compared to that of another “essentially” identical full-scale crash test using the single channel option in RSVVP. In Example 2, data from multiple data channels (three acceleration channels and three rotational rate channels) from a numerical simulation are compared to those from a full-scale crash test using the multiple channels option.

EXAMPLE 1: SINGLE-CHANNEL COMPARISON

In this example, RSVVP is used to compare the longitudinal acceleration-time histories of two full-scale crash tests. The tests involved a small car impacting a rigid longitudinal barrier at 100 km/hr at a 25-degree impact angle. Both tests were performed using new vehicles of the same make and model and the same longitudinal barrier. The acceleration-time history data was collected at the center of gravity of the vehicle in each case.

Although, theoretically, the results from two essentially identical crash tests should be the same, in practice, results from supposedly identical tests will always show some variations due to random differences in material make-up and experimental procedure. In fact, in complex experiments such as full-scale crash tests, it is practically impossible to completely control parameters such as the initial impact speed, impact angle, point of impact, or especially the behavior of the vehicle’s mechanical components. As such, perfect agreement between experiments is rarely achieved; however, the agreement should be within an acceptable range of expected differences that are typical of such experiments (e.g., tolerances determined from experience).

The steps of the evaluation process in this example include 1) data entry, 2) preprocessing, 3) selection of comparison metrics, 4) calculation of the metrics and 5) interpretation of the results based on recommended acceptance criteria for these types of full-scale crash tests.

Analysis Type

The first step is to select the type of curve comparison that will be performed. In this example, only a single pair of curves is being compared, so the option ‘single channel’ is selected in the GUI window, as shown in Figure A-17.

Figure A-17: The Single Channel option is selected in the GUI window.

Data Entry and Preprocessing

The next step is to load the two acceleration time histories (i.e., curves 1 and 2) into RSVVP. Note that when comparing results from a numerical computation to those from a physical experiment, the experimental data is always considered the true curve and the numerical data the test curve. In this case, however, both curves are from physical experiments, so the choice of true curve and test curve is irrelevant. In this example, curve 1 is arbitrarily designated as the true curve, as shown in Figure A-18.

Figure A-18: GUI preview of original input data loaded into RSVVP.

The various preprocessing operations are applied incrementally in this example in order to demonstrate how each operation contributes to the general improvement of the input curves. Note, however, that these preprocessing operations can be applied simultaneously.

From the graph shown in the GUI window (Figure A-18), it is obvious that both curves include some pre- and post-impact data. That is, the curves have an initial “flat” section at the beginning (pre-impact data) and a relatively flat section at the end starting at approximately 0.4 seconds (post-impact data). To trim the heads and tails of the curves, select the checkbox beside the option ‘trim original curves before preprocessing’, as shown in Figure A-19. Note: this option opens a pop-up window (not shown) that permits the user to perform the trim operation. The tails of the two curves were trimmed starting at 0.4 seconds, and the results are shown in the graphics display in the GUI window in Figure A-19. In this example, only the tail of each curve is trimmed in order to demonstrate the effectiveness of the synchronization

option, which will be used in a later step.

Note: It is typically desirable to also trim the head of the curves to eliminate any pre-impact data from the curve comparison.

Figure A-19: Input curves after the manual trimming operation.

The input curves are characterized by a certain level of high-frequency vibration (as is typical of most acceleration data), which is generally not important to the overall response of the vehicle and should be filtered out before computing the comparison metrics. In this example, the CFC 60 filter is selected, and the results of the filtering operation are shown in the graph on the right side of the GUI window in Figure A-20.

Figure A-20: Original and filtered acceleration time histories.

It is apparent from the graphs in Figure A-20 that the two curves are not synchronized with each other, as the acceleration data in each test started recording at a different time. There are two methods available in RSVVP for performing the synchronization operation: one based on the ‘Least squares’ error and the other based on the ‘Minimum area of residuals’. The results from both methods are shown in Figure A-21. Both of these methods typically give good results, especially if the pre- and post-impact data is trimmed appropriately. In this case, however, the ‘Minimum area of residuals’ method provides the best results.

Note: RSVVP shows a warning message if no filtering and/or synchronization options were selected.

Figure A-21: Data synchronization results using (a) the Least squares method and (b) the Minimum area of residuals method.

After the test and true curves have been preprocessed, the next step is the selection of the metrics and time intervals.

Metric selection and evaluation

There are three metrics profiles available in RSVVP: 1) NCHRP 22-24, 2) All Metrics and 3) User Selected Metrics. In this example, the NCHRP 22-24 metrics profile is selected, which is the recommended profile for comparing full-scale crash test data. This profile calculates the Sprague & Geers MPC metrics and the ANOVA metrics and provides an interpretation of the data based on recommended acceptance criteria.

The option ‘Whole time window and user-defined time window’ was selected from the drop-down list in the Time Window box. For this option, RSVVP first computes the metrics based on all the available data from the preprocessed curves (i.e., the complete length of the curves) and then computes the metrics on a selected interval of the data defined by the user. The metric evaluation is initiated by pushing the ‘Evaluate metrics’ button shown in Figure A-22.

Figure A-22: Selection of the metrics profile and time interval.

During the calculation of the metrics, various graphs appear and disappear on the computer screen. Screen captures of these graphs are taken during this process and the files are saved in the output directory defined by the user. When the metrics calculations are completed, the results are displayed in the GUI window shown in Figure A-23. Note that beside each metric value RSVVP indicates whether or not the result meets the recommended acceptance criteria.

Figure A-23: GUI window displaying results from the whole time interval metrics calculations.

Clicking the ‘Proceed to evaluate metrics’ button opens a GUI window, as shown in Figure A-24, that allows the user to define upper and lower boundaries for a new time interval over which to calculate the metrics. The interval selected for this example is 0.05 seconds to 0.15 seconds.

Figure A-24: GUI window for setting the user defined time interval.

Once the user time window has been defined, the button ‘Evaluate metrics’ is pressed to start the calculation of the metrics based on the data within the user-defined interval. As before, various graphs appear and disappear on the computer screen as RSVVP captures and saves the data. The results of the metrics calculations for the user-defined window are shown in the GUI window in Figure A-25.

Figure A-25: Metrics results for the user-defined time interval [0.05 sec, 0.15 sec].

At this point we have the option to save the results and exit or to evaluate the metrics on another time interval. For this example, we select the ‘Evaluate on a new interval’ button and define another time interval over which to compute the metrics, following the same procedure used in defining the first time interval. In this case, the time interval 0.15 seconds to 0.20 seconds is defined, as shown in Figure A-26; the resulting metrics calculations are shown in Figure A-27.

Note: The preceding procedure can be repeated indefinitely to compute comparison metrics for as many user-defined time intervals as desired.

Figure A-26: Time interval 0.15 seconds to 0.20 seconds defined using the GUI window.

Figure A-27: Metrics computed for the time interval [0.15 sec, 0.20 sec].

Save Results

To save the results and exit, simply press the button ‘Save results and Exit’. RSVVP creates a folder called \Results\ in the ‘working’ directory and creates subfolders for each time interval

evaluated during the metrics calculations. For this example, three different subfolders were created:

• Whole_time_Interval,
• User_defined_interval_1_[0.05 , 0.15] and
• User_defined_interval_2_[0.15005 , 0.19995].

Also, an Excel file named Comparison Metrics.xls is created that contains a summary of the metrics values for each interval. Table A-3 summarizes the results of the comparison metrics for each of the three time intervals (i.e., the whole time interval and two user-defined time intervals).

The values of the metrics computed using the whole time interval of data are all within the recommended acceptance criteria for these types of data, which indicates that the curves are similar enough to be considered “equivalent”. The metric values computed for the data between 0.05 seconds and 0.15 seconds also indicate that the two curves are effectively “equivalent.” The metric values calculated for the data between 0.15 seconds and 0.20 seconds, however, yield mixed results. For this section of the curves, the Sprague & Geers values indicate that the curves are more or less “equivalent,” while the ANOVA metrics indicate that the differences between the curves are not likely to be attributable to random experimental error. This result should not be surprising, since any differences that occur during the crash event are cumulative and will continuously alter the response of the vehicle. Thus, the similarity of the curves should be expected to diminish as the test progresses, especially towards the end of the test.

Table A-3: Summary of the metrics values for each of the time intervals evaluated.

Calculated Metric                              Whole Time Interval   User Time Interval   User Time Interval
                                               [0, 0.3396]           [0.05, 0.15]         [0.15, 0.20]
Sprague & Geers Magnitude                      4.8% [pass]           3.9% [pass]          9.9% [pass]
Sprague & Geers Phase                          21.2% [pass]          18.9% [pass]         25.8% [pass]
Sprague & Geers Comprehensive                  21.7% [pass]          19.3% [pass]         27.6% [pass]
ANOVA Average Residual Error                   -.08% [pass]          -3.84% [pass]        9.3% [fail]
ANOVA Standard Deviation of Residual Errors    17.77% [pass]         25.07% [pass]        27.13% [pass]

EXAMPLE 2: MULTIPLE-CHANNEL COMPARISON

In this second example, the multiple channel option in RSVVP is used to compare the results of a finite element analysis to the results of a full-scale crash test. Six data channels are compared: three acceleration channels and three rotational rate channels. Although each of these channels could be compared independently using the single channel option in RSVVP, the multiple channel option provides an additional analysis feature. That is, in addition to computing the metrics for each individual channel, the program also computes a single set of metrics that provides a comprehensive assessment of the combined data. The basic concept of this comprehensive assessment is to calculate a weight factor for each channel that is representative of its importance with respect to the other channels. Once the weighting factors have been evaluated, the multi-channel comprehensive metrics are calculated from a weighted average of the individual channel metrics.
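In equation form, the composite value of any given metric under the weighting factors method is simply a weighted average (a sketch of the scheme described above; the weight factors $w_i$ themselves are computed by RSVVP as described in Appendix A2):

$$C_{\text{composite}} = \frac{\sum_{i=1}^{N} w_i\, C_i}{\sum_{i=1}^{N} w_i}$$

where $C_i$ is the metric value computed for channel $i$ and $N$ is the number of channels.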

Analysis Type

The first step is to select the type of curve comparison that will be performed. In this example, six pairs of curves are being compared, so the option ‘multiple channel’ is selected in the GUI window, as shown in Figure A-28.

Figure A-28: The Multiple Channel option is selected in the GUI window.

Data Entry and Preprocessing

Data entry for the multiple channel option is accomplished by loading and preprocessing each pair of data channels one at a time, using the same basic procedure described in Example 1. In fact, the GUI for the multiple channel option is the same basic GUI used in the single channel option.

Since each pair of curves is processed independently, it is possible to select different preprocessing options for each channel. In this example, however, the same preprocessing options are used for each of the six pairs of data. In particular, all curves were trimmed using the ‘trim original curves before preprocessing’ option (i.e., lower limit = 0.0 and upper limit = 0.9

seconds) and filtered using the SAE CFC 60 filter. Figure A-29 shows the original and preprocessed curve pairs for each of the acceleration and rotational rate channels.

Figure A-29: Original and pre-processed curve pairs for each data channel (X, Y and Z accelerations; yaw, roll and pitch rates).

Note that, in the multi-channel case, the synchronization is performed in an intermediate step, after all the channels have been input. Once all the curve pairs have been entered into RSVVP and preprocessed, the ‘Proceed to curves synchro.’ option at the bottom of the GUI window opens a new GUI for synchronizing the curves. The default evaluation method, ‘Weighting Factors,’ will be used in this example (see Appendix A2 for more details regarding the weighting factor method). The default synchronization method, ‘Minimum absolute area of residuals,’ is then used to synchronize each of the curve pairs. The results of the synchronization operation are shown in Figure A-30.

Figure A-30: Synchronization results for each data channel (X, Y and Z accelerations; yaw, roll and pitch rates).

Metric selection and evaluation

After the synchronization process is completed, RSVVP automatically opens another GUI for selecting the desired metrics. For this example, the NCHRP 22-24 metrics profile (i.e., the ANOVA metrics and the Sprague & Geers MPC metrics) was selected from the Metrics box and the option ‘Whole time window only’ was selected from the drop-down menu in the Time Window box. The metrics calculations are initiated by pressing the ‘Evaluate metrics’ button at the bottom of the GUI window. RSVVP then calculates the metrics for each individual channel, computes a weight factor for each channel based on a pseudo-momentum approach (see Appendix A2), and computes the multi-channel comprehensive metrics from a weighted average of the individual channels. During the calculation of the metrics, various graphs appear and disappear on the computer screen. Screen captures of these graphs are taken during this process and the files are saved in the output directory defined by the user. When the metrics calculations

are completed, RSVVP displays the results of the first channel on the screen. Note that beside each metric value RSVVP indicates whether or not the result meets the recommended acceptance criteria. To view the results for the other five channels or to view the weighted average results, use the drop-down menu at the left of the True and Test curves graph to select the corresponding option. Note that when the weighted average of the results is selected from the drop-down menu, RSVVP displays a bar graph of the weight factors for each channel. Figures A-31 through A-36 show the results obtained for each channel, and Figure A-37 shows the weighted average results.

Figure A-31: Screen output of the results for the X channel.

Figure A-32: Screen output of the results for the Y channel.

Figure A-33: Screen output of the results for the Z channel.

Figure A-34: Screen output of the results for the Yaw channel.

Figure A-35: Screen output of the results for the Roll channel.

Figure A-36: Screen output of the results for the Pitch channel.

Figure A-37: Screen output of the results for the weighted average.

Table A-4 shows a summary of the comparison metrics computed for each data channel and the weighted average. The values that exceed the NCHRP 22-24 recommended acceptance criterion for each metric are displayed with a red background in the table. The comparison of the roll channel shows that the simulation results were not similar to those measured in the test. The magnitudes of the z-channel accelerations in the numerical simulation are consistent with the test data, but they are out of phase with each other. The pitch-channel data from the simulation was of similar magnitude and phase, but failed to meet the criterion for the standard deviation of residual errors. Thus, based on the comparison metrics for the individual channels, the numerical model cannot be deemed valid. Taking into consideration the weighted contribution of each channel to the overall response of the vehicle in the test event, however, yields a set of comprehensive metrics which indicate that, in fact, the simulation and test are in agreement. The weighting factors for each channel, shown in Figure A-37, indicate that the response of the vehicle was dominated by the x-acceleration, y-acceleration and yaw rate. It should not be surprising that the numerical simulation and the test were not in agreement with respect to the z, roll and pitch channels; since there is such low energy involved in these channels compared to the other channels, the agreement would not be expected to be any better had we been comparing two identical full-scale crash tests.

Table A-4: Summary of the calculated metrics for the multi-channel data.

Data Channel        Sprague & Geers (M)   Sprague & Geers (P)   ANOVA (average)   ANOVA (std)
x                   9 %                   37 %                  3 %               19 %
y                   2 %                   40 %                  0 %               2 %
z                   14 %                  48 %                  0 %               26 %
Yaw                 8 %                   9 %                   2 %               14 %
Roll                44 %                  48 %                  13 %              51 %
Pitch               2 %                   27 %                  -5 %              39 %
Weighted Average    9 %                   27 %                  0 %               2 %

REFERENCES

[1] M.H. Ray, “Repeatability of Full-Scale Crash Tests and a Criteria for Validating Finite Element Simulations”, Transportation Research Record, Vol. 1528, pp. 155-160, (1996).

[2] W.L. Oberkampf and M.F. Barone, “Measures of Agreement Between Computation and Experiment: Validation Metrics”, Journal of Computational Physics, Vol. 217, No. 1 (Special issue: Uncertainty quantification in simulation science), pp. 5-36, (2006).

[3] T.L. Geers, “An Objective Error Measure for the Comparison of Calculated and Measured Transient Response Histories”, The Shock and Vibration Bulletin, The Shock and Vibration Information Center, Naval Research Laboratory, Washington, D.C., Bulletin 54, Part 2, pp. 99-107, (June 1984).

[4] Comparative Shock Analysis (CSA) of Main Propulsion Unit (MPU), Validation and Shock Approval Plan, SEAWOLF Program: Contract No. N00024-90-C-2901, 9200/SER: 03/039, September 20, 1994.

[5] M.A. Sprague and T.L. Geers, “Spectral elements and field separation for an acoustic fluid subject to cavitation”, J. Comput. Phys., Vol. 184, pp. 149-162, (2003).

[6] D.M. Russell, “Error Measures for Comparing Transient Data: Part I: Development of a Comprehensive Error Measure”, Proceedings of the 68th Shock and Vibration Symposium, pp. 175-184, (2006).

[7] L.E. Schwer, “Validation Metrics for Response Time Histories: Perspective and Case Studies”, Engng. with Computers, Vol. 23, Issue 4, pp. 295-309, (2007).

[8] C.P. Knowles and C.W. Gear, “Revised validation metric”, unpublished manuscript, 16 June 2004 (revised July 2004).

[9] J. Cohen, P. Cohen, S.G. West and L.S. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Lawrence Erlbaum, Hillsdale, NJ, (3rd ed.), 2003.

[10] S. Basu and A. Haghighi, “Numerical Analysis of Roadside Design (NARD) Vol. III: Validation Procedure Manual”, Report No. FHWA-RD-88-213, Federal Highway Administration, Virginia, 1988.

[11] B. Whang, W.E. Gilbert and S. Zilliacus, Two Visually Meaningful Correlation Measures for Comparing Calculated and Measured Response Histories, Carderock Division, Naval Surface Warfare Center, Bethesda, Maryland, Survivability, Structures and Materials Directorate, Research and Development Report CARDEROCKDIV-U-SSM-67-93/15, September 1993.

[12] H. Theil, Economic Forecasts and Policy, North-Holland Publishing Company, Amsterdam, 1975.

[13] D.M. Russell, “Error Measures for Comparing Transient Data: Part II: Error Measures Case Study”, Proceedings of the 68th Shock and Vibration Symposium, pp. 185-198, (2006).

APPENDIX A1: Comparison Metrics in RSVVP

A brief description of the metrics evaluated by RSVVP is presented in this section. All fourteen metrics available in RSVVP are deterministic shape-comparison metrics. Details about the mathematical formulation of each metric can be found in the cited literature. Conceptually, the metrics can be classified into three main categories: (i) magnitude-phase-composite (MPC) metrics, (ii) single-value metrics and (iii) analysis of variance (ANOVA) metrics.

MPC METRICS

MPC metrics treat the curve magnitude and phase separately using two different metrics (i.e., M and P, respectively). The M and P metrics are then combined into a single-value comprehensive metric, C. The following MPC metrics are included in RSVVP: (a) Geers (original formulation and two variants), (b) Russell and (c) Knowles and Gear. [3-8] Table A1-1 shows the analytical definition of each metric. In this and the following sections, the terms $m_i$ and $c_i$ refer to the measured and computed quantities, respectively, with the subscript $i$ indicating a specific instant in time.

In all MPC metrics the phase component (P) should be insensitive to magnitude differences but sensitive to differences in phasing or timing between the two time histories. Similarly, the magnitude component (M) should be sensitive to differences in magnitude but relatively insensitive to differences in phase. These characteristics of MPC metrics allow the analyst to identify the aspects of the curves that do not agree. For each component of the MPC metrics, zero indicates that the two curves are identical.

Each of the MPC metrics differs slightly in its mathematical formulation. The different variations of the MPC metrics are primarily distinguished by the way the phase metric is computed, how it is scaled with respect to the magnitude metric and how it deals with synchronizing the phase. In particular, the Sprague & Geers metric [5] uses the same phase component as the Russell metric [6]. Also, the magnitude component of the Russell metric is peculiar, as it is based on a base-10 logarithm and it is the only MPC metric that is symmetric (i.e., the order of the two curves is irrelevant). The Knowles

and Gear metric [7,8] is the most recent variation of the MPC-type metrics. Unlike the previously discussed MPC metrics, it is based on a point-to-point comparison. In fact, this metric requires that the two compared curves first be synchronized in time based on the so-called Time of Arrival (TOA), which represents the time at which a curve reaches a certain percentage of its peak value. In RSVVP the percentage of the peak value used to evaluate the TOA is 5%, which is the typical value found in the literature. Once the curves have been synchronized using the TOA, it is possible to evaluate the magnitude metric. Also, in order to avoid creating a gap between time histories characterized by a large magnitude and those characterized by a smaller one, the magnitude component M has to be normalized using the normalization factor QS.
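As a concrete illustration of the MPC structure, the Sprague & Geers metric, as given in the cited literature [5,7], can be written in the integral notation of this appendix (with $m(t)$ the measured and $c(t)$ the computed response):

$$M = \sqrt{\frac{\int c^2\,dt}{\int m^2\,dt}} - 1, \qquad P = \frac{1}{\pi}\cos^{-1}\!\left(\frac{\int m\,c\,dt}{\sqrt{\int m^2\,dt\,\int c^2\,dt}}\right), \qquad C = \sqrt{M^2 + P^2}$$

Both $M$ and $P$ are zero for identical curves; $M$ is insensitive to phase differences, while $P$ is insensitive to magnitude differences.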

Table A1-1: Definition of MPC metrics (magnitude M, phase P and comprehensive C components).

Integral comparison metrics: Geers, Geers CSA, Sprague & Geers, Russell.
Point-to-point comparison metrics: Knowles & Gear.

(The analytical definition of each metric, including the Knowles & Gear TOA synchronization and normalization factor QS, is given in the cited references [3-8].)

A-67 SINGLE-VALUE METRICS

Single-value metrics give a single numerical value that represents the agreement between the two curves. Seven single-value metrics were considered in this work: (1) the correlation coefficient metric, (2) the NARD correlation coefficient metric (NARD), (3) the Zilliacus error metric, (4) the RSS error metric, (5) Theil's inequality metric, (6) Whang's inequality metric and (7) the regression coefficient metric. [9-12] The first two metrics are based on integral comparisons while the others are based on point-to-point comparisons. The definition of each metric is shown in Table A1-2.

Table A1-2: Definition of single-value metrics (integral comparison: correlation coefficient, correlation coefficient (NARD), weighted integrated factor; point-to-point comparison: Zilliacus error, RSS error, Theil's inequality, Whang's inequality, regression coefficient).
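Three of the point-to-point error measures have particularly compact forms; as a point of reference, the Zilliacus error Z, Theil's inequality T and Whang's inequality W can be written as follows (a reconstruction from the definitions published in the cited literature [11,12]):

\[
Z=\frac{\sum\lvert c_i-m_i\rvert}{\sum\lvert m_i\rvert},\qquad
T=\frac{\sqrt{\sum\left(c_i-m_i\right)^{2}}}{\sqrt{\sum c_i^{2}}+\sqrt{\sum m_i^{2}}},\qquad
W=\frac{\sum\lvert c_i-m_i\rvert}{\sum\lvert c_i\rvert+\sum\lvert m_i\rvert}
\]

For each of these error measures, zero indicates perfect agreement between the two curves.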

A-68 ANOVA METRICS

ANOVA metrics are based on the assumption that the two curves do, in fact, represent the same event, such that any differences between the curves must be attributable only to random experimental error. The analysis of variance (i.e., ANOVA) is a standard statistical test that assesses whether the variance between two curves can be attributed to random error. [1,2] When two time histories represent the same physical event, both should be identical, such that the mean residual error, \(\bar{e}\), and the standard deviation of the residual errors, \(\sigma\), are both zero. Of course, this is never the case in practical situations (e.g., experimental errors cause small variations between tested responses even in identical tests). Ray proposed a method where the residual error and its standard deviation are normalized with respect to the peak value of the true curve and arrived at the following acceptance criteria, based on six repeated frontal full-scale crash tests [1]:

• The average residual error normalized by the peak response (i.e., \(\bar{e}_r\)) should be less than five percent:

\[
\bar{e}_r=\frac{\sum \dfrac{c_i-m_i}{m_{\max}}}{n} < 0.05
\]

• The standard deviation of the normalized residuals (i.e., \(\sigma_r\)) should be less than 35 percent:

\[
\sigma_r=\sqrt{\frac{\sum\left(e_r-\bar{e}_r\right)^{2}}{n-1}} < 0.35
\]

where the residuals \(e_r=(c_i-m_i)/m_{\max}\) are normalized by the peak of the measured response.
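A minimal Matlab® sketch of this acceptance test is given below; the variable names are illustrative (m and c are column vectors of measured and computed values sampled at the same instants) and the code is an assumption for illustration, not an excerpt from RSVVP:

    e_r     = (c - m) / max(abs(m));                   % residuals normalized by the peak of the measured response
    n       = numel(e_r);
    e_bar   = sum(e_r) / n;                            % average normalized residual
    sigma_r = sqrt(sum((e_r - e_bar).^2) / (n - 1));   % standard deviation of the normalized residuals
    pass    = abs(e_bar) < 0.05 && sigma_r < 0.35;     % Ray's acceptance criteria [1]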

A-69 APPENDIX A2: Multi-Channel Weight Factors

The multi-channel mode in RSVVP was created for the specific purpose of comparing numerical simulations of vehicle impacts into roadside barriers with the results of a full-scale crash test. The data that are typically collected in such tests include (at a minimum) three acceleration channels (i.e., longitudinal, transverse and vertical directions) and three rotational rate channels (i.e., roll, pitch and yaw angular rates). These data are collected at the center of gravity of the vehicle and are used to measure vehicle response (e.g., stability) as well as to estimate occupant risk factors (e.g., occupant impact velocity and occupant ride-down acceleration). It is desirable to have as much time history data as possible available from the physical experiment for use in validating the numerical model; more often, however, only the six aforementioned channels of data are collected in the full-scale tests. As such, all these data should be used in the validation process. Sometimes, however, there may be one or two relatively unimportant channels that do not result in good quantitative comparisons. An example might be a small sign support test where the longitudinal acceleration has a much greater influence on the results of the impact event than do the lateral or vertical accelerations. The less important channels may not satisfy the criteria because they are essentially recording noise. The longitudinal channel in this example will probably be an order of magnitude greater than some of the other less important channels, and the response would essentially be determined by a single channel, i.e., the longitudinal channel. In such a case, the analyst may want to ignore any of the channels that appear to be less meaningful to the outcome of the crash event, or at least to rank those channels with less importance. The issue then is how to make this decision objective, since it is not likely that everyone will have the same opinion on how to rank each channel. The RSVVP program calculates a weight for each channel that corresponds to the importance that each channel had in the overall response in the physical test. The methods available in RSVVP for computing these weight factors include: 1. Inertial Method – weighted momentum approach and

A-70 2. Area Method (default) – pseudo momentum approach 3. Kinetic Energy Approach – (not available in the current version of RSVVP)

The Inertial Method determines the weight for each channel by computing the linear and rotational momentum of the six channels of data. The weight factors correspond to the proportion of the momentum in each channel. This method provides the most accurate weight value for each channel but requires that the mass of the vehicle and the three angular inertial properties be input into RSVVP. In many cases, however, the exact inertial properties of the test vehicle are not known. The Area Method, on the other hand, calculates a weight for each channel based on a pseudo momentum approach using the area under the curves. In this method, the inertial properties of the vehicle are not used in the calculations and therefore the weight values will not be an exact representation of the momentum change associated with each channel. The Area Method has been shown, however, to provide values similar to those computed using the Inertial Method for cases involving vehicle impacts into longitudinal roadside barriers (e.g., concrete median barriers).

AREA METHOD WEIGHT FACTORS

This section presents a brief description of how the weighting factors are calculated in RSVVP for the Area Method. Note: the weight factors are calculated in all cases using the data from the true curve input. Using the Area Method, RSVVP computes weight factors for each individual channel based on a 'pseudo' momentum approach. The basic concept of this weighting scheme is to calculate a local index for each channel that is representative of its importance (or weight) with respect to the other channels. Once these indexes have been computed, the weighting factors are calculated by simply dividing the index calculated for each channel by the sum of the indexes of all the channels. Thus, the total sum of the weight factors equals unity. Because the units differ between linear and rotational momentum, each of these two groups of channels is treated separately. The weighting factors for each channel are calculated using the following procedure:

A-71 • Evaluation of the area of the true curve for each acceleration channel, \(a_i\), and rotational channel, \(v_i\). • Evaluation of the sum of the acceleration areas, \(a_{Sum}\), and rotational areas, \(v_{Sum}\). • Evaluation of the local weight of each acceleration channel, \(lw_i^{(a)} = a_i / a_{Sum}\), and rotational channel, \(lw_i^{(v)} = v_i / v_{Sum}\). • Evaluation of the channel weight factors:

\[
w_i^{(a)}=\frac{lw_i^{(a)}}{\sum lw_i^{(a)}+\sum lw_i^{(v)}},\qquad
w_i^{(v)}=\frac{lw_i^{(v)}}{\sum lw_i^{(a)}+\sum lw_i^{(v)}}
\]

A Matlab® sketch of this computation is given at the end of this appendix. Once the weighting factors have been evaluated, the multi-channel metrics are calculated using a weighted average of the individual channel metrics. Note that the combination of the time histories is performed for each of the metrics selected at the beginning of the run by the user. Table A2-1 shows the acceptance criteria proposed for the verification and validation of finite element models in roadside safety using the NCHRP 22-24 metrics profile.

Table A2-1: Acceptance criteria suggested for the NCHRP 22-24 metrics profile (limits on the magnitude (M), phase (P) and composite (C) components of the Sprague & Geers metrics and on the mean and standard deviation of the ANOVA metrics).

Apart from the values of the comparison metrics, it is important that the graphs of the cumulative distribution and histogram of the residual errors have the following typical characteristics of a normal distribution:

A-72 • The histogram should have a normal (i.e., bell-shaped) distribution, and • The cumulative distribution should have an "S" shape. If the histogram and the cumulative distribution do not have these shape characteristics, the residuals between the two curves are most likely due to some systematic error, which should be identified and corrected.
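To make the Area Method concrete, the following minimal Matlab® sketch implements the weighting procedure described above; the variable names and the use of the absolute area under each true curve are assumptions for illustration, not code taken from RSVVP. Here acc and rot are cell arrays holding the true-curve time histories (first column time, second column data) of the acceleration and rotational channels:

    chan_area = @(ch) cellfun(@(c) trapz(c(:,1), abs(c(:,2))), ch);  % area of each true curve
    a = chan_area(acc);              % acceleration channel areas a_i
    v = chan_area(rot);              % rotational channel areas v_i
    lw_a = a / sum(a);               % local weights lw_i^(a)
    lw_v = v / sum(v);               % local weights lw_i^(v)
    total = sum(lw_a) + sum(lw_v);   % equals 2 when both channel groups are present
    w_a = lw_a / total;              % channel weight factors w_i^(a)
    w_v = lw_v / total;              % channel weight factors w_i^(v); sum(w_a) + sum(w_v) = 1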

B-i Appendix B: Roadside Safety Verification and Validation Program (RSVVP) Programmer's Manual December 2009 (Revision 1.4) Mario Mongiardini Malcolm H. Ray

B-ii CONTENTS   FOREWORD .................................................................................................................................. 1  INTRODUCTION .......................................................................................................................... 2  DESCRIPTION OF TASKS ........................................................................................................... 3  Input of data ................................................................................................................................ 3  Preprocessing .............................................................................................................................. 3  Scaling ...................................................................................................................... 4  Manual trimming and/or shift of the origin .............................................................. 4  Filtering .................................................................................................................... 4  Shift/drift correction ................................................................................................. 6  Re-sampling and Trimming ...................................................................................... 7  Synchronization ........................................................................................................ 8  Metrics evaluation ....................................................................................................................... 9  Post-processing ......................................................................................................................... 11  Output of results ........................................................................................................................ 12  PROGRAM STRUCTURE .......................................................................................................... 13  Notation used for the flow charts ........................................................................... 15  Graphical User Interfaces ......................................................................................................... 16  Block A (Initialization) ............................................................................................................. 19  Opening (Block A.1) .............................................................................................. 20  Initialization (Block A.2) ........................................................................................ 20  Block B (Input and Option selection) ....................................................................................... 24  Input/Preprocessing (Block B.1) ............................................................................ 27  Preprocessing 2 (Block B.2) ................................................................................... 33  Metrics selection (Block B.3) .................................................................................. 36  Block C (Curve preparation) ..................................................................................................... 38  Curves preparation (Block C.1) ............................................................................. 39  Curves histories (Block C.2) ................................................................................... 
41  Curve plotting (Block C.3) ...................................................................................... 42  Block D (Metrics evaluation).................................................................................................... 43  Whole time (Block D.1) .......................................................................................... 45  User time (Block D.2) ............................................................................................. 52  Block E (Output) ....................................................................................................................... 56  Configuration file (E.1) .......................................................................................... 58  Excel results (Block E.2) ........................................................................................ 58  Folder selection (Block E.3) ................................................................................... 65  REFERENCES ............................................................................................................................. 66  APPENDIX B-1: CODE VERIFICATION .................................................................................. 68  APPENDIX B-2: COMPILING RSVVP..................................................................................... 72  APPENDIX B-3: Type of Variables Used in the Code ............................................................... 73  APPENDIX B-4: Preprocessing Algorithms ............................................................................... 75  APPENDIX B-5: Metrics Algorithms ......................................................................................... 83 

B-iii List of Figures Figure B-1: Representation of (a) Shift and (b) drift effects. ......................................................... 6  Figure B-2: The behavior of the shift subroutine for a (a) positive or (b) negative offsets. ........... 9  Figure B-3: Diagram of the five main blocks of the RSVVP code .............................................. 14  Figure B-4: Symbols used for the flowcharts in this manual........................................................ 15  Figure B-5: Structure of a Matlab® GUI. ..................................................................................... 17  Figure B-6: Structure of the variable handles. .............................................................................. 18  Figure B-7: Representation of the workspace of the Main and Objective functions of a GUI. .... 19  Figure B-8: Diagram of Block A. ................................................................................................. 20  Figure B-9: Fields of the variable Selection. ................................................................................ 21  Figure B-10: Flow chart of the algorithm of sub-block A.2 (Initialization). ................................ 21  Figure B-11: Flow chart of the scripts Load configuration (left) and Load preprocess (right) (sub-block A.2). ............................................................... 23  Figure B-12: Diagram of Block B. ............................................................................................... 24  Figure B-13: Scheme of the loop which forms Block B. .............................................................. 25  Figure B-14: Flow chart of the main algorithm of Block B. ........................................................ 26  Figure B-15: Flow chart of the algorithm of sub-block B.1 (Input/Preprocessing). ..................... 27  Figure B-16: Flow chart of the script Load_Preprocess (sub-block B.1) – part A. ..................... 30  Figure B-17: Flow chart of the script Load_Preprocess (sub-block B.1) – part B. ..................... 31  Figure B-18: Flow chart of the script Load_curves (sub-block B.1). ........................................... 32  Figure B-19: Diagram of sub-block B.2 (Preprocessing2). .......................................................... 33  Figure B-20: Flow chart of the script Preprocessing_2 (sub-block B.2). .................................... 35  Figure B-21: Diagram of sub-block B.3 (Metrics selection). ....................................................... 37  Figure B-22: Flow chart of the script Metrics_selection (sub-block B.3). ................................... 38  Figure B-23: Diagram of Block C. ............................................................................................... 39  Figure B-24: Data organization of the matrix variables True and Test. ....................................... 40  Figure B-25: Diagram of sub-block C.1 (Curves preparation). .................................................... 40  Figure B-26: Diagram of sub-block C.2 (Curves histories) (center) and the two invoked scripts, Save_curves_original (left) and Save_curves_preprocessed (right). ......... 41  Figure B-27: Flow chart of the script Whole_plot_curves (sub-block C.3). ................................. 43  Figure B-28: Diagram of Block D. ............................................................................................... 44  Figure B-29: Main structure Block D. 
.......................................................................................... 44  Figure B-30: Diagram of sub-block D.1 (Whole time). ................................................................ 46  Figure B-31: Flow chart of the script Weighting_scheme_whole (sub-block D.1). ...................... 47  Figure B-32: Flow chart of the scripts Whole_time_evaluation (left) and Whole_time_postprocessing (right) (sub-block D.1). ...................................... 48  Figure B-33: Data organization of the variables Output_xls and Output_channel_xls. ............... 51  Figure B-34: Data organization of the variables Output_single_history_xls and Output_channel_history_xls. ........................... 52  Figure B-35: Diagram of sub-block D.2 (User time). ................................................................... 54  Figure B-36: Data organization of the variables Output_xls and Output_channel_xls. ............... 56  Figure B-37: Diagram of Block E. ................................................................................................ 57  Figure B-38: Main structure Block E. ........................................................................................... 57  Figure B-39: Diagram of sub-block E.1 (Configuration file). ...................................................... 58 

B-iv Figure B-40: Diagram of sub-block E.2 (Excel results). .............................................................. 59  Figure B-41: Flow chart of the script Excel_results (sub-block E.2). .......................................... 61  Figure B-42: Flow chart of the script Excel_results_MPC . ......................................................... 62  Figure B-43: Data extraction from the variable Output_channel_xls. .......................................... 63  Figure B-44: Flow chart of the script Excel_time_histories (sub-block E.2). .............................. 64  Figure B-45: Diagram of sub-block E.3 (Folder selection). ......................................................... 65  Figure B-46: Idealized time histories used for the case of (a) 20% magnitude and (b) time of arrival. ................................................................................................... 69  Figure B-47: Sprague-Geers component metric vs. time for the magnitude-difference test. ...... 70  Figure B-48: Sprague-Geers component metric vs. time for the phase-difference tests: (a) +20% and (b)-20%. ................................................................................. 71  Figure B-49: Example of a Matlab structure variable [1]. ............................................................ 73  Figure B-50: Example of a Matlab structure variable [1]. ............................................................ 74  Figure B-51: Algorithm of the SAE filtering. ............................................................................... 76  Figure B-52: Main algorithm of the script Shift_drift. .................................................................. 77  Figure B-53: User-defined functions shift_value (left) and drift_value (right) Algorithms. ........ 78  Figure B-54: Algorithm of the script Resampling_trimming. ....................................................... 79  Figure B-55: Algorithm of the script Curve_synchronizing. ........................................................ 80  Figure B-56. Algorithm of the user-defined functions area_res (left) and sre (right). ................ 81  Figure B-57: Algorithm of the user-defined function shift. .......................................................... 82  Figure B-58. Algorithm of the Sprague & Geers metric implemented in RSVVP. ...................... 84  Figure B-59. Algorithm of the ANOVA metrics implemented in RSVVP. ................................. 85  List of Tables Table B-1: Definition of MPC metrics. ........................................................................................ 10  Table B-2: Definition of single-value metrics. ............................................................................. 11  Table B-3: Definition of ANOVA metrics. .................................................................................. 11  Table B-4: Value of the Sprague & Geers metric components calculated using the RSVVP program.......................................................................................... 70 

B-1 FOREWORD

This guide describes the implementation of the Roadside Safety Verification and Validation Program (RSVVP) developed under the NCHRP 22-24 project. The main intent of the guide is to provide the programmer with a comprehensive description of the various parts which compose the RSVVP code and their corresponding algorithms. For this reason, this programmer's manual has the form of a "service" manual. The programmer can refer to this manual to retrieve all the information necessary to locate the section of the code which performs a specific operation and understand the implemented algorithms or, vice versa, given a specific section of the code, the programmer can go back to the task that part of the code performs. This information may be useful for future improvements, modifications or customizations of the original code. The manual is organized in the following manner. First, an initial overview of the different tasks performed by RSVVP is given, along with an explanation of the theory behind the implementation. Then, both a general and a detailed description of the structure of the code and the algorithms used to implement each task are given. RSVVP is written in Matlab® (version R2009a) [1] and the source code can be either executed directly from the Matlab® environment or compiled as an executable application. In the latter case, it is necessary to have the Matlab® Compiler Runtime (MCR) component installed on the machine on which the executable is to be run. See Appendix B-2 for further details about how to compile and run RSVVP as an executable application.

B-2 INTRODUCTION

RSVVP quantitatively compares one or multiple pairs of curves by computing comparison metrics, which are objective, quantitative mathematical measures of the agreement between two curves. The comparison metrics calculated by RSVVP can be used to accomplish one or more of the following operations: • Validate computer simulation models using data obtained from experimental tests • Verify the results of a simulation against another simulation or an analytical solution • Assess the repeatability of a physical experiment

Although RSVVP has been specifically developed to perform the verification and validation of roadside safety simulations and crash tests, it can be used to compare virtually any pair of curves. All the comparison metrics evaluated by RSVVP are deterministic, meaning they do not specifically address the probabilistic variation of either experiments or calculations (i.e., the calculation results are the same every time given the same input). In order to ensure a correct comparison of the curves, RSVVP gives the user the option to perform various preprocessing tasks before the metrics are calculated. The intuitive and interactive graphical user interface of RSVVP allows the user to effortlessly input the curves to be compared and easily perform any of the available preprocessing operations. Also, a series of automatic warnings alerts the user to possible mistakes during the preprocessing phase. For programmers interested in modifying or improving the original code, the Matlab® source code of RSVVP can be downloaded from: http://civil-ws2.wpi.edu/Documents/Roadsafe/NCHRP22-24/RSVVP/Source_code.zip

B-3 DESCRIPTION OF TASKS

This section gives a description of the operations performed by RSVVP and, when possible, the theoretical background behind the operations. The tasks performed by RSVVP can be grouped into six main categories: 1) Input of data 2) Preprocessing 3) Selection of metrics/time interval 4) Metrics evaluation 5) Post-processing 6) Output of results

Each task may be further divided into various steps or subtasks. A description of the steps performed for each of the above mentioned tasks is given in this section.

INPUT OF DATA

Data are input by loading ASCII files containing the data points of the curves. Each curve must be defined by a distinct file containing two columns, the first column representing the time (or x coordinate) and the second one the value of the curve at the corresponding time (i.e., the y coordinate). After the sets of data have been loaded, the program automatically calculates the minimum sampling rate and the maximum time value based on the time vector of each curve. These values are used to perform some of the preprocessing operations.

PREPROCESSING

The program performs various preprocessing operations. Some of them are necessary and automatically executed, while others are optional and the user can decide whether or not to perform them. The following preprocessing tasks are implemented in the code: o Scaling o Manual trimming and/or shift of the origin o Filtering o Shift/Drift

B-4 o Re-sampling and Trimming o Synchronization

The re-sampling and trimming operations are performed by default, as they are necessary to correctly compare any pair of curves: both curves must match point-to-point. In the next sections, a brief description of each preprocessing operation and the theory/method implemented is given. For a description of the algorithms used to implement each preprocessing feature, see Appendix B-4.

Scaling

The original input curves can be scaled by an arbitrary user-defined factor. This operation may be useful when the true and test curves have been collected using different units. The scaling of the original curves is performed by multiplying the vector containing the data points by the user-defined scale factors. In case the user has not input either of the two optional scale factors, one for each of the two curves of the pair, the default values are automatically set to unity.

Manual trimming and/or shift of the origin

The manual trimming of the original curves is performed after the rescaling operation. After the user has defined the minimum and maximum extreme values for either one or both of the two curves, the indexes corresponding to (or, in case the sampling rate does not allow an exact fit, approximating) these values are calculated. The vectors containing the data points for each curve are then trimmed using the index values previously found. After being trimmed, the time vectors are shifted in order to start from the origin by subtracting the corresponding initial value. Note that, at this point, the two curves may still be characterized by different sampling periods, as the trimming operation is performed independently for each of them.

Filtering

Filtering the time histories is a very common preprocessing operation for time history data. The filtering operation is performed by

B-5 implementing a digital filter which complies with the specifications of the SAE J211 standard [2], the reference on filtering for both NCHRP Report 350 [3] and EN 1317 [4]. A digital four-pole Butterworth low-pass filter is implemented using an algorithm with a double-pass filtering option (i.e., forward/backward): data are filtered twice, once forward and once backward, using the following difference equation in the time domain:

\[ Y(t)=a_0X(t)+a_1X(t-1)+a_2X(t-2)+b_1Y(t-1)+b_2Y(t-2) \]  (B1)

where \(X(t)\) is the input data sequence and \(Y(t)\) is the filtered output sequence. The filter coefficients vary with the CFC (Channel Frequency Class) value and are calculated using the following formulas:

\[ a_0=\frac{\omega_a^2}{1+\sqrt{2}\,\omega_a+\omega_a^2} \]  (B2)

\[ a_1=2a_0 \]  (B3)

\[ a_2=a_0 \]  (B4)

\[ b_1=\frac{-2\left(\omega_a^2-1\right)}{1+\sqrt{2}\,\omega_a+\omega_a^2} \]  (B5)

\[ b_2=\frac{-1+\sqrt{2}\,\omega_a-\omega_a^2}{1+\sqrt{2}\,\omega_a+\omega_a^2} \]  (B6)

where:

\[ \omega_d=2\pi\cdot CFC\cdot 2.0775 \]  (B7)

\[ \omega_a=\frac{\sin\left(\omega_d T/2\right)}{\cos\left(\omega_d T/2\right)} \]  (B8)

and T is the sampling period. In order to avoid the typical scatter at both the beginning and the end of the filtered time histories due to the application of the difference equation (B1), a head and a tail, consisting respectively of a simple repetition of the first and last data values, are added to the original data sets. Once the modified data sets are filtered, the head and tail are deleted from the final filtered curve. The length of the head and tail is equal to the closest integer approximation of the curve frequency divided by 10.
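A Matlab® sketch of this filter, assuming x is a column vector sampled with period T, is shown below; the function names are illustrative, not taken from the RSVVP source, and the head/tail padding described above is omitted for brevity:

    function y = sae_j211_filter(x, T, CFC)
    % Double-pass (forward/backward) Butterworth low-pass filter, equations (B1)-(B8)
    wd = 2*pi*CFC*2.0775;                                    % (B7)
    wa = sin(wd*T/2)/cos(wd*T/2);                            % (B8)
    a0 = wa^2/(1 + sqrt(2)*wa + wa^2);                       % (B2)
    a1 = 2*a0;  a2 = a0;                                     % (B3), (B4)
    b1 = -2*(wa^2 - 1)/(1 + sqrt(2)*wa + wa^2);              % (B5)
    b2 = (-1 + sqrt(2)*wa - wa^2)/(1 + sqrt(2)*wa + wa^2);   % (B6)
    y = one_pass(x, a0, a1, a2, b1, b2);                     % forward pass
    y = flipud(one_pass(flipud(y), a0, a1, a2, b1, b2));     % backward pass
    end

    function y = one_pass(x, a0, a1, a2, b1, b2)
    y = x;                                                   % first two samples are left unchanged
    for t = 3:numel(x)
        y(t) = a0*x(t) + a1*x(t-1) + a2*x(t-2) + b1*y(t-1) + b2*y(t-2);  % (B1)
    end
    end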

B-6 Shift/drift correction

The shift and drift effects are generally due to the heating of the sensors during an experiment. In particular, the shift effect is a homogeneous vertical translation of the entire experimental curve due to the change in sensor temperature between the time the sensor was zeroed and the time the test was performed (Figure B-1a). The drift effect, instead, is a linearly increasing translation of the experimental curve due to a change in the temperature of the sensor between the beginning and the end of the test (Figure B-1b).

Figure B-1: Representation of (a) shift and (b) drift effects.

The drifted curve is basically shifted by a value which increases/decreases linearly in time according to the following simple formula:

\[ drift(t)=a+m\,t \]  (B9)

where m is the slope of the linear drift and can be easily computed as:

\[ m=\frac{b-a}{\Delta t} \]  (B10)

with a and b representing respectively the initial and final values of the linear drift function and Δt its total length, as shown in Figure B-1(b).

B-7 The correction of the shift effect can be easily achieved by translating the whole curve by the shift value a. As for the drift effect, once the value of the slope m has been calculated from Equation (B10), it is possible to correct it by translating each point of the curve by the opposite of the corresponding value obtained from the drift baseline function (Equation (B9)). Note that, as the drift baseline is not a constant function, the correction value is different for each sample point of the original curve. In order to calculate the shift or drift values by which the input curves have to be translated, it is necessary to consider the mean values at the beginning and the end of each curve. In particular, it is important to ensure that the values at the beginning and the end of the vector containing the data points (i.e., the head and tail) are sufficiently constant to guarantee that the mean values of these sub-vectors effectively represent the shift/drift values. An initial guess for the point up to which the head of the curve can be considered constant (or from which the tail of the curve is considered constant) is the time at which the curve reaches 5 percent of the peak value (or from which the curve is less than 5 percent of the curve's peak). In order to check that the initially guessed head or tail sub-vectors are constant, their standard deviation must be under a critical value, which was defined to be 0.1. In case the initial sub-vectors do not satisfy the above mentioned criterion, the algorithm iteratively decreases the initial length of the head and tail sub-vectors until the standard deviation of the new reduced sub-vectors is less than the proposed critical value.

Re-sampling and Trimming

As most of the metrics which quantify the agreement between the test and simulation curves are based on the evaluation of the point-to-point error, it is mandatory that the two curves have the same sampling rate in order to correctly calculate the residuals. The original curves, however, may have been sampled at different frequencies, so it may be necessary to re-sample them in order to compute a point-to-point difference. The program checks whether the two sets of data have the same sampling period within a tolerance of 5E-6 sec. If the two curves do not have the same sampling frequency, RSVVP proceeds to re-sample the curve which has the lower sampling rate (i.e., the bigger difference in time between two contiguous data points) at the higher rate of the other curve. The re-sampling is performed by means of a linear interpolation, assuming that the time vector starts from zero. Also, when the two curves are re-sampled, the

B-8 smaller of the end values between the two original time vectors is considered, in order to trim them to the same interval. Note that, because of the new sampling rate, the end value of the new time vector may be rounded down with respect to the maximum time of the original curves.

Synchronization

Usually the experimental and numerical time histories do not start at the same time and, hence, the two curves are shifted by a certain value in the time direction. In this case, the curves should be shifted back or forth so that the impact time in each of them is synchronized. As the comparison metrics are mostly based on the evaluation of the residual error, it is necessary to remove, or at least reduce as much as possible, any time shift between the test and simulation curves; otherwise, even if the two curves are exactly the same, this gap in time would negatively affect the final metric result. Synchronizing the test and simulation curves is the last preprocessing operation performed by RSVVP before the metrics evaluation. Two different methods of synchronization have been implemented in RSVVP, based on (1) the minimum area between the curves or (2) the least squared error of the residuals. The main idea in both cases is to find the shift value which minimizes the target function. In order to implement these two methods of synchronization, a specific function shifts either of the two curves along the time direction by a value s, with a positive value of s meaning a forward shift of the test curve and a negative value being equivalent to a backward shift of the simulation curve (Figure B-2). RSVVP identifies the shift value which minimizes either the absolute area of the residuals (method 1) or the sum of squared residuals (method 2). The shift value corresponding to the minimum error is the most probable matching point between the curves. In case the result is not satisfactory, the user can repeat the synchronization procedure using a different initial shift value at the beginning of the minimization algorithm or using the other minimization method.
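A minimal Matlab® sketch of the second synchronization method (least squared error of the residuals) is given below; the variable names are illustrative (t, m and c are the common time vector and the true and test data vectors) and the actual RSVVP implementation is more elaborate:

    s0 = 0;                                               % initial shift value (user-adjustable)
    % target function: sum of squared residuals as a function of the shift s;
    % the shifted test curve is re-evaluated on the true curve's time vector
    % and points shifted outside the record are taken as zero
    sse    = @(s) sum((m - interp1(t + s, c, t, 'linear', 0)).^2);
    s_opt  = fminsearch(sse, s0);                         % shift minimizing the target function
    c_sync = interp1(t + s_opt, c, t, 'linear', 0);       % synchronized test curve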

B-9 Figure B-2: The behavior of the shift subroutine for (a) positive or (b) negative offsets s of the test curve with respect to the true curve.

METRICS EVALUATION

Various comparison metrics have been implemented in the code. The mathematical formulation of the metrics is shown in Table B-1 through Table B-3, where the measured and computed data points are indicated as mi and ci, respectively. For more details about the comparison metrics implemented in RSVVP, refer to Appendix A-1 of the User's Manual. For a description of the algorithms used to implement each metric, see Appendix B-5. Although all the metrics by definition give a scalar value (i.e., a simple number), they are implemented so that they can be evaluated on various time intervals of increasing size. The smallest time interval on which the input curves are compared is ten percent of the total time and, at each step, the interval is increased in size until it becomes the total interval. In this way, it is possible to reconstruct a time history of the metrics indicating how the values evolve as the curves are compared on increasing portions of their total time interval.
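The expanding-window evaluation can be sketched in Matlab® as follows, where metric_fun stands for any of the scalar comparison metrics and the names are illustrative:

    n    = numel(m);                         % total number of samples
    ends = round(linspace(0.1*n, n, 10));    % window end indexes, from 10% to 100% of the record
    metric_history = zeros(size(ends));
    for k = 1:numel(ends)
        idx = 1:ends(k);                     % increasing portion of the total time interval
        metric_history(k) = metric_fun(m(idx), c(idx));
    end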

B-10 Table B-1: Definition of MPC metrics (magnitude, phase and comprehensive components of the Geers [5], Geers CSA [6], Sprague & Geers [7,8] and Russell [9] integral-comparison metrics and of the Knowles & Gear [8] point-to-point comparison metric; see also Table A1-1).

B-11 Table B-2: Definition of single-value metrics (integral comparison: correlation coefficient [10], correlation coefficient (NARD) [11], weighted integrated factor [12]; point-to-point comparison: Zilliacus error [13], RMS error [13], Theil's inequality [14], Whang's inequality [13], regression coefficient [10]).

Table B-3: Definition of ANOVA metrics.

Average residual error* [15,16]: \[ \bar{e}_r=\frac{\sum \dfrac{c_i-m_i}{m_{\max}}}{n} \]

Standard deviation of the residuals* [15,16]: \[ \sigma_r=\sqrt{\frac{\sum\left(e_r-\bar{e}_r\right)^{2}}{n-1}} \]

(*) normalized to the peak of the measured values

POST-PROCESSING

The post-processing of data consists of the following operations:

B-12  Compute the weighted average of the metrics,  Plot the time history of the metrics and  Prepare the variable to output results in Excel files. The program can evaluate metrics considering either a single couple of curves or multiple pairs simultaneously. In the latter case, the results obtained from each pair of curves (channels) are combined together during the post-processing of data by computing a weighted average. The weighting factors are automatically calculated by the user based on the area of the true curve for each channel. Following is a description of the procedure implemented to calculate the weighting factors:  Evaluation of the area of the true curve for each acceleration channel, ai , and rotational channel, vi.  Evaluation of the sum of the acceleration areas, aSum, and rotational areas, vSum.  Evaluation of the local weight of each acceleration channel, Sum ia i a alw )( , and rotational channel Sum iv i v vlw )(  Evaluation of the channel weight factors,    (v) i (a) i (a) i)( lwlw lwa iw    (v) i (a) i (v) i)( lwlw lwv iw Apart from calculating the weighted values of the metrics, the post-processing operation also consists in plotting the time histories of the metrics and preparing the variables with a complex data structure which contain the time histories and results for the output in Excel files. OUTPUT OF RESULTS Results are output in various formats:  ASCII files,  Excel files and  Graphs (bmp. pictures).

B-13 In particular, the output of the results in Excel format requires that the results be stored in variables characterized by particular data structures, which will be discussed in detail in the following section.

PROGRAM STRUCTURE

The information presented herein is intended to illustrate the basic structure and organization of the RSVVP program so that users can easily locate where and how each specific task is programmed. In each of the following sections, a general overview of the organizational structure of the code is given, followed by a detailed description of each programmed task. The code has a modular structure; it is divided into five main blocks (i.e., subroutines), where each block performs one or more of the operations described in the previous section. The five main blocks (i.e., groups of operations) are: • Block A – Initialization • Block B – Input & Options selection • Block C – Curve Preparation • Block D – Metrics Evaluation • Block E – Output

Each of the five blocks is invoked systematically by the principal script (Main.m), which is executed at the start of the program. Each of the blocks then invokes one or more secondary scripts that perform the various tasks. Both the principal and secondary scripts may recall either specific functions from various Matlab® libraries (toolboxes) or user-defined functions programmed ad hoc to perform specific operations. Each operation may be performed by one or more scripts and, in some cases, a secondary script may also recall one or more subscripts. Figure B-3 illustrates the five organizational blocks of RSVVP and the respective operations performed in each. Every block is divided into two or three sub-blocks, each performing specific operations. In most cases these blocks are accessed via graphical user interfaces. The general concepts behind the implementation of the graphical interfaces in Matlab®, and a detailed description of the algorithms and data structures making up the various sub-blocks, are presented in the

B-14 following sections. For various reasons, which will be explained later in this manual, it was not possible, in general, to implement each of the programmed tasks in a single corresponding block (i.e., there is no one-to-one correspondence between the tasks and the blocks).

Figure B-3: Diagram of the five main blocks of the RSVVP code (Block A: Initialization – Opening, Initialization; Block B: Input and Option selection – Input/Preprocessing, Preprocessing 2, Metrics selection; Block C: Curve preparation – Curve preparation, Curves plotting; Block D: Metrics evaluation – Whole time, User time; Block E: Output – Configuration file, Excel results, Folder selection).

B-15 Because of the complexity of the code, the algorithms implemented in each block are described at different levels of detail, starting from a general overview and going into more detail at each further level of the flowcharts. In particular, each block is described using flowcharts at three different levels: 1. Block level – Delineates the main frame of the block and the relations between the various sub-blocks. 2. Sub-block level – Describes the implementation of each sub-block. 3. Script level – Provides a detailed description of the specific scripts invoked by a sub-block.

Notation used for the flow charts

The flowchart diagrams presented in the next sections of this manual have been drawn using a set of standard symbols. Figure B-4 displays the symbols used in the flowcharts and their related meaning.

Figure B-4: Symbols used for the flowcharts in this manual (data, decision, process, internal storage, script/function and block symbols).

Note that the filename containing a specific script/function which performs the operation indicated by each 'predefined' shape is shown beside the shape in bold characters.

B-16 GRAPHICAL USER INTERFACES

The interaction between the program and the user is achieved using various Graphical User Interfaces (GUIs). A graphical interface in Matlab® is regarded as a function, which means it is possible to define both input and output variables. The possibility to receive input variables allows a GUI to load information about the configuration saved during a previous instance, or a default configuration in case the GUI is opened for the first time. In fact, some of the GUIs are implemented in iterative loops and, after being invoked for the first time, they may be opened again. In this case, the new instance of a GUI is given as input a variable which contains all the information about the configuration previously saved. In order to organize all the configuration information in a single variable which is easy to pass through the GUI functions, the Matlab® structure type has been adopted. A structure variable is composed of multiple fields which can store an array of any available Matlab® data type. The fields of a structure variable can be heterogeneous, thus allowing different types of information to be stored in the same variable. For more information, see Appendix B-3. In Matlab®, each GUI is composed of two main components: (1) a figure which includes all the various graphical objects (checkbox, radio button, drop-down menu, plot area, etc.) and (2) an M-file which recalls the figure and manages the various components. The development and modification of the figure can be achieved only in Matlab® by using the command "guide" (GUI Developer), which opens a graphical interface for the creation/modification of the GUI figure. The M-file of each GUI is a Matlab® script containing a set of functions and is basically composed of the following main parts (Figure B-5): • Main function • Opening function • Output functions • Object functions

B-17 Figure B-5: Structure of a Matlab® GUI (Main function invoking the Opening function, the Output function and the Object functions 1 through N).

The Main function has the same name as the GUI and is the function which is recalled in the code to start the GUI itself. This function invokes in sequence the Opening function and the Output function and is then set into "wait" mode in order to make the GUI wait for user response before returning to the code from which it was invoked. The Opening function performs all the necessary operations to initialize the GUI immediately before it appears on the screen. In particular, it may receive the structure variable passed to the Main function which contains the configuration saved during a previous instance of that GUI. In this case, the fields of the input structure variable are used to properly configure the GUI. The Output function, instead, manages the data passed back by the GUI function when it is closed. Once the GUI has appeared, the user can interact with the different objects. To each object of the GUI corresponds a specific Object function, which implements the operations to be performed for that particular object. During the period in which the Main function is in wait mode, each time the user interacts with an object of the GUI, the corresponding function is invoked from the local workspace of the Main function. In general, each Object function receives as input the three variables "hObject", "eventdata" and "handles" from the workspace of

B-18 the Main function and may return the variable "handles" to the same local workspace. In most cases, with some rare exceptions, the exchange of information between an Object function and the Main function is achieved using the field 'output' of the structure variable "handles" (i.e., handles.output). The field "handles.output" is usually further structured into various subfields (Figure B-6), according to the specific GUI, and is the variable which is eventually passed back by the Main function to the principal code of the program.

Figure B-6: Structure of the variable handles (the output field is further structured into various sub-fields).

In case the GUI is iteratively recalled, the values of this structure variable are repeatedly passed back and forth between the main code and the GUI (Figure B-7).
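As an illustration of this mechanism, a hypothetical Object function might store a user's choice in handles.output as follows (the callback and field names are illustrative, not taken from the RSVVP source):

    function checkbox_filter_Callback(hObject, eventdata, handles)
    % store the state of a checkbox in the output field of the handles structure
    handles.output.filter_on = get(hObject, 'Value');
    % save the modified handles structure so that it is visible to the Main function
    guidata(hObject, handles);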

B-19 Figure B-7: Representation of the workspace of the Main and Object functions of a GUI (input and output configuration).

When the GUI main function closes and returns to the main invoking code, the related figure is not automatically closed by Matlab®. Thus, in order to close the figure, it is necessary to add to the main code the command "close all" immediately after the command which recalled the GUI.

BLOCK A (INITIALIZATION)

Block A is the first of the blocks into which the program code is subdivided; it performs the initial operations necessary to start the program. The functions performed by this block are: defining the type of comparison, selecting between some basic choices and performing the initialization of the major variables used during the execution of RSVVP. Moreover, in case a configuration file is input, this block performs the following tasks: (i) input of data and (ii) preprocessing. As shown in Figure B-8, Block A is divided into two sub-blocks: • Opening and • Initialization. A detailed description of each sub-block is given in the following sections.

B-20 Figure B-8: Diagram of Block A (sub-blocks A.1 Opening and A.2 Initialization within the sequence of Blocks A through E).

Opening (Block A.1)

The sub-block Opening contains the first interactive Graphical User Interface (GUI) of the code. This GUI, which is called GUI_Start, manages the user's choice of the following options: • Single / Multiple channel • Re-sampling rate limit • Configuration file

When the GUI function GUI_Start closes, the options selected by the user are passed to the variable Selection, which is used afterwards in the code. Figure B-9 shows the fields of the structured variable Selection which are used to store the options.

Initialization (Block A.2)

The sub-block Initialization performs an initialization of most of the variables used by RSVVP or, in case a configuration file has been loaded, it reads the variables containing information about the preprocessing options from the file and performs the necessary

B-21 input/preprocessing operations. Figure B-10 shows the main structure of the sub-block Initialization. As can be seen from the flow chart, different operations are performed according to whether a configuration file has been input or not.

Figure B-9: Fields of the variable Selection.

Figure B-10: Flow chart of the algorithm of sub-block A.2 (Initialization).

In the case where no configuration file is input, the sub-block A.2 initializes most of the variables used during the execution of the code. Although in Matlab® there is no need to statically allocate memory for the definition of variables, the way RSVVP has been implemented requires that an initial default value be assigned to the variables which control the input/preprocessing options. Following is a list of the variables initialized: • Reply (v) • Reply_2 (v) • Procedure • Channel_list • Time_interval (v) • Time_interval_total_run (v)

B-22 In the case of multiple channels, most of the option values for each input channel are stored in vectors instead of scalar variables (the variables which become vectors are indicated with "(v)" in the previous list). In this case, each element of the vector represents the default option for the corresponding channel. In case a configuration file has been loaded, the input curves are read from the original data files and then preprocessed according to the options obtained from the configuration file. Figure B-11 describes in detail the algorithms used to implement the two scripts Load_configuration and Load_preprocess recalled by the main algorithm of Block A.2.

Load_configuration

This script implements the operations necessary to load the variables contained in the configuration file. As can be seen in the flowchart of the algorithm, if the user has selected to load the configuration in edit mode, a copy of the input re-sampling rate is saved at the beginning in order to overwrite the value of the corresponding variable once it has been loaded from the configuration file. The algorithm checks whether the file name of the configuration file is correct: if it is, the variables are loaded; otherwise, a warning message is displayed and the program quits. In order to avoid a crash of RSVVP in the case where no channel has been manually trimmed, it is also necessary to delete the corresponding flag variable Manual_trim_config.

Load_preprocess_initialization

This script manages the loading of the data from the input curves and the preprocessing operations according to the information obtained from the configuration file. The two branches clearly visible in the flowchart diagram of this script indicate the two main sections into which it can be divided. These two sections of the algorithm run in series, one after the other, and each of them contains a loop (both indicated by a red rectangle). The first branch is a loop which cycles over the total number of channels (i.e., one in the case of a single channel or six in the case of multiple channels). Each iteration of the loop loads the input curves and performs the preprocessing operations according to the information read from the configuration file. For more details about the scripts or parts of the code which perform

B-23 the loading and the specific preprocessing operations, refer to the next section (Block B). Before concluding each iteration, the preprocessed curves are saved in the matrix variable Preprocessed (refer to the section on Block B in this manual for more details about the structure of this variable) and the variable Ch_num is incremented.

Figure B-11: Flow chart of the scripts Load_configuration (left) and Load_preprocess (right) (sub-block A.2).

B-24 After the cycle has concluded and all the channels have been input and preprocessed, the minimum length among all the pairs of channels is computed. In the case of multiple channels, first the weighting factors or the resultants are calculated and, then, the program cycles over the channels/resultants to perform the synchronization of the curves in case it is requested by the configuration file. Eventually, a vector with the names of the specific input channels (or the resultants) is created to be used during the postprocessing operations for the output of results.

BLOCK B (INPUT AND OPTION SELECTION)

Block B implements most of the interaction activity with the user. In fact, this block manages three different graphical interfaces which respectively perform three different tasks: (i) input of data, (ii) preprocessing and (iii) selection of metrics/time intervals. The diagram of Block B is shown in Figure B-12; the block is divided into three sub-blocks: • Input/Preprocessing (GUI_1_3), • Preprocessing 2 (GUI_1_3_II) and • Metrics selection (GUI_metrics).

Figure B-12: Diagram of Block B (sub-blocks B.1 Input/Preprocessing, B.2 Preprocessing 2 and B.3 Metrics selection).

B-25 The main characteristic of Block B is that the three sub-blocks are implemented in sequence within a loop which terminates only when the user decides to proceed to the evaluation of the metrics (Figure B-13).

Figure B-13: Scheme of the loop which forms Block B.

Figure B-14 shows the main algorithm of Block B. The implementation of the three sub-blocks in a loop makes it possible to go back to a previous sub-block to apply any further change to the options selected in a previous instance. In fact, by skipping directly to the next iteration of the loop, it is possible to go back to the first sub-block (i.e., B.1). In case the user is executing operations implemented in sub-block B.3 (i.e., the last of the three blocks), it is also possible to skip directly to the second sub-block, B.2, during the new iteration of the loop by defining a flag variable and a conditional statement at the beginning of the first block. In this case, given a particular value of the flag variable, the conditional statement for the execution of the first block would skip it and start the new iteration of the loop directly from sub-block B.2.

B-26 Figure B-14: Flow chart of the main algorithm of Block B.

Any time they open, the graphical interfaces of each of the three sub-blocks load the options and the various data input by the user during the previous iteration of the main loop. This information is read from variables whose structure and content will be discussed later in this section. Using such a structure, whenever the user goes back to the previous sub-block, the graphical interface which is reopened shows the same information as in the previous iteration. In the case where a configuration file has been loaded, the input/preprocessing operations performed by Block B are skipped, as they have already been performed in Block A. A detailed description of each sub-block is given in the following sections.
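Conceptually, the Block B loop can be sketched in Matlab® as follows (the flag names are illustrative, not the actual RSVVP variables):

    proceed_to_metrics = false;
    skip_B1 = false;
    while ~proceed_to_metrics
        if ~skip_B1
            % sub-block B.1: input/preprocessing GUI (GUI_1_3)
        end
        skip_B1 = false;
        % sub-block B.2: preprocessing 2 GUI (GUI_1_3_II)
        % sub-block B.3: metrics selection GUI (GUI_metrics); its callbacks
        % can set proceed_to_metrics = true to leave the loop, or
        % skip_B1 = true to restart the next iteration directly from B.2
    end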

B-27 Input/Preprocessing (Block B.1)

This sub-block is the first of Block B and implements the GUI which handles the input of the curves and their preprocessing (GUI_1_3). The flow chart in Figure B-15 shows the general implementation of this sub-block. As can be seen from the flow chart, the core of sub-block B.1 is the script Load_preprocess, which is invoked only under the condition that no configuration file has been input (config_loaded = 0) and that GUI_1_3 is set to be opened (open_GUI_1_3 = 1). The former condition prevents curves from being input and preprocessed when a configuration file has been loaded, while the latter condition is used to skip the execution of this block in order to go directly to the next sub-block during a new iteration of the main loop. Whether or not the script Load_Preprocess has been invoked, a copy of the original input curves is saved before proceeding to sub-block B.2. If the Exit button has been pressed during the execution of GUI_1_3, the algorithm terminates the program.

Figure B-15: Flow chart of the algorithm of sub-block B.1 (Input/Preprocessing).

B-28

Load_Preprocess

As previously mentioned, this script is the core of sub-block B.1 and manages the input of the channel(s) and the corresponding preprocessing. The algorithm of the script is shown in Figure B-16 and Figure B-17. The script is embedded in a loop which stops only when, after the last channel has been input and preprocessed, the user decides to proceed to the selection of metrics. The loop makes it possible to move back and forth through the available channels by incrementing/decrementing the variable ch_num, which identifies the specific channel. This allows the user to return to a previously input channel and make modifications even after that channel has already been input. The flag variable which controls the loop is Reply.flag, and the condition to keep cycling is a value of either 0 or 4. Before the cycle starts, this flag variable is assigned the null value; the value 4 means that the preprocessing options of the specific channel have been reset and a new iteration has to be performed on that channel. When the user pushes the button to proceed to the selection of metrics in the corresponding GUI (GUI_1_3), Reply.flag becomes unity and the cycle terminates. The variable ch_num specifies the channel considered at each iteration of the loop. Its value is by default equal to 1 in the case of a single channel, or is assigned by the variable Reply.channel_id (written by GUI_1_3) in the case of multiple channels. At the beginning of each iteration of the loop, GUI_1_3 opens. The first time GUI_1_3 is opened for the channel indicated by ch_num, the default values created during the initialization are recalled. When the GUI closes, the algorithm loads the curves from the selected files and performs the preprocessing according to the selections made by the user. At the beginning of the next iteration, the GUI is opened again and shows the preprocessed curves. If the GUI has already been opened for a specific channel, it reloads any previous options or data; if the user has modified or reset any of the preprocessing options, the algorithm reloads the curves when the GUI closes and modifies the preprocessing according to the new options. If the last channel has been skipped, the algorithm automatically ends the loop.

B-29

Apart from the scaling operation, which is performed by the script Load_curves, and the manual trimming of the curves, which is implemented in the script Manual_trim_shift, all the other preprocessing operations are invoked by the script Preprocessing. The scaling option is implemented by simply multiplying the vector containing the data points by the scaling factor defined by the user for that specific channel and curve. Data vectors are manually trimmed by limiting the original vectors to the indexes closest to the lower and upper boundary values, respectively, while the shift of the time vector is obtained by subtracting the initial time value from the time vectors of the curves. The other preprocessing operations are performed by the script Preprocessing which, in turn, invokes one or more of the following scripts according to the specific options selected for each channel:
• Filtering
• Shift_drift
• Resampling_trimming
• Curve_synchronizing

The algorithms for these preprocessing operations are described in Appendix B-4. The preprocessed curves are saved into the variable Preprocessed, which is a matrix of cells. As shown in Figure B-18, the ith row contains respectively the time, true-data and test-data vectors of the corresponding channel i. Note that the ability to store a vector in each element of the matrix Preprocessed is achieved by first converting the vector into a cell. In the case of a single-channel comparison, this matrix reduces to a horizontal vector. After the loop ends, the pairs of curves from the various input channels are trimmed to the length of the shortest and the algorithm proceeds to the next block. A minimal sketch of this cell layout is given below.
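To make the storage scheme concrete, the following minimal Matlab sketch builds a cell matrix with the row layout described above (time, true data and test data for each channel). The variable names follow the text; the channel count and the data are invented for the example and are not part of RSVVP.

    % Minimal sketch of the Preprocessed cell matrix (illustrative data).
    num_channels = 3;
    Preprocessed = cell(num_channels, 3);      % columns: time, true, test
    for ch_num = 1:num_channels
        t = (0:0.001:0.5)';                    % example time vector
        true_curve = sin(2*pi*ch_num*t);       % placeholder true data
        test_curve = 0.9*sin(2*pi*ch_num*t);   % placeholder test data
        % each vector is wrapped in a cell, so rows may have different lengths
        Preprocessed(ch_num, :) = {t, true_curve, test_curve};
    end
    test_ch2 = Preprocessed{2, 3};             % test-data vector of channel 2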

B-30

Figure B-16: Flow chart of the script Load_Preprocess (sub-block B.1) – part A.

B-31

Figure B-17: Flow chart of the script Load_Preprocess (sub-block B.1) – part B.

B-32

Figure B-18: Sketch of the structure of the variable Preprocessed.

Load_curves

This script manages the loading of the input curves from the ASCII files provided by the user. In particular, it skips any initial rows containing non-numeric characters and scales the data points by the user-defined factors. The algorithm of this script is shown in the flow chart below. The same algorithm is repeated for both the test and true curves.

Figure B-18: Flow chart of the script Load_curves (sub-block B.1).

B-33

Preprocessing 2 (Block B.2)

This sub-block implements the synchronization of the curves when multiple channels are input. In this case, the synchronization is performed after the single channels have been input and partially preprocessed, in order to give the user the option to compute the resultants from the acceleration and rotational-rate channels. If the resultant option is selected, the true and test curves from the input channels are first combined to obtain the corresponding resultant curves, which are then synchronized. The flow chart in Figure B-19 shows the general implementation of this sub-block. It is invoked only when multiple channels are input (comparison_type = 2) and is skipped entirely in the case of a single channel. At the beginning, sub-block B.2 initializes some variables and then proceeds to the core script Preprocessing_2, which manages GUI_1_3_II. If the Exit or the Back button is pressed in GUI_1_3_II, sub-block B.2 respectively quits the program or goes back to GUI_1_3 by forcing a new iteration of the loop described by the flow chart in Figure B-14.

Figure B-19: Diagram of sub-block B.2 (Preprocessing 2).

B-34

Preprocessing_2

This script is the core of sub-block B.2. It manages the synchronization of the multiple channels and the selection of the method used to compute the equivalent metrics (weighting factors or resultant). As in the previous sub-block, the script is mostly contained in a loop so that the user can move back and forth through the available channels by incrementing/decrementing the variable ch_num, which identifies the specific channel. The algorithm of the script is shown in Figure B-20. Before entering the loop, the variable Preprocessed_2 is initialized as a copy of the variable Preprocessed. In this way, the curves originally preprocessed by the previous sub-block are always available in case the user decides to reset the preprocessing performed by this sub-block. At the beginning of each iteration of the loop, the GUI called GUI_1_3_II is opened. As with GUI_1_3, the first time GUI_1_3_II is opened for each channel the default values created during the initialization are recalled and, as soon as the GUI closes, the algorithm performs the preprocessing operations. When the GUI is opened again at the beginning of the next iteration, both the initial and the preprocessed curves are shown. If the synchronization option has been selected in the GUI (indicated by the variable Reply_2.synchro), the specific channel is synchronized. Also, if the user decides to re-synchronize the curves, a new GUI is invoked (GUI_Synchro) to define a new starting point for the synchronization procedure. Note that in this case, after the re-synchronization, the algorithm goes to the next iteration of the loop, where GUI_1_3_II is invoked to show the newly synchronized curves. The default option for the multichannel computation (weighting factors/resultant) is weighting factors. If this option is changed in the GUI (identified by the variable Reply_2.update), the algorithm re-initializes the variable Preprocessed_2 accordingly by invoking the script Initialization_2.

B-35 Figure B-20: Flow chart of the script Preprocessing_2 (sub-block B.2).

B-36

If the iteration of the loop is the first for that channel (Reply_2.first_iteration = 1), the variable Preprocessed_3 is initialized. If the Next or Previous buttons are pushed in GUI_1_3_II, a new iteration of the loop is forced and GUI_1_3_II is invoked showing the next or previous channel.

Metrics selection (Block B.3)

Metrics selection is the last of the sub-blocks of Block B. It implements the GUI for the selection of the metrics and of the time intervals on which to compare the input curves (whole time and/or user-defined intervals). The flow chart in Figure B-21 shows the general implementation of this sub-block. The first action performed is to set the value of the variable procedure according to the type of multichannel comparison (i.e., weighting factors or resultant). This variable is then used in subsequent parts of the program code. If multiple channels are input, a script checks whether any channel has been skipped and, if so, null vectors are added to the corresponding row of the variable Preprocessed_3. The algorithm also creates a configuration file for the RSVVP run. Note that, if a previous configuration file has been loaded without editing (total-run mode), the creation of the configuration file is skipped.

Metrics_selection

This script is the core of sub-block B.3 and manages the GUI for the selection of the metrics and of the type of time intervals on which to compare the curves (whole and/or user-defined time intervals). The algorithm of the script is shown in Figure B-22. The script manages the GUI for the selection of metrics (GUI_metrics). If a configuration file is loaded in 'run mode', the GUI is not opened, as the information is taken directly from the configuration file and RSVVP automatically re-runs the comparison. Otherwise, once the GUI has collected the information entered by the user, it is closed and the algorithm proceeds according to the selected option(s).

B-37

Figure B-21: Diagram of sub-block B.3 (Metrics selection).

If the Back button has been pressed in the GUI, the algorithm sets the variables open_GUI_1_3 and config_loaded to zero and a new iteration of the main loop of Block B is forced. The values assigned to these variables skip the opening of GUI_1_3 at the next iteration of the main loop. Once the user has selected the metrics and time intervals, the algorithm sets the variable Metrics_list according to the metrics selected in the GUI. Note that, in this case, the variable Metrics.flag is set to unity in order to quit the main loop of Block B and proceed to Block C.

B-38

Figure B-22: Flow chart of the script Metrics_selection (sub-block B.3).

BLOCK C (CURVE PREPARATION)

Block C finalizes the preprocessing activities and creates the plots of the preprocessed input curves. Figure B-23 shows the diagram of this block, which is composed of three sub-blocks:
• Curves preparation,
• Curves histories and
• Curves plotting.

A detailed description of each sub-block is given in the following sections.

B-39

Figure B-23: Diagram of Block C.

Curves preparation (Block C.1)

The sub-block Curves preparation refines the preprocessing by trimming the pairs of curves from the various channels to the length of the shortest channel. This operation is performed only in multichannel mode (comparison_type = 2). Data points of the input channels are initially stored in the variable Preprocessed_3, which is a matrix of cells (for details about the structure of this variable, refer to the description of Block B). The vectors of the true and test curves of each channel are extracted from the respective cell structures, trimmed, and stored in the matrix variable True or Test, respectively. Because the length of the vectors is the same for all channels after the trimming, the data can be stored in a simple matrix structure without the cell configuration required before the trimming operation. The organization of the matrices True and Test is represented in Figure B-24: for both matrices, the vector of data points for channel i is saved in the ith column. The flow chart in Figure B-25 shows the general implementation of this sub-block, and a code sketch of the trimming step follows Figure B-25.

B-40

Figure B-24: Data organization of the matrix variables True and Test.

Figure B-25: Diagram of sub-block C.1 (Curves preparation).
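The following Matlab sketch condenses the trimming step just described; the synthetic stand-in for Preprocessed_3 uses the cell layout sketched earlier and is not actual RSVVP data.

    % Trim all channels to the shortest length and store them as matrices.
    Preprocessed_3 = {(0:0.01:1)',   rand(101,1), rand(101,1);   % channel 1
                      (0:0.01:0.8)', rand(81,1),  rand(81,1)};   % channel 2
    n_ch    = size(Preprocessed_3, 1);
    lengths = cellfun(@numel, Preprocessed_3(:, 2));  % length per channel
    min_len = min(lengths);                           % shortest channel
    True = zeros(min_len, n_ch);
    Test = zeros(min_len, n_ch);
    for i = 1:n_ch
        True(:, i) = Preprocessed_3{i, 2}(1:min_len); % ith column = channel i
        Test(:, i) = Preprocessed_3{i, 3}(1:min_len);
    end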

B-41

Curves histories (Block C.2)

This sub-block saves the time histories of both the original and the preprocessed input curves. The original and preprocessed time histories are managed respectively by the scripts Save_curves_original and Save_curves_preprocessed. The main implementation of this sub-block and the two scripts are shown in Figure B-26. As usual, in the case of multiple channels, the algorithm of both scripts contains a loop which cycles over the total number of input channels.

Figure B-26: Diagram of sub-block C.2 (Curves histories) (center) and the two invoked scripts, Save_curves_original (left) and Save_curves_preprocessed (right).

B-42

Curves plotting (Block C.3)

The sub-block Curves plotting performs two main operations: (i) plotting the preprocessed curves after they have been finalized by the previous sub-block (Curves preparation) and (ii) evaluating the area of the true curve for each channel. Before plotting the curves, the code creates the destination folder where the corresponding bitmap files are saved. If the NCHRP 22-24 profile has been chosen, the integrals of the original input curves are also plotted. In both cases, a conditional statement based on the value of the variable zip_flag handles the option to compress the bitmap files. The evaluation of the area of each channel is implemented at the end of the block and the corresponding values are saved in the vector variable Channel_area. This vector is used later in the code (Block D) to evaluate the weighting factors for the whole time interval. Note that, for each user-defined time interval, the channel areas are evaluated again in a later part of the code located in Block D. The flow chart in Figure B-27 shows the general implementation of this sub-block.

B-43

Figure B-27: Flow chart of the script Whole_plot_curves (sub-block C.3).

BLOCK D (METRICS EVALUATION)

Block D implements the computation of the comparison metrics and the corresponding post-processing operations necessary to evaluate the weighted average of the metrics obtained from the single channels. As can be seen in Figure B-28, Block D is composed of two sub-blocks which handle the evaluation and post-processing of metrics on the whole time interval and on the user-defined time interval(s), respectively.

B-44

Figure B-28: Diagram of Block D.

The main structure of Block D is shown in Figure B-29. The sub-blocks Whole time (D.1) and User time (D.2) are invoked when the curves are compared on the whole time interval or on a user-defined time interval, respectively. In particular, if the whole-time option is not selected, an empty vector/matrix is created for the corresponding output in the Excel file.

Figure B-29: Main structure of Block D.

B-45

Whole time (Block D.1)

The sub-block Whole time calculates the metrics on the full time interval on which the curves are defined. The flow chart in Figure B-30 shows the general implementation of this sub-block. If RSVVP is running in single-channel mode, the weight assigned to the channel is unity. In multichannel mode, at the beginning of its execution the sub-block computes the weighting factors according to the method selected by the user. If the resultants of the accelerations and rotational rates have been computed, each resultant is assigned a weight equal to 50 percent of the total; otherwise, the weighting factors for each channel are computed based on the area of the true curves. In the latter case, the script Weighting_scheme_whole is invoked. The calculation of the metrics is performed by invoking the script Metrics_evaluation, which is cycled over each of the input channels (the loop in the flowchart is indicated by the red rectangle). After the metrics have been computed for each channel, sub-block D.1 also performs two main post-processing operations:
• computation of the weighted average (multiple channels) and
• creation of variables for the Excel output.

The post-processing is executed by the script Whole_time_postprocessing. Finally, sub-block D.1 displays the results (metric values and various graphs) through a GUI managed by the script Table_output_whole.

B-46 Figure B-30: Diagram of sub-block D.1 (Whole time).

B-47

Weighting_scheme_whole

This script calculates the weighting factors in the case of multiple channels. The steps followed to compute the weighting factors are shown in the flowchart in Figure B-31. The method is based on the computation of the area of the true curve for each channel; the areas of the accelerations and of the rotational rates are considered separately because, otherwise, the different units could skew the evaluation of the weighting factors. A code sketch of this scheme follows the figure.

Figure B-31: Flow chart of the script Weighting_scheme_whole (sub-block D.1).
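As a rough illustration of an area-based scheme, the sketch below computes weights from the areas of the true curves, normalizing the acceleration and rotational-rate groups separately. The channel grouping (channels 1-3 accelerations, 4-6 rotational rates) and the equal split between the two groups are assumptions made for the example, not necessarily the exact scheme coded in Weighting_scheme_whole.

    % Hedged sketch of area-based weighting (grouping and 50/50 split assumed).
    time = (0:0.001:1)';
    True = rand(numel(time), 6);                 % stand-in true curves
    area = zeros(1, 6);
    for i = 1:6
        area(i) = trapz(time, abs(True(:, i))); % area of the ith true curve
    end
    acc = 1:3;  rot = 4:6;                       % assumed channel grouping
    w = zeros(1, 6);
    w(acc) = 0.5 * area(acc) / sum(area(acc));   % accelerations, normalized apart
    w(rot) = 0.5 * area(rot) / sum(area(rot));   % rotational rates, normalized apart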

B-48

Figure B-32: Flow chart of the scripts Whole_time_evaluation (left) and Whole_time_postprocessing (right) (sub-block D.1).

B-49

Whole_time_evaluation

This script manages the computation of the metrics according to the selections made by the user in the corresponding GUI. The variable used to store the metric flags is Metrics. The algorithm of this script is composed of a series of conditional blocks, one for each of the metrics implemented in the code; the generic flowchart of these blocks is shown in Figure B-32 (left). If a metric has been selected, it is evaluated by invoking the corresponding function. Metric functions are programmed in Matlab® and, in general, they receive as input the preprocessed curves corresponding to a specific channel and give as output one or more vectors containing the time history(ies) of the metric. Once the metric values have been computed, a counter variable (metric_evaluated) is incremented; this counter drives an on-screen waiting bar which informs the user of how many metrics have been calculated and how many remain.

Whole_time_postprocessing

This script is the core of sub-block D.1. It manages the following four operations:
• calculation of the weighted average of the metrics (multiple channels),
• creation of the variables containing the metric values and time histories (for output both on screen and in Excel files),
• round-off of the metric values, and
• plotting of the residuals (time histories, histogram and cumulative distribution).

The flowchart of the whole script is shown in Figure B-32 (right). As in the previous script, the algorithm is mainly composed of a series of conditional blocks, one for each of the metrics implemented in the code; the generic structure of each conditional block is delimited by the dotted rectangle in the flowchart. Each block is related to a specific metric and is executed only if that metric has been computed. In the case of multiple channels, a loop cycles over each of the input channels. For each iteration of the loop, a local variable is created which contains the metric values for that specific channel. The values for each channel are then multiplied by the corresponding weighting factors previously computed

B-50

by the script Weighting_scheme_whole. These weighted values are then summed immediately after the loop ends in order to obtain the weighted average. Once the variables with the metric values and time histories have been created for a specific metric, they are appended to a corresponding global variable which accumulates the results for the various metrics computed during the run. After the script has post-processed all the metrics, it proceeds to round the values and plot the graphs of the residuals, histogram and cumulative distribution. Note that, during the rounding of the values, the algorithm checks whether each of the implemented metrics has been computed and, if it has not, automatically inserts the label 'N/A'. Finally, the script creates the following variables containing the values to be output in Excel format by Block E:
• Output_channel_history_xls
• Output_single_history_xls
• Output_xls
• Output_channel_xls

The diagrams in Figure B-33 and Figure B-34 show how data are organized in each of these variables. A short sketch of the weighted-average step described above is also given below.
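The weighted-average step can be condensed into a few lines of Matlab; the per-channel values and weights below are invented for illustration.

    % Weighted average of a per-channel metric (illustrative values).
    metric_ch = [0.21; 0.35; 0.18];   % metric value for each channel
    w         = [0.5;  0.3;  0.2];    % weighting factors (sum to one)
    weighted_avg = 0;
    for i = 1:numel(metric_ch)
        weighted_avg = weighted_avg + w(i)*metric_ch(i);  % scale and accumulate
    end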

B-51

Figure B-33: Data organization of the variables Output_xls and Output_channel_xls.

B-52

Figure B-34: Data organization of the variables Output_single_history_xls and Output_channel_history_xls.

Table_output_whole

This script is the last one invoked by sub-block D.1 and contains the commands to create a summary table with graphics and the values of the metrics. The summary table is a GUI managed by the function Table_Results_NCHRP or Table_Results, according to the type of metric profile selected at the beginning of the run.

User time (Block D.2)

The sub-block User time calculates the metrics on the time interval(s) defined by the user during the execution of RSVVP. The flow chart in Figure B-35 shows the general implementation of this sub-block. The main characteristic of the algorithm of this sub-block is that it is implemented in a loop (indicated by the larger red rectangle in the flow chart) in order to allow cycling over an arbitrary number of user-defined time intervals.

B-53

Because the scripts used in this sub-block have a structure similar to the corresponding scripts of the preceding sub-block, D.1, the reader can refer to the descriptions already given in the previous section. The only script described in this section is Store_results, as it is specific to sub-block D.2. At the beginning of each iteration of the loop, the script User_time_interval is invoked, which manages a GUI for the definition of the time intervals; the curves are then plotted on that interval by the script User_plot_curves. After the time interval has been defined, the weighting factors and the comparison metrics are calculated using two algorithms (Weighting_scheme_user and User_time_evaluation, respectively) similar to those used to compute the metrics for each channel in sub-block D.1. As in sub-block D.1, after the metrics have been computed for each channel on the user-defined time interval, they are post-processed by the script User_time_postprocessing. The variables containing the final values of the metrics have the same structure as those described earlier. The results for each user-defined time interval (i.e., each iteration of the loop) are appended to the corresponding variables used in Block E to write the results in Excel format. In fact, because multiple user time intervals may be defined, at the end of each iteration the matrices containing the results for the specific time interval are stored before being rewritten for the next time interval (see the description of the script Store_results for details). As in sub-block D.1, before the iteration concludes, the results for the specific time interval are displayed on the screen through a GUI managed by the script Table_output_user. If the user decides to define a new time interval, the variable Time_interval, which counts the number of user-defined time intervals, is incremented and a new iteration of the cycle starts.

B-54   Figure B-35: Diagram of sub-block D.2 (User time).

B-55

Store_results

This script manages the storage of the results obtained at each iteration of the main loop of sub-block D.2 (i.e., for each user-defined time interval on which the comparison metrics are computed). Because the variables in which the script User_time_postprocessing saves the results are rewritten at each iteration, this script appends the results obtained for each specific time interval to the corresponding global variables. Finally, the following global variables are created, which will be used by the following block (Block E):
• Time_history_channel_xls_user
• Time_history_single_xls_user
• Output_xls
• Output_channel_xls

The diagrams in Figure B-36 show the data organization of the variables Output_xls and Output_channel_xls. The column vector containing the final metric values for the specific time interval (Output_xls_user) is appended to the end of the matrix Output_xls, which was previously created by sub-block D.1 and already contains the final results for the whole time interval. Note that, if the whole time interval has not been considered, the results are appended to an initially null matrix. In the case of multiple channels, the final metric values for the current time interval have been previously saved in the matrix Output_channel_xls; in a manner similar to that used to append the single/weighted results, the matrix containing the results for the current time interval is appended along the third dimension of the 3-D matrix Output_channel_xls. As for the matrices containing the time histories of the metrics for the single/weighted channel (Output_single_history) and for each of the input channels (Output_channel_history), they are each converted to a single cell and appended to the vectors Time_history_single_xls_user and Time_history_channel_xls_user, respectively. A sketch of these append operations is shown below.
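The append operations might look like the sketch below; the variable names follow the text, while the sizes and contents are placeholders. The third-dimension append uses Matlab's cat function.

    % Illustrative append operations for one user-defined time interval.
    Output_xls             = rand(10, 1);      % whole-time results (stand-in)
    Output_channel_xls     = rand(10, 6);      % per-channel, whole time
    Time_history_single_xls_user  = {};
    Time_history_channel_xls_user = {};

    Output_xls_user        = rand(10, 1);      % results for this interval
    Output_xls             = [Output_xls, Output_xls_user];        % new column
    Output_channel_xls     = cat(3, Output_channel_xls, rand(10, 6)); % new page
    Output_single_history  = rand(500, 3);     % metric time histories
    Output_channel_history = rand(500, 6);
    Time_history_single_xls_user{end+1}  = Output_single_history;  % cell append
    Time_history_channel_xls_user{end+1} = Output_channel_history;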

B-56

Figure B-36: Data organization of the variables Output_xls and Output_channel_xls.

BLOCK E (OUTPUT)

Block E is the last block implemented in the program code. It handles the various results output by RSVVP. As can be seen in Figure B-37, this block is composed of three sub-blocks: Configuration file, Excel results and Folder selection. The main structure of Block E is shown in Figure B-38.

B-57

Figure B-37: Diagram of Block E.

Figure B-38: Main structure of Block E.

B-58

Configuration file (E.1)

The sub-block Configuration file manages the option to update the configuration file with the information about any time interval defined by the user during the execution of RSVVP. The flow chart in Figure B-39 shows the general implementation of this sub-block. The option to update the configuration file is offered whenever the user has defined a time interval during the execution of the program. The algorithm skips the update only in two cases: (1) a previous configuration file has been loaded in total-run mode and no new user intervals have been defined, or (2) the user does not want to update the previous configuration file.

Figure B-39: Diagram of sub-block E.1 (Configuration file).

Excel results (Block E.2)

This sub-block is the core of Block E, as it creates the Excel files containing the metric values and the time histories, respectively.

B-59

Figure B-40: Diagram of sub-block E.2 (Excel results).

As can be seen from the algorithm shown in Figure B-40, after the GUI for the selection of the destination folder where the results are to be saved, the code runs the scripts Excel_results and Excel_time_histories.

Excel_results

This script manages the creation of the Excel file containing the metric values for the various time intervals considered during the RSVVP run. The flowchart of this script is shown in Figure B-41. In the case of multiple channels, the algorithm also cycles over the total number of input channels and creates a specific sheet containing the results for each of them. The writing of the results for the MPC, single-value and ANOVA metrics is managed by three separate scripts: Excel_results_MPC, Excel_results_Single and Excel_results_ANOVA, respectively. Note that, if the NCHRP 22-24 profile has been selected, the single metrics are not computed; the corresponding script is therefore skipped and these metrics do not appear in the Excel file.

Excel_results_MPC

Figure B-42 shows the flowchart of the script Excel_results_MPC. The first step performed by this script is to create the headers for each of the time intervals considered during the run of the program and write them in the main sheet of the Excel file. In the case of multiple

B-60

channels, a new sheet is written for each channel and the same headers are also written in each of the channel sheets. This operation is implemented in a loop which cycles over the number of input channels. As this script is focused on managing and writing only the values of the MPC metrics, after the headers have been created the script proceeds to extract the metric values from the variable Output_xls, which was created during the post-processing in Block D. The values corresponding to the MPC metrics are extracted by considering only a certain range of rows of the matrix Output_xls. Note that, depending on the metric profile chosen, the number of computed metrics varies and, hence, so does the corresponding number of rows extracted. In the case of multiple channels, a similar extraction procedure is applied to the 3-D matrix Output_channel_xls in order to extract only the desired metric values; in this case, the results for each channel are extracted by cycling over the total number of channels. See Figure B-43 for a graphical visualization of the extraction procedure, and the sketch below for the indexing involved. The other two scripts, which manage the writing of the results for the single-value and ANOVA metrics (Excel_results_Single and Excel_results_ANOVA), are implemented in a similar manner and, hence, the corresponding flowcharts are not shown for the sake of conciseness.
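The row-range extraction might be coded as below; the row indices for the MPC metrics are assumed (they depend on the metric profile), and the matrices are stand-ins with the shapes described in the text.

    % Hedged sketch of the MPC extraction (row range is an assumption).
    Output_xls         = rand(14, 3);        % rows = metrics, cols = intervals
    Output_channel_xls = rand(14, 6, 3);     % metrics x channels x intervals
    mpc_rows = 1:3;                          % assumed rows holding MPC values
    mpc_vals = Output_xls(mpc_rows, :);      % all time intervals at once
    for i = 1:size(Output_channel_xls, 2)    % cycle over the channels
        mpc_ch = squeeze(Output_channel_xls(mpc_rows, i, :));
        % ...mpc_ch would be written to the sheet for channel i...
    end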

B-61 Figure B-41: Flow chart of the script Excel_results (sub-block E.2).

B-62 Figure B-42: Flow chart of the script Excel_results_MPC (recalled by the script Excel_results).

B-63

Figure B-43: Data extraction from the variable Output_channel_xls.

Excel_time_histories

This script manages the creation of the Excel files containing the time histories of the metrics computed by RSVVP. Figure B-44 shows the algorithm of the script. The first step performed is the selection of the name to be given to the sheet containing the metric time histories for either the single channel or the weighted mean of the multiple channels, as appropriate. The algorithm then creates an Excel file containing the results of the comparison on the whole time interval and/or separate Excel files for each of the user-defined time intervals, depending on the selection made during the run of the program. In the latter

B-64

case, a loop cycles over the number of time intervals defined by the user and creates an Excel file at each iteration. Also, if multiple channels were input, whether the comparison was performed on the whole time interval or on user-defined time interval(s), the algorithm cycles over each of the input channels in order to save them in separate sheets of the same Excel file.

Figure B-44: Flow chart of the script Excel_time_histories (sub-block E.2).

B-65

Folder selection (Block E.3)

This sub-block (Figure B-45) checks whether the user defined a directory in which to save the output files and, if so, moves all the previously created files from the default directory /Results_XX to that folder. In any case, at the end, a message is shown on the screen informing the user that the results have been saved in the selected directory.

Figure B-45: Diagram of sub-block E.3 (Folder selection).

B-66

REFERENCES

[1] MathWorks, "Matlab® User Guide – High Performance Numeric Computation and Visualization Software", The MathWorks Inc., 3 Apple Hill Drive, Natick, MA, USA, 2008.
[2] Society of Automotive Engineers, "SAE J211-1 (R) Instrumentation for Impact Test – Part 1 – Electronic Instrumentation", SAE International, Warrendale, PA, USA, July 1, 2007.
[3] H.E. Ross, D.L. Sicking, R.A. Zimmer, and J.D. Michie, "Recommended Procedures for the Safety Performance Evaluation of Highway Features", National Cooperative Highway Research Program (NCHRP) Report 350, Transportation Research Board, Washington, D.C., 1993.
[4] European Committee for Standardization, "European Standard EN 1317-1 and EN 1317-2: Road Restraint Systems", CEN, 1998.
[5] T.L. Geers, "An Objective Error Measure for the Comparison of Calculated and Measured Transient Response Histories", The Shock and Vibration Bulletin, The Shock and Vibration Information Center, Naval Research Laboratory, Washington, D.C., Bulletin 54, Part 2, pp. 99-107, June 1984.
[6] Comparative Shock Analysis (CSA) of Main Propulsion Unit (MPU), Validation and Shock Approval Plan, SEAWOLF Program: Contract No. N00024-90-C-2901, 9200/SER: 03/039, September 20, 1994.
[7] M.A. Sprague and T.L. Geers, "Spectral elements and field separation for an acoustic fluid subject to cavitation", J. Comput. Phys., Vol. 184, pp. 149-162, 2003.
[8] L.E. Schwer, "Validation Metrics for Response Time Histories: Perspective and Case Studies", Engineering with Computers, Vol. 23, Issue 4, pp. 295-309, 2007.
[9] D.M. Russell, "Error Measures for Comparing Transient Data: Part I: Development of a Comprehensive Error Measure", Proceedings of the 68th Shock and Vibration Symposium, pp. 175-184, 2006.
[10] J. Cohen, P. Cohen, S.G. West and L.S. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed., Lawrence Erlbaum, Hillsdale, NJ, 2003.
[11] S. Basu and A. Haghighi, "Numerical Analysis of Roadside Design (NARD) Vol. III: Validation Procedure Manual", Report No. FHWA-RD-88-213, Federal Highway Administration, Virginia, 1988.
[12] D. Twisk and A. Ritmeijer, "A Software for Demonstrating Validation of Computer Dummy Models Used in the Evaluation of Aircraft Seating Systems", SAE Paper No. 2007-01-3925, Society of Automotive Engineers, Warrendale, PA, 2007.
[13] B. Whang, W.E. Gilbert and S. Zilliacus, Two Visually Meaningful Correlation Measures for Comparing Calculated and Measured Response Histories, Carderock Division, Naval Surface Warfare Center, Bethesda, Maryland, Survivability, Structures and Materials Directorate, Research and Development Report CARDEROCKDIV-U-SSM-67-93/15, September 1993.
[14] H. Theil, Economic Forecasts and Policy, North-Holland Publishing Company, Amsterdam, 1975.

B-67

[15] M.H. Ray, "Repeatability of Full-Scale Crash Tests and a Criteria for Validating Finite Element Simulations", Transportation Research Record, Vol. 1528, pp. 155-160, 1996.
[16] W.L. Oberkampf and M.F. Barone, "Measures of Agreement Between Computation and Experiment: Validation Metrics", Journal of Computational Physics, Vol. 217, No. 1 (Special issue: Uncertainty Quantification in Simulation Science), pp. 5-36, 2006.

B-68

APPENDIX B-1: CODE VERIFICATION

The implementation of the following main features of RSVVP has been verified:
• Sprague & Geers metrics
• Knowles & Gear metrics

In order to verify the correct implementation of the Sprague & Geers metric, a comparison of ideal analytical wave forms differing only in magnitude or phase was performed and the results were compared with the outcomes obtained by Schwer [8] using the same benchmark curves. The baseline analytical wave form was the following decayed sinusoid:

m(t) = e^{-(t-\tau)} \sin 2\pi(t-\tau)    (B1-1)

where the parameter \tau was used to create a phase shift in time, or "time of arrival". Following Schwer's work, two different tests were performed, using as the test function: (a) a wave form with the same time of arrival as (B1-1) but an amplitude 20% greater than the original wave form (magnitude-error test), and (b) a wave form with the same amplitude as (B1-1) but a time of arrival such that the phase shift was about ±20% with respect to the original wave form (phase-error test). The analytical forms used for the magnitude-error test were:

m(t) = e^{-(t-0.14)} \sin 2\pi(t-0.14)
c(t) = 1.2\, e^{-(t-0.14)} \sin 2\pi(t-0.14)    (B1-2)

while the analytical wave forms used for the phase-error test were defined as:

m(t) = e^{-(t-0.14)} \sin 2\pi(t-0.14)
c(t) = e^{-(t-0.04)} \sin 2\pi(t-0.04)    (B1-3)

and

m(t) = e^{-(t-0.14)} \sin 2\pi(t-0.14)
c(t) = e^{-(t-0.24)} \sin 2\pi(t-0.24)    (B1-4)

B-69

In both cases, the sampling period was \Delta t = 0.02 sec and 0 sec \le t \le 2 sec. Figure B-46 shows the graphs of the analytical curves used for the magnitude-error and phase-error tests, respectively.

Figure B-46: Idealized time histories used for the cases of (a) 20% magnitude error and (b) time-of-arrival error.

Because the phase shift between the baseline and the test curves was, in this case, an intended characteristic, the metric was applied without synchronizing the two time histories in the preprocessing phase. The results obtained using the Sprague & Geers metric implemented in RSVVP for the difference in magnitude and the difference in time of arrival are shown in Figure B-47 and Figure B-48, respectively. Table B-4 shows the values of the metric components obtained considering a time interval equal to the total length of the time histories. These values match those obtained by Schwer exactly, confirming the correct implementation of this metric. A short script reproducing the magnitude-error benchmark is given below.
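For reference, the following short Matlab script reproduces the magnitude-error benchmark of equation (B1-2) and evaluates the Sprague & Geers components with the summations given in Appendix B-5. Zeroing the waveform before the time of arrival is an assumption consistent with the notion of a "time of arrival"; it does not affect the expected results M = 0.2 and P = 0.

    % Magnitude-error benchmark check (hedged sketch; expected M = 0.2, P = 0).
    dt  = 0.02;  t = (0:dt:2)';
    tau = 0.14;
    m = (t >= tau) .* exp(-(t - tau)) .* sin(2*pi*(t - tau));  % baseline
    c = 1.2 * m;                          % 20% larger amplitude, same phase
    Imm = sum(m.^2);  Icc = sum(c.^2);  Imc = sum(m.*c);
    M = sqrt(Icc/Imm) - 1;                % magnitude component -> 0.2
    r = min(1, Imc/sqrt(Imm*Icc));        % clamp to avoid a complex acos
    P = acos(r)/pi;                       % phase component -> 0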

B-70

Table B-4: Values of the Sprague & Geers metric components calculated using the RSVVP program.

Metric component | 20% magnitude difference | Phase difference +20% | Phase difference -20%
Magnitude        | 0.2                      | ≈0                    | ≈0
Phase            | 0                        | 0.195                 | 0.195
Combined         | 0.2                      | 0.195                 | 0.195

Figure B-47: Sprague & Geers component metrics vs. time for the magnitude-difference test.

B-71

Figure B-48: Sprague & Geers component metrics vs. time for the phase-difference tests: (a) +20% and (b) -20%.

B-72

APPENDIX B-2: COMPILING RSVVP

The Matlab® code of RSVVP can be compiled as a standalone executable. This allows users who do not have Matlab® installed on their machines to run RSVVP. The standalone application of RSVVP is compiled using Matlab® Compiler, a Matlab® toolbox which requires a specific license. In order to create an executable version of RSVVP, set the current directory of Matlab® to the folder where the RSVVP files are located and launch the following command:

mcc -m -v RSVVP

Note 1: In order to compile the standalone application, it is first necessary to indicate to Matlab® which compiler to use by launching the following command:

mbuild -setup

The 32-bit Windows version of Matlab® has a built-in C compiler called 'Lcc-win32'. If other third-party C/C++ compilers are installed on the machine, any of them may be selected instead.

Note 2: Previous versions of Matlab® Compiler allowed the user to create a standalone application only from functions; it was not possible to compile scripts directly. As the principal file of RSVVP (Main.m) is a script, in order to compile the first versions of the program the code was originally invoked from a function called RSVVP. While this workaround made it possible to compile RSVVP, a side effect was that a few local variables created in specific functions of the code had to be saved to the Matlab® base workspace. This was necessary to permit sharing of these specific local variables among different functions. Although the latest versions of Matlab® Compiler can now compile a script directly, making the conversion of these local variables into global variables no longer necessary, such conversions may still appear in some parts of the code.

B-73

APPENDIX B-3: Types of Variables Used in the Code

The main types of variables available in Matlab® and used in the implementation of the RSVVP code are:
• matrices and arrays (floating-point/integer data, characters and strings),
• structures and
• cell arrays.

Matrices and arrays are used to store both numbers and text characters. These data types also include scalar numbers and single characters, which are treated as 1x1 matrices. Numbers can be stored as either floating-point or integer data. Structures and cell arrays provide a way to store dissimilar types of data in the same array. A Matlab® structure is a data type that provides the means to store selected data together in a single entity. A structure consists mainly of data containers, called fields, and each of these fields stores an array of some Matlab® data type. Figure B-49 shows an example of a structure variable, s, which has three fields: a, b, and c.

Figure B-49: Example of a Matlab structure variable [1].

A Matlab® cell array is a collection of containers called cells in which different types of data can be stored. As an example, Figure B-50 shows a 2-by-3 cell array in which the cells in row one hold an array of unsigned integers, an array of strings, and an array of complex

B-74

numbers, while row two holds three other types of arrays, the last being a second cell array nested in the outer one.

Figure B-50: Example of a Matlab cell array variable [1].

Structure-type variables are used to store, in an organized manner, the various options selected by the user in the graphical interfaces, while cell-type variables are used to conveniently store in a single variable all the data vectors (or matrices) which have different dimensions and would otherwise have required separate array (or matrix) variables. The short example below illustrates both container types.
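A few lines suffice to illustrate the two container types; the field names a, b and c mirror Figure B-49, and the cell contents mirror the description of Figure B-50.

    % Structure: named fields holding arbitrary arrays.
    s = struct('a', [1 2 3], 'b', 'hello', 'c', magic(3));
    x = s.a;                         % dot syntax reads a field

    % Cell array: indexed cells holding dissimilar data.
    C = cell(2, 3);
    C{1,1} = uint8([1 2 3]);         % unsigned integers
    C{1,2} = ['one'; 'two'];         % character array (strings)
    C{1,3} = [1+2i, 3-4i];           % complex numbers
    C{2,3} = {cell(1,2)};            % a cell array nested inside a cell
    v = C{1,3};                      % braces retrieve the cell contents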

B-75

APPENDIX B-4: Preprocessing Algorithms

This appendix describes the general algorithms used to perform the following preprocessing operations:
• filtering,
• shift/drift correction,
• resampling & trimming and
• synchronization.

Filtering

The filter process is implemented in the function sae_filter, whose algorithm is shown in Figure B-51. The function receives as input the following three variables: (i) CFC, (ii) T and (iii) X. CFC and T are scalar variables containing respectively the value of the filter class and the sampling period of the input curves, while X is a vector containing the data points of the curve. The algorithm assumes that the sampling period is constant; hence, the time vector is not needed. Before the data are filtered, head and tail vectors are created by repeating the first and last values of the data vector, respectively; these vectors are then appended at the beginning and at the end of the vector containing the original data. The modified vector is then filtered by applying equation (B1) a first time forward and a second time backward in order to obtain a two-pass filter. The filter coefficients are calculated using the formulas in equations (B2) through (B6). Once the data are filtered, the added head and tail are trimmed and the filtered vector Y is returned as the function output. A sketch of this procedure follows.
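The sketch below (to be saved as sae_filter_sketch.m) follows the structure just described: padding, two passes, and trimming. The coefficient formulas are taken from the commonly published SAE J211 Appendix C form and are assumed here; they may differ in detail from equations (B2) through (B6), and the padding length is likewise an assumption.

    function Y = sae_filter_sketch(CFC, T, X)
    % Two-pass low-pass filter in the style of SAE J211 (hedged sketch).
    % CFC: channel frequency class; T: sampling period; X: data vector.
        wd  = 2*pi*CFC*2.0775;                 % design frequency (J211 form)
        wa  = tan(wd*T/2);                     % warped analog frequency
        den = 1 + sqrt(2)*wa + wa^2;
        a0 = wa^2/den;  a1 = 2*a0;  a2 = a0;
        b1 = -2*(wa^2 - 1)/den;
        b2 = (-1 + sqrt(2)*wa - wa^2)/den;
        pad = 50;                              % head/tail length (assumption)
        Xp = [repmat(X(1), pad, 1); X(:); repmat(X(end), pad, 1)];
        Y1 = onepass(Xp, a0, a1, a2, b1, b2);                  % forward pass
        Y2 = flipud(onepass(flipud(Y1), a0, a1, a2, b1, b2));  % backward pass
        Y  = Y2(pad+1:end-pad);                % trim the added head and tail
    end

    function Y = onepass(X, a0, a1, a2, b1, b2)
    % Second-order recursive filter, single pass.
        Y = X;
        for i = 3:numel(X)
            Y(i) = a0*X(i) + a1*X(i-1) + a2*X(i-2) + b1*Y(i-1) + b2*Y(i-2);
        end
    end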

B-76

Figure B-51: Algorithm of the SAE filtering.

Shift/drift

The shift and drift corrections are implemented in the script Shift_drift. The main steps followed to perform this preprocessing task are shown in the algorithm in Figure B-52. The

B-77

steps described in the algorithm are performed on the true curve, the test curve, or both, according to the user selection.

Figure B-52: Main algorithm of the script Shift_drift.

The shift values at the beginning and at the end of the curve are computed by the user-defined functions shift_value and drift_value, respectively (Figure B-53). The former function evaluates the vertical shift at the beginning of the curve, while the latter evaluates the drift at the end of the curve. The algorithms of the two functions are very similar: the shift values are computed as the mean of the initial (or final) portion of the data vector, up to (or from) the point where the curve reaches 5 percent of its peak value. Also, if the standard deviation of the sub-vector is greater than 0.1, the algorithm iteratively decreases its size until the standard deviation drops below this value. A sketch of the shift_value logic is given after Figure B-53.

B-78

Figure B-53: Algorithms of the user-defined functions shift_value (left) and drift_value (right).
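Under the description above, shift_value might look like the following function file; the 5-percent and 0.1 thresholds come from the text, while the exact window handling is an assumption.

    function s = shift_value_sketch(x)
    % Hedged sketch of the initial-shift estimate: average the leading
    % samples up to the point where the signal first reaches 5% of its
    % peak, shrinking the window until its standard deviation is <= 0.1.
        peak = max(abs(x));
        i = 1;
        while i < numel(x) && abs(x(i)) < 0.05*peak
            i = i + 1;               % walk forward to 5% of the peak
        end
        while i > 1 && std(x(1:i)) > 0.1
            i = i - 1;               % shrink window until std <= 0.1
        end
        s = mean(x(1:i));            % vertical shift at the start of the curve
    end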

B-79

Resampling & trimming

The resampling and trimming operations are implemented simultaneously in the script Resampling_trimming, whose algorithm is shown in Figure B-54. The curves are resampled on a trimmed time interval bounded by the maximum of the initial time values of the two curves and the minimum of their final time values. The resampling is performed using a linear polynomial whose coefficients are evaluated using the predefined Matlab® function interp1. The sampling period (i.e., the minimum time between two consecutive data points) is defined as the minimum of the sampling periods of the two curves and, in any case, it cannot be smaller than a minimum limit value (defined by the user at the beginning of the calculation). The minimum sampling period is determined by considering the first input channel; the same period is then used to resample all the other channels. Finally, both the true and test curves are interpolated on the trimmed time interval using their respective polynomial coefficients through the Matlab® function ppval. In the case of multiple channels, the same procedure is repeated for each of them. The sketch below condenses these steps.
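Condensing those steps, a resampling sketch could read as follows; the example signals and the lower bound on the sampling period are invented, and interp1 stands in for the interpolating-polynomial machinery described above.

    % Resample two curves onto a common trimmed, uniform time base (sketch).
    t_true = (0:0.001:1.00)';    y_true = sin(2*pi*t_true);     % example data
    t_test = (0.01:0.002:0.95)'; y_test = 0.9*sin(2*pi*t_test);
    t0 = max(t_true(1),  t_test(1));      % latest common start
    t1 = min(t_true(end), t_test(end));   % earliest common end
    dt = max(min(min(diff(t_true)), min(diff(t_test))), 1e-4); % floor (assumed)
    t  = (t0:dt:t1)';
    true_rs = interp1(t_true, y_true, t, 'linear');  % linear interpolation
    test_rs = interp1(t_test, y_test, t, 'linear');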

B-80 Synchronization The automatic synchronization of the curves is implemented in the script Curve_synchronizing whose algorithm is shown in Figure B-55. This script calculates the shift value which minimizes a target function, which can be either the area between the two curves (area of the residuals method) or the square error (lest square error method). The two target functions are recalled by user-defined Matlab® functions, respectively area_res and rse. In both the two functions, the couple of curves can be shifted respect to each other by an arbitrary value s. The minimization process is performed using a Matlab® function (fminsearch) which iteratively calculates the values of the selected target function (area of residual or square-root error) varying the value of the shift variable in order to find the optimal solution. Eventually, the algorithm shifts the two input curves by the optimal value obtained from the minimization process. Figure B-55: Algorithm of the script Curve_synchronizing. Both the user-defined functions are-res and sre require as input only the value by which to shift the two curves before calculating respectively the area between them or the square-root error. The algorithms of these two functions are shown in Figure B-56. For either of these two

B-80

Synchronization

The automatic synchronization of the curves is implemented in the script Curve_synchronizing, whose algorithm is shown in Figure B-55. This script calculates the shift value which minimizes a target function, which can be either the area between the two curves (area-of-residuals method) or the square error (least-square-error method). The two target functions are implemented by the user-defined Matlab® functions area_res and sre, respectively. In both functions, the pair of curves can be shifted with respect to each other by an arbitrary value s. The minimization is performed using the Matlab® function fminsearch, which iteratively evaluates the selected target function (area of residuals or square-root error), varying the value of the shift variable in order to find the optimal solution. Finally, the algorithm shifts the two input curves by the optimal value obtained from the minimization process (a compact sketch of this minimization is given after Figure B-57).

Figure B-55: Algorithm of the script Curve_synchronizing.

Both user-defined functions area_res and sre require as input only the value by which to shift the two curves before calculating, respectively, the area between them or the square-root error. The algorithms of these two functions are shown in Figure B-56. For either of these two

B-81

functions, the shifting of the two curves is performed by invoking the user-defined function shift, whose algorithm is shown in Figure B-57.

Figure B-56: Algorithms of the user-defined functions area_res (left) and sre (right).

The function shift has only one input: the value by which to shift the two curves with respect to each other. The data points of the two curves are read directly from the Matlab® global workspace; this allows the shift function to be invoked from within the local workspaces of area_res and sre, where the data points of the input curves would not otherwise be available. Based on the sign of the input shift value, the algorithm of the function shift creates a shifted time vector for either the test or the true curve and a vector trimmed at the end for the other curve. In particular, a positive shift value produces a time vector for the true curve which starts at the shift value and a time vector for the test curve which is trimmed at the end by the same shift value, so that the two vectors have the same length; the opposite happens for a negative shift value. Once the appropriate time vectors have been defined, the shifted curves are obtained by computing the corresponding interpolating polynomials on these time points. Eventually, a

B-82

common time vector is used for both the interpolated true and test curves, which starts at time zero and is trimmed at the end by a value equal to the shift.

Figure B-57: Algorithm of the user-defined function shift.
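Putting the pieces together, the minimization can be sketched in a few lines; the example curves are synthetic, and the interpolation-based shift below stands in for the function shift described above.

    % fminsearch-based synchronization (hedged sketch with synthetic curves).
    t = (0:0.01:2)';
    m = exp(-t).*sin(2*pi*t);                           % "true" curve
    c = (t >= 0.1).*exp(-(t-0.1)).*sin(2*pi*(t-0.1));   % "test", arrives later
    shifted  = @(s) interp1(t - s, c, t, 'linear', 0);  % shift c by s seconds
    area_res = @(s) trapz(t, abs(m - shifted(s)));      % area between curves
    s_opt  = fminsearch(area_res, 0);                   % optimal shift (~0.1 s)
    c_sync = shifted(s_opt);                            % synchronized test curve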

B-84  2mI mm (B5-8)  2cI cc (B5-9)  mcI mc (B5-10) m, c, time        NO     M,P,C vectors Trim head of vectors  (10% of total time)  YES    YES      NO Figure B-58. Algorithm of the Sprague & Geers metric implemented in RSVVP.

B-85 ANOVA The Analysis of Variance metrics are based on the residuals between the measured and the computed curves. In particular, the residuals are normalized to the peak value of the measured curve. The algorithm of the ANOVA metrics (mean, standard deviation, and t-test of the residuals) is shown in Figure B-59. m, c, time        YES No   Avg * 100  Std * 100  Avg * 100  Std * 100  Trim head of vectors Avg, Std and T (10% of total time)  Avg, Std, T  vectors  Figure B-59. Algorithm of the ANOVA metrics implemented in RSVVP.

C-1 APPENDIX C : BENCHMARK CASE EXAMPLE FORMS The following sections include the filled-out forms and reports corresponding to the benchmark cases described in Chapter 6. The blank forms with instructions are included in Appendix E. The instructions are omitted from the forms in this Appendix to conserve space and reduce repetition.

C-2 APPENDIX C1: TEST CASE 1: PICKUP TRUCK STRIKING A STRONG-POST W- BEAM GUARDRAIL WITHOUT A CURB VALIDATION/VERIFICATION REPORT FOR A _______________Report 350 2000P Pickup Truck_________________________ (Report 350 or MASH or EN1317 Vehicle Type) Striking a _______Steel deformable barrier G4(1S)_with Wood Blockouts (roadside hardware type and name) Report Date: _____11-30-2009________________________________________________ Type of Report (check one) Verification (known numerical solution compared to new numerical solution) or Validation (full-scale crash test compared to a numerical solution). General Information Known Solution Analysis Solution Performing Organization TTI WPI/Battelle Test/Run Number: TTI 405421-1 TTI-405421-1_SIM-2002_01 Vehicle: 1989 Chevrolet 2500 WPI modified (NCAC C2500R) Impact Conditions Vehicle Mass: 2000 kg 2000 kg Speed: 101.5 km/hr 101.5 km/hr Angle: 25.5 degrees 25.5 degrees Impact Point: Upstream of post 12 Upstream of post 12 Composite Validation/Verification Score List the Report 350/MASH or EN1317 Test Number T3-11 Part I Did all solution verification criteria in Table 1 pass? Yes Part II Do all the time history evaluation scores from Table C1-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If all the values in Table C1-2 did not pass, did the weighted procedure shown in Table C1-3 result in an acceptable comparison. If all the criteria in Table C1-2 pass, enter “yes.” If all the criteria in Table C1-2 did not pass but Table C1-3 resulted in a passing score, enter “yes.” Yes Part III All the criteria in Table C1-4 (Test-PIRT) passed? Yes Are the results of Steps I through III all affirmative (i.e., YES)? If all three steps result in a “YES” answer, the comparison can be considered validated or verified. If one of the steps results in a negative response, the result cannot be considered validated or verified. Yes The analysis solution (check one) is is NOT verified/validated against the known solution.

C-3 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)? Longitudinal barrier or transition Terminal or crash cushion Breakaway support or work zone traffic control device Truck-mounted attenuator Other hardware: _____________________________________ 2. What test guidelines were used to perform the full-scale crash test (check one)? NCHRP Report 350 MASH EN1317 Other: ______________________________________________ 3. Indicate the test level and number being evaluated (fill in the blank). ___3-11_____ 4. Indicate the vehicle type appropriate for the test level and number indicated in item 3 according to the testing guidelines indicated in item 2. NCHRP Report 350/MASH 700C 820C 1100C 2000P 2270P 8000S 10000S 36000V 36000T EN1317 Car (900 kg) Car (1300 kg) Car (1500 kg) Rigid HGV (10 ton) Rigid HGV (16 ton) Rigid HGV (30 ton) Bus (13 ton) Articulated HGV (38 ton)

C-4 PART II: ANALYSIS SOLUTION VERIFICATION Table C1-1. Analysis Solution Verification Table. Verification Evaluation Criteria Change (%) Pass? Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. 1.3 YES Hourglass Energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. 0 YES Hourglass Energy of the analysis solution at the end of the run is less than ten percent of the total internal energy at the end of the run. 0 YES The part/material with the highest amount of hourglass energy at the end of the run is less than ten percent of the total internal energy of the part/material at the end of the run. 0 YES Mass added to the total model is less than five percent of the total model mass at the beginning of the run. 0 YES The part/material with the most mass added had less than 10 percent of its initial mass added. 0 Yes The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. 0 Yes There are no shooting nodes in the solution? Yes Yes There are no solid elements with negative volumes? Yes Yes The Analysis Solution (check one) passes does NOT pass all the criteria in Table C1-1 with without exceptions as noted.
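These criteria lend themselves to automated checking. The sketch below is a minimal example, assuming the global energy and mass histories have already been extracted from the solver output (the argument names are placeholders, not an established API); it evaluates the main energy and added-mass checks of Table C1-1.

```python
def verify_solution(total_energy, hourglass_energy, internal_energy,
                    model_mass, added_mass):
    """Energy and added-mass checks from Table C1-1. Each argument is a
    time-history sequence ordered from the start to the end of the run,
    except model_mass, which is the initial total model mass."""
    return {
        "total energy varies less than 10%":
            abs(total_energy[-1] - total_energy[0]) / total_energy[0] <= 0.10,
        "final hourglass energy < 5% of initial total energy":
            hourglass_energy[-1] < 0.05 * total_energy[0],
        "final hourglass energy < 10% of final internal energy":
            hourglass_energy[-1] < 0.10 * internal_energy[-1],
        "added mass < 5% of initial model mass":
            added_mass[-1] < 0.05 * model_mass,
    }
```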

C-5 PART III: TIME HISTORY EVALUATION TABLE

Table C1-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option).
Evaluation criteria over the time interval [0 sec; 0.7 sec].

Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. (Shift and drift preprocessing options are listed as true curve/test curve.)

Channel | Filter Option | Sync. Option | Shift | Drift | M | P | Pass?
X acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 21.5 | 33.3 | Y
Y acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 43.9 | 35.7 | N
Z acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 21.1 | 43.0 | N
Roll rate | CFC 180 | Min. area of residuals | N/N | N/N | 35.3 | 32.7 | Y
Pitch rate | CFC 180 | Min. area of residuals | N/N | N/N | 13.3 | 48.0 | N
Yaw rate | CFC 180 | Min. area of residuals | N/N | N/N | 11.7 | 8.7 | Y

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).

Channel | Mean Residual | Standard Deviation of Residuals | Pass?
X acceleration/Peak | 0.02 | 0.34 | Y
Y acceleration/Peak | 0.05 | 0.27 | Y
Z acceleration/Peak | 0.02 | 0.32 | Y
Roll rate | 0.02 | 0.27 | Y
Pitch rate | 0.05 | 0.36 | N
Yaw rate | 0.04 | 0.12 | Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C1-2.

C-6 Table C1-3(a). Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multi-channel option using Area II method).
Evaluation criteria over the time interval [0 sec; 0.7 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-channel weights, Area (II) method (bar chart of the weights omitted):
X Channel – 0.255116
Y Channel – 0.210572
Z Channel – 0.034312
Yaw Channel – 0.392648
Roll Channel – 0.06581
Pitch Channel – 0.041542

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 22.9 | P = 25 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = 0.03 | Standard Deviation of Residuals = 0.24 | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C1-3.

C-7 Table C1-3(b). Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multi-channel option using Inertia method).
Evaluation criteria over the time interval [0 sec; 0.7 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-channel weights, Inertia method (bar chart of the weights omitted):
X Channel – 0.296345
Y Channel – 0.227346
Z Channel – 0.079612
Yaw Channel – 0.242396
Roll Channel – 0.030312
Pitch Channel – 0.123988

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 23.6 | P = 30.4 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = 0.04 | Standard Deviation of Residuals = 0.27 | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C1-3.
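Tables C1-3(a) and C1-3(b) report composite metrics formed from the single-channel scores of Table C1-2 and the channel weights listed above. A minimal sketch of a weight-averaged composite follows; the helper name is illustrative and the weighted-average form is an assumption, although it is consistent with the tabulated values to rounding.

```python
def weighted_composite(channel_scores, weights):
    """Weighted composite of per-channel scores.
    channel_scores: dict of channel -> score (e.g., the S&G magnitude M)
    weights:        dict of channel -> weight (Area II or Inertia method)"""
    return sum(weights[ch] * score for ch, score in channel_scores.items())

# Example with the Area II weights of Table C1-3(a) and the single-channel
# magnitudes of Table C1-2:
area2 = {"X": 0.255116, "Y": 0.210572, "Z": 0.034312,
         "Yaw": 0.392648, "Roll": 0.06581, "Pitch": 0.041542}
M = {"X": 21.5, "Y": 43.9, "Z": 21.1, "Yaw": 11.7, "Roll": 35.3, "Pitch": 13.3}
print(round(weighted_composite(M, area2), 1))  # 22.9, matching Table C1-3(a)
```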

C-8 PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table C1-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38)
B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Applicable tests: 60, 61, 70, 71, 80, 81)
C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53)

Occupant Risk
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Applicable tests: all)
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) (Applicable tests: 70, 71)
F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Applicable tests: all except those listed in criterion G)
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Applicable tests: 12, 22; for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits:
   Longitudinal and lateral: preferred 9 m/s, maximum 12 m/s (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81)
   Longitudinal: preferred 3 m/s, maximum 5 m/s (applicable tests: 60, 61, 70, 71)
I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81)

Vehicle Trajectory
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. (Applicable tests: 11, 21, 35, 37, 38, 39)
M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39)
N. Vehicle trajectory behind the test article is acceptable. (Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81)

C-9 Table C1-5. Roadside Safety Phenomena Importance Ranking Table.

Structural Adequacy
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) | Yes | Yes | | YES
A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. | 1.0 m | 0.985 m | 1.5% / 0.02 m | YES
A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. | 0.691 s | 0.690 s | 0.1% | YES
A4. Number of broken or significantly bent posts is less than 20 percent. | 3 | 3 | 0 | YES
A5. Did the rail element rupture or tear? (Answer Yes or No) | No | No | | YES
A6. Were there failures of connector elements? (Answer Yes or No) | Yes | Yes | | YES
A7. Was there significant snagging between the vehicle wheels and barrier elements? (Answer Yes or No) | Yes | Yes | | YES
A8. Was there significant snagging between vehicle body components and barrier elements? (Answer Yes or No) | No | No | | YES

C-10 Table C1-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Occupant Risk
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) | Pass | Pass | | YES
F1. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Yes or No) | Pass | Pass | | YES
F2. Maximum roll of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | -8.7 | -10.1 | 16% / 1.4 deg | YES
F3. Maximum pitch of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | -3.3 | -4.3 | 30% / 1.0 deg | YES
F4. Maximum yaw of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 41 | 42.8 | 4% / 1.8 deg | YES
L1. Occupant impact velocities: relative difference is less than 20 percent or absolute difference is less than 2 m/s.
• Longitudinal OIV (m/s) | 5.4 | 4.7 | 13% / 0.7 m/s | YES
• Lateral OIV (m/s) | 4.4 | 5.0 | 13.6% / 0.6 m/s | YES
• THIV (m/s) | 6.3 | 6.4 | 1.6% / 0.1 m/s | YES
L2. Occupant accelerations: relative difference is less than 20 percent or absolute difference is less than 4 g's.
• Longitudinal ORA | 7.9 | 8.9 | 12.7% / 1.0 G | YES
• Lateral ORA | 8.4 | 10.0 | 19.0% / 1.6 G | YES
• PHD | 12.1 | 13.2 | 9.1% / 1.1 G | YES
• ASI | 0.68 | 0.72 | 5.9% / 0.04 | YES

C-11 Table C1-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Vehicle Trajectory
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. | 15.5° (61%) | 17.3° (68%) | | YES
M2. Exit angle at loss of contact: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 15.5° | 17.3° | 11.6% / 1.8 deg | YES
M3. Exit velocity at loss of contact: relative difference is less than 20 percent or absolute difference is less than 10 m/s. | 55 km/h | 62 km/h | 12.7% / 7.0 km/hr | YES
M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No) | Yes | N.M.* | |

*N.M. – Not Modeled

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C1-5, with exceptions as noted / without exceptions.
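The PIRT comparisons in Table C1-5 (and the corresponding tables in the other appendices) apply one recurring acceptance rule: the known and analysis results agree when either the relative difference is below 20 percent or the absolute difference is below a criterion-specific threshold (0.15 m for deflection, 5 degrees for angles, 2 m/s for occupant impact velocities, 4 g's for occupant accelerations). A minimal sketch, with illustrative names:

```python
def pirt_agrees(known, analysis, rel_limit=0.20, abs_limit=None):
    """True when the results agree under the relative-or-absolute rule."""
    abs_diff = abs(analysis - known)
    rel_diff = abs_diff / abs(known) if known else float("inf")
    return rel_diff < rel_limit or (abs_limit is not None
                                    and abs_diff < abs_limit)

# Maximum roll from Table C1-5: -8.7 vs -10.1 degrees.
# 16% relative and 1.4 degrees absolute -> the results agree.
print(pirt_agrees(-8.7, -10.1, abs_limit=5.0))  # True
```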

C-12 Plots of the time histories used to evaluate the comparison metrics

[Figure: paired plots (a) and (b). S&G mag. = 21.5 √; S&G phase = 33.3 √; Mean = 0.02 √; St.D. = 0.34 √]
Figure C1-1. X-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 43.9 x; S&G phase = 35.7 √; Mean = 0.05 √; St.D. = 0.27 √]
Figure C1-2. Y-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

C-13
[Figure: paired plots (a) and (b). S&G mag. = 21.1 √; S&G phase = 43 x; Mean = 0.02 √; St.D. = 0.32 √]
Figure C1-3. Z-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 35.3 √; S&G phase = 32.7 √; Mean = 0.02 √; St.D. = 0.27 √]
Figure C1-4. Roll-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.

C-14
[Figure: paired plots (a) and (b). S&G mag. = 13.3 √; S&G phase = 48 x; Mean = 0.05 √; St.D. = 0.36 x]
Figure C1-5. Pitch-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 11.7 √; S&G phase = 8.7 √; Mean = 0.04 √; St.D. = 0.12 √]
Figure C1-6. Yaw-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.



C-17 APPENDIX C2: PICKUP TRUCK STRIKING A STRONG-POST W-BEAM GUARDRAIL IN COMBINATION WITH AN AASHTO TYPE B CURB

VALIDATION/VERIFICATION REPORT FOR A
_______________Report 350 2000P Pickup Truck_______________________________
(Report 350 or MASH or EN1317 Vehicle Type)
Striking a Steel deformable barrier G4(1S) with Wood Blockouts and with an AASHTO Type B curb positioned underneath the barrier "flush" with the w-beam face
(roadside hardware type and name)
Report Date: _______11-30-2009_______
Type of Report (check one)
Verification (known numerical solution compared to new numerical solution) or
Validation (full-scale crash test compared to a numerical solution).

General Information | Known Solution | Analysis Solution
Performing Organization | ETECH | WPI
Test/Run Number: | 52-2556-001 (6/5/2003) | B-0m-85-FEA (7/8/2002)
Vehicle: | 1998 GMC 3/4-ton | WPI modified (NCAC C2500R)

Impact Conditions | Known Solution | Analysis Solution
Vehicle Mass: | 1,993 kg | 2,000 kg
Speed: | 85.6 km/hr | 85.0 km/hr
Angle: | 25 degrees | 25 degrees
Impact Point: | 0.6 m upstream of post 14 | 0.49 m upstream of post 14

Composite Validation/Verification Score
List the Report 350/MASH or EN1317 Test Number:
Step I: Did all solution verification criteria in Table C2-1 pass? | Yes
Step II: Do all the time history evaluation scores from Table C2-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If all the values in Table C2-2 did not pass, did the weighted procedure shown in Table C2-3 result in an acceptable comparison? If all the criteria in Table C2-2 pass, enter "yes." If all the criteria in Table C2-2 did not pass but Table C2-3 resulted in a passing score, enter "yes." | Yes
Step III: Did all the criteria in Table C2-5 pass? | No
Are the results of Steps I through III all affirmative (i.e., YES)? If all three steps result in a "YES" answer, the comparison can be considered validated or verified. If one of the steps results in a negative response, the result cannot be considered validated or verified. | No

The analysis solution (check one) is / is NOT verified/validated against the known solution.

C-18 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)? Longitudinal barrier or transition Terminal or crash cushion Breakaway support or work zone traffic control device Truck-mounted attenuator Other hardware: AASHTO Type B curb underneath and flush with guardrail face 2. What test guidelines were used to perform the full-scale crash test (check one)? NCHRP Report 350 MASH EN1317 Other: ______________________________________________ 3. Indicate the test level and number being evaluated (fill in the blank). ___3-11______ 4. Indicate the vehicle type appropriate for the test level and number indicated in item 3 according to the testing guidelines indicated in item 2. NCHRP Report 350/MASH 700C 820C 1100C 2000P 2270P 8000S 10000S 36000V 36000T EN1317 Car (900 kg) Car (1300 kg) Car (1500 kg) Rigid HGV (10 ton) Rigid HGV (16 ton) Rigid HGV (30 ton) Bus (13 ton) Articulated HGV (38 ton)

C-19 PART II: ANALYSIS SOLUTION VERIFICATION Table C2-1. Analysis Solution Verification Table. Verification Evaluation Criteria Change (%) Pass? Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. 0.3 Yes Hourglass Energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. 2.1 Yes Hourglass Energy of the analysis solution at the end of the run is less than ten percent of the total internal energy at the end of the run. 5.5 Yes The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. 0.8 Yes Mass added to the total model is less than five percent of the total model mass at the beginning of the run. 3.4e-4 Yes The part/material with the most mass added had less than 10 percent of its initial mass added. 0.3 Yes The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. 0.006 Yes There are no shooting nodes in the solution? Yes Yes There are no solid elements with negative volumes? Yes Yes The Analysis Solution (check one) passes does NOT pass all the criteria in Table C2-1 with without exceptions as noted.

C-20 PART III: TIME HISTORY EVALUATION TABLE

Table C2-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option).
Evaluation criteria over the time interval [0 sec; 1.22 sec].

Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. (Shift and drift preprocessing options are listed as true curve/test curve.)

Channel | Filter Option | Sync. Option | Shift | Drift | M | P | Pass?
X acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 1.2 | 41.6 | N
Y acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 5.7 | 43.1 | N
Z acceleration | CFC 180 | Min. area of residuals | Y/N | Y/N | 0.5 | 48.6 | N
Roll rate | CFC 180 | Min. area of residuals | N/N | N/N | 1.5 | 44.5 | N
Pitch rate | CFC 180 | Min. area of residuals | N/N | N/N | 9.7 | 25.2 | Y
Yaw rate | CFC 180 | Min. area of residuals | N/N | N/N | 9.6 | 10.4 | Y

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).

Channel | Mean Residual | Standard Deviation of Residuals | Pass?
X acceleration/Peak | 0.00 | 0.17 | Y
Y acceleration/Peak | 0.02 | 0.17 | Y
Z acceleration/Peak | 0.01 | 0.23 | Y
Roll rate | 0.02 | 0.51 | N
Pitch rate | 0.07 | 0.36 | N
Yaw rate | 0.06 | 0.14 | Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C2-2.

C-21 Table C2-3(a). Roadside Safety Validation Metrics Rating Table for the G4(1S) with curb model (multi-channel option using Area II method).
Evaluation criteria over the time interval [0 sec; 1.22 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-channel weights, Area II method (bar chart of the weights omitted):
X Channel – 0.268011
Y Channel – 0.145893
Z Channel – 0.086096
Yaw Channel – 0.446323
Roll Channel – 0.028886
Pitch Channel – 0.02479

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 5.7 | P = 28.2 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = 0.03 | Standard Deviation of Residuals = 0.18 | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C2-3.

C-22 Table C2-3(b). Roadside Safety Validation Metrics Rating Table for the G4(1S) with curb model (multi-channel option using Inertia method).
Evaluation criteria over the time interval [0 sec; 1.22 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-channel weights, Inertia method (bar chart of the weights omitted):
X Channel – 0.119486
Y Channel – 0.129217
Z Channel – 0.04426
Yaw Channel – 0.477606
Roll Channel – 0.034208
Pitch Channel – 0.195224

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 7.3 | P = 24.2 | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = 0.02 | Standard Deviation of Residuals = 0.21 | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C2-3.

C-23 PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table C2-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38)
B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Applicable tests: 60, 61, 70, 71, 80, 81)
C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53)

Occupant Risk
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Applicable tests: all)
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) (Applicable tests: 70, 71)
F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Applicable tests: all except those listed in criterion G)
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Applicable tests: 12, 22; for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits:
   Longitudinal and lateral: preferred 9 m/s, maximum 12 m/s (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81)
   Longitudinal: preferred 3 m/s, maximum 5 m/s (applicable tests: 60, 61, 70, 71)
I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81)

Vehicle Trajectory
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. (Applicable tests: 11, 21, 35, 37, 38, 39)
M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39)
N. Vehicle trajectory behind the test article is acceptable. (Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81)

C-24 Table C2-5(a). Roadside Safety Phenomena Importance Ranking Table (Structural Adequacy).

Structural Adequacy
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) | Yes | Yes | | YES
A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. | 0.5 m | 0.6 m | 20% / 0.1 m | YES
A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. | 6.32 m | 6.19 m | 2.1% / 0.13 m | YES
A4. The relative difference in the number of broken or significantly bent posts is less than 20 percent. | 2 | 2 | | YES
A5. The rail element did not rupture or fail. (Answer Yes or No) | Yes | Yes | | YES
A6. There were no failures of connector elements. (Answer Yes or No) | No | No | | YES
A6a. Number of detached posts from rail | 2 | 2 | | YES
A6b. Number of detached blockouts from posts | 1 | 1 | | YES

C-25 Table C2-5(b). Roadside Safety Phenomena Importance Ranking Table (Occupant Risk).

Occupant Risk
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) | Pass | Pass | | YES
F1. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Yes or No) | Pass | Pass | | YES
F2. Maximum roll of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 6.5 | 4.9 | 24.6% / 1.6 deg | YES
F3. Maximum pitch of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 10.2 | 12.8 | 25.5% / 2.6 deg | YES
F4. Maximum yaw of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 42.0 | 43.2 | 2.9% / 1.2 deg | YES
L1. Occupant impact velocities: relative difference is less than 20 percent or absolute difference is less than 2 m/s.
• Longitudinal OIV (m/s) | 4.9 | 4.2 | 14.3% / 0.7 m/s | YES
• Lateral OIV (m/s) | 4.7 | 4.1 | 12.8% / 0.6 m/s | YES
• THIV (m/s) | 24.1 | 26.8 | 11.2% / 2.7 m/s | YES
L2. Occupant accelerations: relative difference is less than 20 percent or absolute difference is less than 4 g's.
• Longitudinal ORA | 8.1 | 8.1 | 0.0 | YES
• Lateral ORA | 6.3 | 10.6 | 68.3% / 4.3 G | NO
• ASI | 0.7 | 0.67 | 4.3% / 0.03 | YES

C-26 Table C2-5(c). Roadside Safety Phenomena Importance Ranking Table (Vehicle Trajectory).

Vehicle Trajectory
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. | Yes (14 deg) | No (16 deg) | | NO***
M2. Exit angle at loss of contact: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 14 deg | 16 deg | 14% / 2 deg | YES
M3. Exit velocity at loss of contact: relative difference is less than 20 percent or absolute difference is less than 10 m/s. | 41.3 km/hr* | 56.7 km/hr | 37.3% / 15.4 km/hr | YES
(same criterion, reprocessed velocity) | 53.1 km/hr** | 56.7 km/hr | 6.8% / 3.6 km/hr | YES
M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No) | Yes | N.M. | |
M5. One or more tires separated from the vehicle. (Answer Yes or No) | No | N.M. | |

* Velocity reported in the test report, computed by integrating the raw x-acceleration channel.
** Velocity computed by integrating the x-channel data processed by RSVVP (e.g., with drift and shift).
*** The exit angle was one degree below the 15 degree limit for the test and one degree over for the simulation. Both the test and the simulation, therefore, were essentially at the limit, so this should be considered an agreeing result.

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C2-5, with exceptions as noted / without exceptions.

C-27 Plots of the time histories used to evaluate the comparison metrics

A. Accelerations

[Figure: paired plots (a) and (b). S&G mag. = 1.2 √; S&G phase = 41.6 x; Mean = 0.00 √; St.D. = 0.17 √]
Figure C2-1. X-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 5.7 √; S&G phase = 43.1 x; Mean = 0.02 √; St.D. = 0.17 √]
Figure C2-2. Y-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

C-28
[Figure: paired plots (a) and (b). S&G mag. = 0.5 √; S&G phase = 48.6 x; Mean = 0.01 √; St.D. = 0.23 √]
Figure C2-3. Z-channel (a) acceleration-time history data used to compute metrics and (b) integration of acceleration-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 1.5 √; S&G phase = 44.5 x; Mean = 0.02 √; St.D. = 0.51 x]
Figure C2-4. Roll-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.

C-29
[Figure: paired plots (a) and (b). S&G mag. = 9.7 √; S&G phase = 25.2 √; Mean = 0.07 x; St.D. = 0.36 x]
Figure C2-5. Pitch-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.

[Figure: paired plots (a) and (b). S&G mag. = 9.6 √; S&G phase = 10.4 √; Mean = 0.06 √; St.D. = 0.14 √]
Figure C2-6. Yaw-channel (a) angular rate-time history data used to compute metrics and (b) integration of angular rate-time history data.

C-30 APPENDIX C3: SMALL CAR STRIKING A VERTICAL RIGID WALL

VALIDATION/VERIFICATION REPORT FOR A
________________________EN 1317 Vehicle ___________________________________
(Report 350 or MASH or EN1317 Vehicle Type)
Striking a ______________________Concrete barrier_______________________________
(roadside hardware type and name)
Report Date: _____12/30/09_____
Type of Report (check one)
Verification (known numerical solution compared to new numerical solution) or
Validation (full-scale crash test compared to a numerical solution).

General Information | Known Solution | Analysis Solution
Performing Organization | Test agency 1 | Politecnico di Milano
Test/Run Number: | S 70 | GM_R5 Round Robin
Vehicle: | Fiat UNO | Geo Metro (GM_R5)

Impact Conditions | Known Solution | Analysis Solution
Vehicle Mass: | 922 kg | 860 kg
Speed: | 100.33 km/h | 100 km/h
Angle: | 20 deg | 20 deg
Impact Point: | 10 m from beginning | 4.5 m from the beginning

Composite Validation/Verification Score
List the Report 350/MASH or EN1317 Test Number:
Step I: Did all solution verification criteria in Table C3-1 pass? | YES
Step II: Do all the time history evaluation scores from Table C3-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If all the values in Table C3-2 did not pass, did the weighted procedure shown in Table C3-3 result in an acceptable comparison? If all the criteria in Table C3-2 pass, enter "yes." If all the criteria in Table C3-2 did not pass but Table C3-3 resulted in a passing score, enter "yes." | YES
Step III: Did all the criteria in Table C3-5 pass? | NO
Are the results of Steps I through III all affirmative (i.e., YES)? If all three steps result in a "YES" answer, the comparison can be considered validated or verified. If one of the steps results in a negative response, the result cannot be considered validated or verified. | NO

The analysis solution (check one) is / is NOT verified/validated against the known solution.

C-31 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)? Longitudinal barrier or transition Terminal or crash cushion Breakaway support or work zone traffic control device Truck-mounted attenuator Other hardware: _____________________________________ 2. What test guidelines were used to perform the full-scale crash test (check one)? NCHRP Report 350 MASH EN1317 Other: ______________________________________________ 3. Indicate the test level and number being evaluated (fill in the blank). ___TB-11________ 4. Indicate the vehicle type appropriate for the test level and number indicated in item 3 according to the testing guidelines indicated in item 2. NCHRP Report 350/MASH 700C 820C 1100C 2000P 2270P 8000S 10000S 36000V 36000T EN1317 Car (900 kg) Car (1300 kg) Car (1500 kg) Rigid HGV (10 ton) Rigid HGV (16 ton) Rigid HGV (30 ton) Bus (13 ton) Articulated HGV (38 ton)

C-32 PART II: ANALYSIS SOLUTION VERIFICATION Table C3-1. Analysis Solution Verification Table. Verification Evaluation Criteria Change (%) Pass? Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. -1 YES Hourglass Energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES Mass added to the total model is less than five percent of the total model mass at the beginning of the run. 0 YES The part/material with the most mass added had less than 10 percent of its initial mass added. 0 YES The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. 0 YES There are no shooting nodes in the solution? No YES There are no solid elements with negative volumes? No YES The Analysis Solution (check one) passes does NOT pass all the criteria in Table C3-1.

C-33 PART III: TIME HISTORY EVALUATION TABLE

Table C3-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option).
Evaluation criteria over the time interval [0 sec; 0.4 sec].

Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. (Shift and drift preprocessing options are listed as true curve/test curve.)

Channel | Filter Option | Sync. Option | Shift | Drift | M [%] | P [%] | Pass?
X acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 7.7 | 36.8 | Y
Y acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 24.5 | 38.5 | Y
Z acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 218.4 | 41.5 | N
Yaw rate | N/A | N/A | N/A | N/A | 0.7 | 11.1 | Y
Roll rate | N/A | N/A | N/A | N/A | N/A | N/A | N/A
Pitch rate | N/A | N/A | N/A | N/A | N/A | N/A | N/A

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).

Channel | Mean Residual [%] | Standard Deviation of Residuals [%] | Pass?
X acceleration/Peak | 0.82 | 17.4 | Y
Y acceleration/Peak | -2.32 | 30.5 | Y
Z acceleration/Peak | -2.84 | 54.2 | N
Yaw rate | 3.3 | 9.5 | Y
Roll rate | N/A | N/A | N/A
Pitch rate | N/A | N/A | N/A

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C3-2.

C-34 Table C3-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multiple channels).
Evaluation criteria over the time interval [0 sec; 0.4 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Yaw rate.

Multi-channel weights, Area (II) method (bar chart of the weights omitted):
X Channel – 0.16
Y Channel – 0.30
Z Channel – 0.04
Yaw rate Channel – 0.5

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 17.6% | P = 24.7% | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = 1% | Standard Deviation of Residuals = 18.8% | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C3-3.

C-35 PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table C3-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38)
B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Applicable tests: 60, 61, 70, 71, 80, 81)
C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53)

Occupant Risk
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Applicable tests: all)
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) (Applicable tests: 70, 71)
F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Applicable tests: all except those listed in criterion G)
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Applicable tests: 12, 22; for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits:
   Longitudinal and lateral: preferred 9 m/s, maximum 12 m/s (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81)
   Longitudinal: preferred 3 m/s, maximum 5 m/s (applicable tests: 60, 61, 70, 71)
I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81)

Vehicle Trajectory
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. (Applicable tests: 11, 21, 35, 37, 38, 39)
M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39)
N. Vehicle trajectory behind the test article is acceptable. (Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81)

C-36 Table C3-5. Roadside Safety Phenomena Importance Ranking Table.

Structural Adequacy
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) | Yes | Yes | | YES
A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. | 0 | 0 | 0% / 0 m | YES
A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. | 7 m | 10 m(1) | 30% / 3 m | NO
A4. The relative difference in the number of broken or significantly bent posts is less than 20 percent. | 0 | 0 | 0 | YES
A5. The rail element did not rupture or fail. (Answer Yes or No) | Yes | Yes | | YES
A6. There were no failures of connector elements. (Answer Yes or No) | Yes | Yes | | YES
A7. There was no significant snagging between the vehicle wheels and barrier elements. (Answer Yes or No) | Yes | Yes | | YES
A8. There was no significant snagging between vehicle body components and barrier elements. (Answer Yes or No) | Yes | Yes | | YES
B1. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Answer Yes or No) | | | |
C1. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Answer Yes or No) | | | |
C2. The relative difference in maximum system stroke is less than 20 percent. | | | |
C3. The relative difference in the number of broken or significantly bent posts is less than 20 percent. | | | |
C4. The rail element did not rupture or tear. (Answer Yes or No) | | | |
C5. There were no failures of connector elements. (Answer Yes or No) | | | |

(1) The vehicle slid along the barrier due to collapse of the steering system (front right wheel turned towards the barrier).

C-37 Table C3-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Occupant Risk
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Answer Yes or No) | Pass | Pass | | YES
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) | | | |
F1. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Answer Pass or Not pass) | Pass | Pass | | YES
F2. Maximum roll of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | ≈3°(1) | 2.5° | 16% / 0.5° | YES
F3. Maximum pitch of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | N/A | N/A | N/A | N/A
F4. Maximum yaw of the vehicle: relative difference is less than 20 percent or absolute difference is less than 5 degrees. | 16.8° | 17.5° | 4% / 0.7° | YES
H1. Occupant impact velocities: relative difference is less than 20 percent or absolute difference is less than 2 m/s(2):
• Longitudinal OIV (m/s) | 4.5 | 3.3 | -27% / 1.2 m/s | YES
• Lateral OIV (m/s) | -7.2 | -7.2 | 0% / 0 m/s | YES
H2. Longitudinal OIV | 4.5 | 3.3 | -27% / 1.2 m/s | YES
H3. THIV (m/s) | 7.9 | 7.6 | -3.8% / 0.3 m/s | YES
I. Occupant accelerations: relative difference is less than 20 percent or absolute difference is less than 4 g's(2):
• Longitudinal ORA (g) | -5 | -3.5 | 30% / 1.5 g's | YES
• Lateral ORA (g) | 19.8 | 10 | -49.5% / 9.8 g's | NO
• PHD (g) | 20.4 | 11.2 | -45% / 9.2 g's | NO
• ASI | 1.59 | 1.78 | 11% / 0.2 | YES

(1) The value was visually assessed from the image sequence of the test.

C-38 (2) The severity indexes were computed considering the curves preprocessed by RSVVP on the time interval [0 sec, 0.2 sec].

Table C3-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Vehicle Trajectory
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. | | | |
M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. | ≈10°(1), Yes | 0°(2), Yes | | YES
M2. The relative difference in the exit angle at loss of contact is less than 20 percent or 5 degrees. | 10° | 0°(2) | -100% / 10° | NO
M3. The relative difference in the exit velocity at loss of contact is less than 20 percent or 10 km/hr. | 78.8 km/h | 82 km/h(2,3) | 4% / 3.2 km/hr | YES
M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No) | Yes | Yes | | YES
M5. One or more tires separated from the vehicle. (Answer Yes or No) | No | No | | YES
N. Vehicle trajectory went behind the test article. (Answer Yes or No) | | | |

(1) The value was visually assessed from the image sequence of the test.
(2) The vehicle slid along the whole length of the barrier and never lost contact.
(3) The exit velocity was considered at the same time the vehicle lost contact with the barrier in the experimental test (t = 0.35 sec).

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C3-5, with exceptions as noted / without exceptions.

C-39 Plots of the time histories used to evaluate the comparison metrics

[Figure: X-acceleration (g) comparison plot.]
[Figure: Y-acceleration (g) comparison plot.]

C-40
[Figure: Z-acceleration (g) comparison plot.]
[Figure: Yaw rate (rad/sec) comparison plot.]

C-41 APPENDIX C4: SMALL CAR STRIKING A VERTICAL RIGID WALL

VALIDATION/VERIFICATION REPORT FOR A
________________________EN 1317 Vehicle ___________________________________
(Report 350 or MASH or EN1317 Vehicle Type)
Striking a ______________________Concrete barrier_______________________________
(roadside hardware type and name)
Report Date: _____01/07/09_____
Type of Report (check one)
Verification (known numerical solution compared to new numerical solution) or
Validation (full-scale crash test compared to a numerical solution).

General Information | Known Solution | Analysis Solution
Performing Organization | Test agency 2 | Politecnico di Milano
Test/Run Number: | ROU/ROB-02/664 | GM_R5 Round Robin
Vehicle: | Peugeot 205 | Geo Metro (GM_R5)

Impact Conditions | Known Solution | Analysis Solution
Vehicle Mass: | 862 kg | 860 kg
Speed: | 100.4 km/h | 100 km/h
Angle: | 20 deg | 20 deg
Impact Point: | 10.7 m from beginning | 4.5 m from the beginning

Composite Validation/Verification Score
List the Report 350/MASH or EN1317 Test Number:
Step I: Did all solution verification criteria in Table C4-1 pass? | YES
Step II: Do all the time history evaluation scores from Table C4-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If all the values in Table C4-2 did not pass, did the weighted procedure shown in Table C4-3 result in an acceptable comparison? If all the criteria in Table C4-2 pass, enter "yes." If all the criteria in Table C4-2 did not pass but Table C4-3 resulted in a passing score, enter "yes." | YES
Step III: Did all the criteria in Table C4-5 pass? | NO
Are the results of Steps I through III all affirmative (i.e., YES)? If all three steps result in a "YES" answer, the comparison can be considered validated or verified. If one of the steps results in a negative response, the result cannot be considered validated or verified. | NO

The analysis solution (check one) is / is NOT verified/validated against the known solution.

C-42 PART I: BASIC INFORMATION 1. What type of roadside hardware is being evaluated (check one)? Longitudinal barrier or transition Terminal or crash cushion Breakaway support or work zone traffic control device Truck-mounted attenuator Other hardware: _____________________________________ 2. What test guidelines were used to perform the full-scale crash test (check one)? NCHRP Report 350 MASH EN1317 Other: ______________________________________________ 3. Indicate the test level and number being evaluated (fill in the blank). ___TB-11________ 4. Indicate the vehicle type appropriate for the test level and number indicated in item 3 according to the testing guidelines indicated in item 2. NCHRP Report 350/MASH 700C 820C 1100C 2000P 2270P 8000S 10000S 36000V 36000T EN1317 Car (900 kg) Car (1300 kg) Car (1500 kg) Rigid HGV (10 ton) Rigid HGV (16 ton) Rigid HGV (30 ton) Bus (13 ton) Articulated HGV (38 ton)

C-43 PART II: ANALYSIS SOLUTION VERIFICATION Table C4-1. Analysis Solution Verification Table. Verification Evaluation Criteria Change (%) Pass? Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. -1 YES Hourglass Energy of the analysis solution at the end of the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES The part/material with the highest amount of hourglass energy at any time during the run is less than five percent of the total initial energy at the beginning of the run. 0.5 YES Mass added to the total model is less than five percent of the total model mass at the beginning of the run. 0 YES The part/material with the most mass added had less than 10 percent of its initial mass added. 0 YES The moving parts/materials in the model have less than five percent of mass added to the initial moving mass of the model. 0 YES There are no shooting nodes in the solution? No YES There are no solid elements with negative volumes? No YES The Analysis Solution (check one) passes does NOT pass all the criteria in Table C4-1 with without exceptions as noted.

C-44 PART III: TIME HISTORY EVALUATION TABLE

Table C4-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single channel option).
Evaluation criteria over the time interval [0 sec; 0.4 sec].

Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable. (Shift and drift preprocessing options are listed as true curve/test curve.)

Channel | Filter Option | Sync. Option | Shift | Drift | M [%] | P [%] | Pass?
X acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 6.8 | 41.3 | N
Y acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 12.3 | 39.7 | Y
Z acceleration | CFC 180 | Min. area of residuals | N/N | N/N | 181.3 | 47.8 | N
Yaw rate | CFC 180 | Min. area of residuals | N/N | N/N | 16.4 | 12 | Y
Roll rate | CFC 180 | Min. area of residuals | N/N | N/N | 46.2 | 50.1 | N
Pitch rate | CFC 180 | Min. area of residuals | N/N | N/N | 38.7 | 40.2 | N

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).

Channel | Mean Residual [%] | Standard Deviation of Residuals [%] | Pass?
X acceleration/Peak | 0.9 | 16.7 | Y
Y acceleration/Peak | -1 | 20 | Y
Z acceleration/Peak | -3 | 53 | N
Yaw rate | -11 | 11.8 | N
Roll rate | 6.2 | 36.7 | N
Pitch rate | -0.11 | 16.1 | Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C4-2.

C-45 Table C4-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multiple channels).
Evaluation criteria over the time interval [0 sec; 0.4 sec].

Channels used: X acceleration, Y acceleration, Z acceleration, Roll rate, Pitch rate, Yaw rate.

Multi-channel weights, Area (II) method:
X Channel – 0.17
Y Channel – 0.28
Z Channel – 0.05
Yaw rate Channel – 0.36
Roll rate Channel – 0.10
Pitch rate Channel – 0.04

Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
M = 25.7% | P = 31.5% | Pass? Y

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than five percent of the peak acceleration ($\bar{e} \le 0.05\,a_{Peak}$), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration ($\sigma \le 0.35\,a_{Peak}$).
Mean Residual = -3.7% | Standard Deviation of Residuals = 19.6% | Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C4-3.

C-46 PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table C4-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38)
B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Applicable tests: 60, 61, 70, 71, 80, 81)
C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53)

Occupant Risk
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. (Applicable tests: all)
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) (Applicable tests: 70, 71)
F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. (Applicable tests: all except those listed in criterion G)
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Applicable tests: 12, 22; for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits:
   Longitudinal and lateral: preferred 9 m/s, maximum 12 m/s (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81)
   Longitudinal: preferred 3 m/s, maximum 5 m/s (applicable tests: 60, 61, 70, 71)
I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's (applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81)

Vehicle Trajectory
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. (Applicable tests: 11, 21, 35, 37, 38, 39)
M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39)
N. Vehicle trajectory behind the test article is acceptable. (Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81)

C-47 Table C4-5. Roadside Safety Phenomena Importance Ranking Table.

Structural Adequacy
Criterion | Known Result | Analysis Result | Difference (Relative/Absolute) | Agree?
A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation although controlled lateral deflection of the test article is acceptable. (Answer Yes or No) | Yes | Yes | | YES
A2. Maximum dynamic deflection: relative difference is less than 20 percent or absolute difference is less than 0.15 m. | 0 | 0 | 0 | YES
A3. Length of vehicle-barrier contact: relative difference is less than 20 percent or absolute difference is less than 2 m. | 3 m | 10 m(1) | 233% / 7 m | NO
A4. The relative difference in the number of broken or significantly bent posts is less than 20 percent. | 0 | 0 | 0% / 0 | YES
A5. The rail element did not rupture or fail. (Answer Yes or No) | Yes | Yes | | YES
A6. There were no failures of connector elements. (Answer Yes or No) | Yes | Yes | | YES
A7. There was no significant snagging between the vehicle wheels and barrier elements. (Answer Yes or No) | Yes | Yes | | YES
A8. There was no significant snagging between vehicle body components and barrier elements. (Answer Yes or No) | Yes | Yes | | YES
B1. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. (Answer Yes or No) | | | |
C1. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. (Answer Yes or No) | | | |
C2. The relative difference in maximum system stroke is less than 20 percent. | | | |
C3. The relative difference in the number of broken or significantly bent posts is less than 20 percent. | | | |
C4. The rail element did not rupture or tear. (Answer Yes or No) | | | |
C5. There were no failures of connector elements. (Answer Yes or No) | | | |

(1) The vehicle slid along the barrier due to collapse of the steering system (front right wheel turned towards the barrier).

C-48

Table C4-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Occupant Risk
D   Detached elements, fragments, or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians, or personnel in a work zone. (Answer Yes or No)
    Known: Pass   Analysis: Pass   Agree? YES
E   Detached elements, fragments, or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No)
F   F1. The vehicle should remain upright during and after the collision, although moderate roll, pitching, and yawing are acceptable. (Answer Pass or Not pass)
        Known: Pass   Analysis: Pass   Agree? YES
    F2. Maximum roll of the vehicle: relative difference less than 20 percent, or absolute difference less than 5 degrees.
        Known: 3 deg (1)   Analysis: 2.5 deg   Difference: 20% / 0.5 deg   Agree? YES
    F3. Maximum pitch of the vehicle: relative difference less than 20 percent, or absolute difference less than 5 degrees.
        Known: 4.5 deg   Analysis: 3 deg   Difference: -33% / 1.5 deg   Agree? YES
    F4. Maximum yaw of the vehicle: relative difference less than 20 percent, or absolute difference less than 5 degrees.
        Known: 25 deg   Analysis: 17.5 deg   Difference: -30% / 7.5 deg   Agree? NO
H   H1. Occupant impact velocities: relative difference less than 20 percent, or absolute difference less than 2 m/s (2):
        Longitudinal OIV (m/s):   Known: 4.8   Analysis: 3.3    Difference: 31.2% / 1.5 m/s   Agree? YES
        Lateral OIV (m/s):        Known: -8    Analysis: -7.2   Difference: -10% / 0.8 m/s    Agree? YES
    H2. Longitudinal OIV (m/s):   Known: 4.8   Analysis: 3.3    Difference: 31.2% / 1.5 m/s   Agree? NO
    H3. THIV (m/s):               Known: 8.1   Analysis: 7.6    Difference: -6.1% / 0.5 m/s   Agree? YES
I   Occupant accelerations: relative difference less than 20 percent, or absolute difference less than 4 g's (2):
        Longitudinal ORA (g):     Known: -3.7  Analysis: -3.5   Difference: -5.4% / 0.2 g's   Agree? YES
        Lateral ORA (g):          Known: 14.3  Analysis: 10     Difference: -30% / 4.3 g's    Agree? NO
        PHD (g):                  Known: 15.2  Analysis: 11.2   Difference: -26% / 4 g's      Agree? NO
        ASI:                      Known: 1.93  Analysis: 1.78   Difference: -7% / 0.15        Agree? YES

(1) The value was visually assessed from the image sequence of the test.
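The ASI row is an EN 1317-style severity index: 50-millisecond moving averages of the three acceleration channels are normalized by the 12/9/10-g occupant limits and combined. A sketch follows, assuming accelerations in g's, uniform sampling, and a record longer than the averaging window; it is an illustration, not the RSVVP source.

import numpy as np

def asi(t, ax, ay, az, limits=(12.0, 9.0, 10.0)):
    """Acceleration Severity Index sketch: ax, ay, az in g's."""
    dt = t[1] - t[0]
    n = max(1, int(round(0.050 / dt)))        # 50-ms averaging window
    k = np.ones(n) / n
    avg = [np.convolve(a, k, mode="valid") for a in (ax, ay, az)]
    ratios = sum((a / lim) ** 2 for a, lim in zip(avg, limits))
    return float(np.max(np.sqrt(ratios)))     # e.g., 1.93 (test), 1.78 (FE)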

C-49

(2) The severity indexes were computed from the curves preprocessed by RSVVP on the time interval [0 sec, 0.2 sec].

Table C4-5. Roadside Safety Phenomena Importance Ranking Table (continued).

Vehicle Trajectory
L   The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec, and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 g's.
M   M1. The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time the vehicle loses contact with the test device.
        Known: ~5.5 deg (27.5%), Yes   Analysis: 0 deg (1) (0%), Yes   Agree? YES
    M2. Exit angle at loss of contact: relative difference less than 20 percent, or absolute difference less than 5 degrees.
        Known: 5.5 deg   Analysis: 0 deg (1)   Difference: -100% / 5.5 deg   Agree? NO
    M3. Exit velocity at loss of contact: relative difference less than 20 percent, or absolute difference less than 10 m/s.
        Known: 78.4 km/h   Analysis: 82 km/h (1,2)   Difference: 4.6% / 3.6 km/h   Agree? YES
    M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No)
        Known: Yes   Analysis: Yes   Agree? YES
    M5. One or more tires separated from the vehicle. (Answer Yes or No)
        Known: No   Analysis: No   Agree? YES
N   Vehicle trajectory went behind the test article. (Answer Yes or No)

(1) The vehicle slid along the whole length of the barrier and never lost contact.
(2) The exit velocity was taken at the time the vehicle lost contact with the barrier in the experimental test (t = 0.35 sec).

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C4-5, with exceptions as noted / without exceptions.

C-50

Plots of the time histories used to evaluate the comparison metrics:

[Figure: X-acceleration (g) and Y-acceleration (g) time histories]

C-51

[Figure: Z-acceleration (g) and yaw rate (rad/sec) time histories]

C-52

[Figure: roll rate (rad/sec) and pitch rate (rad/sec) time histories]

C-53

APPENDIX C5: TRACTOR TRAILER TRUCK STRIKING A 42-INCH TALL RIGID CONCRETE MEDIAN BARRIER

VALIDATION/VERIFICATION REPORT FOR A Tractor-Semitrailer Model (36000V) (Report 350 vehicle type)
Striking a 42-inch tall "rigid" concrete median barrier (roadside hardware type and name)

Report Date: 11-30-2009

Type of Report (check one):
   Verification (known numerical solution compared to new numerical solution) or
   Validation (full-scale crash test compared to a numerical solution)

General Information
   Performing Organization:   Known Solution: MwRSF   Analysis Solution: WPI/Battelle
   Test/Run Number:           Known Solution: TL5CMB-2   Analysis Solution: TT090518_RUN1_200ms-approach-SP
   Vehicle:                   Known Solution: 1991 White/GMC Tractor, 1988 Pines 48-ft Trailer   Analysis Solution: 01aTrac_Day_v1a_090506.k, 02aSemiTrailer48_090520.k

Impact Conditions
   Vehicle Mass:   Known Solution: 36,154 kg     Analysis Solution: 36,200 kg
   Speed:          Known Solution: 84.9 km/hr    Analysis Solution: 84.9 km/hr
   Angle:          Known Solution: 15.5 degrees  Analysis Solution: 15.5 degrees
   Impact Point:

Composite Validation/Verification Score
   List the Report 350/MASH or EN1317 test number: ____
   Step I. Did all solution verification criteria in Table C5-1 pass?  Y?
   Step II. Do all the time history evaluation scores from Table C5-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If all the values in Table C5-2 did not pass, did the weighted procedure shown in Table C5-3 result in an acceptable comparison? If all the criteria in Table C5-2 pass, enter "yes." If all the criteria in Table C5-2 did not pass but Table C5-3 resulted in a passing score, enter "yes."  Y
   Step III. Did all the criteria in Table C5-5 pass?  Y
   Are the results of Steps I through III all affirmative (i.e., YES)? If all three steps result in a "YES" answer, the comparison can be considered validated or verified. If one of the steps results in a negative response, the result cannot be considered validated or verified.  Y?

The analysis solution (check one) is / is NOT verified/validated against the known solution.

C-54

PART I: BASIC INFORMATION

1. What type of roadside hardware is being evaluated (check one)?
   Longitudinal barrier or transition
   Terminal or crash cushion
   Breakaway support or work zone traffic control device
   Truck-mounted attenuator
   Other hardware: ____
2. What test guidelines were used to perform the full-scale crash test (check one)?
   NCHRP Report 350
   MASH
   EN1317
   Other: ____
3. Indicate the test level and number being evaluated (fill in the blank). 5-12
4. Indicate the vehicle type appropriate for the test level and number indicated in item 3 according to the testing guidelines indicated in item 2.
   NCHRP Report 350/MASH: 700C, 820C, 1100C, 2000P, 2270P, 8000S, 10000S, 36000V, 36000T
   EN1317: Car (900 kg), Car (1300 kg), Car (1500 kg), Rigid HGV (10 ton), Rigid HGV (16 ton), Rigid HGV (30 ton), Bus (13 ton), Articulated HGV (38 ton)

C-55

PART II: ANALYSIS SOLUTION VERIFICATION

Table C5-1. Analysis Solution Verification Table.

Verification Evaluation Criteria (Change % / Pass?)
- Total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run. (Sliding interface energy was the source of the increase in total energy.)  Change: 10  Pass: YES
- Hourglass energy of the analysis solution at the end of the run is less than 5 percent of the total initial energy at the beginning of the run.  Change: 0.1  Pass: YES
- Hourglass energy of the analysis solution at the end of the run is less than 10 percent of the total internal energy at the end of the run.  Change: 0.6  Pass: YES
- The part/material with the highest amount of hourglass energy at any time during the run is less than 5 percent of the total initial energy at the beginning of the run.  Change: 0.02  Pass: YES
- Mass added to the total model is less than 5 percent of the total model mass at the beginning of the run.  Change: 0.0  Pass: YES
- The part/material with the most mass added had less than 10 percent of its initial mass added.  Change: 400  Pass: NO* (weld elements connecting trailer side panels to vertical posts: 200 kg added / 50 kg initial)
- The moving parts/materials in the model have less than 5 percent of mass added to the initial moving mass of the model.  Change: 0.0  Pass: YES
- There are no shooting nodes in the solution?  YES
- There are no solid elements with negative volumes?  YES

* Part 7803 consists of weld elements used to connect the trailer's outer side panels to the vertical support posts. These connector elements are relatively "rigid," and the added mass is considered insignificant relative to the overall mass of the parts they connect.

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C5-1, with / without exceptions noted.
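These checks lend themselves to automation. The sketch below assumes the global energy and mass time series have already been extracted from the solver output (e.g., an LS-DYNA glstat file; the parsing is not shown, and the function name is hypothetical):

import numpy as np

def verify_table_c5_1(total_e, hourglass_e, internal_e, added_mass, model_mass):
    """Pass/fail flags for the energy and mass criteria of Table C5-1.
    All arguments except model_mass are time series over the run."""
    total_e = np.asarray(total_e, dtype=float)
    return {
        # Total energy stays within 10% of its initial value for the whole run
        "total_energy": np.max(np.abs(total_e - total_e[0])) / total_e[0] <= 0.10,
        # Final hourglass energy < 5% of initial total energy
        "hg_vs_initial_total": hourglass_e[-1] <= 0.05 * total_e[0],
        # Final hourglass energy < 10% of final internal energy
        "hg_vs_final_internal": hourglass_e[-1] <= 0.10 * internal_e[-1],
        # Added mass < 5% of initial model mass
        "added_mass": added_mass[-1] <= 0.05 * model_mass,
    }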

C-56

PART III: TIME HISTORY EVALUATION TABLE

Table C5-2. Roadside Safety Validation Metrics Rating Table - Time History Comparisons (single-channel option).

Evaluation Criteria (time interval [0 sec; 1.54 sec])

O. Sprague-Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable.

RSVVP curve preprocessing: all channels were filtered with CFC 180 and synchronized by minimum area of residuals; no shift or drift correction was applied to either the true or the test curve.

   Channel          M      P      Pass?
   X acceleration   12.4   48.5   N
   Y acceleration   13.5   31.4   Y
   Z acceleration   12.8   47.1   N

P. ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
   - The mean residual error must be less than 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).

   Channel               Mean Residual   Standard Deviation of Residuals   Pass?
   X acceleration/Peak   0.02            0.10                              Y
   Y acceleration/Peak   0.0             0.08                              Y
   Z acceleration/Peak   0.0             0.14                              Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C5-2.
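Both metric families have closed forms. Writing m for the true (measured) curve and c for the computed curve, the Sprague-Geers magnitude and phase are M = sqrt(Σc²/Σm²) - 1 and P = (1/π)·arccos(Σmc / sqrt(Σm²·Σc²)), reported in percent, while the ANOVA metrics are the mean and standard deviation of the residuals normalized by the peak of the true curve. A minimal sketch, assuming RSVVP's preprocessing (filtering and synchronization) has already been applied to both curves:

import numpy as np

def sprague_geers(m, c):
    """Magnitude (M) and phase (P) metrics in percent; m = true curve,
    c = computed curve, equally sampled over the same interval."""
    mm, cc, mc = np.dot(m, m), np.dot(c, c), np.dot(m, c)
    M = 100.0 * (np.sqrt(cc / mm) - 1.0)
    P = 100.0 / np.pi * np.arccos(mc / np.sqrt(mm * cc))
    return M, P

def anova_metrics(m, c):
    """Mean and standard deviation of residuals, normalized by peak |m|."""
    r = (np.asarray(m) - np.asarray(c)) / np.max(np.abs(m))
    return np.mean(r), np.std(r)

# Acceptance per Table C5-2: |M|, |P| <= 40; |mean| <= 0.05; std <= 0.35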

C-57

Table C5-3. Roadside Safety Validation Metrics Rating Table - Time History Comparisons (multi-channel option using Area II method).

Evaluation Criteria (time interval [0 sec; 1.54 sec])

Channels (select which were used): the X, Y, and Z accelerations were used; the roll, pitch, and yaw rates were not.

Multi-channel weights (Area II method):
   X channel: 0.038
   Y channel: 0.640
   Z channel: 0.322

O. Sprague-Geers Metrics. Values less than or equal to 40 are acceptable.
   M = 13.2   P = 37.1   Pass? Y

P. ANOVA Metrics. Both of the following criteria must be met:
   - The mean residual error must be less than 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
   Mean Residual = 0.00   Standard Deviation of Residuals = 0.10   Pass? Y

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C5-3.
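The multi-channel result is a weighted average of the single-channel metrics, with the Area II weights reflecting each channel's share of the total area under the filtered acceleration curves. With the weights above and the single-channel values from Table C5-2, the composite scores in Table C5-3 can be reproduced directly (a sketch; RSVVP computes the weights internally):

import numpy as np

def composite(values, weights):
    """Weighted combination of single-channel metric values."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), values))

w = [0.038, 0.640, 0.322]                 # X, Y, Z Area II weights
print(composite([12.4, 13.5, 12.8], w))   # ~13.2 (S&G magnitude, Table C5-3)
print(composite([48.5, 31.4, 47.1], w))   # ~37.1 (S&G phase, Table C5-3)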

C-58

PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table C5-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation, although controlled lateral deflection of the test article is acceptable.
   Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38
B. The test article should readily activate in a predictable manner by breaking away, fracturing, or yielding.
   Applicable tests: 60, 61, 70, 71, 80, 81
C. Acceptable test article performance may be by redirection, controlled penetration, or controlled stopping of the vehicle.
   Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53

Occupant Risk
D. Detached elements, fragments, or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians, or personnel in a work zone.
   Applicable tests: All
E. Detached elements, fragments, or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No)
   Applicable tests: 70, 71
F. The vehicle should remain upright during and after the collision, although moderate roll, pitching, and yawing are acceptable.
   Applicable tests: All except those listed in criterion G
G. It is preferable, although not essential, that the vehicle remain upright during and after collision.
   Applicable tests: 12, 22 (for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits (ft/s):
   Longitudinal and lateral: preferred 30, maximum 40
      Applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81
   Longitudinal: preferred 10, maximum 15
      Applicable tests: 60, 61, 70, 71
I. Occupant ridedown accelerations should satisfy the following limits (g's):
   Longitudinal and lateral: preferred 15, maximum 20
      Applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81

Vehicle Trajectory
J. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec, and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 g's.
   Applicable tests: 11, 21, 35, 37, 38, 39
M. The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time the vehicle loses contact with the test device.
   Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39
N. Vehicle trajectory behind the test article is acceptable.
   Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81

C-59

Table C5-5. Structural Adequacy Phenomena for the Tractor-Semitrailer Test Case.

Structural Adequacy
A   Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation, although controlled lateral deflection of the test article is acceptable. (Answer Yes or No)
    Known: Yes   Analysis: Yes   Agree? YES

Occupant Risk
D   Detached elements, fragments, or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians, or personnel in a work zone. (Answer Yes or No)
    Known: Pass   Analysis: Pass   Agree? N.M.*
G   G1. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Answer Yes or No)
        Known: Pass   Analysis: Pass   Agree? YES
    G2. The relative difference in the maximum roll of the vehicle is less than 20 percent.
        Known: 42 deg   Analysis: 42.8 deg   Difference: 2% / 0.8 deg   Agree? YES
    G3. The relative difference in the maximum pitch of the vehicle is less than 20 percent.
        Not measured.
    G4. The relative difference in the maximum yaw of the vehicle is less than 20 percent.
        Known: 15.5   Analysis: 15.5   Difference: 0   Agree? YES

Vehicle Trajectory
M   M1. The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time the vehicle loses contact with the test device.
        Known: Yes   Analysis: Yes   Agree? YES
    M2. The relative difference in the yaw angle at loss of contact is less than 20 percent.
        Known: 15.5 deg   Analysis: 15.5 deg   Difference: 0   Agree? YES
    M3. The relative difference in the exit velocity at loss of contact is less than 20 percent.
    M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No)
        Known: Yes   Analysis: N/A
    M5. One or more tires separated from the vehicle. (Answer Yes or No)
        Known: No   Analysis: N/A

* N.M. = not measured. Structural adequacy of the barrier was not of interest in this analysis; the barrier was modeled as rigid, so criterion D could not be assessed.

The Analysis Solution (check one) passes / does NOT pass all the criteria in Table C5-5, with exceptions as noted / without exceptions.

C-60

Plots of the time histories used to evaluate the comparison metrics:

Figure C5-1. X-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data. (S&G magnitude = 12.4, pass; S&G phase = 48.5, fail; mean = 0.02, pass; standard deviation = 0.10, pass.)

Figure C5-2. Y-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data. (S&G magnitude = 13.5, pass; S&G phase = 31.4, pass; mean = 0.00, pass; standard deviation = 0.08, pass.)

C-61

Figure C5-3. Z-channel (a) acceleration-time history data used to compute metrics and (b) 50-millisecond average acceleration-time history data. (S&G magnitude = 12.8, pass; S&G phase = 47.1, fail; mean = 0.00, pass; standard deviation = 0.14, pass.)
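The (b) panels of these figures plot 50-millisecond average accelerations, a standard smoothing for crash records. A minimal version, assuming a uniformly sampled trace:

import numpy as np

def avg_50ms(t, a):
    """50-ms centered moving average of an acceleration trace."""
    n = max(1, int(round(0.050 / (t[1] - t[0]))))
    return np.convolve(a, np.ones(n) / n, mode="same")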

C-62

APPENDIX C6: ROADSIDE HARDWARE PIRT FOR A STRONG-POST W-BEAM GUARDRAIL WITH WOOD BLOCKOUTS

Developer: Worcester Polytechnic Institute, Worcester, MA
Model Date: January 2002
Report Date: November 30, 2009

Barrier: The modified G4(1S) guardrail with wood blockouts is composed of 12-gauge w-beam rails supported by W150x13.5 steel posts with 150x200-mm wood blockouts (i.e., the type of blockout used in the G4(2W) guardrail system), as shown in Figure C6-1. The posts are spaced at 1.905 m center-to-center. The w-beam rails are spliced together using eight 16-mm diameter bolts at each splice connection, and the rails are connected to the posts and blockouts using a single bolt at each post location.

Figure C6-1. Modified G4(1S) guardrail with routed wood blockouts.

Model: The guardrail model is shown in Figure C6-2. The model consists of 34.6 m of the guardrail system with thirteen 3.81-m sections of w-beam rail, twenty-six W150x13.5 steel posts spaced at 1.905 m, and twenty-six 150x200-mm wood blockouts. The upstream end included the MELT guardrail terminal (validated in a previous study). The downstream anchor was modeled using nonlinear springs representative of a MELT guardrail terminal.

C-63

Figure C6-2. Model of the G4(1S) strong-post w-beam guardrail.

Table C6-1. List of Experiments Used in the PIRT Development.
1. Three-point bend test of a W150x13.5 steel post about its weak axis.
2. Load-to-rupture of splice connection under quasi-static loading.
3. Pull-through of post-bolt head connection to w-beam using axial load machine.
4. Full-scale bogie impact tests of W150x13.5 posts embedded in soil with density of 1,980 kg/m3.
5. Full-scale bogie impact tests of W150x13.5 posts embedded in soil with density of 2,110 kg/m3.
6. Full-scale bogie impact tests of W150x13.5 posts embedded in soil with density of 2,240 kg/m3.

C-64

Table C6-2. Comparison Metric Evaluation Table for Phenomena #1.

PHENOMENA #1: Plastic deformation of guardrail posts due to bending about the weak axis

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel              M     P     Pass?
   Force-Displacement   3.6   1.1   Yes

ANOVA Metrics. List all the data channels to compare in the rows below. Use RSVVP to calculate the ANOVA metrics and enter the values below. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel              Mean Residual   Standard Deviation of Residuals   Pass?
   Force-Displacement   0.03            0.03                              Yes

PHENOMENA: Three-point bend test of W150x13.5 post about the weak axis.

C-65

Table C6-3. Comparison Metric Evaluation Table for Phenomena #2.

PHENOMENA #2: Splice Rupture due to Tensile Load in W-Beam

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel              M      P      Pass?
   Force-Displacement   ____   ____   ____

ANOVA Metrics. List all the data channels to compare in the rows below. Use RSVVP to calculate the ANOVA metrics and enter the values below. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel              Mean Residual   Standard Deviation of Residuals   Pass?
   Force-Displacement   ____            ____                              ____

PHENOMENA: Load-to-rupture of splice connection under quasi-static axial loading.

C-66

Table C6-4. Comparison Metric Evaluation Table for Phenomena #3.

PHENOMENA #3: Post-Bolt-Head Pull-Through and Release from W-Beam

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Test case   M      P      Pass?
   Case 1      ____   ____   ____
   Case 2      ____   ____   ____
   Case 3      ____   ____   ____

PHENOMENA TEST CASE: Pull-through of post-bolt-head connection to w-beam using axial load machine.

C-67

Table C6-5. Comparison Metric Evaluation Table for Phenomena #4.

PHENOMENA #4: Post-Soil Interaction/Response (soil density = 1,980 kg/m3)

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel                      M      P      Pass?
   Force-Displacement History   ____   ____   ____

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                      Mean Residual   Standard Deviation of Residuals   Pass?
   Force-Displacement History   ____            ____                              ____

General Comparisons            Test   FEA    Error
   Peak Force (kN)             63     50     21%
   Average Force (kN)          42.8   40.2   6.1%
   Maximum Deflection (mm)     234    249    6.4%

PHENOMENA: The post-soil model was validated with full-scale bogie impact tests of the W150x13.5 post embedded in soil. Test WISC-1 was conducted at the Midwest Roadside Safety Facility. The impact point on the post was 550 mm above grade, and the impact direction was perpendicular to the flange of the post (i.e., the strong direction of the post).
   - Impactor: 946-kg MwRSF rigid-nose bogie vehicle
   - Impact speed: 4.6 m/s
   - Soil type: AASHTO M 147-65 Gradation B
   - Soil density: 1,980 kg/m3

Reference: Coon, B.A., J.D. Reid, and J.R. Rhode, "Dynamic Impact Testing of Guardrail Posts Embedded in Soil," Research Report No. TRP-03-77-98, Midwest Roadside Safety Facility, University of Nebraska-Lincoln, Lincoln, Nebraska (July 21, 1999).

C-68

Table C6-6. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Post-Soil Interaction/Response (soil density = 2,110 kg/m3)

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel                     M      P      Pass?
   Acceleration-Time History   ____   ____   ____

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                      Mean Residual   Standard Deviation of Residuals   Pass?
   Force-Displacement History   ____            ____                              ____

General Comparisons            Test   FEA    Error
   Peak Force (kN)             66     50     24%
   Average Force (kN)          43.9   45.1   2.7%
   Maximum Deflection (mm)     314    306    2.5%

PHENOMENA: The post-soil model was validated with full-scale bogie impact tests of the W150x13.5 post embedded in soil. Test WISC-3 was conducted at the Midwest Roadside Safety Facility. The impact point on the post was 550 mm above grade, and the impact direction was perpendicular to the flange of the post (i.e., the strong direction of the post).
   - Impactor: 946-kg MwRSF rigid-nose bogie vehicle
   - Impact speed: 5.4 m/s
   - Soil type: AASHTO M 147-65 Gradation B
   - Soil density: 2,110 kg/m3

Reference: Coon, B.A., J.D. Reid, and J.R. Rhode, "Dynamic Impact Testing of Guardrail Posts Embedded in Soil," Research Report No. TRP-03-77-98, Midwest Roadside Safety Facility, University of Nebraska-Lincoln, Lincoln, Nebraska (July 21, 1999).

C-69

Table C6-7. Comparison Metric Evaluation Table for Phenomena #6.

PHENOMENA #6: Post-Soil Interaction/Response (soil density = 2,240 kg/m3)

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel                     M   P   Pass?
   Acceleration-Time History   4   4   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                     Mean Residual   Standard Deviation of Residuals   Pass?
   Acceleration-Time History   0.04            0.08                              Y

General Comparisons            Test   FEA    Error
   Peak Force (kN)             66     52     21%
   Average Force (kN)          47.3   48.1   1.7%
   Maximum Deflection (mm)     348    342    1.7%

PHENOMENA: The post-soil model was validated with full-scale bogie impact tests of the W150x13.5 post embedded in soil. Test WISC-4 was conducted at the Midwest Roadside Safety Facility. The impact point on the post was 550 mm above grade, and the impact direction was perpendicular to the flange of the post (i.e., the strong direction of the post).
   - Impactor: 946-kg MwRSF rigid-nose bogie vehicle
   - Impact speed: 5.9 m/s
   - Soil type: AASHTO M 147-65 Gradation B
   - Soil density: 2,240 kg/m3

Reference: Coon, B.A., J.D. Reid, and J.R. Rhode, "Dynamic Impact Testing of Guardrail Posts Embedded in Soil," Research Report No. TRP-03-77-98, Midwest Roadside Safety Facility, University of Nebraska-Lincoln, Lincoln, Nebraska (July 21, 1999).
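The "General Comparisons" rows in Tables C6-5 through C6-7 reduce to peak, mean, and maximum-deflection comparisons with a relative error. A sketch, assuming force (kN) and deflection (mm) histories from the bogie test and the FE run are available as arrays:

import numpy as np

def general_comparisons(f_test, f_fea, d_test, d_fea):
    """Test-vs-FEA summary rows used in the post-soil PIRT tables."""
    rows = {
        "Peak force (kN)": (np.max(f_test), np.max(f_fea)),
        "Average force (kN)": (np.mean(f_test), np.mean(f_fea)),
        "Maximum deflection (mm)": (np.max(d_test), np.max(d_fea)),
    }
    for label, (test, fea) in rows.items():
        error = 100.0 * abs(fea - test) / test
        print(f"{label}: test = {test:.1f}, FEA = {fea:.1f}, error = {error:.1f}%")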

C-70

Table C6-8. Phenomenon Importance Ranking Table for the Modified G4(1S) Guardrail with Wood Blockouts.

Phenomenon (Validated? Verified? Calibrated?)
1. Three-point bend test of W150x13.5 post about the weak axis: Validated
2. Load-to-rupture of splice connection under quasi-static axial loading: Qualitatively Validated*
3. Pull-through of post-bolt-head connection to w-beam using axial load machine: Qualitatively Validated*
4. Full-scale bogie impact tests of the W150x13.5 post embedded in soil (soil density = 1,980 kg/m3): Qualitatively Validated*
5. Full-scale bogie impact tests of the W150x13.5 post embedded in soil (soil density = 2,110 kg/m3): Qualitatively Validated*
6. Full-scale bogie impact tests of the W150x13.5 post embedded in soil (soil density = 2,240 kg/m3): Validated

* Qualitative assessment only.

C-71

APPENDIX C7: VEHICLE PIRT FOR A 1992 FREIGHTLINER FLD120 TRACTOR

PHENOMENA IMPORTANCE RANKING TABLE FOR A 1992 FREIGHTLINER FLD120 TRACTOR

Developer: NCAC/Battelle/ORNL/University of Tennessee at Knoxville
Date: 11/30/2009

Model: Reduced-element (i.e., bullet) model of a 1992 Freightliner FLD120 tractor with an integral sleeper cabin. The wheelbase, measured from the center of the front axle to the center of the rear tandem assembly, is 6.1 m (240 in).

NTRCI has funded the research team of Battelle, Oak Ridge National Laboratory (ORNL), and the University of Tennessee at Knoxville (UTK) to conduct a three-phase investigation to enhance and refine an FE model for simulating tractor-trailer crash events involving barriers and roadside safety hardware such as bridge rails and median barriers. This model was originally developed by the National Crash Analysis Center (NCAC) of George Washington University (GWU) and requires refinement and testing before it can be used by the engineering community for infrastructure design.

C-72

Table C7-1. List of Experiments to be Used in the PIRT Development.
1. Front leaf-spring compression load-displacement test.
2. Compression load/unload displacement test of suspension displacement load limiter.
3. Uniaxial sinusoidal displacement test to measure load-velocity time history of the rear shock absorber at various displacement rates.
4. Uniaxial sinusoidal displacement tests to measure load-velocity time history of the front shock absorbers at various displacement rates.
5. Compression/extension tests of the rear "air bag" suspension at various load rates and bag pressures.
6. Failure tests of front suspension u-bolts.

C-73

Table C7-2. Comparison Metric Evaluation Table for Phenomena #1.

PHENOMENA #1: Front Leaf Suspension

Sprague-Geers Metrics. List all the data channels to be compared below. Using RSVVP, calculate the M and P metrics comparing the experiment and the simulation. Values less than or equal to 20 are acceptable.
   Channel                                            M      P     Pass?
   Force-Displacement History (element size 20 mm)*   11.3   0.9   Y
   Force-Displacement History (element size 10 mm)    5.9    1.1   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                                            Mean Residual   Standard Deviation of Residuals   Pass?
   Force-Displacement History (element size 20 mm)*   0.06            0.04                              N
   Force-Displacement History (element size 10 mm)    0.03            0.03                              Y

General Comparisons                                   Test   FEA    Error
   Stiffness (lb/in), element size 20 mm*             1176   1317   12%
   Stiffness (lb/in), element size 10 mm              1176   1262   7.3%

* Mesh size used in the model.

Front leaf-spring suspension compression test: A leaf spring assembly for a 1992 Freightliner FLD120 tractor was purchased from a local Freightliner dealer. A laboratory test was conducted to measure the force-displacement response of the leaf spring assembly using an MTS uniaxial machine. The FE leaf spring was meshed with two different densities for comparison: (1) nominal element size = 20 mm and (2) nominal element size = 10 mm.

C-74

Table C7-3. Comparison Metric Evaluation Table for Phenomena #2.

PHENOMENA #2: Suspension Displacement Limiter

Sprague-Geers Metrics. Values less than or equal to 20 are acceptable.
   Channel                       M   P     Pass?
   Uniaxial Force-Time History   0   0.8   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                       Mean Residual   Standard Deviation of Residuals   Pass?
   Uniaxial Force-Time History   0               0.01                              Y

Suspension displacement limiter load/unload-displacement test: The load-deflection response of the rubber tip was measured in the laboratory using displacement control on a uniaxial load machine. The displacement was ramped at a constant velocity from 0 to 0.417 inches in 447 seconds and immediately unloaded at the same rate.

Note: The comparison is based on the first 0.4 inches of loading, since the simulation "overshot" the displacement by 1 mm.

C-75

Table C7-4. Comparison Metric Evaluation Table for Phenomena #3.

PHENOMENA #3: Rear Shock Absorbers (Calibration Tests)

Uniaxial sinusoidal displacement calibration tests: The shock absorbers (Monroe Gas-Magnum) were tested in a uniaxial loading machine using a sinusoidal displacement input with +/-0.5 inch maximum displacement. Load-velocity data were collected for loading rates of 0.5, 1, 2, 4, and 8 Hz. The shock absorbers are modeled as discrete elements with their response characterized using *MAT_DAMPER_NONLINEAR in LS-DYNA. The force-velocity characterization curve for the shock absorber is represented by the bold red curve in the plot below.

[Figure: measured load-velocity loops at each loading rate, with the force-velocity characterization curve highlighted]

C-76

Table C7-5. Comparison Metric Evaluation Table for Phenomena #4.

PHENOMENA #4: Front Shock Absorbers (Calibration Tests)

Uniaxial sinusoidal displacement calibration tests: The shock absorbers (Monroe Gas-Magnum) were tested in a uniaxial loading machine using a sinusoidal displacement input with +/-0.5 inch maximum displacement. Load-velocity data were collected for loading rates of 0.5, 1, 2, 4, and 8 Hz. The shock absorbers are modeled as discrete elements with their response characterized using *MAT_DAMPER_NONLINEAR in LS-DYNA. The force-velocity characterization curve for the shock absorber is represented by the bold red curve in the plot below.

[Figure: measured load-velocity loops at each loading rate, with the force-velocity characterization curve highlighted]
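For *MAT_DAMPER_NONLINEAR, the characterization curve is a set of (velocity, force) pairs. One way to assemble such a curve from sinusoidal tests is to pair the nominal peak ram velocity at each frequency (2·pi·f·A for amplitude A) with the measured peak load. The sketch below does exactly that; it is an assumption about the workflow, not the research team's documented procedure, and the function name is hypothetical.

import numpy as np

def damper_curve(tests, amplitude_m=0.0127):
    """Build (velocity, force) pairs for a nonlinear damper from sinusoidal
    tests. `tests` maps frequency (Hz) -> measured force history (N);
    the amplitude of +/-0.5 in equals 0.0127 m."""
    points = []
    for f_hz in sorted(tests):
        v_peak = 2.0 * np.pi * f_hz * amplitude_m   # nominal peak velocity (m/s)
        f_peak = np.max(np.abs(tests[f_hz]))        # measured peak load (N)
        points.append((v_peak, f_peak))
    return np.array(points)

# Frequencies of 0.5-8 Hz at this amplitude span peak velocities of ~0.04-0.64 m/s.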

C-77

Table C7-6. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Rear "Air-Bag" Suspension (20 psig bag pressure, 1.2 in/sec)

Sprague-Geers Metrics. Values less than or equal to 20 are acceptable.
   Channel                       M     P     Pass?
   Uniaxial Force-Time History   1.1   1.6   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                       Mean Residual   Standard Deviation of Residuals   Pass?
   Uniaxial Force-Time History   0.01            0.02                              Y

Rear "air-bag" suspension: compression/extension tests at various load rates and bag pressures. Firestone Airide suspension, Part No. 1T15ZR6. The airbag was modeled via discrete spring and damper elements. Tests were conducted at various bag pressures and deflection rates. For each test, the "zero position" of the Airide component was set to mid-stroke, corresponding to a spring height of 12.5 inches, and held at this position while the internal air pressure in the component was set to the desired value. The tests were conducted under displacement control. Starting from the zero position, the displacement was ramped up 3 inches to a spring height of 15.5 inches and held at this position for a period of time (typically 10 seconds) to allow for relaxation/recovery of the load. The displacement was then ramped down 6 inches to a spring height of 9.5 inches and again held for a period of time. The displacement was then ramped back up 6 inches to a spring height of 15.5 inches and again held.

C-78

Table C7-7. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Rear "Air-Bag" Suspension (20 psig bag pressure, 6 in/sec)

Sprague-Geers Metrics. Values less than or equal to 20 are acceptable.
   Channel                       M     P     Pass?
   Uniaxial Force-Time History   1.5   1.7   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                       Mean Residual   Standard Deviation of Residuals   Pass?
   Uniaxial Force-Time History   0.02            0.02                              Y

Rear "air-bag" suspension: compression/extension tests at various load rates and bag pressures. Firestone Airide suspension, Part No. 1T15ZR6. The airbag was modeled via discrete spring and damper elements. Tests were conducted at various bag pressures and deflection rates. For each test, the "zero position" of the Airide component was set to mid-stroke, corresponding to a spring height of 12.5 inches, and held at this position while the internal air pressure in the component was set to the desired value. The tests were conducted under displacement control. Starting from the zero position, the displacement was ramped up 3 inches to a spring height of 15.5 inches and held at this position for a period of time (typically 10 seconds) to allow for relaxation/recovery of the load. The displacement was then ramped down 6 inches to a spring height of 9.5 inches and again held for a period of time. The displacement was then ramped back up 6 inches to a spring height of 15.5 inches and again held.

C-79

Table C7-8. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Rear "Air-Bag" Suspension (60 psig bag pressure, 0.1 in/sec)

Sprague-Geers Metrics. Values less than or equal to 20 are acceptable.
   Channel                       M     P     Pass?
   Uniaxial Force-Time History   4.7   1.9   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                       Mean Residual   Standard Deviation of Residuals   Pass?
   Uniaxial Force-Time History   0               0.03                              Y

Rear "air-bag" suspension: compression/extension tests at various load rates and bag pressures. Firestone Airide suspension, Part No. 1T15ZR6. The airbag was modeled via discrete spring and damper elements. Tests were conducted at various bag pressures and deflection rates. For each test, the "zero position" of the Airide component was set to mid-stroke, corresponding to a spring height of 12.5 inches, and held at this position while the internal air pressure in the component was set to the desired value. The tests were conducted under displacement control. Starting from the zero position, the displacement was ramped up 3 inches to a spring height of 15.5 inches and held at this position for a period of time (typically 10 seconds) to allow for relaxation/recovery of the load. The displacement was then ramped down 6 inches to a spring height of 9.5 inches and again held for a period of time. The displacement was then ramped back up 6 inches to a spring height of 15.5 inches and again held.

C-80

Table C7-9. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Rear "Air-Bag" Suspension (60 psig bag pressure, 6 in/sec)

Sprague-Geers Metrics. Values less than or equal to 20 are acceptable.
   Channel                       M     P     Pass?
   Uniaxial Force-Time History   2.5   2.9   Y

ANOVA Metrics. The following criteria must be met:
   - The mean residual error must be less than or equal to 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
   - The standard deviation of the residuals must be less than or equal to 25 percent of the peak acceleration (σ ≤ 0.25·a_Peak).
   Channel                       Mean Residual   Standard Deviation of Residuals   Pass?
   Uniaxial Force-Time History   0.02            0.04                              Y

Rear "air-bag" suspension: compression/extension tests at various load rates and bag pressures. Firestone Airide suspension, Part No. 1T15ZR6. The airbag was modeled via discrete spring and damper elements. Tests were conducted at various bag pressures and deflection rates. For each test, the "zero position" of the Airide component was set to mid-stroke, corresponding to a spring height of 12.5 inches, and held at this position while the internal air pressure in the component was set to the desired value. The tests were conducted under displacement control. Starting from the zero position, the displacement was ramped up 3 inches to a spring height of 15.5 inches and held at this position for a period of time (typically 10 seconds) to allow for relaxation/recovery of the load. The displacement was then ramped down 6 inches to a spring height of 9.5 inches and again held for a period of time. The displacement was then ramped back up 6 inches to a spring height of 15.5 inches and again held.

C-81

Table C7-10. Comparison Metric Evaluation Table for Phenomena #6.

PHENOMENA #6: Front Suspension U-Bolt Calibration Tests

Uniaxial load-to-failure calibration tests: A front suspension u-bolt was cut into a tensile test specimen, and a uniaxial tensile test was carried out up to failure of the bolt. The data from the test were processed to generate true stress versus true plastic strain data for input into *MAT_24 in LS-DYNA.

Additional information:
   - Yield strength = 152,009 psi
   - Ultimate strength = 162,933 psi
   - A0 = 0.19737 in2
   - Af = 0.12285 in2
   - Reduction of area = 62%
   - Elongation = 11%
   - Load rate = 0.01/minute
   - Gauge length = 2 in
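The conversion from an engineering stress-strain record to the true-stress versus true-plastic-strain curve that *MAT_24 expects is standard: sigma_true = sigma_eng·(1 + eps_eng), eps_true = ln(1 + eps_eng), and the elastic part sigma_true/E is subtracted. A sketch, valid up to necking; the steel modulus of 29e6 psi is an assumed value, not one reported from the test:

import numpy as np

def mat24_curve(eng_strain, eng_stress, E=29.0e6):
    """True stress vs. true plastic strain from engineering data (psi)."""
    true_stress = eng_stress * (1.0 + eng_strain)
    true_strain = np.log(1.0 + eng_strain)
    plastic_strain = true_strain - true_stress / E   # remove elastic part
    keep = plastic_strain >= 0.0                     # drop the elastic toe
    return plastic_strain[keep], true_stress[keep]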

C-82

Table C7-11. Phenomenon Importance Ranking Table for the Tractor-Semitrailer Model.

No.  Phenomenon (Validated? Verified? Calibrated?)
1.   Front Leaf-Spring Suspension: Validated
2.   Suspension Displacement Limiter: Validated
3.   Rear Shock Absorbers (Calibration Tests): Calibrated
4.   Front Shock Absorbers (Calibration Tests): Calibrated
5.   Rear "Air-Bag" Suspension: Validated
6.   Front Suspension U-Bolts (Calibration Test): Calibrated

C-83

APPENDIX C8: VEHICLE PIRT FOR THE MODIFIED C2500R VEHICLE MODEL

Developer: National Crash Analysis Center, George Washington University
Modified by: Worcester Polytechnic Institute, Worcester, MA
Model Date: January 2002

Model: The NCAC C2500R finite element model is a reduced-element model of a 1995 Chevrolet 2500 pickup truck. The C2500R model, shown in Figure C8-1, has been used by several research organizations over the years, and each organization has made changes and improvements to the model based on its particular analysis needs. As a result, the model has become very efficient and robust for use in crash analyses. The research team at WPI made several modifications to the model to improve its accuracy in simulating vehicle interaction with curbs, with particular emphasis on the suspension system. A list of the modifications and the extent of verification, calibration, and validation of each component model is provided in the following tables and described in a paper by Tiso. (83, 145)

The development of a comprehensive PIRT for the vehicle model was not possible because the electronic data were no longer available for quantitative assessment. All validation assessments reported herein were based on qualitative comparisons of test and simulation data, as reported in the literature; had a PIRT been developed for this model when it was first created, the electronic data could have been used.

Figure C8-1. View of a 1995 C2500 pickup truck: (a) actual vehicle and (b) finite element model of the vehicle.

C-84

Table C8-1. List of Experiments Used in the PIRT Development.
1. Uniaxial tests of front suspension coil springs
2. Uniaxial leaf spring test
3. Front suspension dampers
4. Front suspension displacement limiter
5. Dynamic tests on front suspension
6. Dynamic tests on rear suspension
7. 90-degree curb traversal tests - 6-inch AASHTO Type B curb
8. 25-degree curb traversal tests - 6-inch AASHTO Type B curb

C-85

Table C8-2. Comparison Metric Evaluation Table for Phenomena #1.

PHENOMENA #1: Front Coil Springs (Calibration Tests)

The front coil spring was tested with the Sintech axial test machine. The spring was tested to a maximum load of 22.27 kN, the maximum force that could be measured with the load cell installed on the tester. The maximum compression was approximately 120 mm. The behavior of the spring was found to be linear throughout the displacement range explored. According to the test, the stiffness coefficient of the coil spring was approximately 185 N/mm.

[Figure: measured force (N) vs. displacement (mm), linear up to approximately 22 kN at 120 mm]
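The reported 185-N/mm stiffness is simply the slope of a least-squares line through the force-displacement record. A minimal sketch; the file name and column layout are hypothetical:

import numpy as np

# Two-column file: displacement (mm), force (N); the name is hypothetical
disp_mm, force_n = np.loadtxt("coil_spring_test.csv", delimiter=",", unpack=True)
k, f0 = np.polyfit(disp_mm, force_n, 1)   # slope = stiffness, intercept = preload
print(f"coil-spring stiffness = {k:.0f} N/mm")   # the test reported ~185 N/mm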

C-86

Table C8-3. Comparison Metric Evaluation Table for Phenomena #2.

PHENOMENA #2: Rear Leaf Springs (Calibration Tests)

The leaf spring was tested with the Sintech axial test machine. Because of the complex geometry of the system, the fixture that applies the force to the leaf spring had to be designed carefully in order not to introduce additional bending moment into the leaf spring. The maximum compressive load applied in the test was 8900 N, followed by a 2220 N load applied in the rebound direction. In the compression phase, the load-deflection response of the leaf spring is linear until the overload leaf contacts the other leaves, so the behavior in compression is adequately represented by a bi-linear curve with the knee at a displacement of approximately 145 millimeters. The overload leaf was found to roughly double the stiffness coefficient; the slopes of the two lines are about 34 N/mm and 68 N/mm. The shackle pivoted about its hinge to a maximum angle of approximately 27 degrees.

[Figure: measured force (N) vs. displacement (mm) for compression and rebound]

C-87

Table C8-4. Comparison Metric Evaluation Table for Phenomena #3.

PHENOMENA #3: Shock Absorbers (Calibration Data from Literature)

The shock absorber should be tested by imposing a known position waveform on the moving ram of the testing machine and measuring the force at the other end with a load cell. Since high velocities (e.g., up to at least 2 m/s) need to be explored for typical vehicle impact scenarios, the high hydraulic power demand and the strict requirements on the feedback control of the axial machine make this a demanding test requiring specialized testing facilities. Some data, however, were obtained from a shock absorber manufacturer for the front and rear struts of the C2500 pickup truck. They do not cover the whole range of velocities expected in a vehicle-to-curb impact scenario, but they do capture the component non-linearity and the non-symmetrical behavior in compression and extension.

[Figures: manufacturer force (N) vs. velocity (m/s) curves for the front and rear struts, rebound and compression]

C-88

Table C8-5. Comparison Metric Evaluation Table for Phenomena #4.

PHENOMENA #4: Front Suspension Displacement Limiter (Calibration Tests)

The front displacement limiter (i.e., bump stop) of the C2500 pickup truck consists of a wedge-shaped piece of hard rubber. Two lateral, shorter edges provide extra stiffness when large relative displacements (e.g., greater than 25 mm) occur between the frame and the lower A-arm. The front bump stop was tested in compression with the Sintech axial test machine. The bump stop was simply laid on a flat piece of steel and compressed with the moving head of the machine. The test setup and results are shown below.

[Figure: measured force (N) vs. displacement (mm) for the bump stop in compression]

C-89

Table C8-6. Comparison Metric Evaluation Table for Phenomena #5.

PHENOMENA #5: Dynamic Tests of Front Suspension (Validation)

The test vehicle was driven up onto 220-mm-high wooden ramps and stopped. The instrumentation system was then initialized, and the vehicle was slowly rolled off the ramps. The relative displacements between the wheels and the frame were acquired during the test. High-speed video cameras were used for a visual comparison with the simulation. The simulations were similar, but instead of driving the vehicle off ramps, the vehicle was simply dropped from the same height; this accounts for the time discrepancy in the plot below. The magnitude of the initial displacement in the test compared favorably to the simulation. The response tends to compare less favorably as the event progresses, which may be partly due to the age of the shock absorbers in the actual vehicle.

[Figure: test vs. simulation front-suspension relative-displacement time histories]

C-90

Table C8-7. Comparison Metric Evaluation Table for Phenomena #6.

PHENOMENA #6: Dynamic Tests of Rear Suspension (Validation)

The test vehicle was driven up onto 220-mm-high wooden ramps and stopped. The instrumentation system was then initialized, and the vehicle was slowly rolled off the ramps. The relative displacements between the wheels and the frame were acquired during the test. High-speed video cameras were used for a visual comparison with the simulation. The simulations were similar, but instead of driving the vehicle off ramps, the vehicle was simply dropped from the same height; this accounts for the time discrepancy in the plot below. The magnitude of the initial displacement in the test compared favorably to the simulation. The response tends to compare less favorably as the event progresses, which may be partly due to the age of the shock absorbers in the actual vehicle.

[Figure: test vs. simulation rear-suspension relative-displacement time histories]

C-91

Table C8-8. Comparison Metric Evaluation Table for Phenomena #7.

PHENOMENA #7: 90-Degree Curb Traversal Tests (Validation)

The validation test series used 150-mm-tall AASHTO Type B curbs. The curbs were made of reinforced concrete cast in 1.2-m-long sections. Each set of curbs was attached to the ground with steel rods driven through holes in the curbs into the gravel. The area behind the curb was backfilled with gravel up to the top of the curb. The tests were performed at nominal speeds of 25 and 18 km/hr and at approach angles of 25 and 90 degrees, respectively. The driver released the steering wheel just before the impact. Two high-speed digital video cameras and a real-time video camera were used to record the impact event. High-contrast targets were mounted on the vehicle body and wheel hubs to aid in post-processing the data from the high-speed video cameras and determining the actual impact speed.

[Figures: comparison of the front-wheel and back-wheel relative displacements (mm) vs. time (sec) for the 90-degree test: test 1, test 2, and modified version 9 of the model]

C-92

Table C8-9. Comparison Metric Evaluation Table for Phenomena #8.

PHENOMENA #8: 25-Degree Curb Traversal Tests (Validation)

The validation test series used 150-mm-tall AASHTO Type B curbs. The curbs were made of reinforced concrete cast in 1.2-m-long sections. Each set of curbs was attached to the ground with steel rods driven through holes in the curbs into the gravel. The area behind the curb was backfilled with gravel up to the top of the curb. The tests were performed at nominal speeds of 25 and 18 km/hr and at approach angles of 25 and 90 degrees, respectively. The driver released the steering wheel just before the impact. Two high-speed digital video cameras and a real-time video camera were used to record the impact event. High-contrast targets were mounted on the vehicle body and wheel hubs to aid in post-processing the data from the high-speed video cameras and determining the actual impact speed.

[Figures: relative-displacement comparisons for the 25-degree test: front left, front right, back left, and back right wheels]

C-93

Table C8-10. Phenomenon Importance Ranking Table for the Modified C2500 Vehicle Model.

Phenomenon (Validated? Verified? Calibrated?)
1. Uniaxial tests of front suspension coil springs: Calibrated
2. Uniaxial leaf spring test: Calibrated
3. Front suspension dampers: Calibrated
4. Front suspension displacement limiter: Calibrated
5. Dynamic tests on front suspension: Qualitative Validation*
6. Dynamic tests on rear suspension: Qualitative Validation*
7. 90-degree curb traversal tests - 6-inch AASHTO Type B curb: Qualitative Validation*
8. 25-degree curb traversal tests - 6-inch AASHTO Type B curb: Qualitative Validation*

* Qualitative assessments were made because the original experimental data were no longer available.

C-94 Table C8-5. Comparison Metric Evaluation Table. For Phenomena #6 PHENOMENA # 4: Front suspension displacement limiter (Calibration tests) The front displacement limiter (i.e., bump stop) of the C2500 pickup truck consists of a wedge- shaped piece of hard rubber. Two lateral, shorter edges provide extra stiffness when large relative displacements (e.g., greater than 25 mm) between the frame and the lower A-arm occur. The front bump stop was tested in compression with the Sintech axial test machine. The bump stop was simply laid on a flat piece of steel and compressed with the moving head of the machine. The test setup and results are shown below. 0 500 1000 1500 2000 2500 3000 3500 4000 0 5 10 15 20 25 30 displacement [mm] fo rc e [N ]

C-95 Table C8-6. Comparison Metric Evaluation Table. For Phenomena #6 PHENOMENA # 5: Dynamic tests of Front Suspension (Validation) The test vehicle was driven up onto 220-mm high wooden ramps and stopped. The instrumentation system was then initialized and the vehicle slowly rolled off the ramps. The relative displacement between the wheels and the frame where acquired during the test. High- speed video cameras were used for a visual comparison with the simulation. The simulations were similar but instead of driving the vehicle off ramps the vehicle was simply dropped from the same height. This accounts for the time discrepancy in the plot below. The magnitude of the initial displacement in the test compared favorably to the simulation. The response tends to compare less favorably as the event progresses. This may be partly due to the age of the actual shock absorbers in the actual vehicle

C-96 Table C8-7. Comparison Metric Evaluation Table. For Phenomena #6 PHENOMENA # 6: Dynamic tests of Rear Suspension (Validation) The test vehicle was driven up onto 220-mm high wooden ramps and stopped. The instrumentation system was then initialized and the vehicle slowly rolled off the ramps. The relative displacement between the wheels and the frame where acquired during the test. High- speed video cameras were used for a visual comparison with the simulation. The simulations were similar but instead of driving the vehicle off ramps the vehicle was simply dropped from the same height. This accounts for the time discrepancy in the plot below. The magnitude of the initial displacement in the test compared favorably to the simulation. The response tends to compare less favorably as the event progresses. This may be partly due to the age of the actual shock absorbers in the actual vehicle.

C-97 Table C8-8. Comparison Metric Evaluation Table. For Phenomena #7 PHENOMENA # 7: 90-degree curb traversal tests (Validation) The validation test series used 150-mm tall AASHTO Type B curbs. The curbs were made using reinforced concrete cast in 1.2-m long sections. Each set of curbs was attached to the ground with steel rods driven through holes in the curbs into the gravel. The area behind the curb was backfilled with gravel up to the top of the curb. The tests were performed at a nominal speed of 25 and 18 km/hr and at an approach angle of 25 and 90 degrees, respectively. The driver left the steering wheel free just before the impact. Two high-speed digital video cameras and a real-time video camera were used to record the impact event. High-contrast targets were mounted on the vehicle body and wheel hubs to aid in post processing the data from the high-speed video cameras and determining the actual impacting speed. -150 -100 -50 0 50 100 0 0.2 0.4 0.6 0.8 1 D is pl ac em en t [ m m ] Time [sec] test 1 test 2 modified version 9 -150 -100 -50 0 50 100 150 200 0 0.2 0.4 0.6 0.8 1 D is pl ac em en t [ m m ] Time [sec] test 1 test 2 modified version 9 Comparison of the front wheel relative displacement for the 90 degrees test Comparison of the back wheel relative displacement for the 90 degrees test

C-98 Table C8-9. Comparison Metric Evaluation Table. For Phenomena #8 PHENOMENA # 8: 25-degree curb traversal tests (Validation) The validation test series used 150-mm tall AASHTO Type B curbs. The curbs were made using reinforced concrete cast in 1.2-m long sections. Each set of curbs was attached to the ground with steel rods driven through holes in the curbs into the gravel. The area behind the curb was backfilled with gravel up to the top of the curb. The tests were performed at a nominal speed of 25 and 18 km/hr and at an approach angle of 25 and 90 degrees, respectively. The driver left the steering wheel free just before the impact. Two high-speed digital video cameras and a real-time video camera were used to record the impact event. High-contrast targets were mounted on the vehicle body and wheel hubs to aid in post processing the data from the high-speed video cameras and determining the actual impacting speed. Front Left Wheel Front Right Wheel Back Left Wheel Back Right Wheel

Table C8-10. Phenomenon Importance Ranking Table for the Modified C2500 Vehicle Model.
Validated Phenomenon (Validated, Verified, or Calibrated?):
1. Uniaxial tests of front suspension coil springs: Calibrated
2. Uniaxial leaf spring test: Calibrated
3. Front suspension dampers: Calibrated
4. Front suspension displacement limiter: Calibrated
5. Dynamic tests on front suspension: Qualitative validation*
6. Dynamic tests on rear suspension: Qualitative validation*
7. 90-degree curb traversal tests (6-inch AASHTO Type B curb): Qualitative validation*
8. 25-degree curb traversal tests (6-inch AASHTO Type B curb): Qualitative validation*
* Qualitative assessments were made because the original experimental data were no longer available.

APPENDIX D
SURVEY OF PRACTITIONERS

The survey of practitioners is included in the following pages. The survey form itself is provided first, and a tabulation of the survey responses is provided second.

SURVEY FORMS

The following pages contain copies of the survey forms that were distributed to participants using the web service surveymonkey.com.

[Survey form pages appear here as page images in the original document.]
SUMMARY OF THE SURVEY RESPONSES

[Survey response summary pages appear here as page images in the original document.]

APPENDIX E
VALIDATION/VERIFICATION REPORT FORMS

A _______________________________________________ (Report 350, MASH, or EN1317 vehicle type)
Striking a ________________________________________ (roadside hardware type and name)
Report Date: ______________________________________

Type of Report (check one):
[ ] Verification (known numerical solution compared to new numerical solution)
[ ] Validation (full-scale crash test compared to a numerical solution)

General Information (enter for both the Known Solution and the Analysis Solution):
  Performing Organization:
  Test/Run Number:
  Vehicle:
  Reference:

Impact Conditions (enter for both the Known Solution and the Analysis Solution):
  Vehicle Mass:
  Speed:
  Angle:
  Impact Point:

Composite Validation/Verification Score
List the Report 350/MASH or EN1317 test number: ______________
Part I: Did all solution verification criteria in Table E-1 pass?
Part II: Do all the time history evaluation scores from Table E-2 result in a satisfactory comparison (i.e., the comparison passes the criterion)? If not all the values in Table E-2 passed, did the weighted procedure shown in Table E-3 result in an acceptable comparison? If all the criteria in Table E-2 pass, enter "yes." If not all the criteria in Table E-2 passed but Table E-3 resulted in a passing score, enter "yes."
Part III: Did all the criteria in Tables E-5(a) through E-5(c) (the Test-PIRT) pass?
Are the results of Parts I through III all affirmative (i.e., YES)? If all three parts result in a "YES" answer, the comparison can be considered validated or verified. If one of the parts results in a negative response, the result cannot be considered validated or verified.

The analysis solution (check one) [ ] is / [ ] is NOT verified/validated against the known solution.

PART I: BASIC INFORMATION

These forms may be used for validation or verification of roadside hardware crash tests. If the known solution is a full-scale crash test (i.e., a physical experiment) that is being compared to a numerical solution (e.g., an LS-DYNA analysis), the procedure is a validation exercise. If the known solution is a numerical solution (e.g., a prior finite element model using a different program or an earlier version of the software), the procedure is a verification exercise. This form can also be used to verify the repeatability of crash tests by comparing two full-scale crash test experiments.

Provide the following basic information for the validation/verification comparison:

1. What type of roadside hardware is being evaluated (check one)?
[ ] Longitudinal barrier or transition
[ ] Terminal or crash cushion
[ ] Breakaway support or work zone traffic control device
[ ] Truck-mounted attenuator
[ ] Other hardware: _____________________________

2. What test guidelines were used to perform the full-scale crash test (check one)?
[ ] NCHRP Report 350
[ ] MASH
[ ] EN1317
[ ] Other: _____________________________

3. Indicate the test level and number being evaluated (fill in the blank). ______________________

4. Indicate the vehicle type appropriate for the test level and number indicated in item 3, according to the testing guidelines indicated in item 2.
NCHRP Report 350/MASH:
[ ] 700C  [ ] 820C  [ ] 1100C  [ ] 2000P  [ ] 2270P
[ ] 8000S  [ ] 10000S  [ ] 36000V  [ ] 36000T
[ ] Other: _____________________________
EN1317:
[ ] Car (900 kg)  [ ] Car (1300 kg)  [ ] Car (1500 kg)
[ ] Rigid HGV (10 ton)  [ ] Rigid HGV (16 ton)  [ ] Rigid HGV (30 ton)
[ ] Bus (13 ton)  [ ] Articulated HGV (38 ton)
[ ] Other: _____________________________

PART II: ANALYSIS SOLUTION VERIFICATION

Using the results of the analysis solution, fill in the values in Table E-1. These values indicate whether the analysis solution produced a numerically stable result; they do not necessarily mean that the result is a good comparison to the known solution. The purpose of this table is to ensure that the numerical solution is numerically stable and conforms to the conservation laws (e.g., energy, mass, and momentum).

Table E-1. Analysis Solution Verification Table.
Verification Evaluation Criteria (record Change (%) and Pass? for each):
1. The total energy of the analysis solution (i.e., kinetic, potential, contact, etc.) must not vary more than 10 percent from the beginning of the run to the end of the run.
2. The hourglass energy of the analysis solution at the end of the run is less than 5 percent of the total initial energy at the beginning of the run.
3. The hourglass energy of the analysis solution at the end of the run is less than 10 percent of the total internal energy at the end of the run.
4. The hourglass energy of the part/material with the highest amount of hourglass energy at the end of the run is less than 10 percent of that part/material's total internal energy at the end of the run.
5. The mass added to the total model is less than 5 percent of the total model mass at the beginning of the run.
6. The part/material with the most mass added had less than 10 percent of its initial mass added.
7. The moving parts/materials in the model have less than 5 percent of mass added to the initial moving mass of the model.
8. There are no shooting nodes in the solution.
9. There are no solid elements with negative volumes.

If all the analysis solution verification criteria are scored as passing, the analysis solution can be verified or validated against the known solution. If any criterion in Table E-1 fails, the analysis solution cannot be verified or validated against the known solution. If there are exceptions that the analyst thinks are relevant, these should be footnoted in the table and explained below it.

The Analysis Solution (check one) [ ] passes / [ ] does NOT pass all the criteria in Table E-1, [ ] with / [ ] without exceptions as noted.
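To make the bookkeeping concrete, the sketch below scores the energy- and mass-based criteria of Table E-1 once the analyst has extracted the relevant totals from the solver output (for LS-DYNA, typically the glstat and matsum summaries). It is a minimal illustration, not part of RSVVP, and every parameter name is a hypothetical label for a manually extracted quantity.

    # Minimal sketch of the Table E-1 checks. All inputs are assumed to be
    # extracted beforehand from the solver's energy and mass summaries
    # (e.g., LS-DYNA glstat/matsum); the field names are hypothetical.
    def table_e1_checks(
        total_energy_start, total_energy_end,
        hourglass_energy_end, internal_energy_end,
        worst_part_hg_energy_end, worst_part_internal_energy_end,
        mass_start, mass_added,
        worst_part_mass_start, worst_part_mass_added,
        moving_mass_start, moving_mass_added,
        has_shooting_nodes, has_negative_volumes,
    ):
        return {
            "total energy varies <= 10%":
                abs(total_energy_end - total_energy_start) <= 0.10 * total_energy_start,
            "hourglass <= 5% of initial total energy":
                hourglass_energy_end <= 0.05 * total_energy_start,
            "hourglass <= 10% of final internal energy":
                hourglass_energy_end <= 0.10 * internal_energy_end,
            "worst part hourglass <= 10% of its internal energy":
                worst_part_hg_energy_end <= 0.10 * worst_part_internal_energy_end,
            "added mass <= 5% of model mass":
                mass_added <= 0.05 * mass_start,
            "worst part added mass <= 10% of its mass":
                worst_part_mass_added <= 0.10 * worst_part_mass_start,
            "moving added mass <= 5% of moving mass":
                moving_mass_added <= 0.05 * moving_mass_start,
            "no shooting nodes": not has_shooting_nodes,
            "no negative-volume solids": not has_negative_volumes,
        }

A run satisfies Part II only if every entry in the returned dictionary is True; any exception the analyst wants to claim should still be footnoted in Table E-1 itself.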

PART III: TIME HISTORY EVALUATION TABLE

Using the RSVVP computer program ('Single channel' option), compute the Sprague & Geers MPC metrics and the ANOVA metrics using time history data from the known and analysis solutions for a time period starting at the beginning of contact and ending at the loss of contact. Both the Sprague & Geers and ANOVA metrics must be calculated in the original units in which the data were collected (e.g., if accelerations were measured in the experiment with accelerometers, the comparison should be between accelerations; if rate gyros were used, the comparison should be between rotation rates). If all six data channels are not available for both the known and analysis solutions, enter "N/A" in the column corresponding to the missing data.

Enter the values obtained from the RSVVP program in Table E-2 and indicate whether the comparison was acceptable by entering "yes" or "no" in the "Agree?" column. Attach a graph of each channel for which the metrics have been compared at the end of the report. Enter the filter, synchronization method, and shift/drift options used in RSVVP to perform the comparison so that it is clear to the reviewer which options were used. Normally, SAE J211 filter class 180 is used to compare vehicle kinematics in full-scale crash tests. Either synchronization option in RSVVP is acceptable, and both should result in a similar start point. The shift and drift options should generally be used only for the experimental curve, since shift and drift are characteristics of sensors. For example, the zero point of an accelerometer sometimes "drifts" as the accelerometer sits in the open environment of the crash test pad, whereas there is no sensor to "drift" or "shift" in a numerical solution.

In order for the analysis solution to be considered in agreement with the known solution (i.e., verified or validated), all the criteria scored in Table E-2 must pass. If not all the channels in Table E-2 pass, fill out Table E-3, the multi-channel weighted procedure.

If one or more channels do not satisfy the criteria in Table E-2, the multi-channel weighting option may be used. Using the RSVVP computer program ('Multiple channel' option), compute the Sprague & Geers MPC metrics and ANOVA metrics using all the time history data from the known and analysis solutions for a time period starting at the beginning of contact and ending at the loss of contact. If all six data channels are not available for both the known and analysis solutions, enter "N/A" in the column corresponding to the missing data.

For some types of roadside hardware impacts, some channels are not as important as others. An example might be a breakaway sign support test, where the lateral (i.e., Y) and vertical (i.e., Z) accelerations are insignificant to the dynamics of the crash event. The weighting procedure provides a way to weight the most important channels more highly than the less important channels. The procedure is based on the area under the curve; therefore, the weighting scheme will weight channels with large areas more highly than those with smaller areas. In general, using the "Area (II)" method is acceptable, although the "Inertial" method may be used if the complete inertial properties of the vehicle are available. Enter the values obtained from the RSVVP program in Table E-3 and indicate whether the comparison was acceptable by entering "yes" or "no" in the "Agree?" column.
In order for the analysis solution to be considered in agreement with the known solution (i.e., verified or validated), all the criteria scored in Table E-3 must pass.
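For reference, the Sprague & Geers and ANOVA quantities entered in Tables E-2 and E-3 follow standard published definitions. The sketch below illustrates those definitions in Python; it is not the RSVVP implementation, and it assumes the known (true) and analysis (test) curves have already been filtered, synchronized, and resampled onto a common uniform time base.

    import numpy as np

    def sprague_geers(true_curve, test_curve):
        # Sprague & Geers magnitude (M) and phase (P) metrics, in percent.
        # Time integrals are approximated by sums over uniform samples.
        mm = np.sum(true_curve ** 2)
        cc = np.sum(test_curve ** 2)
        mc = np.sum(true_curve * test_curve)
        M = (np.sqrt(cc / mm) - 1.0) * 100.0
        P = (np.arccos(mc / np.sqrt(mm * cc)) / np.pi) * 100.0
        return M, P

    def anova(true_curve, test_curve):
        # Residuals normalized by the peak of the known curve, since the
        # Table E-2 criteria are expressed as fractions of the peak.
        peak = np.max(np.abs(true_curve))
        residuals = (true_curve - test_curve) / peak
        return residuals.mean(), residuals.std()

    def channel_passes(true_curve, test_curve):
        # Acceptance per Table E-2: |M| <= 40, |P| <= 40, mean residual
        # <= 5 percent of peak, residual standard deviation <= 35 percent.
        M, P = sprague_geers(true_curve, test_curve)
        e_bar, sigma = anova(true_curve, test_curve)
        return abs(M) <= 40 and abs(P) <= 40 and abs(e_bar) <= 0.05 and sigma <= 0.35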

Table E-2. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (single-channel option).

Evaluation criteria time interval: [_________]

For each channel, record the RSVVP curve preprocessing options used: Filter Option; Sync. Option; Shift (True Curve / Test Curve); Drift (True Curve / Test Curve).

Sprague & Geers Metrics. List all the data channels being compared. Calculate the M and P metrics using RSVVP and enter the results. Values less than or equal to 40 are acceptable.
Channel (M / P / Pass?):
  X acceleration: ____ / ____ / ____
  Y acceleration: ____ / ____ / ____
  Z acceleration: ____ / ____ / ____
  Roll rate: ____ / ____ / ____
  Pitch rate: ____ / ____ / ____
  Yaw rate: ____ / ____ / ____

ANOVA Metrics. List all the data channels being compared. Calculate the ANOVA metrics using RSVVP and enter the results. Both of the following criteria must be met:
• The mean residual error must be less than 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
Channel (Mean Residual / Standard Deviation of Residuals / Pass?):
  X acceleration/peak: ____ / ____ / ____
  Y acceleration/peak: ____ / ____ / ____
  Z acceleration/peak: ____ / ____ / ____
  Roll rate: ____ / ____ / ____
  Pitch rate: ____ / ____ / ____
  Yaw rate: ____ / ____ / ____

The Analysis Solution (check one) [ ] passes / [ ] does NOT pass all the criteria in Table E-2 (single-channel time history comparison). If the Analysis Solution does NOT pass, perform the analysis in Table E-3 (multi-channel time history comparison).

Table E-3. Roadside Safety Validation Metrics Rating Table – Time History Comparisons (multi-channel option).

Evaluation criteria time interval: [_____________]

Channels (select which were used): [ ] X acceleration  [ ] Y acceleration  [ ] Z acceleration  [ ] Roll rate  [ ] Pitch rate  [ ] Yaw rate

Multi-Channel Weights ([ ] Area II method / [ ] Inertial method):
  X channel: ____
  Y channel: ____
  Z channel: ____
  Yaw channel: ____
  Roll channel: ____
  Pitch channel: ____

Sprague & Geers Metrics. Values less than or equal to 40 are acceptable.
  M: ____   P: ____   Pass? ____

ANOVA Metrics. Both of the following criteria must be met:
• The mean residual error must be less than 5 percent of the peak acceleration (ē ≤ 0.05·a_Peak), and
• The standard deviation of the residuals must be less than 35 percent of the peak acceleration (σ ≤ 0.35·a_Peak).
  Mean Residual: ____   Standard Deviation of Residuals: ____   Pass? ____

The Analysis Solution (check one) [ ] passes / [ ] does NOT pass all the criteria in Table E-3.
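The report does not reproduce the Area (II) weight computation here, so the following is a plausible sketch only, under the stated assumption that each channel's weight is proportional to the area under the magnitude of its known curve and that all channels share the same uniform sampling. It reuses sprague_geers() and anova() from the previous sketch.

    import numpy as np

    def area_weights(true_channels):
        # true_channels: dict mapping channel name -> known-solution samples.
        # Assumed interpretation of the area-based ("Area II") weighting:
        # weights proportional to the area under |curve|, summing to one.
        areas = {name: np.trapz(np.abs(curve)) for name, curve in true_channels.items()}
        total = sum(areas.values())
        return {name: a / total for name, a in areas.items()}

    def weighted_metrics(true_channels, test_channels):
        # Weighted averages of the single-channel metrics for Table E-3.
        w = area_weights(true_channels)
        M = P = e_bar = sigma = 0.0
        for name in true_channels:
            m, p = sprague_geers(true_channels[name], test_channels[name])
            e, s = anova(true_channels[name], test_channels[name])
            M += w[name] * m
            P += w[name] * p
            e_bar += w[name] * e
            sigma += w[name] * s
        return M, P, e_bar, sigma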

PART IV: PHENOMENA IMPORTANCE RANKING TABLE

Table E-4 is similar to the evaluation tables in Report 350 and MASH. For the Report 350 or MASH test number identified in Part I (e.g., test 3-10, 5-12, etc.), circle all the evaluation criteria in Table E-4 applicable to that test. The tests that apply to each criterion are listed in the far right column without the test level designator. For example, if a Report 350 test 3-11 is being compared (i.e., a pickup truck striking a barrier at 25 degrees and 100 km/hr), circle all the criteria in the second column where the number "11" appears in the far right column. Some of the Report 350 evaluation criteria have been removed (i.e., J and K) since they are not generally useful in assessing the comparison between the known and analysis solutions.

Table E-4. Evaluation Criteria Test Applicability Table.

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation, although controlled lateral deflection of the test article is acceptable. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38)
B. The test article should readily activate in a predictable manner by breaking away, fracturing, or yielding. (Applicable tests: 60, 61, 70, 71, 80, 81)
C. Acceptable test article performance may be by redirection, controlled penetration, or controlled stopping of the vehicle. (Applicable tests: 30, 31, 32, 33, 34, 39, 40, 41, 42, 43, 44, 50, 51, 52, 53)

Occupant Risk
D. Detached elements, fragments, or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians, or personnel in a work zone. (Applicable tests: all)
E. Detached elements, fragments, or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. (Answer Yes or No) (Applicable tests: 70, 71)
F. The vehicle should remain upright during and after the collision, although moderate roll, pitching, and yawing are acceptable. (Applicable tests: all except those listed in criterion G)
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. (Applicable tests: 12, 22; for test level 1: 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)
H. Occupant impact velocities should satisfy the following limits:
   Longitudinal and lateral: preferred 9 m/s, maximum 12 m/s. (Applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 80, 81)
   Longitudinal: preferred 3 m/s, maximum 5 m/s. (Applicable tests: 60, 61, 70, 71)
I. Occupant ridedown accelerations should satisfy the following limits:
   Longitudinal and lateral: preferred 15 g's, maximum 20 g's. (Applicable tests: 10, 20, 30, 31, 32, 33, 34, 36, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 70, 71, 80, 81)

Vehicle Trajectory
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec, and the occupant ride-down acceleration in the longitudinal direction should not exceed 20 g's. (Applicable tests: 11, 21, 35, 37, 38, 39)
M. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device. (Applicable tests: 10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39)
N. Vehicle trajectory behind the test article is acceptable. (Applicable tests: 30, 31, 32, 33, 34, 39, 42, 43, 44, 60, 61, 70, 71, 80, 81)
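Selecting the criteria to circle is a set-membership lookup on the far right column. The sketch below encodes a hypothetical subset of Table E-4's applicability lists; the full table extends the dictionary in the same pattern.

    # Hypothetical subset of Table E-4's applicability lists.
    APPLICABLE_TESTS = {
        "A": {10, 11, 12, 20, 21, 22, 35, 36, 37, 38},
        "B": {60, 61, 70, 71, 80, 81},
        "L": {11, 21, 35, 37, 38, 39},
        "M": {10, 11, 12, 20, 21, 22, 35, 36, 37, 38, 39},
    }

    def circled_criteria(test_number):
        # Strip the test level designator: "3-11" -> 11.
        n = int(test_number.split("-")[1])
        return sorted(k for k, tests in APPLICABLE_TESTS.items() if n in tests)

    print(circled_criteria("3-11"))  # ['A', 'L', 'M'] for this subset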

Complete Table E-5 according to the results of the known solution (e.g., crash test) and the numerical solution (e.g., simulation). Consistent with Report 350 and MASH, Table E-5 has three parts: the structural adequacy phenomena listed in Table E-5(a), the occupant risk phenomena listed in Table E-5(b), and the vehicle trajectory criteria listed in Table E-5(c).

If the result of the analysis solution agrees with the known solution, mark the "Agree?" column "yes." For example, if the vehicle in both the known and analysis solutions rolls over and, therefore, fails criterion F1, the known and analysis columns for criterion F1 would both be evaluated as "no." Even though both failed the criterion, they agree with each other, so the "Agree?" column is marked "yes." Any criterion that is not applicable to the test being evaluated (i.e., not circled in Table E-4) should be indicated by entering "NA" in the "Agree?" column for that row.

Many of the Report 350 evaluation criteria have been subdivided into more specific phenomena. For example, criterion A is divided into eight sub-criteria, A1 through A8, that provide more specific and quantifiable phenomena for evaluation. Some of the entries are simple yes or no questions, while others request numerical values. For the numerical phenomena, the analyst should enter the values for the known and analysis results and then calculate the relative difference. The relative difference is always the absolute value of the difference between the known and analysis solutions divided by the known solution. Enter the value in the "Difference" column. If the relative difference is less than 20 percent, enter "yes" in the "Agree?" column.

Sometimes, when the values are very small, the relative difference may be large while the absolute difference is very small. For example, the longitudinal occupant ride-down acceleration (i.e., criterion L2) might be 3 g's in a test and 4 g's in the corresponding analysis. The relative difference is 33 percent, but the absolute difference is only 1 g, and both results are well below the 20-g limit. Clearly, the analysis solution in this case is a good match to the experiment, and the relative difference is large only because the values are small. The absolute difference, therefore, should also be entered in the "Difference" column of Table E-5. The experimental and analysis results can be considered to agree as long as either the relative difference or the absolute difference is less than the acceptance limit listed in the criterion. Generally, relative differences of less than 20 percent are acceptable, and the absolute difference limits were generally chosen to represent 20 percent of the acceptance limit in Report 350 or MASH. For example, Report 350 limits occupant ride-down accelerations to less than 20 g's, so 20 percent of 20 g's is 4 g's. As shown for criterion L2 in Table E-5, the relative acceptance limit is 20 percent and the absolute acceptance limit is 4 g's.

If a numerical model was not created to represent a phenomenon, a value of "NM" (i.e., not modeled) should be entered in the appropriate column of Table E-5. If the known solution for that phenomenon is "no," then an "NM" value in the analysis result column can be considered to agree. For example, if the material model for the rail element did not include the possibility of failure, "NM" should be entered for phenomenon A5 in Table E-5.
If the known solution does not indicate rail rupture or failure (i.e., phenomenon A5 = "no"), then the known and analysis solutions agree, and a "yes" can be entered in the "Agree?" column. On the other hand, if the known solution shows that a rail rupture did occur, resulting in a phenomenon A5 entry of "yes" for the known solution, the known and analysis solutions do not agree, and "no" should be entered in the "Agree?" column. Analysts should seriously consider refining their model to incorporate any phenomenon that appears in the known solution and is shown in Table E-5.

All the criteria identified in Table E-4 are expected to agree, but if one does not and, in the opinion of the analyst, is not considered important to the overall evaluation for this particular comparison, a footnote should be provided with a justification for why that criterion can be ignored for this comparison.

Table E-5(a). Roadside Safety Phenomena Importance Ranking Table (Structural Adequacy).
Columns: Evaluation Criteria / Known Result / Analysis Result / Difference (Relative/Absolute) / Agree?

Structural Adequacy (criterion A):
A1. Test article should contain and redirect the vehicle; the vehicle should not penetrate, under-ride, or override the installation, although controlled lateral deflection of the test article is acceptable. (Answer Yes or No)
A2. Maximum dynamic deflection: relative difference is less than 20 percent, or absolute difference is less than 0.15 m.
A3. Length of vehicle-barrier contact: relative difference is less than 20 percent, or absolute difference is less than 2 m.
A4. Number of broken or significantly bent posts is less than 20 percent.
A5. Did the rail element rupture or tear? (Answer Yes or No)
A6. Were there failures of connector elements? (Answer Yes or No)
A7. Was there significant snagging between the vehicle wheels and barrier elements? (Answer Yes or No)
A8. Was there significant snagging between vehicle body components and barrier elements? (Answer Yes or No)
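The agreement rule for the numerical phenomena (relative difference under 20 percent, or absolute difference under the criterion's limit) can be stated in a few lines; the sketch below reproduces the criterion L2 example from the text.

    def results_agree(known, analysis, rel_limit=0.20, abs_limit=None):
        # Agree if the relative difference is within rel_limit, or the
        # absolute difference is within the criterion's absolute limit.
        diff = abs(known - analysis)
        if diff <= rel_limit * abs(known):
            return True
        return abs_limit is not None and diff <= abs_limit

    # Criterion L2 example from the text: 3 g's in the test, 4 g's in the
    # analysis. The relative difference (33 percent) fails, but the 1-g
    # absolute difference is below the 4-g limit, so the results agree.
    print(results_agree(3.0, 4.0, abs_limit=4.0))  # True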

Table E-5(b). Roadside Safety Phenomena Importance Ranking Table (Occupant Risk).
Columns: Evaluation Criteria / Known Result / Analysis Result / Difference (Relative/Absolute) / Agree?

Occupant Risk:
D. Detached elements, fragments, or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians, or personnel in a work zone. (Answer Yes or No)
F1. The vehicle should remain upright during and after the collision, although moderate roll, pitching, and yawing are acceptable. (Answer Yes or No)
F2. Maximum roll of the vehicle: relative difference is less than 20 percent, or absolute difference is less than 5 degrees.
F3. Maximum pitch of the vehicle: relative difference is less than 20 percent, or absolute difference is less than 5 degrees.
F4. Maximum yaw of the vehicle: relative difference is less than 20 percent, or absolute difference is less than 5 degrees.
L1. Occupant impact velocities: relative difference is less than 20 percent, or absolute difference is less than 2 m/s.
   • Longitudinal OIV (m/s)
   • Lateral OIV (m/s)
   • THIV (m/s)
L2. Occupant accelerations: relative difference is less than 20 percent, or absolute difference is less than 4 g's.
   • Longitudinal ORA
   • Lateral ORA
   • PHD
   • ASI

Table E-5(c). Roadside Safety Phenomena Importance Ranking Table (Vehicle Trajectory).
Columns: Evaluation Criteria / Known Result / Analysis Result / Difference (Relative/Absolute) / Agree?

Vehicle Trajectory:
M1. The exit angle from the test article preferably should be less than 60 percent of the test impact angle, measured at the time of vehicle loss of contact with the test device.
M2. Exit angle at loss of contact: relative difference is less than 20 percent, or absolute difference is less than 5 degrees.
M3. Exit velocity at loss of contact: relative difference is less than 20 percent, or absolute difference is less than 5 m/s.
M4. One or more vehicle tires failed or de-beaded during the collision event. (Answer Yes or No)

The Analysis Solution (check one) [ ] passes / [ ] does NOT pass all the criteria in Tables E-5(a) through E-5(c), [ ] with exceptions as noted / [ ] without exceptions.
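Finally, the composite verdict on the report form's first page is a conjunction of the three parts, with Table E-3 serving as the fallback for Table E-2; a one-function sketch:

    def composite_verdict(part1_all_pass, table_e2_all_pass, table_e3_pass, part3_all_pass):
        # Part II is satisfied by the single-channel comparison (Table E-2)
        # or, failing that, by the weighted multi-channel comparison (Table E-3).
        part2 = table_e2_all_pass or table_e3_pass
        return part1_all_pass and part2 and part3_all_pass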
