CHAPTER 2 LITERATURE REVIEW

DEFINITIONS

Introduction

The purpose of this research was to develop verification and validation procedures for using computer simulations in developing incremental improvements to roadside safety hardware. Defining what is meant by the terms “verification” and “validation” was an important first step. The issue of verifying and validating mathematical and computational models has been of interest in many disciplines in recent years; roadside safety simulations are just one of many applications in the general area of computational mechanics. The Department of Defense (DoD) and the American Institute of Aeronautics and Astronautics (AIAA) define verification and validation as follows:(15, 16)

Verification: The process of determining that a model implementation represents the developer’s conceptual description of the model and the solution to the model.

Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended use.

In 2006 the American Society of Mechanical Engineers (ASME) published its “Guide for Verification and Validation in Computational Solid Mechanics,” ASME V&V 10-2006.(17) This guide is the result of a 10-year effort by the PTC-60 committee to standardize definitions and establish the basic process for verification and validation activities. The ASME guide governs the terminology and processes used throughout the remainder of this report. The guide does not provide specific verification and validation procedures but rather establishes a philosophical backdrop that can be used to create appropriate procedures in particular domains such as roadside safety. Since the needs of each technical area in computational mechanics are different, it is not possible to have a single verification and validation procedure, but the ASME guide provides an essential basis for creating domain-specific procedures. ASME V&V 10-2006 defines verification, validation and calibration as follows:

Verification: The process of determining that a computational model accurately represents the underlying mathematical model and its solution.

Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

Calibration: The process of adjusting physical modeling parameters in the computational model to improve agreement with experimental data.

The ASME definitions are essentially the same as the AIAA/DoD definitions. Since the ASME definitions and philosophy represent a broad consensus of engineers and analysts, it makes sense for the roadside safety community to build upon these existing definitions. Figure 1 shows a schematic representation of the verification process from the AIAA Guide. Verification, according to this definition, involves comparing the discrete numerical solution (e.g., typically but not exclusively LS-DYNA in roadside safety) to known mathematical solutions (i.e., differential equation solutions). Verification does not address the relationship between the computer simulation and the real world or experiments. Validation, on the other hand, is concerned exclusively with comparing the discrete numerical solutions to real-world physical experiments. The following sections discuss the implications of these definitions with respect to roadside safety simulations.

Figure 1. Schematic representation of the verification process.(16)

Verification

Computational modeling is a way of representing physical phenomena using mathematical techniques. Verification is concerned with how well the discrete numerical approximation agrees with the known mathematical solution. For example, the following equation represents a mathematical theory about how waves propagate through solids, where u is the axial displacement, E is the elastic modulus and ρ is the density:(18, 19)

    ρ ∂²u/∂t² = E ∂²u/∂x²

The mathematical theory may or may not be correct with respect to its ability to predict physical experiments, but its solution is known and mathematically unambiguous. A researcher or code developer might develop a discrete numerical approximation (i.e., a computer simulation) of a wave propagating along a long rod, for example. Verification would be the process of comparing the solutions of the computational numerical experiment with the solution of the partial differential equation shown above. If the numerical computations replicate the known analytical solution, the computational model is considered “verified.” In verification, the issue is not comparison to physical experiments but rather comparison to the underlying physics. Verification is about comparing numerical approximations to known solutions such as the solution of a differential equation. As such, verification could be defined as:

The process of ensuring that the computational model provides results consistent with the developer’s conceptual description of the model and the underlying assumptions that form the basis of the mathematical model (i.e., the model responds as the developer intends) and that computed results adhere to basic physical laws, such as conservation of energy and momentum.

With respect to this research project, the definition of verification presents some problems. Aside from known solutions from basic mechanics (e.g., the impact of long rods or impulsively loaded beams), there are no known analytical solutions in the realm of roadside safety. Stated another way, in roadside safety we do not have simple benchmark problems with known differential equation solutions against which we can test analysis codes. While some basic cases of simple impacts could be verified, a researcher developing a model of a particular roadside hardware system or component generally has no “known” solution to work from. In fact, the lack of a “known” solution is the whole point of using a computer approximation.

Code developers like LSTC perform software verification to ensure that the computer programs produce solutions consistent with the algorithms used to develop them. ASME V&V 10-2006 refers to this as code verification. Code developers and users also often perform simple verification experiments to demonstrate that a particular code such as LS-DYNA produces solutions to simple mechanics problems with known solutions. One graduate course at Worcester Polytechnic Institute, for example, is largely concerned with students verifying for themselves that LS-DYNA produces the correct results for a number of simple mechanics problems:

• One-dimensional wave propagation problems are verified by comparing to particular solutions of the wave equation (see the sketch after this list).
• Elastic-plastic material behavior is verified by performing simulations of a rigid hammer axially striking a solid rod and comparing the results to Taylor’s momentum approach.
• Wave propagation and failure properties are verified by performing simulations of the Landon-Quinney experiment with concrete rods and verifying the results with one-dimensional wave theory predictions.
• Simulations of impulsively loaded beams in three- and four-point bending are verified using Jones’s upper and lower bound collapse theorems.
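To make the first item above concrete, the following minimal sketch performs that kind of verification exercise: it solves the wave equation given earlier with a central-difference scheme and compares the result against the analytical d’Alembert solution for a pulse traveling along a long rod. The material constants, pulse shape and grid are illustrative assumptions, not values taken from the report or the course.

    import numpy as np

    # Illustrative material values (steel-like, assumed)
    E, rho = 210e9, 7850.0            # elastic modulus (Pa), density (kg/m^3)
    c = np.sqrt(E / rho)              # wave speed implied by the PDE above

    # Discretize a 10 m rod
    nx = 2001
    x = np.linspace(0.0, 10.0, nx)
    dx = x[1] - x[0]
    dt = 0.9 * dx / c                 # Courant number 0.9 for stability

    f = lambda s: np.exp(-((s - 5.0) / 0.2) ** 2)   # initial displacement pulse

    # Central-difference scheme for rho*u_tt = E*u_xx
    u_prev = f(x)
    u = 0.5 * (f(x - c * dt) + f(x + c * dt))       # exact first step (zero initial velocity)
    C2 = (c * dt / dx) ** 2
    nsteps = 400
    for _ in range(nsteps - 1):
        u_next = np.empty_like(u)
        u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + C2*(u[2:] - 2*u[1:-1] + u[:-2])
        u_next[0] = u_next[-1] = 0.0                # ends remain far from the pulse
        u_prev, u = u, u_next

    # d'Alembert solution at the same time: u = [f(x - ct) + f(x + ct)] / 2
    t = nsteps * dt
    exact = 0.5 * (f(x - c * t) + f(x + c * t))
    err = np.max(np.abs(u - exact)) / np.max(np.abs(exact))
    print(f"relative peak error after {nsteps} steps: {err:.2e}")

A small reported error indicates that the numerical scheme reproduces the known mathematical solution, which is exactly the sense of “verified” used here.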

Comparisons between simulations of these types of events and the known analytical solutions verify that LS-DYNA produces results consistent with the physics of the known mathematical models. A general-purpose, widely used computer program like LS-DYNA can, therefore, be considered “verified” for general nonlinear dynamic contact/impact problems, and this level of verification is generally the responsibility of the developer. In addition, since the source code is not generally available, it is nearly impossible for someone other than the code developer to verify the code. This report will not address the issue of code verification.

In the context of roadside safety research, what is meant by the term “verification”? There are two approaches: (i) roadside safety benchmark cases and (ii) model assurance verification.

Benchmark verification is based on “calculation verification,” which is defined as “the process of determining the solution accuracy of a particular calculation.”(17) There is a need for some standardized roadside safety benchmark cases that can be used to verify new versions of code (e.g., are the results the same after updating from LS-DYNA 960 to 970?) and to verify solutions between platforms (e.g., are the results the same on a 10-cpu Linux cluster as on a dual-cpu workstation?). This type of activity would be defined as calculation verification since prior solutions on previous hardware/software platforms are available and the results of new calculations on new computational platforms can be verified against them. The issue when running a standardized benchmark is not the correctness of the solution with respect to physical crash tests, but whether the new numerical solution arrangement (i.e., a new version of the code, a different computational platform, or both) produces the same results as the previous arrangement using the same unmodified model. Most analysts do this type of verification informally by re-running an old model that they have confidence in on a new hardware/software platform. The advantage of developing standardized benchmarks is that the roadside safety computational mechanics community would be able to share information more effectively and to develop more relevant benchmark cases.

Model assurance verification involves developing procedures and metrics for a particular model of a vehicle, barrier, occupant or other component of a roadside hardware simulation to ensure that results adhere to basic physical laws, such as conservation of energy and momentum, and to maximize the likelihood that there will be no numerical or computational problems in the model. Model assurance verification seeks to confirm that models obey basic physical laws. The total energy and momentum balances should be checked to ensure that they do not change beyond reasonable amounts (e.g., five percent of the initial kinetic energy and initial momentum). Likewise, computational items like hourglass energy and added mass should be checked to ensure that they stay below some agreed-upon value. Accelerations of the center of gravity are often saved for use in post-processing, and steps must be taken to verify that the data have been collected at an appropriate rate to avoid aliasing. A simple check for aliasing of acceleration data from a finite element analysis is to integrate the acceleration-time history (e.g., collected at a nodal point) and check that the result is consistent with the velocity-time history collected from the same location (a good discussion of aliasing can be found at http://www.daqarta.com/dw_0haa.htm).
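The following is a minimal sketch of how these model assurance checks might be collected in a post-processing script. The array layout, the hourglass and added-mass limits, and the aliasing tolerance are assumptions chosen for illustration; the report itself only suggests the five percent guideline for the energy and momentum balances.

    import numpy as np

    def model_assurance_checks(t, accel, vel, total_energy, hourglass_energy,
                               momentum, added_mass, model_mass, tol=0.05):
        """Model assurance checks on time histories extracted from a simulation
        (e.g., LS-DYNA glstat and nodout output). t: sample times (s); accel and
        vel: acceleration and velocity histories at the same node; the remaining
        arguments are global quantities."""
        checks = {}
        # Total energy should not drift beyond ~5% of its initial value.
        checks["energy balance"] = (
            np.max(np.abs(total_energy - total_energy[0])) <= tol * total_energy[0])
        # Momentum should likewise stay within ~5% of its initial value.
        checks["momentum balance"] = (
            np.max(np.abs(momentum - momentum[0])) <= tol * np.abs(momentum[0]))
        # Hourglass energy should stay below an agreed-upon fraction of the
        # initial energy; 5% is used here as a placeholder.
        checks["hourglass energy"] = np.max(hourglass_energy) <= tol * total_energy[0]
        # Mass added by mass scaling should stay small relative to the model mass.
        checks["added mass"] = added_mass <= tol * model_mass
        # Aliasing check: integrating the sampled acceleration should reproduce
        # the velocity history collected at the same location.
        v_int = vel[0] + np.concatenate(
            ([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * np.diff(t))))
        checks["aliasing"] = np.max(np.abs(v_int - vel)) <= tol * np.max(np.abs(vel))
        return checks

A model would be expected to pass all of these checks before being used for validation or prediction.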

These types of verification exercises ensure that the model results conform to the basic laws of physics. If any of these checks fails (i.e., the total energy grows beyond the allowed value or the hourglass energy becomes large), it is an indication that there is some type of computational problem that should be identified and corrected before the model is used for either validation or prediction.

Validation

Validation is conceptually much easier to define since it involves any comparison between a numerical simulation and a physical experiment. Validation procedures can be used to compare numerical results to component-level tests, sub-assembly tests, material characterization experiments or full-scale crash tests. The ASME V&V 10-2006 definition of validation, illustrated in Figure 2, is “the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.”(17)

Figure 2. Schematic representation of the validation process.(16)

A finite element simulation whose intended use is, for example, to design the crashworthiness of a vehicle may differ from a model intended to examine the NCHRP Report 350 evaluation parameters. In the former, the deformations of parts of the vehicle would be highly relevant even if they do not affect the kinematics of the vehicle, whereas in the latter, such deformations would probably have little effect on parameters like occupant risk or ride-down accelerations. Similarly, the choice of validation metrics is tied to the intended use. If the purpose of a finite element model of a cantilever beam is to predict the tip deflection of the beam under some loading, the metric that should be used in validating the model is the tip deflection. The validation metrics are indicative of the intended use of the model and the expertise needed to evaluate the model.

In general, the purpose of performing a finite element simulation in roadside safety is to assess the safety performance of a roadside hardware device by simulating the equivalent of a full-scale crash test. Full-scale crash tests are evaluated using the criteria provided in NCHRP Report 350 in the US or EN 1317 in Europe, so ultimately a simulation should be judged successful if it results in good predictions of the Report 350/EN 1317 evaluation criteria (e.g., occupant ride-down accelerations, occupant impact velocities, THIV, and ASI).

The objective of this project is “to develop guidelines for verification and validation of detailed finite element analysis for crash simulations of roadside safety features. The focus of these guidelines will be on establishing accuracy, credibility, and confidence in the results of crash test simulations intended (i) to support policy decisions and (ii) to be used for approval of design modifications to roadside safety devices that were originally approved with full-scale crash testing.” This statement clearly indicates that the computational models being validated are intended to support policy decisions and the approval of design modifications, so the crash test evaluation metrics will of necessity be used in the validation process. We propose, therefore, the following purpose for computer simulations in roadside safety:

The purpose of performing roadside safety computer simulations is to assess the response of the vehicle, barrier and occupant in a collision such that (i) the NCHRP Report 350 evaluation parameters can be predicted and (ii) the structural performance of the barrier can be assessed.

Combining the ASME V&V 10-2006 definition of validation with the purpose of performing roadside safety simulations yields a definition of validation as it relates to simulations of roadside devices. Validation in the area of roadside safety is:

The process of determining the degree to which a roadside safety computer model is an accurate representation of real-world crash tests from the perspective of accurately replicating (i) the NCHRP Report 350 or EN 1317 crash test evaluation parameters, (ii) the structural performance of the barrier and (iii) the response of the vehicle.

Calibration

Calibration is often confused with verification and validation. Verification and validation involve comparisons with physical experiments or solutions that are independent of the model development, whereas calibration is the use of physical experiments, the literature and analytical solutions to estimate the values of the parameters needed to develop the model. For example, if a material model is needed in a particular finite element simulation, the analyst may perform physical tension tests in the laboratory to obtain the stress-strain response of the material. These physical test results can then be used to estimate the values of the parameters needed for the computational material model. Such a material model has been “calibrated” by the physical tests, and so the same physical tests cannot be used to “validate” the material model. Physical experiments, therefore, can be used either to estimate the values of the parameters and thereby calibrate a model, or to validate a model, but not both.

VALIDATION PROCESS

As defined earlier, validation in the context of roadside safety computer simulations is:

The process of determining the degree to which a roadside safety computer model is an accurate representation of real-world crash tests from the perspective of accurately replicating (i) the Report 350 or EN 1317 crash test evaluation parameters, (ii) the structural performance of the barrier and (iii) the response of the vehicle.

Validation always involves comparing a computer simulation or numerical experiment to a physical experiment of some type. The question at the root of the validation exercise is whether the simulation replicates the physical experiment and whether it can be used to explore and predict the response of new or modified roadside hardware in the real world. Validation activities can be performed not only for full-scale crash tests but also for material models, sub-assembly models and component models. Ideally, each portion of a large, complex model should be validated separately if it is possible to perform a meaningful physical test. The following sections discuss various aspects of validation, including validation metrics and procedures; validation of materials, components and subassemblies; and the repeatability and validation of full-scale tests.

Significant activity in recent years has aimed at formalizing verification and validation processes in computational solid mechanics. As mentioned earlier, the AIAA guide was developed almost a decade ago and the ASME guide was published just last year.(15, 16) This project is concerned mainly with developing procedures that can be used as part of the evaluation and acceptance process. Designers have been using computer simulations to develop roadside hardware for a decade now, but acceptance decisions are still generally made based on crash tests. A computer simulation used in the design process is useful since it helps to minimize the number of development crash tests needed and provides insight into how a device functions. Using computer simulations in the acceptance process, however, requires more formality and uniformity since the person evaluating the model did not participate in the model or hardware development.

The full-scale crash test procedures illustrate the process. Crash test procedures in roadside safety have existed for nearly 40 years, and the roadside safety community has learned to trust the results of crash tests if they are performed according to the accepted guidelines of NCHRP Report 350.(20) A similar procedure is necessary for the use of computer simulations: the roadside safety community needs to develop procedures for validating and verifying computational results so that they can be trusted by those making acceptance decisions. Report 350 enables a hardware developer to state that the testing was performed in accordance with the recommendations of NCHRP Report 350 and enables reviewers to determine that the proper crash performance information is being submitted to those making acceptance decisions. These verification and validation procedures can likewise become an analogous process for computational results.

The basic approach of developing a quality control procedure for gaining confidence in the validity of results has been used in other areas for both crash/dynamic testing and computational modeling. For example, FAA Advisory Circular 20-146 presents a procedure whereby computer simulation results can be used to qualify aircraft and rotorcraft seats.(13) The basic process recommended by the FAA circular is:

• Develop a model of an already tested and approved seat configuration for which dynamic test results are available. This is referred to as the baseline seat model.
• Validate the results of the computer simulation of the baseline seat model against the already-existing dynamic tests.
• Modify the baseline seat model to replicate any desired changes in the structure or positioning of the seat and its occupant, and perform simulations of the new configuration.
• Evaluate the results of the new seat configuration using the same requirements as for the dynamic tests.

If the results of the simulation indicate passing performance using the seat test metrics, the seat can be approved for use. Circular 20-146 allows the use of LS-DYNA, MADYMO and MSC/DYTRAN as acceptable analysis codes. It states that models must be validated against dynamic tests and that the tests used for the validation must be as similar as possible to the extrapolated design. For example, results from a three-legged seat cannot be used to validate a model that will then be used to examine a four-legged seat design. Since the seats are tested with anthropometric dummies, the computational models also include models of the dummy. The test evaluation criteria involve the responses of the anthropometric dummy in terms of the head injury criterion (HIC), lumbar spine loading and femur force. Time histories from the computational model are used to calculate these same response parameters, and specific deterministic acceptance values are provided. For example, a HIC greater than 700 is considered a failure in the tests. When validating the model, the numerically obtained HIC must be within +/- 50 units of the experimentally measured value. Similarly, the maximum lumbar spine compression is measured in the test and estimated in the computational solution; the maximum limit is 1,500 lbs, and the computational value must agree within 10 percent of the experimental value. The femur load must also agree with the physical test within 10 percent. The FAA Circular does allow some flexibility, stating that some values are not as critical in some situations: while good HIC validation is always required, validation of the lumbar spine might not be as critical.
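As an illustration of this style of deterministic check, the sketch below computes HIC from a resultant head acceleration history and applies the acceptance bands quoted above (HIC within +/- 50 units, lumbar and femur loads within 10 percent). The function names and the 36 msec window cap are assumptions for illustration, not requirements taken from AC 20-146.

    import numpy as np

    def hic(t, a, max_window=0.036):
        """Head Injury Criterion. t: time (s); a: resultant (non-negative) head
        acceleration in g's. HIC is the maximum over windows [t1, t2] of
        (t2 - t1) * [mean acceleration over the window] ** 2.5.
        Brute-force search; adequate for a sketch."""
        ia = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
        best = 0.0
        for i in range(len(t) - 1):
            for j in range(i + 1, len(t)):
                w = t[j] - t[i]
                if w > max_window:
                    break
                best = max(best, w * ((ia[j] - ia[i]) / w) ** 2.5)
        return best

    def seat_model_validated(hic_sim, hic_test, lumbar_sim, lumbar_test,
                             femur_sim, femur_test):
        """Deterministic acceptance bands quoted in the text: HIC within +/- 50
        units of the test value, lumbar and femur loads within 10 percent."""
        return (abs(hic_sim - hic_test) <= 50.0 and
                abs(lumbar_sim - lumbar_test) <= 0.10 * abs(lumbar_test) and
                abs(femur_sim - femur_test) <= 0.10 * abs(femur_test))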

This procedure is a deterministic evaluation in the sense that it does not address uncertainty about either the physical test or the numerical calculation: the evaluation parameters are calculated and required to be within an appropriate range compared to the physical test. The results of the validation process must be documented in a report, analogous to a test report. The report documents the results, the comparisons with the physical tests, the software/hardware used and the sources of material and geometric data. The analyst must address some particular issues like the energy balance and hourglass modes, and the Circular specifies that the analysis results be filtered using an SAE J211 Class 1000 filter, the same filter specification used in the dynamic tests.

Engine inlets of turbofan aircraft must be verified for bird-strike impacts. These impacts are usually performed experimentally with 2-pound birds launched at speeds around 150 m/s (depending on the airplane) against the structure. The pass/fail criterion is the ability of the structure to prevent the bird from penetrating beyond the main bulkhead during shots at critical impact points. These tests are performed on highly engineered structures made of titanium, steel, aluminum alloy, carbon fibers, Kevlar fibers, honeycomb and special acoustic panels, so the certification shots require substantial effort from manufacturers, which must build several of these structures. In recent years a new procedure has been accepted whereby the manufacturer can provide a finite element simulation before the first certification shot; if the comparison between the simulation and the test is accepted, the subsequent shots can be performed numerically. The comparison is usually based on the deformations and failures found in the structures. More recently, the amount of bird material penetrating the structure has also been used as an indicator.

The same approach has been used for helicopter fuel tank approval, where simulations of impact tests have been used to certify the AB139 AgustaWestland helicopter.(21) The design of helicopters for crashworthiness is based on US military specification MIL-STD-1290A. One component of helicopter crashworthiness is the design of the fuel tanks: the fuel tanks must not rupture in a crash, so a standard impact condition is specified for the approval of fuselage and tank designs. Invernizzi describes a recent example in which an LS-DYNA model of a cross-section of the helicopter, including the fuel tanks, was developed. Vertical drop impacts of the same AB139 design were performed at the drop-test facility at the Politecnico di Milano. The deformed shapes and the locations of tank-seam failures were compared between the test and the simulation to validate the computational model. The validated model can then be used to develop improved designs, and approval of a design can be based on the computational model without the necessity of performing another full-scale drop test.

Finite element simulation programs like LS-DYNA have also been used to design and evaluate rolling stock for several years. While crash tests with full-scale trains are occasionally performed, such tests are obviously very expensive, and their number needs to be minimized when developing new designs. The draft European Standard prEN 15227 is aimed at the approval and self-certification of rail car crashworthiness.(14) prEN 15227 identifies four basic impact scenarios and allows the design to be evaluated using computer simulations which are then validated against at least one full-scale test. The standard encourages building models of the major subassemblies and then calibrating the models with the results of experiments on the assemblies. Once the assemblies are calibrated, the full model of the rail car is developed and subjected to the four reference crash scenarios. Generally, the objective in designing rail cars is to absorb energy in particular areas of the rail car and to provide “survival space” for occupants. prEN 15227 states that if the total energy absorbed and the total stroke of the computational model are within 10 percent of the validation test, then the result is acceptable. A plot of the total force versus time curve is also required, and the norm states that high-frequency spikes shorter than 5 msec in the numerical calculation can be ignored. This rail car approval process allows designers to create designs using computer models and then validate the results based on comparison with a physical test corresponding to the worst-case crash scenario. If the model adequately predicts the results for this one physical test, then the model results for all other crash scenarios are considered valid and may be used as the basis of approval.
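This kind of deterministic acceptance test is simple to express in code. In the sketch below, the 10 percent energy and stroke bands follow the prEN 15227 language quoted above, while the moving-average suppression of spikes shorter than 5 msec is only one plausible reading of the norm's allowance, assumed here for illustration.

    import numpy as np

    def rail_model_acceptable(e_sim, e_test, s_sim, s_test, tol=0.10):
        """prEN 15227-style check: total absorbed energy and total stroke of
        the model must be within 10 percent of the validation test."""
        return (abs(e_sim - e_test) <= tol * abs(e_test) and
                abs(s_sim - s_test) <= tol * abs(s_test))

    def smooth_force(t, f, window=0.005):
        """Suppress force spikes shorter than 5 msec before comparing
        force-time curves. A moving average is one plausible implementation;
        the norm only says such spikes may be ignored."""
        dt = t[1] - t[0]                      # assumes uniform sampling
        n = max(1, int(round(window / dt)))
        kernel = np.ones(n) / n
        return np.convolve(f, kernel, mode="same")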

Kokkins et al. also developed a technique to simulate the dynamic, nonlinear structural behavior of moving rail vehicles during a collision.(22) Kokkins considered the interdependence of the many vehicles connected in a typical train by combining the dynamic modeling of the overall train with embedded detailed models of the lead locomotive, the other railcars and the objects with which the train collides, including a standing car and ISO-type shipping containers. LS-DYNA was used to simulate the three-dimensional effects of nonlinear, elastoplastic material behavior in addition to the effects of large deflections, buckling, energy absorption and fracture. It was possible to generate and visualize the collision process and to view the most significant locomotive structural deformations, movements and decelerations. These insights into the structural performance and interactions of the various areas of the locomotive, including the cab and interior areas, relate directly to crew survivability in collisions. Several types of locomotive design improvements were also assessed with this method, and validation studies using a historical accident were performed.

Dynamic finite element methods are also being used in the medical device industry to design implants such as heart valves, pacing lead coils and artificial joints.(23, 24) Analysis is particularly important for medical devices since performing experiments in living human beings under typical in vivo conditions is not only expensive and extremely risky but also of questionable morality. Before such experiments can be performed, the developer must have a high degree of confidence in the design.

Another interesting example is the evaluation of ship hull designs. Full-scale testing of ships is extremely complex and expensive, and by the time the first hull is built, it is difficult to make any large changes in the structure. The U.S. Navy (USN) has been investigating the use of LS-DYNA to design ships for blast loadings, but before the process could be used widely the USN decided to validate LS-DYNA models against several prior full-scale shock experiments.(25) Five full-scale ship shock trials have been performed over the past 20 years involving AEGIS class ships. Each shock trial involves detonating an underwater charge and measuring the response of the ship structure at a large number of locations on the ship. Didoszak et al. describe a validation of an LS-DYNA model of an AEGIS guided missile destroyer (DDG 81) against a full-scale shock trial. The results were compared quantitatively using the Russell Comprehensive metric described in a later section. The results of the comparison were generally good, and the USN decided that the use of LS-DYNA models was appropriate for the evaluation of ship hull designs in blast loadings.
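Although the Russell metric is described in detail later in this report, the sketch below shows, for orientation, how it is commonly computed in the ship-shock validation literature. The formulas and the acceptance thresholds noted in the comment are recalled from that literature, not taken from this report.

    import numpy as np

    def russell_metrics(measured, computed):
        """Russell magnitude (RM), phase (RP) and comprehensive (RC) error
        measures for two responses sampled at the same time points."""
        mm = np.dot(measured, measured)
        cc = np.dot(computed, computed)
        mc = np.dot(measured, computed)
        n = (cc - mm) / np.sqrt(mm * cc)                 # relative magnitude difference
        rm = np.sign(n) * np.log10(1.0 + abs(n))         # magnitude error
        rp = np.arccos(mc / np.sqrt(mm * cc)) / np.pi    # phase error
        rc = np.sqrt(np.pi / 4.0 * (rm ** 2 + rp ** 2))  # comprehensive error
        return rm, rp, rc

    # Thresholds often quoted with this metric (an assumption here, not from
    # this report): RC <= 0.15 excellent, RC <= 0.28 acceptable.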

Each shock trial involves detonating an underwater charge and measuring the response of the ship structure at a large number of locations on the ship. Didoszak et al. describe the validation of an LSDYNA model of an AEGIS guided missile destroyer (DDG 81) against a full-scale shock trial. The results were compared quantitatively using the Russell comprehensive metric described in a later section. The results of the comparison were generally good, and the USN decided that the use of LSDYNA models was appropriate for the evaluation of ship hull designs in blast loadings.

The validation procedures discussed above all conform to the recommended verification and validation process described by ASME V&V 10-2006 and illustrated in Figure 3. The first step in each procedure is determining the physical responses of interest and formulating a conceptual model. At this point the process breaks into two separate paths, one involving computational modeling and the other involving physical experiments. In the computational modeling branch, the mathematical model is developed, calibrated and verified and then used to produce numerical simulations of the "reality of interest." On the experimental branch, experiments are performed that provide useful tests of the phenomena being investigated. The two branches come back together in the validation stage, where the results of the simulations are compared with the results of the experiments. If the results agree, the model is validated and can then be used to investigate other scenarios.

Metrics

Introduction

Visually comparing time histories from computational and experimental results to assess the degree of similarity is the most common way of qualitatively validating a model. For example, Figure 4 shows two time histories. An evaluator might consider the two curves substantially similar because they "look" similar, but since plotting computational and experimental results on the same graph is no more than a subjective comparison, it is qualitative rather than quantitative validation. Qualitative comparison between experimental and numerical results cannot provide several important features: quantification of the numerical difference between the experiment and the calculation, quantification of computational uncertainties (e.g., model sensitivity to the change of some parameters or poorly defined boundary conditions) and an estimate of the experimental and computational uncertainty. If comparisons are only qualitative, one reviewer's "good" correlation may become another's "poor" correlation, and since the evaluation is qualitative the inconsistency cannot be resolved. Approval decisions need to be based as much as possible on quantitative criteria that are unambiguous and mathematically precise.

Figure 3. Typical validation and verification activities.(17)

Figure 4. Comparison between experimental and analytical time history results.(13)

A validation metric is a "mathematical measure that quantifies the level of agreement between simulation outcomes and experimental outcomes."(17) A variety of validation metrics can be found in the literature, but essentially they can be grouped into two main categories: (i) deterministic metrics and (ii) stochastic metrics. Deterministic metrics do not specifically address the probabilistic variation of either the experiments or the calculation: they are deterministic because, given the same input, the calculation produces the same result every time. Such metrics are simply characteristics that can be determined by examining the experiments and calculations. Stochastic metrics involve computing the likely variation in both the simulation and the experiment due to parameter variations.

There are two types of deterministic metrics: domain-specific metrics and shape comparison metrics. As discussed earlier, the intended use of the model helps to identify the domain-specific metrics. For the example of designing a railroad car, prEN 15227 requires comparing the energy absorption of the rail car in both the experiment and the simulation.(14) In this case, the total energy absorption is a domain-specific metric that is relevant to rail car design but may not be relevant to some other type of design. Similarly, designing an aircraft seat involves comparing the Head Injury Criterion (HIC) value from the simulation to the HIC observed in an experiment.(13) The HIC in this case is a domain-specific metric that is relevant only to design situations in which dummy head impact is a part of the design criteria.

One step in choosing validation metrics, therefore, is to choose the parameters that are necessary when judging the performance of an experiment. Domain-specific metrics are generally the same as the dynamic or crash test evaluation metrics in a particular design field. Just as aircraft seat and rail car designers use the metrics common in their respective crash test programs, roadside safety designers should use the domain-specific metrics found in crash test guidelines like Report 350 and EN 1317.

The other type of deterministic metric involves comparisons of shape. In dynamic domains such as roadside safety design, aircraft crashworthy seat design, ship blast-worthiness or vehicle crashworthiness analysis, the shapes being compared are generally based on time history data (i.e., acceleration, velocity and displacement at a specific sensor location). Shape metrics found in the literature include the following:

1. Frequency domain metrics
2. Relative absolute difference of the moments of the two signals
3. Root mean square (RMS) log measure of the difference between two signals
4. Correlation coefficient
5. Geers MPC metrics
6. Knowles and Gear MPC metrics
7. Russell metrics
8. ANOVA metrics
9. Velocity of the residual errors

Once a particular deterministic metric is chosen, a procedure for deciding whether the comparison is acceptable is necessary. There are two basic approaches to developing an acceptance criterion: ad hoc and probabilistic. Ad hoc criteria base acceptance on community experience or engineering judgment. For example, the HIC value in an experiment might be found to be 352 while the corresponding simulation results in a HIC of 398. The two values are similar but not identical. An ad hoc acceptance approach simply sets a criterion for the range of acceptable measurements without any real basis in the probabilistic variation of the parameters. Based on experience in this specific domain, for example, the FAA considers a HIC comparison that is within +/- 50 HIC units to be sufficiently close. Based on the FAA criterion, therefore, the results of the simulation and experiment in this example would be sufficient. Ad hoc criteria are generally based on the experience of the community and the degree of closeness that the community feels it can achieve.

A better approach is to base acceptance on the probabilistic variation of the experiments. For example, if 10 experiments are performed and the mean HIC is found to be 352 with a standard deviation of 11 HIC units, the analyst can estimate the 90th percentile confidence limits for the HIC experiments as 352 +/- 1.65·11, or roughly 352 +/- 18 HIC units. Stated another way, if 100 tests were performed, 90 of them should fall in the range of 334 to 370 HIC units. In this case the simulation estimate of 398 is outside the 90th percentile confidence range, so the conclusion would be that the simulation is not sufficiently close to the experiment.
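To make the probabilistic acceptance check concrete, the sketch below applies the 90th percentile test just described to a set of repeated HIC measurements. It is a minimal illustration rather than part of any cited procedure; the test values and the helper name hic_acceptance_range are hypothetical.

```python
import math

def hic_acceptance_range(hic_values, z=1.645):
    """90th percentile acceptance range (z = 1.645) for repeated HIC tests."""
    n = len(hic_values)
    mean = sum(hic_values) / n
    # sample standard deviation of the repeated experiments
    std = math.sqrt(sum((x - mean) ** 2 for x in hic_values) / (n - 1))
    return mean - z * std, mean + z * std

# Ten illustrative repeated tests with a mean HIC of 352
tests = [352, 341, 360, 348, 355, 339, 365, 347, 351, 362]
low, high = hic_acceptance_range(tests)
simulated_hic = 398
verdict = "validated" if low <= simulated_hic <= high else "not validated"
print(f"acceptance range {low:.0f}-{high:.0f} HIC, simulation {simulated_hic}: {verdict}")
```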

If the simulation were as good as another experiment, the simulation result would have fallen within the specified range. This probabilistic approach is better than the ad hoc, deterministic approach since it incorporates the uncertainty observed in physical experiments, but it does require some prior knowledge about the level of uncertainty typical of the experiments, which may be difficult to obtain. These methods generally involve some type of classical hypothesis testing such as a t or z test or the Kolmogorov-Smirnov test. The subject of the repeatability of crash tests is discussed more completely later in this chapter.

A validation metric, therefore, potentially has two parts: the deterministic metric and an acceptance criterion. The acceptance criterion may be either a deterministic ad hoc value or a probabilistically determined range. Domain-specific metrics should be chosen that correspond to the testing typically performed in a particular solid mechanics area to facilitate comparisons between physical tests and computational simulations.

Domain-Specific Deterministic Metrics in Roadside Safety

In the case of roadside safety, tests are performed and evaluated according to Report 350 in the US or EN 1317 in Europe. Both crash testing guidelines include specific metrics like the occupant impact velocity, the occupant ridedown acceleration, THIV, ASI and exit conditions. Each of these metrics is calculated from time history data collected in a crash test. Since the purpose of performing computer simulations is generally to design roadside hardware such that the Report 350 or EN 1317 test response can be predicted, it makes sense to include these test-based evaluation parameters in any discussion of validation. Table 1, for example, shows a comparison between the Report 350 evaluation table and the results of an LSDYNA simulation for the Plastic Safety System CrashGard Sand Barrel System.(26) A model of the system was developed and then simulations of baseline full-scale crash tests were performed. The domain-specific results from the simulation are compared to the full-scale crash test results for test 3-31 in Table 1.

The Report 350 evaluation table contains 14 specific evaluation metrics, eight of which apply to test 3-31. Of the eight evaluation criteria, six are pass/fail qualitative assessments (i.e., criteria C, D, F, G, K and L) based on the global performance of the system. The two remaining criteria (i.e., H and I) are calculated quantities based on the time histories. As shown in Table 1, the qualitative criteria like "acceptable test article performance" (i.e., criterion C) and the detached fragment criterion (i.e., criterion D) can be used to compare the results of crash tests and simulations. It is best if these criteria are unambiguous so that the simulation results can be judged in the same way as the experimental results. Also shown in Table 1 are the deterministic domain-specific metrics occupant impact velocity (OIV) and occupant ridedown acceleration (ORA) (i.e., criteria H and I). In these cases, the longitudinal OIV value from the simulation is 20 percent greater than the experimental value, so if 20 percent were the allowable acceptance criterion this domain-specific metric would be acceptable; if the acceptance criterion were 10 percent, however, the longitudinal value would be judged not acceptable.

This has certain diagnostic value, since a higher simulation value indicates the model may be too stiff and the analyst can investigate possible reasons. The ORA values (i.e., criterion I) differ by 4.5 percent in the longitudinal direction and 11 percent in the lateral direction, so the simulation would be judged acceptable by this domain-specific metric under a 20 percent acceptance criterion. The point of Table 1 is to show that the same experimental evaluation metrics can be used as deterministic domain-specific metrics when comparing full-scale crash test experiments to simulations.

Table 1. Report 350 evaluation criteria for test 3-31 on the Plastic Safety System CrashGard Sand Barrel System – test and simulation.(26)

Structural Adequacy
A. Test article should contain and redirect the vehicle; the vehicle should not penetrate, underride, or override the installation although controlled lateral deflection of the test article is acceptable. Crash test: NA; FE: NA; Difference: -
B. The test article should readily activate in a predictable manner by breaking away, fracturing or yielding. Crash test: NA; FE: NA; Difference: -
C. Acceptable test article performance may be by redirection, controlled penetration or controlled stopping of the vehicle. Crash test: Passed; FE: Passed; Difference: -

Occupant Risk
D. Detached elements, fragments or other debris from the test article should not penetrate or show potential for penetrating the occupant compartment, or present an undue hazard to other traffic, pedestrians or personnel in a work zone. Crash test: Passed; FE: Passed; Difference: -
E. Detached elements, fragments or other debris from the test article, or vehicular damage, should not block the driver's vision or otherwise cause the driver to lose control of the vehicle. Crash test: NA; FE: NA; Difference: -
F. The vehicle should remain upright during and after the collision although moderate roll, pitching and yawing are acceptable. Crash test: Passed; FE: Passed; Difference: -
G. It is preferable, although not essential, that the vehicle remain upright during and after collision. Crash test: Passed; FE: Passed; Difference: -
H. Occupant impact velocities should satisfy the following limits: longitudinal and lateral, preferred 30 ft/s, maximum 40 ft/s. Crash test: 24.27 ft/s (longitudinal), 0 ft/s (lateral); FE: 29.2 ft/s (longitudinal), 0 ft/s (lateral); Difference: 20 percent (longitudinal), NA (lateral). The second limit (longitudinal, preferred 10 ft/s, maximum 15 ft/s) did not apply: NA.
I. Occupant ridedown accelerations should satisfy the following limits: longitudinal and lateral, preferred 15 g's, maximum 20 g's. Crash test: 10.9 g's (longitudinal), 1.8 g's (lateral); FE: 11.4 g's (longitudinal), 1.6 g's (lateral); Difference: 4.5 percent (longitudinal), 11 percent (lateral).
J. (Optional) Hybrid III dummy responses. Crash test: NA; FE: NA; Difference: -

Vehicle Trajectory
K. After collision it is preferable that the vehicle's trajectory not intrude into adjacent traffic lanes. Crash test: Passed; FE: Passed; Difference: -
L. The occupant impact velocity in the longitudinal direction should not exceed 40 ft/sec and the occupant ridedown acceleration in the longitudinal direction should not exceed 20 G's. Crash test: Passed; FE: Passed; Difference: -
M. The exit angle from the test article should preferably be less than 60 percent of the test impact angle, measured at the time the vehicle loses contact with the test device. Crash test: NA; FE: NA; Difference: -
N. Vehicle trajectory behind the test article is acceptable. Crash test: NA; FE: NA; Difference: -

The EN 1317 values like THIV, PHD and ASI could likewise be used as deterministic domain-specific validation metrics, as shown in Table 2 for the same CrashGard Sand Barrel System shown in Table 1.(26) Another interesting feature of the metrics in Table 2 is the use of time as an evaluation criterion. Not only can the value of the metric (i.e., the THIV or PHD) be used, but the time of arrival of that metric can likewise be used to assess the validity of the simulation.

Table 2. EN 1317 evaluation criteria for test 3-31 on the Plastic Safety System CrashGard Sand Barrel System – test (right) and simulation (left).(26)

Deterministic Shape Metrics with Ad Hoc Acceptance Criteria

The following section presents the details of the common shape-comparison metrics found in the literature review. The basic method for calculating each metric is presented along with a reference to its original derivation. If the metric has been used in a design and evaluation project, the paper or report is cited. In all the following sections, the terms m_i and c_i refer to the measured and computed quantities respectively; the subscript i indicates the measurement at a specific instant in time.

When comparing two or more time histories, the simplest technique is the point-to-point comparison, in which the magnitude of the curve (e.g., the acceleration) at a particular point in time in the experiment is compared to the magnitude of the simulation curve at the corresponding time. Point-to-point comparisons are performed in the time domain. When the comparison involves a time-varying quantity, there may be a time shift between the two curves, so when comparing two time histories the magnitude and the phase should be considered simultaneously.

Frequency Domain

The NARD Validation Manual provides three validation metrics in the frequency domain for the comparison of the transformed signal of the measured curve, M(ω), and the transformed signal of the computed curve, C(ω):(27)

• the relative absolute difference of the amplitudes of the two signals,
• the point-wise absolute difference of the amplitudes of the two signals and
• the root-mean-squared (RMS) log spectral difference between the two signals.

The time domain signal is transformed into its corresponding frequency domain signal using a Fourier transformation. Any time domain signal can be expressed in the form:

f(t) = \sum_{n=-\infty}^{\infty} \left[ A_n \cos(n \omega t) + B_n \sin(n \omega t) \right]

If A_m and B_m are the coefficients of the measured signal M and A_c and B_c are the coefficients of the computed signal C, the point-wise absolute differences are defined as:

\Delta A = A_m - A_c ; \qquad \Delta B = B_m - B_c

The relative absolute difference is defined as:

\Delta_{rel} = \left| \sqrt{A_m^2 + B_m^2} - \sqrt{A_c^2 + B_c^2} \right|

For both the point-wise absolute differences and the relative absolute difference, the measured and computed curves are considered by the NARD Validation Manual to be close to one another if the difference is less than 20 percent.
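The sketch below shows one way the frequency-domain comparison could be implemented with a discrete Fourier transform. Normalizing the relative difference by the measured amplitudes is an assumption on our part, since the NARD manual's exact normalization is not reproduced here; the signals are synthetic.

```python
import numpy as np

def relative_amplitude_difference(m, c):
    """Relative absolute difference of the Fourier amplitudes of the
    measured signal m and the computed signal c (same length, same dt)."""
    amp_m = np.abs(np.fft.rfft(m))   # sqrt(Am^2 + Bm^2) at each frequency line
    amp_c = np.abs(np.fft.rfft(c))
    return np.sum(np.abs(amp_m - amp_c)) / np.sum(amp_m)

# Synthetic 0.2-second impact-like signals sampled at 10 kHz
t = np.linspace(0.0, 0.2, 2000)
measured = np.exp(-t / 0.05) * np.sin(2 * np.pi * 60 * t)
computed = 0.9 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 62 * t)
diff = relative_amplitude_difference(measured, computed)
print(f"relative amplitude difference: {diff:.1%} (NARD threshold: 20%)")
```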

The RMS log spectral distance measures the distance between the smooth power spectra of the measured and computed signals. In order to define the smooth power spectrum, it is first necessary to define the autocovariance functions associated with the measured and computed signals:

C_m(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} m(t)\, m(t + \tau)\, dt

C_c(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} c(t)\, c(t + \tau)\, dt

The smooth power spectra of the measured and calculated functions are defined as:

\Gamma_m(\omega) = w(\omega) \int_{-\infty}^{\infty} C_m(\tau)\, e^{-i \omega \tau}\, d\tau

\Gamma_c(\omega) = w(\omega) \int_{-\infty}^{\infty} C_c(\tau)\, e^{-i \omega \tau}\, d\tau

where w(\omega) is the spectral window. The RMS log spectral distance, in units of decibels (dB), is then given by:

D = \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} \left( \ln \frac{\Gamma_m(\omega)}{\Gamma_c(\omega)} \right)^2 d\omega \right]^{1/2}

The smaller the RMS log spectral distance between the signals, the closer the signals are. A distance of 20 dB or less indicates that the difference between the signals is not more than 20 percent. Both the RMS log spectral distance and the relative absolute RMS are deterministic metrics with an ad hoc acceptance criterion of 20 percent.

No application of the NARD frequency analysis metrics was found in the roadside safety literature. In fact, no example of the use of frequency domain metrics was found in any of the solid mechanics literature. This may be due to the difficulties that arise when trying to apply these particular metrics to the very short time histories of a crash event.

Time Domain

Comparisons based on time-domain point-to-point measures are far more common in a variety of solid mechanics domains.

Relative Absolute Difference of the Moments of the Two Signals

One of the simplest ways to compare two signals is to compare the moments of their shapes, as proposed by the NARD Validation Manual.(27) The relative absolute difference of the moments is based on comparing the moments of the area under the time history curve. Moments are mathematical characteristics of a shape (e.g., moments of inertia) and can be defined by the following general expression:

M_{j,m} = \sum_{i=1}^{n} t_i^j\, m_i\, \Delta t

M_{j,c} = \sum_{i=1}^{n} t_i^j\, c_i\, \Delta t

The lower order moments have some physical meaning. For example, the zero order moment (j = 0), when divided by the number of samples, is the average acceleration. The first order moment (j = 1) divided by the zero order moment locates the time of the centroid of the time history. Moments of order greater than one have little physical meaning when comparing time histories and are simply mathematical characteristics of the shapes. The more moments (i.e., shape characteristics) that two shapes have in common, the more likely, in a general sense, they are to represent the same shape. If enough characteristics of the measured acceleration history shape match the characteristics of the calculated acceleration history, the shapes should be similar. The ratio R between the difference of the nth moments of the measured (m_i) and calculated (c_i) signals and the nth moment of the calculated signal is given by:

R = \frac{\sum_{i=1}^{n} t_i^n\, m_i - \sum_{i=1}^{n} t_i^n\, c_i}{\sum_{i=1}^{n} t_i^n\, c_i}

The NARD validation procedure recommends that the 0th through 5th relative differences of the moments defined above be calculated. The NARD Validation Manual arbitrarily considers the measured and calculated moments to be similar if the absolute difference between the respective moments of order n, M_n(m_i) and M_n(c_i), is less than 0.2. All the relative moment metrics are, therefore, deterministic shape metrics with an ad hoc acceptance criterion of 20 percent. Interestingly, the NARD validation procedure makes the comparison with respect to the calculated value rather than the experimental value. It is more appropriate to make the comparison with respect to the experimental value since, from a validation point of view, the experimental value is the "true" response.
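A minimal sketch of the moment comparison follows, assuming uniformly sampled signals. Note that, following the recommendation above, it normalizes by the measured moment rather than the calculated moment used in the NARD manual.

```python
import numpy as np

def relative_moment_difference(m, c, t, order):
    """Relative absolute difference of the order-th moments of two signals."""
    dt = t[1] - t[0]                      # uniform sampling assumed
    moment_m = np.sum(t**order * m) * dt  # moment of the measured signal
    moment_c = np.sum(t**order * c) * dt  # moment of the computed signal
    return abs(moment_m - moment_c) / abs(moment_m)

t = np.linspace(0.001, 0.2, 2000)
measured = np.exp(-t / 0.05) * np.sin(2 * np.pi * 10 * t)
computed = 0.9 * measured                 # a pure 10 percent magnitude error
for j in range(6):                        # 0th through 5th moments
    print(f"order {j}: {relative_moment_difference(measured, computed, t, j):.3f}")
```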

Root Mean Square (RMS) Log Measure of the Difference Between Two Signals

The mean value of a signal is simply the algebraic sum of the values divided by the number of values.(27) Similarly, the mean square is the algebraic sum of the squares of the values divided by the number of values. Taking the square root of the mean square gives the root mean square (RMS) of the measured and computed signals:

RMS_m = \sqrt{\frac{1}{N} \sum_{i=1}^{N} m_i^2} ; \qquad RMS_c = \sqrt{\frac{1}{N} \sum_{i=1}^{N} c_i^2}

The RMS is the average value of the signal without respect to its sign. The RMS values of two signals can be compared by taking the difference of the two RMS values and dividing by their average:

\Delta RMS_r = \frac{RMS_m - RMS_c}{\left( RMS_m + RMS_c \right)/2}

As with the relative moments, the choice of denominator is ambiguous: for the moments the difference is calculated with respect to the calculated signal, whereas for the relative RMS the difference is calculated with respect to the average. Again, since these are validation metrics, the "true" experimental solution should be the reference value in the denominator. Like the relative moments, the RMS is simply a characteristic of a particular shape; in the case of an acceleration time history, it is the average value of the accelerations without respect to sign. The logarithmic form of the RMS difference can also be considered, as suggested by the NARD Validation Manual:

\Delta RMS_{r,\log} = \frac{2 \left( \sum_{i=1}^{N} 10 \log_{10} m_i^2 - \sum_{i=1}^{N} 10 \log_{10} c_i^2 \right)}{\sum_{i=1}^{N} \left( 10 \log_{10} m_i^2 + 10 \log_{10} c_i^2 \right)}

Both the relative RMS and the logarithmic relative RMS are deterministic shape metrics with an ad hoc acceptance criterion of 20 percent.
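A sketch of the relative RMS comparison is shown below; the acceptance check against the 20 percent ad hoc criterion is our own illustration.

```python
import numpy as np

def relative_rms_difference(m, c):
    """Relative RMS difference, normalized by the average of the two RMS values."""
    rms_m = np.sqrt(np.mean(m**2))
    rms_c = np.sqrt(np.mean(c**2))
    return abs(rms_m - rms_c) / ((rms_m + rms_c) / 2.0)

# Example: a 10% magnitude error gives a relative RMS difference of about 10.5%
m = np.sin(np.linspace(0.0, 6.28, 500))
print(relative_rms_difference(m, 0.9 * m) < 0.20)   # True: within the 20% criterion
```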

Correlation Coefficient

The correlation coefficient, proposed in the NARD Validation Manual, measures the correlation between two signals.(27) Correlation in this context does not mean that the signals are identical but only that one can be linearly transformed into the other. The correlation coefficient is, therefore, a measure of the relative phasing of the two signals. The correlation coefficient of two signals is given in the NARD Validation Manual as:

\rho = \frac{\left( \sum_{i=1}^{N} c_i\, m_i \right)^2}{\sum_{i=1}^{N} c_i^2\; \sum_{i=1}^{N} m_i^2}

The closer the correlation coefficient is to unity, the more nearly the calculated and measured signals can be linearly transformed into each other. Several applications of the NARD validation metrics were found in the roadside safety finite element literature. Ray, in an unpublished paper, quantitatively evaluated four finite element models using, among others, the NARD metrics in the time domain.(28) The results of Ray's analysis are described in a later section. In 2000, Tabiei and Wu used the RMS log measure of difference and the correlation coefficient to quantitatively validate the results obtained from a finite element model of a strong-post W-beam guardrail system and a pickup truck.(29) In 2005, Atahan and Cansiz used the relative absolute difference of the moments of two signals to quantitatively evaluate the accuracy of the results from a baseline simulation of a full-scale test of a guardrail-to-bridge-rail transition with a pickup truck.(30) The correlation coefficient is, therefore, a deterministic shape metric with an ad hoc acceptance criterion.

Geers MPC Metrics

Geers developed a three-part metric that includes quantitative assessments of the magnitude and phase, which are then combined into a single value that represents the whole comparison.(31) First, the magnitude and phase components of the metric are calculated. The two components (i.e., M and P) are then combined into a single metric (C) that represents the combined effect of both magnitude and phase. All the Geers metrics are arranged such that the values range from zero to unity, with values closer to zero representing a higher level of agreement. The Geers MPC metrics are defined by the following summations:

M_G = \sqrt{\frac{\sum_{i=1}^{N} c_i^2}{\sum_{i=1}^{N} m_i^2}} - 1

P_G = 1 - \frac{\sum_{i=1}^{N} m_i\, c_i}{\sqrt{\sum_{i=1}^{N} m_i^2\; \sum_{i=1}^{N} c_i^2}}

C_G = \sqrt{M_G^2 + P_G^2}

Geers showed that the phase component is insensitive to magnitude differences but is sensitive to differences in phasing or timing between the two time histories. Similarly, the magnitude component is sensitive to differences in magnitude but relatively insensitive to differences in phase. These characteristics give the Geers metrics good diagnostic value, since they identify the aspects of the curves that do not agree. For example, if the phase metric is acceptable but the magnitude metric is not, the analyst can examine the stiffness and strength of the model to make sure they are correct. As a note, the Geers magnitude metric can simply be seen as one subtracted from the ratio of the calculated and measured RMS signals defined in the NARD validation metrics, while the Geers phase metric is one minus the square root of the correlation coefficient. Once the magnitude component and the phase component have been calculated, the combined metric C is computed by combining the two component metrics into a single value. All the components of the Geers metrics range between zero and unity, with values of zero corresponding to exact agreement between the curves. The components can be thought of as coordinates of a circle, where the M and P values define the coordinates of a point and C defines the radius.

Sprague and Geers later modified the phase component of the MPC metrics in order to better scale the magnitude and phase components.(32) They found that the original formulation of the phase component did not scale similarly to the magnitude component: a magnitude component of 10 percent, for example, did not reflect the same degree of comparability as a phase component of 10 percent. Sprague and Geers modified the original MPC metrics to include a trigonometric term that helps the two components scale more similarly, using a phase formulation based on Russell, a metric discussed later in this section. The Sprague and Geers metric is structured in the same way as the original version, with magnitude, phase and combined components; only the phase component is different. The three components of the Sprague and Geers metric are given by the following equations:

M_{SG} = \sqrt{\frac{\sum_{i=1}^{N} c_i^2}{\sum_{i=1}^{N} m_i^2}} - 1

P_{SG} = \frac{1}{\pi} \cos^{-1} \left( \frac{\sum_{i=1}^{N} m_i\, c_i}{\sqrt{\sum_{i=1}^{N} m_i^2\; \sum_{i=1}^{N} c_i^2}} \right)

C_{SG} = \sqrt{M_{SG}^2 + P_{SG}^2}
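The Sprague and Geers components map directly onto a few array operations, as in the hedged sketch below. Synchronized, equal-length histories are assumed, and the clipping guards against round-off carrying the cosine argument outside [-1, 1].

```python
import numpy as np

def sprague_geers(m, c):
    """Sprague & Geers magnitude, phase and combined metrics."""
    sum_mm = np.sum(m * m)
    sum_cc = np.sum(c * c)
    sum_mc = np.sum(m * c)
    M = np.sqrt(sum_cc / sum_mm) - 1.0                        # magnitude component
    cos_term = np.clip(sum_mc / np.sqrt(sum_mm * sum_cc), -1.0, 1.0)
    P = np.arccos(cos_term) / np.pi                           # phase component
    return M, P, np.hypot(M, P)                               # combined metric

t = np.linspace(0.0, 0.2, 2000)
measured = np.exp(-t / 0.05) * np.sin(2 * np.pi * 60 * t)
computed = 0.9 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 60 * (t - 0.002))
print(sprague_geers(measured, computed))   # small magnitude and phase errors
```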

Knowles and Gear MPC Metrics

The most recent variation of an MPC-type metric is the Knowles and Gear metric.¹ Like Geers's original metrics and the Sprague and Geers metrics, the Knowles and Gear metrics are composed of three parts: a magnitude component, a phase component and a combination parameter. As in the other versions of these metrics, the values range between zero and unity, with zero representing exact agreement between the curves.

Knowles and Gear recognized that if two shapes are arbitrarily shifted from each other, the phase component may erroneously indicate poor phase correlation that is simply due to the two signals not being synchronized. For example, if the time of impact is not precisely defined in the experimental curve, some of the phase error might actually be due to poor identification of the impact point. Before a good comparison of the phasing can be performed, the two signals must be synchronized so they start at the same point. Synchronizing the signals is accomplished by defining the time of arrival (TOA). The TOA of a time history is the time at which the time history attains some percentage of the maximum waveform value. Typically, for time histories with relatively fast rise times, a range of 5 to 10 percent is recommended, but this range may be changed for slower rise times. Defining TOA_c and TOA_m as the times of arrival of the simulated and measured time histories respectively, the TOA metric is defined as:(32)

M_{TOA} = \frac{\left| TOA_c - TOA_m \right|}{TOA_m}

The magnitude component M_KG of the Knowles and Gear metric is defined as a weighted sum-of-squared differences between the simulated and measured time histories. Considering a discrete time history characterized by N time samples, M_KG is defined as:

M_{KG} = \frac{\sum_{i=1}^{N} Q_i \left( \tilde{c}_i - m_i \right)^2}{QS}

where m_i is the measured time history at the ith sample and \tilde{c}(t) = c(t - \tau) is the time-of-arrival-shifted simulation history (i.e., if TOA_c and TOA_m are the times of arrival of the simulated and measured time histories and TOA_c > TOA_m, then \tau = TOA_c - TOA_m). Shifting the simulation time history using \tilde{c}(t) instead of c(t) allows the metric to focus only on the magnitude comparison between the curves, without complications arising from asynchronous signals. Q_i and QS represent the weighting and normalization factors, respectively.

¹ Note that Geers and Gear are two different people.

The weighting factor is designed to scale the sum-of-squared differences relative to the maximum value of the measurement:

Q_i = \left( \frac{m_i}{m_{\max}} \right)^{p} \left( t_{i+1} - t_{i-1} \right)

where a unit value of p is recommended to place more weight on the large values of m(t). In order to avoid creating a gap between time histories characterized by a large magnitude and those characterized by a smaller one, the magnitude has to be normalized. In this metric, the normalization factor QS is chosen so that the metric takes a value of unity when the magnitudes of the time histories differ by 100 percent:

QS = \sum_{i=1}^{N} \left( \frac{m_i}{m_{\max}} \right)^{p} m_i^2 \left( t_{i+1} - t_{i-1} \right)

If uniform time sampling is used, the magnitude component simplifies to the following form:

M_{KG} = \frac{\sum_{i=1}^{N} \left( \frac{m_i}{m_{\max}} \right)^{p} \left( \tilde{c}_i - m_i \right)^2}{\sum_{i=1}^{N} \left( \frac{m_i}{m_{\max}} \right)^{p} m_i^2}

As with the Geers metrics, once the magnitude and phase components have been evaluated, the Knowles and Gear combined metric is computed as a weighted average of the two components:

C_{KG} = \sqrt{\frac{10\, M_{KG}^2 + 2\, M_{TOA}^2}{12}}

In the Knowles and Gear combined metric, the magnitude and phase factors are weighted such that the phase value does not dominate the combined metric. The main limitation of the Knowles and Gear metric is that it cannot differentiate between under- and over-prediction, because it is based on the sum of the squared differences between the measured and simulated curves. The C_KG metric represents the comparison of a single response quantity, such as the acceleration time history of the vehicle center of mass; the Knowles and Gear metric can also be applied in the more general case where several system response quantities are considered at the same time.
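A hedged sketch of the single-response Knowles and Gear calculation follows. It makes several simplifying assumptions not in the original formulation: the TOA shift is rounded to whole samples, the weighting uses |m| so that negative accelerations are weighted sensibly, and the TOA of the measured signal is assumed nonzero.

```python
import numpy as np

def knowles_gear(m, c, t, p=1.0, toa_fraction=0.05):
    """Knowles & Gear metrics for uniformly sampled time histories."""
    abs_m, abs_c = np.abs(m), np.abs(c)
    # time of arrival: first sample reaching a fraction of the peak magnitude
    toa_m = t[np.argmax(abs_m >= toa_fraction * abs_m.max())]
    toa_c = t[np.argmax(abs_c >= toa_fraction * abs_c.max())]
    m_toa = abs(toa_c - toa_m) / toa_m             # TOA (phase) component
    dt = t[1] - t[0]
    shift = int(round((toa_c - toa_m) / dt))
    c_shifted = np.roll(c, -shift)                 # align the computed history
    w = (abs_m / abs_m.max()) ** p                 # weighting factors Q_i
    m_kg = np.sum(w * (c_shifted - m) ** 2) / np.sum(w * m ** 2)
    c_kg = np.sqrt((10.0 * m_kg**2 + 2.0 * m_toa**2) / 12.0)
    return m_kg, m_toa, c_kg

t = np.linspace(0.001, 0.2, 2000)
measured = np.where(t > 0.010, np.exp(-(t - 0.010) / 0.05), 0.0)
computed = 0.9 * np.where(t > 0.012, np.exp(-(t - 0.012) / 0.05), 0.0)
print(knowles_gear(measured, computed, t))
```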

Russell Metrics

Another metric based on the concept of magnitude and phase differences between two curves was developed by Russell in 1997.(33) Russell defined the relative magnitude error between the measured and computed curves as:

m = \frac{\sum_{i=1}^{N} c_i^2 - \sum_{i=1}^{N} m_i^2}{\sqrt{\sum_{i=1}^{N} m_i^2 \cdot \sum_{i=1}^{N} c_i^2}}

The phase correlation between the measured and computed curves is:

p = \frac{\sum_{i=1}^{N} c_i\, m_i}{\sqrt{\sum_{i=1}^{N} m_i^2 \cdot \sum_{i=1}^{N} c_i^2}}

Russell then derived the magnitude and phase errors from the corresponding relative magnitude error and phase correlation. The resulting form of the magnitude error is:

M_R = \operatorname{sgn}(m)\, \log_{10} \left( 1 + \left| m \right| \right)

The phase error is computed as:

P_R = \frac{\cos^{-1}(p)}{\pi}

The comprehensive error of the Russell metrics is defined as:

C_R = \sqrt{\frac{\pi}{4} \left( M_R^2 + P_R^2 \right)}

Shin and Schneider used the Russell metrics to evaluate the blast-worthiness of a naval ship.(34) An experiment using the DDG 51 class vessel USS Winston Churchill was replicated with an LSDYNA model. As illustrated in Figure 5, the ship was instrumented with 30 accelerometers at various locations to measure the local accelerations in the ship hull when the vessel was in the vicinity of a blast.
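Before turning to those results, a sketch of the Russell calculation is given below. The sign convention (positive when the computed energy exceeds the measured) follows the reconstruction above and should be treated as an assumption; the acceptance thresholds noted in the comment are those discussed in the next paragraphs.

```python
import numpy as np

def russell(m, c):
    """Russell magnitude error, phase error and comprehensive error factor."""
    sum_mm, sum_cc, sum_mc = np.sum(m * m), np.sum(c * c), np.sum(m * c)
    rel = (sum_cc - sum_mm) / np.sqrt(sum_mm * sum_cc)   # relative magnitude error
    M = np.sign(rel) * np.log10(1.0 + abs(rel))          # magnitude error
    p = np.clip(sum_mc / np.sqrt(sum_mm * sum_cc), -1.0, 1.0)
    P = np.arccos(p) / np.pi                             # phase error
    C = np.sqrt(np.pi / 4.0 * (M**2 + P**2))             # comprehensive error
    return M, P, C   # C < 0.15 excellent, C < 0.28 acceptable (Shin & Schneider)
```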

The magnitude and phase components of the Russell metrics were computed, and the results for all the sensors were plotted as shown in Figure 5. The two component metrics can be considered coordinates of a point, with the magnitude component plotted on the x axis and the phase component on the y axis; the radial distance from the origin represents the combined metric. The RC (Russell combined) values shown in Figure 5 represent the values of the combined metric for two different acceptance levels. If the combined metric is less than 0.15 (i.e., 15 percent), the comparison is excellent. If the combined metric is between 0.15 and 0.28 the comparison is acceptable, whereas if the combined metric is greater than 0.28 the comparison is unacceptable. Figure 5 shows not only the use of the Russell metrics but also the combination of data from a number of different sensors on the same plot to assess the overall utility of a model. For example, the three data points outside the acceptable range in Figure 5 might not invalidate the whole model, but they call the analyst's attention to regions in which the experiment and calculation did not agree. Shin and Schneider also plotted the Russell combined metric against the longitudinal position of each sensor, as shown in Figure 6.(34) This figure clearly shows that the results of the comparison degraded at sensors located at the extreme ends of the vessel. Such a plot helps the analyst and the experimenter identify problems with the model or with the location and mounting of sensors, which can be used to improve both subsequent experiments and model development.

Figure 5. Russell metrics for 30 accelerometers in a ship blast model validation.(34)

Figure 6. Russell combined metric plotted versus the longitudinal position of the sensor in a ship blast validation activity.(34)

ANOVA Metrics

Analysis of variance (ANOVA) is a standard statistical technique commonly used in the analysis of statistical data and for building regression models. Ray suggested a series of simple statistical tests based on an analysis of the variance of the residuals (i.e., differences) between repeated crash test acceleration histories.(35) If two time histories are assumed to represent the same event, the differences between them (i.e., the residuals) should be attributable only to random experimental error. Hence, if the residuals are truly random, they should be normally distributed around a mean error of zero (i.e., the typical bell-shaped Gaussian distribution). If the mean error is not zero or the error distribution does not conform to random experimental error, then it can reasonably be concluded that there is some underlying systematic error (i.e., there is some physical reason that the curves are different). The assumption that the residuals are normally distributed about a mean of zero can be examined by means of a paired t-test performed with the mean and standard deviation of the residuals:

T = \frac{\bar{e}}{\sigma / \sqrt{n}}

where \bar{e} is the average residual between the two curves, \sigma is the standard deviation of the residuals and n is the number of paired samples. For convenience in comparing different types of impacts, the average residual and the standard deviation of the residuals may be divided by the maximum observed experimental value (e.g., the peak measured acceleration) to obtain the relative average residual error, e_r, and the relative standard deviation of the residual errors, \sigma_r:

\bar{e}_r = \frac{\sum_{i=1}^{n} \left( m_i - c_i \right)}{n \cdot m_{\max}}

\sigma_r = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{m_i - c_i}{m_{\max}} - \bar{e}_r \right)^2}

When two time histories represent the same physical event, both time histories should be identical such that \bar{e} and \sigma are zero, but this is almost never the case in practical situations, since experimental error causes small variations between tested responses. The conventional T statistic provides an effective method for testing the assumption that the observed \bar{e} is close enough to zero to represent only experimental error. In fact, the t-test indicates whether the differences between the two responses can reasonably be attributed to normal experimental error without having a series of repeated tests.

One of the biggest advantages of the t-test is that it requires only two curves: a test curve and a simulation curve. In order to correctly evaluate the residuals, it is important that the two time histories be correctly paired. If there is a random offset between the two time histories, the most probable starting point can be obtained using the method of least squares, analogous to the standard use of least squares in surveying to balance a traverse so that it closes. Although synchronizing the two signals was discussed in the previous section regarding the Knowles and Gear metrics, the method of least squares is a better approach because it is not based on an arbitrary point in the curve, as Knowles and Gear assumed for the TOA metric, but on minimizing the error between the curves over the whole event. With the method of least squares, the residual area is calculated and the curves are shifted in time with respect to each other until the error (i.e., the area of the residuals) is at a minimum. This point is the statistically most likely point of synchronization. Ray implemented the least squares method to find the most likely synchronization point in his computer program CTRP.(35)

The analysis of residuals should be performed only on measured time histories and not on time histories mathematically derived from primary measurements (e.g., velocity obtained from the integration of the acceleration). Certain numerical operations such as integration cause an accumulation of the residuals, which are supposed to be independent from one instant to another. While Ray discusses this explicitly, it is really the case for all the metrics discussed in this section in which sensor data is used to compare curves. Comparisons (and therefore validations) should always be made using the original data from the sensor (e.g., accelerations from an accelerometer, rotation rates from a rate transducer or displacements from a string-pot displacement transducer).

Ray proposed the following acceptance criteria based on an examination of repeated crash tests of rigid poles:

• the average relative residual (i.e., \bar{e}_r) should be less than 5 percent,
• the standard deviation of the residuals (i.e., \sigma_r) should be less than 20 percent and
• the absolute value of the t statistic calculated between the test and simulation curves should be less than the critical t-statistic for a two-tailed t-test at the 5-percent level, t_{0.005,\infty} (90th percentile).

Once the mean and variance of the residual distribution are known, they can be used to plot an envelope around the average response (i.e., the admissible error corridor). In order for the two curves to be considered the same event, the curve obtained from the simulation should always lie inside the corridor defined above. Ray developed the computer program CTRP (mentioned above) that calculates these metrics along with the original Geers metrics and the NARD validation metrics.
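A compact sketch of Ray's three quantities follows; it assumes the two histories have already been synchronized (e.g., by the least squares shift described above).

```python
import numpy as np

def anova_metrics(m, c):
    """Relative average residual, relative standard deviation of the
    residuals and paired t statistic for two synchronized histories."""
    n = len(m)
    resid = (m - c) / np.abs(m).max()   # residuals relative to the peak response
    e_r = resid.mean()                  # relative average residual
    s_r = resid.std(ddof=1)             # relative standard deviation of residuals
    t = abs(e_r) / (s_r / np.sqrt(n))   # paired t statistic
    return e_r, s_r, t

# Ray's criteria: |e_r| < 0.05, s_r < 0.20 and |t| below the two-tailed critical value
```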

The analysis of variance method involves three deterministic metrics with acceptance criteria that are based on the probabilistic distribution of the expected variation between crash tests. Apart from the work in which Ray proposed the ANOVA metrics, the analysis of variance method described above has been applied as a validation procedure by several other authors in the roadside safety literature.(35) In 1998, Sean and Wekezer applied this metric to compare the results of a finite element simulation and a full-scale crash test of a pickup truck striking a G2 guardrail. In 2005, Atahan and Cansiz applied the analysis of variance metrics to compare a baseline finite element model of a full-scale test of a guardrail-to-bridge-rail transition with a pickup truck.(30) Ray also used this method in several projects that will be mentioned later in the discussion of the repeatability of full-scale tests.

Larsson, Petterson and Svensson used Ray's method to develop a JAVA program called "Curve Analyzer v1.0."(130) CurveAnalyzer inputs the accelerations of two curves. The user can perform several adjustments like shifting the baseline, shifting the times (presumably to achieve better synchronization), changing units (e.g., from msec to sec or g's to m/s²), etc. Once the analyst is satisfied with the two curves, the mean and maximum residuals, the standard deviation of the residuals and the correlation coefficient are computed in the "analysis" phase of the program. Finally, the "results" phase of the program presents the values obtained and indicates whether the curves have passed the criteria: the t-test must be satisfied at the 95th percentile confidence level, the correlation coefficient must be greater than 0.8, the mean residual must be less than 20 percent, etc. CurveAnalyzer uses essentially the same acceptance criteria proposed by Ray, with the exception that the mean residual must be less than 20 percent rather than 5 percent. The CurveAnalyzer program has been used to some degree by members of the ROBUST team in comparing curves, although documentation of the results has not been found. One other interesting feature of the program is that it allows the user to restrict the area of analysis to a particular window in time. This way the analyst can look at the whole impact event or some portion of the event, which is often a useful way of finding where in time problems in a comparison occur.

Oberkampf et al. developed a metric they called the "simple validation metric," VM.(16) They define this metric as:

VM = 1 - \frac{1}{N} \sum_{i=1}^{N} \tanh \left| \frac{c_i - m_i}{m_i} \right|

The term (c_i - m_i)/m_i is the point-by-point residual error between the computational and measured curves, normalized by the value of the experimental measurement. The tanh function is used to map the result into the zero-to-one space. The summation is the sum of all the residual errors between the measured and computed results, so dividing the sum by N yields the average residual error.

After using this metric it became apparent that the tanh function did not add much value, so it was dropped. Oberkampf and Barone continued the development of this type of validation metric based on the same concept of statistical confidence intervals used earlier by Ray.(36) They developed two specific metrics: one requiring interpolation of the experimental data and another requiring regression (i.e., curve fitting) of the experimental data. Although they developed it independently, Oberkampf and Barone's method, as will be shown shortly, is essentially identical to the method proposed by Ray and discussed in the previous paragraphs.(35)

As Oberkampf and Barone were interested in an error measure between a deterministic computational result and the mean of a population of experimental measurements, their key issue was the statistical nature of the sample mean of the measured response of the system. In other words, they were particularly concerned with a statistical estimate of a confidence interval for the true mean of the residuals. They first constructed a statistical confidence interval for the population mean using sampled quantities for the mean and standard deviation:

\mu \sim \left( \bar{y} - z_{\alpha/2}\, \frac{s}{\sqrt{n}},\; \bar{y} + z_{\alpha/2}\, \frac{s}{\sqrt{n}} \right)

where \bar{y} and s are respectively the sample mean and standard deviation based on n observations, and z_{\alpha/2} is the value of the standardized random variable z for which the integral of Z from z_{\alpha/2} to +\infty is equal to \alpha/2. From standard statistics, the level of confidence that \mu lies in the interval given by the above equation is 100(1 - \alpha) percent. As the number of observations in an experiment is usually limited, they used a t distribution instead of a normal distribution, resulting in the following interval:

\mu \sim \left( \bar{y} - t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}},\; \bar{y} + t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}} \right)

where t_{\alpha/2,\nu} is the 1 - \alpha/2 quantile of the t distribution for \nu = n - 1 degrees of freedom. The validation metric developed by Oberkampf and Barone was initially applied to the case of a scalar value and then extended to the case of a vector of values (e.g., functions of time or space).(36)

The main idea of the Oberkampf and Barone metric is to estimate the error of the computational result based on the difference \tilde{E} between the numerical solution, y_c, and the estimated mean of the population of experimental results, \bar{y}_m:

\tilde{E} = y_c - \bar{y}_m

The second step in building the metric is to compute an interval containing the true error at a specified level of confidence. To achieve this, the confidence interval expression is rewritten as an inequality relation using the notation above:

\bar{y}_m - t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}} < \mu < \bar{y}_m + t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}}

where \mu is the true mean and s is the sample standard deviation, given by:

s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( y_m^i - \bar{y}_m \right)^2}

Multiplying by -1 and adding y_c to each term, the inequality becomes:

y_c - \bar{y}_m + t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}} > y_c - \mu > y_c - \bar{y}_m - t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}}

Defining the true error as E = y_c - \mu, the inequality relation can be further rewritten as:

\tilde{E} + t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}} > E > \tilde{E} - t_{\alpha/2,\nu}\, \frac{s}{\sqrt{n}}

This inequality represents an interval containing the true error with a level of confidence of 100(1 - \alpha) percent. Using a traditional level of confidence of 90 percent, the metric becomes:

\tilde{E} + t_{0.05,\nu}\, \frac{s}{\sqrt{n}} > E > \tilde{E} - t_{0.05,\nu}\, \frac{s}{\sqrt{n}}

Considering now the case in which the measured and computed values are functions of an input variable x (e.g., acceleration versus time), the following assumptions are necessary:

• The mean value of both the computed and measured results is obtained using a sufficient number of values over the range of the input variable.
• The input variables of the computation are measured much more accurately than the experimental values.
• Two or more experimental replications have been obtained, and each replication has multiple measurements of the variable of interest over the range of the input values.
• The measurement uncertainty is given by a normal distribution.
• There is no correlation or dependence between one experimental replication and another.

With these assumptions, the metric can be rewritten as:

\tilde{E}(x) + t_{0.05,\nu}\, \frac{s(x)}{\sqrt{n}} > E > \tilde{E}(x) - t_{0.05,\nu}\, \frac{s(x)}{\sqrt{n}}

where the standard deviation s(x) is defined as:

s(x) = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( y_m^i(x) - \bar{y}_m(x) \right)^2}

Examination of the metrics developed by Oberkampf et al.,(16) Oberkampf and Barone (36) and Ray (35) shows that they are, in fact, nearly identical. Oberkampf and Barone's mean error term is almost identical to Ray's \bar{e}_r; the difference is that Ray sums all the residuals and then divides by the peak measured response, whereas Oberkampf normalizes the response by the measured value at each point. Both represent the mean value of the residuals between the computed and experimental curves. Likewise, Oberkampf and Barone's s is identical to Ray's \sigma_r; both represent the standard deviation of the residuals. Oberkampf and Barone and Ray then use the same standard statistical test, the t-test, to test the hypothesis that the experimental and computational curves are the same within the expected variation of the residuals.

Velocity of the Residual Errors

Ray and Hiranmayee developed a metric based on calculating the area between two curves. This method examines the point-to-point differences between corresponding points on the simulation and experimental curves.

The difference between the values at a specific time is referred to as the residual or error. If two curves were identical, there would be no area between them and the residuals would be zero. This method arose from using the method of least squares to find the best pairing or synchronization point for two signals: the minimum value of the area between the two curves defines the most likely point of synchronization. The area between the two curves is given by Ray and Hiranmayee as:(28)

V_e = \sum_{i=1}^{n} \sqrt{\left( m_i - c_i \right)^2}\; \Delta t = \sum_{i=1}^{n} \left| m_i - c_i \right| \Delta t

In the case of signals representing the acceleration time histories of a simulation and a full-scale test (i.e., c_i and m_i, respectively), the area under an acceleration time history has units of velocity, so the area between the acceleration curves will also have units of velocity. In order to obtain a non-dimensional measure of the residual error between the two time histories, the area of the residuals can be divided by the initial impact velocity, V_0, which should be the same in both test and simulation:

V_e^r = \frac{\sum_{i=1}^{n} \left| m_i - c_i \right| \Delta t}{V_0}

Smaller values of V_e^r indicate smaller residual errors between the physical test and the simulated event, with a value of zero indicating identical point-to-point responses.
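The residual velocity reduces to a few lines of code, as in this sketch (uniform sampling assumed):

```python
import numpy as np

def residual_velocity_ratio(m, c, dt, v0):
    """Non-dimensional velocity of the residual errors between two
    acceleration histories, normalized by the initial impact velocity v0."""
    v_e = np.sum(np.abs(m - c)) * dt   # area between the two acceleration curves
    return v_e / v0
```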

Stochastic Methods

In the previous section describing the ANOVA techniques and metrics, there was an implicit assumption that the computational results are deterministic (i.e., the same result occurs given the same input) while the experimental results are probabilistic (i.e., substantially identical experiments can result in a range of outputs due to random experimental error). Oberkampf and Barone,(36) Oberkampf et al.(16) and Ray (35) all developed metrics that examine the computational results to see if they fall within the expected probabilistic range of experimental results.

Computational results, however, need not be deterministic. Every computed result is based on input data like material properties, geometry and initial conditions. If these input parameters are varied, the results of the computation will likewise vary. For example, an analyst may use the yield stress reported in an engineering handbook to develop a computational model. Although the real physical material exhibits random variation in its properties, the analyst usually just assumes the mean value of the parameter (i.e., the handbook value). Stochastic methods, on the other hand, treat the inputs to the finite element model as parameters that can experience random variation. If mean values are used to perform the simulation, as is usually the case, the result is a deterministic mean response. If the input variables are allowed to vary randomly, as they do in the physical world, the simulation response will vary as well. The idea of stochastic variation of parameters is a key component of computational optimization. If three computations were performed at, say, the minimum, mean and maximum yield stress, the response of the simulation would likewise vary. If the variation of the computation is compared with the expected variation in the experiments, a stochastic comparison technique is being used.

The brute-force way of performing a Bayesian or stochastic analysis is to first characterize the probabilistic distributions of all the variables in the model (i.e., Young's moduli, yield strains, rate effects, densities, etc.). Next, specific parameter values are randomly selected using a Monte Carlo technique, a trial simulation is performed and the response of interest is generated (e.g., an acceleration time history). The process is repeated with another set of randomly selected variations and the simulation is performed again, continuing until the analyst has adequately characterized the response. The result is a variety of acceleration time histories that are similar but vary in random ways, much as experiments do. Obviously, this method requires a great deal of computer run time and a great deal of analyst intervention, so it is not very practical except for small problems. Techniques for estimating the response under Bayesian parameter variations that somewhat reduce the computational demand have been developed in the field of structural optimization. Several authors have proposed techniques and processes for performing such analyses for crashworthiness problems, but there are relatively few examples of practical applications and none dealing with large models as would be typical of roadside safety applications.(37,38)

Faravelli used this technique to perform a stochastic analysis of the front frame of a vehicle, varying only the angles of the frame horn to observe the change in responses.(39) Patzner did something similar when he examined the response variation due to changes in soil parameters and timber material properties for guardrail posts.(40) He varied the soil and wood properties in a model of the MELT guardrail terminal and computed the range of responses. The result, summarized in Figure 7, is a plot of the soil density versus the maximum guardrail deflection at the rail height. He found that certain vehicle responses were associated with particular combinations of the soil and wood material properties. If, for example, dense Grade 1 wood posts were used in a poorly compacted (i.e., low density) soil, the vehicle tended to snag on the guardrail posts. If weak Grade 2 wood posts were used in over-consolidated (i.e., high density) soils, the chance of guardrail rupture was maximized. In Patzner's case only a half dozen or so material parameters were varied, but the run time needed to produce the plot in Figure 7 was significant.

Figure 7. Effect of soil and wood material parameter variation on the performance of the MELT guardrail terminal.(40)

While stochastic methods provide very interesting and useful information about the range of results of a computation, they are probably not practical for large models at this time. The amount of work and run time required to vary even a few parameters would be prohibitive, and most analysts would likely refuse to perform such analyses. Stochastic variation is mathematically very similar to optimization, and many software vendors are developing improved optimization tools, such as LS-OPT, that may hold promise in the future. For now, however, stochastic methods do not appear to be practical for roadside safety validation and verification efforts.

COMPARISON OF METRICS AND CRASH TEST REPEATABILITY

In addition to the papers described in the previous sections proposing and defining validation metrics, there have been a few papers that compare several validation metrics. Papers and presentations by Schwer, Moorecroft and Ray have each examined the utility and fidelity of several metrics, and a summary of these papers is presented in this section.

At the heart of all discussions of validation and verification metrics is the issue of repeatability. Several authors have examined different validation metrics by calculating the metrics for multiple experiments and comparing the results. Such an exercise provides insight into appropriate acceptance criteria and the range of values that should be expected. Any series of physical tests of a mechanical system will produce some variation in response. No two experiments or crash tests are ever identical, so when comparing a numerical solution to a physical test the requirement should be that the numerical experiment (i.e., the computer simulation) cannot be distinguished from the responses of the physical experiments. If the computed response cannot be distinguished from an array of "identical" physical responses, then the computed response is as good as another physical test, and such a computer simulation would be validated by the comparison to the physical test responses.

This technique is widely used in biomechanics, where a series of physical tests is used to develop corridors for the typical human response; as long as a computational model remains within the corridor, it is considered a good predictor of the physical system. For example, Ray and Silvestri developed an LSDYNA model of the lower extremities of a 50th percentile male for use in frontal crash simulations.(41) The model was validated for three different failure conditions: fractures of the femoral head, fractures of the femoral condyles and fractures of the pelvis. The response corridors for the femoral head simulations are shown in Figure 8. The outer lines represent the range in response for 15 physical tests, the center line represents the averaged response from the physical tests and the thicker line represents the finite element simulation response. As shown in Figure 8, the computer simulation response is at the lower end of the physical response corridors, but since it lies within the corridors it is a valid response. While ideally it might be better for the simulation response to follow the mean response, there would be no way to distinguish a simulation response at the bottom of the corridor from a test response at the bottom of the corridor.

Figure 9 shows another example of the use of response corridors in biomechanics. Ray, Hiranmayee and Kirkpatrick developed an LSDYNA model of the US SID side impact anthropometric test device (ATD).(42) The purpose of using an ATD in a crash test is to estimate the likely response of a human in a similar crash environment. Ray et al. developed a model of the ATD used in side impact crash tests and then compared the response of the ATD model to the response corridors published by NHTSA for calibrating the physical ATD. These response corridors are based on many repeated physical tests. The SID ATD model was validated against two calibration test scenarios: one with the impact on the pelvis and the other at the mid-thorax. Two versions of the computational model are shown: an original version and a version improved by Ray, Hiranmayee and Kirkpatrick. As shown in Figure 9, the improved SID ATD model resulted in better, though not perfect, agreement with the test corridors. The response was within the test corridors through the end of the first and most important peak, and the computational response satisfied the NHTSA ATD acceptance criteria.

Response corridors are very useful, but they require repeated physical test data that is most often not available in roadside safety. Unfortunately, there is little information about the repeatability of full-scale crash tests in roadside safety: repeated tests are seldom performed, so constructing response corridors based on physical tests is not generally possible. There have been a few exceptions, however, that are relevant to establishing verification and validation procedures.

48 Figure 8. Response corridors for the femur force of a 50th percentile male in a frontal impact for 15 physical tests and one LSDYNA simulation.(41)

Figure 9. SID ATD response corridors for the lower rib acceleration (LURY) compared to the response of two finite element models.(42)
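The corridor-based validation described above lends itself to a simple automated check. The following is a minimal sketch, not a published tool: it builds a corridor from a set of repeated test curves (assumed to be resampled onto a common time base) and reports how much of a simulation response stays inside the envelope. The array contents are invented for illustration.

    import numpy as np

    def corridor_check(test_curves, sim_curve):
        # test_curves: 2-D array, one row per repeated physical test,
        # all resampled onto the same time base.
        # sim_curve: 1-D array on that same time base.
        lower = test_curves.min(axis=0)   # lower corridor bound
        upper = test_curves.max(axis=0)   # upper corridor bound
        inside = (sim_curve >= lower) & (sim_curve <= upper)
        return inside.mean()              # fraction of samples inside

    # Invented femur-force histories (kN) for 15 repeated tests:
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 0.1, 101)
    tests = np.array([(6.0 + rng.normal(0.0, 0.4)) * np.sin(np.pi * t / 0.1)
                      for _ in range(15)])
    sim = 5.8 * np.sin(np.pi * t / 0.1)
    print(f"{corridor_check(tests, sim):.0%} of samples inside the corridor")

A simulation that remains entirely inside the corridor would, by the argument above, be indistinguishable from another physical test.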

49 Recently, an activity was performed as a part of the ROBUST project in Europe in which a series of full-scale tests was repeated to study the sources of experimental error and repeatability in roadside safety tests.(43) The same rigid barrier (i.e., a 0.8-m tall vertical flat wall) was tested with a 900-kg passenger car at 100 km/h and a 20° angle. The following two variations were considered:
• The same rigid barrier with the same model of new car, to highlight the influence of the test house procedures and tolerances.
• The same rigid barrier with the vehicles normally used inside the test houses (note: EN1317 does not require the use of exactly the same vehicle), to investigate influences arising from different vehicle models.
Each of the five test agencies calculated the ASI, THIV and PHD for both their own crash test and the tests performed at the other agencies. The results are shown in Table 3. Results on the main diagonal represent each test agency's evaluation of its own test, whereas the off-diagonal terms represent evaluations of other agencies' tests. The purpose of this part of the activity was to assess how consistent the software in each test agency was with the others. The statistics of the data are shown at the lower left of Table 3. Based on Table 3, the range in, for example, the THIV (similar to the Report 350 OIV) was between 31 and 34 m/s with a mean value of 32.6 m/s. This might suggest that THIV values are only accurate to within +/- 1.6 m/s. Likewise, the mean PHD (similar to the Report 350 ORA) was found to be 14.1 g's, and it varied from a minimum of 10.78 g's to a maximum of 18.27 g's. If used as a quantitative metric, the PHD acceptance criterion might be +/- 4.2 g's. A computation that resulted in a THIV that was within 1.6 m/s of the experimentally measured THIV and a PHD that was within 4.2 g's of the experimentally measured PHD would be considered as good as another test and thereby validated. After this initial phase, a second round of tests was performed in which the same test on a deformable barrier was repeated four times (always with a 900-kg vehicle at 100 km/h and 20°) to investigate the influence of soil. After reviewing the data from these first phases, recommendations were made to reduce scatter, starting from the procedures used to install the instrumentation and acquire the signals and ending with the software used to evaluate the severity indices. These recommendations were then verified in a third series of tests, in which a total of 12 tests on the same barrier were performed, and they are now being incorporated into the next revision of EN1317. This ROBUST project activity shows how repeated tests can be used to establish acceptance criteria for domain-specific parameters based on the probabilistic variation of repeated crash tests.

50 Table 3. Results of a ROBUST round robin crash test activity involving a 900-kg car striking a vertical concrete wall.(43)

(In the original table, columns give the test laboratory performing the test, A through E, and rows give the laboratory evaluating the data. Not every laboratory evaluated every test; the values below are listed in the order they appear in the original.)

Evaluated by A:  asi 1.83, 1.84, 1.91, 1.87;  thiv 32.40, 31.40, 32.80, 33.80;  phd 17.70, 11.90, 11.00, 18.27
Evaluated by B:  asi 1.79, 1.85, 1.88, 1.85;  thiv 31.44, 32.57, 33.29, 32.63;  phd 17.22, 12.26, 10.78, 15.00
Evaluated by C:  asi 1.81, 1.84, 1.91;  thiv 32.78, 32.40, 34.20;  phd 17.74, 11.95, 11.40
Evaluated by D:  asi 1.87, 1.85, 1.86, 1.84;  thiv 33.67, 32.42, 32.82, 32.40;  phd 18.27, 13.72, 10.94, 15.24
Evaluated by E:  asi 2.17;  thiv 31.05;  phd 12.93

Summary:  asi mean 1.87, std. dev. 0.085, min 1.79, max 2.17;  thiv mean 32.6, std. dev. 0.859, min 31.05, max 34.2;  phd mean 14.1, std. dev. 2.888, min 10.78, max 18.27
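The tolerances quoted in the preceding paragraph follow directly from the scatter in Table 3. The following is a minimal sketch of that arithmetic, assuming the THIV and PHD samples are those transcribed from the table; the function name is invented for illustration.

    import numpy as np

    def acceptance_range(values):
        # Mean and worst-case deviation from the mean for a set of
        # repeated-test metric values.
        x = np.asarray(values, dtype=float)
        mean = x.mean()
        half_width = np.abs(x - mean).max()   # largest deviation from mean
        return mean, half_width

    # THIV (m/s) and PHD (g) values transcribed from Table 3:
    thiv = [32.40, 31.40, 32.80, 33.80, 31.44, 32.57, 33.29, 32.63,
            32.78, 32.40, 34.20, 33.67, 32.42, 32.82, 32.40, 31.05]
    phd = [17.70, 11.90, 11.00, 18.27, 17.22, 12.26, 10.78, 15.00,
           17.74, 11.95, 11.40, 18.27, 13.72, 10.94, 15.24, 12.93]

    m, h = acceptance_range(thiv)
    print(f"THIV: mean {m:.1f} m/s, acceptance about +/- {h:.1f} m/s")
    m, h = acceptance_range(phd)
    print(f"PHD:  mean {m:.1f} g,  acceptance about +/- {h:.1f} g")

Whether the half-width should be the worst-case deviation, as sketched here, or a multiple of the standard deviation is itself a judgment about how much scatter a "validated" simulation should be allowed.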

51 Brown performed six essentially identical tests of Ford Festivas striking an instrumented rigid pole, which Ray then used to explore the issue of full-scale crash test repeatability.(44) Each of the six tests involved the same vehicle type, the same nominal impact conditions and the same struck object. They were performed at the same laboratory, and the same data reduction techniques were used to process the test data. While the “inputs” were as identical as practically possible, there was still some variation in the response for this series of tests. In an unpublished paper, Ray calculated a variety of metrics, including the time-domain NARD Validation Manual metrics, Geers' original MPC metrics and the analysis-of-variance metrics, for the six identical tests performed by Brown.(44) The results are shown in Table 4.

Table 4. Comparison of the six tests for the Ford Festiva rigid pole test.(35)

Parameter                 91F049   92F032   92F033   94F001   94F002   94F011
Analysis of residuals
  e_r                      3.53%    3.32%    3.73%    4.32%    3.98%    4.42%
  σ_r                     16.91%   18.53%   16.59%   17.81%   14.43%   16.92%
  T_p                      3.24     2.79     3.50     3.75     4.28     4.05
  p(T > T_p)               0.0012   0.0052   0.0022   0.0004   0.0002   0.0002
  V_r                      5.08%    5.06%    4.92%    5.13%    4.61%    4.82%
NARD validation
  ΔM_0^r                   12%      11%      13%      14%      14%      14%
  ΔM_1^r                   3%       5%       0%       3%       1%       4%
  ΔM_2^r                   20%      17%      6%       5%       13%      7%
  ΔM_3^r                   65%      40%      16%      21%      48%      36%
  ΔM_4^r                   422%     116%     50%      77%      277%     206%
  ΔM_5^r                   284%     3820%    284%     1330%    302%     327%
  ΔRMS_log^r               72%      74%      79%      75%      72%      73%
  ρ                        0.92     0.92     0.92     0.92     0.94     0.90
  ΔAB_rel                  0.29     0.32     0.32     0.35     0.26     0.31
  D                        0.0032   0.0088   0.0067   0.0079   0.0046   0.0098
Geers' MPC
  M                        -1%      6%       8%       5%       0%       1%
  P                        4%       4%       4%       4%       3%       4%
  C                        4%       7%       9%       6%       3%       4%

Table 4 shows the typical range of a variety of metrics for the six identical frontal rigid pole tests. The analysis-of-variance technique resulted in mean residual errors, e_r, between 3.32 and 4.32 percent with standard deviations, σ_r, between 14 and 18 percent. The t-scores ranged between 2.79 and 4.28, which in turn represent a probability of not being the same

52 event of less than 0.52 percent at the 90th percentile. Ray suggested the following acceptance criteria based on these results:
• The average relative residual (i.e., e_r) should be less than 5 percent,
• The standard deviation of the residuals (i.e., σ_r) should be less than 20 percent, and
• The t-statistic should be calculated between the test and simulation curves. The absolute value of the calculated t-statistic should be less than the critical t-statistic for a two-tailed t-test at the 5-percent level, t_0.005,∞ (90th percentile).
Referring again to Table 4, the zero through second relative moments were all less than 20 percent, indicating that the value recommended for acceptance in the NARD Validation Manual was a good choice. The higher order moments (i.e., 3rd through 5th), however, varied widely and seem to have little diagnostic value since they are not able to detect that these six tests are identical. The correlation coefficient was generally over 0.9 for all the tests, and the ΔRMS_log^r values were generally about 72 percent, indicating that these metrics correctly detected the similarity of the curves. Ray used Geers' original MPC metrics as shown in Table 4. In all six crash tests, the magnitude, phase and combined metrics were less than 10 percent. This indicated that the MPC metrics did a good job of detecting that these six curves represent the same event and that an acceptance value of 10 percent would be a reasonable choice in using these metrics in roadside safety. One of the drawbacks of Ray's work was that these six tests involved one of the simplest, most repeatable types of tests in roadside safety. The vehicle was striking the rigid pole head-on, the vehicles were all of the same make and model and were from similar model years, and the struck object was a rigid pole, so most of the variability was isolated in the vehicle. The variations in the impact velocity and offset distance were also factors. The frontal crush characteristics of these vehicles are quite sensitive to variations in the offset distance between the centerlines of the vehicle and the pole. Crash tests involving, for example, flexible barriers, a range of vehicles and multiple crash test laboratories would likely exhibit much more scatter. Schwer has also examined the utility of the Sprague and Geers and the Knowles and Gear metrics in a recent paper.(32) He calculated the Sprague and Geers and the Knowles and Gear metrics to compare experimental and numerical velocity wave forms in a geological medium subjected to energy from a nearby source. Figure 10 shows the three simulation curves and the experimental curve, and Table 5 shows the calculated values of the Sprague and Geers and Knowles and Gear metrics.

53 Figure 10. Comparison of measured velocity wave form with three simulation results.(32)

Table 5. Metric components for the three simulation curves of Figure 10.(32)

                    Sprague and Geers        Knowles and Gear
                    M      P      C          M      P      C
Blue (Squares)      0.60   0.08   0.61       0.54   0.17   0.50
Green (Diamonds)    0.26   0.13   0.29       0.27   0.23   0.26
Red (Triangles)     0.45   0.15   0.47       0.48   0.21   0.45

Qualitatively, the green (diamond) curve is the best comparison with the experiment (i.e., the solid black line). As shown in Table 5, both methods correctly identify the green (diamond) curve as having the best magnitude agreement of all three curves. Both methods also indicate that the blue (square) curve has the best phase agreement. Further, both methods indicate the green (diamond) curve is the best overall fit. An important feature of all three of the Geers family of metrics is that they are all “non-symmetric” since they produce different values when the measured and calculated responses are interchanged. This is because the measured value is always considered the true value in these metrics, and the variation is always made in comparison to the experimentally observed values. All the Geers metrics are deterministic shape metrics, and they have been used with a variety of ad hoc acceptance criteria, typically in the 10 to 20 percent range.
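The Sprague and Geers components can be computed directly from two sampled time histories. The following is a minimal sketch of the published formulas, assuming both curves are sampled on the same uniform time base; the example signals are invented.

    import numpy as np

    def sprague_geers(measured, computed):
        # Magnitude (M), phase (P) and combined (C) components for two
        # curves sampled on a common, uniform time base.
        m = np.asarray(measured, dtype=float)
        c = np.asarray(computed, dtype=float)
        mm, cc, mc = np.dot(m, m), np.dot(c, c), np.dot(m, c)
        M = np.sqrt(cc / mm) - 1.0
        P = np.arccos(np.clip(mc / np.sqrt(mm * cc), -1.0, 1.0)) / np.pi
        return M, P, np.hypot(M, P)

    # Invented example: a 10 percent overdriven, slightly lagged response.
    t = np.linspace(0.0, 0.2, 401)
    test = np.sin(2 * np.pi * 10 * t) * np.exp(-10 * t)
    sim = 1.10 * np.sin(2 * np.pi * 10 * (t - 0.002)) * np.exp(-10 * t)
    M, P, C = sprague_geers(test, sim)
    print(f"M = {M:+.1%}, P = {P:.1%}, C = {C:.1%}")

Swapping the two arguments changes M, which is the non-symmetry noted above: the measured curve is treated as the reference.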

54 Of the metrics examined by both Ray and Schwer, the MPC metrics (i.e., the original Geers, Sprague-Geers and Knowles-Gear), the analysis-of-variance metrics (i.e., the metrics by Ray and those by Barone and Oberkampf), the zero through second order relative moments, the correlation coefficient and the relative RMS all show promise as validation metrics. The lower order moments, relative RMS and correlation coefficient can all be shown to be closely related to one of the other metrics, however, so the best candidates for future use appear to be some version of the MPC metrics or the analysis-of-variance metrics.

HIERARCHICAL MODELING

A complex finite element model is a hierarchy of parts, sub-assemblies and assemblies as shown in Figure 11. This figure represents a small car impact with the so-called European “super rail.” The super rail is composed of an unusually large number of parts for a longitudinal barrier. At the top level is the whole model comprising all the parts of the vehicle and barrier, boundary conditions, initial conditions, etc. The vehicle and the barrier constitute the two main assemblies of the complete model. Often, these major assemblies are separately developed and combined for a specific simulation. For example, the small car vehicle model has been used in numerous other simulations, and the model of the super rail barrier would be used in impact simulations with other vehicles. Each assembly is composed of a variety of subassemblies. In the case of Figure 11, the barrier is composed of a top-rail subassembly, a mid-rail subassembly, a post subassembly and a rub-rail subassembly. Each of these can be further reduced to a series of parts. For example, the mid-rail subassembly is composed of the guardrail component, a spacer, a blockout and a backup rail. Each of these parts is composed of some geometry and material properties. The part level is the lowest level in the typical model. In addition to these components, important features like contact interfaces, connections, boundary conditions and initial conditions must be specified. Calibration, verification and validation can be performed on each of these different levels of the model hierarchy. As an example of hierarchical modeling, researchers at Battelle Memorial Institute are currently working on a project for the National Transportation Research Center (NTRCI) involving the development of a tractor-trailer finite element model for use in roadside safety research. This model was developed for the purpose of simulating tractor-trailer crash events with particular emphasis on those crash events involving roadside safety hardware (e.g., bridge rails, median barriers, etc.). As part of that study, certain components of the model have been developed and their response validated through comparison with laboratory tests, including the leaf spring in the front suspension assembly. In the process of developing a model, consideration must be given to both the computational efficiency and the accuracy of the model results. As computational efficiency improves, the accuracy of the results tends to degrade; thus, one must strive to develop a model that is as accurate as possible at a computational cost that can be afforded. For any given component in a model, the increase in the number of degrees of freedom resulting from a

55 higher fidelity in geometric representation or higher order element formulation will not likely affect the overall analysis time of a simulation. But the model developer must consider the effects of element size on the time step required for the analysis. For example, if a component is modeled with element dimensions or properties that require a time step smaller than that of the current model, the analysis time for the full model could be increased significantly.

Figure 11. Hierarchy of a typical roadside hardware finite element model.

For example, a leaf-spring assembly was digitized to develop a three-dimensional geometric model, and a finite element model was developed based on two different mesh sizes: (i) an element size of 20 mm by 20 mm and (ii) an element size of 10 mm by 10 mm. A rendering of the geometric model is shown in Figure 12. The finite element model of the component is shown in Figure 13 (where the element size is 20 mm by 20 mm). The model consists of 1,380 elements (i.e., 6,900 DOF), and the critical time step for the model is 1.4×10^-6 seconds, which is consistent with the critical time step of the overall vehicle model.
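The time-step consideration discussed above comes from the stability limit of explicit codes such as LSDYNA: the stable time step is roughly the smallest characteristic element length divided by the material wave speed. The following is a rough back-of-the-envelope sketch, not LSDYNA's exact computation (which depends on element formulation and applies a safety scale factor); the steel properties are nominal assumed values.

    import math

    E = 210.0e9    # Young's modulus of steel, Pa (nominal)
    nu = 0.3       # Poisson's ratio (nominal)
    rho = 7850.0   # density, kg/m^3 (nominal)

    # Approximate wave speed for a thin shell element:
    c = math.sqrt(E / (rho * (1.0 - nu**2)))   # about 5,400 m/s

    for L in (0.020, 0.010):                   # element size, m
        dt = L / c                             # Courant-limited time step
        print(f"{L * 1000:.0f} mm element: dt ~ {dt:.1e} s")

Halving the shell element size roughly halves the stable time step and quadruples the element count, so a uniformly refined mesh costs roughly eight times as much for the same simulated time. This is consistent with the loss of economy reported below for the 10-mm leaf-spring mesh.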

56 The response of the model subjected to a compressive load was compared to the results from a test conducted on the physical component to assess the accuracy of the model. Figure 14 shows a photograph of the test with the deformed geometry of the leaf spring under a load of 6,000 lbs. Figure 15 shows the results of the finite element analyses for each of the two cases compared to the results from the laboratory test. Also included on the graph is the force-displacement characterization of a generic tractor suspension used by other Battelle staff in vehicle dynamics simulations.

Figure 12. Three-dimensional rendering of the leaf-spring suspension for a Freightliner FLD 120 tractor.

Figure 13. Finite element model of the leaf-spring suspension for a Freightliner FLD 120 tractor.

57 Figure 14. Laboratory test of a Freightliner FLD120 tractor front suspension leaf spring.

Figure 15. Force-displacement response of the test and simulation of a Freightliner FLD120 tractor front suspension leaf spring. (The curves correspond to the shell model with 20-mm elements, stiffness 1,313 lb/in (230 N/mm); the refined mesh model with 10-mm elements, stiffness 1,265 lb/in (221 N/mm); the test data; and the generic suspension characterization used in TruckSim, stiffness 1,374 lb/in.)

As can be ascertained from the results, the simulation results improve as the finite element mesh is refined; however, the economy of the solution is greatly reduced for the refined mesh model, where the critical time step is approximately 1.0×10^-6 seconds. Based on a qualitative evaluation of the data, both finite element models were considered to produce acceptable results and were thus both considered “valid” for use in the tractor model. In a study by Orengo et al, a finite element model of a pickup truck tire was developed and was validated by comparing the response of the tire model subjected to various load cases with

58 experimental data measured in the laboratory.(45) The model was further validated by comparison of simulation results with full-scale, non-tracking impacts of a pickup into various types of curbs. Although the tire model was not a detailed representation of all the components and structures in a tire, the critical parts of the structure that affect the overall tire mechanics were incorporated into the model, such as the bead coils, radial fibers, rubber sidewall, under-belt radial fibers, steel belt and tire tread. Two laboratory tests were conducted to assess the fidelity of the tire model. The first test was conducted to measure the force vs. displacement required to break the air-seal and deflate the tire (details of the load applicator and boundary conditions were not described, but are shown in Figure 16).

Figure 16. Test setup for the tire de-beading test.

Figure 17 shows the results computed in the finite element analysis and the results measured in the laboratory tests. A quantitative, statistical validation of the results was not conducted, but based on a qualitative evaluation of the model's results compared to the test, the model was considered acceptable (taking into consideration the repeatability of tests, possible error in assumptions made in modeling boundary conditions and the specific application of the model).

59 Figure 17. Comparison of the force-deflection response of a pickup truck tire subjected to a quasi-static horizontal de-beading load. (The figure plots load (N) and work (J) versus deflection (mm) for both the experiment and the simulation.)

The second test was intended to measure the global stiffness of the inflated tire with a vertical load applied to the rim. The test was performed using a hydraulic uniaxial test machine (Instron 8803) as shown in Figure 18. The tire-rim assembly was loaded by a displacement time-history applied to the rim with the tread surface of the tire pressing against a flat steel plate. The tire-wheel assembly was loaded up to 13.345 kN, approximately twice the static load of the truck.

Figure 18. Steel hub and fixture used in the compression test.

60 Again, a quantitative validation of the model was not conducted, but the model was considered to be valid based on a qualitative comparison of the tire responses like those shown in Figure 19. The tire model was implemented into the WPI version of the National Crash Analysis Center (NCAC) Chevrolet C2500 pickup model. The modified vehicle model was used in the analyses of non-tracking impacts into various roadside curbs to study the curb-tire interaction and vehicle stability. Figure 20 shows the behavior of the tire model in the finite element analysis simulation of the truck model impacting a four-inch sloped curb compared to the results of the full-scale test. The tire model accurately captures critical events during the impact with the curb, including tire deflation and the behavior of the deflated tire as it interacts with the curb and the ground surface. The behavior of the tire model was considered acceptably valid for the given application.

Figure 20. Sequential views from a full-scale test and simulation showing tire response during impact with a curb.(45)

Figure 19. Comparison between the test and the simulation of the deformed shape at different loads.(45)

61 It should be noted that this model adequately simulates the behavior of the tire in the applications that it was developed for; namely, the large-scale, overall response of the vehicle during impact with other structures such as roadside hardware, where the tires are only one of many intricate parts that must be modeled. This same model may not be appropriate for investigating stresses and deformations in order to evaluate the performance of commercial tires, where one must take into account ply orientation, ply overlapping and the mechanical properties of the different layers of the composite structure of the tire.

VALIDATION IN THE ROADSIDE SAFETY LITERATURE

This section summarizes most of the finite element simulation work that has been done in roadside safety since the mid-1990s with respect to validating models. There has been a rapid expansion in activities over the past decade in using software tools like LSDYNA in roadside safety. Validation has been an issue nearly from the beginning of the roadside safety community's efforts to use modern general purpose contact-impact codes. Papers and reports are described in this section to show how the authors treated validation issues.

Materials and Components

Material models are the foundation of any finite element simulation. A typical model of a vehicle striking a barrier may easily have 100 or more material models using a variety of constitutive models and a variety of parameters. Wright and Ray performed a series of experiments to determine input parameters for LSDYNA to replicate the behavior of AASHTO M-180, the steel commonly used in guardrails.(46) First, material properties of standard AASHTO M-180 guardrail steel, such as the yield stress and the plastic stress-strain curve, were obtained from coupon test experiments. These quantities were then incorporated into material number 24 (*MAT_PIECEWISE_LINEAR_PLASTICITY) in LSDYNA. The simulation results compared favorably with the physical test results. At that time, Wright and Ray incorrectly called this activity a validation exercise, whereas according to today's ASME V&V definitions it would more properly be a calibration exercise, since the physical tests were used to estimate the parameter values. Gentry and Bank investigated the experimental and simulated response of steel W-beam guardrail components to pendulum impact loadings for velocities of 20 km/h, 30 km/h, and 35 km/h.(47) The guardrails were supported by four posts and were cable-anchored at each end to ensure that the full tension capacity of the rail could be developed. Experiments carried out with a 912-kg impact pendulum were compared with LSDYNA finite-element simulations of the impact events. Acceleration, velocity, and displacement time histories were compared for the pendulum impact test and the LSDYNA simulations. Comparisons of the experimental and simulation acceleration records were made using the NARD time-domain statistics. The comparative statistics showed that the simulations were in good agreement with the

62 experiments. Results show that the guardrail was close to its tension yield point when impacted at an initial velocity of 35 km/h. Tabiei investigated the potential of using fiber reinforced composite materials for applications in highway structures.(48) The feasibility and application of composite materials were analyzed through a series of impact tests on laboratory specimens, and numerical simulations were performed to replicate the results. First, the impact characteristics and failure modes of pultruded box-beams under impact loads were explored, and the loading rate sensitivity of pultruded box-beams with different resin systems was investigated using physical tests. In order to compare the results for the composite guardrails to conventional steel guardrails, a series of pendulum experiments were performed at the FHWA Federal Outdoor Impact Laboratory (FOIL). Next, numerical simulations of a series of impact tests of steel guardrails replicating the experiments were performed. As shown in Figure 21, a finite element model of the pendulum fixture of the FOIL was developed to (i) determine the feasibility of simulating full-scale impact tests of guardrails made of isotropic and anisotropic materials and (ii) identify the critical parameters governing a successful simulation of a test fixture pendulum impact. Qualitative comparisons, such as the deformed shape of the barrier and the acceleration, velocity and deformation time histories, were used to validate the numerical model.

Figure 21. Finite element model of a W-beam (right) and time history comparison of the simulation and experiment (left).(48)

Eskandarian et al describe the development of a finite element model of the FOIL bogie vehicle and, in particular, its honeycomb impact nose.(49) The effort consisted mainly of correlating the honeycomb response in the simulations and experiments such that the model produced realistic results. Honeycomb material parameters were developed for DYNA3D and

63 force-deflection data from compression test experiments were used to calibrate the honeycomb material model. After successfully calibrating the honeycomb material model, a model of the bogie with a crushable nose containing the calibrated honeycomb material parameters was constructed. Simulations of the bogie model were then compared to full-scale crash test experiments with the FOIL bogie vehicle. Acceleration, velocity and displacement time histories, as well as sequential pictures of the crush of the honeycomb nose obtained from bogie crash testing, compared favorably with those of the simulation. These comparisons were used to qualitatively validate the honeycomb material model and the bogie model. Numerical and experimental data were used to validate an aluminum material model as presented in a paper by Langseth et al.(50) A material model for LSDYNA was validated against static and dynamic tests on aluminum tubes. An LSDYNA model of the tubes was developed and subjected to the same impact scenarios. Good predictions of the response of the tubes were found by using isotropic elasticity, the von Mises yield criterion, the associated flow rule and non-linear isotropic strain hardening. The plastic material parameters such as the initial yield stress and the strain hardening were determined from uniaxial tensile tests. Borvik tested notched specimens of the structural steel Weldox 460 E at high strain rates in a series of Split Hopkinson Tension Bar tests.(51) The aim was to study the combined effects of strain rate and stress triaxiality on the strength and ductility of the material. The force and elongation of the specimens were measured continuously by strain gauges on the half-bars, while the true fracture strain was calculated based on measurements of the fracture area. Optical recordings of the notch deformation were obtained using a digital high-speed camera system. Using the digital images, it was possible to estimate the true strain versus time at the minimum cross-section in the specimen. The ductility of the material was found to depend considerably on the stress triaxiality. Non-linear finite element analyses of the notched tensile specimens at high strain rates were then carried out using LSDYNA. A computational material model including viscoplasticity and ductile damage was implemented in LSDYNA and calibrated for Weldox 460 E steel. Du Bois et al discussed modeling laminated safety glass for vehicle windshield applications.(52) The paper compared stress-strain data obtained from different material models to experimental data to validate a numerical model of laminated safety glass. As a continuation, Timmel et al discuss the validation of two different laminated glass models.(53) Force-displacement data obtained from four-point bending tests, as well as acceleration time histories obtained from the impact tests, were used to illustrate the success of numerical simulations in representing the actual behavior of laminated glass. In 2004, Du Bois et al presented a comparative review of material models for polymers subjected to impact loading.(54) Material laws that allow for fast generation of input data based on uniaxial static and dynamic tensile tests at different strain rates were presented. For thermoplastics, an overview of suitable material laws was given, and techniques for approximately characterizing polymer behavior using the metal plasticity models in LSDYNA were shown. The numerical results

64 were qualitatively validated using visual comparisons of acceleration time histories. In 2005, Sun et al presented another study on modeling the failure behavior of laminated glass windscreens.(55) A special element structure with three layers (shell/volume/shell) was used to model the laminated glass windscreen. A fracture criterion for brittle fracture based on the maximum principal stress was applied to model the fracture behavior of glass. The critical fracture stress of glass was determined by curve fitting the failure force measured from static bending tests on laminated glass windscreens. Qualitative comparisons showing the fracture patterns for different load cases were used to validate the numerical models. Atahan and Ross developed an LSDYNA model to evaluate the crashworthiness of a guardrail system with posts made of recycled materials.(56) Material properties for the recycled materials were obtained from laboratory experiments and used to calibrate the LSDYNA material models. Results of a full-scale crash test were used to validate the accuracy of the finite element models used in the simulation study. Qualitative comparisons of deformations and velocity time histories were used to assess the validity of the models. In the mid-1990s, the FHWA sponsored three projects to develop LSDYNA material models that were better suited for use in roadside safety research. Models for soil, wood and concrete were developed that addressed the special needs of roadside safety analysts.(57)(58)(59) The objective of the first of these projects was to develop a soil model that could be used to provide support for components like guardrail posts. Lewis first examined the material models available in LSDYNA and found that all the soil models were designed for confined soils. This meant that for cases like a guardrail post mounted in soil, where the upper surface was unconfined, the material model was unstable.(57) Instead of modifying an existing model, Lewis developed a new model that would be stable under low or no-confinement pressures. Unfortunately, there was very little validation. Only one experiment was used for comparison, and the experimental results were questionable, so the model was never validated. Fortunately, Reid et al were able to evaluate the new soil material model, as reported in 2004.(60) The behavior of the newly developed soil model during post rotation is shown in Figure 22. The model uses 18 parameters to represent the soil material. The focus of the project was (i) estimating the appropriate parameter values through testing or analytical means, (ii) providing an engineering understanding of the parameters and (iii) providing bounds for the effects of varying the parameters. Qualitative comparisons, such as force-deflection and energy-deflection plots as well as sequential pictures obtained from experiments, were used to calibrate the soil material model (i.e., LSDYNA material 147). In 2007, Tong and Tuan also developed and validated a soil model; their work used a viscoplastic cap model for simulating the high strain rate behavior of soils.(61) An associative viscous flow rule was used to represent time-dependent soil behaviors. The viscoplastic cap model was qualitatively validated against experimental data from static and dynamic soil tests. The stress-strain behavior and the wave propagation speed in soil with depth of burial were considered in the validation process. The model was also compared with soil behaviors under creep and stress relaxation with good agreement. The

65 qualitatively validated model was subsequently integrated into LSDYNA for finite-element simulations of the high strain rate behaviors of sandy and clay soils in explosive tests, in which the explosive was represented with LSDYNA material 8 (MAT_HIGH_EXPLOSIVE_BURN).

Figure 22. Behavior of LSDYNA material 147 for guardrail post rotation.(60)

Figure 23. Damage to an under-reinforced concrete beam showing a two-crack rebar failure, (top) test specimen and (bottom) simulation.(58)

Murray developed a concrete material model (i.e., LSDYNA material 159) for use in LSDYNA in the second FHWA-sponsored project to develop roadside safety material models.(58) The model was developed from basic principles, and the parameters chosen were those based on conventional concrete specifications. Once the model was developed, its accuracy was evaluated using dynamic impact experiments with 47 concrete beams, representing over-reinforced, under-reinforced and plain concrete beams, performed in WPI's impact laboratory (see Figure 23). Comparisons between the location and number of cracks were tabulated and the results assessed quantitatively. Further bogie and pendulum impact experiments were

66 performed at the MwRSF at the University of Nebraska. Qualitative comparisons of the damage modes and acceleration time histories, as well as quantitative comparisons of the maximum load, were used to validate the model. Once the model had been developed and validated, the model results were verified by comparing the developer's solutions (i.e., the known solution) to a user's solution of the same problem. In a follow-up study in 2007, Murray developed an elasto-plastic damage model with rate effects for concrete and implemented it into LSDYNA.(62) This report includes the theory of the concrete material model, a description of the required input format and example problems for use as a learning tool. A default material property input option was provided for normal strength concrete. The model was developed for roadside safety applications, such as concrete bridge rails and portable barriers impacted by vehicles, but it should also be applicable to other dynamic applications.

Figure 24. Qualitative comparison of damage in a quasi-static pull-test of a wood guardrail post: (left) experiment and (right) simulation.(59)

In the third FHWA-sponsored project, Murray et al developed a new material model to describe the constitutive behavior of wood, especially in roadside safety applications.(59) The result of her work was a new material model in LSDYNA (i.e., material 143). Like the concrete model in material 159, Murray provided input in terms of simple conventional engineering descriptions as well as more comprehensive constitutive parameters. In particular, she provided default values for common roadside safety guardrail post materials like southern yellow pine and Douglas fir. The initial versions of the model were developed with test data on “clear” wood samples provided by the Forest Products Laboratory. A variety of single-element simulations were performed to assess the stability and numerical performance of the model. Next, a series of quasi-static pull-tests of typical guardrail posts of several grades were

67 performed by MwRSF at UNL. Each post grade was tested approximately 10 times, so a statistical envelope of typical responses was constructed for the force-deflection response of each grade. Simulations were then performed to see if the simulated force-deflection response remained within the experimental response curve envelopes. The results were generally good, although the simulated response seldom remained completely inside the experimental response corridors. Impact experiments with full-size posts were also performed, and the time history results were compared to the simulations. Murray also verified the model on several different computer platforms to ensure the material model was not sensitive to numerical issues associated with a particular platform or setup. The resulting wood material model was a significant improvement over the other options available in LSDYNA for modeling wood in roadside safety simulations. A comparison of an experiment and simulation for one of the quasi-static experiments used in validating the wood post model is shown in Figure 24. Figure 25 shows an example of the experimental force-deflection response corridors and the simulation response used in the validation process.

Figure 25. Comparison of the force-deflection response for (i) the mean experimental response, (ii) an envelope of the observed experimental responses and (iii) an LSDYNA simulation.(59)

Miele et al evaluated the single unit truck (SUT) finite element model initially developed by the National Crash Analysis Center (NCAC) at George Washington University. The model was evaluated to assess its ability to accurately simulate its interaction with roadside safety hardware and to identify areas of possible improvements.(63) Miele was particularly interested in the failure of suspension components. While this model is discussed further in a later section, Miele also examined the material models used in the SUT model. Stress-strain data obtained from experiments were used to calibrate the material models. A summary of the material information used in the SUT model is shown in Figure 26.

68 Figure 26. Summary of materials used in the SUT model.(63)

Haufe et al developed a semi-analytical process for the calibration, verification and validation of automotive polymer materials.(64) The authors describe identifying the characteristics of the polymer, verifying the constitutive models with known solutions, calibrating the verified constitutive models with simple laboratory tensile “dog-bone” tests and finally validating simulations with experiments representing realistic automotive parts. Comparisons are made using qualitative assessments of the force-deformation curves obtained from the experiments and simulations.
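Calibration from tensile "dog-bone" coupons, as in several of the studies above, typically requires converting the measured engineering stress-strain curve into the true-stress versus effective-plastic-strain form expected by a piecewise-linear plasticity model such as LSDYNA's material 24. The following is a minimal sketch of that standard conversion; the coupon data points are invented, not values from any of the reports cited here.

    import numpy as np

    # Invented engineering stress-strain points from a tensile coupon
    # (strain dimensionless, stress in MPa):
    eng_strain = np.array([0.002, 0.02, 0.05, 0.10, 0.15])
    eng_stress = np.array([345.0, 380.0, 430.0, 470.0, 490.0])

    E = 200.0e3  # Young's modulus in MPa (nominal steel value)

    # Standard conversion, valid up to the onset of necking:
    true_stress = eng_stress * (1.0 + eng_strain)
    true_strain = np.log(1.0 + eng_strain)
    # Effective plastic strain is the total true strain minus the
    # elastic part:
    eff_plastic = true_strain - true_stress / E

    for ep, ts in zip(eff_plastic, true_stress):
        print(f"plastic strain {ep:.4f} -> true stress {ts:.1f} MPa")

The resulting pairs form the hardening curve supplied to the constitutive model.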

69 Wood et al studied the stochastic variation of automotive steels to establish material properties in standard quasi-static tensile tests.(65) Stochastic variation occurs when the material parameters vary randomly about a mean value. These random variations can affect the results of the test and, hence, the material properties. Eleven specimens were cut from the same coil of steel and subjected to tensile tests. The stress-strain response was experimentally determined, and the variation among the 11 specimens was used to probabilistically characterize the material. Simulation model inputs were then determined by randomly assigning material properties using the probability distributions determined in the tests. This work is an interesting example of the stochastic calibration of a material model. Dietemberger et al examined the effect of using different LSDYNA material models to incorporate strain-rate sensitivity into vehicle crash simulations.(66) The authors identified three types of automotive steels present in the C2500 Chevrolet pickup truck and collected rate sensitivity data from the literature. They used the NCAC version of the C2500 pickup truck model in their investigation of strain-rate sensitivity. They then used five different LSDYNA constitutive models to determine the influence of strain rates on the overall behavior of the vehicle. LSDYNA simulations were performed with each of the different material models using the C2500 pickup truck model and the NCAP crash configuration. Qualitative comparisons of the deformation, such as that shown in Figure 27, were made, as well as qualitative comparisons of the longitudinal acceleration time history, as shown in Figure 28. The authors found that the effect of strain rate on the NCAP result was relatively small and constituted a second order effect, probably because the deformations, while large, are highly localized and affect only a select number of materials.

Figure 27. Qualitative comparison of NCAP results for a C2500 pickup truck for (a) a simulation without strain-rate effects, (b) a simulation with strain-rate effects and (c) an NCAP experiment.(66)
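The stochastic calibration described by Wood et al can be mimicked in a few lines: fit a distribution to the repeated coupon results and draw simulation inputs from it. The following is a minimal sketch with invented yield-stress values; the assumption of a normal distribution is a modeling choice made here for illustration, not something established by the paper.

    import numpy as np

    # Invented yield stresses (MPa) for 11 coupons cut from one coil:
    yields = np.array([352.0, 348.0, 355.0, 351.0, 349.0, 358.0,
                       353.0, 347.0, 350.0, 356.0, 352.0])

    mu = yields.mean()
    sigma = yields.std(ddof=1)   # sample standard deviation
    print(f"fitted: mean {mu:.1f} MPa, std {sigma:.1f} MPa")

    # Draw randomized material inputs for a batch of simulations:
    rng = np.random.default_rng(seed=1)
    inputs = rng.normal(mu, sigma, size=20)
    print("simulation inputs (MPa):", np.round(inputs, 1))

Each draw becomes the yield stress for one simulation in the ensemble, so the spread of computed responses reflects the measured material scatter.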

70 Gerhard et al discussed the validation process for an energy-absorbing foam for automotive applications.(67) Numerical models of the foam material were developed, and high-speed drop tower tests were used to define the basic material model parameters. Sled tests with a rigid impactor shaped like the pelvis of the Side Impact Dummy II (SID II), along with head impact tests with a Free Motion Headform (FMH) according to Federal Motor Vehicle Safety Standard (FMVSS) 201-U, were used to validate the models and assess their accuracy with respect to various complexities of foam sample geometry. Load-displacement curves and acceleration-time histories were used in the qualitative validation process for the foam material model.

Figure 28. Longitudinal acceleration-time comparison.(66)

71 Blankenhorn et al discussed the validation of a numerical model of a carillon bell.(68) The results of experimental and numerical investigations were used to estimate the quality of the numerical model. For this purpose, the eigenfrequencies of the experimental and numerical models were compared, and the orthogonality of the eigenmodes was evaluated via a modal assurance criterion. Walker et al developed new LSDYNA models of experimental crash barriers used in automotive testing.(69) Most of these barriers use honeycomb and glue materials. As shown in Figure 29, experimental and numerical simulation studies were performed to qualitatively validate the new crash barrier model as well as the individual material models. In addition to deformations and acceleration, velocity and force time histories, energy balance comparisons were used in the validation process.

Figure 29. Qualitative comparison of barrier deformation.(69)

Wood et al investigated the three-point bending and constant velocity boundary condition performance of thin-wall open channel steel beams.(70) Experimental data were used to qualitatively validate the numerical simulations using stress-strain plots. Force and energy comparisons were also used to emphasize the accuracy of the steel material model. Sheikh et al describe the development of an energy-absorbing guardrail end terminal for use with the European box beam guardrail system.(71) The overall design effort used finite element simulations, component tests using a bogie vehicle and an impact pendulum. The component models were calibrated using the component experiments, and the whole model was qualitatively validated with full-scale vehicle crash testing. The design process involved addressing several individual component performance issues. These included the design of an extruder head, splice connections for attaching adjacent rail segments, the post-to-rail attachment connection and the anchorage of the rail. Shoukry et al examined the response of dowel-jointed concrete pavements to the combined effect of nonlinear thermal gradient and moving axle loads using three-dimensional

72 finite-element (3D FE) modeling.(72) The 3D FE-computed responses to moving axle loads were field-validated against the measured concrete slab response to a fully loaded moving dump truck. Moment vs. temperature gradient and stress vs. distance data obtained from the field measurements were used to validate the accuracy of the finite element models. The 3D FE-predicted slab curling due to the nonlinear thermal gradient through the slab thickness was validated against: (i) corner-dowel bar bending as measured using instrumented dowels embedded in an instrumented rigid pavement section in West Virginia; and (ii) Westergaard's closed-form solution. The effects of slab thickness, slab length, axle loading position, and axle type on slab stresses were examined. It was shown that, while a negative temperature gradient reduces the intensity of traffic-induced stresses, a positive temperature gradient increases it severalfold. Formulas were developed for the computation of the peak principal stresses due to the combined effect of tandem axle load and nonlinear thermal gradient. Horstemeyer et al presented a comprehensive experimental material characterization and full-scale testing of structural connections of paratransit buses.(73) Structure-property relations were quantified for the constitutive material models used for finite element simulation-based crashworthiness research of paratransit buses. Static coupon tests, along with a dynamic wall panel test with an impact hammer, provided validation data for the finite element simulations. In addition to FE model calibration, the connection testing allowed for a thorough qualitative assessment of the connection design, which resulted in improved crashworthy connection details. The material models discussed above illustrate several important points with respect to validation. First, a general process has evolved where simple laboratory experiments like tensile coupon tests, compressive cylinder tests and beam bending tests are used to estimate material parameters and thereby calibrate a material model. Simulations are then performed with the calibrated material models and compared to component-level tests to provide validation. The two projects by Murray are good examples of this process.(58)(59) This is a good way to develop and validate material models. Second, most analysts and developers have used qualitative comparisons of time histories and force-deflection data to validate their models. Shape-comparison metrics like those discussed earlier should be used in the future to eliminate the subjectivity of the comparisons. Most material model developers have used quantitative comparisons for domain-specific parameters like the total deflection, location of cracks and deformed shapes. The third point, more apparent in the following sections, is that often the calibration or validation of materials is not documented at all, leaving reviewers uncertain about the quality of the underlying material models. Material models, as shown in this section, are most often validated by first using simple laboratory tests to perform calibration studies to estimate the parameter values needed for a constitutive model in LSDYNA. Next, a component or subassembly simulation is performed and compared to a laboratory experiment. The component or subassembly simulation is then

73 compared to the experimental result using some appropriate domain-specific metric like the maximum deflection or the location of failure points.

Vehicle Models

In the discussion of validation and verification in roadside safety, vehicle models deserve some special consideration. One of the reasons finite element analysis has become so widespread in computational roadside safety is that researchers are able to re-use a standard set of vehicle models and thereby avoid the cost and time required to create such models. Since full-scale crash testing is based on the guidelines in Report 350 or its new update, researchers are generally only interested in models of one of the six standard crash test vehicles. Of these six, two (i.e., the 820C and 2000P vehicles in Report 350) are by far the most frequently used. Since the same vehicle models are used over and over again in a variety of different roadside safety application projects, it is particularly important that vehicle models be reliable, well documented and validated. One of the first vehicle models for use in roadside safety applications with general purpose contact-impact codes like LSDYNA was developed for the FHWA by EASi Engineering and is documented in a 1995 report.(74) Mendis et al developed a model for DYNA-3D using a reverse engineering process where they obtained and then disassembled two 1983 Honda Civics. Individual components were cut apart or separated from the vehicle, and the geometry was digitized and imported into the pre-processor INGRID. The model was then created from this digitized geometry data. The resulting model was very crude by today's standards, with 63 parts and a total of about 13,000 elements. Only the front of the vehicle was represented in geometric detail, so this model was only useful for frontal impacts with roadside objects like signs and luminaire supports. The final model was compared to the results of a full-scale crash test performed at the FOIL. Qualitative comparisons of the longitudinal acceleration time histories and final deformed shapes of vehicle components were made in an attempt to validate the model. While the overall shape response was apparent in the acceleration time history, the qualitative comparison was not particularly good, at least by today's standards. At about the same time, Cofie developed a very simple model of a small 820C vehicle for roadside hardware simulations.(75) Cofie was not particularly concerned with a faithful geometric representation but was more concerned with replicating the inertial properties and overall shape of the vehicle. His assumption was that roadside hardware responded primarily to inertial effects and that the detailed structural response of the vehicle itself would be a secondary effect. The resulting model was given the general properties of an early-1990s Ford Festiva. Simulations were performed with the vehicle striking a rigid pole at the center and the quarter point of the bumper. A number of crash tests were available from the FOIL that used the same impact conditions. As shown in Table 6, Cofie compared the results of the simulation to the experiments based on qualitative comparisons of the longitudinal acceleration time histories and the energy time histories and found that, in general, the results were fairly good,

74 especially up to the peak loading. Cofie also made quantitative comparisons between the experiments and the simulation for the changes in velocity, kinetic energy and impulse. Examples of his quantitative comparisons are shown in Figure 30 along with an energy time history. The differences in the velocity, energy, work and impulse values up to the peak were all less than five percent, and usually much smaller, as shown in Figure 30. The differences at the end of the event (i.e., 100 msec) were all less than 25 percent. Given the early date of this work and the primitive nature of the model, these results were quite encouraging. One of the notable aspects of Cofie's validation efforts is that he compared his finite element results to the average acceleration response of all the experiments in recognition that there would be some experimental variation. Another notable feature of Cofie's work is that he demonstrated that most of the response was due to the large-scale inertial characteristics of the vehicle even in this rigid pole impact.

Table 6. Qualitative comparison of simulation results to full-scale test results.(75)
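Quantitative comparisons like Cofie's reduce the measured and computed acceleration histories to scalar quantities, such as the change in velocity, the impulse and the change in kinetic energy, that can be compared as percent differences. The following is a minimal sketch of that arithmetic; the crash pulse, vehicle mass and impact speed are invented, and trapezoidal integration is one reasonable choice among several.

    import numpy as np

    def delta_v_impulse_energy(t, accel, mass, v0):
        # accel is the longitudinal acceleration history in m/s^2
        # (negative during the impact); t in s, mass in kg, v0 in m/s.
        dv = np.sum(0.5 * (accel[1:] + accel[:-1]) * np.diff(t))
        impulse = mass * dv                         # N*s
        dke = 0.5 * mass * ((v0 + dv)**2 - v0**2)   # J, negative = lost
        return dv, impulse, dke

    # Invented half-sine crash pulse: 0.1-s duration, 150 m/s^2 peak,
    # for an 820-kg car striking the pole at 32 km/h (8.9 m/s):
    t = np.linspace(0.0, 0.1, 1001)
    accel = -150.0 * np.sin(np.pi * t / 0.1)
    dv, J, dke = delta_v_impulse_energy(t, accel, mass=820.0, v0=8.9)
    print(f"delta-v {dv:.2f} m/s, impulse {J:.0f} N*s, dKE {dke:.0f} J")

Evaluating the same quantities from the simulated and the averaged experimental histories, and comparing them at the peak and at the end of the event, reproduces the kind of comparison shown in Figure 30.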

75 Figure 30. Comparisons between three rigid pole crash tests and a simulation of a 32 km/hr centerline impact between an 820C vehicle and a rigid pole.(75)

Thacker describes a reverse engineering project to develop a general purpose LSDYNA model of a 1997 Honda Accord.(76) Like the EASi Honda Civic, the model was developed using a reverse engineering process. The model was intended for use in compatibility impact studies for NHTSA. Unlike the EASi Honda, however, the objective in developing the Accord model was to develop a single model that could be used in a variety of different crash situations (i.e., full-frontal NCAP tests, offset frontal tests, side impact tests and oblique vehicle-to-vehicle tests). The results were compared to crash tests using qualitative comparisons of the acceleration time histories. In 2001, the NCAC made available a 193,000-element model of a Geo Metro on its website (no papers or reports have been found describing the development of this model). Because the Geo Metro is very similar to the types of small cars often found in Europe, the ROBUST group decided to evaluate the model for use in roadside safety simulations of European barriers. Mongiardini reviewed the original version of the Geo Metro and determined that it would be a good platform for roadside safety work, but a functioning suspension and steering system needed to be added.(77) These features were added to the model, and the improvements appeared to make the kinematic response of the vehicle much more realistic. The change in the vehicle behavior of the two models is illustrated in Figure 31. Sango continued the work and used the Geo Metro model in simulations with several common European barrier systems, comparing the results of the original model with those of the improved model with suspension and steering systems.(12)(78) Finally, Sango and Haukass compared the results of the Geo Metro simulations in EN 1317 TB11 impacts with a vertical concrete wall, the EU2 guardrail and the Norwegian Standard Vehicle Parapet Type 1b-80 to full-scale crash tests.(78) For each of the three barriers, the researchers compared the EN 1317 domain-specific metrics THIV, ASI and PHD for the computer simulations and the experiments. An example from the concrete wall case is shown in Table 7. For all the quantitative comparisons, the simulation

76 was always within 1.2 percent of the experimentally observed value. Unfortunately, qualitative and quantitative comparisons of the time history data were not included in the report.

Figure 31. Comparison of the original and modified models of the Geo Metro in a 100 km/hr, 20-degree impact with a deformable guardrail.(78)

Table 7. EN 1317 domain-specific metric comparisons of a simulation and crash test of a small car striking a concrete wall at 20 degrees and 100 km/hr.(78)

Metric         Full-scale test   Computer simulation   Ratio
ASI            1.86              1.69                  0.91
THIV [km/h]    32.9              34.7                  1.05
PHD [g]        14.1              6.8                   0.48

Eskandarian et al developed and validated a finite element model of the bogie vehicle.(49) The performance of the model and, in particular, the honeycomb material used in the nose of the bogie vehicle was assessed in impacts with an instrumented rigid pole. The deformed shape of the bogie nose (see Figure 32) and the corresponding time history comparisons were used to validate the finite element model.

77 Figure 32. Comparison of crash test (left) and simulation (right) results showing successive crush (deformation) of the honeycomb nose material at 45 msec.(49)

Zaouk and Marzougui developed a model representing the US deformable barrier for use in side impact simulations.(79) Special emphasis was given to the adhesives and honeycomb materials and the way they were modeled in LSDYNA. As shown in Figure 33, qualitative comparisons, such as deformed shapes and time histories, were made between the finite element simulation and the full-scale test to validate the models.

Figure 33. Deformation (top) and time history (below) comparisons of a side impact deformable barrier.(79)

78 Zaouk et al described the development of a detailed multi-purpose finite element model of a 1994 Chevrolet C-1500 pick-up truck.(80) This was the first model of its kind developed specifically to address general vehicle safety issues, including front and side performance, as well as roadside hardware design. The idea behind the vehicle model was to provide roadside safety and highway safety researchers with a common model they could use to explore safety issues. This paper described the results of a non-linear finite element computer simulation using this model for frontal full barrier and median highway barrier impacts. Full-scale vehicle crash tests conducted by NHTSA and FHWA were used to validate the model. Two tests were compared: a frontal impact with a full rigid wall and a corner impact with a 42-inch vertical concrete median barrier. The comparisons between tests and simulations in terms of overall impact deformation, component failure modes, and velocity and acceleration at various locations in the vehicle were presented. Later, Zaouk presented the results of the development of a reduced-element “bullet” model of the same C1500 vehicle more specifically modeled for roadside safety applications.(81) Full-scale vehicle crash tests conducted by NHTSA and FHWA were used to evaluate the performance of the model. Two tests were used for comparison purposes: a frontal impact with a full rigid wall and a corner impact with a 42-inch vertical concrete median barrier. Mostly qualitative comparisons of the overall impact response, such as deformation, component failure modes, and velocity and acceleration at various locations in the vehicle, were made between the physical tests and the simulations for both the detailed and reduced models. Further research was recommended to fully validate both vehicle models. Similarly, another study concentrated on the development and validation of a C-2500 pickup truck model for roadside hardware evaluation.(82) The C-1500 pickup truck model discussed by Zaouk was modified such that it represented the 2000P crash test vehicle recommended in Report 350 (i.e., the Chevrolet C2500). Model validation was done through a series of qualitative comparisons including pictures and time histories (see Figure 34 and Figure 35).

Figure 34. Qualitative comparison of deformations in a C2500 crash test into a rigid wall (left) and the corresponding simulation (right).(81)

In 1998, Zaouk et al discussed the development process of a detailed Chevrolet C-1500 pickup truck model for multiple impact applications. Several crash conditions were used to make the model as accurate as possible. Full-scale crash tests were used to validate the model using qualitative comparisons such as sequential pictures and time histories.(81) Tiso et al performed an extensive program in 2002 to develop functioning suspension and steering capabilities for the widely used NCAC C2500 pickup truck model.(83) In many roadside hardware impacts the effect of the suspension on the impact performance of the barrier is considerable, so vehicle models with functioning suspensions were becoming very important. The major components of the suspension were examined using curb traversal tests and the results were compared to the original model. A number of sub-assembly laboratory tests, such as shock absorber extension/compression tests and leaf-spring deflection tests, were used to validate components of the model. Improvements to the leaf springs, shock absorbers, coil springs and steering linkages were made and then compared to the curb-traversal tests again. The results of the model improvements were qualitatively compared to the experiments, and quantitative assessments of domain-specific values like maximum displacements were performed.

Figure 35. Time history comparison of a C-2500 pickup truck striking a median barrier.(81)

Orengo continued this work by adding a tire model to the C2500 that would experience de-beading failures when the vehicle interacted with curbs and edge drop-offs.(46) The tire model was validated first using in-laboratory assembly tests and then by full-scale non-tracking curb impact tests in which the tires failed by de-beading. This work was described earlier as a good example of component modeling in the hierarchical modeling section. Marzougui et al also discussed the development of a detailed rear suspension model for the C2500 pickup truck model in a 2004 paper.(84) Pendulum tests were conducted at the Federal Highway Administration's Federal Outdoor Impact Laboratory (FHWA's FOIL), and the pendulum test data were used in the validation of the suspension model. Simulations were conducted and the results were quantitatively compared to the pendulum tests in terms of deformation, displacement and acceleration at various locations. Miele et al conducted an evaluation of the single unit truck (SUT) finite element model initially developed by the NCAC to assess its ability to accurately simulate its interaction with roadside safety hardware and to identify areas of possible improvement.(63) The SUT model is intended to be a so-called “bullet” model (i.e., a vehicle model with a reduced number of elements) for computational evaluation of roadside safety hardware. A very detailed model description is available on-line at http://thyme.ornl.gov/FHWA/F800WebPage/simulations/simulation1.html. The researchers were particularly concerned with modeling and replicating the suspension system failures that are typical in SUT re-directional crash tests. The improved model was compared to the results of a full-scale crash test performed at the Texas Transportation Institute. Comparisons with the crash test included quantitative domain-specific parameters like the OIV, ORA, THIV, ASI, PHD, 50-msec average and maximum roll, pitch and yaw angles. Qualitative comparisons of the rotations in the experiment and simulation were also presented. A comparison of photographs from the crash test and the corresponding simulation is shown in Figure 36. One of the notable aspects of this project was the excellent documentation provided on-line to users. Vehicle model developers should be encouraged to provide this level of detail for vehicle models that will be used repeatedly as “bullet” vehicles in roadside safety research.

Figure 36. Comparison of crash test (left) versus finite element (right) results.(63)

Mohan et al also evaluated the Ford F800 Single Unit Truck (SUT) model.(85) The characteristics of the SUT model were investigated and several modifications were incorporated into the model to facilitate its use in roadside safety hardware development projects. A full-scale crash test of the Ford F800 SUT impacting an F-shape Portable Concrete Barrier (PCB) was conducted at FHWA's FOIL and used as a validation baseline test. Qualitative comparisons, such as sequential pictures of the impact and acceleration time histories, were used to assess the model. Pernetti and Scalera developed a multipurpose finite element model of an articulated truck.(86) The model was intended to address two particular impact scenarios, the first against a concrete wall and the second against a steel bridge railing. The results obtained demonstrated that the model was accurate and that the articulated truck model is suitable for a wide range of impact conditions. As shown in these descriptions of previous work on vehicle modeling, vehicle models for use in roadside safety have generally been the product of continuous improvement, sometimes over as much as a decade. When a vehicle model is first developed, it is often validated with tests in the NHTSA literature like the NCAP tests. As the model is used for a wider variety of situations in roadside safety, improvements are added to address particular concerns, and these improvements remain in the model, making it increasingly general and reliable as each new revision is developed. One of the drawbacks of the way vehicle modeling has evolved in roadside safety is that there is very poor version control and documentation of the models. The models tend to be modified by each research organization, and often the models diverge into several variations that are separately improved and modified. This results in a duplication of effort as well as confusion over what level of validation any particular model has achieved. Ideally, it would be useful to collaborate more effectively on model improvements and share the results with other researchers.

Roadside Hardware Models

Probably the first paper to address the use of nonlinear dynamic finite element analysis in roadside safety was a paper by Wekezer in 1993.(87) Wekezer examined an impact simulation of a compact car with a light pole. The paper illustrated that models with even relatively small numbers of degrees of freedom in DYNA3D can predict the kinematics of highway vehicles in impacts. Wekezer's study was a preliminary feasibility study to develop the next generation of roadside safety computer software for vehicle impact simulation and analysis. In 1994, Ray examined the impact of an 820-kg small car striking a 5.5 kg/m flange-channel sign post at nine meters per second.(88) As shown in Figure 37, the 13,000-element vehicle model developed by EASi Engineering was used to examine the collision sequence in much greater detail than is possible with a full-scale crash test.(74) The state of stress of any vehicle or barrier component can be examined in detail to determine the actual failure

mechanisms involved in the collision. Qualitative comparisons of the deformed vehicle shape and acceleration-time histories were used to assess the accuracy of the model. As shown in Figure 38, the model successfully captured the basic phenomena involved in the impact.

Figure 37. Photograph (left) and corresponding finite element model (right) of the pre-collision with a sign post.(88)

Figure 38. Vehicle deformation comparison after sign post impact crash test (left) and finite element simulation (right).(88)

In 1997 the FHWA sponsored the publication of a collection of seven papers by their Centers of Excellence on the use of LSDYNA and DYNA-3D in roadside safety research.(81) The report contained papers regarding:

• The development of a C1500 pickup truck model for use in roadside hardware simulations,(81)
• The development of a model of the MELT guardrail terminal,(89)
• The development of a model of a transformer base,(90)
• The development of a thrie-beam guardrail model,(91)
• The development of models for the Nebraska turned-down terminal, the dual-support breakaway sign, the Buffalo guardrail and a breakaway mailbox,(92) and
• The development of a slip-base luminaire support.(93)

These papers are discussed in the following paragraphs. Another paper that appeared in this FHWA-sponsored publication addressed the development and validation of a weak-post W-beam G2 guardrail model.(94) Modeling details, such as the post-soil interaction, the W-beam end anchorage and the post-to-W-beam connection, were explained. Results of a full-scale crash test were qualitatively compared to the simulation results. Martin and Wekezer continued work on the development of a finite element model of the G2 weak-post w-beam guardrail.(95) The NCAC model of the 1994 Chevrolet pickup truck was used to simulate an impact with the barrier at 100 km/hr and 25 degrees. Data obtained from a full-scale crash test were used to validate the weak-post w-beam guardrail model. Acceleration time histories obtained from the crash test and the simulation study were used to make both qualitative and quantitative comparisons. The NARD validation metrics, the ANOVA metrics proposed by Ray, and Wekezer's protocol validation method were also used to compare the simulated and experimental responses. The results for the NARD metrics are shown in Table 8.

Table 8. NARD time-domain metrics comparing a 100 km/hr, 25 degree impact between a pickup truck and the weak-post w-beam guardrail.(95)

Relative moment differences:
Moment 0 = 0.432; relative moment difference = 0.27
Moment 1 = 0.078; relative moment difference = 0.24
Moment 2 = 0.018; relative moment difference = 0.22
Moment 3 = 0.005; relative moment difference = 0.21
Moment 4 = 0.002; relative moment difference = 0.20
Moment 5 = 0.001; relative moment difference = 0.20

Root mean square (RMS) log measures:
RMS Log Form of Signal #1 (R1) = 6.9
RMS Log Form of Signal #2 (R2) = 6.3
RMS Log Difference (R3) = 8.4
RMS Log Average (R4) = 6.6
RMS Log Ratio (R3/R4) = 1.3

Correlation measure:
Energy Measure of Correlation = 0.49

Ray and Plaxico continued the work on weak-post w-beam guardrails when crash tests of the standard weak-post w-beam guardrail involving the 2000-kg pickup truck resulted in a series of unacceptable test results, including over-riding and penetration of the guardrail.(96)
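The moment comparisons in Table 8 can be reproduced, in outline, from a pair of acceleration signals. The sketch below assumes one common formulation, in which the n-th moment of a signal is the time-weighted integral of its magnitude and the relative moment difference is normalized by the test moment; the exact definitions in the NARD validation manual may differ, so this should be read as an assumed form for illustration only.

    import numpy as np

    def nard_moments(a, dt, n_max=5):
        """n-th signal moments, here taken as M_n = integral of t**n * |a(t)| dt;
        the exact NARD normalization may differ -- this is an assumed form."""
        t = np.arange(len(a)) * dt
        return np.array([np.sum(t**n * np.abs(a)) * dt for n in range(n_max + 1)])

    def relative_moment_differences(a_test, a_sim, dt):
        """Relative moment differences, normalized by the test moments (assumed)."""
        m_test = nard_moments(a_test, dt)
        m_sim = nard_moments(a_sim, dt)
        return np.abs(m_test - m_sim) / np.abs(m_test)   # one value per moment order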

Design modifications to the weak-post w-beam guardrail were explored using finite element simulations and full-scale crash tests. Qualitative and quantitative comparisons were made between the crash test results and simulation results to validate the accuracy of the finite element model. An improved version of the weak-post w-beam guardrail system was developed and tested and found to satisfy the requirements of Report 350 for test level three. A finite element model of the slip-base luminaire support was also included in the same FHWA-sponsored publication.(97) A detailed finite element simulation study was performed to evaluate the important characteristics of the slip-base design. Cofie's small car model was used to impact the support and the results of the simulation study were compared to those obtained from full-scale crash tests to validate the models.(75) Pictures and time histories were used in the qualitative assessment process. A paper by Eskandarian et al describes a model of a slip-base sign support system and its crash performance with vehicles using DYNA3D.(97) The geometric features, as well as several physical phenomena of the components of the slip-base mechanism (e.g., sliding friction, clamping forces, bolt-plate interactions and plate rupture), are modeled and verified in simulation. The FE modeling methods for the required features are described in detail. A validated model of the bogie and its honeycomb material is used as the impacting vehicle. FE crash simulations of the bogie with its flexible honeycomb nose impacting the slip-base sign support are validated using the corresponding instrumented crash tests. The simulation model reveals the correct behavior of the breakaway system response upon impact. The slipping mechanism and the bogie acceleration and velocity responses are accurate as compared with actual crash test results. The FE approach and this validated model can be exercised in numerous crash scenarios for design optimization of other variations of slip-base systems in size, orientation, etc., or for performance evaluation of impacts with various vehicles. In a similar study, finite element simulation and its application to crashworthiness evaluation and safety analysis of roadside hardware appurtenances were presented.(98) Three specific case studies were discussed to demonstrate the effectiveness of these methods in modeling and simulating roadside hardware. The first case study involved finite element modeling and analysis of various slip-base sign support systems. Models were validated against full-scale crash test results; only qualitative comparisons, such as acceleration, velocity and displacement time histories, were made. In the second case, the safety performance of a portable concrete barrier (PCB) during high-speed impact was investigated and design modifications were analyzed to improve its performance. The third case study utilized finite element analysis to predict the safety of a U-post sign support system; the effect of its height on intrusion into the occupant compartment was analyzed. Reid et al analyzed the turned-down guardrail terminal using LSDYNA and full-scale crash tests.(99) Finite element simulations were performed on the existing turned-down approach terminal section as well as on various retrofit options to understand the crash performance of end terminals and evaluate the performance of design alternatives. Modified

designs were subjected to one high-speed and six low-speed full-scale crash tests with an 820-kg vehicle. Deformations and crash test pictures were used to validate the accuracy of the finite element models. In another study, Paulsen and Reid modeled a dual-support breakaway sign system using LSDYNA.(100) Component models were first constructed of the critical parts of the breakaway sign system. The component models were compared with physical component tests to aid in the development process as well as to validate the component simulation results. The components were then assembled into a complete system model. Very few changes were needed in the complete sign model because problems had been worked out in the component modeling phase. Qualitative comparisons between the simulation results and two full-scale vehicle crash tests were used to validate the model. Ray and Patzner describe the development of an LSDYNA model of a MELT guardrail terminal to learn more about the performance of this type of guardrail terminal.(101) Results of the analysis are discussed and compared to data from a full-scale crash test involving a small passenger car. Qualitative comparisons of the acceleration and velocity time histories were made, and quantitative comparisons of the occupant risk criteria and a statistical method were used to illustrate the validity of the models. The quantitative comparison was performed using the Test Report Analysis Program (TRAP), which automatically evaluates the occupant risk criteria defined in NCHRP Report 350 using the acceleration curves obtained from the simulation data. TRAP was developed by the Texas Transportation Institute (TTI) primarily for analyzing crash test data, but it can also be used for analyzing data from crash test simulations.(149) In a similar study, Plaxico provided a description of the development of a model of a breakaway timber post and soil system used in the breakaway cable terminal (BCT) and the modified eccentric loader terminal (MELT).(102) The model is described and simulation results are qualitatively compared with data from physical tests of BCT/MELT posts. A guardrail system capable of capturing and redirecting a larger range of vehicle types and sizes was also developed.(103) The new guardrail system, called the Buffalo Rail, was designed with a new w-beam cross-sectional shape with an effective depth of 311 mm, compared to 194 mm for the standard W-beam, a rail thickness of 13 gauge, and a post spacing of 2500 mm. Finite element analyses were performed to evaluate the impact performance of the new barrier. The LSDYNA simulations predicted that the safety performance of the Buffalo Rail would be acceptable for the Report 350 test 3-11 pickup truck. Qualitative and quantitative comparisons were made to validate the finite element model of the Buffalo Rail. LSDYNA was used to develop a model of the sequential kinking process for energy dissipation used in a new guardrail terminal concept.(104) The sequential kinking process involves using a deflector plate to force a steel beam guardrail element to be bent around a rigid

beam until it forms a plastic hinge. Qualitative comparisons between the full-scale tests and finite element results were used to validate the model. Full-scale crash tests showed that predictions of the energy dissipation for the sequential kinking impact head were only seven percent below the values obtained from dynamic impact tests. Reid and Bielenberg designed and successfully crash tested a bullnose median barrier head-on at 100 km/h with a 2000-kg pickup truck.(105) After a failed pickup truck test, LSDYNA was used to simulate the failed system in order to determine the cause of the failure and evaluate solutions to the problem. Subsequent testing substantiated the LSDYNA predictions, and qualitative comparisons were used to validate the models used in the simulation study. To keep up with the design project deadlines, some features of the simulation model were simplified. For other features, however, great attention to detail was required to make a useful model. Specifically, a considerable amount of effort went into defining the material failure criteria and an appropriate mesh density for the guardrail, rolling tires for the truck model, and the application of the relatively new cable element in LSDYNA. Tabiei and Wu developed a simulation of a truck impacting a strong-post w-beam guardrail system, the most common system in the USA.(29) Detailed methods for developing the simulation were presented and three major issues were identified: the rail-to-blockout bolt connection, the dynamic soil-post interaction, and the effect of the end anchorage of the guardrail. The soil-post interaction was modeled using both Lagrangian and Eulerian meshes and the results of the two methods were presented. Sequential pictures and acceleration time histories were used to make qualitative comparisons. The NARD metrics were used to make quantitative comparisons between physical crash test results and simulation results. Plaxico and Ray evaluated the crashworthiness of two similar strong-post guardrails (i.e., the G4(2W) and the G4(1W)) using LSDYNA.(10) A model of the G4(2W) guardrail was first developed and validated with the results of a full-scale crash test from the literature. As shown in Table 9 and Table 10, quantitative comparisons of the two impacts were performed using the TRAP domain-specific metrics, the NARD time-domain metrics, the ANOVA metrics and Geer's MPC metrics. Moreover, qualitative comparisons, such as acceleration, velocity and yaw angle time histories, were used to validate the G4(2W) model. After the G4(2W) model had been validated, a model of the G4(1W) guardrail system, which uses larger 8x8-inch posts, was developed based on the validated G4(2W) model. The results from the simulations of the two guardrail models were compared with respect to deflection, vehicle redirection and occupant risk factors. The results of the analysis indicated that the G4(1W) and G4(2W) perform similarly in collisions and that both satisfy the requirements of Report 350 for the test 3-11 conditions.
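Several of the studies above rely on TRAP's domain-specific occupant risk metrics, which follow the flail-space model of Report 350: the occupant is idealized as an unrestrained particle that travels 0.6 m longitudinally (0.3 m laterally) relative to the decelerating vehicle, the occupant impact velocity (OIV) is the occupant-to-vehicle relative velocity at that instant, and the occupant ridedown acceleration (ORA) is the peak 10-msec average vehicle acceleration thereafter. A minimal single-axis sketch is shown below; the function and variable names are hypothetical, and this is not the TRAP code itself.

    import numpy as np

    def oiv_ora(ax, dt, flail=0.6):
        """Longitudinal OIV (m/s) and ORA (g) per the flail-space model.
        ax: vehicle-fixed acceleration in m/s^2 (deceleration negative)."""
        # The free-flying occupant's acceleration relative to the vehicle is -ax.
        rel_vel = np.cumsum(-ax) * dt            # occupant velocity rel. to vehicle
        rel_disp = np.cumsum(rel_vel) * dt       # occupant displacement rel. to vehicle
        hits = np.nonzero(np.abs(rel_disp) >= flail)[0]
        if hits.size == 0:
            return None, None                    # occupant never reaches the flail distance
        i = hits[0]
        oiv = rel_vel[i]                         # relative velocity at occupant impact
        n = max(1, int(round(0.010 / dt)))       # samples in the 10-msec window
        avg = np.convolve(ax[i:], np.ones(n) / n, mode="valid")
        ora = float(np.abs(avg).max()) / 9.81 if avg.size else 0.0
        return oiv, ora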

Table 9. Domain-specific TRAP metrics for TTI test 471470-26 and LSDYNA simulations of two strong-post w-beam guardrails.(11)

Table 10. NARD and ANOVA metrics for TTI test 471470-26 and LSDYNA simulations of two strong-post w-beam guardrails.(11)

Marzougui et al discuss the effect of w-beam guardrail height on the crash test performance of the G4(1S) strong-post w-beam guardrail.(106) First, a detailed model of the G4(1S) guardrail was created. The model incorporated the details of the rail, the connections, the post, the blockout, and the soil in which the post was embedded. To validate the model of the W-beam guardrail system, a model of the setup of this W-beam system in previous full-scale crash tests was created. Simulations were performed using this model and the results were compared to the full-scale crash test data. Qualitative and quantitative comparisons showed that the model was an accurate representation of the actual system. In the second step of the study,

the validated model served as the basis for four additional models of the G4(1S) guardrail reflecting varying rail heights. In two of the four models, the rails were raised 40 and 75 mm (1.5 and 3 inches); in the other two, the rails were lowered 40 and 75 mm. Simulations with these four new models were carried out and compared to the first simulation to evaluate the effect of rail height on safety performance. The simulation results indicated that the effectiveness of the barrier in redirecting a vehicle is compromised when the rail height is lower than recommended. The third step of the study consisted of performing full-scale crash tests with the guardrail at standard height and 60 mm (2.5 inches) lower. The data from the crash tests validated the simulation results. Marzougui et al used LSDYNA simulations to evaluate the safety of portable concrete barriers (PCBs).(107) A methodology for creating accurate models of PCBs was first developed. This was achieved by developing a model of an F-shape PCB design and using full-scale crash test data to validate the model. Qualitative comparisons were made to validate the model (see Figure 39). Once the model had been qualitatively validated, models of two modified PCB designs were created and their safety performance was evaluated. Based on the simulation results, a third design was developed and its performance was analyzed as well. The safety performances of the three designs were then compared.

Figure 39. Qualitative comparison of a pickup truck impact with an F-shape portable concrete barrier.(106)

A new bullnose guardrail system for the treatment of median hazards was developed and successfully crash tested according to the safety criteria in Report 350 by Bielenberg et al.(108) The new system consists of a nose section and a special cable structure. The first five sections of guardrail had horizontal slots cut in the valleys of the rail to improve vehicle capture and reduce the formation of large kinks that could pose a threat to a vehicle as the system deformed to absorb an impact. The research study included computer simulation modeling using LSDYNA and full-scale vehicle testing. Qualitative comparisons between the simulation and crash test results were used to validate the finite element models.

Marzougui et al used LSDYNA to analyze the safety performance of roadside sign support systems.(109) Specifically, they investigated the effect of sign height on the amount of intrusion into the occupant compartment. Models of 5-ft (1.5-m) and 7-ft (2.1-m) height signs were created and impacted with a set of vehicle models at different impact speeds. Five vehicle models were used in the study: the Chevrolet C2500 pickup, Geo Metro, Ford Taurus, Plymouth Neon, and Dodge Caravan. Three impact speeds were analyzed: 20, 40, and 60 mph (32, 64, and 96 km/h). A total of 18 simulations were performed and the results were compared to evaluate intrusion of the signs into the occupant compartment. The simulation results were verified qualitatively using full-scale test results obtained from the worst-case scenario test, a Geo Metro striking a U-post at 60 mph. Engstrand et al discussed the simulation of vehicle models used in roadside safety research.(110) According to a recent European regulation, the passive safety of all new roadside barriers must be verified in impact tests with real vehicles. The test matrix includes a small car but, depending on the road containment level, also a medium-size car, a bus, a medium-weight truck or a heavy truck. It is clearly important and efficient to first assure the passive safety of the barrier using simulations before the actual tests are conducted. In a simulation of a vehicle crash into a roadside barrier, the quality of the vehicle model is as important as that of the barrier model. In this study, qualitative comparisons between the simulations and full-scale tests were used to validate the accuracy of the vehicle models developed. After a failed crash test on a strong-post guardrail system, Atahan used LSDYNA to simulate the system and determine the potential problems with the design.(111) The accuracy of the finite element models used in the simulation study was evaluated by comparing the results against those obtained from the full-scale crash test. Qualitative comparisons, such as sequential pictures, article deformations and vehicle velocity time histories, were made. After validating the model, a second simulation was performed on an improved version of the system. The simulation results indicated that the improved system would perform much better than the original design. Reid et al discussed the development of a new barrier system to improve the safety of drivers participating in automobile racing events.(112) Several barrier prototypes were investigated and evaluated using static and dynamic component testing, computer simulation modeling with LSDYNA and 20 full-scale vehicle crash tests. The full-scale crash testing program included bogie vehicles, small cars, and a full-size sedan, as well as Indy-style open-wheeled race cars and NASCAR race cars. The LSDYNA models were validated using qualitative comparisons with full-scale test results. Bielenberg also discussed the development of barriers for race venues, describing a barrier with foam blocks placed between an outer steel tube structure and the existing race track concrete wall.(113) Polystyrene insulation foams were shown to have good energy-absorbing capabilities and were used as the primary means of energy absorption in the barrier. Simulations

of the dynamic tests with LSDYNA used the *MAT_CRUSHABLE_FOAM material model. After the bogie tests were successfully modeled, the component model of the foam was placed in the full-scale model of the SAFER barrier. Later in the research program, the cube-shaped foam blocks were replaced with a trapezoidal shape. These trapezoidal shapes were also tested and then successfully simulated. Sequential pictures and time history data were used to validate the finite element models. Ray et al described the design and analysis of an extruded aluminum truss-work bridge railing for Report 350 test level three and four conditions.(114) In this case there were no crash tests available to assess the performance of the model, so LSDYNA was used to determine whether the bridge railing would be likely to result in successful crash tests. The LSDYNA simulations indicated that the truss-core panels would be strong enough in an impact, and a subsequent AASHTO LRFD analysis supported the LSDYNA results. The design documented in the report was found to be of comparable strength to other F-shaped bridge railings, so successful crash test results are highly likely. The FHWA issued an acceptance letter for this new type of bridge railing based only on the computational analysis. Whitworth et al evaluated the crashworthiness of a modified w-beam guardrail design.(115) A model of the guardrail was developed and the crash response simulated for an impact by a pickup truck traveling at 100 km/hr. A model of a Chevrolet C2500 pickup truck was combined with the guardrail model to simulate the crash test. The model evaluation focused on comparison of actual crash test data with the simulation results in terms of roll and yaw angle measurements. Simulation results were found to be in good agreement with the crash test data. Additionally, simulations were performed to evaluate the effect of certain guardrail design parameters, such as rail mounting height and routed/non-routed blockouts, on the crashworthiness and safety performance of the system. Mohan et al developed a detailed finite element model of a three-strand cable barrier.(116) The accuracy of the model was validated against a previously conducted full-scale crash test. The full-scale crash test and simulation were set up for an impact of the cable barrier by a 2000-kg pickup truck at an angle of 25 degrees and an initial velocity of 100 km/hr. Details for simulating the dynamic interactions of the soil and post, post and hook bolts, cable and hook bolts, and cable and truck were presented in the paper. Qualitative comparisons between the simulations and the full-scale crash test were presented. Atahan and Cansiz studied a vertical flared-back concrete bridge rail-to-guardrail transition to evaluate its compliance with Report 350 test level three requirements.(30) In a crash test, the system failed to meet the Report 350 requirements because the vehicle rolled over. To gain insight into the crash test phenomena, a simulation study was performed. The accuracy of the simulation was verified using qualitative and quantitative comparisons, namely sequential crash test pictures and the TRAP, NARD, ANOVA and Geer's metrics, respectively. Based on examination of the crash test and simulation, the w-beam height of 685 mm was

determined to be the main cause of the vehicle rollover. The transition model was modified to have an 810-mm top rail height. A subsequent simulation resulted in a prediction that the improved model would contain and redirect the vehicle in a stable manner without rollover. No wheel snagging was observed due to the increased rail height. The performance of the improved transition design was so good that consideration was given to testing it at the next level, test level four. Finite element computer simulations coupled with experimental testing were used to investigate the safety of mailbox supports and establish some guidelines on their use and installation.(117) First, a model of the mailbox support was developed and validated against pendulum crash tests. Second, a parametric finite element analysis was performed with various mailbox sizes, heights, mounting configurations and post sizes in order to evaluate the crashworthiness performance of the mailbox supports. Third, the simulation results were qualitatively validated using the most critical case: a full-scale crash test was performed using the critical impact scenario and compared with the corresponding simulation. Sheikh et al discussed the development of the energy-absorbing end terminal, HEART.(118) The HEART terminal was developed using LSDYNA modeling. The paper presents the simulation approach adopted for the development of the HEART, including the construction details, the qualitative comparisons used for model validation, and the description and results of the crash tests performed so far to evaluate its performance. LSDYNA was used to analyze and improve the crash test behavior of the New York Department of Transportation Portable Concrete Barrier (NYPCB).(119) A full-scale crash test demonstrated that the current NYPCB design was unable to meet Report 350 standards. An inspection after the test revealed that the welding at the metal connectors forming the joint between the barrier segments was not properly fabricated. An LSDYNA model of the crash-tested barrier was developed, including a fillet weld model with failure, and subjected to the same impact conditions as the failed crash test. Qualitative comparisons were made between the test and simulation, such as force-deformation plots, roll angle time histories and sequential pictures of the crash event. Quantitative domain-specific comparisons using the TRAP metrics were also made. The results showed that the baseline model simulation replicated the failure in the crash test. After validating the model, an improved NYPCB model was developed using proper welding details and subjected to full-scale impact simulation conditions to determine whether this design would satisfy the crash testing requirements. The results of the simulation were encouraging and it was predicted that the barrier would successfully contain and redirect the impacting vehicle in a stable manner. Subsequent full-scale crash testing on the NYPCB with proper welding details passed the NCHRP Report 350 requirements and substantiated the LSDYNA predictions. Concrete barriers have often been considered too stiff for small vehicles in European regulations.(120) As a result, there has been very little change in the design of concrete barriers

in Italy over the past 20 years. Bonin re-examined the use of concrete barriers under the European EN 1317 regulations and, in particular, used lightweight concrete and shorter section lengths than commonly used in Italy. A lighter concrete portable barrier would likely deform more in the lateral direction, leading to more energy dissipation and decreased occupant risk values. Bonin examined these new alternatives using LSDYNA models. First, the model was validated by comparing the predictions of the LSDYNA model to prior crash tests of the existing Italian concrete barrier design. Next, alternative designs were examined that used lighter-weight concrete and shorter segment lengths. The performance of both the ODOT GR-2.2 guardrail and the ODOT GR-3.4 transition system in Report 350 test level three conditions was investigated by Plaxico.(121) Modifications that would improve the crashworthiness of these systems were proposed, and LSDYNA was used to simulate Report 350 tests 3-10 and 3-11 for the improved designs. The analyses indicated that the original ODOT GR-2.2 guardrail would successfully meet all Report 350 test level three criteria, but also that the performance of the system could be significantly improved with simple modifications to the guardrail. Plaxico developed a model of a 50-inch-high portable concrete barrier, which is tall enough to serve as its own glare shield.(122) Finite element analysis was used to investigate various barrier shapes and connection schemes to identify a crashworthy design that would meet the requirements of Report 350 for test level three. After the barrier was developed using LSDYNA, a full-scale crash test was performed at the Transportation Research Center in East Liberty, Ohio. The results of the crash test were used to validate the simulation results using qualitative comparisons of the time histories and sequential pictures as well as quantitative comparisons of the TRAP domain-specific metrics. Anghileri considered the crash test scenario of a small passenger car with a total mass of 900 kg striking a rigid concrete barrier at 100 km/hr and 20 degrees, representing the EN 1317 TB11 impact conditions.(123) An accelerometer sensor was included in the vehicle model in order to collect the acceleration and velocity time histories of the vehicle and consequently to assess the occupant risk factors during the impact simulation. The LSDYNA card *ELEMENT_SEATBELT_ACCELEROMETER was used for this purpose. The results of the impact simulation were used to evaluate the influence of the output frequency on the computation of the acceleration time histories and occupant risk factors. The same impact scenario and finite element model were used to evaluate the effect of the location of the accelerometer sensor. Anghileri's work was used to explain some of the variation in crash test results observed in the ROBUST round-robin crash test series, where the same test was performed by five different crash test laboratories. Finite element simulations, vehicle dynamics simulations and full-scale crash tests were used to study the effect of sloped terrain on the safety performance of cable median barriers.(124) A detailed finite element model of a three-strand cable barrier was developed and validated against a previously conducted full-scale crash test. The full-scale crash test and

simulation were set up for an impact of the cable barrier according to Report 350 TL-3 conditions. The computer simulations were then performed to assess the barrier performance under different impact scenarios and terrain profiles. Vehicle dynamics analyses were also conducted to compute the vehicle trajectory and dynamics as it crossed the sloped terrain and struck the cable median barrier. After the computer simulations were completed, full-scale crash tests were performed to validate the results. A summary of the methods used to validate vehicle and barrier models in roadside safety research is shown in Table 11. As shown in the preceding sections on vehicle and barrier modeling, the most common methods for validating simulations are qualitative comparisons of time histories and sequential photographs and the comparison of domain-specific metrics from TRAP. There have been a few instances of the use of shape-comparison metrics, but these have been relatively rare.

VERIFICATION

Process

Two types of verification are discussed in this section. The first might be referred to as calculation verification, where solutions to standardized benchmark problems provide verification that new versions of software, new computing machinery or new software set-ups provide the same answers as previously accepted solutions. The other type of verification activity might be referred to as model assurance verification, where the model and its results are critiqued based on fundamental physical laws and sound engineering practice: qualitatively, do the results make sense? In this type of activity there is no known solution, but modeling techniques and procedures can be used to maximize the likelihood of producing a numerically stable solution that adheres to fundamental physical laws. The following sections describe each type of verification activity based on a review of the literature.

Table 11. Summary of methods used to validate models in roadside safety publications. Validation methods are grouped as qualitative (time histories; deformation) and quantitative (Geer's; NARD; TRAP; hypothesis testing); a √ marks each method used.

Ray, 1994 √ √
Zaouk et al 1996 √ √
Marzougui et al 1996 √ √
Hendricks and Wekezer (1996) √ √
Abu-Odeh et al 1997 √ √
Reid et al 1996 √
Paulsen and Reid, 1996 √ √
Eskandarian et al 1997 √ √
Ray and Patzner, 1997 √ √ √ √
Reid et al 1997 √ √ √
Zaouk et al 1997 √ √
Gentry and Bank, 1998 √ √ √
Plaxico et al 1998 √ √
Zaouk et al 1998 √ √
Martin and Wekezer, 1998 √ √ √
Reid and Sicking, 1998 √ √
Reid and Bielenberg, 1999 √ √
Tabiei and Wu 2000 √ √
Plaxico and Ray 2000 √ √ √ √ √ √
Marzougui et al 2000a √
Marzougui et al 2000b √
Eskandarian et al 2000 √ √ √
Bielenberg et al 2001 √
Ray et al 2001 √ √
Marzougui et al 2001 √ √
Zaouk and Marzougui 2001 √ √
Engstrand et al 2001 √ √
Kokkins et al 2001 √
Atahan 2002 √ √
Orengo et al 2003 √ √
Reid et al 2003 √ √
Marzougui et al 2004 √ √
Ray et al 2004 √ √
Whitworth et al 2004 √ √
Atahan and Ross, Jr 2004 √
Bielenberg, 2004 √ √
Mohan et al 2005 √ √
Atahan and Cansiz 2005 √ √ √ √ √ √
Miele et al 2005 √ √ √
Tahan et al 2005 √ √
Sheikh et al 2005 √ √
Atahan 2006 √ √ √
Bonin et al 2006 √
Bhargava et al (2006) √
Plaxico et al 2006a √
Plaxico et al 2006b √ √ √
Anghileri 2006 √ √ √
Pernetti and Scalera 2007 √ √
Mohan et al 2007a √ √
Marzougui et al 2007a √ √
Marzougui et al 2007b √ √ √

Calculation Verification

As discussed earlier, software or platform verification involves running computer models that have previously provided solutions with a particular software version and computational platform on new versions of the software or on different computational platforms or set-ups. For example, Ray made Cofie's simple model of an 820C passenger vehicle striking a rigid pole available on the Internet in about 1997 as a performance benchmark case for LSDYNA.(75) Users from around the world used that benchmark model to assess performance (i.e., run-time speed) on new computers and new computer set-ups as they came onto the market.(125) The frontal NCAP models of the NCAC Taurus and Neon have also been used as run-time benchmarks.(126) While these models were only used to compare run-time speeds, their use as benchmarks does illustrate the utility and importance of providing "known" solutions to test new software and computational platforms. It is likely that most serious roadside safety simulation researchers have their own informal benchmark cases that they run to verify that new versions of the software provide results similar to those of prior versions. Unfortunately, such activity is rarely documented in the literature and there is no standardization from one research group to another. Recently, during the ROBUST project in Europe, simulations with nominally identical models of the round-robin tests (i.e., a 900-kg vehicle, an impact angle of 20° and an initial velocity of 100 km/h against a rigid barrier) were independently run by different organizations to verify the consistency of results and procedures. This activity demonstrated that a common procedure to extract and process signals was needed. After such a procedure was developed, the scatter in the severity indices obtained by the different organizations was very low, much lower than that achieved in the experimental round-robin tests.

Calculation Verification Process

Verification activities should use the same general process as used in validation, with the exception that a known computational solution is used instead of an experimental solution. The same types of metrics used for validation can and should also be used for calculation verification. Since the purpose of the simulations is the same (i.e., roadside safety research), the same domain-specific validation metrics can be used to verify solutions. Likewise, the same shape-comparison metrics can and should be used to compare known computational solutions to new computational solutions. This approach has the advantage of simplifying the validation and verification process, since the same tools, techniques and procedures can be used in both areas.
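Geer's MPC metrics, cited repeatedly in the literature review above as validation measures, apply equally well to calculation verification, where the "measured" signal is simply the previously accepted benchmark solution. A sketch of the Sprague and Geers formulation of the magnitude, phase and comprehensive components is shown below; the variable names are illustrative, and both signals are assumed to be sampled identically.

    import numpy as np

    def sprague_geers(m, c):
        """Sprague-Geers magnitude (M), phase (P) and comprehensive (C) metrics.
        m: benchmark (or measured) signal; c: candidate signal. All three
        components are zero when the two signals are identical."""
        imm = np.dot(m, m)                       # integral of m^2 (dt cancels)
        icc = np.dot(c, c)                       # integral of c^2
        imc = np.dot(m, c)                       # cross term
        mag = np.sqrt(icc / imm) - 1.0           # relative magnitude error
        phase = np.arccos(np.clip(imc / np.sqrt(imm * icc), -1.0, 1.0)) / np.pi
        comp = np.sqrt(mag**2 + phase**2)        # comprehensive error
        return mag, phase, comp

In a calculation verification setting, m would be the archived benchmark acceleration history and c the history produced by the new software version or platform, with near-zero values of all three components expected.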

Model Assurance Verification

Model assurance verification should begin at the individual component level. Components in this context can be parts of the vehicle or barrier models (i.e., splices, connections, suspension parts, etc.), and the comparisons can include both time history comparisons and phenomenological comparisons. Various checks should be made prior to the analysis regarding mesh quality, the structural idealization of components and connections, and material characterization. The analyst should also make various verification checks during the analysis regarding energy balance, mesh distortion, contact stability, initial penetrations between components, etc., to ensure that the results are indicative of a well-behaved and stable model. The performance of the individual components of the model should also be assessed to ensure that they provide realistic behavior throughout the analysis.

Basic Assessment of the Finite Element Mesh

The verification procedures for the finite element mesh may specify metrics for element distortion, element aspect ratio, element warping, element surface normal direction, element Jacobians, connectivity problems, etc. Most finite element pre-processors are capable of performing basic quality checks of the finite element mesh, and some also include tools to automatically optimize selected elements of a mesh based on a set of given parameters; this is a considerable advantage when working with a mesh on complex geometries.

Assessment of Finite Element Model Mesh Discretization

It is good practice to conduct a mesh sensitivity study, in which the goal is to achieve a desired degree of accuracy with minimal computational effort. In general, the mesh should be refined in areas of higher strain. In a linear elastic stress analysis problem, deformations are typically small and an experienced analyst can easily identify potential areas with stress risers and tailor the mesh accordingly by refining it in those regions. In many cases, however, it is difficult to determine a priori where areas of high strain may occur. In a crash analysis problem, for example, deformations can be very large and deformation patterns are not easily determined prior to the analysis. A coarse mesh on such a model will result in a stiffer response than that of a more refined mesh; therefore, refining the mesh in specific areas will effectively introduce “weak points” in the mesh. In fact, selective mesh refinement may inadvertently bias the deformation pattern to correspond to the regions of higher mesh density, especially if the structural member is subject to buckling loads. There are several strategies available for quantifying the error in mesh discretization to help guide model revision. These errors are often calculated based on the differences between the element-by-element stress field and the averaged or smoothed stress field.(127) If the results of an analysis show steep gradients in certain elements, the error indicators will show larger errors in those areas than in regions where the gradients are smaller, and the mesh should be revised accordingly. These strategies are much less successful for transient loading because inertia effects and time integration schemes introduce additional complexity and approximations.
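Most pre-processors report the element-quality measures mentioned under Basic Assessment of the Finite Element Mesh above automatically, but the underlying calculations are simple. The sketch below computes two common screening quantities, aspect ratio and warpage, for a single quadrilateral shell element from its corner coordinates; the screening limits noted in the comment are assumed, project-specific values rather than requirements established by this report.

    import numpy as np

    def quad_quality(p):
        """Basic quality checks for a quadrilateral shell element.
        p: (4, 3) array of corner coordinates in connectivity order."""
        edges = np.roll(p, -1, axis=0) - p
        lengths = np.linalg.norm(edges, axis=1)
        aspect = lengths.max() / lengths.min()
        # Warpage: angle between the normals of the two corner triangles
        # (zero for a planar element).
        n1 = np.cross(p[1] - p[0], p[3] - p[0])
        n2 = np.cross(p[3] - p[2], p[1] - p[2])
        cosw = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
        warp_deg = np.degrees(np.arccos(np.clip(cosw, -1.0, 1.0)))
        return aspect, warp_deg

    # Assumed screening limits for illustration: aspect < 5, warpage < 15 degrees.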

For under-integrated elements, which are commonly used in crash analysis, there are no reliable error estimation techniques available based on stress gradients. A more qualitative assessment of mesh convergence can be achieved by refining the mesh and comparing results or by judging the smoothness of the deformed mesh. A finite element mesh developed for a component that is expected to experience large deformations should be able to accurately capture the deformed geometry. If the mesh density is refined enough to allow a smooth representation of the deformed mesh, then the mesh is acceptably convergent.(128) A metric could be developed based on the maximum allowable angle between adjacent elements in either the undeformed or deformed mesh; however, such calculations for an entire model would be impractical. An alternative would be to simply provide a qualitative assessment of the mesh quality based on the analyst's perception of the deformed mesh (e.g., is the deformed mesh free of apparent geometric inaccuracies and does the mesh adequately capture the deformed geometry?).

Energy Balance

When under-integrated elements are used, there may exist one or more deformation modes that result in zero strain at all the Gauss points of an element (i.e., hourglassing). Since the element perceives no strain during these deformation modes, the deformations will occur with no resistance from the element and will lead to erroneous results and often numerical instability. Most element types used in crash analysis are based on selective reduced integration or single-point integration and will have deformation patterns that may result in zero-energy modes (i.e., hourglass modes). Finite element codes that use under-integrated element formulations include options for controlling zero-energy modes. The classical method of controlling zero-energy modes is to apply forces at the nodes of an element to resist those deformations that lead to zero strains at the integration points. The magnitude of these forces is calculated by the code as a function of the element dimensions, the material properties and a user-defined penalty factor (i.e., scale factor). These hourglass forces are non-physical forces that tend to reduce the kinetic energy in a crash analysis. Since the hourglass forces cannot exactly compensate for the missing stiffness of the elements, the energy resulting from these forces should be low compared to the internal energy of the element to ensure reasonable accuracy of the solution, say

λ = e_h / e_i ≲ 0.1

where e_h is the hourglass energy and e_i is the internal energy.(128) Finite element programs generally provide energy calculations for the complete model and for individual part IDs in the model. They can also provide energy calculations on

individual elements, but to do so for the entire model would be impractical due to the resulting file size. A metric may be developed that defines an acceptable limit on hourglass energy based on the amount of internal energy computed for the overall model and for each individual part. This, however, is not necessarily a fool-proof means of ensuring that the hourglass energy is below acceptable limits. For example, a guardrail model may be developed with the entire length of w-beam rail elements identified in a single part ID. Only a small percentage of the overall length of rail will experience significant deformation in a crash, so comparing the amount of hourglass energy to the amount of internal energy experienced by the entire rail would likely result in a small value for λ even if the hourglass forces were very high in the impact region. Nonetheless, providing a metric would, at a minimum, require documentation of the hourglass energy in the model. In addition, the total energy in the model should be checked to ensure that it remains essentially constant. Checking the total energy, kinetic energy and momentum of a simulation is quite straightforward in LSDYNA, so the maximum change in energy over the simulation run can be reported. Ideally, there should be no change in total energy, but as a practical matter total energy sometimes varies due to a variety of computational effects, including hourglass energy, as discussed above, as well as contact and frictional forces. If the change in energy and momentum as a percentage of the initial energy and momentum is below some threshold value (e.g., say 5 or 10 percent), then the non-physical errors in the simulation are probably adequately small. If, on the other hand, either the energy or the momentum grows above this threshold, there is likely a numerical problem in the simulation that should be identified and corrected before the simulation is used for either validation or predictive purposes.

Mass Scaling

Another important issue that can affect the reliability of model results is mass scaling. Mass scaling is often utilized to improve the numerical stability of an analysis or to maintain an acceptable analysis time when employing the explicit time integration method for transient problems. The explicit time integration method is the most suitable and the most used by analysts for crash analysis problems. The primary drawback of the explicit time integration method is that the stability of the solution depends on the time-step used in the analysis. The critical time-step, Δt, is defined as

Δt = 2 / ω_max

where ω_max is the highest frequency in the structure. Determining the highest frequency in a structure, however, is a considerable effort in its own right. Most finite element codes use a simpler method of determining a suitable time-step based on the Courant-Friedrichs-Lewy (CFL) condition, which calculates an upper bound on the frequency of the structural

model based on the shortest time it takes a stress wave to traverse each individual element in the model. The time-step in LSDYNA, for example, is taken as

Δt = α · l_e / c_e

where l_e is the characteristic length of an element, c_e is the element wave speed and α is a reduction factor that accounts for destabilizing effects due to nonlinearities. Thus the critical time-step can be determined from the size and properties of each element and is computed internally by the finite element program prior to the start of the analysis. For a finite element analysis of a crash event to be computationally efficient, a practical time-step for the analysis must be maintained (e.g., typically on the order of one to three microseconds for crash analysis problems, based on a combination of model size and the computational speed of current computer hardware). For the analyst, this means that care should be taken when building the finite element mesh to ensure that small elements are not created that will result in an unreasonably small time-step for the analysis. Some geometries, however, cannot be meshed without using very small elements in certain areas, such as bolts, bolt holes and geometric stiffeners in sheet metal. In these cases, the analyst must make presumptions regarding the expected physical response of the part and balance computation time against the acceptable accuracy of the model. When the geometry and stiffness are of primary importance, it may be acceptable to increase the mass of these elements (i.e., lower the wave speed) in order to maintain a reasonable time-step for the solution, but the mass increase should be documented and justified by the analyst. This technique, called mass scaling, is very useful for maintaining run-time efficiency and the stability of an analysis and, if used carefully, will not affect the results. On the other hand, if used indiscriminately, mass scaling can produce incorrect results, since mass is being artificially added. The Survey of Practitioners, shown in Appendix D, indicated that nearly all analysts use mass scaling to some degree. A metric could be established that limits the amount of mass added in the analysis based on:

1) Percentage of the total mass of the model – Typically, the amount of mass added should be small in comparison to the overall mass of the model.
2) Percentage of the “moving” mass of the model – Too much mass added to moving parts will result in a non-physical increase in the initial kinetic energy of the system.
3) Percentage of mass added to individual elements of the model – Abrupt density changes in a mesh due to mass added to individual elements will influence the transmission and reflection of stress waves through the system.

Any model that exceeds the specified limit should be reported and justification provided.
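The energy-balance and mass-scaling checks described above lend themselves to simple automated screening once the solver's global energy histories and added-mass summary (e.g., from LSDYNA's glstat and matsum output) have been exported. The sketch below is one possible form of such a screen; the threshold values and argument names are assumed for illustration and should be set to project-specific limits.

    import numpy as np

    def verify_run(total_e, internal_e, hourglass_e, added_mass, model_mass,
                   lam_max=0.10, drift_max=0.05, mass_max=0.01):
        """Screen a completed run for non-physical energy and mass effects.
        The three energy arguments are time histories in consistent units."""
        report = {}
        # Hourglass-to-internal energy ratio (applying this per part, not just
        # globally, avoids masking localized hourglassing).
        report["hourglass_ratio"] = hourglass_e.max() / max(internal_e.max(), 1e-30)
        # Total-energy drift as a fraction of the initial total energy.
        report["energy_drift"] = np.abs(total_e - total_e[0]).max() / abs(total_e[0])
        # Mass added by mass scaling as a fraction of the model mass.
        report["added_mass_fraction"] = added_mass / model_mass
        report["pass"] = (report["hourglass_ratio"] <= lam_max
                          and report["energy_drift"] <= drift_max
                          and report["added_mass_fraction"] <= mass_max)
        return report

For reference, the time-step relation above gives roughly Δt ≈ 0.9 microseconds for a 5-mm steel shell element (c_e = √(E/ρ) ≈ 5,000 m/s with E = 200 GPa and ρ = 7,850 kg/m³, and α ≈ 0.9), which is consistent with the one-to-three-microsecond range quoted earlier.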

Assessment of Structural Idealization

Due to the immense computational requirements of analyzing a system in complete geometric and material detail, certain components of a system are often idealized in an attempt to balance analysis time against the acceptable accuracy of the results. For example, structural members may be idealized as one-dimensional rods or beams, two-dimensional plates or shells, or three-dimensional solid elements. Bolted connections and other fasteners are often modeled using kinematic idealizations (e.g., a constrained joint approach), discrete elements (e.g., nonlinear springs) or simple multi-point constraints. Understanding the response and limitations of each of these modeling methodologies is very important in assessing their applicability to a given model. As discussed earlier, to maintain computational efficiency in the analysis, a practical time-step must be maintained. If the various connections in a system, such as bolts, rivets and welds, were modeled in geometric detail, a very refined mesh in those local areas would be required in order to capture the correct geometry of those components and to accurately compute the high stress and strain concentrations in the local vicinity of the connections. It is therefore common practice to model these connections using more simplistic techniques such as spotwelds-with-failure or non-linear springs. Whatever method an analyst uses to model a particular component, it should be verified that the model produces results consistent with its expected behavior. In a dissertation by Plaxico, a model of a strong-post guardrail system was developed and the process of applying mathematical modeling techniques based on an understanding of the physical problem and correlation with physical tests was illustrated.(129) For example, Plaxico's study showed that using a relatively fine mesh around the bolted connection of the w-beam splice in the model required a time-step on the order of 0.1 microseconds, which was not practical for the analysis of the full-scale impact event lasting 0.6 to 1.2 seconds (e.g., the analysis would require 6,000,000 to 12,500,000 time-steps to complete the simulation). Thus, more simplistic techniques for simulating the bolted connections were investigated, such as nodal rigid body spot-welds, non-linear springs and modeling the connection in geometric detail with a relatively coarse mesh. Nodal rigid bodies were not able to simulate the relative movement of the two w-beam sections in the splice, since they are rigid connections, and were therefore not recommended. It was determined in the study that, when the splice is subjected to uniaxial loading, an appropriate method of connecting the w-beams together is to use non-linear springs with a force-displacement relationship defined such that the correct axial displacement of the splice is obtained for a given tensile load in the w-beam. This is especially important in the upstream and downstream regions of the guardrail model away from the impact zone, where the rail is in “pure” tension. In the impact zone, where the loading on the splice is more complicated, an explicit geometric model of the bolted connection may be required. Figure 40 shows the results of a finite element simulation compared to a physical test of a splice

The foregoing example discussed the process used by members of the research team to qualitatively verify and validate the accuracy of an idealized modeling technique for simulating a bolted connection. Other components of the model, including those modeled in geometric detail, should be verified in like manner to ensure that the element type, element discretization and material characterization used in the model are appropriate for the given problem.

Assessment of Material Characterization of Individual Components

Material properties in particular can significantly affect results, and unfortunately, some materials common to roadside safety barrier systems have widely varying mechanical properties (e.g., soil and wood) and may be sensitive to load rate and environmental conditions (e.g., temperature, moisture, ultraviolet exposure and aging).
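Strain-rate sensitivity, for instance, is often represented by a simple amplification of the static yield stress. The sketch below uses the Cowper-Symonds relation, one common choice in metal constitutive models in codes such as LS-DYNA; the constants C = 40.4 1/s and p = 5 are values frequently quoted for mild steel and are shown here only as placeholders for rate tests on the actual material.

```python
def cowper_symonds_scale(strain_rate, C=40.4, p=5.0):
    """Dynamic yield amplification: sigma_d / sigma_y = 1 + (rate/C)**(1/p).
    C and p are placeholder constants commonly quoted for mild steel;
    project values should come from rate-dependent tests."""
    return 1.0 + (strain_rate / C) ** (1.0 / p)

for rate in (1.0, 10.0, 100.0):  # 1/s, spanning slow loading to impact rates
    print(f"strain rate {rate:6.1f}/s -> yield scale {cowper_symonds_scale(rate):.2f}")
```

With these constants the model roughly doubles the static yield stress at impact-level strain rates, which is precisely why handbook minimum values can mislead a crash analysis.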

Achieving accurate correlation between numerical simulation and physical tests requires that accurate material properties be used in the analysis. Metals, such as steel and aluminum, are relatively easy to characterize using standard material models available in most commercial finite element codes. LS-DYNA, for example, has several material models appropriate for simulating the behavior of metals, including thermal and strain-rate effects. Obtaining the input properties from design handbooks, however, may be misleading. For instance, the yield point obtained from a design handbook is usually a minimum value, which may be significantly less than the actual yield stress of the material. Although such a value may be acceptable for design based on linear elastic stress analysis, it may not be appropriate in the design of crashworthy structures, where energy management through plastic deformation is the governing response.

Obtaining material properties by conducting laboratory tests on samples taken directly from the part in question is by far the most desirable method, but test setup and conduct may be challenging due to limitations in testing equipment, particularly regarding the characterization of rate effects. Wright and Ray, for example, developed material properties for AASHTO M-180 guardrail steel by cutting coupons of actual guardrail material, performing standard ASTM tensile tests to obtain material properties and then using those properties in several LS-DYNA constitutive models.(46)

It is not considered practical to develop a metric for verifying the methods used to generate material properties or to apply material constitutive laws to components of a model. Finite element models are developed for a wide range of structural systems, and the types of materials that may be used in those designs are limitless. It is suggested, however, that analysts document all material properties used in a model along with a reference to where those properties were obtained. Over time this should result in a literature documenting validated material models for roadside hardware use. An analyst need not validate every material every time a new simulation is performed, as long as the material properties used can be attributed to a source in the literature where they were validated.

Model Assurance Verification Process

The metrics for verification should include parameters that analysts generally examine to assess the energy balance, numerical stability and quality of the model. Such metrics may include common sources of uncertainty in models such as geometric detail, spatial discretization, element quality, element type, boundary conditions, loading conditions, appropriateness of material constitutive laws, material properties, structural idealization of the mechanics problem, and conservation of mass and energy during the calculations. A datasheet could be developed for the analyst to use in documenting such verification metrics.
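As one illustration of what such a datasheet might contain, the sketch below (an assumption of this rewrite, not a form prescribed by the report) records a handful of the verification metrics named above, together with the material-property source citations recommended earlier, and applies hypothetical acceptance limits.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationDatasheet:
    """One possible layout for the verification datasheet suggested above.
    Field names and acceptance limits are illustrative assumptions only."""
    model_name: str
    total_energy_drift_pct: float     # change in total energy over the run
    hourglass_to_internal_pct: float  # hourglass energy / internal energy
    added_mass_pct: float             # mass added by mass scaling
    material_sources: dict = field(default_factory=dict)  # part -> citation

    def passes(self, energy_limit=10.0, hourglass_limit=5.0, mass_limit=5.0):
        """Apply hypothetical acceptance limits to the recorded metrics."""
        return (abs(self.total_energy_drift_pct) <= energy_limit
                and self.hourglass_to_internal_pct <= hourglass_limit
                and self.added_mass_pct <= mass_limit)

sheet = VerificationDatasheet(
    model_name="strong-post w-beam guardrail",
    total_energy_drift_pct=3.2,
    hourglass_to_internal_pct=2.1,
    added_mass_pct=1.4,
    material_sources={"w-beam rail": "AASHTO M-180 coupon tests (46)"},
)
print("verification checks pass:", sheet.passes())
```

A completed datasheet of this kind, archived with the model, would give reviewers a compact record of the energy-balance, stability and material-documentation checks without rerunning the analysis.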
