CHAPTER 4 PROCEDURES

INTRODUCTION

The purpose of this chapter is to present potential procedures for verification and validation of computer simulations in roadside safety applications. The procedures described herein apply primarily to incremental improvements to roadside safety hardware. For example, say that a particular type of guardrail terminal has been designed, crash tested and accepted by the Federal Highway Administration (FHWA) for use and has been in service for some years. The designer and manufacturer may be considering a small change that will improve the performance, reduce the cost of the system or both. Perhaps the manufacturer has made some changes in the impact head of an energy-absorbing terminal to reduce weight (primarily a cost savings but also some safety benefit) while making the impact area slightly taller (primarily a safety improvement). Another example might be using a blockout of a different shape or material in a strong-post w-beam guardrail. The designers may decide to investigate incremental improvements like these using a finite element program like LSDYNA, either in lieu of or in preparation for a crash test.

How can the designer satisfy decision makers (i.e., the FHWA and the States) that the incremental design is acceptable according to the Report 350 or MASH crash testing recommendations based on the finite element simulations? What information does the designer need to provide to the decision maker to make the case that the new incremental improvement will satisfy the crash testing guidelines as the original design did? Can the computer simulation provide all the necessary information for the decision maker such that additional crash tests are avoided? The purpose of this document is to provide answers to these questions.

The focus of this chapter, therefore, is on providing information to decision makers that will allow them to make an acceptance decision for incremental improvements to roadside hardware. These procedures are not intended for use on completely new hardware where there is little or no community experience; rather, they are intended for improvements to hardware whose performance is well known. These procedures are nonetheless likely to be useful to hardware designers and crash test agencies in exploring new design options and evaluating them prior to a crash test, even if the materials are not subsequently used in applying for acceptance of a design change. As the community becomes more experienced and proficient with the use of numerical simulation technologies, it is likely that the ad hoc definition of what constitutes a small incremental change will expand. A decision maker will always be free to require either more information or a new crash test if the increment seems too large for their confidence. The procedures and recommendations herein primarily provide a language and a means of communication that will allow designers and decision makers to compare test and numerical analysis results in judging the performance of roadside safety systems.

DEFINITIONS

The definitions of verification, validation and calibration, as they are used in these procedures, were adopted with slight modifications from the definitions presented in the ASME "Guide for Verification and Validation in Computational Solid Mechanics," ASME V&V 10-2006.(17) These definitions were discussed in detail in Chapter 2 but are summarized below.

Verification

Verification is concerned with how well the discrete numerical approximation (e.g., an LSDYNA simulation) agrees with the known mathematical solution (i.e., the differential equation solution). Thus, verification is the process of ensuring that the computational model provides results consistent with the developer's conceptual description of the model and the underlying assumptions that form the basis of the mathematical model (i.e., the model responds as the developer intends) and that computed results adhere to basic physical laws, such as conservation of energy and momentum.

There are no "known" solutions available in roadside safety in the sense of a closed-form differential equation that defines the mechanical and dynamic response of roadside hardware. Impacts with roadside hardware are complex since they involve a number of complicated mechanical structures (e.g., the vehicle, the barrier and even the terrain). While there are no "known" solutions, numerical solutions still must satisfy basic physical conservation laws: energy, momentum and mass must be conserved. The procedures developed in this chapter regarding model verification, as it relates to crash simulations of roadside safety hardware, include some solution verification criteria that help assure the analyst and decision maker that the numerical results are consistent with these basic conservation laws. Other types of verification, for example code verification, are not directly discussed in this document. Generally, code developers have the ability and responsibility to perform code verification of a numerical method, and it is generally presumed that a code used for roadside safety simulations has been independently verified.

Validation

While verification is primarily concerned with ensuring that results adhere to basic physical laws, validation is concerned exclusively with comparing the numerical solutions to real-world physical experiments. Validation, as used throughout this report, always implies that a numerical solution is being compared to some type of physical experiment. Validation, as it relates to crash simulations of roadside safety hardware, is defined in these procedures as the process of determining the degree to which the computational model is an accurate representation of the real-world crash tests from the perspective of accurately replicating (i) the NCHRP Report 350, MASH or EN 1317 crash test evaluation parameters, (ii) the structural performance of the barrier and (iii) the response of the vehicle. It is important to keep in mind that the intended use of the model is to assess the results of the computer simulation in the same manner as a crash test would be assessed.

Calibration

Calibration is often confused with validation and verification. Verification and validation involve comparisons with physical experiments or known solutions that are independent of the model development, whereas calibration is the use of physical experiments, the literature or analytical solutions to estimate the parameters needed to develop the model. For example, if a material model is needed in a particular finite element simulation, the analyst may perform some physical tension tests in the laboratory to obtain the stress-strain response of the material. These physical test results can then be used to estimate the parameters needed for the computational material model. Such a material model has been "calibrated" by the physical tests. These same physical tests cannot be used to "validate" the material model since one is dependent on the other. On the other hand, the analyst might obtain the material properties from a handbook or the literature and use these literature parameters to build the model. A physical experiment in the laboratory can then be used to validate the material model because it is independent of the model development. Physical experiments, therefore, can be used either to estimate parameters and thereby calibrate a model, or to validate a model, but not both.

PROCEDURES

Introduction

An informal procedure for verification and validation has evolved in roadside safety over the past decade, as described in the literature review presented in Chapter 2. The procedures in this document formalize what many in the roadside safety community have been doing informally for a number of years. The intent is not to create a burdensome and difficult procedure but rather to standardize an ad hoc procedure already in use such that decision makers can readily assess results from different laboratories and analysts. The procedure for the validation of roadside safety models is shown graphically in Figure 41 and includes the following seven steps:

Figure 41. Roadside safety validation and incremental design process.

1. Identify the baseline experiment,
2. Build the computational model of the baseline experiment and document its characteristics in roadside hardware and vehicle PIRTs,
3. Use the model to simulate the baseline experiment,
4. Validate the model by comparing the simulation results to the physical test results,
5. Modify the model to represent incremental improvements of the baseline hardware design configuration,
6. Use the model to predict the performance of the incremental improvement, and
7. Evaluate the performance of the incrementally modified device to determine if the improvement satisfies the appropriate crash testing guidelines (e.g., Report 350 or MASH).

Each of these steps will be discussed in the following sections.

Identify the Baseline Experiment

The first step, shown at the top of Figure 41, involves identifying a baseline experiment for the comparison. Since these procedures are intended for incremental improvements to existing roadside hardware, it is presumed that there are already crash tests available that document the performance of the original roadside safety hardware and that the original design is fairly similar to the anticipated incremental improvement. For example, one of the examples presented later in Chapter 6 involves assessing the performance of a strong-post w-beam guardrail placed behind a curb. There are numerous crash tests of strong-post w-beam guardrails, so one was chosen from the literature where the test report, electronic data and film/video data were available. This test from the literature did not have a curb placed in front of the guardrail, but it was a standard strong-post w-beam guardrail. This test, therefore, was identified as the baseline test for the validation exercise.

As much documentation of the experiment as possible should be obtained. At a minimum, the time histories (normally in the original raw electronic form) and a test report including the usual crash test evaluation criteria and photographs will be needed to perform the comparison in step four, but video, additional photographs and other types of documentation may also be helpful in performing the comparison. There is no limit per se on the age of the experiment as long as all the needed data are available, but as a practical matter vehicle models do not really exist for pre-Report 350 vehicles, and test documentation including the electronic files may be difficult to find for older crash tests. The vehicle model used in the simulation must be a reasonable approximation of what was used in the crash test, so for practical reasons baseline tests will generally be those that were performed for Report 350 (i.e., after about 1993). The result of this first step in the procedure should be complete documentation of the baseline crash test or tests.

Build the Model

Next in Figure 41, on the left-hand side, models of the vehicle, barrier and support conditions are developed to exactly match the baseline experiments. The development of the models may involve calibration, verification or validation of parts, materials, subassemblies and assemblies of the complete model. Figure 42, for example, shows a schematic representation of a guardrail (e.g., the so-called European super rail) struck by a small passenger vehicle. The vehicle, barrier, and any boundary conditions constitute the whole model. Major independent portions of the model like the vehicle and barrier are "assemblies" in the model (note: generally the boundary or support conditions are presumed to be included in the roadside hardware model). By definition, an assembly is composed of a collection of subassemblies. The suspension system and the vehicle frame, for example, are subassemblies of the vehicle assembly, and the rail, posts and anchor systems are subassemblies of the guardrail system. A subassembly is composed of a collection of parts. The post is a subassembly that is composed of parts representing the post, blockout, fasteners, spacers, stiffeners and soil. Each of these parts in turn is defined by its geometry and material properties.

Figure 42. Hierarchy of a typical roadside hardware finite element model.

During the model development phase, each part, subassembly, and assembly should be calibrated, verified or validated if at all possible.
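As a bookkeeping aid, the assembly/subassembly/part hierarchy sketched in Figure 42 can be recorded together with the comparison status of each component. The following minimal Python sketch is purely illustrative; the class names, component names and status labels are hypothetical and are not part of RSVVP or of any table defined in these procedures.

    # Illustrative bookkeeping of a model hierarchy like the one in Figure 42.
    # All names and statuses below are hypothetical examples.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Part:
        name: str
        status: str = "none"          # "calibrated", "verified", "validated" or "none"

    @dataclass
    class Subassembly:
        name: str
        parts: List[Part] = field(default_factory=list)

    @dataclass
    class Assembly:
        name: str
        subassemblies: List[Subassembly] = field(default_factory=list)

    barrier = Assembly("guardrail", [
        Subassembly("post", [Part("steel post", "validated"),   # e.g., pendulum tests in soil
                             Part("blockout", "calibrated"),
                             Part("soil", "calibrated")]),
        Subassembly("rail", [Part("w-beam", "calibrated"),
                             Part("splice bolts")]),
    ])

    # List components that still have no supporting comparison of any kind.
    for sub in barrier.subassemblies:
        for part in sub.parts:
            if part.status == "none":
                print(f"{sub.name}/{part.name}: not yet calibrated, verified or validated")

Such a listing makes it easy to see, before the full crash simulation is attempted, which components of the model still lack any supporting comparison.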

Requiring validation of components at every level of the model hierarchy would be unreasonably burdensome and may not even be possible in some cases. Calibrating and validating as many of the components as possible will, however, increase the overall confidence in the accuracy and robustness of the model predictions. For example, in developing the post subassembly, the interaction of the steel guardrail post and the soil is an important step. The analyst may decide to perform some pendulum experiments on a typical guardrail post embedded in a typical soil. At the same time, the analyst would build a numerical model of the pendulum experiment and perform numerical calculations for the same impact conditions. Similarly, the rear suspension of a vehicle might be examined by performing some "bump" tests; a numerical simulation of the same suspension components under the same impact conditions could then be performed and compared to the physical experiment. The results of the numerical pendulum calculations and the physical pendulum tests can then be compared to see if the numerical results can be validated against the experimental results. Each part, subassembly and assembly that can be validated in the model development process will result in increased confidence in the model's ability to correctly predict the crash performance.

Every roadside hardware simulation will contain a vehicle assembly and a roadside hardware assembly. It is important to document the capabilities of these two major assemblies in order to assess what the model can reasonably be expected to predict. For example, if the model does not include the possibility for the guardrail elements to fail, then it will never be able to replicate and be validated against an experiment where guardrail material failure was observed. The capabilities of the major assemblies (i.e., the vehicle model and the roadside hardware model) are documented in a so-called phenomena importance ranking table (PIRT).

Phenomena importance ranking tables (PIRTs) are a technique that has been suggested by Oberkampf as a means of documenting, verifying and validating the types of phenomena a numerical model is intended to replicate.(16) Since all mathematical models are abstractions of physical phenomena, all modelers are implicitly making assumptions about what is important in the model and what phenomena should be represented in the mathematical model. Unfortunately, these assumptions are generally not apparent to those reviewing the results of the model. For example, a modeler may build a vehicle model assuming that snagging of vehicle components is not an important phenomenon because the modeler's intent is to investigate the response in a vertical rigid wall impact. In such a case, the modeler may not include contacts on the side of the vehicle to represent the door edges or edges of the body panels. Another modeler may take the vehicle model and use it in a guardrail simulation not knowing that snagging was not considered during model development. The simulations of the guardrail are then unlikely to predict or replicate snagging since this phenomenon was never included in the original model. In reviewing the results of the model, the second modeler may incorrectly believe that there are no snagging issues when, in fact, there are no snagging issues in the simulation only because they were not accounted for in the model.
Likewise, if a material model is defined without failure conditions, the physical system may experience failure in a test, but this would not be observed in the model since failure was not included in the model.

A PIRT provides a quick way of documenting the phenomena that are included in the model so that subsequent users of the model or reviewers of the results will know what types of phenomena can reasonably be expected. Examples of roadside hardware and vehicle PIRTs are included in Chapter 6 and Appendix C.

In essence, the development of the PIRT is a validation exercise for subassemblies, components and parts of the overall assembly. When developing a numerical model, it is common practice to perform experiments for some of the important subassemblies and components and to compare the results to computational experiments from the model. For example, a material coupon test might be performed in the laboratory and compared to a numerical version of the coupon test in the model to ensure the results are compatible. Likewise, a timber guardrail post might be tested using a ballistic pendulum to investigate its impact performance and the results compared to the computational model of the timber post. If an experiment is being compared to a computation, the computation is being validated. It is sometimes not possible to validate all parts and subassemblies, so the model behavior can also be calibrated or verified depending on the information available. For example, the steering response should follow the well-known Ackerman angles, and comparing the model results in a steering simulation to the theoretical Ackerman angles would be a verification activity since the comparison is with a closed-form mathematical solution.

In the following discussion, the reader should refer to the example vehicle and roadside hardware PIRTs included in Appendices C6 through C7. Development of a PIRT for either a roadside device or a vehicle involves three steps:

1. First, all comparisons to physical experiments or mathematical models that were performed during the development of the model should be listed and assigned a phenomenon number (e.g., see Table C6-1, C7-1 or C8-1). These may include laboratory tests of materials, dynamic or static tests of components, full-model tests of suspension systems and any other type of comparison between a physical test and the computational model.

2. For each comparison between an experiment and calculation (e.g., row number 1 in Table C6-1), a "Comparison Metric Evaluation Table" should be developed (e.g., see Table C6-2). The metric evaluation can be performed using the program RSVVP, which is described in Chapter 5. The results of the curve comparison should be entered into the table. There should be one "Comparison Metric Evaluation Table" for each comparison listed in the list of experiments. It is also helpful to include some summary information about the comparison on the metric evaluation table, although precisely what to include will depend on the particular comparison. For example, if an experiment is performed to validate the suspension deformations under various loads (e.g., see Table C7-2), a load-deformation plot and a photograph of the experimental set-up would be useful items to include.

3. If the comparison in the "Comparison Metric Evaluation Table" can be judged acceptable according to the criteria, the "phenomenon description" should be taken from the bottom of that table (e.g., Table C6-2) and entered into the PIRT (e.g., Table C6-8). The far right column allows the developer to indicate whether the experiment was validated, verified or calibrated. Validation should be indicated only if the experimental results were compared to the analysis results and the two are independent (i.e., the experimental results were not used to establish properties of the analysis model). If the comparison is between an analytical theory and the analysis results, the model has been verified. Lastly, if the results of the experiment were used to determine material properties or some other characteristic of the model, the analysis result has been calibrated, meaning the experiment and analysis are related to each other. The appropriate type of comparison should be indicated in the right column of the PIRT (e.g., see Table C6-8). Phenomena that did not result in acceptable comparisons in the "Comparison Metric Evaluation Table" should not be entered in the PIRT.

A decision maker can very quickly determine what types of phenomena the vehicle or roadside hardware model is capable of replicating by examining the appropriate PIRT. If the decision maker feels that an important phenomenon is missing, the analyst may be asked to perform some tests and numerical simulations to validate, verify or calibrate that phenomenon and thereby add it to the PIRT. In this way, the PIRT also provides a way of keeping track of improvements to models that are re-used, since new phenomena can be added as the model is used for an increasing variety of roadside hardware assessments.

The result of this second step in the procedure should be (1) a complete model of the roadside hardware simulation including the vehicle, roadside hardware and appropriate boundary conditions, (2) a PIRT describing the vehicle capabilities and (3) a PIRT describing the roadside hardware capabilities.
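The logic of step 3 can be captured in a few lines of bookkeeping code. The sketch below is a hypothetical illustration only; the record fields and the example phenomenon are invented for this sketch and do not reproduce the actual PIRTs or tables in Appendix C.

    # Hypothetical PIRT record assembled from a comparison-metric result (step 3 above).
    # Field names and example values are illustrative, not taken from Appendix C.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PIRTEntry:
        number: int            # phenomenon number assigned in step 1
        phenomenon: str        # description taken from the metric evaluation table
        comparison_type: str   # "validated", "verified" or "calibrated"

    def make_pirt_entry(number: int, phenomenon: str, comparison_type: str,
                        metrics_acceptable: bool) -> Optional[PIRTEntry]:
        """Enter a phenomenon in the PIRT only if its metric comparison was acceptable."""
        if not metrics_acceptable:
            return None        # unacceptable comparisons are not entered in the PIRT
        return PIRTEntry(number, phenomenon, comparison_type)

    # Example: a guardrail post/soil pendulum test compared with its simulation.
    entry = make_pirt_entry(1, "Post-soil response under lateral pendulum impact",
                            "validated", metrics_acceptable=True)
    print(entry)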

Compare the Baseline Test to the Computer Simulation

Once the model has been completely developed, including the vehicle and roadside hardware PIRTs, and the baseline tests have been identified, a computer simulation of the baseline test is performed. The objective of this simulation is to match the impact conditions in the baseline test as closely as possible. The results of the computer simulation of the baseline condition are then compared to the results of the physical full-scale crash test by completing the "Validation Report." Examples of several verification and validation reports are provided in Appendices C1 through C5, and blank forms are included in Appendix E. The validation/verification reports each have four parts:

1. Basic information,
2. Solution verification,
3. Time history comparison and
4. Domain specific evaluation.

The first part, basic information, lists important information about what the baseline test is, what organization performed the baseline test, what organization performed the numerical solution, the impact conditions, etc. (see Appendix E).

The second part, solution verification, involves global checks to make sure the numerical solution appears to be stable and conforms to the conservation laws. The analyst should fill in the information shown in Table E-1. The table requires information about the total energy, energy balance, hourglass energy, shooting nodes and other computational characteristics of the model. The purpose of this part is to provide information to the decision maker indicating that the numerical solution obeyed basic physical laws (e.g., conservation of energy, mass and momentum) and that the solution is numerically stable. The analyst can certainly add to the list shown in the examples, but at least those items shown should be reported. All the criteria listed in Table E-1 must be satisfied.
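The solution verification checks lend themselves to simple scripted post-processing of the solver's global output. The sketch below is a minimal, hypothetical example of such a check; the function interface, the quantities examined and the numeric thresholds are assumptions made for illustration, and the quantities and criteria that must actually be reported are those listed in Table E-1.

    # Hypothetical post-processing check of global solution quantities for a crash simulation.
    # Inputs are 1-D numpy arrays from the solver's global output; the thresholds below are
    # illustrative assumptions, not the Table E-1 criteria.
    import numpy as np

    def solution_verification(time, total_energy, hourglass_energy, external_work, end_time,
                              max_energy_drift=0.10, max_hourglass_ratio=0.05):
        """Return pass/fail flags for a few global sanity checks on a completed run."""
        checks = {}

        # The total energy should track the initial energy plus the external work;
        # a large drift suggests an energy balance (conservation) problem.
        drift = total_energy - (total_energy[0] + external_work)
        checks["energy_balance_ok"] = np.max(np.abs(drift)) <= max_energy_drift * total_energy[0]

        # Hourglass (zero-energy mode) energy should stay a small fraction of the total energy.
        checks["hourglass_ok"] = np.max(hourglass_energy / total_energy) <= max_hourglass_ratio

        # A crude stability check: the solution ran to the intended termination time
        # (shooting nodes, added mass, etc. would be checked and reported similarly).
        checks["ran_to_completion"] = time[-1] >= 0.999 * end_time

        return checks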

In the third part, time history comparisons, the analyst performs a quantitative comparison of the time histories of the vehicle dynamics. The quantitative evaluation metrics can be easily generated by the RSVVP program, discussed in the next chapter, when comparing the time histories from the crash test to the numerical solution. The details of which quantitative metrics to use, how to calculate them and the appropriate acceptance criteria are discussed in Chapter 5 and illustrated in the examples shown in Chapter 6. The objective of this third step in the process is to compare the baseline crash test time histories to the numerical time histories in an objective, quantifiable manner.

First the analyst should fill out Table E-2, using RSVVP to calculate the Sprague-Geers MPC metrics and the ANOVA metrics for each available time history (i.e., all the time histories collected in the full-scale crash test experiment). The time histories should be compared in the original units and orientation. For example, if the test vehicle was instrumented with accelerometers, accelerations should be compared, and if the test data were collected in the local coordinate reference frame, the comparison should likewise use the local reference frame. If all the metrics satisfy the criteria in Table E-2, the time history comparisons can be considered acceptable and the analyst may continue on to the next step.

Sometimes, however, there may be one or two relatively unimportant channels that do not result in good quantitative comparisons. An example might be a small sign support test where the longitudinal acceleration has a much greater influence on the results of the impact event than do the lateral or vertical accelerations. The less important channels may not satisfy the criteria because they are essentially recording noise. The longitudinal channel in this example will probably be an order of magnitude greater than some of the other less important channels, and the response is essentially determined by the one longitudinal channel. RSVVP includes a method for accounting for different levels of importance of channels. The procedure will be explained in more detail in Chapter 6, but it basically uses the change in momentum represented by each channel and weights the comparison metrics by the proportion of the momentum in each channel. For example, if the longitudinal channel in the sign support example accounts for 80 percent of the linear and angular momentum in the crash test, the longitudinal channel will have a weight of 0.8 and the other channels will have smaller weights summing to 0.2.

Table E-3, the multi-channel option, has been included in the validation procedure to account for such cases. The user can run RSVVP in multi-channel mode to calculate the weighted Sprague-Geers and ANOVA metrics for the six channels of data that are typically collected in full-scale crash testing of roadside hardware, namely the longitudinal, lateral and vertical accelerations and the roll, pitch and yaw rotation rates. If the metrics satisfy the criteria in Table E-3, the time history comparison can be considered acceptable. In summary, the time history comparison is considered acceptable if (1) all the channels result in acceptable comparison metrics (i.e., Table E-2) or (2) the weighted channel results produce acceptable comparison metrics (i.e., Table E-3).
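For readers unfamiliar with these metrics, the sketch below shows one straightforward way the Sprague-Geers and ANOVA quantities, and a momentum-proportional channel weighting in the spirit described above, could be computed for pre-aligned, equal-length curves. It is an illustrative re-implementation under those assumptions, not the RSVVP source code; the exact formulations and the acceptance criteria actually used are those given in Chapter 5.

    # Illustrative comparison metrics for a test curve and a simulation curve sampled at the
    # same times. This is not RSVVP; the formulations and criteria of record are in Chapter 5.
    import numpy as np

    def sprague_geers(test, sim):
        """Magnitude, phase and comprehensive metrics, expressed in percent."""
        mm, cc, mc = np.dot(test, test), np.dot(sim, sim), np.dot(test, sim)
        magnitude = np.sqrt(cc / mm) - 1.0
        phase = np.arccos(np.clip(mc / np.sqrt(mm * cc), -1.0, 1.0)) / np.pi
        comprehensive = np.sqrt(magnitude ** 2 + phase ** 2)
        return 100 * magnitude, 100 * phase, 100 * comprehensive

    def anova(test, sim):
        """Mean and standard deviation of the residuals, normalized by the peak test value."""
        residuals = (test - sim) / np.max(np.abs(test))
        return residuals.mean(), residuals.std()

    def momentum_weights(channels, dt):
        """Weight each channel in proportion to the momentum change it represents,
        approximated here by the area under the absolute acceleration or rate curve."""
        areas = np.array([np.trapz(np.abs(ch), dx=dt) for ch in channels])
        return areas / areas.sum()

A weighted multi-channel score would then simply be the weighted average of the single-channel metrics using these weights.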
The fourth part of the verification and validation report compares the phenomena observed in both the crash test and the numerical solution. Table E-4 contains the Report 350/MASH crash test criteria with the applicable test numbers. The analyst should circle all the criteria that apply to the particular test being compared. For example, the small car test level three longitudinal barrier test is test 3-10, so the analyst would circle criteria A, D, F, H, I and M since these are the criteria that Report 350 uses to evaluate test 3-10 (i.e., see the right column of Table E-4). Tables E-5a through E-5c contain an expanded list of these same criteria and provide space for the analyst to enter the observed crash test response and the observed numerical response. If the two agree, the analyst can indicate that there is agreement between the test and the numerical solution. For example, for test 3-10, criterion A1 in Table E-5a requires that the vehicle be contained and redirected. If both the numerical solution and the crash test resulted in redirection, the numerical solution and crash test would be judged to agree. If neither redirected the vehicle, they would still agree. If the crash test vehicle vaulted over the barrier, however, and the numerical solution indicated redirection, then the two did not agree. The analyst should enter the result for each of the applicable criteria and indicate whether there is agreement between the crash test and numerical solution in the right column of Table E-5.

Some of the phenomena in Table E-5 are binary (e.g., "did the barrier contain and redirect the vehicle?" requires a "yes" or "no" answer) while others are numerical. For the numerical comparison phenomena, the results for the experiment and analysis should be entered into Table E-5 and compared in both an absolute and a relative sense. For example, the lateral occupant impact velocity in a test 3-10 crash test might be observed to be 9 m/s whereas the analysis solution predicts 10.5 m/s. The relative difference is the absolute value of the difference divided by the "true" (i.e., experimental) value, so in this example the relative difference is 16 percent. In general, results must agree within 20 percent, so this comparison would be judged acceptable. In some cases where the values are very small, the relative difference can give unreasonable results, so the absolute difference must also be examined. For example, suppose the longitudinal ridedown acceleration in a test 3-10 crash test is 3 g's and the analysis solution predicts 4 g's. The relative difference in this case is 33 percent, but clearly the values are very close since the absolute difference is only one g.

To account for these situations, the Report 350/MASH limit on the criterion was taken and the value corresponding to 20 percent of the Report 350/MASH acceptance value was calculated. For example, Report 350 limits the ridedown accelerations to 20 g's, so 20 percent of 20 g's is 4 g's. Any comparison where the absolute difference in occupant ridedown acceleration is less than 4 g's is, therefore, acceptable. Numerical comparisons are acceptable, therefore, if the relative difference is less than 20 percent or the absolute difference is less than the value indicated in Table E-5.

All the applicable criteria identified in Table E-4 should show acceptable agreement in the comparisons listed in Table E-5. If there is a case where one or two criteria do not agree and the analyst thinks the phenomenon is unimportant in that particular instance, the analyst should mark that criterion with a footnote and explain why it should be ignored in that particular instance. For example, in test 3-10, criterion M4 in Table E-5 asks whether one or more vehicle tires failed or de-beaded. Say that there was a flat tire in the crash test but not in the numerical solution. If the analyst believes that the flat tire did not play a significant role in the dynamics of the crash (e.g., maybe the tire became flat during redirection after losing contact with the barrier), this may be explained in the footnote. Another example might be that the exit angle, criterion M2 in Table E-5c, did not agree because a suspension component failed in the experiment but not in the analysis, leading to different dynamics after contact. In essence, this is a judgment call on the part of the analyst about how important the phenomenon is and also whether it is a reasonable physical result. For example, if 10 full-scale crash tests were performed, one may well observe a few where the suspension did not fail and the trajectory was more similar to the analysis solution. Of course, the decision maker reviewing this information may or may not agree with the analyst's assessment, but by footnoting it in the validation report the issue has been appropriately identified. In any case, the agreement should be indicated as "no" with an explanatory footnote, and the comparison can be judged as valid "with exceptions" as shown at the bottom of Table E-5c.

If all the criteria in Tables E-1, E-2 or E-3, and E-5a through E-5c are satisfied, the model can be considered "validated" and the appropriate check box can be marked on the cover sheet. This process should be repeated for each of the baseline tests. Four detailed examples with commentary are provided in Chapter 6 to illustrate the creation of all the PIRTs and documents needed for this step. Chapter 6 explains the process of developing the reports, and the actual completed reports are included in Appendix C. Blank forms for all the reports are included in Appendix E.
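The relative/absolute acceptance rule for numerical phenomena described above reduces to a short check. The sketch below applies it to the two worked examples in this section; the 20 percent factor and the 20 g ridedown limit come from the discussion above, while the function name and the 12 m/s occupant impact velocity limit used in the first call are illustrative assumptions.

    # Acceptance check for a numerical (non-binary) phenomenon comparison: pass if the
    # relative difference is under 20 percent OR the absolute difference is under
    # 20 percent of the Report 350/MASH acceptance value for that criterion.
    def numerical_comparison_acceptable(test_value, analysis_value, criterion_limit,
                                        tolerance=0.20):
        absolute_diff = abs(analysis_value - test_value)
        relative_diff = absolute_diff / abs(test_value)
        return relative_diff < tolerance or absolute_diff < tolerance * criterion_limit

    # Lateral occupant impact velocity: 9 m/s (test) vs. 10.5 m/s (analysis).
    print(numerical_comparison_acceptable(9.0, 10.5, 12.0))  # relative difference < 20% -> True
    # Longitudinal ridedown acceleration: 3 g (test) vs. 4 g (analysis), limit 20 g.
    print(numerical_comparison_acceptable(3.0, 4.0, 20.0))   # absolute difference 1 g < 4 g -> True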

Predict the Performance of the Incremental Improvement

Once the baseline model has been validated for all the baseline tests that are available, the model can be modified to explore incremental design changes. For example, the guardrail height might be changed or the connection type modified in an attempt to improve the guardrail performance. Generally this would involve two crash tests in Report 350 or MASH, one with the small car and another with the pickup truck. The simulations of both baseline tests (i.e., the small car and the pickup truck) should be validated and documented in a validation report as described in the last section.

Once the modifications to the model have been made, a simulation of the new, untested alternative is performed to predict the outcome of a crash test of the improved design. The improved design is evaluated based on the appropriate crash test evaluation criteria (e.g., Report 350, MASH or EN 1317). If the performance is satisfactory according to the crash testing guidelines, the design can be considered acceptable and the results can be presented to decision makers for acceptance. If the design does not perform satisfactorily, it should be modified until it results in acceptable performance.

This step extrapolates the results of the validated baseline computer model to predict the performance of an untested configuration. There is no full-scale test corresponding to the extrapolated computational result, so the presumption is that a model that correctly replicates the results of the baseline test should be able to predict the results of a new test with similar but slightly modified hardware. The analyst should then document the results of the numerical simulation in a simulation report that is structured much like a traditional crash test report. The verification and validation report and PIRTs developed in Step 3 should be included as appendices to the simulation report of the incrementally improved hardware, since it is the comparison to the baseline test that provides confidence in the untested result. This simulation report is the result of completing this fourth step in the process.

Provide Documentation to Decision Makers

The final step in the process is to provide the materials necessary for decision makers to make informed decisions about accepting the incremental improvement to the roadside hardware. The packet of material to be delivered to the decision makers is much the same as would be provided in the case of a roadside device developed entirely using crash tests, except that some additional material is provided showing that the method used to evaluate the incremental improvement is valid. Once the incremental design is complete, the analyst or inventor should provide the following materials to the decision makers:

1. A computer simulation report describing the computer simulation of the incremental improvement and providing a crash test assessment according to the appropriate crash test specification. This report was the product of step four and is much like a crash test report except that the results are based on a computer simulation rather than a full-scale crash test.

2. A verification and validation report, similar to the examples provided in Chapter 6 and Appendix C, for each of the baseline tests, showing a comparison between a similar physical crash test and a computational model of the baseline roadside hardware device. These validation reports were the result of completing step three in the process. Decision makers will review these reports to satisfy themselves that the methods used to assess the incremental improvement are valid.

3. A vehicle PIRT similar to the examples provided in Chapter 6 and Appendices C7-C8. This document will provide evidence that the vehicle model is valid for its use in assessing the incremental improvement. A vehicle PIRT, which is one of the products produced in step two, should be provided for each type of vehicle used in the baseline test comparisons.

4. A roadside hardware PIRT similar to the examples provided in Chapter 6 and Appendix C6. Like the vehicle PIRT, this document will provide evidence that the roadside hardware model is valid for its intended use in assessing the incremental improvement.

Like any acceptance decision, be it based on physical crash tests or computations, decision makers may request additional information or documentation from the analyst to satisfy themselves that the incremental improvement is indeed acceptable and the methods used to assess it are valid. Providing the information listed above will not guarantee a positive acceptance, but it will provide the minimum documentation required for a fair and impartial assessment of the acceptability of the incremental improvement.

IMPLEMENTATION

This report can serve a role in assessing roadside safety hardware performance: it is the first attempt to standardize the evaluation of numerical analyses in roadside safety, and it explains how designers and analysts should perform V&V assessments and present those results to the appropriate decision makers. Ideally, the crash test assessment procedure and the V&V assessment procedure should parallel each other as much as possible. This project was focused primarily on making decisions on incremental hardware improvements. As discussed in the procedures section of the final report, a designer or analyst will be expected to provide the relevant decision-making authority (i.e., the FHWA or a State DOT) with the following information:

1. A V&V report that documents the comparison between a full-scale crash test and the finite element analysis of that benchmark crash test. The benchmark crash test should involve a successful crash test of hardware that is the most similar to the retrofitted hardware. Since the goal is acceptance of design modifications to crash-tested hardware, failed crash tests should be avoided.

2. A hardware PIRT for the benchmark case hardware.

3. A vehicle PIRT for the vehicle used in the benchmark crash test.

4. A simulation report documenting the results of the analysis of the extrapolated, untested design.

The V&V report, hardware PIRT and vehicle PIRT of the benchmark case should give the decision maker enough information to be confident that the extrapolation to the new situation is reasonable, and the simulation report of the extrapolated design provides the details.

If the decision maker is satisfied with the documentation and the results, an acceptance letter can be written in exactly the same way as is currently done for crash-tested hardware.

Another important implementation detail is providing access to the RSVVP code, user's manual, benchmark models and benchmark case documentation. All these materials should be available to analysts on the internet so that the RSVVP program can be obtained and used by roadside safety designers and analysts. Providing the actual benchmark models, as well as the PIRTs and V&V reports associated with them, will give users learning the V&V process actual examples that they can re-run and compare to the information published in this report. The FHWA-NHTSA National Crash Analysis Center (NCAC) website would be one logical place to archive this information. The NCAC already maintains a library of roadside safety and vehicle models as well as a database of crash test data, so the addition of the V&V materials would be a natural extension to the materials it currently makes available to the roadside safety community.
