In-Service Performance Evaluation of Guardrail End Treatments (2018)

Suggested Citation: "3 Nationally Coordinated Evaluation Research." National Academies of Sciences, Engineering, and Medicine. 2018. In-Service Performance Evaluation of Guardrail End Treatments. Washington, DC: The National Academies Press. doi: 10.17226/24799.

3 Nationally Coordinated Evaluation Research

In Chapter 1 it was noted that defining the objectives of the evaluation in terms of the intended applications of the evaluation results is the necessary first step in developing an evaluation method. The evaluation can then be designed to produce the results required for those applications. No single design for an evaluation study will be equally suitable for all objectives. Chapter 1 also observed that an evaluation program, whether at the national or state level, requires the support of an administrative and planning structure that defines objectives, the scope of evaluation, and responsibilities for evaluation and that oversees the application of results.

Chapter 1 identified possibly useful applications for the results of evaluations of guardrail end treatments and other roadside safety devices. These applications are in two categories: objectives that would be suitable for a nationally coordinated evaluation research program and objectives of a routine state highway agency in-service evaluation activity as a part of the agency's safety management and asset management programs.

This chapter outlines methods for conducting evaluations that would be appropriate within a nationally coordinated evaluation research program. The first section below identifies three candidate objectives for such a program. The second section and the annex to this chapter summarize the data collection procedures for the evaluation of roadside devices developed in recent activities of the American Association of State Highway and Transportation Officials (AASHTO) and Federal Highway Administration (FHWA) and in earlier projects of the National Cooperative Highway Research Program (NCHRP) and state highway agencies. These procedures are the basis of the evaluation methods proposed in this chapter. The subsequent three sections outline methods for each of the three candidate objectives. The methods proposed would be applicable to evaluation of guardrail end treatments and could be adapted for evaluations of other roadside safety features. The final section considers arrangements for planning and organizing evaluations.

EVALUATION OBJECTIVES

In this chapter, methods are outlined for evaluations with three applications:

• Validating and refining crash test procedures to improve the reliability of testing as an indicator of the performance of the device in use. Crash testing will remain the primary means of evaluating the safety performance of roadside devices. In-service evaluation to validate testing also would provide data to support improvement of end treatment designs.

• Demonstrating methods that state highway agencies could use for routine in-service evaluation to provide interested agencies with a method and also to test the usefulness of in-service evaluation in highway management. Among other possible benefits, state-level evaluation programs, even if not implemented in all states, could constitute the early warning system necessary to avoid or respond to future specific concerns about the safety of particular roadside device types, as in the ET-Plus case.

• Evaluating how design, quality of installation and maintenance, and deterioration in use affect the performance of roadside devices to help states define cost-effective practices for selection, quality control, and inspection of devices. Results of such an evaluation also could support an estimate of the benefits and costs of a program that replaces installed end treatments or other roadside safety devices with devices that perform better.

The committee's reviews of the practices of state highway agencies, past statements of evaluation needs, and current and past federal and state evaluation activities suggest that a national program with at least these objectives could respond to highway agencies' expressed needs and improve road safety and the cost-effectiveness of roadside safety expenditures. If all three evaluations were conducted, they could be carried out jointly with shared resources; however, any one of them could be conducted alone.

SOURCES FOR EVALUATION PROCEDURES

The experience of past evaluations described in Chapter 2 indicates that the major difficulties in conducting a prospective in-service evaluation of a roadside safety feature include the following:

• Obtaining notification of relevant crashes quickly enough that data on the performance of the device and other circumstances of the crash may be obtained at the site;

• Obtaining information on crashes not reported to police, which is needed if the complete severity distribution of crashes is the measure of performance;

• Obtaining a sufficiently large sample of crashes to allow for reliable estimation of the effects of the device type and other factors on severity (or on crash frequency, in the case of an evaluation of a road feature intended to reduce crash risk, such as rumble strips); and

• Ensuring the consistency and reliability of data, given that data collection may involve multiple participants (police, maintenance workers, crash investigators) over a long period, sometimes in multiple jurisdictions. Enlisting the cooperation of multiple government agencies and defining, teaching, and overseeing consistent data collection procedures have been challenging in past studies.

Approaches to overcoming these difficulties of in-service evaluation have been developed and demonstrated in several past activities, including the following:

• A project described in NCHRP Report 490 (Ray et al. 2003) and trials of the procedure proposed in that report;

• 2015 AASHTO-FHWA Task Force inspections of end treatment installations and investigations of end treatment crashes (AASHTO-FHWA Measurement Task Force 2015, n.d.; Joint AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis 2015);

• FHWA's recent pilot end treatment in-service evaluation (FHWA n.d.); and

• Procedures developed for the Texas Department of Transportation (DOT) (van Schalkwyk et al. 2004).

The procedures of these past studies are summarized in the annex to this chapter. The procedures need testing and refinement through application, and the procedure chosen for a particular evaluation must be appropriate for the objective of the evaluation, but the past activities provide a groundwork for evaluation methodology and are applicable to the evaluations proposed in this chapter.

VALIDATING CRASH TEST PROCEDURES

Objective and Applications

The 2015 AASHTO-FHWA Task Force investigation demonstrated that end treatment crashes occur that have potentially severe consequences and involve crash dynamics that are not represented in current and past crash testing protocols. The Task Force concluded that

    The review of guardrail terminal performance based upon the limited number of crashes confirms . . . [that] there are real-world impact conditions that vary widely from the crash test matrices as related to vehicle type and sizes, first point of vehicle impact, vehicle non-tracking, and vehicle speed. Also, there are different installation and maintenance practices in place that can affect safety performance. . . . In addition, roadside features such as ditches, curbing, uneven terrain, and steep slopes in the vicinity of the terminal factor into the ability to mitigate the severity of the outcome of a guardrail terminal crash event.

    NCHRP Report 350 [Ross et al. 1993] crash test matrices do not specifically address the performance limitations the AASHTO-FHWA Task Force identified. It appears that side impacts, head-on/shallow-angle high-energy impacts, and head-on/shallow-angle corner impacts may lead to safety performance issues. However, the data analyzed did not allow for an assessment of how frequently these situations occur (i.e., they may be limited or they may appear on a regular basis) in the field. The shallow angle impact test condition is addressed in the MASH crash test criteria, but side impacts and front corner impacts are not specifically addressed in MASH. This points to the need to conduct in-service performance evaluations on roadside safety hardware including guardrail terminals; these evaluations are critical to determine whether crash-tested hardware [has] performance limitations that are not detected by the crash testing process and should be used to amend the crash test criteria in subsequent updates. (Joint AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis 2015, 116–118)

The in-service performance data needed to improve testing, together with the results of improved testing, could guide refinements in roadside safety device designs to improve performance.

Data Collection

The basic requirement for an evaluation to validate or improve crash testing of a roadside safety device is a database of crashes involving the device. The database should have the following properties:

• The sample of crashes is representative of the population of crashes;

• The sample is large enough that rare events can be observed and the frequency of crash characteristics can be estimated; and

• Information about the crash scenario, crash site environment, and pre- and postcrash conditions of the roadside device is sufficient to compare the circumstances of the crash with the characteristics of tests and the outcomes with the outcomes of test crashes.

The data can be expected to include crashes in which the device provided vehicle occupants protection from serious injury and crashes in which it did not, and crash conditions that are within and outside the envelope of tested conditions. All four types of cases will provide relevant information.
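To make these data requirements concrete, the sketch below shows one possible shape for a crash case record. Every field name and category is an illustrative assumption, not the coding scheme of the FHWA pilot or the Task Force.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrashCaseRecord:
    """Illustrative record for one end treatment crash (hypothetical fields)."""
    case_id: str
    device_type: str                    # manufacturer/model of end treatment
    # Impact conditions, for comparison with crash test matrices
    impact_speed_mph: Optional[float]   # estimated; may be unknown
    impact_angle_deg: Optional[float]
    impact_point: str                   # e.g., "end-on", "corner", "side"
    vehicle_tracking: bool              # False = sliding or yawing at impact
    vehicle_type: str                   # e.g., "passenger car", "pickup"
    # Site environment
    flat_graded_platform: bool          # grading in advance of the terminal
    behind_curb: bool
    # Device condition and outcome
    installed_per_drawings: bool
    preexisting_damage: bool
    max_injury_severity: str            # KABCO code: "K","A","B","C","O"
    police_reported: bool
```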

The procedures of the FHWA pilot in-service performance evaluation of guardrail end treatments were designed with the improvement of crash testing as an objective (FHWA n.d.). These procedures, including the methods of identifying crashes and coding information, should be the model for data collection in future evaluation research for this purpose. The results of the FHWA pilot will reveal whether the data elements collected are adequate for validation and improvement of testing and will indicate needed changes in data collection.

The results of the FHWA pilot also will indicate the necessary scale of future evaluations in terms of the extent of road networks and duration of data collection required to obtain a sample of crashes that reveals the range of important crash circumstances and outcomes. If results appear to be similar across the four states in the pilot, then conducting future evaluations within a limited geographic area may be feasible. However, if results vary substantially from state to state (i.e., if the states differ substantially in patterns of crash circumstances or in end treatment performance), then broader geographic coverage may be necessary.

Data collection should avoid certain potential pitfalls of the approach used by the AASHTO-FHWA Task Force:

• Classifying certain crash conditions as constituting performance limitations involves judgment and could produce misleading results if definitions are not applied consistently. If analysis involves classifying cases, the classifications should have objective definitions in terms of measurements of crash characteristics and device response that are recorded in the crash database.

• Case samples should be chosen by statistically valid methods. Judgmental selection introduces a bias risk. The sample could be 100 percent of all crashes meeting certain objective criteria in a specified time period and geographic area, criteria that would eliminate the difficulties of sample selection.

• Similarly, editing the database to exclude cases with incomplete information can introduce bias if the editing criteria are not strictly objective or if the excluded cases differ systematically from included cases in some relevant characteristic.

Analysis

The analysis for validating crash test procedures would have two components:

• Comparison of actual crash circumstances with circumstances in the tests (with respect to vehicle speeds, orientations, and trajectories; vehicle dimensions; and roadside device dimensions and installation features) and

• Comparison of the outcomes of actual crashes that match test circumstances with the test outcomes to determine whether tests predict real-world outcomes under similar conditions.

The AASHTO-FHWA Task Force investigation provides an example of the first kind of analysis. The Task Force defined two categories of circumstances that, in its judgment, posed a heightened risk that the guardrail end treatment would not protect vehicle occupants from serious injury: impact conditions and installation conditions. The following impact conditions were identified as performance limitations (Joint AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis 2015, 34):

• Side impacts,

• Head-on, shallow-angle corner impacts (i.e., head-on impacts near the corner of the vehicle in the headlight region), and

• Head-on, shallow-angle high-energy impacts.

The installation conditions identified as performance limitations were as follows (Joint AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis 2015, 111):

• Hardware installation/maintenance/repair [e.g., installation not complying with manufacturer's drawings] . . .

• Grading (such as lack of a relatively flat graded platform in advance of, and adjacent to, the terminal) . . . [and]

• Placement (such as terminal located behind curb) [i.e., placement that does not conform to accepted practice].

The AASHTO-FHWA Task Force report documents these performance limitations with descriptions of the crash cases in the Task Force's database that illustrate each limitation.

The Task Force compared the impact conditions identified as performance limitations with impact conditions in the standard crash tests and noted (as quoted above) that some impact conditions are not represented in the tests. Moreover, the tests do not provide information on how the observed installation errors affect performance of the device. The Task Force report does not include an analysis of whether end treatments performed as predicted by the crash tests in crashes that were within the parameters of the tests. The Task Force had only historical crash records of varying detail available; a prospective evaluation planned for the purpose of validating crash tests would have much more suitable data.

A fully useful evaluation for the purpose of validating crash test procedures would estimate the frequency of each defined category of crash conditions (e.g., the conditions of the Task Force's performance limitations) and the increase in injury risk associated with each condition. (As its report acknowledges, the Task Force's crash database was not adequate to support estimates of frequency.) These quantitative estimates would guide decisions on whether to modify crash test procedures and performance standards.

The Manual for Assessing Safety Hardware (MASH) (AASHTO 2009, 9) states that the ranges of vehicle speed and impact angle that define the impact conditions in its test matrices are based on two information sources: a 1986 analysis of impact conditions in run-off-road crashes (Mak et al. 1986) and an NCHRP project on the design of roadside slopes and clear distances that was in progress at the time the MASH was published (NCHRP 2016). Data produced in the NCHRP project cited in the MASH were subject to systematic coding errors, which necessitated a reanalysis that was to be completed in 2017.

The 1986 study estimated distributions of impact speed and angle by highway functional class for run-off-road crashes. The distributions were derived from preexisting data on 596 collisions with light poles, sign poles, bridge rails, and bridge approach guardrails in selected counties in Texas and Kentucky; these data were collected from 1975 to 1978 for the U.S. DOT (Mak et al. 1986, 45). This study also provides a second example (along with the comparisons in the AASHTO-FHWA Task Force report described above) of a comparison of the circumstances of actual crashes with crash test impact conditions. The authors observe that

    The full-scale crash test matrix for performance evaluation of roadside safety appurtenances has evolved over the years . . . with little consideration given to real-world impact conditions. It would be interesting to see how the full-scale crash test matrix currently in use would compare with real-world impact conditions. (Mak et al. 1986, 50)

NCHRP Report 230 (Michie 1981) was the current crash test manual at the time. The authors compared the impact speeds and angles of the prescribed crash tests with the distribution of impact conditions observed in the real-world crashes, concluding that

    When both impact speed and angle criteria are taken into consideration, the percentage of accidents that exceed both criteria is actually quite small. For instance, even for freeways, only 3 percent of the accidents have impact speeds of more than 60 mph and impact angles greater than 25 degrees, and 9 percent of the accidents have impact speeds of more than 60 mph and impact angles greater than 15 degrees. This suggests that the current full-scale crash test conditions for longitudinal barriers are actually rather stringent. (Mak et al. 1986, 51)

Because road design, vehicle characteristics, and traffic conditions have changed greatly since the 1970s, it cannot be assumed that this conclusion remains valid. A later study (Mak et al. 2010, 32–35) compared impact condition distributions from the 1986 study with distributions derived from 2000 and 2001 crash records of the National Highway Traffic Safety Administration (NHTSA) National Automotive Sampling System–Crashworthiness Data System (NASS-CDS).

The 1986 study provides a basic model for any future crash test validation study, although it was more limited than the validation study outlined above. The crash data related to passenger vehicles only; therefore, the 1986 study did not derive impact condition distributions for specific vehicle types and could not compare the large-vehicle crash test matrix with real-world impact conditions. It did not derive distributions for vehicle orientation at impact. It was a retrospective analysis of data collected for purposes other than validation of crash testing; a prospective validation study could obtain data of greater pertinence. The most important limitation is that the 1986 study did not attempt to compare outcomes of real-world crashes with crash test outcomes. To validate a crash test procedure, it is necessary to show not only that the tests correspond to real-world impact conditions but also that crashes that appear to have a low risk of casualties in testing also are low risk in reality.
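The exceedance comparison made in the 1986 study reduces to a simple computation. A minimal sketch follows, with invented impact records (estimated speed in miles per hour, angle in degrees); only the method, not the data, reflects Mak et al. (1986).

```python
# Sketch of the exceedance comparison described above: what share of
# real-world impacts falls outside a crash test's speed/angle envelope?
# The records here are invented for illustration.
crashes = [(58, 12), (72, 27), (45, 8), (63, 19), (66, 31), (50, 22)]

def share_exceeding(records, speed_mph, angle_deg):
    """Fraction of crashes exceeding BOTH the speed and angle criteria."""
    hits = sum(1 for v, a in records if v > speed_mph and a > angle_deg)
    return hits / len(records)

# Compare against the criteria discussed in Mak et al. (1986):
print(share_exceeding(crashes, 60, 25))   # 2 of 6 -> 0.333 in this toy set
print(share_exceeding(crashes, 60, 15))   # 3 of 6 -> 0.5 in this toy set
```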

Information on crash outcomes also is necessary to determine the range of impact conditions that crash tests should cover. The 1986 study concluded that the crash tests were stringent because only small percentages of crashes exceeded both the test criteria. Similarly, the MASH states that the criterion for selecting the maximum speed and impact angle in its tests was that these "approximate the 85th percentile of the respective real-world impact conditions" (AASHTO 2009, 9). A basis for the 85th percentile criterion is not stated. However, the significance of the real-world crashes that fall outside the range of test impact conditions depends not only on their frequency but on the severity of their consequences. If the validation study found that a substantial share of all casualties in collisions with roadside devices occurred in crashes in which the impact conditions were outside the 85th percentile, this finding might constitute grounds for expanding the range of test conditions.

Incorporating Simulation in Device Testing

Computer simulation models of vehicle collisions with roadside safety devices can help in understanding observed in-service performance and improving device design and selection. The AASHTO-FHWA Task Force report illustrates the great variety of crash scenarios and installation and environmental conditions that occur on roads and that affect injury risk. Simulation models provide a method to supplement crash testing to guide the development of devices with performance optimized for the range of actual conditions. Models can be applied to

• Evaluate crash circumstances that would be impractical to test physically. Performance can be evaluated with impact speed, impact angles, and vehicle size varying over the full relevant ranges.

• Evaluate the effects of site details such as end treatment placement, slopes, shoulder dimensions, curbs, and soil conditions on vehicle impact characteristics and crash outcome.

• Provide guidance on selecting the best device types and installation features for particular locations. The committee's interviews with state highway agencies found that agencies today follow various rules of thumb for selecting the best end treatment type for locations (Heimbecker and Lohrey 2016, 10–11). Simulation could help agencies to verify or revise these practices.

• Guide the design of devices with improved performance.

As an adjunct to in-service evaluation, possible applications of simulations include the following:

• If in-service data show that a crash scenario not similar to any of the standard crash tests occurs with some frequency but too few cases of the scenario are observed to reliably estimate the likelihood of a severe outcome, simulation can be used to predict severity.

• If poor performance of a roadside device is observed in an in-service evaluation, simulation can help to determine the contributions of device design and site characteristics (e.g., slopes, ditches, soils, installation errors) to crash outcomes.

• In-service evaluation findings can support the specification and validation of simulation models of vehicle collisions with roadside safety devices. Models today are validated through a comparison of model predictions with crash test results. Comparison with vehicle and device behavior in actual crashes would further strengthen confidence in a model.

Simulation already plays an important role in the development of roadside devices and has been used to improve the performance of a variety of devices. However, simulation has seen limited usage in support of roadside device testing and evaluation. The MASH acknowledges simulation as a useful tool in the development of roadside devices but notes that required crash tests cannot be replaced by simulation modeling (AASHTO 2009, 205). Box 3-1 presents an example of the application of simulation models in the evaluation of roadside safety devices.

Box 3-1 Simulation Modeling of Collisions with Roadside Safety Devices

Simulation Models

Crash simulation models use finite element models to represent vehicles and other objects involved in crashes, such as roadside safety devices. Finite element analysis is a modeling method used to predict the response of objects to external forces. The objects are represented as a mesh of nodes, at which are connected elements with defined mechanical properties. Increases in computing power have allowed for the accurate simulation of complex objects such as motor vehicles and roadside devices in complex situations such as a collision. The U.S. DOT and the states have supported the development of finite element models for safety assessments of highway design features. Finite element models are used routinely by manufacturers in designing roadside devices that will pass the MASH crash tests (Marzougui et al. 2014, 1–4).

Example Application: Retrofit Modifications to Guardrail

The U.S. DOT sponsored research to use simulation modeling to identify retrofit modifications to installed barriers that would allow them to pass MASH crash tests (Marzougui et al. 2014). Crash testing had shown that two device designs that passed the previous standard crash tests, the G9 Thrie beam guardrail and the G4(1S) median barrier, could not pass all the MASH tests. In one of the prescribed MASH tests, a collision of a 2,270-kilogram vehicle traveling at 100 kilometers per hour with the barrier at an angle of 25°, the vehicle rolled over after colliding with the Thrie beam guardrail and overrode the G4 median barrier.

A finite element model was available for the 2,270-kilogram vehicle (equivalent to a large pickup truck). Models of the barriers were developed by the U.S. DOT's National Crash Analysis Center for the study. The system of two models was validated by detailed comparison of the crash test results with simulation predictions for the two barrier types.

Retrofit improvement options were identified by experts in barrier design. Options for the Thrie beam guardrail included, for example, modifications to the element that connects the rail to the post and raising the height of the rail. For most of the retrofit options in simulations of crashes with the modified Thrie beam guardrail, the behavior of the vehicle in the crash was similar to that with the original design. However, one of the modifications to the guardrail–post connection decreased the roll of the vehicle during the crash so that the rollover that caused the original guardrail design to fail the crash test was avoided (see Figure 1). The authors conclude that following this promising result, the next steps should be first simulating the performance of the modified guardrail design in the other MASH crash tests and then crash-testing the modified design.

[FIGURE 1 Simulation comparison of vehicle collision with original guardrail and retrofit modified guardrail; panels show the original and modified designs. SOURCE: Courtesy of D. Marzougui, George Mason University; Marzougui et al. 2014, 9.]

EVALUATION METHODS FOR ROUTINE HIGHWAY AGENCY USE

Objectives and Applications

Chapter 1 observed that a primary reason highway agencies generally have not followed practices recommended by the MASH and by other authorities for regular in-service evaluation of road safety features may be that agencies do not see clear evidence of the benefits of the practice. In a nationally coordinated evaluation research program, the purposes of a demonstration of routine evaluation methods would be first to test practical methods and second to show how agencies can use in-service evaluations to improve safety and cost-effectiveness in their highway programs. The demonstration project would reveal the forms of evaluation and level of effort that would provide the greatest practical benefit to highway agencies. The results could be used to revise the guidance on in-service evaluation in the MASH (see Chapter 1, Box 1-3) and for preparation of manuals and training materials for use by highway agencies preparing to conduct in-service evaluations.

Method

The project would demonstrate the two components of the framework for state-level in-service evaluation proposed in the MASH (see Chapter 1, Box 1-2):

• A new feature evaluation to collect information on a limited number of devices of a new design installed as a trial and

• A continuous monitoring system to record and periodically analyze the frequency and severity of crashes involving specified roadside features, with consideration of the effects of road characteristics and traffic characteristics on crash frequency and severity.

Highway agencies would be recruited to participate voluntarily. The nationally coordinated program would provide technical support and possibly funding assistance. Participating agencies would independently test methods with their own personnel and resources and assess the utility of results. Participants would be agencies with maintenance management systems and safety management systems.
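As one sketch of what the periodic analysis in the continuous monitoring component could look like, the snippet below tallies crash counts and the severe-outcome share by device type. The log entries, device labels, and use of KABCO severity codes are illustrative assumptions, not a prescribed reporting format.

```python
from collections import Counter, defaultdict

# Hypothetical crash log entries: (device type, KABCO severity code).
crash_log = [
    ("Type A", "O"), ("Type A", "B"), ("Type B", "K"),
    ("Type A", "O"), ("Type B", "C"), ("Type B", "A"),
]

by_device = defaultdict(Counter)
for device, severity in crash_log:
    by_device[device][severity] += 1

for device, counts in sorted(by_device.items()):
    total = sum(counts.values())
    severe = counts["K"] + counts["A"]   # fatal + incapacitating injury
    print(f"{device}: {total} crashes, severe share = {severe / total:.0%}")
```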

Each agency would prepare a plan for data collection and analysis and for application of evaluation results in maintenance, construction, and safety management decisions. Data collection, database structures, and analysis procedures would be integrated as far as possible with the existing maintenance management and safety management systems. For example, if maintenance staff now must enter a record of a damaged barrier in a database, the data elements in the record would be augmented to indicate the device type involved and characteristics of the damage. Standard photographs could be added to the record, a procedure that several states are demonstrating in the FHWA pilot in-service evaluation.

Chapter 4 of this report describes alternatives for procedures for state-level data collection and analysis; these alternatives were derived from the methods of NCHRP Report 490 (Ray et al. 2003), the FHWA pilot evaluation, and procedures developed for individual states. The highway agencies participating in the nationally coordinated demonstration would adopt or modify such procedures.

Documenting the highway agency's costs for data collection and evaluation would be an essential component of the demonstration. Agency staff time requirements would be recorded by category of staff, and impacts on specific agency activities (e.g., the productivity of maintenance crews repairing damaged devices) would be observed.

Analysis

After the highway agency demonstrations had been in full operation for a sufficient period to begin producing useful results (possibly 2 to 3 years), the benefits and costs of the evaluation programs to the states would be assessed. The main costs would be added time demands on agency personnel. Benefits would be characterized in terms of the impact that information from the evaluations had on specific agency decisions and practices, including the scheduling of maintenance, specifications regarding roadside devices in construction plans, and programming of safety enhancements. Opinions of agency managers would be solicited regarding the improvement in the overall performance of their highway programs that the in-service evaluations provided.

IMPACT OF DESIGN, INSTALLATION, AND MAINTENANCE PRACTICES ON PERFORMANCE

Objective and Applications

The objective of this nationally coordinated evaluation research project would be to provide a comprehensive understanding of the effects of device design, installation, maintenance, and site characteristics on the performance of guardrail end treatments and other roadside safety devices, so as to provide guidance to highway agencies on the use of these devices. The results would support highway agency decisions in three areas:

• Selection of device types to be installed at particular locations. Certain device types may have consistently better safety performance than others, or the device type that provides the greatest protection to vehicle occupants in a collision may depend on the characteristics of the installation site. As noted above, the state interviews conducted for the committee showed that highway agency practices vary with respect to matching device types to site characteristics. NCHRP Report 490 (Ray et al. 2003) cites the lack of specific guidance on the selection of roadside safety devices in the AASHTO Roadside Design Guide (AASHTO 2011) as a motivation for developing in-service evaluation methods. The definition of warrants for the use of safety devices—that is, criteria for determining locations where guardrail or other roadside safety devices should be installed and should not be installed—is a related safety design problem that in-service evaluation research could resolve. If a safety device is installed at an inappropriate location, crashes at the location may conceivably be more severe than if the device were not present.

• Monitoring and maintenance of the agency's inventory of roadside safety devices. The evaluation results would indicate maintenance priorities for reducing injury risk. The AASHTO-FHWA Task Force identified improper installation and maintenance as performance limitations of end treatments but could not quantify the frequency of these conditions or their effect on injury risk.

• Determining needs and priorities for replacement of roadside safety devices. For example, the evaluation might help highway agencies to identify circumstances in which older types perform satisfactorily and circumstances in which improved performance by newer designs would justify replacing the older types. The state interviews found that at least one state is undertaking such a program of selective replacement of roadside safety devices (Heimbecker and Lohrey 2016, 24).

The project would be consistent with the AASHTO-FHWA Task Force recommendation for comprehensive in-service performance evaluations of guardrail end treatments that would compare the in-service performance of end treatment types and determine the frequency of occurrence of performance limitations (Joint AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis 2015, 5).

Methods

The evaluation would be prospective and comparative. Its procedures would be an extension of those developed in NCHRP Report 490 and the FHWA pilot in-service evaluation for recruiting volunteer highway agency participants, arranging prompt and reliable notification of crashes, and collecting postcrash data. Special attention would be required to obtain a statistically valid sample of crashes.

Two basic study design issues will be the measures of performance used and the method of controlling for confounding factors. Chapter 2 identified performance measures for roadside features used in past evaluations:

• The full distribution of outcome severity, that is, the fractions of all crashes (police-reported and not reported) that result in property damage only, minor injuries only, incapacitating injuries, and fatalities;

• The rate of severe crashes per vehicle passing the roadside feature; and

• The ratio of severe crashes to all police-reported crashes involving the feature or the distribution of severity in police-reported crashes.

Chapter 2 also identified the common methods of controlling for confounding factors:

• Estimating a multivariate model of crash severity,

• Use of crash modification factors as proposed in NCHRP Report 490 (Ray and Weir 2002, 80–83), and

• Use of a case-control study design.

Analysis

Several alternative models of injury risk and crash risk would be specified in the planning phase of the study, making use of the alternative performance measures and alternative methods of controlling for the factors expected to influence severe injury risk in a collision. Data requirements among the models will differ with respect to minimum sample size, the need for records of unreported crashes, and roadside device inventory data needs. Evaluation of the effect of installation and maintenance practices will require detailed data from postcrash inspections. An evaluation that considered only device type and location characteristics would not require postcrash device condition data.

The analysis would quantitatively estimate the simultaneous effects of the device type, installation and maintenance features, and site features on the distribution of crash severity. The suitability of the alternative risk models would be compared with respect to the credibility of the results and the feasibility of collecting the data required for estimating each model.
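To illustrate the first of the confounder-control methods listed above, a multivariate model of crash severity, the sketch below fits a binary logit of severe outcome on device type and two site covariates. The data are simulated and every variable name and effect size is an assumption, not a finding or the study's actual specification.

```python
import numpy as np
import statsmodels.api as sm

# Simulated crash-level data: one row per end treatment crash.
# severe: 1 if fatal or incapacitating outcome (K or A), else 0.
# device_b: 1 if device type B, 0 if type A (the comparison of interest).
# speed_limit, curve: site covariates standing in for confounders.
rng = np.random.default_rng(0)
n = 400
device_b = rng.integers(0, 2, n)
speed_limit = rng.choice([45, 55, 65, 70], n)
curve = rng.integers(0, 2, n)
true_logit = -3.0 + 0.5 * device_b + 0.04 * (speed_limit - 55) + 0.6 * curve
severe = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([device_b, speed_limit, curve]))
fit = sm.Logit(severe, X).fit(disp=False)
print(fit.summary(xname=["const", "device_b", "speed_limit", "curve"]))
# exp(coefficient on device_b) estimates the odds ratio of a severe
# outcome for type B versus type A, holding the site covariates fixed.
```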

PLANNING AND ORGANIZATION

The objectives of the three evaluation research projects outlined above are common concerns of the states and of the federal government. The states face common problems in managing their systems, and multistate or national programs of applied research or evaluation have proven to be an efficient way to attack these problems. Nationally coordinated evaluation research would allow pooling of resources and permit more efficient collection of the large samples of cases that are needed to detect rare events and to distinguish incremental differences in safety performance.

FHWA appropriately regards in-service performance evaluation as primarily a state function (FHWA n.d., 7), as the states are responsible for the operation and maintenance of their highway systems. However, the process of federal certification of particular types of roadside safety devices as eligible for reimbursement in the Federal-Aid Highway Program influences the states' selection of these devices and entails a federal responsibility for ensuring that the criteria for certification are justifiable.

An extension of the charge and term of the AASHTO-FHWA Task Force on Guardrail Terminal Crash Analysis would be one possible form of organization for the conduct of a national program of evaluation research. The Task Force performed well in its original role and effectively coordinated state and federal interests and resources. A second organizational alternative would be an AASHTO-led effort conducted through NCHRP.

The entity undertaking the national evaluation program should first establish a planning and administrative function to define the objectives of evaluations (i.e., how the results will be applied in the management of the highway system); plan the scope of evaluations (the features that are to be evaluated and the schedule); determine funding needs and schedules; recruit cooperation among the federal, state, and local agencies and offices that would be involved; and monitor the conduct of evaluations and the application of results. The organizational structure should provide for independent expert review of plans, research methods, and results. Experts would be needed in two areas: evaluation research methods and highway management.

Planning and organizing an in-service evaluation program devoted solely to guardrail end treatments, which are involved in only a small share of serious crashes, would be difficult to justify. A more cost-effective activity would be to identify in-service evaluation needs covering guardrails together with guardrail end treatments or all MASH devices.

As an adjunct to a nationally coordinated evaluation research program, there should be an effort to identify needs for improving the infrastructure of data systems, including crash records and roadside safety device inventories, which are the information basis of highway management. The data systems that would facilitate in-service evaluation of roadside devices are the same as the systems needed for comprehensive management purposes. Evaluation will be practical and useful only if it is integrated with the highway agency's safety management and asset management programs.

Sample Size and Study Scale Considerations

The cost of an evaluation will be determined primarily by the number of observations of collisions with the road safety devices being evaluated that are needed to measure the effects of interest with the desired level of precision. The level of precision judged acceptable will depend on the specific intended application of the evaluation's results. The sample size target (i.e., the number of crashes to be observed), together with the rate of collisions with the devices, will determine the study duration and extent of the road network for which data must be assembled. The cost of the evaluation will depend also on the level of detail sought in the record of each crash.

For the crash test validation study outlined above, the target sample size will depend on two factors:

• The number of categories by which crashes are to be classified for purposes of comparison with crash test outcomes and

• The required precision of the estimates of the frequency of each crash category and of the risk of a severe outcome in each category. The number of observations of crashes needed to estimate the risk of low-frequency severe crashes will be especially critical.

The crash categories of interest include the recommended crash tests defined by the MASH. For example, for testing of guardrail end treatments, the MASH Test Level 3 test matrix (i.e., test crashes at 62 miles per hour) calls for nine tests, including tests for three vehicle types, three impact angles, and two points of impact on the device (AASHTO 2009, 20). Other crash categories of interest include the three impact conditions and three installation conditions outside the MASH test boundaries that the AASHTO-FHWA Task Force characterized as performance limitations for guardrail end treatments.
The ideal test validation study would observe a sufficient number of crashes in each category to estimate the frequency of occurrence of each of the categories and to compare the in-service crash outcome (i.e., the severity distribution) in each category with the outcome of the corresponding crash test. To keep sample size requirements within the practical range, the test validation study probably would need to combine test matrix cells into a smaller number of categories for comparison with in-service outcomes.

For example, in an evaluation of guardrail end treatments, if a crash matching one of the impact conditions (defined by vehicle speed, vehicle size, angle of impact, and vehicle orientation with respect to the roadside device at impact) identified by the AASHTO-FHWA Task Force as a performance limitation occurred at random with an average frequency of once in every 50 end treatment crashes, then a 300-crash data set would be expected to provide six occurrences (and at least four with 85 percent probability) of the condition. This might be a minimally sufficient number to indicate whether the impact condition was associated with abnormally high average crash severity. However, if the frequency of the impact condition was one in every 100 crashes, twice the sample size would be required to obtain the same number of cases of the condition.

A possible means of obtaining a larger sample for estimating the frequency of each category of impact condition without expanding the term or geographic scope of the study would be to observe collisions with the guardrail face along the entire length of the guardrail as well as end treatment collisions. If an impact condition appeared to be independent of the location of the collision along the length of the guardrail, then it could be assumed that a similar distribution of conditions would occur at the end treatment. The larger data set obtained would be applicable for the validation of crash testing of guardrails as well as of end treatments.

In routine highway agency in-service evaluations, a primary analysis will be estimating the failure rate of devices to provide a warning if collisions with a particular device type carry an exceptionally high risk of severe outcomes. The required sample size for the evaluation is the number of collisions needed to be confident that the failure rate is no greater than some specified percentage of all crashes. The appropriate confidence level will depend on the decision that the highway agency will make on the basis of the evaluation results. If the decision is simply whether to proceed to an in-depth investigation of the device, then any indication of poor performance, even in a small sample of crashes, may suffice. If the decision is whether to drop the device from the agency's approved list or to replace the device on the road system, then a larger sample and a more confident estimate of the failure rate will be needed.

For example, if a highway agency wished to be assured that a device failed to perform mechanically as intended in no more than 1 percent of crashes, it would be necessary to observe several hundred crashes involving the device. If the device's true failure rate were 1 percent, then in a sample of 400 crashes, the probability of observing two or more failures would be 90 percent; therefore, if one or no failures were observed, the agency could be reasonably confident that the failure rate was less than 1 percent. The experience of the past in-service evaluations described in Chapter 2 suggests that obtaining several hundred observations of crashes with a single device type in one state would require an impractically long data collection period. However, if 40 collisions, a more feasible sample size, were observed and one or no failures were found, then the agency could conclude that the failure rate was less than 10 percent. (If the failure rate were 10 percent, then in 40 crashes, the probability of observing 2 or more failures would be 90 percent.)
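The binomial arithmetic behind these sample-size statements can be checked directly. The short script below, using only the standard library, reproduces the probabilities quoted in the text (exact values differ slightly from the rounded figures).

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Rare impact condition occurring once in every 50 crashes (p = 0.02):
# expected occurrences in 300 crashes, and P(at least 4 occurrences).
print(300 * 0.02)                # 6.0 expected cases
print(p_at_least(4, 300, 0.02))  # ~0.85

# Failure-rate examples from the text:
print(p_at_least(2, 400, 0.01))  # ~0.91: true rate 1%, 400 crashes
print(p_at_least(2, 40, 0.10))   # ~0.92: true rate 10%, 40 crashes
```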

For a comparative evaluation (to measure differences in performance among device types for specified site characteristics), the primary sample size question is the number of crashes required to observe a specified difference in the risks of fatality and injury from collisions with the alternative device types. Knowing the difference in risk would allow the agency to estimate the safety benefit of discontinuing use of a device or of replacing the less well-performing device with the better one.

As an example, in the comparative evaluation of end treatments in Missouri described in Chapter 2, Box 2-6, a sample of 156 end treatment crashes in a case-control study design was sufficient to observe a 50 percent difference between two device types in the risk that a crash will have a severe outcome. In the Washington State comparative evaluation described in Box 2-1, a sample of 30 crashes provided some evidence that the two device types compared were equivalent in performance, although with such a small sample, moderate performance differences that could have important systemwide safety implications could not be observed reliably.

To plan an evaluation, estimates of the frequency of collisions with the device being evaluated per vehicle passing the device are needed. Table 3-1 summarizes data from in-service evaluations of end treatments regarding the density of end treatments on highways and the rate of collisions with end treatments per vehicle passing. These few observations cannot be taken as representative but indicate the range of possible values. In the studies, densities are from 1 to 4 end treatments per mile on Interstates and 2 to 5 per mile on non-Interstate roads. Collision rates are extremely dispersed—from 0.16 to 5.0 collisions per 100 million vehicles passing—for collisions reported by maintenance personnel. Of the 50 combined collisions observed in the Wisconsin evaluation (Bischoff and Battaglia 2007) and Washington State evaluation (Igharo et al. 2004), 16 (32 percent) were crashes with injuries. Chapter 2 noted the observation on an Interstate segment in Iowa that the total number of end treatment collisions was 10 times greater than the number found in maintenance or police records (Ray and Hopp 2000, 47). The great variation in reported crash rates presumably is partially attributable to differences in the procedures the studies employed for identifying crashes, as well as to actual differences in crash rates depending on road and traffic characteristics.

TABLE 3-1 Guardrail End Treatment Collision Rates Reported in In-Service Evaluations

Road Class | Segment Length (miles) | Two-Way AADT (1,000) | Number of Devices | Devices per Mile | Study Period (years) | Source of Collision Report | Number of Collisions | Collisions per 10^8 Vehicles Passing

Ray and Hopp (2000)
Interstate | 22.2 | 32 | 24 | 1.1 | 1 | Police | 4 | 2.86
Interstate | 22.2 | 32 | 24 | 1.1 | 1 | Maintenance | 7 | 5.00
Interstate | 22.2 | 32 | 24 | 1.1 | 1 | Special inspection | 69 | 49.2

Bischoff and Battaglia (2007)
Interstate | 14.0 | 135 | 42 | 2.9 | 5 | Police | 20 | 0.396

Igharo et al. (2004)
Interstate | 111 | 96 | 412 | 3.7 | 1 | Maintenance | 17 | 0.236
Other NHS | 191 | 29 | 960 | 5.0 | 1 | Maintenance | 8 | 0.160
Non-NHS | 450 | 1.1 | 1,000 | 2.2 | 1 | Maintenance | 5 | 2.48

NOTES: The three collision rates shown from Ray and Hopp (2000) are reports from three sources for the same road and time period. Collision rates are calculated per passing of a vehicle on the side of the road where the end treatment is located (i.e., on the basis of one-way annual average daily traffic [AADT], assumed to be half of two-way AADT). NHS = National Highway System.

The data from these studies suggest the order of magnitude of the road mileage and observation periods that would be necessary to obtain a target number of crash observations for an evaluation of end treatments. For a network of roads with one-way annual average daily traffic (AADT) of 10,000 vehicles, a density of 3 devices per mile, and a collision rate of 0.4 collisions per 100 million vehicles passing (consistent with the ranges of values in Table 3-1), observation of 300 collisions in a 2-year study would require collection of data from a 3,400-mile road network: 300 ÷ [(3 devices per mile) × (10,000 vehicles per day) × (730 days) × (0.4 × 10^-8 collisions per vehicle passing)] = 3,425 miles. Such a data set might be assembled by recruiting five states to each monitor crashes on 700 miles of roads. For comparison, the study area for the Washington State DOT end treatment evaluation was 750 miles of roads in three contiguous maintenance districts (Igharo et al. 2004, 10).
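The network-size calculation above can be written out directly; the input values are the text's illustrative assumptions, not recommendations.

```python
# Required road-network mileage to observe a target number of collisions,
# using the worked example from the text (all inputs are illustrative).
target_crashes = 300
devices_per_mile = 3
one_way_aadt = 10_000    # vehicles per day passing each device
study_days = 730         # 2-year study
collision_rate = 0.4e-8  # collisions per vehicle passing a device

miles = target_crashes / (
    devices_per_mile * one_way_aadt * study_days * collision_rate
)
print(round(miles))      # ~3425 miles, e.g., five states x ~700 miles each
```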

The experience of past studies suggests that several hundred observations would be desirable to obtain useful results in either of two of the evaluations described above, crash test validation or development of a casualty risk model. The risk modeling study described in Chapter 2, Box 2-4 (Johnson and Gabler 2014) estimated a collision casualty risk model for guardrails and guardrail end treatments with a database of 711 vehicles involved in crashes. The 1986 study of impact conditions in run-off-road crashes described earlier in this chapter (Mak et al. 1986) derived distributions of impact conditions for comparison with the range of conditions in crash tests on the basis of a sample of 596 run-off-road crashes involving collisions with poles or bridge rails.

In the validation study, a sample on the order of 300 cases would be sufficient to reveal any large discrepancies between the distribution of actual crash impact conditions and the range of impact conditions in the test matrices or between actual crash outcomes and test outcomes. If a sample of this size exposed no such discrepancies, the result would reinforce confidence in the validity of crash test results for end treatments. This total sample size would not be sufficient to obtain samples of every important combination of crash scenario, device type, and environment, or to estimate the average severity of rare events. The research sponsors could be guided by the results of analysis of the initial round of data collected in deciding whether more data collection would be justified.

The documentation on the past in-service evaluations that the committee reviewed did not contain cost information. The experience of NHTSA's NASS-CDS provides an indication of costs. The cost of operating the NASS-CDS in 2013 was $3,600 per crash case (GAO 2015, 22). The NASS-CDS crash investigations are more extensive than what would be required for the validation study, but candidate cases would be sparser in the validation study. At the NASS-CDS rate, and allowing for inflation, data collection for 300 crashes would cost $1.2 million. Study design, training for state participants, and analysis (including exploratory data analysis toward developing a severity risk model) might increase the cost to a total of $3 million. Costs for evaluating MASH-tested devices in addition to end treatments (longitudinal barriers, crash cushions, and support structures) would increase less than proportionally to the number of device categories because impact condition distributions may not vary greatly across the device categories; crashes would sometimes involve multiple devices (e.g., guardrail and end treatment); and administrative, training, and analysis costs would be approximately fixed. As a comparison, NCHRP in 2013 allocated $650,000 to the project to conduct an in-service evaluation of guardrail end treatments described in Chapter 1 (NCHRP 2013). (The project was not carried out.)


