Analysis of Naturalistic Driving Study Data: Safer Glances, Driver Inattention, and Crash Risk (2014)

Appendix A. Actual and Potential Outcome Severity Scales

This appendix provides more details on the actual and potential outcome severity scales described in Chapter 8. It includes a description of the methods, as well as details on assumptions that were used in the construction of these methods. First, we describe the actual outcome severity scales in more detail, including issues that were identified related to the specific implementation on the available data. Second, we describe the potential severity scales. Assumptions behind both of the methods are also discussed.

A.1 Actual Outcome Severity Scales

Crash/Near-Crash Dichotomy

The crashes and near crashes were identified by VTTI through kinematic triggers and site reports (and for some crashes, crash notifications by the drivers). Classification of these events as crashes and near crashes was done through manual reduction of video.

DeltaV

In traditional crash databases and in-depth crash investigations, the outcome severity is either directly related to physical harm (injury or death) or defined in monetary terms (cost of repairs, health care costs, loss of functional years, etc.). The Abbreviated Injury Scale (AIS) is a commonly used injury severity metric for classification of injuries to different parts of the body according to an established coding schema (Gennarelli and Wodzin 2005). AIS ranges from 1 to 6—AIS 1 is a minor injury and AIS 6 is generally not survivable. A large body of research has been aimed at establishing the risk of sustaining a specific degree of injury given some measurable variable. One such variable is the change of velocity of the involved road users during the impact, the so-called DeltaV (Buzeman et al. 1998; Viano and Parenteau 2010). The calculation of DeltaV is based on the law of momentum conservation of the crashing road users. In this analysis we focus on lead-vehicle conflicts, thus Equation A.1 is relevant for DeltaV calculation (Kusano and Gabler 2010). Further, we assume pure rear-end collisions, thus the cos(α) term becomes 1. The masses of the involved road users are m1 and m2, while the speeds of the two road users at impact are V1 and V2. In lead-vehicle conflicts, the sum of speeds (V1 + V2) is equivalent to the relative velocity at the time of impact.

DeltaV = m_2 (V_1 + V_2) cos(α) / (m_1 + m_2)    (A.1)

The calculation of DeltaV in our implementation relies on the range rate between the subject vehicle (SV) and the lead vehicle (LV). That is, the range rate at impact is used to estimate V1 + V2. Also, an estimate of the subject-vehicle and lead-vehicle masses is needed.

Time to Collision and Minimum Time to Collision

The definition of time to collision (TTC) and minimum time to collision (minTTC) used in Chapter 8 as a severity metric is a simple range divided by range rate operation, where the range and range rates are calculated as described in Chapter 2. This definition of TTC and minTTC is different from the definition in Chapter 2 (used in most of the main report), and also different with respect to the inverse time to collision (invTTC) = 0.1 s⁻¹ threshold used in Chapter 8. The available data produce some challenges in the TTC and minTTC calculations, described below.
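To make the metrics above concrete, the following is a minimal sketch of how DeltaV (Equation A.1, with cos(α) = 1 and m1 taken as the subject vehicle, m2 as the lead vehicle) and the simple range-over-range-rate TTC could be computed. The function names, the sign convention for range rate (negative when closing), and the placeholder standard masses are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

# Illustrative "standard" masses (kg); the project used its own standard values.
SV_STANDARD_MASS = 1500.0
LV_STANDARD_MASS = 1500.0

def delta_v(closing_speed_at_impact, m_sv=SV_STANDARD_MASS, m_lv=LV_STANDARD_MASS):
    """Subject-vehicle DeltaV for a pure rear-end impact (Equation A.1, cos(alpha) = 1).
    closing_speed_at_impact is V1 + V2 in the text, estimated from the range rate
    at the moment the range reaches zero (m/s)."""
    return m_lv / (m_sv + m_lv) * closing_speed_at_impact

def ttc_series(rng, rng_rate):
    """Simple TTC = range / closing rate, per sample.
    Assumes rng_rate = d(range)/dt, so negative values mean the vehicles are closing;
    TTC is left undefined (inf) when they are not closing."""
    rng = np.asarray(rng, dtype=float)
    closing = -np.asarray(rng_rate, dtype=float)
    out = np.full(rng.shape, np.inf)
    valid = closing > 0
    out[valid] = rng[valid] / closing[valid]
    return out

def min_ttc(rng, rng_rate):
    """minTTC over an event: the smallest TTC observed while closing."""
    return float(np.min(ttc_series(rng, rng_rate)))
```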

Assumptions and Limitations

In the specific implementation of the actual severity scales in this project, the following main assumptions and limitations were identified.

Sampling Strategies in Naturalistic Driving Data

The crash/near-crash dichotomy is provided as is from VTTI. There are major issues with the sampling strategies in general in naturalistic driving studies. In particular, the near crashes found in the data are taken from a set of cases that meet any of a group of kinematic and other filters. The purpose of filtering is to reduce the workload of video reviewers, but this goal is not equivalent to the goal of finding a random sample of near crashes. In fact, filters for near crashes have become increasingly severe (e.g., harder braking, lower TTC) over time as naturalistic studies have been performed. This biases the collection of near crashes in ways that are not understood because false negatives are not reviewed. Furthermore, at the end of the process, non-near-crash events are filtered out in a partly subjective manner by the annotator looking at the videos. An associated problem with this method is that the filters by which near crashes and crashes are typically found are different. Thus, near crashes, which are treated as a category of events that can serve as a surrogate for crashes, are filtered into the data set by different means. This in itself can make near crashes different from crashes in unintended and unmeasured ways.

General Problems with Estimating DeltaV

Several basic issues with estimating DeltaV pertain to the sources of data used in the calculation of DeltaV and are relevant to our implementation of DeltaV calculations. First, we do not use (or have) any coefficient of restitution. That is, we are considering all impacts to be completely plastic. Second, we do not have the actual change in speed; we only have an estimate of the impact speed (that estimate may be better than what is available from many other data sources used to calculate DeltaV, however).

Estimate of Vehicle Masses for DeltaV Calculations

To perform the type of DeltaV calculations that we do, we need the masses of the involved vehicles. We do not have these. We do not have the subject vehicles' mass in any event (although it is likely available elsewhere); for the lead vehicles, we have the masses extracted from databases when the make and model of the lead vehicle could be determined from image data. Thus, for the subject vehicles we use a "standard" mass, and for the lead vehicles, we use either the make-and-model mass (empty) or a "standard" mass according to vehicle-type classification.

DeltaV and minTTC Based on Manual Lead-Vehicle Width Estimates

Both the DeltaV and the minTTC metrics are based on the manually annotated lead-vehicle width and an estimate of the actual lead-vehicle width. This means that all issues related to this approach are also inherent in the DeltaV and minTTC estimates. One major issue related to this way of estimating range is this: width estimates of the lead vehicle (in meters) are needed to get range. A 10-cm width estimate error gives approximately a 6% error in the range (Bärgman et al. 2013). DeltaV and TTC use the derivative of the range, with calculation of TTC also involving the absolute range. Note, however, that although our DeltaV estimates may have built-in errors through the impact-speed (i.e., range rate at impact) estimates, our estimates are not derived post hoc by studying vehicle deformation, but rather use a more direct measure of impact speed.
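As a rough numerical illustration of the width sensitivity noted above, the sketch below assumes a simple small-angle (pinhole) relation in which the estimated range scales linearly with the assumed physical lead-vehicle width; the subtended angle and widths are made-up example values, not project data.

```python
def range_from_width(assumed_width_m, subtended_angle_rad):
    """Range estimate from the visual angle subtended by the lead vehicle,
    assuming a small-angle model: range ~ width / angle."""
    return assumed_width_m / subtended_angle_rad

angle = 0.036                 # example subtended angle of the lead vehicle (rad)
true_width = 1.80             # assumed true lead-vehicle width (m)
annotated_width = 1.90        # width overestimated by 10 cm

r_true = range_from_width(true_width, angle)        # 50.0 m
r_est = range_from_width(annotated_width, angle)    # ~52.8 m
print((r_est - r_true) / r_true)                    # ~0.056, roughly the 6% error cited above
```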
A.2 Potential Outcome Severity Scales

This section provides detailed descriptions of how the Model-estimated Injury Risk (MIR) severity scale can be calculated for two different gaze anchor point strategies. Details on the calculation of the potential severity scale are described, and assumptions and limitations of the what-if simulations and the corresponding MIR scale are discussed. As stated in the main report, the severity scale is formulated as the expectation of some function describing the severity of a crash or a near-crash event with respect to the probability distribution describing time to eyes-on-road given that the glance covers some critical time point (Approach 1). We will also consider the option of conditioning on the glance starting at some particular point (Approach 2). That is, we add one option to how the severity scales may be calculated. In mathematical terms, the scale calculations can be formulated as follows. We provide a somewhat simple description, followed by a more mathematical description.

General Description of the MIR Severity Scale Calculation

Possible what-if questions include the following:

• What if the driver looked away at a particular point (e.g., defined through invTTC = 0.1 s⁻¹)?
• What if the driver glanced away during a longer/shorter time than he or she actually did?

We will attempt to answer these questions by considering the different driver behaviors (i.e., times when the driver looks back on road) that can occur in a rear-end crash scenario, what these behaviors lead to (e.g., risk of injury), and the probability of such behaviors occurring for an average driver in a particular traffic situation.

To answer the questions above, we need to obtain appropriate distributions describing glance-off-road behavior. This is more difficult than it may seem. Data usually provide the frequencies that describe how often a glance of a certain length occurs in a particular driving situation, such as the matched baseline data in our data set. These frequencies give an approximation of the statistical distribution of the glance lengths, given that the driver has, in fact, glanced away from the road (even if, depending on how the data were collected, some transformations may be necessary; Rootzén and Zholud, submitted). Observe that, although below we will operate with these conditional distributions, we may very well apply similar approaches to the unconditional ones (i.e., the distributions that incorporate the possibility that the driver is not looking away from the road at the particular point of time that we are studying).

Let us assume that we have obtained the empirical probability density function (PDF) describing the off-road glance length. We also have constructed some sort of function describing the danger of late reaction—assumed to be caused by an off-road glance (e.g., the probability of an injury). We are now interested in measuring the expected risk corresponding to a situation given that certain conditions are met (e.g., the braking profile of the leading car looks a certain way, and the driver of the following vehicle had eyes off road at a crucial time point). The expected risk can be calculated by taking the expectation of the risk function with respect to the distribution describing when the driver of the following vehicle looks back on road. This distribution need not be the same as the glance length distribution obtained from the data, since both of the questions above involve conditioning: in the first the off-road glance covers a critical point, and in the second the off-road glance starts at a particular point.

Let us start by constructing the severity index corresponding to the latter approach—that is, the one conditioned on a glance starting at a particular point (see Figure A.1). Consider what we need and what we have. We need a PDF describing time to eyes on road given that a glance has started. Call it h1. We have the raw data from which the empirical distribution of the glance length, given that a glance occurs, can be constructed. That is, the raw data provide exactly what we want, and no further transformations of the glance distribution are necessary. We can calculate the severity index in a straightforward manner by applying the formula for expectation of a function with respect to a discrete distribution:

E[R(T)] = Σ_i f_i · R_i

Observe that, in general, h1 can have any form. It may, for example, put all the probability mass on time points that are larger than time to collision, leading to the same E[R(T)] that would have resulted if all the probability mass was concentrated at the exact moment of collision.

Constructing the severity index corresponding to the first question (Approach 1) is a bit more complicated. Again, it involves a conditioning. But now, rather than saying that a glance has to start at a certain point, we say that it has to cover this certain point. That is, we need the distribution of time to look back given that the glance is longer than the distance between the start of the glance t1 and the critical point (see Figure A.2).
The resulting distribution will then describe the probability of looking off the road for an additional number of seconds, given that the driver has already looked away from the road for a while and missed the time point that we are interested in. This distribution will be nonnegative for all time points t such that t is greater than 0 and less than the longest glance length observed in the empirical data. It will be continuous and nonincreasing. Because of the continuity, the severity index will be calculated as an integral, rather than a sum—that is,

E[R(T)] = ∫ h(t) R(t) dt

One of the consequences of this construction will be that E[R(T)], unlike in the previous case, will not become a constant if all the probability mass of f is to the right of time to collision. That is, if time to collision is 5 seconds and we have two empirical f(x) distributions, one placing all its mass on 10 seconds and another on 15 seconds, E[R(T)] will be larger in the latter case than in the former.

Figure A.1. Application of a glance distribution (matched baseline) at the beginning of the original last off-path glance (example with a 2-second original last off-path glance).

Figure A.2. Conditioning the glance distribution (matched baseline) on overlapping a fixed point (invTTC = 0.1 s⁻¹; example with a 2-second original last off-path glance).
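As a minimal illustration of Approach 2 (conditioning on the glance starting at a given point), the sketch below evaluates E[R(T)] = Σ_i f_i · R_i directly from an empirical glance-length histogram and a per-time risk function. The glance lengths, probabilities, and toy risk curve are made up for illustration only.

```python
import numpy as np

# Hypothetical empirical glance-length distribution: lengths (s) and probabilities.
glance_lengths = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
f = np.array([0.35, 0.30, 0.20, 0.10, 0.05])   # sums to 1

def R(t):
    """Toy risk function: risk if the driver looks back t seconds after the anchor point."""
    return 1.0 / (1.0 + np.exp(-3.0 * (t - 2.0)))

# Approach 2: expectation of R over the raw glance-length distribution.
expected_risk = float(np.sum(f * R(glance_lengths)))
print(expected_risk)
```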

Mathematical Description of the MIR Severity Scale Calculation

Let X denote the (stochastic) length of an off-road glance. We will call the corresponding PDF f(x). In our case, this function is usually obtained from the data. However, we could also use a generic PDF to, for example, assess the impact that different hypothetical glance behaviors have on the severity scale. Further details on how f(x) is obtained can be found below.

Further, assume that a glance of length X covers some critical point Tcrit and consider the difference between X and Tcrit—that is, the remaining time of off-road glance given that Tcrit occurred somewhere in [0, X]. Denote this (stochastic) length by T, with the corresponding PDF h(t). In statistical terms, h(t) is the overshot distribution of f(x), and the corresponding cumulative distribution function (CDF) describes P(X − Tcrit < t | 0 < Tcrit < X). For further details on construction of h(t), see the next section.

For each time point in a scenario, we construct a risk function R(t). Theoretically, R(t) may be any function, but it usually describes consequences arising from the driver glancing back at t. Examples are the impact-speed DeltaV and probability of an injury; see the section on construction of R(t) below.

A severity scale is constructed by combining either R(t) and h(t) (Approach 1) or R(t) and f(x) (Approach 2). Explicitly, in Approach 1 we calculate

E[R(T)] = ∫ R(t) h(t) dt

and for Approach 2

E[R(T)] = ∫ R(t) f(t) dt, or alternatively E[R(T)] = Σ_i R(x_i) f_i if f is discrete, as is the case when it is obtained from empirical data.

Note that the second approach uses the distribution corresponding to the actual glance lengths, while the first approach uses the overshot distribution. The logic behind this can be most easily understood by considering the difference in conditioning in the first and the second approach. In the second approach, we are interested in the length of a glance given that it starts at a particular point, which is exactly what we obtain by considering the possible glance length in some population. In the first approach we are interested in the glance length given that it covers a particular point. The f(x) distribution is no longer applicable, but, rather, has to be transformed into h(t).

Construction of the Overshot PDF h(t)

To begin with, let us introduce some notation. For calculations, we operate with two probabilistic distributions: F(x), which is the CDF (cumulative distribution function) corresponding to the distribution of the glance lengths in a population, and H(t), which is the CDF of the so-called overshot distribution of F(x), defined as H(t) = P(X − Tcrit < t | 0 < Tcrit < X). We also have the corresponding PDFs f(x) and h(t). Below, we describe in detail how we may obtain h(t) from f(x).

Observe that in the expression for H(t) above we have two random entities: the length of an off-road glance X, and the exact time at which Tcrit occurs. We also have a conditioning, namely that Tcrit should occur during an off-road glance. We also have h(t) = H′(t) (i.e., the PDF is the derivative of the CDF with respect to t). Last, to simplify calculations, we assume that f(x) is discrete, as is the case if it is obtained from the real data.
The approach can be easily generalized to the case where f(x) is a generic continuous distribution by, essentially, replacing summation with integration. Keeping this in mind, we rewrite the expression for H(t) as

H(t) = Σ_x P(Tcrit > x − t | X = x, 0 < Tcrit < X) · P(X = x | 0 < Tcrit < X) = Σ_x s(t) · g(x)

That is, we condition on X taking on a particular value x and sum over all possible x. Let us now consider the two terms within the sum, g(x) and s(t), separately.

The g(x) term reflects the probability that we will see a glance of a certain length in a population, given that this glance contains some critical point. Observe that, if we assume that Tcrit can occur at random (i.e., as a Poisson process) during driving, then we are more likely to see a Tcrit within a longer glance than within a shorter glance. So, to obtain g(x) we need to perform a so-called size-bias correction of f(x) that reflects this property. Explicitly, we calculate g(x) through

g(x) = x · f(x) / Σ_x x · f(x)

so that the probability that Tcrit is inside a longer glance increases proportionally to glance length x.

The s(t) term describes the probability that, given that we have a glance of a certain length x and that Tcrit is within this glance, the critical point is to the right of some time point t, 0 < t < x. To derive this probability, we make the assumption that, under these conditions, Tcrit is distributed according to a Uniform(0, x) distribution—that is, it can occur anywhere within the glance with equal probability. Given this assumption, we have s(t) = t/x if x > t and s(t) = 0 if x < t, and we arrive at the expression

H(t) = Σ_{x: x > t} t · f(x) / Σ_x x · f(x)

Finally, using h(t) = H′(t), we can differentiate the sum with respect to t to obtain

h(t) = Σ_{x: x > t} f(x) / Σ_x x · f(x)

In words, h(t) for a certain t is basically the (normalized) sum of probabilities of the glances that correspond to x larger than t. One thing to note here is that h(t) is defined for all t and not just t such that t = x. This means that, although f(x) is discrete, h(t) is continuous.
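The following is a small numerical sketch of the size-bias and overshot construction just described, applied to a hypothetical discrete glance-length distribution; the glance lengths, probabilities, and toy risk function are illustrative only.

```python
import numpy as np

# Hypothetical empirical glance-length PMF f(x): lengths x (s) and probabilities f.
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
f = np.array([0.35, 0.30, 0.20, 0.10, 0.05])

norm = np.sum(x * f)          # denominator sum_x x * f(x)
g = x * f / norm              # size-biased distribution g(x)

def h(t):
    """Overshot density h(t) = sum over {x > t} of f(x), divided by sum_x x * f(x)."""
    return np.sum(f[x > t]) / norm

def R(t):
    """Toy risk function, for illustration only."""
    return 1.0 / (1.0 + np.exp(-3.0 * (t - 2.0)))

# Approach 1 severity index: E[R(T)] = integral of h(t) * R(t) dt,
# approximated here on a fine grid (h is piecewise constant, so this converges quickly).
grid = np.linspace(0.0, float(x.max()), 2001)
expected_risk = np.trapz([h(t) * R(t) for t in grid], grid)
print(g, expected_risk)
```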

Construction of R(t)

The construction of R(t) is less mathematical. The following is just an enhancement of the description in the main report (Chapter 8); Figure 8.1 in the main report is advantageously used in combination with this description, and a code sketch of the same steps follows the list.

1. Vehicle kinematics are extracted. The subject-vehicle speed (SVspeed) is the interpolation (usually up-sampling) of the CAN speed to match event time. The lead-vehicle speed is the sum of the SVspeed and the range rate.
2. Start of evasive maneuver is identified by using the manually annotated reaction point (reduction by VTTI) and set to the first time after the reaction point when the derivative of the SVspeed is negative (deceleration).
3. The evasive maneuver is "washed away" (removed) by setting all SVspeeds after the start of evasive maneuver to the SVspeed at the start of evasive maneuver.
4. A brake profile is chosen. We have chosen 8 m/s² (see the assumptions section below for the reasoning).
5. A braking rate corresponding to a constant 8 m/s² is applied at each time in event time (each 0.1 second), simulating different SV driver responses.
6. The simulated SVspeed and the lead-vehicle speed (LVspeed) are integrated for each simulation to get changes in distance.
7. Using the SV and LV distances from speed integration and the initial range between the two, each simulation is checked for ranges below zero (crash).
8. For the simulations resulting in a crash, the impact speed is calculated by extracting the relative speed between SVspeed and LVspeed at the first time the range goes below zero.
9. A hypothetical impact-speed profile is created in event time. That is, for each simulated start of evasive maneuver there is a corresponding impact speed. The impact speed is zero for all no-crash simulations, and it is the impact speed calculated in the previous step for all crash simulations.
10. The masses of the lead vehicle and the subject vehicle are estimated. The SV mass was set the same for all events; the LV mass was either retrieved from databases after identifying the make and model from video, or a mass per vehicle category was applied.
11. An injury-risk function is chosen. We created an injury-risk function for rear-end events based on National Automotive Sampling System–Crashworthiness Data System (NASS-CDS) rear-end-striking crashes, using logistic regression. The function used was injury_risk = 1/(1 + exp(-(-10.31316311 + 0.33308 * DeltaV))); it relates DeltaV to MAIS3+ injuries, with an intercept adjustment. The CDS data set contains only tow-away crashes, which are more severe than the average police-reported crash. This biases the intercept term in the relationship between DeltaV and injury, but, unfortunately, there is no police-reported crash data set that includes DeltaV. To remedy this, we adjust the intercept according to the Breslow (1996) formula. The formula requires a base risk of injury, which is estimated based on the National Automotive Sampling System–General Estimates System (NASS-GES) data set, a data set of police-reported crashes.
12. The hypothetical impact-speed profile is converted to a hypothetical injury-risk profile by applying the injury-risk function. Now there is an injury risk associated with each start of evasive maneuver. This is one example of R(t), used in the current implementation. However, R(t) can be any function describing the severity of an individual event as a time series.
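The following is a minimal sketch of steps 1 through 12 under the stated assumptions (constant 8 m/s² braking, 0.1-second event-time sampling, the quoted injury-risk coefficients). The function and array names, the placeholder masses, and the simplified handling of edge cases are illustrative; this is not the project's actual code, and for crashes the lead-vehicle speed after the original crash point is assumed to have been extrapolated beforehand (see the assumptions section below).

```python
import numpy as np

DT = 0.1        # event-time sample period (s)
BRAKE = 8.0     # assumed constant braking deceleration (m/s^2)

def injury_risk(delta_v):
    """Logistic MAIS3+ risk function with the coefficients quoted in step 11
    (units of delta_v as in the project's fitted function)."""
    return 1.0 / (1.0 + np.exp(-(-10.31316311 + 0.33308 * delta_v)))

def what_if_risk_profile(sv_speed, range_rate, range0, reaction_idx,
                         m_sv=1500.0, m_lv=1500.0):
    """Hypothetical injury-risk profile R(t) for one lead-vehicle event.

    sv_speed     : subject-vehicle speed in event time (m/s, 10 Hz)
    range_rate   : d(range)/dt between SV and LV (m/s); LVspeed = SVspeed + range rate
    range0       : range between the vehicles at the first sample (m)
    reaction_idx : manually annotated reaction point (sample index)
    """
    sv_speed = np.asarray(sv_speed, dtype=float)
    lv_speed = sv_speed + np.asarray(range_rate, dtype=float)      # step 1

    # Step 2: start of evasive maneuver = first deceleration after the reaction point.
    decel = np.where(np.diff(sv_speed[reaction_idx:]) < 0)[0]
    evasive_idx = reaction_idx + (int(decel[0]) if decel.size else 0)

    # Step 3: "wash away" the evasive maneuver.
    sv_nom = sv_speed.copy()
    sv_nom[evasive_idx:] = sv_speed[evasive_idx]

    n = len(sv_nom)
    impact_speed = np.zeros(n)                                     # step 9 (zero = no crash)
    for onset in range(n):                                         # steps 4-5: brake at each onset
        sv_sim = sv_nom.copy()
        for i in range(onset + 1, n):
            sv_sim[i] = max(sv_sim[i - 1] - BRAKE * DT, 0.0)

        # Steps 6-7: integrate speeds to traveled distances and check for range <= 0.
        sv_dist = np.cumsum(sv_sim) * DT
        lv_dist = np.cumsum(lv_speed) * DT
        rng = range0 + lv_dist - sv_dist
        hit = np.where(rng <= 0.0)[0]
        if hit.size:                                               # step 8: relative speed at impact
            impact_speed[onset] = sv_sim[hit[0]] - lv_speed[hit[0]]

    # Steps 10-12: impact speed -> DeltaV (Equation A.1) -> injury-risk profile R(t).
    delta_v = m_lv / (m_sv + m_lv) * impact_speed
    return injury_risk(delta_v)
```

MIR for an event would then be obtained by taking the expectation of this R(t) profile with respect to h(t) or f(x) as described earlier, with the fixed 0.4-second reaction time (discussed below) offsetting the simulated brake onset from the eyes-on-road time.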
Using invTTC Instead of invTau

The what-if simulation development has been run in parallel with other parts of the project. Until late in the project, inverse tau (invTau) was used instead of invTTC in most analysis. Since the two metrics are similar, but TTC is understandable by more people, the decision was made to change to invTTC in most places in the project and to describe the difference in Chapter 3. Specifically for the MIR/MCR development, it was the difference at invTau = 0.1 s⁻¹ that mattered. In analyzing the difference between invTau and invTTC at invTau = 0.1 s⁻¹, it was found that the difference is practically negligible (difference: M = -0.000526 s⁻¹, SD = 0.000859 s⁻¹, max = -0.000019 s⁻¹, min = -0.00081 s⁻¹). The equation for calculation of the difference between optically defined invTau and optically defined invTTC at invTau = 0.1 s⁻¹ is

Diff = τ⁻¹|_{τ⁻¹ = 0.1 s⁻¹} − TTC⁻¹|_{τ⁻¹ = 0.1 s⁻¹} = 0.1 − θ̇/sin(θ)|_{τ⁻¹ = 0.1 s⁻¹}

where θ is the optical angle subtended by the lead vehicle and θ̇ is its rate of change. Note that the invTTC used in the project is calculated at the camera position and not at the vehicle bumper. Thus, it is optically defined invTTC at the camera.
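As a numerical illustration of why this difference is negligible, the sketch below compares the optically defined inverse tau (θ̇/θ, with θ the angle subtended by an assumed 1.8-m-wide lead vehicle) against invTTC (closing rate over range) for a simple constant-closing-speed geometry. The scenario values are made up and only intended to show the order of magnitude.

```python
import numpy as np

w = 1.8                                    # assumed lead-vehicle width (m)
t = np.arange(0.0, 8.0, 0.01)              # time (s)
rng = 30.0 - 2.0 * t                       # range closing at a constant 2 m/s

theta = 2.0 * np.arctan(w / (2.0 * rng))   # optical angle subtended by the LV (rad)
theta_dot = np.gradient(theta, t)
rng_rate = np.gradient(rng, t)

inv_tau = theta_dot / theta                # optically defined inverse tau
inv_ttc = -rng_rate / rng                  # inverse TTC (closing rate / range)

k = int(np.argmin(np.abs(inv_tau - 0.1)))  # sample where invTau is closest to 0.1 1/s
print(inv_tau[k] - inv_ttc[k])             # on the order of -1e-4 1/s, i.e., practically negligible
```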

Maximum Severity DeltaV Severity Measure

The Maximum Severity Delta Velocity (MSDeltaV) is the severity that would result if the subject vehicle's driver performed no evasive maneuver. DeltaV is the most common approach for measuring crash severity in traditional accident investigations and for correlation with injury risk (see the DeltaV section above); we calculate what DeltaV would result if no evasive maneuver was made by the SV driver. The MSDeltaV severity requires the identification of the SV evasive maneuver (see Chapter 3, variable definitions). We define MSDeltaV as the relative velocity at impact if the subject vehicle continues with the same speed as just before the evasive maneuver, multiplied by the mass ratio m2/(m1 + m2) as explained in the DeltaV section above. Since MSDeltaV requires an evasive maneuver (which is then "removed"), it is not defined for baseline events. However, the calculations could be applied for any chosen time point in a baseline event. This would in some events result in very high MSDeltaV values. Consider two vehicles, 1,000 m apart and both driving at a constant steady-state speed of 100 km/h. Even if the lead vehicle braked softly to a stop, MSDeltaV calculated using the time point when the lead vehicle started braking as a "random" evasive maneuver would result in a 100-km/h MSDeltaV. This issue may be mitigated by adding some time window for calculations. A main advantage, and at the same time a disadvantage, of MSDeltaV is that it does not depend on driver actions/reactions, and thus the measure may be considered orthogonal to driver behavior indicators.

Assumptions in the Creation of the What-If Simulations and Potential Severity Scales MIR, MCR, and MSDeltaV

The following subsections describe and discuss the reasoning behind the major and some minor assumptions made when creating the what-if simulations and the potential severity scales presented in the main report. They also describe limitations of the approach in general and issues with the application of the method to the SHRP 2 data set available for this project. The headings below are statements of assumptions. Note that we only aim to show proof of concept. Some of the discussions below, however, are from the perspective of the actual use of the method, while others only discuss the assumptions from the perspective of proof of concept.

The subject-vehicle driver not looking on the road ahead is the only reason for crashing, except when the driver keeps such short headway that, even with 8-m/s² deceleration, he or she cannot avoid a crash.

As described in the sections of the report focusing on analysis of mechanism, it was found that—for a majority of the crashes—drivers are looking away close to the crash. Other evidence also showed that glances seem to be a major contributing factor to the occurrence of crashes (Chapters 6 and 7). Further, in Chapter 7 it is shown that in some crashes the headway is too small for the driver to avoid crashing (Category 3 crashes described in Section 7.4), even if he or she is keeping an eye on the road and reacting fast. However, since not all rear-end crashes can be identified as having glances as a contributing factor (a portion of the Category 3 crashes, Section 7.4), the assumption and subsequent what-if simulations will fail to address such cases. This limitation has implications for using what-if simulations as we are proposing them. Care needs to be taken with respect to interpretation of results.

The single last glance is the only reason for a delay in reaction.

This assumption is studied in Chapters 6 and 7 and is partially rejected.
That is, the last glance is shown to play a large role in the occurrence of a crash, but it is not the only reason. However, since the aim of the implementation of the what-if simulations and potential severity scales was mainly to demonstrate the concept, this simpler model was chosen. A more complicated approach would be needed to also explain historical glances. Future research and applications of the what-if simulations should consider implementing more advanced and validated models of driver glance behavior. It is not unrealistic that most reaction models can be transformed into an "equivalent" last-glance distribution.

The subject-vehicle driver's glance behavior (glance-off-road distribution) is the same as that in the matched baseline up until the chosen anchor point (invTTC = 0.1 s⁻¹).

We chose an anchor point of invTTC = 0.1 s⁻¹ for several reasons. We wanted to capture the basic idea that drivers try to avoid critical situations by adapting to the situation. In the Tijerina et al. (2004) study of eyeglance behavior during car following, it was found that drivers under normal car-following conditions generally do not take their eyes off the road unless both the subject vehicle and lead vehicle are traveling at approximately the same speed (i.e., at range rates close to zero). Although we want to adopt a strategy based on drivers avoiding critical situations, we want a driver model that is reasonably related to the kinematics of crashes and near crashes. In Chapter 7, Figure 7.5, it is shown that for the matched baseline, only a few drivers started to look away at an invTTC higher than 0.1 s⁻¹. At the same time the mean of invTTC for near crashes is just below invTTC = 0.1 s⁻¹. This leads us, for proof of concept, to choose invTTC = 0.1 s⁻¹ as the gaze anchor point. Future applications of the method can change this anchor point. Another obvious anchor point would be invTTC = 0.2 s⁻¹ as described in Chapter 7 (e.g., Figure 7.3). Previously in this appendix we also included a description of how to apply the

glance distributions at the original start of last glance off road instead of conditioning on an overlap with, for example, invTTC = 0.1 s⁻¹ (or 0.2 s⁻¹). Further development is needed on the choice of anchor point. Should lower-value anchor points be chosen instead, in line with Tijerina et al. (2004), or should thresholds such as invTTC = 0.2 s⁻¹ be used? If a lower-value approach is chosen, data quality may also be a limiting factor (noise at longer ranges).

For task analyses, drivers' glance behaviors are the same as the original task distribution up until the chosen anchor point.

This has the same issues as for the matched baseline, but in addition, the initiation of a task is likely dependent on context. Thus, if analysis of task risks is to be performed using our method, it is advisable to do an extended scenario selection. That is, if the analyst can identify the contexts in which drivers are (not) engaging in the tasks to be evaluated, then only crashes and near crashes matching those contexts should be selected for the analysis. In that way the context bias is minimized.

Drivers respond by braking at 8 m/s² in all situations.

In the simulation framework any brake profile can be used, including dynamic brake responses that depend on context. However, in our work we chose 8 m/s² as a conservative (hard) brake response. By choosing a relatively aggressive brake profile, estimates of risks are lowered. The brake profile choice has a pronounced effect on the outcome scale, and this choice needs to be an informed one, depending on the application of the method.

The braking is initiated after a fixed reaction time (0.4 second) after the driver looks back on the road.

Conclusion 3 of Section 7.6 states that the results from the analysis in Chapter 7 show that a fixed reaction time is not correct, for several reasons. However, we chose a fixed reaction time of 0.4 second because it produces conservative estimates of risk. The value 0.4 second is the median of the reaction time for crashes when the reaction is made after invTTC ≥ 0.1 s⁻¹ is fulfilled (Figure A.3). That is, the invTTC has reached at least 0.1 s⁻¹ before the driver reaction. Alternative reaction models can be integrated into the current model.

Figure A.3. Driver response time between last glance on path and the driver reaction in crashes with invTTC ≥ 0.1 at glance on-road onset (n = 26; mean 0.458 second; median 0.400 second).

Peripheral vision does not help the driver react earlier (by moving the eyes back to the road).

In this approach we are not considering glance-off-road eccentricity and potential effects of looking back earlier due to peripheral vision. To allow for inclusion of such effects, data on glance-off-road eccentricity are needed, together with a model of the effects as a function of context (e.g., looming).

The quality of data does not affect results significantly.

Data in the available data set have several quality issues. The following issues have been identified as directly influencing what-if simulation results, but others may not have been found.

The SVspeed signal is, for many events, not synchronized with other data, especially range rate. This quality issue has major implications in the what-if simulations. Since the LVspeed is

a major component (kept constant in all simulations of an individual event) in the what-if simulations, the creation of LVspeed is crucial. Since LVspeed is calculated by adding SVspeed and range rate, the synchronization of those two signals is important. If the SVspeed is a few hundred milliseconds late (delayed) in an event, a hard deceleration by the following vehicle—seen in the range rate—will be seen as a fast increase in LVspeed (acceleration). This lead-vehicle acceleration is then just an artifact, creating delays in the crash point in what-if simulations. It may be possible to address the synchronization issues per event, but we treat such artifacts as noise in the data.

The SVspeed signal has a sample frequency of only 1 Hz for some events (vehicles). This issue is similar to that of synchronization, since 1 second between samples is effectively a delay when there are fast deceleration responses.

The range, range rate, and invTTC signals are noisy at larger ranges between the SV and LV. Both LVspeed and invTTC are major components in the what-if simulations. Both are based on the manually annotated lead-vehicle widths. To understand the implications of data quality, sensitivity analysis would have to be run.

Glance-off-path distributions from the matched baseline events in our sample are representative for car-following scenarios.

Sample selection bias is an issue because the matched baselines are matched to crashes and near crashes. It is not unlikely that these crashes and near crashes are created by drivers who have longer glances away from the roadway (since they actually got into the events in the first place) or who are close followers (tailgaters). It would be interesting to compare matched baselines in which the situation, but not the driver, was matched, with the matched baseline. Such analysis would confirm or reject the potential bias concerns. Also, a methodological enhancement would be to match events of particular scenario types with the glance-off-road distributions for each respective type.

The choice of injury-risk function and the transformation of DeltaV into injury risk are relevant for our events.

Calculation of injury risk is often difficult, and calculation of hypothetical injury risks may be controversial. The steps used to get to injury risk are crucial. We use an injury-risk function based on NASS-CDS data on rear-end collisions, with an intercept adjustment to account for the lower-severity events in our sample (see Chapter 8). Researchers implementing MIR or MCR should take care in using the impact-speed to injury-risk transformation most appropriate to the specific analysis at hand.

Identification of the time point of start of evasive maneuver in the original events is correct.

Our implementation of start of evasive maneuver is relatively crude. It uses manual annotation of the first driver reaction to the event as a basis for the definition of start of evasive maneuver. There are many approaches to extracting the start of evasive maneuver. However, implementing a pure mathematical definition is problematic, since events play out in very different ways in naturalistic data. It may be wise in future studies to evaluate the choice of start of evasive maneuver through sensitivity analyses. If an incorrect evasive maneuver is implemented, the initial conditions for the simulations will be wrong. Specifically, the initial SVspeed will be wrong.
It is reasonable to assume that, had the event not become critical, the driver would have continued at the speed he or she was maintaining just before the evasive maneuver.

This assumption is likely valid in many contexts, but there are definitely contexts in which this does not hold. When a driver is approaching an intersection with a red traffic light or is already turning, he or she may look away for a short period of time (for some reason), but that driver is not likely to take the long glances off path that drivers in car-following situations on a freeway do. That is, glance distributions are contextually dependent. To perform more detailed and contextually correct MIR and MCR calculations, different glance distributions may have to be used for different contexts (e.g., use freeway-driving matched baselines for freeway near crashes and crashes and identify other scenario types for other contexts). This may be difficult, and thus, future application of MIR and MCR may have a limitation in use.

The assumptions necessary to be able to perform the what-if simulations, with respect to the lead-vehicle actions, are reasonable.

When creating what-if simulations for near crashes, the LVspeed (SVspeed + range rate) is available from the original event. For crashes, however, LVspeed is not available after the crash. This means that for simulations extending after the crash point (higher severity), assumptions must be made. We assume that the lead vehicle would have continued with the same deceleration as just before the event, until it was standing still. If the lead vehicle was accelerating, we set the LVspeed for all times after the original crash point to the LVspeed observed in the sample before the crash point. The implications of this approach are not easily understood and have to be investigated further.
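A minimal sketch of this lead-vehicle speed extrapolation beyond the original crash point might look as follows. The rule for deciding whether the lead vehicle was decelerating is reduced here to the sign of its last speed change before the crash point, which is an illustrative simplification rather than the project's exact criterion.

```python
import numpy as np

def extend_lv_speed(lv_speed, crash_idx, n_extra, dt=0.1):
    """Extrapolate lead-vehicle speed past the original crash point.

    If the LV was decelerating just before the crash point, continue that
    deceleration until standstill; if it was accelerating (or steady),
    hold the last pre-crash speed constant."""
    lv_speed = np.asarray(lv_speed, dtype=float)
    pre_accel = (lv_speed[crash_idx] - lv_speed[crash_idx - 1]) / dt
    last = lv_speed[crash_idx]
    if pre_accel < 0.0:   # decelerating: continue the same deceleration, floored at 0
        tail = np.maximum(last + pre_accel * dt * np.arange(1, n_extra + 1), 0.0)
    else:                 # accelerating or steady: hold the pre-crash speed
        tail = np.full(n_extra, last)
    return np.concatenate([lv_speed[:crash_idx + 1], tail])
```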

The exclusion of events in the MIR/MCR calculation does not bias results significantly.

In our MIR and MCR analysis we excluded events that did not complete the what-if simulations for a number of reasons. One of these reasons was that the event did not become a crash even in the 5 seconds available after the event. That is, the relative velocities and the initial distance were such that there was no crash for 5 seconds. This, and other outliers in the data that are excluded, may produce bias in results. However, if MIR/MCR are used to compare a distribution of MIR/MCR between, for example, two tasks, this is likely negligible. Also for other applications it is likely a minor issue.

Individual drivers' glance behavior and driving style are independent.

It is clearly not the case that individual drivers' glance behavior and driving style are independent. The frequency at which critical situations occur should, to some extent, depend on the driver. Also, it could well be the case that glancing behavior, reaction times, and so on are different for different drivers. The current approach does not take such dependencies into account. Potentially, there may be a clear correlation between a driver's glancing behavior and his or her driving style—for example, a driver who knows himself or herself to be easily distracted may tend to keep a longer distance from the leading vehicle.

A.3 References

Bärgman, J., J. Werneke, C.-N. Boda, J. Engström, and K. Smith. 2013. Using Manual Measurements on Event Recorder Video and Image Processing Algorithms to Extract Optical Parameters and Range. Presented at 7th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, Bolton Landing, New York.

Buzeman, D., D. Viano, and P. Lövsund. 1998. Car Occupant Safety in Frontal Crashes: A Parameter Study of Vehicle Mass, Impact Speed, and Inherent Vehicle Protection. Accident Analysis & Prevention, Vol. 30, No. 6, pp. 713–722.

Gennarelli, T. A., and E. Wodzin (eds.). 2005. Abbreviated Injury Scale 2005. Association for the Advancement of Automotive Medicine, Des Plaines, Ill.

Jonasson, J. K., and H. Rootzén. 2014. Internal Validation of Near-Crashes in Naturalistic Driving Studies: A Continuous and Multivariate Approach. Accident Analysis & Prevention, Vol. 62, pp. 102–109.

Kusano, K., and H. Gabler. 2010. Potential Occupant Injury Reduction in Pre-Crash System Equipped Vehicles in the Striking Vehicle of Rear-End Crashes. Annals of Advances in Automotive Medicine, Vol. 54, pp. 203–214.

Lee, D. N. 1976. A Theory of Visual Control of Braking Based on Information About Time-to-Collision. Perception, Vol. 5, pp. 437–459.

Rootzén, H., and D. Zholud. Submitted 2014. Tail Estimation for Window Censored Processes. http://www.zholud.com/articles/Tail-estimation-for-window-censored-processes.pdf.

Tijerina, L., F. S. Barickman, and E. N. Mazzae. 2004. Driver Eye Glance Behavior During Car Following. DOT HS 809 723. National Highway Traffic Safety Administration.

Viano, D. C., and C. S. Parenteau. 2010. Ejection and Severe Injury Risks by Crash Type and Belt Use with a Focus on Rear Impacts. Traffic Injury Prevention, Vol. 11, No. 1, pp. 79–86.
