Suggested Citation: "Appendix B - Project 5 Data Dictionary." National Academies of Sciences, Engineering, and Medicine. 2011. Feasibility of Using In-Vehicle Video Data to Explore How to Modify Driver Behavior That Causes Nonrecurring Congestion. Washington, DC: The National Academies Press. doi: 10.17226/14509.


Appendix B

Project 5 Data Dictionary

Coding Key for RDCWS Alerts

LDWS and CSWS alerts were coded using different criteria; driver behavior, however, was rated in the same way for each set of alerts. Those criteria are listed first below. Specific categories regarding scenario details were different for each system. Each of the scenario coding keys is described after the driver behavior key.

Driver Behaviors

Location of the Driver's Eyes During the Last Nonforward Glance and Time from the Last Nonforward Glance

If the driver's eyes were on the forward scene at the moment of the alert but had looked away during some portion of the clip before the alert, this location was recorded. Reviewers also recorded the amount of time between when the driver's gaze began to return to the forward scene and the moment of the alert, according to the driver-vehicle interface (DVI) display on the computer monitor. The actual moment of the alert was not counted; the time represents the time between the change in gaze and the alert. Time was recorded in tenths of seconds.

If the driver was always looking forward, the time from the last nonforward glance was left null because that category was not applicable. If the driver was looking away 0.1 s before the alert and then was looking forward at the time of the alert, the time from the last nonforward glance was recorded as 0.

If the eyes were not visible, typically because of glare, for any portion of the clip, the location was coded as 9 because one could not be certain there was not a glance away. The only exception to this rule was when reviewers could not see the driver's eyes at first, the eyes then became visible, and there was a glance away before the alert. This situation negates the fact that reviewers could not see the eyes at the beginning of the clip, because there was a nonforward glance after the portion during which the eyes were unclassifiable. If the eyes were unclassifiable again, before the alert but after the glance, the eyes were coded as 9 because reviewers could not be certain what happened during that portion of the clip.

If one eye's location could be determined and the other eye's location could not, location was still coded. Reviewers were confident in coding eye position when only one eye could be seen because eyes normally move in parallel. If the driver's eyes were away before the alert and in transition at the time of the alert, the last nonforward glance code reflected where they were looking at the time of the alert, not where they had previously been looking. For more details on eye location, see Location of Eyes at Time of Alert; the criteria for classifying a glance as a specific location are the same as the criteria for eye location at the time of the alert.

• 0 = Always looking forward at the forward scene.
• 1 = Left outside mirror or window.
• 2 = Looking over left shoulder.
• 3 = Right outside mirror or window.
• 4 = Looking over right shoulder.
• 5 = Interior rearview mirror.
• 6 = Head down, looking at instrument panel or lap area.
• 7 = Head down, looking at center console area. (Console means the area where the stereo, thermostat, and clock are located.)
• 8 = Driver wearing sunglasses or glasses with glare. (Glare prohibited the ability to classify where the eyes were looking.)
• 9 = Cannot accurately evaluate eye location. (This was coded as 9 when reviewers were unsure of the eye position or classification within a reasonable level of confidence, although not because of glasses. Typically, reviewers could see the actual eye but could not determine where the gaze was directed. Eyes in transition were often coded as 9 because it was unclear where the driver's gaze was at that particular moment.)
• 10 = Other. (For example, the driver may clearly be looking at the passenger side floor. When a glance was coded as other, the location was noted in the notes section. The most common position recorded as other was the rearview mirror.)
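As an illustration only (the project's actual reduction software is not described in this appendix), the eye-location codes above can be represented as a simple lookup table. The same numbering is reused for the Location of Eyes at Time of Alert field described next; all names here are hypothetical.

```python
# Illustrative lookup table for the eye-location codes defined above.
# Code 0 applies to the last-nonforward-glance field; the same numbering
# is reused for eye location at the time of the alert.
EYE_LOCATION = {
    0: "Always looking forward at the forward scene",
    1: "Left outside mirror or window",
    2: "Looking over left shoulder",
    3: "Right outside mirror or window",
    4: "Looking over right shoulder",
    5: "Interior rearview mirror",
    6: "Head down, instrument panel or lap area",
    7: "Head down, center console area",
    8: "Sunglasses or glasses with glare",
    9: "Cannot accurately evaluate eye location",
    10: "Other (location noted in notes section)",
}

def describe_eye_location(code: int) -> str:
    """Return the label for an eye-location code; raises KeyError on unknown codes."""
    return EYE_LOCATION[code]
```

A table like this keeps the numeric codes in the reduced data while letting analysis scripts print human-readable labels.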

Location of Eyes at Time of Alert

This category was coded at the actual time of the alert. Eye location was coded by what reviewers could see of the driver's eyes at the time of the alert, even if they could not see the eyes before the alert. Reviewers coded the location of the driver's eyes even if they could see only one eye because it was assumed that the driver's eyes moved in parallel. Because of the absence of an eye-tracking camera and the limitations of the face camera, there was often some ambiguity about where the drivers were looking. Reviewers needed to be confident in the location of the driver's eyes to code a specific location. In many instances, reviewers were confident that the driver's eyes were not looking forward but could not tell specifically where the eyes were looking. These instances were coded as 9s. One such example is when the driver appeared to be looking at the camera. In this situation, it was difficult to determine if the driver was looking at the camera intentionally, glancing out of the corner of the eye, or looking slightly out the left window; therefore, it was coded as 9. Another example is when the driver was looking toward the curve that elicited the alert. The exact location of the driver's eyes could not be determined in these instances, although a notation was made in the notes field.

The determination of whether glances were still forward or were glances away was also difficult and subjective. Reviewers agreed on an area, or box, they considered to be looking forward; this allowed for slight glances, but even many scans across the forward scene were considered glances away. This process defined looking forward narrowly, essentially as meaning straight forward. Glances toward the right of the forward scene, the right area of the windshield, were glances away and were coded as 9s.

• 0 = Looking forward at forward scene. (Looking forward included looking at the head-up display [HUD].)
• 1 = Left outside mirror or window.
• 2 = Looking over left shoulder. (The driver's gaze needed to go over the driver's shoulder, but the driver's chin did not necessarily need to cross over the driver's shoulder.)
• 3 = Right outside mirror or window.
• 4 = Looking over right shoulder. (The driver's gaze needed to go over the driver's shoulder, but the driver's chin did not necessarily need to cross over the driver's shoulder.)
• 5 = Interior rearview mirror.
• 6 = Head down, looking at instrument panel or lap area. (Looking at the HUD was not considered part of the instrument panel.)
• 7 = Head down, looking at center console area. (Console means the area where the stereo, thermostat, and clock are located.)
• 8 = Driver wearing sunglasses or glasses with glare. (The glare prohibited the ability to classify where the eyes were looking. In some instances, drivers were wearing sunglasses, but reviewers believed that they could confidently identify the location of the drivers' eyes. In these instances, eye location was recorded.)
• 9 = Cannot accurately evaluate eye location. (The code 9 was chosen when reviewers were unsure of the eye position or classification within a reasonable level of confidence but not because of glasses. Typically, reviewers could see the actual eye but could not determine where the gaze was directed. Eyes in transition were often coded as 9 because it was unclear where the driver's gaze was at that particular moment.)
• 10 = Other. (For example, the driver may clearly be looking at the passenger side floor. When a glance was coded as other, the location was noted in the notes section. The most common position recorded as other was the rearview mirror.)

Eyes on Task at Time of Alert

• 0 = No. (The classification of no was used only when reviewers could confidently determine that the driver's eyes were off the task of driving at the time of the alert [e.g., the driver was looking at a friend or the stereo system].)
• 1 = Yes. (The classification of yes does not mean looking forward; it means that the driver's eyes were on the task of driving. Looking at the instrument panel, for example, was considered on task.)
• 2 = Cannot determine. (For instance, the driver was wearing glasses with glare, or reviewers could not see the driver's eyes for some other reason. This classification was also used when reviewers could not tell if the eye location was on task. For instance, the driver was looking out the window [e.g., toward a curve in the road], but it was unclear whether the driver was looking at the road and traffic or at a fancy building that was distracting the driver's attention. In any case, reviewers did not know whether the driver was on task.)

Eyes in Transition

To classify the eyes as in transition, the driver's eyes must have been in transition at the time of the alert and must have started the transition at least 0.1 s before the alert. The eyes could not be at the beginning of a transition or the end of one; they must have been in the transition at the time of the alert.

• 0 = No.
• 1 = Yes, toward forward scene.
• 2 = Yes, away from forward scene.
• 3 = Cannot tell. (Cannot tell was selected when the driver was wearing sunglasses or reviewers could not see the driver's eyes for some other reason; therefore, researchers were uncertain whether the eyes were in transition.)
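The rules above tie the time-from-last-nonforward-glance field to the glance-location code: the time is null only when the driver always looked forward (code 0), and 0 when the glance ended 0.1 s before the alert. A minimal consistency check along these lines might look as follows; the function and field names are illustrative, not taken from the project database.

```python
from typing import Optional

def check_glance_time(location_code: int, tenths_before_alert: Optional[int]) -> bool:
    """Check the coding rule relating glance location to glance time.

    location_code: eye-location code for the last nonforward glance
                   (0 = always looking forward).
    tenths_before_alert: time from the end of the last nonforward glance
                         to the alert, in tenths of a second; None = null.
    """
    if location_code == 0:
        # Always looking forward: the time field must be left null.
        return tenths_before_alert is None
    # Any recorded glance away needs a recorded, nonnegative time;
    # 0 means the driver was still looking away 0.1 s before the alert.
    return tenths_before_alert is not None and tenths_before_alert >= 0
```

A check like this could flag reduction records where a glance time was entered for a driver coded as always looking forward, or vice versa.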

Visual Response to Alert and Time to Visual Response

If the driver initiated a visual response to the alert, reviewers coded the time it took for the response by recording the number of tenths of a second. The time counted was the time between the alert and when the look was initiated, not including the moment of the alert or the moment of response. If the response was initiated within 1.0 s, the driver was considered to have looked in response to the alert. The amount of time it took to look in response was always recorded for applicable situations, even if it was greater than 1.0 s. If the driver was already looking at the road and continued to look forward, the code was null (not applicable). If reviewers were not sure of the location of the driver's eyes, the time to visual response was left null. The time to visual response was recorded for Week 1, even though there was no alert to which to respond. The rationale for coding this was that a baseline would provide an idea of what a normal time to visual response was compared with the time to response with an alert.

• 0 = Looked in response. (The driver initiated a look in response to the alert within 1.0 s. Glances qualified as a look in response.)
• 1 = Did not look in response to alert. (The driver did not look within 1.0 s of the alert.)
• 2 = NA. (This option was always used for Week 1 because there was no alert during Week 1; thus this category could not be coded, although the time to visual response was still coded. This option was also selected when the driver was already looking forward at the time of the alert.)
• 3 = Cannot tell. (The driver was wearing sunglasses or other glasses with glare, and reviewers could not tell where the driver's eyes were.)

Visual Occlusion

Occlusion was coded with regard to the driver as well as to reviewers. For instance, heavy rain or bright sun might have occluded the scene for both parties, whereas blurry video occluded the scene only for the reviewer. The occlusion did not necessarily have to affect the reviewers' ability to code the scene.

• 0 = None.
• 1 = Sun or headlight glare. (This classification includes when the scene was whitewashed from the sun. Only headlight glare was included in this section; taillight glare was coded as other.)
• 2 = Other, specified in notes section. (The most common entry was taillight glare.)

Startle Response

This category was subjective, and the classification was often hotly debated. The driver had to be visibly rattled. The driver's startle was observed by body response, dialogue, or both. Cursing was not sufficient to be coded as startle, because it may have resulted from anger or frustration, not startle. This category tried to capture startle at either the situation or the alert.

• 0 = No.
• 1 = Yes.

Steering in Response

• 0 = No steering in response to alert. (Small, jerky reactions or slight wiggling in response to the alert or to the situation was classified as 0 and was not considered steering.)
• 1 = Driver steered partially or fully in response to the alert. (Steering, for review purposes, was an evasive maneuver in an attempt to avoid striking a vehicle; thus there must have been a significant amount of steering.)

Hand Location at Time of Alert

Both hands were often not visible, so reviewers coded what could confidently be inferred from the scene. At times, playing the video further helped determine what was ambiguous in a still frame at the time of the alert. For instance, at the time of the alert there may have been a small blur near the steering wheel. On continuation of the video, the blur may have moved and come into view as a hand.

• 0 = Cannot see the position of either hand or cannot determine the position of either hand. (Reviewers coded 0 if a hand could be seen but they could not tell if it was on the wheel.)
• 1 = At least one hand on steering wheel. (This was coded when the position of one hand could not be determined but reviewers could see that at least one hand was on the steering wheel.)
• 2 = Both hands on the steering wheel.
• 3 = At least one hand off the steering wheel. (This was coded when the position of one hand could not be determined but at least one hand was clearly off the steering wheel.)
• 4 = One hand on, one hand off the steering wheel. (The classification was 4 when reviewers could clearly see both hands and one was on the wheel but the other was off.)
• 5 = Both hands off the steering wheel. (This classification was used when reviewers could clearly see both hands and both were off the wheel.)
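Because the hand-location codes mix confirmed observations with undetermined cases, an analysis script has to treat them as a three-valued question. The sketch below, with a hypothetical helper name, shows one way to collapse the codes into "confirmed on the wheel," "confirmed off," or "undetermined," following the definitions above.

```python
from typing import Optional

def hands_on_wheel(code: int) -> Optional[bool]:
    """Collapse the hand-location codes defined above into a three-valued answer.

    Returns True if at least one hand was confirmed on the wheel (codes 1, 2, 4),
    False if both hands were confirmed off the wheel (code 5), and None when the
    video left the question undetermined (code 0, or code 3, where one hand was
    off but the other hand's position was unknown).
    """
    if code in (1, 2, 4):
        return True
    if code == 5:
        return False
    if code in (0, 3):
        return None
    raise ValueError(f"unknown hand-location code: {code}")
```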

Secondary Driving Behaviors

Audio was used to assist in coding whenever possible. For instance, reviewers may have heard the radio station change and seen the driver look at the console; this would indicate in-car system use. The default for nondriving behaviors was none, coded as 0.

Cell Phone

• 10 = Conversation, in use. (Conversation could be coded for listening, talking, or both while using the cell phone.)
• 11 = Reaching for phone. (This classification was used when the driver reached for the handheld phone to speak on that phone. If the driver reached for the phone simply to answer it and talk on the headset the driver was wearing, the classification was other. Simply answering the phone involves far less physical activity by the driver than reaching for the phone and holding it during a conversation.)
• 12 = Dialing phone.

Headset, Hands-Free Phone

• 20 = Conversation. (This was selected when reviewers could tell that the driver was in a conversation.)
• 21 = Reaching for headset.
• 22 = Unsure of activity level. (The driver was wearing a headset, but it was not clear whether the headset was in use. The driver may have been listening to someone or wearing it in case of an incoming call.)

Eating

• 30 = High involvement. (High involvement includes such activities as eating a burger or unwrapping food.)
• 31 = Low involvement. (Low involvement includes such activities as eating candy or grabbing chips.)

Drinking

• 40 = High involvement. (High involvement includes situations in which the driver was trying to open a straw or bottle or was blowing on a hot drink.)
• 41 = Low involvement. (Low involvement includes situations in which the driver was sipping a drink or drinking without looking.)

Conversation

• 50 = Conversation. (The driver and someone in the car were carrying on a conversation. The driver can be listening during the clip, talking during the clip, or doing both.)

In-Car System Use

• 60 = In-car system use. (The driver was actively adjusting something. For example, the driver was not just listening to the stereo but also adjusting the stereo. The car lighter was coded under the smoking section.)

Smoking

• 70 = Lighting. (This classification includes the in-car lighter.)
• 71 = Reaching for cigarettes or lighter. (This classification includes the in-car lighter.)
• 72 = Smoking.

Grooming

• 80 = High involvement. (High involvement includes applying makeup or brushing hair.)
• 81 = Low involvement. (Low involvement includes scratching or running one's fingers through his or her hair.)

Other

• 90 = Other/multiple behaviors, specified in notes section. (Behaviors may include whistling or classifications that reviewers were unsure of [e.g., if the driver's lips were moving but there was no audio, the behavior might be singing or conversation].)

Seat Belt

• 0 = Yes.
• 1 = No.
• 2 = Cannot tell.

Curve Speed Warning System Scenario Elements

Road Type

• 0 = Freeway/interstate.
• 1 = Ramp. (A ramp was defined as an entrance or exit ramp from a freeway or any ramp that connected two arterial roads.)
• 2 = Ramp near merge point. (Near was defined as being within 10 s of the merge point or within 10 s of arriving at the straightening of the ramp leading to a merge.)
• 3 = Surface road.
• 4 = Other. (Enter in notes.)

Road Condition

Glare and reflection helped determine whether the road was dry or wet.

• 0 = Dry.
• 1 = Wet. (Any moisture on the road led to the classification as wet; there did not need to be standing water. The road was classified as wet if it was wet from snow but not snow covered.)
• 2 = Snow covered. (Snow covered included ice covered if it was observed, but it was never observed. If any portion of the road, including turn lanes, was covered in snow, the classification was snow covered.)

Precipitation

Spots on the windshield or wiper activity helped determine if there was precipitation.

• 0 = None.
• 1 = Rain. (Light rain and drizzle were classified as rain, as were downpours.)
• 2 = Snow. (This category included sleet. Several cues helped indicate that the precipitation was snow: snow tended to be larger and fall more slowly than rain, it looked like white flurries, and it was present on the ground, reinforcing the classification as snow. Precipitation that occurred in December through February was assumed to be snow rather than rain. Snow could be coded in other months, but the assumption that the precipitation was snow was not as strong.)

Number of Through Lanes

Turn lanes and dedicated exit lanes are not included in the count of the number of through lanes.

• 1 = 1.
• 2 = 2.
• 3 = 3.
• 4 = 4 or more.

Recent Lane Change

To be considered a recent lane change, the lane change had to occur no more than 5 s before the alert, or the car had to be in the process of a lane change at the time of the alert.

• 0 = No.
• 1 = Yes, toward branch that triggered the alert.
• 2 = Yes, away from the branch that triggered the alert.
• 3 = Yes, but there was no branch triggering the alert or the branch triggering the alert is unknown.

Curve Confidence

This field was used to indicate when reviewers could not accurately determine which branch or curve triggered the alert. Most of the events categorized as confidence not high resulted from CSWS behavior that stems from artifacts of the map or CSWS implementation details.

• 0 = Confidence not high.
• 1 = Confidence high.

Nearby Overpass or Underpass

The criteria were that the driver had to pass an overpass or underpass 5 s before the alert or 10 s after the alert.

• 0 = No.
• 1 = Yes.

Change in Number of Through Lanes

• 0 = No.
• 1 = Yes.

Does the Vehicle Branch?

This addresses whether the vehicle is or will be taking a branch that triggers the CSWS alert.

• 0 = Not branching, and the alert is not triggered by a branch. (This can occur on a curvy rural road, for instance, or after the vehicle has exited onto a ramp and is approaching a curve.)
• 1 = Not branching, but passing branch that triggers alert.
• 2 = Branching onto segment that triggers alert. (This includes taking an exit or driving in a dedicated exit lane.)
• 3 = Branching, but alert was triggered by curve on initial roadway.
• 9 = No confidence in identifying the curve.

Branch Type When Branch Is Triggering Alert

If the roadway is a ramp, the ramp being traveled is not considered a branch. For instance, if the vehicle has exited the freeway onto an exit ramp and the roadway classification is ramp, an alert triggered by a curve along that ramp would be coded as 0, no branch, because the vehicle is already on the ramp.

• 0 = A branch does not trigger the alert.
• 1 = Ramp.
• 2 = Turn lane.
• 3 = Michigan left.
• 4 = Intersection.
• 5 = Other.
• 9 = No confidence in identifying the curve.

Road Geometry

• 0 = Straight.
• 1 = Curve.
• 2 = Approaching curve. (This classification constituted situations in which the driver was approaching but not in a curve at the time of the alert. The driver had to be driving through the curve within 5 s after the alert in order to be classified as approaching curve.)

Notes

A notes section recorded any unusual events or ambiguous situations not covered by categories for a particular question. This section also contains general notes on the clip if anything significant was taking place that was not adequately covered by the coding process. Examples of items captured in the notes section are described below, but other, unforeseen events were also noted.

Visual Occlusion

Rear taillights, glare from rain and wetness on the road, blurry video, dirty windshield, temporary incapacitation, sneezing, flying debris, faulty wiper or defroster, and object in or over the driver's eyes.

Nondriving Behaviors

Whistling; two or more behaviors; no audio while the driver was clearly talking or singing but reviewers could not tell which; attempting to avoid an insect in the car; adjusting mirrors; reading a map; reading other materials; checking a watch; or yawning.

LDWS Scenario Elements

Road Type

• 0 = Freeway/interstate.
• 1 = Ramp.
• 2 = Ramp near merge point. (Near is defined as being within 10 s of the merge point or within 10 s of arriving at the straightening of the ramp leading to a merge.)
• 3 = Surface road.
• 4 = Other. (Enter in notes.)

Road Condition

Glare and reflection helped determine whether the road was dry or wet.

• 0 = Dry.
• 1 = Wet. (Any moisture on the road led to the classification as wet; there did not need to be standing water. The road was classified as wet if it was wet from snow but not snow covered.)
• 2 = Snow covered. (Snow covered included ice covered if it was observed, but it was never observed. If any portion of the road, including turn lanes, was covered in snow, the classification was snow covered.)

Precipitation

Spots on the windshield or wiper activity helped determine if there was precipitation.

• 0 = None.
• 1 = Rain. (Light rain and drizzle were classified as rain, as were downpours.)
• 2 = Snow. (This category included sleet. Several cues helped indicate that the precipitation was snow: snow tended to be larger and fall more slowly than rain, it looked like white flurries, and it was present on the ground, reinforcing the classification as snow. Precipitation that occurred in December through February was assumed to be snow rather than rain. Snow could be coded in other months, but the assumption that the precipitation was snow was not as strong.)

Road Curvature

• 0 = Straight.
• 1 = Right-hand curve.
• 2 = Left-hand curve.

Lane Marking Change

• 0 = No.
• 1 = Yes.

Boundary Type

This field refers to which type of boundary was on the side of the alert. For example, an imminent LDW to the left with a solid lane boundary to the left would be coded as 0. Options 4 and 5 refer to double-boundary situations.

• 0 = Solid.
• 1 = Dashed.
• 2 = Double solid.
• 3 = No marking.
• 4 = Solid/dashed.
• 5 = Dashed/solid.
• 6 = Curb.
• 7 = Cannot tell.

Continuous Incidental Feature

This feature applies to continuous markings on the road that are not lane lines but may appear as lane lines to the LDWS, for example, tar markings, shadows, or tire marks on wet pavement.

• 0 = No.
• 1 = Yes.

Badly Placed Boundary

At times the LDWS's real or virtual boundary was not properly placed according to actual conditions on the roadway.

• 0 = No.
• 1 = Yes.

Boundary Interaction

Ultimately, the position of the vehicle's tires was used to determine its position in the lane. At the time of the alert, if the tires were on or over the lane line, the crossed/straddled line option was selected.

• 0 = Crossed/straddled line at alert.
• 1 = Lane change at alert.
• 2 = Centered/slightly off-center in lane.
• 3 = Drifted in lane.

Postboundary Maneuver

This field evaluates the first maneuver the vehicle makes after the alert. For example, if the vehicle was drifting in the lane at the time of the alert, then crossed the lane line, and finally returned to its original lane, only the eventually crossed option would be selected. The fact that the vehicle ultimately returned to its original lane was addressed in the additional driving circumstances field, option corrected per the alert, which is detailed in the Additional Driving Circumstances section.

• 0 = Eventually crossed.
• 1 = Eventually returned to original lane.
• 2 = Stayed in lane.

Beyond the Boundary

The area within two-thirds of a lane width outside the boundary in question was considered in this evaluation. Although the choices were not mutually exclusive, no attempt was made to quantify everything beyond the boundary. If the alert was propagated by the camera, the area directly to the right or left of the vehicle was evaluated. If, however, information from the radar produced the alert, every effort was made to discern which object(s) had provoked the alert based on available maneuvering room (AMR) bin information.

• 0 = Median/open space.
• 1 = Solid barrier.
• 2 = Turning lane.
• 3 = Empty lane.
• 4 = Adjacent same-direction vehicle.
• 5 = Fixed, discrete objects.
• 6 = Construction zone.
• 7 = Stalled/slow traffic in adjacent lane.
• 8 = Curb.
• 9 = Other/unknown.
• 10 = Adjacent opposing-direction vehicle.

Additional Driving Circumstances

These circumstances are intentional maneuvers by the driver that help explain why the vehicle crossed the boundary or, in the case of corrected per the alert, the action the driver took after the alert.

• 0 = None.
• 1 = Cut behind a car.
• 2 = Clear a temporary obstacle.
• 3 = Make room for a large truck.
• 4 = Corrected per the alert.
• 5 = Early or late exit/merge.

False Alert Comments

• 0 = None.
• 1 = Cannot identify target. (For a radar-induced alert.)
• 2 = Target seems far. (For a radar-induced alert, the target had to be within two-thirds of the lane width from the vehicle to be considered valid.)
• 3 = Appears too sensitive. (This classification usually applied when it appeared that the driver was not drifting.)
• 4 = Other. (List in notes.)

Lighting Issues

• 0 = None.
• 1 = Possible road reflection.
• 2 = Recent change in road illumination.

Notes

A notes section recorded any unusual events or ambiguous situations not covered by categories for a particular question. This section also contains general notes on the clip if anything significant was taking place that was not adequately covered by the coding process.
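Both the Beyond the Boundary field and the False Alert Comments field rely on a two-thirds-of-a-lane-width threshold around the boundary. The following is a minimal sketch of that rule; the function name and the choice of units are illustrative assumptions, since the appendix does not specify how the lateral distance was measured.

```python
def within_valid_target_range(lateral_offset: float, lane_width: float) -> bool:
    """Apply the two-thirds-lane-width rule described for radar-induced
    LDW alerts: a target beyond two-thirds of a lane width from the
    vehicle was considered too far to be a valid alert target, and the
    Beyond the Boundary evaluation was limited to the same region.
    Units are whatever the lane width is measured in (e.g., meters).
    """
    return abs(lateral_offset) <= (2.0 / 3.0) * lane_width
```

For a typical 3.6 m lane, the rule bounds the evaluated region to about 2.4 m beyond the boundary in question.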


TRB’s second Strategic Highway Research Program (SHRP 2) Report S2-L10-RR-1: Feasibility of Using In-Vehicle Video Data to Explore How to Modify Driver Behavior That Causes Nonrecurring Congestion presents findings on the feasibility of using existing in-vehicle data sets, collected in naturalistic driving settings, to make inferences about the relationship between observed driver behavior and nonrecurring congestion.

The report, a product of the SHRP 2 Reliability focus area, includes guidance on the protocols and procedures for conducting video data reduction analysis.

In addition, the report includes technical guidance on the features, technologies, and complementary data sets that researchers can consider when designing future instrumented in-vehicle data collection studies.

The report also highlights a new modeling approach for travel time reliability performance measurement across a variety of traffic congestion conditions.
