

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




OCR for page 17
Figure 3.3. Head unit assembly prototype.
Figure 3.4. Composite snapshot of four continuous video camera views.
Figure 3.5. Main unit prototype.
Figure 3.6. Forward radar assembly prototype.

The GPRS cellular antennae will be mounted in the rear of the cabin. A wide field of view (FOV) forward radar capable of assessing oncoming traffic will be small, lightweight, and mounted to the front license plate holder (Figure 3.6). Radar data will be transmitted via Bluetooth wireless technology to the main unit and will capture the relative position and speed of multiple objects. Using Bluetooth for this purpose eliminates the need to run cables through the vehicle's firewall to the main unit, thereby decreasing risk to the participant and saving the installation time and costs associated with permanently altering a participant's vehicle. This is important because each S07 site must maintain an installation throughput rate of two vehicles per bay per day (the largest sites have three installation bays, and the smallest have a single installation bay).

DAS sensors and associated variables are listed in Table 3.8.

Table 3.8. Data Acquisition System Variables

Location | Sensor | Data to Be Collected
Head unit | Multiple cameras/video | Video images of the forward view, center stack view, rear and passenger side view, and driver face view; information for machine vision (MV) processes (including lane tracking and eyes-forward data); and periodic, irrevocably blurred still photographs of the cabin interior to capture passenger presence
Main unit | Accelerometer | Acceleration in three axes: forward/reverse (x), right/left (y), and down/up (z)
Main unit | Rate sensors | Yaw rate
Head unit | GPS | Latitude, longitude, elevation, time, velocity
Radar unit | Forward radar | Object ID, range, and range rate
Main unit | Cell phone module | Health checks, location notification, collision notification, and remote software upgrades
Head unit | Illuminance sensor | Ambient lighting levels
Head unit | Passive alcohol sensor | Presence of alcohol within the vehicle cabin
Head unit | Incident push button | In the event of an unusual or interesting traffic safety-related event, allows the participant to open an audio recording channel for 30 seconds; also "flags" the data stream for ease of location during data analysis
Head unit | Audio | Available only in concert with the incident push button, as noted above
Turn signals (from vehicle network data or directly from the signals themselves) | Turn signal actuation, which distinguishes between left and right indicated turns
Main unit | Vehicle network data | Where available: the use of the accelerator, brakes, ABS, gear position, steering wheel angle, speed, horn, seat belt information, airbag deployment, and many other such variables

Machine Vision Applications

Several machine-vision-based applications will be incorporated into the DAS, including a lane tracker, a head position monitor, and driver identification software. There are criteria for what to include as resident software on the DAS, as opposed to what can be applied to stored data post hoc. The video data are stored in a compressed format because there are insufficient resources to store them at native resolution. However, several of the machine-vision-based applications require full-resolution video to perform as expected, so these must operate on the native video coming directly from the cameras, before the information is stored on the DAS hard drive. On the other hand, every such application running in real time consumes limited storage and processing resources. Therefore, the design must balance the need that certain applications have for real-time, onboard processing against the storage and processing limitations of the DAS.

The VTTI Mask Head Tracker also operates on the DAS so that it can operate on uncompressed video (Figure 3.7).
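The split described above, in which lightweight resident applications consume raw frames in real time before compression while heavier analyses are deferred to stored data, can be sketched as follows. This is an illustrative sketch only; every name in it (`process_frame`, `RESIDENT_APPS`, the per-app outputs) is hypothetical and not part of the actual DAS software.

```python
# Illustrative sketch of the resident-vs-post-hoc processing split described
# in the text. All names and values are hypothetical stand-ins.

def lane_tracker(frame):
    # Resident MV app: needs the full-resolution ("native") pixels.
    return {"lane_offset_m": 0.1}

def head_tracker(frame):
    # Resident MV app: glance-location estimate from the raw face-view frame.
    return {"glance": "forward"}

# Applications resident on the DAS run in real time, before compression.
RESIDENT_APPS = [lane_tracker, head_tracker]

def compress(frame):
    # Stand-in for the DAS video compressor; details are not in the source.
    return {"compressed": True, "size": len(frame["pixels"]) // 10}

def process_frame(frame):
    """Run resident MV apps on the raw frame, then store only compressed video."""
    derived = {}
    for app in RESIDENT_APPS:
        derived.update(app(frame))           # real-time, full-resolution processing
    stored = compress(frame)                 # only the compressed frame is stored
    return {"video": stored, "mv": derived}  # heavier analyses run post hoc on `stored`

record = process_frame({"pixels": [0] * 1000})
```

The point of the sketch is the ordering: anything that needs native-resolution video must run before `compress`, and everything else is pushed to post hoc analysis to conserve the DAS's storage and processing budget.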
It is capable of identifying and distinguishing between a few general glance locations (e.g., forward roadway, mirrors, center stack), using software designed to find and determine the characteristics of a person's face in an image and to track those characteristics through subsequent images collected with the face view camera. The mask generates a three-dimensional representation of a person's face, using triangular surfaces to define the shape. The software will operate in real time on the DAS, using the raw video before compression.

The VTTI Road Scout (VRS) application is designed to track lane markings in real time on the DAS. Running this application on the DAS provides the advantage of operating on uncompressed video. The VRS can determine the location of the lane lines, the horizontal curvature of the road, and the angular offset of the vehicle within the lane. It can also determine whether the lane lines are single, double, solid, or dashed. Testing has shown that the lane-tracking algorithm has a high degree of accuracy when lane markings are clearly present and visible; however, it is unable to gather sufficient roadway data in conditions where snow or other occlusions are present on the roadway. The data available from the MV lane tracker will be useful in answering many questions about road departure, because it is anticipated that many road departures and unintentional lane departures will be captured during the SHRP 2 NDS.

A face recognition software solution is being sought that will permit researchers to automatically determine whether a driver is a consented participant. Systems of this type are too processing intensive to operate in real time, so this processing and analysis will be done on a post hoc basis. Driver identification will rely on a biometric software application that provides automated face recognition of drivers on the basis of their unique facial characteristics.
Face recognition software would substan-
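Because such systems run post hoc on stored data, a consent check of this kind reduces to comparing a face descriptor extracted from the video against descriptors of enrolled participants. The following is a minimal sketch under that assumption; the toy descriptors, the threshold value, and all names (`identify_driver`, `ENROLLED`) are hypothetical and are not the actual biometric application described in the source.

```python
import math

# Hypothetical enrolled-participant face descriptors (toy 3-D vectors; real
# systems use much higher-dimensional features).
ENROLLED = {
    "participant_042": (0.9, 0.1, 0.3),
    "participant_107": (0.2, 0.8, 0.5),
}
MATCH_THRESHOLD = 0.25  # assumed distance cutoff, not from the source

def identify_driver(descriptor):
    """Return the enrolled participant closest to `descriptor`, or None.

    Post hoc processing: this would run over stored video frames,
    not in real time on the DAS.
    """
    best_id, best_dist = None, float("inf")
    for pid, ref in ENROLLED.items():
        dist = math.dist(descriptor, ref)  # Euclidean distance between descriptors
        if dist < best_dist:
            best_id, best_dist = pid, dist
    return best_id if best_dist <= MATCH_THRESHOLD else None

# A descriptor near an enrolled face is identified; a face far from every
# enrolled descriptor (e.g., a non-consented driver) returns None.
known = identify_driver((0.88, 0.12, 0.28))
unknown = identify_driver((0.0, 0.0, 0.0))
```

The design choice the sketch illustrates is the thresholded nearest-neighbor decision: an unidentified driver is not forced onto the closest enrolled participant, which is what lets researchers separate consented from non-consented drivers.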