Automated driving has experienced a research renaissance in the past decade as investigators have been motivated by organized competitions to increase safety and mobility. Key advances that have shaped the field during this period have been in the application of machine learning, large-scale mapping, improved LIDAR (light detection and ranging remote sensing technology) and RADAR sensing capability, and, more recently, a deeper understanding of the human factors that will influence the form in which this technology comes to market.
WHY SELF-DRIVING VEHICLES?
Traffic accidents are the leading cause of death for individuals aged 4 to 34 in the United States (Hoyert and Xu 2012). More than 30,000 people are killed each year on the road, and over 90 percent of these accidents are due to human error. Furthermore, the ability to move in, through, and around cities is decreasing as more and more drivers, preferring individual mobility, flood roadways. Yet the importance of personal mobility in the United States is such that when individuals lose the privilege of driving, and the social connections it enables, their life expectancy drops precipitously (Edwards et al. 2009). And in developing cities the rise in traffic deaths and significant pollution is further evidence of the tragedy of the commons.
Self-driving vehicles offer the promise of addressing all of these challenges: they should dramatically reduce accidents, enable people who cannot drive to get around, and, when deployed as part of an efficient shared vehicle fleet, reduce congestion.
A DEEP HISTORY
As early as the 1939 World’s Fair, General Motors showed a concept of the automated roadway of the future. In the 1950s its research and development department introduced the Firebird II concept car, capable of following buried cables that emitted a radiofrequency signal. During the 1980s and ’90s the introduction of the microcomputer enabled practical, online computation on a mobile platform. Ernst Dickmanns was a pioneer in this space, introducing early versions of foveated stereovision systems (Dickmanns and Wünsche 2007).
Soon machine learning began to be applied to the problem. RALPH (a rapidly adapting lateral position handler; e.g., Thorpe and Kanade 1990) was one of the earliest applications of machine learning (neural networks in this case) to automated driving. By 1997 the combination of RALPH with a nascent forward-looking RADAR system enabled vehicles to drive thousands of miles. Elements of this technology have found their way into lane keeping assist systems, forward collision mitigation braking, and adaptive cruise control systems.
DARPA’S GRAND CHALLENGES
Much of the on-road automated driving work faded after the successful 1997 National Automated Highway Systems Consortium demonstration. The technology worked reasonably well, but automated driving research funding turned toward the military while the automotive industry slowly commercialized driver assistance systems.
In 2003 the driving research community was reenergized by the announcement of the DARPA Grand Challenges (http://grandchallenge.org/). The Floyd D. Spence National Defense Authorization Act for fiscal year 2001 called for one third of US operational ground combat vehicles to be unmanned by 2015. In a 2002 report the National Research Council indicated that this goal would not be achievable and that the Department of Defense should pursue other strategies (NRC 2002). Thus DARPA’s Grand and Urban Challenges were born.
The initial Grand Challenges were off-road races across the desert, with the notional goal of having autonomous vehicles drive from Los Angeles to Las Vegas without remote assistance. In 2004 no vehicle finished; the best entry covered only about 7 miles of the 150-mile course (Urmson et al. 2004). The following year, five vehicles completed the competition (Figure 1), which was won by a team from Stanford (Thrun et al. 2006).
The vehicles featured several notable technical innovations. All of the competitors were given a rough map of the route, but several of the successful teams augmented the map data with information from other publicly available sources. The notion of fusing such information with onboard sensing data was novel at the time (Urmson et al. 2006). The approach was enabled by newly available access to high-resolution aerial imagery, and gave the vehicles a degree of foreknowledge of the terrain that resulted in better and safer driving.

FIGURE 1 The top three finishers in the 2005 DARPA Grand Challenge: Stanley, entered by Stanford University (left), and H1ghlander (center) and Sandstorm (right), both entered by Carnegie Mellon University.
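The fusion of map priors with live sensing can be illustrated with a toy sketch. All numbers and function names below are invented for illustration; the idea, borrowed from standard occupancy-grid practice, is to combine a per-cell drivability prior from aerial imagery with independent onboard sensor readings in log-odds form.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-l))

def fuse(map_prior, sensor_likelihoods):
    """Update a drivability prior with independent sensor evidence.

    map_prior: probability a cell is drivable, from the aerial map.
    sensor_likelihoods: per-reading probabilities from onboard sensing.
    """
    l = log_odds(map_prior)
    for p in sensor_likelihoods:
        l += log_odds(p)  # naive independence assumption
    return prob(l)

# A cell the aerial map calls 70% drivable, seen twice as clear by LIDAR.
fused = fuse(0.7, [0.8, 0.8])
print(round(fused, 3))
```

Two moderately confident sensor readings push the map's 70 percent prior well above 95 percent, which is why foreknowledge from imagery plus live confirmation made driving both faster and safer.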
The Stanford team used machine learning techniques extensively. For example, its vehicle used LIDAR returns to automatically label terrain in camera images, extending the range at which drivable ground could be identified and enabling the vehicle to drive faster than LIDAR sensing alone would allow. The vehicle was also able to detect rough terrain and slow appropriately using a learned model of “bumpiness.” The team’s success in the challenge helped reinforce machine learning’s value in the field of autonomous driving.
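The flavor of such a learned speed model can be sketched in a few lines. The data, the linear shock-versus-speed form, and the function names here are all assumptions for illustration, not the team's actual method: fit how measured vertical shock grows with speed on a stretch of terrain, then invert the fit to find the fastest speed that keeps shock under a comfort threshold.

```python
def fit_shock_model(samples):
    """Least-squares fit of shock = k * speed (line through the origin).

    samples: list of (speed_mps, shock) pairs logged while driving.
    """
    num = sum(v * s for v, s in samples)
    den = sum(v * v for v, _ in samples)
    return num / den  # k: shock per unit speed on this terrain

def max_safe_speed(k, shock_limit):
    """Largest speed whose predicted shock stays under the limit."""
    return shock_limit / k

# Hypothetical logged data: shock rises roughly linearly with speed.
logged = [(5.0, 0.5), (10.0, 1.1), (15.0, 1.4)]
k = fit_shock_model(logged)
print(round(max_safe_speed(k, shock_limit=1.2), 2))
```

Rougher terrain yields a larger fitted slope k and hence a lower commanded speed, which is the "slow appropriately" behavior described above.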
THE URBAN CHALLENGE
While the Grand Challenge was indeed a grand challenge, the vehicles operated in a world devoid of other moving vehicles: when Stanley, the Stanford vehicle, passed H1ghlander, the Carnegie Mellon vehicle, to claim the victory, H1ghlander was paused and Stanley passed an inert vehicle.
The Urban Challenge was thus the next evolution of the DARPA competition, in which the vehicles now had not only to complete the challenge with moving vehicles but also to obey a subset of driving rules that human drivers take for granted (e.g., stay in the lane, follow precedence rules at intersections, avoid other vehicles). The competition, staged in 2007, required vehicles to drive 60 miles around a decommissioned Air Force base in Victorville, California. Six vehicles
finished the competition, with teams from Carnegie Mellon, Stanford, and Virginia Tech in the top three positions (Buehler et al. 2009).
Key technical advances came in the form of high-density LIDAR and further demonstration of the value of high-density maps. Single-plane LIDAR sensors were used in the original Grand Challenge, sometimes actuated to sweep volumes but generally carefully calibrated to sweep scan lines through the environment as the vehicle moved. The Urban Challenge introduced the concept of high-density LIDARs through a sensor developed by Velodyne. The new sensor had a spinning head that swept a set of 64 LIDAR emitters through space, generating over 1 million range measurements per second with relatively high angular resolution. This style of sensor enabled a new level of precision modeling that had until then been difficult, if not impossible, to achieve in real time.
The value of digital maps came to the forefront during the Urban Challenge. Using the maps, vehicles were able to anticipate the likely trajectory of other vehicles and focus their attention in appropriate directions at intersections. They were also able to use their limited computation more efficiently.
In the seven years since the Urban Challenge, industry has taken up the gauntlet of advancing self-driving technology. In 2009 Google started a program to develop self-driving vehicles and since then its vehicles have driven more than 700,000 miles autonomously on public roads.
The technology being developed by Google builds on many of the themes developed during the DARPA challenges. The vehicles use high-resolution maps (now being developed at city scale) to help guide the onboard system’s perception and planning behaviors as well as a combination of LIDAR, camera, and RADAR sensors to provide a partially redundant and multispectral model of the environment. The onboard software system leverages hundreds of thousands of miles of driving data and machine learning techniques to predict the behavior of other road users.
In parallel with Google’s efforts, the automotive industry is broadly engaged in the development of advanced driver assistance systems, with the major car companies and their suppliers developing varying degrees of automated driving. The largest difference between the approaches of the classical automotive companies and Google is the degree to which the driver is engaged. Google is developing vehicles to be fully self-driving, requiring a rider only to tell the vehicle where to go (Figure 2), whereas the automotive companies are primarily focused on delivering advanced driver assistance systems that require the driver to remain in the steering loop. The latter approach requires a smaller incremental technical step, but is challenged by problems of driver attentiveness and skill atrophy (Llaneras et al. 2013).
FIGURE 2 Google’s prototype fully self-driving vehicle.
In the coming years advanced driver assistance systems and self-driving vehicles will become commonplace, delivering on the promise of making roads safer and more convenient for all.
Buehler M, Iagnemma K, Singh S, eds. 2009. The DARPA Urban Challenge: Autonomous vehicles in city traffic. Springer Tracts in Advanced Robotics, vol. 56. London: Springer.
Dickmanns ED, Wünsche HJ. 2007. Dynamic vision for perception and control of motion. London: Springer.
Edwards JD, Perkins M, Ross LA, Reynolds SL. 2009. Driving status and three-year mortality among community-dwelling older adults. Journals of Gerontology Series A: Biological Sciences and Medical Sciences 64A(2):300–305.
Hoyert DL, Xu J. 2012. Deaths: Preliminary data for 2011. National Vital Statistics Reports 61(6):1–51.
Llaneras RE, Salinger J, Green CA. 2013. Human factors issues associated with limited ability autonomous driving systems: Drivers’ allocation of visual attention to the forward roadway. In Proceedings of the 7th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, pp. 92–98.
NRC [National Research Council]. 2002. Technology Development for Army Unmanned Ground Vehicles. Washington: National Academies Press.
Thorpe C, Kanade T. 1990. Vision and Navigation. Dordrecht: Kluwer Academic Publishers.
Thrun S, Montemerlo M, Dahlkamp H, Stavens D, Aron A, Diebel J, and 24 others. 2006. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics 23(9):661–692.
Urmson C, Anhalt J, Clark M, Galatali T, Gonzalez JP, Gowdy J, and 10 others. 2004. High speed navigation of unrehearsed terrain: Red Team technology for Grand Challenge 2004. Technical Report CMU-RI-04-37. Robotics Institute, Carnegie Mellon University, Pittsburgh.
Urmson C, Ragusa C, Ray D, Anhalt J, Bartz D, Galatali T, and 16 others. 2006. A robust approach to high-speed navigation for unrehearsed desert terrain. Journal of Field Robotics 23(8):467–508.