One definition of automation, said Joichi Ito, director of the MIT Media Lab, during his plenary presentation, is that it describes a device or system that accomplishes, partially or fully, a function that was previously, or conceivably could be, carried out partially or fully by a human operator. In that case, an autonomous system is something that is highly automated. Such systems are not new: he cited an Austrian attack on the city of Venice in 1849 that used unmanned balloons carrying explosives. “When you think of an autonomous system, you might think of something sophisticated like a self-driving car, but technically speaking a balloon with a bomb on it is a form of automation.”
Unmanned aerial vehicles (UAVs) are still used in warfare, with varying levels of autonomy. With direct control, an operator interacts directly with the vehicle. With management by consent, the system makes a decision and notifies the operator for approval. With management by exemption, the system provides a decision to the operator and begins executing that decision, but the operator retains the ability to override it.
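These three levels of control amount to a simple decision protocol. As a minimal sketch (the mode names, function, and action strings here are illustrative, not from the presentation), they could be expressed as:

```python
from enum import Enum

class ControlMode(Enum):
    DIRECT_CONTROL = "direct control"          # operator interacts directly
    MANAGEMENT_BY_CONSENT = "by consent"       # system proposes, operator approves
    MANAGEMENT_BY_EXEMPTION = "by exemption"   # system acts, operator may override

def resolve_action(mode, system_choice=None, operator_choice=None,
                   operator_response=None):
    """Return the action actually executed under each control mode.

    operator_response: under consent, a True/False approval of the system's
    choice; under exemption, an override action (or None to let it stand).
    """
    if mode is ControlMode.DIRECT_CONTROL:
        return operator_choice                 # the human decides outright
    if mode is ControlMode.MANAGEMENT_BY_CONSENT:
        # the system decides, but nothing happens without operator approval
        return system_choice if operator_response else None
    if mode is ControlMode.MANAGEMENT_BY_EXEMPTION:
        # the system begins executing; the operator can still override it
        return operator_response if operator_response is not None else system_choice
    raise ValueError(f"unknown mode: {mode}")
```

Under management by exemption the system’s choice goes through by default, which is what makes the override question discussed below so fraught.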
Comparable levels of control are involved in self-driving vehicles. Depending on the level of autonomy built into the vehicle, a driver could be in control of the vehicle with the vehicle essentially looking over the driver’s shoulder, or the vehicle could be in control with the driver supervising the operations of the car. However, even when the vehicle is in control, the driver who is supposed to be supervising will be responsible if the vehicle gets in an accident. Among those who think about the ethics of autonomous vehicles, said Ito, “we call this the ‘moral crumple zone.’ . . . It could be that you’re on a boring road where there’s really no
way that you could be paying attention if you weren’t actually driving the car, but the human is still responsible.” He identified similar challenges in other domains. In medicine, should doctors be able to override the diagnosis of a machine? Should a human pilot be allowed to override the autopilot? “It is a huge risk to override a machine and then be wrong.”
The issue can be framed in even more immediate terms, Ito continued. Say a drone is targeting a gathering that it has identified as a meeting of terrorists but that its operator believes is a wedding. Should the operator be able to intervene? Another example is the classic trolley dilemma, in which the driver of a runaway trolley has to decide which people to sacrifice and which to save.
A 2017 German report on the ethics of automated and connected driving concluded that an autonomous car should not discriminate among humans of different ages or other attributes in deciding who should be injured in an accident.1 Thus, while humans should be prioritized above animals and above things, they should not be prioritized above each other. However, a web-based poll about hypothetical events related to the trolley dilemma demonstrated that people would be biased toward saving children rather than adults. People also decided that an individual could be sacrificed to save a greater number of lives, but they did not believe that the government should regulate that decision.
As a more routine example, Ito cited the possibility of a machine supervising someone’s emails and observing, “Did you know that your language toward this gender is biased?”
These kinds of ethical considerations are related to the way interfaces are designed, Ito explained. All systems consist of subsystems and are parts of larger systems, so that “there is no such thing as pure autonomy.” Some people use the term “autonomish” as a reminder that considering a single system in isolation is usually not a good way of designing something or intervening in a system.

1 Federal Ministry of Transport and Digital Infrastructure. 2017. Ethics Commission: Automated and Connected Driving. Berlin: BMVI.
Systems also tend to expand and change over time, which can make them difficult to control or predict. When the founders of the United States wrote the US Constitution, they created a system that set the country in motion. “But once it’s in motion it’s very difficult to go back and say, ‘Hey, let’s redesign the system,’” said Ito. “We’ll never be able to go back to the day when we’re starting fresh.”
Such systems are better treated as complex adaptive systems, he said, which are understood and modified more effectively with tools such as evolutionary dynamics than with traditional product engineering or the design of simple control systems.
In Code and Other Laws of Cyberspace, author Lawrence Lessig proposed that systems be seen as products of four factors: technology, the law, social norms, and economic markets. “These four things relate with each other in how technologies get deployed into society,” said Ito, thereby providing a way “to think about how you might design or intervene in the deployment of a complex system.”
At the Media Lab, Ito and his colleagues have been using the term extended intelligence instead of artificial intelligence. The latter tends to
pit machines against humans, as in a science fiction movie. Extended intelligence, on the other hand, builds on the collective intelligence that already exists in organizations, where the whole is generally greater than the sum of the parts. “Corporations are a great example of a kind of automation that is like an artificial intelligence.”
To illustrate the need for systems thinking, Ito discussed analyses showing that the risk factors used in the criminal justice system to decide on bail, sentencing, and parole are biased against dark-skinned people. In this case, both the data and the algorithms used to make decisions can be flawed, requiring that the system as a whole be considered to reduce bias. For example, the city of Chelsea, Massachusetts, has been conducting an experiment with social workers, the police, and everyone else involved with the criminal justice system. Instead of calling a drug problem a case, it is now called a situation. Instead of identifying someone as a drug addict, the person is referred to as someone suffering from addiction. The initial reluctance of some participants in the experiment had to be overcome, but the eventual adoption of a broader perspective has much more clearly separated correlation from causation, said Ito.
“If you think about the whole system,” said Ito, “it’s more important to understand which inputs actually are causing crime, not just what things are correlating [with it]. A lot of machine learning right now is about correlation and prediction, which are really snapshots and do not have an evolutionary dynamic.” Similarly, deep learning is a powerful tool, but it is not good for understanding causality, “which, when we’re talking about things like crime, is tremendously important.”
Automation will increase both the capability and speed of these complex systems. “It’ll be like putting jet packs on everyone who’s already involved in this complex dance,” said Ito. Systems that do not work well now could get worse, while others could benefit. “We need to get our house in order before we turn on the jet packs,” he cautioned. “Some of these questions are not new questions. A lot of it is about how we bring the philosophers and the social scientists into the conversation.”
Intervening in complex systems is “a tricky and very perilous thing,” said Ito. It involves not just systems but the people using those systems—as a colleague of Ito’s once said when a friend complained
that he was stuck in traffic: “It’s not that you’re stuck in traffic, you are traffic.” People are not an objective third party trying to “fiddle with the system,” Ito continued: they are the system. “And the way you change it is you change yourself, you change your own institution, [and] if it goes well, maybe other people copy it.”
The systems researcher Donella Meadows identified 12 places to intervene in a system, ordered by increasing effectiveness. The three most effective were changing the goals of the system, changing the mindset or paradigm out of which the system arises, and transcending the paradigms behind the system. From this perspective, said Ito, an examination of first principles in ethics should focus on the following questions:
- What are the right goals?
- What are the right paradigms?
- How can existing paradigms be transcended?
Engineers tend to focus on solving problems, Ito observed, while questioning goals and paradigms is left to scientists and artists. But “it’s really important to be able to question the system even as an engineer,” he said. “Engineering has been a lot about control and about creating systems that work. But with a complex self-adaptive system, you need to throw questioning in there to make [the system] work in a healthy way.”
Design involves more than just engineering, Ito explained. In the same way that songs can have as lasting and substantial an impact on a society as laws, design includes factors like aesthetics that influence human behavior. “When you realize that one way to intervene in a system is through things like song,” he said, it “loops back to the beginning of design.”
Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo, introduced a slightly different definition of automation in his plenary presentation. Artificial intelligence (AI), which he equated with “super-automation,” is a set of complex instructions to automate decisions and actions. In that respect, as Ito had pointed out, artificial intelligence is comparable to a bureaucracy or a policy: all three “define and automate certain decisions.” When artificial intelligence is combined with robots, the result is an autonomous system that can act on its own, from self-driving cars to factory robots to civilian and military drones. “Basically, any place where you find a human being is a market for AI.”
Lin added two other domains to the four discussed in the forum. One is “inner space”—the space inside our bodies. Artificial intelligence is already being used for medical diagnoses, and in that context has the potential to affect people very intimately. Augmented reality flight helmets or heads-up displays in vehicles change how the wearer views the world. Researchers are looking at how to connect the human brain to a computer. And computers have been programmed to read people’s minds and even decipher dreams, at least to a limited extent.
Another important domain is cyberspace. Artificial intelligence is being used in criminal sentencing, hiring, bank lending, and many other operations, raising ethical questions related to job displacement, privacy, human-subjects research, psychological effects, and many other issues, Lin said.
Lin discussed “proper” decision making, dividing decisions into three types: those that are right, those that are wrong, and those that are neither right nor wrong and therefore require judgment.
Artificial intelligence can make wrong decisions if it exhibits an emergent behavior—an unexpected behavior that results from interactions among the components of a complex system. For instance, autonomous financial trading robots working against each other at digital speeds can crash the stock market for milliseconds at a time—“this actually happens more often than you would think,” Lin said. Similarly, two pricing bots at Amazon once locked into a runaway price war with each other that drove the price of a textbook to millions of dollars.
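The dynamics of that incident are easy to reproduce. One seller’s bot slightly undercut its competitor while the other priced at a fixed premium over it; a toy simulation with multipliers of roughly that shape (the starting price, round count, and exact multipliers here are illustrative) shows how the feedback loop compounds:

```python
# Two repricing bots reacting only to each other's last posted price.
# Because the product of the two multipliers exceeds 1, prices grow
# exponentially until a human notices. (All numbers are illustrative.)
price_a, price_b = 30.00, 30.00     # both start at a plausible book price
UNDERCUT = 0.9983                   # bot A: just below the competitor
PREMIUM = 1.270589                  # bot B: fixed markup over the competitor

for day in range(45):               # one repricing round per "day"
    price_a = round(UNDERCUT * price_b, 2)
    price_b = round(PREMIUM * price_a, 2)

print(f"after 45 rounds: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
```

Each round multiplies both prices by about 1.27, so the loop crosses a million dollars within a few dozen rounds even though each bot’s rule looks locally sensible.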
“With complexity you often get unpredictability, and when you have two autonomous systems meeting each other for the first time, that compounds the unpredictability,” Lin explained. Such unpredictability
could create chaos—for example, when robotic armies meet each other for the first time. “Our adversary is unlikely to lend us their robots to make sure they’re interoperable with ours.”
Artificial intelligence can also make a wrong decision if it makes use of biased, incorrect, or distorted data. For example, machine-learning programs taught to identify objects by analyzing millions of examples can be thrown off by the introduction of just a slight amount of “noise” in an image. Use of modified datasets or images can make humans walking down a street unrecognizable to a robot. The addition of just a few pieces of tape to a stop sign can fool a robot into seeing it as a 45-mile-per-hour speed limit sign.
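The mechanism behind such attacks can be shown with a deliberately tiny model. The sketch below (weights, inputs, and the perturbation budget are made up; real attacks target image classifiers with millions of pixels, where the per-pixel change can be imperceptibly small because its effect sums over every pixel) flips a linear classifier’s decision by nudging each feature slightly in the direction the weights are most sensitive to:

```python
# A linear "classifier": label = sign(w . x). An attacker who knows w can
# shift each feature by a small epsilon in the worst-case direction --
# the core idea behind gradient-sign adversarial perturbations.
w = [2.0, -1.0, 1.5, -0.5]          # model weights (hypothetical)
x = [0.3, -0.1, 0.2, -0.4]          # input the model scores as positive

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

EPS = 0.3                           # per-feature perturbation budget
x_adv = [xi - EPS * (1 if wi >= 0 else -1) for xi, wi in zip(x, w)]

print(score(w, x))                  # positive: original label
print(score(w, x_adv))              # negative: label flipped by the "noise"
```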
Automated decision making also can introduce the risk of structural biases. For example, a man looking for jobs online is more likely to be shown chief executive officer positions than a woman is. Historically, more men than women have been CEOs, but that is a reflection of past sexism. In this way, algorithms tend to stop further thinking, end discussion, and “crystallize” bias, said Lin. As the data scientist Cathy O’Neil put it in her book Weapons of Math Destruction, “algorithms are opinions embedded in code.”
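The crystallizing effect is mechanical rather than malicious. A minimal sketch (the job titles and click counts below are invented) of a ranker that orders ads purely by historical engagement shows how past skew becomes future policy:

```python
# Rank job ads for a user group by how often that group clicked each ad in
# the past. Nothing in the code names gender as a criterion, yet the
# historical skew is reproduced -- and frozen -- by the ranking rule.
history = {
    # (ad, user_group): past click count (invented numbers)
    ("ceo_role", "men"): 90,
    ("ceo_role", "women"): 10,
    ("assistant_role", "men"): 40,
    ("assistant_role", "women"): 60,
}

def rank_ads(group, ads=("ceo_role", "assistant_role")):
    """Order ads by historical clicks from this user group, best first."""
    return sorted(ads, key=lambda ad: history[(ad, group)], reverse=True)

print(rank_ads("men"))    # CEO ad shown first
print(rank_ads("women"))  # CEO ad shown last
```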
Lin turned his attention to self-driving vehicles, which pose many
concerns related to ethical decision making. Say a car on a freeway suddenly has to swerve left or right into other vehicles to avoid a person in the roadway. Swerving into a small vehicle on the right side could endanger someone in the passenger seat and the occupants of the small vehicle. Swerving into a large vehicle on the left could protect a passenger but risk the driver’s life. “Once you encode the decision, you seem to be systematically discriminating against a particular class of vehicles through no fault of their own, other than that the owners couldn’t afford larger cars or the owners had large families,” said Lin. “It’s important to remember that programmed decisions are premeditated decisions, and law and ethics treat these two differently.”
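Lin’s point about premeditation can be made concrete: once a swerve rule is written as code, the same trade-off is applied in every encounter. The sketch below (the maneuvers and harm scores are invented for illustration) picks the option with the lowest expected harm, and in doing so systematically directs the car toward larger vehicles:

```python
# Choose the maneuver with the lowest expected-harm score. The scores are
# invented; the point is that whatever weights are encoded, the resulting
# choice is premeditated and applied uniformly to every such encounter.
def choose_maneuver(options):
    """options: {maneuver: expected harm score}. Lower is 'better'."""
    return min(options, key=options.get)

scenario = {
    "continue_straight": 1.0,           # person in the roadway
    "swerve_right_small_vehicle": 0.8,  # small car offers less protection
    "swerve_left_large_vehicle": 0.5,   # large vehicle absorbs impact better
}
print(choose_maneuver(scenario))        # -> swerve_left_large_vehicle
```

Every encounter that matches this scenario penalizes the owners of larger vehicles, which is exactly the systematic discrimination Lin describes.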
Or imagine a car driving down a narrow street between groups of people and other vehicles. Should the car drive straight down the middle of the street, or should it give a wider berth to larger groups of people? Or say the car has to drive between a school bus on one side and a bicyclist on the other. On the one hand, the school bus is a large, lumbering object that may contain children. On the other hand, the bicyclist has less protection than anyone in a vehicle. In which direction should increased risk be allocated?
As a final example, Lin cited a navigation app that sends a driver down a residential street because the route is faster. “But sometimes the fastest route could be the more dangerous route,” he cautioned, with more intersections, unprotected left turns, or children playing nearby. If “an accident happened because the car chose this more dangerous route, arguably that could be on the manufacturer.”
Looking ahead, Lin posited that one way to extend ethics into new domains is to connect ethical issues to more easily analyzed circumstances. For example, new technologies could be seen as conferring “superpowers”—the ability to fly, have extraordinary strength or endurance, sense things that cannot normally be sensed, draw on vast amounts of data to make decisions, and so on. The question then becomes how the existence of superpowers changes ethical decision making. “How does it change institutions like privacy or education?
How does it change norms? As the saying goes, great powers come with great responsibilities.” Regular people may not have a responsibility to pick up a ticking bomb and throw it into outer space, but Superman would. “We have to think about the ways that technology is changing us—and changing our obligations and responsibilities.”
Everyone is a stakeholder in a technology-driven world, Lin concluded, but those who conceive of new ideas and implement them in society have distinct roles. He closed with a quotation from the British scientist Martin Rees2: “Scientists surely have a special responsibility. It is their ideas that form the basis of new technology. They should not be indifferent to the fruits of their ideas.”
2 Martin Rees, “Dark materials,” essay in The Guardian, June 9, 2006.