Are such systems new? In one sense, no. A simple pressure-activated mine fulfills the definition of a fully autonomous lethal system—it explodes without human intervention when it experiences a pressure exceeding some preprogrammed threshold. Other newer, fully autonomous systems are more sophisticated—the radar-cued Phalanx Close-In Weapons System for defense against antiship missiles and its land-based counterpart for countering rocket, artillery, and mortar fire are examples. In these latter systems, the fully autonomous mode is enabled when there is insufficient time for a human operator to take action in countering incoming fire.3

Other systems, such as the Mark 48 torpedo, can move freely (within a limited domain) and can search for and identify targets. A torpedo is lethal, but today it requires human intervention to initiate weapons release. Much of the debate about the future of autonomous systems concerns the possibility that a system will deliberately initiate weapons release without a human explicitly making the decision to do so. A notional sketch of this distinction follows.
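The sketch below is purely illustrative and assumes a hypothetical pressure threshold and function names invented for this example; it is not the control logic of any fielded system. It contrasts a fully autonomous trigger, which acts whenever a sensed value crosses a preprogrammed threshold, with a human-in-the-loop arrangement in which weapons release requires explicit operator authorization.

    # Purely illustrative sketch; not the logic of any fielded weapon system.
    # The threshold value and function names are hypothetical.

    PRESSURE_THRESHOLD_KPA = 150.0  # hypothetical preprogrammed activation threshold

    def autonomous_trigger(sensed_pressure_kpa: float) -> bool:
        # Fully autonomous: the system acts whenever the sensed pressure
        # exceeds the preprogrammed threshold, with no human decision.
        return sensed_pressure_kpa > PRESSURE_THRESHOLD_KPA

    def human_in_the_loop_release(target_identified: bool, operator_authorizes: bool) -> bool:
        # Human in the loop: the system may search for and identify a target,
        # but weapons release still requires explicit operator authorization.
        return target_identified and operator_authorizes

    if __name__ == "__main__":
        print(autonomous_trigger(180.0))               # True: acts without any human decision
        print(human_in_the_loop_release(True, False))  # False: target identified, but no authorization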

Seeking to anticipate future ethical, legal, and societal issues associated with autonomous weapons systems, the Department of Defense promulgated a policy on such weapons in November 2012. This policy is described in Box 3.1.

3.1.3 Ethical, Legal, and Societal Questions and Implications

In some scenarios, the use of armed autonomous systems might not only reduce the likelihood of friendly casualties but also improve mission performance compared with typical human performance. For example, autonomous systems can loiter near a target without risk for much longer than is humanly possible, enabling them to collect more information about the target. With that additional information, a remote weapons operator can better ascertain the nature and extent of the likely collateral damage before deciding to attack than can a pilot flying an armed aircraft in the vicinity of the target, and an attack can then be executed in a way that minimizes collateral damage. A remote human operator, controlling a ground vehicle on the battlefield from a safe location, will not be driven by fear for his or her own safety in deciding whether to attack a given target, and in this respect is more likely to behave in a manner consistent with the law of armed conflict than a soldier in immediate harm's way.

___________________

3 Clive Blount, “War at a Distance?—Some Thoughts for Airpower Practitioners,” Air Power Review 14(2):31-39, 2011, available at http://www.airpowerstudies.co.uk/APR%20Vol%2014%20No%202.pdf.


