3

Application Domains

Application domains are, by definition, associated with operational military problems. Solutions to these problems call for the application of various technologies, both foundational and specialized. Research and development work (applied research) on operational military problems is often classified.

This chapter addresses four application domains: robotics and autonomous systems, prosthetics and human enhancement, cyber weapons, and nonlethal weapons. For each, the relevant section provides a brief overview of the technologies relevant to that domain, identifies a few characteristic military applications within the domain, and addresses some of the most salient ethical, legal, and societal issues for that application domain. As with Chapter 2, the reader is cautioned that ELSI concerns are not handled uniformly from section to section—this lack of uniformity reflects the fact that different kinds of ethical, legal, and societal issues arise with different kinds of military/national security applications.

3.1 ROBOTICS AND AUTONOMOUS SYSTEMS

An autonomous system can be defined loosely as a system that performs its intended function(s) without explicit human guidance. The technology of autonomous systems is sometimes called robotics. Many such systems are in use today, both for civilian and military purposes, and more are expected in the future. And, of course, there are degrees of autonomy that correspond to different degrees and kinds of direct human involvement in guiding system behavior.







The overarching rationale for deploying such systems is that they might replace humans performing militarily important tasks that are dangerous, tedious, or boring or that require higher reliability or precision than is humanly possible. If such replacement is possible, two consequences that follow are that (1) humans can be better protected and suffer fewer deaths and casualties as these important military tasks are performed, and (2) important military tasks will be performed with higher efficiency and effectiveness than if humans are directly involved.

3.1.1 Robotics—The Technology of Autonomous Systems

Computer systems (without the sensors and actuators) have always had a certain kind of "autonomous" capability—the term "computer" once referred to a person who performed computations. Today, many computer systems perform computational tasks on large amounts of data and generate solutions to problems that would take humans many years to solve.

For purposes of this report, an autonomous system (without further qualification) refers to a standalone computer-based system that interacts directly with the physical world. Sensors and actuators are the enabling devices for such interaction, and they can be regarded as devices for input and output. Instead of a keyboard or a scanner for entering information into a computer for processing, a camera or radar provides the relevant input, and instead of a printer or a screen for providing output, the movement of a servomotor in the appropriate manner represents the result of the computer's labors.

Autonomous systems are fundamentally dependent on two technologies—information technology and the technology of sensors and actuators. Both of these technologies have developed rapidly. On the hardware side, the costs of processor power and storage have dropped exponentially for a number of decades, with doubling times on the order of 1 to 2 years. Sensors and actuators have also become much less expensive and smaller. On the software side, the technologies of artificial intelligence, statistical learning techniques, and information fusion have advanced a long way as well, although at the cost of decreased transparency of operation in the software that controls the system.

Software that controls the operation of autonomous systems is subject to all of the usual problems regarding software safety and reliability—programming errors and bugs, design flaws, and so on. Flaws can include errors of programming (that is, errors introduced because a correct performance requirement was implemented incorrectly) or errors of design (that is, a performance requirement was formulated incorrectly or stated improperly).
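The sense-decide-act structure described above can be made concrete with a short sketch. The following Python fragment is purely illustrative and is not drawn from the report; names such as SensorReading, decide, and command_actuator are hypothetical. It shows a sensor reading standing in for keyboard-style input, a servomotor command standing in for printer-style output, and a simple decision rule in between.

    # Illustrative sketch of a sense-decide-act control loop (hypothetical).
    import random
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        """Stand-in for input from a camera or radar."""
        range_m: float      # distance to the nearest detected object, in meters
        bearing_deg: float  # direction of that object relative to the platform

    def read_sensor() -> SensorReading:
        # Placeholder for a real camera or radar driver.
        return SensorReading(range_m=random.uniform(1.0, 50.0),
                             bearing_deg=random.uniform(-90.0, 90.0))

    def decide(reading: SensorReading) -> float:
        # The "computation" stage; in a fielded system this might be a
        # statistical learning model whose internal logic is far less
        # transparent than this one-line rule.
        if reading.range_m < 5.0:
            return -reading.bearing_deg  # obstacle close ahead: steer away
        return 0.0                       # path clear: hold course

    def command_actuator(steering_deg: float) -> None:
        # Placeholder for a servomotor interface; printing stands in for output.
        print(f"servo: steer {steering_deg:+.1f} degrees")

    def control_loop(cycles: int = 5) -> None:
        # One iteration per control cycle: sense, then decide, then act.
        for _ in range(cycles):
            command_actuator(decide(read_sensor()))

    if __name__ == "__main__":
        control_loop()

Even in a toy sketch like this, the behavior of the loop depends entirely on how well the decision rule anticipates what the sensor will report, which is the point developed next.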

To control an autonomous system, the software is programmed to anticipate various situations. An error of programming might be a mistake made in the programming that controls the response to a particular situation, even when that situation is correctly recognized. An error of design might become apparent when a system encounters a situation that was not anticipated, and as a result either does something entirely unexpected or improperly assesses the situation as one for which it does have a response, which happens to be inappropriate in that instance.

Neuroscience may be an enabling technology for certain kinds of autonomous systems. Some neuroscience analysts believe that neuroscience will change the approach to computer modeling of decision making by disclosing the cognitive processes produced by millions of years of evolution, processes that artificial intelligence has to date been unable to capture fully. Such processes may become the basis for applications such as automatic target recognition. Even today, it is possible for automated processes to differentiate images of tanks from those of trucks, and such processes do not rely on neuroscience. However, neuroscience may contribute to an automated ability to make even finer distinctions, such as the ability to distinguish between friendly and hostile vehicles or even individuals.

In general, the logic according to which any complex system operates—including many autonomous systems—is too complex to be understood by any one individual. This is true for three reasons. First, multiple individuals may be responsible for different parts of the system's programming, and they will not all be equally conversant with all parts of the programming. Second, the programming itself may be large and complex enough to make it very hard to understand all of how it works in detail. Third, the program may combine and process inputs (sometimes unique inputs that depend on the very specific circumstances extant at a given moment in time) in ways that no human or team of humans can reasonably anticipate. System testing is one mechanism that can provide some information about the behavior of the system under various conditions, but it is well understood that testing can only provide evidence of flaws and that it cannot prove that a system is without flaw.1

1 National Research Council, Software for Dependable Systems: Sufficient Evidence?, The National Academies Press, Washington, D.C., 2007, available at http://www.nap.edu/catalog.php?record_id=11923. See also National Research Council, Summary of a Workshop on Software Certification and Dependability, The National Academies Press, Washington, D.C., 2004, available at http://books.nap.edu/catalog.php?record_id=11133. Real-time programming (the class of programming needed for robotics applications) is especially complicated by unanticipated "interaction" effects that are hard to detect by testing and also do not usually arise in non-real-time programming.

It is worth noting that a flaw in the software controlling an autonomous system may be far more damaging than a flaw in software that does not control physical objects—in the latter case, a display may be in error (or indicate an error), whereas in the former case, the physical part of a system (such as a robotically controlled gun) may kill friendly troops.
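The distinction between an error of programming and an error of design, and the limits of testing noted above, can also be illustrated with a deliberately simplified sketch. The example below is hypothetical and not taken from the report (names such as choose_response are invented for illustration): a specification anticipates two situations, the code mishandles one of them (an error of programming), and any unanticipated situation falls through to a default response that may be inappropriate (an error of design). Tests that cover only the anticipated situations can reveal the first kind of flaw but say nothing about the second.

    # Illustrative sketch only: a toy "obstacle response" chooser whose
    # hypothetical specification anticipates exactly two situations:
    #   "pedestrian" -> "stop"          "debris" -> "steer_around"
    def choose_response(situation: str) -> str:
        if situation == "pedestrian":
            # Error of programming: the requirement ("stop") was stated
            # correctly but implemented incorrectly.
            return "steer_around"
        if situation == "debris":
            return "steer_around"  # correct with respect to the specification
        # Error of design: situations never anticipated in the specification
        # (e.g., "stopped_school_bus") fall through to a default that may be
        # inappropriate in that instance.
        return "proceed"

    def run_anticipated_tests() -> None:
        # Testing against anticipated situations provides evidence of the
        # programming error...
        expected = {"pedestrian": "stop", "debris": "steer_around"}
        for situation, required in expected.items():
            actual = choose_response(situation)
            verdict = "PASS" if actual == required else "FAIL (flaw found)"
            print(f"{situation}: expected {required}, got {actual} -> {verdict}")
        # ...but no finite set of such tests can prove the absence of flaws:
        # the unanticipated "stopped_school_bus" case is never exercised.

    if __name__ == "__main__":
        run_anticipated_tests()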

3.1.2 Possible Military Applications

Technologies for autonomous systems are the basis for a wide variety of real-world operational systems. Today, robots are available to clean pools and gutters, to vacuum and/or wash floors, and to mow lawns. Robotic dogs serve as personal companions to some children. Robots perform a variety of industrial assembly line tasks, such as precision welding. A number of commercial robots have obvious military applications as well—robots for security patrolling at home have many of the capabilities that robots for surveillance might need to help guard a military facility, and self-driving automobiles are likely to have many similarities to self-driving military trucks. In a military context, robots also conduct long-range surveillance and reconnaissance operations, disarm bombs, and perform a variety of other functions. In addition, these robots may operate on land, in the air, or on and under the sea.

Perhaps the most controversial application of autonomous systems is equipping such systems with lethal capabilities that operate under human control. Even more controversially, some systems have lethal capabilities that can be directed without human intervention. Some of these systems today include:2

• A South Korean robot that provides either an autonomous lethal or nonlethal response in an automatic mode, rendering it capable of making the decision on its own.
• Packbots from iRobot that are capable of tasering enemy combatants; some are also equipped with the highly lethal MetalStorm grenade-launching system.
• The SWORDS platform in Iraq and Afghanistan, which can carry lethal weaponry (M240 or M249 machine guns, or a .50 caliber rifle). A new Modular Advanced Armed Robotic System (MAARS) version is in development.
• Stationary robotic gun-sensor platforms that Israel has considered deploying along the Gaza border in automated kill zones, with machine guns and armored folding shields.

2 Ronald C. Arkin, unpublished briefing to the committee on January 12, 2012, Washington, D.C.; and Ronald C. Arkin, "Governing Lethal Behavior," Proceedings of the 3rd International Conference on Human Robot Interaction, ACM Publishing, New York, 2008.

Are such systems new? In one sense, no. A simple pressure-activated mine fulfills the definition of a fully autonomous lethal system—it explodes without human intervention when it experiences a pressure exceeding some preprogrammed threshold. Other newer, fully autonomous systems are more sophisticated—the radar-cued Phalanx Close-In Weapons System for defense against antiship missiles and its land-based counterpart for countering rocket, artillery, and mortar fire are examples. In these latter systems, the fully autonomous mode is enabled when there is insufficient time for a human operator to take action in countering incoming fire.3

3 Clive Blount, "War at a Distance?—Some Thoughts for Airpower Practitioners," Air Power Review 14(2):31-39, 2011, available at http://www.airpowerstudies.co.uk/APR%20Vol%2014%20No%202.pdf.

Other systems, such as the Mark 48 torpedo, are mobile and capable of moving freely (within a limited domain) and searching for and identifying targets. A torpedo is lethal, but today it requires human intervention to initiate weapons release. Much of the debate about the future of autonomous systems relates to the possibility that a system will deliberately initiate weapons release without a human explicitly making the decision to do so.

Seeking to anticipate future ethical, legal, and societal issues associated with autonomous weapons systems, the Department of Defense promulgated a policy on such weapons in November 2012. This policy is described in Box 3.1.

3.1.3 Ethical, Legal, and Societal Questions and Implications

In some scenarios, the use of armed autonomous systems not only might reduce the likelihood of friendly casualties but also might improve mission performance over possible or typical human performance. For example, autonomous systems can loiter without risk near a target for much longer than is humanly possible, enabling them to collect more information about the target. With more information, the remote weapons operator can do a better job of ascertaining the nature and extent of the likely collateral damage should s/he decide to attack as compared with a pilot flying an armed aircraft in the vicinity of the target; with such information, an attack can be executed in a way that does minimal collateral damage. A remote human operator—operating a ground vehicle on the battlefield from a safe location—will not be driven by fear for his or her own safety in deciding whether or not to attack any given target, and thus is more likely in this respect to behave in a manner consistent with the law of armed conflict than would a soldier in immediate harm's way.

Box 3.1
Department of Defense Policy on Autonomy in Weapon Systems

Department of Defense Directive 3000.09, dated November 21, 2012, on the subject of "Autonomy in Weapon Systems" establishes DOD policy regarding autonomous and semi-autonomous weapon systems.

An autonomous weapon system is a weapon system that, once activated, can select and engage targets without further intervention by a human operator. A subset of autonomous weapon systems are human-supervised autonomous weapon systems that are designed to select and engage targets without further human input after activation but nevertheless allow human operators to override operation of the weapon system and to terminate engagements before unacceptable levels of damage occur.

A semiautonomous weapon system is a weapon system that, once activated, is intended to engage only individual targets or specific target groups that have been selected by a human operator. In semiautonomous weapon systems, autonomy can be provided for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets. Semiautonomous systems also include fire-and-forget or lock-on-after-launch homing munitions that rely on tactics, techniques, and procedures to maximize the probability that only the individual targets or specific target groups explicitly selected by a human operator will be attacked. This provision allows weapons such as the United States Air Force Low Cost Autonomous Attack System loitering missile system to operate within a designated area in which only enemy targets are expected to be found.

Not covered by the policy are autonomous or semiautonomous cyber weapons, unguided munitions, munitions manually guided by operators, or mines.

The policy states that those who authorize the use of, direct the use of, or operate autonomous and semiautonomous weapon systems must do so in accordance with the laws of war, applicable treaties, weapon system safety rules, and the applicable rules of engagement (ROE). In addition, it directs that autonomous and semiautonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. Autonomous and semiautonomous weapon systems should do the following:

• Function as anticipated in realistic operational environments against adaptive adversaries;
• Complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement; and
• Be sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.

The policy permits use of semiautonomous weapons systems to deliver kinetic or nonkinetic, lethal or nonlethal force in most combat situations, subject to the requirements described above regarding the laws of war and so on. The policy also permits the use of human-supervised autonomous weapon systems in local defense scenarios to select and engage (nonhuman) targets to respond to time-critical or saturation attacks against manned installations and onboard defense of manned platforms. (This provision allows systems such as the Phalanx Close-In Weapons System (CIWS) to operate in its fully autonomous mode.) Last, it permits the use of autonomous weapon systems in the context of applying nonlethal, nonkinetic force, such as some forms of electronic attack, against materiel targets.

The DOD does not currently possess autonomous weapons systems designed for use in scenarios other than those described in the previous paragraph. But in the future, the acquisition of such weapons systems (that is, autonomous weapons systems designed for use in other scenarios) will be subject to two special additional reviews involving the Undersecretaries of Defense for Policy and for Acquisition, Technology and Logistics, and the Chairman of the Joint Chiefs of Staff. Before a decision to enter into formal development, a review will ensure that the development plan meets the requirements of the policy described above. Before a decision to field such a weapons system, a review will ensure that the weapon to be fielded does meet the requirements of the policy described above and, further, that relevant training, doctrine, techniques, tactics, and procedures are adequate to support its use.

Finally, in an acknowledgment that technology will inevitably evolve, the directive states that the policy will expire in 10 years (on November 22, 2022) if it has not been reissued, canceled, or certified current by November 22, 2017.

This list of advantages, including ethical ones, provides a strong incentive to develop and deploy autonomous systems. Despite such advantages, a variety of ELSI concerns have been raised about autonomous systems and are discussed below.4

4 The concerns described below are drawn from a number of sources, including Patrick Lin, "Ethical Blowback from Emerging Technologies," Journal of Military Ethics 9(4):313-331, 2010.

International Law

Autonomous systems—especially lethal autonomous systems—complicate today's international law of armed conflict (LOAC) and domestic law as well. Some relevant complications include the following:

• Individual responsibility is one of the most important mechanisms for accountability under LOAC. However, an autonomous system taking an action that would be a LOAC violation if taken by a human being cannot be punished and is not "accountable" in any meaningful sense of the term. Behind the actions of that system are other actions of a number of human beings, who may include the system operator, those higher in the chain of command who directed that the system be used, the system developer/designer/programmer, and so on. How and to what extent, if any, are any of these individuals "responsible" for an action of the system?5
• How and to what extent can lethal autonomous systems distinguish between legitimate and illegitimate targets (such as civilian bystanders)? How and to what extent can such a system exercise valid judgment that "pulling the trigger" does not result in "excessive" collateral damage?
• How might autonomous systems contribute to a lowering of the threshold for engaging in armed conflict? Some analysts argue that the use of remotely operated lethal autonomous systems in particular emboldens political leaders controlling the use of such weapons to engage in armed conflict.6 The argument, in essence, is that nation X will be more likely to wage war against nation Y to the extent that nation X's troops are not in harm's way, as would be the case with weapons system operators doing their work from a sanctuary (e.g., nation X's homeland) rather than in the field (that is, on the battlefield with nation Y's troops). Under such a scenario, the use of force (that is, the use of such systems) is less likely to be a true act of last resort, and thus violates the "last resort" principle underlying jus ad bellum.

5 A military organization provides a chain of command in which some specific party is responsible for deciding whether a system or weapon is used, and if untoward things happen as the result of such use, the presumption is that this specific party is still responsible for the bad outcome. This presumption can be rebutted by various mitigating circumstances (e.g., if further investigation reveals that the weapon itself was flawed in a way that led directly to the bad outcome and that the responsible party had no way of knowing this fact).

6 See, for example, Peter Asaro, "Robots and Responsibility from a Legal Perspective," Proceedings of the IEEE 2007 International Conference on Robotics and Automation, Workshop on RoboEthics, April 14, 2007, Rome, Italy, available at http://www.peterasaro.org/writing/ASARO%20Legal%20Perspective.pdf; Rob Sparrow, "Killer Robots," Journal of Applied Philosophy 24(1):62-77, 2007; and Noel Sharkey, "Robot Wars Are a Reality," The Guardian (UK), August 18, 2007, p. 29, available at http://www.guardian.co.uk/commentisfree/2007/aug/18/comment.military. Also cited in Patrick Lin, George Bekey, and Keith Abney, Autonomous Military Robotics: Risk, Ethics, and Design, California Polytechnic State University, San Luis Obispo, Calif., 2008.

Impact on Users

The armed forces of the world have a great deal of experience with traditional combat, and yet the full range of psychological and emotional effects of combat on soldiers is not well understood.

Thus, there may well be some poorly understood psychological effects on soldiers who engage in combat far removed from the battlefield.

For example, a 2011 report from the United States Air Force School of Aerospace Medicine, Department of Neuropsychiatry, on the psychological health of operators of remotely piloted aircraft and supporting units identified three groups of psychological stressors on these operators:7

• Operational stressors (those related to sustaining operations) include issues such as restricted working environments (e.g., ground control stations with limited freedom for mobility) and poor workstation ergonomics.
• Combat stressors (those that involve missions undertaken in direct support of combat operations) include stresses induced in operators of remotely piloted vehicles who must manage their on-duty warrior role contemporaneously with their role as one with domestic responsibilities arising from being stationed at home.
• Career stressors (those arising from the placement of individuals into positions requiring the flying of remotely piloted vehicles) include poorly defined career fields with uncertain career progression, especially for those who have previously qualified for piloting manned aircraft.

7 Wayne Chappelle et al., Psychological Health Screening of Remotely Piloted Aircraft (RPA) Operators and Supporting Units, RTO-MP-HFM-205, USAF School of Aerospace Medicine, Department of Neuropsychiatry, Wright-Patterson Air Force Base, Ohio, 2011.

What is the psychological impact on a Navy pilot when a remotely piloted vehicle can land with ease on an aircraft carrier at night in a storm, or on a specialist in explosive ordnance disposal when a bomb disposal robot can disarm an improvised explosive device without placing the specialist at risk?8 How will such individuals demonstrate courage and skill to their superiors and colleagues when such technologies are available?

8 Peter Singer describes individuals from the Foster Miller Company in Waltham, Massachusetts, talking about the moment at which they decided to use robots for explosive ordnance disposal (EOD). Teams had received robots for EOD but were not using them. But an incident occurred in which two EOD technicians were killed in Iraq, and the prevailing sentiment shifted quickly from "We leave the robots in the back of the truck" and "We don't use them because we're brave" to "You know what? We really do have to start using them." See Robert Charette, "The Rise of Robot Warriors," IEEE Spectrum, June 2009, available at http://spectrum.ieee.org/robotics/military-robots/the-rise-of-robot-warriors.

Humanity of Operators

In the context of armed remotely piloted vehicles (RPVs), concerns have been raised about psychological distancing of RPV operators from their targets.

Quoting from a report of the UN Human Rights Council,9 "[B]ecause operators are based thousands of miles away from the battlefield, and undertake operations entirely through computer screens and remote audiofeed, there is a risk of developing a 'Playstation' mentality to killing." Others counter such notions by pointing out that killing at ever-larger distances from one's target characterizes much of the history of warfare. Increasing the distance between weapons operator and target generally decreases the likelihood that the operator will be injured, and indeed there is no legal requirement that operator and target must be equally vulnerable.

9 Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Study on Targeted Killings, Human Rights Council, ¶ 84, UN Doc. A/HRC/14/24/Add.6, May 28, 2010, available at http://www2.ohchr.org/english/bodies/hrcouncil/docs/14session/A.HRC.14.24.Add6.pdf.

Organizational Impacts

New technology often changes relationships within an organization. For example, the scope and nature of command relationships for the use of that technology are not arbitrary. Someone (or some group of individuals) specifies these relationships. Under what circumstances, if any, is an individual allowed to make his or her own decision regarding placement of a system into a lethal autonomous mode? Who decides on the rules of engagement, and how detailed must they be?

A second example of organizational impact is that autonomous systems reduce the need for personnel—in such an environment, what becomes of promotion opportunities, which traditionally depend in part on the number of personnel that one can command effectively? How do personnel needs affect the scale of financial resources required by an organization?

A third example is that a military organization built around the use of autonomous systems may be regarded differently from one organized traditionally. For example, it is worth considering the controversy over a proposal to introduce a new medal to recognize combat efforts of drone and cyber operators (Box 3.2). The proposal was intended to elevate the status of the operators, recognizing their increasing importance to modern combat. But the public reaction to the proposal reflected skepticism of the idea that a soldier who operates a drone or engages in cyber operations should be recognized and decorated in the same way as the soldier who risks his or her life in the actual theater of battle.

Box 3.2
The Distinguished Warfare Medal

In February 2013, then-Defense Secretary Leon Panetta proposed the "Distinguished Warfare Medal" to recognize drone operators and cyber warriors whose actions "contribute to the success of combat operations, particularly when they remove the enemy from the field of battle, even if those actions are physically removed from the fight."1 While most agreed that electronic warriors deserve recognition for their contributions to war efforts, many were upset at the proposal that this medal would rank above the Bronze Star (awarded for heroic or meritorious acts of bravery on the battlefield) and the Purple Heart (awarded to soldiers who have been injured in battle). In addition, military decorations and recognition are important for promotions. The designation of the Distinguished Warfare Medal as higher than other medals awarded for physical valor in the theater of battle left many veterans feeling insulted and created a great deal of backlash from the Pentagon, veterans groups, and many members of Congress.

Shortly after taking office, Defense Secretary Chuck Hagel ordered a review of the new medal, resulting in a decision to replace the medal with a "distinguishing device" that would be placed on an existing medal to honor the combat achievements of drone and cyber operators. Such a distinguishing device would be similar to the "V" placed on the Bronze Star to indicate valor.

1 Lolita Baldor, "Pentagon Creates New Medal for Cyber, Drone Wars," Associated Press, February 11, 2013, available at http://bigstory.ap.org/article/pentagon-creates-new-medal-cyber-drone-wars.

A final example of organizational impact is that autonomous systems raise questions regarding accountability. If an autonomous system causes inadvertent damage or death, who is accountable? What party or parties, for example, are responsible for paying punitive or compensatory damages? The party ordering the system into operation? The programmers who developed the controlling software? The system's vendor? Is it possible for no one to be responsible? If so, why? What counts as sufficient justification?

Technological Imperfections

Autonomous systems have been known to "go haywire" and harm innocents around them. Such problems obviously present safety issues. Moreover, how and to what extent are operators in the vicinity of an autonomous system entitled to know about possible risks? A pilot in an airplane that is partially out of control may be able to steer the airplane away from populated areas—what of the operator of a remotely piloted

3.4.1 The Technology of Nonlethal Weapons

The general class of nonlethal weapons includes a wide variety of technologies:

• Kinetic weapons are decidedly low-tech—bean-bag rounds for shotguns and rubber bullets for pistols have been used for a long time.
• Barriers and entanglements can be used to stop land vehicles moving at high speed (such as a car trying to speed through a checkpoint) or to damage propellers of waterborne craft.
• Optical weapons (e.g., dazzling lasers) are used to temporarily blind an individual using bright light—the individual must shut or avert his eyes to avoid pain. Such weapons are often used on individuals operating a vehicle, with the intent of forcing the driver to stop or flee.
• Acoustic weapons project intense sound waves in the direction of a target from long distances, and individuals within effective range feel pain from the loud sound.
• Directed-energy weapons that project millimeter-wave radiation can cause a very painful burning sensation on human skin without actually damaging the skin.32 Such weapons, used to direct energy into a large area, are believed to be useful in causing humans to flee an area to avoid that pain. Other directed-energy weapons direct high-powered microwave radiation to disrupt electronics used by adversaries.
• Electrical weapons (e.g., tasers and stun guns) use high-voltage shocks to affect the nervous system of an individual, causing him or her to lose muscle control temporarily. One foundational science for understanding such effects is neuroscience, as discussed in Chapter 2.
• Biological and chemical agents may be aimed at degrading fuel or metal, or may target neurological functions to incapacitate people, repel them (e.g., with a very obnoxious odor), or alter their emotional state (e.g., to calm an angry mob, to induce temporary depression in people). For the latter types of effects, a foundational science for understanding such effects is neuroscience.
• Cyber weapons are often included in the category of "nonlethal" weapons because they have direct effects only on computer code or hardware.

32 Directed-energy weapons with this effect are sometimes regarded as being weapons based on neuroscience, since they manipulate the central nervous system, even if the mechanisms involved are not chemically based. See, for example, Royal Society, Neuroscience, Conflict, and Security, Royal Society, London, UK, February 2012.

3.4.2 Possible Applications

Nonlethal weapons are intended to provide their users with options in addition to lethal force. Proponents of such weapons suggest that they may be useful in a variety of military engagements or situations that are "less than war," such as in peacekeeping and humanitarian involvements, in situations in which it is hard to separate combatants and noncombatants, or in civilian and military law enforcement contexts such as riot control or the management of violent criminals. In such situations, the use of lethal force is discouraged—and so new nonlethal weapons (such as tasers) have tended to substitute for older nonlethal weapons (such as billy clubs).

A key question concerning nonlethal weapons in combat is their relationship to traditional weapons—are nonlethal weapons intended to be used instead of traditional weapons or in addition to traditional weapons? For example, an acoustic weapon can be used to drive troops or irregular forces from an area or to dissuade a small boat from approaching a ship. But it can also be used to flush adversaries out from under cover, where they could be more easily targeted and killed with conventional weapons. The latter uses are explicitly permitted by NATO doctrine on nonlethal weapons:

Non-lethal weapons may be used in conjunction with lethal weapon systems to enhance the latter's effectiveness and efficiency across the full spectrum of military operations.33

33 Science and Technology Organization Collaboration and Support Office, Annex B: NATO Policy on Non-Lethal Weapons, available at http://ftp.rta.nato.int/public//PubFullText/RTO/TR/RTO-TR-SAS-040///TR-SAS-040-ANN-B.pdf.

So it is clear that in at least some military contexts, military doctrine anticipates that nonlethal weapons can be used along with traditional weapons. But it is also clear that they are not always intended to be used in this way.

Another issue is whether the availability of nonlethal weapons in addition to traditional weapons creates an obligation to use them before one uses traditional weapons that are (by definition) more lethal. On this point, NATO doctrine is also explicit:

Neither the existence, the presence, nor the potential effect of non-lethal weapons shall constitute an obligation to use non-lethal weapons, or impose a higher standard for, or additional restrictions on, the use of lethal force. In all cases NATO forces shall retain the option for immediate use of lethal weapons consistent with applicable national and international law and approved Rules of Engagement.34

34 Science and Technology Organization Collaboration and Support Office, Annex B: NATO Policy on Non-Lethal Weapons, available at http://ftp.rta.nato.int/public//PubFullText/RTO/TR/RTO-TR-SAS-040///TR-SAS-040-ANN-B.pdf.

3.4.3 Ethical, Legal, and Societal Questions and Implications

The diversity of nonlethal weapons types and of possible contexts of use complicates ethical analysis.

Controversy over Terminology

As suggested in the introduction to this section, the term "nonlethal weapon" is arguably misleading, because such weapons can indeed be used with lethal effects. The public policy debate over such weapons is thus clouded, because many of the issues that arise would not in fact emerge were such weapons always capable of operating in a nonlethal manner. For example, how and to what extent, if any, should the intended targets of such weapons be taken into account in determining whether a weapon is "nonlethal"? The physical characteristics of the intended target must be relevant in some ways, but this requirement cannot mean that a machine gun aimed at an inanimate object should be categorized as a nonlethal weapon.

Are cyber weapons nonlethal? Yes, to the extent that they do not cause damage to artifacts and systems connected to their primary targets. But many cyber weapons are also intended to have effects on systems that they control, and malfunctions in those systems may well affect humans. Are antisatellite weapons nonlethal? Yes, since most satellites are unmanned. But if fired against a crewed military spacecraft, they become lethal weapons. Are chemical incapacitants nonlethal? Yes (for the most part), when they are used in clinically controlled settings. But the Scientific Advisory Board of the Organization for the Prohibition of Chemical Weapons concluded in 2011 that, given the uncontrolled settings in which such agents are actually used, "the term 'non-lethal' is inappropriate when referring to chemicals intended for use as incapacitants."35

35 Scientific Advisory Board, Report of the Scientific Advisory Board on Developments in Science and Technology for the Third Special Session of the Conference of the States Parties to Review the Operation of the Chemical Weapons Convention, October 29, 2012, available at http://www.opcw.org/index.php?eID=dam_frontend_push&docID=15865.

Impact on Existing Arms Control Agreements

Certain nonlethal weapons raise concerns about eroding constraints associated with existing arms control agreements. One good example of such nonlethal weapons is that of biological or chemical agents that are intended to affect humans. The Biological and Toxin Weapons Convention forbids signatories from developing, producing, stockpiling, or otherwise acquiring or retaining biological agents or toxins "of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes" and also "weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict."36

36 See http://www.un.org/disarmament/WMD/Bio/.

Similarly, the Chemical Weapons Convention (CWC) forbids parties to the treaty from developing, producing, otherwise acquiring, stockpiling, or retaining chemical weapons.37 Chemical weapons are in turn defined as "toxic chemicals and their precursors," except when they are intended for permissible purposes and acquired in the types and quantities consistent with the permissible purposes. A toxic chemical is one that through its chemical action on life processes can cause death, temporary incapacitation, or permanent harm to humans or animals. (Thus, incapacitating agents are included in the definition of "toxic chemicals" and the use of incapacitating agents is forbidden as a means and method of war.) Permissible purposes include "industrial, agricultural, research, medical, pharmaceutical or other peaceful purposes"; protective purposes (that is, purposes "directly related to protection against toxic chemicals and to protection against chemical weapons"); and law enforcement, including domestic riot control purposes. Signatories also agree not to use riot control agents as a means of warfare, where a riot control agent is an agent that "can produce rapidly in humans sensory irritation or disabling physical effects which disappear within a short time following termination of exposure."

37 See http://www.opcw.org/chemical-weapons-convention/.

Many issues regarding arms control turn on the specific meaning of terms such as "temporary incapacitation," "other harm," and "sensory irritation or disabling physical effects." In addition, they depend on determinations of the intended purpose for a given agent (there is no agreed definition of "law enforcement," for example).

Such definitional concerns have been particularly apparent in contemplating possible chemical weapons based on neuroscience (see the Chapter 2 section on neuroscience) that could create specific temporary effects in humans. Although there is a broad consensus that the CWC prohibitions on using toxic chemicals in conflict extend to the use of incapacitating chemical agents (ICAs) in genuine combat situations, a number of countries, including the United States and Russia, have shown an active interest in ICAs for law enforcement and in situations such as counterterrorism where the lines between combat and law enforcement may blur. For example, even after the signing of the CWC, research has been proposed to develop "calmatives"—chemical agents that, when administered to humans, change their emotional states from angry to calm (as one possibility);38 such agents might be useful in reducing the damage that a rioting crowd might cause or in sapping the will of adversary soldiers to fight on the battlefield.

38 For example, the International and Operational Law Division of the Deputy Assistant Judge Advocate General of the Navy approved in the late 1990s a list of proposed new, advanced, or emerging technologies that may lead to developments of interest to the U.S. nonlethal weapons effort, including gastrointestinal convulsives, calmative agents, aqueous foam, malodorous agents, oleoresin capsicum (OC) cayenne pepper spray, smokes and fogs, and riot control agents (orthochlorobenzylidene malononitrile, also known as CS, and chloracetophenone, also known as CN). See, for example, Margaret-Anne Coppernoll, "The Nonlethal Weapons Debate," Naval War College Review 52:112-131, Spring 1999. In 2004, a Defense Science Board study on future strategic strike forces (available at http://www.fas.org/irp/agency/dod/dsb/fssf.pdf) noted that calmatives could have value in neutralizing individuals while minimizing undesirable effects.

The first two CWC review conferences were unable to address the issue of ICAs. Although substantial discussion and debate during the third review conference in April 2013 clarified a number of national positions, a Swiss proposal to undertake formal technical discussions was not included in the final document. At the first meeting of the Organization for the Prohibition of Chemical Weapons (OPCW) executive council following the review conference, the U.S. ambassador stated:

. . . we too are disappointed that time ran out before final agreement could be reached on language relating to substances termed "incapacitating chemical agents". The United States believes that agreement on language is within reach. We will work closely and intensively with the Swiss and other delegations so that this important discussion can continue. In this context, I also wish very clearly and directly to reconfirm that the United States is not developing, producing, stockpiling, or using incapacitating chemical agents.39

39 Robert Mikulak, Statement by Ambassador Robert P. Mikulak, United States Delegation to the OPCW at the Seventy-Second Session of the Executive Council, OPCW EC-72/NAT.8, available at http://www.opcw.org/index.php?eID=dam_frontend_push&docID=16511.

Beyond the debates over whether ICAs would be permitted in law enforcement, there is also concern that the use of such agents will undermine the fundamental prohibitions of the treaty.

To the extent that some ICAs also fall under the provisions of the Biological Weapons Convention (BWC), the same concerns apply.

The pressures placed on the CWC and the BWC by the possibility of developing chemically or biologically based incapacitating agents may point to a broader lesson. Arms control agreements are often signed in a particular technological context. Changes in that context, whether driven by new S&T developments or new concepts of use for existing technologies, mean that, in order to remain effective, treaties must strive to stay on top of relevant advances. In extreme cases, even changes in the basic language of the treaty or abrogation or creation of new legal mechanisms might become necessary in response.40 This lesson suggests that even research on certain new technology developments may have ELSI implications for existing agreements long before such research bears fruit.

40 Because science and technology are at the core of both treaties, both the CWC and the BWC call for regular review of developments in science and technology that could affect the future of the conventions, both during the review conferences held every 5 years and in between (see National Research Council, Life Sciences and Related Fields: Trends Relevant to the Biological Weapons Convention, The National Academies Press, Washington, D.C., 2012). In 2012, for example, the Organization for the Prohibition of Chemical Weapons created a temporary working group on convergence to address the increasing overlap between chemistry and biology and how that affects the future of the CWC and the BWC. Members included a member of the staff of the BWC Implementation Support Unit and the chair of a major independent international review of trends in S&T for the seventh BWC review conference (see http://www.opcw.org/about-opcw/subsidiary-bodies/scientific-advisory-board/documents/reports/).

International Law

The BWC and the CWC are not the only legal frameworks that affect the potential development and use of nonlethal weapons. The law of armed conflict (specifically, Article 51 of Additional Protocol I to the Geneva Conventions) stipulates that civilians shall not be the subjects of attack. This is a key element of the principle of distinction, which distinguishes between members of a nation's armed forces engaged in conflict and civilians, who are presumed not to participate in hostilities directly and thus should be protected from the dangers of military operations.41 Although civilians (that is, noncombatants) have always contributed to the general war effort of parties engaged in armed conflicts (e.g., helped produce weapons and munitions), they have usually been at some distance from actual ground combat.

41 Nils Melzer, "Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law," International Committee of the Red Cross, Geneva, Switzerland, 2009, available at http://www.icrc.org/eng/assets/files/other/icrc-002-0990.pdf.

Since the end of World War II and the increase in civil wars over traditional interstate conflict and then the rise of nonstate actors, the assumption of separation has been increasingly challenged, and combatants and noncombatants are often intermingled.

The fact of intermingling is one rationale for the development of nonlethal weapons—the use of these weapons when combatants and noncombatants are intermingled is intended to reduce the risk of incurring noncombatant casualties. A common use scenario is one in which a soldier confronting such a situation is unable to distinguish between a combatant and a noncombatant, and uses a nonlethal weapon to subdue an individual. The rationale for nonlethal weapons is thus that if the individual turns out to be a noncombatant, then no harm is done, but if the individual turns out to be a combatant, then he has been subdued. In this case, the argument turns on the meaning of the term "attack," which is defined as an act of violence. For "nonlethal" weapons other than those covered by the CWC and BWC, is it an act of violence to use a weapon that causes unconsciousness? And if the answer is not categorical (that is, "it depends"), what are the circumstances on which the answer depends?

A second requirement of the law of armed conflict is a prohibition on weapons that are "calculated to cause unnecessary suffering."42 In the 1980s and 1990s, a question arose over whether a weapon intended to blind but not kill enemy soldiers—by definition, a nonlethal weapon—might be such a weapon. Box 3.3 recounts briefly some of the history of blinding lasers.

42 Annex to Hague Convention IV Respecting the Laws and Customs of War on Land of October 18, 1907 (36 Stat. 2277; TS 539; 1 Bevans 631), article 23(e). Notably, neither the annex nor the convention specifies a definition for "unnecessary suffering."

At a high level of abstraction, lessons from this history suggest an interplay of ethical and legal issues. No specific international prohibitions against blinding lasers were in place in the early 1980s, and the United States sought to develop such weapons. However, over time, ethical concerns suggesting that blinding as a method of warfare was in fact particularly inhumane were one factor that led the United States to see value in explicitly supporting such a ban, first as a matter of policy and then as a matter of international law and treaty, even if blinding lasers themselves could arguably have been covered under the prohibition of weapons that caused unnecessary suffering.

Another distinct body of law, discussed further in Chapter 4, is international human rights law, which addresses the relationship between a state and its citizens rather than relationships between states in conflict addressed by the law of armed conflict.

Many analysts, but by no means all, believe that international human rights law and international humanitarian law (that is, the law of armed conflict) are closely related, however.43 International human rights law is codified in a number of general treaties as well as international agreements focused on particular issues. Among the provisions of international human rights law that could be relevant to nonlethal weapons are prohibitions on torture or on degrading or inhumane punishments. More general provisions, such as a fundamental right to life or to health, are also potentially relevant. Potential violations of international human rights law have been cited as part of the arguments against the use of incapacitating chemical agents,44 as well as against other forms of nonlethal weapons.

43 See, for example, Robert Kolb, "The Relationship Between International Humanitarian Law and Human Rights Law: A Brief History of the 1948 Universal Declaration of Human Rights and the 1949 Geneva Conventions," International Review of the Red Cross, No. 324, September 30, 1998, available at http://www.icrc.org/eng/resources/documents/misc/57jpg2.htm; Marco Sassoli and Laura Olson, "The Relationship Between International Humanitarian and Human Rights Law Where it Matters: Admissible Killing and Internment of Fighters in Non-International Armed Conflicts," International Review of the Red Cross 90(871):599-627, September 2008, available at http://www.icrc.org/eng/assets/files/other/irrc-871-sassoli-olsen.pdf; and United Nations Office of the High Commissioner on Human Rights, "International Humanitarian Law and Human Rights," July 1991, available at http://www.ohchr.org/Documents/Publications/FactSheet13en.pdf.

44 International Committee of the Red Cross, "Incapacitating Chemical Agents": Law Enforcement, Human Rights Law, and Policy Perspectives, report of an expert meeting, Montreux, Switzerland, April 24-26, 2012, available at http://www.icrc.org/eng/resources/documents/publication/p4121.htm.

Safety

The extent to which a given weapon is nonlethal (or more precisely, less lethal) is often an empirical question. How might such weapons be tested for lower lethality? Animal testing and modeling do provide some insight, but high fidelity is sometimes available only through human testing. Laboratory testing conditions often do not reflect real-world conditions of use. In practice, then, certain information on lethality may be available only from operational experience—a point suggesting that the first uses of a given nonlethal weapon may in fact be more lethal than expected. In the cases of the nonlethal weapons described above:

• Weapons that provide high-voltage shocks to an individual may cause serious injury or death if the person falls or if the person's heart goes into cardiac arrest.
• Dazzling lasers may cause a driver to lose control of a vehicle by forcing the driver to shield his or her eyes, leading to injury or death as a result.
• Acoustic weapons can cause permanent hearing losses through repeated exposure.
• Chemical incapacitants can cause serious harm, or death may occur if overdoses occur or as the result of secondary effects (e.g., an incapacitated person who falls and hits his head on a rock).

Box 3.3
On the Compliance of Lasers as Antipersonnel Weapons with the Law of Armed Conflict

In 1983, the New York Times reported that the U.S. Army was developing a weapon known as C-CLAW (Close Combat Laser Assault Weapon) that used low-power laser beams to blind the human eye at distances of up to one mile.1 Pentagon officials noted that the beam "would sweep around the battlefield and blind anyone who looked directly into it."

In September 1988, the DOD Judge Advocate General issued a memorandum of law concerning the legality of the use of lasers as antipersonnel weapons.2 This memorandum identified the key law-of-armed-conflict issue as whether the use of a laser to blind an enemy soldier would cause unnecessary suffering and therefore be unlawful. The memorandum noted that blinding a soldier "ancillary to the lawful use of a laser rangefinder or target acquisition lasers against material targets" would be legal. If so, the memorandum argued, consistency requires that it must not be illegal to target soldiers directly with a laser. If it were otherwise, "enemy soldiers riding on the outside of a tank lawfully could be blinded as the tank is lased incidental to its attack by antitank munitions; yet it would be regarded as illegal to utilize a laser against an individual soldier walking ten meters away from the tank." The memorandum then noted that "no case exists in the law of war whereby a weapon lawfully may injure or kill a combatant, yet be unlawful when used in closely-related circumstances involving other combatants." The memorandum then concluded that a blinding laser would not cause "unnecessary suffering when compared to other [legal] wounding mechanisms to which a soldier might be exposed on the modern battlefield," and that thus the use of a laser as an antipersonnel weapon must be lawful.

However, in September 1995, the U.S. Department of Defense promulgated a new policy that prohibited "the use of lasers specifically designed to cause permanent blindness of unenhanced vision and supported negotiations prohibiting the use of such weapons" and continued training and doctrinal efforts to minimize accidental or incidental battlefield eye injuries resulting from using laser systems for nonprohibited purposes. One month later, the first review conference of the 1980 Convention on Certain Conventional Weapons adopted a protocol on blinding laser weapons, which the United States signed.

Some of the issues raised in the lead-up to this conference included the desirability of a protocol to cover this issue; a debate over whether to prohibit blinding weapons per se or blinding as a method of warfare; and the possibility of a ban interfering with other military uses of lasers, such as the designation of targets.

In January 2009, the United States deposited its instrument of ratification for Protocol IV of the Convention on Conventional Weapons, which prohibits the employment of laser weapons "specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision."3 The protocol further prohibits the transfer of such weapons to any state or nonstate entity. However, it recognizes the possibility of blinding as "an incidental or collateral effect of the legitimate military employment of laser systems, including laser systems used against optical equipment," and exempts such blinding from the prohibition of this protocol.

One analyst suggests that a major factor in the adoption of the protocol was the support garnered from a variety of nongovernment organizations, such as medical associations and national Red Cross and Red Crescent organizations.4 In addition, in May 1995, the European Parliament called on the Council of Europe to take action on the protocol. In the United States, Human Rights Watch (HRW)—an international nongovernmental organization—issued a report in May 1995 that documented U.S. efforts to develop military laser systems that were intended to damage optical systems and/or eyesight. Whether or not prompted by the HRW report, a number of influential U.S. senators and representatives shortly thereafter asked the administration to adopt a ban on blinding lasers.

1 See http://www.nytimes.com/1983/12/18/us/army-works-on-a-blinding-laser.html.
2 "Memorandum of Law: The Use of Lasers as Antipersonnel Weapons," The Army Lawyer, DA PAM 27-50-191, November 1988, available at http://www.loc.gov/rr/frd/Military_Law/pdf/11-1988.pdf.
3 See http://www.state.gov/r/pa/prs/ps/2009/01/115309.htm.
4 Louise Doswald-Beck, "New Protocol on Blinding Laser Weapons," International Review of the Red Cross, No. 312, June 30, 1996, available at http://www.icrc.org/eng/resources/documents/misc/57jn4y.htm. This article also provides some of the other information contained in this box.

Unanticipated Uses

Nonlethal weapons—at least some of them—raise issues that are not generally anticipated in the doctrines of their use. For example, although nonlethal weapons are often presented as a substitute for lethal weapons, they may in practice be a substitute for nonviolent negotiations—that is, they may be used to bypass the time-consuming process of negotiations.

Indeed, there are instances in which nonlethal weapons have been used when no force (rather than lethal force) would have been used.45

45 In one study performed by the sheriff's office in Orange County, Florida, officers on patrol were equipped with tasers and were trained to use them. One immediate effect was that the number of citizen fatalities due to police action decreased significantly—the intended effect. A second immediate (and unanticipated) effect was a significant increase in the frequency of police use of force overall. That is, without tasers, there were most likely a number of situations in which the police would not have used force at all, but with tasers available, they were more willing to use force (nonlethal force, but force just the same) than before. See Alex Berenson, "As Police Use of Tasers Soars, Questions Over Safety Emerge," New York Times, July 18, 2004.

Building on this possibility, nonlethal weapons could be used as a means for coercion—that is, they might be used to torture an individual or to persuade an otherwise unwilling individual to cooperate. The nonlethality of some nonlethal weapons is premised on the ability of an individual to flee the scene of weapons use (as is true for nonlethal area-denial systems)—the weapon causes pain for an individual who is exposed to the weapon's effects, but the individual is free to leave the area in which the weapon causes these effects. But if the individual is not free to leave (e.g., by being restrained), an area-denial system could plausibly be used as an instrument of torture. It is of course true that virtually any instrument can be used as an instrument of torture, which is prohibited under international law. In this context, a possible ELSI concern arises because certain nonlethal weapons technologies might be better suited for torture than others (if, for example, the use of a particular technology left no physical evidence of the torture).