5

An Analytical Framework for Identifying Ethical, Legal, and Societal Issues

This chapter presents a possible framework for identifying and assessing ethical, legal, and societal issues that may be associated with a given research effort. Derived from considering the sources of insight described in Chapter 4 and the ELSI commonalities that appear in many of the technologies discussed in Chapters 2 and 3, the framework is an organized list of ELSI-related questions that decision makers could ask about the development of any technology or application. The framework has two equally important parts. The first part describes the parties that have a stake, either direct or indirect, in ethical, legal, and societal issues, and it poses questions that might be relevant to these stakeholders. The second part of the framework poses questions in relation to crosscutting themes that arise for many or all of these stakeholders. The chapter then presents a worked example of how the framework might be used in practice, and it puts the framework in context by considering its utility from a variety of perspectives. Note that the framework is offered as a starting point for discussion and is not intended to be comprehensive. It is useful primarily for raising ELSI concerns that might not otherwise have been apparent to decision makers.

The approach taken in this framework—posing questions that are useful to assessment of ethical, legal, and societal issues in the context of R&D on emerging and readily available (ERA) technologies that are relevant to national security and providing some discussion of why answers to these questions may be relevant—is similar to the approach described in the framework for assessment of information-based programs offered in the 2008 National Research Council report Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment.1 That framework was intended to help public officials charged with making decisions about the development, procurement, and use of information-based programs determine whether such programs are effective in achieving their intended goals, consistent with national and societal values, compliant with the laws of the nation, and reflective of the values of society. The Government Accountability Office has made use of that framework in assessing a number of programs.2

1 National Research Council, Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment, The National Academies Press, Washington, D.C., 2008, available at http://www.nap.edu/catalog.php?record_id=12452.
2 For example, see Government Accountability Office, 9/11 Anniversary Observations on TSA's Progress and Challenges in Strengthening Aviation Security, GAO-12-1024T, Washington, D.C., 2012, available at http://www.gao.gov/products/GAO-12-1024T.

5.1 STAKEHOLDERS

The first component of the framework described in the present report is organized by stakeholder. That is, any given research project has a variety of stakeholders—parties that have an interest in the project because the project may, directly or indirectly, in the short term or in the long term, have a positive or negative impact on them. This report identifies as possible stakeholders in any research project those involved in or connected to the conduct of the research, the intended users of applications enabled by that research, adversaries against whom those applications may be directed, nonmilitary users of such applications, organizations, noncombatants, and other nations.

Not all of these groups are necessarily stakeholders for any given research project or program; not every technology or application will touch the interests of every one of them. An effort to identify the relevant stakeholder groups is therefore an essential part of any ELSI assessment, and an additional analytical step is to determine how the interests of each of these groups should be weighed (e.g., equally or with some other weighting). The science of effective public participation is summarized in a recent National Research Council report.3

3 Thomas Dietz and Paul C. Stern, eds., Public Participation in Environmental Assessment and Decision Making, The National Academies Press, Washington, D.C., 2008, available at http://www.nap.edu/catalog.php?record_id=12434.

The sections below provide a brief description of stakeholder groups along with a number of ELSI-related questions that could apply to each group.

5.1.1 Those Involved in or Connected to the Conduct of Research

The conduct of research in many ERA technology and application domains raises ethical, legal, and societal issues that are most troublesome when the research itself affects humans. Those affected may include human beings directly involved by deliberate intent in the R&D, human beings who are not directly involved in the R&D, and human beings affected through changes in the environment that may occur as a result of the R&D. In addition, a variety of different impacts may need to be considered—direct and indirect impacts on physical, emotional, or psychological health and well-being; infringements on civil rights; effects on economic status; and so on.

For example, titration of a pharmaceutical agent to determine dose-response relationships is an essential element of research on such agents. In the context of incapacitating nonlethal weapons, titration is an issue in determining dosages that will incapacitate the largest percentage of individuals while remaining nonlethal to them. Mood-altering drugs may need to be tested to determine whether they have long-term effects.

The impact on operators and users of technology is relevant as well. Soldiers with prostheses that enhance their function beyond normal human function, or pilots of remotely piloted vehicles who execute their missions far from immediate danger, have a psychological relationship to their jobs different from that of soldiers without such advantages. Before widespread deployment of such technologies is contemplated, policy makers may wish to understand the psychological effects of such phenomena—raising the question of how such research might be conducted.

Matters such as the scope of populations to include as test subjects, the nature and duration of contemplated harms, and so on are well understood to be within the purview of mechanisms existing in the civilian sector for the protection of humans used as experimental subjects. For example, in testing incapacitants, the question of whether to include young children, the elderly, or pregnant women in the test population would arise.

The Belmont report (described in Chapter 4) articulated three ethical principles that can be generalized to the conduct of most R&D: beneficence, respect for persons, and justice.4 The remainder of this subsection (Section 5.1.1) provides that generalization, and readers interested in the original analysis of the Belmont report should consult that source.

4 The Belmont report can be found at http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.

Beneficence

In the context of conducting R&D, the principle of beneficence suggests that the research effort should maximize the benefits and minimize the harms that result. Some key considerations include the following:

• What defines "benefit" and "harm"? Note that a risk of harm is not necessarily the same thing as harm. How can an R&D effort benefit or harm research subjects? The investigator? Society at large?
• When R&D is being conducted for applications that are intended to harm an adversary, how can the nature and extent of harm be ascertained in research? Note that there are many kinds of harm that may be at issue, as suggested in the previous question. Harm may include physical, mental, emotional, financial, and psychological harms.
• How do the definitions of "benefit" and "harm" differ when different stakeholders are involved? For example, different criteria may apply for individuals indirectly affected by a project and for those directly affected as research subjects.
• How should benefits and harms to different stakeholder groups be determined, aggregated, and compared?
• Learning what the benefits of an R&D effort may be sometimes requires exposing stakeholders to some harm or risk of harm. How should learning about possible benefits be weighed against actual or possible harm?

As the Belmont report stated, "The problem posed by these imperatives is to decide when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks."

Respect for Persons

In the context of conducting R&D, the principle of respect for persons suggests that the effort should obtain voluntary informed consent from parties that are directly involved in such research and act in the best interests of parties that are not capable of providing such consent (e.g., those indirectly affected by the research). Some considerations are as follows:

• What constitutes genuine "informed consent" when information derived from possibly sensitive intelligence sources is part of a threat assessment? For example, consider a research project to develop a vaccine against a particular biological agent. Specifics of the threat posed by the agent may well be derived from classified sources. How, if at all, is such information to be a part of any "informed consent" process?
• If parties directly involved in research related to a particular application are members of the U.S. armed forces, how and to what extent—if any—is there a conflict between their obligation to obey legal orders and their provision of informed consent on a voluntary basis? For example, Section 3 of Executive Order 13139 authorizes the President to waive informed consent for deployed military personnel for the administration of certain investigational drugs, provided that the President determines that obtaining consent is not feasible, is contrary to the best interests of the (service) member, or is not in the interests of national security.5
• Have undue inducements been offered to persuade individuals to "volunteer"? What counts as an "undue" inducement?
• Who, if anyone, will speak for the best interests of parties that are not capable of providing informed consent? Almost by definition, such parties are not themselves capable of articulating their interests. For example, the parties may be physically or temporally distant—in other words, future persons—or they may be parties affected by the environmental consequences of certain R&D efforts. How should such concerns be identified, assessed, and ultimately weighed?

5 See http://www.gpo.gov/fdsys/pkg/FR-1999-10-05/pdf/99-26078.pdf.

Justice

The principle of justice suggests that the benefits and burdens associated with R&D should be fairly distributed. To paraphrase the Belmont report, injustice occurs when some benefit to which a person is entitled is denied improperly or when some burden is imposed unduly. Some considerations include the following:

• On what basis are specific parties or groups of parties selected for direct involvement in a research effort? For example, why is one group rather than another chosen to be the pool of research subjects? Why is one geographical location rather than another the choice for situating a potentially dangerous research facility?
• How and to what extent, if at all, do national security considerations demand that certain groups (e.g., warfighters) accept an exceptional or a higher level of risk than that accepted by or imposed on other groups (e.g., civilians)?

• How and to what extent, if at all, should new knowledge derived from research be subject to restrictions on distribution? For example, should such knowledge be kept from certain allies or from the rest of the world? Should it be restricted from public distribution? If so, why?

5.1.2 Users of an Application

Users are the parties that are intended to use an application—those who make decisions about how and when the application is deployed and operated in the field, and those who use it based on those decisions.

• What could be the nature of the impact, if any, on users of an application? For example, the extended use of a particular application may cause physical damage (e.g., it may require a user to sit at a keyboard for extended periods of time and thereby cause repetitive stress injuries) or psychological stress (e.g., a weapons operator may feel stress if the concept of operations is something with which he is morally uncomfortable).
• What could be the cumulative impact, if any, on users of an application? For example, the insertion of one prosthetic implant may not be harmful, but the insertion of multiple implants or the use of a certain implant with certain drugs may be harmful. By definition, cumulative effects will appear only when the application in question interacts with other components in the user's environment or biology.
• What could be the long-term impact, if any, on users of an application? The short-term impact on a user may be benign, but over the long term the impact may be harmful. Hearing loss due to repeated exposure to loud noises is an example of such a long-term impact. The history of Agent Orange provides an example of long-term consequences.6

6 Agent Orange was a herbicide/defoliant used by the U.S. military during the Vietnam War; exposure to it has been linked to thousands of deaths and to birth defects. See Le Cao Dai, Agent Orange in the Vietnam War: History and Consequences, Vietnam Red Cross Society, 2000. An Institute of Medicine (IOM) report addressing a number of ethical, legal, and societal issues related to Agent Orange is Institute of Medicine, Veterans and Agent Orange: Update 2010, The National Academies Press, Washington, D.C., 2011, available at http://www.nap.edu/catalog.php?record_id=13166.

5.1.3 Adversaries

Adversaries are parties against which an application might intentionally be directed or parties that might seek to harm U.S. interests. Adversaries are not "stakeholders" in the traditional sense understood in domestic policy matters—obviously, one does not seek adversary input or agreement on weapons intended to affect them, for example.

Nonetheless, adversaries certainly are parties that a research project might affect, and adversaries do have interests that the law of armed conflict requires all nations to take into account.

Thus, considering adversary reactions to the use of new military applications against them is an important part of a framework for assessment of ethical, legal, and societal issues. These reactions fall into at least three general categories:

• Adversary acquisition of similar applications for their own uses. The successful use of any new military application of technology is an affirmative demonstration of its feasibility and value, and often carries much more weight with policy makers than any report or study regarding its utility. For example, Stuxnet was the first known operational use of cyber weapons to cause physical damage to infrastructure.7 The possibility and feasibility of such an attack were discussed in many reports on cybersecurity, but Stuxnet galvanized the policy community as never before. U.S. use of remotely piloted vehicles in Afghanistan and Iraq has conclusively demonstrated their value in many battlefield situations, and dozens of nations are today pursuing the development of such systems for their own use. Further, such pursuits may from time to time result in systems that are even more advanced than those available to the United States. A final relevant point is that in using such applications against the United States, adversaries may not feel constrained in their observance of the law of armed conflict, as, for example, when they use human shields.
• Adversary development of countermeasures that negate or reduce the advantages afforded by new military applications. For example, the microwave-based Active Denial System can be countered through the use of aluminum foil to protect exposed areas of skin.8 In some cases, a remotely piloted vehicle can be "spoofed" into thinking that its location is a long way from where it actually is.9 For those cases in which countermeasures are relatively easy and inexpensive to develop, the wisdom of pursuing a given application may be questionable unless the primary value of the application can be realized before countermeasures emerge. Indeed, ethical, legal, and societal issues may arise without the hoped-for benefits of an application ever having been realized.

7 The Stuxnet computer worm, first discovered in June 2010, was aimed at disrupting the operation of Iran's uranium enrichment facilities. See http://topics.nytimes.com/top/reference/timestopics/subjects/c/computer_malware/stuxnet/index.html.
8 The Active Denial System (ADS) is a directed-energy nonlethal weapon first developed in the mid-2000s and designed for keeping humans out of certain areas. The ADS aims a beam of microwave energy at a target such as a human being, thus causing an intense burning sensation on the human's skin. However, because the beam does not penetrate very far into the skin, it causes little lasting damage (no lasting damage in nearly all cases). The pain is intended to cause the human to turn away and flee the area.
9 Daniel P. Shepard, Jahshan A. Bhatti, and Todd E. Humphreys, "Drone Hack: Spoofing Attack Demonstration on a Civilian Unmanned Aerial Vehicle," GPS World, August 1, 2012, pp. 30-33, available at http://radionavlab.ae.utexas.edu/images/stories/files/papers/drone_hack_shepard.pdf.

• Adversary perceptions of a military application's uses against them. The possible emotional and psychological reactions of an adversary to an application's use span a wide range. At one end, an adversary may be so discouraged by the use of a very potent application that he simply loses the will to continue engaging in conflict. At the other end, an adversary may be so outraged and incensed by the use of a very potent application that he redoubles his hostile efforts and recruits others to his cause—such outcomes are made more likely when the use of such an application has caused nonnegligible collateral damage.

Some questions that arise from these kinds of adversary reactions include the following:

• What is the nature of the direct impact, if any, of use of an application against adversaries? Not all applications have a direct negative impact on adversaries—examples might include better battlefield medical care and sources of alternative fuel.
• How and to what extent can the application's impact be reversed?
• How do considerations of symmetry apply? That is, what are the ELSI implications of an adversary pursuing the same technology development path as the United States? For example:
—Under what circumstances, if any, would an adversary's use of the same application against the United States, its allies, or its interests be regarded as unethical?
—Assuming that the United States is conducting R&D on application X, how would the United States interpret the intentions of an adversary conducting similar research?
• In the long term, what is the impact of an application on adversary behavior and perceptions?
—How and to what extent could an adversary develop similar capabilities? What is the time scale on which an adversary could do so? How could an adversary use these capabilities? What advantages could an adversary gain from using these capabilities free of legal and ethical constraints?
—How do the benefits to the United States of pursuing a particular application unilaterally compare to the potential losses should an adversary develop similar applications in the future?

—What countermeasures might an adversary take to negate the advantages conferred by the application in question? How long would it take for the adversary to obtain those countermeasures? How, if at all, could the developed countermeasure be worse in some way from an ethical standpoint than the application itself?
—How could the application affect the adversary's perception of the United States? For example, the application might instill a fear in the adversary that would inhibit the adversary from taking action against the United States, or it might instill a resentment or hatred that might inspire still others to take additional action against the United States.
—What, if any, could be the application's effect on deterrence? Note that the United States justifies nearly all military programs by their (putatively) enhancing effects on deterrence. But adversaries may not necessarily see U.S. military R&D activities in the same light, and in fact may initiate their own similar programs because the United States appears to be seeking a technological advantage.
—What effect, if any, could U.S. restraint in pursuing a particular application have on inducing an adversary to exercise similar restraint? A relevant precedent is the ban on assassinations promulgated by Executive Order 12333.10 The original rationale for this ban was the concern that in its absence, assassinations of U.S. political leaders would be legitimized.
—What, if any, opportunities for adversary propaganda could an application enable or facilitate? For example, how, if at all, could an adversary point to a U.S. program as indicative of an immoral, unethical, and hostile stance toward it?

10 See http://www.archives.gov/federal-register/codification/executive-order/12333.html.

5.1.4 Nonmilitary Users

Military applications also sometimes have value to nonmilitary users. Changing the problem domain from a military to a civilian one can and often does raise other ethical and societal issues. Three of the most prominent nonmilitary problem domains are those of law enforcement, commerce, and the general public.

Law Enforcement

From a technical standpoint, many of the problems facing law enforcement have military or other national security counterparts. Such problems include those of personal protection, surveillance, and intelligence analysis.

But law enforcement authorities, at least in the United States, operate under an entirely different legal regime than do military or other national security authorities, one premise of which is that residents of the United States enjoy certain rights that other groups (e.g., enemy combatants) do not have. For example, the U.S. military is legally permitted to participate in domestic law enforcement operations only at the request of civilian law enforcement authorities. Thus, a relevant question is the following:

• If the military application in question were deployed to support law enforcement operations, how and to what extent, if any, could such deployment raise ethical, legal, and societal issues that do not arise in the military context? Possible differences include the different legal authorities provided in Title 18, Title 10, and Title 50 of the U.S. Code (dealing with criminal law enforcement, the military, and intelligence affairs, respectively), and possible restrictions imposed by the U.S. Constitution on the U.S. government acting domestically.

Commerce

Technologies developed for military applications sometimes have commercial and economic relevance. A good example is the evolution of packet-switched communications, originally developed for the U.S. Air Force to enhance the survivability of military communications networks,11 into the ARPANET (supported by DARPA) and then the Internet. Again, commerce in the private sector is a different problem domain and thus raises different ethical issues. A relevant question is the following:

• How and to what extent, if any, could a commercial adaptation of a military application raise ethical, legal, and societal issues that do not arise in the military context? Such issues might include issues of access (which commercial companies might profit from government efforts to develop the application), accountability (public accountability regimes of private-sector companies differ from those of the government), and possible adoption of technologies by adversaries after commercialization (such uses may be different from adoption as described above).

11 Paul Baran, "On Distributed Communications: Summary Overview," RM-3767-PR, Rand Corporation, Santa Monica, Calif., August 1964.

The General Public

Technologies developed for national security applications sometimes can be adapted for use by ordinary citizens, for uses both good and bad.

For example, it is possible today to purchase over the counter a remotely piloted aircraft for a few hundred dollars. Controlled via Wi-Fi, this airframe—called a quadricopter—can stay aloft for about 20 minutes and has an onboard video camera whose uses are limited only by the operator's imagination. A relevant question is thus:

• How and to what extent could adaptations of a military application be used by ordinary citizens? What are the ELSI implications of such use?

5.1.5 Organizations

For the U.S. armed forces, military applications of technology do not exist in a vacuum. The introduction of new technologies into military organizations often has a significant impact on the practices, procedures, and lines of authority embedded in those organizations. Individuals make decisions about deployment and use, and these individuals are themselves embedded in organizations and are thus affected by the structure and culture of those organizations. Organizational structure and culture are the foundations of accountability and chains of command, and they affect matters such as promotion, respect, levels of cooperation between units, and influence within a hierarchy. Organizations determine rules of engagement and other orders that specify the conditions under which various applications may be used.

For example, the significance of cyber conflict (in both its offensive and defensive aspects) has led the Department of Defense to establish Cyber Command, an entirely new element of U.S. Strategic Command that is likely to become its own combatant command co-equal to other combatant commands. The U.S. Air Force is reorganizing itself to accommodate a large influx of pilots for remotely piloted vehicles, and such reorganization will inevitably have an impact on the Air Force's organizational culture.

Introducing new technology that affords new capabilities often affects the assumptions on which an organization is structured, and thus may have implications for the organization. Relevant questions may include the following:

• How and to what extent, if at all, could a new military application influence or change traditional structures and mechanisms of accountability and responsibility for its use? For example, some applications are intended to drive certain kinds of battlefield decision making to lower ranks in the military hierarchy. How will the organization react to such tendencies? How, if at all, will accountability for the use of the application in question be maintained? Conversely, might the application make it less likely for someone in the lower ranks to raise questions about ethical use?

—How and to what extent, if at all, do national security considerations demand that certain groups (e.g., warfighters) accept an exceptional or a higher level of risk than that accepted by or imposed on other groups (e.g., civilians)?
—How and to what extent, if at all, should new knowledge derived from research be subject to restrictions on distribution?

Users of an Application

• What could be the nature of the impact, if any, on users of an application?
• What could be the cumulative impact, if any, on users of an application?
• What could be the long-term impact, if any, on users of an application?

Adversaries

• What, if any, is the nature of the direct impact of use of an application against adversaries?
• How and to what extent can the application's impact be reversed?
• How do considerations of symmetry apply? That is, what are the ELSI implications of an adversary pursuing the same technology development path as the United States?
• In the long term, what is the impact of an application on adversary behavior and perceptions?
—How and to what extent could an adversary develop similar capabilities? What is the time scale on which an adversary could do so? How could an adversary use these capabilities? What advantages could an adversary gain from using these capabilities free of legal and ethical constraints?
—What countermeasures might an adversary take to negate the advantages conferred by the application in question? How long would it take for the adversary to obtain those countermeasures? How, if at all, could the developed countermeasure be worse in some way from an ethical standpoint than the application itself?
—How could the application affect the adversary's perception of the United States?
—What, if any, could be the application's effect on deterrence?
—What effect, if any, could U.S. restraint in pursuing a particular application have on inducing an adversary to exercise similar restraint?
—What, if any, opportunities for adversary propaganda could an application enable or facilitate?

Nonmilitary Users

• Law enforcement
—If the military application in question were deployed to support law enforcement operations, how and to what extent, if any, could such deployment raise ethical, legal, and societal issues that do not arise in the military context?
• Commerce
—How and to what extent, if any, could a commercial adaptation of the military application in question raise ethical, legal, and societal issues that do not arise in the military context?
• The general public
—How and to what extent could adaptations of a military application be used by ordinary citizens? What are the ELSI implications of such use?

Organizations

• How and to what extent, if at all, could a new military application influence or change traditional structures and mechanisms of accountability and responsibility for its use? How will the organization react to such tendencies? How, if at all, will accountability for the use of the application in question be maintained? Conversely, might the application make it less likely for someone in the lower ranks to raise questions about ethical use?
• Military organizations often place great value on personal bravery in combat. How and to what extent, if at all, could a technological application used in combat change such valuation?
• Promotions in many military organizations are sometimes based on command opportunities. How and to what extent, if any, could an application change command structures?

Noncombatants

• How and to what extent could an application affect noncombatants on and off the battlefield?
• How might the public at large perceive a given application?
• How and to what extent could an application affect future generations? And what might be the effects of its operation on those targeted by the application?

• How and to what extent could the operation of an application—especially large-scale operations—harm the environment?

Other Nations

• What, if any, could be the impact of a new military technology or application on political solidarity with the United States?
• How, if at all, could the technology or application raise questions about the strength of U.S. commitments to other nations or allies?
• What could be the impact, if any, of U.S. reluctance to share a technology or application with its allies?
• How, if at all, could a technology or application affect the willingness of allies and nonaligned nations to participate in coalition efforts with the United States if the latter uses this technology?
• How and to what extent, if any, could U.S. restraint in pursuing a new military application induce other nations to exercise similar restraint?
• How and to what extent, if any, could an application help to compromise human rights if used by another nation on its own citizens?

Questions of Relevance by Crosscutting Issue

Scale

• Societal scope
—How and to what extent, if any, could a change in the scale of deployment or use of a technology or application change an ethical calculation?
—How and to what extent, if any, are the costs of using a particular application transferred from its immediate users to other entities?
—If an application becomes successful because of the increased functionality it affords to its users and such functionality becomes essential for individuals participating in society, how and to what extent, if any, can the costs of obtaining an essential application be made broadly affordable so that all individuals can obtain its benefits equally?
• Degree of harm
—How and to what extent, if any, does the degree of inadvertent or undesirable harm compare to the benefits obtained from using that application?

• The nature of the activity
—How does the scale of ethical, legal, and societal issues differ along the continuum from basic research to use of an application? How do the stakeholders and their interests change?
• Timing considerations
—What are the ELSI considerations in weighing short-term benefits against long-term costs, and how does the scale of such benefits and costs affect these considerations?

Humanity

• How and to what extent, if at all, does a new military application compromise something essential about being human? How and to what extent, if at all, might users believe that the application is unethical?
• How and to what extent, if at all, is the application invasive of the human body or mind?
• How and to what extent, if at all, could use of an application tread on religiously or culturally sensitive issues?
• Does a technology threaten to cede control of combat capabilities to nonhuman systems to an unacceptable degree?

Technological Imperfections

• Who decides the appropriate safety requirements associated with a new application?
• On what basis are such decisions made?
• What, if any, are the tradeoffs between an application's functionality or use and the safety requirements imposed on it?

Unanticipated Military Uses

• What military uses are possible for the application or technology in question that go beyond the stated concepts of operation? What are the ELSI implications of such uses?

Crossovers to Civilian Use

• How and to what extent, if any, could civilian-oriented adaptations of military applications made widely available to citizens raise ethical and societal issues that do not arise in the military context?

• How fast should such military-to-civilian transfers of applications be made? What safeguards should be put into place before they are made? How should such safeguards vary with the technology involved?

Changing Ethical Standards

• If an application is intended to address a military issue that previously had to be addressed by humans, what is the minimum standard of performance that the application must meet before it is deemed acceptable for widespread use?
• How and to what extent, if any, does a new application create new ethical obligations to use it in preference to older applications that address similar problems but may raise ELSI concerns to a greater extent?

ELSI Considerations in a Classified Environment

• How can research in a classified environment be reviewed for ELSI purposes?
• What is the appeals process for challenging classification designations that may have been assigned inappropriately?

Opportunity Costs

• How should the value of an R&D effort be ascertained?
• Why is the proposed R&D effort more valuable than another effort whose cost and likelihood of success are comparable? On what basis should one program be chosen over another?
• How and to what extent does U.S. military effort in a selected R&D problem domain signal to adversaries that this domain may be a promising one for military applications?

Questions of Relevance by Source of Insight

Chapter 4 describes a number of different sources of ELSI insight, and the discussion includes illustrative ELSI-related questions that may be derived from considering each of those sources.

5.4.2 Utility of the Framework

Readers of this report who identify ethical, legal, and societal issues inherent in the scenario described above (Section 5.3.1) that do not derive from use of the framework may be dismayed by that fact. Such dismay would foreshadow material presented in Chapter 6, which argues that a comprehensive identification of ethical, legal, and societal issues associated with a given technology development is difficult indeed.

Put differently, the framework is itself a starting point for discussion and is not comprehensive. The framework provides some structure for thinking through various ethical, legal, and societal issues, but as the sampling of such issues across various technologies and applications suggests, it is not necessary to treat all issues in the framework as equally important for any given technology or application—judgment is necessary to make the most effective use of the framework. That is, different ethical, legal, and societal issues may come into play, or a given ELSI concern may be significant to varying degrees, depending on the technology in question. On the other hand, not considering any given element in the framework must itself be a thoughtful and defensible decision rather than a reflexive one—a good and plausible argument must be available as to why that particular element is not relevant.

As decision makers gain more experience with identifying and assessing ethical, legal, and societal issues, it should be expected that the content embedded in the framework will evolve. Years from now, it would be surprising indeed if the questions that policy makers posed regarding ethical and societal issues had not changed at all.

Policy makers might wish to use this framework for new or existing R&D programs or projects. In addition, it may be appropriate to apply this framework when some unanticipated application emerges. One might regard use of this framework as part of an ongoing process that lasts throughout the lifetime of a given program.

The purpose of this framework is not to impose compliance requirements on program managers, but rather to help them do their jobs better and to help ensure that basic American ethical values are not compromised. The analytical framework is necessarily cast in somewhat broad and abstract terms because it is designed to apply to most R&D programs; consequently, not all questions in the framework will necessarily be relevant to any specific technology or application.

Furthermore, although it may not be likely that a contentious issue identified through this framework will be resolved in a decisive or final manner, this fact is not an adequate rationale for dismissing or ignoring the issues. Honest, well-reasoned analyses are useful to policy makers, even if they might be incomplete, and such analyses can be supplemented or corrected through adaptive processes as additional knowledge is gained over time, as discussed in Chapter 6.

As for the framework itself, the number of stakeholder groups and the number of crosscutting themes described in this chapter are both large, reflecting the breadth of possible technologies whose ethical, legal, and societal consequences must be considered and the large number of interested parties, as well as the diverse nature of the concerns that any given stakeholder may bring to bear.

Indeed, the committee found that attempts to make these lists more concise—in general a worthwhile analytical goal—would constrain the intended broad applicability of the framework. In part, the framework fills the role of a checklist, a mechanism that is often used to remind decision makers to consider the possible relevance to the project at hand of a wide range of issues that may not be related to each other.

The framework provides information about ethical foundations and approaches that many people and organizations find useful in considering difficult questions about research for technological innovations, without choosing a particular orientation from among them. This approach recognizes that weighting different ethical constraints and opportunities is difficult and does not lend itself to an algorithmic decision-making procedure. Under some set of specific circumstances and technological characteristics, certain criteria may have priority, whereas under a different set of circumstances, different criteria may have priority.

At the level of generality at which this framework is cast, a few caveats are necessary. First, a full consideration of ethical issues sometimes produces a cacophony of methodologies and perspectives that leads to dissonance and controversy. Second, "societal" issues span such a broad range of possibilities that attempts to limit their scope inevitably generate questions about why this issue or that issue was included or excluded. Third, decision makers will surely face tradeoffs, satisfying no stakeholder fully in any ethically or societally controversial enterprise. Fourth, the framework does not provide a methodology for resolving or settling competing ethical claims, for choosing between ethical theories, or for providing specific answers to ethical questions, although it does call for decision makers to attend to a variety of ethical positions and approaches.

At the same time, the framework does not assume that "anything goes," and it posits that through deliberation and discussion, it is often possible to identify initial ethical positions that are more well grounded and defensible or less so. Further deliberation and discussion may well lead to evolution in these initial positions and decisions. Because such discussion increases the likelihood that major ethical, legal, and societal concerns will be identified before any given technology R&D program or project gets underway, and because casting the initial net broadly rather than narrowly will help to limit ELSI-related surprises, the committee believes that such a discussion is worthwhile as a part of any ELSI assessment.

The framework above is useful primarily for bringing to the surface ethical, legal, and societal concerns that would not otherwise have been apparent to decision makers and program managers.

The ELSI-related questions included within the framework are intended to help decision makers develop useful knowledge on a variety of ethical, legal, and societal issues regarding specific military science and technology programs and projects. The framework was developed to apply to decision making in a U.S. context, although decision makers and program officials in other nations may nonetheless find parts of it useful.

In the end, the use of this framework can only provide input to decision makers, who will have to make judgments about how, if at all, to proceed with a particular R&D program or project, and such judgments should be undertaken after the decision makers have examined the issues posed by the framework rather than before. Different individuals may develop different answers to the various questions raised by the framework about a given technology, but the important aspect of this process is that the questions be asked and that a discussion take place.

This framework does not substitute for other processes and procedures that may be applicable for other reasons. In particular, program managers are obligated to conduct their programs in accordance with applicable law and regulation (such as the Common Rule,22 which sets forth federal policy for the protection of human subjects used in research). Judgments about the compliance of a specific program with applicable laws are beyond the scope of this report, although the report draws on relevant national and international standards in its discussion.

22 See http://www.hhs.gov/ohrp/humansubjects/commonrule/index.html.

5.4.3 Identifying Fraught Technologies

Not all technologies or applications are equally fraught from an ELSI standpoint. Technologies or applications are likely to be highly fraught if they have one or more of the following attributes:

• A technology or application that is relevant to multiple fields (for example, an enabling technology or application) will almost surely have more ELSI impact in the long run than one whose scope of relevance is narrow.
• A technology or application whose operation has the potential to result in intended or unintended consequences that could cause harm to people on a very large scale is likely to raise more ELSI concerns than one without such potential.
• A technology or application that challenges traditional (and often religious) notions of life and humanity, or appears to do so, is likely to raise more ELSI concerns. Under this heading are some concerns that the Wilson Center report describes as concerns over "nonphysical" harms.23

23 The full report can be found at http://www.synbioproject.org/process/assets/files/6334/synbio3.pdf.

A technology or application for which one of these statements is true is worthy of special consideration and effort to understand ELSI concerns, and a technology or application for which more than one is true is even more worthy of such consideration. Examples from history that have all of these attributes in some measure might include genetic engineering and recombinant DNA research, and Chapter 2 highlights the current discussion of what synthetic biology, as a similar kind of research, might produce and how its potential benefits are accompanied by a range of ethical, legal, and societal issues that its proponents have worked hard to address.

5.4.4 Frequently Heard Arguments

Finally, it is helpful to address a number of frequently heard arguments about ethics as they apply to new military technologies. One common thread of the arguments discussed below is that they are often made with the intent or desire of cutting off debate or discussion about ethical issues.

• An argument. U.S. adversaries are unethical, and so ethics should not be a constraint in using advanced weaponry against them. Moreover, adversaries seek every advantage over the United States that they can obtain, and thus the United States, too, must do the same in any conflict with them.

Response. The United States has publicly stated a commitment to abide by certain constraints in how it engages in conflict regardless of how its adversaries behave; these commitments are embodied in domestic law that criminalizes violations of the Geneva Conventions by the U.S. armed forces and in certain treaties that the United States has signed and ratified. The real question is not whether we constrain ourselves ethically but how, under what circumstances, and with what decision-making procedures we do so.

• An argument. U.S. adversaries will pursue all technological opportunities that serve their interests, and if the United States does not pursue those opportunities as well, it will wind up at a military disadvantage.

Response. From the standpoint of decision makers, there is a world of difference between the possibility that technology X could provide military advantages and a clear demonstration that technology X does provide military advantages in specific and important operational scenarios. That is, the latter provides a proof of principle that technology X is worth a significant investment. This point suggests that in some cases it may make sense to separate decisions about exploring the value of a technology (a preliminary step) from decisions based on demonstrating how it can be used to confer military advantages (a more decisive step), and to make such decisions separately.

• An argument. We don't know the significance of technology X, so we must work on it in order to understand its implications, and we would be unwise to give up on it without knowing if and how it might have value to the United States.

Response. This argument poses a false choice between cessation of all investigatory work on X and proceeding to work on X without any constraints at all. In fact, there are a variety of choices available between these two extremes, the most significant of which is something along the lines of "proceed, but carefully." Intermediate choices are addressed in Chapters 4 and 5 and in the recommendations made in Chapter 8.

• An argument. Consideration of ethical, legal, and societal issues will slow the innovation process to an unacceptable degree.

Response. Although the argument is surely true in some cases, it is not necessarily true in all cases. For example, it depends on the nature and extent of such consideration. Moreover, consideration of ethical, legal, and societal issues is hardly the only aspect of the military acquisition process that can slow that process down. Finally, a small slowdown in the process up front may in fact be worth the cost if it helps to prevent a subsequent explosion of concern that takes program managers by surprise.

• An argument. Research on and development of defensive technologies and applications is morally justified, whereas work on offensive technologies is morally suspect.

Response. The categories of "offensive" and "defensive" technologies are not conceptually clear, because offensive technologies (that is, technologies that can kill or destroy) can be used for defensive purposes, and, similarly, defensive technologies (that is, technologies that prevent or reduce death or destruction) can be used for offensive purposes.

An example of the first is a defender's use of an offensive weapon to destroy an incoming offensive weapon—in this case, the defender uses its offensive weapon to prevent or reduce the death and destruction that the attacker's offensive weapon would otherwise cause. An example of the second is the use of a defensive system to protect an attacker that has launched a first strike—in this case, the attacker's possession of a defensive system enables the attacker to attack without fear of retaliation, thus increasing the likelihood that it will in fact attack. In short, the distinction between the two categories often fails in practice.

It should be stressed that the responses to the various arguments outlined above are not intended to dismiss those arguments out of hand. Each of the frequently heard arguments described above has at least a grain of truth that may be worth considering in some cases. At the same time, those grains of truth should not be amplified to the point that they render discussion of ELSI considerations illegitimate—the short responses to the frequently heard arguments are intended essentially as points of departure for further dialogue.