This chapter discusses sources of ELSI insight that might be relevant to considering the ethics of R&D on emerging and readily available (ERA) technologies in a military context. These sources include generalizable lessons arising from consideration of the science and technologies described in Chapters 2 and 3; philosophical ethics and existing disciplinary approaches to ethics; international law; social sciences such as anthropology and psychology; scientific and technological framing; the precautionary principle and cost-benefit analysis; and risk communication. The final section describes how these sources of insight might be used in practice. Also provided in this chapter is some background necessary for understanding the different kinds of ethical, legal, and societal issues that arise in Chapter 5.
A note on terminology: throughout this report, the terms “cost” and “benefit” are used in their broadest senses—all negative and positive impacts, whether financial or not.
Applications of most of the technologies described in Chapters 2 and 3 raise ethical, legal, and societal issues. Some of these issues are new; others put pressure on existing ELSI understandings and accommodations that have been reached with respect to more traditional military technologies. As a historical matter, such understandings have generally
been reached through a process in which society has addressed the ELSI implications of a new application or technology because its emergence has forced society to do so. Only rarely have ELSI implications been addressed prior to that point.
Because new technologies provide new capabilities and also allow old activities to be performed in new ways, situations can arise in which existing policy does not provide adequate guidance—giving rise to what Moor has characterized as a policy vacuum.1 But in practice, the vacuum involves more than policy—such situations also challenge existing laws, ethical understandings, and societal conventions that may have previously guided decision making when “old” technologies were involved. In a military context, it may be the existence of real-world hostilities that pushes policy makers to fill the policy vacuum.
Developing new ELSI understandings and accommodations is a fraught and complex process. For example, the technical implications of a new application may not be entirely clear when it first emerges. The intellectual concepts underpinning existing understandings may have ambiguities that become apparent only when applied to situations involving the new applications. An analogy used to extend previous understandings to the new situation may be incomplete, or even contradict the implications of other analogies that are used for the same purpose. As a practical matter also, new situations provide antagonists with the opportunity to reopen old battles over ethical, legal, and societal issues, thus potentially upending previously reached compromises on controversial issues.
In some cases, an R&D activity may be inherently suspect from an ELSI perspective. For example, advances in genetic engineering may someday enable the development of pharmaceutical agents that act more effectively on individuals from certain ethnic groups. Although such agents might afford significant therapeutic benefit to members of those ethnic groups, the underlying science might also be used by a rogue state to harm those groups.2 Thus, R&D aimed at developing agents that have differential effects on various ethnic groups, whether or not intended for use in conflict, immediately raises a host of ELSI concerns.
In other cases, an application’s concept of operation is a central element in an ELSI analysis of that application. In general, an application of a given technology is accompanied by a concept of operation that articulates in general terms how the application is expected to be used, and it may be an application’s concept of operation rather than the application itself that raises ethical, legal, and societal issues. A system with lethal capabilities may have “selectable” modes of operation: fully autonomous operation of its lethal capabilities; human-controlled operation of its lethal capabilities; and target identification only. A concept of operations for the fully autonomous mode that does not adequately specify the circumstances under which it may be activated may well be suspect from an ELSI perspective.

1 James H. Moor, “Why We Need Better Ethics for Emerging Technologies,” Ethics and Information Technology 7:111-119, 2005.

2 The possibility that such weapons might be used was introduced in the professional military literature as early as 1970. See Carl Larson, “Ethnic Weapons,” Military Review 50(11):3-11, 1970.
An application’s practical value helps to shape the development of new ELSI understandings and accommodations. If an application turns out to have a great deal of practical or operational value, an ELSI justification may emerge after that value has been established. Conversely, if an application has little operational value, ELSI-based objections will seem more powerful and may become part of the narrative against that application.
For example, the emergence of new weapons technologies often sparks a predictable ethical debate. Regardless of the actual nature of the weapon, some will argue that a new weapon is ethically and legally abhorrent and should be prohibited by law, whereas others will point to the operational advantages that it confers and the ethical responsibility and obligation to provide U.S. armed forces with every possible advantage on the battlefield. Sometimes this ethical debate ends in a consensus that certain weapons should not be used (e.g., weapons for chemical warfare). In other cases, existing ELSI understandings are eroded, undermined, or ignored (as was the case with the London Naval Treaty of 1930, which outlawed unrestricted submarine warfare but subsequently was abandoned for all practical purposes). But the point is that operational value has often made a difference in the outcome of an ELSI analysis.
The above points are relevant especially in an environment of accumulating incremental change and improvement. Ethical, legal, and societal issues often become prominent when a new technology offers a great deal more operational capability than previous ones. But as a technology is incrementally improved over many years and becomes much more capable than it was originally, the capabilities afforded by the improved technology may render the originally developed ELSI understandings obsolete, moot, or irrelevant.
Perhaps the most important point to be derived from synthesizing across technologies is that technology-related ELSI debates are ongoing. One should expect such debates as technology evolves, as applications evolve, as threat/response tradeoffs change (e.g., nation-state warfare, guerrilla warfare, terrorist warfare, cyber warfare), and as societal perceptions and analysis change. In some cases, new ELSI debates will emerge. In other cases, the ELSI debates will be familiar, even if they are newly
cast in terms of the relevant change at hand. And in still other cases, the ELSI debates will sound familiar right down to the literal words being used, simply because a proponent of a particular ELSI perspective sees an opportunity to (re-)present his or her point of view.
Classically, Western moral philosophers have advanced two general kinds of moral theories that have proven useful in analyzing moral problems. One kind of theory, consequentialism (of which utilitarianism is the best-known form), looks at the consequences of actions and asks, for example, which actions will provide the greatest net good for the greatest number of people when both harms and benefits are taken into account. Thus, an action is judged to be right or wrong by its actual consequences. Consequentialism allows the ranking of different actions depending on the outcomes of performing them.
A second kind of theory, deontological ethics, judges the morality of actions in terms of compliance with duties, rights, and justice. Examples are following the Ten Commandments or obeying the regulations spelled out in a professional code of ethics. The morality of killing or lying would be decided based on the nature of the act and not on its results or on who the actor is. That is, the act of killing an innocent person, for instance, would under some versions of deontological ethics be categorically wrong in every circumstance. Other versions of deontological ethics allow for some ranking of conflicting duties and therefore are less categorical.
In many cases, persons acting on the basis of either of these theories would view the rightness or wrongness of a given action similarly. In other cases, they might well disagree, and philosophers have argued extensively in academic treatises about the differences that may arise. In practice, however, few people act for purely deontological or purely utilitarian reasons, and indeed many ethical controversies reflect the tensions between these theories. For example, Party A will argue for not doing X because X is a wrong act that cannot be justified under any circumstances, whereas Party B will argue for doing X because, on balance, doing X results in a greater good than not doing X.
Sometimes these different approaches work nicely together in generating a more ethical outcome. Consequentialist ethics allow for managing a complex ethical situation to mitigate its negative effects. In some cases, the rapid pace of a program may give rise to concerns that certain stakeholders will not have a fair chance for input into a decision-making process (a deontological ethical concern). Slowing the program or building in certain checkpoints may address some of these concerns. In such cases, the issue may not be so much whether or not to do something, but rather when it should be done.
A third perspective on philosophical ethics is called virtue ethics—this perspective emphasizes good personal character as most basic to morality. People build character by adopting habits that lead to moral outcomes. Good character includes being trustworthy, helpful, courteous, kind, and so on. Under this theory, a scientist with good character will not fabricate data or exaggerate outcomes in her published research. In a military context, an example of virtue ethics is the set of core values articulated by the U.S. Army for soldiers: loyalty, duty, respect, selfless service, honor, integrity, and personal courage.3 Actions or behavior that compromise one or more of these values are to be avoided.
Perhaps related to virtue ethics is the body of moral beliefs found in specific religions that often prescribe what should count as “good” and what individuals should, or should not, do. Specific notions such as what is humane or evil; what constitutes human nature; compassion; peace; stewardship; and stories of creation are often closely linked to religious worldviews. The discussion of the laws of war below notes that the major religions of the world are not silent on questions related to war and peace, civilian and military involvement in conflict, and so on, and further that there are some commonalities to the philosophical approaches taken by those religions. But answers to questions involving such concepts may well vary according to the specific religions in question, and a serious examination of the ethics involving conflict or technologies to be used in conflict may require a detailed look at those religions. A detailed examination of what various religions say about such matters is beyond the scope of this report, and thus apart from acknowledging that religion plays an important role in the formulation of answers, the role of any specific religion is not addressed in this report.
Some relevant questions derived from philosophical ethics include the following:
• On what basis can the benefits and costs of any given research effort be determined and weighed against each other, taking into account both the research itself and its foreseeable uses?
• What categorical principles might be violated by a research effort, again taking into account both the research itself and its foreseeable uses?
• How and to what extent, if any, might a research effort and its foreseeable uses compromise the character and basic values of researchers or military users?
• How and to what extent, if any, does a research effort implicate shared ethical or moral concerns of major religious traditions?

3 See, for example, http://www.history.army.mil/lc/the%20mission/the_seven_army_values.htm.
Just as specialization in general areas of science and engineering has become necessary and commonplace, the same is true for ethics. The sources of modern-day ethics continue to evolve, and ethical perspectives are dynamic. For example, new theoretical orientations coming from communitarian ethics raise and address issues for which the moral theories described above are not seen to provide sufficient guidance.4
New subfields of ethics, specializing in practical and professional ethics, are now commonplace, each addressing the issues and problems relevant to a particular area. These subfields include biomedical ethics, engineering ethics, and information technology ethics, among others.
All of these specializations are concerned with examining, and assisting in, the particular forms of moral analysis and decision making that arise within their domains, and sometimes between domains.
The field of biomedical ethics (bioethics) has developed over several decades and encompasses medical ethics, research ethics, and concerns over the implications of biomedical research. The field is interdisciplinary, incorporating work from law, medicine, philosophy, theology, and social science, and its boundaries are indistinct, often overlapping with law, public policy, and philosophy. The field initially focused on the ethics of research with human subjects, and numerous key events in medicine and biomedical research have shaped the development of the field’s basic principles.
The initial discussion of the ethics of human subjects research produced one of the primary standards of bioethics: informed consent. In 1947, the Nuremberg trial of Nazi doctors spurred legal discussions of consent and examinations of medical codes of ethics. Although the tribunal’s ruling relied on a standard of informed voluntary consent, it had little initial direct impact on U.S. medical ethics.5 The subsequent 1964 Declaration of Helsinki from the World Medical Association brought the issue of achieving informed consent in medical research to the attention of the U.S. medical community, and the declaration was incorporated into the professional codes of U.S. physicians.6 The difficulties of achieving and establishing standards for informed consent have been a consistent focus of bioethics. With the discovery of cases of abuses of human subjects throughout the 1960s and 1970s, the field was pushed to adopt stricter standards both for informing patients and research subjects and for ensuring voluntary consent.
Henry Beecher’s 1966 article in the New England Journal of Medicine,7 in which he described numerous ethical abuses of patients by physicians and researchers, drew attention to physicians’ behavior and raised concerns about physician authority. Specific cases, some identified by Beecher, focused attention on the issue of obtaining informed consent in medical research and on the tension between advancing medical knowledge and not harming patients. These cases included the following:
• The Fernald School experiments. Mentally disabled children were fed radioactive calcium in their meals to learn about the absorption of calcium.
• The Jewish Chronic Disease Hospital. Terminally ill patients were injected with live cancer cells to learn about human ability to reject foreign cells.
• The Willowbrook State School. Children in the state school were deliberately given hepatitis in order to learn more about the virus and control the spread of the disease in the hospital.
• The Tuskegee Syphilis Study. African American men with syphilis were followed for over 40 years and denied treatment (penicillin) once it was available in order to learn about the disease progression.
5 Jay Katz, “The Consent Principle of the Nuremberg Code: Its Significance Then and Now,” The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation, George J. Annas and Michael A. Grodin, eds., Oxford University Press, New York, 1992.
6 Susan E. Lederer, “Research Without Borders: The Origins of the Declaration of Helsinki,” pp. 199-217 in Twentieth Century Ethics of Human Subjects Research: Historical Perspectives on Values, Practices, and Regulations, Volker Roelcke and Giovanni Maio, eds., Franz Steiner Verlag, Stuttgart, 2004; Jonathan D. Moreno and Susan E. Lederer, “Revising the History of Cold War Research Ethics,” Kennedy Institute of Ethics Journal 6(3):223-237, 1996.
7 H.K. Beecher, “Ethics and Clinical Research,” New England Journal of Medicine 274(24):1354-1360, June 16, 1966, available at http://whqlibdoc.who.int/bulletin/2001/issue4/79(4)365-372.pdf.
All of these cases involved problems with the informed consent process, including inadequate disclosure of information and questions about how voluntary the consent actually was.
The field of bioethics also developed principles around medical care, which have their roots in medical ethics and the physician-patient relationship. David J. Rothman has argued that the issues of informed consent and the resulting push for regulation in human experimentation overflowed into medical care during the 1960s.8 Whatever the cause, during the 1960s the physician-patient relationship was reconsidered and physician authority in making medical decisions was questioned. The results were calls for patient autonomy and an emphasis on physicians’ truthfully informing patients of their condition, rather than paternalistically shielding patients from the realities of their illnesses. These changes in norms emphasized personal autonomy and truth-telling, and were spurred by various developments in medical technology and experimental medical treatments.9 Organ transplantation and heart-lung machines raised questions about when death occurred and about patients’ rights to request withdrawal of care or deny treatment. Kidney dialysis and organ transplantation raised questions about the just allocation of limited resources, specifically asking if physicians should be the only ones making these decisions and how the decisions should be made.
The field of bioethics also considers the impacts of scientific and technological developments on social morality. During the field’s development, research and advances in genetics and in vitro fertilization drove the field to think about the effects such developments have on society and its norms. A growing number of tests for genetic diseases raised issues of personal autonomy and genetic privacy, as well as claims of practicing eugenics. The development of in vitro fertilization in the 1970s and 1980s raised questions, for the first time, argued Alta Charo, about what was right and wrong regarding the manipulation of human embryos and about how to define personhood.10
In 1979, the first federal bioethics commission, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, formalized the basic principles of bioethics in the Belmont report (Box 4.1). The commission was charged with focusing on the ethics of research on or involving human subjects; however, the moral principles it outlined have been applied, augmented, and adapted by a number of commentators and analysts not just to human subjects research but to all aspects of bioethics, including medical care, the societal impacts of biotechnology and life sciences research, and other domains as well.11

8 David J. Rothman, Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making, Basic Books, New York, 1991.

9 Alta Charo, “Prior ELSI Efforts—Biomedical/Engineering Ethics,” presentation to the committee, August 31, 2011.

10 Alta Charo, “Prior ELSI Efforts—Biomedical/Engineering Ethics,” presentation to the committee, August 31, 2011.
Since the Belmont report, the biomedical ethics field has explored how these principles apply to specific areas of medicine and research, including end-of-life care, genetics and biotechnology, health systems, global health, nanotechnology, stem cell research, assisted human reproduction, gene therapy, cloning, and health care policy. Notably, in 1988, when James Watson launched the National Institutes of Health’s Human Genome Project (HGP), he also announced that 3 percent (later increased to 5 percent) of the funding would go to research on the ethical, legal, and societal issues associated with genetics, which is where the term “ELSI” originated. HGP-supported ELSI research focused the field of bioethics on issues related to genetics. In addition, the support for ELSI research funded centers for bioethics across the country, which enabled the field to spread and produced more scholars and researchers educated in bioethics or working as bioethicists. This NIH-supported genetics ELSI research continues today.
Questions of interest in biomedical ethics include the following:
• How do standards for achieving informed consent change with different populations? Do different stresses on volunteers or patients alter the ability to achieve informed consent?
• What kinds of inducements overwhelm voluntarism? What protections are necessary to maintain a person’s voluntary choice in decision making?
• How should public good be weighed against risks to individuals?
• How should research populations be chosen to address issues of social justice while balancing the vulnerability of populations?
• What obligations for truth telling exist in research? Are there justifications for not telling the whole truth or leaving patients or volunteers in the dark?
• What impacts do conflicts of interest have on research results and participants’ involvement? How can conflicts of interest be resolved, or must they be avoided entirely?
• When and how do privacy issues and the collection of data negatively affect autonomy?
• How do cultural perspectives alter bioethics standards? How flexible should bioethics standards be in response to different cultures?

11 See, for example, Amy Gutmann, “The Ethics of Synthetic Biology: Guiding Principles for Emerging Technologies,” The Hastings Center Report 41(4):17-22, 2011, available at www.upenn.edu/president/meet-president/ethics-synthetic-biology-guiding-principles-emerging-technologies; David Koepsell, “On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D,” Science and Engineering Ethics 16(1):119-133, 2010, available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2832882/; David Koepsell, Innovation and Nanotechnology: Converging Technologies and the End of Intellectual Property, Bloomsbury Academic, New York, 2011; and U.S. Department of Homeland Security, “Applying Ethical Principles to Information and Communication Technology Research: A Companion to the Department of Homeland Security Menlo Report,” GPO, January 3, 2012, available at http://www.cyber.st.dhs.gov/wp-content/uploads/2012/01/MenloPrinciplesCOMPANION-20120103-r731.pdf.

Box 4.1 Fundamental Principles of Biomedical Ethics

The Belmont report of 1979 articulated three principles to govern the conduct of biomedical research: respect for persons, beneficence, and justice.1 In the discussion below, which is based on a discussion of biomedical ethics by Thomas L. Beauchamp and James F. Childress, a fourth principle is added: nonmaleficence.2 From each of these principles are drawn obligations and rules for how to act.

• Respect for autonomy. Autonomy is defined as including two essential conditions: “(1) liberty (independence from controlling influences) and (2) agency (capacity for intentional action).”3 This principle holds that the autonomy of people should not be interfered with. Autonomy should be respected, preserved, and supported. In the case of health care and human subjects research, the principle obliges physicians and researchers to obtain informed consent, tell the truth, respect privacy, and, when asked, help others make important decisions. Discussions in biomedical ethics around how to abide by this principle often focus on a few areas: evaluating capacity for making autonomous choices, the meanings and justifications of informed consent, disclosing information, ensuring voluntariness, and defining standards for surrogate decision making.
• Nonmaleficence. This principle asserts “an obligation not to inflict harm on others.”4 It does not require that a specific action be taken, but rather that one intentionally refrain from taking action that will either cause harm or impose a risk of harm. The specific rules drawn from this principle include: do not kill, do not cause pain or suffering, do not incapacitate, do not cause offense, and do not deprive others of the goods of life.5 When applied to health care and research, discussion over the implementation of this principle focuses on distinctions and rules for nontreatment, quality-of-life considerations, and justifications and questions regarding allowing patients to die or arranging deaths. This principle is most closely connected with the rule in physicians’ codes of ethics that they “do no harm.”
• Beneficence. Closely related to the principle of nonmaleficence, this principle is “a moral obligation to act for the benefit of others.”6 It includes two aspects: (1) positive beneficence, which requires one to take action to provide benefits, and (2) utility, which requires that one balance benefits and costs to ensure the best result. The more specific rules drawn from this principle include: protect and defend the rights of others, prevent harm from occurring, remove conditions that will cause harm, help persons with disabilities, and rescue persons in danger.7 In reference to human experimentation, this principle obliges researchers and institutional review boards to weigh the risk to subjects and to ensure that the risk is minimal unless there is a direct benefit to the subject. In the case of medical care, this principle obliges physicians to promote patient welfare.
• Justice. An obligation to treat people fairly, equitably, and appropriately in light of what is due or owed to them. This principle includes the concept of distributive justice, which refers to the just distribution of materials, social benefits (rights and responsibilities), and/or social burdens.8 Determinations of what is a morally justifiable distribution vary based on different philosophical theories; for instance, a utilitarian view emphasizes maximizing public good, whereas an egalitarian view emphasizes equal access to goods. In the medical context this principle focuses on rules regarding access to decent minimal health care, such as emergency care, the allocation of health resources, and the rationing of and priority setting for resources and treatments. Regarding human experimentation, this principle is often used to ensure that vulnerable populations are not exposed to more risk than other populations.
1 The Belmont report can be found at http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.
2 Thomas L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 5th Edition, Oxford University Press, New York, 2001.
3 Ibid., p. 58.
4 Ibid., p. 113.
5 Ibid., p. 117.
6 Ibid., p. 166.
7 Ibid., p. 167.
8 Ibid., p. 226.
The academic field of engineering ethics developed in the United States in the early 1970s, alongside other inquiry into issues of practical and professional ethics. Biomedical ethics was perhaps the earliest such field to gain both scholarly and public interest; engineering and research ethics soon followed.
Controversies concerning engineering catastrophes and research misconduct likely fueled public demands and professional responses. Work in the field accelerated when ABET, the accrediting body for engineering and technology programs at colleges and universities, initiated a requirement in 1985 that engineering students demonstrate an understanding of ethics for the profession and its practice. Current National Science Foundation (NSF) and NIH requirements for ethics mentoring of postdoctoral students and ethics education for graduate and undergraduate students have also stimulated activity.
Initially, research by philosophers, often with engineers as collaborators, focused on ethical problems from the perspective of individual engineers. Ethical theory can provide useful conceptual clarification of the ethical dimensions of possible action. In addition, codes of ethics and other guidance concerning human development and human rights, from professional societies and national and international bodies, provided other resources, as did laws and other regulations.
More recent research includes historians, social and behavioral scientists, and science and technology studies scholars, and examines complex systems and collective as well as individual responsibility. These issues involve the responsibilities of engineers in organizations and the collective responsibilities of engineering societies and of the organizations and networks that employ engineers and that develop, promote, and regulate engineering innovations. They also involve issues of design and implementation, raising the question of whether traditional theories and approaches in ethics must be revised, augmented, or cast aside in light of the difficulties that complexity creates for development and management.
An important resource for the field is case studies, which take numerous forms and have many uses. For example, the case descriptions, commentaries, and findings of the Board of Ethical Review of the National Society of Professional Engineers are a rich source of material for engineers faced with ethical problems and for scholars wishing to examine them. Cases can be hypothetical or historical, provide positive or negative role models, focus on everyday or rare and large-scale events, or emphasize individual or organizational actions. They can take a prospective or retrospective view—that of the agent or the judge. They can describe value conflicts or problems of drawing lines between what is permissible, unacceptable, recommended, or forbidden. Cases can be simplified to illustrate a particular concept (called thin description) or can illustrate real-life messiness so as to demonstrate how people may legitimately arrive at different solutions. Finally, cases may illuminate a problem from the perspective of an individual engineer, or they may document and analyze an issue that can be resolved only at an organizational or societal level.
As noted above, the field of engineering has also begun to grapple with the implications of complexity for individual and organizational responsibility. Some scholars believe that the increasing complexities require new ethical theories, concepts, and approaches if they are to be resolved, whereas others hold that further elucidation of already extant
understandings can handle most such problems, while acknowledging that new policies and practices may be required. This is a bifurcation in views that seems common to considerations of ethics in a great many fields of science, engineering, and technology.
Some of the questions of interest in engineering ethics include the following:
• How can the domain of professional engineering responsibility be legitimately circumscribed? Are there ethical commonalities covering all engineering fields, or is different field-specific guidance needed?
• How can engineered systems identify and address issues of social and societal inequities? Who has responsibilities to do this; who shares these responsibilities?
• How should engineers participate in societal determinations about promoting innovation? Who should bear the costs and risk of failure? Are there ethically better and worse ways to distribute benefits? Who should decide?
• Recognizing both that R&D on some military technologies is necessary for the safety of the nation and that engineers have paramount responsibility for health, the environment, and safety, are there engineered systems that are too complex or dangerous to introduce in society?
• How should engineers and the engineering profession contribute to a future that is economically, environmentally, and socially sustainable?
• How, and to what extent if any, do engineering and engineering ethics translate across political, geographical, and generational boundaries?
• What are legitimate social expectations concerning the development and use of engineered systems and services, vis-à-vis feasible control and due care? Should ethical distinctions be made between deliberate and accidental misuse? How should legal, educational, and professional institutions address the limits of “good enough” engineering and problems of unintended uses and users?
Information Technology Ethics
Scholars have advanced a number of views on the nature of information technology ethics.12 One view is that it is simply the application of traditional ethical theories (e.g., consequentialism, deontology) to problems associated with the use of information technology, some of which have manifestations even without information technology and others of which come into existence because of information technology. A variant of this view is that the latter category (problems that exist because of information technology) is vanishingly small, and that for the most part what appear to be new ethical problems are really old problems with a different technological underpinning.
12 Much of the discussion in this section is based on Terrell Bynum, “Computer and Information Ethics,” Stanford Encyclopedia of Philosophy, 2008, available at http://plato.stanford.edu/entries/ethics-computer/.
Others believe that information technology results in entirely new ethics problems that would not exist in the absence of such technology. For example, Walter Maner noted that ethical analysts are often unable to find a satisfactory noncomputer analogy to a problem arising with information technology—a fact that for Maner testified to the uniqueness of problems in information technology ethics. In this context, “lack of an effective analogy forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us.”13
Still others argue that information technology ethics is concerned with ethical problems that become apparent or manifest only when unprecedented IT applications emerge. These problems arise because IT provides new capabilities and thus new possibilities for action—and either there are no policies or guidance in place that address the new possibilities or existing policies and guidance are inadequate. (For example, hiding information deep inside a computer system’s file structure is no longer a viable method for protecting it, since search engines can find such information no matter where it is located as long as there is at least one path, however obscure, to it; thus, privacy policies based on hiding information in obscure locations are less viable than they once were.) IT ethics address what constitutes ethical behavior in new cases.
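The point about hiding information in obscure locations can be illustrated with a minimal sketch (all file and directory names here are hypothetical): as long as at least one path to a file exists, a recursive traversal will find it, no matter how deeply it is nested, so obscurity of location offers no real protection.

```python
import os
import tempfile

def find_file(name, top):
    """Return the full path of the first file called `name` under `top`, or None."""
    for dirpath, _dirnames, filenames in os.walk(top):
        if name in filenames:
            return os.path.join(dirpath, name)
    return None

# Bury a file five levels deep in a freshly created temporary tree.
root = tempfile.mkdtemp()
deep = os.path.join(root, "a", "b", "c", "d", "e")
os.makedirs(deep)
secret = os.path.join(deep, "secret.txt")
with open(secret, "w") as f:
    f.write("hidden")

# The search still finds it: depth of nesting provides no protection.
print(find_file("secret.txt", root) == secret)  # prints True
```

Real search engines and desktop indexers do essentially this at scale, which is why privacy policies premised on obscure placement have eroded.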
Last, some regard IT ethics as a subset of professional ethics—what are the ethical responsibilities of individual practitioners or researchers in the field of IT? For example, the ACM and IEEE-CS Software Engineering Code of Ethics and Professional Practice calls on software engineers to commit themselves to the health, safety, and welfare of the public through adherence to eight principles14—acting consistently with the public interest; acting in a manner that is in the best interests of their client and employer consistent with the public interest; ensuring that their products and related modifications meet the highest professional standards possible; maintaining integrity and independence in their professional judgment; subscribing to and promoting an ethical approach to the management of software development and maintenance; advancing the integrity and reputation of the profession consistent with the public interest; being fair to and supportive of their colleagues; and participating in lifelong learning regarding the practice of their profession and promoting an ethical approach to the practice of the profession.
13 Walter Maner, “Unique Ethical Problems in Information Technology,” in Terrell Bynum and S. Rogerson, eds., Science and Engineering Ethics 2(2):137-154, 1996.
Some of the topics considered under the rubric of information technology ethics or computer ethics include the following:15
• Computers in the workplace, e.g., what is an ethical policy for employee use of computers in the workplace?
• Computer crime, e.g., how does a crime committed with the use of a computer differ, if at all, from a similar crime that is committed without a computer?
• Privacy and anonymity, e.g., what are the consequences (both incremental and cumulative) for privacy and anonymity of any given deployment of information technology?
• Intellectual property, e.g., how and to what extent, if any, should intellectual property rights be associated with software?
• Professional responsibility, e.g., what are the special ethical responsibilities of IT workers, if any, in the course of their employment?
• Globalization, e.g., how and to what extent should disparities in accessibility of information technology between “have” and “have-not” nations be addressed?
A common thread among the disciplinary ethics described above is the phenomenon of convergence among the technology disciplines. In this context, convergence means that the disciplines in question are to varying degrees becoming increasingly interdependent. To the extent that this is true, the different ethics of each discipline may—or may not—pose conflicts with each other.
International Law
Modern international law has its origins in the 1648 Treaty of Westphalia, which is commonly considered the beginning of an international system based on nation-states. At its root, the nation-state arrangement means that international law governs relationships between sovereign states, and that individual states have exclusive jurisdiction over events and matters in their own territories.
15 See Terrell Ward Bynum, “Computer Ethics: Basic Concepts and Historical Overview,” Stanford Encyclopedia of Philosophy, 2001, available at http://plato.stanford.edu/archives/win2001/entries/ethics-computer/.
Subsequent treaties and conventions relying on the framework provided by the Treaty of Westphalia (e.g., the various Geneva Conventions addressing armed conflict) share a common goal: to regulate certain armed activities among nations. Over the centuries, international law has sought to adapt to changing patterns of armed conflict while retaining its fundamental principles. The rights accorded by national sovereignty have been increasingly challenged by such changes.
In the 60 years following World War II, the world experienced a remarkable decline in interstate conflict, with internal armed conflict becoming by far the most common form.16 More recently, terrorism has emerged as a major threat, with increasing links between terrorists and organized crime in a number of settings. Internal conflict and terrorism/organized crime raise questions about how the international community can respond to threats arising within nations, especially in cases where nations lack the willingness or capacity to respond.
A growing list of international conventions, such as those against terrorism, piracy, and organized crime, addresses international security threats that cannot be readily met by national responses alone. Along with the arms control treaties discussed below in this section, these agreements call on their member states to enact national legislation to implement their provisions. For weapons of mass destruction, UN Security Council Resolution 1540, adopted in 2004, obliges member states to adopt measures to prevent terrorists or organized criminal groups from gaining access to weapons of mass destruction or the means to deliver them. The United States has actively supported many of these measures and has provided assistance to countries to help them adopt appropriate legislation.
The UN treaty process is not the only means through which treaties emerge. In some cases, groups of states come together to craft treaties. The Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction, also known as the Ottawa Treaty, is such an example. In the mid-1990s, widespread use of land mines in violation of traditional military practices17 prompted humanitarian organizations that could not carry out their missions in postconflict areas because of mine-related hazards to propose a ban on antipersonnel mines. When efforts to change the additional protocol to the Convention on Certain Conventional Weapons (usually acronymized as CCW) covering antipersonnel land mines to create a ban on land-mine use failed, a treaty was negotiated outside the UN framework—namely, the Ottawa Treaty. The treaty has 160 members, but a number of major nations and land-mine producers—the United States, Israel, India, Pakistan, Russia, and China—are not parties to the treaty. However, the UN treaty process did amend Protocol II to the CCW, for example to include internal as well as interstate conflict, and the United States, Israel, India, Pakistan, Russia, and China are parties to this protocol.18
16 This shift has occurred despite the fact that the number of nations belonging to the United Nations has almost quadrupled since its creation. Trends in various forms of armed conflict may be found on the Uppsala Conflict Data Program Web site at http://www.pcr.uu.se/research/UCDP/.
17 Traditional military practice calls for the marking of minefields and the subsequent clearing of those minefields by those who lay them. But in the 1990s, a number of military forces, both national and insurgent, were using mines more or less indiscriminately.
In addition, nations can and do come to international agreements outside of any treaty process. For example, the Global Partnership Against the Spread of Weapons and Materials of Mass Destruction is not a formal treaty;19 rather, it is a multilateral nonproliferation initiative created by the G-8 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and Russia) in 2002 to support the implementation of arms control treaties and customary international law. The members of the partnership (now 25 nations) fund and implement projects to prevent terrorists and other proliferators from acquiring weapons of mass destruction.
In the new international security context, the challenges of applying the modern law of armed conflict to nonstate actors are particularly vexing. In an era of international terrorism, the distinction between “inside a state” and “state-to-state” has been blurred, and legal systems (such as that of the United States) that draw a sharp distinction between law enforcement authorities that operate domestically and military forces that operate internationally have come under considerable pressure. To the extent that new emerging and readily available (ERA) technologies for military purposes are relevant to this new environment (e.g., when they are used by terrorists or to combat terrorists), the development and use of such technologies will challenge existing understandings about when and under what circumstances the use of lethal force is appropriate from legal and ethical standpoints.
The U.S. struggle against terrorism is beset by questions and uncertainties about whether this struggle should be governed by the laws of war or by the laws governing law enforcement. Domestic law enforcement authorities in the United States operate on the assumption that lethal force is an option of last resort to protect citizens from imminent harm, whereas military forces engaged in hostilities do not operate with such an assumption. In addition, this struggle is conducted against adversaries for whom national borders are irrelevant; how and to what extent are matters of national sovereignty relevant in such a struggle?
18 More specifically, Protocol II of the convention had prohibited the indiscriminate use of mines and their intentional use against civilians. It also requires that remotely delivered land mines have effective self-destructing and self-deactivating mechanisms. An amendment to Additional Protocol II, agreed to in 1996, extends the original Protocol II to apply to non-international armed conflicts as well as conflicts between states and to prohibit the use of antipersonnel mines that do not contain enough iron to be detected with standard demining equipment; it also regulates the transfer of land mines. See http://www.gichd.org/international-conventions/convention-on-certain-conventional-weapons-ccw/amended-protocol-ii/.
Thus, the post-9/11 security context adds another layer of stress on the traditional nation-state system. In addition, more states are seen as only inconsistently willing, or even able, to protect the rights of their citizens, and indeed may be seen as oppressors of their citizens. As a result, the conduct of such states has increasingly prompted international intervention in the internal affairs of individual nation-states, as in the recent cases of Iraq and Libya. Similarly, states increasingly lack the ability to restrain citizens who reach out to attack others, even when the targets of those attacks are nations and even when the attacks may strike with force of existential proportions.
At the highest level of abstraction, the ethics of war and peace can be divided into three major schools of thought—realism, pacifism, and just-war theory. Realists argue that nations, governments, and even individuals resort to war (or armed conflict) when such actions serve their interests, and by extension, that actions taken to serve vital state interests should not be constrained by ethical considerations. Pacifists argue that as a matter of ethics, war and armed conflict are never appropriate. Because neither of these positions is associated with stated U.S. policy, they are not discussed further in this report.
Just-war theory has existed in some form for many centuries. Just-war theorists—the first of whom came from religious and philosophical traditions rather than legal traditions—argue that war or armed conflict can be justified under some circumstances. That is, a state that uses force or violence against another state must have “good” reasons for doing so. The principle is relevant because it assumes that not using force or violence is the normative and preferred state of affairs, and that the use of force or violence is an unusual act that requires some justification. The set of ethical principles regarding justifications for using force or violence is known as jus ad bellum, and it answers the question, When is it permissible for a nation to use force against another nation? Another set of ethics known as jus in bello speaks to the question of what behavior is permissible for parties engaged in armed conflict.
20 The discussion of the law of armed conflict and of related material in this section is based largely on Brian Orend, “War,” Stanford Encyclopedia of Philosophy: Fall 2008 Edition, Edward N. Zalta, ed., available at http://plato.stanford.edu/entries/war.
The distinction between jus ad bellum and jus in bello is accepted in many ethical systems, although what specifically is permissible does vary. For example, in his presentation to the committee, Steven Lee noted that the major religions do accept this distinction. Further, they generally acknowledge two other important points. First, going to war should be an enterprise or an activity that does not do more harm than good, however “good” and “harm” are measured. Second, certain people (e.g., civilians) who might get caught up in armed conflict should be exempt from harm if possible. Neither of these points is absolute, and religions may differ in the weight or prioritization they give to these points under different circumstances. The cultural milieu in which a religion is embedded (e.g., an Islamic culture in East Asia as compared with an Islamic culture in Africa) is particularly important in this regard.
Within the Western tradition of jus ad bellum and jus in bello, there are a number of ethical principles underlying how the international law of armed conflict has been formulated. (The term “law of armed conflict” (LOAC) is used interchangeably with “laws of war.”)
Jus ad Bellum
Decisions about using force have ethical impact. In the formulation of Brian Orend,21 the Western tradition of jus ad bellum identifies six principles (just cause, right authority, right intention, reasonable hope, last resort, proportionality) that must be satisfied for war to be ethically justified. Four of these principles appear to have relevance for the development of technology for military purposes:
• Just cause addresses the reason for engaging in conflict. Some of the reasons offered include self-defense from external attack, defense of others from external attack, and protection of innocents from aggression. In a technology development context, the principle suggests that a distinction might be made between defensive and offensive technologies or applications. In practice, it rarely if ever happens that a particular technology application cannot be used for offensive purposes. (For example, any “defensive” technology might be used to blunt an adversary’s response, leaving the adversary in a weaker position.) Also, “self-defense” is sometimes interpreted to allow preemptive or anticipatory offensive action for defensive purposes.
21 Orend, “War,” 2008.
• Right authority addresses legitimacy and political accountability. According to the just-war theorists, individuals and other nonstate actors are not permitted to initiate war or armed conflict; only legitimate government authorities, acting in accordance with specified processes, can do so. In a technology development context, the principle might inhibit technology or applications that would facilitate nonstate initiation of armed conflict. Of course, the very premise of this report is that limiting access of nonstate actors to many emerging technologies of military importance will be increasingly difficult if not impossible.
• Last resort requires that a state may resort to war only if all less violent alternatives (e.g., negotiations and other nonviolent measures such as economic pressure) to resolving a conflict have proven fruitless. In a technology development context, the principle might suggest the desirability of developing nonviolent but coercive applications that could be used before violent force is used, and other nonviolent applications might be developed to reduce the likelihood of using force. It might also suggest the possible undesirability of technologies that increase the likelihood of a policy maker deciding to use force.
For example, nonlethal weapons are not explicitly designed for causing death and destruction, a fact that may lead policy makers and/or users to favor their use before exhausting other nonforceful options, such as negotiation. Remotely operated systems enable the projection of lethal military force without putting friendly forces at risk, a fact that may lead policy makers to have fewer qualms about the use of force. Cyber weapons used with high-quality tradecraft are, from a technical standpoint, inherently deniable, a fact that may lead policy makers to use such weapons when deniability is politically advantageous. Such factors, if operative, may lower the thresholds for the use of force by national leaders and/or by troops on the ground.
• Proportionality requires that the degree of violence expected by initiating armed conflict should be commensurate with the harm suffered. Moreover, the overall good (such as restoration of the status quo ante) must be worth the costs that will be incurred if armed conflict is begun. In a technology development context, the principle suggests that new and different kinds of harm caused by new weapons might have to be considered. Furthermore, the principle suggests that harm to all parties (including civilians) would reasonably be within the scope of consideration.
From an international legal standpoint, jus ad bellum is embodied today in the UN Charter, which generally prohibits “the use or threat of force” by nations (Article 2(4)) except under two circumstances. First, Articles 39 and 42 of the charter permit the Security Council to authorize uses of force in response to “any threat to the peace, breach of the peace, or act of aggression” in order “to maintain or restore international peace and security.” Second, Article 51 provides: “Nothing in the present Charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security.”22
What actions might constitute the use of force, the threat of force, or an armed attack, especially when weapons based on new technologies might be involved? Traditionally, an armed attack was the use of kinetic weaponry to cause a significant degree of death and destruction.
But cyber attack raises the possibility that a nation might be attacked economically (e.g., might be bankrupted) through cyber means without significant death or destruction. Mood-changing chemical agents that do no lasting harm to individuals raise the possibility that their use might not be considered a use of force (although it might be regarded as a violation of the Chemical Weapons Convention). Loss of privacy or loss of computer functionality for civilians is arguably collateral damage when cyber weapons are used, even if today’s interpretations of LOAC do not allow for that possibility. Given that there is no legal consensus on these terms even when traditional kinetic weapons are involved (there are only precedents whose applicability to new situations is often unclear), it should not be surprising that consensus may be lacking when new technologies are involved.
Considering jus ad bellum from an ethical standpoint raises additional issues by implicating actions that fall below the level of a use of force or an armed attack. That is, even if an unfriendly or hostile action may not rise to such levels, that action would still be subject to scrutiny with respect to the principles described above.
Jus in Bello
A premise of the law of armed conflict is that unnecessary human suffering during the course of conflict should be minimized even if violent conflict is inevitable from time to time. Again following Orend,23 jus in bello is also based on six principles: adherence to international law on the use (or nonuse) of certain weapons; discrimination between combatants and noncombatants (and immunity for the latter); proportionality; humane treatment for prisoners of war; nonuse of weapons or methods that are “evil in themselves”; and prohibition on reprisals. (Legal analysts also traditionally add military necessity to this list.) Several of these principles appear to have relevance for the development of technology for military purposes.
22 Article 51 is silent on whether actions taken in self-defense are permissible under other circumstances.
23 Orend, “War,” 2008.
• Discrimination between combatants and noncombatants (and immunity for the latter). Under this principle, weapons that kill or destroy or cause damage indiscriminately (in a way that cannot distinguish between protected civilian entities and legitimate military targets) may not be used. In a technology development context, the principle would forbid applications that cannot be discriminating in their application, and might impose requirements (or at least preferences) for capabilities that enable users to avoid harm to noncombatants. Chemical agents, certain nonlethal weapons (such as area denial systems), and certain cyber weapons may be regarded under some scenarios for use as indiscriminate in their targeting.
• Proportionality. The degree of violence used should be proportional to the military objective sought, and not excessive. In a technology development context, this principle might require that a weapon be capable of selectivity in the destruction it can cause.
• Humane treatment for prisoners of war. In a technology development context, this requirement might inhibit the development of tools for interrogation that might be regarded as inhumane.
• Prohibition of the use of weapons or methods that are “evil in themselves.” Arms control treaties (discussed below) that prohibit the use of certain kinds of weapons arguably address this category of weapons.
In addition, LOAC presumes that combatants are subject to a military chain of command. Responsibility for actions taken in war is assumed by military commanders and soldiers in a chain of command. Weapons that operate without explicit human direction raise questions about the ability of a military chain of command to maintain affirmative control over the actions of such weapons. In a technology development context, this principle might inhibit applications that call into question that chain of command.
LOAC is extensively, although not comprehensively, codified in the Hague Conventions, the 1977 Geneva Protocols, and a number of other conventions dealing with particular weapons (such as antipersonnel land mines and blinding lasers) and particular targets (such as cultural objects). Much of LOAC is still found in customary international law. Some LOAC violations are criminalized by the Rome Statute of the International Criminal Court.
The United States is a party to a number of these conventions. Some are regarded as reflecting, in whole or in part, customary international law, which until recently was almost universally regarded as incorporated into U.S. law and enforceable in U.S. courts. In addition, some LOAC violations are punishable as crimes under U.S. domestic law. The War Crimes Act of 1996, for example, sets forth criminal sanctions for grave breaches of the Geneva Conventions. Other actions are proscribed by U.S. criminal law but are not expressly described as violations of international law. All U.S. military personnel receive training in how to observe LOAC.
The discussion above relates to international armed conflict—armed conflict between nations. But international law also governs non-international armed conflict, which includes but is not limited to civil war.24
The distinction between international and non-international conflicts has always been troublesome. Common Article 3 and Protocol II to the Geneva Conventions are the only general measures addressing non-international conflicts. They contain many of the same protections for noncombatants as the rest of LOAC. In practice, states may treat both kinds of conflicts the same, and some prominent legal scholars argue that LOAC norms for international and non-international conflicts “have become nearly indistinguishable.”25
Application of these rules to conflicts between state and nonstate belligerents has also been troublesome. In the aftermath of 9/11, a number of analysts argued that the laws of war did not apply to the Taliban or members of Al-Qaeda,26 but they did not say what law, if any, would provide them with humanitarian protections. The search for protective principles continues today, as nations like the United States struggle, for example, to justify targeted killings based on the certain identification of individuals targeted and the imminence of the threat they pose.
In the struggle against international terrorism, nations continue to be bound by the transcendent principles of necessity, distinction, and proportionality, even if the effect of their application in a given case may be exceedingly difficult to predict. These same principles apply to the deployment and use of all kinds of weaponry. Sanctions for violations of these principles may be found in domestic criminal laws, including the Uniform Code of Military Justice. Punishment may be imposed by ad hoc international tribunals, military commissions, courts martial, or domestic courts.
24 More formally, non-international armed conflict is “armed confrontation occurring within the territory of a single State and in which the armed forces of no other State are engaged against the central government.” See Michael Schmitt, “The Manual on the Law of Non-International Armed Conflict With Commentary,” International Institute of Humanitarian Law, 2006, available at http://www.iihl.org/iihl/Documents/The%20Manual%20on%20the%20Law%20of%20NIAC.pdf.
25 Michael N. Schmitt, “Targeting and International Humanitarian Law in Afghanistan,” Naval War College International Law Studies 85:307, 308, 312, 323, 2009.
26 See, for example, John C. Yoo and James C. Ho, “The Status of Terrorists,” Virginia Journal of International Law 44:207, 2003.
A relevant ethical question derived from considering the law of armed conflict is the following:
• How and to what extent, if any, do the research effort and foreseeable uses of its results implicate the ethical principles underlying the law of armed conflict? For example:
—What is its impact on policy makers regarding their willingness to resort to the use of force?
—How and to what extent, if any, should the effects of an application be regarded as “harm” that implicates the law of armed conflict?
—How does it affect discrimination?
—How might it affect command responsibilities and authority?
International Human Rights Law
Human rights are restraints on the actions of governments with respect to the people under their jurisdiction. These rights may originate nationally (e.g., the civil and political rights granted under the U.S. Constitution), through international human rights treaties (e.g., the International Covenant on Civil and Political Rights), or through customary international law.
The Universal Declaration of Human Rights (UDHR) is a UN General Assembly declaration adopted in 1948. It is not a treaty, and therefore it is not binding on nations, although some provisions have become a part of customary international law (e.g., prohibitions against torture). However, the UDHR is sometimes cited as one basis for the existence of customary international law regarding human rights.
The UDHR covers such areas as prohibitions on torture and cruel, inhuman, or degrading treatment; freedom to freely seek, receive, and impart information and ideas; freedom to assemble peaceably; and freedom to move and reside freely within the borders of one’s state. In a technology development context, the UDHR might suggest special examination for technologies that governments could use to suppress or curtail the human rights of their citizens.
For example, Article 19 of the UDHR speaks to freedom of opinion and expression and explicitly includes the right to seek, receive, and impart information and ideas through any media and regardless of national borders. Thus, development of information technologies that could interfere with this right (e.g., technologies that could be used for censorship) potentially raises ethical issues. Article 13 recognizes freedom of movement, thus potentially raising ethical issues with respect to the development of technologies that can enable or facilitate tracking of individual movements.
The UDHR is not a treaty, but over time it has led to the creation of a wide range of legal instruments and customary international law. In 1966, the UN Commission on Human Rights produced the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. The two treaties contain most of the rights laid out in the UDHR and make them binding on those that have ratified the agreements. Taken together, the three documents are said to constitute the International Bill of Human Rights. The commitments embodied in the UDHR have “inspired more than 80 international human rights treaties and declarations, a great number of regional human rights conventions, domestic human rights bills, and constitutional provisions.”27
International human rights law shares many principles with LOAC. Many states regard human rights law, which in some respects is more protective than LOAC, as applicable in peacetime and in armed conflicts alike.28 However, the United States takes the position that during armed conflicts human rights law gives way to LOAC. If human rights law is intended to codify ethical issues related to human rights—and the discussion of the UDHR above and that of nonlethal weapons in Chapter 3 suggest that a number of the technologies considered in this report have implications for human rights—then assessments of ethical issues would do well to consider human rights as a source of insights.
A relevant ethical question derived from considering international human rights law is the following:
• How and to what extent, if any, do the research effort and the foreseeable uses of its results implicate the ethical principles underlying international human rights law and/or the UN Universal Declaration of Human Rights?
28 For a comparison between human rights law and the law of armed conflict, see International Committee of the Red Cross, “International Humanitarian Law and International Human Rights Law: Similarities and Differences,” 2003, available at http://www.ehl.icrc.org/images/resources/pdf/ihl_and_ihrl.pdf and “What Is the Difference Between Humanitarian Law and Human Rights Law?,” 2004, available at http://www.icrc.org/eng/resources/documents/misc/5kzmuy.htm. For arguments in favor of the simultaneous applicability of human rights law and the law of armed conflict, see United Nations, “International Legal Protection of Human Rights in Armed Conflict,” 2011, HR/PUB/11/01, available at www.unhcr.org/refworld/docid/4ee9f8782.html; and Kenneth Watkin, “Controlling the Use of Force: A Role for Human Rights Norms in Contemporary Armed Conflict,” American Journal of International Law 98(1):1-34, 2004.
In principle, arms control agreements can serve three broad purposes:29
• Reducing the likelihood that conflict will occur. Confidence-building measures—arrangements in which the involved parties agree to refrain from conducting certain activities that might be viewed as hostile or escalatory, to notify other signatories prior to conducting such activities, or to communicate directly with each other during times of tension or crisis—are supposed to reduce the likelihood of conflict due to accident or misunderstanding.
• Reducing the destructiveness of any conflict that does occur. Limitations or bans on the use of certain weapons, or on the types of entities that may be targeted, could have such effects, thereby reducing the likelihood of conflict escalation or facilitating more rapid cessation of hostilities. One important aspect of reducing destructiveness is reducing unnecessary destructiveness—a point related to the principle that weapons should not cause superfluous injury or unnecessary suffering.
• Reducing financial costs. Limitations on acquisition of weapons may reduce expenditures on those weapons.
All of these rationales arguably reflect ELSI concerns.
Treaties that ban or restrict the use of certain weapons tend to inhibit technologies or applications that might resemble, be confused with, or be associated with a prohibited weapon. In addition, the possibility of developing any given technology or application with military value raises the issue of whether U.S. interests are better served by its unrestricted development (and use) or by a regime in which its development and use are restricted by mutual agreement with other nations that might also develop and/or use that technology or application. Some of the considerations in addressing such an issue may include the following:
• The technological capabilities of other parties to exploit the technology or application in question, taking into account the time scale on which these other parties will be able to do so.
29 These three purposes can be found in Thomas C. Schelling and Morton H. Halperin, Strategy and Arms Control, Pergamon Brassey’s, Washington, D.C., 1985.
• The value of unilateral U.S. advantages afforded by the technology or application, taking into account the time scale on which the United States will have such advantages.
• The efficacy with which U.S. advantages can be countered.
• The value of setting an example of restraint in a global environment in which leading states set precedents for the legitimacy of other states to follow in the footsteps of those leading states. (That is, once the United States claims the right to develop and potentially use a given technology or application for military purposes, other states are likely to have fewer inhibitions against making similar claims.)
• The potential for nonstate actors to develop and use the technology, especially if it has low barriers to entry (as is characteristic of ERA technologies). If nations restrict the development or use of a technology by treaty but nonstate actors exploit it, those nations may be disadvantaged.
New military technologies or applications may sometimes have the potential to erode constraints initially imposed by existing treaties. Supporters of such treaties often view such erosion as a negative consequence of proceeding with a new military technology or application, and they argue that if a new technology or application is not addressed adequately under existing understandings, it should not be developed until new understandings are formulated that can in fact do so. Others argue that if existing understandings do not address a new technology or application, it should be allowable to proceed with its development until new constraining understandings are reached.
Recognizing concerns about such erosion, many treaties include provisions for addressing new scientific or technological developments that might affect constraints in the treaty.30 Where rapidly changing technologies are involved, the forums established in accordance with these provisions are often quite active.
Advances in science and technology can also benefit arms control treaties. For example, new technology can improve national capabilities to monitor compliance with treaties, carry out inspections, or investigate allegations of controlled or prohibited activities. Discussions of how S&T advances can support treaty implementation are common at many review conferences, along with discussion of potential negative impacts. In addition to improving traditional approaches, there is current interest in harnessing new data-mining, crowd-sourcing, and social media applications.31
30 For example, Article 8 of the Chemical Weapons Convention provides for a regular review conference to take into account “any relevant scientific and technological developments.” See http://www.opcw.org/chemical-weapons-convention/articles/article-viii-the-organization/.
A relevant ethical question derived from considering arms control treaties is the following:
• How and to what extent, if any, do a research effort and its foreseeable uses implicate existing arms control treaties? How, if at all, does the effort make the treaty regime harder or easier to sustain in the future?
The impacts of technology depend directly on human behavior, because people are intimately involved in the design, manufacture, inspection, deployment, monitoring, use, operation, maintenance, regulation, and financing of technology. Thus the social and behavioral sciences have an important role to play. They can help to predict the social effects of a new technology or application (e.g., how people are likely to react to a crisis, respond to contradictory information, develop new laws or policies, consume recreational drugs, maintain equipment, write and implement workplace rules, use media, and so on). They can also provide insight into when a proposed design makes unrealistic demands on operators’ vigilance, provides perverse incentives (e.g., for denying problems), or can be easily captured by others. Beyond prediction, the social and behavioral sciences can also shape those impacts by informing the design of new technologies (especially if they are involved early in the process). They can help to promote fair judgments of technology by contributing to the creation of sound and inclusive communication processes. And they can try to anticipate those judgments by eliciting commentary from members of various stakeholder groups.
Involving the social and behavioral sciences in the R&D process will help to produce better and more informed outcomes. Including these human sciences in the initial design of an application is particularly important for increasing the usability of a new technology and for avoiding costly failures and retrofits later on. It will also identify the basic research needed for other aspects of the design (e.g., training programs, communication, user interfaces, organizational accommodations). Including the human sciences at later stages makes it possible to respond to the new knowledge that becomes available as a science or technology matures.
31 For example, see the State Department’s “Innovation in Arms Control Challenge,” which “sought creative ideas from the general public to use commonly available technologies to support arms control policy efforts.” See http://www.state.gov/r/pa/prs/ps/2013/03/205617.htm.
The subsections below address possible insights from a number of specific social sciences.
Sociology and anthropology provide some of the scientific foundations for anticipating how new technologies will be used and viewed. For example, the prevalent culture in any given society influences the views of its inhabitants on how and when to use force (that is, acts of physical violence). When two societies come into conflict,32 it is not surprising that one party to the conflict interprets the wartime behavior of the other society through its own cultural frame. If the two societies are culturally distant, they will almost surely have very different views on the appropriate roles and statuses of individuals engaged in the conflict, different norms regarding how and when force can be used, and different sanctions for violating those norms.
In some cases, cultural views of conflict are formally expressed in law. For example, as described above, the United States and many other nations codify some of their views of conflict through the law of armed conflict and arms control treaties such as the Chemical Weapons Convention. Of course, the fact that a nation may be party to an international agreement does not necessarily mean that members of its armed forces will always act in adherence to that agreement, or even that the nation itself will always comply with the requirements of the agreement.
Perhaps most importantly, norms and values—whether or not formally codified—are subordinate to the context of the conflict in which they may come into play. For example, a perceived serious threat to survival is likely to reduce adherence to even strongly held norms and values regarding conflict.33
In her presentation to the committee, Montgomery McFate of the U.S. Naval War College introduced the concept of normative mismatch to describe differences in cultural perspectives on conflict. U.S. military forces may conduct themselves in combat against an adversary entirely in accordance with the laws of war, the Uniform Code of Military Justice, and other relevant codified and uncodified norms of Western society regarding the use of force, but the adversary may well see the U.S. conduct as disrespectful and dishonorable—and thus subsequently employ tactics that it feels are justified against any disrespectful and dishonorable enemy.34
32 In the context of the present discussion, the term “society” should be understood to refer to the groups that are engaged in conflict. Extrapolating a discussion of “society” to a discussion of “nation-state” makes sense only to the extent that within-nation variability of values and frames is not significant with respect to the discussion at hand. In some cases, the normative perceptions of a dominant group within a nation are most significant, and the views of other groups within that nation may not need to be considered. In other cases, consideration of within-nation variability is essential to the policy goal at hand.
33 Eric Luis Uhlmann, David A. Pizarro, David Tannenbaum, and Peter H. Ditto, “The Motivated Use of Moral Principles,” Judgment and Decision Making 4(6):476-491, 2009.
The concept of normative mismatch is relevant to the development of new military technologies and applications. A first issue is whether the concept of operation for a new application points to potential normative mismatches.35 Some examples include:
• The range of a weapon. Many U.S. concepts for weapons emphasize the ability to strike from a long distance away, whereas certain societies place different normative value on face-to-face or close-quarters combat.
• The damage inflicted by a weapon. A weapon that damages a warrior’s dead body may violate cultural norms about honor and death. For example, some cultures treat dead bodies as sacred within a religious tradition.
• The invasiveness of a device. For example, a device that checks individuals for concealed weapons may violate cultural norms against inspection of female bodies.
Normative mismatches may occur at higher levels of abstraction as well. In his presentation to the committee, Steven Lee of Hobart and William Smith Colleges noted the existence of a worldview based on fairness—either no one should have certain weapons that provide overwhelming advantage or every party to a conflict should have them. To the extent that emerging military technologies do provide such advantages over an adversary (as is the intent of the technologically enabled U.S. approach to armed conflict described in Chapter 1), their use potentially violates fairness norms that are held by that adversary.
Lee further argued that perceived violations of a fairness norm are partly responsible for adversaries resorting to terrorism as a method of conflict, even when they hold norms regarding the moral impermissibility of targeting noncombatants. That is, given its inability to counter the stronger party using traditional means of warfare, an adversary may feel less reluctant to violate norms against targeting noncombatants.
34 In an acknowledgment of such concerns, a speech by John Brennan recognized that the United States “must do a better job of addressing the mistaken belief among some foreign publics that we engage in these [drone] strikes casually, as if we are simply unwilling to expose U.S. forces to the dangers faced every day by people in those regions.” See John Brennan, Assistant to the President for Homeland Security and Counterterrorism, “The Efficacy and Ethics of U.S. Counterterrorism Strategy,” Wilson Center, April 30, 2012, available at http://www.wilsoncenter.org/event/the-efficacy-and-ethics-us-counterterrorism-strategy.
35 The concept of operation for a weapon specifies how, and the circumstances under which, the weapon’s users are expected to employ it.
A second issue is the impact and significance of the mismatch. The existence of a mismatch, by definition, points to an idea that is outside one’s own normative frame of reference. Ideas that are unfamiliar in this sense may result in surprise—something not within one’s own norms is likely to be outside one’s own set of expectations. For example, the Japanese use of kamikaze missions in World War II came as a surprise to the U.S. Navy—suicide missions against adversaries were not within the Navy’s normative expectations. More than a half-century later, the U.S. intelligence community failed to anticipate the use of airplanes as guided missiles, as the 9/11 Commission pointed out, even though every intelligence analyst was familiar with the idea of suicide bombers and Japanese kamikaze missions.
Still another issue is how to develop approaches for dealing with a normative mismatch. Here understanding the source of the norm is important. For example, a preference for face-to-face short-range combat may be rooted in part of a warrior’s code, in a manner similar to other concepts such as vengeance. Suicide bombers may be driven by cultural honor codes. If a suicide bomber is motivated by honor, a countermeasure might be turning such bombing into a dishonorable act.36
Cultural and societal issues also affect relationships with nations that are not overt adversaries, including long-term allies as well as allies of convenience and nonaligned nations.
Long-term allies generally share a set of common values and ethical standards with the United States. However, agreement in general does not necessarily translate into perfect agreement on all issues, and there are examples of military technologies on which the United States and its allies do not necessarily see eye to eye. For instance, the United States and the United Kingdom parted company in 2008 when the latter decided to sign the Convention on Cluster Munitions, which prohibits the use, production, stockpiling, and transfer of cluster munitions. A similar situation exists with respect to the Convention on the Prohibition of the Use, Stockpiling, Production, and Transfer of Anti-Personnel Mines and on Their Destruction (often known as the Ottawa Treaty)—the United States has refrained from signing this treaty, whereas a number of its closest allies have done so.
The fact that the United States has chosen to refrain from signing these treaties does not mean that it does not share the humanitarian concerns motivating them—indeed, in both instances, the United States has stated through official channels that it understands and respects these concerns, and further that its policies will in many ways conform to or exceed the requirements provided for by these treaties. Nevertheless, its unwillingness to sign these treaties when some of its closest allies have been willing to do so suggests at least the possibility that differences between the United States and its allies may cause political friction under some circumstances or impede planning and execution of coalition operations.
36 Scott Atran, Robert Axelrod, and Richard Davis, “Sacred Barriers to Conflict Resolution,” Science 317(5841):1039-1040, 2007.
The United States also has relationships with allies of convenience and nonaligned nations. Nations in this category may or may not share U.S. values, and their relationships with the United States may be simultaneously interdependent or mutually beneficial on the one hand and antagonistic or competitive on the other. Such relationships are often characterized by suspicion and mistrust.
In this environment, differences in ethical stances toward, for example, a novel military technology or application would not be surprising. A technology regarded by the United States as efficient, cutting-edge, and inexpensive may be seen by an ally of convenience as cruel and cowardly.
The United States has recognized such concerns with respect to the use of armed remotely piloted vehicles (RPVs). In a 2012 speech, John Brennan made the case for the legality, justness, and prudence of U.S. drone strikes, including such strikes in Pakistan.37 He acknowledged that “the United States is the first nation to regularly conduct strikes using remotely piloted aircraft in armed conflict.” Because “many more nations are seeking” this technology and “more will succeed in acquiring it,” Brennan argued, the United States is “establishing precedents that other nations may follow.” “If we want other nations to use these technologies responsibly,” Brennan stated, “we must use them responsibly. If we want other nations to adhere to high and rigorous standards for their use, then we must do so as well. We cannot expect of others what we will not do ourselves.”
But this speech was given long after the first U.S. use of these weapons, during which time a backlash against such use developed. In trying to make an ethical, practical, and strategic case for the legitimate use of such weapons in combat in 2012, the United States was clearly reacting to the backlash rather than proactively leading and shaping the debate—and the former is clearly a weaker position than the latter.
37 John Brennan, Assistant to the President for Homeland Security and Counterterrorism, “The Efficacy and Ethics of U.S. Counterterrorism Strategy,” Wilson Center, April 30, 2012, available at http://www.wilsoncenter.org/event/the-efficacy-and-ethics-us-counterterrorism-strategy.
Questions derived from sociology and anthropology include the following:
• Considering anticipated scenarios for using the results of a research effort, how, if at all, do such scenarios implicate values and norms held by users? By adversaries? By observers?
A psychological perspective on cultural issues is the focus of the subsection below titled “Social Psychology and Group Behavior.”
Several branches of psychology are relevant to gaining insights on ethical, legal, and societal issues, including for example, behavioral decision science and the psychology of risk, social psychology and group behavior, political psychology, and human-systems integration.
Behavioral Decision Sciences and the Psychology of Risk
There is a substantial research literature, particularly in psychology, on how people perceive risk, manage those perceptions, and make decisions under conditions of risk. This includes research on specific questions related to scientific or technical risks (for example, nuclear radiation), willingness to accept risks of different kinds, and how risk perceptions change, including on the basis of S&T developments.38
Risk analysis is relevant to the anticipation of ethical, legal, and societal issues as well.39 Predicting the impacts of a technology that is both complex and uncertain—which generally characterizes analyses involving emerging technologies—requires the disciplined use of expert judgment.40 Risk analysis provides a set of methods to assist in estimating the effects of complex, uncertain technologies (including both benefits and risks). In the end, of course, risk analysis can only inform judgment; it cannot replace it.
Ethics is relevant to risk analysis with respect to (1) which impacts should be considered (e.g., Does the environment have standing?); (2) how each impact should be measured (e.g., Are distributional effects considered, or just the mean?); and (3) how different outcomes should be weighted.41 For example, risk analysis of nuclear power plants might raise questions about their ability to deliver energy at the promised price, as well as about their potential threats to society. Those risks and benefits may involve human health, the environment, and the economy, as well as the distribution of these risks and benefits, all of which are central societal and ethical concerns. If the ethics of such matters are not explicitly considered, risk analysts are likely to resolve ethical issues by deferring to professional conventions (which are usually based on some ethical framework agreed on in advance) or by imposing their own ethical values and standards.42
38 Paul C. Stern and Harvey V. Fineberg, eds., Understanding Risk: Informing Decisions in a Democratic Society, National Academy Press, Washington, D.C., 1996.
39 Baruch Fischhoff and John Kadvany, Risk: A Very Short Introduction, Oxford University Press, Oxford, 2011.
40 Ronald A. Howard, “Knowledge Maps,” Management Science 35:903-922, 1989; M. Granger Morgan, Max Henrion, and Mitchell Small, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, New York, 1990.
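The distinction between aggregating impacts by the mean and using a distribution-sensitive measure can be made concrete with a toy calculation. Everything below is hypothetical, invented purely for illustration; neither the options nor the numbers come from this report.

```python
# Hypothetical outcome distributions (say, harm scores across five affected
# subpopulations) for two candidate technologies. All numbers are invented.
option_a = [2, 2, 2, 2, 2]   # uniform, moderate impact on everyone
option_b = [0, 0, 0, 0, 9]   # negligible impact for most, severe for a few

def mean_impact(outcomes):
    """Aggregate by the mean alone -- blind to distributional effects."""
    return sum(outcomes) / len(outcomes)

def worst_case_impact(outcomes):
    """A distribution-sensitive measure: the impact on the worst-off group."""
    return max(outcomes)

# By mean impact, option B looks preferable (1.8 vs. 2.0) ...
print(mean_impact(option_a), mean_impact(option_b))
# ... but a worst-case criterion reverses the ranking (2 vs. 9), showing how
# the ethical choice of measure can drive the analytic conclusion.
print(worst_case_impact(option_a), worst_case_impact(option_b))
```

Different aggregation rules thus encode different ethical commitments: the mean reflects an aggregate-utility view, while a worst-case measure reflects concern for distributional fairness.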
Risk analysis seeks to provide a disciplined, transparent way to integrate the knowledge of diverse experts in predicting the performance of a technology in advance of its deployment. It can focus the design process by comparing competing designs and identifying vulnerabilities requiring additional research (e.g., poorly understood properties of materials or social controls on potential uses).43 It can show when the design team lacks critical expertise. It can help decision makers decide whether the benefits of a new technology outweigh its risks, as well as provide the evidence that they need to explain their choices to others.
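One simple way such an analysis can identify vulnerabilities and account for uncertainty is a Monte Carlo sensitivity analysis. The sketch below is a minimal illustration under invented assumptions: the model, the parameter names (`effectiveness`, `failure_rate`), and their ranges are all hypothetical, not drawn from this report or any real assessment.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical model: net benefit of a technology as a function of two
# uncertain inputs. Both the functional form and the ranges are invented.
def net_benefit(effectiveness, failure_rate):
    return 100 * effectiveness - 500 * failure_rate

def simulate(eff_range, fail_range, n=10_000):
    """Propagate parameter uncertainty through the model by Monte Carlo,
    returning the (min, mean, max) of the simulated outcomes."""
    draws = [
        net_benefit(random.uniform(*eff_range), random.uniform(*fail_range))
        for _ in range(n)
    ]
    return min(draws), sum(draws) / n, max(draws)

# Baseline: both parameters uncertain.
lo, mean, hi = simulate((0.5, 0.9), (0.01, 0.10))

# One-at-a-time sensitivity check: pin one parameter and see how much of the
# outcome spread disappears -- identifying where better evidence matters most.
lo_e, _, hi_e = simulate((0.70, 0.70), (0.01, 0.10))  # effectiveness pinned
lo_f, _, hi_f = simulate((0.5, 0.9), (0.055, 0.055))  # failure rate pinned

print(f"baseline outcome spread:          {hi - lo:.1f}")
print(f"spread with effectiveness pinned: {hi_e - lo_e:.1f}")
print(f"spread with failure rate pinned:  {hi_f - lo_f:.1f}")
```

Pinning parameters one at a time is a crude form of sensitivity analysis; fuller treatments vary parameters jointly and report full outcome distributions, but even this sketch shows how analysis can direct further research toward the most consequential uncertainties.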
Risk analyses are soundest when they accommodate a broad range of relevant evidence (e.g., not just readily quantified factors); when they retain awareness of factors that have not been analyzed (e.g., potential design flaws); when they elicit expert judgment with proven methods that are structured to obtain the maximum amount of information from experts; when they do not seek to defend a particular outcome, design, or approach; and when they account for uncertainty in the available evidence (e.g., with sensitivity analyses).44 Decision makers need candid assessments of the quality of the knowledge that they have for making and defending their choices. Risk analyses can provide that assessment, as long as they are accompanied by acknowledgment of their own strengths and limits.45
41 Canadian Standards Association, Risk Management Guidelines for Decision Makers, CAN/CSA-850, Ottawa, Ontario, 1997 (reaffirmed 2002); HM Treasury, Managing Risks to the Public: Appraisal Guidance, Her Majesty’s Stationery Office, London, 2005; Sheldon Krimsky and Dominic Golding, Social Theories of Risk, Praeger, New York, 1992.
42 National Research Council, Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget, The National Academies Press, Washington, D.C., 2006; Presidential/Congressional Commission on Risk Assessment and Risk Management, Risk Assessment and Risk Management in Regulatory Decision-Making, riskworld.com, 1997, available at http://www.riskworld.com/Nreports/1996/risk_rpt/html/nr6aa001.htm.
43 Baruch Fischhoff, Risk Analysis and Human Behavior, Routledge/Earthscan, Oxford, 2011; Michael S. Wogalter, The Handbook of Warnings, Lawrence Erlbaum Associates, Hillsdale, N.J., 2006.
44 Anthony O’Hagan, Caitlin E. Buck, Alireza Daneshkhah, et al., Uncertain Judgements: Eliciting Expert Probabilities, John Wiley & Sons, Ltd., Chichester, West Sussex, 2006; E.C. Poulton, Bias in Quantifying Judgment, Lawrence Erlbaum, Hillsdale, N.J., 1989.
Some of the questions derived from the psychology of risk include the following:
• How can organizations responsible for technology development ensure that they have the expertise needed to assess all aspects of the technology’s performance?
• How can technology-driven and technology-driving organizations improve their ability to identify, analyze, and manage risks?
• When do normal cognitive processes impede the development, deployment, and operation of technology (e.g., wishful thinking, fallacies of intuition, overconfidence)?
• How, if at all, can both deontological and utilitarian (cost-benefit) concerns be accommodated in decision-making processes?
Social Psychology and Group Behavior
An understanding of group behavior may yield insight into how an adversary might react to U.S. deployment or use of certain types of weapons. One of the most important bodies of research for understanding individual and group behavior is the social psychology literature on the origins and implications of group identity. For example, in a review of the lessons of social psychology for understanding the virulent nationalism plaguing international politics in the years immediately after the Cold War, Druckman suggested:
… they [social psychologists] have explored the factors that arouse feelings of group loyalty when such group loyalty promotes hostility toward other groups; how cross-cutting or multiple loyalties can change the face of nationalism; and how individual group loyalties influence and shape collective behavior.46
A 2011 NIH/DOD workshop discussed psychologically motivating factors of terrorism under the rubric of terror management theory, which states that “human beings are motivated to adopt and police a cultural belief system in order to allay their concerns over their own mortality. Sets of sacred values underpin strong belief systems; such values include those beliefs that an individual is unlikely to barter away or trade no matter how enticing the offer is.”47 The workshop summary further noted that “sacred values may prove a pathway towards better understanding the deep underlying motivations behind certain acts of political violence and identifying values that are less resistant to change.”
45 Silvio O. Funtowicz and Jerome R. Ravetz, Uncertainty and Quality in Science for Policy, Kluwer Academic Publishers, London, 1990; National Research Council, Intelligence Analysis for Tomorrow, The National Academies Press, Washington, D.C., 2011.
46 Daniel Druckman, “Nationalism, Patriotism, and Group Loyalty: A Social Psychological Perspective,” Mershon International Studies Review 38:43-68, 1994, available at http://bev.berkeley.edu/Ethnic%20Religious%20Conflict/Ethnic%20and%20Religious%20Conflict/2%20National%20Identity/Druckman%20nationalism.pdf.
There are other examples of ways in which the expertise of social psychology may be relevant. For example, experiments have shown that individuals are more willing to inflict pain on or otherwise abuse those who are not part of “their” group.48 How these fundamental aspects of human psychology play out in the context of conflict is addressed in the next section.
Some of the questions derived from social psychology include the following:
• When do attitudes toward a technology become a sacred value, so that groups support or oppose it as a matter of principle, indifferent to cost-benefit concerns?
• How do affinity groups form around new technologies, and when are they mobilized to action?
• How will knowledge about new technologies be disseminated through existing and evolving social networks, among allies and adversaries?
• How can prejudices regarding other groups affect assessments of their ability to use appropriate technologies?
Political psychology is another relevant branch of psychology.49 For example, the United States uses armed remotely piloted vehicles (RPVs) in Pakistan, a nominal ally in the fight against Al-Qaeda. Such use has evoked a powerful psychological reaction in the Pakistani populace regarding the collateral damage to Pakistani civilians. In a paper commissioned by the committee, George Perkovich of the Carnegie Endowment for International Peace reports that although Pakistani citizens complain about and bitterly resent the use of such vehicles to fight Al-Qaeda, their resentment is based not on the actual use of RPVs or the collateral damage they cause, but rather on the fact that these vehicles are controlled by U.S. forces rather than Pakistani forces.50
47 Tessa Baker and Sarah Canna, “The Neurobiology of Political Violence: New Tools, New Insights,” Nsiteam.com, 2010, available at http://www.nsiteam.com/pubs/U_Neurobiology%20of%20Political%20Violence%20-%20Dec10%20Final%20Approved%20for%20Release%205.31.11.pdf.
48 See, for example, James E. Waller, Becoming Evil: How Ordinary People Commit Genocide and Mass Killing, Oxford University Press, London, 2007; and Stanley Milgram, Obedience to Authority, Harper and Row, New York, 1974.
49 A relevant paper providing an overview of some aspects of political psychology is Stephan Lewandowsky et al., “Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest 13(3):106-131, 2012.
Perkovich explains this psychological reaction in two ways. First, the Pakistanis perceive Americans as being arrogant. Second, they resent the inference of weakness that unequal participation reveals, that is, when one party (the United States) possesses a needed technology and the other (Pakistan) is denied commensurate control.
Some of the questions derived from political psychology include the following:
• When will a technology be politicized, with the result that attitudes and beliefs about it are determined by ideology rather than by scientific assessments (as has occurred with climate science and evolution, in some quarters)?
• How, if at all, is it possible to correct misconceptions created by politically motivated disinformation campaigns?
• How can political partisans’ convictions blind them to the flaws in the technologies with which they are identified?
The value of any technology depends on individuals’ willingness and ability to use it. Having the best chance of realizing that value requires incorporating the best available science of human behavior in the technology’s design from the beginning and then in evaluating its performance on an ongoing basis.
Numerous examples of inadequate attention to the human factor in technology design show how a technology’s effectiveness can be reduced. For instance:
• Night vision goggles. Weight and poor mounting compatibility with standard helmets produce fatigue and decreased performance in the visual and motor skills of users employing night vision goggles over extended periods.51
50 George Perkovich, “Managing Ethical and Social Implications of Militarily Significant Technology: Lessons from Nuclear Technology and Drones,” paper commissioned by the study committee, 2012.
• Remote operation of unmanned ground vehicles. A single human operator cannot effectively operate more than one unmanned ground vehicle under active combat conditions (e.g., during times of attack). Further, in the absence of other knowledge, operators of unmanned vehicles tend to use tactics, techniques, and procedures (TTPs) originally developed for operating manned vehicles, pointing to the need for TTPs for the use of unmanned ground vehicles that are specific to the tasks, features, and characteristics of those systems.52
• Passwords and cybersecurity. Authentication of an asserted identity is central to controlling access to information technology resources. Passwords are an essential element—in many cases, the only element—of the most commonly used approaches to authentication. But it is well known that individuals tend to choose easy-to-remember passwords—thus making such passwords easy for an adversary to guess.
• Body armor for female soldiers. Traditionally, body armor has been designed to protect male bodies. Some research suggests that such armor is less protective of female bodies53 and also that the poor fit of such armor on female soldiers makes it difficult for them to properly aim their weapons and enter or exit vehicles.54
• Coordination. The effective operation of any complex system requires coordination among the individuals responsible for its design, operation, maintenance, and upgrading. When that coordination fails, designers may require operators to do the impossible, with a technology that they understand incompletely or cannot support with the resources available to them. Such failures contributed to the accidents at Three Mile Island, Chernobyl, and Fukushima.55
51 Albert L. Kubala, Final Report: Human Factors Research in Military Organizations and Systems, Human Resources Research Organization, Alexandria, Va., 1979, available at www.dtic.mil/dtic/tr/fulltext/u2/a077339.pdf.
52 Jessie Y.C. Chen, Ellen C. Haas, Krishna Pillalamarri, and Catherine N. Jacobson, Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies, Army Research Laboratory, 2006, available at http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA451379.
53 Marianne Resslar Wilhelm, “A Biomechanical Assessment of Female Body Armor,” ETD Collection for Wayne State University, Paper AAI3117255, January 1, 2003, available at http://digitalcommons.wayne.edu/dissertations/AAI3117255.
54 Anna Mulrine, “Army Uses ‘Xena: Warrior Princess’ as Inspiration for New Body Armor for Women,” July 9, 2012, Christian Science Monitor Online, available at www.csmonitor.com/USA/Military/2012/0709/Army-uses-Xena-Warrior-Princess-as-inspiration-for-new-body-armor-for-women.
55 James R. Chiles, Inviting Disaster: Lessons from the Edge of Technology, Harper Collins, New York, 2002; Charles Perrow, Normal Accidents: Living with High Risk Technologies, Princeton University Press, Princeton, N.J., 1999.
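The password point above can be made concrete with a back-of-the-envelope comparison of search spaces. The sketch below is purely illustrative; the function names and the figure of 10,000 common passwords are hypothetical assumptions, not drawn from this report:

```python
import math

def keyspace(length: int, alphabet_size: int) -> int:
    """Worst-case number of candidates a brute-force attacker must try."""
    return alphabet_size ** length

def entropy_bits(candidates: int) -> float:
    """Effective strength, in bits, of a password drawn uniformly
    from `candidates` equally likely options."""
    return math.log2(candidates)

# A random 8-character password over 62 alphanumeric characters...
random_bits = entropy_bits(keyspace(8, 62))
# ...versus a memorable password drawn from, say, 10,000 common choices.
common_bits = entropy_bits(10_000)

print(f"random 8-char password: ~{random_bits:.0f} bits")
print(f"common-word password:   ~{common_bits:.0f} bits")
```

The gap between roughly 48 bits and roughly 13 bits is the difference between an infeasible guessing attack and one that succeeds quickly, which is why memorability and guessability pull in opposite directions.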
Human factors engineering (also called ergonomics) has long been part of the design of many systems (e.g., cockpits, computer interfaces) in both the civilian and the defense sectors.56 It is most useful when incorporated in the earliest stages of the design process, when there is a wide range of opportunities to respond to users’ needs. At the other extreme, the need to rely on warning labels in many cases reflects a design failure.
Some of the questions derived from human–systems integration include the following:
• How can requirements be written in order to ensure that technologies can be operated and maintained under field conditions?
• How can the acquisition process evaluate and ensure the operational usability of future technologies?
• What are the institutional barriers to incorporating human-systems expertise in the design process?
• What kinds of expertise and social organization are needed to support a technology, by the United States (so as to increase operability) and by its adversaries (so as to limit the technology’s appropriation by them)?
In some cases, ethical insights emerge from a scientific and technological framing different from that which is initially offered. To the extent that a given technology or application is based on an erroneous or an incomplete scientific understanding, any risk analysis of that technology or application will itself be incomplete. New ethical, legal, and societal issues may well emerge if and when the underlying science becomes more complete.
For example, assumptions of system linearity and decomposability often enable scientists to make headway in their investigations of phenomena, and so it is natural to turn at first to techniques based on these assumptions. But some systems are not well characterized by these assumptions in the domains of interest to investigators, although it may take some time to recognize this reality. In other instances, there is considerable uncertainty about the relevant data, for example, because they have not yet been collected, or there may be defects in the data that have been collected. In still other cases, system behavior may be emergent and path-dependent, may be very sensitive to initial conditions, or may depend on incompletely known relationships between the system and its environment. Predictions about system behavior may be possible only through high-fidelity computer simulations, may be probabilistic in nature, or may grow exponentially less accurate over longer time horizons. If these realities are not recognized when ethical, legal, and societal issues are considered, such a consideration will be based on an incomplete scientific understanding.
56 Steven Casey, Set Phasers on Stun: And Other True Tales of Design, Technology, and Human Error, Aegean, New York, 1993; Peter A. Hancock, Human Performance and Ergonomics: Perceptual and Cognitive Principles, Academic Press, New York, 1999; and Christopher D. Wickens, Sallie E. Gordon, and Yili Liu, An Introduction to Human Factors Engineering, Prentice-Hall, New York, 2004. A historical perspective can be found in Paul M. Fitts, ed., Psychological Research and Equipment Design, U.S. Government Printing Office, Washington, D.C., 1947.
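The sensitivity to initial conditions mentioned above can be illustrated with the logistic map, a textbook example of a chaotic system. This is an illustrative sketch using standard parameters, not an analysis drawn from this report:

```python
def logistic_step(x: float, r: float = 4.0) -> float:
    """One iteration of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list[float]:
    """Iterate the map `steps` times, returning every intermediate state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion...
a = trajectory(0.300000000, 50)
b = trajectory(0.300000001, 50)
# ...soon follow different paths: the gap between the trajectories tends
# to grow exponentially in the number of steps, so point predictions lose
# accuracy quickly even though each step is perfectly deterministic.
print(abs(a[1] - b[1]), abs(a[-1] - b[-1]))
```

The same qualitative behavior, deterministic rules whose long-horizon predictions are nonetheless untrustworthy, is what makes risk analyses of such systems incomplete.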
Systems with some of these analytically problematic characteristics are often biological or environmental in nature. For example, early in the history of biology, a “one gene, one protein” hypothesis was widely accepted. Today, it is generally accepted that many noncoding parts of DNA control the circumstances under which a specific gene will be expressed, and the rules governing such regulation are not well understood. In addition, it is not always possible to predict how natural selection will act on a system over time.
Concerns over ethical, legal, and societal issues may thus sometimes be rooted in disagreements over the fundamental science involved. Are the nonlinearities in the system in question significant? Does the model being used to understand the relevant phenomena capture all essential elements? How sensitive is the model to initial conditions? How far into the future can a model’s predictions be trusted?
A relevant ethical question derived from considering scientific framing is the following:
• How and to what extent, if any, are known ethical, legal, and societal issues related to uncertainties in the underlying science or maturity of the technology?
Commentators differ in their psychological as well as social orientation toward technology development and application. Those most concerned about potential negative results tend to promote the precautionary principle,57 often in response to traditional cost-benefit analysis, which they regard as giving innovation the benefit of the doubt.
57 A substantial amount of background information on the precautionary principle can be found in Ragnar E. Löfstedt, Baruch Fischhoff, and Ilya Fischhoff, “Precautionary Principles: General Definitions and Specific Applications to Genetically Modified Organisms (GMOs),” Journal of Policy Analysis and Management 21(3):381-407, 2002.
The strongest form of the precautionary principle states that when a technology or an application threatens harm—to society, to individuals, to the environment, and so on—precautionary measures should be taken before a decision is made to proceed with developing that technology or application; in general, the technology should not be pursued until those concerns are decisively addressed.
Some formulations of the precautionary principle require strong evidence of risks, in the sense of developing a full set of relevant cause-and-effect relationships. Other formulations require less evidence, suggesting that high levels of uncertainty about causality should not be a bar to precautionary action. In these latter formulations, the postulated harms can be merely possible and may be speculative in the sense that the full set of relevant cause-and-effect relationships (that is, relationships between developing the technology or application and the harm that may result) may not have been established with sufficient scientific rigor, or in the sense that the probability of the harms occurring may be low.
The precautionary principle places the burden of proof on those who advocate certain technologies to produce evidence that will reassure reasonable skeptics, rather than on the public to show that development can cause unacceptable harm. Further, the principle often requires that precautionary measures be taken before any development work occurs, and such measures may include a complete cessation of all development work. Advocates of the precautionary principle often invoke ethical commitments to protect the environment from the results of humans’ mistakes and to safeguard the public from terrorists.58 In the view of these advocates, one of the biggest risks is that science and technology will move forward too quickly, causing irreversible damage. An example of applying the precautionary principle to biological research is the outcome of the 1975 Asilomar Conference on Recombinant DNA Research, discussed in Chapter 1.
A contrasting approach is traditional cost-benefit analysis, which is fundamentally rooted in utilitarian ethics. Cost-benefit analysis relies on the ability to quantify and weigh the value of putative costs and benefits. Quantification is intended to make the assessment of costs and benefits more objective, although serious analysts usually recognize the value-laden nature of quantification. For example, in some formulations of cost-benefit analysis, uncertainty about costs or benefits implies that those costs or benefits can be discounted or even dismissed. Costs or benefits that cannot be objectively quantified are not taken into account at all. Examples of such costs could include the costs to the credibility of an organization when a technology fails, is introduced improperly, or causes
harm, or the costs of disruptions to social systems caused by particular technologies. Some versions of cost-benefit analysis do seek to address such matters as well as the impact of uncertainty and risk tolerance.
Any calculation must treat the distribution of risks and benefits in some way, if only to ignore it, regardless of whether those who bear the risks also receive the benefits. A common compromise is to ask whether the beneficiaries of a project could, in principle, compensate the losers, without ensuring that mechanisms exist for effecting those transfers. In cost-benefit analysis, opponents of developing a new technology or application bear the burden of showing that costs outweigh benefits.
Differences among those who advocate cost-benefit analysis can be found in their relative weightings of benefits and costs, how and when to account for uncertainty, and how to bound the universe of costs and benefits. For example, benefits and costs may be realized in the short term or in the long term: How and to what extent, if any, should long-term benefits and costs be discounted compared to short-term benefits and costs? Benefits and costs may be unequally distributed throughout the world: Which parties have standing in the world to claim that their costs or benefits must be taken into account? Inaction is itself an action: How should the costs and benefits of the status quo factor into the weighing of overall costs and benefits?
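The discounting question above has real force because compound discounting shrinks distant costs dramatically. The following minimal sketch illustrates the arithmetic; the dollar amounts and rates are hypothetical, chosen only for illustration:

```python
def present_value(amount: float, years: int, annual_rate: float) -> float:
    """Standard exponential discounting: the value today of an amount
    realized `years` from now at the given annual discount rate."""
    return amount / (1.0 + annual_rate) ** years

# A $1,000,000 harm occurring 100 years from now:
pv_low = present_value(1_000_000, 100, 0.03)   # at a 3% discount rate
pv_high = present_value(1_000_000, 100, 0.07)  # at a 7% discount rate
print(round(pv_low), round(pv_high))
```

At 3 percent, the harm counts for roughly $52,000 in today's terms; at 7 percent, under $1,200. The choice of discount rate, an ostensibly technical parameter, thus largely determines how much weight long-term costs receive.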
In practice, a middle ground can often be found between the precautionary principle and cost-benefit analysis. For example, a less traditional approach to cost-benefit analysis sometimes attempts to quantify intangible and long-term costs that would not usually be taken into account in a traditional cost-benefit analysis. One less extreme form of the precautionary principle allows precautionary measures to be taken when there is uncertainty about costs and harms, but does not require such measures. Another less extreme form requires the existence of some scientific evidence relating to both the likelihood and magnitude of harm and the significance of such harm should it occur.
A middle ground requires calculating the costs and benefits of all outcomes for which there are robust methods, along with explicit disclosure of the quality of those analyses, the ethical assumptions that they entail (e.g., regarding distributional effects), the uncertainty surrounding them, and the issues that are ignored. Seeing the limits to the analysis allows decision makers to assess the measure of precaution that is needed.
Some relevant ethical questions derived from considering cost-benefit analysis and the precautionary principle are the following:
• How and to what extent, if any, can ELSI-related tensions between cost-benefit analysis and the precautionary principle be reconciled in any given research effort?
• If a cost-benefit approach is adopted, how will intangible costs and benefits of a research effort be taken into account?
• If a precautionary approach is adopted, what level of risk must be posed by a research effort before precautionary actions are required?
Those who fund, design, and deploy new technologies must communicate the associated risks and benefits effectively, both to those who would use them and to the public that will pass judgment on their work. If users misunderstand a technology’s costs and capabilities, they may forgo useful options or invest in ones that leave them vulnerable when those options fail to fulfill their promise. If the public misunderstands a technology’s risks and benefits, it may prevent the development of valuable options or allow ones that undermine its welfare.
Communicating about complex, uncertain, risky technologies poses special problems and is often done poorly,59 in part because technical experts often have poor intuitions about and/or understanding of their audiences’ knowledge and needs. Scientific approaches to that communication have been developed over the past 40 years, building on basic research in cognitive psychology and decision science. The National Research Council’s report Improving Risk Communication (1989) provided an early introduction to that research.60 There are many other sources,61 including an upcoming special issue of the Proceedings of the National Academy of Sciences with scientific papers from the May 2012 Sackler Colloquium on the Science of Science Communication.
59 Baruch Fischhoff, “Communicating the Risks of Terrorism (and Anything Else),” American Psychologist 66(6):520-531, 2011; Raymond S. Nickerson, “How We Know—and Sometimes Misjudge—What Others Know,” Psychological Bulletin 125(6):737-759, 1999.
60 National Research Council, Improving Risk Communication, National Academy Press, Washington, D.C., 1989.
61 Baruch Fischhoff and Dietram A. Scheufele (eds.), “The Science of Science Communication,” Arthur M. Sackler Colloquium, National Academy of Sciences, held May 21-22, 2012, printed in Proceedings of the National Academy of Sciences of the United States of America 110(Supplement 3):13696 and 14031-14110, August 20, 2013; Baruch Fischhoff, Noel T. Brewer, and Julie S. Downs, eds., Communicating Risks and Benefits: An Evidence-Based User’s Guide, U.S. Food and Drug Administration, Washington, D.C., 2011; M. Granger Morgan, Baruch Fischhoff, Ann Bostrom, and Cynthia J. Atman, Risk Communication: A Mental Models Approach, Cambridge University Press, New York, 2001; Paul Slovic, The Perception of Risk, Earthscan, London, 2000; and Baruch Fischhoff, “Risk Perception and Communication,” pp. 940-952 in Oxford Textbook of Public Health, 5th Edition, R. Detels, R. Beaglehole, M.A. Lansang, and M. Gulliford, eds., Oxford University Press, Oxford, 2009, reprinted in Judgement and Decision Making, N.K. Chater, ed., Sage, Thousand Oaks, Calif., 2011, available at http://www.hss.cmu.edu/departments/sds/media/pdfs/fischhoff/RiskPerceptionCommunication.pdf.
All these sources prescribe roughly the same process for developing and vetting a strategic approach to communication, a defensible risk/benefit analysis in advance of any controversy, and communication activities that are both audience-driven and interactive. This process calls for:
• Identifying the information regarding context and scientific background that is most critical to members of the audience for making the decisions that they face (e.g., whether to accept or adopt a technology, how to use it, whether it is still effective). That information may differ from the facts most important to an expert or the ones that the expert would love to convey in a teachable moment.
• Conducting empirical research to identify audience members’ current beliefs, including the terms they use and their organizing mental models.62 Effective messages depend as much on the nature of the target audience as on the content of the messages themselves. Crafting effective messages nearly always requires the participation of and input from individuals who are representative of the audience. And since it is often impossible to obtain participation and input from the target audience on the time scales needed for response, such input must be obtained before controversies erupt.
• Designing messages that close the critical gaps between what people know and what they need to know, taking advantage of existing knowledge and the research base for communicating particular kinds of information (e.g., uncertainty).63
• Evaluating those messages until the audience reaches acceptable levels of understanding.
• Developing in advance multiple channels of communication to the relevant audiences, including channels based on media contacts, opinion leaders, and Internet-based and more traditional social networks, and avoiding undue dependence on traditional media and public authorities for such communication.64
• Disclosing problematic ethical, legal, and societal issues earlier rather than later. Early disclosure is almost always in the interest of the researchers and/or sponsoring agency, provided the disclosure can be handled properly (e.g., without initially providing information that turns out to be wrong and controversial).
62 Dedre Gentner and Albert Stevens, eds., Mental Models, Erlbaum, Hillsdale, N.J., 1983.
63 David V. Budescu, Stephen Broomell, and Han-Hui Por, “Improving Communication of Uncertainty in the Reports of the Intergovernmental Panel on Climate Change,” Psychological Science 20(8):299-308, 2009; Mary C. Politi, Paul K.J. Han, and Nananda F. Col, “Communicating the Uncertainty of Harms and Benefits of Medical Procedures,” Medical Decision Making 27(5, September-October):681-695, 2007.
64 Philip Campbell, “Understanding the Receivers and the Reception of Science’s Uncertain Messages,” Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences 369:4891-4912, 2011.
• Ensuring that messages reach the intended audiences in a prompt and timely fashion. Controversies can emerge and grow on the time scale of a day, requiring responses on similar time scales. Any message will be less effective if audience members have already formed their opinions or feel that its content was not forthcoming.
• Persisting in such public engagements even over long periods of time.65
Achieving these goals typically requires a modest investment of resources, along with a strategic commitment to ensuring that critical audiences are informed—and not blindsided.66 Nonetheless, the comments above should not be taken to mean that the process of risk communication is an easy one. Some of the important issues that arise in crafting an appropriate strategy for risk communication include the following:
• Identifying stakeholders and social networks. For any emerging and readily available technology, the stakeholders are likely to vary. Identification of the appropriate stakeholder groups and the communication environment in which those stakeholders interact is key to understanding their engagement and their beliefs, attitudes, and values.67 Information is commonly shared among interpersonal networks. Understanding the way information is shared among social networks should be foundational to risk communication activities. Research in this area examines how members of social systems share information, how normative information is communicated, the role of group identification in this process, and so on.68
• Identifying the goal(s) of communication. Communication efforts may be designed with any number of potential goals in mind: enhancing knowledge about an issue, influencing attitudes or behaviors, facilitating decision making, and so on. The specific goal drives formative data collection and subsequent content, as well as choices about channels for communication. Once the goal of communication efforts is clearly identified, crafting the content of information and messages that are shared with stakeholder groups is critical. Message design and rapid message testing methodologies address the content of communication interventions—from the types of appeals used in messages to the nature of evidence and arguments presented in communications.69
65 Campbell, ”Understanding the Receivers and the Reception of Science’s Uncertain Messages,” 2011.
66 Thomas Dietz and Paul C. Stern, eds., Public Participation in Environmental Assessment and Decision Making, The National Academies Press, Washington, D.C., 2008; Presidential/Congressional Commission on Risk, Risk Management, Washington, D.C., 1998.
67 Rajiv N. Rimal and A. Dawn Adkins, “Using Computers to Narrowcast Health Messages: The Role of Audience Segmentation, Targeting, and Tailoring in Health Promotion,” pp. 497-514 in Handbook of Health Communication, T.L. Thompson, A.M. Dorsey, K.I. Miller, and R. Parrott, eds., Lawrence Erlbaum and Associates, Mahwah, N.J., 2003.
68 Saar Mollen, Rajiv N. Rimal, and Maria Knight Lapinski, “What Is Normative in Health Communication Research on Norms? A Review and Recommendations for Future Scholarship,” Health Communication 25(6-7, September):544-547, 2010.
• Enhancing public perceptions of source credibility, especially in an environment of ubiquitous media and multitudes of sources. Expertise, similarity, and other cues about sources are known to influence how audiences respond to them, and audiences gather such cues through communication. Since the early 1960s,70 researchers have documented the effects of perceptions of source credibility (trust, expertise, etc.) on responses to information.71
• Accounting for the role of emotion in risk communication processes that might facilitate or inhibit appropriate behavior. As identified by Janoske et al.,72 these emotions include anger, sadness, fear, and anxiety. Acknowledging the impact of such emotions helps in designing more effective communication processes. For example, fear arises in situations over which individuals cannot exercise control—thus, effective risk communication will suggest specific actions or preparedness activities that can be undertaken.
• Maximizing the positive utility of social media and other emergent communications technologies. Research addressing the role of new and emerging media in risk communication processes is in its infancy, but research might be conducted on media effects, uses, the spread of information through social media, data mining as a mechanism for media monitoring, and so on.
69 Charles Salmon and Charles Atkin, “Using Media Campaigns for Health Promotion,” pp. 263-284 in Handbook of Health Communication, T.L. Thompson, A.M. Dorsey, K.I. Miller, and R. Parrott, eds., Lawrence Erlbaum and Associates, Mahwah, N.J., 2003.
70 J.C. McCroskey, “Scales for the Measurement of Ethos,” Speech Monographs 33: 65-72, 1966.
71 Salmon and Atkin, “Using Media Campaigns for Health Promotion,” 2003.
72 See for example, Melissa Janoske, Brooke Liu, and Ben Sheppard, “Understanding Risk Communication Best Practices: A Guide for Emergency Managers and Communicators,” Report to Human Factors/Behavioral Sciences Division, Science and Technology Directorate, U.S. Department of Homeland Security, College Park, Md.: START, 2012. Available at www.start.umd.edu/start/publications/UnderstandingRiskCommunicationBestPractices.pdf; Monique Mitchell Turner, “Using Emotion in Risk Communication: The Anger-Activism Model,” Public Relations Review 33:114-119, 2007; Kim Witte, “Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model,” Communication Monographs 59:329-349, 1992; and Robin L. Nabi, “A Cognitive-Functional Model for the Effects of Discrete Negative Emotions on Information Processing, Attitude Change, and Recall,” Communication Theory 9:3:292-320, 2006.
Last, effective risk communication has a relationship to other ethical, legal, and societal issues, such as informed consent. That is, the process of obtaining informed consent can be viewed as a risk communication event.73 Taking such a view suggests questions such as: When do people make decisions about consenting in research studies? How are the risks and benefits communicated to potential participants? What is the nature of the communication in informed consent documents? What is the role of the sources of information (their characteristics) in this process? What are the cultural and social dynamics of the risk communication process?
Some of the questions derived from risk communication include the following:
• How can technology developers communicate the risks and benefits of technologies to the American public, so as to ensure a fair judgment, without revealing properties that would aid U.S. adversaries?
• What aspects of a technology are fundamentally difficult to understand by nonexperts? How can communications be developed to create the mental models needed for informed consent?
• How can technology developers communicate with the public (and its representatives) to reveal concerns early enough in the development process to address them in the design (rather than with costly last-minute changes)?
• How can communication channels be modeled so as to ensure that members of different groups hear and are heard at appropriate times?
• How can organizations ensure the leadership needed to treat communication as a strategic activity, which can determine the success and acceptability of a technology?
The sources of ELSI insight described above are varied and heterogeneous. This report provides such a variegated list because consideration of each of these sources potentially provides insight into ethical, legal, and societal issues from different perspectives. But in considering what insights these sources might offer in the context of any specific science or technology effort, two points are worth noting.
73 See, for example, Terrence L. Albrecht, Louis A. Penner, Rebecca J.W. Cline, Susan S. Eggly, and John C. Ruckdeschel, “Studying the Process of Clinical Communication: Issues of Context, Concepts, and Research Directions,” Journal of Health Communication 14, Supplement 1:47-56, January 2009.
First, many of the sources described above are linked. For example, philosophical ethics—suitably elaborated—is in part a basis for disciplinary ethics and law. Differences between the precautionary principle and cost-benefit analysis mirror distinctions between deontology and consequentialism. The social sciences provide tools to examine the realities of behavior and thought when humans are confronted with the need to make ethical choices.
Second, consideration of a problem from multiple perspectives may from time to time lead to conflicting assessments of the ethics of alternative courses of action. Indeed, perfect consistency across these different perspectives is unlikely. If such consistency is indeed found, then perhaps the celebration of a brief moment of ethical clarity is in order. But experience suggests that a finding of such consistency often results from an unconscious attempt to reduce cognitive dissonance, from a deliberate “stacking of the deck” toward favorable assumptions, or from selective use of data to build support for a particular position.
In the more likely case that the assessments from each perspective are not wholly congruent with each other, debate and discussion of the points of difference often help to enrich understanding in a way that premature convergence on one point of view cannot.