4

Sources of ELSI Insight

This chapter discusses sources of ELSI insight that might be relevant to considering the ethics of R&D on emerging and readily available (ERA) technologies in a military context. These sources include generalizable lessons arising from consideration of the science and technologies described in Chapters 2 and 3; philosophical ethics and existing disciplinary approaches to ethics; international law; social sciences such as anthropology and psychology; scientific and technological framing; the precautionary principle and cost-benefit analysis; and risk communication. The final section describes how these sources of insight might be used in practice. Also provided in this chapter is some background necessary for understanding the different kinds of ethical, legal, and societal issues that arise in Chapter 5.

A note on terminology: throughout this report, the terms “cost” and “benefit” are used in their broadest senses—all negative and positive impacts, whether financial or not.

4.1 INSIGHTS FROM SYNTHESIZING ACROSS EMERGING AND READILY AVAILABLE TECHNOLOGIES

Applications of most of the technologies described in Chapters 2 and 3 raise ethical, legal, and societal issues. Some of these issues are new; others put pressure on existing ELSI understandings and accommodations that have been reached with respect to more traditional military technologies. As a historical matter, such understandings have generally been reached through a process in which society has addressed the ELSI implications of a new application or technology because its emergence has forced society to do so. Only rarely have ELSI implications been addressed prior to that point.

Because new technologies provide new capabilities and also allow old activities to be performed in new ways, situations can arise in which existing policy does not provide adequate guidance—giving rise to what Moor has characterized as a policy vacuum.1 But in practice, the vacuum involves more than policy—such situations also challenge existing laws, ethical understandings, and societal conventions that may have previously guided decision making when “old” technologies were involved. In a military context, it may be the existence of real-world hostilities that pushes policy makers to fill the policy vacuum.

Developing new ELSI understandings and accommodations is a fraught and complex process. For example, the technical implications of a new application may not be entirely clear when it first emerges. The intellectual concepts underpinning existing understandings may have ambiguities that become apparent only when applied to situations involving the new applications. An analogy used to extend previous understandings to the new situation may be incomplete, or even contradict the implications of other analogies that are used for the same purpose. As a practical matter also, new situations provide antagonists with the opportunity to reopen old battles over ethical, legal, and societal issues, thus potentially upending previously reached compromises on controversial issues.

In some cases, an R&D activity may be inherently suspect from an ELSI perspective. For example, advances in genetic engineering may someday enable the development of pharmaceutical agents that can act more effectively on individuals from certain ethnic groups. Although such agents might afford significant therapeutic benefit to members of those ethnic groups, the underlying science might also be used by a rogue state to harm those groups.2 Thus, R&D aimed at developing agents that have differential effects on various ethnic groups, whether or not intended for use in conflict, immediately raises a host of ELSI concerns.

In other cases, an application’s concept of operation is a central element in an ELSI analysis of that application. In general, an application of a given technology is accompanied by a concept of operation that articulates in general terms how the application is expected to be used, and it may be an application’s concept of operation rather than the application itself that raises ethical, legal, and societal issues. A system with lethal capabilities may have “selectable” modes of operation: fully autonomous operation of its lethal capabilities; human-controlled operation of its lethal capabilities; and target identification only. A concept of operations for the fully autonomous mode that does not adequately specify the circumstances under which it may be activated may well be suspect from an ELSI perspective.

1 James H. Moor, “Why We Need Better Ethics for Emerging Technologies,” Ethics and Information Technology 7:111-119, 2005.
2 The possibility that such weapons might be used was introduced in the professional military literature as early as 1970. See Carl Larson, “Ethnic Weapons,” Military Review 50(11):3-11, 1970.

An application’s practical value helps to shape the development of new ELSI understandings and accommodations. If an application turns out to have a great deal of practical or operational value, an ELSI justification may emerge after that value has been established. Similarly, if an application has little operational value, ELSI-based objections will seem more powerful, and may become part of the narrative against that application.

For example, the emergence of new weapons technologies often sparks a predictable ethical debate. Regardless of the actual nature of the weapon, some will argue that a new weapon is ethically and legally abhorrent and should be prohibited by law, whereas others will point to the operational advantages that it confers and the ethical responsibility and obligation to provide U.S. armed forces with every possible advantage on the battlefield. Sometimes this ethical debate ends in a consensus that certain weapons should not be used (e.g., weapons for chemical warfare). In other cases, existing ELSI understandings are eroded, undermined, or ignored (as was the case with the London Naval Treaty of 1930, which outlawed unrestricted submarine warfare but subsequently was abandoned for all practical purposes). But the point is that operational value has often made a difference in the outcome of an ELSI analysis.

The above points are especially relevant in an environment of accumulating incremental change and improvement. Ethical, legal, and societal issues often become prominent when a new technology offers a great deal more operational capability than previous ones. But as a technology is incrementally improved over many years and becomes much more capable than it was originally, the capabilities afforded by the improved technology may render the originally developed ELSI understandings obsolete, moot, or irrelevant.

Perhaps the most important point to be derived from synthesizing across technologies is that technology-related ELSI debates are ongoing. One should expect such debates as technology evolves, as applications evolve, as threat/response tradeoffs change (e.g., nation-state warfare, guerrilla warfare, terrorist warfare, cyber warfare), and as societal perceptions and analysis change. In some cases, new ELSI debates will emerge. In other cases, the ELSI debates will be familiar, even if they are newly cast in terms of the relevant change at hand. And in still other cases, the ELSI debates will sound familiar right down to the literal words being used, simply because a proponent of a particular ELSI perspective sees an opportunity to (re-)present his or her point of view.

4.2 ETHICS

4.2.1 Philosophical Ethics

Classically, Western moral philosophers have advanced two general kinds of moral theories that have proven useful in analyzing moral problems. One kind of theory, consequentialism (or equivalently, utilitarianism), looks at the consequences of actions and asks, for example, which actions will provide the greatest net good for the greatest number of people when both harms and benefits are taken into account. Thus, an action is judged to be right or wrong given its actual consequences. Consequentialism allows the ranking of different actions depending on the outcomes of performing them.

A second kind of theory, deontological ethics, judges the morality of actions in terms of compliance with duties, rights, and justice. Examples are following the Ten Commandments or obeying the regulations spelled out in a professional code of ethics. The morality of killing or lying would be decided based on the nature of the act and not on its results or on who the actor is. That is, the act of killing an innocent person, for instance, would under some versions of deontological ethics be categorically wrong in every circumstance. Other versions of deontological ethics allow for some ranking of conflicting duties and therefore are less categorical.

In many cases, persons acting on the basis of any of these theories would view the rightness or wrongness of any given action similarly. In other cases, they might well disagree, and philosophers have argued extensively and in many academic treatises about the differences that may arise. In practice, however, few people act for purely deontological or purely utilitarian reasons, and indeed many ethical controversies reflect the tensions among these theories. For example, Party A will argue for not doing X because X is a wrong act that cannot be justified under any circumstances, whereas Party B will argue for doing X because on balance, doing X results in a greater good than not doing X.

Sometimes these different approaches work nicely together in generating a more ethical outcome. Consequentialist ethics allow for managing a complex ethical situation to mitigate its negative effects. In some cases, the rapid pace of a program may give rise to concerns that certain stakeholders will not have a fair chance for input into a decision-making process (a deontological ethical concern). Slowing the program or building in certain checkpoints may address some of these concerns. In such cases, the issue may not be so much whether or not to do something, but rather when it should be done.

A third perspective on philosophical ethics is called virtue ethics—this perspective emphasizes good personal character as most basic to morality. People build character by adopting habits that lead to moral outcomes. Good character includes being trustworthy, helpful, courteous, kind, and so on. Under this theory, a scientist with good character will not fabricate data or exaggerate outcomes in her published research. In a military context, an example of virtue ethics is the set of core values articulated by the U.S. Army for soldiers: loyalty, duty, respect, selfless service, honor, integrity, and personal courage.3 Actions or behavior that compromise one or more of these values are to be avoided.

Perhaps related to virtue ethics is the body of moral beliefs found in specific religions that often prescribe what should count as “good” and what individuals should, or should not, do. Specific notions such as what is humane or evil; what constitutes human nature; compassion; peace; stewardship; and stories of creation are often closely linked to religious worldviews. The discussion of the laws of war below notes that the major religions of the world are not silent on questions related to war and peace, civilian and military involvement in conflict, and so on, and further that there are some commonalities to the philosophical approaches taken by those religions. But answers to questions involving such concepts may well vary according to the specific religions in question, and a serious examination of the ethics involving conflict or technologies to be used in conflict may require a detailed look at those religions. A detailed examination of what various religions say about such matters is beyond the scope of this report, and thus apart from acknowledging that religion plays an important role in the formulation of answers, the role of any specific religion is not addressed in this report.

Some relevant questions derived from philosophical ethics include the following:

• On what basis can the benefits and costs of any given research effort be determined and weighed against each other, taking into account both the research itself and its foreseeable uses?
• What categorical principles might be violated by a research effort, again taking into account both the research itself and its foreseeable uses?
• How and to what extent, if any, might a research effort and its foreseeable uses compromise the character and basic values of researchers or military users?
• How and to what extent, if any, does a research effort implicate shared ethical or moral concerns of major religious traditions?

3 See, for example, http://www.history.army.mil/lc/the%20mission/the_seven_army_values.htm.

4.2.2 Disciplinary Approaches to Ethics

Just as specialization in general areas of science and engineering has become necessary and commonplace, the same is true for ethics. The sources of modern-day ethics continue to evolve, and ethical perspectives are dynamic. For example, new theoretical orientations coming from communitarian ethics raise and address issues for which the moral theories described above are not seen to provide sufficient guidance.4 New subfields of ethics, specializing in practical and professional ethics, are now commonplace and address the issues and problems relevant to a particular area. These subfields include biomedical ethics, engineering ethics, and information technology ethics, among others. All of these specializations are concerned with examining and assisting in the particularities of moral analysis and decision making that arise in those domains, and sometimes between domains.

Biomedical Ethics

The field of biomedical ethics (bioethics) has developed over several decades and encompasses medical ethics, research ethics, and concerns over the implications of biomedical research. The field is interdisciplinary, and thus its approach to ethics incorporates work from law, medicine, philosophy, theology, and social science. In addition, the field’s boundaries are indistinct and often overlap with medical ethics, research ethics, law, public policy, and philosophy. The field initially focused on the ethics of research with human subjects, but numerous key events in medicine and biomedical research have led to the development of the field’s basic principles.

The initial discussion on the ethics of human subjects research resulted in one of the primary standards of bioethics: informed consent. In 1947, the Nuremberg Trial of Nazi doctors spurred legal discussions of consent and examinations of medical codes of ethics. Although this ruling relied on a standard of informed voluntary consent, it had little initial direct impact on U.S. medical ethics.5 The subsequent 1964 Declaration of Helsinki from the World Medical Association brought the issue of achieving informed consent in medical research to the attention of the U.S. medical community, and the declaration was incorporated into the professional codes of U.S. physicians.6 The difficulties with achieving and establishing standards for informed consent have been a consistent focus of bioethics.

4 See http://plato.stanford.edu/entries/communitarianism/.

With the discovery of cases of human subjects’ abuses throughout the 1960s and 1970s, the field was pushed to hold stricter standards for both informing patients and research subjects and also for ensuring voluntary consent. Henry Beecher’s 1966 article in the New England Journal of Medicine,7 in which he described numerous ethical abuses of patients by physicians and researchers, drew attention to the physicians’ behavior and raised concerns about physician authority. Specific cases, some identified by Beecher, focused attention on the issue of getting informed consent in medical research and also on the conflict of interest between advancing medical knowledge and not harming patients. These cases included the following:

• The Fernald School experiments. Mentally disabled children were fed radioactive calcium in their meals to learn about the absorption of calcium.
• The Jewish Chronic Disease Hospital. Terminally ill patients were injected with live cancer cells to learn about human ability to reject foreign cells.
• The Willowbrook State School. Children in the state school were deliberately given hepatitis in order to learn more about the virus and control the spread of the disease in the hospital.
• The Tuskegee Syphilis Study. African American men with syphilis were followed for over 40 years and denied treatment (penicillin) once it was available in order to learn about the disease progression.

5 Jay Katz, “The Consent Principle of the Nuremberg Code: Its Significance Then and Now,” The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation, George J. Annas and Michael A. Grodin, eds., Oxford University Press, New York, 1992.
6 Susan E. Lederer, “Research Without Borders: The Origins of the Declaration of Helsinki,” pp. 199-217 in Twentieth Century Ethics of Human Subjects Research: Historical Perspectives on Values, Practices, and Regulations, Volker Roelcke and Giovanni Maio, eds., Franz Steiner Verlag, Stuttgart, 2004; Jonathan D. Moreno and Susan E. Lederer, “Revising the History of Cold War Research Ethics,” Kennedy Institute of Ethics Journal 6(3):223-237, 1996.
7 H.K. Beecher, “Ethics and Clinical Research,” New England Journal of Medicine 274(24):1354-1360, June 16, 1966, available at http://whqlibdoc.who.int/bulletin/2001/issue4/79(4)365-372.pdf.

These cases all involved issues with the informed consent process, including the lack of information given and how voluntary the consent was.

The field of bioethics also developed principles around medical care, which have their roots in medical ethics and the physician-patient relationship. David J. Rothman has argued that the issues of informed consent and the resulting push for regulation in human experimentation overflowed into medical care during the 1960s.8 Whatever the cause, during the 1960s the physician-patient relationship was reconsidered and physician authority in making medical decisions was questioned. The results were calls for patient autonomy and an emphasis on physicians’ truthfully informing patients of their condition, rather than paternalistically shielding patients from the realities of their illnesses. These changes in norms emphasized personal autonomy and truth-telling, and were spurred by various developments in medical technology and experimental medical treatments.9 Organ transplantation and heart-lung machines raised questions about when death occurred and about patients’ rights to request withdrawal of care or deny treatment. Kidney dialysis and organ transplantation raised questions about the just allocation of limited resources, specifically asking if physicians should be the only ones making these decisions and how the decisions should be made.

The field of bioethics includes consideration of the impacts of scientific and technological developments on social morality. During the field’s development, research and advances in genetics and in vitro fertilization drove the field to think about the effects they have on society and its norms. A growing number of tests for genetic diseases raised issues of personal autonomy, genetic privacy, and claims of practicing eugenics. The development of in vitro fertilization in the 1970s and 1980s raised questions, for the first time, argued Alta Charo, about what was right and wrong regarding the manipulation of human embryos and about how to define personhood.10

In 1979, the first federal bioethics commission, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, formalized the principles of bioethics articulated in the Belmont report (Box 4.1). The commission was charged with focusing on the ethics of research on or involving human subjects; however, the moral principles it outlined have been applied, augmented, and adapted by a number of commentators and analysts not just to human subjects research but to all aspects of bioethics, including medical care, and to the impacts of biotechnology and life sciences research on society, and to other domains as well.11

8 David J. Rothman, Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making, Basic Books, New York, 1991.
9 Alta Charo, “Prior ELSI Efforts—Biomedical/Engineering Ethics,” presentation to the committee, August 31, 2011.
10 Alta Charo, “Prior ELSI Efforts—Biomedical/Engineering Ethics,” presentation to the committee, August 31, 2011.

Since the Belmont report, the biomedical ethics field has explored and focused on how these principles apply to specific areas of medicine and research, including end-of-life care, genetics and biotechnology, health systems, global health, nanotechnology, stem cell research, assisted human reproduction, gene therapy, cloning, and health care policy. Notably, in 1988 when James Watson launched the National Institutes of Health’s Human Genome Project (HGP), he also announced that 3 percent (later increased to 5 percent) of the funding would go to researching the ethical, legal, and societal issues associated with genetics, which is where the term “ELSI” originated. HGP-supported ELSI research focused the field of bioethics on the issues raised by genetics. In addition, the support for ELSI research also funded centers for bioethics across the country, which enabled the field to spread and resulted in more scholars and researchers being educated in bioethics or becoming bioethicists. This NIH-supported genetics ELSI research continues today.

Questions of interest in biomedical ethics include the following:

• How do standards for achieving informed consent change with different populations? Do different stresses on volunteers or patients alter the ability to achieve informed consent?
• What kinds of inducements overwhelm voluntarism? What protections are necessary to maintain a person’s voluntary choice in decision making?
• How should public good be weighed against risks to individuals?
• How should research populations be chosen to address issues of social justice while balancing the vulnerability of populations?
• What obligations for truth telling exist in research? Are there justifications for not telling the whole truth or leaving patients or volunteers in the dark?
• What impacts do conflicts of interest have on research results and participants’ involvement? How can conflicts of interest be resolved, or must they be avoided entirely?
• When and how do privacy issues and the collection of data negatively affect autonomy?
• How do cultural perspectives alter bioethics standards? How flexible should bioethics standards be in response to different cultures?

11 See, for example, Amy Gutmann, “The Ethics of Synthetic Biology: Guiding Principles for Emerging Technologies,” The Hastings Center Report 41(4):17-22, 2011, available at http://www.upenn.edu/president/meet-president/ethics-synthetic-biology-guiding-principles-emerging-technologies; David Koepsell, “On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D,” Science and Engineering Ethics 16(1):119-133, 2010, available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2832882/; David Koepsell, Innovation and Nanotechnology: Converging Technologies and the End of Intellectual Property, Bloomsbury Academic, New York, 2011; and U.S. Department of Homeland Security, “Applying Ethical Principles to Information and Communication Technology Research: A Companion to the Department of Homeland Security Menlo Report,” GPO, January 3, 2012, available at http://www.cyber.st.dhs.gov/wp-content/uploads/2012/01/MenloPrinciplesCOMPANION-20120103-r731.pdf.

Box 4.1 Fundamental Principles of Biomedical Ethics

The Belmont report of 1979 articulated three principles to govern the conduct of biomedical research: respect for persons, beneficence, and justice.1 In the discussion below, which is based on a discussion of biomedical ethics by Thomas L. Beauchamp and James F. Childress, a fourth principle is added: nonmaleficence.2 From each of these principles are drawn obligations and rules for how to act.

• Respect for autonomy. Autonomy is defined as including two essential conditions: “(1) liberty (independence from controlling influences) and (2) agency (capacity for intentional action).”3 This principle holds that the autonomy of people should not be interfered with. Autonomy should be respected, preserved, and supported. In the case of health care and human subjects research, the principle obliges physicians and researchers to get informed consent, tell the truth, respect privacy, and only when asked help others to make important decisions. Discussions in biomedical ethics around how to abide by this principle often focus on a few areas: evaluating capacity for making autonomous choices, the meanings and justifications of informed consent, disclosing information, ensuring voluntariness, and defining standards for surrogate decision making.
• Nonmaleficence. This principle asserts “an obligation not to inflict harm on others.”4 It does not require that a specific action be taken, but rather that one intentionally refrain from taking action that will either cause harm or impose a risk of harm. The specific rules drawn from this principle include: do not kill, do not cause pain or suffering, do not incapacitate, do not cause offense, and do not deprive others of the goods of life.5 When applied to the health care and research experiences, the discussion over the implementation of this principle focuses on distinctions and rules for nontreatment, quality-of-life discussions, and justifications and questions regarding allowing patients to die or arranging deaths. This principle is most closely connected with the physicians’ code of ethics rule that they “do no harm.”
• Beneficence. Closely related to the principle of nonmaleficence, this principle is “a moral obligation to act for the benefit of others.”6 This includes two aspects: (1) positive beneficence, which requires one to take action to provide benefits, and (2) utility, which requires that one balance benefits and costs to ensure the best result. The more specific rules drawn from this principle include: protect and defend the rights of others, prevent harm from occurring, remove conditions that will cause harm, help persons with disabilities, and rescue persons in danger.7 In reference to human experimentation this principle obliges researchers and institutional review boards to weigh the risk to subjects and to ensure that the risk be minimal unless there is a direct benefit to the subject. In the case of medical care this principle obliges physicians to promote patient welfare.
• Justice. An obligation to treat people fairly, equitably, and appropriately in light of what is due or owed to them. This principle includes the concept of distributive justice, which refers to the just distribution of materials, social benefits (rights and responsibilities), and/or social burdens.8 Determinations of what is a morally justifiable distribution vary based on different philosophical theories; for instance, a utilitarian view emphasizes maximizing public good, whereas an egalitarian view emphasizes equal access to the goods. In the medical context this principle focuses on rules regarding access to decent minimal health care, such as emergency care, the allocation of health resources, and the rationing of and priority setting for resources and treatments. Regarding human experimentation, this principle is often used to ensure that vulnerable populations are not exposed to more risk than other populations.

1 The Belmont report can be found at http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.
2 Thomas L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 5th Edition, Oxford University Press, New York, 2001.
3 Ibid., p. 58.
4 Ibid., p. 113.
5 Ibid., p. 117.
6 Ibid., p. 166.
7 Ibid., p. 167.
8 Ibid., p. 226.

Engineering Ethics

The academic field of engineering ethics developed in the United States in the early 1970s with other inquiry concerning issues of practical and professional ethics. Perhaps biomedical ethics was earliest to gain both scholarly and public interest; engineering and research ethics soon followed. Controversies concerning engineering catastrophes and research misconduct are likely to have fueled public demands and professional response. Work in the field accelerated when ABET, the accrediting body for engineering and technology programs at colleges and universities,

and motor skills by users employing night vision goggles over extended time periods.51

• Remote operation of unmanned ground vehicles. A single human operator cannot effectively operate more than one unmanned ground vehicle under active combat conditions (e.g., during times of attack). Further, in the absence of other knowledge, operators of unmanned vehicles tend to use tactics, techniques, and procedures (TTPs) originally developed for operating manned vehicles, pointing to the need for TTPs for the use of unmanned ground vehicles that are specific to the tasks, features, and characteristics of those systems.52
• Passwords and cybersecurity. Authentication of an asserted identity is central to controlling access to information technology resources. Passwords are an essential element—in many cases, the only element—of the most commonly used approaches to authentication. But it is well known that individuals tend to choose easy-to-remember passwords—thus making such passwords easy for an adversary to guess (see the illustrative sketch below).
• Body armor for female soldiers. Traditionally, body armor has been designed to protect male bodies. Some research suggests that such armor is less protective of female bodies53 and also that the poor fit of such armor on female soldiers makes it difficult for them to properly aim their weapons and enter or exit vehicles.54
• Coordination. The effective operation of any complex system requires coordination among the individuals responsible for its design, operation, maintenance, and upgrading. When that coordination fails, designers may require operators to do the impossible, with a technology that they understand incompletely or cannot support with the resources available to them. Such failures affected Three Mile Island, Chernobyl, and Fukushima.55

51 Albert L. Kubala, Final Report: Human Factors Research in Military Organizations and Systems, Human Resources Research Organization, Alexandria, Va., 1979, available at http://www.dtic.mil/dtic/tr/fulltext/u2/a077339.pdf.
52 Jessie Y.C. Chen, Ellen C. Haas, Krishna Pillalamarri, and Catherine N. Jacobson, Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies, Army Research Laboratory, 2006, available at http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA451379.
53 Marianne Resslar Wilhelm, “A Biomechanical Assessment of Female Body Armor,” ETD Collection for Wayne State University, Paper AAI3117255, January 1, 2003, available at http://digitalcommons.wayne.edu/dissertations/AAI3117255.
54 Anna Mulrine, “Army Uses ‘Xena: Warrior Princess’ as Inspiration for New Body Armor for Women,” July 9, 2012, Christian Science Monitor Online, available at http://www.csmonitor.com/USA/Military/2012/0709/Army-uses-Xena-Warrior-Princess-as-inspiration-for-new-body-armor-for-women.
55 James R. Chiles, Inviting Disaster: Lessons from the Edge of Technology, Harper Collins, New York, 2002; Charles Perrow, Normal Accidents: Living with High Risk Technologies, Princeton University Press, Princeton, N.J., 1999.
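The password point can be made concrete with a minimal, purely illustrative sketch in Python; the short common-password list and the character-class heuristic below are hypothetical stand-ins, not drawn from this report. It shows how a memorable password can look strong by a naive strength rule while still falling immediately to the dictionary-style guessing that adversaries actually use.

```python
import math

# A tiny stand-in for the large lists of commonly chosen passwords that
# real adversaries try first; these entries are purely illustrative.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "p@ssw0rd1"}

def naive_entropy_bits(password: str) -> float:
    """Strength as a naive character-class calculation would report it."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(not c.isalnum() for c in password):
        pool += 33  # rough count of printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ("P@ssw0rd1", "tr9#Vq!e2Lzm"):
    naive = naive_entropy_bits(pw)
    guessable = pw.lower() in COMMON_PASSWORDS
    print(f"{pw!r}: naive estimate ~{naive:.0f} bits; "
          f"on common-password list: {guessable}")
# 'P@ssw0rd1' scores roughly 59 bits by the naive rule yet appears on the
# common-password list, illustrating the gap between nominal and actual
# security that human-factors analysis is meant to expose.
```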

Human factors engineering (also called ergonomics) has long been part of the design of many systems (e.g., cockpits, computer interfaces) in both the civilian and the defense sectors.56 It is most useful when incorporated in the earliest stages of the design process, when there is a wide range of opportunities to respond to users’ needs. At the other extreme, the need to rely on warning labels in many cases reflects a design failure.

Some of the questions derived from human–systems integration include the following:

• How can requirements be written in order to ensure that technologies can be operated and maintained under field conditions?
• How can the acquisition process evaluate and ensure the operational usability of future technologies?
• What are the institutional barriers to incorporating human-systems expertise in the design process?
• What kinds of expertise and social organization are needed to support a technology, by the United States (so as to increase operability) and by its adversaries (so as to limit the technology’s appropriation by them)?

56 Steven Casey, Set Phasers on Stun: And Other True Tales of Design, Technology, and Human Error, Aegean, New York, 1993; Peter A. Hancock, Human Performance and Ergonomics: Perceptual and Cognitive Principles, Academic Press, New York, 1999; and Christopher D. Wickens, Sallie E. Gordon, and Yili Liu, An Introduction to Human Factors Engineering, Prentice-Hall, New York, 2004. A historical perspective can be found in Paul M. Fitts, ed., Psychological Research and Equipment Design, U.S. Government Printing Office, Washington, D.C., 1947.

4.5 SCIENTIFIC AND TECHNOLOGICAL FRAMING

In some cases, ethical insights emerge from a scientific and technological framing different from that which is initially offered. To the extent that a given technology or application is based on an erroneous or an incomplete scientific understanding, any risk analysis of that technology or application will itself be incomplete. New ethical, legal, and societal issues may well emerge if and when the underlying science becomes more complete.

For example, assumptions of system linearity and decomposability often enable scientists to make headway in their investigations of phenomena, and so it is natural to turn at first to techniques based on these assumptions. But some systems are not well characterized by these assumptions in the domains of interest to investigators, although it may take some time to recognize this reality. In other instances, there is considerable uncertainty about the relevant data, for example, because they have not yet been collected, or there may be defects in the data that have been collected. In still other cases, system behavior may be emergent and path-dependent, may be very sensitive to initial conditions, or may depend on incompletely known relationships between the system and its environment. Predictions about system behavior may be possible only through high-fidelity computer simulations, may be probabilistic in nature, or may be exponentially inaccurate depending on the time horizons in question. If these realities are not recognized when ethical, legal, and societal issues are considered, such a consideration will be based on an incomplete scientific understanding.
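The point about sensitivity to initial conditions can be illustrated with a deliberately simple, hypothetical model that is not drawn from this report: the logistic map, a textbook nonlinear system. In the sketch below (Python, with the parameter r = 4.0 and the two starting values chosen only for illustration), trajectories that begin one part in a million apart diverge to order-one differences within a few dozen steps, which is one concrete sense in which the trustworthy prediction horizon can be short even when the governing equation is known exactly.

```python
# Illustrative sketch: sensitivity to initial conditions in a simple
# nonlinear system, the logistic map x_{n+1} = r * x_n * (1 - x_n).
# With r = 4.0 the map is in its chaotic regime.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # nominal initial condition
b = logistic_trajectory(0.200001)   # measurement off by only 1e-6

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The gap grows from 1e-6 to roughly the full range of the variable,
# so predictions far beyond the divergence horizon cannot be trusted.
```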

Systems with some of these analytically problematic characteristics are often biological or environmental in nature. For example, early in the history of biology, a “one-gene, one-protein” phenomenology was widely accepted. Today, it is generally accepted that many noncoding parts of DNA control the circumstances under which a specific gene will be expressed, and the rules governing regulation are not well understood. In addition, it is not always possible to predict how natural selection will act on a system over time.

Concerns over ethical, legal, and societal issues may thus sometimes be rooted in disagreements over the fundamental science involved. Are the nonlinearities in the system in question significant? Does the model being used to understand the relevant phenomena capture all essential elements? How sensitive is the model to initial conditions? How far into the future can a model’s predictions be trusted?

A relevant ethical question derived from considering scientific framing is the following:

• How and to what extent, if any, are known ethical, legal, and societal issues related to uncertainties in the underlying science or maturity of the technology?

4.6 THE PRECAUTIONARY PRINCIPLE AND COST-BENEFIT ANALYSIS

Commentators differ in their psychological as well as social orientation toward technology development and application. Those most concerned about potential negative results tend to promote the precautionary principle,57 doing so in response to traditional cost-benefit analysis, which they regard as using approaches that give innovation the benefit of the doubt.

57 A substantial amount of background information on the precautionary principle can be found in Ragnar E. Löfstedt, Baruch Fischhoff, and Ilya Fischhoff, “Precautionary Principles: General Definitions and Specific Applications to Genetically Modified Organisms (GMOs),” Journal of Policy Analysis and Management 21(3):381-407, 2002.

The strongest form of the precautionary principle states that when a technology or an application threatens harm—to society, to individuals, to the environment, and so on—precautionary measures should be taken before a decision is made to proceed with developing that technology or application, and in general the technology should not be pursued until those concerns are decisively addressed.

Some formulations of the precautionary principle require strong evidence of risks, in the sense of developing a full set of relevant cause-and-effect relationships. Other formulations require less evidence, suggesting that high levels of uncertainty about causality should not be a bar to precautionary action. In these latter formulations, the postulated harms can be merely possible and may be speculative in the sense that the full set of relevant cause-and-effect relationships (that is, relationships between developing the technology or application and the harm that may result) may not have been established with sufficient scientific rigor, or in the sense that the probability of the harms occurring may be low.

The precautionary principle places the burden of proof on those who advocate certain technologies to produce evidence that will reassure reasonable skeptics, rather than on the public to show that development can cause unacceptable harm. Further, the principle often requires that precautionary measures be taken before any development work occurs, and such measures may include a complete cessation of all development work. Advocates of the precautionary principle often invoke ethical commitments to protect the environment from the results of humans’ mistakes and to safeguard the public from terrorists.58 In the view of these advocates, one of the biggest risks is that science and technology will move forward too quickly, causing irreversible damage. An example of applying the precautionary principle to biological research could be the outcome of the 1975 Asilomar Conference on Recombinant DNA Research, discussed in Chapter 1.

A different approach is traditional cost-benefit analysis, which is fundamentally rooted in utilitarian ethics. Cost-benefit analysis relies on the ability to quantify and weigh the value of putative costs and benefits. Quantification is intended to make the assessment of costs and benefits a more objective process, although serious analysts usually recognize the value-laden nature of quantification. For example, in some formulations of cost-benefit analysis, uncertainty about costs or benefits implies that those costs or benefits can be discounted or even dismissed. Costs or benefits that cannot be objectively quantified are not taken into account at all. Examples of such costs could include the costs to the credibility of an organization when a technology fails, is introduced improperly, or causes harm, or the costs of disruptions to social systems caused by particular technologies. Some versions of cost-benefit analysis do seek to address such matters, as well as the impact of uncertainty and risk tolerance. Any calculation must treat the distribution of risks and benefits in some way, if only by ignoring it and thereby disregarding whether those who bear the risks also receive the benefits. A common compromise is to ask whether the beneficiaries from a project could, in principle, compensate the losers—without ensuring that there are mechanisms for effecting those transfers.

58 See http://www.synbioproject.org/process/assets/files/6334/synbio3.pdf.

In cost-benefit analysis, opponents of developing a new technology or application bear the burden of proof of showing that costs outweigh benefits. Differences among those who advocate cost-benefit analysis can be found in their relative weightings of benefits and costs, how and when to account for uncertainty, and how to bound the universe of costs and benefits. For example, benefits and costs may be realized in the short term or in the long term: How and to what extent, if any, should long-term benefits and costs be discounted compared to short-term benefits and costs? Benefits and costs may be unequally distributed throughout the world: Which parties have standing in the world to claim that their costs or benefits must be taken into account? Inaction is itself an action: How should the costs and benefits of the status quo factor into the weighing of overall costs and benefits?
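The discounting question can be illustrated with a deliberately simple, hypothetical calculation; the cash flows, time horizon, and discount rates in the Python sketch below are invented for illustration and are not drawn from this report. It shows how the choice of discount rate alone can determine whether a program with near-term benefits and a large long-term cost appears to be worth pursuing.

```python
# Hypothetical net cash flows for a notional program: +10 per year in
# years 1-10, and a single large remediation cost of -300 in year 30.
# All figures are invented for illustration only.

def net_present_value(flows, discount_rate):
    """Discount each year's net flow to the present and sum."""
    return sum(value / (1.0 + discount_rate) ** year
               for year, value in flows.items())

flows = {year: 10.0 for year in range(1, 11)}
flows[30] = -300.0

for rate in (0.01, 0.03, 0.07):
    print(f"discount rate {rate:.0%}: NPV = {net_present_value(flows, rate):+.1f}")
# At 7 percent the distant cost is heavily discounted and the total is positive;
# at 1 percent the same cost dominates and the total turns negative, so the
# choice of rate is itself a judgment about how much the long term matters.
```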

In practice, a middle ground can often be found between the precautionary principle and cost-benefit analysis. For example, a less traditional approach to cost-benefit analysis sometimes attempts to quantify intangible and long-term costs that would not usually be taken into account in a traditional cost-benefit analysis. One less extreme form of the precautionary principle allows precautionary measures to be taken when there is uncertainty about costs and harms, but does not require such measures. Another less extreme form requires the existence of some scientific evidence relating to both the likelihood and magnitude of harm and the significance of such harm should it occur.

A middle ground requires calculating the costs and benefits of all outcomes for which there are robust methods, along with explicit disclosure of the quality of those analyses, the ethical assumptions that they entail (e.g., regarding distributional effects), the uncertainty surrounding them, and the issues that are ignored. Seeing the limits to the analysis allows decision makers to assess the measure of precaution that is needed.

Some relevant ethical questions derived from considering cost-benefit analysis and the precautionary principle are the following:

• How and to what extent, if any, can ELSI-related tensions between cost-benefit analysis and the precautionary principle be reconciled in any given research effort?
• If a cost-benefit approach is adopted, how will intangible costs and benefits of a research effort be taken into account?
• If a precautionary approach is adopted, what level of risk must be posed by a research effort before precautionary actions are required?

4.7 RISK COMMUNICATION

Those who fund, design, and deploy new technologies must communicate the associated risks and benefits effectively both to those who would use them and to the public that will pass judgment on their work. If users misunderstand a technology’s costs and capabilities, they may forgo useful options or invest in ones that leave them vulnerable if they fail to fulfill their promise. If the public misunderstands a technology’s risks and benefits, then it may prevent the development of valuable options or allow ones that undermine its welfare.

Communicating about complex, uncertain, risky technologies poses special problems and is often done poorly,59 in part because technical experts often have poor intuitions about and/or understanding of their audiences’ knowledge and needs. Scientific approaches to that communication have been developed over the past 40 years, building on basic research in cognitive psychology and decision science. The National Research Council’s report Improving Risk Communication (1989) provided an early introduction to that research.60 There are many other sources,61 including a special issue of the Proceedings of the National Academy of Sciences with scientific papers from the May 2012 Sackler Colloquium on the Science of Science Communication.

59 Baruch Fischhoff, “Communicating the Risks of Terrorism (and Anything Else),” American Psychologist 66(6):520-531, 2011; Raymond S. Nickerson, “How We Know—and Sometimes Misjudge—What Others Know,” Psychological Bulletin 125(6):737-759, 1999.
60 National Research Council, Improving Risk Communication, National Academy Press, Washington, D.C., 1989.
61 Baruch Fischhoff and Dietram A. Scheufele (eds.), “The Science of Science Communication,” Arthur M. Sackler Colloquium, National Academy of Sciences, held May 21-22, 2012, printed in Proceedings of the National Academy of Sciences of the United States of America 110(Supplement 3):13696 and 14031-14110, August 20, 2013; Baruch Fischhoff, Noel T. Brewer, and Julie S. Downs, eds., Communicating Risks and Benefits: An Evidence-Based User’s Guide, U.S. Food and Drug Administration, Washington, D.C., 2011; M. Granger Morgan, Baruch Fischhoff, Ann Bostrom, and Cynthia J. Atman, Risk Communication: A Mental Models Approach, Cambridge University Press, New York, 2001; Paul Slovic, The Perception of Risk, Earthscan, London, 2000; and Baruch Fischhoff, “Risk Perception and Communication,” pp. 940-952 in Oxford Textbook of Public Health, 5th Edition, R. Detels, R. Beaglehole, M.A. Lansang, and M. Gulliford, eds., Oxford University Press, Oxford, 2009, reprinted in Judgement and Decision Making, N.K. Chater, ed., Sage, Thousand Oaks, Calif., 2011, available at http://www.hss.cmu.edu/departments/sds/media/pdfs/fischhoff/RiskPerceptionCommunication.pdf.

All these sources prescribe roughly the same process for developing and vetting a strategic approach to communication, a defensible risk/benefit analysis in advance of any controversy, and communication activities that are both audience-driven and interactive. This process calls for:

• Identifying the information regarding context and scientific background that is most critical to members of the audience for making the decisions that they face (e.g., whether to accept or adopt a technology, how to use it, whether it is still effective). That information may differ from the facts most important to an expert or the ones that the expert would love to convey in a teachable moment.
• Conducting empirical research to identify audience members’ current beliefs, including the terms they use and their organizing mental models.62 Effective messages depend as much on the nature of the target audience as on the content of the messages themselves. Crafting effective messages nearly always requires the participation of and input from individuals who are representative of the audience. And since it is often impossible to obtain participation and input from the target audience on the time scales needed for response, such input must be obtained before controversies erupt.
• Designing messages that close the critical gaps between what people know and what they need to know, taking advantage of existing knowledge and the research base for communicating particular kinds of information (e.g., uncertainty).63
• Evaluating those messages until the audience reaches acceptable levels of understanding.
• Developing in advance multiple channels of communication to the relevant audiences, including channels based on media contacts, opinion leaders, and Internet-based and more traditional social networks, and avoiding undue dependence on traditional media and public authorities for such communication.64
• Disclosing problematic ethical, legal, and societal issues earlier rather than later. Early disclosure is almost always in the interest of the researchers and/or sponsoring agency, provided the disclosure can be handled properly (e.g., without initially providing information that turns out to be wrong and controversial).

62 Dedre Gentner and Albert Stevens, eds., Mental Models, Erlbaum, Hillsdale, N.J., 1983.
63 David V. Budescu, Stephen Broomell, and Han-Hui Por, “Improving Communication of Uncertainty in the Reports of the Intergovernmental Panel on Climate Change,” Psychological Science 20(8):299-308, 2009; Mary C. Politi, Paul K.J. Han, and Nananda F. Col, “Communicating the Uncertainty of Harms and Benefits of Medical Procedures,” Medical Decision Making 27(5, September-October):681-695, 2007.
64 Philip Campbell, “Understanding the Receivers and the Reception of Science’s Uncertain Messages,” Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences 369:4891-4912, 2011.

• Ensuring that messages reach the intended audiences in a prompt and timely fashion. Controversies can emerge and grow on the time scale of a day, requiring responses on similar time scales. Any message will be less effective if audience members have already formed their opinions or feel that its content was not forthcoming.
• Persisting in such public engagements even over long periods of time.65

Achieving these goals typically requires a modest investment of resources, along with a strategic commitment to ensuring that critical audiences are informed—and not blindsided.66 Nonetheless, the comments above should not be taken to mean that the process of risk communication is an easy one. Some of the important issues that arise in crafting an appropriate strategy for risk communication include the following:

• Identifying stakeholders and social networks. For any emerging and readily available technology, the stakeholders are likely to vary. Identification of the appropriate stakeholder groups and the communication environment in which those stakeholders interact is key to understanding their engagement and their beliefs, attitudes, and values.67 Information is commonly shared among interpersonal networks. Understanding the way information is shared among social networks should be foundational to risk communication activities. Research in this area examines how members of social systems share information, how normative information is communicated, the role of group identification in this process, and so on.68
• Identifying the goal(s) of communication. Communication efforts may be designed with any number of potential goals in mind: enhancing knowledge about an issue, influencing attitudes or behaviors, facilitating decision making, and so on. The specific goal drives formative data collection and subsequent content, as well as choices about channels for communication. Once the goal of communication efforts is clearly identified, crafting the content of information and messages that are shared with stakeholder groups is critical. Message design and rapid message testing methodologies address the content of communication interventions—from the types of appeals used in messages to the nature of evidence and arguments presented in communications.69

65 Campbell, “Understanding the Receivers and the Reception of Science’s Uncertain Messages,” 2011.
66 Thomas Dietz and Paul C. Stern, eds., Public Participation in Environmental Assessment and Decision Making, The National Academies Press, Washington, D.C., 2008; Presidential/Congressional Commission on Risk, Risk Management, Washington, D.C., 1998.
67 Rajiv N. Rimal and A. Dawn Adkins, “Using Computers to Narrowcast Health Messages: The Role of Audience Segmentation, Targeting, and Tailoring in Health Promotion,” pp. 497-514 in Handbook of Health Communication, T.L. Thompson, A.M. Dorsey, K.I. Miller, and R. Parrott, eds., Lawrence Erlbaum and Associates, Mahwah, N.J., 2003.
68 Saar Mollen, Rajiv N. Rimal, and Maria Knight Lapinski, “What Is Normative in Health Communication Research on Norms? A Review and Recommendations for Future Scholarship,” Health Communication 25(6-7, September):544-547, 2010.

• Enhancing public perceptions of source credibility, especially in an environment of ubiquitous media and multitudes of sources. Expertise, similarity, and other cues about people are known to influence how we respond to those people—audiences gather such information through communication. Since the early 1960s,70 researchers have documented the effects of perceptions of source credibility (trust, expertise, etc.) on responses to information.71
• Accounting for the role of emotion in risk communication processes that might facilitate or inhibit appropriate behavior. As identified by Janoske et al.,72 these emotions include anger, sadness, fear, and anxiety. Acknowledging the impact of such emotions helps in designing more effective communication processes. For example, fear arises in situations over which individuals cannot exercise control—thus, effective risk communication will suggest specific actions or preparedness activities that can be undertaken.
• Maximizing the positive utility of social media and other emergent communications technologies. Research addressing the role of new and emerging media in risk communication processes is in its infancy, but research might be conducted on media effects, uses, the spread of information through social media, data mining as a mechanism for media monitoring, and so on.

69 Charles Salmon and Charles Atkin, “Using Media Campaigns for Health Promotion,” pp. 263-284 in Handbook of Health Communication, T.L. Thompson, A.M. Dorsey, K.I. Miller, and R. Parrott, eds., Lawrence Erlbaum and Associates, Mahwah, N.J., 2003.
70 J.C. McCroskey, “Scales for the Measurement of Ethos,” Speech Monographs 33:65-72, 1966.
71 Salmon and Atkin, “Using Media Campaigns for Health Promotion,” 2003.
72 See, for example, Melissa Janoske, Brooke Liu, and Ben Sheppard, “Understanding Risk Communication Best Practices: A Guide for Emergency Managers and Communicators,” Report to Human Factors/Behavioral Sciences Division, Science and Technology Directorate, U.S. Department of Homeland Security, College Park, Md.: START, 2012, available at http://www.start.umd.edu/start/publications/UnderstandingRiskCommunicationBestPractices.pdf; Monique Mitchell Turner, “Using Emotion in Risk Communication: The Anger-Activism Model,” Public Relations Review 33:114-119, 2007; Kim Witte, “Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model,” Communication Monographs 59:329-349, 1992; and Robin L. Nabi, “A Cognitive-Functional Model for the Effects of Discrete Negative Emotions on Information Processing, Attitude Change, and Recall,” Communication Theory 9(3):292-320, 2006.

Last, effective risk communication has a relationship to other ethical, legal, and societal issues, such as informed consent. That is, the process of obtaining informed consent can be viewed as a risk communication event.73 Taking such a view suggests questions such as: When do people make decisions about consenting in research studies? How are the risks and benefits communicated to potential participants? What is the nature of the communication in informed consent documents? What is the role of the sources of information (their characteristics) in this process? What are the cultural and social dynamics of the risk communication process?

Some of the questions derived from risk communication include the following:

• How can technology developers communicate the risks and benefits of technologies to the American public, so as to ensure a fair judgment, without revealing properties that would aid U.S. adversaries?
• What aspects of a technology are fundamentally difficult to understand by nonexperts? How can communications be developed to create the mental models needed for informed consent?
• How can technology developers communicate with the public (and its representatives) to reveal concerns early enough in the development process to address them in the design (rather than with costly last-minute changes)?
• How can communication channels be modeled so as to ensure that members of different groups hear and are heard at appropriate times?
• How can organizations ensure the leadership needed to treat communication as a strategic activity, which can determine the success and acceptability of a technology?

73 See, for example, Terrence L. Albrecht, Louis A. Penner, Rebecca J.W. Cline, Susan S. Eggly, and John C. Ruckdeschel, “Studying the Process of Clinical Communication: Issues of Context, Concepts, and Research Directions,” Journal of Health Communication 14, Supplement 1:47-56, January 2009.

4.8 USING SOURCES OF ELSI INSIGHT

The sources of ELSI insight described above are varied and heterogeneous. This report provides such a variegated list because consideration of each of these sources potentially provides insight into ethical, legal, and societal issues from different perspectives. But in considering what insights these sources might offer in the context of any specific science or technology effort, two points are worth noting.

First, many of the sources described above are linked. For example, philosophical ethics—suitably elaborated—is in part a basis for disciplinary ethics and law. Differences between the precautionary principle and cost-benefit analysis mirror distinctions between deontology and consequentialism. The social sciences provide tools to examine the realities of behavior and thought when humans are confronted with the need to make ethical choices.

Second, consideration of a problem from multiple perspectives may from time to time lead to conflicting assessments of the ethics of alternative courses of action. Indeed, perfect consistency across these different perspectives is unlikely. If such consistency is indeed the case, then perhaps the celebration of a brief moment of ethical clarity is in order. But experience suggests that a finding of such consistency sometimes (often) results from an unconscious attempt to reduce cognitive dissonance and/or a deliberate “stacking of the deck” toward favorable assumptions or data selection to build support for a particular position. In the more likely case that the assessments from each perspective are not wholly congruent with each other, debate and discussion of the points of difference often help to enrich understanding in a way that premature convergence on one point of view cannot.