D

Established Institutional Mechanisms for Addressing Ethical, Legal, and Societal Issues

D.1 DOD LAW-OF-ARMED-CONFLICT REVIEW AND TREATY COMPLIANCE

The 1977 Additional Protocol I of the Geneva Conventions of August 12, 1949 (to which the United States is a signatory) states in Article 36 that with respect to “development, acquisition or adoption of a new weapon, means or method of warfare, [a signatory] is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable.”1 Thus, weapons acquired by the Department of Defense are subject to a review that determines whether the normal or expected use of the weapon is consistent with the law of armed conflict (LOAC).2 (According to Parks, the “normal and expected use” of a weapon is associated

___________________

1 International Committee of the Red Cross (ICRC), Additional Protocol I to the Geneva Conventions on the Protection of Victims of International Armed Conflicts, June 8, 1977, available at http://www.icrc.org/ihl.nsf/b466ed681ddfcfd241256739003e6368/f095453e41336b76c12563cd00432aa1!OpenDocument.

2 The legal authority for this review is derived from a variety of DOD regulations: U.S. Department of Defense Directive 5000.1, “Defense Acquisition,” March 15, 1996 (hereinafter DODD 5000.1); U.S. Department of the Army, Army Regulations 27-53, “Review of Legality of Weapons Under International Law,” January 1, 1979 (hereinafter AR 27-53); U.S. Department of the Navy, Secretary of the Navy Instructions 5711.8A, “Review of Legality of Weapons Under International Law,” January 29, 1988; U.S. Department of the Air Force, Air Force Instruction 51-402, “Weapons Review,” May 13, 1994 (hereinafter AFI 51-402).








with the “means and method of warfare.”3) Notably, the review is “not required to foresee or analyze all possible misuses of a weapon, for almost any weapon can be misused in ways that would be prohibited.”

In accordance with Additional Protocol I (AP I), the LOAC review is confined to “weapons, means or method of warfare”; thus, research or development work that is not intended to result in a weapon being procured is not included. In addition, AP I does not define what is covered by the term “weapon.” The U.S. military services (Army, Navy, Air Force) do have regulations that specify what is covered and thus what is subject to LOAC review:

• Army.4 Weapons are defined as “all conventional arms, munitions, materiel, instruments, mechanisms, or devices which have an intended effect of injuring, destroying, or disabling enemy personnel, materiel or property.” Weapons systems refer to the weapon itself and those components required for its operation, but the definition is limited to those components having a direct injurious or damaging effect on individuals or property (including all munitions such as projectiles, small arms, mines, explosives), and that are injury- or casualty-producing.

• Navy.5 Weapons or weapon systems for the purpose of the legal review are defined as “all arms, munitions, materiel, instruments, mechanisms, devices and those components required for their operation, that are intended to have an effect of injuring, damaging, destroying, or disabling personnel or property, [including] non-lethal weapons. For [the] purpose of the legal review, weapons do not include launch or delivery platforms, such as, but not limited to, ships or aircraft, but rather the weapons or weapon systems contained on those platforms.”

• Air Force.6 Weapons are defined as “devices designed to kill, injure or disable people, or to damage or destroy property. Weapons do not include devices developed and used for training and practice; aircraft, intercontinental ballistic missiles, and other launch platforms; or electronic warfare devices.”

___________________

3 W. Hays Parks, “Conventional Weapons and Weapons Reviews,” Yearbook of International Humanitarian Law 8:55-142, 2005.

4 AR 27-53, January 1, 1979; see Federation of American Scientists, available at http://www.fas.org/irp/DODdir/army/ar27-53.pdf.

5 U.S. Department of the Navy, Secretary of the Navy Instructions 5000.2C, “Implementation and Operation of the Defense Acquisition System and the Joint Capabilities Integration and Development System,” November 19, 2004, p. 23, para. 2.6.

6 AFI 51-402, May 13, 1994.

Parks points to various issues in these definitions.7 For example, he notes the difference between a weapon and a weapons system, and points out that in essence these regulations exclude from review parts of a system that do not cause injury. While he accounts for the possibility that devices for electronic warfare (e.g., jammers) might be included, he also notes that the Air Force definition specifically excludes these devices from LOAC review.

AP I does not specify when in the life cycle of a weapon a review must be conducted. Parks indicates only that the LOAC review takes place “early” in the acquisition process. At the time of the writing of the present report (summer 2013), there is no written guidance known to the committee specifying precisely when in the acquisition process such a review must take place.

Parks, who has personally conducted many LOAC reviews of weapons to be acquired, argues that the process has been successful and effective. In his words,

    program managers generally have a good sense of and respect for the laws of war, and are cognizant of areas that may raise legal issues. This [familiarity] prompts requests for legal reviews early in the research, development and acquisition process, particularly where the office responsible for conducting the legal review has taken the necessary steps to identify itself and the requirement to engineering, research, development and acquisition commands, and establish an effective working relationship.

He indicates the importance of speaking at professional meetings of weapons development and acquisition experts to inform attendees of the review program, to explain the rationale behind the program, and to indicate the steps or procedures to be taken. In addition, he stresses the need to convince program managers that the review is intended to assist rather than hinder the acquisition process, even though there may be individual instances in which weapons or munitions may be found legally unacceptable. Finally, Parks notes that to the best of his knowledge, there has never been a delay in providing weapons reviews “as a result of the nature of the [DOD] legal review process”; delays have occurred only when the requester has “failed to provide adequate information for the conduct of the legal review.”

The committee notes that this legal review necessarily takes place after the point at which a specific weapon is available to review; it does

___________________

7 W. Hays Parks, “Conventional Weapons and Weapons Reviews,” Yearbook of International Humanitarian Law 8:55-142, 2005.

not apply to research and development efforts. Moreover, the review is—by assumption—narrow. It examines the weapon only in the context of its stated concept of operations (that is, how the weapon is expected to be used). It is also limited to LOAC issues, with broader ethical or societal issues not within scope. Similar processes attach to efforts that might implicate obligations stemming from treaties that constrain or restrict research or development in some way.

D.2 CODES OF ETHICS AND SOCIAL RESPONSIBILITY IN MEDICINE, ENGINEERING, AND SCIENCE

Medicine, engineering, and science are fields that generally hold practitioners accountable for considering at least some of the ethical ramifications of their medical, technical, or scientific work. In some cases, these ramifications include those related to matters such as safety and the protection of human subjects; in others, they include those related to the impact of such work on the broader society.

Professional standards and codes of ethics may be implied or implicit rather than codified or formalized, and may incorporate standards for behavior (what a responsible practitioner must do in providing services to clients) as well as a sense of social responsibility (e.g., a responsibility for practitioners to provide services and expertise to society in addition to those they provide to their clients; a responsibility to protect a vulnerable public from harm). Brian Rappert identifies three broad categories of codes:8

• Aspirational codes (often designated as “codes of ethics”) set out ideals that practitioners should uphold, such as standards of research integrity, honesty, or objectivity. . . .

• Educational/advisory codes (often designated as “codes of conduct”) go further than merely setting aspirations by providing guidelines suggesting how to act appropriately. . . .

• Enforceable codes (often designated as “codes of practice”) seek to further codify what is regarded as acceptable behavior. Rather than inspiring or educating in the hope of securing certain outcomes, enforceable codes are embedded within wider systems of professional or legal regulation.

___________________

8 Brian Rappert, “Towards a Life Science Code: Countering the Threats from Biological Weapons,” Bradford Briefing Paper No. 13, September 2004, available at http://www.brad.ac.uk/acad/sbtwc.

In general, these standards and codes of ethics serve three important functions:

• They make clear to practitioners that they do have affirmative responsibilities to consider ethical and societal issues that go beyond the narrow scope of their clients’ stated needs.

• They make clear to society that practitioners recognize an obligation to society to consider such ethical and societal issues.

• They provide standing for practitioners to resist pressures to proceed in technical directions that may be harmful to society at large and provide, when necessary, a justification for placing social considerations ahead of financial, management, or technical goals in decisions.

Professional standards have often emerged from the process by which a field becomes a profession, but they have also developed in fields whose experts never formed a profession. Some experts, like physicians and some engineers, identify themselves as professionals (as further described below).

Historically, a profession by definition is self-regulating, is autonomous, and serves clients. Often it organizes a society for its members that sets rules and standards and represents its members in the larger society.9 The self-regulating component often involves standardized education requirements for degrees, licensing, or certification, as well as codes of conduct or ethics that are enforceable. Society grants autonomy to professions in exchange for this self-regulation, a privilege that results in the restriction of the practice of the profession to qualified individuals only, thereby providing some protection to society.10 Autonomy and self-regulation in turn allow professionals to be the sole experts in a society in one specific area. Over time the historical understanding of a profession has evolved and broadened in common parlance to include fields with

___________________

9 Michael Davis, “Defining Engineering: How to Do It and Why It Matters,” Journal of Engineering Education 85:99, April 1996, available at http://www.synbioproject.org/process/assets/files/6452/_draft/davis_defining_engineer.pdf.

10 More specifically, professions traditionally assume responsibilities for self-regulation, including the promulgation of certain standards to which all members are supposed to adhere. These standards are of two kinds: technical standards that establish the minimum conditions for competent practice, and ethical principles that are intended to govern the conduct of members in their practice. In exchange for exercising this responsibility, society implicitly grants professions a degree of autonomy. The privilege of this autonomy in turn creates certain special obligations for the profession’s members. See Advisory Committee on Human Radiation Experiments, Part I, Ethics of Human Subjects Research: A Historical Perspective, Final Report, p. 115, Government Printing Office, Washington, D.C., 1995.

experts who have technical or topical expertise and can join voluntary societies with standards of behavior or codes of ethics. The professional standards and codes of ethics that help practitioners to maintain their professions’ standing in society change over time and are continually being renegotiated, as is the understanding of what makes a field a profession.

The historical origins of social responsibility are significant because they frame the manner in which social responsibility is understood in medicine and engineering compared to science. Physicians’ and engineers’ social responsibility traditionally has been about upholding their professional standards (which include standards of social responsibility), whereas social responsibility in science traditionally has been about upholding the social contract that results in funding and intellectual freedom for scientists.

D.2.1 Medicine and Engineering

The medical profession well exemplifies this understanding of professional social responsibility. Physicians take some version of the Hippocratic Oath upon graduation from medical school. Further, the medical profession sets education standards through the leadership of the American Medical Association and the Association of American Medical Colleges, both nonprofit member organizations, with the latter especially focused on academic medicine, as well as through licensing requirements administered by state medical boards. Through the American Medical Association, the profession also has a code of ethics that concerns physicians’ interactions with each other and with their patients. However, in addition to the professional ethics code, public policy and legal rulings since the 1960s have increasingly regulated the ethical conduct of physicians, especially in regard to research on human subjects.

The medical profession contains society’s experts on medicine and thus is allowed a considerable degree of autonomy in medical matters. The field has evolved around serving its clients, the patients. Physicians’ social responsibility developed out of their standing as a profession in society and their desire to maintain their authority and autonomy. Second to serving their patients, physicians are expected to inform, warn, and protect the general public on medical issues. An example is the responsibility that physicians have to serve society in epidemics even at the risk of their own health.

The engineering field also developed as a profession characterized by accreditation, licensure, service to clients, and organization into societies. Engineers set their own standards for education through the Accreditation Board for Engineering and Technology (ABET) and for licensing

through the state boards of professional engineers, which are represented by the National Council of Examiners for Engineering and Surveying. In addition, the various specializations in engineering have their own organized societies, such as the American Society of Civil Engineers, the American Society of Mechanical Engineers, and the Institute of Electrical and Electronics Engineers.

However, the engineering fields distinguish between a professional engineer and what is sometimes called a graduate engineer: the professional engineer has to be licensed and must uphold professional standards or risk losing his or her license, whereas the graduate engineer does not. Graduate engineers have earned a degree from an accredited program and do work that draws on their engineering knowledge, but they have no state engineering license.11 In addition to graduate engineers, employees in companies with “engineer” in their job title need not have engineering training that would qualify them for obtaining a license; this is a result of an industrial exemption in engineering licensure, which allows the use of the term “engineer” but never “professional engineer” in such cases.12

Despite the distinction between professional engineers and graduate engineers, both groups are included in the specialized professional engineering societies. Many of these societies have codes of ethics for their members which include a responsibility to “hold paramount the safety, health and welfare of the public in the performance of their professional duties.”13 What is known as the paramountcy clause was added to engineering ethics codes in the 1970s and demonstrates how these codes and standards are constantly being renegotiated among the profession and with society. In return for upholding and respecting this social responsibility and the rest of the code of ethics, engineers are granted the privilege of autonomy and authority in engineering matters. This benefit is granted to engineers regardless of their membership in a professional society. So even graduate engineers without membership in a professional society get the benefit of calling themselves engineers and the requisite standing and authority that that title holds in society.

Michael Davis argues that this benefit morally obligates all those

___________________

11 National Society of Professional Engineers, “What Is a PE?”, available at http://www.nspe.org/Licensure/WhatisaPE/index.html; “Regulation and Licensure in Engineering,” Wikipedia, available at http://en.wikipedia.org/wiki/Professional_Engineer; Washington University in St. Louis, “Professional vs. Non-Professional Degrees,” available at http://ese.wustl.edu/undergraduateprograms/Pages/ProfessionalvsNon-ProfessionalDegrees.aspx.

12 Online Ethics Center, National Academy of Engineering, “Signing Off on Engineering Documents,” available at http://www.onlineethics.org/cms/4606.aspx.

13 Accreditation Board of Engineering and Technology, “Fundamental Canon 1,” in Code of Engineering Ethics, 1977. (Adopted by most U.S. engineering societies.)

who call themselves engineers to follow the standards and codes that help make these benefits possible. One of the most significant benefits is the backing on which to draw when standing up to a management that is placing other priorities, such as profits or expediency, above safety and reliability.

This support that professional engineering societies provide can help to protect engineers in the event they must resort to not approving a project or to whistleblowing. Such support and its limits were demonstrated in a case involving three electrical engineers working on the San Francisco Bay Area Rapid Transit (BART) system in 1971. The engineers discovered an engineering flaw in the design of the project that would have resulted in the doors of the train opening before it arrived at the station. The engineers reported their findings to a member of the BART Board of Directors and their supervisor, but no action was taken to remedy the problem. The board subsequently fired the engineers, whose findings had been reported in the local news media. The case resulted in a lawsuit in which the Institute of Electrical and Electronics Engineers (IEEE) filed a friend-of-the-court brief on the engineers’ behalf. The IEEE argued that BART had violated the employment contract with the engineers by firing them for upholding their professional code of ethics. Ultimately the case was settled out of court.

D.2.2 Science

Unlike physicians and engineers, scientists did not professionalize in the United States according to the terms described above. Instead, scientists have remained an independent group of scholars who share knowledge and academic pursuits but do not rely on professional credentials, licenses, or certification to define those who are part of the profession. In addition, scientists as scientists may not serve individual clients per se; many teach, train, and seek funding for their intellectual pursuits.

The early national science organizations in the United States, such as the American Association for the Advancement of Science, were devoted from the beginning to promoting scientific research, not to regulating the profession.14 Yet, despite the lack of formal professionalization in science, the field does share the societal grant of authority and autonomy that the medical and engineering professions have. Scientists are considered experts with the authority to determine the scientific value of research proposals and results. This autonomy and authority were granted to scientists during

___________________

14 Paul Lucier, “The Professional and the Scientist in Nineteenth-Century America,” Isis 100(4):711, 2009.

the World War II era and thereafter, as the system of federal funding of science was negotiated and established.15 A social contract indicated that scientists would receive funds from the federal government to perform basic research that might eventually benefit society. Scientists were given the authority to decide which projects to fund and which researchers were qualified, and in exchange they provided assurances to society that the research would be beneficial.

Ideas of social responsibility in science developed over the postwar period and through the 1970s and 1980s and continue to evolve today. Notions of social responsibility evolved out of scientists’ concern over the implications of their research and their desire to maintain the trust of the public and the provision of financial support for scientific research. Different fields of science developed ideas of social responsibility through different pathways and at different times. These differences and similarities provide valuable lessons on how society and the sciences interpret their relationship in response to the implications of the research they do.

For example, physicists were one of the first groups of scientists to express the view that scientists should be responsible for the social implications of their research. Their social conscience came to public attention around the time that research was conducted on the atomic bomb during World War II.

During the 1960s and 1970s biological scientists started to discuss their social responsibility as research in genetics, organ transplantation, and cellular biology began to provide an increasing capability for controlling the human body through research on manipulation of DNA and nuclear transplantation (which came to be known as cloning). An example is the previously mentioned 1975 Asilomar conference.16 Similar expressions of social responsibility also appeared in other fields. For example, the American Anthropological Association developed during the 1970s a statement of principle recognizing the special responsibilities of anthropology as a field of study.17

___________________

15 Daniel J. Kevles, The Physicists: The History of a Scientific Community in Modern America, Harvard University Press, Cambridge, 1995; Daniel J. Kevles, “The National Science Foundation and the Debate over Postwar Research Policy, 1942-1945: A Political Interpretation of Science—The Endless Frontier,” Isis 68(1; March):5-26, 1977; Steven Shapin, The Scientific Life: A Moral History of a Late Modern Vocation, University of Chicago Press, 2008.

16 Charles Weiner, “Drawing the Line in Genetic Engineering: Self-Regulation and Public Participation,” Perspectives in Biology and Medicine 44(2):208-220, 2001.

17 The American Chemical Society, American Institute of Chemists, American Society for Biochemistry and Molecular Biology, Society for Neuroscience, Ecological Society of America, and International Society of Ethnobiology, “Codes of Ethics Collection,” Center for the Study of Ethics in the Professions, Illinois Institute of Technology, available at http://ethics.iit.edu/ecodes/ethics-area/12.

In large part because scientists did not establish themselves as one profession in the traditional sense and because they did not have clients, the various fields of science did not develop codes of ethics as they organized. The lack of explicit attention to ethical concerns became an issue in the late 1970s and 1980s, when a number of scandals over the behavior of scientists brought the lack of ethical standards to the attention of the public and Congress. In response, scientists and policy makers developed expectations and regulations for proper behavior, focused on the falsification, fabrication, and plagiarism of data and research results (concerns that date back to the 19th century).18 Physicians and biomedical researchers had been regulated earlier, in the 1960s and 1970s, when a growing number of scandals over the use of human subjects were made public and legislators concluded that the medical profession’s codes were insufficient to prevent abuses.

Going beyond the regulations, professional scientific bodies adopted the standards expressed in the regulations and outlined other standards for proper behavior. Examples of such standards include those discussed in the report Responsible Science,19 plus the development by numerous scientific societies of codes of ethics. Today, many scientific professional societies have included in their codes of ethics stipulations about following federal guidelines. It is important to note, however, that these guidelines are often enforced through their attachment to federal research funding rather than through professional membership.20

D.2.3 Summary Observation on Codes of Social Responsibility

The distinction between scientists and both engineers and physicians is that scientists’ codes of ethics and social responsibility developed out of a need to renew and keep public trust as well as to maintain the social contract, whereas those of engineers and physicians grew along with the desire to maintain professionalization. Today many scientific societies

___________________

18 Paul Lucier, “The Professional and the Scientist in Nineteenth-Century America,” Isis 100(4):719, 2009.

19 National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Responsible Science, Volumes 1 and 2, National Academy Press, Washington, D.C., 1992-1993.

20 For example, the Federal Policy for the Protection of Human Subjects, or the Common Rule, was published in 1991 and codified in separate regulations by 15 federal departments and agencies. The Common Rule requires that, as a condition of receiving certain federal research funding, researchers and institutions establish institutional review boards and follow the ethical principles for research involving human subjects first laid out in the Belmont report. See http://www.hhs.gov/ohrp/humansubjects/commonrule/. The Belmont report can be found at http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.

have codes of ethics, and some include components that refer to social responsibility in science as well as standards of proper behavior and ethical guidelines for research with humans and animals.21

To the extent that scientists and engineers involved in research with potential ethical, legal, and societal implications work in private industry or are funded by grants or contracts, it is possible to tie obligations for ethical behavior or social responsibility directly to the conditions of employment or the funding agreements. These mechanisms give codes of ethics significant potential for enforcement not generally attributed to codes.

D.3 RESEARCH ON ELSI

D.3.1 Federally Supported ELSI Research

National Human Genome Research Institute

For a number of years, the National Human Genome Research Institute (NHGRI) has supported a research program in the ethical, legal, and social implications of genetic and genomic research for individuals, families, and communities.22 The individual research program solicits research projects that anticipate, analyze, and address the ethical, legal, and societal implications of the discovery of new genetic technologies and the availability and use of genetic information resulting from human genetics and genomic research.

In FY 2012, the NHGRI issued a request for applications that explicitly called for scientific proposals with an ELSI research component. That is, qualifying proposals were required to include both a biological science component and an ELSI research component, and work associated with these two components was required to be integrated. In addition, project teams had to have genuine expertise in, and experience with, ethical, legal, and societal issues in a genome research context. Successful proposals had to be at least moderately strong in both the science and

___________________

21 The American Chemical Society, American Institute of Chemists, American Society for Biochemistry and Molecular Biology, Society for Neuroscience, Ecological Society of America, and International Society of Ethnobiology, “Codes of Ethics Collection,” Center for the Study of Ethics in the Professions, Illinois Institute of Technology, available at http://ethics.iit.edu/ecodes/ethics-area/12; Society for Neuroscience, “SfN Ethics Policy,” available at http://www.sfn.org/index.aspx?pagename=guidelinesPolicies_PolicyonEthics; Ecological Society of America, “Code of Ethics,” available at http://www.esa.org/aboutesa/codeethics.php.

22 National Human Genome Research Institute, “ELSI Research Program,” 2012, available at http://www.genome.gov/10001618#al-1.

dimensions in science, engineering, and technology.”23 Currently, the NSF Science, Technology, and Society (STS) program considers research proposals focusing on ethics issues. The 2012 STS program announcement reads as follows:

STS considers proposals for scientific research into the interface between science (including engineering) or technology, and society. STS researchers use diverse methods including social science, historical, and philosophical methods. Successful proposals will be transferrable (i.e., generate results that provide insights for other scientific contexts that are suitably similar). They will produce outcomes that address pertinent problems and issues at the interface of science, technology and society, such as those having to do with practices and assumptions, ethics, values, governance, and policy.24

In the first decade of the 21st century, NSF began a second program with a focus on ethics education for graduate students in science and engineering. Housed in the same division with the STS program, it involves all of the NSF directorates. The 2011 program solicitation stated:

The Ethics Education in Science and Engineering (EESE) program funds research and educational projects that improve ethics education in all fields of science and engineering that NSF supports, with priority consideration given to interdisciplinary, inter-institutional, and international contexts. Although the primary focus is on improving ethics education for graduate students in NSF-funded fields, the proposed programs may benefit advanced undergraduates as well.25

Each of these NSF programs has received relatively few proposals focused on ethical issues in military research, development, or use of technologies. Thus, NSF has made relatively few awards in this domain.
D.3.2  Centers of ELSI Research

A number of centers for research on the ethical, legal, and societal implications of biomedical and behavioral research have been established, some with government support. For example, the NHGRI, the Department of Energy, and the National Institute of Child Health and Human

___________________

23 National Science Foundation, Societal Dimensions of Engineering, Science and Technology: Ethics and Values Studies Research on Science and Technology, Program Announcement, NSF 99-82, 1999, available at http://www.nsf.gov/pubs/1999/nsf9982/nsf9982.htm.

24 NSF, Science, Technology, and Society (STS), Program Solicitation, NSF 12-509, 2012, available at http://www.nsf.gov/pubs/2012/nsf12509/nsf12509.htm.

25 NSF, Ethics Education in Science and Engineering (EESE), Program Solicitation, NSF 11-514, 2011, available at http://www.nsf.gov/pubs/2011/nsf11514/nsf11514.htm#toc.

Development have collaborated to create interdisciplinary centers of excellence in ELSI research. These centers bring together investigators from multiple disciplines to work on ethical, legal, and societal issues related to advances in genetics and genomics. The centers also nurture the growth of the next generation of ELSI researchers working on genome research.

In a similar vein, the National Nanotechnology Initiative (NNI) mentioned in Chapter 1 seeks to “identify and manage the ethical, legal, and societal implications (ELSI) of research leading to nanotechnology-enabled products and processes.”26 Activities to do so call for “increasing the capacity of Federal agencies to identify and address ELSI issues specific to nanotechnology by fostering the development of a community of expertise on ELSI issues related to nanotechnology,” “building collaborations among the relevant communities . . . to enable prompt consideration of the potential risks and benefits of research breakthroughs and to provide perspectives on new research directions,” and “developing information resources for ethical and legal issues related to intellectual property and ethical implications of nanotechnology-based patents and trade secrets.” To pursue these activities, the NNI has established two independent centers of research on societal implications of nanotechnology research, one at Arizona State University27 and the other at the University of California, Santa Barbara.28

D.4  OVERSIGHT BODIES

D.4.1  Institutional Review Boards

Institutional review boards (IRBs) are a mechanism intended to address ELSI concerns directly related to the safety of human subjects that arise in the conduct of research (usually of a biomedical, social, or behavioral nature).
Federal law establishes IRBs at all institutions receiving direct or indirect support from the Department of Health and Human Services (DHHS) and numerous others, and requires that all federally funded research involving human subjects must be approved by the IRB before the research can begin. (Separately, many institutions have biosafety committees, radiation safety committees, and so on.) IRBs must review and renew research approvals annually; they have broad authority

___________________

26 See http://www.nano.gov/goalfourobjectives.

27 Arizona State University, “Center for Nanotechnology in Society,” available at http://cns.asu.edu/.

28 Center for Nanotechnology in Society, “Nano in Society Conference Features CNS-UCSB Researchers,” 2009, available at http://www.cns.ucsb.edu/news/nano-society-conference-features-cns-ucsb-researchers.

to review, require revisions in, or halt research that poses safety risks to human subjects, participants, researchers, and the general public, especially where participants are vulnerable populations such as children, the disabled, and so on.

The IRB system is in its most basic sense a review process that relies on the expertise of researchers and community members to protect the rights of subjects and to weigh the risks and benefits to research subjects. This is often achieved by the IRB members imagining the research through the eyes of a subject and by ensuring that the subject’s perspective is considered. At the end of the review, the IRB has the flexibility to suggest or require revisions in a research protocol, which it does more often than disapproving studies outright. Suggesting revisions to a protocol allows the IRB to serve as a collaborator in finding an ethical way for the research to proceed.29

IRBs are usually local bodies whose members are from the same institution where the research under review is to be performed. The historical reasoning for this setup was to create a localized responsibility and to allow some flexibility in response to the unique environments in which the research was being conducted. This structure means that researchers serving on an IRB are often reviewing the work of their colleagues and that members of the IRB are familiar with being on the other side of the situation. This shared group review process means that when an experiment is approved, the researchers and the IRB members share responsibility for conducting research in an ethical manner.30

IRBs have been criticized on several grounds.
For example, because IRBs are a localized and flexible review process, different IRBs examining multisite clinical research may come to different conclusions about the same research, which may lead to confusion and frustration among researchers from different institutions.31 Another criticism argues that because of the power of IRBs to control the specifics of research protocols through the rejection or acceptance of the research protocol, IRBs may create an adversarial relationship between researchers and ethicists instead of encouraging communication and collaboration.32

Others worry about the scope of IRB review. Some criticisms suggest

___________________

29 Laura Stark, Behind Closed Doors: IRBs and the Making of Ethical Research, University of Chicago Press, 2012, pp. 2-19.

30 Stark, Behind Closed Doors, 2012.

31 An accreditation process has been proposed as one way to overcome these kinds of difficulties; the Association for the Accreditation of Human Research Protection Programs, Inc., is an example.

32 Inmaculada de Melo-Martín, “Developing a Research Ethics Consultation Service to Foster Responsive and Responsible Clinical Research,” Academic Medicine 82(9):900-904, 2007.

that the scope of IRB review is too limited (e.g., IRB review does not go beyond a few specific areas of scientific research, such as research involving human subjects).33 IRBs are specifically prohibited from addressing possible societal harms.34 At the same time, other criticisms suggest that IRBs have too much power to “impos[e] increasing burdens on researchers, creat[e] bureaucratic nightmares, and otherwise hinder . . . the progress of research.”35 As argued in a University of Illinois report, “As IRBs expand their responsibilities, terminology that might have been very clear in its original context is strained or ambiguous when applied to new areas, leading to imprecision and unreasonable regulatory burden as well as inappropriate regulation and restriction.”36

Another set of criticisms suggests that IRBs today lack focus. For example, a 2003 National Research Council report37 argued that IRBs are often “overloaded and underfunded and so may not be able to adequately protect participants from harm in high-risk research, such as clinical trials of experimental drugs”; are excessively focused on “documenting consent to participate in research so as to satisfy the letter of federal requirements [rather than on] helping individuals reach an informed, voluntary decision about participation”; and have a tendency to “delay research or impair the integrity of research designs, without necessarily improving participant protection, because the type of review is not commensurate with risk.” Others argue that IRBs focus too much on protecting their respective institutions from lawsuits and bad press.38

D.4.2  Embryonic Stem Cell Research Oversight Committees

In 2004, the National Academies began a project to develop guidelines for responsible and ethical research involving human embryonic

___________________

33 Mildred K. Cho et al., “Strangers at the Benchside: Research Ethics Consultation,” American Journal of Bioethics 8(3):4-13, 2008.
34 Code of Federal Regulations, Title 45, Public Welfare, Part 46, Protection of Human Subjects, 2009.

35 See http://www.apa.org/monitor/feb06/sd.aspx.

36 C.K. Gunsalus, Edward M. Bruner, Nicholas C. Burbules, Leon Dash, Matthew Finkin, Joseph P. Goldberg, William T. Greenough, Gregory A. Miller, Michael G. Pratt, Masumi Iriye, and Deb Aronson, The Illinois White Paper: Improving the System for Protecting Human Subjects—Counteracting IRB “Mission Creep,” Sage Publications, Thousand Oaks, Calif., 2007, available at http://www.primr.org/uploadedFiles/PRIMR_Site_Home/Resource_Center/Articles/11.%20Illinois%20Whitepaper.pdf.

37 National Research Council, Protecting Participants and Facilitating Social and Behavioral Sciences Research, The National Academies Press, Washington, D.C., 2003, available at http://www.nap.edu/catalog.php?record_id=10638.

38 Steven J. Breckler, “The IRB Problem,” Monitor on Psychology 37(2):21, 2006, available at http://www.apa.org/monitor/feb06/sd.aspx.

stem cells. The final report of that project recommended that institutions involved in such research establish an embryonic stem cell research oversight (ESCRO) committee to oversee “all issues related to derivation and use of hES cell lines and to facilitate education of investigators involved in hES cell research.”39 Of particular significance is the fact that ESCRO committees were supposed to approve the scientific merit of research proposals involving hES cell lines.

According to the 2005 report, such committees should include representatives of “the public and persons with expertise in developmental biology, stem cell research, molecular biology, assisted reproduction, and ethical and legal issues in hES cell research” with “suitable scientific, medical, and ethical expertise to conduct its own review.” An ESCRO committee was not intended to be explicitly coupled to the IRB mechanism, and its responsibilities went beyond those related to human subject protections. Moreover, much of the research in question did not require IRB review.

Since the 2005 report’s publication, most institutions performing such research have in fact adopted ESCRO committees with the responsibilities described in that report. In addition, the National Institutes of Health has taken on an expanded role in overseeing hES cell research, specifically with respect to determining the particular hES cell lines that are eligible for federal research funding.
In a 2010 report based in part on a 2009 NRC-IOM workshop held to review the status of the 2005 guidelines and their implementation,40 the NRC observed that most participants in that workshop thought that ESCRO committees play “valuable roles and function in such a way that their elimination could leave gaps not filled by other oversight bodies (e.g., Institutional Review Boards, Institutional Animal Care and Use Committees, Institutional Biosafety Committees).” In addition, some stakeholders at the workshop suggested that in the future, controversies and concerns over the uses of stem cells were likely to grow relative to controversies and concerns regarding the derivation of new stem cell lines.

___________________

39 National Research Council and Institute of Medicine, Guidelines for Human Embryonic Stem Cell Research, The National Academies Press, Washington, D.C., 2005, available at https://download.nap.edu/catalog.php?record_id=11278.

40 National Research Council and Institute of Medicine, Final Report of the National Academies’ Human Embryonic Stem Cell Research Advisory Committee and 2010 Amendments to the National Academies’ Guidelines for Human Embryonic Stem Cell Research, The National Academies Press, Washington, D.C., 2010, available at http://www.nap.edu/catalog.php?record_id=12923.

D.5  ADVISORY BOARDS

Advisory boards and committees are a time-honored way to focus attention on ethical, legal, and societal issues associated with S&T. For example, the Recombinant DNA Advisory Committee (RAC) informs and advises the NIH on issues related to recombinant DNA research and reviews human gene transfer research. Established by the NIH in the 1970s, the RAC serves two functions, one as a forum for “open, public deliberation on the panoply of scientific, ethical, and legal issues raised by recombinant DNA technology and its basic and clinical research applications” and the other to review and publicly discuss on behalf of the NIH “protocols that raise novel or particularly important scientific, safety or ethical considerations.”41 It does so in part by advising the government on potentially controversial areas of genetics research as well as by reviewing novel genetics research proposals that raise new and challenging ELSI concerns.

Another example of an advisory board concerned with issues related to science and technology is the National Science Advisory Board for Biosecurity (NSABB), whose mandate is to provide “advice, guidance, and leadership regarding biosecurity oversight of . . . biological research with legitimate scientific purpose that may be misused to pose a biologic threat to public health and/or national security.”42 In this context, the ELSI concern in question is that the results of work on certain biological research may also have harmful effects on public health and/or national security.

Some boards and committees (such as the two described above) have an enduring presence regarding ethical, legal, and societal issues in a specific domain. Others issue a report on a particular topic and then move on to other areas.
An example of the latter is the Presidential Commission for the Study of Bioethical Issues (PCSBI), an advisory panel of the nation’s leaders in medicine, science, ethics, religion, law, and engineering that advises the President on bioethical issues arising from advances in biomedicine and related areas of science and technology. The PCSBI seeks to “identify and promote policies and practices that ensure scientific research, health care delivery, and technological innovation are conducted in a socially and ethically responsible manner.”43

Still another example is the “community acceptance panels” sometimes convened by the National Institute of Justice (NIJ) to gather input

___________________

41 “About Recombinant DNA Advisory Committee (RAC),” available at http://oba.od.nih.gov/rdna_rac/rac_about.html.

42 National Institutes of Health, “About NSABB,” available at http://oba.od.nih.gov/biosecurity/about_nsabb.html.

43 For more information about PCSBI, see “Presidential Commission for the Study of Bioethical Issues,” available at www.bioethics.gov.

regarding new research and development initiatives from relevant communities. For example, in 2007, the NIJ convened such a panel to discuss efforts to develop safer, more effective use-of-force options for law enforcement officers. According to the NIJ, the panel, consisting of practitioners from the medical, research, legal, and ethical communities, discussed “chemical options, the risk factors associated with their use, potential delivery mechanisms, the empirical studies available from the relevant community, and legal and ethical issues associated with these agents.”44

Advisory boards and committees can and do shed light on important ethical, legal, and societal issues. But by definition and as is true with certain other mechanisms such as ELSI research or research ethics consultation services, they have no actual decision-making authority, and the decision makers to whom they report are free to adopt, disregard, or ignore any or all of the findings, conclusions, or recommendations of these boards and committees. Further, because they often work closely with these decision makers in the course of their deliberations, the extent to which they are truly free to make independent findings, conclusions, or recommendations is sometimes questioned.

D.6  ADDITIONAL MECHANISMS

D.6.1  Research Ethics Consultation Services

Research ethics consultation services (RECS) have been established in a number of research environments to help raise awareness of issues related to the ethics of human subjects research and to assist investigators in resolving these issues.45 Using an “ELSI consultants on call” model, RECS provide real-time advice to scientists about how to recognize and address ELSI concerns in ongoing research and at the same time may lead those involved to discuss broader ethical, legal, and societal issues.
Advocates of RECS believe that their approach can better encourage communication and collaboration and create a mutually beneficial relationship between researchers and ethicists, in contrast to other mechanisms that may create more adversarial relationships.

Approaches to providing RECS vary. For example, in some cases, the personnel providing RECS are embedded with the research team and are likely regarded as collaborators in research; in other cases, they meet with

___________________

44 National Institute of Justice, “Community Acceptance Panel—Riot Control Agents,” conference, April 30, 2007, Washington, D.C., available at http://www.nij.gov/topics/technology/less-lethal/riot-control-agents.htm.

45 Mildred K. Cho et al., “Strangers at the Benchside: Research Ethics Consultation,” American Journal of Bioethics 8(3):4-13, 2008.

the research team as needed but are independent and are likely regarded as service providers. RECS can be provided by either individuals or teams, and RECS of various kinds are in use at a number of universities.

Although RECS can and do provide ELSI-related input that might not otherwise be available, they also have certain disadvantages.46 For example, training for RECS consultants has not been standardized in any way, which means that the results of consultations may vary greatly. The consulting services model can create financial conflicts of interest, given that RECS consultants could alter the advice they give in order to continue being paid, although different arrangements can be institutionalized to insulate payment mechanisms from the specific advice given.47 Embedded consultants may be co-opted by their proximity to and relationships with the researchers, losing their objectivity, whereas independent consultants may not have sufficient knowledge or a sufficient opportunity to influence the research work being performed. When individuals provide RECS, available expertise is limited to that of a single individual, and few individuals are qualified to consult comprehensively. The use of teams can overcome this problem, but cost and scheduling can be problematic.

D.6.2  Chief Privacy Officers

Privacy is widely regarded as a key ELSI concern associated with technology in many contexts. One approach to protecting the privacy of citizens and customers in the public and private sectors, respectively, is the use of chief privacy officers who have overall responsibility for such protection within a government agency or a private organization.

For example, the Department of Homeland Security (DHS) established the position of chief privacy officer (CPO) in 2002 pursuant to the Homeland Security Act of 2002.
The CPO, a senior official in the DHS hierarchy, has responsibilities “to ensure privacy and transparency in government are implemented throughout the Department.”48 More specifically, the CPO’s responsibilities include assuring that the departmental uses of technologies sustain, and do not erode, privacy protections relating to the use, collection, and disclosure of personal information; assuring that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in

___________________

46 Roberta M. Berry, Jason Borenstein, and Robert J. Butera, “Contentious Problems in Bioscience and Biotechnology: A Pilot Study of an Approach to Ethics Education,” Science and Engineering Ethics 19(2; June):653-668, 2013; Cho et al., “Strangers at the Benchside,” 2008.

47 Cho et al., “Strangers at the Benchside,” 2008.

48 U.S. Department of Homeland Security, “Authorities and Responsibilities of the Chief Privacy Officer,” available at http://www.dhs.gov/chief-privacy-officers-authorities-and-responsibilities.

the Privacy Act of 1974; evaluating legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government; and conducting a privacy impact assessment of proposed rules of the DHS on the privacy of personal information, including the type of personal information collected and the number of people affected.

Within DHS, the CPO is not expected to take an adversarial role with respect to departmental programs. Rather, the CPO’s role is a cooperative one—working with various departmental programs that may have an impact on citizen privacy to find ways of meeting program objectives without harming privacy.

Many government departments have CPOs. But a CPO is also likely to have other responsibilities, such as oversight and/or implementation of policy regarding the Freedom of Information Act. Perhaps more importantly, CPOs may be seen as serving primarily a public relations role rather than actually creating and enforcing policies that protect privacy.49

D.6.3  Environmental Assessments and Environmental Impact Statements

Under the National Environmental Policy Act (NEPA) of 1969,50 many federal projects that potentially affect the environment require an environmental assessment (EA) that provides sufficient evidence and analysis for determining whether to prepare an environmental impact statement (EIS) or a finding of no significant impact for any given project. An environmental assessment is typically a short document, at least by comparison with an environmental impact statement. If an EIS is required, an analysis is prepared that systematically addresses environmental dimensions of the project in question. An EIS must articulate the beneficial and harmful environmental impacts of a proposed action as well as alternative courses of action.
Environmental impact statements have been criticized by those who believe that they are too lenient and others who believe they are too onerous. Those who believe that EISs are too lenient argue that they are not impartial analyses but rather analyses undertaken by proponents of a project, and thus those proponents may well place their own self-interests ahead of the public interest. Much of the interested public believes, mistakenly, that EISs can mandate cessation of a project, whereas the EIS is instead a tool to provide decision makers with the information they need to make a fully informed decision. Those who believe that EISs are too onerous argue that EISs can introduce unnecessary and often significant delay into project timelines because the content of EISs can be challenged in court. Further, they argue, the significance of the environmental issues EISs address all too often pales against the economic and/or national significance of the project in question.

Sometimes, those responsible for environmental assessment and decision making also seek to involve the public in providing input to the decision-making process. As stated in a 2008 NRC report,51 many analysts have argued that broader and more direct participation of both the public and interested or affected groups in the official environmental policy processes will increase the legitimacy and the substantive quality of policy decisions.

___________________

49 Tischelle George, “Say Hello to Your Friend, the Chief Privacy Officer,” InformationWeek.com, May 14, 2001, available at http://www.informationweek.com/837/ethics_cpo.htm.

50 Environmental Protection Agency, “Environmental Assessments & Environmental Impact Statements,” available at http://www.epa.gov/reg3esd1/nepa/eis.htm.
Melnick argued in 1983 that the National Environmental Policy Act was an important reason for the increasing participation of environmental and other nontraditional groups in administrative decision making.52 Others have argued that public participation is not an unalloyed good, raising issues such as “the accountability and representativeness of self-appointed public participants, the inability of nonexpert communities to understand and process complex scientific relationships, the unlikelihood of reaching a meaningful consensus among conflicting interests, the effects of misdirected pressure to achieve consensus at the expense of achieving other important societal goals, and manipulation of outcomes either by those who frame the questions to be addressed or by those who get a ‘seat at the table.’”53

D.6.4  Drug Evaluation and Approval

The Food and Drug Administration (FDA) has long faced ELSI-related decisions having certain properties similar to those faced by military

___________________

51 National Research Council, Public Participation in Environmental Assessment and Decision Making, The National Academies Press, Washington, D.C., 2008, available at http://www.nap.edu/openbook.php?record_id=12434&page=10.

52 Thomas Sander, “Environmental Impact Statements and Their Lessons for Social Capital Analysis,” conference, Saguaro III, Indianapolis, Ind., December 7-9, 1997, available at http://www.hks.harvard.edu/saguaro/pdfs/sandereisandsklessons.pdf. This paper cites R. Shep Melnick, Regulation and the Courts: The Case of the Clean Air Act, Brookings Institution, Washington, D.C., 1983, and James P. Lester, Environmental Politics and Policy: Theories and Evidence, Duke University Press, Durham, N.C., 1995.

53 National Research Council, Public Participation in Environmental Assessment and Decision Making, 2008.

R&D: innovative products offering unique benefits and risks, proprietary information that must be protected, technical information whose evaluation requires scientific expertise, uncertainty that may be reduced by research conducted before or after product use begins, and time pressure that must be respected.

The FDA has developed procedures for addressing ethical, legal, and societal issues in drug development. These procedures are intended to have the following properties:

• Expert driven. Evidence is evaluated by scientists, looking at issues identified by officials charged with representing the public interest.
• Confidential. Experts have access to all evidence, under conditions that protect proprietary interests.
• Advisory. Experts’ assessments inform but do not bind policy makers, who must balance conflicting interests when those arise.
• Predictable. A growing legacy of decisions expressed in common terms provides developers with guidance about the eventual acceptability of products.
• Constructive. Evaluators communicate with developers early enough to incorporate ethical and social concerns in their designs and data collection.
• Timely. Evaluations face tight timelines (accelerated for products of great public interest), with documentation proceeding concurrently with development.
• Efficient. Evaluations add relatively little to overall development dollar costs, benefiting from economies of scope as issues (e.g., equity, special populations) recur in different contexts. The major costs are arguably in the time required for data collection.

The developers of individual products are not always happy with the decisions that these evaluations produce. However, the pharmaceutical industry supports the process as one that protects the industry by providing equitable standards for all products, while reducing the risk from undisciplined (or unscrupulous) developers.
Critics of the FDA process have pointed to what they regard as excessive delays in drug approval, capture of the process by pharmaceutical companies at the expense of the public interest, imposition of excessive demands for data on new drugs and devices, and improper censorship of medical claims of efficacy (e.g., those regarding supplements), among other things.