Assessment of ethical, legal, and societal issues associated with military R&D can be considered in light of the fact that many nonmilitary organizations, both public and private, have established mechanisms for attending to such issues. These mechanisms span a broad range along a number of interrelated dimensions.
For example, the degree of formality may vary. Formal mechanisms are similar to process-oriented proceedings (in some cases, they are legal proceedings) in that they are governed by specified procedures, and their operation and often their existence are backed by law and governmental power. Informal mechanisms are more akin to conversations between colleagues and friends that enlighten and provide information to those who must make decisions about ELSI concerns. Lightweight and flexible, informal mechanisms tend to have a cooperative and advisory character, and whether these characteristics are an advantage or a disadvantage often depends on the perspective of the viewer. In between are voluntary mechanisms such as government-developed guidelines that do not have the force of law or regulation but nevertheless reflect government policy decisions. For example, a research-performing institution may be required to adhere to certain research guidelines, which might touch on
ELSI concerns, developed by a particular agency as a condition of receiving funding from that agency.1
Mechanisms also differ in their degree of authority. Binding mechanisms result in rulings, decisions, and regulations to which all parties to a dispute must accede, even if some parties may dispute the particulars in any given case. Generally, rulings, decisions, contractual agreements, and regulations can be enforced by law, although there are mechanisms for court challenges. Nonbinding mechanisms are established to encourage thought and attention to various ethical, legal, and societal issues.
In general, formal and binding mechanisms are established in adversarial contexts when parties that might be critical of a decision do not trust that policy makers will take their interests into account to an adequate degree. But it can also happen that an agency forced or required by law to engage in a formal process may eventually internalize the rationale for that process.2
Another differentiating characteristic of various mechanisms for addressing ELSI concerns is the degree to which a mechanism is integrated with or operates independently of a science or technology research effort. Either approach can work well, although one may be more appropriate than the other depending on the circumstances. They can also be used in tandem.
When an ELSI effort is conducted independently of the associated R&D, it can, in the experience of some committee members, operate with more autonomy and with greater control of its resources, thus enabling the pursuit of a long-term ELSI research agenda that aligns well with institutions’ disciplinary perspectives expressed, for example, in a science-technology-society (STS) program or a public policy program. The integration of technical and STS/policy work is harder to achieve, however, when the institutions involved are separate.
One major advantage of a mechanism for addressing ELSI concerns that is integrated with R&D is the easy access to detailed knowledge of the technical work, knowledge that is often integral to the effective pursuit of STS/policy-oriented research (e.g., research involving biosecurity, biosafety, or intellectual property rights). In addition, in an integrated effort technical work can be informed by work on ethical, legal, and societal issues grounded in the social sciences. In the experience of some committee members, one disadvantage of the integrated model has been the frequent lack of adequate funding for research on ethical, legal, and societal issues and for social science research and a corresponding lack of autonomy to shape a research agenda. Integrated mechanisms are also potentially subject to a certain degree of co-optation, in which the original intent of the mechanism may be undermined to some extent by the way in which it is implemented.
1 For example, all research projects involving recombinant DNA if funded by the National Institutes of Health or if conducted at an institution receiving any NIH funding at all must comply with the NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules. See http://oba.od.nih.gov/oba/rac/Guidelines/NIH_Guidelines.htm#_Toc351276220.
2 For example, a 1979 book by Daniel A. Mazmanian and Jeanne Nienaber Clarke (Can Organizations Change?: Environmental Protection, Citizen Participation, and the Corps of Engineers, Brookings Institution, Washington, D.C., 1979) expressed optimism that the Army Corps of Engineers might be changing its decision-making processes based on what it had learned from using environmental impact statements. See http://www.hks.harvard.edu/saguaro/pdfs/sandereisandsklessons.pdf. See also http://www.mvp.usace.army.mil/docs/history/04.chaptertwo.pdf.
Different mechanisms for addressing ELSI concerns also differ substantially in their financial cost, with formal mechanisms tending to cost more than informal ones. Nevertheless, it is unrealistic to expect that addressing ethical, legal, and societal issues will be cost-free, and investments in mechanisms to address such issues may be cost-effective if they help policy makers to avoid expenses that might be incurred in the future when programmatic changes are harder and more costly to make.
Good judgment is the first and foremost mechanism for identifying problematic ethical, legal, and societal issues that may be associated with a given research project. Scientific research is supported largely on the assumption that researchers will make positive contributions to society, an assumption that posits a “floor” for ethical standards. Project proposers are expected to exercise good judgment in not submitting proposals that are unethical with respect to either the conduct of the research that would be supported or the applications that they anticipate will result from that research.
The same applies to program officials, who are expected not to approve or support projects that are unethical. Indeed, senior program officials such as agency directors—who admittedly may not know in detail of every project undertaken in their agencies—sometimes say they hope their actions and agencies are kept off the front pages of the New York Times and the Washington Post; such sentiments reflect awareness that they are accountable for projects that might cause public outrage for whatever reason (including ELSI concerns).
But these expectations for good judgment are generally not reflected in any explicit or systematic guidance to program officials or to project proposers. Thus, these individuals must rely on their own sensitivities, awareness, and knowledge of ELSI-relevant history to make such judgments or even to know that there are judgments to be made. Although it is most likely that project proposers and program officials do not believe that the proposals in question are problematic from an ELSI standpoint, they
may not have even considered the question of what ethical, legal, and societal issues could arise.
Thus, good judgment cannot be taken for granted. Indeed, good judgment needs to be fostered, developed, and reinforced. To go beyond the judgment of individual program managers and individual researchers who submit proposals, a number of mechanisms with larger scope have been used to address ELSI concerns—some apply to research, and some to actual deployments of technology.
Effective self-regulation goes beyond the judgment of individual scientists working on individual projects. Self-regulation in an ELSI context is generally understood to mean scientists themselves working deeply to understand ethical, legal, and societal issues associated with their research fields and then developing responses to these issues. An implicit goal is to create an ELSI-sensitive culture among such scientists. There are a number of successful examples of such efforts:
• The Asilomar Conference on Recombinant DNA of 1975, mentioned in Chapter 1, was convened by concerned scientists to consider the dangers of recombinant DNA research; it led to recommendations for a variety of safety guidelines for overseeing DNA-related research and for prohibiting certain kinds of experiments. This multidisciplinary conference brought together a number of scientists, health care practitioners, and lawyers. Notably, it was organized entirely at the initiative of bench scientists, without direct involvement by governmental representatives.
• In 2004, the National Academies initiated a project to develop guidelines for all human embryonic stem cell research (that is, without regard for funding source) that took both ethical and legal concerns into account.3 The covered research included the “use and derivation of new stem cell lines derived from surplus blastocysts, from blastocysts produced with donated gametes, or from blastocysts produced using nuclear transfer.” The study also considered health science policy issues related to the development and use of human embryonic stem cells for eventual therapeutic purposes. As a result of the complexity and novelty of
3 National Research Council and Institute of Medicine, Guidelines for Human Embryonic Stem Cell Research, The National Academies Press, Washington, D.C., 2005, available at https://download.nap.edu/catalog.php?record_id=11278.
many of the issues involved in this research, the report recommended that research institutions create special review bodies responsible for ensuring that all applicable regulatory requirements were met and that the research was conducted according to the report’s guidelines. This project is addressed in greater detail in Appendix D.
• An effort is underway in the synthetic biology community to incorporate social science expertise into understanding the ELSI dimensions of such research.4 On May 26, 2006, synthetic biologists issued the Declaration of the Second International Meeting on Synthetic Biology, which addressed several widespread challenges in the field, such as commercial providers accepting orders for DNA sequences that may encode hazardous biological agents.5 The declaration called for the synthetic biology community to adopt software tools and best practices for checking whether DNA sequences encode hazardous biological agents, as well as to engage in discussions with various stakeholders and policy makers to develop governance options for the community.
Some critics have argued against self-regulation. For example, Patrick Taylor argues that many efforts at self-regulation fail because of “conflicts of interest …, fragmented, disconnected oversight; and failure to embody genuine scientific and public consensus.” 6 To be credible and effective, he argues, self-regulation must be “inclusive and multidisciplinary, publicly engaged, sufficiently disinterested, [and] operationally integrated with institutional goals, and must implement a genuine consensus among scientists and the public. The mechanisms of self-regulation must be sufficiently broad in their oversight, and interconnected with other institutional forces and actors, that they do not create fragmented solutions.”
Nonetheless, self-regulation has been used with considerable success in a number of instances, although its acceptability to the community as a regulatory mechanism continues to be in question. Because self-regulation is driven by scientists themselves (and especially so when Nobel laureates and other luminaries in the field are known to be the driving forces), the recommendations of self-regulatory bodies can have considerable credibility in the scientific community and are less likely to be perceived as overbearing and excessive.
4 Lewis D. Solomon, Synthetic Biology: Science, Business, and Policy, p. 160, Transaction Publishers, New Brunswick, N.J., 2011.
6 Patrick L. Taylor, “Scientific Self-Regulation—So Good, How Can It Fail?”, Science and Engineering Ethics 15(3):395-406, 2009, available at http://www.springerlink.com/content/pnn32878785v1n33/fulltext.pdf.
As noted above, many civilian organizations have established mechanisms addressing ethical, legal, and societal concerns. Sometimes, these mechanisms address such concerns in a specific field or problem domain, such as nanotechnology or drug approval. A number of these mechanisms are described below in summary form and without references. These established mechanisms are discussed in more detail in Appendix D, which also provides references when necessary.
• DOD law-of-armed-conflict review and treaty compliance. Weapons acquired by the Department of Defense are subject to a review early in the acquisition process that determines whether the normal or expected use of the weapon is consistent with the law of armed conflict (LOAC). However, such reviews are not required to foresee or analyze all possible misuses of a weapon. R&D is also not subject to such review. Similar processes attach to efforts that might implicate obligations stemming from treaties that constrain or restrict research or development in some way.
• Codes of ethics and social responsibility in medicine, engineering, and science. Medicine, engineering, and science are fields that generally hold practitioners accountable for considering at least some of the ethical ramifications of their medical, technical, or scientific work. Professional standards and codes of ethics may be implicit rather than formally codified, and they incorporate both standards for behavior (what a responsible practitioner must do in providing services to clients) and social responsibility (e.g., a responsibility for practitioners to provide services and expertise to society in addition to those they provide to their clients; a responsibility to protect a vulnerable public from harm).
• Research on ethical, legal, and societal issues. The federal government has supported such research in the context of specific scientific efforts such as genome research and the National Nanotechnology Initiative. Through the National Science Foundation, it has also supported a research program on improving knowledge of ethical and value dimensions in science, engineering, and technology and a program focusing on ethics education for graduate students in science and engineering. Both individual ELSI investigators and ELSI research centers have been supported by various U.S. government efforts. In addition, there are some efforts to integrate ELSI research into individual proposals for certain scientific research, so that knowledge about ethical, legal, and societal issues can have an impact on how the scientific research is conducted.
• Oversight bodies. Established by federal law, institutional review boards (IRBs) address ELSI issues directly related to the safety of human subjects that arise in the conduct of research (usually of a biomedical, social, or behavioral nature). IRB approval is needed before any federally
funded research involving human subjects can begin at an affected institution. (Separately, many institutions have biosafety committees, radiation safety committees, and so on.) In addition, some institutions performing embryonic stem cell research have established oversight committees to oversee all issues related to derivation and use of human embryonic stem cells; these committees are also supposed to approve the scientific merit of research proposals.
• Advisory boards. Advisory boards and committees are a time-honored way to focus attention on ELSI issues associated with S&T. For example, the Recombinant DNA Advisory Committee informs and advises the NIH on certain ethical, legal, and societal issues related to recombinant DNA research and reviews human gene transfer research. The National Science Advisory Board for Biosecurity provides advice regarding biosecurity oversight of legitimate biological research that may be misused to pose a public health and/or national security threat. The Presidential Commission for the Study of Bioethical Issues advises the President on bioethical issues arising from advances in biomedicine and related areas of science and technology. Community acceptance panels are convened by the National Institute of Justice to gather input regarding new research and development initiatives from relevant communities.
• Research ethics consultation services. Such services have been established in a number of research environments to help raise awareness of issues related to the ethics of human subjects research and to assist investigators in resolving these issues. Using an “ELSI consultants on call” model, these services provide real-time advice to scientists about how to recognize and address ELSI concerns in ongoing research and at the same time may lead those involved to discuss broader ethical, legal, and societal issues.
• Chief privacy officers. Privacy is widely regarded as a key ELSI concern associated with technology in many contexts. Many institutions have vested responsibility for protecting the privacy of citizens and customers in the public and private sectors, respectively, in chief privacy officers (CPOs). Such officers are intended to be part of an institution’s senior management. In many institutions, the CPO does not take an adversarial role with respect to its programs, but rather works with those programs to find ways of meeting program objectives without harming privacy.
• Environmental assessments and environmental impact statements. Under federal law, certain federal projects that potentially affect the environment require an environmental assessment (EA) that provides evidence and analysis for determining whether a project has a significant environmental impact. If so, an environmental impact statement (EIS) must be prepared. An EA is typically a short document. If an EIS is required, an analysis is prepared that systematically addresses environmental dimensions of the project in question. An EIS must articulate the beneficial and
harmful environmental impacts of a proposed action as well as alternative courses of action. Public input is often sought in these processes.
• Drug evaluation and approval. The Food and Drug Administration has long faced decisions with ethical, legal, and societal issues having certain properties similar to those faced by military R&D: innovative products offering unique benefits and risks, proprietary information that must be protected, technical information whose evaluation requires scientific expertise, uncertainty that may be reduced by research conducted before or after usage begins, and time pressure that must be respected. As illustrated in Box 7.1, the FDA has developed procedures for addressing ELSI concerns in drug development that are intended to be expert driven, confidential, advisory, predictable, constructive, timely, and efficient.
DARPA acknowledges publicly that there is often a tension between research on novel technological concepts and an underdeveloped ethical, legal, and societal framework for addressing the full implications of such research, noting that “[i]f we [DARPA] do our research well, we will necessarily bump up against these concerns. Our responsibility to the defense of the Nation is such that we must thoughtfully address these issues, while simultaneously pursuing our work.”7
For example, citing privacy as an ELSI concern of the first order and recognizing the history of its own Total Information Awareness program as being at “the leading edge of the tension created between new technological approaches to addressing threats to the Nation’s security and individual privacy or civil liberties that are core values for the Nation,” DARPA has enunciated a number of principles to describe its renewed commitment to addressing privacy implications throughout an R&D program’s life cycle.8 These principles call on DARPA to do the following:
• Consistently examine the impact of its research and development on privacy.
• Responsibly analyze the privacy dimension of its ongoing research endeavors with respect to their ethical, legal, and societal implications.
• Transparently respond to the findings of its assessments of its unclassified work, and ensure independent review of its classified work, in accordance with a commitment to shared responsibility for addressing the privacy issue.
7 These principles were listed on the DARPA Web site on September 1, 2013, but the Web page has since been taken down. However, an archived version can be found at http://web.archive.org/web/20130901062709/http://www.darpa.mil/About/Initiative/DARPA%E2%80%99s_S_T_Privacy_Principles.aspx.
Box 7.1 The FDA Center for Drug Evaluation and Research
To manage ethical, legal, and societal issues associated with drug approval, the Food and Drug Administration (FDA) established the Center for Drug Evaluation and Research (CDER) to take responsibility for approving drugs.1 Its decisions determine the availability of drugs, but not their use, because the FDA does not regulate the practice of medicine. The FDA’s reviewers focus on the proposed use of a product (e.g., to treat initial infections from a disease). They may, however, note other potential uses that FDA’s decision makers may wish to consider when making the approval decision (e.g., use for repeated infections or with more vulnerable populations than those in the clinical trial). Unlike the FDA, which may be prevented from considering off-label uses of approved products, review teams for military R&D would be required under many circumstances to consider such uses.
Under the Prescription Drug User Fee Act, drug manufacturers cover the costs of the FDA’s review process. Great effort is made to ensure the independence of the review process from any sponsor influence—and to protect the confidentiality of the data that reviewers receive. The cost of reviewing a new drug is approximately $1 million, or about 0.1 percent of the investment typically made in recent years in an approved product, and hence a modest cost of doing business. Producing and summarizing the reviewed data entail activities that manufacturers would largely perform in any case, and that thus add minimal costs. The FDA’s data needs are known early enough to affect the design of the clinical trials, so as not to slow the process. The review process itself can be accelerated when the need arises.
There is reason to believe that the quality of pharmaceutical research is improved by receiving the FDA’s input during trial design and its technical review at the end. The FDA’s evidentiary needs are sufficiently standardized for firms so that the needed expertise is widely available (from inside firms or from contractors).
By imposing uniform standards, the FDA helps to level the playing field across products. It may create barriers to entry for smaller firms, unless they can team with entities having the needed risk analysis and management capabilities. The ensuing regulatory decisions are sufficiently predictable that manufacturers can often look at preliminary results from testing a product and decide whether to continue its development.
To make its decision-making process more predictable and transparent, the FDA has recently committed to producing a standard summary of the rationale for its approval decisions. (When products are not approved, no public statement is issued, allowing manufacturers to revise, or drop, projects while revealing minimal details.) That summary distinguishes between evidence and reasons for the decision. The former involves scientific results, including associated uncertainties. The latter contains the scientific opinions of expert reviewers about the implications of that evidence for the regulatory decision—recognizing that scientists’ perspectives may be valuable, even if someone else makes the approval decision.
The summary includes analyses of risks and benefits, as well as the “unmet medical need” that captures the case for innovative treatments—which may be approved even if their risk-benefit profile is no better than that of existing products. For many products, the summary concludes with a risk evaluation and mitigation strategy, with recommendations for additional measures that could increase a product’s benefits (e.g., by ensuring patient compliance), reduce its risks (e.g., by requiring pregnancy tests), or improve its evidentiary base (e.g., by having a patient registry or by conducting a postmarketing clinical trial, the details of which must be approved by the FDA as a condition of licensing).
1 For more information on CDER, see http://www.fda.gov/Drugs/ResourcesForYou/Consumers/ucm143462.htm.
To fulfill its responsibilities, DARPA has (among other things) assigned an internal privacy ombudsman to work closely with the DOD Privacy Office, and has created an independent privacy review panel to assess existing and emerging privacy laws, regulations, technologies, and norms and to analyze their potential effects. The panel is composed of leading scholars and policy and technology experts in the privacy field. In February 2011, the panel met with DARPA to discuss “the implications of privacy laws and policies on DARPA programs” and “to help DARPA create an internal privacy accountability process.”9 It is the intent that the panel’s experts will consult with individual DARPA program managers to help them address privacy concerns that arise early in a program’s life cycle and to ensure that each program’s privacy implications are understood.
A second DARPA effort has been to create an advisory committee for the Living Foundries program. As noted in Chapter 2, that committee is modeled on the privacy panel described above, and its purpose is to advise program staff on the inherent ethical and societal issues that might be raised by DARPA’s investment in synthetic biology R&D. In practice, the advisory committee (AC) has several responsibilities:
• It helps to shape broad agency announcements and requests for proposals;
• It reviews all incoming proposals and flags potential areas of concern in advance;
• It tracks research as it is conducted and flags emerging issues;
• It assesses how results should be released and publicized; and
• It assesses potential applications of research.
The AC has a number of different modes of doing its work. In 2012, it held an initial day-long meeting to orient DARPA program officials to ethical, legal, and societal issues related to synthetic biology. The AC will also engage with program managers directly, one-on-one, and in retreats with research performers. Feedback will be provided from the AC to the DARPA director through the program manager and directly to research performers. AC members are encouraged to discuss their work with anyone they choose, whether in or out of DARPA.
There are no predetermined processes in place for handling problematic ELSI concerns that are identified through the AC. DARPA does not expect that a single process will be applicable to all issues, and significant variations from case to case and situation to situation seem likely.
DARPA has also established a working group in cooperation with the National Science Foundation to address the ethical, legal, and societal implications of personally identifiable information during the R&D activities it supports. This activity is strongly influenced by the unique national security concerns associated with operational security and the need to protect sources and methods.
All of the mechanisms described above speak to some of the ethical, legal, and societal issues in some S&T research and development efforts to some degree. How, if at all, any of these mechanisms might be useful for addressing ethical, legal, and societal issues associated with R&D in a military context is the question that this section explores.
The above discussion is a point of departure for characterizing the attributes of a process for addressing ELSI concerns related to R&D with military relevance. Abstracting from this discussion, the following attributes seem relevant:
• Awareness. Most of the mechanisms described above are predicated on the awareness of the scientists and engineers engaged in an R&D effort. These individuals have a significant stake in how problematic ELSI concerns are resolved, because they may have to revisit and modify or curtail some of their technical efforts to overcome or resolve the issues. Communication and analysis of ethical, legal, and societal issues are a key part of this process. Such communication enlightens and also serves as a statement of values by the entity conducting the analysis.
• Accountability and responsibility. Mechanisms such as IRBs, chief privacy officers, and the formal LOAC review of weapons prior to procurement acknowledge the need for accountability in discussions of ELSI-related matters. These efforts combine program responsibilities with functional responsibilities.10 Personnel working on an R&D effort thus have loyalties to the project (they are committed to making the project work), and they also have responsibilities for exercising and deploying their skill sets as well as they can. In large organizations, personnel are accountable both to the project managers and to their functional management. In small organizations, project management and functional management may be combined in the same person(s).
• Expertise. Some of the mechanisms described above (e.g., IRBs, advisory boards, interdisciplinary ELSI research, research ethics consultation services) are predicated on the idea that addressing ethical, legal, and societal issues requires deep and serious expertise both from the scientific disciplines involved and from specialists in ethics, law, and the social sciences. Furthermore, such expertise must be available both to program officials (who decide on the scope and nature of the support that they will provide to any given R&D project) and to project personnel (who will execute the project, presumably within the parameters specified by program officials).
• Access to relevant scientific and technical information. One of the fundamental rationales for interdisciplinary work is that knowledge from one discipline can prompt and facilitate insight and analysis by another—and barriers to passing such information between researchers inhibit such analysis. ELSI research and discussions of ELSI concerns are no exceptions to this rationale. Analysis of ethical, legal, and societal issues can make greater progress when scientific and technical information passes freely between ELSI researchers and the R&D researchers, and the same is true for the ELSI information.
• Time. All of the mechanisms above call for the expenditure of some amount of time. In some cases, the calendar time needed for invocation of any of these mechanisms can be reduced by operating the mechanism in parallel with the scientific work. But in those instances where the mechanism serves as a gateway to future work, there is much potential for delay.
10 A program or project typically has budget, performance, and schedule goals that project managers are accountable for meeting. That is, a project promises to achieve certain goals (performance) within a certain time frame (schedule) and a certain budget. Functional responsibilities are the skill sets that are necessary to reach these goals. Functional responsibilities include a technical skill set (e.g., engineering), but also may include skill sets related to legal and regulatory matters, human resources, finance, and so on.
• Variety in perspectives. A number of the mechanisms described above (e.g., environmental impact statements, research ethics consultation services, advisory groups) are based on the idea that taking input from a broader range of perspectives (especially perspectives that are not necessarily similar to those of the scientific researchers) will surface issues specific to a particular project or program that those involved in the program might not have considered otherwise. In addition to the mechanisms described above, the DOD R&D community has a tradition of red-team analysis to find technical and operational weaknesses in proposed acquisition projects—an approach that could be adapted specifically for raising ELSI concerns underlying a given research direction. Insiders who see that certain ethical issues are being ignored, and others who are not associated with or advocates for particular projects, are also sources of insight.
• Comprehensiveness. The mechanisms discussed above focus on different kinds of ethical, legal, and societal issues—those related to the environment or human subjects or specific technologies, for example. Thus, with the application of any one such mechanism, important ELSI concerns—even those that may have been known in advance or anticipated—may go unaddressed simply because there is no comprehensive mechanism in place for addressing a range of such issues.
• Cooperation. The mechanisms described above work best when project and program managers can address ELSI concerns in a cooperative manner early enough to affect the way a project or program is laid out, that is, before addressing ELSI concerns becomes very expensive either in time or financial resources.
Depending on their goals, policy makers will have to decide how far to go with respect to any of these attributes in designing an approach for addressing ethical, legal, and societal issues in the context of military R&D.
In any event, the approach will have to include a process for identifying and assessing ELSI concerns at the outset of an R&D project and also a process for monitoring and assessing the subsequent emergence of such issues throughout the project’s timeline. Both in-house expertise and external expertise with ethical, legal, and societal issues in the context of military R&D are necessary for these processes to work well. Appropriate public engagement to identify issues and to build legitimacy for a particular R&D project is necessary as well.
The FDA process described in the section on drug evaluation and approval in Appendix D has some of the elements outlined in the bulleted list of attributes above, and is thus suggestive of a point of departure for a model that fits the conditions of certain kinds of military R&D under
some circumstances. The subject-matter expertise and deciding authorities will be very different, as might some of the ethical and social issues. However, a credible, workable system for evaluation of military R&D would have to have many of the attributes described in the bulleted list above.
It is very important that an approach for addressing ELSI concerns for military R&D take into account the special characteristics of the military environment described in Chapter 1. To defend the nation and its interests, the United States develops some military technologies and applications for use as weapons, and weapons are designed to cause harm, possibly extensive, to people (specifically, combatants) and to property (specifically, property with military purposes). That such development can be ethical is therefore a fundamental premise of such work. Thus, a chosen approach to addressing ethical, legal, and societal issues for military R&D must maintain control over processes for receiving input from individuals who do not share this premise or who are unwilling to set aside debate over it.
In addition, an approach for addressing ELSI concerns with R&D of military relevance must be capable of accommodating the classified dimensions of military research. Although classification does limit the number of individuals who can participate in any kind of ELSI review, the fact that a program is classified is not ipso facto a valid reason for asserting the impossibility of conducting a useful review. One major reason is that the ELSI dimensions of a project can often be discussed without referring to the parts of a project that involve classified information. A second reason is that a significant breadth of input can be gathered by using cleared individuals not formally associated with a given project.
It is also noteworthy that some of the issues raised by research classified for national security purposes also occur in considering certain kinds of civilian research and development. In particular, many industrial research labs operate with as high a level of secrecy as they can manage for obvious commercial reasons. Thus, under some circumstances, it is possible that experience with handling ELSI considerations in a quasi-classified civilian environment might have some relevance to handling such considerations for classified research.
Finally, urgent military needs sometimes emerge under the pressure of operations (e.g., new adversary weapons or tactics), and R&D may be needed on a time scale that does not allow ELSI concerns to be fully considered or accommodated before the technical work on a specific application is completed. Three observations are relevant here. First, it is not necessary to handle all relevant ELSI concerns as “gateway” issues—and to the extent that they can be handled in parallel, they need not necessarily add calendar time to a project timeline. Second, such time pressures
are usually not relevant to research aimed at advancing foundational or enabling technologies; rather, they emerge primarily in the context of specific applications to address urgent needs. Third, nothing in the discussion above limits consideration of ethical, legal, and societal issues after a new application is deployed for use, and indeed policy makers should be prepared for the possibility that actual operational use of a given application will raise ethical, legal, and societal issues that they will have to address.
The recommendations in Chapter 8 elaborate one version of the approach suggested above.