Chapter 1 of this report describes the approach of the United States to national security as one that emphasizes technologically derived qualitative advantages over its adversaries and the centrality of technology development to its national security efforts. This emphasis drives the U.S. Department of Defense and a variety of other agencies with national security responsibilities to invest heavily in activities that promote the development of technology with applications to military and other national security needs.
The emergence of new technologies often raises ethical, legal, and societal issues. Sometimes, these issues are new; other times, they are familiar but must be reexamined in the light of a new technological milieu or societal sensitivities that may not have been present when these ethical, legal, and societal concerns first appeared.
Although substantial work has been done over the past few decades to explore ELSI implications of new technologies, such work has been done largely in a civilian context. This report explores the ELSI implications of emerging and readily available technologies (ERA technologies) in a military context, and suggests the possibility that some of the ELSI understandings formulated in the context of civilian applications may need to be modified or extended when cast against a military or other national security backdrop.
Chapters 2 and 3 describe how advances in science and technology enable progress and applications in a variety of application domains; an application domain is associated with a set of specific operational military problems, the solutions to which may draw on many different technologies. Chapter 4 describes sources of ELSI insight, including a variety of theoretical and disciplinary approaches to ethics; international law; and insights from social sciences such as anthropology and psychology.
Building on the examples offered in Chapters 2 and 3 and informed by an understanding of sources of ELSI insight outlined in Chapter 4, Chapter 5 develops an analytical framework for systematically identifying and assessing ethical, legal, and societal issues that may arise with ERA technologies for military and national security purposes.
Chapter 6 focuses on the fact that foresight regarding the direction and outcomes of technology development is never entirely complete or accurate, and it describes a variety of approaches that can be used to help compensate for such fallibilities in anticipating ethical, legal, and societal issues associated with ERA technologies.
Chapter 7 identifies a variety of mechanisms that have been used in a civilian context to address ethical, legal, and societal issues and also describes some approaches to addressing such issues in the context of novel technological developments with military applications.
In the review process for this report, a number of reviewers suggested that the framework, findings, and recommendations offered in the report apply across the board to essentially all science and technology research of military significance, not just research on technologies that are emerging and readily available. The committee examined only ERA technologies, and thus declines to assert the relevance of this report so broadly, but the committee would be gratified if the discussion in this report turns out to be relevant to non-ERA technologies as well.
Finding 1: Some developments in emerging and readily available technologies in a military context are likely to raise complex ethical, legal, and societal issues, some of which are different from those associated with similar technologies in a civilian context.
The history of science and technology (S&T) shows that S&T developments have always raised ethical, legal, and societal issues to one degree or another. But as noted in Chapter 1, the foundational technologies of interest for this report are associated with a high degree of uncertainty about their future trajectories and what the useful applications of these technologies will turn out to be. A broad range of uncertainty in technology development suggests a correspondingly broad range in the ethical, legal, and societal issues that are likely to emerge—and inevitably some of these issues will be thorny and complex.
Previous work on ethical, legal, and societal issues in the civilian S&T context provides a valuable point of departure for any effort to examine such issues in a military context. But essential differences between civilian and military contexts must be taken into account.
For example, new technologies with military application can confound the conceptual basis on which ethical norms are founded. Consider the connection between S&T and principles of “avoiding unnecessary harm.” Ethics is in part about the avoidance of harm, and in a civilian context, science and technology researchers do not ordinarily seek to enable or develop applications that would be harmful to people or damaging to property. But in some military contexts, these are explicitly the goals—and presumptions about avoiding harm in civilian technology development give way to notions of avoiding unnecessary harm in the development of certain military technologies.
New science and technology also open new areas in which the concept of harm may operate. The first Hague Convention on the laws of war was formulated in 1899, and at that time, the notion of harm did not—indeed could not—acknowledge notions of harm to an individual’s genome or harm caused by radiation. Today, information technology and cyber weapons offer possibilities for harming individuals and societies without death or destruction; economic harm and social harm are two possible outcomes of cyber conflict. Physical proximity as an indicator of risk for harm (e.g., a civilian’s distance from a military target, such as a munitions factory) is not particularly relevant when cyber weapons are considered.
A related point is that new technologies with military application may well generate ELSI controversy even if the ELSI concerns are in some sense not new. A new technology often provides new ways of accomplishing certain military tasks, and the new ways as well as the tasks themselves may create controversy or need further ELSI-related scrutiny. Over time, a consensus may develop regarding these ethical, legal, and societal issues.
If such a consensus does not develop, the controversy may fade from public view or continue with high public visibility. When a controversial ELSI concern remains unsettled, the (actual or potential) use of a new technology to accomplish military tasks may well re-open the debate, or at least put a new spotlight of public attention on it. From the standpoint of public understanding and accountability, the assertion that the ethical, legal, and societal issues themselves are not new can only be a starting point for exploring the ELSI ramifications of new technologies. In a new and different context, a new technology may change the weightings of different factors that need to be taken into account, or even their salience or relevance to the ELSI concerns in question.
As for the “readily available” aspect of ERA technologies, the most significant impact on ethical, legal, and societal issues arises from the fact that more parties with access to such technologies have a greater collective ability to create new applications, and the larger set of applications thus made possible expands the scope of ELSI concerns that could arise. In addition, ERA technology characteristics such as rapid change and low barriers to entry may have ELSI implications in and of themselves.
Finding 2: Sustainable policy—policy whose goals and conduct can be supported over the long run—regarding science and technology requires decision makers to attend to the ELSI aspects of the S&T involved.
Why should ethical, legal, and societal issues be addressed at all? One obvious answer is normative: as a nation, we wish to conduct ourselves and our activities in an ethically defensible manner and for ethically supportable purposes. But a more practical answer is found in the idea that policy makers want the policies that they formulate to be sustainable over the long run. High-quality science is one of the more important and obvious factors that contribute to the success of any particular S&T effort. But inattention to ELSI aspects of an R&D endeavor can undermine even scientifically sound R&D efforts and call into question policy decisions that led to those efforts, regardless of the initial intent underlying those original decisions.
One illustration comes from DARPA’s own history: the Policy Analysis Market. As noted in Chapter 6 (Box 6.1), the goal of that project was to develop a new technique for predicting political events based on a futures market—a technique with some support in the scientific literature. But the undertaking ran afoul of public concerns regarding the ethics of a project that might give individuals incentives to conduct terrorist activities, even if such incentives were in some absolute sense minimal. The methodology—arguably a promising one in the appropriate context—was not as thoroughly explored as it might have been, and the project was canceled.
Finding 3: Public reaction to a given science or technology effort or application is sometimes an important influence on the degree of support it receives.
A public perception that an R&D project is unethical may undermine support for it, even if the project is technically sound. A lack of support may manifest itself through adverse journalistic and editorial treatment, greater political scrutiny, reduced budgets (especially in a time of constrained finances), additional restrictions on research, and so on. On the other hand, a positive perception regarding the ethics of an R&D project may enhance public support for pursuit of that science or technology, irrespective of the scientific or technical basis for such pursuit.
Finding 4: The ethical, legal, and societal issues of concern that may be associated with a given technology development are very hard to anticipate accurately at the start of that development.
The discussion above implies that decision makers must exercise a kind of due diligence in identifying and assessing the ethical, legal, and societal issues associated with the R&D they support.
Issues that may arise along a known technological path can emerge at any point in the R&D process and indeed may do so on a very short time scale. The salience of such issues can also be amplified through multiple channels (e.g., social media). On a longer time scale, ethical concerns and issues often change as technology evolves and matures, and as society becomes more familiar with the technology.
In addition, overly optimistic technological forecasts made by interested parties about possible applications of a given S&T base can distort the decision-making calculus and interfere with a fair weighing of the pros and cons of pursuing a given line of research. This distortion may be especially problematic among decision makers who are unable to critically evaluate technological feasibility.
Ethical, legal, and societal issues may also arise along as-yet-unknown technological paths. If a path is not known, it will be very hard to undertake a meaningful assessment of the ethical, legal, and societal issues associated with that path. As the discussion in Chapter 6 indicates, prognosticators do not have a good track record in forecasting technology outcomes.
This is especially true when the technologies in question are foundational and worthy of basic research. Although nearly all such research supported by the government in a military context inevitably has an arguable (if speculative) nexus with military applications, proposals to the DOD for basic research do not generally mention specific applications (military or otherwise) that such research might support. But funding agencies make decisions about specific proposals based on the likelihood that such research will in fact advance the science in which they are interested, and these agencies are interested in developing the science base to (eventually) support a variety of as-yet-unknown applications useful to DOD missions.
To increase the likelihood that relevant ELSI concerns will be revealed, responsible parties would do well to consult a diversity of sources with different intellectual and political perspectives.
Finding 5: Any approach to promote consideration of ethical, legal, and societal issues in R&D of military significance will have to address how such plans are implemented at both the program and the project levels.
Policies and plans intended to promote consideration of ethical, legal, and societal issues in R&D do not by themselves ensure that any given implementation of such policies will actually address such issues in the manner intended by the originators of such plans. Implementation is critical to the success of any policy or plan, and controversy and concern can easily be fueled by inadequate attention to detail and implementational oversights as well as by the inadequacy or absence of a high-level plan to address relevant issues.
For example, it is not without precedent in large organizations that well-intentioned policies promulgated by senior management are ultimately implemented as bureaucratic checklists and mindless procedures that emphasize the letter of the policies rather than their spirit. The intent of this committee’s findings and recommendations is not to impose undue compliance requirements on program managers or agencies, but rather to help well-meaning program managers in these agencies to do their jobs more effectively and to help ensure that basic American ethical values (such as those embodied in the U.S. Constitution’s Bill of Rights) are not compromised. The use of common sense, judgment, and understanding of the fundamental intent of policies to address ethical, legal, and societal issues—not simply formal compliance—is the goal and is an important foundation for developing an ELSI-sensitive culture. Accordingly, the committee believes that policies originated by an agency’s senior management to address ethical, legal, and societal issues systematically should have a light footprint when they are implemented by program managers.
The committee also suspects that if an agency’s culture routinely addresses ELSI concerns, the additional work required to address ELSI matters on any individual project will be small. That is, the cost of putting into place the necessary processes and procedures to address the first R&D projects to be assessed for ELSI significance is likely to be at least partially amortizable over succeeding projects subject to the same processes and procedures, and a new project addressing approximately the same problem domains might require only incremental work.
The committee recognizes that having to grapple with ELSI issues may well complicate the conduct of a given S&T research project in the short term. After all, R&D generally requires a great deal of attention focused on the science and nothing but the science. On the other hand, in the long term, addressing such issues may well be necessary for sustaining support for projects.
Consideration of ethical issues may also improve the quality of the research by pointing to other overlooked problems in the research or opportunities for improvement in the science or technology to be pursued. For example, an ethical objection to some proposed research may be based on possible harm to people resulting from that research. A scientific exploration of the mechanisms underlying that possible harm may generate additional information that could help put such fears to rest as well as make the overall research more complete from a science point of view. Indeed, critics who raise ethical objections often do so in part because they have a different perspective and ask questions different from those asked by advocates of such research.
The history of the FDA approval process for drugs to treat AIDS is an example. In the late 1980s, a variety of AIDS advocacy groups argued that the timeline for delivering promising drugs for AIDS treatment was simply too long, and that on ELSI grounds, that timeline should be accelerated. Their arguments were ultimately successful, and the FDA adopted an approval standard for certain drugs based on a risk-benefit calculus rather than on the traditional criteria of being shown to be “safe and effective.” 1
Chapters 1 through 5 point out ways in which developments in ERA technologies in a military context may end up raising significant ethical, legal, and societal issues. Such issues can raise a variety of problems, both when these technologies are used as intended and in their unintended applications. The results may include public outrage, negative political effects, or problems with internal morale, not to mention negative consequences for society as a whole.
Agencies sponsoring research have an obligation to the people they serve at least to assess and to consider the possible negative effects of that research on individuals and society. Advance consideration of those issues should be an important task for agencies that fund such research.
1 Harold Edgar and David Rothman, “New Rules for New Drugs: The Challenge of AIDS to the Regulatory Process,” Milbank Quarterly 68(Supplement 1): 111-142, 1990.
Agencies also have a self-interest in assessing these implications. Research that raises complex ethical and social issues can harm the sponsoring agency—and that can be true whether or not the research actually leads to ELSI-related problems, or even whether that research is carried out at all. Ethical and social issues are much more than public relations problems, but they also definitely are public relations problems. Even if an agency were concerned solely with its own future, and not with the broader consequences of its actions, it would still have to worry about the ethical and social implications of its work.
As a result, both for the public interest and in its own self-interest, any agency funding research that is likely to raise complex ethical, legal, and societal issues should have in place processes to identify, assess, and monitor those issues. Any such processes will require the agency to create the capacity to operate the processes. Exactly what processes and what capabilities a particular agency needs will depend on the agency and its research. Nevertheless, there are some common features of any useful institutional response to these kinds of challenges.
For example, the committee believes that a mix of centralized management attention to ethical, legal, and societal issues and continuing responsibility for ELSI concerns distributed among program managers will be needed. But nothing in this notion necessarily implies that extra layers of management for formal review of ethical, legal, and societal issues should be required. Instead, the committee was guided by the principle that review processes should be as lightweight as possible, consistent with focusing necessary agency attention on ELSI concerns.
To implement useful mechanisms for addressing ELSI concerns in the context of military R&D, agencies supporting research of potential military value need to take action. The findings above help to shape the committee’s four recommendations to agencies that support R&D on emerging and readily available technologies of military significance and that are interested in addressing ethical, legal, and societal issues inherent in their R&D portfolios. (In the recommendations below, the term “interested agency” is used to mean agencies interested in addressing ELSI concerns inherent in their R&D portfolios. In this context, an “agency” could also include a coordination office for R&D efforts across multiple agencies, such as the Networking and Information Technology Research and Development (NITRD) coordination office.)
Recommendation 1: The senior leaders of interested agencies that support R&D on emerging and readily available technologies of military significance should be engaged with ethical, legal, and societal issues in an ongoing manner and declare publicly that they are concerned with such issues. Such a public declaration should include a designation of functional accountability for ethical, legal, and societal issues within their agencies.
An agency’s senior leadership has a critical ongoing role to play in ensuring that ELSI concerns are an important consideration for the R&D it supports. High-level support from senior agency leadership is required if an agency is to seriously address ethical, legal, and societal issues associated with the research it funds. Such support must be visible and sustained over time: in its absence, little will happen. An agency’s senior leadership sets the tone by publicly communicating to the organization and its stakeholders its values and their rationale. In general, a public declaration would include a statement about the importance of addressing ethical, legal, and societal issues, the willingness of the agency to learn from outside perspectives, and the intent of the ELSI-related processes. In the long run, these are key elements in creating an institutional culture that is sensitive to ELSI concerns.
Furthermore, statements of public support need to be repeated periodically, to remind experienced program managers of the importance that the agency places on the subject and to introduce new program managers to that priority. Presenting such statements at events involving the research community (e.g., professional meetings, proposers’ days) will also help to inform researchers of an agency’s ELSI concerns.
The recommendations that follow provide an approach for dealing with these kinds of issues, but recommendations are not self-implementing. Even the adoption of some form of this committee’s recommendations would not necessarily mean that they had been implemented, let alone implemented effectively. Organizations implement measures effectively when the people who make up those organizations believe that the measures are important. It is crucial, therefore, that an agency understand why the assessment of ethical, legal, and societal issues is important, from the top of the agency down through its ranks.
Of course, public declarations are by themselves insufficient to drive an agency’s program managers to attend to ELSI concerns that may be inherent in the R&D projects they support. The fact that the agency’s leadership thinks something is important may ensure that the staff will pay it lip service. To get more than lip service will often require more than a mandate from above.
To maximize the likelihood that ethical, legal, and societal issues will be addressed, an agency’s senior leadership should designate a point of functional accountability for this responsibility. The rationale for ensuring such accountability arises from the complexity of an agency’s operating environment. In the private sector, high-consequence businesses are characterized by an environment where hundreds of people engaged in an effort make thousands of decisions, and one person making one mistake that goes undetected and uncorrected can cause unacceptable outcomes, such as loss of human life or enormous financial losses.
The primary responsibility for preventing such outcomes rests with the team executing the program. However, management often assigns functional organizations to provide oversight as a secondary line of defense against unacceptable outcomes. Functional managers also have ultimate responsibility as points of contact for anyone within their agencies with concerns about functional matters—in principle, anyone with a financial concern can bring that concern to the attention of the chief financial officer, anyone with a legal concern can bring that concern to the attention of the general counsel, and so on.
Internal functional organizations such as “Engineering,” “Quality Assurance,” and “Mission Success” assign people to the project team who report both up the reporting chain of the project line management and to the relevant functional manager, e.g., the VP of Engineering or the VP for Quality. Sometimes this approach is referred to as “two to hire, one to fire.” To assign someone to a project, both the project manager and the functional manager must agree on the selection. Either can remove the individual if reporting accountabilities are not met.
These individuals with two reporting lines have dual accountabilities: they are accountable for supporting the project team in achieving cost/schedule and financial objectives, and they are accountable in their functional reporting chain for ensuring that programs do not take unacceptable risks in their functional areas. For example, those from Engineering ensure that the engineering is done properly, using the established processes and tools approved by the functional organization. They are expected to “blow the whistle” to their functional management line if questionable engineering or quality practices are used by the project team, and they also serve as points of contact if project staff come across problematic issues to which the line program management is not responsive.
The functional management line is responsible for ensuring that the people it deploys to projects are accountable and satisfy their responsibilities. In safety and reliability engineering, for example, most lapses result from people not doing what the organization is relying on them to do. The result can sometimes be a multibillion-dollar disaster in which someone on the project team made a mistake (e.g., a typing error in input data to a launch vehicle) that was not caught by the several layers of project people and functionally deployed people who were accountable for checking and correcting such mistakes and who each failed to be accountable and to satisfy their responsibilities.
Risks from unaddressed ELSI concerns may, or may not, be less consequential. The concept of holding functional people accountable is the same, and the intent is that when those accountabilities clearly include appropriate consideration of ethical, legal, and societal issues arising from research, such issues are more likely to be considered. But the committee notes that there are many ways to create and maintain such accountability, and does not think that any one way is necessarily best.
Last, an agency should subject all R&D projects carried out using agency resources to a screening to identify plausible ethical, legal, and societal issues that they might entail. This implies that agency staff must not be allowed to carry out R&D projects “off the books,” that is, to conduct projects without the knowledge of the senior agency management responsible for attending to ELSI concerns.
Recommendation 2: Interested agencies that support R&D on emerging and readily available technologies of military significance should develop and deploy five specific processes to enable these agencies to consider ethical, legal, and societal issues associated with their research portfolios: (a) initial screening of all proposed R&D projects for ELSI concerns, (b) review of projects that do raise such concerns, (c) monitoring of projects as they proceed for the emergence of unanticipated ELSI concerns and to make periodic midcourse corrections to the research when necessary, (d) engagement with various segments of the public as needed, and (e) periodic review of ELSI-related processes in the agency.
2.a–Initial screening of proposed R&D projects
Before supporting a project in a particular area of S&T research, agencies should conduct a preliminary assessment to identify ethical, legal, and societal issues that the research might raise. Both the sponsoring agency and project managers would have responsibilities for identifying, if not resolving, ethical issues that they believe might attend the effort in question. The agency should require those seeking research funding to identify in their proposals the plausible ELSI concerns that they believe their research might raise. Using such information as a starting point, the funding agency should then make its own assessment about the existence and extent of such issues. Note that this initial assessment should be carried out for all R&D projects (both classified and unclassified).
At this stage, the goal is to identify whether the proposed research would raise significant ELSI concerns that require further consideration.
In most cases, the result of an initial screening will be “no, the project raises no new issues that have not been thoroughly explored before,” and assessment of the proposed research will proceed without any necessary further consideration of ethical, legal, and societal issues. This procedure is not intended to assess the significance of the issues or the agency’s response to them. It is intended solely to differentiate between research proposals that are explicitly determined to raise no new ELSI concerns and those that do—those in the former category should not be subject to further ELSI review in this phase.
How this identification process should be performed would surely vary with the setting. Depending on the size of the agency, the number of research proposals it handles, and the nature of that research (research on cosmology, for example, may raise fewer ethical and societal issues than research on specific weapons applications), the identification process might be performed by one employee as a part-time effort or may require a committee. It might be formal; it might be informal. The point is that it has to be done—and those who do it must have both enough knowledge of the underlying technology and enough familiarity with the kinds of ELSI concerns that are likely to arise to ensure that they can make sufficiently accurate decisions.
A systematic methodology is useful for identifying ethical, legal, and societal issues related to R&D. One such methodology is the framework described in Chapter 5, which can serve at least as a point of departure. Of course, no human decisions are completely accurate. The history described in Chapter 6 suggests that despite the best efforts of analysts to identify ethical issues that might arise in the course of an R&D effort, those efforts will be at best only partially successful, and that ethical issues are likely to arise or become important that were not predicted despite initial best efforts to do so.
A false positive, involving the identification of an ELSI concern that a proposal in fact does not raise, may be corrected in the next step, namely, the assessment process discussed in Recommendation 2.b. A false negative, involving the failure to identify an ELSI concern that a project in fact does raise, would need correction only if the research proposal is actually funded; the monitoring process discussed below in Recommendation 2.c is intended to help catch those false negatives.
2.b–Reviewing proposals that raise ELSI concerns
Once an agency has identified research proposals or projects that may raise complex ethical, legal, and societal issues, it needs to decide how to proceed. That requires some closer scrutiny of those issues, including asking how likely they are to arise, how serious they are likely to be, and whether there are ways to mitigate them.
This is, in essence, a risk assessment exercise, one that looks at the ELSI risks posed by the research in question. If and when such issues are identified, program managers should have the opportunity to take action in response to such issues. (Of course, program managers are themselves subject to higher authorities, and the latter may take action as well.) Possible responses include not pursuing a given R&D effort, pursuing it more slowly, pursuing it in a modified form that mitigates the ethical or societal concerns, pursuing the original effort but also pursuing research to better understand the ethical or societal impacts, and so on. The responses should not be limited simply to a decision to proceed or not to proceed.
The method by which an agency conducts assessments of proposed R&D may vary. In some cases, several people may need to be involved in order to provide different perspectives—for example, someone from the agency’s communication group or its legal counsel might make useful contributions to understanding the ethical and societal implications of the research. It may also be useful to bring in voices from outside the agency, such as experts in the technology, experts in the particular ethical, legal, and societal issues, or representatives of the groups that might be affected by the issues. All such possibilities are based on the idea that engagement with a variety of different intellectual and political perspectives increases the likelihood that relevant ELSI concerns will be revealed. Furthermore, because consideration of ethical, legal, and societal issues is fraught with fundamental questions of inclusivity and trust (e.g., whose opinions and ethical standards should be taken into consideration?), casting a broad net may forestall politically powerful complaints later about a lack of inclusivity.
It should be expected that the initial assessment of a proposed R&D project will not be correct in all respects. What, then, is the value of an initial assessment if it cannot be expected to predict the ethical, legal, and societal issues that are likely to arise? It is often said that no battle plan survives first contact with the enemy, but no commander believes that this undeniable reality obviates the need for planning battles. The very effort of planning assembles resources that are likely to be helpful in a battle, even if how and when such resources are used may be very different from what the original plan specified. In addition, the initial assessment is a concrete point of departure for evolving an approach to handling ELSI issues as circumstances change. Similar observations hold for an initial assessment of ethical, legal, and societal issues related to R&D on ERA technologies.
The process described here seems likely to call for a committee, but other methods may well be possible or better in some circumstances. The key is to have people with relevant knowledge look at the implications and decide what should be done. The answer may be “nothing” because the issues involved are seen, on closer examination, to be minor or nonexistent. It may be to flag the research for decision by higher authorities in the agency. Or it could be anything in between.
One hard question about the process of proposal review for ELSI concerns is how it should interact with the process of making decisions on research projects or proposals. Should it take place before a funding decision, as part of the decision, or after the (initial) decision? Again, the committee believes that different models will be appropriate for different agencies and/or different research portfolios and volumes of research, even within a single agency. It is important, though, that the assessment be able to feed back into the research proposal, because one result of the assessment process may be a recommendation that the research be modified to mitigate some of the ethical and societal concerns identified.
2.c–Monitoring R&D projects for the emergence of ethical, legal, and societal issues and making midcourse corrections when necessary
Perfect prediction of significant ELSI concerns is virtually impossible, especially in an area as fraught with uncertainty as research on emerging and readily available technologies. Projects that seemed to raise substantial ethical, legal, and societal issues may turn out to raise none; projects that seemed to have no ethical or societal implications may turn out to have hugely important consequences.
A process for monitoring the course of R&D projects is thus essential to help agencies adjust to such changing realities. Perceived ELSI concerns may change significantly during the course of a project: new issues may be identified, previous attempts to address already-identified issues may prove inadequate, or public perceptions may change even if the issues themselves have not. A twofold monitoring strategy would give an agency the flexibility to respond to such changes.
First, for projects for which the assessment process discussed in Recommendation 2.b did identify issues that required attention, that process should be repeated periodically during the life of the research. Such periodic assessment will enable an agency to see whether the research project needs fewer, more, or different methods to deal with those issues. It would also allow a decision as to whether the research, as it has developed, has surfaced ethical, legal, and societal issues that require that the research be examined, or examined again, at higher levels of the agency.
A monitoring process could, in principle, be similar to the initial screening process, with the important proviso that the baseline be updated to take into account what has been learned since the last look
at the project. To catch ethical, legal, and societal issues that may have appeared in the interim, the monitoring process should touch all projects in the agency’s R&D portfolio, so that projects that were previously determined to not raise ELSI concerns can be reexamined. But the intent of this requirement is not to reopen a debate over a project as initially characterized but rather to see if new issues have arisen in the period of time since the last examination—and in most cases, a project originally determined to not raise ethical, legal, and societal issues will retain that status upon reexamination despite progress in the project. It may also be the case that projects originally determined to raise ELSI concerns have evolved in such a way that it becomes clear that they do not.
Second, for some projects, the review advocated in Recommendation 2.b will conclude that no ethical or societal issues require consideration or modification. Such projects should be reexamined periodically to see whether that situation has changed.
On either path, if the perceived ELSI concerns associated with an R&D project change significantly, the interested agency will have to adapt to those changes. When new issues are identified (or previous attempts to address already-identified issues prove inadequate) and programmatic or project responses are developed, the program or project plan can be modified accordingly. That is, an adaptive approach relies on continual (or at least frequent) midcourse changes in response to feedback.
How, if at all, should a follow-on assessment differ structurally from an initial assessment? On one hand, involving the same person or persons provides an important degree of continuity and reduces the burden of getting up to speed on any given project. On the other hand, involving others who were not involved in the initial assessment provides new perspectives that may be valuable and are more likely to reveal new issues. A mix of those familiar and unfamiliar with a given project helps to resolve the tension between these two propositions, but a mix implies that at least two people must consider each project, which entails higher overhead. Agencies must decide how to manage these tensions, and the outcome may well vary by agency.
These first three subrecommendations lay out the elements of a process for identifying, assessing, and monitoring ethical, legal, and societal issues that may arise from research. Box 8.1 provides an example.
Box 8.1 One Example of How to Implement Subrecommendations 2.a, 2.b, and 2.c
Subrecommendations 2.a, 2.b, and 2.c lay out the elements of a process for identifying, assessing, and monitoring ethical, legal, and societal issues that may arise from research. What follows is a concrete example of one way that a flexible and minimally bureaucratic process might be implemented. This is not the only possible method of implementation and it will not be, in all circumstances, the best, but it does provide an example of the committee’s thinking. The example is based on an agency funding extramural research projects, but it could also be applied to other kinds of research support.
All researchers applying for funding support would be required to answer a question (or questions) in the application about the ethical, legal, and societal issues that they see as being raised by their research. As part of the review of the research proposal, someone within the agency would examine all proposals to identify which ones appear likely to raise significant ELSI concerns. Depending on the size and breadth of the research portfolio at the agency and its internal organization, that examination might be conducted by one person or several. Rarely if ever would the full-time effort of one employee be required.
The person in charge of the examination process would have the benefit of the applicant’s self-assessment, but would not be bound by it. The screening process would result in one of two decisions. It might conclude that there were no significant ELSI concerns in this research. In that case, the proposal would be released to the more general funding process. Alternatively, the examination could conclude that the proposal did raise potentially significant issues. In that case, the proposal would be sent to the assessment process.
The assessment process could be done by a committee, made up of designated agency personnel, but with the power to ask for participation by other agency employees or even by outside experts when relevant. Early in the process the committee should ask itself whether its membership has the appropriate expertise.
This committee would be charged with assessing the likelihood and significance of the ethical, legal, and societal issues that the research proposal raises. It would be empowered to conclude that the issues were not sufficiently important to require action, to recommend actions to mitigate the effects of those issues, to recommend against funding the research, or to refer the issues to higher authorities within the agency. It could also combine some of these actions, or take others as appropriate.
For research proposals that were funded, the funding agency might require an annual review for ethical, legal, and societal issues. Researchers might also be encouraged to bring to the attention of program managers new ELSI concerns if they become aware of them during the course of their work. Proposals that initially were not seen as raising such issues, either during the screening or after the assessment process, could be sent back through the screening process. The screening process, once again, could conclude that the research still did not raise any ethical or societal issues that required consideration; could conclude that the issues were modest enough to require only staff review; or could send the project to the assessment committee for possible action.
If the research proposal had initially reached the assessment committee, and its review raised significant issues regarding proposed research, then the assessment committee would review the research, its progress, and the ethical, legal, and societal issues it raises each year. It could then recommend changes as necessary.
2.d–Engaging with various segments of the public as needed
With the stipulation that engagement with various segments of the public does not necessarily mean coming to consensus with them, an agency’s ELSI deliberations will often benefit from such external engagement. For example, public concerns about a given R&D project are often formulated in ELSI terms rather than in technical terms. As indicated above, policy makers must be prepared for the emergence of unforeseen outcomes of technology development and thus must have structures in place that will detect such outcomes and focus attention on them in a timely way. When unforeseen outcomes do emerge, policy makers must be prepared to communicate with the public using proven techniques (as described in Chapter 4 of this report). A developed strategy for public communication is also useful when anticipated ELSI concerns become public. Government actions in the United States ultimately depend, legally and practically, on the consent of the governed. Building public understanding of an agency’s actions, the reasons for those actions, and the precautions the agency has taken will normally be the best strategy, for democracy and for the agency.
In addition, members of these various publics (examples include communities of expertise that may be relevant to an R&D project who are not formally associated with it, including technical experts, experts on risk assessment and communication, and those with ELSI expertise broadly defined) may have points of view that were not well represented in an agency’s internal deliberations about a given R&D project. Engagement with these publics may well yield information that may have been overlooked or underweighted in these deliberations. Ongoing engagement throughout the course of a project may reveal the impending appearance of initially unanticipated ethical, legal, and societal issues, and thus provide early warning to program managers and enable a more rapid response if and when these new issues do appear. Finally, the mere fact of consultation and engagement with a wide range of stakeholders helps to defuse later claims that one perspective or another was ignored or never taken into account.
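The screening-and-assessment flow that Box 8.1 describes can be summarized as a simple decision process. The sketch below is purely illustrative and is not part of the report's recommendations; all names, decision inputs, and data structures are hypothetical. It models the two-step flow in which a screener, informed but not bound by the applicant's self-assessment, routes a proposal either to the general funding process or to an assessment committee, which then chooses among the dispositions the report lists.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Disposition(Enum):
    """Possible outcomes of the screening and assessment steps."""
    NO_CONCERNS = auto()   # released to the more general funding process
    MITIGATE = auto()      # recommend actions to mitigate the identified issues
    ESCALATE = auto()      # refer the issues to higher authorities in the agency
    DO_NOT_FUND = auto()   # recommend against funding the research

@dataclass
class Proposal:
    title: str
    self_assessment: str                       # applicant's answer to the ELSI question(s)
    flagged_issues: list = field(default_factory=list)

def screen(proposal: Proposal, screener_flags: list) -> bool:
    """Initial screening: the screener has the benefit of the applicant's
    self-assessment but is not bound by it. Returns True if the proposal
    should go on to the assessment process."""
    proposal.flagged_issues = list(screener_flags)
    return bool(screener_flags)

def assess(proposal: Proposal, significant: bool, mitigable: bool) -> Disposition:
    """Assessment committee decision, choosing among the options the report
    names: no action, mitigation, or referral upward (a recommendation
    against funding would be a further option in a fuller model)."""
    if not significant:
        return Disposition.NO_CONCERNS
    if mitigable:
        return Disposition.MITIGATE
    # Significant and not readily mitigable: escalate rather than decide alone.
    return Disposition.ESCALATE

# Example walk-through of the two-step process.
p = Proposal("sensor research", self_assessment="no issues foreseen")
if screen(p, screener_flags=["dual-use surveillance potential"]):
    outcome = assess(p, significant=True, mitigable=True)
else:
    outcome = Disposition.NO_CONCERNS
```

In this toy run, the screener overrides the applicant's "no issues foreseen" self-assessment, and the committee recommends funding with mitigation; an annual review, as the box suggests, would simply rerun the same two steps with an updated baseline.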
For example, the ethical perspectives of potential users of a given application may be relevant. If an application (as presented to a potential user) offends the user’s ethical sensibilities, the likelihood that the user will actually use the application is obviously diminished. (Although it is
true that U.S. military users could be ordered to use a given application in any case, the same does not hold true for U.S. coalition allies, who might benefit from the capabilities afforded by a new application.)
Finally, a relevant stakeholder that interested agencies should engage is the community of researchers themselves. An agency that is considering substantive changes in its requirements for proposals and that wants researchers to attend to ethical, legal, and societal issues as part of the R&D it supports has some responsibility to engage with and inform the research community about what it means to do so. What is the rationale for these changes? How, if at all, will research projects have to change? What, if anything, does “attending to ethical, legal, and societal issues” mean in the context of decisions about specific proposals? The particulars of how best to proceed with such explanations almost certainly depend on the specific agency involved.
For R&D projects that are classified, public engagement is obviously constrained to a certain extent. Nevertheless, even if such projects can be discussed only with the cleared subsets of the various stakeholder groups, the result will still be more robust and defensible than if the project had not been discussed at all.
2.e–Periodically reviewing ELSI-related processes in an agency
As noted above, well-meaning policy statements are sometimes translated into excessively bureaucratic requirements when implemented “on the ground.” To ensure that ELSI-related processes do not place undue burdens on researchers or on program managers in an agency, these processes should themselves be reviewed periodically to ensure that they remain consistent with the intent of high-level policy statements regarding the handling of ethical, legal, and societal issues in the agency. If the agency finds that the promulgated policies and implementations are both consistent with the senior leadership’s intent and helpful to the agency, it has the option of advocating, through appropriate chains of command, that other agencies funding S&T research undertake similar efforts.
Recommendation 3: Interested agencies supporting R&D on emerging and readily available technologies of military significance should undertake an effort to educate and sensitize program managers to ethical, legal, and societal issues.
One critical element of effective implementation is education. As a general rule, program managers are not selected for their jobs on the basis of their knowledge of ELSI concerns that a given R&D effort might raise. Indeed, such individuals generally lack any formal training at all in such
matters, and it is unreasonable to expect these individuals to attend to (or even notice) ethical, legal, and societal issues in any systematic manner in the absence of such training. For work with military relevance, however, program managers are sometimes current or former military personnel, all of whom have had some training in and exposure to the laws of war.
In an agency that funds or oversees research likely to raise complex ethical, legal, and societal issues, the relevant staff should be educated about the problems that can arise, and have arisen, for similar agencies when those issues are ignored. The fate of the Total Information Awareness program and its proponents, as well as the negative effects the episode had on DARPA overall, could be one useful object lesson.
If funding agencies are to screen, assess, and monitor research proposals and projects for possibly significant ethical, legal, and societal issues, they will need people with the ability to recognize those issues. Like all fields, the fields that assess ELSI concerns arising with various technologies have their own vocabularies. At the very least, the agency personnel dealing with these issues will have to understand, at some level, the relevant “language.” At the same time, those with ELSI responsibilities and/or expertise must have some understanding of the underlying research in order to identify issues that may or may not emerge.
One crucial, and easily overlooked, aspect of building internal expertise is building history. If an agency has no institutional memory of what ethical, legal, and societal issues it has faced, how it dealt with those issues, and what the consequences were, its ability to learn from that past is diminished. This diminished capability will be a particular problem for agencies that have frequent turnover. An interested agency needs to make it a priority to collect—and to use—information about how it has dealt with these issues. The agency party invested with functional accountability for ELSI concerns (as mentioned in Recommendation 1) might be in a good position to collect and organize that kind of information.
Once again, the committee does not believe there is one perfect method for building that expertise. Depending on the agency, it might make sense to hire employees who are trained in ethical, legal, and societal issues arising from technology. In other settings, it may make the most sense to provide existing agency employees with additional training to help them understand these issues. Without making a specific recommendation to use this particular mechanism, the committee notes that the Defense Acquisition University, an educational institution within the Department of Defense that seeks to educate professionals in the fundamentals of defense acquisition, could be a vehicle through which agency employees might be sensitized to ethical, legal, and societal issues.
In most cases, the committee expects that funding agencies will need to have several people involved in making decisions concerning ethical,
legal, and societal issues in research proposals and projects. One, or a few, might require extensive training, whereas for others involved in the assessment or monitoring process, more limited training may be sufficient. For example, in an agency with one person screening proposals or projects for ELSI concerns, that person might need substantial training. The other people involved at the assessment stage, though, might be sufficiently trained through a series of a few lectures or seminars, possibly even delivered online or in videos.
Recommendation 4: Interested agencies supporting R&D on emerging and readily available technologies of military significance should build external expertise in ethical, legal, and societal issues to help address such issues.
The need for some training or expertise in identifying and assessing ethical, legal, and societal issues may also exist within the research projects funded by an agency. One possible intervention that agencies could suggest for projects raising such issues could be that the project itself should include people who have had, or would receive, some training in dealing with those issues. For example, institutions that apply for funding from the National Science Foundation are required to specify how they will provide training and oversight in the responsible and ethical conduct of research to undergraduate students, graduate students, and postdoctoral researchers participating in the proposed research project.2
Similar mechanisms might be used to promote awareness of ethical, legal, and societal issues in the next generation of researchers.
However, not all expertise should be, or can be, internal to an agency. Agencies should seek advice from external experts, because properly addressing some ELSI concerns will require a depth of knowledge that cannot realistically be expected of program managers or scientists. If such expertise is not immediately available, it should be cultivated. Such cultivation would have both immediate and longer-term benefits. It would help the agency directly by providing that expertise, but, in the longer run, it could also build knowledge, expertise, and even trust outside the agency about what it does about ethical, legal, and societal issues, and why.
In addition, outside advisors can help to reduce conflicts of interest and to ensure honest, objective feedback. Agency employees may be pressured or otherwise reluctant to be as forthcoming or straightforward as needed. Of course, even outside advisors are prone to the same vulnerabilities, if they are worried that harsh criticism means they will not be
retained in the future. But as outsiders, they presumably have less at risk than do agency employees.
Many methods exist for involving outside experts. They could be consulted on individual cases, on a consultant or contractual basis. At the other extreme, an agency might want to set up an advisory committee for agency leadership to consider ethical issues associated with ERA technologies in a national security context. An agency could bring in outside experts full-time for limited terms of 1 or 2 years, or could hold a quarterly lecture or seminar series.
There are some other ways that an agency might try to build a knowledgeable and useful relationship with outside experts. It might, for example, fund research into the ELSI implications of some of its work. The ELSI program at the National Human Genome Research Institute has done that for more than two decades. An agency might also host an occasional conference at the agency on the ethical, legal, and societal issues raised by the agency’s research. Many approaches are possible; what is important is that an agency focuses on the goal of getting help from outside experts who understand ELSI concerns but also understand, to some extent, the agency and its mission. Such expertise may be rare, in which case new training grants on ethical, legal, and societal issues in S&T with regard to specific agency culture, procedures, and mission might be indicated.
Recommendation 5: Research-performing institutions should provide assistance for researchers attending to ethical, legal, and societal issues in their work on emerging and readily available technologies of military significance.
Recommendations 1 through 4 address government agencies that fund research on ERA technologies of military significance. To the extent that these recommendations are adopted, researchers supported by these agencies may need assistance in identifying and responding to ethical, legal, and societal issues—indeed, many researchers are likely never to have considered, systematically or at all, the ELSI concerns that might be associated with their research.
Depending on the research field, investigators can be expected to be more or less familiar with ethical, legal, and societal issues. In all cases, the starting point for efforts at assistance should be the assumption that researchers will want to do the right thing. In addition, all researchers should have access to assistance in anticipating the consequences of complex, uncertain research programs, and that assistance should be available
early enough in the research planning process to enable researchers to accommodate and benefit from it.
The committee believes that research-performing institutions should provide ELSI-related assistance to researchers working under their aegis in much the same way that they provide other functional support, such as legal, contracting, and various kinds of administrative support.
Research-performing institutions have processes and standards for addressing certain ELSI concerns in certain research contexts, such as protections for human subjects or environmental safety and health. When research on ERA technologies of military significance involves such issues, these processes and standards may be relevant. In cases where existing requirements and procedures are not applicable, research-performing institutions should encourage researchers to use their creativity and provide additional institutional assistance to examine ethical, legal, and societal issues and determine how best to proceed, rather than stipulating bureaucratic requirements for compliance with a single uniform policy.
Finally, certain research-performing institutions (e.g., universities) are likely to have access to in-house ELSI-related resources, such as academic researchers who specialize in ELSI-related matters. In such cases, these institutions may be able to play a useful matchmaking role, linking scientific researchers who wish to address potential ethical, legal, and societal issues with sources of relevant expertise.
Providing assistance of this nature will help researchers to respond to any ethical, legal, and societal issues of concern to agencies that might fund their research.
In addition, many institutions performing research on ERA technologies with military significance already have in place policies and procedures to address a variety of ethical, legal, and societal issues that arise in some S&T research. For example, institutional review boards for research involving human subjects are quite common. Leveraging policies and procedures already in place to address ELSI concerns associated with certain kinds of research will help to minimize unnecessary overhead in institutions performing research on ERA technologies with military significance. Where policies and procedures already exist to address ethical, legal, and societal issues common to both military and civilian-oriented research, new ones should not be created to address them.
Although ethical, legal, and societal issues have always accompanied the development of technology for military purposes, ERA technologies present special challenges because of the difficulties in anticipating how they might be researched and ultimately used. Fortunately, previous
efforts to address ethical, legal, and societal issues associated with S&T in a civilian context provide a useful base of knowledge for addressing such issues in a military context. Thus, addressing ELSI concerns in a military R&D context should not be regarded as an entirely new intellectual enterprise. That said, civilian-oriented ELSI mechanisms cannot be used in a military context without taking into account the special and unique aspects of that context.
Apparent in DARPA’s charge to the committee is a concern about what it means to undertake R&D in an ethical manner. The committee applauds this concern, recognizes the difficulties posed by this concern, and hopes that its report is a first step forward in helping DARPA—and indeed all agencies that support military R&D—address these very important and human issues.