Unanticipated impacts of a given science, technology, or application are a frequent source of ethical, legal, and societal issues; it is therefore important that decision makers, scientists, and engineers consider as broad a range of potential impacts as possible. By doing so, scientists and engineers maximize their ability to improve designs in ways that can reduce risks and increase benefits, and decision makers, together with scientists and engineers, can consider how best to engage citizens in deciding which technologies to develop and how to deploy and evaluate the applications and their uses.
The analytical framework described in Chapter 5 is based on the idea that undertaking a systematic search for ethical, legal, and societal issues that could come up in the context of a given technology or application will surface more possible issues than if no such search is undertaken. That is, there is value in an a priori consideration of ELSI concerns before a technology or an application is developed. But how might one anticipate ethical, legal, and societal issues associated with unknown applications that may—or may not—lie in the future?
Predictive analysis is arguably the most difficult task in any assessment of ethical, legal, and societal issues. Indeed, it sometimes has overtones of “expecting the unexpected” and identifying issues before they can be known. To be sure, literal talk of anticipating unanticipated ethical, legal, and societal issues is oxymoronic. But the ability to respond quickly to unanticipated issues that do arise can be enhanced by addressing in
advance a wide variety of identified issues, because that exercise provides building blocks out of which responses to unanticipated ethical, legal, and societal issues can be crafted.
Moor argues that
because new technology allows us to perform activities in new ways, situations may arise in which we do not have adequate policies in place to guide us…. [Furthermore,] the subtlety of the situation may escape us at least initially, and we will find ourselves in a situation of assessing the matter as consequences unfold. Formulating and justifying new policies is made more complex by the fact that the concepts that we bring to a situation involving policy vacuums may not provide a unique understanding of the situation. The situation may have analogies with different and competing traditional situations. We find ourselves in a conceptual muddle about which way to understand the matter in order to formulate and justify a policy.1
Anticipating ethical, legal, and societal issues associated with applications that may—or may not—lie in the future should, in principle, be enhanced by good technology forecasting. If the specific trajectory of a given science or technology development were known in advance, anticipating the ethical, legal, and societal implications associated with that trajectory would be little different from anticipating the ethical, legal, and societal implications associated with a known application of that technology.
But as it turns out, it is very difficult to predict trajectories of science or technology development. The history of technology forecasting suggests that inaccurate forecasts are the rule rather than the exception, and the inaccuracies are major rather than minor. In very broad terms, a variety of trajectories for any given scientific or technological development are possible. Some unanticipated applications have positive impacts: the most common use of the ARPANET (the forerunner of the Internet) was expected to be the remote use of one university’s computer facilities by researchers at a distant university. Instead, the Internet has richly and densely connected people as well as computers. Other unanticipated applications have negative impacts—the introduction of nonlethal weapons into a police force
1 James H. Moor, “Why We Need Better Ethics for Emerging Technologies,” Ethics and Information Technology 7:111-119, 2005.
can result in an increased use of force overall, as noted in Chapter 3. A great deal of experience with technology development shows that unanticipated outcomes are common, indeed more the rule than the exception.
For example, from an initial orientation toward particular military applications, one might consider unanticipated “off-label” military applications or nonmilitary applications. Examples of off-label military applications include the use of timing signals from Global Positioning System satellites to synchronize frequency-hopping communications systems, the use of bulldozers as weapons to bury enemy soldiers in trenches,2 and the use of helmets as cooking pots. The primary characteristic of an off-label military application is that the designers of the application did not intend for it to be used that way in practice. Such applications are generally improvised in the field after soldiers have been provided with the technology in question.
In addition, it is often said that the short-term impact of a given technology is overestimated and that the long-term impact is underestimated. Excessive optimism about short-term effects may lead to disillusionment—and as a given technology falls out of favor for its promised applications, pressures will arise to preserve investments already made by considering other applications. Underestimation of long-term effects reflects the substantial difficulties in making predictions with long time horizons—and it is in the long term that many real-world consequences that raise ELSI concerns will become manifest.
What helps to explain the limited utility of technology forecasting in addressing ethical, legal, and societal issues in advance of their appearance? It is helpful to consider multiple sources of uncertainty in such forecasts.
Unproven Fundamental Science
The fundamental science underlying a proposed application must be sound. From time to time, advanced applications are proposed or suggested when the fundamental underlying science has not yet been proven or, more often, has simply not been adequately developed. In such cases,
2 Patrick J. Sloyan, “Iraqis Buried Alive—U.S. Attacked with Bulldozers During Gulf War Ground Attack,” Newsday, September 12, 1991, available at http://community.seattletimes.nwsource.com/archive/?date=19910912&slug=1305069.
the applications in question are both speculative and grand in their scope and scale.
As an example, consider the promises of virtually unlimited and free energy made when cold fusion first made the headlines. Martin Fleischmann testified to the U.S. Congress that cold fusion was about “cheaper energy … unlimited energy, and energy that may be less destabilizing to our environment.”3 In the same hearing, a senior Administration advisor at the time advocated a development model in which “even before [the] basic science is proven, applied research [would] begin …, product developments [would be] undertaken, market research [would be] done, and manufacturing processes [would be] working.” He further argued against “dawdling and waiting” until the science of cold fusion is proven. It is not hard to imagine such thinking applied in a wartime situation when development of a new technology needs to happen rapidly.
Lack of Real-World Viability Despite Technology Proof of Principle
Even when the fundamental science is sound, it is an open question as to whether anything immediately useful can be accomplished with the knowledge discovered. Many important fields of science do not easily lend themselves to practical application, at least not on a time scale shorter than many years. And although there are many definitions of practical application, a necessary if not sufficient condition is that the science can help accomplish a task that at least some elements of society find useful.
In this context, “useful” should be understood as something that some humans value in an absolute rather than a relative sense—that is, a means of accomplishing a task at lower expense and with higher confidence than is possible by another, and thus less useful, means.
An example in this category comes from synthetic biology. The fundamental principles of synthetic biology have been scientifically validated, and some “in-principle” demonstrations have been conducted—cyanobacteria that produce hydrocarbon fuel,4 E. coli modified to produce amorphadiene, a precursor for the antimalarial drug artemisinin,5 and
3 U.S. House Committee on Science, Space, and Technology, “Recent Developments in Fusion Energy Research: Hearing before the Committee on Science, Space, and Technology,” 101st Congress, 1st Session, April 26, 1989.
4 Anne M. Ruffing, “Engineered Cyanobacteria: Teaching an Old Bug New Tricks,” Bioengineered Bugs 2(3):136-149, 2011 (citing inventors P.G. Roessler, Y. Chen, B. Liu, and C.N. Dodge, “Secretion of Fatty Acids by Photosynthetic Microorganisms,” U.S. patent application publication number WO2009076559A1, Synthetic Genomics, applicant, June 18, 2009).
5 Steven A. Benner and A. Michael Sismour, “Synthetic Biology,” Nature Reviews Genetics 6(7):533-543, 2005.
E. coli modified to detect arsenic in water.6 But none of these demonstrations has yet yielded commercial value, thus illustrating that proof of principle is not the same as marketplace viability. (In a military context, a technology or application does not have to demonstrate commercial value in the same sense, but does need to be “weaponized” to be useful. For example, weaponizing a technology that demonstrates proof of principle may involve making it sufficiently rugged to use in the field, simplifying its operation so that large amounts of training are not necessary to use it, and so on.)
As for certain more futuristic applications, controlling even a very simple organism on the basis of a synthesized genome is today an achievement that strains the current state of the art.7 Indeed, in this case, the term “synthesized genome” does not refer to a genome designed from scratch but rather to one whose biological functionality is based primarily on the genome of an existing organism (and hence shares many of the same DNA sequences). The work referred to was rightly hailed as a major step forward toward the synthesis of novel and useful life-forms, but it is nevertheless just the first step in a very long journey of scientific discovery. Still, some of those responsible for this achievement write that “the ability to routinely write the software of life will usher in a new era in science, and with it, new products and applications such as advanced biofuels, clean water technology, and new vaccines and medicines.”8
Dependence of Technology Advancement on Nontechnical Influences
Scientific progress and technology refinement do not necessarily stop at the point that the first useful application is conceived or implemented. But the pace at which such progress and refinement take place is dependent on many factors other than the science and scientists themselves. Such factors include politics, budgets, the state of the economy, the availability of appropriate human capital, and so on.
To take one example, Moore’s law is often cited as an example of the inexorable development of information technology (in its most basic form, Moore’s law states that the areal density of transistors on a chip increases
6 Jennifer Chu, “A Safe and Simple Arsenic Detector,” January 25, 2007, MIT Technology Review, available at http://www.technologyreview.com/news/407222/a-safe-and-simple-arsenic-detector/.
7 Daniel G. Gibson et al., “Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome,” Science 329(5987):52-56, July 2, 2010, available at http://www.sciencemag.org/content/329/5987/52.full.
exponentially with time, with a doubling time of 18 months). And indeed that pace of technology advancement has been an important driver and enabler. But no law of nature underlies it, and in fact concerns have been raised in two dimensions. First, fundamental physics does limit the physical size of transistors, and thus there is indeed a limit to the areal density of transistors on a chip. Can other high-density technologies be developed to store and process information? Perhaps. But even that question changes the form of Moore’s law from one involving the number of transistors on a chip to one involving (for instance) the number of bits on a chip. So the metric of progress must be chosen carefully. And the question of how far into the future Moore’s law will hold is an open one.
Second, Moore’s law is at least as much an economic statement as a technological statement—the fact that the areal density of transistors has followed an exponential growth curve with a doubling time of 18 months reflects the investments that semiconductor and semiconductor equipment manufacturers have made in new fabrication plants, and they have been able to financially justify such expenditures of capital. If they did not believe that they were capable of extracting appropriate value from such expenditures, they would not have made them in the first place—and the doubling time would no longer be 18 months.9
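The compounding implied by an 18-month doubling time is easy to quantify. The following sketch is an illustration of the arithmetic only, not code or data from any cited source; the function name and the specific year values are hypothetical choices for the example:

```python
# Growth factor implied by a fixed doubling time (Moore's law as stated
# above: areal transistor density doubles every 18 months). The numbers
# here are illustrative arithmetic, not measured industry data.

def density_multiplier(years: float, doubling_months: float = 18.0) -> float:
    """Factor by which areal density grows after `years` at the given
    doubling time."""
    return 2.0 ** (years * 12.0 / doubling_months)

# One doubling period (1.5 years) yields exactly a factor of 2.
assert abs(density_multiplier(1.5) - 2.0) < 1e-9

# A decade at an 18-month doubling time compounds to roughly 100x,
# i.e., 2**(120/18), a bit over 101.
print(f"10-year growth factor: {density_multiplier(10):.1f}")
```

The same arithmetic shows why small shifts in the doubling time matter to long-range forecasts: stretching the doubling time from 18 to 24 months cuts the 10-year growth factor from about 102 to exactly 32, a threefold difference in the forecast.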
Building on this example, economics is often one of the most unpredictable and powerful influences on technology evolution. If the cost of implementing an application becomes very low because of manufacturing advances (e.g., as described by Moore’s law), commodity component markets (a particular concern for IT hardware and software), or other factors, the application may become affordable for uses and users that were not initially anticipated. This common trajectory lowers the barrier to entry and turns a technology into one that is readily available.
Competitiveness with Respect to Possible Alternatives
A proof of principle is only the first step in developing a viable application—that is, an application that offers at least as good a way, and preferably a better one, to accomplish a needed task. If there is no other way to accomplish that task, the path forward is likely to be more straightforward, and perhaps more predictable, simply because there are no alternatives.
But the situation is much more complicated when an application based on new technology must compete with existing or proven alternatives. Compared to those alternatives, the new application must perform the task better, afford the user a wider range of advantages, be cheaper, be easier to produce, be less environmentally damaging, and so on. If the new application affords no advantages over existing ones that perform the same task, there is no compelling reason for anyone to adopt it. When the new application offers only incremental advantages over existing applications, there is often uncertainty about whether those incremental advantages are sufficiently important, although during times of national emergency (such as wartime), incremental advantages are often sought with less attention to matters such as cost.

9 David E. Liddle, “The Wider Impact of Moore’s Law,” IEEE Solid-State Circuits Society Newsletter 11(5):28-30, September 2006, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4785858.
If the new application can be shown to be competitive with existing alternatives, it has a chance of being widely adopted. Wide adoption of such an application, in turn, lays a foundation for even more applications to be developed using the underlying technology. But predicting such an outcome, given the large set of ELSI concerns that must be resolved successfully, is problematic.
ELSI Acceptability of Anticipated Use
Even when a new application can be shown to be competitive with existing alternatives, it may not succeed when ethical, legal, and societal issues are a concern. For example, an application may be competitive only when certain intangible costs are ignored, and controversy over the magnitude or significance of those costs may emerge. Advocates of the application will argue that those costs are low, or that because they are intangible, they should not be considered at all; opponents of the application will argue the reverse position. Such controversy may well delay or even halt the adoption of an otherwise promising application.
One example is the Active Denial System (ADS), a directed-energy nonlethal weapon first developed in the mid-2000s and designed to keep humans out of certain areas.10 The ADS directs a beam of microwave energy at a target such as a human being; the beam causes an intense burning sensation on the target’s skin. However, because the beam does not penetrate very far into the skin, it causes little lasting damage (no lasting damage in nearly all cases). The pain is intended to cause the person to turn away and flee the area.
In 2003, a senior U.S. Air Force scientist asserted that the use of the ADS would have averted an incident in which U.S. soldiers in Iraq fired into a crowd that was protesting their presence in the city of Fallujah.11
The ADS could have been used to force the crowd to disperse. However, as late as December 2006 the Department of Defense had declined to deploy the weapon, apparently in part because of concern that it might be misconstrued by the public as a device for torture.
A second example is the various lasers that have been considered for use as antipersonnel weapons, as discussed in Chapter 3. Such weapons would have been able to injure (blind) enemy soldiers at long range; furthermore, by inflicting serious injury on enemy soldiers but not killing them, such weapons could have seriously increased the logistical burden on the enemy to care for injured soldiers. However, despite such operational advantages, the United States promulgated policy that prohibited the use of lasers “specifically designed to cause permanent blindness of unenhanced vision”12 and later signed on to an international treaty banning such use, in part for ethical reasons.
A given technology that spawns one widely adopted application often spawns others that were entirely unanticipated when the first application was conceived. And the success of these unanticipated uses often depends on the development of other technologies. For example, although lasers were recognized at first for being applicable to communications, such applications were wireless. The use of lasers for fiber-optic communications depended on the availability of low-cost fiber optics, a technology that was for the most part unanticipated when lasers were first invented.
Any one of the several factors described above entails some degree of uncertainty as to outcome. But when the uncertainties associated with all of these factors are compounded, it should not be surprising that in general, long-term predictions about a technology’s effects are not particularly accurate. This observation, along with the potential for unanticipated use mentioned above, is applicable to nearly any kind of new technology.
Technologies that are easily accessible to many parties introduce two additional noteworthy complications. First, increasing the number of parties with access to a technology will increase the number of applications that come to fruition, which in turn makes it less likely that any a priori process for anticipating trajectories of technology evolution will anticipate all of them.
Second, increasing the number of parties—especially across international (and thus cultural) lines—increases the likelihood that different ELSI perspectives on a given technology or application will be relevant to any consideration of the ethical, legal, and societal issues. In this context, knowing where important ELSI differences will arise becomes problematic, especially when the process is limited to an analytic exercise conducted by only a few people with narrow perspectives. Alternative approaches for identifying, anticipating, and addressing ethical, legal, and societal issues include the use of deliberative processes to tap a broad range of perspectives; anticipatory governance, a newer approach to examining the societal dimensions of R&D; and adaptive planning and policy making.
The analytical framework outlined in Chapter 5 speaks to insights that can be obtained through a careful consideration of various domains of possible ethical concern. Thus, it is an important tool for anticipating and predicting ethical, legal, and societal issues that might be associated with the pursuit of a given technology or application. A policy maker faced with deciding about how or whether to proceed in a particular technological direction might examine each of the sources of insight described in Chapter 4 and ask if the particular direction in question might raise relevant ELSI questions in any of them.
But as Chapter 5 points out and the discussion at the outset of the present chapter suggests, that framework cannot be regarded as comprehensive. To improve their capability for anticipating and predicting and to exploit opportunities to gain new insights, policy makers have sometimes turned to deliberative processes that seek to identify a broad range of perspectives and possible stakeholders in discussions of any given issue.
Deliberative processes were described in a 1996 report of the National Research Council entitled Understanding Risk: Informing Decisions in a Democratic Society.13 The study committee responsible for that report was originally charged with developing an approach to risk analysis structured to enable making better and more broadly acceptable governmental decisions regarding regulatory actions. The report noted that risk characterization involved “complex, value-laden judgments” and required “effective dialogue between technical experts and interested and affected
13 National Research Council, Understanding Risk: Informing Decisions in a Democratic Society, National Academy Press, Washington D.C., 1996.
citizens who may lack technical expertise, yet have essential information and often hold strong views and substantial power in our democratic society.”
In particular, the 1996 report drew a contrast between analytical and deliberative modes of inquiry as “complementary approaches to gaining knowledge about the world, forming understandings on the basis of knowledge, and reaching agreement among people.” Key to an analytical mode of inquiry was the involvement of an expert community that was capable of answering factual questions. By contrast, a deliberative mode of inquiry emphasizes communication among stakeholders and between stakeholders and policy makers and collective consideration of issues. In the words of the report, “participants in deliberation discuss, ponder, exchange observations and views, reflect upon information and judgments concerning matters of mutual interest, and attempt to persuade each other.” Both modes of inquiry, the report argued, were essential to effective risk characterization.
The 1996 report articulated three separate rationales for broad participation in risk decisions: normative, substantive, and instrumental.
• From a normative standpoint, the principle that government should obtain the consent of the governed drives the idea that citizens have the right to participate meaningfully in public decision making.
• From a substantive standpoint, the report argued that “relevant wisdom is not limited to scientific specialists and public officials and that participation by diverse groups and individuals will provide essential information and insights about a risk situation” and further that “nonspecialists may contribute substantively to risk characterization … by identifying aspects of hazards needing analysis, by raising important questions of fact that scientists have not addressed, and by offering knowledge about specific conditions that can contribute more realistic assumptions for risk analysis … [and by] help[ing] design decision processes that allow for explicit examination, consideration, and weighing of social, ethical, and political values that cannot be addressed solely by analytic techniques.”
• From an instrumental standpoint, the report argued that “broad public participation may decrease conflict and increase acceptance of or trust in decisions by government agencies” and that “mistrust is often at the root of the conflicts that arise over risk analysis in the United States.” Furthermore, the report said that “providing people an opportunity to learn about the problem, the decision making process, and the expected benefits of a decision may improve the likelihood that they will support the decision” and/or “clear up misunderstandings about the nature of a controversy and the views of various participants. And it may contribute
generally to building trust in the process, with benefits for dealing with similar issues in the future.”
After describing these rationales, the 1996 report went on to argue that deliberative processes could be used to surface a broader range of risks that would not be identified by less inclusive processes, and that these risks could, when necessary, be addressed more formally and rigorously using more traditional analytical means.
Many of the lessons of this 1996 study regarding the value of deliberative processes to risk characterization are applicable to anticipating and identifying ethical, legal, and societal issues associated with new technologies and applications. Indeed, at least the substantive and instrumental rationales can be carried over to the ELSI context directly: nonspecialists in the technology or application under consideration may have relevant wisdom, and broad participation in decision making (especially politically controversial decision making) may make the outcome of those decisions more stable.
More recently, Worthington et al. argued that ordinary citizens should have a role in shaping technologies that pervade society, and that they can and should play a role in technology assessment.14 They further note that in the past two decades, participatory practices have expanded considerably in a number of dimensions, including greater racial and gender inclusivity of the people who constitute the professional workforce in scientific and engineering fields; increased involvement in research by ordinary people (e.g., through citizens collecting data for scientific analysis or through the origination of scientific research projects in citizen concerns); challenges by citizens to the authority of experts and their sponsors; and more frequent emergence of dissidents inside science and engineering fields who challenge research programs backed by industry, government, and scientific institutions.
Broad participation is also relevant because of another reality of decision-making processes: when potentially controversial issues are addressed, opponents of a particular policy will seek support for their opposition from all plausible sources. Ethical concerns may figure in the logic driving their opposition. Indeed, opponents of a particular policy may well be more sensitive to and aware of ethical concerns than are the policy’s proponents, who may have relied on an analytical process that is not sensitive to these positions and perceptions. Sometimes the most salient expression of ethical concerns is the emergence of a political or public controversy.

14 Richard Worthington et al., Technology Assessment and Public Participation: From TA to pTA, December 6, 2012, available at http://ecastnetwork.wordpress.com/technology-assessment-and-public-participation-from-ta-to-pta/.
Rather than resisting or dismissing ethical concerns that opponents raise (even if their stated ethical concerns are not in fact the “real” reasons for their opposition), policy makers can take advantage of the opportunity to gain ethical insights that might otherwise be unavailable. This is not to say that all concerns are necessarily dispositive, but some may be worthy of intellectual effort to address.
On the other hand, ethical, legal, and societal issues are not analogous to most of the risks considered in the 1996 NRC report cited above. In particular, there is no analytical or technical resolution to many ELSI dilemmas—and seeking resolution or consensus with respect to such dilemmas can result in a never-ending debate. Thus, in an ELSI context, deliberative processes should be regarded first as a way to surface relevant issues that would not otherwise be revealed and second as a way to gather ideas for possible resolutions to those issues. Deliberative processes also help to educate more people about the technology and the ethical, legal, and societal issues involved. If nothing else, they provide a broad range of parties with the opportunity to state their concerns—and reduce the credibility of future claims that those parties were entirely left out of the decision making.
Against all of these considerations is one major downside: the possibility—indeed, the likelihood—that deliberative processes will delay the relevant decision-making processes and increase the time it takes for valuable and useful technology to be delivered to troops in the field. Two observations are relevant here.
First, this downside takes on the most significance when the application in question has direct relevance to problems that these troops are facing on an ongoing and frequent basis, but less significance when useful applications lie in the far future.
Second, the use of deliberative processes may help to defuse potential future concerns and possibly head off protracted and politically dangerous controversy in the future that could delay to an even greater extent or even kill promising and useful technologies. Two relevant examples of a failure to anticipate controversy may be the Total Information Awareness program and the Policy Analysis Market program of DARPA (Box 6.1), both of which were abandoned for the ethical controversies they raised—controversies that might have become evident beforehand as a result of deliberative processes aimed at eliciting a wide range of input regarding relevant ethical issues.
Finally, the committee observes that community engagement is sometimes difficult and expensive. Finding expert facilitators of community engagement processes, identifying the appropriate communities, and
Box 6.1 Past DARPA Projects That Have Raised Controversy
The Total Information Awareness (TIA) program, later designated the Terrorism Information Awareness program, was a DARPA project initiated in 2002. TIA was aimed at detecting and averting terrorist threats through increased data sharing between federal agencies. Specifically, TIA deployed “data-mining and profiling technologies that could analyze commercial transactions and private communications” such as individuals’ “financial, educational, travel, … medical records … [and] criminal records.”1 According to the New York Times, the program operated on the premise that the “best way to catch terrorists is to allow federal agencies to share information about American citizens and aliens that is currently stored in separate databases.”2 This project raised concern among many privacy advocates including the American Civil Liberties Union, the Electronic Frontier Foundation, and the Electronic Privacy Information Center. “This was a hugely unpopular program with a mission far outside what most Americans would consider acceptable in our democracy,” said Timothy Edgar, a legislative counsel for the American Civil Liberties Union office in Washington, D.C.3 By 2003, continued privacy concerns raised by a number of groups encouraged Congress to act. First, Congress passed a law ordering a report detailing the project in Public Law 108-87.4 The requested report was to:
include a detailed explanation for each project and activity of the Total Information Awareness program—the actual and intended use of funds; the schedule for proposed research and development; and target dates for deployment. It must assess the likely efficacy of systems such as the Total Information Awareness program; the likely impact of the implementation of the Total Information Awareness program on privacy and civil liberties; and provide a list of the laws and regulations that govern the information to be collected by the Total Information Awareness program, and a description of any modifications required to use the information in the manner proposed.5
The congressionally ordered report identified as key concerns about the TIA project that it might raise “significant and novel privacy and civil liberties policy issues,” questions as to “whether the safeguards against unauthorized access and use are sufficiently rigorous,” and the possibility that the “performance and promise of the tools might lead … [to] increasing the extent of the collection and use of information already obtained ….”6 Continued concern led Congress to pass legislation defunding the specific project in defense fiscal appropriations bill HR 2658.7,8 While the legislation effectively ended the specific TIA program, it still “allowed [certain agencies] the use of ‘processing, analysis and collaboration tools’ … for foreign intelligence operations.”9 Even under these narrower conditions, concern over the possible uses of the technology remained.
The Electronic Frontier Foundation explained that “while EFF is pleased that these tools will not be developed specifically for domestic use, we are concerned that their development for foreign intelligence purposes continues to pose civil liberties risks—especially since it appears that they are to be developed under a classified ‘black budget’ with little, if any, public accountability.”10
A second program that raised public controversy was the Policy Analysis Market (PAM) (also known as the Terrorism Futures Market, FutureMAP, or Electronic Market-Based Decision Support), a project initiated by DARPA in 2001 to apply decision market theories to predict world events. The “market” would allow individuals to bet on certain events occurring, such as regime changes in the Middle East, acts of terrorism, and other political and economic events. The market was to go live in July 2003 but was canceled, right after it was announced, due to public outcry.
The markets in the PAM program actually reflected an attempt to harness the judgments of many people to improve predictive power and thus to provide better information for decision making.11 The decision markets were designed to work much like other economic markets in which investors could make bids and the prices would reflect the aggregate thinking about the likelihood of an event occurring. Such markets have proved accurate in a number of contexts, including sporting events, Hollywood movie revenues, and Oscar winners.12 Of particular interest was that a political futures market studied at the University of Iowa predicted U.S. election outcomes more accurately than either opinion polls or political pundits.13
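The report does not describe PAM's specific pricing mechanism, but one mechanism widely used in decision markets of this kind is the logarithmic market scoring rule, under which prices can be read directly as aggregate probabilities. The sketch below is illustrative only; the liquidity parameter `b` and all quantities are hypothetical, and this is not a description of PAM's actual implementation.

```python
import math

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices under the logarithmic market scoring rule.

    Prices lie in (0, 1) and sum to 1, so they can be read as the
    market's aggregate probability for each outcome.
    """
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def trade_cost(quantities, outcome, shares, b=100.0):
    """Amount a trader pays to buy `shares` of `outcome`: C(q') - C(q)."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two-outcome market (event occurs / does not occur) with no shares
# outstanding: both outcomes start at price 0.5.
q = [0.0, 0.0]

# A trader who believes the event is likely buys 50 "yes" shares;
# the purchase costs money and pushes the "yes" price above 0.5,
# so the posted price now reflects the aggregated belief.
paid = trade_cost(q, 0, 50)
q[0] += 50
```

The key property for decision making is the last step: each trade moves the price, so the standing price summarizes what the pool of traders collectively believes about the event's likelihood.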
Critics complained that unlike markets for forecasting U.S. elections or Oscar winners, PAM’s decision markets focused on predicting possible terrorist acts. One critic argued, “Trading on corn futures is real different than trading on terrorism and atrocity futures. One is morally fine and represents free enterprise, and the other one is morally over the line.”14 Others objected to the project on the grounds that “it was unethical and in bad taste to accept wagers on the fate of foreign leaders and the likelihood of terrorist attacks.”15 There was also concern that the market would actually incentivize terrorist acts, such that “investors” could “profit from the accurate prediction of attacks that they carry out.”16
Politically, the proposed PAM program resulted in a firestorm of criticism. Senator Ron Wyden described the PAM program as “a federal betting parlor on atrocities and terrorism,” calling it “ridiculous and … grotesque.”17 He further stated that “betting on terrorism is morally wrong.” Senator Byron Dorgan characterized the PAM program as “the most Byzantine thing I have ever seen proposed by a federal agency.”18 Senator Hillary Rodham Clinton added her opinion that it was “… a market in death and destruction, and not in keeping with our values.”19 As a result of these criticisms, the PAM program was shut down within a day of its public announcement.
1 Jeffrey Rosen, “Total Information Awareness,” New York Times, December 15, 2002, available at http://www.nytimes.com/2002/12/15/magazine/15TOTA.html.
3 Carl Hulse, “Congress Shuts Pentagon Unit Over Privacy,” New York Times, September 26, 2003, available at http://www.nytimes.com/2003/09/26/politics/26SURV.html.
4 Report to Congress Regarding the Terrorism Information Awareness Program, May 20, 2003, available at http://epic.org/privacy/profiling/tia/may03_report.pdf.
5 Congressional Research Service, “Privacy: Total Information Awareness Programs and Related Information Access, Collection, and Protection Laws,” RL31730, 2003, available at http://www.fas.org/irp/crs/RL31730.pdf.
6 Report to Congress Regarding the Terrorism Information Awareness Program, 2003, available at http://hanson.gmu.edu/PAM/govt/DARPA-report-to-congress-5-20-03.pdf.
7 Electronic Frontier Foundation, “Total/Terrorism Information Awareness (TIA): Is It Truly Dead?”, 2004, available at http://w2.eff.org/Privacy/TIA/20031003_comments.php.
8 HR 2658, later Public Law 108-87, states: “Sec. 8131. (a) … [N]one of the funds appropriated or otherwise made available in this or any other Act may be obligated for the Terrorism Information Awareness Program … [but] this limitation shall not apply to the program hereby authorized for processing, analysis, and collaboration tools for counterterrorism foreign intelligence.”
9 Carl Hulse, “Congress Shuts Pentagon Unit Over Privacy,” New York Times, September 26, 2003, available at http://www.nytimes.com/2003/09/26/politics/26SURV.html.
10 Electronic Frontier Foundation, “Total/Terrorism Information Awareness (TIA): Is It Truly Dead?,” 2004.
11 Robert Looney, “DARPA’s Policy Analysis Market for Intelligence: Outside the Box or Off the Wall?”, Strategic Insights 2(9, September):1-10, 2003, available at http://www.au.af.mil/au/awc/awcgate/nps/pam/si_pam.htm.
13 Joyce Berg, Robert Forsythe, Forrest Nelson, and Thomas Rietz, Results from a Dozen Years of Election Futures Markets Research, College of Business Administration, University of Iowa, Iowa City, 2000, available at http://tippie.uiowa.edu/iem/archive/bfnr_2000.pdf.
15 Robert Looney, “DARPA’s Policy Analysis Market for Intelligence: Outside the Box or Off the Wall?”, 2003.
16 Schoen, “Pentagon Kills ‘Terror Futures Market’,” 2003.
17 Senators Ron Wyden and Byron Dorgan, News Conference on Terror Financing Scheme, July 28, 2003, available at http://hanson.gmu.edu/PAM/govt/senator-wyden-dorgan-pressconf-7-28-03.txt.
19 See Celeste Biever and Damian Carrington, “Pentagon Cancels Futures Market on Terror,” newscientist.com, July 30, 2003, available at http://www.newscientist.com/article/dn4007-pentagon-cancels-futures-market-on-terror.html.
engaging with each of these communities all take time and money. Decision makers who adopt deliberative processes will thus have to make tradeoffs between more comprehensive engagement with relevant communities and the financial and schedule resources available.
In the first decade of the 21st century, the fields of science and technology studies and practical ethics began to develop a new approach to examining the societal dimensions of R&D work in science and engineering. A central premise of this approach is that research trajectories have value dimensions that can be identified in all phases of the work, and that in fact must be identified if important consequences are to be adequately considered, whether those consequences involve benefits or harms or issues of social equity or inequity.
The approach is called anticipatory governance or anticipatory ethics.15 It differs from standard approaches to technology forecasting insofar as it does not treat the R&D process as a “black box,” with consideration of ethical or value issues deferred until after the R&D itself. Anticipatory governance presumes that there are ethical and value issues that are resolved—whether explicitly, implicitly, or by default—in the doing of the R&D, whether in selecting a research direction and research procedures, deciding what counts as a significant finding, examining or ignoring what benefits or harms might accrue and to whom, and so forth.
Most important, this approach does not require that its adherents be able to predict the consequences of R&D in order to proceed in an ELSI-responsible manner. Instead, it posits that R&D managers have a responsibility to be aware that the efforts they support have, and will continue to have, ELSI dimensions that need elucidation and examination at all stages, thus enabling anticipatory responsibility throughout.
Policy makers have sometimes turned to adaptive processes that allow them to respond quickly to new information and concerns as they arise in the course of technology development and use. In a 2001 article, Walker et al. note that public policies must be formulated despite profound uncertainties about the future.16 In such an environment, policies made today should account for the possibility of new information and/or new circumstances emerging tomorrow that can reduce these uncertainties. Walker et al. suggest an “adaptive” approach to policy making that responds to new information and that makes explicit provisions for learning. Thus, they argue, the inevitable policy changes (also known as midcourse corrections) that happen over time are part of a larger, recognized process and in particular are not forced by circumstance to be made on an ad hoc basis.
Walker et al. propose that adaptive policies should contain a variety of policy options, some of which are intended for immediate implementation and others held in reserve as contingency plans to be activated only if and when certain things happen. That is, adaptive policies involve taking
15 For more information on anticipatory governance, see Daniel Barben, Erik Fisher, Cynthia Lea Selin, and David H. Guston, “Anticipatory Governance of Nanotechnology: Foresight, Engagement, and Integration,” in The New Handbook of Science and Technology Studies, MIT Press, 2008; and D.G. Johnson, “The Role of Ethics in Science and Engineering,” Trends in Biotechnology 28(12, Dec.):589-590, 2010.
16 Warren E. Walker, S. Adnan Rahman, and Jonathan Cave, “Adaptive Policies, Policy Analysis, and Policy-Making,” European Journal of Operational Research 128(2):282-289, 2001.
only those actions that are necessary now and institutionalizing a process for learning and later action—and such policies are incremental, adaptive, and conditional.
Adaptive approaches to risk regulation have been used from time to time in the United States. In 2010, McCray et al. identified adaptive approaches to risk regulation in five domains: human health standards for air pollutants, air transportation safety, pharmaceutical regulation, human nutrition, and animal nutrition.17 These cases shared a prior commitment to subject existing policy to de novo re-evaluation, along with systematic efforts to obtain new factual information for use when the re-evaluation takes place. McCray et al. concluded that adaptive regulation has been at least minimally effective in improving policy in these cases and may indeed be a valuable approach to try in other domains as well.
An adaptive approach to addressing ethical, legal, and societal issues may prove valuable as well. Even if the analytical framework presented in Chapter 5 is augmented through the use of the deliberative processes described above, it is highly unlikely that all relevant ethical, legal, and societal issues will be identified before any given technology or applications development begins. That is, some initially unforeseen ELSI concerns may well arise over the course of development. An adaptive approach to addressing ethical, legal, and societal issues would thus involve the following:
• Plans that would be immediately put into action to address ethical, legal, and societal issues known to be relevant at the initiation of an R&D effort.
• Contingency plans tied to specific ethical, legal, and societal issues to be put into action if and when those issues emerge as the R&D effort unfolds. (These issues would be the issues that an a priori process can identify.)
• Criteria for recognizing the emergence of these issues and an organizational structure for receiving reports of such emergence.
• A schedule for formally determining if new circumstances, experiences, or knowledge warrant midcourse corrections to the original plan. This schedule may be tied to the calendar or to project milestones or any other reasonable set of events.
• Provisions for monitoring media, conferences, chat groups, and so on to identify unexpected ethical, legal, and societal issues as they surface.
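The components listed above can be thought of as a simple structure: immediate actions, plus contingency plans paired with recognition criteria that are checked at scheduled reviews. The sketch below is a minimal illustration of that structure, not a prescribed implementation; all names, issues, and trigger thresholds are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ContingencyPlan:
    """A plan tied to a specific ELSI issue, activated only if it emerges."""
    issue: str
    trigger: Callable[[Dict], bool]  # criterion for recognizing emergence
    actions: List[str]
    activated: bool = False

@dataclass
class AdaptivePlan:
    immediate_actions: List[str]  # put into effect at project initiation
    contingencies: List[ContingencyPlan] = field(default_factory=list)
    review_milestones: List[str] = field(default_factory=list)

    def review(self, observations: Dict) -> List[ContingencyPlan]:
        """At a scheduled review, activate any contingency whose
        recognition criterion is met by the reported observations."""
        fired = []
        for plan in self.contingencies:
            if not plan.activated and plan.trigger(observations):
                plan.activated = True
                fired.append(plan)
        return fired

# Hypothetical example: a privacy contingency triggered when monitoring
# of media coverage reports more than 10 privacy-related mentions.
plan = AdaptivePlan(
    immediate_actions=["publish data-handling policy"],
    contingencies=[
        ContingencyPlan(
            issue="privacy concerns in media coverage",
            trigger=lambda obs: obs.get("privacy_mentions", 0) > 10,
            actions=["convene stakeholder review", "revise consent process"],
        )
    ],
    review_milestones=["prototype complete", "field trial start"],
)
fired = plan.review({"privacy_mentions": 14})  # contingency activates
```

The design choice the sketch highlights is the one Walker et al. emphasize: contingency plans are prepared in advance but held in reserve, and activation is governed by explicit criteria and a scheduled review process rather than by ad hoc reaction.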
17 Lawrence E. McCray, Kenneth A. Oye, and Arthur C. Petersen, “Planned Adaptation in Risk Regulation: An Initial Survey of U.S. Environmental, Health, and Safety Regulation,” Technological Forecasting and Social Change 77:951-959, 2010.
What is the downside of adaptive planning? One disadvantage is that preparing various contingency plans can be costly, in terms of both money and personnel; such costs are incurred before a project begins and over its course. A second is that what looks like wise revision of plans in the face of new information can be perceived by stakeholders or observers as “weakness or unprincipled malleability in the face of political pressure.”18
A third objection to adaptive planning is that it is better suited to addressing consequentialist (utilitarian) concerns, which can be mitigated and softened by adjusting and modifying a technology development path going forward, than to addressing deontological concerns, which hold certain actions to be wrong regardless of their consequences and are thus less amenable to midcourse correction. (From time to time, though probably rarely, no amount of program adjustment or modification, short of complete cessation, will address ELSI concerns adequately.) Note, however, that in practice, real human thinkers generally do not hold either view in its pure form; indeed, one philosopher-ethicist, William David Ross, proposed the notion of prima facie duties, a concept that allows consequences to override deontological duties if the consequences are horrific enough but that also stresses that such duties carry weight and should not be overridden simply because there happens to be some consequentialist payoff.19
Last, adaptive planning is arguably less stable than traditional planning, which generally does not admit the possibility of midcourse corrections at all. Without adaptation, a priori planning may fail when the discrepancy between what was assumed and what is actually happening grows too large. But at some point, too much adaptation (too many midcourse adjustments, or adjustments that are too large) eliminates the benefits of planning and reduces decision making to an entirely reactive and ad hoc enterprise. The sweet spot in adaptive planning thus lies somewhere between zero adaptation and too much adaptation, and where to find that spot is a matter of judgment.
18 McCray et al., “Planned Adaptation in Risk Regulation,” 2010.
19 Anthony Skelton, “William David Ross,” Stanford Encyclopedia of Philosophy, Summer 2012 Edition, Edward N. Zalta, ed., available at http://plato.stanford.edu/archives/sum2012/entries/william-david-ross/.