2

Foundational Technologies

Foundational technologies (more properly, foundational science and technologies) are by definition those that can enable progress and applications in a variety of problem domains. Even in a military or national security context, it is rare that research on foundational technologies is entirely classified. Work on foundational technologies is mostly unclassified, or else classified work and unclassified work on such technologies happen contemporaneously. Lastly, useful applications based on a foundational technology often take a long time to emerge. Even then, one foundational technology may be used in combination with other technologies, both foundational and specialized, to create useful applications.

Each of the three main sections of this chapter addresses the scientific and technological maturity, describes some possible military applications, and discusses some illustrative ELSI questions that may be associated with each of the three technologies selected by the committee for examination (or applications that might be enabled through the technologies). The reader is cautioned that ELSI concerns related to these technologies— information technology, synthetic biology, and neuroscience—are not handled uniformly from section to section, reflecting the fact that different kinds of ethical, legal, and societal issues arise with different foundational technologies and the applications they enable. This chapter and Chapter 3 (on application domains) provide case studies for empirically grounding the framework of ELSI-related questions laid out in Chapter 5.



2.1  INFORMATION TECHNOLOGY

In general, information technology is designed to store, process, manipulate, and communicate information rendered in digital form. Information technology includes computing and communications technology. Both hardware and software fall under the rubric as well. The academic disciplines of computer science and computer engineering provide much of the intellectual underpinning of information technology.

2.1.1  Scientific and Technological Maturity

Information technology as a field is simultaneously mature, in the sense that the underlying technologies of information technology are sufficiently stable and well understood to support useful applications, and also newly emerging, in the sense that innovation and invention in information technology continue apace as they have for several decades.

The fundamental trends underlying advances in information technology hardware have for several decades been characterized by exponential growth in processor power and storage capacity, with doubling times measured in periods ranging from 9 months to 2 years. And there has been a corresponding flowering of applications resulting from the general public's easy access to computing power.
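To give a sense of what those doubling times imply, a short back-of-the-envelope sketch follows; the 10-year horizon is an illustrative assumption, not a figure from the report.

```python
# Growth implied by exponential improvement under different doubling times.
# The 10-year horizon is an illustrative assumption.
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Factor by which capacity multiplies over `years`."""
    return 2 ** (years / doubling_time_years)

for label, doubling in [("9 months", 0.75), ("2 years", 2.0)]:
    factor = growth_factor(10, doubling)
    print(f"doubling every {label}: ~{factor:,.0f}x over 10 years")
# doubling every 9 months: ~10,321x over 10 years
# doubling every 2 years: ~32x over 10 years
```

Even the slow end of that range compounds to a thirty-fold improvement per decade; the fast end compounds to four orders of magnitude.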
The same is true for communications technologies. These technologies support increasingly ubiquitous interconnectivity between computing devices, and it is not an exaggeration to suggest that most computing devices in the world are connected—although perhaps with a significant time lag—to most other computing devices. Such connectivity has led to exponential increases in the numbers of computers (and individuals) that communicate with each other.

With respect to the "packaging" of the fundamental hardware components of information technology, there are three hardware trends of note today.

• Mobile computing and communications. To an ever-increasing degree, users are demanding and vendors are supplying a wide variety of mobile computing and communications platforms, ranging from smart phones and tablet computing devices that are familiar to many consumers to ubiquitous sensor networks that are physically distributed over wide areas. Wireless data services needed to support mobile applications are proliferating as well. One form of mobile computing of particular note is wearable computing, as discussed in Box 2.1.

Box 2.1
Wearable Computing

In contrast to handheld computing devices such as smart phones and personal data assistants, wearable computing devices are generally integrated into a human's clothing or accessories (e.g., watches, glasses, belts). As such, they are less conspicuous as they are carried or used, and onlookers are more likely to miss them in casual observation. Moreover, the placement of these devices means that computing capability and large volumes of information are nearly instantaneously available.

Such devices have both military and civilian applications. Wearable computing and communications are already used to provide tactical information to soldiers in the field, and the canonical person in the street can often make use of instantaneous knowledge about geography (mapping), products on sale, and a wide variety of other consumer applications. Advances in such technology can also be used to provide bidirectional real-time translation between English and other languages.

But the inconspicuousness of wearable computing also raises many privacy issues. For example, one oft-raised privacy concern involves the possibility of instant facial recognition. A camera mounted on a user's glasses and connected to a computer can allow the user to look at another person, capture an image of his or her face, and, using facial recognition software, identify that person—along with any other information associated with that identity. Especially in an environment in which everyone does not have equal access to such capabilities, the potential for information asymmetry is large.

Another wearable computing application is the electronic capture of everything that a person can see or hear. Under most circumstances, video and audio events are fleeting—and people's memories of these events are known to be of questionable reliability under many circumstances. Those participating in such events often count on some degree of transience to make it safer for them to engage in such participation. The availability of potentially permanent records of previously transient phenomena thus has a potential for inhibiting a large range of behavior, most of which is not illegal. Again, the possibility of such an outcome most certainly carries ELSI implications.

• Cloud computing. For reasons of efficiency and economy, cloud computing is becoming increasingly popular among corporate users. Cloud computing provides computing power on demand, and because cloud computing is managed centrally, important IT support functions, such as security and maintenance, are simpler for many enterprises to obtain.

• Embedded computing. A modern automobile today has several dozen central processing units that control the braking, navigation, steering, entertainment, and power-train systems. (Indeed, safe computer-controlled driving has been demonstrated in a number of instances, and driving laws in some states are being updated to allow for this possibility.1) Computing power is also increasingly embedded in myriad devices and artifacts such as refrigerators and watches to make more effective use of the physical resources at hand and to provide desired services.

1 See, for example, Maggie Clark, "States Take the Wheel on Driverless Car," USA Today, June 29, 2013, available at http://www.usatoday.com/story/news/nation/2013/07/29/states-driverless-cars/2595613/.
In the applications space, one of the most significant trends is the emergence of social computing and networking. Broadly speaking, social computing and networking support cooperative relationships for sharing information, and they take advantage of such shared information. In addition, information technology today is such that end users find it easier than ever before to assemble do-it-yourself applications for their own purposes.

Another important trend is the increasing use of a "big data" approach to solving a broad class of computational problems. Data storage capabilities have increased more rapidly than the processing power increases described by Moore's law. And as technology is increasingly used in everyday life, more data can be and are collected. When such data are appropriately represented and structured, obtaining value from large data collections is often possible.

Processing these large data sets has required many additions to traditional computer processing algorithms and engineering paradigms (e.g., as in the paradigm used by programmers whereby they abstract, encapsulate, and re-use encapsulated objects). In particular, computer scientists now apply machine learning and knowledge discovery algorithms to large data sets and continually refine these algorithms based on evaluation of their results, and certain branches of computer science today have a substantial empirical basis.

Roughly speaking, machine learning involves methods that allow computers to make inferences from known relationships and patterns. For example, machine learning can be involved when a computer looks at many pictures of vehicles and identifies which pictures contain tanks. Here, the presumption is that tanks have distinguishing characteristics (e.g., vehicles with a gun sticking out of a turret that is mounted on a tracked chassis).
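Viewed as code, the tank example is a supervised-classification workflow: fit a model on labeled feature vectors, then ask it to label new ones. Below is a minimal sketch, assuming scikit-learn is available and substituting synthetic two-number "image features" (say, turret-likeness and track-likeness scores a vision pipeline might emit) for real pictures; none of this comes from the report.

```python
# Minimal supervised-classification sketch of the "which pictures contain
# tanks" example. The two synthetic features stand in for measurements a
# vision pipeline might extract (e.g., turret and track responses).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
# Label 1 = tank, 0 = other vehicle; tanks score higher on both features.
labels = rng.integers(0, 2, size=n)
features = rng.normal(loc=labels[:, None] * 2.0, scale=1.0, size=(n, 2))

model = LogisticRegression().fit(features, labels)

# Classify two unseen vehicles: one tank-like, one not.
new_vehicles = np.array([[2.1, 1.8], [-0.3, 0.2]])
print(model.predict(new_vehicles))        # e.g., [1 0]
print(model.predict_proba(new_vehicles))  # class probabilities
```

The learned decision boundary plays the role of the "distinguishing characteristics" the text describes; with real imagery, the feature extraction step is where most of the engineering effort lies.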
Knowledge discovery seeks to identify previously unknown relationships hidden in large volumes of heterogeneous data collected from myriad sources (text-based databases, video surveillance cameras, and so on). For example, knowledge discovery can be involved when a computer looks at a large volume of phone call records to identify networks of frequent communicators with geographical locations in Yemen or Somalia.
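At its core, the phone-records example reduces to aggregating pairwise contacts and flagging the heaviest ones. A toy sketch in plain Python follows; the call records and the reporting threshold are invented for illustration.

```python
# Count how often each pair of numbers communicates and surface the pairs
# whose call volume exceeds a threshold. Records and threshold are
# illustrative, not real data.
from collections import Counter

call_records = [
    ("+1-555-0101", "+967-1-555-200"),   # (caller, callee)
    ("+967-1-555-200", "+1-555-0101"),
    ("+1-555-0101", "+967-1-555-200"),
    ("+1-555-0199", "+252-61-555-300"),
]

# frozenset makes the pair direction-insensitive.
pair_counts = Counter(frozenset(pair) for pair in call_records)

THRESHOLD = 2  # flag pairs with at least this many calls in either direction
frequent = {pair: n for pair, n in pair_counts.items() if n >= THRESHOLD}
for pair, n in frequent.items():
    print(sorted(pair), n)  # e.g., ['+1-555-0101', '+967-1-555-200'] 3
```

A real system would treat the flagged pairs as edges in a communication graph and apply network measures (connected components, centrality) at vastly larger scale, joining in location data of the kind the example mentions.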
Machine learning may, of course, be used in knowledge discovery—for example, systems can be "trained" on many examples and then asked to identify new patterns consistent with the examples in those training sets.

The result has been programs that are highly adaptive, even to the point of being able to learn. Direct consumer impact has occurred in everything from search engines, to classic artificial intelligence applications like speech recognition and translation, to modern e-commerce applications like interest-based advertising.

Finally, one of the most important truths about developments in information technology is that despite the origins of modern information technology in military R&D, advances in IT for the last few decades have been driven primarily by the private sector. This is not to deny the role of military R&D for certain very specialized technologies, but increasingly the military (and intelligence) communities seek ways of adapting commercially developed technologies for their own purposes, rather than building those base technologies from scratch. Such adaptations take advantage of an extensive IT R&D infrastructure developed in the civilian sector.

For example, scientists and engineers from the Massachusetts Institute of Technology Artificial Intelligence Laboratory founded a company in 1990 to commercialize their expertise in robotics—the fruits of their work include both bomb disposal robots and robotic vacuum cleaners. And this example is just one of the myriad developments originating in the private sector, including information retrieval and ubiquitous information, three-dimensional modeling, the "internet of things,"2 and natural language and image understanding.

2 The "internet of things" refers to a densely connected array of objects imbued with computing power that share information to work more effectively and efficiently together.

2.1.2  Possible Military Applications

U.S. military forces are highly dependent on information technology in a wide variety of contexts. To take the most basic example of such dependence, much of the IT used by DOD personnel for administrative and management purposes is essentially technology that can be obtained more or less unadorned from commercial vendors. But the DOD also has specialized needs for weaponry, command and control, training, and intelligence analysis.

• Modern military forces use systems and equipment that are controlled by computer for navigation, propulsion, communications, surveillance, fire control, and so on. One of the most significant examples of applying information technology to military problems in the past several decades is the trend toward "smarter" guided munitions. IT is used to guide such a weapon after release directly to its target, thus vastly increasing the probability of a hit. IT is also used to effectuate smart fusing (e.g., optimal timing for when an explosive should detonate), thus increasing the probability that a hit will actually destroy the target. Another advantage is that the use of such weapons instead of "dumb" munitions potentially reduces the collateral damage of certain kinds of military operations by orders of magnitude.

• The movements and actions of military forces are increasingly coordinated through IT-based systems for command, control, communications, and intelligence (C3I) that allow information and common pictures of the battlefield to be shared and through analytical tools that help commanders make better decisions. C3I is an enabler for commanders to place (and thus use) their forces where and when they are needed, multiplying the operational effectiveness of those forces. Smaller forces are thus needed to create the same military effects.

• Training of U.S. military forces at many levels relies heavily on simulation, from training of individual soldiers to large-scale exercises that bring together many units. By definition, a simulation is a computer-generated representation of parts of a real environment. The use of simulation reduces costs of training and limits risks to individuals (e.g., from training accidents) but obviously does not substitute entirely for "live" training. In many cases, training simulations have their roots in gaming applications from the civilian sector.

• Intelligence analysis is based on finding connections in large disparate data sets. For example, machine learning and big data applications may be able to help predict major impending events, such as an assault or a jump in insurgent activity. Analysis of surveillance videos may identify an individual leaving a bomb in a public place or about to conduct a suicide attack. Authoritarian nations may use such technologies to identify dissidents. Adversaries might use predictive data mining to uncover putatively secret information, such as operational deployments of U.S. military units or identities of U.S. undercover operatives. High-quality facial recognition that can operate on degraded or obscured signals or can penetrate attempts at disguise has obvious value, especially in environments in which surveillance cameras are plentiful.3

3 See https://www.fbibiospecs.org/facialrecogforum/_Uploads/Forum%203%20Media%20Articles_1.pdf.

As an illustration of some of the applications that new trends in computing might enhance, a presentation to the committee by Peter Lee from Microsoft Research suggested three important classes of application that have military or security implications and also present ethical, legal, and societal issues.
• Prediction. Large volumes of data can often be used to make predictions about future events (e.g., human behavior, outcomes of processes), the paradigm known as "big data" mentioned above. For example, based on the data routinely collected by electronic medical records systems in hospitals, it is possible to predict quite accurately the likelihood that a discharged patient will be readmitted. Prediction has also been demonstrated in software development (predicting the most likely locations of software defects and schedule delays); in Web browsing (predicting the Web pages that a user is likely to access); and in consumer buying behavior (predicting buying decisions in the near future).

• Extraction of information from degraded sensor data. In many sensing applications, data streams are highly redundant. Such redundancy can be used to compensate for missing or degraded data. For example, a group at the University of Illinois at Urbana-Champaign has applied the technique of compressive sensing to face recognition4 and has achieved a success rate for correct face recognition above 80 percent even operating on a severely degraded signal. (Compressive sensing is a signal-processing technique for reconstructing, in certain contexts, a relatively complete signal from relatively sparse measurements; see the sketch following this list.)

• Behavioral inference. Computers increasingly can infer meaning from data that originate from people, whether such data take the form of physical gestures, words, pictures, and so on. Even today, software can scan e-mail (for example) and make inferences about one's schedule and travel plans. Microsoft's Kinect uses various cameras to look at a human's movements and gestures and specialized software that provides interpretation of those gestures. Kinect has also been used in a number of applications, including the use of gestures to direct the music of a computerized orchestra and enabling a small drone to avoid obstacles in its immediate surroundings.5 Other commercial applications have emerged: helping shoppers find the right size of clothing; assisting drivers in parallel parking; spotting suspicious human behavior in a casino. Intent-inferring technologies may be able to assist in situations relevant to national security as well—recognizing when a person is about to give a package to another person, when someone is pulling out a gun, or what events are being planned from a trail of e-mail.

4 John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma, "Robust Face Recognition via Sparse Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2):210-227, 2009.

5 Rob Walker, "Freaks, Geeks and Microsoft," New York Times, May 31, 2012, available at http://www.nytimes.com/2012/06/03/magazine/how-kinect-spawned-a-commercial-ecosystem.html.
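To make the compressive-sensing idea concrete, the sketch below recovers a sparse signal from far fewer random measurements than the signal has samples, using orthogonal matching pursuit (one standard recovery algorithm; the Wright et al. paper cited above uses a related sparse-representation formulation). The signal length, measurement count, and sparsity level are illustrative assumptions.

```python
# Compressive sensing in miniature: recover a k-sparse signal of length n
# from m << n random linear measurements via orthogonal matching pursuit.
# All sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # sparse truth
A = rng.normal(size=(m, n)) / np.sqrt(m)                      # sensing matrix
y = A @ x                                                     # measurements

support, residual = [], y.copy()
for _ in range(k):
    # Greedily add the column most correlated with the current residual,
    # then refit the signal on the selected support by least squares.
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovered exactly:", np.allclose(x_hat, x, atol=1e-8))
```

With enough measurements relative to the sparsity, recovery is exact with overwhelming probability, which is what lets recognition systems tolerate severely degraded inputs.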
Two of the most prominent application domains involving IT for military purposes—autonomous military systems and cyber weapons—are discussed in Chapter 3.

2.1.3  Ethical, Legal, and Societal Questions and Implications

Information technology alters many traditional concepts and activities by separating out and amplifying the information dimensions of such concepts and activities. IT is often used in situations and problem domains for which there is no accepted law, policy, or ethical stance. Moreover, these situations and problem domains themselves evolve and change at a very rapid rate. To understand what ethical behavior is when IT is involved, traditional principles of ethics are relevant but often not sufficient by themselves, and considerable interpretation and analogical thinking are needed to understand how those principles apply in any given situation.6

6 These ideas are explored in two papers written in 1985 and 1998 by James Moor (a member of the committee): James H. Moor, "What Is Computer Ethics?," pp. 266-75 in Computers and Ethics, Terrell Ward Bynum, ed., Blackwell Publishers, Ltd., 1985, published as the October 1985 issue of Metaphilosophy; and James H. Moor, "Reason, Relativity, and Responsibility in Computer Ethics," Computers and Society 28(1):14-21, March 1998.

In a civilian context, some of the ethical, legal, and societal issues raised with IT concern privacy; intellectual property; accountability; trust; loss of control; and software dependability, including safety and reliability. In a military or national security context, each of these issues can sometimes play out differently than it might in a civilian context.

Privacy

In the United States, many individuals place a significant value on privacy, especially privacy against government intrusion.7 Privacy is often an issue in the context of certain national security applications of IT. When contemplating the use of some IT application against an adversary, it is not so much the privacy rights of adversaries that are at issue (they have few or none), but rather the possibility that a given application may compromise the privacy rights of innocent individuals (that is, ordinary citizens).

7 National Research Council, Engaging Privacy and Information Technology in a Digital Age, James Waldo, Herbert S. Lin, and Lynette I. Millett (eds.), The National Academies Press, Washington, D.C., 2007.
Much of the tension regarding privacy and national security applications of IT focuses on managing the tradeoff between the intended security benefits of an IT application and the unintended "collateral damage" to the privacy of innocent citizens.

To the extent that privacy exists as an enforceable right, privacy rights of individuals have been enforced in the past both by law and by the practical difficulty of finding certain kinds of personal information. However, information technology reduces the practical difficulties of finding information, and much of what might have previously been hard to learn about an individual can in fact be learned by analyzing large amounts of data that reside in a number of different places. Protecting privacy through obscurity is increasingly difficult.

Considering the big data applications described above, one might note that with compressive sensing, the task of automating facial recognition in noisy environments (e.g., where cameras might not be able to obtain unobstructed images) will become easier. Compressive sensing would thus be an important component of a system capable of tracking the public movement of individuals on a large-scale basis. Intent detection potentially turns innocent movements into suspicious events, perhaps unjustly singling out individuals for examination and possible detention. Predictive analysis thus raises privacy concerns, because it requires the collection of data about an individual's behaviors and history to make inferences about that person's intent when he or she does something anomalous. Furthermore, privacy concerns—which themselves may evolve as people become more familiar with new technologies—may be accentuated if or when individuals improperly suffer negative consequences (e.g., arrest, loss of jobs) because putatively private information is revealed.

An extended discussion of privacy impacts of information technology can be found in the 2007 National Research Council report Engaging Privacy and Information Technology in a Digital Age.8

8 National Research Council, Engaging Privacy and Information Technology in a Digital Age, 2007.
Intellectual Property

With modern information technology, the cost of replicating digital property (sometimes also known as digital objects) is essentially zero. Replications of digital property can be perfect, unlike replications of material property. These two aspects of digital property upend many traditional understandings of property, such as ownership, that have been developed primarily for property manifested as tangible objects consisting of arrangements of atoms.

Traditional concepts associated with intellectual property may have to be modified (and in many cases simply recognized as inapplicable) when "property" is manifested as arrangements of binary digits (bits). For example, information "objects" such as data files are much more easily transported than physical objects. Although much more convenient to store and search, information objects are also much easier to misappropriate—and in the civilian world, a wide range of economic and societal interests have a stake in striking the right balance between how to protect and how to provide access to information objects.

Issues of intellectual property protection have also become important for national security in three ways:

• The use of various IT applications to create, manage, and store digitally represented intellectual property of all kinds has proliferated tremendously in the past 50 years—and so have the opportunities for misappropriation of such property. In this context, intellectual property is construed broadly to include product information, software, business plans, proprietary R&D, and economic forecasts—and when competitors are able to misappropriate such information, individual U.S. firms can be placed at a significant disadvantage. In recent years, the scale of the problem has expanded in such a way that the inability to keep such intellectual property secure and confidential is no longer just an issue for individual companies but has also become a national security concern because of how it threatens U.S. economic leadership and primacy.

• The misappropriation of intellectual property specifically related to national security (e.g., weapons blueprints and specifications, military plans, and so on) creates direct risks to national security. Adversaries may learn of vulnerabilities in U.S. weapons or operational procedures, may be able to anticipate U.S. military moves, and so on—all such information in the wrong hands constrains the freedom of action that is otherwise enjoyed by U.S. military forces.

• Adversaries exploring the IT systems and networks controlling critical infrastructure facilities could acquire certain kinds of intellectual property (e.g., facility configurations, communications links between parts of a plant, and so on) that would help them to attack these facilities.
Accountability

Today's computers can process inputs and then take different actions based on the specific inputs received. In common parlance and understanding, such computers are making decisions—choosing between alternative courses of action. Information technology underlies increasing automation of many functions previously delegated to people,9 but today, and even more so in the future, computers will make decisions that have traditionally been made by responsible humans in positions of authority. This phenomenon is not limited to civilian systems, and there are many pressures today toward increasing the role of computer-based decision making in operational military scenarios, especially those that involve highly compressed timelines.

9 For example, the World War II Baltimore-class cruiser (CA-68) displaced 13,600 tons and carried a crew ranging from 1650 to 1950 individuals. By contrast, the planned DDX Zumwalt-class destroyer (DDG-1000) is expected to displace approximately 14,500 tons and carry a crew of 140 individuals.

Notions of accountability and responsibility, as applied to individuals, have focused on the ability of humans to make appropriate decisions under various circumstances. How and to what extent, if any, are such notions applicable to computers? This question is especially complicated in light of three facts: humans program computers (or program computers to program other computers); an IT system is sometimes so complex that no single individual can have a complete understanding of it; and users of such programs often have less understanding of the programs than do their creators. In a military context, such facts call into question traditional notions of command and accountability, and thus the organizational structures built around these notions.

Trust

Many human relationships (e.g., commercial relationships) are built on trust. But trust relationships can be difficult to establish at a distance, and a great deal of information technology is used to enable connections at a distance. Information technologists have developed a wide variety of mechanisms for developing and enhancing trust, which in this context refers in part to assurances that an asserted identity does indeed correspond to an actual identity.

Personal trust that depends on face-to-face interaction cannot be fully accommodated by technologies that connect individuals over long physical distances. Even so, some of the limitations of long-distance interaction can be mitigated by technical improvements, such as increased bandwidth. Larger bandwidth is an enabler for video and audio connections with higher fidelity, making it easier for individuals on both ends of a connection to see and hear subtleties in the expressions of their counterparts.

Reputations and social networks can also facilitate the establishment of trust. For example, John may assert that X is true. I may not know John, but if people I do know and trust vouch for John, I have more reason to accept his assertion.
Studies in mice have found that memory depends on a wide variety of receptors, enzymes, and proteins.40

40 For example, Ramirez et al. have demonstrated the insertion of false memories into mice. See Steve Ramirez et al., "Creating a False Memory in the Hippocampus," Science 341(6144):387-391, 2013, available at http://www.sciencemag.org/content/341/6144/387.

2.3.2  Possible Military Applications

Possible applications of neuroscience can be divided roughly into two classes—those that help humans to recover normal functionality and those that help humans change normal functionality.

In the first category (recovery of normal functionality), humans sometimes lose neurological functionality through accident or birth defects. For example, boxers and football players are known to suffer neurological damage in playing their sports, as do people who are victims of car accidents. In a military context, traumatic brain injuries (incurred, e.g., as a result of soldiers being exposed to explosions) have been described as the "signature injury" of the wars in Iraq and Afghanistan,41 and advances in neuroscience may be able to help wounded soldiers recover from such injuries.

41 See http://www.defense.gov/home/features/2012/0312_tbi/.

In the second category (changing normal functionality), neuroscience could be used to enhance or to diminish normal functionality. For example, through neuroscience-based applications, individuals might be able to operate equipment through a direct brain-machine interface rather than manipulating a joystick or typing commands on a keyboard. Workers in high-stress occupations, such as air traffic control, might be able to process larger amounts of information more quickly. Individuals with needs for the selective enhancement or inhibition of learning and memory might meet those needs with the administration of designer drugs based on neuroscience research. Antisocial tendencies of certain criminals, such as sexual offenders, could be diminished. Psychological traumas might be reduced for victims of abuse, torture, or other horrific events.

Enhancements of the types described in the previous paragraph have obvious military applications for soldiers operating weapons or commanders coordinating battles. Much more controversial from an ELSI standpoint are other proposals suggesting that false human memories can be created and different emotional states induced (e.g., reduced or increased fear, feelings of anger or calm) and that degrading the performance of adversaries in military contexts may be possible—applications that are generally not associated with civilian use.
Cognitive Enhancement

Enhancement may be defined as performance that exceeds a physiological or statistical norm in healthy persons. For example, transcranial magnetic stimulation (TMS) may suppress the effects of sleep deprivation and enable individuals to perform above their baseline capability at specialized tasks, both of which would have obvious advantages for warfighters. Repetitive TMS (rTMS) might also serve to improve learning and working memory, for example, increasing the ability of an operative to speak a native dialect or to recall complicated instructions. Some believe that near-infrared spectroscopy could detect deficiencies in a warfighter's neurological processes and feed that information into a device utilizing in-helmet or in-vehicle TMS to suppress or enhance individual brain functions, such as mood and social cognition. A 2009 National Research Council report titled Opportunities in Neuroscience for Future Army Applications recommended that the Army increase its investment in TMS research.42 That committee estimated the development timeframe for using TMS to enhance attention at 5 to 10 years, and for in-vehicle deployment at 10 to 20 years.

42 National Research Council, Opportunities in Neuroscience for Future Army Applications, The National Academies Press, Washington, D.C., 2009.

A different kind of cognitive enhancement comes in the form of mitigating the effects of sleep deprivation, which is the source of so much error in civilian as well as in military life. Historically, fatigue has been mitigated through such measures as cocaine, nicotine, and caffeine. More recently, amphetamines ("speed") have acquired popularity and have again been used by both students and warfighters, especially, it seems, by air force pilots in the form of "go pills." Modern pharmaceutical technologies may be entering new and somewhat more efficacious territory, with evidence that modafinil (originally approved for the treatment of narcolepsy) may reduce fatigue-related cognitive decline, or even outperform methylphenidate (Ritalin) in healthy persons. Short-term memory enhancement may also be achieved through nasally delivered orexin-A, as shown by a DARPA-sponsored study of sleep-deprived monkeys.43

43 S.A. Deadwyler et al., "Systemic and Nasal Delivery of Orexin-A (Hypocretin-1) Reduces the Effects of Sleep Deprivation on Cognitive Performance in Nonhuman Primates," Journal of Neuroscience 27(52):14239-14247, 2007.
Neurological processes may be modified without the open-skull experiments incident to neurosurgery that have been so important in the history of neuroscience. For example, TMS uses electromagnetic induction to penetrate the skull and modulate the electrical activity of the cerebral cortex. Another method, transcranial direct current stimulation (tDCS), may be safer than TMS but is used less often. To perform TMS, a technician holds an iron-core insulated coil on one side of a patient's head while a large, brief current is passed through the coil. The current generates a magnetic pulse that painlessly penetrates the layers of skin, muscle, and bone covering the brain and induces weak, localized electrical currents in the cerebral cortex. It is believed that the induced electrical field triggers the flow of ions across neuronal membranes and causes the cells to discharge, resulting in a chain reaction of neuronal interactions. TMS offers hope for individuals suffering from major depression, Parkinson's disease, and treatment-resistant migraine headaches, and it is under investigation for the treatment of post-traumatic stress disorder. TMS has also helped to map brain circuitry and connectivity.

Brain-Computer Interfaces

Neuroscience technologies are often "dual use," having both military/counterintelligence and medical/scientific applications. Examples of brain-computer interfaces are prosthetic limbs and communication devices; thus they may benefit both patients and warfighters or other security personnel. These two examples are also convergent technologies: during the past two decades, laboratory experiments have shown that simple movements of both rodents and nonhuman primates may be controlled and that primates can be trained to manipulate robotic arms through neural activity alone.44 The same principle of remote control of a robotic prosthesis has been applied to human patients suffering from tetraplegia, by means of an implanted intracortical electrode array.

44 L.M. Dauffenbach, "Simulation of the Primate Motor Cortex and Free Arm Movements in Three-Dimensional Space: A Robot Arm System Controlled by an Artificial Neural Network," Biomedical Sciences Instrumentation 35:360-365, 1999.

Technological refinements suggest that, for some purposes at least, brain-computer interfaces need not be invasive. In the past, electroencephalogram-sensitive caps, which help control artificial joints during rehabilitation, were expensive and also required the application of a gel. Recent designs for such caps dispense with the gel and are far less expensive. They are now being produced for commercial application to computer gaming, with the potential for control over environmental conditions like room lighting, door locks, and window shades. DARPA has been interested in using new and noninvasive ways to gather neurological information to help adapt a pilot's brain to inputs from a cockpit array, reducing "noise" and distraction for the operator depending on what information is required for specific circumstances. Similarly, the Cognitive Threat Warning System seeks to convert unconscious human neurological responses into usable information, as in a pair of binoculars that cue the viewer to certain portions of the visual field.45 In time, a true feedback loop that also helps adjust the computer to the human user may be practical.

45 See http://www.wired.com/gadgets/miscellaneous/news/2007/05/binoculars.
Interventions intended as therapy may in some cases enhance normal function. Brain-computer interfaces that control advanced prostheses that render the user faster or stronger would be one example, although perhaps an exoskeleton would be a nearer-term example of the same phenomenon. Dual-use considerations apply to this technology, just as they would for drugs intended to enhance cognitive performance (such as methylphenidate—marketed as Ritalin—which is often believed to help academic performance).

Deception Detection and Interrogation

Traditional measures of deception have relied on neurological correlates of stress like blood pressure and heart and breathing rates, but these are at best physiological proxies of intentional deception. One system known as the "brain fingerprinter" uses an EEG measure to detect an event-related potential called the P300 wave, which is associated with the recognition of a stimulus, such as a photograph of a certain location of interest. Services based on functional magnetic resonance imaging are being offered by companies such as No Lie MRI and CEPHOS, which market their products to governmental and nongovernmental organizations.
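Event-related potentials such as the P300 are conventionally extracted by averaging many stimulus-locked EEG epochs, so that random background activity cancels while the stimulus-evoked deflection near 300 ms remains. The sketch below illustrates that averaging logic on synthetic data; the sampling rate, epoch count, and waveform shape are invented for illustration and do not describe any commercial system.

```python
# Event-related potential extraction by epoch averaging, on synthetic EEG.
# Background noise cancels across epochs; the P300-like bump near 300 ms
# survives. All signal parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # samples per second (assumed)
t = np.arange(0, 0.8, 1 / fs)             # 0-800 ms after each stimulus
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # 5 uV bump at 300 ms

def record_epochs(n_epochs: int, recognized: bool) -> np.ndarray:
    """Simulate stimulus-locked epochs, with a P300 only if recognized."""
    noise = rng.normal(0, 20e-6, size=(n_epochs, t.size))   # 20 uV noise
    return noise + (p300 if recognized else 0.0)

for recognized in (True, False):
    erp = record_epochs(400, recognized).mean(axis=0)       # average epochs
    window = (t > 0.25) & (t < 0.35)
    print(recognized, f"mean 250-350 ms amplitude: {erp[window].mean():.2e} V")
```

The same averaging that makes the component measurable also hints at the method's limits: it requires many trials from a cooperative subject, a point the report returns to below.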
A 2008 NRC report entitled Emerging Cognitive Neuroscience and Related Technologies stated that "traditional measures of deception detection technology have proven to be insufficiently accurate," recommending that research be pursued "on multimodal methodological approaches for detecting and measuring neurophysiological indicators of psychological states and intentions. . . ."46 The report cautioned, however, that neurological measurements do not directly reveal psychological states, and so there is a distinct risk of over-interpretation of results, leading to both false negatives and false positives.

46 National Research Council, Emerging Cognitive Neuroscience and Related Technologies, The National Academies Press, Washington, D.C., 2008.

Another possible approach to deception detection involves the brain hormone oxytocin, which has been shown to be associated with a wide variety of social impulses. In the laboratory, subjects exposed to oxytocin via the nasal route have behaved in a more trusting and generous manner. The National Research Council's 2008 report on emerging neuroscience identified oxytocin as a "neuropeptide of interest."47 However, the notion that oxytocin could be useful in interrogation requires extrapolating from laboratory experiments conducted under highly specified conditions with subjects whose background and motivation differed from those of likely interrogation targets.

47 National Research Council, Emerging Cognitive Neuroscience and Related Technologies, 2008.

Performance Degradation

In addition to the potential for advances in neuroscience to enhance the performance of one's own forces, these developments also offer possible opportunities to inhibit or reduce the performance of adversaries. At present, the primary focus for such efforts to support military missions and law enforcement goals—as well as applications in areas such as counterterrorism or counterinsurgency, where the lines between the two domains are often blurred—is on so-called incapacitating chemical agents (ICAs). The ethical and societal issues associated with ICAs are discussed in Chapter 3; this section briefly introduces the relevant scientific and technological developments.

A number of recent reviews have addressed S&T potentially relevant to ICAs.48 As an example of these technical reviews, a 2012 Royal Society report, part of a larger Brain Waves project on the implications of developments in neuroscience for society and public policy,49 identifies two particularly prominent areas of relevant research.50 These are neuropharmacology, which studies the effects of drugs on the nervous system and the brain, and advances in drug delivery methods.

48 International Committee of the Red Cross, "Incapacitating Chemical Agents: Implications for International Law," Expert meeting, Montreux, Switzerland, March 24-26, 2010, available at http://www.icrc.org/eng/resources/documents/publication/p4051.htm; Stefan Mogl, ed., Technical Workshop on Incapacitating Chemical Agents, Spiez Laboratory, Federal Department of Defence, Civil Protection and Sports, DDPS, Federal Office for Civil Protection, Spiez, Switzerland, September 8-9, 2011, available at http://www.labor-spiez.ch/de/dok/hi/pdf/web_e_ICA_Konferenzbericht.pdf; Scientific Advisory Board, Organization for the Prohibition of Chemical Weapons, "Report of the Scientific Advisory Board on Developments in Science and Technology for the Third Special Session of the Conference of States Parties to Review the Operation of the Chemical Weapons Convention," RC-3/DG.1, 2012, available at http://www.opcw.org/documents-reports/conference-states-parties/third-review-conference/; Royal Society, "Brain Waves Module 3: Neuroscience, Conflict, and Security," Royal Society, London, 2012.

49 Information about the Brain Waves project is available at http://royalsociety.org/policy/projects/brain-waves/.

50 Royal Society, "Brain Waves Module 3: Neuroscience, Conflict, and Security," 2012.
A number of pharmaceutical agents, which are primarily chemicals, have at least the theoretical potential to provide the basis for ICAs. Current research on ICAs tends to focus on agents that offer a combination of rapid-action and short-duration effects and thus on those that "reduce alertness and, as the dose increases, produce sedation, sleep, anaesthesia, and death." Some of the classes of pharmaceutical agents under consideration are opioids, benzodiazepines, alpha2 adrenoreceptor agonists, and neuroleptic anaesthetics.51

51 Morphine is the primary example of an opioid, but the search for novel agents with fewer side effects continues. Fentanyl, the agent reportedly used as part of the aerosol compound piped into the ventilation system to break the Moscow theater siege in October 2002, is an opioid. Benzodiazepines are used to treat anxiety and also as part of general anesthesia. Alpha2 adrenoreceptor agonists, which reduce alertness and wakefulness and can also increase the effects of local and general anesthesia, have been the subject of U.S. Army research as a potential ICA. Neuroleptic anesthetics are able to induce unconsciousness without significant effects on reflexes or muscle tone.

In addition to these chemical agents, bioregulators—biochemical compounds that occur naturally and control vital functions such as temperature, heart rate, and blood pressure—have also been the subject of military research. Advances in the synthesis of bioregulatory peptides appear to offer the promise of overcoming some of the problems that have so far limited therapeutic applications and could potentially enable national security applications as well.

Advances in medical research are also yielding more effective means of delivering drugs into the central nervous system, including across the blood-brain barrier. With regard to ICAs, advances in aerosol delivery are of particular interest because inhalation seems the most plausible dissemination mode for military and law enforcement purposes. At the same time, nanotechnology is offering significant potential to provide more effective, targeted delivery to the brain. To date, however, with some exceptions for veterinary applications, the two streams of research have focused on delivering doses to individuals.52

52 National Research Council, Life Sciences and Related Fields: Trends Relevant to the Biological Weapons Convention, The National Academies Press, Washington, D.C., 2011.
A number of recent technical reviews have concluded that, in spite of the advances in several fields, the current state of S&T does not provide the basis for safe delivery of ICAs for law enforcement purposes, given all the challenges of delivering nonlethal doses in a variety of settings to groups that would vary by characteristics such as age, health status, and individual sensitivity to the chosen agent(s).53 In its report on S&T developments in advance of the third review conference of the Chemical Weapons Convention, the Scientific Advisory Board (SAB) of the Organization for the Prohibition of Chemical Weapons commented that "in the view of the SAB, the technical discussion on the potential use of toxic chemicals for law enforcement purposes has been exhaustive."54 The associated ethical and societal issues related to military and law enforcement applications are taken up in Chapter 3.

53 International Committee of the Red Cross, "Incapacitating Chemical Agents: Implications for International Law," Expert meeting, Montreux, Switzerland, March 24-26, 2010, available at http://www.icrc.org/eng/resources/documents/publication/p4051.htm; Royal Society, "Brain Waves Module 3: Neuroscience, Conflict, and Security," Royal Society, London, 2012; Michael S. Franklin et al., "Disentangling Decoupling: Comment on Smallwood (2013)," Psychological Bulletin 139(3):536-541, 2013.

54 Scientific Advisory Board, Organization for the Prohibition of Chemical Weapons, "Report of the Scientific Advisory Board on Developments in Science and Technology for the Third Special Session of the Conference of States Parties to Review the Operation of the Chemical Weapons Convention," RC-3/DG.1, 2012, p. 21, available at http://www.opcw.org/documents-reports/conference-states-parties/third-review-conference/.

2.3.3  Ethical, Legal, and Societal Questions and Implications

Informed and Voluntary Consent to Use

The widely accepted moral principle of autonomy prohibits nonvoluntary neurotechnological interventions without informed consent or its moral equivalent. Nonetheless, it is clear that some feel impelled to accept such interventions regardless of the low likelihood that their personal goals would be realized. For example, there is little evidence that drug therapies for conditions like ADHD improve academic performance, although the off-label use of medications like Ritalin by college students surely has much to do with the notion that their performance might be improved.

The very term "human enhancement" could beg the question of the actual net benefits of claimed "enhancements." Their social implications need to be examined on a case-by-case basis. Exaggerated claims about cognitive enhancement, or even accurate statements about short-term benefits, could lead to an increase in addictions due to competitive pressures. Differences in socioeconomic status related to contingent advantages like opportunities for acquiring new skills could be exacerbated by unequal access to enhancing technologies.

In the military, both competitive and coercive pressures are uniquely pronounced. In general, persons in uniform are required to accept interventions that commanders believe will maintain their fitness for duty or enable them to return to duty. In some circumstances, warfighters might even be required to accept medical interventions otherwise regarded as "experimental," or at least not validated for a particular purpose, if there is a sound basis for believing that they could be of benefit if forces are threatened. A real-world example is described in Box 2.2.
As useful military technologies proliferate, including those that in some sense enhance normal cognitive functions, veterans may face the prospect of adjusting to civilian life without those advantages. The tragic experience of many returning veterans, especially those who have faced the stresses of combat, demonstrates that this adjustment is already difficult enough.

A separate but important issue concerns the proliferation of these technologies—in civilian life and in the likely access that unfriendly persons, groups, organizations, and nations will gain to them. Is the potential for gain in U.S. military capabilities sufficient to overcome these potential negative effects? Or is it likely that civilian access to these technologies will precede their presence in military contexts?

Privacy

Longstanding, ill-defined but persistent worries and rumors about "brain-washing" and "mind control" will surely be reinforced by advances in neuroimaging, which is an excellent example of a technology that has both military and civilian applications. But do they raise valid privacy concerns? Besides issues of harm resulting from false positives and negatives, the extent to which brain imaging raises issues of privacy depends of course on the ultimate accuracy of the technology in revealing psychological states—and how such accuracy is perceived by users of the technology. Exaggerated notions of technological capacity can also have adverse social consequences, such as the premature admission of imaging data into courts of law. Constitutional barriers may also be insurmountable if these data are found to violate guarantees against self-incrimination or unacceptable forms of search and seizure.

Privacy challenges are emerging in many fields, including genetics and information technology, and brain imaging may or may not create unique ethical or policy issues. Even relatively simple technologies currently claimed to improve on traditional "lie detector" results have limited accuracy, require a cooperative subject, and may not be more efficient (or more cost-effective) than a simple interview with a skilled interrogator.

ELSI concerns in this category appear to relate to both civilian and military applications of neuroscience. However, in a military or national security context, it is easy to imagine that such applications raise particular concerns when they are applied to innocent bystanders—as they would inevitably be in any kind of counterintelligence investigation.

Safety

The safety of neuroscience-based interventions, whether drugs or devices, is of course a threshold concern. For example, external neuromodulatory systems like tDCS and TMS are generally considered to present a low risk, but safety studies have generally been performed on healthy, normal subjects rather than persons with neurological or major psychiatric illnesses. There is a potential for seizures, although less than with conventional electroconvulsive therapy (ECT). However, the longer-term risks of repeated use of external neuromodulation are not known. The larger the populations exposed, the greater the likelihood of untoward results. ELSI concerns in this category appear to relate to both civilian and military applications of neuroscience equally.
Box 2.2
Military Use in Combat of Drugs Not Approved by the Food and Drug Administration

The 1991 Gulf war raised a number of ethical and policy questions regarding the use of investigational new drugs (INDs)—drugs that have not yet received Food and Drug Administration (FDA) approval for use in particular applications but that are currently being investigated for such use—to defend troops against the possibility that they might be attacked by chemical and biological warfare agents. As a matter of policy, the Department of Defense (DOD) has complied with all FDA requirements concerning the development and use of new drugs, including the requirement to obtain informed consent before administering INDs to research subjects.

At the time of the Gulf war, two INDs were promising candidates for drugs to defend against certain chemical weapon/biological weapon agents. To comply with FDA regulations, the DOD would have had to obtain informed consent for the use of these drugs from every service member deployed to the Persian Gulf. Allowing deployed troops to refuse drugs intended for their own protection could, however, have jeopardized the combat mission. Accordingly, the DOD requested that the FDA both establish authority to waive informed consent requirements and grant waivers for administration of those particular drugs. The FDA agreed that obtaining informed consent might not be feasible "in certain combat-related situations" and that withholding potentially life-saving INDs in such situations would be "contrary to the best interests of military personnel involved," and subsequently granted the DOD the waivers it sought.1

This decision led to controversy, much of it focused on the difference between research (in which case informed consent must be obtained for administering a drug to research subjects) and treatment (in which case no such requirement obtains in a military context). Those opposed to the waivers argued that the use of any IND was, by definition, "research" because the consequences, risks, and benefits of use were unknown, and thus informed consent was required under all circumstances. Those opposing the waivers pointed to a long line of ethical guidelines, such as the guidelines in the Belmont report,2 that make no exception for waiving informed consent for research conducted under wartime conditions. They further argued that the mere intent to use an IND to provide medical benefit could not transform an experimental investigation into therapy—otherwise, researchers could simply change their stated intentions and redefine an experimental intervention as treatment, thereby evading informed consent requirements.

Proponents of the waivers argued that the DOD had an ethical responsibility to protect its service members to the greatest extent possible. During the Gulf war, the best protection the DOD could offer its personnel included use of the INDs in question. Proponents further argued that despite their status as "investigational," the drugs were neither remarkably novel nor experimental in a scientific or medical sense because they had already been subjected to "extensive research"; one drug had also been approved for uses that were similar to those that the DOD proposed. Moreover, prior ethical guidelines had been written with human experimentation in mind, in which the outcome of the research was in doubt and could result in serious harm to the subject. But the guidelines had not anticipated the ethical issues surrounding the use of drugs that would provide the only available means of avoiding death or serious disability under combat situations. Finally, the proponents noted that, under the doctrine of military command authority, the DOD could justifiably have chosen to act on its own, without FDA approval, but sought waivers to avoid even the appearance of impropriety.

1 Food and Drug Administration, "Informed Consent for Human Drugs and Biologics; Determination That Informed Consent Is Not Feasible; Interim Rule and Opportunity for Public Comment," 21 CFR Part 50, Federal Register 55(246):52814-52817, December 21, 1990, available at http://archive.hhs.gov/ohrp/documents/19901221.pdf.

2 The Belmont report can be found at http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.

SOURCE: Adapted in large part from RAND, Waiving Informed Consent: Military Use of Non-FDA-Approved Drugs in Combat, 2000, available at http://www.rand.org/pubs/research_briefs/RB7534/index1.html.
Responsibility and Loss of Control

Some of the most challenging societal questions relate to the possibility that techniques or drugs derived from neuroscience may be used to alter trust and moral judgment. For example, as noted above, administration of oxytocin to humans has the effect of increasing trust toward individuals shown to be untrustworthy.55 A TMS disruption of the right temporo-parietal junction of individuals was shown to increase the likelihood that those individuals would forgive an unsuccessful murder attempt, as compared with a control group,56 raising the possibility that such disruptions affect moral judgments. In the absence of such manipulations of trust and moral judgment, individuals are often held accountable for behaving appropriately. What remains of the notion of individual responsibility when individuals are subject to such manipulations?

55 Thomas Baumgartner et al., "Oxytocin Shapes the Neural Circuitry of Trust and Trust Adaptation in Humans," Neuron 58:639-650, 2008.

56 Liane Young et al., "Disruption of the Right Temporoparietal Junction with Transcranial Magnetic Stimulation Reduces the Role of Beliefs in Moral Judgments," Proceedings of the National Academy of Sciences 107(15):6753-6758, 2010, available at http://www.pnas.org/content/early/2010/03/11/0914826107.full.pdf+html. (One of the investigators in this study, Marc Hauser, was found to have committed scientific misconduct in the falsification of data associated with a number of other experiments, leading to a number of retractions of published papers involving such data. However, there is no indication that the paper cited in this footnote has been similarly discredited. See http://www.boston.com/whitecoatnotes/2012/09/05/harvard-professor-who-resigned-fabricated-manipulated-data-says/UvCmT8yCcmydpDoEkIRhGP/story.html.)

In a military context, one might imagine the use of such techniques to reduce the qualms and inhibitions of soldiers about morally suspect or questionable activities. How and under what circumstances might neurally manipulated soldiers be accountable for activities that violate the laws of war?
Impact of Classification

As with synthetic biology, issues arise regarding coordination of neuroscience research in a classified environment and how to establish effective oversight in these environments. Staying abreast of developments and the associated benefits and risks can also be difficult because the research, by definition, is shielded from public view. As one example, the draft agenda for a conference titled "Evolving Neuro-Cyber Technologies and Applications and the Threats Within," held at Fort McNair in Washington, D.C., on March 14, 2012, included a panel to discuss the question of the ethics of such technologies and applications, and the session was classified top secret.