Neurotechnology and Brain-Computer Interfaces

ETHICAL AND SOCIAL IMPLICATIONS

PAUL ROOT WOLPE

Center for Bioethics

University of Pennsylvania

Emerging Technologies and Ethical Issues in Engineering: Papers from a Workshop, October 14–15, 2003. Washington, D.C.: The National Academies. Copyright © National Academy of Sciences.

I want to thank my friend George Khushf, who without knowing it, perfectly set up what I want to talk to you about. All of the things he mentioned at the end of his talk—a retinal prosthesis, Miguel Nicolelis’ monkey and robotic arm, and “roborats”—are things I’m going to discuss.

Neuroethics is a brand-new field. The modern use of the term was coined, of all people, by William Safire of The New York Times, who is on the board of the Dana Foundation and is very interested in issues of the brain. Neuroethics is a field that looks at emerging technologies and their relation to the brain. In Europe, the term has been used to refer to the clinical care of people with strokes and other neuropathologies. In the United States, it has come to mean something different. Here is the technical definition, which I wrote for the Encyclopedia of Bioethics: “The field of neuroethics involves the analysis of, and remedial recommendations for, the ethical challenges posed by chemical, organic, and electro-mechanical interventions in the brain” (Wolpe, 2003).

Neuroethics includes, for example, the proper use of psychopharmacology, which is of course a long-standing issue. Human beings have been trying to enhance the brain with chemicals ever since they discovered fermentation, perhaps even before that. And we are still doing it. In fact, we have gotten a lot better at it, a lot more specific about it. We have created effective, highly specific drugs that can alter moods and cognitive states in very selective ways. We have become a psychopharmacological culture. As soon as new psychotropic drugs or other designer drugs come out, we use them, and even as we are using them, we wring our hands over whether we should. But our concerns don’t seem to keep us from buying them, whether the drug is Prozac, Ritalin, Viagra, or another drug, or even nonpharmaceutical enhancements. We are long past using these drugs solely for identified pathologies; we use them now to micromanage our moods and cognitive states.

I am a sociologist by training, and an underlying given of sociology is that all of science occurs within a cultural context. That is, scientists and engineers ask questions that they then try to solve. Where do the questions come from? People in different cultures ask different questions, and people in different historical periods ask different questions. In fact, very often in the history of science, theories disappear not because they have been disproved, but because society is no longer interested in them. The questions we ask of science change as societies evolve. But it is crucial that we understand that the questions themselves have embedded values and ethical components.

A perfect example is the ecological movement. Solving the problems of ecology through science was considered a silly idea by a lot of people 30 or 40 years ago. People then just didn’t think in those terms. Now, of course, ecological concerns are ubiquitous, and the idea that science can provide solutions to those questions is very much in everyone’s consciousness.

So, the questions we ask of ourselves, and not just the answers we give, have profound ethical implications. What problems are we really trying to solve? And why have we chosen these problems and not others? Why, for example, do we put such a premium on enhancing cognitive function? Why do we think of ourselves as mechanisms that can be improved? These questions can be traced to the history of our particular time and place. The questions are different in different countries—as George said, Europeans think about these issues differently from the way we think about them.
Compare our attitude toward genetically modified foods, for example, with the Europeans’ attitude.

Another example is how we describe the human body. We think in terms of genetics and metaphors of information technology. Bill Wulf was talking earlier about how intervening in a single bit of a complex software code can cause problems; well, my Microsoft Office crashes if there is an error. A single-bit alteration in the genetic code, however, can cause cancer or some other genetic disease. When we talk about small interventions in the 3 billion bits that make up the blueprint of my cells, I’m very concerned about a single-bit mistake. At the National Aeronautics and Space Administration (NASA), where I am chief of bioethics, we are debating the issue of radiation exposure. What is an appropriate amount of time to allow a mutagenic force to impact the bodies of our astronauts? How much alteration in an astronaut’s genetic code is allowable? How many “bugs” are acceptable in the human software?

Note the convergence of metaphors. We can talk about computer code and the human body seamlessly because the questions and language of this moment in time are the same whether we are talking about biology or computers. When we talk about psychotropic drugs, we use similar metaphors—the brain as computer, a neuron as a single switch, the brain as wetware containing software. But fundamental to neuroethics, and the engineering of the wetware between our ears, is the question of what it means when we begin to intervene in that software, in the functioning of our brains. Intervening in the functioning of a computer is very different from intervening in the functioning of our genes or our neurons.

We also think about it differently because we are cerebrocentric. If we took my brain and put it in George Khushf’s body, aside from my becoming slightly more handsome, you would still think it was me, not George. George’s body would become the receptacle for me, because we believe our personalities and everything important about us, or at least most of what’s important about us, resides in our brains. That, however, is a culturally and historically specific claim. The site of personhood to the Japanese is in the gut, which is one reason the Japanese have been resistant to the idea of brain death and transplants. Their resistance is not a Luddite resistance to technology—the Japanese love technology. The brain-death criteria violate their cultural model of where personhood resides.

And so, when we talk about psychotropic drugs, when we “listen to Prozac,” as Peter Kramer says, when someone says “the real me was evoked by taking a psychotropic drug, and I was never me until I took Prozac,” we have to ask what “me” means in that sense. What is the nature of our sense of identity? What is mind if we can alter it in profound ways? These are profound cultural questions.

When we use drugs to alter mental processes in children, the questions become even more profound. Ritalin prescription patterns, for example, are bimodally distributed: Ritalin use is very high in wealthy suburban schools and in poor inner city schools.
In wealthy suburban schools, the main drivers behind Ritalin use are parents who want their kids to have the extra edge a good amphetamine can give them. In inner city schools, the people who drive Ritalin use are school managers who use it as a tool to manage problem kids. And so, it’s not just the existence of the drug, but also how we use it, and under what circumstances, that can have profound implications.

We also give kids antidepressants. A lot of the pediatric literature says this is a great thing, that depressed kids have been undertreated. Everyone agrees that there is depression in children. But many of these drugs have never been tested on children, because it is expensive and difficult to do clinical trials on children and because drug companies know that, even if they don’t test them on kids, doctors will prescribe them because they have no choice. So the drug companies get the income without making the investment, and we put ourselves into a Catch-22 situation: we don’t like to test things on children, so we give them without ever testing them.

And finally, one of the most profound questions in pediatrics in the future may be about prophylactic treatment. Once we become skilled at understanding brain imaging and the morphological features of the brain, we will be able to predict psychiatric susceptibility. We will be able to identify prodromal states in certain diseases. We may be able to image a child and say, “This brain looks like the kind of brain we see in schizophrenics,” or, “It looks like susceptibility is very high because of morphological or functional features that we see through PET or functional MRI.” Should we treat the child prophylactically? Will we have scores of children on drugs who show no symptoms whatsoever but who seem to have pre-pathological brains? That is going to be an important question in the future, one that pediatrics has barely begun to address.

The issue on the horizon is the use of psychotropic drugs as lifestyle drugs, which will force us to confront questions about the nature of personality, selfhood, and human enhancement. Very soon, we are all going to be micromanaging our moods. We are going to replace our current liquid caffeine delivery systems with wake-up pills to get us up in the morning and get us dressed and ready to greet the day. Right before we get to work, we will take a get-ready-for-work pill that focuses our attention. Right before lunch, we’ll take a pill that mellows us out for an hour, and also probably a pill to prevent our bodies from absorbing the fat and carbohydrates we’re about to eat at lunch. After lunch, we’ll take a pill that spares us the post-lunch depression we’ve all experienced (that’s why I prefer to speak before lunch rather than right after lunch at these meetings). When we get home, we’ll take “Sublime,” a pill that puts us in the mood to see our families again.

This is already happening. Here is an advertisement from menshealthworld.com: “Consult with your doctor.” For what? The ad tells you below: “Celebrex, Propecia, Viagra, and Xenical,” a weight-loss drug. Pretty soon physicians will become lifestyle pharmacists. Actually, physicians will become irrelevant for that purpose, because you can already get most of these drugs on the Web.
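The prophylactic-treatment question raised above is, underneath, a base-rate problem: even a fairly accurate imaging screen, applied to all children, would flag mostly healthy brains. Here is a minimal sketch with invented numbers; the prevalence, sensitivity, and false-positive figures below are illustrative assumptions, not data from any study:

```python
# All numbers here are hypothetical, for illustration only.
prevalence = 0.01            # assume 1% of screened children will develop the condition
sensitivity = 0.90           # assume the scan flags 90% of true future cases
false_positive_rate = 0.10   # assume it also flags 10% of healthy children

# Bayes' rule: probability that a flagged child is a true future case.
p_flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = (sensitivity * prevalence) / p_flagged

print(f"Share of flagged children who are true cases: {ppv:.1%}")
# Under these assumptions only about 1 in 12 flagged children is a true
# case; the rest are asymptomatic children at risk of being medicated.
```

With these assumed figures the positive predictive value is only about 8 percent, which is why “scores of children on drugs who show no symptoms” is the realistic outcome of population screening at low prevalence.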
A recent study of websites showed that you can get Viagra everywhere, simply by answering a few questions. We all get the spam, right? But you should all go to one of those websites and place an order—you don’t have to put in your Visa card number, but go as far as you can until you chicken out of the process. You will find that there is a clinical input form you must fill out that is “checked by a physician” to make sure you qualify for Viagra. There are four questions. If you get them wrong, you can try again until you get them right.

Then there is brain imaging, which is bringing up a whole series of new, interesting questions. It turns out that the phrenologists were right, at least to a degree. Some morphological features of the brain actually do correlate with personality traits. For example, some studies have shown that you can predict some things about people, like whether they are socially withdrawn or socially active, by the size of the cingulate gyrus. Just by size, it correlates well. And there are features of the brain that can be identified by their character. For example, a history of major depression or significant drug use (cocaine or other drugs) can leave morphological signs on the brain that can be seen on CAT scans, although not yet in individuals. If you look at a group of people who are now in total remission, or who don’t use cocaine but were once cocaine addicts, and you compare them to “normal” people, you can see statistical differences in their brain structures. At this point, though, we can’t say that a single individual was a cocaine addict, except at the extremes.

But imagine what would happen if we started using brain imaging routinely in the public sector—I’ll explain why we would in a moment. Imagine what that would mean for privacy. All kinds of traits might be revealed that we might not want known. Drug abuse is not the only thing that can be seen. For example, the best way to tell if someone is a drug addict is to expose them to the drug—a picture of it or the smell of it. You can see excitation in the brain, even if they no longer use the drug. You can also do that with sex offenders, or with people who aren’t sex offenders but who have a sexual proclivity. If you want to know whether someone has a particular fetish, expose them to the fetish and look at their functional MRI. We are talking about enormous possibilities for invasion of privacy here. We are talking about the ability to use technologies for social screening.

Believe me, NASA would like nothing better than to put astronauts into brain scanners and say, “Looks like we have a pilot here—great visual cortex, good spatial sense.” That is a pipe dream, of course, but NASA is looking for any piece of information that might improve their chances. Aptitude tests might soon be replaced by brain scans.

Here’s another example. We are very bad at detecting lying. However, Daniel Langleben at the University of Pennsylvania recently did a study in which he put people into an fMRI scanner, gave them a card, and told them to lie at some point about which card they had. Through brain imaging, he found he could actually detect the difference, in grouped data, between lies and truthfulness. Yet there are problems with such studies.
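The distinction drawn above, between statistical differences in grouped data and claims about a single individual, is worth making concrete. The simulation below uses invented numbers (a half-standard-deviation group difference, 200 people per group); nothing here comes from real imaging data. The group difference is unmistakable, yet classifying any one person from the same measure barely beats a coin flip:

```python
import random
import statistics

random.seed(42)  # reproducible simulated data

# Invented data: some scalar brain measure for 200 controls and
# 200 former users, whose group means differ by 0.5 standard deviations.
controls = [random.gauss(0.0, 1.0) for _ in range(200)]
former_users = [random.gauss(0.5, 1.0) for _ in range(200)]

# Group level: Welch's t statistic for the difference in means.
def t_stat(a, b):
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / se

print(f"group-level t = {t_stat(controls, former_users):.1f}")  # a reliable difference

# Individual level: label anyone above the midpoint as a former user.
threshold = 0.25
correct = sum(x < threshold for x in controls) + sum(x >= threshold for x in former_users)
print(f"single-person accuracy = {correct / 400:.0%}")  # only modestly above 50%
```

The same logic applies to Langleben’s result: averaging over many trials and subjects makes a small effect statistically solid, while the per-person, per-lie decision remains unreliable.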
There is a big difference between looking at a card with a cross on it and saying “star,” and telling a lie like “I did not have sex with that woman, Monica Lewinsky.” These are very different kinds of lies, and it is unlikely that brain scanning can detect a lie as complex and robust as the latter, if we want to call that a lie. It depends, of course, on what the meaning of “sex” is. But these are fascinating questions.

Here is another interesting study. The amygdala is a part of the brain that plays a role in emotions, such as fear. In one study, white males with high racism scores were put into PET scanners. When they were shown pictures of white faces, there was very little response. Famous black faces that they recognized, like Martin Luther King, elicited no response. But when they were shown unfamiliar black male faces, their amygdalas lit up like Christmas trees. Evidence of racism in a brain scan! There is talk now about creating remote brain scanners for airports. If you walk through an airport and your amygdala is lit up, it may mean you are a terrorist, or it may mean you had a fight with your wife, or it may mean you’re a white racist. In any case, you would be brought to the back room to be strip-searched.

Brain imaging presents incredibly difficult problems that must be solved before we can use it in social settings—to screen for employment, for honesty, for sensitive jobs; to detect lies; to track kids into aptitudes; and so on. These are all things that the Defense Advanced Research Projects Agency (DARPA) and other defense agencies are very interested in.

Let’s move for a moment to regenerative neurology. We can put actual fetal nigral cells into people’s brains for Parkinson’s and other diseases. In the United States, the ethical conversation about that issue was focused entirely on where we would get the fetal cells; abortion was central to our discussion. In Europe, they asked a different question: what would it mean to put cells into someone’s brain from someone else’s brain? Would you adulterate their personhood? For research in Scandinavia, they decided that, yes, you could put fetal nigral cells into someone’s brain, but the cells have to be disaggregated. You cannot put a clump of brain that might have its own coherent integration into someone else’s brain. We never even had that conversation here in the United States.

I also want to say a quick word about deep-brain stimulators. I was in an operating room last week to observe neurosurgeons putting a deep-brain stimulator into the putamen of a person with Parkinson’s. It was an amazing thing to see. They threaded the device through a cannula deep into the brain. You watched the progression of this thing, by tenths of a millimeter, deep into this person’s brain, and you listened to the brain activity through an audio feed at the end of the probe. When they got into the putamen, the static-like sound of neurons firing increased significantly. You could just hear the cells firing wildly. The patient was on the table shaking, and when they turned the thing on, his tremors all ceased in a moment, and he cried out, “Ah!” This man hadn’t been able to feed himself in years.
They handed him a glass and said, “Pretend you are drinking beer,” and he brought the cup smoothly to his mouth. When they turned the current off to thread the wire under the skin to the stimulator implanted below the clavicle, he cried out again, “No, don’t turn it off!” They assured him it would go back on, but it was an amazing thing to see.

There are some reports now, very preliminary, that the spouses of people who have gotten these deep-brain stimulators are saying, you know, “George doesn’t seem exactly like George anymore.” Is that because they have gotten used to the George who has had Parkinson’s for five years? Or could it be that deep-brain stimulators, even though they are in motor centers of the brain, are actually evoking some kind of change? Nobody knows.

Finally, I want to talk about brain-computer interfaces. First, noninvasive interfaces. A number of EEG-based technologies use action potentials to translate brain impulses into action. The problem is that the skull is a very bad conductor, a very bad transmitter of the electrical activity of the brain. So when you put these caps on, you muffle most of the activity you want to detect. Using the P300 evoked-response potential, you can tell when the brain acts as an entire brain, going “Ah-ha, that’s it!” Based on that idea, they have now created brain-computer interfaces that allow people to move cursors around screens, and all kinds of things, without any implanted technologies.

A similar technology is brain fingerprinting, which Lawrence Farwell has used forensically. The concept is to use EEGs to show whether a person is looking at a familiar or unfamiliar scene. A suspect is shown rapid pictures, and then, boom, a picture of a crime scene where, for example, the suspect says he never was. If his brain shows familiarity, Farwell can say he was probably there. And when a suspect says he was not there, and the prosecutor claims he was, if the suspect’s brain shows unfamiliarity, Farwell can say with even more confidence that he probably wasn’t there. This technology was ruled admissible in the Terry Harrington case, and Harrington was let go. This brings up a whole range of important issues in jurisprudence. Believe me, if the Bush administration could get its hands on some of these technologies, they would be on a plane to Guantanamo Bay tomorrow.

Some brain-computer interfaces are implantable rather than transcranial. These include cochlear implants and the optic nerve implant. Researchers are also working on retinal prosthetics. Today they have about 16 electrodes. A prosthesis with 1,000 or so electrodes could allow a patient to really look at things, to read a book, for example. But in the meantime, a person who is stone blind can read the top two lines of an eye chart. This is a fascinating prosthetic possibility.

Drs. Bakay and Kennedy of Emory University have a patient named Johnny Ray (JR, they call him), who had a brain-stem stroke. JR is completely locked in, completely paralyzed; he can’t communicate in any way. Electrodes were implanted in his brain, and he was taught to move a cursor around a board to point to phrases, such as “I’m uncomfortable,” “thanks for visiting,” and so on.
Now he has begun to use an alphabet to spell out his name.

Or take Miguel Nicolelis, who is in the news because of a paper that was just released about a new technology. Nicolelis has put electrodes in the brains of owl monkeys, 30 or 40 electrodes in one, 200 in another, and then had them remotely control robotic arms. Nicolelis and his team determined what the monkeys’ brain waves looked like when they moved their own arms, then used algorithms to translate them and taught the monkeys to control robotic arms. As the monkeys realized that the robotic arms mimicked the movement of their own arms, they eventually dropped their arms and began to control the robot arms entirely with their brains, without moving their own arms at all. Amazing!

And then there is the “roborat,” a rat with electrodes controlling its movements. The roborat has been turned, basically, into an organic robot. It no longer has the ability to make decisions about where it wants to go. The animal’s behavior is governed by electrodes activated by someone using a joystick, who determines whether the animal moves right or left or goes up or down a tree. This kind of technology raises all kinds of ethical questions about using technology to control animals.

A man named Kevin Warwick had a chip implanted in his arm, which he then connected to the computer and environment in his laboratory. When he enters the lab, the lights go on and jazz starts playing. His heartbeat and blood pressure appear on his computer screen. Warwick said that he suddenly began to feel connected to the environment in a way he hadn’t before.

The common feature of these technologies is that they control moods, cognitive functions, and physical functions. Through these technologies, we can begin not only to enhance ourselves, but also to connect ourselves to our environment in new ways. It is already happening. Science recently printed a cover article about bionic humans. In other words, we are becoming cyborgs, not in the science fiction sense, but in a practical, real, obvious sense; our technology will be integrated into our bodies, and our bodies will be integrated into our technology, seamlessly. This may not turn us into “spiritual machines,” as Ray Kurzweil claims, but it will certainly turn us into spiritual man-machine hybrids.

The ethical question that confronts us is: who will have control of these technologies, and who will determine their ethical nature? Who will protect our privacy? Who will ask the important questions about enhancement—when it is good, when it is bad, and who should or should not have it? Or will these products simply be put on the consumer market for consumer response? Many people are already trying to answer these questions in the negative.
For example, Bill McKibben in his book Enough; Leon Kass, the head of Bush’s President’s Council on Bioethics, who has come out against in vitro fertilization, stem cell research, and other technologies; and Francis Fukuyama, who wonders in his book Our Posthuman Future if we are threatening “human nature.” These and others are forces arrayed against these technologies. Others are advocates for them. Just as there are nanophobes and nanophiles, there are neurophobes and neurophiles defining the arguments.

Science cannot march too far ahead of ethics, not because as an ethicist I need to be employed—that’s always a good thing—but because ethics is going to determine how neurotechnologies are received by the public. We made a mistake with the cloned sheep, Dolly, which was presented to the public without prophylactic ethical conversation. The public response was international hyperventilation, because people didn’t understand what Dolly meant, what the implications were. We must engage the public in this conversation before these technologies are developed further. We all have a stake in the outcome.

REFERENCES

Fukuyama, F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus, and Giroux.

Kramer, P.D. 1993. Listening to Prozac: A Psychiatrist Explores Antidepressant Drugs and the Remaking of the Self. New York: Viking.

McKibben, W. 2003. Enough: Staying Human in an Engineered Age. New York: Henry Holt.

Wolpe, P.R. 2003. Neuroethics. Pp. 1894–1898 in Encyclopedia of Bioethics, vol. 4, 3rd ed., edited by Stephen G. Post. New York: Macmillan Reference USA.
