Data and Measurement



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Current Developments in a Cortically Controlled Brain-Machine Interface

Nicho Hatsopoulos, University of Chicago

DR. HATSOPOULOS: My background is in neuroscience. I started out in physics, moved to psychology, and am now in neuroscience. Today I am going to talk about some work we began about 10 years ago with my collaborators at Brown University, which tries to make sense of large data sets collected from the brain, particularly from the cortex of behaving monkeys, and then about where we are taking that work next. It is going to sound a little engineering in flavor, but I hope to convince you that it has not only practical applications but also scientific interest.

Let me start by telling you what people in my line of business do and have done historically. For about 40 years, my field might properly have been called behavioral electrophysiology. What we are trying to understand is the electrical and physiological signals in the brain and how they correlate with behavior, whether that behavior involves incoming sensory signals, visual or auditory, cognitive processes, or motor outputs. Since the late 1960s people have been able to record these electrical signals in behaving animals and, for the most part, they have addressed the encoding problem. In our case, we worked with a monkey that we trained for several months to play a video game, using a joystick to move a cursor to particular targets. Historically, what people have done is insert an electrode into an animal's brain to record the extracellular action potentials from individual neurons while the animal performs a task. In cortex especially, signals are believed to be highly noisy, so you have to do some sort of multi-trial averaging, requiring the animal to perform the task over and over again. Figure 1 shows an example of five trials. These are raster plots showing the times of occurrence of the spikes, which are action potentials recorded extracellularly.
By averaging many of these rasters you get an average response, shown in the yellow graph; this is a typical average response for a neuron in the motor cortex. The y-axis plots the firing rate of the neuron versus time. The vertical bar marks the onset of movement, when the monkey first begins to move. Typical of the motor cortex, the neuron starts firing maybe 200 or 300 milliseconds before the movement begins, and it is believed to be intimately involved in driving the arm, that is, driving the motor neurons in the spinal cord that ultimately activate the muscles and move the arm. The same approach has been used in the sensory domain as well: you present a sensory stimulus multiple times, so as to wash out noise, and you get this sort of averaged response.
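The multi-trial averaging described above can be sketched in a few lines. This is a minimal illustration with simulated spike times standing in for the recorded rasters; the bin width, trial count, and time window are illustrative choices, not the values used in the talk:

```python
import numpy as np

def trial_averaged_rate(spike_trains, t_start=-0.5, t_stop=1.0, bin_width=0.05):
    """Average spike rasters from repeated trials into a firing-rate curve.

    spike_trains: list of 1-D arrays of spike times in seconds, each trial
    aligned so that t = 0 is movement onset.
    Returns bin centers and the trial-averaged rate in spikes per second.
    """
    n_bins = int(round((t_stop - t_start) / bin_width))
    edges = np.linspace(t_start, t_stop, n_bins + 1)
    counts = np.zeros(n_bins)
    for trial in spike_trains:
        c, _ = np.histogram(trial, bins=edges)
        counts += c
    rate = counts / (len(spike_trains) * bin_width)  # spikes per second
    return edges[:-1] + bin_width / 2, rate

# Five simulated trials: spikes concentrated from ~250 ms before movement
# onset (t = 0) onward, loosely mimicking a motor cortex neuron.
rng = np.random.default_rng(0)
trials = [np.sort(rng.uniform(-0.25, 0.5, size=rng.poisson(20)))
          for _ in range(5)]
centers, rate = trial_averaged_rate(trials)
```

With real data the averaged curve is what the yellow graph in Figure 1 shows: a rate estimate that is smooth only because the single-trial noise has been washed out across repetitions.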

FIGURE 1 Unpublished.

What we have been doing, especially over the past five years, is taking the inverse approach, the so-called decoding problem, which in some sense is more realistic in terms of what the brain actually has to deal with. We put multiple sensors, multiple electrodes, into a particular area of the brain, the motor cortex, record multiple single neurons and their action potentials, and, based on the activity on a single trial, try to predict what the animal is going to do. We don't have the luxury of trial averaging; we just have a snapshot of activity at a certain moment in time and try to predict what the animal is doing.

This multi-electrode sensor, shown in Figure 2, is the so-called Utah array, developed by Dick Normann, a biomedical engineer at the University of Utah. It is a silicon-based array of 100 electrodes arranged in a 10-by-10 matrix. Each electrode is separated from its neighbors by 400 microns, and the electrodes are typically 1 to 1.5 millimeters long, so we are reaching the cortical layer where the cell bodies lie. The tips of the electrodes, which are platinized, are what actually pick up the extracellular electrical signals; the rest of each shaft is insulated. The picture on the right of Figure 2 gives you a sense of the scale. When I first gave these talks I would present this thing on a big screen and people would think it was like a bed of nails destroying the brain. In fact, that is my finger in the photo, to show you the scale. It is quite small, and for a neurosurgeon it is essentially a surface structure, although it penetrates about a millimeter.

FIGURE 2 Left panel: electron micrograph of the Utah microelectrode array, taken by Dr. Richard Normann. Right panel: photo taken by the Chicago Tribune.

I am going to work backwards: I will talk about where we have taken this to the next step, then take you back into the lab with our monkeys and tell you where we are going in the future and where this area of decoding is heading over the next few years. About three or four years ago we took it to the clinical setting. The idea was, if we could predict, based on a collection of signals from the motor cortex, where a monkey was going to move, could we use that to help someone who is severely motor disabled, like a spinal-cord-injured patient who cannot move his limbs? Could we extract those signals while the patient is thinking about moving, and then control some sort of external device, whether a cursor on a computer screen, a robotic device, or even the patient's own arm, by electrically stimulating the muscles? There are already devices out there for people who are partially paralyzed, who can still move their shoulders but cannot move their wrists or fingers. Basically, these devices have little sensors in the shoulder and, depending on the orientation of the shoulder joint, generate different patterns of electrical stimulation and different canonical grips, like holding a cup of coffee or holding a fork. Ultimately we want to connect those devices with our cortical sensor and provide a practical device for a patient.

About three years ago we formed a company that took this to the clinical setting and initiated an FDA-approved clinical trial involving five patients. We had three proofs of principle, milestones that we had to meet. First, by putting this device in a spinal cord injury patient, could we extract signals from a human cortex? That hadn't been shown before, at least with this particular device.
Secondly, could the participant, the patient, modulate those neural signals by thinking about moving? Thirdly, could those modulated signals be used to control a useful device? What we are going to talk about today is just moving a computer cursor.

FIGURE 3 Photos taken by Cyberkinetics Neurotechnology Systems, Inc.

Figure 3 shows our so-called BrainGate pilot device, which has the same sensor we use in our monkeys, connected to a connector secured to the skull. Going through the skin, we can connect it to an external computer and acquisition system, which is shown here. We collect the data, do our calibration, and build our so-called decoding model to make sense of those signals and, hopefully, allow the patient to move a cursor. Our first patient was a young man in Massachusetts who was basically tetraplegic and decided to participate. We implanted the sensor array in his motor cortex during a 3-hour surgery; he had an unremarkable recovery from the operation, and several months afterwards we began picking up electrical signals. Figure 4 shows examples of raster plots from three channels while we had the patient imagine opening and closing his hand. You can see that all three of these neurons tend to fire when he imagines closing the hand, and then stop firing when he imagines opening the hand.

FIGURE 4 Figure generated by Cyberkinetics Neurotechnology Systems, Inc.

Here is a video clip of this voluntary modulation. [A video clip was shown at the workshop.] When the technician asks the patient to think about moving the whole arm either to the left or to the right, or to relax, the top raster plot displays a big burst of spiking as the patient imagines moving his arm to the left. Thus, we met the first two milestones of our clinical trial, demonstrating that the neural signals can be detected and that the participant can modulate that neural output. We met the third milestone by showing that the patient could basically read his e-mail, turn on the television, and so forth. This is an example of what can be done. It is pretty crude, and there are already a lot of devices out there for spinal-cord-injured patients. This patient can speak, so we really have to demonstrate that what we are providing isn't something he could do for himself with a much cheaper solution, such as an existing voice-activated system. I think we have just begun to scratch the surface with this technology.

One of the interesting results from this study, and also from our monkey studies, is that it has always intrigued me how few signals we can extract and yet get reasonable control and reasonable accuracy in predicting movement. Under that little chip, that sensor array, there are about 1.6 million neurons, and yet, by extracting signals from maybe 20 or 30 neurons, or multi-unit activity, we can do a reasonably good job. Why do we have all those neurons in the first place? It is intriguing, and there are all kinds of hypotheses. There is some redundancy in the system. I am not sure that is the whole answer, but it has always been intriguing to me. Again, this sensor array is just put into the arm area of the motor cortex. We know that area is activated when the arm moves or is planned to move.

Aside from that, we are not picking particular cells or particular areas.

Let me switch to the work I have been doing in the lab with the monkeys, taking this idea of a brain-machine interface to the next step. One way we view interaction with the world is to consider two general modes of interaction: a continuous (analog) mode, such as moving a mouse on a computer screen or moving one's limb in space and time, and a discrete (symbolic) mode, such as language, pressing keys on a keyboard, or even grasp configurations such as holding a cup, which can be considered discrete kinds of movement. Our feeling is that those two modes of movement or selection might involve two different kinds of cell populations and different kinds of decoding algorithms. To tackle this problem we trained our monkeys to perform two different kinds of tasks. To get at the continuous mode we had the monkey essentially scribble, following a set of randomly positioned targets, as shown in the top left part of Figure 5. When the monkey gets to a target, it disappears and a new target appears; he moves to that one, it disappears, and so forth. Over multiple trials they generate the kind of mess shown in the lower left. To get at the discrete mode of operation we use a much more common task, called a center-out task, in which the monkey reaches from a center target to one of eight targets positioned radially around the center, repeating the same task over and over; in the actual experiment each trial's target is randomly selected. By doing this they generate the relatively stereotyped movements to the different targets shown in the lower right of Figure 5.

The key here is that we break the center-out task into a planning phase and an execution phase. In the planning phase, one of the squares on the outer ring turns yellow, signifying that it is the target, but the monkey is not allowed to move. He has to wait for about a second and presumably plan his movement. After about a second the target starts blinking, indicating that he can move. When the monkey waits for that blinking, we know we have trained the animal. It takes a lot of training to get them to do that, because as soon as they see the target their natural inclination is to move to it; they know they are going to get their apple juice and be happy. But with time they will wait for about a second. We are going to look at that early planning phase and see if we can predict which target the animal has selected. That tackles the two different modes of operation. What we were interested in is whether there are different cortical areas better suited to these two modes of operation.

FIGURE 5 Top left and top right panels generated by myself, unpublished; bottom left and bottom right panels taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

For the most part we have been implanting into the central motor strip of the monkey cortex. It has a rough topography: a leg area, an arm area, and a face area. For the most part we have implanted in the arm area of the primary motor cortex. Since I moved to Chicago I have been doing dual-array implants, and am now doing triple-array implants, but here I will show dual-array implants in which a second array is implanted in the dorsal part of the premotor cortex, which is believed to be involved in this early planning or target-selection process.

FIGURE 6 Figure generated by myself, unpublished.

The photograph on the left of Figure 6 shows the two arrays implanted in surgery. What we are proposing is a system of three decoders to facilitate these two different modes of interaction with the world within a constrained experimental environment. We have two arrays in two different cortical areas. The premotor cortex signals are fed into a discrete decoder to predict which target the animal is going to go to. If the discrete decoder, based on the brain activity, predicts the wrong target, we then switch to a continuous mode based on signals in the motor cortex. A third decoder allows us to switch between the two modes. At the moment we control the switch, but ideally we want the switch to be made voluntarily by the patient or by the monkey, and we are going to use a different kind of signal to instantiate that switch.
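The three-decoder scheme can be sketched as a small controller that routes each time step's activity either to a discrete target prediction or to a continuous velocity readout, depending on a mode signal. Everything below is a hypothetical toy with random weights and linear readouts; the real decoders are fit to recorded spike counts:

```python
import numpy as np

class ThreeDecoderSketch:
    """Toy sketch of the proposed architecture: a discrete decoder on
    premotor activity, a continuous decoder on M1 activity, and a mode
    switch selecting between them. All interfaces here are hypothetical."""

    def __init__(self, W_discrete, W_continuous):
        self.W_d = W_discrete    # (8 targets x n_premotor neurons)
        self.W_c = W_continuous  # (2 x n_M1 neurons) -> (x, y) cursor velocity

    def step(self, premotor_rates, m1_rates, mode):
        if mode == "discrete":
            # choose the target with the highest linear score
            return "target", int(np.argmax(self.W_d @ premotor_rates))
        # continuous mode: linear readout of cursor velocity
        return "velocity", self.W_c @ m1_rates

rng = np.random.default_rng(1)
ctrl = ThreeDecoderSketch(rng.normal(size=(8, 16)), rng.normal(size=(2, 32)))
pm, m1 = rng.poisson(5, 16), rng.poisson(5, 32)
kind_d, target = ctrl.step(pm, m1, mode="discrete")
kind_c, velocity = ctrl.step(pm, m1, mode="continuous")
```

In the experiments described here the mode signal is set by the experimenters; the eventual goal stated above is for a third decoded signal to throw that switch.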

FIGURE 7 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

Figure 7 shows some elementary statistics. Basically, we have tried a number of different approaches to predicting the position of the monkey's hand based on the responses of multiple neurons at multiple time lags. This is essentially a linear regression problem. We have tried nonlinear approaches and more sophisticated linear approaches; for the most part we can get a bit of improvement, but not much. In fact, this very simple linear approach does remarkably well. The key is that in this response matrix we are looking at the neural activity not at just one instant in time but at multiple time points in the past, up to about a second back. It turns out that neural activity well back in history can actually help predict the current position of the monkey's hand. Figure 8 shows the output of one simulation. This is an offline simulation in which we are predicting the position of the monkey's hand; the blue trace is our best prediction based on the neural activity.
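The lagged linear regression can be sketched as follows: build a design matrix whose rows stack the ensemble's binned spike counts over the current bin and several preceding bins, then solve least squares for the filter weights. This is a generic sketch of such a decoder, not the authors' exact implementation; the bin width and lag count are illustrative:

```python
import numpy as np

def lagged_design(rates, n_lags):
    """One row per time step: an intercept plus the spike counts of all
    neurons over the current bin and the previous n_lags - 1 bins."""
    T, _ = rates.shape
    rows = [np.concatenate([[1.0], rates[t - n_lags + 1 : t + 1].ravel()])
            for t in range(n_lags - 1, T)]
    return np.asarray(rows)

def fit_linear_decoder(rates, hand_pos, n_lags=10):
    """Least-squares filter mapping lagged ensemble activity to hand
    position. rates: (T, N) binned counts; hand_pos: (T, 2) x-y position.
    With 50-ms bins, n_lags = 10 reaches roughly 500 ms into the past."""
    X = lagged_design(rates, n_lags)
    Y = hand_pos[n_lags - 1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Sanity check on synthetic data where position really is a fixed linear
# function of the lagged counts, so the fit should reproduce it.
rng = np.random.default_rng(2)
rates = rng.poisson(3.0, size=(400, 8)).astype(float)
true_W = rng.normal(size=(1 + 10 * 8, 2))
pos = np.vstack([np.zeros((9, 2)), lagged_design(rates, 10) @ true_W])
W_hat = fit_linear_decoder(rates, pos, n_lags=10)
pred = lagged_design(rates, 10) @ W_hat
```

The point made in the talk shows up directly in this formulation: the fitted filter weights attached to the older lags are generally nonzero, meaning activity up to a second in the past contributes to the current position estimate.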

FIGURE 8 Figure generated by myself, unpublished.

What we found, which in retrospect isn't so surprising, although there was some data in the literature suggesting otherwise, was that cells from the primary motor cortex, M1, the red trace in Figure 9, did a much better job of predicting the actual position of the monkey's hand. Figure 10 shows the results from a monkey performing the random-walk task; he is jumping around and you can see he is oscillating, a kind of harmonic oscillation. M1 activity predicts better than premotor cortex. In this little video [shown at the workshop] you can probably see how well M1 does compared to premotor cortex; premotor cortex is lagging behind and not doing such a good job. That is something we have seen repeatedly; it is a consistent finding and not very surprising.

FIGURE 9 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

FIGURE 10 Figure generated by myself, unpublished.

QUESTION: It looked like the premotor signals actually had a higher-frequency response than the motor signals. Is it true that some combination is better than one or the other?

DR. HATSOPOULOS: Certainly a combination is better than either one alone; we have demonstrated that as well. We were interested in which cortical area you would choose for this kind of continuous control if you had to choose between one and the other, but you are absolutely right, and we have demonstrated that.

Figure 11 shows the percentage of the variance we can account for as a function of the selection of neurons. The x-axis shows the number of neurons used to predict the movement of the hand, so we have different-sized ensembles, ranging from one neuron to 20 in this example. As you can see, overall, M1 does much better than premotor cortex; the curve is the mean response as a function of the number of neurons.

FIGURE 11 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

A couple of things are particularly interesting. One is that in the lower-left graph, if you go back to one neuron (say you have only one neuron to work with), you can see a big cluster of neurons that do poorly, accounting for maybe 10 percent of the variance, and then two outliers higher up that account for a much larger percentage of the variance. I have termed these "super-neurons" because we consistently find these unique neurons, falling outside the main cluster, that do a much better job. In fact, if you look at a large ensemble and consider 10 neurons in your decoding, you see this pattern again: that cluster of 10 neurons always contains one of these two super-neurons. That is one interesting finding.

The other interesting one is when you try to extrapolate, based on 20 neurons (we have actually gone out to 40 or 50 neurons), how many neurons you would need to get ideal performance. Our definition of ideal performance is that 95 percent of the variance is accounted for. Extrapolation is always fraught with problems, but if you were to do it you would get the hypothetical curves in the two graphs on the right side of Figure 11. You might say, wait a minute, this is just make-believe. You are right, it is somewhat make-believe, although these are curves that have been proposed in the past. If they were true, how many neurons would you need? About 200 in the primary motor cortex. Yet we have shown in real-time applications with patients and with our monkeys that, whether or not we restrict the hand, the animal and the patient can control the cursor remarkably well. So that is the continuous mode.

For the discrete mode we had the other task, the center-out task, with an instruction period in which the monkey is asked to bring the cursor to the center target and wait anywhere from 600 to 1,500 milliseconds. This is shown schematically in Figure 12. We are going to look at this early activity to try to predict which of the eight targets he is going to move to before he actually moves at all.

FIGURE 12 Figure generated by myself, unpublished.
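The extrapolation step can be illustrated with one commonly assumed saturating form, VAF(n) = n / (n + k): fit k to the measured ensemble-size curve, then invert the curve at the 95 percent criterion. The numbers below are synthetic, chosen only so the answer lands near the roughly 200-neuron figure mentioned above; this particular curve family is an assumption for illustration, not necessarily the one used in the study:

```python
import numpy as np

def neurons_for_target_vaf(ensemble_sizes, vaf, target=0.95):
    """Assume VAF(n) = n / (n + k). Each observed point implies
    k = n * (1 - VAF) / VAF; average those estimates, then invert the
    curve to find the ensemble size reaching the target VAF."""
    n = np.asarray(ensemble_sizes, dtype=float)
    v = np.asarray(vaf, dtype=float)
    k = np.mean(n * (1.0 - v) / v)
    return k * target / (1.0 - target)

# Synthetic points generated from the assumed curve with k = 11:
sizes = [1, 5, 10, 20]
vaf = [s / (s + 11.0) for s in sizes]
needed = neurons_for_target_vaf(sizes, vaf)  # 11 * 0.95 / 0.05 = 209 neurons
```

The steep growth of n near saturation is exactly why the talk flags these extrapolations as make-believe: small changes in the assumed curve shape move the required ensemble size by large amounts.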

Figure 13 shows the average responses for this task. These are peri-event histograms, generated in the same old-fashioned way they have been for 50 years: you have the monkey reach in each of the eight directions and average the response over multiple trials of the same direction. Each column is one of the eight directions and each row is a different neuron; the top three are from premotor cortex, and the bottom three are from M1, the primary motor cortex. What you see is an interesting increase in activity as soon as the instruction signal comes on; zero represents the onset of the instruction signal. The firing rate increases, then goes down, and then about a second into the trial the movement begins, so this activity occurs well before movement begins. You don't generally see this activity in the primary motor cortex, although it is not so cut and dried; generally you see more of this early target-selection activity in premotor cortex.

FIGURE 13 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

When we model the probability of the response of the whole ensemble (say we have 20 neurons) using a maximum-likelihood classifier, the R vector is a 20-dimensional vector. We model this probability conditioned on each of the targets, so we have eight of these models, and we have tried both Gaussian and Poisson models for the probability; Poisson tends to do better than the Gaussian. We then maximize the likelihood of the data and use that to predict the target. Figure 14 shows examples of that, where we are looking at our prediction starting from instruction onset. Chance is 12.5 percent, shown by the blue dotted line. Premotor cortex does considerably better than M1 during this early period, which is not surprising given those average responses, and in fact we have since done much better than the 42 percent shown here: we have gotten up to about 80 to 90 percent based on activity in premotor cortex.

FIGURE 14 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

Figure 15 shows plots examining the performance of the classifier as a function of the number of neurons. As seen in the previous figure, premotor cortex behaves exactly the opposite of M1.
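The Poisson maximum-likelihood classifier described above can be sketched directly: estimate each target's mean count vector from training trials, then score a held-out count vector under each target's independent-Poisson model and take the argmax (the log r! term is the same for every target, so it drops out). This is a generic sketch of the technique, not the study's exact code:

```python
import numpy as np

def fit_rates(counts, targets, n_targets=8):
    """Per-target mean spike count of each neuron (Poisson rate estimates)."""
    counts = np.asarray(counts, dtype=float)
    return np.array([counts[targets == k].mean(axis=0)
                     for k in range(n_targets)])

def predict_target(rates, r):
    """Maximum-likelihood target for count vector r under independent
    Poisson neurons: argmax_k sum_i [r_i * log(lam_ki) - lam_ki]."""
    lam = np.clip(rates, 1e-9, None)  # guard against log(0)
    loglik = r @ np.log(lam).T - lam.sum(axis=1)
    return int(np.argmax(loglik))

# Synthetic check: each target strongly drives one "preferred" neuron.
rng = np.random.default_rng(3)
true_rates = 1.0 + 9.0 * np.eye(8)  # 8 targets x 8 neurons
train = np.vstack([rng.poisson(true_rates[k], size=(30, 8))
                   for k in range(8)])
labels = np.repeat(np.arange(8), 30)
est = fit_rates(train, labels)
test_trials = rng.poisson(true_rates[5], size=(200, 8))
accuracy = np.mean([predict_target(est, r) == 5 for r in test_trials])
```

Swapping the Poisson log-likelihood for a Gaussian one changes only the scoring line; the talk's observation is that for low spike counts in a short planning window the Poisson version tends to win.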

FIGURE 15 Taken from Hatsopoulos et al. (2004). "Decoding continuous and discrete motor behavior from motor and premotor cortical ensembles." Journal of Neurophysiology 92:1165-1174.

I just want to thank the members of my lab, in particular Jignesh Joshi and John O'Leary, and John Donoghue at Cyberkinetics as well. Thank you very much.

QUESTIONS AND ANSWERS

DR. MOODY: Perhaps one quick question and then another. In the discrete-task histograms, across the different directions around the clock, there seemed to be a great deal of variance. Could you explain why that is the case? The second question is, can you speak to whether it really matters so much that you are tapping into what they are doing if, instead, they learn how to respond and manipulate, even subconsciously, what they are doing to get the outcome? Can you speak to both of those, the observation and the learning?

DR. HATSOPOULOS: The first question was about the variance in response over the eight directions. The whole point is that the activity not only modulates during that planning phase, it varies with direction, with target. That tells us that it is providing target information; it is saying, this neuron likes target five and not target three. If the activity were common to all directions, we would have a very hard time decoding.

On the second point: yes, in some sense you could in principle use any brain area. It is a form of biofeedback. Is that what you are getting at? Absolutely, that is true. In fact, there are EEG-based systems for doing this kind of thing in patients. The only thing I can say about them is that they are functional, but they take a long time to train the patient to use. The patients basically have to associate certain global brain-signal patterns with certain movements or certain targets, and it takes a lot of training to get them to do that. With our device, the drawback is that it is invasive, but the advantage of this approach is that we are extracting signals from the motor area, the area that is naturally used in the normal, intact system to control the arm, and this requires no training at all.

Some of these videos were made during the first attempt, or almost the first attempt; there was no extensive training of the animal. But you are right, in principle it is biofeedback, in some sense.

DR. KLEINFELD: I have a question on the noise between cells. Were they really independent units? Did they co-fluctuate at all?

DR. HATSOPOULOS: Yes.

DR. KLEINFELD: You had a very coarse scale, a 0.4-millimeter scale. Is there an issue, if you get too close, that you would expect to have synchronous noise sources?

DR. HATSOPOULOS: You are talking about cross talk.

DR. KLEINFELD: Common fluctuations in the neural outputs, so that they are no longer considered independent variables.

DR. HATSOPOULOS: In fact, that was the talk I wanted to give before I decided to give this one: looking at neural synchrony in the motor cortex. Basically, in about 10 to 15 percent of cell pairs we find evidence of synchronization, at timescales anywhere from a millisecond to 20 milliseconds. If you plot a cross-correlation histogram, we find some widths that are very narrow, maybe three milliseconds, and others ranging all the way up to 20 milliseconds.
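The cross-correlation histogram mentioned here can be sketched as a histogram of pairwise spike-time differences between two units; a narrow peak near zero lag is the signature of synchrony. The spike trains below are simulated, with unit B firing about 3 ms after unit A, just to show the shape of the computation:

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.02, bin_width=0.001):
    """Histogram of time differences (b - a) for all spike pairs within
    +/- max_lag seconds (20 ms here, matching the range discussed)."""
    n_bins = int(round(2 * max_lag / bin_width))
    edges = np.linspace(-max_lag, max_lag, n_bins + 1)
    diffs = (spikes_b[None, :] - spikes_a[:, None]).ravel()
    diffs = diffs[np.abs(diffs) <= max_lag]
    counts, _ = np.histogram(diffs, bins=edges)
    return edges[:-1] + bin_width / 2, counts

# Simulated pair: unit B tends to fire ~3 ms after unit A.
rng = np.random.default_rng(4)
a = np.sort(rng.uniform(0.0, 10.0, 500))
b = np.sort(a + 0.003 + rng.normal(0.0, 0.0005, 500))
lags, counts = cross_correlogram(a, b)
peak_lag = lags[np.argmax(counts)]
```

The width of the central peak in such a histogram is what the answer above refers to: peaks only a few milliseconds wide suggest tight synchronization, while broader peaks out to 20 ms reflect looser co-modulation.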