**Neurons, Networks, and Noise: An Introduction**

**Nancy Kopell, Boston University**

**DR. KOPELL:** As Emery said, my background is dynamical systems in neuroscience, but in a sense I am here to lobby for more work at the interface between sophisticated dynamical systems and sophisticated statistics, because I think that is a major underdeveloped area. I have more questions than answers.

FIGURE 1

My assignment from Emery was to talk about what people study when they study networks of neurons. Generally, as shown in Figure 1, they start with the Hodgkin-Huxley equations or variations of them, which describe the electrical activity of a neuron or even a tiny piece of a neuron. One can even think of a single neuron, spread out in space, as itself a network. These equations come from electrical circuit theory, and the central equation there is conservation of current, so it is talking about ionic currents that flow across the cell membrane through little channels.

Each ionic current is given by Ohm’s law, E = IR. That is nice and undergraduate. The ionic current *I*_ion shown in Figure 1 is the electromotive force divided by the resistance but, in the neural world, they are very optimistic: they don’t speak of resistance, they speak of conductance, which is 1 over

resistance, and that is how we define *I*_ion in Figure 1. What makes this a little bit more complicated than standard undergraduate electrical circuit theory is that this conductance is not a constant; rather, it is a product of so-called gating variables, which describe how the pores in the membrane open and close, and these gating variables are themselves dynamic variables that depend on the voltage. The voltage depends on the gates and the gates depend on the voltage. It is all highly nonlinear, and there are many characteristic time scales in there. Even when you are talking about a bit of a single neuron, you are potentially talking about a very large number of dimensions. Now, we hook cells up via a large number of synapses, which I will tell you about in a minute, and one can have an extremely large network to deal with. On top of all this is something that I am not going to describe but that I am hoping Eve Marder will: the layers of control that were talked about so wonderfully in the first lecture. That is, on top of all of this, there are neuromodulators, which you can think of as changing, on a longer time scale, all of the parameters in here, and which are themselves changed by the activity.
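The current-balance picture just described can be sketched numerically. Below is a minimal forward-Euler integration of the classic Hodgkin-Huxley equations, using the standard textbook squid-axon parameterization; the injected drive, step size, and duration are illustrative choices, not values from the talk.

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (standard textbook values):
# conductances in mS/cm^2, voltages in mV, capacitance in uF/cm^2.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the gating variables m, h, n.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

# Start at rest, with each gate at its voltage-dependent steady state.
V = -65.0
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

I_app = 10.0             # injected drive (uA/cm^2), an illustrative choice
dt, steps = 0.01, 5000   # forward Euler, 50 ms total
V_trace = []
for _ in range(steps):
    # Each ionic current is a conductance times a driving force; the
    # conductance is a maximal value times a product of gating variables.
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    # Conservation of current: C dV/dt = I_applied - I_ionic
    V += dt * (I_app - I_ion) / C
    # The gates depend on the voltage, and the voltage on the gates.
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V_trace.append(V)
```

Even this single-compartment sketch has four coupled nonlinear variables and several time scales; a spatially extended neuron or a network multiplies the dimension accordingly.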
FIGURE 2
How do you hook up two cells? How do you hook up many cells? There are many ways that neurons talk to one another. The major way is via so-called synapses. Here I am talking about two kinds of synapses, chemical and electrical. To begin with, just think about a pair of cells. When the presynaptic cell spikes, it unleashes a set of events that leads to another current in the postsynaptic cell. Once again, this other current is conductance times the driving force. For chemical synapses, this driving force is the difference between the voltage of the postsynaptic cell and something that has to do with the nature of the synapse, as shown in Figure 2. Very crudely, one can think of the synapses as being either excitatory or inhibitory, and that determines the sign of the driving force. For an excitatory synapse the current will drive the cell to a higher voltage, which will make it more likely for the cell to produce an action potential and, for an inhibitory one, it does the opposite. There are also electrical synapses, which depend on the difference in the voltages between the postsynaptic and presynaptic cells.
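As a sketch, the two kinds of synaptic current described above can be written down directly. The function names, conductance values, and reversal potentials below are illustrative assumptions, not measured numbers from the talk.

```python
def chemical_synaptic_current(g_syn, s, V_post, E_syn):
    """Conductance (maximal g_syn times gating variable s) x driving force.

    The driving force is the difference between the postsynaptic voltage
    and a reversal potential set by the nature of the synapse.
    """
    return g_syn * s * (V_post - E_syn)

def electrical_synaptic_current(g_gap, V_post, V_pre):
    """Gap-junction current: depends on the voltage difference between cells."""
    return g_gap * (V_post - V_pre)

# Illustrative values (mS, mV): reversal near 0 mV for an excitatory synapse,
# near -80 mV for an inhibitory one; resting postsynaptic cell at -65 mV.
I_exc = chemical_synaptic_current(0.1, 1.0, -65.0, 0.0)    # negative sign
I_inh = chemical_synaptic_current(0.1, 1.0, -65.0, -80.0)  # positive sign
```

With the usual convention that these currents are subtracted in the current-balance equation, the negative excitatory current pushes the voltage up toward threshold and the positive inhibitory current pushes it down, matching the signs described above.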
FIGURE 3
Figure 2 gives just a sense of how things are hooked up in these networks. The way to think of it is that all of these currents get added to the equation for conservation of current, so it is electrical circuits with bells and whistles. This can get extremely complicated. There is a fair amount of work in the literature in which people say they can’t handle it and go to something a little bit simpler. The standard simpler thing that people work with is the so-called integrate-and-fire neuron, in which most of what I said is ignored. You still have conservation of current; dV/dt is something that looks linear, but it isn’t. The reason it isn’t linear is that there is a threshold in there: the voltage builds up to some predetermined value that the person working with the model decides. One pretends that there is an action potential, a spike, which happens, and then the cell gets reset to another voltage value. The graph in Figure 3 shows how the voltage builds up and then decays in an action potential. This is the simplest kind of caricature with which people work. There are other kinds of caricatures in which the action potential isn’t just put in by hand but is in the equations: a trajectory decays back to rest if it doesn’t reach the threshold and, if it does reach threshold, it goes on and produces an action potential.
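A minimal sketch of the integrate-and-fire caricature, with the threshold and reset imposed by hand as just described. All parameter values are illustrative.

```python
# Leaky integrate-and-fire: linear subthreshold dynamics plus a hand-imposed
# threshold and reset. Parameters (mV, ms) are illustrative choices.
E_L, V_th, V_reset = -65.0, -50.0, -70.0
tau_m = 10.0   # membrane time constant (ms)
RI = 20.0      # constant drive, already multiplied by membrane resistance (mV)

V = E_L
dt, T = 0.1, 100.0
spike_times = []
t = 0.0
while t < T:
    # Conservation of current, linearized: tau dV/dt = -(V - E_L) + R*I
    V += dt * (-(V - E_L) + RI) / tau_m
    if V >= V_th:              # the nonlinearity: a threshold crossing...
        spike_times.append(t)  # ...counts as a pretend action potential,
        V = V_reset            # and the voltage is reset by hand.
    t += dt
```

With these numbers the drive pushes the voltage toward -45 mV, above threshold, so the cell fires periodically (roughly every 16 ms); with a weaker drive the voltage would decay back toward rest and never spike.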
Once you have a network like this, there are general questions that everybody in the field wants to answer. What are the dynamics of these networks? They can be local, in which cells are connected to all or many of the cells in the whole network. They can be spatially extended, as they would be in a neural tissue. They can be deterministic, or they can have noise associated with them, and they can have inputs with or without spatiotemporal structure. In the study of these networks without inputs, what one thinks about is basically the background activity of a particular part of the nervous system. When we think about putting inputs in, we are asking: given the dynamics of the network in the absence of inputs, given that background activity, how is it going to process inputs that actually do have structure?
Where does statistics come into all of this? Well, data are noisy. One builds these nice nonlinear
deterministic models which are extremely complicated. How do you begin to compare what you see in
the dynamical systems with what you can get out? In general people mostly make qualitative
comparisons. What one would like is to be able to compare on a much deeper level what is going on.
That is why I say I am here to lobby for a combination of dynamical systems and statistics. I want to give
you a couple of examples of this, and it comes from the things I am most interested in at the moment,
which have to do with rhythms in the nervous system that are found in electrical activity of the brain and
can be seen in EEGs and MEGs. What I am interested in is how those rhythms are used in cognition.
That is much too large a question to be able to talk about here, but what I plan to do is give you some
examples that I think will illustrate some of the statistical issues that come up.
I have two examples. The first one is a real baby example; it is two cells. To think about how you can get rhythms in a complicated situation, you have to think about coherence and what creates it. The simplest situation involving coherence has to do with two cells, and that is where I am starting. There are various mathematical ways of thinking about the coherence of cells; I am using one in Figure 4 that I think is especially well adapted to statistics. Assume you have a neuron whose parameters are such that it will keep spiking periodically if nothing else happens. Now it gets some input in the form of another spike, and that input does something. One of the things it does is change the timing of the cell that receives it. One can describe what these inputs are doing in terms of the so-called spike-time response curve. What the schematic in Figure 4 is telling us is that if the input comes in, say, 60 milliseconds after the receiving cell has spiked, there will be a negative advance, that is, a delay. If the input comes much later in the cycle, the receiving cell will spike considerably sooner than it would have.
If you have such a spike-time response curve, with a little bit of algebra (this is even high school), you can figure out, if you have two cells that are put together and talking to one another, what that two-cell network will do. The mathematics that describes what it will do is called a spike-time difference map, which maps the timing difference between the two cells at a given cycle onto the timing difference between them at the next cycle, so that you can see if they will synchronize. Simple algebra takes you from the response curve to the map. Once you have such a map, it is standard one-dimensional map theory to tell whether the cells will synchronize or not, and starting from what initial conditions. The zeros of the map give the phase lags at which you can have synchronization, and the slopes tell you whether or not each of those steady states is stable. It turns out for the example in Figure 4 that there is both stable synchrony and stable anti-phase.
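To make the construction concrete, here is a sketch of iterating a spike-time difference map. The sinusoidal response curve and the particular form of the map are assumptions for illustration; in the talk, the map is derived by algebra from a measured spike-time response curve.

```python
import math

T = 25.0    # unperturbed firing period (ms), illustrative
eps = 0.5   # coupling strength, illustrative

def strc(delta):
    # Assumed smooth spike-time response curve: the advance or delay an
    # input produces, as a function of when in the cycle it arrives.
    return eps * math.sin(2.0 * math.pi * delta / T)

def spike_time_difference_map(delta):
    # One common form of the map for two identical, mutually coupled cells:
    # each cell's next spike is shifted by the response curve evaluated at
    # the time since its last spike when the other cell's input arrives.
    return (delta + strc(T - delta) - strc(delta)) % T

# Iterate the 1-D map. Its fixed points are the candidate phase lockings
# (0 = synchrony, T/2 = anti-phase); the slope there decides stability.
delta = 0.3 * T
for _ in range(500):
    delta = spike_time_difference_map(delta)

phase_lag = min(delta, T - delta)   # distance from synchrony, mod T
```

For this assumed curve the iteration converges to zero lag, i.e. stable synchrony, while the anti-phase state at T/2 is a fixed point of the map but is unstable, so small perturbations flee it.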
FIGURE 4 Acker, C.D., Kopell, N., and White, J.A. 2003. Synchronization of strongly coupled excitatory
neurons: Relating network behavior to biophysics, Journal of Computational Neuroscience 15:71-90.

Let’s add some statistics to it. The really nice thing about spike-time response curves is that you can actually measure them in the lab using something called a dynamic clamp, which is a hybrid network of real biological cells and cells in silico. You can take a real biological cell (Eve Marder is an expert and one of the people who first developed this technique), inject into it a given signal at a given time, and see what that cell does. You can measure it and, not surprisingly, you get something or other, but it is awfully noisy and, depending on the signal, you will get something else.
FIGURE 5 Netoff, T.I., Banks, M.I., Dorval, A.D., Acker, C.D., Haas, J.S., Kopell, N., and White, J.A. 2005. Synchronization in hybrid neuronal networks of the hippocampal formation, Journal of Neurophysiology 93:1197-1208.
Here is the question: What can you read from the stochastic spike-time response curve shown in Figure 5? When you have a spike-time response curve, before you create the spike-time difference map, can you read off whether the cells will synchronize or go into anti-phase? By synchrony I mean zero phase lag. For the particular deterministic spike-time response curve in Figure 5 it turns out that, if you hook up two identical cells with it, they will synchronize, not go into anti-phase. Is there any more information in the scatter? The answer is yes. Instead of modeling the curve as its deterministic fit, you make it random, with a Gaussian distribution whose variance is the measured variance. You can then predict what will happen statistically when you actually hook these two cells up with the dynamic clamp: when a cell fires, you look up the spike-time response curve, advance the other cell by the predicted amount plus a random amount drawn from the measured scatter, and keep doing that. That gives you the outline of the predicted distribution. You can then take the two real cells and, with the dynamic clamp, inject (whenever one cell fires) a current into the other cell corresponding to the synapse, and that cell will respond according to its spike-time response curve. You see that the histogram of phase differences, with different ways of connecting those same two cells, matches very well the predictions from the stochastic spike-time response curves. The punch line of this example is that when one takes into account the statistical structure in the data, one can make much better predictions about what the network will do than if one simply takes the deterministic fit and does the mathematics with that. The deterministic fit predicts only that things will synchronize, which would be a single spike in the histogram at zero lag and nothing else, whereas using the extra structure gives you the whole histogram of phase differences.
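The prediction procedure just described can be sketched as follows: the same kind of difference map as before, but with a Gaussian draw added on each cycle whose variance stands in for the measured scatter. All numbers are illustrative assumptions.

```python
import math
import random

random.seed(0)                 # fixed seed so the run is reproducible
T, eps = 25.0, 0.5             # period and coupling, illustrative
sigma = 0.5                    # std of the scatter around the fitted curve

def strc(delta):
    # Deterministic fit of the spike-time response curve (assumed shape).
    return eps * math.sin(2.0 * math.pi * delta / T)

def noisy_map(delta):
    # Spike-time difference map plus a Gaussian draw whose variance
    # matches the measured scatter in the response curve.
    return (delta + strc(T - delta) - strc(delta)
            + random.gauss(0.0, sigma)) % T

# Run many cycles and collect the phase differences. The deterministic fit
# alone predicts a single spike at zero lag; the stochastic version
# predicts the whole histogram around it.
delta, lags = 0.3 * T, []
for i in range(5000):
    delta = noisy_map(delta)
    if i >= 500:                          # discard transient cycles
        lags.append(min(delta, T - delta))

near_sync = sum(1 for d in lags if d < T / 4) / len(lags)
```

The histogram of `lags` is the stochastic prediction to compare against the dynamic-clamp experiment: concentrated near synchrony, but with a spread set by the measured variance rather than a single deterministic spike.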
That is a simple example of putting together dynamical systems and statistics. Now I am getting to one where the questions get a lot more complex, and it is a lot less under control, and it is where I think there is room for a huge amount of work from other people. There are many questions you can ask about these very large networks, but one question I am very interested in concerns the formation of so-called neural ensembles. I am deliberately not using the term cell assembly, which many of you may be more familiar with. Cell assembly, in the literature, means a set of cells that are wired up together for whatever reason and tend to fire together. By neural ensemble, I mean a set of cells, whether or not they happen to be wired up together, that are firing, at least temporarily, almost synchronously with one another. So it is a more general notion than cell assembly. It is widely, although not universally, believed that this kind of neural ensemble is extremely important for the functioning of the brain and cognition. Among the reasons for the importance of synchrony is the idea that it temporarily tags the cells as working on the same issue, that they are related now, for this input, for this particular cognitive task, as noted in Figure 6.
These neural ensembles tend to change rapidly in time. Cells would be firing together for perhaps a fraction of a second, and then move on to another ensemble. Think of it like a dance. These kinds of subsets are also important as a substrate for plasticity because, when cells do fire together, there is a chance for them to really wire together and become what is known as a cell assembly. Lots of people believe that this kind of synchronous activity is extremely important. There is a smaller group, which includes me, that believes the so-called gamma rhythms in the brain are important for the creation of those neural ensembles. The gamma rhythm is the part of the EEG spectrum that is roughly 30 to 90 hertz, depending on who you speak to and the phases of the moon. This is all very contentious. There is a huge amount to be said about gamma rhythms, where they come from, and why people think they are important to early sensory processing, to motor control, and to general cognitive processing.
FIGURE 6 Whittington et al. 1997. Pyramidal-interneuron network gamma with heterogeneity, Journal of Physiology.
Since I have less than four minutes left, I am going to cut to the chase and give you a sense of where the statistical issues come from. To give you background on what we are putting the noise on, I am going to start, as I did in the other example, with something that has no noise whatsoever in it. This is one way of creating a gamma rhythm, known as pyramidal-interneuron network gamma (PING), shown in Figure 6. Pyramidal cells are excitatory; the interneurons are inhibitory; I am making a network out of the kinds of things I showed you right at the beginning. The simplest way to do this is with one excitatory cell and one inhibitory cell. The excitatory cell fires and makes the inhibitory cell fire. The inhibitory cell fires and inhibits the excitatory cell, which stays silent until the inhibition wears off.
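The one-E-cell, one-I-cell loop can be sketched with the integrate-and-fire caricature from earlier. Here an E spike is assumed to trigger the I cell immediately (a simplifying assumption), and the I spike delivers a decaying inhibitory conductance that silences the E cell until it wears off. All parameter values are illustrative.

```python
# Minimal E-I (PING-style) loop with an integrate-and-fire E cell.
# Parameters (mV, ms) are illustrative, not fitted values.
E_L, V_th, V_reset = -65.0, -50.0, -65.0
E_inh = -80.0              # inhibitory reversal potential
tau_m, tau_inh = 10.0, 10.0
RI, g_inh = 20.0, 2.0      # tonic drive and peak inhibitory conductance
                           # (both already scaled by membrane resistance)

V, s = E_L, 0.0            # s gates the inhibitory synapse onto the E cell
dt, T_total = 0.05, 500.0
e_spikes = []
t = 0.0
while t < T_total:
    # Current balance: leak + tonic drive + inhibitory synaptic current
    V += dt * (-(V - E_L) + RI - g_inh * s * (V - E_inh)) / tau_m
    s += dt * (-s / tau_inh)   # inhibition wears off exponentially
    if V >= V_th:
        e_spikes.append(t)
        V = V_reset
        s = 1.0                # E spike -> I spike -> fresh inhibition on E
    t += dt

period = T_total / max(len(e_spikes), 1)   # rough population period (ms)
```

The rhythm's period is set mainly by the decay of the common inhibition: the E cell cannot fire again until `s` has worn off enough. With these assumed numbers the period comes out on the order of tens of milliseconds, i.e. in the general range of the gamma band described above.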

FIGURE 7
If you have a large network of this (and the example that leads to Figure 7 is not that large: 160 pyramidal cells and 40 inhibitory cells), it self-organizes to create this gamma rhythm. Exactly how that happens involves mathematics and probability theory when there is heterogeneity like this. That mathematics has been worked out by Christoph Börgers and me. It is not the essence of what I want to say, so I am going to go on from there. Roughly, the most important part is that the common inhibition leads to this self-organization. Now we add in some noise, and I am going to put that noise on top of a slightly different parameter regime in which the E cells don’t fire very much and really need this drive in order to fire. You can then get a kind of gamma rhythm in which excitatory cells fire very sparsely. The graphs in Figure 7 are raster plots. You can see that any given E cell is firing very sparsely.
You still have a population rhythm, and in our hands at least, it requires a lot of noise to make it happen. There seems to be a lot of noise in the physiological situations that correspond to this. One can produce this in slice, and many of us, including this gang, believe it is this kind of activity that is associated with a kind of background attention, a kind of vigilance, that enables you to take inputs that are at a low level and perceive them much better than if this background activity were completely asynchronous. It is related to the kind of activity that I was talking about before. Now, if there is input into a subset of the excitatory cells, it will create a cell assembly in which the cells that are involved are producing a kind of PING rhythm and everything else is more or less suppressed.
In the larger story that I am not telling, this corresponds to inputs to a specific set of cells that are
doing neural computations and that are responding in the presence of this kind of background activity.
What are some of the issues here? I think there are a huge number, but I will just mention three.
First, there is the question of whether there really are cell assemblies of this sort, and gamma rhythms associated with them. In a recent conversation that I had with Uri Eden, who works with Emery Brown, I found he is coming up with a way of being able to decide that question. You look at an animal that is behaving, say, running down a track, and you are looking at a place cell in the hippocampus that is supposed to fire when the animal is in a particular spot on a maze. The question is whether you can make an association between some covariate, which in this case would be where the animal is, and when the cell is producing these gamma rhythms, or is having its firing related to a local field potential that shows local gamma rhythms. You can potentially see with statistics whether or not this gamma rhythm is actually encoding when the cells are doing what they are supposed to be doing. You would expect to see more gamma rhythms when the cells are actually firing in their correct positions, and Eden believes he has a new way of thinking about that. This has to do with looking at data and also with looking at the models themselves. The persistent vigilance models are generally large networks, their outputs are very complicated, and it is very hard to change a lot of parameters and say what the network is actually doing.
A second major issue is finding appropriate statistics to actually describe what the network is doing. For instance, how does the firing of the cells depend on parameters of the system and on the extra layer of control that I mentioned, neuromodulation? What happens when you add inputs to all of this? How do the statistics of firing and of network connectivity affect the response to inputs? At the moment, we don’t have any statistics like this other than the standard off-the-shelf ones, and those don’t do it.
The final major question is the one that confuses me the most, and it has to do with plasticity and the so-called statistics of experience, in which one needs to use statistics even to phrase what the question is. The point is that for real animals living in the real world there are natural inputs, both auditory and visual, and these make some cell ensembles much more likely to occur than others; it is not just the kind of thing that an experimenter would give to an animal. One of the things we would like to know is what that does to the creation of cell assemblies, what it does to network connectivity, and what it would do to the response to inputs. In a sense, the input to this theory is the statistics of experience, and the output of the theory is statistical, but it is all through dynamical systems models that would tell you how one converts to the other.
QUESTIONS AND ANSWERS
QUESTION: One question is that, usually at the single-unit firing level, the rhythm is not very obvious.
DR. KOPELL: One should be looking at the correlation between the single unit
and the local field potential.
QUESTION: In a network model, how do you go from the single unit, which is the basic building block, to a local-field-potential kind of measurement? Can that be done within the same mathematical network?
DR. KOPELL: Yes, but once again, that is contentious. The physics of what the local field
potential is actually measuring is still not wholly agreed upon by everyone. So, people do this, but it is
not clear how correct it is.
DR. KLEINFELD: Nancy, where do you think all this noise is from?
DR. KOPELL: Some of it is noise that is generated in the channels.
DR. KLEINFELD: Shouldn’t that fall off sort of like 1/√N?
DR. KOPELL: Again, there is the issue of high variability versus low variability, and there is
the question of how many inputs are averaged before one gets the output. When you are thinking in terms
of the square root of N, you are sort of assuming all of this noise will cancel out. I think it doesn’t, and
there seem to be major noise generating networks, at least within axons, of some pyramidal cells, that will
actively work to produce this noise. It is presumably doing something really important here. I don't think
the noise is something that should be treated as something that will be averaged out.
DR. KLEINFELD: There is this textbook notion of 10^4 inputs per cell, and then these tremendously strong sort of depression schedules for synapses, and then there are results from people like Alex Thomson that suggest that a couple of synapses have large PSPs, like 10 millivolts instead of hundreds of microvolts. Just to put closure on what you are saying, is this what you are thinking: that among the very large number of inputs there are really a few preferred inputs dominating the drive to a neuron, and it is because of these small numbers that you get the high variability?
DR. KOPELL: That is certainly consistent with this, yes.

REFERENCES
FIGURE 4. Acker, C.D., Kopell, N., and White, J.A. 2003. Synchronization of strongly coupled
excitatory neurons: Relating network behavior to biophysics, Journal of Computational Neuroscience
15:71-90.
FIGURE 5. Netoff, T.I., Banks, M.I., Dorval, A.D., Acker, C.D., Haas, J.S., Kopell, N., and White, J.A. 2005. Synchronization in hybrid neuronal networks of the hippocampal formation, Journal of Neurophysiology 93:1197-1208.