Learning, Recalling, and Thinking
The brain regulates an array of functions necessary to survival: the action of our five senses, the continuous monitoring of the spatial surround, contraction and relaxation of the digestive muscles, the rhythms of breathing and a regular heartbeat. Because the vital functions maintain their steady course without our conscious exertion, we are accustomed to thinking of the brain as preeminently the organ of thought. The brain houses our mind and our memories, and we rely on its information-processing capacities when we set out to learn something new.
But where in the brain can we locate memory or thought itself? Chapter 7 offered some clues about the ways scientific investigation—from the molecular level to studies of the alert, behaving animal—has begun to define in physical terms an abstract quality such as “attention.” Similar techniques and approaches are being applied to other mental functions, too, even those as seemingly intangible as learning, remembering, or thinking about the outside world.
Learning and memory, which for many years were considered central problems in psychology, the social sciences, and philosophy, have recently assumed greater importance in the area of neurobiology, itself a confluence of several lines of investigation. Neuroscientific interest in learning and memory has grown for two reasons, according to psychiatrist Eric Kandel, a senior scientist in the Howard Hughes Medical Institute at Columbia University. One reason is the proposal of cellular mechanisms that account for a basic kind of learning and long-term memory. The model was first identified in the relatively simple nervous systems of the marine snail and the crayfish, but it appears to hold good in the hippocampus of vertebrates as well, where it also may be associated with the formation of long-term memories.
The second reason for a new interest in learning and memory is the evidence accumulating to suggest that mechanisms
involved in the structural change in the nervous system that accompanies learning may strongly resemble certain important steps in the nervous system's development. In other words, the sorts of adjustments among synapses that account for learning may be the same as the “fine-tuning” that occurs while the maturing system is assuming its unique elaborated form. Thus, the biological changes that accompany learning may be seen—in a very schematic way—as an old process put to a new use, or as a specialized way in which the brain continues to “grow” after maturation.
A MOLECULAR ACCOUNT OF LONG-TERM MEMORY
Eric Kandel is best known for his work on the physical basis of learning and memory in the marine snail Aplysia. This animal, simple as its nervous system is (most of its 20,000 neurons have been identified by number), nevertheless provides an excellent model for the study of learning and memory, through its “gill withdrawal” reflex. When Aplysia perceives something touching its skin, it quickly withdraws both the siphon (a respiratory organ) and the gill, much as a person withdraws a hand from a hot stove without thinking about it. Although this withdrawal is a reflex, it is not completely hard-wired but can be modified by various forms of learning. One such form is sensitization, in which the animal becomes aware of a threatening factor in the environment and to protect itself learns to augment its reflex. The augmented version of the withdrawal reflex can also be maintained in short-term or long-term memory, depending on whether researchers administer the noxious stimulus (the negative reinforcement) only once or twice, or many times within a short period. The two forms of memory can be distinguished not only by their duration—the difference between minutes and days—but also at a molecular level, because it is possible to treat the snail with a chemical compound that interferes with long-term memory but leaves short-term memory unimpaired.
A major set of elements in this reflex are sensory neurons in the siphon skin, which perceive the stimulus; motor neurons in the gill, which contract the muscle and cause the gill to
withdraw; and “facilitating neurons,” or interneurons, which act on the sensory neurons to enhance their effect. The role of these facilitating neurons has recently become clearer, thanks to observations made from cell cultures, at the simplest level possible: the neurons themselves. A single sensory neuron and a single motor neuron, when implanted in a glass dish with a suitable nourishing culture, form functional interconnections. When a facilitating neuron is added or the cells are exposed to serotonin (the transmitter released by the facilitating neuron), the connection between the sensory and the motor neuron becomes stronger. The connection can last in this enhanced form for more than a day, even up to several weeks, and apparently includes some process of genetic transcription, or expression of part of the nerve cell's DNA.
This genetic transcription produces two results that set long-term memory apart from short-term memory. One is a sort of extension of a short-term effect, in which the potassium channels in the sensory neuron membrane remain closed for a longer time, while the calcium channels remain open. The net effect is that the sensory neuron is more easily excited and releases more neurotransmitter, which in turn activates the motor neuron more strongly. Actually, this effect can be produced on a short-term basis by increased levels of the second-messenger compound cyclic AMP; but after transcription, it is no longer dependent on such a factor and persists even without it. The effect can be disrupted, however, by inhibitors of protein synthesis and RNA synthesis. This constraint establishes that the recording of long-term memories involves not simply a momentary release of neurotransmitters but actual gene expression, with the synthesis of new proteins in the nerve cells themselves.
The new protein products that are synthesized—for example, under the stimulus of a repeated threatening signal—do more than merely reduce the dependence of the sensory neurons on serotonin or cyclic AMP for their activation. As a second transcription event, they induce new growth in certain parts of the sensory neurons themselves. These neurons develop many more presynaptic terminals, the structures through which they release neurotransmitter to the motor neurons; in addition, the number and the surface area of active zones in each presynaptic terminal increase, as does the total number of
vesicles, the storage containers for the neurotransmitter. Thus, gene expression appears to build long-term memory out of several effective components, which come together in a formidable array: increased excitability of the sensory neurons, with the protein kinase continuing to work on its own to keep calcium channels open, allowing calcium ions in and more neurotransmitter out; more synapses for conveying signals between sensory and motor neurons; greater numbers of active zones in the synapses; and greater quantities of neurotransmitter contained in the active zones, ready for release. No wonder that memories built of such stuff tend to last awhile.
For closer study, the Kandel laboratory has replicated in cell culture the same conditions that in the living animal lead to protein synthesis and neuronal growth: a motor neuron, a sensory neuron (injected with a fluorescent dye to make imaging possible afterward), and exposure to serotonin repeated four or five times. The results are clear: within several hours, the main axon of the sensory neuron shows an increase in the number of synapses. Exposing the neurons to the second messenger cyclic AMP produces a similar result. But regardless of whether the facilitating compound is the neurotransmitter or the second messenger, neuronal growth occurs only if a target—a motor neuron—is also present.
The necessary presence of a target was the first similarity that Kandel and his collaborators noticed between the processes of structural change that accompany long-term memory, or learning, and those of development. The observation fit in well, too, with an earlier finding: the fine axonal branches of a sensory neuron in isolation adhere together in fat bundles, but on first contact with a motor neuron the branches tend to separate, each potentially to form its own synapse with the motor neuron. Here, at a mechanical level, is the explanation for a disassembly process that is required prior to the marked increase in synapses that takes place in the presence of serotonin. But in long-term memory, as in development, the presence of the target is necessary—a feature that makes for plasticity, or the all-important ability to change in response to the environment.
To study this learning-related plasticity at the molecular level, Kandel's research group is looking at the proteins that change in level when exposed to serotonin or cyclic AMP (or,
in the living animal, to a noxious stimulus). Of the 15 proteins that change, 10 show an increase and 5 show a decrease. The reactions are transient: the levels go up, or down, and back again quite quickly.
Most interesting, in the investigators' view, are the proteins whose response is to decrease in level. Is there a way in which producing less of something can figure in a growth process? At a molecular level, the answer can be yes, if the something is an inhibitory factor of some kind. Such an answer may apply in this case, because four of these proteins that have been identified by genetic sequencing turn out to be none other than cell-adhesion molecules of the immunoglobulin type, first discovered by the research team of Gerald Edelman at Rockefeller University.
During development, the proteins apparently play a fundamental role; at least one of them is present at the very first stages, when the fertilized egg begins to divide. In the adult, however, these four proteins appear only in the nervous system, in both sensory and motor neurons. An interesting effect of these cell-adhesion proteins can be demonstrated on an isolated sensory neuron: if an antibody is added that blocks the cell-adhesion effect, the axonal filaments of the neuron start to come apart from their thick bundle and to separate out. The effect is similar to what happens when a sensory neuron is exposed to serotonin in the presence of a target, a motor neuron. This suggests that cell-adhesion molecules can indeed act as an inhibiting factor in particular circumstances. What they inhibit, apparently, is the growth and proliferation of signal-transmitting elements on the axons of sensory neurons.
By this reasoning, the effect of the cell-adhesion molecules would have to be held in abeyance at some point, to allow the sensory neurons to strengthen and increase their synaptic connections with the motor neurons. Perhaps there is even an innate tendency for some neurons, when they are near other target neurons, always to have their axons branching and proliferating, always to be seeking to form more synapses. (Indeed, during development, as discussed in Chapter 6, the brain actually forms a great many more synapses than can ever be functional during the animal's lifetime.) The inhibitory action of the cell-adhesion molecules may thus be a crucial factor that
keeps neuronal growth somewhat under control, and the temporary inhibition of cell-adhesion molecules in favor of long-term memory may be a single, notable exception to this form of containment. Of course, these results come from painstakingly close study of very simple nervous systems. The degree to which such findings can be extrapolated to the brains of primates, for example, which are many times more complex and which follow different patterns of development, is a matter of lively discussion among researchers in various specialized areas of neuroscience.
One striking aspect of such a system is the ingeniously high level of what, in a person, might be called thriftiness—the degree to which the same materials or biological processes are used and reused, but in novel contexts and to different ends. The protein kinase described earlier, which is dependent on cyclic AMP, appears in many other systems of the body and has various effects; but only in the nervous system, in relation to learning, does it play a role in long-term activation. Likewise, cell-adhesion molecules—better known to researchers for their general role in development—play a rather specialized part in the adult nervous system.
Just as intriguing, from a different perspective, is the evidence for significant common ground between biological mechanisms of learning and the early development of the organism: not only the common use of cell-adhesion proteins (although in different ways) but also the fact that growth in both contexts requires a target. Even the finding that a neurotransmitter such as serotonin is not restricted to moment-by-moment signaling but can actually be a factor that initiates neuronal growth in the case of long-term memory adds to an impression of the two contexts conjoining, with neurotransmitters sometimes acting as growth factors.
THE WORLD IN THE FRONT OF THE BRAIN
Short-term and long-term memory are not the only forms in which the brain stores information. All the time that the five senses are operating, the brain is assembling and sorting perceptions of the outside world, directing some to conscious attention and collecting others into a set of perpetually updated mental representations. Although we may seldom be aware of the full extent of these mental representations, or examine them directly, they nevertheless hold great importance for our thought processes and our ability to carry out the simplest planned action or predictive step, even something as elementary as following a fast-moving target with our eyes. These mental representations are the data on which we base cognition—our thoughts, ideas, and abstract mental processes.
Animals, too, form complex mental representations of the world, which are shaped by their own brain structure and ecological requirements. For instance, information gathered through the sense of smell undoubtedly plays a much larger role in the mental representations of a dog than in those of a bird, which relies much more on its excellent vision (both in detail and in color) to help it recognize its kin, observe the territories of its rivals, and seek out food and mates. With such differences taken into account, the study of mental representation in animals can help scientists explain similar processes in humans, particularly if the neurobiology of the animal is also under study or is well known from earlier research.
Mental representation in the monkey, in the form of short-term or working memory, has actually been studied for more than 50 years. The earliest experiments were carried out as delayed-response tests: the monkey was shown a morsel of food being placed in one of two food wells and after a short delay had to open the correct one to claim the food as a reward. The reason for the delay was to force the monkey to rely on an internal mental representation rather than on immediate stimulation—that is, what it saw taking place at that moment. In the rhesus monkey, the area of the brain known to be important for this task is the prefrontal cortex; and in humans, too, homologous areas in the frontal region of the cortex, just behind the forehead, are sites of activity for tasks that test working memory.
Present-day research of this kind with monkeys uses a computer monitor. In such experiments, the animal directs its gaze to the center of the screen. While it keeps its attention fixed on the central spot, a visual target (a light) flashes briefly (for half a second) somewhere else on the screen. The monkey's task (which requires some months of training) is to keep its eyes
fixed on the central spot as long as it is lit, and then, when the central spot has been switched off, to move its eyes to the place where the visual target had flashed some seconds before. Clearly, the test calls for working memory: the chances of turning one's eyes to the correct site by a lucky guess are slight, and since the visual target can appear anywhere at all on the screen, in any sequence—not simply location A alternating with location B—there is no possible way to “prepare” the correct response beforehand. A monkey that is practiced in this task can perform with a high degree of accuracy; but when a portion of its principal sulcus is removed by surgery, an animal that was previously proficient performs with no more than 50 percent accuracy.
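The logic of the task—that only a remembered location can bridge the delay—can be sketched in a toy simulation. The eight-location layout and the two idealized "agents" below are hypothetical simplifications for illustration, not the actual experimental design, which allowed targets anywhere on the screen.

```python
import random

# Toy sketch of the oculomotor delayed-response task.
# Hypothetical simplification: eight possible target locations
# (the real task allowed any position on the screen).
LOCATIONS = list(range(8))

def run_trials(n, remembers):
    """Accuracy over n trials for an agent that either holds the flashed
    location in working memory across the delay, or must guess blind."""
    correct = 0
    for _ in range(n):
        target = random.choice(LOCATIONS)   # the brief flash
        # ...delay period: the target is gone; only memory can bridge it...
        response = target if remembers else random.choice(LOCATIONS)
        correct += (response == target)
    return correct / n

random.seed(0)
print(run_trials(1000, remembers=True))    # intact working memory: 1.0
print(run_trials(1000, remembers=False))   # pure guessing: ~1/8
```

The gap between the two agents is the point: once the light is off, no strategy other than an internal representation of the flash can do better than chance.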
Given this sharp drop in performance, what is the nature of the deficit in the monkey's brain after surgery? Patricia Goldman-Rakic, who directs such investigations at Yale University Medical School, explains that it can be considered a “hole” in the memory—not in vision or in the ability to move the eyes. These faculties show up unimpaired in tests in which the visual target is left on (so that the monkey simply moves its eyes to the target at the appropriate time). Only the ability to guide the response by a mental image (memory) is missing.
A complementary way of investigating the same topic is to record electrical activity from the brain during a working-memory task. The ideal record in terms of clarity and precision is one obtained from a single neuron, by means of extremely fine microelectrodes. Recordings of this kind have become possible only in the past decade or so; those from Goldman-Rakic's laboratory show several very interesting things. First, the neuron under study, in the prefrontal cortex, holds to a steady level of activity when the target light appears. But it increases its activity sharply once the target light is switched off and shows sustained activity during the delay, the interval over which a memory of the target must be maintained. Finally, the neuron's activity rather abruptly returns to a baseline level when the monkey begins its response—that is, when it moves its eyes to the site where the target had been. The neuron thus shows a high level of activity only during the time required to keep the correct spot “in mind” until the moment arrives to respond actively.
A second point of interest from these recordings is that the neurons of this region in the prefrontal cortex each tend to remember one precise location on the screen—and no others. For example, one neuron would respond accurately for targets at a 270-degree rotation from the center but would remain unresponsive to all other locations; another neuron would respond only to targets at a 90-degree rotation. In an analogy with the visual system, the neurons form a “memory field” in much the same way that nerve cells of the occipital lobe form a visual field. The memory field even shows the same cross-brain pattern that is traced by many signals: neurons oriented to the memory of stimuli that appeared in the right visual field predominate in the left hemisphere, and those oriented to the memory of stimuli presented in the left visual field predominate in the right hemisphere.
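A neuron's memory field can be caricatured as a tuning curve over remembered directions. The Gaussian shape, the width, and the firing rates below are illustrative assumptions rather than measured values; the sketch shows only how one cell can fire strongly during the delay for its preferred remembered location and stay near baseline for all others.

```python
import math

# Toy "memory field": a delay-period neuron tuned to one remembered
# direction. Gaussian tuning, width, and rates are assumed for illustration.
def delay_activity(preferred_deg, target_deg,
                   width_deg=30.0, baseline=5.0, peak=40.0):
    """Delay-period firing rate (spikes/s), peaking when the remembered
    target location matches the neuron's preferred direction."""
    d = (target_deg - preferred_deg + 180) % 360 - 180  # signed angular diff
    return baseline + (peak - baseline) * math.exp(-(d / width_deg) ** 2 / 2)

# A neuron "tuned" to 270 degrees, like the example in the text:
print(round(delay_activity(270, 270), 1))  # remembered target at 270: peak
print(round(delay_activity(270, 90), 1))   # target at 90: near baseline
```

In this caricature, a population of such cells, each with a different preferred direction, tiles remembered space the way orientation columns tile the visual field.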
In Goldman-Rakic's words, memory is an added-on feature of the representation system for visual space. Bearing out this interpretation are recordings from trials during which monkeys that were usually accurate made a mistake in their response, moving their eyes to the wrong place. The electrical data show that the particular neurons for that location were not highly active during the delay period, and so they failed to sustain the mental representation.
According to a current view, these neurons are organized in modules rather like the ocular dominance columns of the visual system. Several lines of research have established that the principal sulcus receives a great deal of its information about the outside world from the parietal cortex, which specializes in visual spatial information (as discussed in Chapter 7). The nerve tracts that project from the parietal lobe do in fact form a pattern of columns in the prefrontal cortex that alternates with columns for incoming signals from other regions. As in the visual system, each column is about half a millimeter wide.
These mental representations in the prefrontal cortex are too limited to be directly responsible for an animal's complex behavior. Goldman-Rakic and her colleagues believe that this representational knowledge does guide behavior in collaboration with other areas—particularly the parietal cortex—and that the larger network very probably represents the neural circuitry underlying spatial cognition in monkeys. Different parts of the network, and the connections among them, must be analyzed separately before the ensemble can be well understood as a network. A broad assortment of psychological studies has shown that when people are asked to perform any cognitive task, the prefrontal cortex invariably is activated; what remains to be discerned is which particular subdivisions of the area (visual or auditory or other) are involved. Increasingly specific testing, anatomical examination, and medical imaging of animals and human subjects are the tools that can provide this kind of information.
Meanwhile, noninvasive medical imaging of humans offers opportunities for the direct simultaneous study of physiology and mental functioning. In addition to NMR and PET scans, electroencephalographic studies can be quite useful, recording electrical activity at the scalp with great temporal precision. Recent EEG studies have shown that when a subject performs cognitive or judgment tasks that require keeping something in mind over a short period, a number of areas in the prefrontal cortex are active. When, on occasion, the subject makes an error, it appears that the network as a whole was not engaged.
NEUROTRANSMITTERS AND THE INFORMATION SYSTEM
In addition to the information-processing circuits arranged in neuronal modules and in columns of incoming nerve tracts, the brain is replete with other systems of input. In the prefrontal cortex, for example, nerve fibers containing the neurotransmitter dopamine are found in especially high concentration, and researchers have wondered for some time what role dopamine might play in prefrontal circuits of information. The evidence gathered on this point over the past few years has begun to make clear the enormous extent to which dopamine shapes not only our physical functioning in the world but also our ability to process new information, to associate ideas effectively, and even to maintain a sense of well-being in balance with realistic perceptions.
In the human prefrontal cortex, the nerve fibers containing dopamine are not scattered evenly throughout the six cerebral cell layers but are concentrated in the outermost layers and the
deep layers—that is, in layers 1, 5, and 6—and are less densely distributed in the middle layers. The cell bodies of these neurons are located relatively far away in the ventral tegmental area, a portion of the brainstem; they preferentially project their fibers to the frontal and prefrontal cortex. In addition, researchers have identified at least two distinct kinds of receptor sites for dopamine, and each has its own pattern in the layers of the cortex. The preponderance of the D-1 receptor fairly matches that of the dopamine-containing fibers: very high in the outermost layers and also considerable in the deep layers. The D-2 receptor, by contrast, shows a lower concentration throughout, with just a mild peak in layer 5.
In a test to see whether interference with the D-1 receptors would have any effect on cognitive function, Goldman-Rakic's research team injected a compound that blocks the D-1 receptor sites in the prefrontal cortex of monkeys trained in the delayed-response test described earlier. About 20 minutes after the injection, the animals showed an impairment of working memory, moving their eyes to the wrong location when the trial included a delay; but they responded correctly in a “sensory-guided” version of the task, in which the target light was left on as a guide. The D-1 receptors thus appear to be implicated in the efficiency of working memory.
A chemical compound developed for use in research that selectively stains neurons in the cerebral cortex bearing D-1 receptor sites has provided the Yale research team with an interesting lead. These neurons have been identified as pyramidal cells, the large principal cells that are the main element of cerebral cortex layer 6. The axons of these cells carry signals to another region—in this case, the thalamus (which plays an important role in the control of movement and forms part of the limbic system).
It appears from electron-microscopy studies that the dopamine receptors on these cells may modulate excitatory synapses, possibly from other pyramidal cells in the same or another region. Therefore, since dopamine acts directly on the output neurons of the prefrontal cortex—which are involved in processing, sorting, and assembling information about the outside world—the dopamine circuits can be considered a physical pathway by which this neurotransmitter can influence cognitive function. With each neuron bearing millions of spines on which dopamine synapses may act, a mechanism of this kind can have a pervasive effect, and even a slight deficiency or excess of dopamine could powerfully alter the ability of many neurons to integrate information from other regions of the brain. Goldman-Rakic and her colleagues are looking closely at the identified dopamine synapses to understand more precisely the mechanism by which dopamine may affect cognition.
The prefrontal cortex, with its importance for cognition, shows a form of dysfunction when tested in patients suffering from schizophrenia. (An often disabling mental illness, schizophrenia interferes with the capacity for logical thought and greatly disturbs the emotions and social behavior; see Chapter 4 for a discussion of current theories about the importance of dopamine levels in schizophrenia.) In experiments calling for cognitive tasks, which normally require the participation of the prefrontal cortex, schizophrenic patients show significantly lower rates of activity in this region of the brain. This does not mean that a disorder as complex and varied in form as schizophrenia can be explained as a simple failure of one part of the brain— particularly since the prefrontal cortex is known to be so richly interconnected with many other regions. But the findings that indicate a less active prefrontal cortex, which have been replicated in numerous studies, fit in well with other evidence suggesting that some dysfunction in a network of areas, including the prefrontal cortex, is implicated in schizophrenia.
Studies are under way to probe the state of working memory in schizophrenic patients as a way of learning more about the normal and impaired functioning of the prefrontal cortex. Meanwhile, rhesus monkeys treated in such a way as to mimic some of the deficits characteristic of schizophrenia are also being tested for working memory, thereby allowing more direct study of the neurobiology involved. One of the behavioral deficits that has been experimentally produced in monkeys is the inability to track a fast-moving target with the eyes. The deficit is not based in the visual or motor system; this much is clear, because the monkeys remain able to track targets moving more slowly. Instead, the problem seems to be cognitive, an inability to predict where the target will be in the next fraction of a second. This predictive aspect of eye movements, which falters in schizophrenic patients and in the experimentally treated animals, may well draw on the type of mental representations that the prefrontal cortex is largely occupied in assembling. The research being conducted in animals and humans is mutually helpful, offering the prospect over the next decade of significant advances in a neuroscientific account of the workings of the prefrontal cortex—including a cellular explanation of this area's memory functions. A view shared currently by Goldman-Rakic and many colleagues is that the main function of this greatly enlarged part of the brain, so recently evolved in the primate line, is to guide behavior by means of mental representations of stimuli, rather than by the stimuli themselves. Over the course of primate evolution, the advantages of this mode of mental functioning would have been considerable, greatly expanding the animal's options for varied and complex behavior.
WHAT KIND OF COMPUTER IS THIS?
The types of mental representation discussed above, such as the continuous monitoring of the spatial surround by the parietal lobes, illustrate a vital point that is often overlooked when comparisons are made between the human brain and the computer. The fact is that the human brain—or the brain of many other animals—is solving quite difficult computational problems at every moment, just in seeing, recognizing a voice, or moving in a coordinated fashion on four limbs, or two limbs, or two wings. Most of these problems are so complex that they have yet to be formulated in explicit terms by computer scientists, which is why machines that can perceive and move and communicate as animals do—and perform all these functions at once—are still largely the stuff of science fiction.
If computers are not really brains, what does it mean to call the brain a kind of computer? Terrence J. Sejnowski, whose work at the Salk Institute for Biological Studies in San Diego focuses on computer models of cognition and brain structure, answers this question by pointing to a simple device designed to do one thing optimally, and one thing only: play tic-tac-toe. This “computer,” built from electronic Tinkertoys at MIT's Artificial Intelligence Laboratory, is programmed with every possible position in the game. (These have been reduced, through mathematical operations that apply the principle of symmetry, to a subset of about 48.) The positions, each with its one optimal response, are encoded as the computer's memory. When presented with a particular position, the computer matches it to one in its subset and produces the correct response. By contrast, a digital computer would meet the challenge with a set of programmed instructions that it would run through recursively at each move to arrive at the optimal response.
The MIT device does not carry out a string of calculations or algorithms, the kind of task we generally think of a computer performing; instead, what it offers is essentially a “look-up table,” with the correct answer precomputed and readily available. To obtain swift access to that answer, however, one must present a problem that exactly matches one of the problems originally encoded in the computer's memory. Beyond that pre-encoded set, the computer cannot provide any correct answer—or even a partial answer—unlike a digital computer, which can be reprogrammed for new problems because of its more general mode of operation. Still, within the realm of its pre-encoded problems and responses, the “look-up table” is extremely fast and effective.
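In modern terms, the machine's strategy amounts to a dictionary lookup rather than a search. The board encoding and the handful of entries below are hypothetical stand-ins for the device's actual 48-position table; the point is only that the entire “computation” is a single table match.

```python
# A toy look-up-table player in the spirit of the Tinkertoy machine:
# each recognized position maps directly to one precomputed move.
# Boards are strings of 'X', 'O', '.' read left-to-right, top-to-bottom;
# these few entries are invented illustrations, not the real table.
LOOKUP = {
    ".........": 4,  # empty board -> take the center (square 4)
    "....X....": 0,  # opponent holds the center -> take a corner
    "X...O....": 8,  # extend along the diagonal
}

def lookup_move(board):
    """Return the precomputed response for a stored position,
    or None for any position that was never encoded --
    the table cannot generalize beyond its entries."""
    return LOOKUP.get(board)

print(lookup_move("........."))   # stored position: instant answer (4)
print(lookup_move("XOX.O.X.."))   # unencoded position: no answer (None)
```

No move is ever computed at play time; the cost was all paid in advance, when the table was built.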
This kind of device, however, requires a great deal of memory, since every significant aspect of each pre-encoded problem must be specified if the match is to be accurate. For the game of tic-tac-toe this is manageable; for chess, with its 10⁴⁰ possible game positions, or for real-life contexts in which the rules are less clear, it is impossible, at least at present. As a practical device, the look-up table is strictly limited. However, the principle of precomputing certain responses and being able to retrieve them with minimal additional effort appears to Sejnowski and others in his field as a likely clue to some of the workings of the brain. True, the abundant memory required by a look-up table was extremely expensive in the first computers and still poses a practical challenge today. But if the amount of memory were tremendously expanded, it would be possible to store many more solutions—in other words, to address many more and different kinds of problems.
It is hardly a revelation at this point that the human brain
exhibits just such a tremendous capacity to store information. With somewhere between a hundred billion and a trillion neurons, the human brain already looks fairly impressive—but what really expands its storage capacity far beyond anything we can yet envision on an engineer's drawing board is the brain's proliferation of synapses. Each neuron contains several thousand points at which signals can be transmitted. Even if the brain were to store information at the low average rate of one bit per synapse (in terms comparable to a digital code, the synapse would be either active or inactive), the structure as a whole could still build up vast stores of memory, on the order of 10^14 bits. Meanwhile, today's most advanced supercomputers command a memory of about 10^9 bits. The human brain, to use Sejnowski's phrase, is memory-rich by comparison.
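The capacity estimate above is a back-of-the-envelope multiplication, which can be made explicit. The figures are the round numbers used in the text, not measurements:

```python
# The text's estimate: roughly 10^11 neurons, each with several
# thousand synapses, storing about one bit per synapse.
neurons = 10**11              # ~a hundred billion
synapses_per_neuron = 10**3   # "several thousand", rounded down
bits = neurons * synapses_per_neuron
print(bits)                   # on the order of 10^14 bits

supercomputer_bits = 10**9    # the supercomputer memory cited above
print(bits // supercomputer_bits)   # roughly a 100,000-fold difference
```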
Of course, organization is crucial to managing such a vast resource, and the brain exhibits this feature at several levels, as discussed throughout this book. Research conducted on the simpler nervous system of invertebrates, as well as on nonhuman primates, other vertebrates, and humans, has indicated how learning brings about structural changes in nerve cells and how the neurons in turn form regions, which take part in networks. The networks are organized into distributed systems, which collaborate with other systems, both sensory and associative, to produce the total working effect.
Memory itself is organized so as to take advantage of these many levels of information: it appears to be arranged along associative paths, by the principle of contiguity. That is, the brain associates bits of information in such a way that we can recall items either on their own or by being “reminded” of them by a cue. The name of an acquaintance may come to mind when needed, or we may search for it under one heading or another: the name sounded like that of another friend, or the person looked like a former co-worker, or the meeting took place at the lunch following a difficult business negotiation. Considering the brain in purely physical terms, researchers have suggested that another form of contiguity may apply as well, that is, the simple proximity that builds up into maps. It may be that neurons close enough to one another to be activated together keep some trace of that contiguity as part of their bit of information.
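The cue-based recall described above can be caricatured as a data structure in which the same item is reachable through any of several associations. The name and the cues below are invented for illustration; the point is only that a single stored item carries multiple retrieval paths:

```python
# Sketch of associative recall: one remembered name, several cues,
# any of which leads back to it. All entries are hypothetical.

memory = {
    "Dr. Ramirez": {
        "sounds-like": "Ramsey",                      # a similar name
        "looks-like": "former co-worker",             # an appearance cue
        "context": "lunch after the negotiation",     # an episodic cue
    }
}

# Invert the associations so that any cue retrieves the name.
by_cue = {cue_value: name
          for name, cues in memory.items()
          for cue_value in cues.values()}

print(by_cue["lunch after the negotiation"])   # Dr. Ramirez
print(by_cue["Ramsey"])                        # Dr. Ramirez
```

A real associative memory is of course distributed rather than tabular, but the sketch captures the functional claim: recall can start from the item itself or from any contiguous association.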
Just what the memory-forming mechanisms might be, at a physiological level, has long puzzled psychologists as well as neurobiologists. Evidence of several kinds is gathering, however, in support of a model first suggested in 1949 by Donald Hebb, that a memory forms as a result of at least two kinds of activity taking place at a synapse simultaneously. The activities would have to include both the pre- and postsynaptic elements, the neuron transmitting the signal and the one receiving it. Hebb reasoned that the strength of the signal received in the postsynaptic cell would depend on the interaction of many details—the amount of transmitter released, the presence or absence of neuromodulators that affect the postsynaptic cell's excitability, the number of receptor sites on the receiving cell, and other such variables. Whatever the specifics, the underlying principle would be that information is stored as a result of two or more biochemical factors coming together in time, at the same instant, and in space, at the same synapse.
Physical evidence that indirectly supports this model has come recently from Eric Kandel's work with Aplysia. Hebb postulated two active elements (the pre- and postsynaptic terminals), but the nervous system in the marine snail appears to include a third element, the facilitating neuron that enhances the excitability of the sensory neuron. The Hebbian principle still applies, however, to the extent that the variables have to meet in time and space at a synapse.
In mammals, an example that conforms even better to the Hebbian model is found in part of the hippocampus of rats. The particular area, designated CA-3, contains about half a million neurons with recurrent connections—in other words, many of their axons lead back into the same population of neurons. Some axons also lead into the adjacent area CA-1. At the synapses in this area, both among CA-3 cells and between CA-3 and CA-1 cells, the neurotransmitter glutamate is released. It binds to two types of receptors: at one type of receptor site the glutamate slightly lowers the excitability threshold of the neuron, but at the other the binding of glutamate does not in itself affect the cell. Another simultaneous event is required: depolarization of the receiving cell, perhaps by other synapses. When this occurs together with the binding of glutamate, the cell membrane becomes momentarily permeable to ions—particularly calcium ions, which are important for bringing about persistent changes in the structure of the cell.
This receptor system illustrates the principle of contiguity outlined by Hebb: the binding of glutamate to a particular kind of receptor site and the depolarization of the postsynaptic cell must occur simultaneously, or at least within the same 20 to 50 thousandths of a second, for calcium ions to enter the cell and induce structural changes.
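The coincidence requirement just described—presynaptic glutamate release and postsynaptic depolarization falling within the same brief window—can be sketched as a simple update rule. The window width and the size of the strength increase below are illustrative values, not measured ones:

```python
# Sketch of the Hebbian coincidence rule: a synapse strengthens only
# when the presynaptic and postsynaptic events fall within one brief
# time window. Parameter values are illustrative, not physiological.

COINCIDENCE_WINDOW = 0.035   # seconds (roughly the 20-50 ms in the text)
STRENGTH_INCREMENT = 0.1     # arbitrary potentiation step

def hebbian_update(weight, t_pre, t_post):
    """Return the new synaptic strength: potentiated only when the
    pre- and postsynaptic events coincide within the window."""
    if abs(t_pre - t_post) <= COINCIDENCE_WINDOW:
        weight += STRENGTH_INCREMENT
    return weight

w = 1.0
w = hebbian_update(w, t_pre=0.100, t_post=0.120)  # 20 ms apart: potentiated
w = hebbian_update(w, t_pre=0.500, t_post=0.700)  # 200 ms apart: no change
print(round(w, 2))  # 1.1
```

Either event alone leaves the synapse unchanged; only their conjunction in time and place—Hebb's principle of contiguity—alters the stored strength.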
ASSEMBLING A BRAIN IN THE LABORATORY
Hebbian synapses have also been demonstrated in another kind of laboratory, where computer scientists and engineers have built them into a computer chip. The device is a simple one, with only 16 synapses, but it performs Hebbian learning quite efficiently, at the rate of a million times per second. Newer chips have already been developed to represent more realistic neurons, with many thousands of synapses; and technology to represent the connections between such neurons will make the assembly of something more nearly resembling a working brain a little easier to envision. Such a device will have to combine analog signals, like those propagated within neurons, and digital signals, the off or on impulses transmitted from one neuron to another. It will not be simply a larger, or even an unbelievably faster, version of today's familiar computer.
An artificial brain of this kind could be invaluable for further research along two main lines. For one, it could be set to work on some of the more difficult problems in an emerging field that might be called “artificial perception”: problems of computer vision and of speech recognition that can be delineated by current devices but that cannot be resolved by them in a practical way. For a second main line of research, this kind of artificial brain can offer an advanced testing ground for neuroscientists' ideas about how the brain functions. Theoretical models of memory, in particular, cannot be tested adequately on a digital-computer simulation of a few hundred model neurons, because the living brain works on such an enormously larger scale. But a computerized circuit of several million model neurons, with information circulating in real time, could
yield a whole new order of information about such circuits in the living animal.
The field of artificial perception already boasts chips developed at the California Institute of Technology that are capable of much of the sensory processing performed just outside the brain by the retina, for example, and by the cochlea, the spiral passage of the inner ear whose hair cells respond to vibrations by sending impulses to the auditory nerve. Now in development as well are chips to simulate some of the functions of the visual cortex; others, with some of the memory-storing capacity of the hippocampus, are being scaled up, closer to the dimensions of a living system.
But more time and knowledge are needed to produce a device that can successfully mimic the information processing of the five senses and of short-term and long-term memory, and that can, moreover, integrate these systems into a unit that functions as a whole with respect to the outside world. This is not to say that progress has not occurred: early computers of the 1950s carried out only a few thousand instructions per second (a speed matched by today's pocket calculators), whereas the fastest of the supercomputers in use today can perform billions of operations per second. Still, this rate of processing, at 10^9 or so operations per second, is far from that of the human brain, in which an estimated 10^14 synapses are each active about 10 times per second—giving a total of 10^15 operations per second.
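The comparison above reduces to one line of arithmetic, made explicit here with the round figures from the text:

```python
# The text's throughput estimate: ~10^14 synapses, each active
# about 10 times per second.
synapses = 10**14
activations_per_second = 10
brain_ops = synapses * activations_per_second   # 10^15 operations/s

supercomputer_ops = 10**9   # the supercomputer rate cited above
print(brain_ops // supercomputer_ops)           # ~a million-fold gap
```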
An interesting constraint that confronts computer designers who work with the current top speeds is the simple, unchanging limitation posed by the speed of light. Signals simply cannot travel faster than light, which covers about 1 foot per billionth of a second (10^-9 second); to achieve the effect of speeds higher than this, the computer must be reduced in size to less than a cubic foot. This reduction is made possible by duplicating the central processor many times, even thousands of times, within the same computer, so that signals have less distance to travel. Even so, extrapolating from the recent rate of increase and from today's highest known speeds of computer processing, Terrence Sejnowski estimates that an artificial device approximating the human brain cannot be expected before at least the year 2015.
This prediction should not be considered discouraging—far from it. For such a project to be within sight at all is the clearest possible sign of the progress of neuroscience, gaining impetus as it does from an increasing number of fields that are related in some way to its investigations. Now not only the biological sciences, medicine, biochemistry, pharmacology, and psychology have an interest in improving our understanding of the brain's functioning; the computer sciences, physics, and mathematics also contribute to such models and stand to gain much from their continued exploration and testing. And along the way toward the assembly of a fully functioning artificial brain, it should become increasingly possible to construct devices that satisfactorily replicate some of the principles at work in the human brain. Although the devices probably would not resemble a brain in their material form any more than an airplane resembles a bird, they will be successful if they can show some of the brain's operating principles adapted to their own form, just as an airplane carries out, in mechanical translation, some of the aerodynamic principles of natural flight.
THE BENEFITS OF AN ARTIFICIAL BRAIN
Of course, the brain cannot ever be completely characterized in terms of a computer because in addition to all its computing faculties it possesses the properties of a biological organ in a living system. But, points out Gerald Edelman of the Neurosciences Institute at Rockefeller University, computers can indeed do something that, until recently, only a brain could do: they can carry out logical functions. Today, a computer can address any challenge or problem that can be described in a logical formula. This still leaves unexplored vast areas of human experience, such as perception; but as described earlier in this chapter, computer and mathematical modeling on one side, and more detailed neurobiological examination on the other side, are making inroads in this area too.
Edelman and his colleagues have used an approach they call synthetic neural modeling to build an automaton that is able to explore its environment by simulated vision and touch; moreover, it can categorize objects on the basis of its perceptions, and its responses draw on previous experiences with
similar objects. Darwin III (the third generation of its kind) is a robot whose nervous system is built of about 50,000 cells of different types. The signals transmitted at its approximately 640,000 synaptic junctions enable Darwin III to control the functioning of its one eye and its multijointed arm. By analogy with the way living brains enter the world, Darwin III has no specific information built into its systems about the objects it may encounter in its environment. The nervous system is pre-encoded only to the extent that the devices for perception are made to detect certain features, such as light or movement or rough texture.
An important principle of Darwin III's nervous system is that the strength of the synaptic connections can increase selectively with greater activity when that activity leads to an adaptive end. What is “adaptive” for Darwin III is defined by arbitrary values built into its programming. For example, the built-in principle that light is “better” than no light serves to direct and refine the system's eye movements toward a target. Just as in living neurons, the enhanced connection provides a stronger response the next time that particular neural pathway is active.
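The value-guided strengthening just described differs from plain Hebbian learning in that activity alone is not enough: the activity must also lead to an outcome the built-in value scheme rates as adaptive. A sketch of that conjunction, with invented names and numbers (this is not Edelman's actual implementation):

```python
# Sketch of value-guided synaptic strengthening, as described for
# Darwin III: an active connection is reinforced only when its
# activity leads to an adaptive outcome under a built-in value such
# as "light is better than no light." All values are illustrative.

LEARNING_RATE = 0.2

def value_update(weight, active, adaptive):
    """Strengthen a connection only when it was active AND the
    resulting behavior satisfied the built-in value criterion."""
    if active and adaptive:
        weight += LEARNING_RATE * weight   # selective amplification
    return weight

w = 1.0
w = value_update(w, active=True, adaptive=True)    # eye moved toward light
w = value_update(w, active=True, adaptive=False)   # moved away: unchanged
print(round(w, 2))  # 1.2
```

Because only value-satisfying activity is amplified, repeated practice biases the system toward the behaviors its arbitrary values define as "better," which is how the eye movements become progressively more accurate.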
This selective strengthening of connections is reminiscent of the competition among synapses in the developing brain (as discussed in Chapter 6). Together with the ability to categorize, it means that the system can produce behaviors that we commonly call “recognition,” for instance, or “association.” At present, Darwin III can turn its head to track a moving object with its eye; it can extend its arm to trace the contours of an object; and, alternatively, if the stimulus is noxious, it can swat the object away. In all these responses the system shows increasing accuracy with practice, as the relevant synapses are strengthened. Eventually, such a system should be able to teach itself to apply both visual and motor abilities to a complex task—for instance, distinguishing a particular object or kind of object, and picking it out with the arm from among many others.
Although Darwin III cannot represent the nervous system of living animals in a highly detailed way, its synapses and circuits provide a much-needed testing ground for ideas about what takes place inside the real thing that makes those 3 pounds of semisoft tissue the most complex information-processing system
ever known. Perhaps computers can never be brains in the full sense of serving as the nervous center of a biological system, but they can be designed with increasing success to carry out some of the functions that are routinely managed by a living brain. Gerald Edelman, like Terrence Sejnowski, believes that the prospects for building more complex “perception machines” are good—and the benefits in both intellectual and economic terms will be enormous. Most important of all would be the expanded opportunities for an understanding of higher brain functions—those that make us human—to be gained by using the computer not so much as a model of the brain, but as a tool for exploring it.
Chapter 8 is based on presentations by Gerald Edelman, Patricia Goldman-Rakic, Eric Kandel, and Terrence Sejnowski.