
Science at the Frontier (1992) / Chapter Skim

9 Neural Networks: Computational Neuroscience: A Window to Understanding How the Brain Works
Pages 199-232



From page 199...
... The paradigm that now governs the burgeoning field of computational neuroscience has strong roots among both theoreticians and experimentalists, Koch pointed out. With the proviso that neural networks must be designed with some fidelity to neurobiological constraints, and the caveat that their structure is analogous to the brain's in only the very broadest sense, more and more traditional neuroscientists are finding their own research questions enhanced and provoked by neural net modeling.
From page 200...
... He addressed the Frontiers symposium on the topic "Visual Motion: From Computational Analysis to Neural Networks and Perception" and described to his assembled colleagues some of the "theories and experiments I believe crucial for understanding how information is processed within the nervous system." His enthusiasm was manifest and his speculations about how the brain might work provocative: "What is most exciting about this field is that it is highly interdisciplinary, involving areas as diverse as mathematics, physics, computer science, biophysics, neurophysiology, psychophysics, and psychology. The Holy Grail is to understand how we perceive and act in this world: in other words, to try to understand our brains and our minds." Among those who have taken up the quest are the scientists who gathered for the Frontiers symposium session on neural networks, most of whom share a background of exploring the brain by looking at the visual system.
From page 201...
... Fidelity to biology has long been a flashpoint in the debate over the usefulness of neural nets. Philosophers, some psychologists, and many in the artificial intelligence community tend to devise and to favor "top-down" theories of the mind, whereas working neuroscientists who experiment with brain tissue approach the question of how the brain works from the "bottom up." The limited nature of top-down models, and the success of neuroscientists in teasing out important insights by direct experiments on real brain tissue, have swung the balance over the last three decades, such that most modelers now pay much more than lip service to what they call "the biological constraints." Also at the Frontiers symposium was William Newsome.
From page 202...
... Computational neuroscience has developed such a momentum of intriguing and impressive insights and models that the formerly lively debate (Churchland and Churchland, 1990; Searle, 1990) over whether or not the brain is a computer is beginning to seem somewhat academic and sterile.
From page 203...
... "A new field is emerging," he continued: "the study of how computations can be carried out in extensive networks of heavily interconnected processing elements, whether these networks are carbon- or silicon-based."

NEURAL NETWORKS OF THE BRAIN

In the 19th century, anatomists looking at the human brain were struck by its complexity (the common 20th-century metaphor is its wiring).
From page 204...
... Adams, a biophysicist, designs models and conducts experiments to explore the details of how the basic electrical currency of the brain is minted in each individual neuron (Figure 9.1). He has no qualms referring to it as a bottom-up approach, since it has become highly relevant to computational neuroscience ever since it became appreciated "that neurons do not function merely as simple logical units" or on-off switches as in a digital computer.
From page 205...
... Each type of electrical activity is caused by "special protein molecules, called ion channels, scattered throughout the membrane of the nerve cells," said Adams. These molecules act as a sort of tiny molecular faucet, he explained, which, when turned on, "allows a stream of ions to enter or leave the cell" (Figure 9.2).
From page 206...
... Since there are many more synaptic connections ("choice points") in a human brain network during its early development than there are genes on human chromosomes, at the very least genes for wiring must specify not synapse-by-synapse connections, but larger patterns. The evolution of the nervous system over millions of years began with but a single cell sensing phenomena near its edge, and progressed to the major step that proved to be the pivotal event leading to the modern human brain: the development of the cerebral cortex.
From page 207...
... First, it can emit pulses more frequently. Second, when the action potential reaches the synaptic connections that the axon makes with other cells, it can either excite the target cell's electrical activity or inhibit it.
From page 208...
... The result is the same in either case: when the charge inside the cell decreases to a certain characteristic value, the sodium gates start to open and the neuron depolarizes further. Once begun, this sequence continues throughout the axon and all of its branches, and thereby transforms the receiving nerve cell into a transmitting one, whose inherent electrical signal continues to all of the other cells connected to it.
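The threshold-and-reset behavior described above can be caricatured in a few lines of code. The sketch below is a leaky integrate-and-fire toy model; all constants are invented for illustration and bear no relation to real sodium-channel kinetics:

```python
# Leaky integrate-and-fire caricature of the threshold behavior described
# above; all constants are illustrative, not measured values.
V_REST, V_THRESH = -70.0, -55.0   # resting and threshold potentials, mV
LEAK, INPUT, DT = 0.1, 2.0, 1.0   # leak rate (1/ms), input drive (mV/ms), step (ms)

v, spikes = V_REST, []
for t in range(100):
    v += DT * (LEAK * (V_REST - v) + INPUT)  # drift toward rest, plus input drive
    if v >= V_THRESH:                         # "the sodium gates start to open"
        spikes.append(t)                      # an action potential fires...
        v = V_REST                            # ...and the cell resets
print(len(spikes))                            # several spikes over 100 ms
```

With a steady input the toy cell charges, fires, resets, and repeats, which is the regular pulse train the text describes.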
From page 209...
... Nevertheless," he asserted, "it is clear that unless such details are just right, disaster will ensue." Any more global theory of the brain, or of the higher or emergent functions it expresses, must encompass theories that capture these lower-level processes. Adams concluded by giving "our current view of the brain" as seen from the perspective of an experimental biophysicist: "a massively parallel analog electrochemical computer, implementing algorithms through biophysical processes at the ion-channel level."

FROM COMPUTERS TO ARTIFICIAL INTELLIGENCE TO CONNECTIONISM

Models: Attempts to Enhance Understanding

The complexity of the brain's structure and its neurochemical firing could well stop a neural net modeler in his or her tracks.
From page 210...
... Throughout this history, an underlying issue persists: what is a model, and what purpose does it serve? Much of the criticism of computer models throughout the computational era has included the general complaint that models do not reflect the brain's complexity fairly, and current neural networkers are sensitive to this issue.
From page 211...
... Most of the current neural net modelers developed their outlooks under the paradigm suggested above; to wit, the brain's complexity is best approached not by searching for some abstract unifying algorithm that will provide a comprehensive theory, but rather by devising models inspired by how the brain works: in particular, how it is wired together, and what happens at the synapses. Pagels included this movement, called connectionism, among his new "sciences of complexity": scientific fields that he believed use the computer to view and model the world in revolutionary and essential ways (Pagels,
From page 212...
... Canadian neuroscientist Donald Hebb in 1949 produced a major study on learning and memory that suggested neurons in the brain actually change, strengthening through repeated use, and that therefore a network configuration could "learn," that is, be enhanced for future use. Throughout the 1950s the competition for funding and converts continued between those who thought fidelity to the brain's architecture was essential for successful neural net models and those who believed artificial intelligence need not be so shackled.
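Hebb's suggestion, often summarized as "cells that fire together wire together," can be sketched in a few lines. The update rule and learning rate below are illustrative choices, not Hebb's own formulation:

```python
# Toy Hebbian learning rule: a connection weight grows in proportion to
# coincident pre- and postsynaptic activity (learning rate is an assumption).
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen a connection when pre and post fire together."""
    return w + lr * pre * post

w = 0.0
for _ in range(10):                              # repeated co-activation...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 6))                               # ...strengthens the synapse to 1.0
```

Repeated use strengthens the connection, which is exactly the property that lets a network configuration "learn" and be enhanced for future use.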
From page 213...
... "The analogy between the brain and a serial digital computer is an exceedingly poor one in most salient respects, and the failures of similarity between digital computers and nervous systems are striking," wrote Churchland et al.
From page 214...
... for Modeling Neural Systems

Critiques like Crick's of the misleading influence the serial computer has had on thinking about the brain do not generalize about other types of computers, nor about the usefulness of the von Neumann machines as a tool for neural networkers. Koch and Segev wrote that in fact "computers are the conditio sine qua non for studying the behavior of the model[s being developed]
From page 215...
... 3).

Accounting for Complexity in Constructing Successful Models

Another of the traditional ways of thinking about brain science has been influenced by the conceptual advances now embodied in computational neuroscience.
From page 216...
... There's such an intimate connection," he stressed, "between how the brain does the computation and the computations that it does, that you can't understand one without the other." Almost all modern neuroscientists concur.

The Parallel, Distributed Route to New Insights

By the time Marr died in 1981, artificial intelligence had failed to deliver very compelling models about global brain function (in general, and memory in particular)
From page 217...
... Hopfield described his network, said Koch, "in a very elegant manner, in the language of physics. He used clear and meaningful terms, like 'state space,' and 'variational minimum,' and 'Lyapunov function.' Hopfield's network was very elegant and simple, with the irrelevancies blown away." Its effect was galvanic, and suddenly neural networks began to experience a new respectability.
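The elegance Koch describes rests on Hopfield's energy (Lyapunov) function: each asynchronous unit update can only lower, or leave unchanged, the network's energy, so the dynamics must settle into a stable state such as a stored memory. A minimal sketch with one stored pattern, an invented size of eight units, and a random update order:

```python
import numpy as np

# Toy Hopfield network: one stored pattern, asynchronous updates, and the
# Lyapunov ("energy") function E = -1/2 s^T W s, which never increases.
rng = np.random.default_rng(0)
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # invented 8-unit memory
W = np.outer(pattern, pattern).astype(float)       # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                           # no self-connections

def energy(s):
    """Hopfield's Lyapunov function for state s."""
    return -0.5 * s @ W @ s

s = pattern.copy()
s[:3] *= -1                                        # corrupt three of eight bits
energies = [energy(s)]
for i in rng.permutation(len(s)):                  # asynchronous unit updates
    s[i] = 1 if W[i] @ s >= 0 else -1
    energies.append(energy(s))

print(np.array_equal(s, pattern))                  # the corrupted memory is repaired
print(all(b <= a for a, b in zip(energies, energies[1:])))  # energy never rises
```

The monotonically falling energy is the "variational minimum" language Koch alludes to: the network rolls downhill in state space until it reaches the stored pattern.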
From page 218...
... The real promise of neural networks is manifest when a model is established to adjust to and learn from the experimental inputs and the environment; one then examines how the network physically modified itself and "learned" to compute competitively with the benchmark brain.

Motion Detectors Deep in the Brain

As Koch explained to the symposium scientists: "All animals with eyes can detect at least some rudimentary aspects of visual motion....
From page 219...
... , visual information clearly undergoes analysis for motion cues, since, he said, "80 percent of all cells in the MT are directionally selective and tuned for the speed of the stimulus." One hurdle to overcome for effectively computing optical flow is the so-called aperture problem, which arises because each cortical cell only "sees" a very limited part of the visual field, and thus "only the component of velocity normal to the contour can be measured, while the tangential component is invisible," Koch said. This problem is inherent to any computation of optical flow and essentially presents the inquirer with infinitely many solutions.
From page 221...
... "In fact, the smoothness constraint is something that can be learned from the environment," according to Koch, and therefore might well be represented in the visual processing system of the brain, when it is considered as a product of evolution. Ullman clarified that the smoothness assumption "concerns smoothness in space, not in time, that nearby points on the same object have similar velocities." Equipped with these assumptions and modeling tools (the aperture problem, the smoothness constraint, and a generic family of motion algorithms), Koch and his colleagues developed what he described as a "two-stage recipe for computing optical flow, involving a local motion registration stage, followed by a more global integration stage." His first major task was to duplicate "an elegant psychophysical and electrophysiological experiment" that supported the breakdown of visual processing for motion into these two stages and established a baseline against which he could measure the value of his own model.
From page 222...
... The monkeys performed while electrodes in their brains confirmed what was strongly suspected: the presence of motion-detecting neurons in the V1 and MT cortex regions. It was in this context that Koch and his colleagues applied their own neural network algorithm for computing motion.
From page 223...
... However, since our brains are fooled by this anomaly, a neural network that makes the same mistake is probably doing a fairly good job of capturing a smoothness constraint that is essential for computing these sorts of problems. Further, the algorithm has to rapidly converge to the perceived solution, usually in less than a fifth of a second.
From page 224...
... We still cannot perform these tasks at a satisfactory level outside the human brain." Ullman described to the symposium scientists experiments that point to some fairly astounding computational algorithms that the brain, it must be inferred, seems to have mastered over the course of evolution. Once again, the actual speed at which the brain's component parts, the neurons, fire their messages is many orders of magnitude slower than the logic gates in a serial computer.
From page 225...
... To try to fathom how two of the brain's related but distinct visual subsystems may interact, Sejnowski and Lisberger constructed a neural network to model the smooth tracking and the image stabilization systems (Figure 9.4). "Primates," said Sejnowski, "are particularly good at tracking things with their eyes.
From page 226...
... The flocculus also receives visual information from the retina, delayed for 100 milliseconds by the visual processing in the retina and visual cortex. The image velocity, which is the difference between the target velocity and the eye velocity, is used in a negative feedback loop to maintain smooth tracking of moving targets.
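The negative feedback loop just described, with its 100-millisecond visual delay, can be simulated in a few lines. The gain, time step, and target speed are invented, and this is a caricature of the loop's logic, not of the actual flocculus circuitry:

```python
# Delayed negative-feedback pursuit: image velocity (target minus eye velocity)
# is sensed after a 100 ms processing delay and drives the eye toward the
# target's speed. Gain and time step are illustrative assumptions.
TARGET_V = 10.0    # target velocity, deg/s (invented)
DELAY = 10         # 100 ms visual delay at a 10 ms time step
GAIN = 0.05        # feedback gain per step (invented)

eye_v = [0.0] * (DELAY + 1)                   # eye at rest before pursuit begins
for t in range(DELAY, DELAY + 600):
    image_v = TARGET_V - eye_v[t - DELAY]     # delayed retinal-slip signal
    eye_v.append(eye_v[-1] + GAIN * image_v)  # slip nudges the eye toward the target
print(round(eye_v[-1], 2))                    # settles near 10.0: zero retinal slip
```

At equilibrium the image velocity is zero, the eye matches the target, and the loop is quiet; too large a gain relative to the delay would instead make such a loop oscillate, which is why the delay matters to the circuit's design.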
From page 227...
... "In tracking a moving object," Sejnowski explained, "the visual system must provide the oculomotor system with the velocity of the image on the retina." A visual monitoring, feedback, and analysis process can be highly effective, but it proceeds only as fast as all of the neurons involved can fire, one after another. The event begins with photons of light hitting the rods and cones in the retina; the signal is then transformed through visual system pathways to the cerebral cortex, thence to a part of the cerebellum called the flocculus, and eventually down to the motor neurons, the so-called output, whose function is to command the eye's response by moving the necessary muscles to pursue in the right direction.
From page 228...
... Along the way, and this is fascinating, it makes a lot of the kinds of mistakes that real human children make in similar situations."

THE LEAP TO PERCEPTION

At the Frontiers symposium, Sejnowski also discussed the results of an experiment by Newsome and co-workers (Salzman et al., 1990), which Sejnowski said he believes is "going to have a great impact on the future of how we understand processing in the cortex." Working with monkeys trained to indicate which direction they perceived a stimulus to be moving, Newsome and his group microstimulated the cortex with an electrical current, causing the monkeys to respond to
From page 229...
... Also, algorithms need to remain within the time constraints of the system in question: "Since perception can occur within 100 to 200 milliseconds, the appropriate computer algorithm must find a solution equally fast," he said. Finally, individual neurons are "computationally more powerful than the linear-threshold units favored by standard neural network theories," Koch explained, and this power should be manifest in the algorithm.
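The "linear-threshold unit" Koch mentions is the simplest standard model neuron: output 1 if the weighted sum of inputs reaches a threshold, else 0. A two-input unit wired by hand as an AND gate, for illustration:

```python
# A linear-threshold unit, the standard building block Koch contrasts
# with real neurons (weights and threshold here are chosen by hand).
def ltu(weights, inputs, threshold):
    """Output 1 iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, ltu([1, 1], [a, b], threshold=2))  # fires only for inputs 1,1
```

A single such unit can compute AND or OR but not XOR, since it can only draw one linear boundary through its input space; that limitation is one concrete sense in which real neurons, with their rich channel dynamics, are "computationally more powerful."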
From page 230...
... Give the neural networkers the same time frame, and it seems altogether possible that their network creations will grow with comparable, and from here nearly unimaginable, power and creativity. Wrote Koch and Segev, "Although it is difficult to predict how successful these brain models will be, we certainly appear to be in a golden age, on the threshold of understanding our own intelligence" (Koch and Segev, 1989, p.
From page 231...
... 1982. Neural networks and physical systems with emergent collective computational abilities.
From page 232...
... 1988. Computational neuroscience.

