ATTENTIONAL AND PERCEPTUAL MECHANISMS
Pages 181-248



From page 181...
... The receptive fields of visual neurons in animals and man have been investigated mainly by three research methods: the extent and organization of the retinal areas projecting to individual neurons were determined, the transformation of the receptive-field organization was studied at different levels of the central visual system, and indirect estimates of the size of receptive fields in man were obtained and correlated with results from animal experiments. The first method was inaugurated by Hartline,11,12 who defined a receptive field in the frog as that area on the retina within which illumination activated or inhibited an optic-nerve fiber.
From page 182...
... Direct measurements of receptive fields of a small number of human cortical cells were reported by Marg et al.23 They were in extrafoveal regions and appeared to have ill-defined borders. Hermann grid stimulation of concentric-field neurons in the lateral geniculate nucleus and primary visual cortex of the cat shows strongly
From page 183...
... In the fovea, receptive fields are ...; consequently, their inhibitory surrounds have a similar effect (see example in b2)
From page 184...
... receptive-field axes. Also, less response diminution is found at grid intersections, except in large receptive fields.26 RECEPTIVE-FIELD ESTIMATION BY APPARENT MOTION Wertheimer's apparent motion,29 elicited by two successive light stimuli presented at different loci in the visual field, was used to estimate the size of receptive fields for movement perception.
From page 185...
... The "receptive fields" for Wertheimer's apparent motion increase toward the periphery of the eye at the same rate as the receptive-field centers determined by the Hermann grid. Both measures show an increase by a factor of two between 20 and 60 deg of retinal eccentricity.
From page 186...
... [Figure: two regression lines, y = 0.64x + 18.6 and y = 0.58x + 7 ("optimal movement"); abscissa, horizontal distance from fixation point in degrees] FIGURE 4 Maximal angular distances for apparent motion as a function of retinal eccentricity (adapted from Spillmann28)
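The two regression lines recoverable from the Figure 4 residue can be checked against the factor-of-two growth claimed for the 20-60-deg range. A minimal sketch; which experimental condition each line belongs to, and the ordinate units, are not recoverable from the scan.

```python
# The two regression lines read from the Figure 4 residue, relating
# maximal angular distance for apparent motion to retinal eccentricity
# x in degrees. Which condition each line describes is not recoverable
# from the scan, so they are labeled neutrally here.

def line_a(x):
    return 0.64 * x + 18.6

def line_b(x):
    return 0.58 * x + 7.0

# Both lines grow by roughly a factor of two between 20 and 60 deg,
# matching the claim made for the Hermann-grid and apparent-motion
# measures in the text.
print(round(line_a(60) / line_a(20), 2))  # 1.82
print(round(line_b(60) / line_b(20), 2))  # 2.25
```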
From page 187...
... [Figure axis: distance from the fovea, 10-70 deg] FIGURE 5 Comparison of human receptive fields and field centers for contrast vision (Hermann grid illusion), apparent motion (beta movement, phi phenomenon)
From page 188...
... Central spots appear brighter than adjacent white areas if one views the pattern freely. As in the Hermann grid, the illusion becomes less apparent when the central area is fixated.
From page 189...
... Instead of the brightness enhancement, a grayish cross (x) emerges, connecting the edges of the apparent square along the diagonals.
From page 190...
... . The more elaborate neuronal systems of the visual cortex showing transformations from concentric to oblong and complex field organizations appear to be less involved in the Hermann grid illusion.
From page 191...
... The Ehrenstein Illusion and Its Dependence on Oriented Lines Since 1870, the Hermann grid illusion has been explained by simultaneous contrast of bright and dark areas that, according to Baumgartner, stimulate concentric receptive fields of visual neurons. Ehrenstein's brightness illusions, which appear within oriented line patterns in the absence of contrasting planes, cast considerable doubt on this explanation by contrast alone.8 Brightness enhancement in the center of radially converging lines can hardly be explained on the basis of circular receptive fields.
From page 192...
... The rather complex transformation of receptive fields at various cerebral levels and the role of eye movements and of inhibition within the field center cannot be discussed here. We mention only two findings: Richards25 has described modifications of area summation in man during accommodation and convergence that he explains by plasticity of receptive fields.
From page 193...
... These neuronal line detectors are present in newly born kittens, before any visual experience, but deteriorate when contrast patterns are excluded from vision during the first months of life.30 Thus, visual learning apparently maintains and facilitates visual function in the cortex during the early periods of life. Short- and long-term visual memory not only are necessary for acquiring form recognition, but also are prerequisites for the normal function and early development of innate neuronal coordination.
From page 194...
... Estimates of their spatial extent were derived from threshold measurements for simultaneous contrast and apparent motion. Diameters of receptive-field centers in the human fovea, when measured with Hermann grids of different bar width, range from 25 μ to 30 μ (5-10 min of arc)
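The angular estimates can be converted to retinal distance to check the micrometer figures. The scale factor, about 0.29 mm of retina per degree of visual angle in the human eye, is a standard approximation assumed here, not a value given in the text.

```python
# Converting the angular field-center estimates to retinal distance.
# The scale factor (~0.29 mm of retina per degree of visual angle in
# the human eye) is a standard approximation assumed here, not a value
# given in the text.

MM_PER_DEG = 0.29
UM_PER_MIN = MM_PER_DEG * 1000 / 60  # micrometers per minute of arc

for minutes in (5, 10):
    print(minutes, "min of arc ->", round(minutes * UM_PER_MIN, 1), "um")
```

On this approximation, 5 min of arc corresponds to about 24 μ, in line with the lower diameter quoted above, while 10 min corresponds to closer to 48 μ.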
From page 195...
... The possible interaction between memory and neuronal convergence within the visual system is discussed for the example of apparent motion. The term "perceptive fields" is proposed for the subjective correlates of receptive fields estimated in human vision.
From page 196...
... Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.
From page 197...
... Rutkin. Receptive fields of cells in the human visual cortex.
From page 198...
... The methods and results have been described in detail elsewhere.2,15 It is easily proved that a great deal of information from a visual stimulus gets into the subject's very-short-term visual memory; the information is lost to recall because later processes are unable to use it.
From page 199...
... The kinds of data that require the concept of a recognition buffer [Figure components: V STM, V LTM, RECOG STM, V-M LTM, M LTM] FIGURE 1 Model of visual information processing. Squares indicate short-term memories, rectangles indicate long-term memories, and triangles indicate scan components that transform signals from one modality into another.
From page 200...
... In principle, although not in detail, the auditory scan is exactly analogous to the visual scan. The auditory scan selects some contents of auditory memory (e.g., the sound representation of one letter)
From page 201...
... The rate of subvocal rehearsal can be measured,6,10 and it is very interesting to note that it is identical with the rate of vocal rehearsal. DISTINCTIONS BETWEEN SHORT- AND LONG-TERM MEMORY Neural Distinctions A short-term memory is a patch of neural tissue that is used over and over again for every appropriate input to the modality.
From page 202...
... The three triangle components each use an intermodality long-term memory. The visual scan is served by an intermodality long-term memory that associates the address of the motor
From page 203...
... VISUAL SCANNING The Use of Visual Noise to Estimate Processing Rate Brief visual exposures, by themselves, are useless for determining the rate at which visual information is processed. This is so because stimulus information persists in very-short-term visual memory for some
From page 204...
... I will go into greater depth in considering the problem of serial versus parallel processing, because it offers a good illustration of current research in information processing. The nonspecialist reader may have difficulty here, but I hope that he will persevere and obtain at least an appreciation of some contemporary methods and theories and of their potential power for studying the way in which words are read.
From page 205...
... The first two letters are scanned quickly, the next two are scanned more slowly, and scanning of the last letter has hardly begun even at the longest exposure. A purely parallel scanning process, in which information is retrieved at an equal rate from all five locations, would predict identical p_i at all locations (Figure 3b)
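The contrasting serial and parallel predictions can be sketched with a toy model. Every number here (the 10-msec-per-letter serial rate, the exponential accumulation with a 25-msec constant, the 0.05 chance level for 20 equiprobable letters) is an illustrative assumption, not the author's fitted model.

```python
import math

# Toy model contrasting serial and parallel scanning predictions for
# p_i, the probability of correctly reporting the letter at location i
# after an exposure of t msec terminated by visual noise. All numeric
# values are illustrative assumptions.

CHANCE = 0.05  # guessing rate with 20 equiprobable letters

def p_serial(i, t, msec_per_letter=10):
    """Serial scan: location i (0-based) is processed only after the
    scan reaches it, so early locations finish before late ones start."""
    done = max(0.0, min(1.0, (t - i * msec_per_letter) / msec_per_letter))
    return CHANCE + (1 - CHANCE) * done

def p_parallel(t, tau=25.0):
    """Parallel scan: all locations accumulate at the same rate, so the
    model predicts identical p_i at every location."""
    return CHANCE + (1 - CHANCE) * (1.0 - math.exp(-t / tau))

for t in (10, 30, 50):
    serial = [round(p_serial(i, t), 2) for i in range(5)]
    parallel = [round(p_parallel(t), 2)] * 5
    print(f"t={t:2d} ms  serial={serial}  parallel={parallel}")
```

The serial model reproduces the qualitative pattern in the text: at a fixed exposure, early positions are nearly complete while the last has hardly begun, whereas the parallel model yields one common p_i at every position.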
From page 206...
... , 5) of a five-letter stimulus as a function of the exposure duration when exposure of the letters is followed by visual noise, (a)
From page 207...
... Therefore, I gave up research for a year and worked at programming a computer to display visual stimuli on a cathode-ray oscilloscope.3 The computer-produced demonstration that provides the strongest evidence of parallel processing is very similar to the procedure just described. Five letters are presented and followed by visual noise.
From page 208...
... To increase p_i from 0.50 to 0.95, for example, requires less than one bit of information, whereas to increase p_i from 0.05 to 0.50 requires 3.3 bits (when there are 20 equiprobable stimulus letters)
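The bit figures quoted here follow from a simple information measure. A sketch assuming the measure is I(p) = log2(N·p) for N equiprobable letters, which reproduces both quoted numbers; the text itself states only the numbers, not the formula.

```python
import math

# Information measure consistent with the figures quoted in the text:
# with N equiprobable stimulus letters, reporting the correct letter
# with probability p transmits about I(p) = log2(N * p) bits. The
# formula is an assumption inferred from the quoted numbers.

def bits(p, n=20):
    return math.log2(n * p)

# 0.50 -> 0.95: less than one bit of additional information.
print(round(bits(0.95) - bits(0.50), 2))  # 0.93
# 0.05 (chance) -> 0.50: about 3.3 bits.
print(round(bits(0.50) - bits(0.05), 2))  # 3.32
```

Note that at chance (p = 0.05 with 20 letters) the measure is exactly zero bits, which is why raising performance out of the chance region is so much more expensive than polishing already-high accuracy.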
From page 209...
... Extremely Rapid Visual Search in a Continuous Task The experiments described above measured visual scanning speeds from single exposures only -- that is, the speeds achieved in single bursts of scanning. Could subjects maintain the same high scanning speed in a continuous search task?
From page 210...
... Subjects achieve the same high scanning speeds in the continuous-search procedure as were previously demonstrated for single bursts, 10-15 msec/letter. The highest scanning speeds are achieved at presentation rates of about 40 arrays per second with stimuli containing nine or more letters.
From page 211...
... and indirect (subjects did not begin writing until a second or more after the exposure and their visual memory had decayed by then, so auditory memory was the only logical alternative) .10 The observation15 that subjects suffered auditory confusion in visual recall (for example, D and 2 for T)
From page 212...
... The main finding that concerns us here is that, in the usual test of visual recall, visual-similarity deficits are small, whereas auditory-similarity (AS) deficits are large.4 That auditory similarity should be a significant factor even in a task that involves only looking at letters and writing them -- and never any overt auditory representation -- is prima facie evidence of a role for auditory memory in visual-recall tasks. To determine quantitatively how much of the memory load in visual-recall tasks is carried by auditory memory is more difficult.
From page 213...
... RECAPITULATION A model of the processing of information from an array of letters has been proposed. It consists of the following components: a very-short-term, very-high-capacity visual memory; a visual scan component that converts the representation of a letter in visual memory into the address of the motor program for rehearsing the letter; a short-term memory for this address (recognition buffer-memory)
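The recapitulated components can be sketched as a toy pipeline. The string encoding of a motor-program "address", the fixed number of letters scanned before the visual trace decays, and all data structures are illustrative assumptions; the original is a box-and-arrow model, not an algorithm.

```python
# Toy sketch of the recapitulated pipeline: visual memory -> visual
# scan -> recognition buffer of motor-program addresses -> rehearsal
# into auditory short-term memory. All representations here are
# illustrative assumptions.

def visual_scan(letter):
    """Convert a letter's representation in visual memory into the
    address of the motor program that would rehearse it (here, a key)."""
    return f"motor_program[{letter}]"

def process_array(letters, scanned_before_decay=3):
    # 1. Very-short-term, very-high-capacity visual memory holds the
    #    whole array, but only briefly.
    visual_memory = list(letters)
    # 2. The visual scan converts some of its contents into motor-program
    #    addresses before the trace decays; the addresses are held in the
    #    recognition buffer-memory.
    recognition_buffer = [visual_scan(ch)
                          for ch in visual_memory[:scanned_before_decay]]
    # 3. Rehearsal executes those programs, placing the items in auditory
    #    short-term memory, from which recall proceeds.
    auditory_stm = [addr[len("motor_program["):-1] for addr in recognition_buffer]
    return auditory_stm

print(process_array("QXRBT"))  # ['Q', 'X', 'R']
```

The sketch makes the model's central claim concrete: what survives to recall is not the visual trace itself but the small set of items that the scan managed to convert before decay, and those items live on in auditory form.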
From page 214...
... , they are remembered in auditory short-term memory, as if they had been presented acoustically. In this brief account, I have not considered how eye movements are controlled, how information from successive eye movements is integrated, how long-term memories are formed, or how subjects deal with words and bigger units of meaningful materials.
From page 215...
... Phonemic model for short-term auditory memory.
From page 216...
... SPERLING: No. What I am saying is that, in the particular recall tasks that we have devised with random-letter stimulus materials, auditory memory is so much more effective than visual that we barely detect an effect of visual memory.
From page 217...
... If efficient verbal codes exist, they will be remembered in auditory memory and in other memories and thereby override the visual phenomena that we are trying to measure. The stimuli to be recognized visually have to be made nonverbalizable.
From page 218...
... If you had available a pile of neurons, I could tell you how to connect them to make an auditory memory. In conception, it is very much like a sound spectrograph; the same basic construction would serve either a mouse or a man.
From page 219...
... One might think of statements, going back to Brentano,1 that perception is purposive, intentional, and directed, and so on -- statements that have to be fleshed out if they are to be meaningful. Let us view the reading process as an intentional activity: an activity that has unique characteristics, but that also draws on abilities used in listening to speech, on the one hand, and in looking at objects and pictures, on the other.
From page 220...
... I mean that he has readied a sensorimotor program that would, if activated, result in verbal articulation. He selects a well-practiced fragment of speech that starts with the phoneme that he has just received and listens for the later occurrence of one or two distinctive phonemes in the speech fragments.
From page 221...
... His wide area of peripheral vision gives him an intimation of the future, of what will meet his next glance. And, because eye movements are fully programmed in advance of their execution, any efficient sampling of the peripheral vision also tells him roughly where his present fixation fits in the overall pattern.
From page 222...
... I think such facts are misleading; I think it is more a question of how the visual equipment is used when you do have it than a question of what kind of behavior process or prosthetic capability you can call on when you do not have normal equipment. Two processes are available to vision, and consequently to reading, that make it different from listening: We can, to some extent, anticipate what is coming on the basis of what is on the retina in peripheral vision, and we also, to some extent, have a record of what we saw after fixation from what is still present in peripheral vision.
From page 223...
... Like the listener, therefore, the reader is engaged in formulating and testing speech fragments, but he can use the information given in peripheral vision (as informed by his linguistic expectancies) to select the places at which he obtains successive stimulus input.
From page 224...
... Monitoring and storage of irrelevant messages in selective attention.
From page 225...
... An obvious one is that it requires the beginning reader to hobble his eye movements less; he has to make fewer small, adjacent saccades that run counter to his normal scene-sampling strategies. Also, you can say that the child is not capable of discriminating small letters.
From page 226...
... For the latter to "pay attention" would require far more predictive ability and far more alertness than could be mustered, far shorter reaction times, far larger vocabulary. I think that these elements would have to be separated.
From page 227...
... You are also correct in the implication that there is not a lot of filling-in that I can do except to say that it is indeed a problem and that the ability to integrate our successive glimpses of the world into fixed spatial perceptual maps must be rather well established before we learn to read. That ability is certainly drawn on in the reading process, for example, when you go from one line to the next on a printed page as that page moves around while you read in a jogging trolley car.
From page 228...
... The average evoked potentials to the flashes, recorded over the visual area of the brain, were enhanced, whereas the responses to the same flashes were reduced when the subjects were instructed to pay attention to clicks and ignore flashes. The potentials that were enhanced in amplitude were the late components of the evoked response, which would correspond roughly in latency to some of the late or association area responses described by Dr.
From page 229...
... In our work with human subjects, recording average evoked potentials to visual, auditory, and somatosensory (median nerve) stimulation, we asked subjects to pay attention to the stimuli of one mode and ignore the other two.
From page 230...
... 25:1-10, 1968) have published an article that suggests that, but I think we have evidence that the motor response is not needed.
From page 231...
... I will add a little information on that point myself, and then comment on one aspect of language learning with particular reference to visual and auditory perception. Finally, I would like to suggest some relationships between perception and response, especially for speech.
From page 232...
... PHONOLOGY One of the methods of teaching reading tries to establish a code-like relation between printed symbols and spoken ones. Unfortunately (in some cases)
From page 233...
... But there are also, in speech, some features of phonemes that depend on temporal gaps. We can lump these together under the term "temporal features." The spoken names of letters (as in the alphabet)
From page 234...
... HIRSH [Figure: ordinate ticks 20-180; legend, FIRST FORMANT; abscissa, age 3-13 years and adult] FIGURE 1 Intrasubject variability in formant 1 typical of the different age groups, as a function of age.
From page 235...
... Visual and Auditory Perception and Language Learning [Figure: ordinate ticks 20-180; legend, SECOND FORMANT; abscissa, age 3-13 years and adult] FIGURE 2 Intrasubject variability in formant 2 typical of the different age groups, as a function of age.
From page 236...
... [Figure: four legend symbols, not recoverable from the scan; abscissa, age 3-13 years and adult] FIGURE 3 Intrasubject standard deviation of three temporal features of the words "blue," "pen," and "tall," as a function of age.
From page 237...
... Instead, alphabetic teaching is used, but through the medium of interpersonal written communication. In the case of deaf children who learn to speak through use of what residual hearing they have, the teaching of reading is somewhat simpler, and it can begin at a much earlier age (about 3 or 4 years)
From page 238...
... We conclude that there is no significant difference in identifying Greek letters. In identifying the letter "a," hearing children after brief exposures were correct in 89% of trials and deaf children in 71%, a significant difference.
From page 239...
... What was peculiar in this series of experiments was that, when we asked the subject simply to tell us what the visual word was, the verbal-response reaction time did not increase with the number of alternatives (which were told to the subject before the trial)
From page 240...
... Thus, those dealing with vision must be concerned about eye movements, fixation, and so on, to get the target to the macula. But the attention of which Dr.
From page 241...
... It seems clear that, if a deaf child is identified before the age of 1 year, he can be prepared by suitable auditory stimulation to use his residual hearing better for the learning of speech at the age of 2 or 3 than if he does not start being stimulated until the age of 2 or 3. Although that is not quite a critical age for learning spoken language, it is something like "if you don't catch it this early, then it isn't going to be as good for general auditory reception later." We do not know the critical age for learning speech, but teachers of normal speech development have suggested that, because some stages of syntactic development are characteristic of the normal child at the age of 1 or 2 years, this is the age at which speech learning must begin.
From page 242...
... The deaf child who is left unattended until the age of 6 or 7 years can be taught to speak only with great care and difficulty. When sound was amplified sufficiently so that deaf children would respond, R
From page 243...
... Written language is acoustic language encoded in visual-form space; a written message has no temporal dimension but only the two dimensions of length and breadth. It is the responsibility of a reader to supply the temporal dimension according to the rules of the written language that govern the direction of the visual scanning process.
From page 244...
... One of the big differences between decoding language stored in visual space and decoding language stored in acoustic space is that a receiver decoding language stored in visual space must know the rules for supplying the temporal dimension. But the responsibility is relieved for him when he is decoding the message in acoustic space, and many of our problems in the strategy of teaching reading are due to overlooking this crucial fact.
From page 245...
... If he is implying that there are various built-in categories for auditory perception and phonemes, then I am not sure that I would go that far. These become built in very soon, I suggest, but they are certainly different from one language to another; and certainly no neurophysiologist, to my knowledge, has discovered a feature extractor that corresponds with phonemic features.

