Suggested Citation: "Attentional and Perceptual Mechanisms." In National Research Council, Early Experience and Visual Information Processing in Perceptual and Reading Disorders: Proceedings of a Conference Held October 27-30, 1968, at Lake Mohonk, New York, in Association With the Committee on Brain Sciences, Division of Medical Sciences, National Research Council. Washington, DC: The National Academies Press, 1970. doi:10.17226/18684.


RICHARD JUNG / LOTHAR SPILLMANN

Receptive-Field Estimation and Perceptual Integration in Human Vision

In the visual system, the sensory coding of luminance differences is based on the organization of receptive fields in two neuronal subsystems, "B" and "D," sending reciprocal information from the retina to the brain. The B system, consisting of on-center neurons, signals "brighter," and the D system, consisting of off-center neurons, signals "darker."2,18 The coding of contrast is accomplished by lateral inhibition and activation: B neurons are inhibited by illumination of their receptive-field surround, and D neurons are activated.

The receptive fields of visual neurons in animals and man have been investigated mainly by three research methods: the extent and organization of the retinal areas projecting to individual neurons were determined, the transformation of the receptive-field organization was studied at different levels of the central visual system, and indirect estimates of the size of receptive fields in man were obtained and correlated with results from animal experiments.

The first method was inaugurated by Hartline,11,12 who defined a receptive field in the frog as that area on the retina within which illumination activated or inhibited an optic-nerve fiber. This concept was particularized by the description of lateral inhibition in Limulus. It was further refined in experiments by Kuffler22 and others on the antagonistic organization of field center and surround in cats and monkeys. The second method was introduced in 1959 by Hubel, Wiesel, and Baumgartner, who studied the neurons of the retina, the lateral geniculate nucleus, the primary visual cortex,15,16 and the paravisual cortex17 (areas 18 and 19) in the cat. Similar neuronal recordings from cortical cells in man by Marg et al.23 are discussed elsewhere in the proceedings. The third method was developed by Baumgartner,1 who, from his animal experiments, derived indirect procedures for investigating human receptive-field organization.

The following report is concerned mainly with this third line of research in man; however, for a better explanation of the basic neuronal mechanisms, it will also include some related results obtained in animals. In human vision, we are virtually restricted to the psychophysical approach.

RECEPTIVE-FIELD ESTIMATION IN MAN BY HERMANN'S GRID

A simple method of determining the size of visual receptive fields in man is by means of contrast patterns viewed from different distances or under different angles. Baumgartner,1 using the Hermann grid,14 was the first to measure foveal field centers in this manner (Figure 1). Several workers from our department—Kornhuber and Spillmann,21,28 Sindermann and Pieper,27 and others—have since estimated the size of field centers, as well as surrounds, by this or related indirect methods.

With the grid technique, receptive-field centers in the fovea were found to be 25-30 μ in diameter, and centers plus surrounds, about 50 μ. These values correspond to 5-10 min of arc of angular projection.1,27,28 Receptive-field centers in the extrafoveal regions of the eye appear to be much larger. Mean diameters increase linearly toward the periphery, doubling their size from 1.5 to 3 deg between 20 and 60 deg of retinal eccentricity (Figure 2).
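The estimates above rest on simple geometry: a retinal distance converts to a visual angle, and the quoted field-center diameters grow linearly with eccentricity. A minimal sketch of that arithmetic, assuming the standard reduced-eye conversion of roughly 290 μ of retina per degree of visual angle (a textbook value, not given in this chapter):

```python
# Sketch: convert retinal distances to visual angle and extrapolate
# field-center size with eccentricity, using the numbers quoted above.
# UM_PER_DEG is an assumed reduced-eye constant, not from the chapter.

UM_PER_DEG = 290.0  # approx. micrometers of retina per deg, near the fovea

def microns_to_arcmin(um):
    """Retinal distance in micrometers -> visual angle in min of arc."""
    return um / UM_PER_DEG * 60.0

def center_diameter_deg(eccentricity_deg):
    """Linear fit through the two values quoted in the text:
    ~1.5 deg at 20 deg and ~3 deg at 60 deg of eccentricity."""
    slope = (3.0 - 1.5) / (60.0 - 20.0)  # 0.0375 deg per deg
    return 1.5 + slope * (eccentricity_deg - 20.0)

print(round(microns_to_arcmin(25), 1))  # foveal center, ~5 min of arc
print(round(microns_to_arcmin(50), 1))  # center plus surround, ~10 min
print(center_diameter_deg(40))          # mid-periphery estimate, ~2.25 deg
```

With this conversion, the 25-50 μ foveal figures land at about 5-10 min of arc, matching the angular values quoted in the text.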
Bryngdahl's data,5 referring to sine-wave contrast patterns, yield diameters of 20-100 μ for foveal and parafoveal fields and agree with results obtained by Baumgartner1 and Sindermann and Pieper27 with different methods. Direct measurements of receptive fields of a small number of human cortical cells were reported by Marg et al.23 They were in extrafoveal regions and appeared to have ill-defined borders.

Hermann grid stimulation of concentric-field neurons in the lateral geniculate nucleus and primary visual cortex of the cat shows strongly diminished responses at the intersections of white bars.

FIGURE 1 Estimation of receptive fields and field centers. The Hermann grid (a) and the array of white stripes (b) are examples of experimental patterns used in studies by Baumgartner. In the Hermann grid, gray spots are seen at the intersections of the white bars; this peculiarity can be explained by receptive-field centers and surrounds, shown in two critical positions. They illustrate schematically how the surrounds account for the darkening by producing twice as much inhibition at the intersections (+). In the fovea, receptive fields are minute; consequently, their inhibitory surrounds have similar effects (see example in b2). (b) The projection of a foveal receptive field in relation to a white bar of various width. (b1) Maximal brightness enhancement occurs when the angular size of the white stripe approximates that of the field center. (b2) In wider stripes, a gray central canal appears between enhanced contours on either side ("border contrast"). These effects result from complete or partial lateral inhibition.

FIGURE 2 Lower and upper thresholds for the Hermann grid illusion as a function of retinal eccentricity (from Spillmann28). The ordinate gives the critical bar width; the abscissa, horizontal distance from the fixation point in deg (fitted line approximately y = 0.03x + 0.50). The Hermann grids in this experiment (other than in Figures 1 and 3) consisted of black bars presented against a white background. Grids of various stripe width were shown at different horizontal distances from the fixation point. Observations by one subject were made with artificial miosis to compensate for hyperopia in the peripheral retina. The critical bar width at which the illusion appears (circles) or disappears (dots) increases almost linearly with eccentricity.

In oblong simple fields, the response depends on the orientation of the bars relative to the receptive-field axes. Also, less response diminution is found at grid intersections, except in large receptive fields.26

RECEPTIVE-FIELD ESTIMATION BY APPARENT MOTION

Wertheimer's apparent motion29 elicited by two successive light stimuli presented at different loci in the visual field was used to estimate the size of receptive fields for movement perception. The maximal distances between the alternating spots across which object motion ("optimal" or "beta" motion) or pure motion ("phi phenomenon") could be seen were determined as a function of retinal eccentricity. Figure 4

shows that threshold distances for both types of motion increase linearly toward the periphery, doubling their size between 20 and 60 deg of eccentricity. Values for phi are somewhat larger than for beta motion.

The "receptive fields" for Wertheimer's apparent motion increase toward the periphery of the eye at the same rate as the receptive-field centers determined by the Hermann grid. Both measures show an increase by a factor of two between 20 and 60 deg of retinal eccentricity. In absolute terms, the field sizes for apparent motion are approximately 10-20 times larger than those for simultaneous contrast (Figure 5).

FIGURE 3 Response of a first-order B neuron in the visual cortex of the cat to various positions of the Hermann grid within its receptive field (from unpublished experiments by Baumgartner). The discharge rate of this neuron is consistent with the subjective brightness diminution seen at the grid intersection. (The receptive field in this example had a diameter of 6 deg and was located 20 deg paracentrally.) In positions a and b (bars), the response to light is more than twice as strong as in position c (intersection). These results are accounted for by differences in surround illumination and lateral inhibition. The behavior of this neuron is typical only for concentric fields of geniculate and first-order cortical B neurons. In the oblong simple-field neurons of Hubel and Wiesel,15 the response depends on stimulus orientation. It reaches a maximum when the white bar coincides with the receptive-field axis. The response is minimal when bar and field axis are oriented at right angles to each other and is intermediate when stimuli a and b are combined in a pattern of two intersecting bars.

FIGURE 4 Maximal angular distances for apparent motion as a function of retinal eccentricity (adapted from Spillmann28). Upper thresholds for apparent motion were determined with two alternating lights 3 deg 20 min in diameter presented with an interval of 240 msec. Fixation was on a vertical line between the two stimuli. Criteria were the perception of object motion (optimal, or beta) or pure motion (phi phenomenon). Mean thresholds of three subjects indicate that the critical distances across which motion is seen are somewhat greater for phi (dots; fitted line y = 0.64x + 18.6) than for beta (circles; fitted line approximately y = 0.58x + 7). Both types of thresholds show a nearly linear increase in size with retinal eccentricity.

The physiologic basis of these large fields seems to be a temporospatial network of many interacting neurons arranged to signal the successive occurrence of photic stimuli as motion of particular direction and velocity. These neuronal populations may normally require physical movement for adequate stimulation, but under some conditions respond also to a sequence of two light spots. The extent of neuronal convergence causing these motion-sensitive neurons to function as a unit or receptive field can be estimated only with reference to the special dimensions used (spot diameter, 3 deg 20 min; sequential interval, 240 msec) and may vary for other conditions. In our experiment, the linear velocity corresponding to the critical spot sequence for maximal separation was a function of retinal eccentricity and ranged from 60 to 240 deg/sec. This is in the upper range of and even beyond the highest human velocity estimates investigated psychophysically by Dichgans et al.7
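The 60-240 deg/sec range follows directly from dividing the flash separation by the 240-msec inter-flash interval; a brief sketch of that computation (the sample separations are illustrative values chosen to bracket the quoted range, not data from the experiment):

```python
# Sketch: "equivalent velocity" implied by two flashes separated in
# space and time, as in the apparent-motion estimates above.

INTERVAL_SEC = 0.240  # sequential interval between the two flashes

def equivalent_velocity(separation_deg, interval_sec=INTERVAL_SEC):
    """Linear velocity (deg/sec) of a stimulus that would traverse
    the flash separation in one inter-flash interval."""
    return separation_deg / interval_sec

for sep in (14.4, 28.8, 57.6):  # illustrative flash separations in deg
    print(sep, "->", round(equivalent_velocity(sep)), "deg/sec")
```

Separations of about 14 deg and 58 deg thus correspond to the 60 and 240 deg/sec end points quoted in the text.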

FIGURE 5 Comparison of human receptive fields and field centers for contrast vision (Hermann grid illusion), apparent motion (beta movement, phi phenomenon), and eye pursuit movement as a function of retinal eccentricity (adapted from Spillmann28); the abscissa gives distance from the fovea in deg. Mean values for the first and second procedure were derived from data shown in Figures 2 and 4. Eye pursuit movement (dashed line) was elicited by a vertical-line stimulus oscillating on a horizontal meridian with a sine-wave frequency of 0.3 per second. Results obtained with this third method represent minimal amplitudes of the stimulus required for a correlated pursuit movement. In spite of differences in absolute size, thresholds for all three procedures show an approximate increase by a factor of two between 20 and 60 deg of retinal eccentricity. This figure compares favorably with direct measurements of receptive-field centers in monkeys (Hubel and Wiesel15), suggesting that both objective determinations and indirect psychophysical estimates may refer to the same basic neuronal organization.

EHRENSTEIN'S BRIGHTNESS ILLUSION IN THE ABSENCE OF PLANE CONTRAST

Ehrenstein8 in 1942 experimented with patterns of radial lines that cause brightness enhancement at the white center spot to which they converge (Figures 6 and 7). The lines must exceed a particular length (Figure 6, left), and there should not be fewer than four. Usually, the central spot is seen as a round patch within which the brightness enhancement occurs. This round blob is altered to an apparent square when the lines are thickened and physical contrast between adjacent areas of black and white becomes more intense (Figure 7). Paradoxically, under these conditions the central spot is less enhanced. It becomes even more inconspicuous when it is completely enclosed by black bars (Figure 7, bottom). Brightness enhancement disappears entirely when the central area is surrounded by a thin circle (Figure 6, upper right).

FIGURE 6 Ehrenstein's brightness illusion8 in the center of radially converging black lines. Central spots appear brighter than adjacent white areas if one views the pattern freely. As in the Hermann grid, the illusion becomes less apparent when the central area is fixated. Paradoxically, the brightness enhancement disappears when the center spot is surrounded by a thin black circle (upper right). A minimal length of lines is required to induce the illusion (left).

FIGURE 7 Ehrenstein's illusion as a function of line width: a brightness paradox. Physical contrast between figure and ground is strongest in the lower part and weakest in the upper part. In spite of this, brightness enhancement of the central spot appears to be most vivid in rows 3, 4, and 5, followed by the rows at the top and then at the bottom. There are inter-individual differences in sequence. At close fixation, brightness enhancement is replaced by another illusion forming a gray diagonal cross (X) within the central area.

Sometimes a quite different illusion appears in Figure 7, most readily at the third, fourth, and fifth rows and with central fixation. Instead of the brightness enhancement, a grayish cross (X) emerges, connecting the edges of the apparent square along the diagonals. It is best seen from about 30 cm or farther away. At shorter distances, it disappears and

thus may depend on the angular size of the foveal projection. This cross illusion was not described by Ehrenstein.

It remains for further experimentation to decide whether the phenomenon may be interpreted in terms of interactions between line stimuli, described as neural interaction in the human fovea by Fiorentini and Mazzantini.9 It is of interest that the cross illusion appears to be confined to the fovea. In contrast, the brightness illusion fades during prolonged foveal fixation, requiring eye movements for revival.

DISCUSSION

The Hermann Grid Phenomenon: Single-Cell Explanation versus Population Hypothesis

The Hermann grid illusion is an example of brightness contrast attributable to large groups of visual neurons. For a single nerve cell, the phenomenon can be explained by lateral interaction within its receptive field. For neuronal populations, however, the explanation is more complex and must concern both neuronal subsystems, the one signaling brightness (B) and the one signaling darkness (D). The Hermann pattern not only elicits a diminished brightness sensation at the intersection of white bars; after figure-ground reversal, it also results in a diminished darkness sensation at the same (now black) intersection. Neurophysiologically, the illusion is based on different distributions of lateral inhibition or activation in the surrounds of on-center fields (B neurons) and off-center fields (D neurons), respectively. Retinally and postretinally, these two subsystems interact in a manner yet unknown.

Purely psychophysical methods in human vision will hardly reveal at which level within the visual system this interaction takes place. However, neuronal recordings in the cat suggest that the grid phenomenon relates to a concentric receptive-field organization mainly in the "lower" parts of the visual system (retina, lateral geniculate nucleus, area 17).
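The single-cell account of the grid illusion reduces to simple bookkeeping: center excitation minus surround inhibition, with twice as much of an on-center neuron's surround falling on white bars at an intersection as on a straight bar. A toy sketch under that assumption (the weights are illustrative, not measured values):

```python
# Toy sketch of the single-cell account: an on-center (B) neuron's
# response is taken as center drive minus one unit of inhibition per
# white arm of its surround. A bar position covers 2 arms; an
# intersection covers 4. Weights are illustrative assumptions.

def on_center_response(center_lit, surround_white_arms,
                       w_center=1.0, w_arm=0.2):
    """Response = center excitation minus surround inhibition."""
    return w_center * center_lit - w_arm * surround_white_arms

bar = on_center_response(1.0, 2)           # field center on a white bar
intersection = on_center_response(1.0, 4)  # field center at an intersection
print(bar, intersection)  # intersection responds less -> appears darker
```

The weaker response at the intersection parallels the roughly halved discharge rate of the B neuron in Figure 3; the same arithmetic applied to off-center (D) fields would account for the figure-ground-reversed case.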
The more elaborate neuronal systems of the visual cortex showing transformations from concentric to oblong and complex field organizations appear to be less involved in the Hermann grid illusion. Only the simple

fields of Hubel and Wiesel ("line detectors") may still retain some error of brightness information,26 thus indicating that the neuronal coding of luminance differences is not excluded from the cortex.

Other conditions, mainly afterimages and eye movements, may have an influence on the Hermann grid phenomenon. These factors are investigated by Sindermann in a forthcoming publication and will not be discussed here.

Neuronal integration of receptive fields from concentric to simple and complex field organizations requires transmission through several synapses. The role of inhibition for this information processing is not confined to lateral inhibition from the field surround. Other types of inhibitory action have been demonstrated by electric stimulation of the optic radiation. For simple-field neurons, response latency is at least one synaptic delay longer than for concentric-field neurons.6 Complex-field neurons usually show long primary inhibition after shock stimulation of radiation fibers.

To avoid confusion between neuronal receptive fields and their possible equivalents in human perception, a terminologic distinction might be useful. We tentatively propose the term "perceptive fields" for the subjective correlates of receptive fields estimated in human vision.

The Ehrenstein Illusion and Its Dependence on Oriented Lines

Since 1870, the Hermann grid illusion has been explained by simultaneous contrast of bright and dark areas that, according to Baumgartner, stimulate concentric receptive fields of visual neurons. Ehrenstein's brightness illusions, which appear within oriented line patterns in the absence of contrasting planes, cast considerable doubt on this explanation by contrast alone.8 Brightness enhancement in the center of radially converging lines can hardly be explained on the basis of circular receptive fields.
Detrimental effects of steady fixation and enclosing circles on the illusion are rather reminiscent of Hubel and Wiesel's findings15,16 of simple-field neurons that respond optimally to lines of given orientation moving across their receptive-field axis. In the absence of a sufficient neuronal explanation of the Ehrenstein illusion, one may speculate that these neurons do not only signal edges, lines, and orientations, but also may contribute to brightness sensation. A neurophysiologic basis might be that the ellipsoid fields of these neurons can be divided into subclasses carrying brightness and darkness information, respectively: B neurons with oblong on centers and D neurons with oblong off centers,20 both transformed from corresponding subsystems of the retina and lateral geniculate nucleus.

Receptive Fields and Levels of Neuronal Interaction within the Visual System

The rather simple concept of the receptive-field organization of a single neuron might be considered a model for similar interactions involving collateral inhibition and activation among neuronal populations. It seems worth noting that lateral inhibition and activation as the essential mechanisms of the receptive-field organization in retinal ganglion cells can still be demonstrated in the visible phenomena of contrast and contour vision. Hartline,13 in his first short note in 1949, had already drawn attention to the role of lateral inhibition in contrast vision. Since then, many studies of receptive-field organization have confirmed this mechanism and stressed its significance in various species, but without elucidating the central process. How the information of primary lateral inhibition is used in the central nervous system and its large populations of visual neurons is as unclear as the synaptic mechanisms of cascade-like transformations of receptive fields. Only a few facts are known about single-neuron interaction with lateral, collateral, and reciprocal inhibition in the two neuronal systems, B and D. It is unlikely that any of them singly or in combination can sufficiently explain the integrated effects in thousands and millions of neurons necessary for sensation and perception.

Results of recent experiments on Mach bands with flicker photometry favor the localization of lateral inhibition and mutual interaction of receptive fields beyond the receptor organ. After measuring the brightness of Mach bands, von Békésy4 proposed the existence of lateral interaction in brain structures.

The rather complex transformation of receptive fields at various cerebral levels and the role of eye movements and of inhibition within the field center cannot be discussed here. We mention only two findings: Richards25 has described modifications of area summation in man during accommodation and convergence that he explains by plasticity of receptive fields. Freund and co-workers10 demonstrated a special transformation of field organization in the cat's D neurons of the lateral geniculate nucleus that differ from retinal off-center neurons by showing marked off inhibition, instead of spatial summation.

Generally, lateral inhibition in the retina and collateral inhibition in

cerebral structures may, in spite of their differences as neurophysiologic mechanisms, have similar effects on visual contrast and perceptual integration. In principle, this similarity holds even for different sensory modalities, as demonstrated by von Békésy3 in his work on sensory inhibition.

RECEPTIVE FIELDS IN RELATION TO FORM VISION AND VISUAL LEARNING

The neuronal mechanism of visual learning is unknown. Thus, it seems difficult to discuss its interaction with neuronal processes of sensation and perception. However, the importance of visual memory for form vision, perception, and reading is obvious, and relationships to receptive-field transformation are conceivable. A child learns to read by organizing and recognizing patterns composed of lines in different orientations. Perceptually, this task involves both "innate" neuronal mechanisms and new connections acquired during the learning process. Wiesel and Hubel's experiment30 showed that cats have innate "simple" receptive fields responding optimally to line stimuli of particular axis orientation. These neuronal line detectors are present in newly born kittens, before any visual experience, but deteriorate when contrast patterns are excluded from vision during the first months of life.30 Thus, visual learning apparently maintains and facilitates visual function in the cortex during the early periods of life. Short- and long-term visual memory not only are necessary for acquiring form recognition, but also are prerequisites for the normal function and early development of innate neuronal coordination. In this context, it may be of interest to ask whether the above-mentioned Ehrenstein illusions, if dependent on line detectors, are influenced by visual learning.
In discussing learning mechanisms in relation to the visual receptive-field organization, some recent experiments in humans on the effect of learning and expectation on Wertheimer's apparent motion should be mentioned. Besides the fact that expectation and bias affect the occurrence of apparent visual motion, Raskin24 demonstrated long-term memory effects on special patterns seen previously. Prior experience as old as 1 week either facilitated or interfered with subsequent perception of apparent motion. Raskin concluded that the perceptual feature of motion or nonmotion may become associated with certain form characteristics by way of learning. We cannot discuss here the rather

complex neurophysiology of motion perception and its dual mechanisms—afferent movement caused by passive displacement of retinal images and efferent movement produced by active pursuit movements of eye, head, and body. Both mechanisms appear to be intimately linked to the detection of contrast, allowing one to fixate stationary and moving borders. Thus, it is not surprising that the minimal threshold amplitudes for eliciting eye pursuit movement28 are of the same order of magnitude as the receptive-field diameters obtained with the Hermann grid illusion (Figure 5).

Although learning should influence form recognition and might be involved in the progressive transformation of receptive fields, we cannot yet apply our results to the physiology of reading and its disorders. Because we read with moving eyes during short pauses of fixation, making use of black-white contrast for the recognition of letters, we may say only that oculomotor functions and mechanisms of contrast and pattern vision, among others, contribute to the physiology of reading. Whether in man, as in the cat, the organization of simple receptive fields with specific axis orientation is a congenital neuronal property of the visual cortex, requiring early visual experience to prevent deterioration, cannot be answered with certainty. Assuming that a combination of inherent mechanisms with learning is necessary for the normal functioning of the visual system, it might be justifiable to discuss receptive-field organization and its application to human vision in a conference on reading functions.

SUMMARY

Receptive fields of visual neurons as determined by direct recordings in animals can be inferred in man from contrast and movement illusions. Estimates of their spatial extent were derived from threshold measurements for simultaneous contrast and apparent motion.
Diameters of receptive-field centers in the human fovea, when measured with Hermann grids of different bar width, range from 25 μ to 30 μ (5-10 min of arc), and receptive-field centers plus surrounds measure about 50 μ. The size of receptive-field centers is a linear function of retinal eccentricity. Between 20 and 60 deg from the fovea, the average diameter of field centers doubles from 1.5 to 3 deg of arc.

Results of Hermann grid stimulation of concentric-field neurons in the visual system of the cat are consistent with apparent brightness differences in human vision. For white grids, B neurons of the lateral geniculate nucleus and the first stage of the visual cortex show enhanced responses when exposed to bars and diminished responses when stimulated by intersections. Simple-field neurons of the visual cortex show similar enhancement only if the orientation of their receptive-field axes corresponds to border positions of the grid.

Ehrenstein's illusions of brightness enhancement elicited by radial line patterns in the absence of marked physical contrast between figure and ground are tentatively ascribed to cortical simple-field neurons and their possible contribution to brightness perception.

"Receptive fields" determined with Wertheimer's optimal (or beta) motion and phi phenomenon are 20-30 times larger than receptive-field centers measured with the Hermann grid. Analogous to these, "receptive fields" for apparent motion increase linearly toward the peripheral retina, doubling in size between 20 and 60 deg of eccentricity.

The receptive-field organization, by transforming luminance gradients into contrast borders, is a basic mechanism of form vision. Although its significance is evident, its detailed role in pattern vision and reading cannot yet be explained in neuronal terms. Modifiability of receptive fields and plasticity of spatial mapping in the central visual system must be postulated to explain size constancy and form recognition. The possible interaction between memory and neuronal convergence within the visual system is discussed for the example of apparent motion. The term "perceptive fields" is proposed for the subjective correlates of receptive fields estimated in human vision.
We are thankful to several former and present co-workers of the Freiburg Laboratories, especially Prof. Baumgartner (Zürich), Prof. Kornhuber (Ulm), Doz. Dr. Sindermann (Ulm), and Doz. Dr. Dichgans (Freiburg), for stimulating discussions, help in our experiments, and permission to use their material. The preparation of this paper was supported in part by U.S. Public Health Service grant NB 01482.

REFERENCES

1. Baumgartner, G. Indirekte Größenbestimmung der rezeptiven Felder der Retina beim Menschen mittels der Hermannschen Gittertäuschung. Pflügers Arch. ges. Physiol. 272:21-22, 1960.
2. Baumgartner, G. Die Reaktionen der Neurone des zentralen visuellen Systems der Katze im simultanen Helligkeitskontrast, pp. 296-313. In R. Jung and H. H. Kornhuber, Eds. Neurophysiologie und Psychophysik des visuellen Systems. Berlin-Göttingen-Heidelberg: Springer-Verlag, 1961. 524 pp.
3. Békésy, G. von. Sensory Inhibition. Princeton: Princeton University Press, 1967. 265 pp.
4. Békésy, G. von. Brightness distribution across the Mach bands measured with flicker photometry, and the linearity of sensory nervous interaction. J. Opt. Soc. Amer. 58:1-8, 1968.
5. Bryngdahl, O. Perceived contrast variation with eccentricity of spatial sine-wave stimuli. Size determination of receptive field centres. Vision Res. 6:553-565, 1966.
6. Denney, D., G. Baumgartner, and C. Adorjani. Responses of cortical neurons to stimulation of the visual afferent radiations. Exp. Brain Res. 6:265-272, 1968.
7. Dichgans, J., F. Körner, and K. Voigt. Vergleichende Skalierung des afferenten und efferenten Bewegungssehens beim Menschen: Lineare Funktionen mit verschiedener Anstiegssteilheit. Psychol. Forsch. 32:277-295, 1969.
8. Ehrenstein, W. Probleme der ganzheitspsychologischen Wahrnehmungslehre. 3rd Ed. Leipzig: J. A. Barth, 1954. 342 pp.
9. Fiorentini, A., and L. Mazzantini. Neural inhibition in the human fovea: a study of interactions between two line stimuli. Atti Fond. G. Ronchi 21:738-747, 1966.
10. Freund, H.-J., G. Grünewald, and G. Baumgartner. Räumliche Summation im receptiven Feldzentrum von Neuronen des Geniculatum laterale der Katze. Exp. Brain Res. 8:53-65, 1969.
11. Hartline, H. K. The response of single optic nerve fibers of the vertebrate eye to illumination of the retina. Amer. J. Physiol. 121:400-415, 1938.
12. Hartline, H. K. The receptive fields of optic nerve fibers. Amer. J. Physiol. 130:690-699, 1940.
13. Hartline, H. K. Inhibition of activity of visual receptors by illuminating nearby retinal areas in the Limulus eye. Fed. Proc. 8:69, 1949.
14. Hermann, L. Eine Erscheinung simultanen Contrastes. Pflügers Arch. ges. Physiol. 3:13-15, 1870.
15. Hubel, D. H., and T. N. Wiesel. Receptive fields of optic nerve fibres in the spider monkey. J. Physiol. 154:572-580, 1960.
16. Hubel, D. H., and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160:106-154, 1962.
17. Hubel, D. H., and T. N. Wiesel. Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat. J. Neurophysiol. 28:229-289, 1965.
18. Jung, R. Korrelationen von Neuronentätigkeit und Sehen, pp. 410-435. In R. Jung and H. H. Kornhuber, Eds. Neurophysiologie und Psychophysik des visuellen Systems. Berlin-Göttingen-Heidelberg: Springer-Verlag, 1961. 524 pp.

19. Jung, R. Neuronal mechanisms of pattern vision and motion detection. 18 Int. Psychol. Cong. Moskau Sympos. 15:1-16, 1966.
20. Jung, R., and G. Baumgartner. Neuronenphysiologie der visuellen und paravisuellen Rindenfelder. In 8th Int. Cong. Neurol. Wien. Proc. 3:47-75, 1965. Vienna: Verlag der Wiener Medizinischen Akademie, 1965. 299 pp.
21. Kornhuber, H. H., and L. Spillmann. Zur visuellen Feldorganisation beim Menschen: Die receptiven Felder im peripheren und zentralen Gesichtsfeld bei Simultankontrast, Flimmerfusion, Scheinbewegung und Blickfolgebewegung. Pflügers Arch. ges. Physiol. 279:R5-6, 1964.
22. Kuffler, S. W. Discharge patterns and functional organization of mammalian retina. J. Neurophysiol. 16:37-68, 1953.
23. Marg, E., J. E. Adams, and B. Rutkin. Receptive fields of cells in the human visual cortex. Experientia 24:348-350, 1968.
24. Raskin, L. M. Long-term memory effects in the perception of apparent movement. J. Exp. Psychol. 79:97-103, 1969.
25. Richards, W. Apparent modifiability of receptive fields during accommodation and convergence and a model for size constancy. Neuropsychologia 5:63-72, 1967.
26. Schepelmann, F., H. Aschayeri, and G. Baumgartner. Die Reaktionen der "simple" field-Neurone in Area 17 der Katze beim Hermann-Gitter-Kontrast. Pflügers Arch. ges. Physiol. 294:R57, 1967.
27. Sindermann, F., and E. Pieper. Größenschätzung von fovealen Projektionen receptiver Kontrastfelder (Zentrum und Umfeld) beim Menschen im psychophysischen Versuch. Pflügers Arch. ges. Physiol. 283:R47-48, 1965.
28. Spillmann, L. Zur Feldorganisation der visuellen Wahrnehmung beim Menschen. Universität Münster, Ph.D. Thesis, 1964.
29. Wertheimer, M. Experimentelle Studien über das Sehen von Bewegung. Z. Psychol. 61:161-265, 1912.
30. Wiesel, T. N., and D. H. Hubel. Single-cell responses in striate cortex of kittens deprived of vision in one eye. J. Neurophysiol. 26:1003-1017, 1963.

GEORGE SPERLING

Short-Term Memory, Long-Term Memory, and Scanning in the Processing of Visual Information

A MODEL OF VISUAL-INFORMATION PROCESSING

In reading, as in most visual tasks, the eye gathers information only during the pauses between its quick saccadic movements. The normal input to the visual system is thus a sequence of brief exposures. I would like to propose here a model of the way people process the information they receive in one such exposure. I shall be concerned with the simple situation in which a person is shown briefly an array of letters and then asked to write them and the closely related situation in which he hears spoken letters and is required to write them.

The model shown in Figure 1 summarizes the results of numerous experiments. The squares indicate short-term memories. The first box represents a very-short-term visual memory, which, in the past, I have called visual-information storage.15 It contains a great deal more information than the subject ultimately will be able to report, but its contents normally fade rapidly, usually within about one fourth of a second. These conclusions are derived from a partial-report procedure: the subject is required to report only a small fraction of the stimulus contents on any trial and does not know in advance which aspects he will be required to report. The methods and results have been described in detail elsewhere.2,15 It is easily proved that a great deal of information from a visual stimulus gets into the subject's very-short-term visual memory; the information is lost to recall because later processes are unable to use it.
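The partial-report logic can be stated in one line: if a cue delivered only after the exposure selects the subset to be reported, then performance on the cued subset estimates what was momentarily available for any subset. A sketch of that estimate with illustrative numbers (the 3 × 4 array is an assumption for the example, not data from this chapter):

```python
# Sketch of the partial-report estimate described above: because the
# subject cannot know in advance which subset will be cued, letters
# correct on the cued subset, multiplied by the number of possible
# subsets, estimate the letters available in very-short-term visual
# storage at the moment of the cue. Example numbers are illustrative.

def estimated_available(correct_on_cued_subset, n_subsets):
    """Estimated letters available = per-subset performance scaled
    up to the whole array."""
    return correct_on_cued_subset * n_subsets

# e.g., 3 letters correct from one cued row of a 3-row, 12-letter array:
print(estimated_available(3, 3))  # -> 9 of 12 letters available
```

The gap between this estimate and the four or five letters of whole-report recall is what motivates the claim that information is lost because later processes cannot use it, not because it never entered the visual store.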

Ultimately, stimulus letters are "recognized"; that is, the subject says or writes them. He makes an appropriate motor response. In terms of the model, it is useful to distinguish between actually executing the motor response (saying, subvocally rehearsing, or writing a letter) and having decided which response is to be executed. This kind of distinction is most often made in discussing computers, and perhaps the terminology that has been developed to deal with it in that domain will help to clarify it here.

Saying a letter may be conceived of as executing a long program that consists of hundreds of instructions to various muscle groups. Recognizing a letter may be considered as having decided which program to execute. In practice, a program is designated by its location, or address: the address is the location of the first program instruction to be executed. The second short-term memory box in the model designates the recognition buffer-memory. It is a short-term memory for letters that are about to be spoken or rehearsed subvocally, i.e., a memory of the addresses of the programs for saying them.

FIGURE 1 Model of visual information processing. Squares indicate short-term memories, rectangles indicate long-term memories, and triangles indicate scan components that transform signals from one modality into another. V, visual; A, auditory; M, motor; R, rehearsal; RECOG, recognition buffer-memory; →, direction of association.

The kinds of data that require the concept of a recognition buffer-

memory have been described.14 The basic idea is that three or four letters can be recalled from visual presentations even if the effective duration of the presentation is so short that there is not time for the rehearsal of even one letter. The recognition buffer-memory can hold at least three letters (i.e., the addresses of the motor programs for rehearsing the letters) for a period of about 1 sec, until they have been rehearsed.

A scan component is needed to transform the visual information in very-short-term visual memory into the motor-address information of the recognition buffer-memory. The visual scan component is designated by a triangle in Figure 1 to indicate that it is not a memory and that it transforms information from one modality into another. Actually, the visual scan component has at least three distinguishable functions: deciding which areas of the visual field contain information on which further processing should be performed ("prescan"8); directing processing capacity to the locations selected by the prescan ("attention"); and converting the visual input from the selected locations into the addresses of motor programs ("scanning").

The maximal rate at which letters are scanned can be measured from visual presentations in which the persistence of the information from an initial letter stimulus is obliterated by a subsequent visual "noise" stimulus. The measured rates are quite high—say, one letter every 10-15 msec, which is equivalent to rates of up to 100 unrelated letters per second.10

The middle triangle in Figure 1 designates rehearsal. In vocal rehearsal, the motor instructions designated by the recognition buffer-memory are executed, and a spoken letter results. Because it indicates a change of modality or dimension, a triangle is used to designate the rehearsal component; in this case, the transformation is from movements to sound.
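The program-address analogy can be made concrete in a toy sketch. The "programs," their output strings, and the two-letter buffer below are invented for illustration; only the separation of deciding from executing comes from the model.

```python
# Toy rendering of the address analogy: recognizing a letter means
# storing WHICH motor program to run; rehearsal means running it later.

def say_b():  # stands in for hundreds of muscle-group instructions
    return "buh"

def say_c():
    return "suh"

MOTOR_PROGRAMS = {"B": say_b, "C": say_c}  # visual letter -> program "address"

# Recognition: only the addresses enter the recognition buffer-memory.
recognition_buffer = [MOTOR_PROGRAMS["B"], MOTOR_PROGRAMS["C"]]

# Rehearsal: the stored programs are executed, producing "sound."
spoken = [program() for program in recognition_buffer]
print(spoken)  # -> ['buh', 'suh']
```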
The sound produced by a vocal rehearsal is heard and remembered in auditory short-term memory.

In principle, although not in detail, the auditory scan is exactly analogous to the visual scan. The auditory scan selects some contents of auditory memory (e.g., the sound representation of one letter) and converts them into the address of a motor program. The address is remembered in the recognition buffer-memory, the program is executed by the rehearsal component, and the sounds are re-entered into auditory memory. By means of this rehearsal loop, information can be retained for a very long time in auditory short-term memory—many times longer than the decay time of the memory itself.

Perhaps in young children and some adults, the output of the rehearsal component must first enter into the outside world as sound before it can enter auditory memory, but most adults seem to have evolved a shortcut, which I have designated "subvocal rehearsal." In subvocal rehearsal, the subvocal output of the rehearsal component is entered into the auditory short-term memory just as though it had been a vocal output; i.e., auditory memory contains a memory of the sound of the letter. The rate of subvocal rehearsal can be measured,6,10 and it is very interesting to note that it is identical with the rate of vocal rehearsal.

DISTINCTIONS BETWEEN SHORT- AND LONG-TERM MEMORY

Neural Distinctions

A short-term memory is a patch of neural tissue that is used over and over again for every appropriate input to the modality. For example, the retina undoubtedly serves as a short-term memory; a particular neuron in the retina might, by appropriate stimulus positioning, be activated by every letter that could be presented. But I suggest that the neurons involved in long-term memory are extremely specialized and are active only when their key is found. This does not mean that only one stimulus can activate a neuron in long-term memory, but rather that its range is infinitesimal, compared with the range of possible stimuli.

There is now fairly widespread agreement1,9,12,18 that short-term memory is short-term not because its neurons remember poorly (although that is probably a factor) but because every new stimulus overwrites its predecessor or at least pushes it away from the fore of memory. Even silence or darkness, the absence of stimulation, is an input to short-term memory that must be recorded and that therefore inevitably drives out the record of previous stimulation.
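The contrast between an overwriting short-term store and a key-addressed long-term store can be sketched directly. The four-item capacity and the key format are assumptions for illustration only.

```python
from collections import deque

# Short-term memory: one reusable structure; every new input, even
# "silence," displaces the oldest record.
short_term = deque(maxlen=4)
for stimulus in ["F", "H", "Q", "Y", "silence"]:
    short_term.append(stimulus)

# Long-term memory: answers only when its highly specific key is given.
long_term = {("Sperling", "telephone"): "582-2644"}

print(list(short_term))                       # 'F' has been driven out
print(long_term[("Sperling", "telephone")])   # retrieved by association
```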
Structural Distinctions

A short-term memory can be likened to a register in a computer; a long-term memory, to a section of core memory.13 That is, a short-term memory is complicated and expensive (involving many neurons

per unit of information stored), because the information in it is capable of being manipulated in many ways. For example, one bit of information can be compared with another bit of information, can be shifted, and so on. Every operation of this sort requires many connections. In computers, core memory is made as starkly economical as possible. So much is sacrificed to economy that no operation whatever (except perhaps erasure) is possible on the contents of long-term memory before they have been removed to a register. I propose that the same overriding principles that guided the evolution of computers to have a very few (but very intricate) registers and to have a great many (but very simple) core memory cells guided the evolution of nervous systems to have a few intricate short-term memories controlling great masses of long-term memory.

Functional and Behavioral Distinctions

The contents of short-term memory are retrieved by asking for the contents of the particular sensory memory, i.e., by giving the name of the memory. What did I just hear? What did I just see? The contents of long-term memory are retrieved by giving an association, i.e., a complex, highly specific input. For example, I say: "My telephone number is 582-2644. What is my telephone number?" You answer by asking yourself what was the last thing you heard. That it is Sperling's telephone number is irrelevant to the retrieval of the digits. However, if I meet you on the street tomorrow and ask you to repeat my telephone number, no short-term memory could possibly be equal to the job. You would need a memory that could be entered with the name "Sperling" (and perhaps some other concomitant bits of information) and that, when so prodded, would return the correct digits.

SIX LONG-TERM MEMORIES

Each of the active components in the model (Figure 1) is associated with a long-term memory.
The long-term memory was constructed by the subject out of his past experience, long before his participation in any of my experiments. The three triangle components each use an intermodality long-term memory. The visual scan is served by an intermodality long-term memory that associates the address of the motor

program for saying a letter with the visual features of that letter. The rehearsal component is served by a long-term memory that associates the auditory features of a sound with the motor program for producing that sound. The auditory scan is served by a long-term memory that associates the address of a motor program for producing a sound with the auditory features of that sound.

These intermodality long-term memories represent skills. As children, we learned to imitate sounds that we heard. We learned how to recognize letters, that is, to say the name of a letter when we saw it. Later, we learned how to read without speaking.

Beneath each short-term memory square in Figure 1 is a long-term memory of events within that modality. For example, long-term visual memory might contain the information necessary to recognize a particular face as familiar, even if no name or occasion can be associated with it. A preschool child would recognize some letters as familiar, even if he could not name them. Similarly, we have auditory memories of auditory events. Finally, we have the memory of the motor sequence necessary to say a letter. The proper development of all six of these long-term memories is a prerequisite for the effective operation of the information-processing system outlined before.

Quantitative theories of short-term recall performance find it necessary to take into account a small amount of information that is getting into long-term memory from each trial and that, when there are repeated trials, significantly affects performance (see especially Atkinson and Shiffrin1). Although the experiments I have dealt with probably involve very little long-term memory (because each stimulus is viewed only once), it is obvious that something is entering the various long-term memories, at least occasionally.
I will concentrate now on the two aspects of the model that are of greatest relevance to reading: visual scanning and auditory memory.

VISUAL SCANNING

The Use of Visual Noise to Estimate Processing Rate

Brief visual exposures, by themselves, are useless for determining the rate at which visual information is processed. This is so because stimulus information persists in very-short-term visual memory for some

undetermined time after the exposure, for at least 0.1 sec and usually for 0.2 sec or longer. If the duration of visual availability is undetermined, processing rate cannot be determined; duration of visual persistence and processing rate are complexly intermingled.

The way around this difficulty is to follow exposure of the stimulus letters by a "noise" postexposure field (Figure 2). The visual noise that I use looks like scattered bits and pieces of letters, and it effectively obliterates the visual persistence of the stimulus letters. By delaying the onset of the noise postexposure field, we allow the subject more time to scan the letters. Each 10-15 msec of delay enables him ultimately to report one additional letter, up to about three or four letters. This processing rate can be shown to be independent of the number of letters presented and of many other variations in procedure.

Serial or Parallel Processing?

In a brief exposure, are letters scanned one at a time, a new letter in each interval of 10-15 msec, or is information being gathered about several letters simultaneously at an overall rate equivalent to one new letter per 10-15 msec? A positive answer to the first question defines a serial scanning process, and to the second, a parallel process. I will go into greater depth in considering the problem of serial versus parallel processing, because it offers a good illustration of current research in information processing. The nonspecialist reader may have difficulty here, but I hope that he will persevere and obtain at least an appreciation of some contemporary methods and theories and of their potential power for studying the way in which words are read.
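The noise-delay result described above (one additional letter per 10-15 msec of delay, up to about three or four letters) can be written as a toy function. The 12.5-msec midpoint and the hard ceiling of four are simplifying assumptions, not fitted values.

```python
def letters_reported(noise_delay_ms, ms_per_letter=12.5, ceiling=4):
    """One additional letter per scan step of delay before the noise
    field arrives, up to a ceiling of about four letters."""
    return min(ceiling, noise_delay_ms / ms_per_letter)

for delay_ms in (0, 25, 50, 100):
    print(delay_ms, letters_reported(delay_ms))
```

Beyond roughly 50 msec of delay, added exposure buys nothing in this sketch, which is the three-to-four-letter limit reported in the text.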
METHOD 1

When I first confronted the serial-versus-parallel problem, I sought the answer by examining the rate at which information was acquired about each individual letter in a stimulus instead of looking only at the overall rate.14 Subjects were presented with five random letters followed, after various intervals, by a noise postexposure field (Figure 2). Their task was to report correctly as many letters as they could, from all the locations. If they processed letters in a purely serial order, I would expect only the letter in the first location to be reported correctly at the briefest exposure; the first and second letter to be reported at longer exposures; then the first, second, and third; and so on. Let p_i be the

FIGURE 2 A normal tachistoscopic exposure sequence (top) and a postexposure visual noise sequence (bottom).

probability of correctly reporting the letter in the ith location. Considering each of the five letter-locations separately and plotting these p_i's as a function of exposure duration should yield a set of functions like those illustrated in Figure 3a. That is, the p_i functions in Figure 3a would be produced by a serial left-to-right scanning process whose overall theoretical performance best matches the observed performance. The first two letters are scanned quickly, the next two are scanned more slowly, and scanning of the last letter has hardly begun even at the longest exposure.

A purely parallel scanning process, in which information is retrieved at an equal rate from all five locations, would predict identical p_i at all locations (Figure 3b). Because all p_i's are the same, this p_i function also represents the observed overall percentage of correct responses.

The results of an actual test are shown in Figure 3c. The data illustrated are for one subject; tests of other subjects, including myself, yielded basically similar data. The downward concavity of all the observed p_i functions means that information is acquired, at each letter

FIGURE 3 Accuracy of report of the letter at each location (1, . . . , 5) of a five-letter stimulus as a function of the exposure duration when exposure of the letters is followed by visual noise. (a) Theoretical data generated by a serial scan process with fixed order of scan. (b) Theoretical data generated by a parallel scan process having the same rate of information acquisition at all five locations. (c) Data of a typical subject (after Sperling14). These data are not corrected for chance guessing.

position, most rapidly immediately after the letter stimulus is turned on and that the rate diminishes as the exposure continues.* Information is acquired more rapidly at the first position than at the second, and so on, except that this subject acquired information more rapidly at the fifth position than at the fourth. Other subjects had different idiosyncratic orders.

* Percentage correct is a nonlinear (but monotonic) function of information retrieved. Plotting the results in terms of bits of information retrieved would exaggerate the concavity and strengthen the conclusion.
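The two theoretical panels of Figure 3 can be mimicked with toy curves. Only the shapes matter here; the 12.5-msec scan step and the 25-msec accumulation constant are assumptions of mine, not Sperling's fitted parameters.

```python
import math

def serial_p(i, t_ms, per_letter=12.5):
    """Fixed-order serial scan (as in Figure 3a): location i is not
    touched until the first i-1 locations are finished."""
    start = (i - 1) * per_letter
    return max(0.0, min(1.0, (t_ms - start) / per_letter))

def parallel_p(i, t_ms, tau=25.0):
    """Parallel accumulation (as in Figure 3b): every location gains
    information from t = 0 at the same gradually slowing rate."""
    return 1.0 - math.exp(-t_ms / tau)

t = 30.0
print([round(serial_p(i, t), 2) for i in range(1, 6)])    # early locations only
print([round(parallel_p(i, t), 2) for i in range(1, 6)])  # identical everywhere
```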

METHOD 2

Although the interpretation I have just given is stated in terms of parallel processing, one cannot rule out the possibility of some complex form of serial processing. To make a more sensitive test, more intricate stimulus sequences were required. Therefore, I gave up research for a year and worked at programming a computer to display visual stimuli on a cathode-ray oscilloscope.3

The computer-produced demonstration that provides the strongest evidence of parallel processing is very similar to the procedure just described. Five letters are presented and followed by visual noise. The basic difference is that one of the letters is changed midway during its exposure (Figure 4). When this is done, for example in the fifth location, then almost invariably the first letter that appeared in that location is the one that is reported (that is, if the subject correctly reports anything at all from that location). This result with very brief exposures is just the opposite of the usual result when exposures are long (greater than 50 msec) or no postexposure noise field is used. In the latter circumstances, the second letter that occupies a location is the one that is reported.2,5,11

FIGURE 4 A computer-generated stimulus sequence for testing serial versus parallel processing. The initial stimulus is M T K L X. M, T, K, and L persist continuously until the onset of the postexposure visual noise; X is changed to Z in the middle of the exposure interval. Two consecutive noise fields are used to increase the effectiveness of the noise.

PREDICTIONS OF THE THEORIES OF SERIAL AND PARALLEL SCANNING

In a serial process, increasing exposure duration improves performance (increases p_i), because the ith location is more likely to have been scanned during a longer interval. Consider, for example, an exposure duration ΔT1, which is just long enough so that p_i = Δp. Now consider the additional exposure ΔT2 that is needed to increase p_i to 2Δp. In serial-scanning theory, an increase of Δp in p_i during ΔT2 means that as many letter scans are made in ΔT2 as in ΔT1. If occasionally the ith position is scanned twice during the exposure, then more letter scans must be occurring in ΔT2, inasmuch as occasionally a letter that was scanned in ΔT1 will be rescanned in ΔT2, and that would be a wasted scan. Serial-scanning theories can be characterized as basically "top-heavy." That is, when p_i is large—i.e., near the top of a graph like Figure 3b—then as many or more scanning attempts are needed to raise it by a given amount, Δp, compared with the number when p_i is small.

Parallel-processing theory assumes that information is accumulated continuously. To increase p_i from 0.50 to 0.95, for example, requires less than one bit of information, whereas to increase p_i from 0.05 to 0.50 requires 3.3 bits (when there are 20 equiprobable stimulus letters). This example illustrates a general property of information-gathering systems: the first few bits of information change the probability of being correct only very slightly, and the last few bits cause big changes. Thus, parallel-processing theory is "bottom-heavy." The weighty processing occurs while p_i is small, i.e., near the bottom of Figure 3b.

To relate these theories to data, let us restrict ourselves, for the moment, to locations 3, 4, and 5, and to exposure durations of less than 100 msec. For example, consider an exposure of 50 msec and divide it, conceptually, into two consecutive intervals of 25 msec. Figure 3c shows that there is an equal or greater increase of p_i between 25 and 50 msec than between 0 and 25 msec in these three cases. Suppose now that at location 5 a different letter is presented in each of the two intervals—the experiment described above.
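The bit figures quoted above check out under one simple reading, which is an assumption on my part since the text does not spell out its formula: a probability p of being correct is treated as a uniform choice among k = 1/p remaining candidate letters, so moving from p1 to p2 transmits log2(p2/p1) bits.

```python
import math

def bits_gained(p1, p2):
    """Bits transmitted when the candidate set shrinks from 1/p1 to
    1/p2 equiprobable letters (uniform-candidate assumption)."""
    return math.log2(p2 / p1)

# 20 equiprobable letters: p = 0.05 is chance; p = 0.50 leaves 2 candidates.
print(round(bits_gained(0.05, 0.50), 1))  # -> 3.3 bits
print(round(bits_gained(0.50, 0.95), 2))  # -> 0.93 bits, less than one
```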
According to the serial-scanning theory, an equal or greater amount of scanning occurs in the second interval, and so we would expect the second letter to be reported at least as often as the first letter.

In the parallel-scanning theory, in this instance, about 60% more information accumulates in the first 25 msec than in the second 25 msec, so we would expect the letter from the first interval to be reported more often. For parallel theory to predict quantitatively how much more often the first letter is reported than the second would require additional assumptions.

The experimental result was that the first letter is nearly always reported. We therefore reject the serial-scanning theory and tentatively accept the parallel-scanning theory. In 50 msec, the visual system achieves sufficient information, in parallel, from a letter array to recognize about three letters.

This conclusion is potentially important for understanding the reading of words. It means that the visual system has the capacity to process a word not merely letter by letter or by its overall shape, but as a complex pattern. Whether a word is recognized directly as a visual pattern, or the letters are recognized first and then the letter pattern is recognized as a word, or both processes occur together we do not yet know. But we do know now that the visual system has the capacity to gather enough information simultaneously—i.e., in parallel—from an array of letters (a word) to identify uniquely most ordinary words.

Extremely Rapid Visual Search in a Continuous Task

The experiments described above measured visual scanning speeds from single exposures only—that is, the speeds achieved in single bursts of scanning. Could subjects maintain the same high scanning speed in a continuous search task? The following experiment was devised to test this possibility. A computer3 generates arrays of random letters and displays them on the cathode-ray oscilloscope. Figure 5 shows a sequence of 3 × 3 arrays. All the arrays except one consist entirely of random letters; the critical array contains the numeral "2" in a randomly selected location. The subject does not know in advance of the trial which array in the sequence will be the critical one, nor in which location the critical character will occur. His task is to look at the whole sequence of arrays and to say at which location the critical character has occurred.

FIGURE 5 Diagram of the stimulus sequence in the sequential search procedure. a, fixation field; b, 6 to 12 letter arrays (randomly determined); c, the critical array, in this instance containing a "2" in the middle-right location; d, 12 more letter arrays.

From the proportion of times the subject is

able to make the correct response, we can deduce the speed with which he is able to scan characters to determine whether each is a "2." We have also trained a subject to detect the occurrence of any numeral among letters. The discrimination of an unknown one of ten numerals takes only slightly longer than the discrimination of a known single numeral.

We16 have studied arrays containing from 1 to 25 letters, and presented new arrays at rates of 3 to 200 per second. We have not yet completed all these experiments, but the main results are already clear. Subjects achieve the same high scanning speeds in the continuous-search procedure as were previously demonstrated for single bursts, 10-15 msec/letter. The highest scanning speeds are achieved at presentation rates of about 40 arrays per second with stimuli containing nine or more letters. Under these conditions, the fastest subject has broken solidly through the 10-msec barrier; he can scan characters for the absence of the numeral "5" faster than one letter per 8 msec. When nine-letter arrays are presented at a rate of 25 arrays per second (40 msec/array) he can identify the location of the critical character correctly about 70% of the time. That means that he is effectively monitoring five of the nine locations.* In terms of the parallel-scanning theory, this subject can process a fresh batch of five letters every 40 msec.

When the presentation rate is lowered, response accuracy improves, indicating that additional locations are being scanned. For example, my fast subject scans the equivalent of about 16 locations from a 25-letter array when new arrays are presented every 160 msec. His scanning speed goes down to about one letter per 10 msec at this rate, indicating that locations outside the most favored six are scanned more slowly. Sixteen positions are the maximum that he can scan in a brief exposure; lowering the rate does not improve his response accuracy.
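The step from 70% correct to "five of the nine locations" can be reproduced under one guessing model, which is an assumption on my part, though it is consistent with the footnote's figure of 5.3: an observer who monitors m of n locations and, when the target was not seen, guesses among the n - m unmonitored ones is correct with probability p = (m + 1)/n.

```python
def locations_monitored(p_correct, n_locations):
    """Invert p = (m + 1) / n for m, the number of locations monitored
    (guess-among-unmonitored model; an assumed strategy)."""
    return p_correct * n_locations - 1

print(locations_monitored(0.7, 9))  # about 5.3 of the nine locations
```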
A more typical observer can scan three locations in 40 msec and a maximum of 10 locations in a single exposure.

In conception, these search experiments follow the pioneering work of Ulric Neisser,7 who was the first to study rapid scanning of this kind. His subjects searched long lists for the presence of a critical item and signaled when they had found it. The important difference between our procedures is not that I use a detection method and he a reaction

* The estimate of the number of locations monitored depends somewhat on the guessing strategy that the subject is assumed to be using when he has not seen the critical character. If he could use absolutely the most efficient strategy, he could achieve a probability of being correct of 0.7 even when he monitored only 5.3 locations.

method, but that in Neisser's experiments the sequence of visual inputs is controlled by the subject's own eye movements, and in my experiments, by a computer. The optimal scanning rate in the searching for a "2" or a "5" occurs at presentation rates that are five times higher than the rate of eye movements. When the presentation rate of stimuli is lowered so that it is comparable with that of eye movements (e.g., 200-250 msec), then the processing capacity is virtually idle for the second half of the interval; it has done all or nearly all that it can do in the first half. With more complicated processing tasks, of course, processing times would be longer and the rate of eye movements might not be the limiting factor.

Although it is technically very difficult to implement, the method of searching sequentially presented displays is most promising for estimating processing times and will yield much of importance for reading. It already has provided one nontrivial conclusion: In simple search tasks, the limiting factor in performance is the rate at which eye movements can be made, and not the rate at which information can be processed.

AUDITORY SHORT-TERM MEMORY

Auditory Memory in Visual-Recall Tasks

I claim that the same factors limit recall of letters from brief visual exposures (assuming that the letters are clearly visible) and from auditory presentations, to such an extent that visual recall can be predicted from auditory recall.17

The original evidence of auditory components in visual-recall tasks was introspective (all subjects said they rehearsed subvocally) and indirect (subjects did not begin writing until a second or more after the exposure and their visual memory had decayed by then, so auditory memory was the only logical alternative).10 The observation15 that subjects suffered auditory confusion in visual recall (for example, D and 2 for T) was promising but not powerful.
The powerful evidence comes from the measurement of "AS deficits," a technique that was introduced independently and almost simultaneously in three laboratories by Conrad, Wickelgren, and me (see Sperling and Speelman17), although it could and should have been invented 100 years earlier.

An AS deficit is defined as the decrement in performance caused by replacing a stimulus composed of acoustically different letters (for example, F, H, Q, and Y) with acoustically similar (AS) letters (for example, B, C, D, and G). The deficit technique can be applied to other dimensions, such as visual similarity, semantic similarity, and pronounceability. The main finding that concerns us here is that, in the usual test of visual recall, visual-similarity deficits are small, whereas AS deficits are large.4 That auditory similarity should be a significant factor even in a task that involves only looking at letters and writing them—and never any overt auditory representation—is prima facie evidence of a role for auditory memory in visual-recall tasks.

To determine quantitatively how much of the memory load in visual-recall tasks is carried by auditory memory is more difficult. However, we17 have been able to predict AS deficits in visual-recall tasks, in which subjects viewed a dozen letters exposed simultaneously, from the AS deficits of the same subjects in auditory tasks, in which they heard spoken lists of letters and were required to recall them. We could make these predictions from lists spoken at either one or two letters per second but not from lists spoken at rates of four letters per second. The rate of silent rehearsal was previously estimated to be three letters per second.6 This rate seems to be critical for auditory presentations of random letters; at higher rates, recall performance deteriorates rapidly. I would conclude, pending evidence to the contrary, that the same factors limit recall from simultaneous visual presentations and limit recall of auditory sequences spoken at rates lower than four letters per second.

A Phonemic Model of Short-Term Auditory Memory

The results of 38 experimental conditions in which Mrs. Speelman and I measured recall of auditory stimuli could be predicted quite accurately from rules based on a phonemic model of short-term auditory memory.17 (The predictions accounted for 0.96 of the variance of the data.)
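A toy Monte Carlo shows how such a phonemic account produces an AS deficit, assuming each phoneme of a letter's name is retained independently with some probability and the letter is then guessed among alphabet members consistent with what survives. The phoneme codings, the retention probability, and the guessing rule below are illustrative assumptions, not the fitted model of Sperling and Speelman.

```python
import random

NAMES = {"B": ("b", "ee"), "C": ("s", "ee"), "D": ("d", "ee"), "G": ("j", "ee"),
         "F": ("e", "f"), "H": ("ay", "ch"), "Q": ("k", "yoo"), "Y": ("w", "eye")}
RETAIN = 0.6  # assumed probability that each phoneme survives independently

def recall_rate(alphabet, trials=20000, rng=random.Random(1)):
    correct = 0
    for _ in range(trials):
        letter = rng.choice(alphabet)
        kept = {p for p in NAMES[letter] if rng.random() < RETAIN}
        # Reconstruct: any alphabet member consistent with the surviving
        # phonemes is a candidate; choose among the candidates at random.
        candidates = [c for c in alphabet if kept <= set(NAMES[c])]
        correct += rng.choice(candidates) == letter
    return correct / trials

distinct = recall_rate(["F", "H", "Q", "Y"])  # every phoneme is diagnostic
similar = recall_rate(["B", "C", "D", "G"])   # shared "ee" carries no information
print(distinct > similar)  # the AS deficit
```

With these assumptions the distinct alphabet is recalled on about 88% of trials and the AS alphabet on about 70%: retaining the shared vowel is, as the text puts it below, a waste of space.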
The phonemic model assumes (1) that individual phonemes are retained and forgotten independently in auditory memory; (2) that, when some of the constituent phonemes of a letter are forgotten, the letter is reconstructed as well as possible on the basis of the remaining phonemes; and (3) that, when the remaining phonemes do not suffice to identify the letter uniquely, a choice is made from among the most probable alternatives. According to this theory, the reason that stimuli composed of letters chosen from AS alphabets are poorly recalled is that they contain phonemes that do not

help to discriminate among alternative members of the alphabets. For example, in the alphabet consisting of B, C, D, and G, the phoneme e is useless for discriminating among members, and retention of this phoneme in memory is a waste of space—a precisely predictable waste.

It is reasonable to call the memory into which an unrehearsed auditory stimulus enters an "auditory memory." Because the predictions of the model apply equally well to conditions in which there is little subvocal rehearsal and conditions in which there is a great deal of subvocal rehearsal, there is no need to postulate different memories for rehearsed and unrehearsed material. Finally, because the same generalizations govern recall of visual stimuli, there is no need to postulate a different memory for visual recall.

I should add that a really satisfactory paradigm for differentiating between the recognition buffer-memory and the auditory short-term memory has not yet been discovered. Therefore, when I say "auditory memory," I have to include in it the contribution of the recognition buffer-memory. That is not much of a complication, because, if the contribution of the recognition buffer is small, then it does not matter much, and if its contribution is large, then we can say that it must be very much like an auditory memory, in that the phonemic model (of auditory memory) accounts for so much of the evidence.

RECAPITULATION

A model of the processing of information from an array of letters has been proposed.
It consists of the following components: a very-short-term, very-high-capacity visual memory; a visual scan component that converts the representation of a letter in visual memory into the address of the motor program for rehearsing the letter; a short-term memory for this address (recognition buffer-memory); a rehearsal component that converts the subvocal rehearsal into an auditory representation; an auditory short-term memory for the sound of the letter; and an auditory scan component that converts the auditory representation into the address of the motor program for rehearsing the letter.

Neural, functional, and behavioral criteria for distinguishing between short-term and long-term memory have also been proposed. A short-term memory is made up of neurons that are used over and over again by all inputs to the modality; complicated functions can be carried out

on the contents of the memory; to retrieve the contents of memory requires knowledge only of the memory's name (i.e., the modality being served). The neurons that form a long-term memory are activated only by very specific inputs; no functions are carried out directly on the contents of memory; and the contents of memory can be retrieved only by means of very specific "associations." The components of the processing model are served by six kinds of long-term memory: visual, auditory, and motor long-term memories; and visual-motor, auditory-motor, and motor-auditory association long-term memories.

Experiments with visual postexposure noise fields are interpreted to mean that information is gathered simultaneously—i.e., in parallel—from three or more letter locations at an initial rate of one letter per 10-15 msec. The visual system thus has, in principle, the capacity to analyze a word not letter by letter nor by overall shape, but from information gathered, in parallel, from its component letters.

In the sequential-search procedure, a subject searches a computer-produced sequence of letter arrays for a character at an unknown location in one of them. The highest processing rate occurs when a new array occurs every 40 msec. This maximal rate of 25 arrays per second is 5 times the rate of eye movements. Lowering the sequence rate to the rate of eye movements grossly impairs search efficiency. The best subject was able to scan five locations every 40 msec and a maximum of about 16 locations (achieved in 160 msec) in a single brief exposure. It is concluded that, in simple visual-search tasks, the rate of eye movement will be a limiting factor in search rate.

The recall of visually presented arrays of letters is shown to suffer in a predictable way when acoustically similar letters (for instance, B, C, D, and G) are used.
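The acoustic-similarity point can be made concrete with a toy sketch. The phoneme spellings of the letter names below are my own simplified assumptions (not the author's data); the point is only that a phoneme shared by every letter name in the alphabet carries no discriminative information, so retaining it in short-term memory is wasted capacity.

```python
# Assumed two-phoneme spellings of the letter names in the acoustically
# similar alphabet B, C, D, G; "ee" stands for the shared vowel phoneme.
NAMES = {"B": ("b", "ee"), "C": ("s", "ee"),
         "D": ("d", "ee"), "G": ("j", "ee")}

def discriminating(names):
    """Keep only the phonemes that are NOT common to every letter name;
    the shared phoneme occupies memory without helping to tell the
    letters apart."""
    shared = set.intersection(*(set(p) for p in names.values()))
    return {letter: [ph for ph in phones if ph not in shared]
            for letter, phones in names.items()}

print(discriminating(NAMES))
```

On this toy spelling, each letter in the alphabet is distinguished by a single consonant phoneme, which is why such arrays are predictably harder to recall than acoustically dissimilar ones.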
By comparing the recall of visually presented arrays with the recall of auditory letter sequences, it is concluded that visual letters are rehearsed at fewer than four letters per second (probably three per second) and that the rehearsal is stored in auditory short-term memory. Even when letter arrays are presented visually and are reported by writing (never overtly represented in an auditory mode), they are remembered in auditory short-term memory, as if they had been presented acoustically.

In this brief account, I have not considered how eye movements are controlled, how information from successive eye movements is integrated, how long-term memories are formed, or how subjects deal with words and bigger units of meaningful materials. These problems are relevant and important for the study of visual-information processing; some are considered elsewhere in these proceedings, but most, unfortunately, are far from solution.

REFERENCES

1. Atkinson, R. C., and R. M. Shiffrin. Human memory: a proposed control system and its control processes, pp. 89-195. In K. W. Spence and J. T. Spence, Eds. The Psychology of Learning and Motivation. Vol. I. Advances in Research and Theory. New York: Academic Press, 1967. 381 pp.
2. Averbach, E., and G. Sperling. Short term storage of information in vision, pp. 196-211. In C. Cherry, Ed. Information Theory. Washington, D.C.: Butterworth Inc., 1961. 476 pp.
3. Budiansky, J., and G. Sperling. GSLetters. A general purpose system for producing visual displays in real time and for running psychological experiments on the DDP24 computer. Bell Telephone Laboratories Technical Memorandum, 1969. Bell Telephone Laboratories, Inc., Murray Hill, New Jersey.
4. Cimbalo, R. S., and K. R. Laughery. Short-term memory: effects of auditory and visual similarity. Psychon. Sci. 8:57-58, 1967.
5. Greenberg, M., M. S. Heifer, and M. S. Mayzner. Information processing of letter and word pairs as a function of on and off times. Perception and Psychophysics 4:357-360, 1968.
6. Landauer, T. K. Rate of implicit speech. Percept. Motor Skills 15:646, 1962.
7. Neisser, U. Cognitive Psychology. New York: Appleton-Century-Crofts, Inc., 1967. 351 pp.
8. Neisser, U. Preattentive and focal processes in perception. Invited address, 76th Annual Convention of the American Psychological Association, 1968.
9. Postman, L. Short-term memory and incidental learning, pp. 145-201. In A. W. Melton, Ed. Categories of Human Learning. New York: Academic Press, 1964. 356 pp.
10. Sperling, G. A model for visual memory tasks. Hum. Factors 5:19-31, 1963.
11. Sperling, G. Information retrieval from two rapidly consecutive stimuli: a new analysis. Perception and Psychophysics.
(in press)
12. Sperling, G. Phonemic model for short-term auditory memory. Proc. Amer. Psychol. Assoc. 4:63-64, 1968.
13. Sperling, G. Structural factors in models of memory. Acta Psychol. (in press)
14. Sperling, G. Successive approximations to a model for short term memory. Acta Psychol. 27:285-292, 1967.
15. Sperling, G. The information available in brief visual presentation. Psychol. Monogr. 74:1-29, 1960.
16. Sperling, G., J. Budiansky, G. J. Spivak, and M. C. Johnson. Extremely rapid visual search. Bell Telephone Laboratories Technical Memorandum, 1970. Bell Telephone Laboratories, Inc., Murray Hill, New Jersey.
17. Sperling, G., and R. G. Speelman. Acoustic similarity and auditory short-term memory: experiments and a model, pp. 151-202. In D. A. Norman, Ed. Models of Human Memory. New York: Academic Press, 1970.
18. Waugh, N. C., and D. A. Norman. Primary memory. Psychol. Rev. 72:89-104, 1965.
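The component sequence in the recapitulation above can be sketched as a chain of conversions. The function names and trivial string representations below are my own illustration of the control flow only, not a quantitative model: visual memory holds a copy of the array, the visual scan converts each letter into a motor-program address, rehearsal turns that address into a sound held in auditory memory, and the auditory scan recovers the address again.

```python
def visual_memory(array):
    """Very-short-term, high-capacity store: a literal copy of the array."""
    return list(array)

def visual_scan(contents):
    """Convert each letter image into the address of the motor program
    for rehearsing that letter (the recognition buffer's contents)."""
    return ["say_" + ch for ch in contents]

def rehearse(addresses):
    """Executing a rehearsal program yields a sound for auditory
    short-term memory."""
    return [addr.split("_", 1)[1].lower() for addr in addresses]

def auditory_scan(sounds):
    """Convert each sound back into a motor-program address, closing the
    rehearsal loop that keeps items alive in short-term memory."""
    return ["say_" + s.upper() for s in sounds]

addresses = visual_scan(visual_memory("BDG"))  # recognition buffer
sounds = rehearse(addresses)                   # auditory short-term memory
assert auditory_scan(sounds) == addresses      # the loop is self-sustaining
```

The closing assertion is the structural point: once the auditory scan reproduces the rehearsal addresses, the letters can circulate through the rehearsal loop without further visual input.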

DISCUSSION

DR. KAGAN: Can I recall something you said a few minutes ago only because I have been rehearsing it? Would it not be stored in long-term memory?

DR. SPERLING: Certainly. Even very brief events often leave lasting memories; I wish I knew more about how and why. The stimulus materials in the experiments I have been discussing are random letters and numerals; they almost never get into long-term memory in just one trial. They can be recalled accurately for only a few seconds. To recall them after intervals of, say, 10 sec, a subject must rehearse them vocally or subvocally and must not be forced to accept any new information into his short-term memory. If either of these conditions is violated, the stimulus is forgotten.

Repeated rehearsal not only maintains the stimulus in short-term memory, but helps it to get into long-term memory. We do not know whether it is the act of rehearsing itself that is responsible, or whether it is merely that the longer a stimulus resides in short-term memory the likelier it is to enter long-term memory. To reiterate, the essence of short-term memory is that the same patch of neural tissue is used over and over again by new inputs. Obviously, this same tissue cannot also serve as a long-term memory.

DR. ULLMAN: Is the use of short-term memory a prerequisite for the formation of long-term memories?

DR. SPERLING: I would say that visual inputs pass through visual short-term memory, and auditory inputs pass through auditory short-term memory. Given the complexity of long-term memory, I would be rash to venture beyond that simple statement.

DR. SCHUBERT: The leaders in the field of reading would have us believe that some children are visually minded and some children are kinesthetically minded. When you say that your subjects rehearse subvocally and you relate their performance to this kind of rehearsal, are you referring in particular to kinesthetically minded subjects?

DR. SPERLING: No.
What I am saying is that, in the particular recall tasks that we have devised with random-letter stimulus materials, auditory memory is so much more effective than visual that we barely detect an effect of visual memory. If we were dealing with words and language, or with pictures, it might be quite different. Incidentally, Dr. Michael Siegal is using our acoustically similar stimuli for memory tests on children with eidetic imagery, and finds that even these extremely visually competent subjects do not behave differently from subjects on these tasks.

DR. HOCHBERG: Can you predict one kind of memory from the other?

DR. SPERLING: No. I did not say that I could predict the capacity of a subject's

visual memory from the capacity of his auditory memory, but rather that I could predict a subject's performance on the recall of visual stimuli from his performance on the recall of auditory stimuli. The reason is that the stimuli that Mrs. Speelman and I used are remembered in auditory memory even if they are presented visually. That is, when we make this assumption, we can predict performance. I do not wish to be put into the position of saying that there is no visual memory; there certainly is. But except for the very-short-term visual memory, visual memory seems to be basically unadapted to recall, and so we do not find much evidence of it in recall tests.

To find out about short-term visual memory, or perhaps intermediate-term visual memory, we have to use recognition procedures. Even that is not sufficient in itself. If efficient verbal codes exist, they will be remembered in auditory memory and in other memories and thereby override the visual phenomena that we are trying to measure. The stimuli to be recognized visually have to be made nonverbalizable. Or they have to be so constructed that a verbal description of them would be so inefficient that subjects would not be tempted to try it or, if they did, it would not aid them. I use a computer to generate visual stimuli and, with small modifications in the program (occasionally unintentional), it produces good characters for a recognition experiment. These are made of basically the same segments as letters, but joined in different ways. They look like elements from an unfamiliar Eastern scrawl (see Figure 4). The computer produces an almost limitless variety of different characters, so that none of them becomes familiar.

In our tests, we show the subject a stimulus twice, with an interval of a few milliseconds to 16 sec between the two presentations. The stimulus is composed of six or ten of these characters.
In the second presentation, one of the characters is altered, and the subject's task is to say which character. From the accuracy of his response, we deduce how many characters he is remembering correctly. In preliminary experiments with this method, we again found the very-short-term, very-high-capacity visual memory. Beyond the first quarter of a second, performance was disappointingly poor. Subjects are able to retain enough information about only two or three characters to recognize that they have been changed. However, the time constant of forgetting was, surprisingly, so long that I could not estimate it properly. These experiments, like most others that have been used to investigate visual memory, have their problems (Hochberg, J., in R. N. Haber, Ed. Contemporary Research and Theory in Visual Perception. New York: Holt, Rinehart and Winston, Inc., 1968. pp. 309-331), but I cite them to show that measurements of short-term visual memory are being made (see also Shepard, R. N., J. Verbal Learning Verbal Behav. 6:156-163, 1967).

DR. SHANKWEILER: It seems to me that you should not attribute your findings to auditory memory. I suggest that subjects are coding into language.

DR. SPERLING: The kind of auditory memory I have been discussing is basically very simple, although some of its properties are very complex and may surprise us. If you had available a pile of neurons, I could tell you how to connect them to make an auditory memory. In conception, it is very much like a sound spectrograph; the same basic construction would serve either a mouse or a man. It is a memory for sounds; let us reach semantic agreement on that point. To construct a memory that remembers not merely sounds but linguistic units would be incredibly more difficult. I should think that one would not even undertake it unless one already had a very good auditory memory for sounds. But that is a philosophic answer. That I like to keep things simple does not mean that nature does. In fact, your hypothesis about linguistic memory probably could be formulated specifically in terms of an alternative model and subjected to experimental investigation. I invite you to do so.

JULIAN HOCHBERG

Attention in Perception and Reading

Reading text, listening to speech, viewing scenes and pictures—these activities are not automatic responses to an array or sequence of patterned stimuli. A reader does not simply look at a block of text and absorb its message. He must "pay attention" to the display; what this means has not yet been well worked out. Attention is often thought of as a separate faculty that operates on the perceptual process—e.g., as a determinant of degree of arousal or sensory facilitation or as a gate or a filter. If attention functions in this general way, it might be of practical importance to study its effects on reading behavior, but it would not be very interesting, theoretically, as a source of insight into the nature of the reading process in particular, or of perceptual processes in general.

Alternatively, one can consider that the phenomena of attention are intrinsic and inseparable aspects of the perceptual process. One might think of statements, going back to Brentano,1 that perception is purposive, intentional, and directed, and so on—statements that have to be fleshed out if they are to be meaningful. Let us view the reading process as an intentional activity: an activity that has unique characteristics, but that also draws on abilities used in listening to speech, on the one hand, and in looking at objects and pictures, on the other.

LISTENING TO SPEECH

Consider the act of listening to speech: It is easy to demonstrate that attention is necessary to speech perception. As a considerable amount of research has shown, if a subject is required to attend to one of two simultaneous and fairly rapid monologues, he seems to fail to hear the content of the unattended message. Many workers have explained such selective attention by positing a filter that passes the attended signals and attenuates or even blocks the unattended. Such a filter would require many unlikely and complex properties. If both voices speak the same message, but the unattended one lags behind the attended one, after a while most subjects realize that both messages are the same.6 However, as a general statement, it is not true that the content of the unattended message is unheard. If, for example, the subject's own name appears in the unattended channel, he will pick it up, and, although it is easy to see how one could "tune" a filter in terms of frequency or any other simple characteristic, it is very hard to see how one might do so in terms of analyzed verbal meaning.

For these and other reasons, Neisser5 and I2,3 have proposed that there is no filter. Suppose, instead, that the listener does the following when he receives a phoneme in a voice to which he wants to attend. He selects a plan to produce some well-practiced fragment of speech that starts with the phoneme just received. By "well-practiced fragment of speech," I do not mean that he is actually pacing—actually going through a subvocalization—nor that auditory images are going through his mind. I mean that he has readied a sensorimotor program that would, if activated, result in verbal articulation. He selects a well-practiced fragment of speech that starts with the phoneme that he has just received and listens for the later occurrence of one or two distinctive phonemes in the speech fragments.
If he actually receives what he anticipates, he goes on to anticipate the next speech fragment. Thus, it is the expectations that are being tested, rather than the entire sequence of phonemes that were presented. What is important is the confirmed speech fragments—the listener's expectations, rather than the sound waves actually presented.

This would explain what happens in the two-channel experiment. The subject makes an active anticipatory response to the initial phonemes that he hears in the voice to which he is to attend. Meanwhile, phonemes that are uttered by the unattended voice are briefly stored as unrelated

sounds, not as the confirmed expectations, and will generally fade from memory while the subject reports the attended message. The unattended message is presented but not anticipated and encoded in any memorable form. The assumption that attentive listening depends on this kind of linguistic extrapolation yields several predictions about two-channel listening experiments; in general, these predictions seem to fit the existing data.

The most frequent technique used is shadowing. This procedure ensures that attention is being paid to the desired channel, and at the same time one has a measure of how well that channel is being heard. If the prime message does not have any syntactic redundancy, but is a group of unrelated sounds that have no meaning, then extrapolation is impossible—and so is shadowing. This observation makes good sense in terms of an expectancy model of attention, but it makes little sense in terms of a filter. Of course, there may also be a filter, and there are other kinds of intellectual capabilities that may contribute to attention in general. But my point is that, if learning to listen to speech is a process involving redundancies of sound, then the two-channel attentional phenomena result from an intrinsic aspect of the listening process and not from the action of an additional faculty. Something like this should also occur in reading, which is rooted in the listening and speaking process.

LOOKING AT TEXT

Reading must draw also on the abilities of the visual-motor system, whose skills are originally developed in looking at the world of scenes and objects. Like listening, looking brings the subject a temporal sequence of patterned stimuli. He must integrate the successive, restricted glimpses of the world that he obtains about four times a second through his small area of clear foveal vision.
Unlike the listener, however, the viewer does not have to rely only on contextual redundancy to anticipate the next moment's stimuli. His wide area of peripheral vision gives him an intimation of the future, of what will meet his next glance. And, because eye movements are fully programmed in advance of their execution, any efficient sampling of the peripheral vision also tells him roughly where his present fixation fits in the overall pattern. Obviously, there are many ways to retrieve meaning from a printed

page or a scene, and, as Alpern noted earlier (p. 119), the periphery is not essential to vision. You can read without it, and, as Glickstein told us (p. 139), you can retain relatively little of the nervous system and still have good form perception. I think such facts are misleading; I think it is more a question of how the visual equipment is used when you do have it than a question of what kind of behavior process or prosthetic capability you can call on when you do not have normal equipment.

Two processes are available to vision, and consequently to reading, that make it different from listening: We can, to some extent, anticipate what is coming on the basis of what is on the retina in peripheral vision, and we also, to some extent, have a record of what we saw after fixation from what is still present in peripheral vision. However, experiments in which the subject views a moving scene without using peripheral vision show that the adult observer has cognitive bases, in addition to those offered by peripheral vision, for expecting what each new fixation will bring. He has cognitive schemata into which he can fit each glimpse, so that he has a single "map" to store during the looking period and not a sequence of individual glimpses.

Reading is a form of looking, and, as a kind of visual perceptual behavior, it must share some of the characteristics of looking. But the attentional components of normal looking at scenes and shapes, which are undoubtedly well practiced long before school age, probably run counter to those needed when a person is learning to read or is reading a difficult text. There are few occasions in normal looking to make small, successive adjacent fixations, whereas the first task for a child when he learns to read is to put letters together into words by such adjacent fixations—surely an unaccustomed task for the visual-motor system.
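The two kinds of information a single fixation yields can be sketched in a toy simulation. The window size and the reduction of the periphery to mere word-boundary layout are my own illustrative assumptions, not measured values: inside a small foveal window letters are identified, while in the periphery only the spacing pattern survives.

```python
def fixation_view(line, fixation, foveal_radius=3):
    """What one glance yields: letter identity within a small foveal
    window around the fixation point, and only coarse layout (word
    boundaries) everywhere else."""
    view = []
    for i, ch in enumerate(line):
        if abs(i - fixation) <= foveal_radius:
            view.append(ch)                         # clear foveal identity
        else:
            view.append(" " if ch == " " else "x")  # blurred periphery
    return "".join(view)

print(fixation_view("the cat sat on the mat", fixation=5))
# -> "xxe cat sxx xx xxx xxx"
```

Even in this crude form, the peripheral residue preserves word lengths and interword spaces, which is exactly the kind of advance layout information the text says guides the programming of the next saccade.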
A practiced reader, in contrast, has freed himself from that unpleasant necessity. He samples a display of the text, rather than looking at each letter. He has learned to respond effectively to the few features seen with clear foveal vision by expecting an entire word or even a phrase. He needs to fixate only the parts of the array farther along the page that will enable him to formulate new guesses as well as to check his current guesses. His expectations of what he will find are based on the syntax and the meaning of what he has just read, and they must also be based on the blurred view provided by peripheral vision. A beginning reader probably makes little use of peripheral information, and

is therefore less hampered than an experienced reader when peripheral vision—peripheral information—is reduced, for example, by making the interword spaces indiscriminable to peripheral vision by running the words together.4 Presumably, the same is true of a reader's ability to guide his sampling of the text by cognitive factors. The better the reader, the more widespread the fixations by which he samples the text, as long as the text provides contextual redundancy and as long as the task permits the reader to leave individual letters uninspected—that is, as long as the reader attends to meaning or content, rather than to spelling.

Like the listener, therefore, the reader is engaged in formulating and testing speech fragments, but he can use the information given in peripheral vision (as informed by his linguistic expectancies) to select the places at which he obtains successive stimulus input. This is like running very fast over broken terrain; anticipation is needed for making adjustments. This is a headlong flight through the text, not "information processing," letter by letter.

For example, when a reader follows a line of type, letters above and below it are often technically visible and fall well within his acuity. I suggest that, like the voice in the unattended channel previously described, letters above and below the line are simply not anticipated, not encoded, and therefore not remembered in any linguistic structure. And, as with two-channel listening experiments, it should be possible to produce intrusions into normal linear reading by placing suitable material, such as a subject's own name, in the unattended lines. This is a prediction that I made earlier3 and that was recently confirmed in an unpublished report by Neisser.
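The anticipate-and-confirm control structure that this expectancy account attributes to the skilled reader can be sketched as a loop. The "language model" below is a toy lookup table of my own; the point is only the shape of the process: predict the next word from context, and spend a full fixation only when the prediction fails.

```python
# Toy expectancy table standing in for the reader's syntactic and
# semantic knowledge (an illustrative assumption, not a real model).
EXPECT = {"once": "upon", "upon": "a", "a": "time"}

def read(words):
    """Anticipate each word from the previous one; a confirmed guess
    needs only a cheap peripheral check, while a failed guess costs a
    full fixation to encode the word."""
    fixations, understood, prev = 0, [], None
    for word in words:
        if EXPECT.get(prev) == word:
            understood.append(word)   # expectation confirmed cheaply
        else:
            fixations += 1            # expectation failed: fixate fully
            understood.append(word)
        prev = word
    return understood, fixations

text = "once upon a time".split()
assert read(text) == (text, 1)  # only "once" requires a full fixation
```

The higher the contextual redundancy (the more often the table's guesses are confirmed), the fewer and more widespread the fixations, which matches the description of the practiced reader above.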
This brings me back to the question: What would happen if you presented bits of text to a subject in a sequence that would be similar to views provided by reading saccades but not actually produced by saccades initiated by the subject? Would the result be normal reading, inasmuch as the stimulus sequence is similar to that which would result from an intention to read? I argue that it would not—that, when the subject moves his eyes in reading, he is not just moving his eyes, but is looking for specific features, testing his expectations of what he will see next. If he is simply receiving bits of text, with no attempt to formulate and test linguistic structures, the letters or words that he sees should quickly exceed his memory span. The display should provide neither meaning nor clear temporal organization. In contrast, when he

looks at text with an intention to read it, he fits speech fragments to the letters glimpsed with each fixation; the speech fragments then afford a meaningful linguistic structure by which successive glimpses may be stored and repeated.

In this reading model, "paying attention" is an integral part of the reading process, and it depends on the task and on the reader's syntactic and semantic knowledge and vocabulary. Some of the implications have been tested in research; many more have not.

REFERENCES

1. Brentano, F. Psychologie vom empirischen Standpunkte. Leipzig: Duncker and Humblot, 1874. 350 pp.
2. Hochberg, J. Attention and communication. In D. Mostofsky, Ed. Attention. New York: Appleton-Century-Crofts, Inc. (in press)
3. Hochberg, J. Attention, organization and consciousness. Pillsbury Address presented to Cornell University, New York City, 1968.
4. Hochberg, J., H. Levin, and C. Frail. Unpublished manuscript described in Hochberg, J. Components of literacy: speculations and exploratory research. In H. Levin and J. Williams, Eds. Basic Studies on Reading. (to be published)
5. Neisser, U. Cognitive Psychology. New York: Appleton-Century-Crofts, Inc., 1967. 351 pp.
6. Treisman, A. Monitoring and storage of irrelevant messages in selective attention. J. Verb. Learn. Verb. Behav. 3:449-459, 1964.

DISCUSSION

DR. LISS: Speech is different from reading, in that you are working on what you just heard, rather than anticipating. For example, if I said—suppose you had wanted to—now, I am saying—

DR. HOCHBERG: I was trying to listen to you, and what I heard was, of course, impossible to anticipate. My presentation is obviously simplified, but I think that your intuitive objection is also too simple. That no skilled action requires attentional activity is speculation on your part. If you are listening to difficult speech—relatively low-redundancy speech—coming at a fairly rapid clip, of course your anticipations are relatively slow.
If you had been able to attend to what I was

saying very, very carefully, listened for all the inflections, it would have been another matter.

DR. LISS: You are reducing the possible alternatives to such a low level that you can anticipate full names. Usually, we do not speak this carefully, and the senses are receiving novel stimuli and are unable to differentiate sentences. There are too many possibilities to conclude that.

DR. HOCHBERG: The sentences may be novel, but the contexts in which those sentences are used under normal conditions are far from novel. I am sure you did not have to pay attention to full names to get a full name. You do not have to test those full names. These are working names, and you have far from novel characteristics here—you can hear the names with these preliminary sounds just as well as with the full names. Just listen to all the political speeches. I think you will find that the only information coming in that you actually need may be the first few sounds, to give a clue as to what the speaker is saying. You are perfectly capable of handling familiar speech at much greater speeds than you normally would be able to tolerate with denser, more foreign types of speech. One does something like that in reading, except that the reader can control his own rate of reading. And the span of anticipation in reading will also depend on both task and redundancy. If you are actually reading a paper, you are not attending to the meaning of what is being said around you. Also, you can read a paper in the sense that you are only proofreading; you might not be able to explain it or make any sense out of what you are reading, but still be doing a good job of proofreading.

It seems to me that you rather nicely focused on the problem of the child who starts to read for the first time. As to whether what I have said has any implication for the varying methods of teaching reading, I do not know. I have a couple of disjointed thoughts about that.
One of them is about the old prescription that seems to be universally followed of using large, well-spaced type with very few words on a page. This can have many explanations. An obvious one is that it requires the beginning reader to hobble his eye movements less; he has to make fewer small, adjacent saccades that run counter to his normal scene-sampling strategies. Also, you can say that the child is not capable of discriminating small letters. I do not know what that means in any reasonable sense, inasmuch as his visual acuity is probably a lot better than mine when he starts to read. It seems clear to me that limitations in his sampling behavior are involved, not anything about receptive fields or about vision itself.

It is by no means rare for children to teach themselves to read. A very small, unknown proportion of children do, in fact, teach themselves to read. This seems to me to be such an important fact concerning the alleged difficulty of acquiring reading capacities that it ought not to be ignored. How these children teach themselves to read, and by what routes, should be attended to. I find it implausible that they are all following the same route. I expect that it does not matter

how they get through the initial stages, how they learn to sample, how to make good guesses. To get back to the point of this meeting, reading, let us try this prescription: The child should be started with material as close to his own listening vocabulary—the units into which he normally breaks heard speech—as possible. If you are dealing with children of the ghetto, whose habitual language is very different from that of the initial primer, then you should design primers in which the constraints are close to the guesses that he is going to make when looking at a block of letters. Primers should be designed so that the child's guesses are going to be right more often than not. Letter-group combinations that do not encourage guessing are probably nonreinforcing. Certain things start children on skimming, rather than reading, and there is a distinction between skimming and reading that workers on "reading" itself have made that is probably unwise.

DR. MASLAND: Some children have much more difficulty with auditory stimuli, particularly verbal, than other stimuli. It is not important to have auditory attention, which is involved, as you and others have pointed out, in the visual representation of language; but it is important to be able to use methods that help children increase their auditory attention when the meanings of the things said are presented in a visual form.

DR. HOCHBERG: I was speaking to just that point. If the child has a vocabulary of expectancies you can increase the span of auditory attention simply by increasing the chunk size of the message, that is, the chunk size for him. Obviously, if someone says "Fourscore and seven years ago," I only have to check it minutes hence to see whether or not it is the Gettysburg Address, although I am not really attending in between. And that is why, at the beginning of my presentation, I put "pay attention" in quotes.
Among children who have difficulty in paying attention, there may be some for whom the difficulty is organic; there may be some for whom this is proof of various motivational factors—they just do not want to listen; and there may be some who cannot pay attention because the redundancy is too low. For the latter to "pay attention" would require far more predictive ability and far more alertness than could be mustered, far shorter reaction times, far larger vocabulary. I think that these elements would have to be separated.

DR. HIRSH: I wonder whether the vision people can tell us whether the span, measured in angles or distances, can vary as one reads across a line? This, it seems to me, would be necessary to make reading analogous to listening to speech. We are talking about the problem of segmentation; in the case of listening to spoken language, the size of the chunk that you take in varies from point to point in a sentence.

I am sure we would all agree that the visual process of reading would be impossible if we forced the child to read through a reduction tube only one letter big, so that he had to scan one letter at a time. However, if language does

Attention in Perception and Reading

make this great difference in reading, then it ought to be true that glances or their content on the retina are of different sizes.

DR. HOCHBERG: You cannot ask that question in that form, I think, because the acuity needed to pick up the confirmatory information for any given message is going to be a function of the redundancy of that message to begin with. If my hypothesis about how much information is going to come next is going to be answered by "Yes, here's a space and a period," I can pick that up at the periphery, maybe 10 deg out. If I am going to have to distinguish between an "e" and an "s," or an "a" and an "s," it is going to have to be, say, within 5 deg. So, indeed, there is a variable span already given in terms of the kind of question that you have asked. Now, if you could partial out that redundancy factor, then you might ask the vision people whether the span is variable, and the answer would probably be "no." But I do not see how you can partial out that redundancy.

DR. SPERLING: It has been shown that, in a single glance, a person usually gets far more information than he can process before making the next glance. It is useful, therefore, to distinguish between the amount of information available in a glance and the fraction of that information that is ultimately processed.

DR. MASLAND: I would like to ask for clarification of a problem that seems to represent a very important concept. You mentioned that, in regard to visual attention, a person samples his surroundings against a relatively fixed map. That is to say, a spatial display is being continually sampled, but it is, in a sense, a fixed pattern, whereas the auditory display and the sampling of it are temporally dispersed elements.
This represents an important aspect of reading tests, which you skimmed over rather lightly: a person in a sampling has a visual map fixed in such a fashion as to achieve a sequential event or a series of events that fits into an auditory display, which is a temporal display.

DR. HOCHBERG: You are correct in your interpretation, and you are correct that I did skim over it lightly. You are also correct in the implication that there is not a lot of filling-in that I can do except to say that it is indeed a problem and that the ability to integrate our successive glimpses of the world into fixed spatial perceptual maps must be rather well established before we learn to read. That ability is certainly drawn on in the reading process, for example, when you go from one line to the next on a printed page as that page moves around while you read in a jogging trolley car. But the temporal sequence, I have been arguing, is not essentially a visual function; it is a linguistic function. It is a matter of simple convention that you are always going to read from left to right until you get to the end of the line. That alone is not sufficient to impose a constant perceptual order; it can only guide a sequence of looking. The temporal order is given by the linguistic structure and is not part of the visual process in any direct sense.

DR. MASLAND: It is unproved that the spatial map is primarily a righthanded

sphere or preoccupation, if you will, and the auditory temporal map primarily a lefthanded sphere. I do not know whether Dr. Sperry's observations confirm that.

DR. INGRAM: Dr. Hochberg has emphasized the importance of the child's learning about the segmental features and the syntax of the spoken language he is in the process of acquiring. But he has not emphasized the importance of nonsegmental features. Rhythm and intonational patterns may well be crucial to the child trying to comprehend the significance of what is said to him at an early stage of speech development. Rules of grammar are probably acquired considerably later. Consider, for example, the complexity of sentences involving the use of the negative, such as "He is coming, isn't he?" or "He isn't coming, is he?" Children have to learn how to use the negative in these situations. It is wrong, for example, to say "He is not coming, isn't he?" or "He is coming, is he?" Before he reaches an understanding of the grammatical rules underlying such utterances, he depends on other informational aspects of language.

DR. HOCHBERG: "He is coming, isn't he?"—there are intonations, paralinguistics, what the story is about, and so on. That is probably why children's books are all picture books—as a substitute for such sources of information.

DR. LINDSLEY: I would like to try to tie together something Dr. Hochberg has said with what we have heard with respect to electric potentials of the brain. I think that we are all aware that one can look and not see, one can see and not perceive, and one can perceive and not remember. Perhaps one can perceive and remember and not learn or develop a concept. Dr. Hochberg was emphasizing, particularly, selective attention to specific points, perhaps to the type that one reads, or perhaps to something else that is related to it. In talking about electric potentials of the brain, Dr.
Buser mentioned associative potentials that have a longer latency than primary potentials. In our work on selective attentiveness (Spong, Haider, and Lindsley, Science 148:395-397, 1965), we have given subjects a pattern of stimuli made up of clicks and flashes alternating in sequence and have instructed them to pay attention to flashes and ignore clicks. The average evoked potentials to the flashes, recorded over the visual area of the brain, were enhanced, whereas the responses to the same flashes were reduced when the subjects were instructed to pay attention to clicks and ignore flashes. The potentials that were enhanced in amplitude were the late components of the evoked response, which would correspond roughly in latency to some of the late or association area responses described by Dr. Buser.

The question was raised for Dr. Buser whether shorter-latency potentials from primary receiving areas spread to association areas, where they are recorded with longer latency. Apparently, they are relatively independent and represent different systems, inasmuch as Thompson, Johnson, and Hoopes (J. Neurophysiol. 26:343-364, 1963) found short-latency, modality-specific responses over primary sensory areas, whereas the longer-latency responses of more than one sense mode

were recorded from association areas, which seemed to be convergence centers. These responses, even in different association or convergence centers, tended to be highly correlated in their response amplitude, in contrast with the modality-specific responses recorded in the specific sensory zones of the cortex, which showed differential response characteristics. In our work with human subjects, recording average evoked potentials to visual, auditory, and somatosensory (median nerve) stimulation, we asked subjects to pay attention to the stimuli of one mode and ignore the other two. The primary, short-latency components of the response to somatosensory stimulation were reduced in amplitude when that sensory mode was selectively attended to, whereas the longer-latency components of the response were enhanced. Thus, the short-latency or primary components of the response appear to represent one system, and the longer-latency components, another, in view of their differential reaction to the same stimuli under the same attentive set or condition (Spong and Lindsley, unpublished data).

We have found that the degree of amplitude enhancement of these late potentials, during selective attention, is a function also of the general arousal or activation level that one can create by producing a more difficult task to perform; therefore, the two things seem to go hand in hand. There is a selective attention effect and an arousal effect. In learning to read, some kind of general arousal or activation or learning level would be helpful; presumably, it would be generated by the motivating influences to read that one could have. The specific, selective attention factors are, as Dr. Hochberg pointed out, partly anticipation. I would agree with Dr. Kagan that they may stem from what has just occurred in the past, because that may serve as a guide or anticipation or expectancy of what is to come in the future.
I would like to emphasize that associated with this there is also an electric potential different from that just discussed. This is what Walter et al. (Nature 203:380-384, 1964) have called a "contingent negative variation" (CNV), or a slow, negative, d-c potential shift. In their experimental situation, the subject was given a warning sound and told that flashes of light would follow and that he should press a key quickly to stop them when they occurred. When the sound comes, and before the flashes appear and before the key is pressed, there is a buildup of a slow negative potential, a d-c shift that discharges when the anticipated flashes occur.

We have been working on a similar experiment with a little more relevance to this particular problem. We presented three flashes 0.5 sec apart, and the subject was instructed to press a key on the third flash. Then we practiced the subject on another stimulus sequence, with the third flash delayed 1 sec after the second one. When the subject knew that he was going to get one or the other pattern, and there was no uncertainty about it and no probability decision to make, there were evoked potentials to each flash, but no CNV or d-c potential shift building up to the onset of the third flash. Thus, this expectancy wave or potential shift seems to reflect anticipation or probability decision on the part of the subject; but, unless there is uncertainty associated with it, the CNV does not occur.

I think that, in the reading situation, the extent to which we are interested in reading depends on the extent to which we can anticipate, as Dr. Hochberg said, or on what expectancy we have concerning what is to come, either because we want to confirm something (if it is, say, scientific writing) or because it is something that we can anticipate as new and unique. This can be contrasted with proofreading, in which one is simply looking to see whether the words are spelled or put together correctly and does not necessarily understand or remember the content of what is read. In other words, the purpose or goal of the reading makes a great deal of difference. When you read for your own information, you read for what you expect to get out of it, and not in terms of these other characteristics.

DR. KAGAN: I was under the impression that you only get this rising potential if the subject has to make a motor act, and not under such circumstances as when you say, "When the light comes on, you are going to see an extremely attractive picture." Is it correct that you need a motor signal?

DR. LINDSLEY: I do not believe so. Vaughan, Costa, and Ritter (Electroenceph. Clin. Neurophysiol. 25:1-10, 1968) have published an article that suggests that, but I think we have evidence that the motor response is not needed. In our experiment, there is no motor response.

DR. KAGAN: The question is, do you have to have an intention to make a motor response to get the effect?

DR. LINDSLEY: I do not think so, but it is still debatable.
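The average-evoked-potential measurements Dr. Lindsley describes rest on one computation: time-locked averaging across repeated stimulus presentations, which cancels ongoing background EEG while the stimulus-locked response survives. The sketch below is a minimal simulation of that idea, not Lindsley's actual procedure; the waveform shape, noise level, and trial count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def evoked_response(t):
    # Invented stand-in for a late evoked component peaking near 200 msec.
    return 5.0 * np.exp(-((t - 0.200) ** 2) / (2 * 0.030 ** 2))

t = np.linspace(0.0, 0.5, 250)   # one 500-msec epoch after each flash
n_trials = 100

# Each recorded epoch = stimulus-locked response + ongoing EEG (modeled
# here as independent Gaussian noise much larger than the signal).
epochs = evoked_response(t) + rng.normal(0.0, 10.0, size=(n_trials, len(t)))

average = epochs.mean(axis=0)    # the time-locked ("average evoked") potential

# Background activity shrinks roughly as 1/sqrt(n_trials), so the averaged
# trace tracks the true response far better than any single trial does.
single_err = np.abs(epochs[0] - evoked_response(t)).mean()
avg_err = np.abs(average - evoked_response(t)).mean()
print(single_err, avg_err)
```

With 100 trials the residual noise in the average is about a tenth of that in a single epoch, which is why amplitude changes in late components (the enhancement under selective attention described above) become measurable at all.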

IRA J. HIRSH

Visual and Auditory Perception and Language Learning

It is assumed that there is a relationship between language comprehension and reading. One of the reasons it would be so difficult to teach a 1-year-old child to read is that there is very little written material in 1-year-old language. Perceptual modalities have something to do with learning a language, so that we really want to hear from the reading experts about how language affects reading. I will add a little information on that point myself, and then comment on one aspect of language learning with particular reference to visual and auditory perception. Finally, I would like to suggest some relationships between perception and response, especially for speech.

LANGUAGE LEARNING

Children from the age of about a year show evidence of both producing and appreciating grammatical structure, using phrases and sentences, although the rules that they appear to follow may not happen to be the adult rules. Until about 3 or 3½ years of age, they seem to learn, not by memorizing cases, declensions, inflections, and conjugations, but rather by comprehending what some linguists have called the "deep structure"

that is common to a variety of surface grammatical forms. Vocabulary does not appear to become asymptotic until about the twenties, at least in most people.

I would like to concentrate on a third aspect of language, its phonology, because that is the spoken form of the language that is represented in the written form. Phonetic development takes place after the age of about 6 months. Before that, there is some babbling or noisemaking that seems to be devoid of recognizable phonetic content. After about 6 months, we begin to hear some phoneme-like production. Beginning at about a year, these speech-like sounds are used in phrases and sentences.

PHONOLOGY

One of the methods of teaching reading tries to establish a code-like relation between printed symbols and spoken ones. Unfortunately (in some cases), the printed-symbol unit is the letter, and the spoken-symbol unit is the phoneme. It might be dangerous to use phonemes as models of printed letters, for two reasons. First, phonemes do not exist as entities except in linguistic analysis. The speech output of a talker is essentially continuous sound. It is broken up into chunks, but by listeners who know the speaker's language. This chunking or segmenting is an active process on the listeners' part. In reading, the chunks or words are already marked off—by spaces. Second, the analyzed phonemes, as products of analysis, are themselves not very consistent. I would like to present some data that Eguchi and I will publish soon.1

Spectral Features

Spectrographic analysis displays frequencies of sound on the ordinate of a graph, and can be read essentially as one reads music; pitch or frequencies go up or down, and time is shown on the abscissa. By connecting the amplitude peaks, we outline the "formants" characteristic of that vowel. We asked some children, ranging in age from 3 to 13 years, and some adults to speak two sentences: "He has a blue pen" and "I am tall."
We had them repeat these sentences on five different occasions, and analyzed all such productions with the spectrographic technique. We were particularly interested in how precise the subjects were in saying the same thing

five times. If we characterize the vowel in "he" as reaching a first peak or formant in one region of the spectrum and a second formant in another region, how repeatable are those formants?

The repeatability under study here is measured by the intertrial standard deviation (SD) of those formants for each person, as shown in Figure 1. The points show an average SD for each age group. The standard deviations for each subject are based on five recitations of the two sentences. The points represent the vowel sounds in the words of the sentences. Variances for the first formant in each vowel are different for the different vowels.

I think the impression is clear for the first formant. Children are very variable when they are 3 or 4 years old, and their speech variability decreases as their precision of speech increases, until they reach the age of about 11 or 12. At that time we can speak of speech "habits" having been formed, but not before. Figure 2 shows similar results for the second formant. There is no remarkable exception to the rule that variabilities decrease over time.

What worries me about teaching youngsters that a certain letter represents a certain sound is that they do not produce a particular sound each time they intend a phoneme. One of the reasons this study got started was the difficulty we had when we moved to another country and spoke a second language. I have great difficulty in understanding French youngsters, although I understand French adults well. My Japanese colleague finds it impossible to understand American children, but he has little trouble with American adults. The reason is that these productions are extremely variable in children. We must conclude that the phoneme is, at best, a very labile model to use for the printed letter.

Temporal Features

The vowel spectrum for the words "blue" and "tall" is characteristic of a vowel in a steady state for some period.
But there are also, in speech, some features of phonemes that depend on temporal gaps. We can lump these together under the term "temporal features." The spoken names of letters (as in the alphabet) do not always correspond to the sounds of the spoken letters themselves, and this disparity is particularly great in cases in which one cannot even say what a letter stands for, except as a transition between two other sounds. We looked at one such temporal feature: the interval between the "b"

[Figure 1 appears here; only its caption is recoverable.]

FIGURE 1 Intrasubject variability in formant 1 typical of the different age groups, as a function of age. Each point represents the square root of the average variance for each age group. (Reprinted with permission from Eguchi and Hirsh.1)

[Figure 2 appears here; only its caption is recoverable.]

FIGURE 2 Intrasubject variability in formant 2 typical of the different age groups, as a function of age. Each point represents the square root of the average variance for each age group. (Reprinted with permission from Eguchi and Hirsh.1)

explosion in the word "blue" and the onset of phonation corresponding to the letter "l"—which is about 40-50 msec. Such intervals constitute an integral part of speech, because their length helps the listener identify sounds. Here again we measured variability. If the child says "blue" five times in a sentence context, is the interval always 40-50 msec, or does it vary?

Figure 3 shows that variability changes with age. The mean interval between the "b" and the "l" or the "p" and the "e" in "pen" does not change over time, but the average intrasubject variability around that mean decreases sharply with age. It is interesting that it approaches its asymptotic minimum (corresponding to the adult value) at a much earlier age than do the spectral features of the vowels. Precision in the temporal feature, at least in English, is extremely conspicuous. Whether this

[Figure 3 appears here; only its caption is recoverable.]

FIGURE 3 Intrasubject standard deviation of three temporal features of the words "blue," "pen," and "tall," as a function of age. (Reprinted with permission from Eguchi and Hirsh.1)

is because temporal features are more important in language learning than the spectral features of vowel identification is difficult to say. There are some complications related to phonemic load over time that we cannot go into now.

RELATIONSHIP OF LANGUAGE COMPETENCE AND READING

Handicap of Deafness

Deaf children who have not learned to speak learn to read with great difficulty. It is interesting that they are taught to read by teachers who use paper and pencil to institute communication. Until then, there has been no communication between that child and others except by signs. If we could find the printed correlate of the conventional signs of the deaf, perhaps we could teach them. Instead, alphabetic teaching is used, but through the medium of interpersonal written communication.

In the case of deaf children who learn to speak through use of what residual hearing they have, the teaching of reading is somewhat simpler, and it can begin at a much earlier age (about 3 or 4 years) to make up for what will be lost time. Even so, throughout the years of the elementary school, deaf children, even those who speak, generally maintain a retardation in reading of about two grade levels.

Let me give the essentials of a study done by Hartung2 in connection with his recent Ph.D. dissertation at Washington University. Experiments have demonstrated that children can produce nonsense syllables (trigrams) more easily when they are pronounceable than when they are not pronounceable. For example, the syllable "mox" will be recognized correctly more often than the group of three letters, "mxo," which does not conform to normal spelling traditions in English and is very difficult to pronounce. Dr. Hartung wanted to study not only the variable of familiarity with the code to tie together the graphemes and phonemes, but also the variable of letter familiarity; therefore, he used trigrams that were made up of Greek letters.
Hartung asked his subjects to do two tasks. In the first, after a series of flashes lasting a couple of seconds, he asked children to tell whether a particular letter was present. No reproduction was needed. For three exposures, normal children about 8 years old correctly identified the presence or absence of a single Greek character in about 78% of the trials,

and deaf children of about the same age, in 75%. We conclude there is no significant difference in identifying Greek letters. In identifying the letter "a," hearing children after brief exposures were correct in 89% of trials and deaf children, in 71%, a significant difference.

When trigram reproduction was required—writing down all three letters—the deaf children were at an even greater disadvantage. For example, after brief exposures of pronounceable trigrams, the deaf children were 31% correct in their reproductions and the hearing children, 71%. For the nonpronounceable trigrams, deaf children were only 15% correct and hearing children, 48%. We expected the deaf children to do more poorly than the hearing children on trigrams in general. What was interesting was that they did even worse on the nonpronounceables than on the pronounceables. In short, in the process of learning what speech they had managed to learn up to the age of 8 or 9 years, they had acquired enough of the rules of correspondence between phonemes and graphemes for that difference to show up in their responses.

There was considerable spread in these data. In normally hearing children, there was no correlation between these flash-recognition scores and reading, as measured on a Gates-MacGinitie Form C, but in the deaf children, there was a significant correlation of 0.5.

I am not going so far as to say that we read with our ears, although I recognize that different specialists view things from their own points of view. George Sperling has said that short-term memory very often involves a conversion to auditory storage. I suggest that such storage is especially important when part of the information being stored is of a language—that is, is verbal. There is something special about verbal responses that I do not quite understand.

Stimulus-Response Compatibility

We were interested in information processing and so-called compatibility between response and stimulus.
We have tried to investigate responses to verbal and nonverbal stimuli in two modalities, hearing and vision. We restricted ourselves to a vocabulary in which the objects were named by relatively simple words, like "bell," "cat," "dog," and "baby" and were sound-producers themselves. We could flash pictures of objects and of the printed words for them and play the spoken words and the sounds of the objects (the bark of a dog, the meow of a cat, the cry of a baby, the

sound of a bell, and so on) on tapes. This gave four different vocabularies, two visual and two auditory; considered the other way, two were verbal and two were nonverbal.

Reaction time was measured for three different responses. The first was the pressing of a key on which a picture of the object appeared, and the second was a similar response, but with a printed word on the key. The third response was merely a spoken identification or verbal response. One hypothesis from the notion of stimulus-response compatibility was that the reaction time would not increase with number of alternatives for those combinations of auditory-verbal stimuli and spoken responses, visual-picture stimuli and visual-picture keys, or visual-verbal stimuli and visual-word keys. Other combinations would show less compatibility between the form of stimulus and the form of response.

The subjects were 30 women, 19-24 years old. Our first experiment was a two-alternative choice. There were two keys, one with the word "dog" and the other with the word "bell"; the subject was to push the appropriate one when she saw or heard a stimulus. As we increased the number of choices to eight, there were four keys in each of two little semicircles, so that the eight fingers could rest on them. We even controlled for finger preference by using different arrangements of the keys for different subjects. The reaction time did increase with the number of alternatives, as predicted, when the response was pressing a key.

What was peculiar in this series of experiments was that, when we asked the subject simply to tell us what the visual word was, the verbal-response reaction time did not increase with the number of alternatives (which were told to the subject before the trial). Apparently, one does not have to translate modalities.
This result appears to be independent of the stimuli and appears also to have to do more with a verbal response as such than with the relationship between the response and the stimulus.

The results were similar with an auditory stimulus. A word was spoken on a little Language Master card, and the subject was supposed to push the key that had the appropriate word or picture printed on it. The reaction times followed the general rule except for verbal responses, which did not increase with the number of alternatives. The general case appears to be that the reaction time increases with the number of stimulus alternatives (i.e., the slope of the line relating these two quantities is high), whereas in the verbal case, the reaction time stays relatively constant over different numbers of alternatives (i.e., the slope of the function is low or near zero).
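The slope contrast Hirsh describes, key-press reaction time growing with the number of alternatives while vocal reaction time stays nearly flat, is the relation that the Hick-Hyman law formalizes: RT ≈ a + b·log2(N). The sketch below shows how those slopes would be estimated by least squares; the reaction-time values are invented for illustration and are not Hirsh's data.

```python
import numpy as np

n_alternatives = np.array([2, 4, 8])
bits = np.log2(n_alternatives)        # stimulus uncertainty in bits

# Hypothetical mean reaction times in seconds (invented numbers).
press_rt = np.array([0.45, 0.58, 0.72])   # key press: rises with uncertainty
verbal_rt = np.array([0.52, 0.53, 0.54])  # naming: nearly flat

# Fit RT = intercept + slope * log2(N); the slope is the quantity
# the text calls "the slope of the function."
press_slope, press_intercept = np.polyfit(bits, press_rt, 1)
verbal_slope, verbal_intercept = np.polyfit(bits, verbal_rt, 1)

print(f"press slope:  {press_slope:.3f} s/bit")
print(f"verbal slope: {verbal_slope:.3f} s/bit")
```

A near-zero fitted slope for the vocal response is exactly the "low or near zero" case in the text, the signature of a highly compatible, long-practiced stimulus-response mapping.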

The fourth stimulus was the auditory object, the sounds. They are probably not as easy to identify; it takes longer to comprehend what they are. Here, again, the slope of the function relating response time to the number of alternatives was much lower for the verbal response than for the press response.

I am sure that, if these subjects practiced over and over again for many months with these press responses, we could bring the regression slopes down. Several workers have already shown that, if one practices a connection between a response and a stimulus mode for a long time, one experiences no increase in reaction time with number of alternatives. The point of these data is to show that long-practiced correspondence is already present by virtue of "being part of the language." The stimulus mode—whether it is auditory or visual, verbal or nonverbal—does not make much difference. The press-response time increases with the number of alternatives, but the verbal- or vocal-response time does not.

LANGUAGE COMPETENCE AND SOUND

There are obviously contrasts between the reading of printed language and the understanding of speech. I should point out first that, although every known society communicates by talking, many societies are completely illiterate.

A second point is extremely important: sound can arrive at the ears from any direction. Images fall on the eye only from sources that one is looking at. Thus, those dealing with vision must be concerned about eye movements, fixation, and so on, to get the target to the macula. But the attention of which Dr. Hochberg has spoken takes place as sensory information goes into the auditory system, because we can hear from all around us; we do not need to be oriented toward the source. If we extrapolate to exposure to spoken language, as opposed to written language, there are hours of spoken language impinging on our ears all day long.
This occurs particularly in the case of a growing child, whereas written or printed language becomes part of his stimulus input for relatively short periods.

A third point, mentioned earlier, is that the sound of speech is continuous, and the listener breaks it up into appropriate chunks, because

he happens to be a member of the same language community as the speaker. In the case of reading, some of that chunking is done by punctuation and the spaces between words.

I do not know about the visual counterpart of this, but language modifies some kinds of nonlanguage auditory perception. The Haskins Laboratory work3 has demonstrated that some kinds of discrimination of nonverbal stimuli are sharpened in the parts of the stimulus dimensions that correspond to the boundaries between different phoneme categories. As far as I am aware, no one has suggested that some aspects of visual perception are modified by the very process of reading. As a trivial example, do letter-like forms become more discriminable than non-letter-like forms after a person has been using letters for a while? My suspicion is that they do not. Another aspect of modification is segmentation. The way in which one listens to speech sounds is quite different from the way in which one listens to other sounds. The language rules that one has internalized in a sense control auditory "glances." There are probably other kinds of auditory subsets that follow the same rule and illustrate the same principle. For example, Morse-code telegraphers group nonverbal auditory stimuli as we do when we listen to speech sounds and as careful listeners do when they listen to various kinds of musical passages.

Another kind of contrast has to do with critical age. It seems clear that, if a deaf child is identified before the age of 1 year, he can be prepared by suitable auditory stimulation to use his residual hearing better for the learning of speech at the age of 2 or 3 than if he does not start being stimulated until the age of 2 or 3. Although that is not quite a critical age for learning spoken language, it is something like "if you don't catch it this early, then it isn't going to be as good for general auditory reception later."
We do not know the critical age for learning speech, but teachers of normal speech development have suggested that, because some stages of syntactic development are characteristic of the normal child at the age of 1 or 2 years, this is the age at which speech learning must begin. We have been told about similar observations on the visual system in handicapped children of one sort or another. For the visual system in general, I would be extremely interested to know whether a deprivation of the printed word has any serious consequences for learning to read. Or can we just as well start at the age of 10 without suffering difficulty?

IRA J. HIRSH SENSOR Y DEPRIVA TION I have not much mentioned deaf children, who afford, perhaps, a natural experiment on the question of the effects of sensory deprivation. I sup- pose that we can think of the congenitally deaf child as being like the congenital cataract patient—not in terms of the underlying pathology, but in the sense that we can, at some time after birth, alleviate the deficiency. In the case of a blind child, we remove the cataract; in the case of a deaf child, we amplify everything by about 60 or 70 db, and the effect is roughly comparable. He will not hear everything, to be sure, but all deaf children of my acquaintance have some sensitivity to fre- quencies up to about 500 Hz. I will not discuss whether the sensitivity is auditory or tactile; both can be used as information receivers, and both seem to benefit from early stimulation. The deaf child who is left unattended until the age of 6 or 7 years can be taught to speak only with great care and difficulty. When sound was amplified sufficiently so that deaf children would respond, R. Gengel (in a doctoral study now being completed) found poor discrimination as a result of auditory sensory deprivation. An example is the child who can hear tones at 110 db but cannot distinguish frequency in the low- frequency range. A trained person with normal hearing can distinguish a frequency of 500 Hz from one of 505 Hz. These deaf youngsters on the average heeded differences of around 80-100 Hz; that is, they could not discriminate unless the frequencies were first 500 Hz and then jumped up to about 600 Hz. After about 3 months of training, the poor discrimina- tion almost disappeared. He never got them down to a 5-Hz difference, but they did get down to 10 or 12 Hz, which is the difference that an untrained observer could probably distinguish. We do not have more information like this for the ear because we do not have the elegant battery of clinical tests that we have for visual func- tion. 
We measure the sensitivity by making an audiogram, and often that is the sole basis of information on what a child can hear. There are dozens of tests for visual function, but we do not have their analogs for auditory function. Ordinarily, we use lists of words and ask the subjects to discriminate them. If a child is 2 or 3 years old and has not spoken yet, the testing routine is difficult. I have one suggestion for those who must face this problem of the long-deprived child, such as an underprivileged child from the ghetto who is very retarded in reading: to get him to read better, get him to talk better.

REFERENCES

1. Eguchi, S., and I. J. Hirsh. Development of speech sounds in children. Acta Otolaryng. Suppl. (in press)
2. Hartung, J. H. Visual perceptual skills, reading ability and the young deaf child. Dissertation. Washington University, 1968.
3. Liberman, A. M., K. S. Harris, H. S. Hoffman, and B. C. Griffith. The discrimination of speech sounds within and across phoneme boundaries. J. Exp. Psychol. 54:358-368, 1957.

DISCUSSION

MR. ADAMS: When a person is listening to a spoken message, the temporal order of the arrival of the message is in the control of the speaker, not the listener. When a person is reading a written message, the temporal order of the arrival of the message is in the control of the reader, not the writer. Those facts are due to the different properties of the space (or medium) through which messages are transmitted.

When a message is transmitted through acoustic space, it occurs in real time and exists only in the temporal dimension. The decoding of acoustic messages obeys special rules associated with acoustic space. When a message is transmitted in two-dimensional visual space, the message is "stored" in a medium that has no temporal dimension. Written language is acoustic language encoded in visual-form space; a written message has no temporal dimension but only the two dimensions of length and breadth. It is the responsibility of a reader to supply the temporal dimension according to the rules of the written language that govern the direction of the visual scanning process.

If a reader scans the letters "d o g" from left to right, he will decode the message to read "dog." But if he scans them from right to left, he will decode the message to read "god." I suggest that this directional scanning may account for the reversals that we sometimes see with a retarded reader.
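Adams's point about scan direction can be sketched in a few lines: the printed letters carry no temporal order of their own, and the reader's scanning rule supplies it. The letter string and the two scanning rules here are simply the ones from his example.

```python
# Sketch of Adams's example: the stored visual message is order-free;
# the reader's scanning rule supplies the temporal dimension, and
# opposite rules decode the same letters into different words.

letters = ["d", "o", "g"]   # the message as stored on the page

def decode(letters, direction):
    """Supply the temporal dimension by scanning in a given direction."""
    if direction == "left-to-right":
        return "".join(letters)
    if direction == "right-to-left":
        return "".join(reversed(letters))
    raise ValueError(f"unknown scanning rule: {direction}")

print(decode(letters, "left-to-right"))   # dog
print(decode(letters, "right-to-left"))   # god
```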
So we have come full circle: visual chunking is not done for us by the page; a reader must supply the chunking in the same way that he supplies the temporal

chunking when he listens. One of the big differences between decoding language stored in visual space and decoding language stored in acoustic space is that a receiver decoding language stored in visual space must know the rules for supplying the temporal dimension. He is relieved of that responsibility when he is decoding the message in acoustic space, and many of our problems in the strategy of teaching reading are due to overlooking this crucial fact.

I think this problem was identified by one of the experiments in Project Literacy conducted at Cornell. It was discovered that the child must learn the rules governing the visual cues associated with recognizing the visual boundaries of words. Knowing these rules is crucial for successfully learning how to decode written language. I do not recall all the details associated with that particular experiment, but it will suffice to say here that members of the Project Literacy staff reviewed the teacher's manuals that accompany commercially prepared children's readers, such as the basal reading series prepared by children's textbook publishers. They were looking for specific instructions to the teacher on how to prepare specific lessons that overtly instructed a child on matters pertaining to the visual boundaries of words. After the review, the Project Literacy staff surmised that teachers are neglecting this important aspect of learning to read. Indeed, at no time did the teacher's manuals mention that not only alphabet recognition must be taught, but also word recognition and attention to both the white spaces between letters and the white spaces between words, sentences, and paragraphs, because those spaces contain important visual cues to the decoding process.
A child must learn these rules in order to make correct decoding decisions, because a finite number of letters can be grouped in an unlimited number of ways to make up words, which can in turn be combined in an unlimited number of ways to make up messages. I just wanted to take issue with you in a friendly way and ask how you think the chunking is being done by ghetto children when they decode messages in the visual mode, as opposed to the acoustic mode.

DR. HIRSH: My main emphasis should have been restricted to the identification of words. I am less concerned here with temporal order than with segmentation. Even though children may not yet know the significance of the rules that tell them what to do with the larger white spaces between the words, those white spaces do appear in the visual stimulus pattern. No such spaces appear in the auditory message. If I showed you an oscilloscopic tracing of the waveform of the sounds that make up a sentence, you would see that there are few if any silences and that the acoustic message is essentially a continuous sound whose internal structures change with time. One must have learned to speak and to listen in a particular language in order to organize pieces of that continuous sound in such a way that they will correspond with the morphemic elements of the language. Such separation or segmentation must be put in by a listener as he

listens. Similar segmentation may also be a part of the reading process, but at least the segments are more clearly marked with spaces on the printed page.

DR. BOYNTON: I think Dr. Liberman of the Haskins Laboratory at the University of Connecticut recently suggested that the basic processing mechanisms required for the appreciation of speech sounds (which vary from one language to another, but perhaps not very much) might be built in as part of the sensory regulating apparatus. I do not think that anyone would suggest that such equipment could conceivably be built in for the processing of the visual counterpart of letters and words. This would constitute a fundamental difference between reception in the two modalities.

DR. HIRSH: If what he has said is as general as saying that humans contain a predisposition for spoken language, then I agree. If he is implying that there are various built-in categories for auditory perception and phonemes, then I am not sure that I would go that far. These become built in very soon, I suggest, but they are certainly different from one language to another; and certainly no neurophysiologist, to my knowledge, has discovered a feature extractor that corresponds with phonemic features. They have found a feature extractor that corresponds with some interesting acoustic features, such as whether the tones glide upward or downward, something like edge detectors or angle detectors if you like, but none that is specifically phonemic.
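Hirsh's contrast between the two media can be illustrated directly: a printed sentence segments on its marked boundaries, while the continuous acoustic stream offers nothing analogous for a naive segmenter to find. The sentence used here is a made-up example, and treating a de-spaced string as a stand-in for the continuous acoustic signal is of course only an analogy.

```python
# Sketch of the segmentation contrast described above.  The printed form
# carries explicit word boundaries (white space); the acoustic form is a
# continuous stream, so the listener must supply the segmentation from
# knowledge of the language.  The example sentence is hypothetical.

printed = "the dog ran home"
acoustic_like = printed.replace(" ", "")   # "thedogranhome": no marked boundaries

print(printed.split())        # ['the', 'dog', 'ran', 'home'] -- boundaries given by the page
print(acoustic_like.split())  # ['thedogranhome'] -- nothing for a naive split to find
```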
