Some Developments in Research on Language Behavior
Pages 208-248



From page 208...
... Structural linguists were developing objective procedures for analyzing the sound patterns and syntax of a language, according to well-defined, systematic principles (e.g., Bloomfield, 1933). Students of dialect were applying such procedures to construct atlases of dialect geography (Kurath, 1939)
From page 209...
... How do these capacities derive from their biophysical structures, that is, from human anatomy and physiology? What is the course of their ontogenetic development?
From page 210...
... A similar combinatorial principle underlies the structure of both levels. Consider, first, the fact that a six-year-old, middle-class American child typically has a recognition vocabulary of some 8,000 root words, some 14,000 words in all (Templin, 1957).
From page 211...
... Let us now turn back the clock and consider the early vicissitudes of three areas of applied research that contributed to this development. Three Areas of Applied Research in Language In the burst of technological enthusiasm that followed World War II, federal money flowed into three related areas of language study: automatic machine translation, automatic speech recognition, and automatic reading machines for the blind.
From page 212...
... From such observations, it gradually became clear that we would make little progress in machine translation without a deeper understanding of syntax and of its relation to meaning. The initial assumption underlying attempts at automatic speech recognition was similar to that for machine translation and equally in error (cf.
From page 213...
... There now exist automatic speech recognition devices that recognize vocabularies of roughly a thousand words, spoken in limited contexts by a few different speakers (Levinson and Liberman, 1981). Scientific texts with well-defined vocabularies can now be roughly translated by machine, then rendered into acceptable English by an informed human editor.
From page 214...
... However, none of the devices that speak, listen, or understand actually speaks, listens, or understands according to known principles of human speech and language. For example, a speech synthesizer is the functional equivalent of a human speaker to the extent that it produces intelligible speech.
From page 215...
... Chomsky (1956) proved mathematically, as work on machine translation had suggested empirically, that a simple left-to-right grammar can never suffice as the grammar of a natural language.
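The classic illustration of this result is that dependencies of the mirror form aⁿbⁿ, the abstract pattern behind nested, center-embedded clauses, cannot be recognized by any finite-state, left-to-right device: recognition requires unbounded memory. A minimal sketch, in which the toy language and the use of a counter in place of a stack are illustrative choices, not drawn from the text:

```python
def is_anbn(s):
    """Recognize the language a^n b^n (n >= 1).

    A left-to-right (finite-state) grammar cannot do this, because it
    would need a distinct state for every possible count of a's; here
    an unbounded counter (the string length) does the work a stack
    would do in a pushdown automaton.
    """
    n = len(s)
    if n == 0 or n % 2 != 0:
        return False
    half = n // 2
    return s[:half] == "a" * half and s[half:] == "b" * half

print(is_anbn("aaabbb"))  # True
print(is_anbn("aabbb"))   # False: counts do not match
```

The point of the sketch is negative: any device with only finitely many states must eventually confuse two different counts of a's, so no such device can accept exactly this language.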
From page 216...
... Noun Phrase → Article + Noun (3)
Verb Phrase → Verb + Noun Phrase (4)
From page 217...
... Whether a complete grammar of English, or any other natural language, could be written as a set of phrase-structure rules is not clear. In any event, Chomsky argues in Syntactic Structures that such a grammar would be unnecessarily repetitive and complex, since it does not capture a native speaker's intuition that certain classes of sentence are structurally related.
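The rewriting process that phrase-structure rules describe can be sketched directly. In the fragment below, rules (3) and (4) are taken from the text; the starting rule Sentence → Noun Phrase + Verb Phrase and the word lists are hypothetical additions needed only to make the sketch run:

```python
import random

# Rules (3) and (4) appear in the text; the "S" rule and the three
# lexical rules are illustrative assumptions.
RULES = {
    "S":   [["NP", "VP"]],        # assumed: Sentence -> NP + VP
    "NP":  [["Art", "N"]],        # (3) Noun Phrase -> Article + Noun
    "VP":  [["V", "NP"]],         # (4) Verb Phrase -> Verb + Noun Phrase
    "Art": [["the"], ["a"]],
    "N":   [["child"], ["language"]],
    "V":   [["hears"], ["learns"]],
}

def expand(symbol):
    """Rewrite a symbol top-down until only terminal words remain."""
    if symbol not in RULES:          # terminal: an actual word
        return [symbol]
    words = []
    for s in random.choice(RULES[symbol]):
        words.extend(expand(s))
    return words

print(" ".join(expand("S")))  # e.g. "the child hears a language"
```

Every sentence this fragment generates has the same Art-N-V-Art-N skeleton, which makes concrete the worry quoted above: a pure phrase-structure grammar enumerates forms but says nothing about which sentences are structural relatives of which others.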
From page 218...
... The complementary role of psychology in language research is to describe language function and its course of behavioral development in the individual, while physiology, neurology, and psychoneurology chart its underlying structures and mechanisms. Whether this sharp distinction between language as a formal object and language as a mode of biological function can, or should, be maintained is an open question.
From page 219...
... , brain specialization for language, and language development in children. Acoustic Phonetics We begin with audible speech, partly because we are then following the course of development, both in the species and the individual, from the bottom up; partly because it is in this area, where we are dealing with observable, physical processes, that the most dramatic progress has been made; and partly because we have come to realize in recent years that the physical medium of language places fundamental constraints on its surface structure.
From page 220...
... The result of this interleaving is that, at any instant, the sound is conveying information about more than one phonetic segment, and that each phonetic segment draws information from more than one piece of sound, an obvious problem for automated speech recognition. Unfortunately, we cannot, as was at one time hoped, escape from this predicament by building a machine to recognize syllables, because similar interactions between phonetic segments occur across syllable boundaries.
From page 221...
... The sound spectrograph revealed, for the first time, the astonishing variability of the speech signal both within and across speakers. It was also the basis for the first systematic studies of speech perception, from which we have learned which aspects of the signal carry crucial phonetic information.
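What the sound spectrograph computes can be approximated by a short-time Fourier transform: slice the waveform into overlapping windows and measure the energy at each frequency over time. A minimal NumPy sketch, in which the 25 ms window and 10 ms hop are typical analysis values, not taken from the text:

```python
import numpy as np

def spectrogram(signal, sr, win_ms=25, hop_ms=10):
    """Short-time Fourier magnitude of a 1-D signal.

    Returns an array of shape (frames, freq_bins): each row is one
    windowed slice of the waveform, each column one frequency band,
    which is the time-frequency picture a spectrograph draws.
    """
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    window = np.hanning(win)              # taper to reduce leakage
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# A pure 440 Hz tone concentrates its energy in a single band.
sr = 8000
t = np.arange(sr) / sr                    # one second of signal
S = spectrogram(np.sin(2 * np.pi * 440 * t), sr)
print(S.shape)                            # (frames, freq_bins)
```

For real speech the same computation shows the variability described above: formant bands that shift continuously with context rather than sitting at fixed frequencies.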
From page 223...
... FIGURE 3 Above, a spectrogram of the utterance To catch pink salmon.
From page 224...
... First, the cues for a given phonetic segment (that is, for a particular consonant or vowel) vary markedly as a function of context.
From page 225...
... With this in mind let us turn to recent work on American Sign Language, which draws on a different perceptuomotor system from that of spoken language. AMERICAN SIGN LANGUAGE Speech is the natural medium of language.
From page 226...
... Thus, the original language was in fact based on a spoken language. However, over the past 165 years it has developed among the deaf into an independent sign language.
From page 227...
... Thus, a grammatical function variously served in spoken language by word order, case markers, verb inflections, and pronouns is fulfilled in ASL by a spatial device. Finally, ASL has a variety of syntactic devices that make use of the face.
From page 228...
... Differently put, we still do not know whether the relation between signed and spoken language is one of analogy or homology. If the two systems prove to be homologous, that is, if they prove to draw on the same neural structures and organization, we will have strong evidence that language is a distinct cognitive faculty.
From page 229...
... has been classically found to be nonfluent: having good comprehension but awkward speech, characterized by pauses, difficulties in word-finding and distorted articulation; utterances are described as "telegrammatic," consisting of simple, declarative sentences, relying on nouns and uninflected verbs, omitting grammatical morphemes or function words. By contrast, a Wernicke's aphasic has been found to have poor comprehension, even of single words, but fluent speech, composed of inappropriate or nonexistent (though phonologically correct)
From page 230...
... The incorrect alternative showed either a subject-object reversal or an action different from that specified by the verb. Broca's aphasics performed very well on simple declarative sentences and on sentences with strong semantic constraints (as when the incorrect alternative depicted the wrong action)
From page 231...
... The syntactic capacity of the right hemisphere is also limited. The hemisphere can recognize verbal auxiliaries (see above)
From page 232...
... describe three patients, all of whom are native ASL signers and display normal visual-spatial capacity for nonlanguage functions. Their symptoms, resulting from strokes, divide readily into the two broad classes noted above for spoken language: two patients are fluent, one is nonfluent.
From page 233...
... Rather, the hemisphere supports general linguistic functions, common to both spoken and signed language. Thus, despite the left hemisphere's innate predisposition for speech (see section below on language acquisition)
From page 234...
... LANGUAGE ACQUISITION As many as 5 percent of American children suffer from some form of delayed or disordered language development, and many more join the ranks of the illiterate. Moreover, there is growing evidence that the capacity to read depends in large part on normal development of the primary language processes of speaking and listening (Crain and Shankweiler, in press)
From page 235...
... Indeed, the entire enterprise of generative grammar might fail, yet leave the claim of innateness untouched. Certainly Chomsky's linguistic theories have been, and continue to be, a rich source of hypothesis and experiment in studies of language acquisition.
From page 236...
... From our discussion of the problems of speech perception and automatic speech recognition, it will be obvious that we have much to learn about how the infant discovers invariant phonetic and lexical segments in the speech signal. We still do not know how the infant learns the basic sound pattern of a language during its first two years of life and comes to speak its first few dozen words.
From page 237...
... A double dissociation of the left cerebral hemisphere for perceiving speech and of the right hemisphere for perceiving nonspeech sounds within days of birth has been demonstrated both electrophysiologically (e.g., Molfese, 1977) and behaviorally (e.g., Segalowitz and Chapman, 1980).
From page 238...
... The measure provides an index of language development independent of a child's chronological age. Of interest in the present context is that no purely grammatical description of Stage I (MLU = 2.00, with an upper bound of 5.00)
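MLU itself is a simple statistic: the mean number of morphemes per utterance over a speech sample. A minimal sketch, in which the sample utterances and their morpheme segmentation are hypothetical and the detailed counting conventions for child speech are glossed over:

```python
def mlu(utterances):
    """Mean length of utterance in morphemes.

    Each utterance is given as a list of already-segmented morphemes;
    real scoring follows detailed conventions about what counts as a
    morpheme, which this sketch assumes away.
    """
    return sum(len(u) for u in utterances) / len(utterances)

# Hypothetical Stage I sample: mostly two-morpheme utterances.
sample = [
    ["mommy", "sock"],
    ["more", "juice"],
    ["daddy", "go", "-ing"],   # "going" scored as two morphemes
]
print(round(mlu(sample), 2))  # 2.33
```

Because the count is of morphemes rather than words, the measure is sensitive to exactly the grammatical inflections and function words whose gradual appearance marks the stages of development.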
From page 239...
... showed some years ago that parents tend to correct the pronunciation and truth value, rather than the syntax, of their children's speech. Indeed, one of the puzzles of language development is why children improve at all.
From page 240...
... No one would have predicted that applied research on reading machines for the blind would contribute to basic research in human phonetic capacity, lending experimental support to the formal linguistic claim of the independence of
From page 241...
... Baker, C., and Padden, C.A. 1978 Focusing on the nonmanual components of American Sign Language.
From page 242...
... Zurif, eds., Language Acquisition and Language Breakdown. Baltimore: Johns Hopkins University Press.
From page 243...
... Eimas, P.D. 1982 Speech perception: A view of the initial state and perceptual mechanics.
From page 244...
... 1978 Nonmanual signals and relative clauses in American Sign Language.
From page 245...
... Gruber, eds., Language Development and Neurological Theory. New York: Academic Press.
From page 246...
... Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie.
From page 247...
... Stokoe, W.C., Jr., Casterline, D.C., and Croneberg, C.G. 1965 A Dictionary of American Sign Language.

