Some Developments in Research on Language Behavior

MICHAEL STUDDERT-KENNEDY

INTRODUCTION

Fifty years ago the study of language was largely a descriptive endeavor, grounded in the traditions of nineteenth century European philology. The object of study, as proposed by de Saussure in a famous course of lectures at the University of Geneva (1906-1911), was langue, language as a system, a cultural institution, rather than parole, language as spoken and heard by individuals. In 1933 historical linguists were describing and comparing the world's languages, tracing their family relations, and reconstructing the protolanguages from which they had sprung (Lehmann, 1973). Structural linguists were developing objective procedures for analyzing the sound patterns and syntax of a language, according to well-defined, systematic principles (e.g., Bloomfield, 1933). Students of dialect were applying such procedures to construct atlases of dialect geography (Kurath, 1939), while anthropological linguists were applying them to American Indian, African, Asian, Polynesian, and many other languages (Lehmann, 1973). The work goes on. From it we are coming to understand the origins of language diversity: not only how languages change over time and space but also how they and their dialects act as forces of social cohesion and differentiation (e.g., Labov, 1972).

However, the unfolding of the descriptive tradition and the development of new methods and theories in the field of sociolinguistics are not my concerns in this chapter. My concern, rather, is with a view of language that has emerged from a more diverse tradition. For like the taxonomic studies of Linnaeus in botany and of his followers in zoology, the great labor of language description and classification has provided the raw material for a broader science, stemming from the work of seventeenth century grammarians and of such nineteenth century figures as the German physicist Hermann von Helmholtz, the French neurologist Paul Broca, and the English phonetician Henry Sweet. The several strands that their works represent have come together over the past 30 to 40 years to form the basis of a new science of language, focusing on the individual, rather than on the social and cultural, linguistic system.

Since the new focus is essentially biological, a biological analogy may be helpful. It is as though we shifted from describing and classifying the distinctive flight patterns of the world's eight or nine thousand species of birds to analyzing the basic principles of individual flight as they must be instantiated in the anatomy and physiology of every hummingbird and condor. Thus, this new science of language asks: What is language as a category of individual behavior? How does it differ from other systems of animal communication? What do individuals know when they know a language? What cognitive, perceptual, and motor capacities must they have to speak, hear, and understand a language? How do these capacities derive from their biophysical structures, that is, from human anatomy and physiology? What is the course of their ontogenetic development? And so on.

Such questions hardly fall within the province of a single discipline. The new field is markedly interdisciplinary and addresses questions of practical application as readily as questions of pure theory or knowledge. Linguistics, anthropology, psychology, biology, neuropsychology, neurology, and communications engineering all contribute to the field, and their research has implications for workers in many areas of social import: doctors and therapists treating stroke victims, surgeons operating on the brain, applied engineers working on human-machine communication, teachers of second languages, of reading, and of the deaf and otherwise language-handicapped.

The origins of the new science are an object lesson in the interplay between basic and applied research, and between research and theory. To understand this, we must begin by briefly examining the nature of language and the properties that make it unique as a system of communication.

The Structure of Language

If we compare language with other animal communication systems, we are struck by its breadth of reference. The signals of other animals form a closed set with specific, invariant meanings (Wilson, 1975). The ultrasonic squeaks of a young lemming denote alarm; the swinging steps and lifted tail of the male baboon summon his troop to follow; the "song" of the male white-crowned sparrow informs his fellows of his species, sex, local origin, personal identity, and readiness to breed or fight. Even the elaborate "dance" of the honeybee merely conveys information about the direction, distance, and quality of a nectar trove. But language can convey information about many more matters than these. In fact, it is the peculiar property of language to set no limit on the meanings it can carry.

How does language achieve this openness, or productivity? There are several key features to its design (Hockett, 1960). Here we note two. First, language is learned: it develops under the control of an open rather than a closed genetic program (Mayr, 1974). Transmission of the code from one generation to the next is therefore discontinuous; each individual recreates the system for himself. There is ample room here for creative variation, probably a central factor in the evolution of language and in the constant processes of change that all languages undergo (e.g., Kiparsky, 1968; Locke, 1983; Slobin, 1980). One incidental consequence of this freedom is that the universal properties of language (whatever they may be) are largely masked by the surface variety of the several thousand languages, and their many dialects, now spoken in the world.

Second, and more crucially, language has two hierarchically related levels of structure. One level, that of sound pattern, permits the growth of a large lexicon; the other level, that of syntax, permits the formation of an infinitely large set of utterances. A similar combinatorial principle underlies the structure of both levels.

Consider, first, the fact that a six-year-old, middle-class American child typically has a recognition vocabulary of some 8,000 root words, some 14,000 words in all (Templin, 1957). Most of these have been learned in the previous four years, at a rate of about five or six roots a day. As an adult, the child may come to have a vocabulary of well over 150,000 words (Seashore and Erickson, 1940). How is it possible to produce and perceive so many distinct signals? The achievement evidently rests on the evolution in our hominid ancestors of a combinatorial principle by which a small set of meaningless elements (phonemes, or consonants and vowels) is repeatedly sampled, and the samples permuted, to form a very large set of meaningful elements (morphemes, words). Most languages have between 20 and 100 phonemes; English has about 40, depending on dialect. The phonemes themselves are formed from an even smaller set of movements, or gestures, made by jaw, lips, tongue, velum (soft palate), and larynx. Thus, the combinatorial principle was a biologically unique development that provided "a kind of impedance match between an open-ended set of meaningful symbols and a decidedly limited set of signaling devices" (Studdert-Kennedy and Lane, 1980; cf. Cooper, 1972; Liberman et al., 1967).
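To put rough numbers on this combinatorial principle, the sketch below (ours, in Python; the word-length limit and the neglect of phonotactic constraints are simplifying assumptions, not claims about English) counts the phoneme strings available to a 40-phoneme inventory at short word lengths.

```python
# Rough illustration of the combinatorial principle described above.
# Assumptions (ours): a 40-phoneme inventory, words of 1 to 6 phonemes,
# and no phonotactic restrictions -- so these are loose upper bounds,
# not counts of possible English words.

PHONEME_INVENTORY = 40
MAX_WORD_LENGTH = 6

total = 0
for length in range(1, MAX_WORD_LENGTH + 1):
    strings = PHONEME_INVENTORY ** length   # sequences of this length
    total += strings
    print(f"length {length}: {strings:,} possible phoneme strings")

print(f"total up to length {MAX_WORD_LENGTH}: {total:,}")
# Even after phonotactics discards the vast majority of these strings,
# a 150,000-word adult vocabulary is easily accommodated.
```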

We may note, incidentally, that a large lexicon is not peculiar to complex, literate societies: even so-called primitive human groups may deploy a considerable lexicon. For example, the Hanunoo, a stone age people of the Philippines, have nearly three thousand words for the flora and fauna of their world (Levi-Strauss, 1966).

Of course, a large lexicon is not a language. Many languages have relatively small lexicons, and in everyday speech we may draw habitually on no more than a few thousand words (Miller, 1951). To put words to linguistic use, we must combine them in particular ways. Every language has a set of rules and devices, its syntax, for grouping words into phrases, clauses, and sentences. Among the various devices that a language may use for predicating properties of objects and events, and for specifying their relations (who does what to whom), are word order and inflection (case, gender, and number affixes for nouns, pronouns, adjectives; person, tense, mood, and voice affixes for verbs). An important distinction is also made in all languages between open-class words with distinct meanings (nouns, verbs, adjectives, etc.) and closed-class or function words (conjunctions, articles, verbal auxiliaries, enclitics, e.g., the particle "not" in "cannot") that have no fixed meaning in themselves but serve the purely syntactic function of indicating relations between words in a sentence or sequence of sentences. Here again, then, a combinatorial principle is invoked: a finite set of rules and devices is repeatedly sampled and applied to produce an infinite set of utterances.

I should note that many of the facts about language summarily described above are already framed from the new viewpoint that has developed in the past 40 years. Let us now turn back the clock and consider the early vicissitudes of three areas of applied research that contributed to this development.

Three Areas of Applied Research in Language

In the burst of technological enthusiasm that followed World War II, federal money flowed into three related areas of language study: automatic machine translation, automatic speech recognition, and automatic reading machines for the blind. A considerable research effort was mounted in all three areas during the late 1940s and early 1950s, but surprisingly little headway was made. The reason for this, as will become clear below, was that all three enterprises were launched under the shield of a behaviorist theory according to which complex behaviors could be properly described as chained sequences of stimuli and responses.

The initial assumption underlying attempts at machine translation was that this task entailed little more than transposing words (or morphemes) from one language into another, following a simple left-to-right sequence.

If this were so, we might store a sizable lexicon of matched Russian, say, and English words in a computer and execute translation by instructing the computer to type out the English counterpart of each Russian word typed in. Unfortunately, both semantic and syntactic stumbling blocks lie in the path. The range of meanings, literal and metaphorical, that one language assigns to a word (say, English high, as in "high mountain," "high pitch," "high hopes," "high horse," "high-stepping," and "high on drugs") may be quite different from the range assigned by another language; and the particular meaning to be assigned will be determined by context, that is, by meanings already assigned to some, in principle unspecifiable, sequence of preceding words. Moreover, the syntactic devices for grouping words into phrases, phrases into clauses, and clauses into sentences may be quite different in different languages. This is strikingly obvious when we compare a heavily inflected language, such as Russian, with a lightly inflected language with a more rigid word order, such as English. Oettinger (1972) amusingly illustrates the general difficulties with two simple sentences, immediately intelligible to an English speaker, but a source of knotty problems in both phrase structure and word meaning to a computer programmed for left-to-right lexical assignment: Time flies like an arrow, and Fruit flies like a banana. From such observations, it gradually became clear that we would make little progress in machine translation without a deeper understanding of syntax and of its relation to meaning.
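To see concretely why the scheme fails, here is a minimal sketch (ours, in Python) of strict left-to-right, word-for-word lookup; the toy glossary and its glosses are invented for illustration and are not drawn from any real machine translation system.

```python
# Toy word-for-word "translator": each source word is replaced by a single
# fixed gloss, left to right, with no use of context. The glossary entries
# are invented for illustration.

GLOSSARY = {
    "time":   "TIME(noun)",
    "fruit":  "FRUIT(noun)",
    "flies":  "MOVES-SWIFTLY(verb)",     # forced choice: verb reading only
    "like":   "IN-THE-MANNER-OF(prep)",  # forced choice: preposition only
    "an":     "A(article)",
    "a":      "A(article)",
    "arrow":  "ARROW(noun)",
    "banana": "BANANA(noun)",
}

def translate(sentence: str) -> str:
    # Strict left-to-right substitution, one gloss per word.
    return " ".join(GLOSSARY.get(w, w) for w in sentence.lower().split())

print(translate("Time flies like an arrow"))   # reads acceptably
print(translate("Fruit flies like a banana"))  # wrong: here "flies" is a noun
                                               # and "like" a verb, but the
                                               # lookup cannot know that
```

Because each word receives one fixed gloss regardless of context, at least one of Oettinger's two sentences must come out wrong, however the glossary is set up.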

The initial assumption underlying attempts at automatic speech recognition was similar to that for machine translation and equally in error (cf. Reddy, 1975). The assumption was that the task entailed little more than specifying the invariant acoustic properties associated with each consonant and vowel, in a simple left-to-right sequence. One would then construct an acoustic filter to pass those properties but no others, and control the appropriate key on a printer by means of the output from each filter. Unfortunately, stumbling blocks lie in this path also. A large body of research has demonstrated that speech is not a simple left-to-right sequence of discrete and invariant alphabetic segments, such as we see on a printed page (e.g., Fant, 1962; Joos, 1948; Liberman et al., 1967). The reason for this, as we shall see shortly, is that we do not speak phoneme by phoneme, or even syllable by syllable. At each instant our articulators are engaged in executing patterns of movement that correspond to several neighboring phonemes, including those in neighboring syllables. The result of this shingled pattern of movement is, of course, a shingled pattern of sound. Even more extreme variation may be found when we examine the acoustic structure of the same syllable spoken with different stress or at different rates or by different speakers. From such observations it gradually became clear that we would make little progress in automatic speech recognition without a deeper understanding of how the acoustic structure of the speech signal specifies the linguistic structure of the message.

Finally, the initial assumption underlying attempts to construct a reading machine for the blind was closely related to that for automatic speech recognition and again in error (Cooper et al., 1984). A reading machine is a device that scans print and uses its contours to control an acoustic signal. It was supposed that, given an adequate device for optical recognition of letters on a page, one need only assign a distinctive auditory pattern to each letter, to be keyed by the optical reader and recorded on tape or played in real time to a listener: a sort of auditory Braille. Once again there were stumbling blocks, but this time they were perceptual. We normally speak and listen to English at a rate of some 150 words per minute (wpm), that is, roughly 5 to 6 syllables or 10 to 15 phonemes per second. Ten to 15 discrete sounds per second is close to the resolving power of the ear (20 elements per second merge perceptually into a low-pitched buzz). Not surprisingly, despite valiant and ingenious attempts to improve the acoustic array, even the most practiced listeners were unable to follow a substitute code at rates much beyond that of skilled Morse code receivers, namely some 10 to 15 words per minute, a rate intolerably slow for any extended use.

From this work, it gradually became clear that the only acceptable output from a reading machine would be speech itself. This conclusion was one of many that spurred development of speech synthesis by artificial talking machines in following years (Cooper and Borst, 1952; Fant, 1973; Flanagan, 1983; Mattingly, 1968, 1974). The conclusion also raised theoretical questions. For example: Why can we successfully transpose speech into a visual alphabet, using another sensory modality, if we cannot successfully transpose it within its "natural" modality of sound? Why is speech so much more effective than other acoustic signals? Is there some peculiar, perhaps biologically ordained, relation between speech and the structure of language? We will return to these questions below.

I have not recounted these three failures of applied research missions to argue that money and effort spent on them were wasted. On the contrary, initial failure spurred researchers to revised efforts, and valuable progress has since been made. Reading machines for the blind, using an artificial speech output, have been developed and are already installed in large libraries (Cooper et al., 1984). There now exist automatic speech recognition devices that recognize vocabularies of roughly a thousand words, spoken in limited contexts by a few different speakers (Levinson and Liberman, 1981). Scientific texts with well-defined vocabularies can now be roughly translated by machine, then rendered into acceptable English by an informed human editor.

These advances have largely come about by virtue of brute computational force and technological ingenuity, rather than through real gains in our understanding of language. This is not because we have made no gains, for as we shall see shortly, we surely have. However, none of the devices that speak, listen, or understand actually speaks, listens, or understands according to known principles of human speech and language. For example, a speech synthesizer is the functional equivalent of a human speaker to the extent that it produces intelligible speech. But it obviously does so by quite different means than those that humans use: none of its inorganic components correspond to the biophysical structures of larynx, tongue, velum, lips, and jaw. Instead, a synthesizer simulates speech by means of a complex system of tuned electronic circuits, and resembles a speaker somewhat as, say, a crane resembles a human lifting a weight. We are still deeply ignorant of the physiological controls by which a speaker precisely coordinates the actions of larynx, tongue, and lips to produce even a single syllable.

In short, the main scientific value of the early work I have described was to reveal the astonishing complexity of speech and language, and the inadequacy of earlier theories to account for it. One important effect of the initial failures was therefore to prepare the ground for a theoretical revolution in linguistics (and psychology) that began to take hold in the late 1950s.

THE GENERATIVE REVOLUTION IN LINGUISTICS

The publication in 1957 of Noam Chomsky's Syntactic Structures began a revolution in linguistics that has been sustained and developed by many subsequent works (e.g., Chomsky, 1965, 1972, 1975, 1980; Chomsky and Halle, 1968). To describe the course of this revolution is well beyond the scope of this chapter. However, the impact of Chomsky's writings on fields outside linguistics (philosophy, psychology, and biology, for example) and their importance for the emerging science of language have been so great that some brief exposition of at least their nontechnical aspects is essential. I should emphasize that Chomsky's work has by no means gone unchallenged (e.g., Givon, 1979; Hockett, 1968; Katz, 1981). My intent in what follows is not to present a brief in its defense, but simply to sketch a bare outline of the most influential body of work in modern linguistics.

The central goal of Chomsky's work has been to formalize, with mathematical rigor and precision, the properties of a successful grammar. He defines a grammar as "a device of some sort for producing the sentences of the language under analysis" (Chomsky, 1957, p. 11). A grammar, in Chomsky's view, is not concerned either with the meaning of a sentence or with the physical structures (sounds, script, manual signs) that convey it.

The grammar, or syntax, of a language is a purely formal system for arranging the words (or morphemes) of a sentence into a pattern that a native speaker would judge to be grammatically correct or at least acceptable.

In Syntactic Structures, Chomsky compared three types of grammar: finite-state, phrase-structure, and transformational grammars. A finite-state grammar generates sentences in a left-to-right fashion: given the first word, each successive word is a function of the immediately preceding word. (Such a model is, of course, precisely that adopted by B.F. Skinner in his Verbal Behavior (1957), a dernier cri in behaviorism, published in the same year as the "premier cri" of the new linguistics.) Chomsky (1956) proved mathematically, as work on machine translation had suggested empirically, that a simple left-to-right grammar can never suffice as the grammar of a natural language. The reason, stated nontechnically, is that there may exist dependencies between words that are not adjacent, and an indefinite number of phrases containing other nonadjacent dependencies may bracket the original pair. Thus, in the sentence, Anyone who eats the fruit is damned, anyone and is damned are interdependent. We can, in principle, continue to add bracketing interdependencies indefinitely, as in Whoever believes that anyone who eats the fruit is damned is wrong, and Whoever denies that whoever believes that anyone who eats the fruit is damned is wrong is right. In practice, we seldom construct such sentences. However, the recursive principle that they illustrate is crucial to every language. The principle permits us to extend our communicative reach by embedding one sentence within another. For example, even a four-year-old child may combine We picked an apple and I want an apple for supper into the utterance I want the apple we picked for supper. Thus, the child embeds an adjectival phrase, we picked (= that we picked, with the relative pronoun deleted), to capture two related sentences in a single utterance (cf. Limber, 1973).

Chomsky goes on to consider how we might formulate an alternative and more powerful grammar, based on the traditional constituent analysis of sentences into "parts of speech." Constituent analysis takes advantage of the fact that the words of any language (or an equivalent set of words and affixes) can be grouped into categories (such as noun, pronoun, verb, adjective, adverb, preposition, conjunction, article) and that only certain sequences of these categories form acceptable phrases, clauses, and sentences. By grouping grammatical categories into permissible sequences, we can arrive at what Chomsky terms a phrase-structure grammar. Such a grammar is "a finite set . . . of initial strings and a finite set . . . of 'instruction formulas' of the form X → Y interpreted: 'rewrite X as Y'" (Chomsky, 1957, p. 29). Figure 1 illustrates a standard parsing diagram of the utterance, The woman ate the apple, in a form familiar to us from grammar school (above), and as a set of "rewrite rules" from which the parsing diagram can be generated (below).

Parsing Diagram

  Sentence
    Noun Phrase
      Article: the
      Noun: woman
    Verb Phrase
      Verb: ate
      Noun Phrase
        Article: the
        Noun: apple

Rewrite Rules

  (1) Sentence → Noun Phrase + Verb Phrase
  (2) Noun Phrase → Article + Noun
  (3) Verb Phrase → Verb + Noun Phrase
  (4) Article → { the, a }
  (5) Noun → { woman, apple, . . . }
  (6) Verb → { ate, seized }

FIGURE 1 Above, a parsing diagram dividing the sentence The woman ate the apple into its constituents. Below, a set of rewrite rules that will generate any sentence having the constituent structure shown above.
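The rewrite rules in Figure 1 are mechanical enough to run. The sketch below (ours, in Python) expands the symbol Sentence by repeatedly applying the six rules, choosing at random among the words listed in rules (4) through (6); nothing beyond the grammar given in the figure is assumed.

```python
import random

# The six rewrite rules of Figure 1. A right-hand side is a list of symbols;
# a symbol with several possible expansions gets several right-hand sides.
RULES = {
    "Sentence":    [["Noun Phrase", "Verb Phrase"]],
    "Noun Phrase": [["Article", "Noun"]],
    "Verb Phrase": [["Verb", "Noun Phrase"]],
    "Article":     [["the"], ["a"]],
    "Noun":        [["woman"], ["apple"]],
    "Verb":        [["ate"], ["seized"]],
}

def expand(symbol: str) -> list[str]:
    """Rewrite a symbol until only words remain."""
    if symbol not in RULES:          # a word: nothing left to rewrite
        return [symbol]
    expansion = random.choice(RULES[symbol])
    words = []
    for s in expansion:
        words.extend(expand(s))
    return words

for _ in range(3):
    print(" ".join(expand("Sentence")))
# e.g. "the woman ate the apple", but just as readily "the apple seized a woman"
```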

Notice, incidentally, that rewrite rules are indifferent to meaning. They will generate anomalous utterances such as The chocolate loved the clock, no less readily than The woman ate the apple. Moreover, many native speakers would be willing to accept such anomalous utterances as grammatically correct, even though they have no meaning. This hints at the possibility that syntactic capacity might be autonomous, a relatively independent component of the language faculty. This is a matter to which we will return below.

An important point about a set of rewrite rules is that it specifies the grouping of words necessary to correct understanding of a sentence. The sentence Let's have some good bread and wine is ambiguous until we know whether the adjective good modifies only bread or both bread and wine. The distinction may seem trivial. But, in fact, the example shows that we are sensitive (or can be made sensitive) to an ambiguity that could not have arisen from any difference in the words themselves or in their sequence. Rather, the origin of the ambiguity lies in our uncertainty as to how the words should be grouped, that is, as to their phrase structure. A correct (or incorrect) interpretation of their meaning therefore depends on the listener (and a fortiori the speaker) being able to assign an abstract phrase structure to the sequence of words.

Whether a complete grammar of English, or any other natural language, could be written as a set of phrase-structure rules is not clear. In any event, Chomsky argues in Syntactic Structures that such a grammar would be unnecessarily repetitive and complex, since it does not capture a native speaker's intuition that certain classes of sentence are structurally related. For example, the active sentence Eve ate the apple and the passive sentence The apple was eaten by Eve could both be generated by an appropriate set of phrase-structure rules, but the rules would be different for active sentences than for their passive counterparts. Surely, the argument runs, it would be "simpler" if the grammar somehow acknowledged their structural relation by deriving both sentences from a common underlying "deep structure." The derivation would be accomplished by a series of steps or "transformations" whose functions are to delete, modify, or change the order of the base constituents Eve, ate, apple.

An important aspect of transformations is that they are structure dependent, that is, they depend on the analysis of a sentence into its structural components, or constituents. For example, to transform such a declarative sentence as The man is in the garden into its associated interrogative Is the man in the garden?, a simple left-to-right rule would be: "Move the first occurrence of is to the front." However, the rule would not then serve for such a sentence as The man who is tall is in the garden, since it would yield Is the man who tall is in the garden? The rule must therefore be something like: "Find the first occurrence of is following the first noun phrase, and move it to the front" (Chomsky, 1975, pp. 30-31). Thus, a transformational grammar, no less than a phrase-structure grammar, presupposes analysis of an utterance into its grammatical (or phrasal) constituents.
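The difference between the two rules can be made concrete. In the sketch below (ours, in Python), the naive rule works on a bare string of words, while the structure-dependent rule is handed the subject noun phrase already bracketed off, since finding that phrase is precisely the grammatical analysis the rule presupposes.

```python
def naive_question(sentence: str) -> str:
    """'Move the first occurrence of is to the front' -- no structure used."""
    words = sentence.rstrip(".").split()
    i = words.index("is")
    moved = [words[i]] + words[:i] + words[i + 1:]
    return " ".join(moved).capitalize() + "?"

def structure_dependent_question(subject_np: str, rest: str) -> str:
    """'Find the first is following the first noun phrase and move it to the
    front.' The subject noun phrase is supplied ready-bracketed, because
    identifying it is itself a matter of grammatical analysis."""
    words = rest.split()
    i = words.index("is")
    moved = [words[i]] + subject_np.lower().split() + words[:i] + words[i + 1:]
    return " ".join(moved).capitalize() + "?"

print(naive_question("The man is in the garden"))
# Is the man in the garden?  -- the naive rule happens to work here

print(naive_question("The man who is tall is in the garden"))
# Is the man who tall is in the garden?  -- and fails here

print(structure_dependent_question("The man who is tall", "is in the garden"))
# Is the man who is tall in the garden?
```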

We may note, in passing, that children learning a language never produce sentences such as Is the man who tall is in the garden? Rather, their errors suggest that, even in their earliest attempts to frame a complex sentence, they draw on a capacity to recognize the structural components of an utterance.

However, here we should be cautious. Chomsky has repeatedly emphasized that ". . . a generative grammar is not a model for a speaker or hearer" (1965, p. 9), not a model of psychological processes presumed to be going on as we speak and listen. The word "generative" is perhaps misleading in this regard. Certainly, experimental psychologists during the 1960s devoted much ingenuity and effort to testing the psychological reality of transformations (for reviews, see Cairns and Cairns, 1976; Fodor et al., 1974; Foss and Hakes, 1978). But the net outcome of this work was to demonstrate the force of Chomsky's distinction between formal descriptions of a language and the strategies that speakers and listeners deploy in communicating with each other (cf. Bever, 1970).

At first glance, the distinction might seem to be precisely that between langue and parole, drawn by de Saussure. However, for de Saussure, langue, the system of language, "exists only by virtue of a sort of contract signed by the members of a community" (de Saussure, 1966, p. 14): it is a kind of formal artifice or convention, maintained by social processes of which individuals may be quite unaware. By contrast, for Chomsky the "generative grammar [of a language] attempts to specify what the speaker actually knows" (1965, p. 8). What a speaker knows, his competence in Chomsky's terminology, is attested to by "intuitive" judgments of grammaticality. What a speaker does, performance (parole), is linguistic competence filtered through the indecisions, memory lapses, false starts, stammerings, and the "thousand natural [nonlinguistic] shocks that flesh is heir to."

Thus, even though a theory of grammar is not a theory of psychological process, it is a theory of individual linguistic capacity. In Chomsky's view, the task of linguistics is to describe the structure of language much as an anatomist might describe the structure of the human hand. The complementary role of psychology in language research is to describe language function and its course of behavioral development in the individual, while physiology, neurology, and psychoneurology chart its underlying structures and mechanisms. Whether this sharp distinction between language as a formal object and language as a mode of biological function can, or should, be maintained is an open question.

What is clear, however, is that it was from a rigorous analysis of the formal properties of syntax (and later of phonology: see Chomsky and Halle, 1968) that Chomsky was led to view language as an autonomous system, distinct from other cognitive systems of the human mind (cf. Fodor, 1982; Pylyshyn, 1980). His writings during the late 1950s and 1960s brought an exhilarating breath of fresh air to psychologists interested in language, because they offered an escape from the stifling behavioristic impasse, already noted by Lashley (1951) and others (e.g., Miller et al., 1960). The result was an explosion of research in the psychology of language, with a strong emphasis on its biological underpinnings. Whatever one's view of generative grammar, it is fair to say that almost every area of language study over the past 25 years has been touched, directly or indirectly . . .

[Pages 219 through 237 of the chapter are not included in this excerpt.]

. . . tested (e.g., [b] versus [p], [d] versus [g], [m] versus [n]) (Aslin et al., 1983; Eimas, 1982). There is also evidence that infants begin to recognize the function of such contrasts, to distinguish words in the surrounding language, during the second half of their first year (Werker, 1982). (For fuller review, see Studdert-Kennedy, 1985.)

In terms of sound production, Oller (1980) has described a regular progression from simple phonation (0-1 months) through canonical babbling (7-10 months) to so-called variegated babbling (11-12 months). The phonetic inventory of babbled sounds is strikingly similar across many languages and even across hearing and deaf infants up to the end of the first year (Locke, 1983). These similarities argue for a universal, rather than language-specific, course of articulatory development. However, around the end of the twelfth month, when the child produces its first words, the influence of the surrounding language becomes evident. From this point on, universals become increasingly difficult to discern, because whatever universals there may be are masked by surface diversity among languages. In this respect, the development of language differs from the development of, say, sensorimotor intelligence or mathematical ability (cf. Gelman and Brown, this volume). Nonetheless, we can already trace some regularities across children within a language and, to some lesser extent, across languages.

The most heavily studied stage of early syntactic development, in both English and some half-dozen other languages, is the so-called two-morpheme stage. Brown (1973) divides early development into five stages on the basis of mean length of utterance (MLU), measured in terms of the number of morphemes in an utterance. The stages are "not . . . true stages in Piaget's sense" (Brown, 1973, p. 58), but convenient, roughly equidistant points from MLU = 2.00 through MLU = 4.00. The measure provides an index of language development independent of a child's chronological age.

Of interest in the present context is that no purely grammatical description of Stage I (MLU = 2.00, with an upper bound of 5.00) has been found satisfactory. Instead, the data are best described by a "rich interpretation," assigning a meaning or function to an utterance on the basis of the context in which it occurs. Brown lists eleven meanings for Stage I constructions, including: naming, recurrence (more cup), nonexistence (all gone egg), agent and action (Mommy go), agent and object (Daddy key), action and location (sit chair), entity and location (Baby table), possessor and possession (Daddy chair), entity and attribute (yellow block). Brown (1973) proposes that these meanings "derive from sensorimotor intelligence, in Piaget's sense . . . [and] probably are universal in humankind but not . . . innate" (p. 201).
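As a gloss on the measure itself, the sketch below (ours, in Python) computes MLU in the simplest possible way from utterances whose morphemes have been segmented by hand; the sample utterances are drawn from Brown's Stage I examples above, and the many refinements of Brown's actual counting rules are omitted.

```python
# Mean length of utterance (MLU) = total morphemes / number of utterances.
# Each utterance is given as a list of morphemes, segmented by hand.
# The sample is small and the counting rules are deliberately simplified.

utterances = [
    ["more", "cup"],
    ["all", "gone", "egg"],
    ["Mommy", "go"],
    ["Daddy", "key"],
    ["sit", "chair"],
]

def mean_length_of_utterance(utts: list[list[str]]) -> float:
    total_morphemes = sum(len(u) for u in utts)
    return total_morphemes / len(utts)

print(round(mean_length_of_utterance(utterances), 2))
# 2.2, close to the Stage I value of 2.00 mentioned above
```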

We should emphasize that these Stage I patterns reflect semantic, not grammatical, relations, even though they may be necessary precursors to the grammatical relations that develop during Stage II (MLU = 2.50, with an upper bound of 7.00). Brown (1973) traced the emergence of 14 grammatical morphemes in three Stage II English-speaking children. The morphemes included: prepositions (in, on), present progressive (I am playing), past regular (jumped), past irregular (broke), plural -s, possessive -s, third person -s (he jumps), and others. The remarkable finding was that all three children acquired the morphemes in roughly the same order (with rank order correlations between pairs of children of 0.86 or more). This result was confirmed in a study of 21 English-speaking children by de Villiers and de Villiers (1973). However, unlike the meanings and functions of Stage I, the more or less invariant order of morpheme acquisition of Stage II has not been confirmed for languages other than English.
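The rank order correlations just cited can be unpacked with a short computation. The sketch below (ours, in Python) applies the standard Spearman formula to two invented acquisition orders over five of Brown's morphemes; the ranks are illustrative only, not data from the studies discussed.

```python
# Spearman rank correlation: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
# where d is the difference between the two rank orders for each item
# (no tied ranks assumed). The acquisition ranks below are invented.

morphemes = ["present progressive", "in", "on", "plural -s", "past irregular"]
child_a_rank = [1, 2, 3, 4, 5]   # order of acquisition for one child
child_b_rank = [1, 3, 2, 4, 5]   # a slightly different order for another

def spearman_rho(x: list[int], y: list[int]) -> float:
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(round(spearman_rho(child_a_rank, child_b_rank), 2))  # 0.9
```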

Perhaps we should not expect that it will be. Languages differ, as we have seen, in the grammatical devices that they use to mark relations within a sentence. The devices used by one language to express a particular grammatical relation may be, in some uncertain sense, "easier" to learn than the devices used by another language for the same grammatical relation. Slobin (1982) has compared the ages at which four equivalent grammatical constructions are learned in Turkish, Italian, Serbo-Croatian, and English. In each case, the Turkish children developed more rapidly than the other children. If these results are valid and not mere sampling error, the "studies suggest that Turkish is close to an ideal language for early acquisition" (Slobin, 1982, p. 145).

Unless we suppose that Turkish parents are more attentive to their children's language than Italian, Serbo-Croatian, and English parents, we may take this result as further evidence that "selection pressures" (reinforcement) have little role to play in language learning. Brown and Hanlon (1970) showed some years ago that parents tend to correct the pronunciation and truth value, rather than the syntax, of their children's speech. Indeed, one of the puzzles of language development is why children improve at all. At each stage, the child's speech seems sufficient to satisfy its needs. Neither reinforcement nor imitation of adult speech suffices to explain the improvement. Early speech is replete with forms that the child has presumably never heard: two sheeps, we goed, mine boot. These errors reflect not imitation, but over-generalization of rules for forming plurals, past tenses, and possessive adjectives.

We come then to a guiding assumption of much current research: Learning a first language entails active search for language-specific grammatical patterns (or rules) to express universal cognitive functions. The child may be helped in this by the relative "transparency" (Slobin, 1980) of the speech addressed to it, either because the language itself, like Turkish, is transparent and/or because adult speech to the child is conspicuously well formed. Several studies (e.g., Newport et al., 1977) have shown that the speech addressed to children tends not to be "degenerate." Yet the speech may be "meager" in the sense that relatively few instances suffice to trigger recognition of a pattern (Roeper, 1982). Such rapid learning would seem to require a system specialized for discovering distinctive patterns of sound and syntax in any language to which a child is exposed.

Finally, it is worth remarking that all normal children do learn a language, just as they learn to walk. Western societies acknowledge this in their attitude to children who fail: we regard them as handicapped or defective, and we arrange clinics and therapeutic settings to help them. As Dale (1976) has remarked, we do not do the same for children who cannot learn to play the piano, do long division, or ride a bicycle. Of course, children vary in intelligence, but not until I.Q. drops below about 50 do language difficulties begin to appear (Lenneberg, 1967). Children at a given level of maturation also vary in how much they talk, what they talk about, and how many words they know. Where they vary little, it seems, is in their grasp of the basic principles of the language system: its sound structure and syntax.

CONCLUSION

The past 50 years have seen a vast increase in our knowledge of the biological foundations of language. Rather than attempt even a sampling of the issues raised by the research we have reviewed, let me end by emphasizing a point with which I began: the interplay between basic and applied research, and between research and theory.

The advances have come about partly through technological innovations, permitting, for example, physical analysis of the acoustic structure of speech and precise localization of brain abnormalities; partly through methodological gains in the experimental analysis of behavior; partly through growing social concern with the blind, the deaf, and otherwise language-handicapped. Yet these scattered elements would still be scattered had they not been brought together by a theoretical shift from description to explanation.

Perhaps the most striking aspect of the development is its unpredictability. Fifty years ago no one would have predicted that formal study of syntax would offer a theoretical framework for basic research in language acquisition, now a thriving area of modern experimental psychology, with important implications for treatment of the language-handicapped.
No one would have predicted that applied research on reading machines for the blind would contribute to basic research in human phonetic capacity, lending experimental support to the formal linguistic claim of the independence of phonology and syntax. Nor, finally, would anyone have predicted that basic psycholinguistic research in American Sign Language would provide a unique approach to the understanding of brain organization for language and to testing the hypothesis, derived from linguistic theory, that language is a distinct faculty of the human mind.

Presumably, continued research in the areas we have reviewed and in related areas that we have not (such as the acquisition of reading, the motor control and coordination of articulatory action, second language learning) will consolidate our view of language as an autonomous system of nested subsystems (phonology, syntax). Beyond this lies the further task of unfolding the language system, tracing its evolutionary and ontogenetic origins in the nonlinguistic systems that surround it and from which, in the last analysis, it must derive. We would be rash to speculate on the diverse areas of research and theory that will contribute to this development.

* * *

I thank Ignatius Mattingly for comments and advice.

REFERENCES

Aslin, R.N., Pisoni, D.B., and Jusczyk, P.W. 1983 Auditory development and speech perception in infancy. In M.M. Haith and J.J. Campos, eds., Infancy and the Biology of Development. Vol. II: Carmichael's Manual of Child Psychology. 4th ed. New York: John Wiley and Sons.
Baker, C., and Padden, C.A. 1978 Focusing on the nonmanual components of American Sign Language. Pp. 27-58 in P. Siple, ed., Understanding Language Through Sign Language Research. New York: Academic Press.
Bates, E., and MacWhinney, B. 1982 Functionalist approaches to grammar. Pp. 173-218 in E. Wanner and L.R. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.
Bellugi, U., Poizner, H., and Klima, E.S. 1983 Brain organization for language: clues from sign aphasia. Human Neurobiology 2:155-170.
Benson, D.F. 1983 Cerebral metabolism. Pp. 205-211 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.
Bever, T.G. 1970 The cognitive basis for linguistic structures. In J.R. Hayes, ed., Cognition and the Development of Language. New York: John Wiley and Sons.
Bloomfield, L. 1933 Language. New York: Holt.
Brown, R. 1973 A First Language: The Early Stages. Cambridge: Harvard University Press.
Brown, R., and Hanlon, C. 1970 Derivational complexity and order of acquisition in child speech. In J.R. Hayes, ed., Cognition and the Development of Language. New York: John Wiley and Sons.
Cairns, H.S., and Cairns, C.E. 1976 Psycholinguistics. New York: Holt, Rinehart and Winston.
Caramazza, A., and Zurif, E.B. 1976 Comprehension of complex sentences in children and aphasics: a test of the regression hypothesis. Pp. 145-161 in A. Caramazza and E.B. Zurif, eds., Language Acquisition and Language Breakdown. Baltimore: Johns Hopkins University Press.
Carey, S. 1982 Semantic development: the state of the art. In E. Wanner and L. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.
Chiba, T., and Kajiyama, M. 1941 The Vowel: Its Nature and Structure. Tokyo: Tokyo-Kaiseikan.
Chomsky, N. 1956 Three models for the description of language. IRE Transactions on Information Theory IT-2:113-124.
Chomsky, N. 1957 Syntactic Structures. The Hague: Mouton.
Chomsky, N. 1959 Review of Verbal Behavior by B.F. Skinner. Language 35:26-58.
Chomsky, N. 1965 Aspects of the Theory of Syntax. Cambridge: MIT Press.
Chomsky, N. 1972 Language and Mind. New York: Harcourt Brace Jovanovich (revised edition).
Chomsky, N. 1975 Reflections on Language. New York: Random House.
Chomsky, N. 1980 Rules and representations. The Behavioral and Brain Sciences 3:1-62.
Chomsky, N., and Halle, M. 1968 The Sound Pattern of English. New York: Harper and Row.
Cole, R.A., and Scott, B. 1974 Toward a theory of speech perception. Psychological Review 81:348-374.
Cole, R.A., Rudnicky, A., Reddy, R., and Zue, V.W. 1980 Speech as patterns on paper. In R.A. Cole, ed., Perception and Production of Fluent Speech. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Cooper, F.S. 1950 Spectrum analysis. Journal of the Acoustical Society of America 22:761-762.
Cooper, F.S. 1972 How is language conveyed by speech? Pp. 25-45 in J.F. Kavanagh and I.G. Mattingly, eds., Language by Ear and by Eye: The Relationships Between Speech and Reading. Cambridge: MIT Press.
Cooper, F.S., and Borst, J.M. 1952 Some experiments on the perception of synthetic speech sounds. Journal of the Acoustical Society of America 24:597-606.
Cooper, F.S., Gaitenby, J., and Nye, P.W. 1984 Evolution of reading machines for the blind: Haskins Laboratories' research as a case history. Journal of Rehabilitation Research and Development 21:51-87.
Crain, S., and Shankweiler, D. In press. Reading acquisition and language acquisition. In A. Davison, G. Green, and G. Herman, eds., Critical Approaches to Readability: Theoretical Bases of Linguistic Complexity. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Dale, P.S. 1976 Language Development. 2nd ed. New York: Holt, Rinehart and Winston.
Darwin, C.J. 1976 The perception of speech. In E.C. Carterette and M.P. Friedman, eds., Handbook of Perception. Vol. 7, Language and Speech. New York: Academic Press.
Dennis, M. 1983 Syntax in brain-injured children. Pp. 195-202 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.
de Saussure, F. 1966 Course in General Linguistics (translated by Wade Baskin). New York: McGraw-Hill.
de Villiers, J.G., and de Villiers, P.A. 1973 A cross-sectional study of the acquisition of grammatical morphemes. Journal of Psycholinguistic Research 2:267-278.
Eimas, P.D. 1982 Speech perception: a view of the initial state and perceptual mechanisms. Pp. 339-360 in J. Mehler, E.C.T. Walker, and M. Garrett, eds., Perspectives on Mental Representation. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Fant, G. 1960 Acoustic Theory of Speech Production. The Hague: Mouton.
Fant, G. 1962 Descriptive analysis of the acoustic aspects of speech. Logos 5:3-17.
Fant, G. 1968 Analysis and synthesis of speech processes. Pp. 173-277 in B. Malmberg, ed., Manual of Phonetics. Amsterdam: North-Holland.
Fant, G. 1973 Descriptive analysis of the acoustic aspects of speech. Speech Sounds and Features (Chapter 2). Cambridge: MIT Press.
Flanagan, J.L. 1983 Speech Analysis, Synthesis and Perception. Heidelberg: Springer-Verlag.
Fodor, J. 1982 Modularity of Mind. Cambridge: MIT Press.
Fodor, J.A., Bever, T.G., and Garrett, M.F. 1974 The Psychology of Language. New York: McGraw-Hill.
Foss, D.J., and Hakes, D.T. 1978 Psycholinguistics: An Introduction to the Psychology of Language. Englewood Cliffs, N.J.: Prentice-Hall.
Givon, T. 1979 On Understanding Grammar. New York: Academic Press.
Goodenough, C., Zurif, E.B., and Weintraub, S. 1977 Aphasics' attention to grammatical morphemes. Language and Speech 20:11-20.
Goodglass, H., and Geschwind, N. 1976 Language disturbance (aphasia). In E.C. Carterette and M.P. Friedman, eds., Handbook of Perception. Vol. 7. New York: Academic Press.
Goodglass, H., and Kaplan, E. 1972 The Assessment of Aphasia and Related Disorders. Philadelphia: Lea and Febiger.
Hecaen, H., and Albert, M.L. 1978 Human Neuropsychology. New York: John Wiley and Sons.
Hockett, C.F. 1960 The origin of speech. Scientific American 203:89-96.
Hockett, C.F. 1968 The State of the Art. The Hague: Mouton.
Jakobson, R. 1941 Kindersprache, Aphasie, und Allgemeine Lautgesetze. Stockholm: Almqvist and Wiksell.
Joos, M. 1948 Acoustic phonetics. Language Monograph 23(24): Supplement.
Katz, J.J. 1981 Language and Other Abstract Objects. Totowa, N.J.: Rowman and Littlefield.
Kimura, D. 1961 Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology 15:166-171.
Kimura, D. 1967 Functional asymmetry of the brain in dichotic listening. Cortex 8:163-178.
Kiparsky, P. 1968 Linguistic universals and linguistic change. Pp. 171-202 in E. Bach and R. Harms, eds., Universals in Linguistic Theory. New York: Holt, Rinehart and Winston.
Klima, E.S., and Bellugi, U. 1979 The Signs of Language. Cambridge: Harvard University Press.
Kurath, H. 1939 Handbook of the Linguistic Geography of New England (with the collaboration of Marcus L. Hansen, Julia Bloch, and Bernard Bloch). Providence, R.I.: Brown University Press.
Labov, W. 1972 Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press.
Ladefoged, P. 1980 What are linguistic sounds made of? Language 56:485-502.
Lashley, K.S. 1951 The problem of serial order in behavior. Pp. 112-136 in L.A. Jeffress, ed., Cerebral Mechanisms in Behavior. New York: John Wiley and Sons.
Lehmann, W.P. 1973 Historical Linguistics. New York: Holt, Rinehart and Winston.
Lenneberg, E.H. 1967 Biological Foundations of Language. New York: John Wiley and Sons.
Lesser, R. 1978 Linguistic Investigations of Aphasia. New York: Elsevier.
Levinson, S.E., and Liberman, M.Y. 1981 Speech recognition by computer. Scientific American, April.
Levi-Strauss, C. 1966 The Savage Mind. Chicago: University of Chicago Press.
Liberman, A.M. 1957 Some results of research on speech perception. Journal of the Acoustical Society of America 29:117-123.
Liberman, A.M. 1970 The grammars of speech and language. Cognitive Psychology 1:301-323.
Liberman, A.M. 1982 On finding that speech is special. American Psychologist 37:148-167.
Liberman, A.M., and Studdert-Kennedy, M. 1978 Phonetic perception. Pp. 143-178 in R. Held, H.W. Leibowitz, and H.-L. Teuber, eds., Handbook of Sensory Physiology. Vol. VIII: Perception. New York: Springer-Verlag.
Liberman, A.M., Cooper, F.S., Shankweiler, D., and Studdert-Kennedy, M. 1967 Perception of the speech code. Psychological Review 74:431-461.
Liberman, A.M., Ingemann, F., Lisker, L., Delattre, P.C., and Cooper, F.S. 1959 Minimal rules for synthesizing speech. Journal of the Acoustical Society of America 31:1490-1499.
Licklider, J.C.R., and Miller, G. 1951 The perception of speech. In S.S. Stevens, ed., Handbook of Experimental Psychology. New York: John Wiley and Sons.
Liddell, S.K. 1978 Nonmanual signals and relative clauses in American Sign Language. Pp. 59-90 in P. Siple, ed., Understanding Language Through Sign Language Research. New York: Academic Press.
Lieberman, P., and Crelin, E.S. 1971 On the speech of Neanderthal man. Linguistic Inquiry 2:203-222.
Lieberman, P., Crelin, E.S., and Klatt, D.H. 1972 Phonetic ability and related anatomy of the newborn, adult human, Neanderthal man, and the chimpanzee. American Anthropologist 74:287-307.
Limber, J. 1973 The genesis of complex sentences. In T.E. Moore, ed., Cognitive Development and the Acquisition of Language. New York: Academic Press.
Linebarger, M.C., Schwartz, M.F., and Saffran, E.M. 1983 Sensitivity to grammatical structure in so-called agrammatic aphasics. Cognition 13:361-392.
Locke, J. 1983 Phonological Acquisition and Change. New York: Academic Press.
Luria, A.R. 1966 Higher Cortical Functions in Man. New York: Basic Books.
Luria, A.R. 1970 Traumatic Aphasia. The Hague: Mouton.
MacNamara, J. 1982 Names for Things. Cambridge: MIT Press.
Mattingly, I.G. 1968 Experimental methods for speech synthesis by rule. IEEE Transactions on Audio and Electroacoustics AU-16:198-202.
Mattingly, I.G. 1974 Speech synthesis for phonetic and phonological models. Pp. 2451-2487 in T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.
Mayberry, R.I. 1978 Manual communication. In H. Davis and S.R. Silverman, eds., Hearing and Deafness. 4th ed. New York: Holt, Rinehart and Winston.
Mayr, E. 1974 Behavior programs and evolutionary strategies. American Scientist 62:650-659.
Miller, G.A. 1951 Language and Communication. New York: McGraw-Hill.
Miller, G.A., Galanter, E., and Pribram, K.H. 1960 Plans and the Structure of Behavior. New York: Henry Holt and Company, Inc.
Molfese, D.L. 1977 Infant cerebral asymmetry. In S.J. Segalowitz and F.A. Gruber, eds., Language Development and Neurological Theory. New York: Academic Press.
Moscovitch, M. 1983 Stages of processing and hemispheric differences in language in the normal subject. Pp. 88-104 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.
Mowrer, O.H. 1960 Learning Theory and the Symbolic Processes. New York: John Wiley and Sons.
Muller, J. 1848 The Physiology of the Senses, Voice and Muscular Motion with the Mental Faculties (translated by W. Baly). New York: Walton and Maberly.
Neville, H.J. 1980 Event-related potentials in neuropsychological studies of language. Brain and Language 11:300-318.
Neville, H.J., Kutas, M., and Schmidt, A. 1982 Event-related potential studies of cerebral specialization during reading. II. Studies of congenitally deaf adults. Brain and Language 16:316-337.
Newport, E.L., Gleitman, H., and Gleitman, L.R. 1977 Mother, I'd rather do it myself: some effects and non-effects of maternal speech style. In C. Snow and C. Ferguson, eds., Talking to Children: Language Input and Acquisition. Cambridge, England: Cambridge University Press.
Oettinger, A. 1972 The semantic wall. In E.E. David and P.B. Denes, eds., Human Communication: A Unified View. New York: McGraw-Hill.
Ojemann, G.A. 1983 Brain organization for language from the perspective of electrical stimulation mapping. The Behavioral and Brain Sciences 6:218-219.
Oller, D.K. 1980 The emergence of the sounds of speech in infancy. Pp. 93-112 in G.H. Yeni-Komshian, J.F. Kavanagh, and C.A. Ferguson, eds., Child Phonology. Vol. 1: Production. New York: Academic Press.
Pick, A. 1913 Die agrammatischen Sprachstörungen. Berlin: Springer.
Porter, R.J., Jr., and Hughes, L.F. 1983 Dichotic listening to CV's: method, interpretation and application. In J. Hellige, ed., Cerebral Hemispheric Asymmetry: Method, Theory, and Application. Praeger Science Publishers: University of Southern California Press.
Potter, R.K., Kopp, G.A., and Green, H.C. 1947 Visible Speech. New York: D. Van Nostrand Co., Inc.
Pylyshyn, Z.W. 1980 Computation and cognition: issues in the foundations of cognitive science. The Behavioral and Brain Sciences 3:111-169.
Reddy, D.R. 1975 Speech Recognition: Invited Papers Presented at the 1974 IEEE Symposium. New York: Academic Press.
Roeper, T. 1982 The role of universals in the acquisition of gerunds. In E. Wanner and L.R. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.
Seashore, R.H., and Erickson, L.D. 1940 The measurement of individual differences in general English vocabularies. Journal of Educational Psychology 31:14-38.
Segalowitz, S.J., and Chapman, J.S. 1980 Cerebral asymmetry for speech in neonates: a behavioral measure. Brain and Language 9:281-288.
Shankweiler, D., and Studdert-Kennedy, M. 1967 Identification of consonants and vowels presented to the left and right ears. Quarterly Journal of Experimental Psychology 19:59-63.
Skinner, B.F. 1957 Verbal Behavior. New York: Appleton-Century-Crofts.
Slobin, D.I. 1980 The repeated path between transparency and opacity in language. In U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie.
Slobin, D.I. 1982 Universal and particular in the acquisition of language. In L. Gleitman and E. Wanner, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.
Stevens, K.N. 1975 The potential role of property detectors in the perception of consonants. Pp. 303-330 in G. Fant and M.A.A. Tatham, eds., Auditory Analysis and Perception of Speech. New York: Academic Press.
Stevens, K.N., and House, A.S. 1955 Development of a quantitative description of vowel articulation. Journal of the Acoustical Society of America 27:484-493.
Stevens, K.N., and House, A.S. 1961 An acoustical theory of vowel production and some of its implications. Journal of Speech and Hearing Research 4:303-320.
Stokoe, W.C., Jr. 1960 Sign language structure. Studies in Linguistics: Occasional Papers 8. Buffalo: Buffalo University Press.
Stokoe, W.C., Jr. 1974 Classification and description of sign languages. Pp. 345-371 in T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.
Stokoe, W.C., Jr., Casterline, D.C., and Croneberg, C.G. 1965 A Dictionary of American Sign Language. Washington, D.C.: Gallaudet College Press.
Studdert-Kennedy, M. 1974 The perception of speech. In T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.
Studdert-Kennedy, M. 1976 Speech perception. Pp. 243-293 in N.J. Lass, ed., Contemporary Issues in Experimental Phonetics. New York: Academic Press.
Studdert-Kennedy, M., ed. 1983 Psychobiology of Language. Cambridge: MIT Press.
Studdert-Kennedy, M. 1985 Sources of variability in early speech development. In J.S. Perkell and D.H. Klatt, eds., Invariance and Variability of Speech Processes. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Studdert-Kennedy, M., and Lane, H. 1980 Clues from the differences between signed and spoken language. Pp. 29-39 in U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Deerfield Park, Fla.: Verlag Chemie.
Studdert-Kennedy, M., and Shankweiler, D.P. 1970 Hemispheric specialization for speech perception. Journal of the Acoustical Society of America 48:579-594.
Templin, M. 1957 Certain Language Skills of Children. Minneapolis: University of Minnesota Press.
Von Stockert, T. 1972 Recognition of syntactic structure in aphasic patients. Cortex 8:323-335.
Wanner, E., and Gleitman, L.R., eds. 1982 Language Acquisition: The State of the Art. New York: Cambridge University Press.
Werker, J.F. 1982 The Development of Cross-Language Speech Perception: The Effect of Age, Experience and Context on Perceptual Organization. Unpublished Ph.D. dissertation. University of British Columbia.
Wilson, E.O. 1975 Sociobiology. Cambridge: The Belknap Press.
Yeni-Komshian, G.H., Kavanagh, J.F., and Ferguson, C.A., eds. 1980 Child Phonology. Vols. 1 and 2. New York: Academic Press.
Zaidel, E. 1978 Lexical organization in the right hemisphere. Pp. 177-197 in P.A. Buser and A. Rougeul-Buser, eds., Cerebral Correlates of Conscious Experience. Amsterdam: Elsevier/North-Holland Biomedical Press.
Zaidel, E. 1980 Clues from hemispheric specialization. In U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie.
Zaidel, E. 1983 On multiple representations of the lexicon in the brain: the case of two hemispheres. Pp. 105-125 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.
Zurif, E.B., and Blumstein, S.E. 1978 Language and the brain. In M. Halle, J. Bresnan, and G.A. Miller, eds., Linguistic Theory and Psychological Reality. Cambridge: MIT Press.