"Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience." (NAS Colloquium) Neuroimaging of Human Brain Function. Washington, DC: The National Academies Press, 1998.
parents and who acquired both ASL and English as native languages (hearing native signers).
Subjects. All subjects were right-handed, healthy adults (see Table 1).
Experimental Design/Stimulus Material. Each population was scanned by using functional magnetic resonance imaging while processing sentences in English and in ASL. The English runs consisted of alternating blocks of simple declarative sentences (read silently) and consonant strings, all presented one word/string at a time (600 msec/item) in the center of a screen at the foot of the magnet. The ASL runs consisted of a film of a native deaf signer producing sentences in ASL or nonsign gestures that were physically similar to ASL signs. The material was presented in four runs (two of English and two of ASL; presentation order counterbalanced across subjects). Each run consisted of four cycles of alternating 32-sec blocks of sentences (English or ASL) and baseline (consonant strings or nonsigns). None of the stimuli were repeated. Subjects completed a practice run of ASL and of English to become familiar with the task and the nature of the stimuli.
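The block structure of a single run can be sketched as follows. All names are illustrative, not from the study; the derived quantities (256 sec per run, up to 53 items per 32-sec block at 600 msec/item) follow from the timing parameters stated above.

```python
# Sketch of one run's block design as described in the Methods
# (illustrative names; not the study's actual presentation software).
BLOCK_SEC = 32     # seconds per block
CYCLES = 4         # sentence + baseline cycles per run
ITEM_MSEC = 600    # presentation time per word/string

def run_schedule():
    """Return (condition, onset_sec) pairs for one run's eight blocks."""
    schedule, t = [], 0
    for _ in range(CYCLES):
        for cond in ("sentences", "baseline"):
            schedule.append((cond, t))
            t += BLOCK_SEC
    return schedule

run_length_sec = CYCLES * 2 * BLOCK_SEC           # 256 sec per run
items_per_block = BLOCK_SEC * 1000 // ITEM_MSEC   # up to 53 items per block
```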
Behavioral Tests. At the end of each run, subjects answered yes/no recognition questions indicating whether or not specific sentences and nonword/nonsign strings had been presented, to ensure attention to the experimental stimuli (see Table 1). ANOVAs were performed on the percent-correct recognition scores. Deaf subjects also took 10 subtests of the Grammaticality Judgment Test (34) to assess knowledge of English grammar (see Table 1).
MR Scans. Gradient-echo echo-planar images were obtained by using a 4-T whole-body MR system fitted with a removable z-axis head gradient coil (35). Eight parasagittal slices, positioned from the lateral surface of the brain to a depth of 40 mm, were collected (TR=4 sec, TE=28 msec, resolution 2.5 mm×2.5 mm×5 mm, 64 time points per image). Because a 20-cm-diameter transmit/receive radio-frequency (rf) surface coil was used to minimize rf interaction with the head gradient coil, its region of high sensitivity was limited to a single hemisphere, and only one hemisphere was imaged per session for each subject.
MR Analysis. Subjects were asked to participate in two separate sessions (one for each hemisphere). However, this was not always possible, leading to the following numbers of subjects: (i) hearing, eight subjects on both left and right hemispheres; (ii) deaf, seven subjects on both left and right hemispheres, plus four subjects left hemisphere only and five subjects right hemisphere only; (iii) hearing native signers, six subjects on both left and right hemispheres, plus three subjects left hemisphere only and four subjects right hemisphere only. Between-subject analyses were performed by treating left and right hemisphere data from all three groups as a between-subject variable. Individual data sets were first checked for artifacts; runs with visible motion and/or signal loss were discarded from the analysis, resulting in the loss of data from four hearing native signers (two left hemisphere on English, one left hemisphere on ASL, and one right hemisphere on English). A cross-correlation thresholding method was used to determine active voxels (36) (r≥0.5, effective df=35, alpha=.001). MR structural images were divided into 31 anatomical regions according to the Rademacher et al. (37) division of the lateral surface of the brain (see Fig. 1); between-subject analyses were performed on these predetermined anatomical regions. Two activation measures were computed for each region and run: (i) the mean percent signal change for active voxels in the region and (ii) the mean spatial extent of the activation in the region (corrected for the size of the region). A region was not considered further unless at least 30% of runs displayed activation there. Multivariate analysis was used to take both aspects of the activation into account. The analyses relied on Hotelling’s T2 statistic (38), a natural generalization of Student’s t-statistic, and were performed by using BMDP statistical software.
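The cross-correlation thresholding step can be illustrated with a minimal sketch under assumed details not spelled out in the text: a boxcar reference waveform matching the 32-sec task/baseline alternation (at TR=4 sec, eight volumes per block and 64 volumes per run), correlated against each voxel's time series, with voxels at r≥0.5 marked active. Function names are illustrative.

```python
import numpy as np

TR_SEC = 4.0                       # seconds per volume
BLOCK_VOLS = int(32 / TR_SEC)      # 8 volumes per 32-sec block
CYCLES = 4                         # task/baseline cycles per run

def boxcar_reference():
    """Reference waveform: 1 during task blocks, 0 during baseline blocks."""
    cycle = np.concatenate([np.ones(BLOCK_VOLS), np.zeros(BLOCK_VOLS)])
    return np.tile(cycle, CYCLES)  # 64 volumes, matching the Methods

def active_voxels(data, r_thresh=0.5):
    """data: (n_voxels, n_timepoints). Boolean mask of voxels whose
    Pearson correlation with the boxcar reference meets r_thresh."""
    ref = boxcar_reference()
    ref_c = ref - ref.mean()
    d_c = data - data.mean(axis=1, keepdims=True)
    r = (d_c @ ref_c) / (np.linalg.norm(d_c, axis=1)
                         * np.linalg.norm(ref_c) + 1e-12)
    return r >= r_thresh
```

In practice the reference would be shifted or convolved to allow for hemodynamic delay; this sketch omits that refinement.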
In all analyses, the log transforms of the percent change and spatial extent were used as dependent variables, and data sets were used as independent variables. Activation within a region was assessed by testing the null hypothesis that the level of activation in the region was zero. Comparisons across hemispheres and/or languages were performed by entering these factors as treatments.
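The within-region test described above can be sketched as a one-sample Hotelling's T2 test on the two dependent variables (log percent change and log spatial extent) against a zero mean vector. The paper's analyses were run in BMDP; the function below is a generic reimplementation of the standard T2-to-F conversion, not the study's code.

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(X, mu0=None):
    """One-sample Hotelling's T^2 test of H0: mean(X) == mu0.

    X: (n, p) observations (e.g. columns = log percent change and
    log spatial extent). Returns (T2, F, p_value); the F statistic
    has (p, n - p) degrees of freedom.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    mu0 = np.zeros(p) if mu0 is None else np.asarray(mu0, float)
    diff = X.mean(axis=0) - mu0
    S = np.atleast_2d(np.cov(X, rowvar=False))   # sample covariance
    t2 = float(n * diff @ np.linalg.solve(S, diff))
    f = (n - p) / (p * (n - 1)) * t2             # T^2 -> F conversion
    return t2, f, float(stats.f.sf(f, p, n - p))
```

With p=1 the statistic reduces to the square of the one-sample Student's t, which is the sense in which T2 generalizes it.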
The behavioral data confirmed that subjects were attending to the stimuli and were better at recognizing sentences than nonsense strings [stimulus effect F(1,55)=156, P<.0001]. All groups performed equally well in remembering both the (simple, declarative) English sentences and the consonant strings [group effect not significant (NS)]. Hearing subjects who did not know ASL performed at chance in recognizing ASL sentences and nonsigns, unlike the two native-signer groups [group effect F(2,55)=41, P<.0001]. Deaf and hearing signers performed equally well on ASL stimuli (group effect NS) (Table 1).
English. When normally hearing subjects read English sentences they displayed robust activation within the standard language areas of the left hemisphere, including the inferior frontal (Broca’s) area, Wernicke’s area [posterior superior temporal sulcus (STS)], and the angular gyrus. Additionally, the dorsolateral prefrontal cortex (DLPC), inferior precentral cortex, and anterior and middle STS were active, in agreement with recent studies indicating a role for these areas in language processing and memory (39–42). Within the right hemisphere there was only weak and variable activation (less than 50% of runs), reflecting the ubiquitous left hemisphere asymmetry described by over a century of language research (Fig. 1a; Table 2). In contrast to the hearing subjects, deaf subjects did not display left hemisphere dominance when reading English
Table 1. Demographic and behavioral data for the three subject groups