It is not yet known how the frontal and posterior areas share semantic processing during reading and listening. However, one clue comes from the study of eye movements during skilled reading (13). Skilled readers fixate on a word for only ≈300 ms, and the length and even the direction of the saccade after this fixation are influenced by the meaning of the word currently fixated (14). The time course of fixations during skilled reading suggests that frontal areas are activated in time to influence saccades to the next word but that the posterior activity is too late to play this role. Previously studied indices of semantic processing, such as the N400 (15), involve components that are also too late to influence the saccade during normal skilled reading.
Because the distance of the saccade during skilled reading reflects the meaning of the lexical item currently fixated, activity in at least some of the brain areas reflecting semantic processing must occur in time to influence the saccade. Areas involved in chunking visual letters into a unit (visual word form) and those related to attention, as well as the anterior semantic areas, are active early enough to influence saccades (3). Partly for this reason, we have suggested that the left anterior area that is active during the processing of single words reflects the meaning of the single word (lexical semantics), and the posterior area is involved in relating the current word to other words within the same phrase or sentence (sentential or propositional semantics).
The distinction between lexical and propositional semantics is a common one in linguistics (16). Many psycholinguistic studies (17, 18) also draw on this or similar distinctions. It is clear that the meaning of each individual lexical item taken in isolation gives little that would serve as a reliable cue to the overall meaning of a passage. If even giving a highly familiar use of a word requires activation of left posterior areas, as the positron-emission tomography studies argue (5), this left posterior area also must be important to obtaining the overall meaning of passages that rely on integrating many words. Most psycholinguistic accounts assign this integrative role largely to working memory (19). Thus, it may be of importance that the portion of working memory involved in the storage of verbal items lies in a brain area near Wernicke's area (ref. 6; see also ref. 25).
For all of these reasons, we tried to test the specific hypothesis that frontal areas will be more important in obtaining information about the meaning of a lexical item and posterior areas will be more important in determining whether the item fits a sentence frame. Our basic approach was to have each subject perform these two tasks on the same lexical items in separate sessions. We then compared the electrical activity generated during the two tasks to determine whether the lexical task produces increased activity in the front of the head while the sentence task does so in posterior regions.
A secondary hypothesis refers to the ability of subjects to give priority voluntarily to the lexical or sentence level computation. Our basic idea (4) is that the subject can increase the activation in any brain area by giving attention to that area. Accordingly, in one session, we trained subjects to press a key if “the word is manufactured and fits the sentence,” and in another session, we asked them to press a key if the word “fits the sentence and is manufactured.” These two conjunctions are identical in terms of their elements, but they are to be performed with opposite priorities. Our general view is that attention reenters the same anatomical areas at which the computation is made originally and serves to increase neuronal activity within that area.
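The priority manipulation can be illustrated schematically. In the sketch below (our illustration, not part of the original study materials), the two instructions correspond to the same logical conjunction with its operands evaluated in opposite orders; the function names and Boolean inputs are hypothetical labels for the two judgments subjects had to make.

```python
def respond_lexical_first(is_manufactured: bool, fits_sentence: bool) -> bool:
    # Instruction: "the word is manufactured and fits the sentence".
    # The lexical judgment is given priority (evaluated first).
    return is_manufactured and fits_sentence

def respond_sentence_first(is_manufactured: bool, fits_sentence: bool) -> bool:
    # Instruction: "fits the sentence and is manufactured".
    # The sentence-level judgment is given priority.
    return fits_sentence and is_manufactured
```

For every combination of inputs the two functions return the same answer, which mirrors the point in the text: the two conjunctions are identical in their elements, and only the priority (order of evaluation) differs.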
In accord with this view, we expect that, in the front of the head, we will see more activity early in processing under the instruction to give priority to the lexical computation, and late in processing, there will be more frontal activity when the sentence-level computation has been given priority. At posterior sites, the effect of instruction will be reversed.
Twelve right-handed native English speakers (six women) participated in the main experiment. Handedness of participants was assessed by the Edinburgh Handedness Questionnaire (20, 21). Their ages ranged between 19 and 30 years and averaged 21.9 years.
EEG was recorded from the scalp by using the 128-channel Geodesic Sensor Net (Electrical Geodesics, Eugene, OH) (22). The recorded EEG was amplified with a 0.1- to 50-Hz bandpass, 3-dB attenuation, and 60-Hz notch filter, digitized at 250 Hz with a 12-bit A/D converter, and stored on a magnetic disk. Each EEG epoch lasted 2 s and began with a 195-ms pre-stimulus baseline before the onset of the word stimulus. All recordings were referenced to Cz. ERPs were re-referenced against an average reference and averaged for each condition and for each subject after automatic exclusion of trials containing eye blinks and movements. A grand average across subjects was computed; difference waves as well as statistical (t test) values comparing different tasks were interpolated onto the head surface for each 4-ms sample (these methods are described further in refs. 9 and 10).
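The re-referencing, baseline correction, and per-condition averaging described above can be sketched in a few lines of NumPy. This is a minimal illustration of the arithmetic only, assuming an epoch array of trials × channels × samples with the dimensions given in the text (128 channels, 250-Hz sampling, 2-s epochs, 195-ms pre-stimulus baseline); the random data and variable names are ours, and the published analysis also included artifact rejection and scalp interpolation not shown here.

```python
import numpy as np

# Hypothetical epoched data: trials x channels x samples.
# 128 channels at 250 Hz; a 2-s epoch is 500 samples, and the
# 195-ms pre-stimulus baseline spans the first ~49 samples.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 128, 500
sfreq = 250.0
baseline_samples = int(round(0.195 * sfreq))  # 195-ms pre-stimulus window

epochs = rng.normal(size=(n_trials, n_channels, n_samples))

# Average reference: subtract the mean across all channels at each
# sample (the re-referencing step described in the text).
epochs_avg_ref = epochs - epochs.mean(axis=1, keepdims=True)

# Baseline correction: subtract each channel's mean over the
# pre-stimulus window from the whole epoch.
baseline = epochs_avg_ref[:, :, :baseline_samples].mean(axis=2, keepdims=True)
epochs_corrected = epochs_avg_ref - baseline

# Average across (artifact-free) trials to obtain the per-subject
# ERP for one condition; grand averages then average across subjects.
erp = epochs_corrected.mean(axis=0)
```

After average referencing, the channel values sum to zero at every sample, which is the defining property of that reference choice.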
The experimental trials began with the presentation of a sentence with a missing word (e.g., “He ate his food with his _____”). The sentence was displayed until the subject pressed a key, and then a fixation cross appeared in the center of the screen for a variable interval of 1,800–2,800 ms. After the fixation cross, a word (e.g., “fork”) was presented for 150
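The trial sequence above can be summarized as a small data structure. This is a sketch under the timing constraints stated in the text (sentence shown until keypress, then a fixation cross for a jittered 1,800–2,800-ms interval, then the target word); the function and field names are hypothetical.

```python
import random

FIXATION_MIN_MS, FIXATION_MAX_MS = 1800, 2800  # variable fixation interval

def make_trial(frame: str, word: str, rng=random) -> dict:
    """Build one trial: sentence frame, jittered fixation, target word."""
    return {
        "sentence": frame,  # displayed until the subject presses a key
        "fixation_ms": rng.randint(FIXATION_MIN_MS, FIXATION_MAX_MS),
        "word": word,       # target word presented after the fixation cross
    }

trial = make_trial("He ate his food with his _____", "fork")
```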