blood oxygenation, as measured by transcranial Doppler sonography, with occasions on which observers miss signals. In addition to measuring blood flow and blood oxygenation, which are indirect indicators of neural functioning, event-related potentials may offer another way to learn when an individual has missed a critical signal. As discussed in the following section on neuroergonomics, if the data stream presented to the observer is formatted so as to elicit, for example, a P300 response when a miss occurs, then an augmented perception system could be triggered by that electrophysiological signal. Such techniques for augmenting perception, in this case to improve awareness of a signal, depend on the vigilance task involving detection of a specific, known signal. The interface employed for the task may need to be structured to make the best use of the augmentation opportunity, and such designs remain a challenge for the human factors and ergonomics communities (Hancock and Szalma, 2003a, 2003b).
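The triggering logic described above can be illustrated with a deliberately simplified sketch. The sampling rate, time windows, and threshold below are assumed values for illustration, not parameters of any fielded system; real single-trial P300 detection uses trained statistical classifiers rather than a fixed amplitude test.

```python
# Illustrative sketch only: a simplified single-trial P300 check that an
# augmented perception system could use to trigger an attention cue when
# the ERP suggests a missed signal. All parameters are assumed values.

def p300_detected(epoch, fs=250, baseline_ms=(0, 100),
                  window_ms=(300, 500), threshold_uv=5.0):
    """Return True if the mean amplitude in the P300 window exceeds the
    pre-stimulus baseline by more than threshold_uv microvolts.

    epoch: list of voltage samples (microvolts), time-locked to the stimulus.
    fs:    sampling rate in Hz.
    """
    def mean_in(ms_lo, ms_hi):
        lo, hi = int(ms_lo * fs / 1000), int(ms_hi * fs / 1000)
        segment = epoch[lo:hi]
        return sum(segment) / len(segment)

    baseline = mean_in(*baseline_ms)
    p300 = mean_in(*window_ms)
    return (p300 - baseline) > threshold_uv

# A flat epoch should not trigger a cue; an epoch with a positive
# deflection around 300-500 ms should.
flat = [0.0] * 150                          # 600 ms of samples at 250 Hz
bump = flat[:75] + [8.0] * 50 + flat[125:]  # deflection in the P300 window
```

In a fielded system, a positive detection would cue the interface to redirect the operator's attention, which is the augmentation opportunity the interface design must accommodate.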
Despite the challenges, work on military applications for this kind of brain-signal-augmented recognition is going forward, as illustrated by two current Defense Advanced Research Projects Agency (DARPA) programs.1 The Neuroscience for Intelligence Analysts program uses electroencephalography (EEG) to detect a brain signal corresponding to perceptual recognition (which can occur below the level of conscious attention) of a feature of interest in remote (airborne or space-based) imagery. In macaque monkeys, electrophysiological recordings have successfully detected a recognition signature at target image presentation rates of up to 72 images per second (Keysers et al., 2001). In the Phase 1 proof-of-concept demonstration of a triage approach to selecting images for closer review, working intelligence analysts performing a realistic broad-area search task achieved a better than 300 percent improvement in throughput and detection relative to the current standard for operational analysis. There is evidence that this technology can detect at least some instances of recognition that occur without conscious attention, supporting the notion that perception is not always conscious.
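The triage concept can be sketched in a few lines. The class name, the confidence threshold, and the idea of a single scalar classifier score per image are illustrative assumptions, not details of the actual DARPA system.

```python
# Hypothetical sketch of EEG-based image triage for rapid serial visual
# presentation, in the spirit of the Neuroscience for Intelligence
# Analysts concept: images whose single-trial EEG classifier score
# exceeds a threshold are queued for closer analyst review.

from dataclasses import dataclass, field

@dataclass
class TriageQueue:
    threshold: float = 0.7              # assumed classifier confidence cutoff
    flagged: list = field(default_factory=list)

    def process(self, image_id: str, eeg_score: float) -> bool:
        """Queue an image for review if the recognition score is high."""
        if eeg_score >= self.threshold:
            self.flagged.append((image_id, eeg_score))
            return True
        return False

queue = TriageQueue()
# In practice, scores would come from a trained single-trial EEG classifier.
for img, score in [("img_001", 0.12), ("img_002", 0.91), ("img_003", 0.55)]:
    queue.process(img, score)
# queue.flagged now holds only the high-scoring images for the analyst.
```

The throughput gain comes from this filtering step: the analyst examines closely only the small fraction of images that the brain signal flagged, rather than every image in the broad-area search.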
The second DARPA program, the Cognitive Technology Threat Warning System, couples a signal-processing system with a helmet-mounted EEG device that monitors brain activity, augmenting a human sentinel’s ability to detect a potential threat anywhere in the wide field-of-view scene seen through a pair of binoculars. Again, the objective is to identify potential features of interest from the brain signal, then warn the soldier-sentinel and direct his or her attention to those features.
If augmentation of signal awareness can enhance performance in continuous-vigilance tasks during the hours of boredom, as illustrated by these DARPA demonstration-experiments, are there opportunities to enhance soldier performance during the infrequent but intense moments of terror? In the modern Army environment, such contexts typically involve surging information loads on individuals who must process all the relevant information quickly and appropriately to avoid the twin performance faults: failure to respond or incorrect response. When peak demands are coming from multiple cognitive tasks—e.g., perceptual judgment, information assimilation to cognitive schema, and choice selection (decision making in a broad sense), all of which must be carried out with urgency—cognitive overload is likely to degrade performance.
As an example, consider a mounted soldier-operator who is monitoring his own formation of manned and unmanned ground vehicles, along with attached unmanned aerial vehicle assets, and is receiving and sending communications over his tactical radio system. At the same time that he notices some problem with one of the unmanned ground vehicles, he loses contact with one of the aerial vehicles and receives preliminary indications of an enemy position on his flank. The soldier in this or analogous circumstances may well have trained for such events individually, but all three occurring simultaneously is likely to produce cognitive overload.
The primary way in which neuroscience can help an individual deal with cognitive overload is through improved methods for load-shedding as the workload stress on the individual increases beyond a manageable level. In effect, the aiding system removes or lessens one or more of the stacked processing-and-response demands on the individual, and the load-shedding process can continue as cognitive tasks are sequentially removed. Thus, in the example above, the soldier-operator can focus on the most serious threat, the signs of hostile activity, while his load-shedding system automatically moves into problem-management routines for the two “straying” unmanned vehicles and cuts the incoming radio traffic to just the highest-priority messages. Various forms of discrete task allocation have existed in concept since the mid-1950s and in practice since the late 1980s. In these existing forms, however, the aiding system receives no input on how close the aided individual is to cognitive overload. This particular aspect, monitoring the individual for signs of cognitive overload, is where neuroscience and its technologies for assessing neurophysiological state can contribute to enhancing performance in the moments of terror. In our mounted soldier-operator example, an information workload monitoring system would detect the soldier’s incipient cognitive overload and activate the automated problem-management routines for his straying assets and the radio “hush-down.”
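The sequential load-shedding logic described above can be sketched as a simple priority scheme. The task names, priority values, per-task load estimates, and the single-number workload scale are all illustrative assumptions; a real system would derive the workload estimate from neurophysiological sensors and use far richer task models.

```python
# Hypothetical sketch of priority-based load-shedding driven by a
# neurophysiological workload estimate. All names and numbers are
# illustrative assumptions.

def shed_load(active_tasks, workload, capacity=1.0):
    """Delegate the lowest-priority tasks to automation until the
    operator's estimated workload falls back within capacity.

    active_tasks: list of (task_name, priority, load), where higher
                  priority means more important to keep with the human.
    workload:     current workload estimate (same scale as capacity).
    Returns (tasks kept by the operator, tasks delegated to automation).
    """
    kept = sorted(active_tasks, key=lambda t: t[1], reverse=True)
    delegated = []
    while workload > capacity and len(kept) > 1:
        task = kept.pop()        # shed the least important remaining task
        delegated.append(task)
        workload -= task[2]      # its load no longer falls on the human
    return kept, delegated

# The mounted soldier-operator scenario from the text, with assumed loads:
tasks = [("enemy contact on flank", 3, 0.5),
         ("straying UGV", 2, 0.4),
         ("lost UAV link", 2, 0.3),
         ("routine radio traffic", 1, 0.3)]
kept, delegated = shed_load(tasks, workload=1.5)
# The operator keeps the threat; automation handles the rest.
```

The key design point is the trigger: unlike the task-allocation schemes of past decades, the `workload` input here would come from continuous neurophysiological monitoring of the operator rather than from a fixed schedule or manual request.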
In the past decade, much effort has gone into assessing neurophysiological indicators of incipient overload. At the forefront of these efforts was DARPA’s Augmented Cognition (AugCog) program, whose objective was to use a number of neural-state indicators to control adaptive human–machine interfaces to information systems.