runs, etc.). Given the number and severity of confounding factors, the results of the study are preliminary at best and in no way constitute unequivocal scientific evidence that the CSGs identified as statistically significant can effectively detect change in cognitive activity in a complex human supervisory control task.
A second set of four experiments was conducted in Phase 2 of AugCog. The stated objective of Phase 2 was to manipulate an operator’s cognitive state on the basis of near-real-time psychophysiological measurements (Dorneich et al., 2005). The experiments used a video game environment to simulate military operations in urban terrain (MOUT) in either a desktop setting or a motion-capture laboratory. In addition to the primary task, navigating through the MOUT, participants had to distinguish friends from foes while monitoring and responding to communications. A communications scheduler, part of the Honeywell Joint Human–Automation Augmented Cognition System,1 determined operator workload via a cognitive state profile (CSP) and prioritized incoming messages accordingly. The CSP was an amalgam of signals from cardiac interbeat interval, heart rate, pupil diameter, EEG P300, the cardiac QRS complex, and EEG power at the frontocentral (FCZ) and centroparietal (CPZ) midline sites (Dorneich et al., 2005).
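As noted below, the actual method of combining these signals into the CSP was never published. Purely as an illustration of the kind of scheme such a scheduler might use, the following sketch fuses normalized sensor features into a single workload estimate and uses it to gate message delivery. Every name, weight, and threshold here is an assumption for exposition, not the Honeywell design.

```python
# Hypothetical sketch of a CSP-driven communications scheduler.
# The actual Honeywell fusion rule, weights, and thresholds were not
# published; everything below is illustrative only.

from dataclasses import dataclass

@dataclass
class SensorFeatures:
    """Normalized (0-1) features, one per signal named in the text."""
    interbeat_interval: float
    heart_rate: float
    pupil_diameter: float
    eeg_p300: float
    qrs_amplitude: float
    eeg_power_fcz: float
    eeg_power_cpz: float

# Illustrative, evenly weighted linear fusion (assumed, not documented).
WEIGHTS = {
    "interbeat_interval": 1 / 7, "heart_rate": 1 / 7,
    "pupil_diameter": 1 / 7, "eeg_p300": 1 / 7, "qrs_amplitude": 1 / 7,
    "eeg_power_fcz": 1 / 7, "eeg_power_cpz": 1 / 7,
}

def workload(f: SensorFeatures) -> float:
    """Scalar workload estimate in [0, 1] from the weighted features."""
    return sum(w * getattr(f, name) for name, w in WEIGHTS.items())

def schedule(messages, f: SensorFeatures, high_load: float = 0.7):
    """Under high estimated load, deliver only high-priority messages."""
    if workload(f) >= high_load:
        return [m for m in messages if m["priority"] == "high"]
    return messages
```

Even this toy version makes the report's later concern concrete: the scheduler's behavior hinges entirely on a threshold and a weighting that must be valid for the individual operator and situation at hand.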
As in the Phase 1 experiment, there were only a few participants (16 or fewer) in each of the four Phase 2 experiments. Construct validity and statistical models were questionable, with significant experimental confounds. There is no open account of how the neurological and physiological variables were combined to form the CSP, making independent replication of these experiments by peer researchers difficult. In light of these concerns, claims such as a 100 percent improvement in message comprehension, a 125 percent improvement in situation awareness, a 150 percent increase in working memory, and a more than 350 percent improvement in survivability should be considered tentative. In addition, the authors report anecdotal evidence that their CSGs can indicate operator inability to comprehend a message (Dorneich et al., 2005).
The focus in these experiments appears to have been on generating measurable outcomes on a very tight time schedule. Most of the technical data on the performance of the actual sensors and of the signal processing and combination algorithms were not published. This information would have been useful for further scientific evaluation and confirmation of the reported results.
The problem with AugCog as a development program lies less in the intrinsic concept of managing cognitive workload through neural and physiological feedback to a smart information system than in the assumptions made about the maturity of the technologies required to implement such a system and thus about the time frame for an initial operational capability. Unfortunately, no follow-on studies have reported how the successful CSGs could or would be combined in an operational system. The engineering obstacles to combining EEG, fNIR, and eye-tracking devices are substantial. Unless dramatic leaps are made soon in the miniaturization of these technologies and in improved signal-processing algorithms, the realization of a single headset that can combine all—or even a subset—of these technologies is at least a decade away.
Other engineering problems, such as how to measure EEG signals in a dynamic, noisy environment, have not been addressed, at least in the open literature. Basic sensor system engineering problems like these will be critical to any operational deployment of these technologies. A similar engineering problem underlies the use of the eye-tracking devices assumed for AugCog applications. These devices currently require a sophisticated head-tracking device in addition to the eye-tracking device, and encapsulating this technology into an unobtrusive device that can be worn in the field also appears to be at least 10 years in the future.
In addition to hardware limitations on the use of neural and physiological technologies in an operational field setting, the software/hardware suite required to interpret cognitive state reliably in real time is beyond current capabilities, particularly in the highly dynamic, stochastic settings typical of command-and-control environments. The experiments for the AugCog program were conducted under controlled laboratory conditions. While this is to be expected for preliminary, proof-of-concept studies, such a limitation constrains the extrapolation of the reported results. For example, the communications scheduler in the Phase 2 experiments made changes in information presentation based on gross differences in perceived cognitive state. In actual battlefield conditions, the volume of task-relevant information and the many degrees of freedom in cognitive state will demand far greater precision and reliability: the system must ascertain an operator’s condition and make situation-appropriate adjustments rather than limit access, perhaps inappropriately, to information that may be critical for a real-time decision. Not only must the sensors and signal-processing algorithms improve substantially; significant advances are also needed in decision-theoretic modeling. In particular, these models will have to accommodate a significant range of individual variability.
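The need to accommodate individual variability can be made concrete with a toy sketch: rather than applying one population-wide overload threshold, a decision model could standardize each workload estimate against the operator's own calibration baseline. The class name, z-score rule, and threshold below are assumptions for illustration, not part of any AugCog design.

```python
# Illustrative only: per-operator calibration as one way a decision model
# might accommodate individual variability. Not the AugCog method.

import statistics

class OperatorModel:
    """Standardizes workload estimates against an operator's own baseline."""

    def __init__(self, calibration_samples):
        # Baseline statistics from a per-operator calibration session.
        self.mean = statistics.mean(calibration_samples)
        self.stdev = statistics.stdev(calibration_samples)

    def z_score(self, workload_estimate: float) -> float:
        return (workload_estimate - self.mean) / self.stdev

    def overloaded(self, workload_estimate: float,
                   z_threshold: float = 2.0) -> bool:
        # Flag overload only when the estimate is extreme *for this operator*.
        return self.z_score(workload_estimate) > z_threshold
```

Under this scheme the same raw estimate can signal overload for one operator and routine load for another, which is precisely the kind of distinction a fixed, population-level threshold cannot make.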
Overall, the AugCog goal of enhancing operator performance through psychophysiological sensing and automation-based reasoning is desirable but faces major challenges as an active information filter. Suppose a system is implemented that can alter information streams and reduce their volume by filtering the incoming information presented to a user. How is the system to know that its filtering is both helpful to this user and passes along the information the current situation requires? The system software must correctly determine an optimal cognitive load for an individual in a dynamic, highly uncertain context and decide