Opportunities in Neuroscience for Future Army Applications

6 Improving Cognitive and Behavioral Performance

The transformation of the U.S. military services into a highly networked force has markedly increased the need for rapid collection and dissemination of vast amounts of data. This includes the fusion and display of data in formats that can be readily comprehended by soldiers, who can then take the appropriate actions. The combination of optimal performance, data comprehension, and decision making required for this transformation to a networked force comes as the Army adopts a growing variety of combat platforms. Depending on their specialties, soldiers can be expected to operate equipment ranging from 70-ton Abrams tanks and 35-ton Stryker vehicles to sophisticated manned and unmanned ground and air systems of the Future Combat Systems, and to conduct basic dismounted soldier operations in all environments, including urban terrain. All of these operations rely on the soldier maintaining situational awareness and sharing a common operating picture of the battlefield.

The soldiers conducting these varied operations will also contend with stressors on cognitive performance, as described in Chapters 4 and 5. The need to remain vigilant and on task continuously for extended periods (longer than 36 hours) in extreme environments (for example, in closed vehicles, where the temperature can exceed 110°F) while enduring the stresses of sustained combat or of security and stability operations will challenge the baseline cognitive and behavioral capabilities of soldiers, who must assimilate and react appropriately to the flow of task-relevant information. In short, each soldier's cognitive performance on his or her assigned tasks will be more critical than ever before to his or her operational performance.
The recent breakthroughs in neuroimaging and other technologies described in Chapters 2 through 5 make it possible to quantify physiological metrics of human attentiveness, cognitive performance, and neural functioning. The knowledge gained is guiding the development of countermeasures against such stressors as fatigue, sleep deprivation, information overload, dehydration and other metabolic stresses, and even overtraining. At the same time, many of these techniques appear to offer possibilities for enhancing soldiers' performance beyond their normal, or unaided, baseline capabilities. This chapter assesses the current status and emerging prospects for such neuroscience-informed enhancements.

HOURS OF BOREDOM AND MOMENTS OF TERROR

Many of the tasks that make up a military deployment, especially for operations in a combat theater, can be characterized as "hours of boredom and moments of terror" (Hancock, 1997). During the long periods of waiting that lead up to a combat operation where hostile action may await, the main demand on individual performance is for vigilance and sustained attention (Warm, 1984). Advances in neuroscience have revealed a strong association between the psychophysical dimensions of the vigilance task and certain mental workload measures, an association that can be exploited to aid the soldier (Warm et al., 1996). Failures in vigilance can lead to calamity in military operations (Rochlin, 1991; Snook, 2000) as well as in other facets of human life (Hill and Rothblum, 1994; Evan and Manion, 2002). The classic military example is standing watch, where continuous sustained attention is essential but the probability of a threat event at any particular time is low. Many prominent cases of military failure associated with human error have involved failures of sustained attention (e.g., Miller and Shattuck, 2004). Looked at in terms of classic signal detection theory, such failures are classified as either a false alarm or a miss.
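The false-alarm/miss taxonomy above comes straight from signal detection theory, in which each observation interval is scored as a hit, miss, false alarm, or correct rejection, and sensitivity (d') and response bias (c) are derived from the hit and false-alarm rates. The sketch below illustrates the standard computation; the watch-period counts are invented for the example:

```python
from statistics import NormalDist

def sdt_metrics(hits, misses, false_alarms, correct_rejections):
    """Classic signal detection metrics for a vigilance task.

    Sensitivity (d') and response bias (c) are computed from the
    hit and false-alarm rates via the inverse normal transform.
    """
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A vigilance decrement shows up as falling d' over successive watch
# periods (the counts here are illustrative, not empirical data).
early = sdt_metrics(hits=18, misses=2, false_alarms=3, correct_rejections=177)
late = sdt_metrics(hits=11, misses=9, false_alarms=4, correct_rejections=176)
print(f"early watch d'={early[0]:.2f}, late watch d'={late[0]:.2f}")
```

In this framing, a "failure of sustained attention" is a drop in d' rather than a shift in criterion alone, which is why miss rates can climb even when false alarms stay flat.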
Both forms of inappropriate response are problematic, but missing a critical signal for response can result in injury and fatalities not just for the soldier but also for the immediate unit or even beyond. Thus, finding ways to extend attentiveness could have a significant return for overall military performance. Fortunately, substantial progress has recently been made on the problem of sustained attention. For example, Tripp and Warm (2007) have linked variations in blood flow and blood oxygenation, as measured by transcranial Doppler sonography, with occasions on which observers miss signals. In addition to measuring blood flow and blood oxygenation, which are indirect indicators of neural functioning, event-related potentials may be another way to learn when an individual has missed a critical signal. As discussed in the following section on neuroergonomics, if the data stream of the original event-related potentials is formatted so as to elicit, for example, a P300 response when a miss occurs, then an augmented perception system could be triggered by such an electrophysiological signal. Such techniques for augmenting perception—in this case to improve awareness of a signal—depend on vigilance for catching a specific signal. The interface employed for this task may need to be structured to make best use of the augmentation opportunity, and such designs are a challenge to scientists in the human factors and ergonomics communities (Hancock and Szalma, 2003a, 2003b). Despite the challenges, work on military applications for this kind of brain-signal-augmented recognition is going forward, as illustrated by two current Defense Advanced Research Projects Agency (DARPA) programs.1 The Neuroscience for Intelligence Analysts system uses electroencephalography (EEG) to detect a brain signal corresponding to perceptual recognition (which can occur below the level of conscious attention) of a feature of interest in remote (airborne or space-based) imagery. In macaque monkeys, an EEG signature from electrophysiological recordings has been successfully detected for target image presentation rates of up to 72 images per second (Keysers et al., 2001).
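The ERP-triggered augmentation idea described above rests on pulling a small time-locked deflection such as the P300 out of much larger background EEG noise. The sketch below illustrates the core epoch-averaging logic on synthetic data; the sampling rate, analysis window, amplitudes, and noise level are all assumed values for illustration, not parameters of the DARPA systems:

```python
import math
import random

random.seed(1)
FS = 250      # sampling rate in Hz (an assumed value)
EPOCH = 200   # samples per epoch, i.e., 800 ms post-stimulus

def simulate_epoch(p300_present, amplitude_uv=5.0, noise_uv=10.0):
    """One synthetic EEG epoch: Gaussian background noise plus,
    optionally, a P300-like positive deflection peaking near 300 ms."""
    epoch = []
    for i in range(EPOCH):
        t = i / FS
        sample = random.gauss(0.0, noise_uv)
        if p300_present:
            sample += amplitude_uv * math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        epoch.append(sample)
    return epoch

def mean_window_amplitude(epochs, t_start=0.25, t_end=0.5):
    """Average the epochs sample-by-sample, then take the mean amplitude
    in the P300 window. Averaging N time-locked epochs shrinks the
    uncorrelated noise by roughly sqrt(N) while the ERP adds linearly."""
    n = len(epochs)
    grand = [sum(e[i] for e in epochs) / n for i in range(EPOCH)]
    lo, hi = int(t_start * FS), int(t_end * FS)
    return sum(grand[lo:hi]) / (hi - lo)

targets = [simulate_epoch(True) for _ in range(40)]
blanks = [simulate_epoch(False) for _ in range(40)]
print(mean_window_amplitude(targets), mean_window_amplitude(blanks))
```

On this synthetic data the averaged target epochs show a clearly elevated window amplitude relative to blanks, which is the separation a detector in a triage or warning system would threshold on.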
In the Phase 1 proof-of-concept demonstration of a triage approach to selecting images for closer review, actual intelligence analysts working on a realistic broad-area search task achieved a better than 300 percent improvement in throughput and detection relative to the current standard for operational analysis. There is evidence that this technology can detect at least some classes of unconscious attention, supporting the notion that perception is not always conscious. The second DARPA program, the Cognitive Technology Threat Warning System, uses a signal-processing system coupled with a helmet-mounted EEG device to monitor brain activity to augment a human sentinel's ability to detect a potential threat image anywhere in a wide field-of-view image seen through a pair of binoculars. Again, the objective is to identify potential features of interest using the brain signal, then warn the soldier-sentinel and direct his or her attention to those features.

If augmentation of signal awareness can enhance performance in continuous-vigilance tasks during the hours of boredom, as illustrated by these DARPA demonstration-experiments, are there opportunities to enhance soldier performance during the infrequent but intense moments of terror? In the modern Army environment, such contexts typically involve surging information loads on individuals who must process all the relevant information quickly and appropriately to avoid the twin performance faults: failure to respond or incorrect response. When peak demands are coming from multiple cognitive tasks—e.g., perceptual judgment, information assimilation to cognitive schema, and choice selection (decision making in a broad sense), all of which must be carried out with urgency—cognitive overload is likely to degrade performance.
As an example, consider a mounted soldier-operator who is monitoring his own formation of manned and unmanned ground vehicles, along with attached unmanned aerial vehicle assets, and is receiving and sending communications over his tactical radio system. At the same time that he notices some problem with one of the unmanned ground vehicles, he loses contact with one of the aerial vehicles and receives preliminary indications of an enemy position on his flank. The soldier in this or analogous circumstances may well have trained for such events individually, but all three occurring simultaneously is likely to produce cognitive overload.

The primary way in which neuroscience can help an individual deal with cognitive overload is through improved methods for load-shedding as the workload stress on the individual increases beyond a manageable level. In effect, the aiding system removes or lessens one or more of the stacked processing-and-response demands on the individual. The load-shedding process can continue as cognitive tasks are sequentially removed. Thus, in the example above, the soldier-operator can focus on the most serious threat—the signs of hostile activity—while his load-shedding system automatically moves into problem-management routines for the two "straying" unmanned vehicles and cuts the incoming message traffic on the radio to just the highest priority messages. Various forms of discrete task allocation have been around in concept since the mid-1950s and in practice since the late 1980s. However, in these existing forms, the aiding system does not receive input on how close the aided individual is to cognitive overload. This particular aspect—monitoring the status of the individual for measures of cognitive overload—is where neuroscience and its technologies for assessing neurophysiological state can contribute to enhancing performance in the moments of terror.
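A load-shedding aid of the kind just described can be thought of as a simple policy: when the estimated workload index crosses a threshold, hand the lowest-priority tasks to automation until the residual load is manageable. The sketch below is a hypothetical illustration of that policy only; the task names, priorities, cost weights, and threshold are invented for the example, and a real system would derive the workload index from neurophysiological measures rather than a fixed number:

```python
def shed_load(active_tasks, workload_index, threshold=0.8):
    """Hypothetical load-shedding policy: while the estimated workload
    exceeds the threshold, hand the lowest-priority task to automation.

    Each task is (name, priority, cost), where cost is its assumed
    contribution to the workload index. At least one task is always
    left with the human operator.
    """
    tasks = sorted(active_tasks, key=lambda t: t[1], reverse=True)  # high priority first
    automated = []
    while workload_index > threshold and len(tasks) > 1:
        name, priority, cost = tasks.pop()       # lowest-priority task
        automated.append(name)
        workload_index -= cost
    return tasks, automated, workload_index

# Invented tasks mirroring the mounted soldier-operator example.
tasks = [("flank threat", 3, 0.40),
         ("UGV fault", 2, 0.25),
         ("UAV link loss", 2, 0.20),
         ("radio traffic", 1, 0.15)]
remaining, automated, load = shed_load(tasks, workload_index=1.0)
print(automated)  # lowest-priority tasks are handed to automation first
```

Running the example sheds the radio traffic and then the UAV link-loss task, leaving the operator focused on the flank threat, which matches the sequencing described in the text.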
In our mounted soldier-operator example, an information workload monitoring system would detect the soldier's nascent cognitive overload condition and activate the automated problem management routines for his straying assets and the radio "hush-down." In the past decade, much effort has gone into the assessment of neurophysiological indicators of incipient overload. At the forefront of these efforts was the Augmented Cognition (AugCog) program of DARPA. The AugCog objective was to use a number of neural state indicators to control adaptive human–machine interfaces to information systems.

1 Amy A. Kruse, Defense Sciences Office, DARPA, Briefing to the committee on June 30, 2008.
The neural state indicators were used to assess cognitive overload stress, and when the stress became too great, they would trigger the dynamic load-shedding activity of an interface management system (McBride and Schmorrow, 2005; Schmorrow and Reeves, 2007). This pioneering effort in information workload management via physiological and neural feedback, which is discussed further in Chapter 7 and in greater detail in Appendix D, met with some degree of success. It also provides important lessons on the challenges of implementing this type of adaptive aiding technology. The concept of adaptive aiding, which was first advanced by Rouse (1975) for the U.S. Air Force, builds on a long tradition of behavioral adaptation to environmental constraints (Hancock and Chignell, 1987). However, in a neural-indicator-driven implementation, such as in the original AugCog vision, the adaptation is not managed by the individual alone but is augmented by the aiding system's assessment of the individual's level of cognitive stress or other psychophysiological parameters. Research projects in adaptive aiding have focused on systems for such real-world tasks as air traffic control (Hillburn et al., 1997), control of unmanned aerial vehicles (see, for example, Mouloua et al., 2003; Taylor, 2006), and augmentation of fine motor skills such as laparoscopic surgery (Krupa et al., 2002). In fact, most tasks in which humans perform knowledge-intensive work in conjunction with a complex information management and computational system could probably be improved by better diagnostic representations of the state of the human operator. Ultimately, the question becomes how action is integrated within the brain itself. For example, Minsky (1986) suggested that the brain could be viewed as a "society of mind." In this view, a person's conscious experience is an emergent property arising from the interaction of many cortical subsystems.
The way in which these subsystems appear to interact seamlessly2 may well represent a template for advanced human–machine systems whose goal would be to reproduce the apparent effortlessness with which a person willfully controls his or her own limbs. A sad reality is that we no doubt will learn more about how this interaction works within the brain by working with individuals damaged by war. See, for example, the section of Chapter 7 entitled "Optimal Control Strategies for Brain–Machine Interfaces." The prospect of returning the wounded to their previous level of physiological capability is a potential source of satisfaction. However, the next step beyond recovery of capability—providing capabilities that exceed the norm of natural abilities—would raise ethical issues if, indeed, it became technically feasible (Hancock, 2003).

NEUROERGONOMICS

Neuroergonomics has been defined by the individual who coined the term as "the study of the brain and behavior at work" (Parasuraman and Rizzo, 2007). It is one facet, or formalized expression, of the broader field of brain–machine (or sometimes mind–machine) interfaces (Levine et al., 2000; Lebedev and Nicolelis, 2006). Much of the broader field, as discussed in Chapters 5 and 7, has focused on ways to restore full functioning to individuals who have lost limbs or who have suffered some form of cognitive deficit following concussive or kinetic injuries. Although many of the advances in knowledge and in technology for these medical applications are inherently important to all application areas for brain–machine interfaces, the focus of this section is not medical prostheses. Rather, a typical application in neuroergonomics is concerned with enhancing selected capabilities beyond an unaided level, whether or not the individual aided by the system has experienced some degradation in capability. Neuroergonomics deals as much with performance improvement and performance enhancement as with performance recovery.
The brief summary below examines some of the opportunities envisioned by those working in this field, as well as some of the barriers, acknowledged and implied, to successful realization of these opportunities.

Specificity of Brain Signals as Control Inputs to a Brain–Machine Interface

For a nonexpert, the advances in neuroscience described in the popular press—and sometimes in proposals seeking funding—can easily be interpreted in ways that overstate the specificity of the signal patterns within the brain that can be monitored with current techniques. Thus, lay individuals frequently ask whether current diagnostic techniques allow an observer to know what the person being observed is thinking. A similarly unrealistic flight of fantasy is that the weapons system of an advanced aircraft could be controlled by thinking in the language of the aircraft's designers or pilots. In general, an expectation that higher levels of cognition can be immediately comprehended by assessing a small number of neural signals is destined for disappointment. However, the confluence of insights from neuroscience and improvements in complex systems control functions will provide limited opportunities for sending discrete control signals directly to an external system. We can, to some degree, elicit and subsequently measure, with a fair degree of accuracy, discrete responses from the brain. A prime example is the P300 wave, which not only is a potential index of cognitive workload but also can be employed as a form of binary (yes/no) control to a hybrid human–machine monitoring system.
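A practical constraint on such binary control is the single-trial signal-to-noise ratio: coherent averaging of N time-locked epochs improves SNR by roughly √N, so a usable yes/no channel must either average many trials (slowing the control loop) or start from a better single-trial SNR. A back-of-envelope sketch, with purely illustrative SNR figures:

```python
import math

def trials_needed(single_trial_snr, target_snr):
    """Coherent averaging of N time-locked epochs improves SNR by sqrt(N),
    since the ERP adds linearly while uncorrelated noise adds in quadrature.
    Returns the number of trials needed to reach the target SNR."""
    return math.ceil((target_snr / single_trial_snr) ** 2)

# An ERP buried at SNR 0.2 on a single trial needs on the order of 100
# averaged trials to reach SNR 2 -- the practical obstacle to fast
# single-trial control. (SNR values here are illustrative assumptions.)
print(trials_needed(0.2, 2.0))  # -> 100
```

This quadratic cost is why the improving single-trial signal-to-noise ratio noted in the literature matters so much for control applications: halving the required averaging window quadruples the usable decision rate.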
Although it remains difficult to distill the P300 wave on a single trial, the signal-to-noise ratio is constantly being improved, as it is for other neurophysiological and allied techniques in neural monitoring. Currently, various forms of brain function can be monitored for use as binary control signals, and simple forms of such controls have been created.

2 There may be more cognitive dissonance, or contention and conflict, occurring in such situations than either the operator or an observer of the operator's behavior can detect unaided by information on the operator's neural state (Hancock, in press).

A Pragmatic Approach to Neuroergonomics Applications

Recently, Parasuraman and Wilson (2008) drew a distinction between techniques that measure cerebral metabolic processes, such as transcranial Doppler sonography or functional magnetic resonance imaging (fMRI), and techniques that measure neural activity (neural signaling) per se, such as EEG and event-related potentials. Their primary concern in making this distinction relates to the use of the output from these neurophysiological monitoring techniques as inputs to adaptive control systems (Hancock, 2007a, 2007b). Parasuraman and Wilson also considered how sequential improvements in the spatial and temporal resolution of these electrophysiological measures can provide opportunities for increasingly refined control inputs and thus for increasingly sophisticated control of complex technologies. The eventual goal, whether implied or stated, is to translate an intention to act into a real action—in more colloquial terms, controlling our tools directly with our minds. The temptation here is to attempt to decide this issue in terms of current and proposed neuroscience methodologies—that is, framing the discussion in terms of what we can measure now and may be able to measure in the near future—and asking how such measures might be used as control signals to a compliant external system (that is, as an input to the defined control interface for the external system). A more practical approach is to ask what the Army and, by extension, its soldiers are expected to do, then consider how these tasks could be accomplished by soldiers interacting with systems via interfaces supported by advanced neuroscience techniques.
The discussions of brain–machine interface technologies in Chapter 7 follow this more pragmatic approach.

LEVERAGING EXTERNAL RESEARCH TO ENHANCE SOLDIER PERFORMANCE

This section describes two areas of research on performance enhancement in nonmilitary contexts that have sufficient relevance to Army applications to bear continued monitoring. In addition to discussing these applied research programs, Chapter 7 discusses investments by nonmilitary entities in neuroscience-related technology development. In many cases, those technology opportunities also aim to enhance cognitive and behavioral performance.

Driver Workload Research

Driving a vehicle on highways and streets shared with other vehicles is an integrated, multiple-task behavior that requires proficient performance of different but interrelated skills. These skills rely on interconnected visual, motor, and cognitive brain systems. This common civilian activity of highway driving is regularly performed under conditions of varying task workload and stress. Extensive behavioral research on enhancing driver proficiency has been funded by the private sector—principally the automotive original equipment manufacturers (OEMs)—and by the U.S. Department of Transportation (DOT). These behavioral studies are beginning to be extended and deepened into neurocognitive analyses through complementary research that uses neuroimaging techniques in a laboratory setting. These studies, which are looking at advanced forms of smart vehicles and driver-support information systems, are relevant to soldier performance in high-workload conditions for either combat or noncombat operations that call for combinations of visual, motor, and cognitive processing in the road-driving context.
In addition, the models, simulation techniques, and analysis procedures developed for driver workload research and smart-vehicle technology have direct application to human–machine interfaces common in military vehicles, complex weapons systems, and battlefield operations generally. Neuroscience research has shown that key brain communication networks strongly influence the degradation of performance in overlearned skills, and attending to the role of these identified networks improves the ability to predict behavior. The goal in much of the advanced driver workload/smart-vehicle research is to develop models that predict behavior rather than to obtain the most accurate simulation. (Simulation systems for operator training have a different goal, and for them, realism is more important.) OEM precompetitive collaborative research projects of interest to the Army include the workload measurement aspects of the final reports from the Crash Avoidance Metrics Partnership, as well as the Safety Vehicle Using Adaptive Interface Technology (SAVE-IT) program sponsored by DOT's Research and Innovative Technology Administration (DOT, 2008b). SAVE-IT deals with adaptive interfaces for high-workload environments. The Integrated Vehicle-Based Safety Systems program, also sponsored by the Research and Innovative Technology Administration, will include a study of driver performance in an environment with multiple warning systems intended to prevent rear-end, run-off-road, and lane-change crashes (DOT, 2008a; UMTRI, 2008). These investments in driver safety technology have been motivated by an interest in active safety systems that avoid a crash rather than merely improve survivability after one happens. These behavioral studies of workload metrics form the basis for a small set of brain imaging studies in simulated environments. Uchiyama et al.
(2003) showed that brain networks are activated in driving-like scenarios in laboratory environments. Young et al. (2006) and Graydon et al. (2004) reported fMRI and magnetoencephalography results showing that a static driving paradigm in a laboratory setting activated the brain network more than did the sum of all the
component tasks in the paradigm. They interpreted these results as suggesting that a critical mass of stimulation cues in a laboratory imaging environment can reasonably replicate a real-world scenario for studying driving behavior. Spiers and Maguire (2007) developed a technique for analyzing blocks of driving activity using fMRI and a video game stimulus. Bruns et al. (2005) have used EEG to monitor an individual driving a military vehicle, and Harada et al. (2007) have demonstrated near-infrared spectroscopy technology to monitor the cortical blood flow of an individual operating a civilian automobile.

NASA Neuroscience Research

The National Aeronautics and Space Administration (NASA) has made the second largest federal investment, after DOD, in studying performance under stressful conditions. The National Space Biomedical Research Institute (NSBRI), a NASA-funded consortium of institutions studying health risks related to long-duration spaceflight, has sought to develop countermeasures to the physical and psychological challenges of spaceflight. The NSBRI also works on technologies to provide medical monitoring and diagnosis capabilities in extreme environments, including cognitive capabilities. NSBRI investigators come from over 70 U.S.-based universities, and the institute is governed by an oversight committee comprising a dozen of its member institutions (NSBRI, 2008a). The Neurobehavioral and Psychosocial Factors team at the NSBRI seeks to identify neurobehavioral and psychosocial risks to the health, safety, and productivity of space crews. Additional research focuses on developing novel methods of monitoring brain function and behavior and measures that enhance the quality of life for astronauts, along with improving their performance and motivation.
This team’s current projects range from researching ways to enhance the performance of a team carrying out a space exploration mission to developing new techniques for monitoring cognitive changes and the effects of stress on performance. The team is also developing a computer system that monitors speech patterns for use in predicting changes in mental capacities, such as cognition and decision making, which may be affected by the heightened exposures to radiation or hypoxia that may be encountered on an extended mission. Another current project is to develop a computer-based system for the recognition and treatment of depression (NSBRI, 2008b). Among the studies completed by the Neurobehavioral and Psychosocial Factors team, Shephard and Kosslyn (2005) developed a portable system to assess nine cognitive functions, including problem solving, attention, and working memory, to provide an early warning sign of stress-related deficits. Dinges et al. (2005, 2007) developed a system using optical computer recognition to track changes in facial expression of astronauts on long spaceflights, when such changes may indicate increased stress. A second NSBRI team, the Human Factors and Performance team, is studying ways to improve daily living and keep crew members healthy, productive, and comfortable during extended space exploration missions. Overall aims of this team are to reduce performance errors by studying environmental and behavioral factors that could threaten mission success. The team develops information tools to support crew performance and guidelines for human systems design. Team members are examining ways to improve sleep and scheduling of work shifts and looking at how lighting can improve alertness and performance. Other projects address nutritional countermeasures and how factors in the environment, such as lunar dust, can impact crew health. 
Recent projects of the Human Factors and Performance team include research on sleep disruption in space and on finding a nutritional counterbalance for the loss of muscle mass and function attributed to long spaceflights. Because rapidly changing light-dark cycles in space can affect the human body's natural circadian cycle, Lockley et al. (2006) have been investigating whether exposure to short-wavelength blue light can be an effective means of shifting the circadian pacemaker, suppressing melatonin, and essentially increasing alertness. Gronfier et al. (2007) found that a modulated light exposure, with bright light pulses of 100 lux supplied in the evening, can entrain human subjects to a longer-than-24-hour light-dark cycle.

NEUROPHARMACEUTICAL APPROACHES TO PERFORMANCE ENHANCEMENT

Chapter 5 discusses nutritional supplements and pharmaceuticals used to sustain performance (measures to counter environmental stressors) as opposed to enhancing it above an individual's baseline optimum. The committee has significant concerns about the potential for inappropriate use of currently available performance-enhancing drugs by the military. The caveats noted in Chapter 5 on the off-label use of neuropharmaceuticals to sustain performance, outside the FDA-approved medical indications for prescribing them, apply even more stringently when the intent is to enhance performance beyond the baseline capability. The requirements for specificity and selectivity must be set high and must be clearly met with scientifically sound evidence. And the risk of undesirable and still-unknown side effects must be weighed carefully against any performance benefit, using tools to measure the performance improvement and clinical measures to assess the overall effects of the intervention. Such tools may need to be developed.
Despite these concerns, it may be worthwhile to continue research on the use of pharmacological agents to optimize performance if the benefits in unique military circumstances clearly outweigh the risks. Future studies may discover enhancers with more striking effects than those currently available (Narkar et al., 2008).
Neuropharmaceuticals might also be applied to influence adversary behavior and decision making. Because pharmaceuticals can no doubt modulate the neurophysiological underpinnings of behavior and performance, they can in principle be used to weaken or incapacitate an adversary, just as they can be used to sustain and strengthen our own soldiers. Although this might be a direction for long-term research, it would also raise substantial ethical, legal (from the perspectives of both U.S. and international law), and strategic issues that should be addressed before the Army supports any such research and before assessing the relevance for Army applications of any non-Army research in this area. As with chemical and biological weapons, the most relevant opportunity for the counteradversary use of pharmaceuticals may be in developing the means to protect our soldiers (and civilians) against pharmacological weapons used against us.

REFERENCES

Bruns, A., K. Hagemann, M. Schrauf, J. Kohmorgen, M. Braun, G. Dornhege, K. Muller, C. Forsythe, K. Dixon, C.E. Lippitt, C.D. Balaban, and W.E. Kincses. 2005. EEG- and context-based cognitive-state classifications lead to improved cognitive performance while driving. P. 1065 in Foundations of Augmented Cognition, Proceedings of the 2005 Human-Computer Interaction Conference. D.D. Schmorrow, ed. Boca Raton, Fla.: CRC Press.

Dinges, D.F., R.L. Rider, J. Dorrian, E.L. McGlinchey, N.L. Rogers, Z. Cizman, S.K. Goldenstein, C. Vogler, S. Venkataraman, and D.N. Metaxas. 2005. Optical computer recognition of facial expressions associated with stress induced by performance demands. Aviation, Space, and Environmental Medicine 76(Supplement 1): B172-B182.

Dinges, D.F., S. Venkataraman, E.L. McGlinchey, and D.N. Metaxas. 2007. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods. Acta Astronautica 60(4-7): 341-350.
DOT (U.S. Department of Transportation). 2008a. Integrated vehicle-based safety systems. Available at http://www.its.dot.gov/ivbss/. Last accessed September 8, 2008.

DOT. 2008b. Safety vehicle using adaptive interface technology (SAVE-IT). Available at http://www.volpe.dot.gov/hf/roadway/saveit/index.html. Last accessed September 8, 2008.

Evan, W.M., and M. Manion. 2002. Minding the Machines: Preventing Technological Disasters. Upper Saddle River, N.J.: Prentice Hall.

Graydon, F.X., R. Young, M.D. Benton, R.J. Genik II, S. Posse, L. Hsieh, and C. Green. 2004. Visual event detection during simulated driving: Identifying the neural correlates with functional neuroimaging. Transportation Research Part F: Traffic Psychology and Behaviour 7(4-5): 271-286.

Gronfier, C., K.P. Wright, Jr., R.E. Kronauer, and C.A. Czeisler. 2007. Entrainment of the human circadian pacemaker to longer-than-24-h days. Proceedings of the National Academy of Sciences of the United States of America 104(21): 9081-9086.

Hancock, P.A. 1997. Hours of boredom, moments of terror—or months of monotony, milliseconds of mayhem. Paper presented at the Ninth International Symposium on Aviation Psychology, Columbus, Ohio. April 27-May 1.

Hancock, P.A. 2003. The ergonomics of torture: The moral dimension of evolving human-machine technology. Pp. 1009-1011 in Proceedings of the Human Factors and Ergonomics Society 47. Santa Monica, Calif.: Human Factors and Ergonomics Society.

Hancock, P.A. 2007a. Procedure and dynamic display relocation on performance in a multitask environment. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans 37(1): 47-57.

Hancock, P.A. 2007b. On the process of automation transition in multitask human–machine systems. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans 37(4): 586-598.

Hancock, P.A. In press. The battle for time in the brain. In Time, Limits and Constraints: The Study of Time XIII. J.A. Parker, P.A. Harris, and C. Steineck, eds. Leiden, The Netherlands: Brill.

Hancock, P.A., and M.H. Chignell. 1987. Adaptive control in human–machine systems. Pp. 305-345 in Human Factors Psychology. P.A. Hancock, ed. Amsterdam, The Netherlands: North Holland.

Hancock, P.A., and J.L. Szalma. 2003a. The future of neuroergonomics. Theoretical Issues in Ergonomic Science 4(1-2): 238-249.

Hancock, P.A., and J.L. Szalma. 2003b. Operator stress and display design. Ergonomics in Design 11(2): 13-18.

Harada, H., H. Nashihara, K. Morozumi, H. Ota, and E.A. Hatakeyama. 2007. A comparison of cerebral activity in the prefrontal region between young adults and the elderly while driving. Journal of Physiological Anthropology 26(3): 409-414.

Hill, S.G., and A.M. Rothblum. 1994. Human factors issues in the maritime industry. Proceedings of the Human Factors and Ergonomics Society 38: 862.

Hillburn, B., P.G. Jorna, E.A. Byrne, and R. Parasuraman. 1997. The effect of adaptive air traffic control (ATC) decision aiding on controller mental workload. Pp. 84-91 in Human–Automation Interaction: Research and Practice. M. Mouloua and J. Koonce, eds. Mahwah, N.J.: Lawrence Erlbaum Associates.

Keysers, C., D.K. Xiao, P. Földiák, and D.I. Perrett. 2001. The speed of sight. Journal of Cognitive Neuroscience 13(1): 90-101.

Krupa, A., M. de Mathelin, C. Doignon, J. Gangloff, G. Morel, L. Soler, J. Leroy, and J. Marescaux. 2002. Automatic 3-D positioning of surgical instruments during robotized laparoscopic surgery using automatic visual feedback. Pp. 9-16 in Lecture Notes in Computer Science, Volume 2488: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2002. Berlin: Springer.

Lebedev, M.A., and M.A.L. Nicolelis. 2006. Brain–machine interfaces: Past, present and future. Trends in Neuroscience 29(9): 536-546.

Levine, S.P., J.E. Huggins, S.L. BeMent, R.K. Kushwaha, L.A. Schuh, M.M. Rohde, E.A. Passaro, D.A. Ross, K.V. Elisevich, and B.J. Smith. 2000. A direct brain interface based on event-related potentials.
IEEE Transactions on Rehabilitation Engineering 8(2): 180-185. Lockley, S.W., E.E. Evans, F.A.J.L. Scheer, G.C. Brainard, C.A. Czeisler, and D. Aeschbach. 2006. Short-wavelength sensitivity for the direct effects of light on alertness, vigilance, and the waking electroencephalogram in humans. Sleep 29(2): 161-168. McBride, D.K., and D.D. Schmorrow, eds. 2005. Quantifying Human Information Processing. Lanham, Md.: Lexington Books. Miller, N.L., and L.G. Shattuck. 2004. A process model of situated cognition in military command and control. Paper presented at the 2004 Command and Control Research and Technology Symposium, San Diego, Calif. Minsky, M. 1986. The Society of Mind. New York, N.Y.: Simon and Schuster. Mouloua, M., R. Gilson, and P.A. Hancock. 2003. Human-centered design of unmanned aerial vehicles. Ergonomics in Design 11(1): 6-11. Narkar, V.A., M. Downes, R.T. Yu, E. Embler, Y.-X. Wang, E. Banayo, M.M. Mihaylova, M.C. Nelson, Y. Zou, H. Juguilon, H. Kang, R.J. Shaw, and R.M. Evans. 2008. AMPK and PPAR agonists are exercise mimetics. Cell 134(3): 405-415. NSBRI (National Space and Biomedical Research Institute). 2008a. About NSBRI. Fact sheet. Available at http://www.nsbri.org/About/FactSheet.html. Last accessed September 8, 2008. NSBRI. 2008b. Neurobehavioral and psychosocial factors: Current team projects. Available at http://www.nsbri.org/Research/Projects/listprojects.epl?team=psycho. Last accessed September 8, 2008.
OCR for page 73
Opportunities in Neuroscience for Future Army Applications Parasuraman, R., and M. Rizzo, eds. 2007. Neuroergonomics: The Brain at Work. New York, N.Y.: Oxford University Press. Parasuraman, R., and G.F. Wilson. 2008. Putting the brain to work: Neuroergonomics past, present and future. Human Factors: Journal of the Human Factors and Ergonomics Society 50(3): 468-474. Rochlin, G. 1991. Iran Air flight 655: Complex, large-scale military systems and the failure of control. Pp. 95-121 in Social Responses to Large Technical Systems: Control or Anticipation. T.R. La Porte, ed. Boston, Mass.: Kluwer Academic Publishers. Rouse, W.B. 1975. Human interaction with an intelligent computer in multi-task situations. Pp. 130-143 in Proceedings of the 11th Annual Conference on Manual Control. Seattle, Wash.: Boeing Commercial Airplane Company. Schmorrow, D.D., and L.M. Reeves, eds. 2007. Foundations of Augmented Cognition. New York, N.Y.: Springer. Shephard, J.M., and S.M. Kosslyn. 2005. The Minicog Rapid Assessment Battery: Developing a “blood pressure cuff for the mind.” Aviation, Space, and Environmental Medicine 76(Supplement 1): B192-B197. Snook, S.A. 2000. Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq. Princeton, N.J.: Princeton University Press. Spiers, H.J., and E.A. Maguire. 2007. Neural substrates of driving behaviour. NeuroImage 36(1): 245-255. Taylor, R.M. 2006. Human automation integration for supervisory control of UAVs. Pp. 12:1-12:10 in Virtual Media for Military Applications, Meeting Proceedings RTO-MP-HFM-136. Neuilly-sur-Seine, France: RTO. Tripp, L.D., and J.S. Warm. 2007. Transcranial Doppler sonography. Pp. 82-94 in Neuroergonomics: The Brain at Work. R. Parasuraman and M. Rizzo, eds. New York, N.Y.: Oxford University Press. Uchiyama, Y., K. Ebe, A. Kozato, T. Okada, and N. Sadato. 2003. The neural substrates of driving at a safe distance: A functional MRI study. Neuroscience Letters 352(3): 199-202. 
UMTRI (University of Michigan Transportation Research Institute). 2008. Integrated Vehicle-Based Safety Systems (IVBSS): Phase I Interim Report. Report No. DOT HS 810 952. Available at http://www.nhtsa.dot.gov/staticfiles/DOT/NHTSA/NRD/Multimedia/PDFs/Crash%20Avoidance/2008/810952Lo.pdf. Last accessed September 8, 2008. Warm, J.S., ed. 1984. Sustained Attention in Human Performance. New York, N.Y.: Wiley. Warm, J.S., W.N. Dember, and P.A. Hancock. 1996. Vigilance and workload in automated systems. Pp.183-200 in Automation and Human Performance: Theory and Applications. R. Parasuraman and M. Mouloua, eds. Mahwah, N.J.: Lawrence Erlbaum Associates. Young, R.A., L. Hsieh, F.X. Graydon, R. Genik, M.D. Benton, C.C. Green, S.M. Bowyer, J.E. Moran, and N. Tepley. 2006. Mind-on-the-drive: Real-time functional neuroimaging of cognitive brain mechanisms underlying driver performance and distraction. SAE Transactions 114(6): 454-472.