7 Neuroscience Technology Opportunities

Chapters 3 through 6 discussed neuroscience research leading to developments in key application areas of training, decision making, and performance, including recommendations on predominantly research-based opportunities. This chapter discusses high-risk, high-payoff opportunities for using neuroscience technologies in Army applications. Where the committee identified significant investments in a technology opportunity from nonmilitary federal, commercial, or foreign sectors, the leveraging opportunities for the Army are noted. A section on barriers to Army use of these technology opportunities describes important scientific and technical barriers. A section on technology trends discusses several trends that the committee believes will endure and even grow in significance for Army applications; the Army should establish a mechanism to monitor these trends effectively and have a capability in place to evaluate the applicability of any resulting technology to Army needs and opportunities. The concluding section presents the committee's priorities ("high priority," "priority," and "possible future opportunities") for Army investment.

This chapter discusses many technologies and plausible areas of neuroscience research, including experiments that may be facilitated through the use of human subjects. The committee assumes that all such research will be conducted in accordance with the guidelines established in the Belmont Report and subsequent regulations issued by the Office of Human Research Protections of the U.S. Department of Health and Human Services.

The report places technologies in two categories: those that result in "mission-enabling" instruments and those that result in "research-enabling" instruments. In some instances, a technology has applications in both categories. The word "instrument" is used in the most general sense: it could be a pen-and-paper personality inventory, a software-controlled skills survey, a reaction-time analysis method for training assessment, a control interlock system to distribute information among vehicle crew members based on their current workload and baseline cognitive capability, an in-helmet device designed to monitor neural activity or cerebral blood flow, or an advance in imaging technology. Both categories of technology share the common characteristic that neuroscience research, as defined in Chapter 2, plays a key role in their development.

Mission-enabling (deployable) instruments are technologies directly affecting performance, training, or military decision making. Research-enabling instruments fill gaps in current technology and allow neuroscientific examination (laboratory) or evaluation (training or battlefield) of soldier performance, training, or military decision making. The committee considers this distinction vital because it is not immediately clear, for example, whether miniaturized signal processing technology will open additional opportunities to use laboratory devices currently considered impractical. If that happened, the miniaturization of signal processing would be an enabling technology. All neuroscience technologies have spatial and temporal resolutions that define the neurophysiological building blocks they can study.
Twenty-five years ago, the vast majority of detailed knowledge of in situ function of localized structures in the human brain was extrapolated from work on animals, measured with electronic stopwatches, clipboards, and scalp surface electrodes, or was inferred from correlation studies of injury and pathology using psychiatric examinations. The introduction of noninvasive technologies has expanded the breadth and depth of studies of normal human brain function and allowed the development of noninvasive neural measurement techniques to study the functioning human brain. Figure 7-1 shows how these new technologies can monitor or even predict performance anywhere in the spatiotemporal plane. (For discussion of the history and advancement of neuroscience research related to Army applications, see Chapter 2.)

FIGURE 7-1 Various noninvasive imaging technologies provide insight into the brain (anatomy) and mind (function). The spatial resolution of a given technology defines the largest and smallest brain structures that can be observed, while the temporal resolution defines the elements of mind function to be measured. Academic and commercial research is primarily geared to improving resolution, although important measurements for the prediction of behavior can be made at any point in the brain-mind plane. Shown are several of the technologies discussed in Chapter 7. SOURCE: Adapted from Genik et al., 2005.

MANAGING THE SOLDIER'S PHYSICAL LOAD

There are multiple research and development opportunities involving soldiers, such as extracting information from the brain and nervous system, inferring neural states from physiological information, or designing control strategies to alter or enhance neural states. Nevertheless, the committee recognizes that critical ergonomic considerations limit the added burden, particularly added weight, that neuroscience technologies can place on an already overloaded soldier. Mission-enabling technologies (including devices for sensing, power, and onboard computing) must be considered as part of the larger system of a dismounted soldier's equipment load, and they should not add appreciable weight or volume to the helmet or backpack. A National Research Council study determined that any new device(s) should not add more than 1 kg to the helmet or 2 kg to the pack. More important, any helmet-mounted neuroscience technology should not interfere with ballistic protection, helmet stability, or freedom of head movement (NRC, 1997). The committee believes that these design and engineering constraints must be considered from the outset to ensure successful integration of a neuroscience technology with the soldier's existing equipment load.

MISSION-ENABLING TECHNOLOGIES

The Army has a basic requirement to process, distribute, and apply information efficiently. These requirements will only increase with the demands of a network-centric environment. Better cognitive performance must be achieved if soldiers are to contend with an ever-increasing river of information. Solutions are needed to address demonstrated operational requirements, such as avoidance of information overload and successful synthesis of information that selectively highlights the mission-critical features from multiple sources. The technologies described in this section apply knowledge and techniques from neuroscience to help solve these and related challenges in sustaining and improving soldier performance.

Mission-enabling (deployable) instruments or technologies of interest to the Army must be capable of being scientifically validated. They include brain-machine interface (BMI) technologies, remote physiological monitoring to extend performance in combat, and optimization of sensor-shooter responses under cognitive stress.
BMI technology examples include near-term extensions of current training applications of virtual reality (VR) systems, iconic or graphical information overlays on real-world visual displays, various approaches to monitoring neurophysiological stress signals to manage information overload, and the use of neural signals to control external systems. Several of the technologies discussed may not appear to the casual observer to be rooted in neuroscience research. Where a connection is not obvious, it will be explicitly stated or the neuroscience aspect outlined.

Field-Deployable Biomarkers of Neural State

The first issue in applying laboratory neuroscience results to field operations is to find reliable indicators of neural state that can be used in the field (field-deployable biomarkers). The equipment used in functional neuroimaging laboratories is sensitive to movement of both the subject and metal objects near the subject and is susceptible to interference from nearby electronic devices; such constraints are antithetical to mission environments. One way to avoid this difficulty is to identify reliable physiological surrogates for the neural states of interest, surrogates that are easier to monitor in an operational environment. For example, alertness in a driving environment can be reliably determined by monitoring eyelid movement (Dinges et al., 1998). Neuronal state measurement is a primary topic in this chapter, but there are many other physiological indicators that can be evaluated for their reliability as biomarkers of functional brain states and behaviors. These include:

- Galvanic skin response (GSR);
- Heartbeat, including rate and interbeat interval;
- Eye movements, including response times, gaze latency, and stability;
- Pupillometry;
- Low-channel-count electroencephalography (EEG);
- Cortical changes in blood flow, measured using near-infrared spectroscopy (NIRS);
- Blood oxygen saturation (also using NIRS); and
- Facial expression as monitored by optical computer recognition (OCR) techniques.

Combinations of these and future physiological measures are also possible. The committee believes that physiological indicators can serve as useful surrogates for neurological states or conditions even before our understanding of the neurophysiological basis for the established correlation is complete. Therefore, the technology opportunities discussed here include the development and scientific validation of surrogate markers; research to understand how they work is of secondary importance.

One futuristic technology that should be developed in parallel with work on individual physiological indicators and surrogate markers is a health and status monitoring tool for operational commanders that combines relevant neural measures to provide near-real-time feedback about operator neural readiness. To develop such a tool, a top-down functional analysis should be conducted to determine which of the available neural indicators are meaningful for different kinds of Army decision makers. For example, commanders on the battlefield could benefit from decision support that alerts them in near real time to issues with personnel neural readiness, such as unexpectedly high levels of fatigue or sleep-deprivation deficits in individuals or across units. Another class of decision makers that could benefit from readily accessible neural indexes would be medical commanders, who could decide how to allocate medical resources in response to rapidly changing events.
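To make the idea of combining indicators concrete, the sketch below shows one minimal way such a monitoring tool might fuse baseline-normalized physiological indicators into a single readiness score. It is purely illustrative: the indicator names, weights, and alert threshold are hypothetical placeholders, not validated Army metrics.

    # Illustrative only: combine normalized physiological indicators into a
    # single readiness score. Indicator names, weights, and the alert
    # threshold are hypothetical placeholders, not validated values.

    INDICATOR_WEIGHTS = {            # hypothetical relative weights
        "blink_duration": 0.3,       # eyelid closure (cf. Dinges et al., 1998)
        "heart_rate_variability": 0.2,
        "gsr_level": 0.2,
        "eeg_alpha_power": 0.3,
    }

    def normalize(value, baseline, spread):
        """Express a raw measurement as deviation from the soldier's own
        baseline, in units of that soldier's normal variability. Assumes
        each indicator is oriented so a drop means reduced readiness."""
        return (value - baseline) / spread

    def readiness_score(measurements, baselines):
        """Weighted sum of baseline-normalized indicators; lower = less ready."""
        score = 0.0
        for name, weight in INDICATOR_WEIGHTS.items():
            base, spread = baselines[name]
            score += weight * normalize(measurements[name], base, spread)
        return score

    def alert_commander(score, threshold=-1.5):
        """Flag an individual whose composite score falls well below baseline."""
        return score < threshold

A fielded tool would, of course, rest on validated indicators and individually calibrated baselines rather than fixed weights; the point of the sketch is only that the fusion step itself can be computationally simple once the indicators are trustworthy.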
Early versions of the monitoring system might find their initial application in training, including advanced training for specialized tasks as well as basic training of recruits. The development of an operational neural health and status monitoring system represents an important intersection of neuroscience and military operational applications, since such a system could inform critical, time-pressured decisions about near-real-time soldier status. As the reliability and range of neural state indicators grow, such a system could also incorporate predictive biomarkers to aid commanders in comparing alternative scenarios. For example, a decision support tool that could indicate the neural impact of extending troop deployments, in both the near term and the far term, could help to determine troop rotations and to select individuals or units for various activities.

EEG-Based Brain-Computer Interfaces

One area of neuroscience technology that has received much attention in the mainstream media is the development and use of EEG-based brain-computer interfaces. These interface systems have potential operational use in areas such as remote, real-time physiological monitoring; for example, a battlefield commander could receive some indication that a soldier is approaching maximum mental workload or stress. While such operational uses are possible, battlefield applications of these sensors as a neurotechnology are not likely to be realized in the next 10 years. Commercial developers of EEG-based interfaces, which target primarily applications in video gaming and in marketing, generally claim that they can detect facial expressions, emotional states, and conscious intentions (Greene, 2007a). Their devices usually contain both the EEG sensors and machine-learning algorithms that require training for individual users. A handful of companies claim they will have brain-computer interface headsets commercially available in the very near future for gaming applications (Emotiv and NeuroSky are two such), and a similar number already claim success with methodologies for neuromarketing consumer research (EmSense, Lucid Systems, and NeuroFocus are three such).
One technological advance that commercial EEG-based brain-computer interfaces have incorporated is the dry electroencephalograph, an EEG device that does not require the use of electrically conducting gel on the scalp. However, the capability of this technology has been questioned, since a dry EEG device cannot produce as strong a signal as a traditional gel-based electroencephalograph (Greene, 2007b). Scientific proof of the claims for these brain-computer interfaces is virtually nonexistent, and they have been heavily criticized by academics (Nature, 2007; Greene, 2007b). The aforementioned companies have not published any scientific papers on their devices or methodologies, so the industry remains extremely opaque. While the possible outcomes could be relevant for the operational Army, the use of EEG-based brain-computer interfaces to support real-time decision making should be considered basic research and funded accordingly. Furthermore, the headset technology demonstrated by companies such as Emotiv raises the question of whether the primary signal that trains the interface is of cortical origin or arises from cranial muscle polarization. These devices are therefore interesting as control interfaces to augment the number of devices a single soldier can control, but they probably do not qualify as a neuroscience technology opportunity.

One commercial application that could be more useful to the Army in the near term is a neurofeedback system for self-regulation. Neurofeedback, akin to biofeedback, allows individuals to receive visual and aural feedback about their brainwave activity to promote self-regulation. This technique, which was developed in part through research sponsored by the National Aeronautics and Space Administration (NASA), has been recommended as a therapeutic intervention for the treatment of attention-deficit hyperactivity disorder, traumatic brain injury, post-traumatic stress disorder, and depression in children and adolescents (Hirshberg et al., 2005). While neurofeedback systems are not going to be used for operational real-time decision making anytime soon, their application in field settings could one day be of interest to the Army in ways that go beyond their obvious medical therapeutic benefits. CyberLearning Technology, LLC, which has an exclusive license with NASA for its neurofeedback methods, connects its system directly with off-the-shelf video game consoles such as the Sony PlayStation and the Microsoft Xbox. Given the ubiquity of these game consoles and personal computers in field settings, it may be possible to leverage this technology in the near term for use by soldiers in the field.
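To illustrate the kind of signal such neurofeedback and brain-computer interface products work from, the following minimal sketch computes band-limited EEG power in the conventional alpha and beta bands from a single channel. It assumes a NumPy environment; the sampling rate is merely typical of consumer headsets, and the trace is synthetic.

    import numpy as np

    FS = 256  # sampling rate in Hz (typical for consumer EEG headsets)

    def band_power(signal, fs, low, high):
        """Average spectral power of `signal` between `low` and `high` Hz,
        computed from a simple periodogram."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        mask = (freqs >= low) & (freqs < high)
        return psd[mask].mean()

    # Synthetic one-second, single-channel trace standing in for real EEG.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(FS)

    # Conventional band definitions; a neurofeedback display might, for
    # example, reward the user for raising the alpha-to-beta power ratio.
    alpha = band_power(eeg, FS, 8.0, 13.0)
    beta = band_power(eeg, FS, 13.0, 30.0)
    feedback_value = alpha / beta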
Haptic Feedback Technology for Virtual Reality

VR, a technology whose graphical base is driven by advances in the gaming industry, is now a common tool in behavioral neuroscience research and applications. Of particular importance for the Army is the use of VR for the study and modification of human behavior and for the enhancement of human abilities (Tarr and Warren, 2002). Indeed, VR is becoming a familiar technique for simulating flight, shipboard seamanship and navigation, and tank and armored vehicle operations. VR implementations are central to training for the Future Combat Systems program. The nascent Virtual Squad Training System is a wireless, wearable system that trains Army warfighters using simulated weapons like the Army Tactical Missile System in a virtual combat environment.

One area of VR that the Army has not yet exploited is the use of three-dimensional (3D) haptic interfaces. In general, a haptic interface provides cutaneous sensing and kinesthetic feedback to simulate the feel of physically manipulating what are in fact virtual devices. Haptic interfaces not only have training applications but also could be used for systems such as those that teleoperate robots. The Army can leverage commercial-sector investments in haptic interfaces. Current commercially available haptic-sensing devices range from extensive exoskeleton-based force-reflecting systems (e.g., Immersion's CyberForce), costing tens of thousands of dollars, to smaller, personal-computer-based electromechanical force-feedback systems (e.g., Novint's Falcon) that retail for a few hundred dollars. The larger force-reflecting systems could be useful in large simulation-training environments such as the Virtual Squad Training System. The PC-based systems, which are much smaller and less expensive, could be used for training during deployments.

Augmented Reality Technologies

Unlike VR, which seeks to replace the whole spectrum of sensory and thus perceptual experience with a simulated experience, augmented reality (AR) is a hybrid technology in which a display of the natural world is intermixed with information-rich virtual elements. A simple illustration of the basic approach is the display of information on, say, precipitation levels and wind intensity, derived from weather radar data, overlaid on a photograph of the terrain. The principle of linkage is illustrated in the weather-map example by physical observation: Does the online image match ground conditions? The principle of scaling is illustrated by zooming in and out on the map: Time-locked weather patterns should stay constant over the same locations at all resolutions of an image. These weather depictions are fairly accurate, but it is still not unheard of for an online map to contradict what a quick trip to the window reveals.

The first practical adaptations of AR for military use were "heads-up" displays in advanced fighter jets. These systems typically display on demand the status of the dynamics of the aircraft, including heading, altitude, and velocity, surrounding a central "pitch ladder" that illustrates the attitude of the aircraft itself. In most military aircraft displays, the augmentation includes assisted targeting capabilities. Also, the display references the aircraft against a real-world background.

For dismounted soldier applications, an AR display might use a head-mounted "monocle" device such as was deployed in a 120-person study of a simulated search-and-rescue operation (Goldiez et al., 2007). To conduct search-and-rescue operations in a building, team members must systematically clear an entire structure of possibly wounded compatriots and supply treatment if needed, while defending against or removing possible threats. Such operations tax working memory, and substantial improvement can be realized if mission elements requiring short-term memory encoding can be off-loaded to mobile technology. In the study cited, AR was employed to map a simulated building, which allowed participants to concentrate instead on tests of working memory such as locating mission-objective targets and planning for speedy exits. Similar applications subjected to more complex field testing will allow (1) smaller teams to complete the same missions, (2) teams to operate longer by shortening the time during which vigilance must be sustained, (3) team members to share graphic information in real time, and (4) teams to succeed in their mission even if they are operating at less than optimal cognitive capacity, perhaps as a result of fatigue.

Recent terrestrial applications of AR have focus-linked displays in which the augmentation (the information overlay) is spatially and temporally locked to some (usually geographical) aspect on the focal plane of the ambient display. These technologies have typically been used for way-finding and similar forms of orientation assistance. Such applications must be able to adjust scale as the real-world display zooms in and out and to lock on a real-world feature so that the display remains correctly oriented as the field of view changes.

AR poses some interesting neuroscience questions. An important concern is the correspondence issue: How is the electronically generated information to be permanently locked and scaled to the changing environment in which the AR user finds himself or herself? What happens when a failure of linkage produces a mismatch of either scale or orientation? Spatial disorientation is a problem that has been explored extensively in aerospace human factors studies, and the latencies involved in spatial transformations and rotations (Shepard and Metzler, 1971; Shepard and Cooper, 1982) can lead to a condition often called "simulator sickness." Whereas the barrier to further advances in VR is largely insufficient computational capacity to generate the virtual world for the user, the comparable barrier to advances in AR is the difficulty of integrating information and of achieving perceptual realism in the display. Large investments will be required to overcome these more complex issues, but AR will eventually prove to be a more powerful technology than VR for incorporating neuroscience advances.

Additional sensory inputs will certainly be developed one day. Present technology for the senses other than vision is relatively rudimentary. A haptic simulation could include an information overlay on a trigger that provides cutaneous feedback when a weapon locks on a potential target and has identified it as friend or foe. A soldier can be trained to set reflexive neurons in a state to pull the trigger without higher neural involvement, decreasing the time between acquiring and engaging the target.
Such reflexive neuron training is in use today for complex, force-feedback motions like the double-tap.1

1 The double-tap is a combat pistol technique whereby the shooter sends a signal to the peripheral nervous system to pull the trigger on a semiautomatic pistol a second time once the trigger returns to a firing position. The second trigger pull actually occurs while the shooter is locating the next target and assessing the outcome using peripheral vision.

Commercial AR applications include displays with visual, aural, or haptic channels that can be used individually or integrated. The type of heads-up visual display used in military aviation is being adopted in commercial industries such as trucking. The Army currently has a helmet-mounted monocular display as part of its Land Warrior program; however, the program has received mixed reviews (Lowe, 2008). It is possible that a variation on commercial heads-up display technology might improve the quality and performance of current implementations. Commercial versions of visual AR include the Apple iPhone/iTouch system, which offers more intuitive and fluid gesture-based interactions. The same gesture-based interactions can be seen in Microsoft Surface, a tabletop interactive display that has recently become commercially available and was notably employed in television coverage of the 2008 presidential election. As an Army application, tabletop displays are primarily suitable for high-level, stationary command posts owing to their relatively high cost and fragility. Combined with appropriate software such as map search, visual AR devices in both small field-deployable versions and larger stationary versions enhance situational awareness very effectively.

Significant research has been done on enhancing decision making through use of an aural AR channel. Many human factors studies have stressed the importance of properly applying and integrating these aural systems. Innovative signal-presentation approaches for aural AR include spatial audio (reproducing spatial depth relative to the listener); ambient audio (either suppressing ambient noise or presenting local, external audio signals for operators in collaborative environments who are required to wear headsets); and continuous audio (mapping an alert condition to synthetically produced continual signals such as virtual rumble strips). Potentially useful Army applications include adaptation of commercially available spatial audio headsets and speaker systems that broadcast spatial audio in a group setting. The latter could be useful in enclosed environments such as a command and control vehicle (C2V). However, significant development work on the software would be required to adapt the hardware for Army use.
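As a concrete illustration of the continuous- and spatial-audio concepts just described, the sketch below maps an alert level to a continual tone and crudely spatializes it by interaural level difference. It is illustrative only: production spatial audio also exploits interaural timing and head-related transfer functions, and every parameter here is an arbitrary assumption.

    import numpy as np

    FS = 44100  # audio sample rate in Hz

    def continuous_alert(alert_level, seconds=1.0, base_freq=220.0):
        """Map a 0..1 alert level to a continual tone: louder and slightly
        higher-pitched as the alert condition worsens (a 'virtual rumble')."""
        t = np.arange(int(FS * seconds)) / FS
        freq = base_freq * (1.0 + 0.5 * alert_level)
        return alert_level * np.sin(2 * np.pi * freq * t)

    def pan_by_azimuth(mono, azimuth_deg):
        """Crude spatialization by interaural level difference: attenuate
        the ear opposite the source direction."""
        pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)  # -1 hard left, +1 hard right
        left = mono * np.sqrt((1.0 - pan) / 2.0)
        right = mono * np.sqrt((1.0 + pan) / 2.0)
        return np.stack([left, right], axis=1)

    # A moderate alert presented 45 degrees to the listener's right.
    stereo = pan_by_azimuth(continuous_alert(0.7), azimuth_deg=45.0)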
Visteon has produced the only commercial proximity- and touch-sensitive haptic controls. In these displays, which have been used in automobiles, as the operator's hand approaches the display it lights up, and a software-activated button provides haptic feedback to the user's touch, mimicking the feel of a mechanical button being pushed, even though the display is a flat screen. Potential uses for this technology include the C2V and control stations for unmanned aerial vehicles and unmanned ground vehicles.

Information Workload Management via Physiological and Neural Feedback (Including Augmented Cognition)

As noted earlier, neuroscience can help the soldier avoid information overload while helping with the cognitive tasks of synthesizing information and picking out mission-critical features. Examples of the latter include intelligence fusion and other forms of data interpretation to heighten situational awareness. The use of such information processing with presentation technology can enhance warfighter performance. Previous work by the military on this subject has dealt narrowly with filter methods and technologies to rapidly present the results to the soldier. The Defense Advanced Research Projects Agency (DARPA) Augmented Cognition program (the AugCog program), which formally ended in FY 2006, sought to augment human information-processing capabilities through the design of interfaces incorporating neuroscience technologies that enable the interface to adapt in real time to the stress state of the user. Similar DARPA research continues under the rubric of Improving Warfighter Information Intake Under Stress. Army research along the AugCog path is continuing at the Natick Soldier Research, Development and Engineering Center. Appendix D reviews the phases of development work and testing under the AugCog program and the direction taken by Army follow-on activities. Highlights of the AugCog effort are presented here to illustrate the approach taken and the implementation achieved to date.

The term "information workload management" refers to managing the presentation of information to sustain and enhance cognitive processing capability when the emotional-cognitive evidence indicates an individual may be reaching an overload condition. This information monitoring may feed back into an adaptive interface, as in the AugCog concepts, or, less ambitiously, it may trigger some type of warning signal to the user, for example, as part of an AR display. The original goal of the AugCog program, to enhance information workload management through technologies that monitor indicators of cognitive state, is even more relevant now as the Department of Defense (DOD) moves toward network-centric warfare. However, while the researchers involved with the AugCog milestones made progress in terms of hardware and software advances, their results were preliminary, as one would expect. More important, perhaps, is the lesson that the original objectives of that program are not achievable in the near term because of barriers that became evident during this ambitious but early technology development effort. Despite the stated goal of those closely associated with the AugCog program, that its technologies would be operational within 10 years, the likely horizon for an initial operating capability is much farther away.
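The feedback loop at the heart of information workload management can be stated compactly. The sketch below is a minimal illustration, not an AugCog implementation: it smooths a fused cognitive-load indicator and uses a hysteresis band, with both thresholds hypothetical, to decide when the interface should defer noncritical information.

    # Minimal sketch of the feedback idea behind information workload
    # management: a smoothed workload estimate gates how much information
    # the interface presents. The indicator, thresholds, and smoothing
    # constant are hypothetical.

    class WorkloadManager:
        def __init__(self, high=0.8, low=0.6, alpha=0.1):
            self.high, self.low = high, low   # hysteresis band avoids flapping
            self.alpha = alpha                # exponential smoothing constant
            self.estimate = 0.0
            self.reduced_mode = False

        def update(self, indicator):
            """`indicator` is a 0..1 cognitive-load estimate from fused
            physiological signals (EEG, GSR, heart rate, ...)."""
            self.estimate += self.alpha * (indicator - self.estimate)
            if self.reduced_mode and self.estimate < self.low:
                self.reduced_mode = False     # restore full information flow
            elif not self.reduced_mode and self.estimate > self.high:
                self.reduced_mode = True      # defer noncritical messages
            return self.reduced_mode

The hysteresis band reflects a design concern the AugCog experience makes plain: with noisy sensors, a single threshold would toggle the interface on and off rapidly, which is itself a distraction.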
One major hurdle is development of a wireless EEG device that is unobtrusive, does not require the use of conducting gel (known as "dry EEG"), and is able to process signals onboard, all while the user is in motion and often under difficult environmental conditions, including electromagnetic interference. While some advances have been made in wireless EEG and dry EEG (see the earlier subsection on EEG-based brain-computer interfaces), the signals from these devices are substantially weaker than signals from more traditional electroencephalographs. Moreover, their ability to detect cognitive states for use in predictive algorithms in dynamic, uncertain environments has yet to be demonstrated and validated to the level required of an operational system.

The committee believes the Army should continue funding research in information workload management with a focus on hardware developments, including development of surrogate indicators for laboratory-based indicators of neural state, ruggedization of instruments for use in field environments, and advancement of associated signal-processing efforts. Without advances in these areas, the laudable information workload management techniques of AugCog cannot be operationalized. Substantial research and development will also be needed on predictive algorithms in dynamic, highly uncertain domains for open-loop systems with noisy sensor data. This is an instance where the higher-level technology used to monitor for and ameliorate cognitive overload will depend on the successful understanding of field-deployable indicators of neural state, as discussed above.

Technologies to Optimize Sensor-Shooter Latency and Target Discrimination

Another important Army concept applicable to a range of tactical combat situations is known as the sensor-shooter paradigm. The latency in sensor-to-shooter responsiveness is measured by the time needed to recognize a specific threat from its first appearance, to select the appropriate course of action to neutralize that threat, and to respond with the correct action. All other factors being equal, lower latency from sensor to shooter increases the efficiency of the sensor-shooter response. An equally important (or in some circumstances, more important) measure of response efficiency is target discrimination, which requires correct recognition of the threat with the fewest possible false negatives or false positives. The sensor-shooter should neither fail to recognize a threat (the foe) nor mistake a nonthreat (a friend or neutral actor) for a threat.
Thus, improving sensor-shooter response efficiency requires optimizing the combination of a short latency period with a very high degree of target discrimination. Complicating sensor-shooter efficiency is that one should not aim to minimize latency in a tactical vacuum devoid of appropriate strategic considerations. Strategy often involves longer-term goals for which a faster response (shorter latency) is not necessarily better (Scales, 2006). The threat analysis technology that interfaces with the soldier operationally requires strategic as well as tactical input and the ability to communicate both sets of information to the soldier so that he can make a decision. Furthermore, the real-world scenarios to which the sensor-shooter paradigm applies often subject the soldier to stresses such as fatigue, sleep deprivation, information overload, and cognitive overload (Harris et al., 2005). Merely adding a simple secondary cognitive task to be performed during a complex primary task will degrade an already overloaded individual's performance of the primary task below his or her baseline (Hancock and Warm, 1989).

Technologies informed by neuroscience can boost the individual soldier's performance in sensor-shooter activities. The committee focused on two ways to support the sensor-shooter in difficult circumstances: devices to augment threat assessment and virtual simulation technologies to enhance intuitive decision making. The third aspect of the sensor-shooter paradigm, motor execution of the action decided upon, is an important research opportunity for the Army; it is discussed in Chapter 8.

Threat Assessment Augmentation Aids

It is clear that technology can be brought to bear on threat recognition. A number of devices are already used to support this function. Mainly they confer visual enhancement expressed on distal screens, on head-mounted displays that present a totally synthetic vision of the kind seen in VR, or on hybrid displays of the AR kind, including real-scale or telescopic capability, and overlaying relevant information from sources spanning the electromagnetic spectrum. These integrated displays attempt to replicate the perceived environment in some fashion. More sophisticated augmentation techniques can begin to replicate the capacity of the visual system of the eye and brain to focus on specific characteristics such as novelty, intensity, and context-driven importance. Such smart augmentation aids can help the observer focus on critical information.

As an example, consider the problem of detecting a recently emplaced improvised explosive device (IED) along a transportation corridor. One of the primary signals of threat is a change in the morphology or presence of roadside objects. Modern technology is very efficient at detecting a change in the scene if the only object in a field that changes shape or appearance is an IED. If the roadside litter in a field of view has also changed during the same time interval, however, detection may be markedly degraded. In the best case, a smart display could alert a patrolling soldier to a change in the coming roadway scene, which could signal the presence of an IED. Such an augmented display could be programmed to scale these threats (e.g., pedestrian versus large-scale objects) to allow for a degree of preprocessed threat assessment.
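The change-detection idea in the preceding paragraph can be illustrated with simple frame differencing between a reference pass and the current pass over the same route. The sketch below is deliberately naive: the threshold and minimum region size are arbitrary assumptions, and the hard problems the text identifies, registering the two viewpoints and rejecting innocuous roadside change, are glossed over entirely.

    import numpy as np

    def change_mask(reference, current, threshold=25, min_region=50):
        """Flag pixels that changed substantially between a reference pass
        and the current pass along the same route. Assumes the frames are
        already registered to the same viewpoint."""
        diff = np.abs(current.astype(int) - reference.astype(int))
        mask = diff > threshold
        # Ignore scattered single-pixel noise; demand a sizable region.
        return mask if mask.sum() >= min_region else np.zeros_like(mask)

    # Two synthetic 8-bit grayscale frames standing in for registered imagery.
    rng = np.random.default_rng(1)
    before = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    after = before.copy()
    after[200:240, 300:360] = 255   # a new bright object appears roadside

    suspicious = change_mask(before, after)

As the text notes, the discriminating step, deciding whether a detected change is litter or a threat, is precisely where current technology degrades and where neuroscience-informed augmentation of the human observer is expected to help.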
Biometric technologies capable of identifying specific individuals from a distance while supporting persistent visual monitoring of the environment will extend the amount of time available for a soldier to integrate fused data rather than collect and sort information, thereby increasing processing efficiency and reducing the likelihood of error.

Simulation Technologies to Enhance Intuitive Decision-Making Skill

Chapter 4 discussed decision making as it applies (primarily) to command-level decisions. Decision-making theory also provides useful insights into how simulation technologies can be used to help the sensor-shooter through the concept of intuitive decision making. The concept assumes that the decision maker has a high level of situational awareness. The simulations described in Chapter 5 that enable military leaders to accumulate life experiences that improve their intuitive decision-making skills can also be used to develop sensor-shooter training. Such simulations could be designed to adapt and respond to soldiers in an intelligent manner and to portray cognitively, culturally, and intellectually accurate and challenging scenarios that identify, develop, improve, and assess these skills. Increasingly, human factors, that is, the cognitive, cultural, and intellectual aspects of human conflict, are the main determinants of success on the battlefield (Scales, 2006).

A soldier-simulator interface that elicits personal interaction could lead to a self-referent memory approach by the trainee, increasing the accuracy of an individual's recall when a similar situation is faced again (Rogers et al., 1977). This type of interaction with the simulator would be based on the theory of recognition-primed decision making (Klein, 1989) and, if properly exploited with well-designed interfaces, will lead to perceptual learning in the areas of attentional weighting, stimulus imprinting, differentiation, and unitization (Goldstone, 1998). A well-designed decision-making simulator could enable soldiers who will have to function in demanding sensor-shooter roles to learn from scenarios that provide life experiences, bloodlessly.

Finally, as discussed in Chapter 4, recent advances in neuroimaging enable researchers to follow the spatial pathways and temporal patterns of expert decision makers. For example, the detection of potential threats as revealed in VR displays appears to involve the amygdala and related brain regions.
While most of the new information and correlations have been achieved in the laboratory environment, new lightweight, portable technologies that take the place of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to detect loss of decision-making capability would enhance operational skills and survival on the battlefield. These new technologies are expected to be of greatest benefit in High OpTempo environments2 (continuous operation of 12-36 hours). This is another example where research and development work on field-deployable sensors to indicate neural state is essential for achieving a more advanced state of neuroscience-based technology.

2 High operations tempo (OpTempo) refers to missions carried out as quickly and fully as feasible, to apply overwhelming force in a time frame such that opposing forces are unable to respond and counter effectively. By their nature, High OpTempo missions are characterized by high levels of psychological and physical stress, including constant awareness of mortal danger and potential for mission failure, combined with heavy decision-making loads. When High OpTempo is combined with sustained operations (SUSOPS), missions lasting longer than 12 hours before resupply, cognitive capabilities are easily overtaxed and prone to degradation or failure.

RESEARCH-ENABLING TECHNOLOGIES

Several of the mission-enabling technologies in the preceding section require for their further development a fuller understanding of common neurophysiological patterns in human behavior. Research-enabling technologies are also needed to develop tools to study and assess underlying aspects of performance such as ground-truth workload and attention to detail. The advances made with research-enabling technologies will be deployed with soldiers on a limited basis or used in training, simulations, or laboratory environments. Some of the mission-enabling technology described in the previous section will also find uses in the research environment. Investment in research-enabling technology is crucial for adapting current technology to Army applications, as well as for advancing to future generations of Army applications. Such technology might also help the Army conduct scientifically rigorous validation and testing of emerging mission-enabling technology. Some of the opportunities simply involve bridging gaps in technology, for instance, the ability to use fMRI and concomitant eye tracking across a wide visual angle. In this section the committee discusses signal processing challenges, control strategies for BMIs, fatigue and sleep models for soldiers, advances in functional paradigm technology, adapting laboratory neuroimaging technologies for use in the field, and data fusion. The committee also touches briefly on the science of connectomics and looks at the development of a few pieces of hardware and imaging methodologies that could dramatically advance several basic science techniques.

Signal Processing Challenges

At present the methods for extracting information from the brain and nervous system can be divided into two categories: invasive and noninvasive. Invasive methods include multielectrode recordings, local field potentials (LFPs), and calcium imaging. The advantage of invasive methods is that they provide the most direct information about the functioning of specific brain regions on a very fast timescale. The disadvantage is that they frequently require surgery and sometimes cannot be used in humans. Noninvasive methods include EEG, MEG, diffuse optical tomography (DOT), diffusion tensor imaging (DTI), fMRI, GSR, electromyography, and electrooculography. Noninvasive recording techniques have the advantage of not requiring an invasive procedure to place the recording apparatus. However, they frequently require a tremendous amount of additional hardware and infrastructure to collect the information. In addition, noninvasive procedures generally allow high resolution on the temporal scale at the cost of less resolution on the spatial scale, or vice versa. A further disadvantage of noninvasive recording techniques is that the information they collect is often indirect and less specific.
The first conceptual issue surrounding the processing of signals is our limited understanding, for each of the invasive modalities, of what information the signals are providing about neural activity in a specific brain region and the relation between that activity and specific physiological changes and/or behaviors. Addressing this issue requires executing specific experiments and developing specific techniques. Research in neuroscience has not completely answered the challenging signal processing questions that must be answered if the data from invasive monitoring modalities are to be used efficiently. Among these questions are the following:

- To which aspects of a stimulus does a neuron respond?
- How do groups of neurons represent information about a biological signal (a movement command, a memory, a sound or light input) in their ensemble spiking activity?
- How can the plasticity in single neurons and ensemble representations of information be tracked reliably across time?
- How should algorithms be devised to process the activity of large numbers of neurons in real time?
- What sort of signal processing and biophysical information should be used to optimally fuse information from different types of recording techniques?

The second conceptual issue is the extent to which brain activity from invasive measurements can be related to brain activity inferred from noninvasive measurements. If the relationship is strong, the noninvasive technique might be an adequate stand-in for the invasive technique and could lead to application as a field-deployable surrogate biomarker. If the relationship is weak, certain types of brain information may not be accessible by noninvasive means. These observations point to the need for simultaneously conducting invasive and noninvasive recordings in order to understand the relation between the two.

For example, EEG is the simplest and perhaps the most widely used noninvasive neural recording technology. Although EEG has been used for nearly 80 years to study brain function dynamically, how it works is not completely understood. Much of the use of EEG signals still depends on heuristic associations. The fundamental questions here are these: What does an electroencephalogram mean? What is the biophysical mechanism underlying its generation? To what extent can it give us reliable information about both neocortical and subcortical activity? Studies that combine EEG and invasive electrophysiological recordings in specific brain regions will be required to answer these questions. MEG is used less often than EEG, but similar questions can be asked about it.

In the last decade and a half, fMRI has become the fundamental tool in many fields of neuroscience. The basic question, namely, how changes in neural activity relate to the changes in local blood flow and local blood volume that are necessary to produce the fMRI image, is only beginning to be answered (Schummers et al., 2008). Similar fundamental biophysics questions about DOT have yet to be answered. In addition, when it is possible to combine a high-temporal-resolution technique such as EEG with a high-spatial-resolution technique such as fMRI, what is the optimal strategy for combining the information they generate? This example illustrates that simultaneous recording using two or more noninvasive methods can also be mutually informative.

The third conceptual issue is that the ability to analyze behavior and performance quantitatively is essential to understanding the role of the brain and the nervous system in their guiding function. Some typical measures of performance include reaction time, GSR, heartbeat dynamics, local neurochemistry, and quantitative/objective measures of pain and nociception.3 In most behavioral neuroscience investigations, performance is measured along with neural activity using one of the invasive or noninvasive methods. These investigations are crucial for linking neural activity in specific brain regions with overtly observable measures of performance and physiological state. Often, however, the analyses of these performance measures are quite superficial and not very quantitative. For example, reaction times are simply plotted rather than analyzed with formal statistical methods. Similarly, GSR and heartbeat dynamics are directly observable measures of the brain's autonomic control system, yet such signals are rarely if ever analyzed as such. fMRI studies are beginning to help us better understand the processing of pain and the signals from the body's pain receptors (nociceptor signals). For this work to translate into techniques that can be used to aid the military, quantitative measures of pain and nociceptor stimuli must be developed.

3 Nociception is the physiological perception of a potentially injurious stimulus.

The fourth signal processing issue is being able to properly fuse information from different sources, whether invasive or noninvasive, and the fifth issue is the challenge of rapid and (ideally) real-time visualization and analysis. In short, the ability to effectively use information collected from the brain and nervous system to enhance performance and improve therapies depends critically on the signal-processing methods used to extract that information. All of the popular noninvasive methods for measuring neural states in humans have unanswered questions concerning their underlying neurophysiology. Although one can certainly glean useful methodology without probing deeply, fundamental questions remain. If research answers them, more applications and measurement techniques may open up, including, eventually, field-deployable indicators of neural state.
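As one example of the more formal treatment of behavioral measures called for above, the following sketch applies a permutation test to reaction times from two conditions instead of simply plotting them. The data are synthetic and the conditions hypothetical; the point is only that a defensible significance statement requires little beyond the raw measurements.

    import numpy as np

    def permutation_test(rt_a, rt_b, n_perm=10000, seed=0):
        """Formal test of whether mean reaction time differs between two
        conditions, rather than simply plotting the two distributions."""
        rng = np.random.default_rng(seed)
        observed = rt_a.mean() - rt_b.mean()
        pooled = np.concatenate([rt_a, rt_b])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)  # relabel trials at random
            diff = pooled[:len(rt_a)].mean() - pooled[len(rt_a):].mean()
            if abs(diff) >= abs(observed):
                count += 1
        return observed, count / n_perm

    # Synthetic reaction times (seconds) for rested vs. fatigued conditions;
    # the exponential tail gives the skewed shape typical of real RT data.
    rng = np.random.default_rng(2)
    rested = 0.35 + rng.exponential(0.08, size=200)
    fatigued = 0.40 + rng.exponential(0.12, size=200)

    effect, p_value = permutation_test(rested, fatigued)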
Fatigue and Sleep Models for Soldiers

Chapter 5 discusses fatigue and sleep research in detail, as well as mitigation strategies. Two important technologies enabling the performance-sustaining research discussed in these areas are (1) computational models for predicting behavior and (2) physical models for transferring results to the appropriate warfighter population. The computational model used is a vector of parameters important for sleep or fatigue, with inputs calibrated to a specific individual soldier. The additional strategies that officers employ in the field, such as naps and nutritional supplements, should be included in the model, and the model should account for the difference between the academic research subjects used to construct it and actual soldiers.

Ideally, the physical model used in research would be an actual soldier in the state of readiness expected at the start of a mission. However, the multitude of research variables that must be tested necessitates using ordinary civilians to stand in for soldiers. Chapter 3 described an opportunity to leverage research using high-performance athletes. In an academic setting, it would be preferable to use persons from a university community for most of the studies and reserve actual soldiers for experimental runs once the paradigms are well understood and being tested for validity.

Research in the area of fatigue might also include a systematic study of the differences between cognitive fatigue, physical fatigue, and fatigue from environmental stress such as hypoxic or thermal challenges; biomarkers predictive of a soldier's susceptibility to fatigue under extreme environmental conditions; and behavioral measures of fatigue to advance screening and testing procedures for soldier assessment.
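As an illustration of what such a computational model looks like, the sketch below implements the classic two-process structure (homeostatic sleep pressure interacting with a circadian rhythm) that underlies many fatigue-prediction tools. The parameter vector uses generic textbook-style values, not values calibrated to any individual soldier, and a real model would add terms for naps, supplements, and the other field strategies noted above.

    import numpy as np

    # Illustrative two-process alertness model (after Borbely's framework).
    # Per-individual parameter vector; values are generic, not calibrated.
    PARAMS = {
        "tau_rise": 18.2,    # hours; growth of sleep pressure while awake
        "tau_decay": 4.2,    # hours; recovery of sleep pressure during sleep
        "circ_amp": 0.12,    # amplitude of the circadian modulation
        "circ_phase": 16.8,  # hour of the circadian peak
    }

    def simulate_alertness(awake, p=PARAMS, dt=1.0):
        """`awake` is a boolean array, one entry per `dt` hours.
        Returns homeostatic pressure S and a combined alertness proxy."""
        hours = np.arange(len(awake)) * dt
        s = np.empty(len(awake))
        s_prev = 0.3                   # initial sleep pressure
        for i, is_awake in enumerate(awake):
            if is_awake:               # pressure saturates toward 1 while awake
                s_prev = 1 - (1 - s_prev) * np.exp(-dt / p["tau_rise"])
            else:                      # pressure decays during sleep
                s_prev = s_prev * np.exp(-dt / p["tau_decay"])
            s[i] = s_prev
        circadian = p["circ_amp"] * np.cos(2 * np.pi * (hours - p["circ_phase"]) / 24)
        return s, (1 - s) + circadian  # higher values = more alert

    # Example schedule: 40 hours of sustained wakefulness, then 8 hours of sleep.
    schedule = np.array([True] * 40 + [False] * 8)
    pressure, alertness = simulate_alertness(schedule)

Calibrating the parameter vector to an individual, and validating that the civilian-derived parameters transfer to soldiers under operational stress, is exactly the research-enabling gap the committee identifies.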
Functional MRI and Hardware to Support fMRI Research on Army Applications

Functional MRI is detailed in Chapter 2. Technology associated with fMRI for use in clinical health care is receiving sufficient investment from industry. However, clinical applications require only medium spatial resolution (3-4 mm) and low temporal resolution (tens of seconds). These resolutions are usually sufficient for a clinical determination of whether a major circuit in the brain is functioning normally; however, they are inadequate for measuring neural responses to instantaneous events in rapid succession, among other research paradigms.4

4 This information comes from two installed software packages (BrainWave, GE; and Neuro 3D, Siemens) and from discussions with company representatives attending the annual meeting of the Radiological Society of North America about what is expected to be released in the next few years.

Academic research laboratories, funded mainly by the National Institutes of Health, possess fMRI technology that is superior to the equipment available commercially (~2-mm and 1-sec resolutions for whole human head scans). This improved spatiotemporal resolution is achieved primarily through advanced imaging electronics, such as parallel signal receiver channels, rather than an exclusive concentration on ever-increasing static field strength. Cutting-edge laboratories have advanced measurement techniques that are a vast improvement over conventional imaging, but even typical facilities have invested in excess of $10 million for equipment and facilities, an investment that could be leveraged by the Army for the evolutionary application of current technology. The Army needs to monitor advances in existing facilities and consider ways to utilize them.

Some areas of research could be of great value for Army applications but are not being addressed by industry or academia because they have little if any potential for use in the clinical market. These areas are likely to require Army investment to achieve sufficient understanding to adapt results from laboratory environments to the field. They include vertical-bore MRI; full-motion, interactive stimulation; wide-angle, immersive visual stimulation; and high-temporal-precision stimulation and monitoring.

Currently, all fMRI research is done with the subject lying down. There are physiological and perceptual differences between horizontal and vertical orientation. To determine whether and to what extent supine-orientation fMRI is applicable to field situations, at the least, experiments must be conducted with the subject sitting up or, possibly, standing. Subjecting participants to heat, humidity, smells, and other such stimuli encountered in combat situations will also be required. This necessary work will require designing and building a specialized MRI machine, with its supporting laboratory, that is capable of scanning subjects in the vertical position while also exposing them to relatively rapid environmental changes. Developing such an fMRI system is likely to entail an investment horizon of at least 5-10 years. One company (Fonar) produces vertical MRI machines for humans, but these machines are primarily for orthopedic imaging rather than brain imaging and lack the high temporal resolution needed for Army-relevant research. At least one Army application laboratory would be required. The committee estimates that setting up a facility to perform vertical fMRI at 3 T or more using state-of-the-art imaging systems would cost $10 million to $20 million for the first 5 years of operation and, nominally, $2 million per year thereafter. Collaboration with external partners could reduce the Army investment. If the first such machine proves useful for Army applications, additional machines should cost substantially less. The main risk in this investment is that the study results may show there is no additional information to be gleaned for Army applications by examining subjects sitting or standing rather than lying supine.
Although such a result cannot be ruled out before the requisite testing is done, it would contradict current research on perceptual differences observed in nonhuman primates. Mitigating this risk, measurements are not expected to be any worse than with a commercial off-the-shelf system;5 however, the custom system is not expected to be a versatile clinical machine.

5 Examples of such systems would be research scanners made by the three main commercial vendors: Siemens, General Electric, or Philips.

Moreover, the majority of research paradigms in fMRI are static, meaning that stimulation is planned out entirely before the experiment. A small number of laboratories have produced technology that allows for feedback based on subject responses or physiological reactions, which helps determine the subject stimulation in real time. This real-time technology would allow basic research into neural function in more naturalistic environments. Pioneering research in this field is being carried out at the Army's Institute for Creative Technologies.6 The goal here is to continue such research and offer more naturalistic environments for research paradigms. A real-time system should be able to log the time at which a stimulation occurred or a response was made, including eye movements, with an accuracy of less than 1 msec and a precision of 500 μsec. This software environment should be deployable to neuroimaging centers doing Army research, requiring relatively small amounts of hardware ($200,000 for research quality and $50,000 for clinical quality) and local technical support. The committee notes that this advance in real-time, interactive paradigms for neuroscience research should be developed with a vertical fMRI capability but would also be applicable to the development of standard supine-oriented machines.

6 Jonathan Gratch, Institute for Creative Technologies, University of Southern California, "The Neuroscience of Virtual Humans," presentation to the committee on February 13, 2008.

The Army should also support development of extended-range visual stimulation hardware that is closer to combat conditions and also MRI-compatible. The hardware currently available for fMRI includes wide-angle immersive visual stimulation and high-frequency presentation, but the current state of the art is ±15 degrees of nominal center (Cambridge Research is using ±45 degrees but has not yet publicly demonstrated the capability), whereas peripheral stimulation standards normally exceed ±40 degrees of center.
Additionally, 60 Hz is the standard display rate for research, with some optional displays claiming 100 Hz.7 The video gaming industry's top-of-the-line displays are five times faster (500 Hz). It is unlikely that 500-Hz displays would be required for fMRI research in the next 5 or 10 years as long as the screen refresh time is known to the submillisecond standard.

7 In liquid crystal display technology, unlike the older cathode ray tube (CRT) technology, merely pumping up the input video frequency does not result in faster displays. The liquid crystal elements have a limit to their on-off transition time, typically 15-20 msec for a standard desktop display. This transition time explains why flat-panel displays do not flicker like CRTs and therefore cause less eyestrain. Top-of-the-line gaming displays can transition in as little as 2 msec, providing a true 500-Hz refresh.

Finally, eye-monitoring hardware that tracks gaze also exists mainly in the 60-Hz world. In order to correct reaction times for eye movements, cameras sampling at 1000 Hz and above will perform the best. State-of-the-art hardware for investigating behavior provides 1250-Hz sampling rates; however, these cameras are not intrinsically MRI-compatible. The high-speed solution is to utilize a limbus tracker,8 which can sample as high as 10 kHz but has neither the angular resolution of a high-speed camera nor complete pupillometry capability. Additionally, even high-sampling-rate equipment has latency due to its USB PC interface, giving an overall synchronization uncertainty of 8 msec (ideally, it should be negligible). The various components of these eye-tracking systems exist at separate facilities, but engineering them all into a standardized setup for Army applications research would be a worthy investment.

8 A limbus tracker illuminates the eye with infrared (IR) light and uses a single photodiode to collect the IR reflection. The motion of the edges of the pupil and iris induces changes in the total reflected IR intensity. Standard eye trackers use a camera to transmit IR video of the pupil, iris, and sclera, which is processed using image analysis software. Limbus trackers are good at detecting any motion of the eye but do not provide any directional or absolute gaze information.

Transferring Laboratory Neuroimaging Technologies to Field Applications in the Far Term

The gold standard in functional neuroimaging is currently fMRI. Details of this technology are introduced in Chapter 2. To summarize: fMRI indirectly measures neuronal activity by watching changes in local blood flow around active neurons through the blood oxygen level-dependent (BOLD) effect. BOLD changes can also be observed by NIRS, also known as diffuse optical tomography (DOT), detectors, though at a lower resolution. The underlying neuronal activation may also be observed at low resolution, noninvasively, with EEG or MEG. The committee expects that field-deployable fMRI technology will not be available for at least 20 years. Accordingly, results from high-resolution fMRI laboratory experiments will need to be translated to field monitoring applications through the use of surrogate markers well correlated with the fMRI results. The most likely candidates for field deployment are NIRS/DOT, EEG, and transcranial magnetic stimulation (TMS).

There are a number of approaches to portable, field-usable application of fMRI-based neuroscience research:

- Direct measurement of BOLD responses in the brain,
- Direct measurement of the neuronal firing that caused the BOLD response,
- Suppression of unwanted brain activation, and
- Enhancement of desired brain activation.

The committee projects that none of these approaches will be practical before the far term at the soonest. However, breakthroughs happen, and all of them could have great impacts in the future and deserve to be monitored or even considered for some initial pilot funding.

BOLD responses could be measured directly with in-helmet NIRS/DOT detectors. NIRS/DOT can detect the dynamic changes of any spectroscopically active molecule. Single- or dual-wavelength techniques are usually employed to track blood flow in the brain, providing a crude monitor of the BOLD effect. Further development of these techniques over the next 10-20 years will lead to portable systems that take advantage of the results of basic brain research in the intervening years.
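For orientation, NIRS/DOT estimates of hemoglobin changes typically rest on the modified Beer-Lambert law. The sketch below inverts two-wavelength optical density changes into oxy- and deoxyhemoglobin concentration changes. The extinction coefficients are representative values of the kind tabulated in standard references, and the pathlength and scattering factors are generic assumptions; none of the numbers should be taken as instrument calibrations.

    import numpy as np

    # Representative molar extinction coefficients (cm^-1 per M); real work
    # should take these from standard compilations.
    #                oxy-Hb   deoxy-Hb
    E = np.array([[  586.0,  1548.0],    # 760 nm
                  [ 1058.0,   691.0]])   # 850 nm

    def hemoglobin_changes(d_od, path_cm=6.0, dpf=6.0):
        """Modified Beer-Lambert law: convert changes in optical density at
        two wavelengths into changes in oxy- and deoxyhemoglobin
        concentration (molar). `dpf` is the differential pathlength factor
        accounting for photon scattering in tissue."""
        effective_path = path_cm * dpf
        return np.linalg.solve(E * effective_path, d_od)

    # Example: small optical density changes measured at 760 nm and 850 nm,
    # a pattern consistent with increased local oxygenation (a BOLD-like change).
    delta_od = np.array([0.012, -0.007])
    d_hbo, d_hbr = hemoglobin_changes(delta_od)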
Measuring neuronal firing in the field is a long-term goal whether it is achieved with a few sensors or with a several-hundred-channel electrical imaging system. Quantitative EEG (qEEG) is a marketing term for the marriage of traditional EEG with digital recording and analysis of its signals. The nomenclature change is promoted mostly in legal circles, to add weight to expert testimony in civil tort proceedings or criminal defenses ("my brain made me do it"), as well as in alternative-medicine circles that use biofeedback to treat physiological ailments. Most of the claims for qEEG (sometimes labeled rEEG) are suspect; however, there is solid science behind the decades of analyzing the signals detected by transient EEG, and this area of neuroscience research is well worth monitoring. If research into the real-time processing of transient EEG ever reveals something of value, a deployable in-helmet EEG detector would need to be available. Immediate uses for data on general sleep and fatigue are envisioned that would justify deploying existing EEG equipment long before a high-sampling-rate, 100-channel system is needed. A portable monitor was demonstrated at the 2008 annual meeting of the Organization for Human Brain Mapping.9 A field-deployable EEG detector system has in fact been developed and should be tested in the field. The 5-year goal is the recording of a half-dozen channels with the subject jogging on a treadmill, and the 10-year goal is producing a system that can be used in real-time training and assessment exercises.

9 This was the 14th Annual Meeting of the Organization for Human Brain Mapping (Melbourne, Australia, June 15-19, 2008). The meeting information is documented at http://www.hbm2008.com.

Unwanted brain signals can be temporarily suppressed by noninvasive means in the laboratory using TMS. TMS uses high-frequency magnetic fields to block the functioning of target neuronal structures, in essence jamming the functional ability of a brain region. Two aspects of this technology need work: targeting smaller areas to lessen the side effects and making the technology deployable in a vehicle or helmet. Additionally, much research is required to learn which brain signals should be blocked and under what circumstances. Finally, there has been little research on the long-term impact of multiple TMS exposures on brain circuitry, leaving significant ethical concerns about exposing healthy humans to this technology over long periods.

Enhancing desirable brain networks is usually accomplished with neuropharmacology, as discussed in Chapter 5. Additionally, it is possible that TMS can be employed to enhance rather than suppress activation. One recent study showed enhancement of top-down visuospatial attention using combined fMRI/TMS stimulation (Blankenburg et al., 2008). The ability to target smaller areas is an objective sought by the TMS research community in general, but making such a device deployable in the field would require Army investment. Making this technology available in-vehicle is achievable in the medium term. The committee believes that in-helmet TMS technology would not be a useful investment until definitive applications, enhancing or inhibiting, are identified in the laboratory.

Implantation of deep-brain stimulators has been researched for use in Parkinson's disease, epilepsy, and obsessive-compulsive disorder for both suppression and enhancement of neuronal activation. Study of such an invasive technology should be limited to the treatment of similar disorders in soldiers.

Finally, although it is unlikely that a portable fMRI for detecting BOLD can be developed in the next 20 years, a low-field, combined fMRI/MEG approach that would measure both direct neuronal currents and BOLD fluctuations could produce a soldier-wearable system. Initial laboratory experiments with fMRI/MEG (McDermott et al., 2004; Kraus et al., 2007) indicate some feasibility, although success will require substantial technology development and breakthroughs in both ultralow magnetic field detection and signal capture in electromagnetically noisy environments. The committee concluded that such developments are equally unlikely in the next 20 years. However, the fMRI/MEG approach should be monitored, as it is already being supported by the National Institutes of Health and the Department of Energy despite the risky prospects for the technology.

An interesting outgrowth of the low-field fMRI/MEG direct neuronal firing work is a high-field application of the same method using the parallel acquisition mode of an advanced brain imaging coil. The principle here is detection of stimulated magnetic resonance relaxation at very fast repetition times: up to 100 frames per second. This allows very high temporal resolution of BOLD signals and can calibrate individual BOLD characteristics. This technique, termed inverse magnetic resonance imaging, could be very valuable in understanding fundamental brain activity (Lin et al., 2008).
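Returning to the transient-EEG analysis described above, the workhorse computation is band-power estimation. The sketch below estimates spectral power in standard bands from one simulated channel; the sampling rate, the stand-in data, and the (theta + alpha)/beta ratio, one of several fatigue indexes proposed in the open literature rather than a validated Army measure, are all assumptions.

```python
# Sketch of band-power estimation from one EEG channel using Welch's
# method. The signal is random stand-in data, not real EEG.
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 30)        # 30 s of stand-in data

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
df = freqs[1] - freqs[0]                  # frequency resolution

def band_power(lo, hi):
    """Approximate integral of the PSD between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * df

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta = band_power(13, 30)
print(f"(theta + alpha) / beta = {(theta + alpha) / beta:.2f}")
```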
An emerging imaging technology known as diffusion tensor imaging (DTI) is an enabling technology for a new field known as connectomics, the study of the brain's neural pathways for information transfer. The name derives from the concept of the human connectome—the entire body of neural connections—in much the same way as the entire collection of genes is termed the human genome. (See Box 5-3 in Chapter 5.) Connectomics is an area of basic neuroscience research with tremendous potential to enable the understanding of brain function, and DTI may have potential for future Army research.

The Army should also monitor research on atomic magnetometers for their potential to contribute to portable and rugged MRI (Bourzac, 2008). Atomic magnetometers may prove of great importance to MEG, and MEG imaging is the basis for inverse MRI, which will need to be developed for ultraportable (less than 20 pounds) MRI scanners. However, putting 100,000 sensors around a soldier's head makes little sense unless all of the sensor information can be processed in real time. Although this technology cannot support Army applications until the signal processing issues outlined in a previous subsection have been addressed, the committee views the area as a future opportunity.

Optimal Control Strategies for Brain–Machine Interfaces

One far-term technology opportunity will require a great deal of technique development and experimentation—namely, the extension of current control theory and control technology to optimal strategies for controlling an external system through signal communication only (an information interface) between the brain and the external system's control input and feedback subsystems. The natural way our brains control an external system is through efferent peripheral connections to muscles, where the information signal is transduced through a motor response; for example, we turn a wheel, step on a pedal, press buttons, move a joystick, utter a vocal command, or type a command to a software subsystem on a keyboard. The external system provides feedback to the controller-brain in the form of sensory stimuli: visual information, proprioceptive inputs, auditory signals, and so on. In a BMI, control signals from the brain are identified and transmitted by a decoding subsystem, which communicates the signal to the external system's control input interface. In addition to the customary range of feedback cues via peripheral sensory stimuli, the external system could in principle send feedback signals to the brain through stimulation channels.
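The information loop just described can be summarized in a short sketch: neural features enter a decoder, the decoded command drives the external system's control input, and the system returns feedback. Every name below is a hypothetical stand-in (the trivial argmax decoder, the toy UGV class); real decoders and device interfaces are far more involved.

```python
# Skeleton of a noninvasive BMI control loop: decode a command from
# neural features, apply it to the external system, return feedback.
import numpy as np

def decode_command(neural_features):
    """Map a feature vector to a discrete command (stand-in decoder)."""
    commands = ["left", "right", "forward", "stop"]
    return commands[int(np.argmax(neural_features)) % len(commands)]

class FakeUGV:
    """Toy external system standing in for a UGV control interface."""
    def __init__(self):
        self.heading = 0.0
    def apply(self, command):
        self.heading += {"left": -5.0, "right": 5.0}.get(command, 0.0)
        return {"heading_deg": self.heading}  # feedback to the operator

def control_step(external_system, neural_features):
    command = decode_command(neural_features)
    feedback = external_system.apply(command)
    return command, feedback

ugv = FakeUGV()
features = np.array([0.1, 0.9, 0.3, 0.2])  # pretend decoded EEG features
print(control_step(ugv, features))
```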
In this sense of information transmission between the controller-brain and the controlled external system, a BMI can use either invasive or noninvasive technologies for control signal monitoring and (possibly) feedback stimulation. (See the discussion of invasive and noninvasive monitoring methods in the subsection on signal processing above.) Invasive control and feedback methods are most relevant to technological aids that recover normal function lost through accident, disease, or combat and should, for ethical reasons, remain restricted to such applications, which would include advanced prosthetic limbs and, perhaps, alternatives to a limb such as a directly controlled wheelchair or a reaching-grasping device. In the context of this report, however, the types of external systems to be controlled are not prostheses but the kinds of systems a soldier would normally control by efferent motor responses: a vehicle, a UAV or UGV, or an information-processing system or subsystem (i.e., a computer or a microprocessor-based information node). For such systems, noninvasive (as opposed to invasive) control and feedback methods are, for the foreseeable future, the only practical and ethical options.

The entire field of BMIs is at an early stage of understanding. For example, we are just beginning to learn about the remarkable potential offered by the plasticity of even a mature adult brain. Much will need to be learned from the current and continuing work on invasive methods for prostheses before we can even think about the longer-term challenge of embedding BMIs.

Advanced Upper-Limb Prosthetics

With improvements in battlefield medicine, many more soldiers than in previous wars are now surviving serious injury. Many of these injuries involve loss of an upper limb or of multiple limbs. While leg prosthetics have been very successful, prosthetics for an upper limb, which has over 20 degrees of freedom, are a much greater challenge. Current upper-limb prosthetics, which use electromyographic activity to control the limb, have limited degrees of freedom, are difficult to control, and are very heavy, expensive, and uncomfortable. Indeed, patients often abandon these complex prosthetic limbs in favor of simpler, more rudimentary ones.

An exciting long-term goal for Army medical research is to develop upper-limb prosthetics that are neurally controlled. The two most promising approaches for the limb-control (efferent) and sensory-feedback (afferent) interface with the nervous system are connection to the peripheral nerves or directly to the cerebral cortex. The peripheral nerve approach involves recording signals from the stumps of the severed nerves to control the prosthetic limb. The cortical approach requires a direct BMI that records signals derived from activity in the motor and sensory-motor areas of the cortex involved in forming movement intentions. Both approaches require not only "reading out" the movement intention (efferent signaling) of the subject but also a means of sensory feedback (afferent signaling), which is essential for dexterous control.
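The efferent "read-out" step can be illustrated with a minimal sketch, assuming simulated, linearly tuned firing rates and a simple ridge-regression decoder mapping rates to hand velocity. Fielded cortical decoders (Kalman-filter and point-process methods, for example) are substantially more sophisticated; this only shows the shape of the problem.

```python
# Minimal sketch of decoding movement intention: ridge regression from
# simulated neural firing rates to 2-D hand velocity.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_neurons = 2000, 40

velocity = rng.standard_normal((n_samples, 2))    # true 2-D hand velocity
tuning = rng.standard_normal((2, n_neurons))      # each neuron's tuning
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_neurons))

# Ridge regression: W = (X'X + lambda*I)^-1 X'Y
lam = 1.0
X, Y = rates, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

decoded = X @ W
corr = np.corrcoef(decoded[:, 0], Y[:, 0])[0, 1]
print(f"decoded-vs-true velocity correlation (x): {corr:.2f}")
```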
There are several challenges for technology development in designing brain–machine interfaces for upper-limb prosthetics using cortical control. One is to design implants that ensure longevity of recording, ideally for the lifetime of the individual. Current implants typically last only a year or so, although some have lasted for a number of years; understanding biocompatibility, durability, and the other factors that affect the longevity of implants is an important area of research. A second developmental challenge is the integration of electronics with electrodes; ideally, the electrodes would be metal leads on a silicon substrate, which can easily be integrated as a single unit with integrated-circuit electronics. A third challenge is to make the electrodes movable—that is, able to automatically search out cells for optimal recording and move to new cells when cells are lost. A fourth challenge is the use of local field potentials and spikes to improve the decoding of recordings, particularly for determining behavioral states and the transitions between them. Fifth, implantable electronics need to be developed that allow on-board processing and decoding of neural signals and wireless transmission of the signals to controllers within the prosthetic limb. Finally, limb robotic technologies need to be advanced to achieve lightweight limbs that can generate forces similar to those of natural limbs, with power sources that last for long periods between rechargings. Given all of these challenges and the delicate nature of direct neural connection technology, it is unlikely that the interfaces could be made battlefield-robust in the foreseeable future.

Invasive technology is currently utilized in medical prosthetics, including direct brain connections such as multichannel cochlear implants to replace the sense of hearing. These direct connections are able to capture and generate individual neuronal currents, as well as monitor or induce coherent neural activity, at a much greater signal-to-noise ratio than noninvasive technology. Invasive technologies should therefore be considered the best possible case, in both signal detection and interface complexity, for what could be achieved via noninvasive technologies. These medical applications are important for the Army to monitor as a guide to what may be possible noninvasively.

Other Prosthetics Applications with Relevance to Brain–Machine Interfaces

In addition to BMI systems to facilitate recovery of motor function, other prosthetic devices are on the horizon. These include devices that carry out deep-brain stimulation to improve cognitive function and arousal state and to treat depression. There has also been recent work on central auditory neural prostheses that stimulate the inferior colliculus and on visual neural prostheses that stimulate the retina or lateral geniculate nucleus. In most of these cases, research has demonstrated the feasibility of devices either stimulating a given brain region or using information
from a particular brain region. A fundamental question must be answered: What are the optimal control strategies that allow a prosthetic device to interface in the most efficient and physiologically sound way with its human user? As a simple illustration, most deep-brain stimulation to treat Parkinson's disease is carried out by applying a current continuously once the stimulator is implanted. Given what we know about neural responses and, in particular, about neurons in the subthalamic nucleus, is it possible to design a device for stimulating this brain region that does not require the constant input of current?

SCIENTIFIC AND TECHNICAL BARRIERS TO NEUROSCIENCE TECHNOLOGIES

Chapter 2 discussed ethical and legal barriers to neuroscience research and development. There are also scientific and technical barriers to the development of neuroscience technologies that could be overcome using advances in unrelated fields of science and engineering. Advances in the miniaturization of electronics and other components, for example, would enable the development and deployment of research-enabling imaging technologies needed to substantiate and apply neuroscience hypotheses in the field. Such advances would also facilitate the design of less bulky and ungainly BMIs. Biocompatibility joins bulkiness as a barrier to the development of neural prostheses; once this barrier is overcome, biocompatible devices could serve as alternatives to more invasive monitoring and imaging techniques.

Data fusion is yet another barrier. Neuroimaging data collected by various means will not realize their maximum utility until the different modalities can be fused. Additionally, even when laboratory results can be fused, field-deployed equipment will have its own measurement quirks that must be taken into account when the task of fusing data is transferred from the laboratory to the field.

Possibly the greatest challenge for the Army is to ensure that its institutional expertise—in, for example, analysis modalities and data fusion techniques—resides in individuals of all ages. Overcoming this barrier should be a major goal for the Army. The committee observed that much of the neuroscience expertise in the Army is possessed by late-career scientists without mid- and early-career backup. This failure to diversify by age puts the Army at risk of losing substantial institutional intellectual equity each time a senior neuroscientist retires. In-house expertise is crucial for leading research in Army-specific areas, such as understanding the effort involved in the measurement of ground-truth workload,10 knowing whether it is possible to train up to an arbitrary capacity (versus improving a human–machine interface, for example), and recognizing further technology opportunities.

10 Ground-truth workload is an objective measure of brain activity based on functional neuroimaging or a corresponding field-deployable biomarker. The phrase "ground truth" reflects the fact that most measures of workload are based on subjective responses to questions such as "On a scale of 1 to 10, how busy did you feel?" This objective measure is a goal for the future.

TRENDS IN NEUROSCIENCE TECHNOLOGY

The committee identified several trends in neuroscience technology that the Army should monitor for application to its needs. Advances in neuroscience technology and methodology are occurring at an extraordinary rate, and extraordinary measures are needed to keep abreast of developments in the field.
The committee identified trends in six areas: cognitive psychology and functional imaging; targeted delivery of neuropharmacological agents for operational—that is, not medically indicated—purposes; multimodal fusion of neural imagery and physiological data; new types of averaging in fMRI; database aggregation and translation for meta-analyses; and default mode networks.

Cognitive Psychology and Functional Imaging

Because fMRI has become so widely available, psychologists are able to test cognitive models of the human mind against functional data. Traditional cognitive psychology developed and flourished well before scientists could noninvasively image activity in the human brain, and some of its practitioners still question the utility of knowing which areas of the brain are active at particular times. They use the analogy that knowing which parts of a computer are consuming the most energy does not tell you what the software is doing. Be that as it may, fMRI continues to produce consistent results on psychological questions that cannot be dismissed, and the imaging community increasingly accepts the need for theoretical models of cognition (Owen et al., 2002; Huettel et al., 2004), in part because the amount of functional data now available overwhelms the resources that could otherwise be used for experimentation. The Army should monitor the collaborative progress made among cognitive psychologists and neuroscientists for synergies that may reveal possible future opportunities for applications.

Targeted Delivery of Neuropharmacological Agents for Operational Purposes

An important technology trend is improvement in the ability to deliver pharmacologic agents to specific locations in the brain and nervous system in a controlled manner. It is hypothesized that targeted delivery mechanisms will open up significant new classes of compounds for use above and beyond those considered safe for oral ingestion. Research in this area is of two kinds: (1) the identification of specific functional targets in the brain and spinal cord and (2) the creation of delivery systems that can place pharmacologic agents at these targets. Besides ingestion, there are three routes by which drugs can be delivered to the nervous system: injection, inhalation, and topical application. With all of these routes, it is important to know whether the objective is to enter tissue, which is an aggregation of cells, or to enter cells directly.
For any intravenously delivered agent, a key issue is passing the blood-brain barrier. Several new modes of drug delivery are now being studied, including encapsulation in nanoparticles and scaffolding in polymer systems (Lee et al., 2007; Cheng et al., 2008). Site-specific delivery can now be controlled more precisely by targeted activation and inactivation. This is clearly a very active area of research that will see many improvements over the next several years. Delivery systems are key technologies that the Army should monitor rather than invest in directly.

Multimodal Fusion of Neural Imagery and Physiological Data

Another trend is the collection of data from several physiological monitors concurrently, or the fusing of data from separate sources into a common paradigm. An important trend in many imaging centers is the development of functional neuroimaging tools that fuse multimodal images using various combinations of fMRI, DOT, EEG, and MEG measurements. An EEG-based instrument has recently been commercialized that allows data from MEG and EEG, or fMRI and EEG, to be collected simultaneously. These imaging and electrophysiological measurements can also be combined with other physiological variables such as heart rate, GSR, eye movements, blood pressure, and oxygen saturation. Other instruments combine advanced anatomical data with functional data. Examples are the combinations of computerized tomography with positron emission tomography (PET), for which instrumentation is available; MRI with PET, for which a prototype instrument is in use with commercial rollout expected in 2010; and diffusion tensor imaging with fMRI. Moreover, the higher resolutions of MRI and computerized tomography are leading to higher-resolution mapping of cortical thickness. The Army needs to monitor these advanced neuroimaging techniques. Improvements are such that studies completed in the past decade or so are being repeated and are yielding markedly different results from the earlier studies.

The use of two or more imaging modalities simultaneously or in sequence offers the exciting prospect of being able to track the dynamics of brain activity on different spatial and temporal scales. To do this, it will be necessary to develop an integrated, dynamic computational framework based on the biophysical, physiological, and anatomical characteristics that can be imaged by these modalities. The modality components of this computational framework could be identified and validated through a series of cross-modal experiments. Some of this work can be done using high-speed computing resources to design and test the dynamic data analysis algorithms on simulated (and, later, experimental) data from multimodal imaging. This cross-modality validation is especially important for the Army in that it feeds directly into the understanding of surrogate measures.

The ability to record information from large numbers of neurons using multielectrode recording techniques, local field potentials, and two-photon imaging now makes it possible to understand in greater detail the functional and anatomical significance of specific brain regions. Similarly, the ability to carry out large-scale simulations of neural models makes it possible to guide experimental research in a principled way, using data from experimental measurements to facilitate the choice of model parameters.
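One low-level step in any such fusion pipeline, aligning streams recorded at different rates onto a common timeline, can be sketched as follows. The modalities, sampling rates, and 10-Hz analysis grid below are illustrative assumptions, and real pipelines must also handle clock drift and the measurement quirks noted earlier.

```python
# Minimal sketch of temporal alignment for multimodal fusion:
# resample streams recorded at different rates onto one timeline.
import numpy as np

duration_s = 60.0
t_eeg = np.arange(0.0, duration_s, 1 / 256)   # EEG feature at 256 Hz
t_hr = np.arange(0.0, duration_s, 1.0)        # heart rate at 1 Hz
t_gsr = np.arange(0.0, duration_s, 1 / 4)     # GSR at 4 Hz

rng = np.random.default_rng(2)
eeg_feature = rng.standard_normal(t_eeg.size)
heart_rate = 60 + 5 * rng.standard_normal(t_hr.size)
gsr = np.abs(rng.standard_normal(t_gsr.size))

# Common 10-Hz analysis timeline; linear interpolation for each stream.
t_common = np.arange(0.0, duration_s, 0.1)
fused = np.column_stack([
    np.interp(t_common, t_eeg, eeg_feature),
    np.interp(t_common, t_hr, heart_rate),
    np.interp(t_common, t_gsr, gsr),
])
print(fused.shape)  # (600, 3): one row per time point, one column per modality
```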
The two arms of computational neuroscience, biophysical and algorithmic, can work in concert with experimental neuroscience at all levels to help integrate information into computational theories of the brain and to validate those theories. New algorithms will improve quantitative understanding of the information in experimental data. Gaining more insight into how the brain computes will undoubtedly bring new approaches to the design of algorithms for machine learning and computation applicable to a broad range of fields.

As an example, a key area for research in neural signal processing will be algorithms to facilitate BMIs. These algorithms must be able to make use of the broad range of neural signals (neural spike trains, local field potentials, electroencephalographic recordings) to control the interactions between humans and machines. This is a very challenging task, because the output of the control strategy—for instance, a particular movement—may be clear, but how the control strategy is represented and carried out in the brain and nervous system is less apparent. Algorithms must be developed hand in hand with efforts by neurophysiologists to reverse engineer the mechanisms of neural control. It is also important that this algorithm research stay in close contact with the field of control theory, where similar algorithms have been developed to solve problems in entirely manmade control systems. This research will lead to new algorithms and, most likely, new theories and practical approaches to the design and implementation of control strategies. Some BMIs might be designed for motor prosthetic purposes, others for control of deep-brain stimulation to treat Parkinson's disease, obsessive-compulsive disorder, and depression. Each problem has unique features and control requirements that will need to be studied in detail to understand how the relevant brain region functions, so that optimal algorithms and, eventually, optimal therapeutic strategies can be devised.

New Types of Averaging in fMRI

Group averages still dominate the literature, with studies utilizing the average activation pattern of 5-10 subjects for a given paradigm. The current trend is to use at least 10-12 subjects and construct a second level of analysis to produce a random-effects average. In a group average, single subjects may dominate data sets and skew the results.
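The contrast between the two averages can be sketched for a single voxel, assuming simulated per-trial activations and one dominating subject; the random-effects version, defined in the next paragraph, tests subject means rather than pooled trials, so the outlier is dampened.

```python
# Minimal sketch contrasting a pooled ("group") average, which a single
# extreme subject can dominate, with a random-effects analysis that
# treats each subject's mean as one draw from a population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects, n_trials = 12, 50

# Simulated per-trial activation for one voxel; one outlier subject.
subject_effects = rng.normal(0.3, 0.2, n_subjects)
subject_effects[0] = 2.5  # a dominating subject
trials = [rng.normal(mu, 1.0, n_trials) for mu in subject_effects]

pooled = np.concatenate(trials)
t_fixed, p_fixed = stats.ttest_1samp(pooled, 0.0)

subject_means = np.array([t.mean() for t in trials])
t_random, p_random = stats.ttest_1samp(subject_means, 0.0)

print(f"pooled (fixed-effects): t = {t_fixed:6.2f}, p = {p_fixed:.1e}")
print(f"random-effects:         t = {t_random:6.2f}, p = {p_random:.1e}")
```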
In a random-effects average, a single subject's data are treated as a fluctuation from the population average. Another type of analysis is a conjunction of activated areas in a sample of subjects: This type of analysis produces a map based on the common (overlapping) regions in each subject's activation map. It is expected that additional methods will emerge that promote understanding of brain function common to all, as well as of individual and group variations in brain function. This trend should be monitored for possible future Army applications in selection and assessment.

Database Aggregation and Translation for Meta-analyses

Several groups are sponsoring the creation of results databases and proposing standard formats for brain functional and anatomical imaging data, including multimodal techniques. Some are based on cortical surface maps, some on Montreal Neurological Institute coordinate statistical parametric mapping, and some on both. Clearinghouses are under construction for analysis tools (National Institutes of Health Blueprint for Neuroscience Research) and other resources. The Army can leverage these resources for meta-analyses of large data samples to seek out opportunities for further research.

Default Mode Networks

Since the work of Biswal et al. (1997), there has been expanding interest in the so-called default-mode network of the brain. This network is seen to consist of "naturally connected" areas and to encompass functional connectivity and effective connectivity, as well as the difference between the two. The topic is being pursued by those interested in neuroergonomics, and its promoters hypothesize that, ultimately, the efficient use of neural resources takes advantage of these default connections. This work could have implications for cognitive fatigue, learning, and performance optimization. Unlike research in connectomics, this research is noninvasive and is conducted on humans. The Army should monitor this trend for evidence that such a default network overlies our physical neural connections.

PRIORITIES FOR ARMY INVESTMENT

The committee was tasked to identify technology development opportunities and to recommend those worthy of investment in the near, medium, and far terms. These technology development opportunities, all of which have been discussed earlier in this chapter, were judged to be "high priority" (Table 7-1), "priority" (Table 7-2), or "possible future opportunities" (Table 7-3). The committee asked four questions as it decided which opportunities to include in the tables:

- Should the Army fund the technology?
- Should the Army maintain expertise in the technology?
- Is it likely that the technology, if successful, will have a significant impact?
- Will there need to be advances in subordinate technologies, such as robust, ruggedized sensors and noise-filtering algorithms, before the technology can be implemented?

TABLE 7-1 High-Priority Opportunities for Army Investment in Neuroscience Technologies (Recommendation 14)

Technology Opportunity | ME/RE | Time Frame(a) | Commercial Investment | Academic Investment
Field-deployable biomarkers of neural state | x x | Ongoing | L | M
In-helmet EEG for brain–machine interface | x x | Medium term | M | L
Signal processing and multimodal data fusion, including imaging modalities such as MRI, fMRI, DTI, DSI, PET, and MEG and physiological measures such as heartbeat, interbeat intervals, GSR, optical computer recognition, eye tracking, and pupilometry | x x | Ongoing | M | H
Soldier models and biomarkers for sleep | x | Ongoing | M | M
Vertical fMRI | x | Medium term | L | L
Fatigue prediction models | x | Medium term | L | M
Behavioral measures of fatigue | x | Medium term | M | L
Prospective biomarkers for predictive measures of soldier response to environmental stress, including hypoxic and thermal challenges | x x | Medium term | L | L
NIRS/DOT | x x | Medium term | L | L
Biomedical standards and models for head impact protection, including torso protection from blast | x x | Medium term | M | M
Threat assessment augmentation | x | Medium term | M | M
fMRI paradigms of military interest | x | Ongoing | L | M

NOTE: ME, mission-enabling; RE, research-enabling; L/M/H, low, medium, or high current investment; EEG, electroencephalography; MRI, magnetic resonance imaging; fMRI, functional magnetic resonance imaging; DTI, diffusion tensor imaging; DSI, diffusion spectrum imaging; PET, positron emission tomography; MEG, magnetoencephalography; NIRS, near-infrared spectroscopy; DOT, diffuse optical tomography; GSR, galvanic skin response.
(a) In this column, "medium term" means between 5 and 10 years and "ongoing" means that results will be available within 5 years, but continuing investment is recommended to stay at the forefront of the technology.
SOURCE: Committee-generated.

TABLE 7-2 Priority Opportunities for Army Investment in Neuroscience Technologies (Recommendation 15)

Technology Opportunity | ME/RE | Time Frame(a) | Commercial Investment | Academic Investment
Haptic feedback with VR | x | Medium term | H | L
Augmented reality (virtual overlay onto real world) | x x | Medium term | H | H
In-helmet EEG for cognitive state detection and threat assessment | x x | Medium term | L | M
Information workload management | x | Far term | L | M
Time-locked, in-magnet VR and monitoring for fMRI | x | Medium term | L | M
Immersive, in-magnet virtual reality | x | Near term | L | M
EEG physiology | x x | Far term | L | H
Uses of TMS for attention enhancement | x | Medium term | L | M
In-vehicle TMS deployment | x | Far term | L | L
Heartbeat variability | x x | Near and medium term | L | H
Galvanic skin response | x x | Near and medium term | H | L

NOTE: ME, mission-enabling; RE, research-enabling; L/M/H, low, medium, or high current investment; VR, virtual reality; TMS, transcranial magnetic stimulation.
(a) In this column, "near term" means within 5 years, "medium term" means between 5 and 10 years, and "far term" means 10-20 years.
SOURCE: Committee-generated.

TABLE 7-3 Possible Future Opportunities (Neuroscience Areas Worthy of Monitoring for Future Army Investment)

Technology Opportunity | ME/RE | Time Frame(a) | Commercial Investment | Academic Investment
Brain–computer interface system (direct) | x | Far term | H | H
Imaging cognition | x | Far term | L | H
Neuropharmacological technology | x | Far term | M | M
Advanced fMRI data collection | x | Medium term | M | M
Averaging methodology for fMRI | x | Medium term | L | M
Brain database aggregation | x | Far term | M | M
Default mode networks | x x | Medium term | L | H
Inverse MRI | x | Medium term | L | M
Low-field MRI | x x | Far term | L | M
Uses of TMS for brain network inhibition | x | Far term | L | M
Safety of multiple exposures to TMS | x | Medium term | M | M
In-helmet TMS deployment | x | Far term | L | L
Connectomics | x | Far term | L | M
Atomic magnetometers | x x | Far term | M | M

NOTE: ME, mission-enabling; RE, research-enabling; L/M/H, low, medium, or high current investment; fMRI, functional magnetic resonance imaging; MRI, magnetic resonance imaging; TMS, transcranial magnetic stimulation.
(a) In this column, "medium term" means between 5 and 10 years and "far term" means 10-20 years.
SOURCE: Committee-generated.

The committee considered all of the topics in Tables 7-1 and 7-2 worthy of immediate investment but left their relative prioritization within each group to the Army. Initial priorities might depend, for instance, on the relative importance to the Army of the applications served; these priorities might then change based on research progress. As defined at the very beginning of this chapter, a technology is categorized as mission-enabling (ME in the tables) if it is instrumental in assisting the warfighter or commander in an operational mission or in a training or assessment mission. It is research-enabling (RE) if it is instrumental in filling a critical gap in current research capability. Research-enabling instruments are expected to be brought into service on a smaller scale to study and evaluate warfighter or commander performance, perhaps in the laboratory, perhaps in simulated environments. The research is expected to shed neuroscientific light on current or future Army training and doctrine and to yield concrete suggestions to improve
warfighter performance. Note that a technology may be both mission-enabling and research-enabling.

For each opportunity, the Time Frame column gives the committee's estimate of the time needed for development and an idea of when a particular technology will be fielded—that is, the duration of the investment before a product or instrument can be brought into service. The Current Investment columns give the level of funding, commercial and academic, being brought to bear on the particular technology in its envisioned Army application. Commercial investment comprises large investments by industry in for-profit ventures. Academic investment comprises investments by civilian funding agencies such as the National Institutes of Health or the National Science Foundation in university (or other academic) research. A high (H) level of current investment reflects sufficient external investment to develop the technology to the point where it can be used for Army applications without additional Army investment, although some funding might be required to adapt the technology to a specific application. A medium (M) level reflects funding that would, by itself, allow Army applications to be developed in two or three times the time shown (much slower development than with Army investment added to the external sources). A low (L) level means that there is little or no investment in Army applications and there will be no technology advance toward an Army application without Army support.

For the high-priority and priority technology development opportunities (Tables 7-1 and 7-2), the committee envisaged the Army employing a mix of internal research and personnel and externally supported research and personnel, as exemplified for research in other fields by Army University Affiliated Research Centers (UARCs), Collaborative Technology Alliances (CTAs), Multidisciplinary University Research Initiatives (MURIs), and faculty fellowships. In addition, as has already been mentioned several times, the committee envisioned the Army maintaining a constant level of expertise to monitor relevant areas of neuroscience research across the board. This would probably entail support from outside experts who would regularly report on progress throughout neuroscience and who might be members of a permanent body established to stay abreast of developments.

Aside from distinguishing between priority and high-priority opportunities, the committee did not prioritize the technology opportunities within a particular table, and all are recommended to receive some level of investment by the Army. The opportunities in Table 7-1 are recommended for initial long-term (5 years or more) commitments. Those in Table 7-2 are recommended for limited (2- to 3-year initial) commitments to augment the high-priority investments in Table 7-1 and to enable exploration of additional applications that could have a large impact. Their continued funding for longer periods will be guided by evaluations of their research progress. Table 7-3 lists possible future opportunities for consideration by the Army. In addition to recommending that the Army pursue the listed opportunities, the committee recommends that the Army enhance its existing in-house resources and research capabilities.
This would ensure that the Army has mechanisms for interacting with the academic and commercial communities engaged in relevant areas of research and technology development, to monitor progress, and to decide when future advances in neuroscience technology would merit Army investment.

REFERENCES

Biswal, B.B., J. Van Kylen, and J.S. Hyde. 1997. Simultaneous assessment of flow and BOLD signals in resting-state functional connectivity maps. NMR in Biomedicine 10(4-5): 165-170.
Blankenburg, F., C. Ruff, S. Bestmann, O. Josephs, R. Deichman, O. Bjoertomt, and J. Driver. 2008. Right parietal cortex and top-down visuospatial attention: Combined on-line rTMS and fMRI. Presented at HBM2008, the 14th Annual Meeting of the Organization for Human Brain Mapping, Melbourne, Australia, June 15-19.
Bourzac, K. 2008. TR10: Atomic magnetometers. Available at http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging08&id=20239&a=. Last accessed July 21, 2008.
Cheng, Y., J. Wang, T. Rao, Xi. He, and T. Xu. 2008. Pharmaceutical applications of dendrimers: Promising nanocarriers for drug delivery. Frontiers in Bioscience 13(4): 1447-1471.
Dinges, D.F., M.M. Mallis, G. Maislin, and J.W. Powell. 1998. Evaluation of Techniques for Ocular Measurement as an Index of Fatigue and the Basis for Alertness Management. Report No. DOT HS 808 762. Springfield, Va.: National Technical Information Service.
Genik, R.J., C.C. Green, F.X. Graydon, and R.E. Armstrong. 2005. Cognitive avionics and watching spaceflight crews think: Generation-after-next research tools in functional neuroimaging. Aviation, Space, and Environmental Medicine 76(Supplement 1): B208-B212.
Goldiez, B.F., A.M. Ahmad, and P.A. Hancock. 2007. Effects of augmented reality display settings on human wayfinding performance. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 37(5): 839-845.
Goldstone, R.L. 1998. Perceptual learning. Annual Review of Psychology 49: 585-612.
Greene, K. 2007a. Brain sensor for market research: A startup claims to read people's minds while they view ads. Available at http://www.technologyreview.com/Biztech/19833/?a=f. Last accessed July 23, 2008.
Greene, K. 2007b. Connecting your brain to the game: Using an EEG cap, a startup hopes to change the way people interact with video games. Available at http://www.technologyreview.com/Biztech/18276/?a=f. Last accessed July 23, 2008.
Hancock, P.A., and J.S. Warm. 1989. A dynamic model of stress and sustained attention. Human Factors 31: 519-537.
Harris, W.C., P.A. Hancock, and S.C. Harris. 2005. Information processing changes following extended stress. Military Psychology 17(2): 115-128.
Hirshberg, L.M., S. Chiu, and J.A. Frazier. 2005. Emerging brain-based interventions for children and adolescents: Overview and clinical perspective. Child and Adolescent Psychiatric Clinics of North America 14(1): 1-19.
Huettel, S.A., A.W. Song, and G. McCarthy. 2004. Functional Magnetic Resonance Imaging. Sunderland, Mass.: Sinauer Associates, Inc.
Klein, G.A. 1989. Recognition-primed decision. Advances in Man-Machine Systems Research 5: 47-92.
Kraus, R.H., Jr., P. Volegov, A. Matlachov, and M. Espy. 2007. Toward direct neural current imaging by resonant mechanisms at ultra-low field. NeuroImage 39(1): 310-317.
Lee, M., T.T. Chen, M.L. Iruela-Arispe, B.M. Wu, and J.C.Y. Dunn. 2007. Modulation of protein delivery from modular polymer scaffolds. Biomaterials 28(10): 1862-1870.
Lin, F.-H., T. Witzel, J.B. Mandeville, J.R. Polimeni, T.A. Zeffiro, D.N. Greve, G. Wiggins, L.L. Wald, and J.W. Belliveau. 2008. Event-related single-shot volumetric functional magnetic resonance inverse imaging of visual processing. NeuroImage 42(1): 230-247.
Lowe, C. 2008. Land Warrior needs work, soldiers say. Available at http://www.military.com/NewsContent/0,13319,161855,00.html. Last accessed July 23, 2008.
McDermott, R., S.K. Lee, B. ten Haken, A.H. Trabesinger, A. Pines, and J. Clarke. 2004. Microtesla MRI with a superconducting quantum interference device. Proceedings of the National Academy of Sciences of the United States of America 101(21): 7857-7861.
Nature. 2007. Mind games: How not to mix politics and science. Nature 450(7169): 457.
NRC (National Research Council). 1997. Tactical Display for Soldiers: Human Factors Considerations. Washington, D.C.: National Academy Press.
Owen, A.M., R. Epstein, and I.S. Johnsrude. 2002. fMRI: Applications to cognitive neuroscience. Pp. 312-327 in Functional MRI. P. Jezzard, P.M. Matthews, and S.M. Smith, eds. New York, N.Y.: Oxford University Press.
Rogers, T.B., N.A. Kuiper, and W.S. Kirker. 1977. Self-reference and the encoding of personal information. Journal of Personality and Social Psychology 35(9): 677-688.
Scales, R.H. 2006. Clausewitz and World War IV. Available at http://www.afji.com/2006/07/1866019/. Last accessed July 23, 2008.
Schummers, J., H. Yu, and M. Sur. 2008. Tuned responses of astrocytes and their influence on hemodynamic signals in the visual cortex. Science 320(5883): 1638-1643.
Shepard, R.N., and L.A. Cooper. 1982. Mental Images and Their Transformations. Cambridge, Mass.: MIT Press.
Shepard, R.N., and J. Metzler. 1971. Mental rotation of three-dimensional objects. Science 171(3972): 701-703.
Tarr, M.J., and W.H. Warren. 2002. Virtual reality in behavioral neuroscience and beyond. Nature Neuroscience 5(11): 1089-1092.