Emerging Cognitive Neuroscience and Related Technologies

3 Emerging Areas of Cognitive Neuroscience and Neurotechnologies

INTRODUCTION

Much of the foundation of current neuroscience research is discussed in Chapter 2. In Chapter 3, the discussion is expanded to include potential applications for neuroscience that may emerge in the next two decades. The committee points out that the two areas it covers in this chapter—computational biology and distributed human-machine systems—are not meant to represent the entire spectrum of possibilities. Rather, they are areas the committee selected as examples. Some of these applications are already beginning to appear and may one day impact the intelligence and military communities.

COMPUTATIONAL BIOLOGY APPLIED TO COGNITION, FUNCTIONAL NEUROIMAGING, GENOMICS, AND PROTEOMICS

While computing is essentially ubiquitous in the fields of neuroscience and cognition, it can broadly be said to have impact in two areas. The first is analysis, with computation used to analyze the enormous quantities of data acquired from genome sequencing, neuroimaging, ribonucleic acid (RNA) expression arrays, and the study of proteomics, and to correlate them with experimental conditions to eventually understand the biology of the nervous system and cognition. Analysis broadly includes what falls traditionally in the areas of bioinformatics. The second area where computation has an impact is modeling. It entails putting a hypothesis into concrete computational form in an attempt to validate the hypothesis and/or make a prediction. In biophysical models the physical behavior of the system is modeled, and in biomathematical models the quantities associated with the system are abstracted mathematically and studied. In some cases,
a model is of both kinds. The distinction between the two kinds of models is not always sharp because data analysis often makes basic assumptions about the data fitting a specific model. To understand how the two categories are affected by the limitations of computational technology, each is first discussed separately. Then the concerns that are apparent when they are considered together are discussed.

Analysis of Experimental Data

The analysis of genetic data,1 proteome data,2 morphologic data,3 and neuroimaging data4 is the most common use of computation in neuroscience. Computing has played a critical role in enabling the technologies that produce these data. This role ranges from data acquisition to creating the algorithms used to tease the signals out of the data. The hardware requirements for analyzing large data sets are relatively straightforward. Applications based on database search and local alignment depend on a style of high-performance computing (HPC) known as "embarrassingly parallel."5 This generally requires between 10 and 1,000 identical servers, each of which is given a portion of the search and alignment task to accomplish. Embarrassingly parallel computing requires little coordination between servers. The cost of HPC clusters is now well within the reach of individual departments and research groups owing to the emergence of Beowulf-class computer clusters, which are built on commodity hardware deploying Linux operating systems and open source software.6 The challenges lie in the development of software programs for analysis of large data sets. This requires advances in the science of bioinformatics, also known as computational biology. Bioinformatics is a multidisciplinary field at the intersection of computer science, statistics, and molecular biology. Neuroscience

1 Sources of genetic data are microarrays, whole genome sequences, and epigenetic changes, among others.
2 An example of proteome data is data that comes from mass spectroscopy.

3 Morphologic data comes from quantitation of cells, phenotyping, and locating over time.

4 Techniques that generate functional neuroimaging data include electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and near-infrared spectroscopy (NIRS).

5 "Embarrassingly parallel" tasks typically include doing an identical problem over and over again with different starting configurations or different random seeds. In these cases, there is generally little or no communication between the different processors. For additional information, see the Web site of the University of Melbourne's Department of Computer Science and Software Engineering at http://www.cs.mu.oz.au/498/notes/node40.html. Last accessed on January 18, 2008.

6 "Beowulf clusters" describes a set of identical computing nodes that are connected together somewhat loosely in order to enable communication between the nodes. In general, the individual nodes are off-the-shelf computers connected to one another through a commodity means (usually just Ethernet). For additional information, see the Beowulf Project Overview Web site at http://www.beowulf.org/overview/index.html. Last accessed on January 18, 2008.
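The "embarrassingly parallel" style of database search described above can be sketched in a few lines: queries are farmed out to worker processes that never communicate with one another. This is only an illustration; the toy sequences and the 3-mer scoring are invented stand-ins for a real search-and-alignment tool such as BLAST.

```python
# Sketch of an embarrassingly parallel database search. Each worker scores
# its queries independently; there is no communication between workers.
# The database, queries, and 3-mer scoring are toy stand-ins.
from multiprocessing import Pool

DATABASE = ["ACGTACGT", "TTGACCA", "GGGCGTAC", "ACGTTTGA"]

def best_match(query):
    """Return the database sequence sharing the most 3-mers with query."""
    kmers = {query[i:i + 3] for i in range(len(query) - 2)}
    def score(seq):
        return sum(seq[i:i + 3] in kmers for i in range(len(seq) - 2))
    return max(DATABASE, key=score)

if __name__ == "__main__":
    queries = ["ACGTAC", "TTGACC", "GGGCGT"]
    with Pool(processes=2) as pool:   # each query is independent work
        hits = pool.map(best_match, queries)
    print(hits)
```

Because the workers share nothing, throughput scales almost linearly with the number of servers, which is why this workload suits commodity Beowulf-class clusters.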
benefits from the developments of analytical techniques that are applicable to data from many areas of biology, such as genomics and proteomics. The science has evolved to allow the large-scale integration of data from a variety of sources for analysis of complete complex systems rather than individual parts (Drabløs et al., 2004).7 For instance, algorithms have been developed to assemble and align genome sequences, to identify genes and their functions, to analyze data from large-scale expression arrays, and to build integrated databases.8

There should be a significant role for computing in the analysis of neuroimaging data. It is somewhat remarkable that computing has not played a larger role in neuroimaging data analysis. In many ways, the field should be one of the most important scientific drivers for high-performance computing. Imaging data sets are among the largest sets of data produced by any scientific field. Additionally, distinguishing the signals from the noise is extremely complicated in neuroimaging data. However, neuroimaging analysis has been slow, compared with other kinds of computational analysis, to apply state-of-the-art scientific computing techniques. Significant breakthroughs might occur if massively parallel computing were applied to analyzing the neuroimaging data that currently exist, but there is still no significant effort to engage computational resources at such a scale to solve this problem. The mathematical effort that is being applied in the area is somewhat closer to the state of the art in that field, but again, a push by the neuroimaging community to engage more applied mathematicians could well lead to improved results. There is a fundamental limitation, however, on the potential for breakthroughs in neuroimaging analysis.
It is impossible to detect the electrical or magnetic signals beyond a certain temporal and spatial resolution, given the shielding provided by the skull and brain. In the case of magnetic imaging, the fields that are used to stimulate natural magnetic activity in the brain are already thought to be close to the limit of human safety. Although one might surmise that advances in magnet technology could lead to much higher resolution, it would probably be too dangerous to perform these experiments on human subjects. As long as these temporal and spatial resolutions lie above those needed for individual neuronal monitoring, interpreting the experimental data will have to rely on assumptions. There is potential for improved electrodes that could be inserted directly into the brain to do very precise monitoring of individual neurons, but it is extremely unlikely that such technology could be used for anything other than very small numbers of neurons without significantly damaging the subject. For this reason one would have

7 For additional information, see Drabløs et al. (2004) at http://www.ime.ntnu.no/infosam2020/oldpage/work_groups/wg_bioinformatics.html. Last accessed on January 4, 2008.

8 A completely inclusive list would also include protein modifications; protein-protein, protein-nucleic acid, and protein-lipid interactions; gene regulation through alternative splicing and microRNAs; and multiscalar integration to deduce and predict signaling pathways and information networks. However, given the fact that new technologies are being developed constantly, the committee chose to limit this list to a few of the most widely used key technologies as representative examples.
to be extremely cautious about predicting revolutionary improvements in gathering neuroscience data in the next two decades.

Physiologically Plausible Models of Human Cognition and Affect

Modeling and simulation represent an entirely different way to apply computation to cognitive and neural sciences, and here the future is much harder to predict. While data analysis will most likely be limited in the near term by the physical limitations associated with data collection, modeling will in some respects be limited only by the resources that any group would choose to devote to it. Building a neurophysiologically plausible model of the whole brain will most likely remain impossible for the near future. Because it is now possible to model large neuronal networks with biophysically detailed neurons and synaptic properties that approach the numbers and complexity found in living nervous systems (Silver et al., 2007), it is tempting to extrapolate computational power 20 years into the future and compare it with the typical number of neurons in a human brain. This would be a deceptive comparison, however, for the primary difficulty in building a model of the brain is not the lack of computational power but inadequate understanding of how to model in detail the neurophysiological, cognitive, and affective aspects of brain function. Despite tremendous advances in general understanding of how individual parts of the brain work and communicate among themselves, it is highly unlikely under current assumptions that research breakthroughs allowing a neurophysiologically plausible model of a whole brain will occur in the next two decades. This is not just because the relationship among parts of the brain is so complex, but also because collective understanding of the relationship between the observed neurophysiological aspects of the brain and the more subtle questions associated with human cognition is so limited.
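To give a sense of what even the simplest network model entails, the sketch below steps a toy population of leaky integrate-and-fire neurons. Every constant is illustrative, and a genuinely biophysical model would track vastly more state per neuron.

```python
# Toy leaky integrate-and-fire (LIF) network. All constants here are
# illustrative; real biophysical models track far more state per neuron.
import random

def simulate(n_neurons=50, steps=200, seed=1):
    random.seed(seed)
    tau, v_thresh, v_reset, w = 20.0, 1.0, 0.0, 0.02
    v = [0.0] * n_neurons              # membrane potentials above rest
    total_spikes = 0
    for _ in range(steps):
        fired = {i for i in range(n_neurons) if v[i] >= v_thresh}
        total_spikes += len(fired)
        drive = w * len(fired)         # all-to-all recurrent excitation
        for i in range(n_neurons):
            if i in fired:
                v[i] = v_reset         # reset after a spike
            else:
                noise = random.uniform(0.0, 0.12)   # external input
                v[i] += -v[i] / tau + noise + drive
    return total_spikes
```

Scaling such a loop to the roughly 10^11 neurons of a human brain is the easy part to quantify; as the text argues, knowing what dynamics to put inside each unit is the hard part.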
Moreover, the brain is not a standalone organ but interacts in complex ways with monitoring and regulatory systems throughout the body. The committee does not wish to imply that a fairly comprehensive understanding of the neural systems (including the brain) of other simpler organisms will not be gained in the next two decades. It is not unreasonable to believe that a simple model organism such as C. elegans will have its neural system understood in two decades, or that studies on transgenic animals such as mice will yield additional understanding about mammalian brains in general. However, given the in many ways unique nature of the human neural system and the difficulty of doing experiments with similar or equivalent organisms, even the best such work would still not yield data on important aspects of the human neural system. Based on its current understanding, the committee hopes and expects that neuroimaging technology will eventually demonstrate a relatively straightforward neural architecture that can be implemented in a computational model, but such an imaging technology will most likely require a spatial and temporal resolution
well beyond anything that currently exists or will exist in the near future. Of course, it is also very possible that the architecture of the brain could turn out to be something that is very difficult to simulate using today's digital or analog computing architectures. High-fidelity modeling of the human brain will require breakthroughs in collective understanding of how to model cognition and affect. Approaches to this challenge are likely to leverage the property of emergence, which is found in many natural systems—that is, the ability of these systems to generate adaptive complexity by means of elegant simpler principles. The principle of emergent behavior is especially complex with respect to future predictions. It is likely to play a key role in the ultimate understanding of how the brain works, but no one has been able to capture it in a way that fits current understanding of brain function. Given the significance of emergent behavior, however, it is important to recognize that organizations and nations that are able to harness their creativity, expertise in cognitive neurosciences, and computational resources to such an end will have a large degree of leverage in defining the next revolution in technology (Goldstein, 1999).

Finding 3-1. The global scientific computing community is approaching an era in which high-end computing will, in principle, be sufficient in capacity and computational power to model the human brain. However, there does not yet exist either an adequate and detailed understanding of how such modeling can be done, or a complete model of how the brain interacts with complex regulatory and monitoring systems throughout the body. These and other difficulties make it highly unlikely that in the next two decades anyone could build a neurophysiologically plausible model of the whole brain and its array of specialized and general-purpose higher cognitive functions.
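The property of emergence invoked above can be made concrete with a standard toy example (not a brain model): Conway's Game of Life, where purely local rules produce a coherent moving structure, the glider, that appears nowhere in the rules themselves.

```python
# Conway's Game of Life: three local rules, yet a "glider" emerges and
# travels diagonally, reappearing shifted by (1, 1) every 4 generations.
from collections import Counter

def step(live):
    """live: set of (x, y) cells. Return the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# state is now the original glider translated one cell diagonally
```

Nothing in the birth/survival rules mentions motion; the traveling pattern is a property of the system as a whole, which is the sense of "adaptive complexity from elegant simpler principles" used in the text.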
PROTEOMICS AND GENOMICS

Science of Genomics and Proteomics

There are additional computational methodologies that will drive fundamental understanding of how the brain works. They will be concentrated in genomics and proteomics. Any discussion of modern biology must refer to the revolutionary role that genomics and proteomics are playing, and will continue to play, over the next few decades. Neuroscience, and biology in general, has until recently been considered to be an observational science. The fundamental paradigm was making observations of a system or parts of it and correlating those observations with a specific behavior or result. Qualitative rules were then developed that could be applied with some fidelity to draw conclusions about what role each part of the system played in the whole system. One could also begin to describe how variations in
each of these parts drove differences between different organisms within a specific species and how one species differed from another. This approach, carried on in a heroic way during the nineteenth and twentieth centuries by outstanding biologists, led to a remarkable knowledge base for many different aspects of biological science. However, it was very difficult to use these broad observations to build a formal set of rules that could be used to gain a greater understanding of the system as a whole. Perhaps even more important, this work did not give significant insight into the mechanisms that determined why these variations appeared from organism to organism.

The genomics revolution has changed this picture remarkably. While the fundamental role of genes has been understood since the time of Mendel, the ability of scientists to decode individual genomes letter by letter has become a powerful tool for biology and neuroscience.9 It allows one to build a quantitative scientific basis for understanding the effect of specific, well-defined, and easily measured genomic sequences on traits or behavior. Furthermore, it allows what is known as comparative genomics, which is the science of understanding relationships between genomes of different species or strains within a species. This powerful tool is not useful just for its ability to classify differences; it also allows experiments to be done on one species that can be somewhat faithfully extrapolated to other species. Because of the simple four-letter alphabet of genomics, computers can be used to make quantitative predictions about the behavior and traits of individuals based just on their genome. This is a tremendous improvement over a system of more qualitative observations.
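At its simplest, the kind of quantitative, alphabet-driven comparison described above can be sketched as an alignment-free k-mer similarity. The three sequences below are fabricated for illustration, not real genomic data.

```python
# Alignment-free sequence comparison: Jaccard similarity of k-mer sets.
# All three sequences are fabricated for illustration only.
def kmer_set(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=4):
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

seq_a = "ATGGCCCTGTGGATGCGCCTC"
seq_b = "ATGGCCCTGTGGATGCGCTTC"   # one substitution relative to seq_a
seq_c = "ATGTCTAAAGGTGAAGAATTA"   # unrelated toy sequence

# seq_a scores as far more similar to seq_b than to seq_c:
closer = jaccard(seq_a, seq_b) > jaccard(seq_a, seq_c)
```

Real comparative genomics relies on full alignment pipelines rather than raw k-mer overlap, but the principle is the same: the four-letter alphabet turns relatedness into arithmetic on discrete symbols.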
More important, genomics is enabling not only an understanding of what determines differences from organism to organism but also a mechanistic understanding of why the individual differences occur. It is a tenet of biology that the expression of genes occurs to a large extent through their manifestation as the instruction code for building the proteins that make up the fundamental machinery of cells. Gene expression can be analyzed at the single cell level to provide insights into how neurons and glial cells respond to different physiologic signals and also to characterize regional differences in the same types of cells (Ng et al., 2007). High-throughput methods permit genome-wide searches to discover genes that are uniquely expressed in brain circuits and regions that control behavior in animal model systems. In situ hybridization then permits anatomic localization of the expressed genes.10 However, not all genes that are expressed are translated into proteins. The study of proteins is known as proteomics, and it provides an even closer link

9 Gregor Mendel, a 19th century plant geneticist, discovered the underlying principles of heredity common to all life forms. While understanding the fundamental role that genes play was the result of work of the greater scientific community that occurred largely after Mendel, his initial work was seminal.

10 For additional information, see the Allen Brain Atlas of the Allen Institute for Brain Science at http://www.brain-map.org. Last accessed on January 4, 2008.
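The core arithmetic of such a genome-wide expression screen can be sketched with a two-sample t statistic per gene. The gene names, expression values, and cutoff below are all invented for illustration.

```python
# Differential-expression sketch: flag genes whose mean expression differs
# strongly between two cell populations (Welch-style t statistic).
# Gene names, values, and the cutoff are fabricated for illustration.
from statistics import mean, stdev

def t_stat(a, b):
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

expression = {
    # gene: (region A replicates, region B replicates), log2 units
    "gene_region_specific": ([9.1, 9.4, 9.0], [2.1, 2.4, 2.0]),
    "gene_housekeeping":    ([8.0, 8.2, 7.9], [8.1, 8.0, 8.2]),
}

hits = [g for g, (a, b) in expression.items() if abs(t_stat(a, b)) > 4.0]
```

A production screen would add multiple-testing correction across tens of thousands of genes, but the per-gene computation is essentially this simple.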
to the understanding of the fundamental physical basis for processes in cells. The computational questions surrounding proteomics are generally even more complex than those of genomics and focus on developing complex graphs of protein interactions representing metabolic function. Because a protein's structure determines its function, there are also additional computational needs related to modeling protein structure at the molecular level to better understand the function a protein performs.

Implications of Proteomics and Genomics Research for Neuroscience

Probably the most obvious and talked about impact that genomics research will have on neuroscience is in the area of genetic testing. Once significant genomic information is available about the general human population, as there will be in 20 years' time, correlating genetic markers not only with intelligence but also with the ability to learn and be trained to perform a variety of specific physical and mental tasks will become a relatively straightforward exercise.11 Such screening would allow the objective identification of the differential vulnerability of people to intense stress, sleep loss, drug effects, hypoxia, and dehydration. It would be a tremendous advantage to any organization to start with a pool of trainees who had been selected via genetic screening such that most of them, not just 1 to 10 percent, would go on to perform a given task in an outstanding manner. The abilities could include such things as learning a foreign language, performing a task on limited amounts of sleep, and performing as an elite athlete. Currently, screening for such abilities is done through different types of testing, but to a large extent these tests measure existing aptitude, not inherent potential.
It is important to point out that while the potential for genetic screening exists, there is no guarantee that a gene or genes exist that will directly control the trait of interest. It is widely understood, for instance, that "intelligence" is a difficult characteristic to define, and it is unlikely that a small set of genes would be an indicator of intelligence. It may even turn out that, for all traits of interest, the effect of the genes is negligible compared to the effect of the environment. Of even greater concern would be the incorrect use of genetic testing so that excellent candidates are turned away due to a badly validated testing procedure. Proteomics provides an extremely strong scientific framework for understanding the effect of neuroenhancing pharmaceuticals. As the designing of

11 The key point is that doing the correlation itself will be a relatively straightforward exercise. The result of the exercise will often be that the correlation is weak to non-existent. It is important to emphasize that strong correlations will not necessarily be found, but that it will be easy to do such a search and, if such correlations exist, they will be easy to find. The committee does not doubt that there will probably be disappointment in those who are looking for simple genetic correlations with broad traits. At the same time, the committee would not be surprised if there were a few remarkable correlations found with a handful of traits.
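The "straightforward exercise" described above is, at bottom, a correlation computation like the sketch below. The allele counts and trait scores are fabricated; with real data, the resulting r would typically be far weaker than in this contrived example.

```python
# Marker-trait correlation sketch: Pearson r between allele counts
# (0, 1, or 2 copies of a marker allele) and a trait score.
# All data are fabricated; no real association is implied.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

allele_counts = [0, 1, 2, 0, 1, 2, 1, 0]
trait_scores = [4.9, 5.4, 6.1, 5.1, 5.6, 5.9, 5.3, 5.0]

r = pearson_r(allele_counts, trait_scores)  # strong only because contrived
```

The ease of running this computation at genome scale is precisely the committee's point; whether any strong r values exist to be found is the open question.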
drugs becomes more computational and less a matter of hit-and-miss testing, understanding the structure and function of the proteins of interest in the relevant pathways becomes a vital piece of the drug design puzzle. One can begin to rationally design active molecules based on the structure of the molecular actors of interest. Additionally, one can understand the effect on the larger metabolic system of shutting down a particular pathway so that side effects can be better understood from the start. As the metabolic machinery of the brain becomes better understood and metabolic engineering and proteomics become understood jointly in a systems fashion, it is possible that neuroenhancing (or perhaps even neurodefensive) compounds could make a significant leap forward. It is important to note, however, that rational drug design is and most likely will remain a very difficult endeavor.

Genomics and proteomics are already making certain aspects of imaging significantly easier. One example is the creation of transgenic mice, which have been engineered to express different fluorescent proteins in different situations. This allows the visualization of specific cellular behaviors in live animals under realistic conditions. Such research holds tremendous value for the real-time imaging of neural activity in animals that could in turn be used to understand similar neural activity in people. It also serves as evidence of the power and potential of genetic engineering. It should be noted that the techniques involved in adding fluorescent markers to proteins are much better understood than the general genetic enhancement that has been speculated about in the literature. It is not out of the question, however, that advances in genomics could enable more significant advances in genetic engineering.
Current ethical constraints, combined with the significantly longer life cycle of humans, make it unlikely that such work will have an effect in the next two decades. Finally, as knowledge of these subjects grows, it may be possible to predict much more about individual abilities, capabilities, personality characteristics, and other traits from the genome; such information may be particularly useful to the intelligence community and the military.12

DISTRIBUTED HUMAN-MACHINE SYSTEMS

Advances in neurophysiological and cognitive science research have fueled a surge of research aimed at more effectively combining human and machine capabilities. Results of this research could give human performance an edge at both the individual and group levels. Though much of this research defies being assigned rigid boundaries between disciplines, for the sake of convenience the committee has organized this section into four discussion areas:

12 There is no doubt in the mind of the committee members that genomics and proteomics will play an increasingly large role in the future of cognitive neuroscience. However, given the fact that genomics/proteomics is largely recognized by the entire technical community as being important, and appears to be growing, the committee decided not to add a finding related to this topic.
• Brain-machine interfaces. This category includes direct brain-machine interfaces for control of hardware and software systems. Traditional human interface technologies, such as visualization (Thomas and Cook, 2005), are not considered in this report.

• Robotic prostheses and orthotics. Included here are replacement body parts (robotic prostheses) and mechanical enhancement devices (robotic orthotics) designed to improve or extend human performance in the physical domain.

• Cognitive and sensory prostheses. These technologies are designed to improve or extend human performance in the cognitive domain through sensory substitution and enhancement capabilities or by continually sensing operator state and providing transparent augmentation of operator capabilities.

• Software and robotic assistants. These technologies also are designed to improve or extend human performance in the physical and/or cognitive domains. However, unlike the first three areas, they achieve their effect by interacting with the operator(s) as assistants or team members rather than as a direct prosthetic or orthotic extension of the human body, brain, or senses. Agent-based technologies for social and psychological simulations are not considered in this report.

Brain-Machine Interfaces

The basis of brain-machine interfaces (BMIs) is the capture of various forms of dynamically varying energy emissions from the working brain by means of functional neuroimaging devices.
These devices include the electroencephalograph (EEG) and the magnetoencephalograph (MEG), for the detection of the electrical and magnetic energy of working neurons; functional near-infrared spectroscopy (fNIRS), which uses light to measure the hemodynamic response of functional regions of the brain; and functional magnetic resonance imaging (fMRI), which uses powerful magnetic fields to detect magnetic resonance differences in blood in different areas of the brain and allows them to be correlated to differences in neuronal activity (i.e., oxygen consumption). Positron emission tomography (PET) uses a gamma ray detector that locates and records bioactive radioactive assays injected into the blood, thereby measuring the metabolism of neurons in the functional regions of the brain. The brain is so remarkably flexible that people can, after just a few hours of feedback training, learn to activate and deactivate functional regions and to vary the brain's electrical distribution, metabolic activity, and brain wave patterns. A BMI takes advantage of neuroplasticity to activate and control electronic or mechanical devices (Birbaumer, 2006). A surprising amount of work is being done on connecting the brain directly to prosthetics. EEG and MEG scanners can record oscillation signals from the whole brain or functionally specific regions and activate a device when the subject specifically controls this activity. Slow cortical potentials (SCPs) and sensorimotor rhythm (SMR) have both been used
to activate electronic devices. Evoked potentials recorded by EEG, especially the positive deflecting waveform that occurs approximately 300 msec following the evoking stimulus (the P300 wave), have been used to activate and even operate communications equipment (Birbaumer, 2007). The blood-oxygenation-level-dependent (BOLD) magnetic resonance (MR) signal and NIRS instruments measuring cortical blood flow have also been used as a BMI (Birbaumer, 2007). Reinforcement learning and other algorithmic techniques have been used to rapidly identify the neural signatures of intentional actions and train BMI machine learning subsystems (DiGiovanna et al., 2007). Much of the research to date has had the objective of enabling people to exert some degree of control over a prosthetic, pointing, or communication device (Schwartz, 2004). Neuroplasticity is the critical basis for this work. The working brain adapts its functioning very quickly and readily given adequate, timely, and veridical feedback, even with informal training. With operant conditioning the brain can quickly learn and adapt to new kinds of interaction demands (Bach-y-Rita, 1996). However, the limits of neuroplasticity are still poorly understood. Another important issue is the complexity of response that can be driven by BMIs. While it is beyond the scope of this report to attempt to compare the complexity of using a human hand and arm with that of the high-level operation of an aircraft, the committee speculates that given sufficiently rich sensorimotor feedback (essentially a sensory prosthetic—see below), the levels of complexity may be within the same order of magnitude. However, the training and detection-of-brain-activity capabilities needed to bring BMIs to this level of complexity and specificity are still beyond current technical reach, and the ultimate range of the associated physiological potential and limitations is not yet well understood.
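The oldest trick behind P300-based interfaces, averaging many stimulus-locked epochs so that noise cancels while the bump near 300 msec survives, can be sketched on synthetic data. The signal model below is invented; real EEG pipelines add filtering and artifact rejection.

```python
# P300-style epoch averaging on synthetic data: noise cancels across
# epochs while a bump near 300 ms survives. The signal model is invented.
import math
import random

random.seed(0)
FS = 250          # sampling rate in Hz; one 1-second epoch per stimulus

def epoch(target):
    """One synthetic epoch: Gaussian noise, plus a P300-like bump if target."""
    samples = []
    for i in range(FS):
        t = i / FS
        bump = math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if target else 0.0
        samples.append(2.0 * bump + random.gauss(0.0, 1.0))
    return samples

def grand_average(epochs):
    return [sum(col) / len(epochs) for col in zip(*epochs)]

avg = grand_average([epoch(target=True) for _ in range(100)])
peak_ms = 1000 * avg.index(max(avg)) / FS   # latency of the averaged peak
```

A P300 speller decides which stimulus the user attended by comparing such averages across candidate stimuli, which is why these systems need many repetitions per selection.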
More important, from the point of view of enhancement (vs. rehabilitation) of human performance, it has not yet been established that BMI is superior to other methods of direct control of computing functions and robotic vehicles. Promising areas for continued BMI research include the control of robotic orthotics (see the section "Robotic Prostheses and Orthotics") and the management of information flow to an individual based on changes in the user's cognitive state (see the section "Cognitive Prostheses").

Finding 3-2. Research on brain-machine interfaces (BMIs) has progressed steadily, with the principal objective being to allow people to exert some degree of control over a prosthetic, pointing and tracking, or communication device. The ultimate range of physiological potential and limitations for BMIs is not yet well understood. From the point of view of enhancement (versus rehabilitation) of human performance, it has not yet been established that BMI is superior to other methods of control of computing functions and robotic vehicles. Promising areas for continued BMI research include the control of robotic orthotics and the management of information flow to an individual based on changes in the user's cognitive state.
Robotic Prostheses and Orthotics

Robotic prostheses and orthotics are mechatronic systems that can be considered a form of assistive robotics. Prostheses replace a body part, and orthotic devices work in cooperation with the body, which helps to control them and/or assist in movement. While there are many applications for orthotic devices made for rehabilitative purposes (Krebs et al., 2004), the focus here is on devices that have been designed to improve or extend human performance in the physical domain. Krebs et al. (2004, p. 353) observed that progress in the development of limb prostheses "has been modest. This may be partly due to irregular interest in their development, which tends to correlate with major wars." As a result, a 1991 survey estimated that "only 10% of prosthesis users in the U.S. (5% of the upper-limb amputee population) operate externally powered devices" (Krebs et al., 2004, p. 354). Until effective methods for EMG control, direct and natural feedback, and impedance control (e.g., for contact tasks) are developed, the adoption of externally powered artificial limbs will continue to be slow, although research continues on anatomically analogous model limbs with some success (Herr et al., 2003; Schwartz, 2004).

The problem of impedance control warrants further explanation. When a powered prosthesis under human control is designed to grasp an object, there must be feedback to the human for the amount of force that is being applied. The fact that some objects are soft and others hard means that it is not just information on the amount of force that must be fed back, but the force-compliance relationship as well—that is, how much the object is being squeezed to achieve a given amount of force. The human motor system normally manages this relationship unconsciously.

With respect to robotic orthotics, a variety of exoskeletons have been developed to increase human strength, endurance, and speed.
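The force-compliance relationship in grasping described above can be sketched with the simplest possible impedance model, a linear spring; the stiffness values below are illustrative and not taken from any real prosthesis.

```python
# Minimal impedance sketch for grasping: contact force grows with squeeze
# according to the object's stiffness, so soft and hard objects need very
# different amounts of travel for the same force. Values are illustrative.
def grasp_force(squeeze_m, stiffness_n_per_m):
    """Contact force (N) from a linear-spring object model."""
    return stiffness_n_per_m * squeeze_m

def squeeze_for_force(target_force_n, stiffness_n_per_m):
    """Gripper travel (m) needed to reach target_force_n on the object."""
    return target_force_n / stiffness_n_per_m

SOFT, HARD = 200.0, 20000.0                 # object stiffness, N/m

soft_travel = squeeze_for_force(5.0, SOFT)  # a 5 N grasp on a soft object
hard_travel = squeeze_for_force(5.0, HARD)  # the same grasp on a hard one
# The soft object must be squeezed 100x farther for the same force; this
# force-travel relationship is what must be fed back to the user.
```

Real objects also have damping and nonlinear stiffness, which is why impedance control for contact tasks remains a hard problem rather than a lookup table.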
It is difficult to say exactly what an exoskeleton is. To some extent, an automobile might be considered a kind of exoskeleton because it allows humans to move farther and faster and because the experience of driving can give the driver the feeling that the car is an extension of himself or herself. The goal of research on orthotic exoskeletons is for the interface to be so transparent that there is no learning involved: the user simply performs the task in the usual way, and the system responds as if the wearer had stronger, faster, or more accurate limbs. This requires that the device do three things: (1) determine the user's intent, (2) apply forces when and where appropriate, and (3) get out of the way of the user's natural movement (Pratt et al., 2004). Examples of orthotic exoskeletons include mechanical interfaces to the upper body that allow a person to lift and move extremely heavy loads (Kazerooni, 1996) and devices interfacing with the lower extremities that allow people to carry heavy loads for long periods of time or to cover long distances with minimal fatigue (Weiss, 2001; Kawamoto and Sankai, 2002; Walsh et al., 2007). One such device, the RoboKnee, is an endurance multiplier: a user wearing a 60 kg backpack can do one-legged deep knee bends for an unlimited time.
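The three requirements above can be caricatured in a few lines of control logic: intent is inferred from a force sensor, assistance is applied in proportion to that intent, and the actuator commands nothing when the wearer moves freely. This is a toy sketch, not the RoboKnee's actual controller; the threshold and gain values are invented.

```python
# Toy assist-as-needed controller for a knee exoskeleton.
# Intent is inferred from the wearer's own joint torque; the device
# adds a proportional boost and otherwise stays out of the way.
# Threshold and gain values are illustrative only.

INTENT_THRESHOLD = 5.0   # N*m below which we assume free movement
ASSIST_GAIN = 0.5        # fraction of the wearer's torque to add

def assist_torque(user_torque):
    """Return the torque the exoskeleton should add."""
    if abs(user_torque) < INTENT_THRESHOLD:
        return 0.0                      # requirement 3: get out of the way
    return ASSIST_GAIN * user_torque    # requirements 1 and 2

# Free swing of the leg: no assistance.
print(assist_torque(2.0))    # 0.0
# Deep knee bend under load: the device shares the effort.
print(assist_torque(40.0))   # 20.0
```

The dead band below the threshold is what makes the interface "transparent": during unloaded movement the wearer feels only his or her own limb.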
FIGURE 3-3 Visual prosthesis design for combat operations. Unlike current night vision systems (right), sensory substitution for night vision would not impede use of the eyes at night (left). SOURCE: Reprinted with permission from Anil Raj and the Institute for Human and Machine Cognition at the University of West Florida.

Sensory substitution studies in individuals who have lost a sensory function through disease, congenital defect, or physical trauma have demonstrated enhancement of situation awareness (Ptito et al., 2005; Veraart et al., 2004; Kaczmarek et al., 1985; Saunders et al., 1981). Additionally, projections from the visual, auditory, and proprioceptive sensory systems interact in the brain stem (Meredith and Stein, 1986a,b), indicating that the sensory channels operate cross-modally, possibly to reduce ambiguities in data sensed by single end-organ receptors. This provides a rationale for incorporating similar cross-modal techniques into multiple-sensory-channel substitution interfaces for the control of complex systems such as teleoperated robots, aircraft, and vehicles; it is further supported by the adverse effects on workload, performance, and situational awareness seen when veridical information is limited to a subset of the normal sensory channels (Paivio, 1991; Wickens and Holland, 1999). Applications of and approaches to sensory substitution through 2003 were cataloged by Lenay et al. (2003). Efforts to develop a comprehensive approach to the integration and dynamic adjustment of multisensory input are described in Raj et al. (2000). Figure 3-4 illustrates a vibrotactile display for pilots.
FIGURE 3-4 Vibrotactile displays for pilots experiencing sensory overload or cognitive illusions have been a successful application of sensory substitution technologies. Pilots, operators of complex equipment, and analysts monitoring large amounts of data can benefit as information is made available through underused sensory modalities that are more effective in some contexts than vision or hearing. SOURCE: Raj et al. (2005).

The Defense Advanced Research Projects Agency's (DARPA's) Augmented Cognition (AugCog) program was a research effort focused on appropriately exploiting and integrating all channels of communication from agents to humans (e.g., visual, auditory, tactile) and, conversely, on sensing and interpreting a wide range of physiological measures of the human in real time so that they can be used to tune assistive behavior and thus enhance joint human-machine performance.13 For example, sets of system sensor agents (say, a joystick), human sensor agents (EEG, pupil tracking, arousal meter), human display agents (visual, auditory, tactile), and adaptive automation agents (say, assisting in the performance of specific flight tasks) could work together with a pilot to promote stable and safe flight, sharing and adjusting aspects of control among the human and virtual crew member agents while taking system failures and human attention and stress loads into account.

13 For additional information, see the Augmented Cognition International Society Web site at http://www.augmentedcognition.org. Last accessed January 9, 2008.

Technologies developed for recent AugCog applications detect and classify human cognitive states such as fatigue, alertness decrement, inattention, or mental and sensory overload in real time by tracking changes in human behavioral and physiological patterns (Schmorrow and Kruse, 2004; Reeves et al., 2007). Using advanced machine learning algorithms, automated biosignal analyses have been developed (Trejo et al., 2003) that require as little as 3.5 sec of EEG data for a robust multivariate support vector classifier to correctly identify cognitive fatigue in 90 percent of individuals performing a demanding 3-hour task (Trejo et al., 2003, 2004). To date most of the effort has gone into measuring "cognitive state": gains have been made in the measurement of relatively simple constructs, but serious challenges remain in measuring complex cognitive constructs in real-world settings and in calibrating them to individual differences. While it is still too early to gauge the success of efforts such as AugCog, let alone to establish principles for making cognitive prostheses acceptable, it is clear that such advances will require new ways of thinking about human-machine interaction.

A final note should be made about other forms of machine monitoring. These include real-time computer "reading" of human emotions (for example, optical recognition of facial expressions), of cognitive status (say, through acoustic analyses of the voice), and of alertness/arousal (perhaps by optical tracking of slow eyelid closures [PERCLOS]). These capitalize on newer mathematical techniques, such as active shape modeling, that show some potential.
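The biosignal-classification pipeline described above can be caricatured as band-power feature extraction followed by a trained classifier. The sketch below substitutes a nearest-centroid rule for the support vector classifier of Trejo et al. and uses synthetic band powers; elevated theta and alpha power under fatigue is a commonly reported pattern, but every number here is invented.

```python
# Toy cognitive-fatigue classifier: each EEG window is reduced to
# (theta, alpha, beta) band powers and labeled by the nearest class
# centroid. A simplified stand-in for the support vector classifier
# in the text; all data are synthetic.
import random

random.seed(0)

def make_window(fatigued):
    """Synthetic band powers; fatigue modeled as elevated theta/alpha."""
    if fatigued:
        return (random.gauss(8.0, 1.0),   # theta power
                random.gauss(6.0, 1.0),   # alpha power
                random.gauss(2.0, 0.5))   # beta power
    return (random.gauss(4.0, 1.0),
            random.gauss(3.0, 1.0),
            random.gauss(4.0, 0.5))

def centroid(windows):
    n = len(windows)
    return tuple(sum(w[i] for w in windows) / n for i in range(3))

def classify(window, alert_c, fatig_c):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "fatigued" if dist2(window, fatig_c) < dist2(window, alert_c) else "alert"

# Train on labeled windows, then evaluate on held-out windows.
alert_c = centroid([make_window(False) for _ in range(50)])
fatig_c = centroid([make_window(True) for _ in range(50)])

test_windows = [(make_window(False), "alert") for _ in range(25)] + \
               [(make_window(True), "fatigued") for _ in range(25)]
correct = sum(classify(w, alert_c, fatig_c) == label
              for w, label in test_windows)
accuracy = correct / len(test_windows)
print(accuracy)  # high accuracy on this well-separated toy data
```

The real difficulty reported in the text lies precisely where this toy glosses over it: real classes are not this well separated, and decision boundaries must be recalibrated to individual differences.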
Figure 3-5 depicts a closed-loop augmented cognition concept of operation.

Finding 3-4. Researchers using sensory substitution interfaces in individuals with sensory loss have demonstrated enhanced situational awareness in their subjects. Thus, the use of similar cross-modal techniques with multiple-sensory-channel substitution interfaces shows promise for the control of complex systems. Consistent with Finding 2-1, efforts in augmented cognition, although sometimes oversold, have had some success in sensing and interpreting physiological measures of superordinate psychological states. These measures were calibrated to individuals in real time to tailor assistive behavior and to manage information flow.

Software and Robotic Assistants

Many kinds of software and hardware (in the form of robotic assistants) are being designed in hopes of improving or extending human performance in the physical and/or cognitive domains (Bradshaw, 1997; Lieberman, 2001; Murphy, 2000). However, they achieve their effect by interacting with the operator as
assistants or team members rather than by extending the body, brain, or senses directly. Software assistants can enable sophisticated forms of remote sensing, deliberation, and action in concert with humans and other assistants. Embedding a software assistant in a robot or unmanned vehicle provides the additional benefit of physical mobility.

FIGURE 3-5 Closed-loop augmented cognition concept of operation. SOURCE: Schmorrow and Kruse (2004).

Building an assistant might entail modeling the human brain. While modeling the whole brain is highly unlikely in the next two decades, it is not unreasonable to imagine that significant subsystems could be modeled. Moreover, it seems likely that increasingly sophisticated cognitive systems will be constructed in those two decades that, while not aiming to mimic processes in the brain, could nonetheless perform similar tasks well enough to be useful, especially in constrained situations. In this case, success would be determined not by how closely the system resembled the brain's mechanisms but by how closely its performance of specific cognitive tasks matched that of a typical human user. While a machine that simulated a complete human brain would be useful, high-performing machines that each emulated a different cognitive capability would be almost as useful. For example, a machine-enabled visual cortex could save untold hours of manual labor spent studying images from satellites
or other sorts of reconnaissance photographs, in effect augmenting human performance. Although there are rudimentary systems for image detection through pattern recognition, scientists are still far from replacing live subject matter experts in this area. However, one can imagine an intelligent machine that not only recognizes patterns as well as an expert but does so over an entire spectrum of wavelengths inaccessible to the human eye. Similarly, one can imagine a machine-based "ear" listening to radio communications and identifying crucial conversations. Research suggests that pairing a robotic assistant with a human expert in pattern recognition, with the assistant automatically scanning a photograph for areas likely to hold targets and directing the human to search those areas again, can improve both detection probability and the efficiency of the search (see, for example, Oren et al., 2008; Mařík et al., 2008).

Research in artificial intelligence (AI) and cognitive science has pursued such objectives for many years, with reasonable success in many domains of understanding and application. Some AI researchers focus on maximizing fidelity to cognitive neurophysiology, with the aim of increasing the scientific understanding of human functioning. Others are more concerned with the practical engineering of useful intelligent systems through an eclectic mix of engineering know-how and an understanding of human intelligence. Yet others wish to enhance human performance by combining the strengths of humans and automation.
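The human-robot search strategy just described can be sketched as a ranking over image regions: the assistant scores each region for target likelihood, and the human inspects regions in descending score order instead of scanning linearly. The region scores and target locations below are invented for illustration.

```python
# Toy model of human-robot collaborative target search.
# The robotic assistant assigns each image region a target-likelihood
# score; the human then inspects regions from most to least likely.
# Scores and target locations are invented for illustration.

def inspections_until_all_found(order, targets):
    """Number of regions the human must inspect, following `order`,
    before every target region has been seen."""
    remaining = set(targets)
    for i, region in enumerate(order, start=1):
        remaining.discard(region)
        if not remaining:
            return i
    raise ValueError("some targets were never inspected")

scores = {"r0": 0.05, "r1": 0.90, "r2": 0.10, "r3": 0.70,
          "r4": 0.02, "r5": 0.60, "r6": 0.08, "r7": 0.15}
targets = {"r1", "r3"}        # regions that actually contain targets

linear = sorted(scores)                                   # unaided scan
assisted = sorted(scores, key=scores.get, reverse=True)   # robot-ranked

print(inspections_until_all_found(linear, targets))    # 4
print(inspections_until_all_found(assisted, targets))  # 2
```

When the assistant's scores correlate with true target locations, the ranked search finds all targets in fewer inspections, which is the efficiency gain the cited studies measure; a poorly calibrated scorer degrades gracefully back toward the unaided scan.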
Success over the next two decades will require an appreciation of all perspectives, neither undervaluing the independent study of human functioning nor slavishly imitating superfluous aspects of natural systems in the development of artificial ones, like the engineer who insists that airplanes must have flapping wings because birds have them (Ford and Hayes, 1998).

Making up in part for a lack of "experience," on which much human expertise is based, intelligent systems are increasingly using the Internet, now the largest repository of knowledge on the planet, to learn. By way of contrast, early intelligent systems were like the disembodied brains shown in low-budget science fiction movies: entities that ruled the world while floating in a glass jar tethered by wires. While potentially rich in internal knowledge and inferential power, their only direct experience of the world arrived through the impoverished modes of keyboard input and video display output. The Internet's rise and the multitude of specialized interactive devices it has spawned give intelligent systems rich means to sense, learn, and interact with humans and within cyberspace. Because this sort of intelligence is distributed, coordination becomes as important as cognition. Thus many researchers have increasingly abandoned the metaphor of the intelligent system as disembodied brain in favor of that of the agent as software robot. This emphasis has subtly shifted much of the newer research from deliberation to doing, from reasoning to remote action.

There are no fundamental reasons why artificial cognitive systems designed for specific functions could not be constructed. Such systems could employ
reusable software components to embody the "intelligence" needed for specific human tasks. The main practical limitation, other than memory and processing power, is coming up with more powerful and efficient means of drawing conclusions from these data. There is reason to believe that if science has not yet met the technical specifications for an artificial cognitive system, it will certainly do so in the near future. With only modest research funding, surprise breakthroughs in this area could come not only from the United States but also from any one of a number of countries in Europe or Asia in the coming 10 years.

In contrast to research that has focused on increasing the autonomy of intelligent systems, efforts are under way to better understand and satisfy the requirements for software and robotic assistants that can work in close and continuous collaboration with people. The considerable interdependence of humans and machines in such applications requires that, in addition to doing the work itself, people and assistants invest time and attention in making sure their tasks are appropriately coordinated. Through the effective handling of these concerns, human attention can remain focused on critical tasks, costly errors can be avoided, and individual and group performance can be significantly enhanced. There are many potential military and intelligence applications of software assistants in the form of software and robots (Bradshaw et al., 2003).

PLOW is a good example of a state-of-the-art software assistant that helps people manage their everyday tasks (Allen et al., 2007). It uses the same collaborative architecture to learn tasks as it does to perform tasks.
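Learning a task through the same interface used to perform it can be sketched as demonstration capture and replay. The toy below is not PLOW's architecture, which couples natural language understanding with reasoning and learning; it only illustrates the learn-then-perform pattern, and the task and step names are invented.

```python
# Toy learn-by-demonstration assistant: the user demonstrates a task
# step by step; the assistant records the steps and can later replay
# them on request. Task and step names are invented; this is not
# PLOW's actual design.

class TaskLearner:
    def __init__(self):
        self.procedures = {}
        self._recording = None
        self._steps = []

    def begin_demonstration(self, task_name):
        self._recording, self._steps = task_name, []

    def observe(self, step):
        self._steps.append(step)           # capture one demonstrated action

    def end_demonstration(self):
        self.procedures[self._recording] = list(self._steps)
        self._recording = None

    def perform(self, task_name):
        """Replay a learned task, returning the executed steps."""
        return list(self.procedures[task_name])

agent = TaskLearner()
agent.begin_demonstration("file_travel_report")
agent.observe("open_form")
agent.observe("fill_in_dates")
agent.observe("submit_form")
agent.end_demonstration()

print(agent.perform("file_travel_report"))
# ['open_form', 'fill_in_dates', 'submit_form']
```

A real collaborative agent would generalize the recorded steps (binding parameters, handling alternatives) rather than replay them verbatim; the shared point is that one interaction channel serves both teaching and execution.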
PLOW displays an intelligence that springs from sophisticated natural language understanding, reasoning, learning, and acting capabilities unified within a collaborative agent architecture.

Work on robotic assistants responds to the growing use of unmanned systems in the military, in which large numbers of heterogeneous unmanned ground, air, underwater, and surface vehicles work together, coordinated by a smaller number of human operators (Summey et al., 2001). A key requirement for such systems is real-time cooperation with people and with other autonomous systems (Goodrich and Schultz, 2007). While these heterogeneous cooperating platforms may operate at different levels of sophistication and with dynamically varying degrees of autonomy, they will require some common means of representing and appropriately participating in joint tasks. Just as important, developers of such systems will need tools and methodologies to assure that the systems work together reliably and safely even when they are designed independently.

"Teamwork" is a widely accepted term for describing cooperation among people and intelligent systems (Tambe et al., 1999; Klein et al., 2004). The idea is that shared knowledge, goals, and intentions are a glue that binds team members
together. By having a largely reusable, explicit formal model of shared intentions, team members attempt to manage general responsibilities and commitments to each other in a coherent fashion that facilitates recovery when unanticipated problems arise. For example, it often happens in joint action that one team member fails and can no longer perform its role. A teamwork model might stipulate that each team member be notified, under appropriate conditions, of the failure, thus reducing the need for special-purpose exception-handling mechanisms for each possible failure mode (Cohen and Levesque, 1991).

Finding 3-5. Software and robotic assistants are designed to improve or extend human performance in the physical and cognitive domains. Much of the newer research has shifted from deliberation to doing, from reasoning to acting remotely. This suggests a blurring of discipline boundaries such that new indicators and observables are needed. Progress will require investments in the development of (1) reusable software components that embody the intelligence needed to support specific human tasks and (2) assistants that can coordinate their interaction with humans and artificial assistants in ways that emulate natural and effective teamwork within groups of people.

Finding 3-6. As high-performance computing becomes less expensive and more available, a country could become a world leader in cognitive neuroscience through sustained investment in the nurture of local talent and the construction of required infrastructure. Key to enabling breakthroughs will be the development of software-based models and algorithms, areas in which much of the world is now on par with or ahead of the United States.
Given the proliferation of highly skilled software researchers around the world and the relatively low cost of establishing and sustaining the necessary organizational infrastructure in many other countries, the United States cannot expect to easily maintain its technical superiority.

Recommendation 3-1. The intelligence community, in collaboration with outside experts, should develop the capability to monitor international progress and investments in computational neuroscience. Particular attention should be given to countries where software research and development are relatively inexpensive and where there exists a sizeable workforce with the appropriate education and skills.

Finding 3-7. Unlike in the domain of cognitive neurophysiological research, where the topics are constrained by certain aspects of human physiology and brain functioning, progress in the domain of artificial cognitive systems and distributed human-machine systems (DHMS) is limited only by the creative imagination. Accordingly, with sustained scientific leadership there is reason for optimism about the continued development of (1) specialized artificial cognitive systems
that emulate specific aspects of human performance and (2) DHMS, whether through approaches that are faithful to cognitive neurophysiology, or through some mix of engineering and studies of human intelligence, or by combining the respective strengths of humans and automation working in concert. Researchers are addressing the limitations that made earlier systems brittle by exploring ways to combine human and machine capabilities to solve problems and by modeling coordination and teamwork as an essential aspect of system design.

Research in artificial cognitive systems and DHMS faces two problems. One is unrealistic programs driven by the specific, short-term needs of DoD and the intelligence community. The second is the inadequacy of the current approach to metrics and measurement, which makes it next to impossible to achieve meaningful progress because it deals chiefly with epiphenomena of limited value.

REFERENCES

Allen, J., N. Chambers, G. Ferguson, L. Galescu, H. Jung, M. Swift, and W. Taysom. 2007. PLOW: A collaborative task learning agent. In Proceedings of Association for the Advancement of Artificial Intelligence (AAAI) 2007. Vancouver, British Columbia: AAAI.

Bach-y-Rita, Paul. 1972. Brain Mechanisms in Sensory Substitution. New York: Academic Press.

Bach-y-Rita, Paul. 1996. Nonsynaptic diffusion neurotransmission and brain plasticity. The Neuroscientist 2(5):260-261.

Bach-y-Rita, P., and S.W. Kercel. 2003. Sensory substitution and the human-machine interface. Trends in Cognitive Sciences 7(12):541-546.

Bach-y-Rita, P., K. Kaczmarek, M. Tyler, and J. Garcia-Lara. 1998. Form perception with a 49-point electrotactile stimulus array on the tongue. Journal of Rehabilitation Research and Development 35(4):427-430.

Bach-y-Rita, P., M.E. Tyler, and K.A. Kaczmarek. 2003. Seeing with the brain. International Journal of Human-Computer Interaction 15(2):287-297.

Birbaumer, Niels. 2006.
Breaking the silence: Brain-computer interfaces (BCI) for communication and motor control. Psychophysiology 43(6):517-532.

Birbaumer, Niels. 2007. Brain-computer interfaces: Communication and restoration of movement in paralysis. Journal of Physiology 579(3):621-636.

Boff, Kenneth R. 2006. Revolutions and shifting paradigms in human factors and ergonomics. Applied Ergonomics 37(4):391-399. Special issue: Meeting Diversity in Ergonomics.

Bradshaw, Jeffrey M., ed. 1997. Software Agents. Cambridge, MA: AAAI Press/MIT Press.

Bradshaw, J.M., G. Boy, E. Durfee, M. Gruninger, H. Hexmoor, N. Suri, M. Tambe, M. Uschold, and J. Vitek, eds. 2003. Software agents for the warfighter. ITAC Consortium Report. Cambridge, MA: AAAI Press/The MIT Press.

Cohen, P.R., and H.J. Levesque. 1991. Teamwork. Menlo Park, CA: SRI International.

Dennis, R.G., and H. Herr. 2005. Engineered muscle actuators: Cells and tissues. Pp. 243-266 in Biomimetics: Biologically Inspired Technologies, Y. Bar-Cohen, ed. Boca Raton, FL: CRC Press.

DiGiovanna, J., B. Mahmoudi, J. Mitzelfelt, J.C. Sanchez, and J.C. Principe. 2007. Brain-machine interface control via reinforcement learning. Pp. 530-533 in 3rd International IEEE/EMBS Conference on Neural Engineering, CNE '07, May 2-5, Kohala Coast, HI.
Drabløs, Finn, Mikael Hammer, Astrid Lægreid, Jon Olav Hauglid, and Bjørn K. Alsberg. 2004. Bioinformatics towards 2020. Paper read at the Information Society of 2020 (InfoSam2020) conference, April 19-20, Trondheim, Norway. Available from https://www.ime.ntnu.no/infosam2020/oldpage/Conference/Conference_Papers_2ndEdition_2.pdf. Last accessed June 18, 2008.

Engelbart, D.C. 1962. Augmenting Human Intellect: A Conceptual Framework. Menlo Park, CA: Stanford Research Institute.

Fernández, E., F. Pelayo, S. Romero, M. Bongard, C. Marin, A. Alfaro, and L. Merabet. 2005. Development of a cortical visual neuroprosthesis for the blind: The relevance of neuroplasticity. Journal of Neural Engineering 2(4):R1-12.

Finkel, L.H. 1990. A model of receptive field plasticity and topographic map reorganization in the somatosensory cortex. Pp. 164-192 in Connectionist Modeling and Brain Function: The Developing Interface, S.J. Hanson and C.R. Olsen, eds. Cambridge, MA: MIT Press.

Ford, K.M., C. Glymour, and P. Hayes. 1997. Cognitive prostheses. AI Magazine 18(3):104.

Ford, K.M., and P. Hayes. 1998. On computational wings: Rethinking the goals of artificial intelligence. Scientific American, Special Issue, "Exploring Intelligence," 9(4):78-83.

Goldstein, Jeffrey. 1999. Emergence as a construct: History and issues. Emergence: Complexity and Organization 1:49-72.

Goodrich, M.A., and A.C. Schultz. 2007. Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction 1(3):203-275.

Gordon, K.E., and D.P. Ferris. 2007. Learning to walk with a robotic ankle exoskeleton. Journal of Biomechanics 40(12):2636-2644.

Herr, H., G. Whiteley, and D. Childress. 2003. Cyborg technology—biomimetic orthotic and prosthetic technology. Pp. 103-144 in Biologically Inspired Intelligent Robots, Y. Bar-Cohen and C. Breazeal, eds. Bellingham, WA: SPIE Press.

Kaczmarek, K., P. Bach-y-Rita, W.J. Tompkins, and J.G. Webster. 1985.
A tactile vision-substitution system for the blind: Computer-controlled partial image sequencing. IEEE Transactions on Biomedical Engineering BME-32:602-608.

Kawamoto, H., and Y. Sankai. 2002. Comfortable power assist control method for walking aid by HAL-3. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 4. October 6-9, Yasmine Hammamet, Tunisia.

Kazerooni, H. 1996. The human power amplifier technology at the University of California, Berkeley. Journal of Robotics and Autonomous Systems 19:179-187.

Klein, G., P.J. Feltovich, J.M. Bradshaw, and D.D. Woods. 2004. Common ground and coordination in joint activity. Pp. 139-184 in Organizational Simulation, Vol. 1, W.B. Rouse and K.R. Boff, eds. New York, NY: John Wiley.

Krebs, H.I., N. Hogan, W. Durfee, and H. Herr. 2004. Rehabilitation robotics, orthotics, and prosthetics. Pp. 337-369 in Textbook of Neural Repair and Rehabilitation, Vol. 1, M.E. Selzer, S. Clarke, L.G. Cohen, P.W. Duncan, and F.H. Gage, eds. Cambridge, England: Cambridge University Press.

Kupers, R., A. Fumal, A.M. de Noordhout, A. Gjedde, J. Schoenen, and M. Ptito. 2006. Transcranial magnetic stimulation of the visual cortex induces somatotopically organized qualia in blind subjects. Proceedings of the National Academy of Sciences U.S.A. 103(35):13256-13260.

Lenay, C., O. Gapenne, S. Hanneton, C. Genouëlle, and C. Marque. 2003. Sensory substitution, limits and perspectives. In Touching for Knowing: Cognitive Psychology of Haptic Manual Perception, Y. Hatwell, A. Streri, and E. Gentaz, eds. Available from http://www.utc.fr/gsp/publi/Lenay03-SensorySubstitution.pdf. Last accessed February 11, 2008.

Licklider, J.C.R. 1960. Man-computer symbiosis. IRE Transactions on Human Factors in Electronics HFE-1(2):4-11.

Lieberman, H., ed. 2001. Your Wish Is My Command: Programming by Example. San Francisco: Morgan Kaufmann Publishers.

Mann, S. 1997. Wearable computing: A first step toward personal imaging. IEEE Computer 30(2):25-32.
Mařík, V., J.M. Bradshaw, J. Meyer, W.A. Gruver, and P. Benda. 2008. Pp. 63-68 in Proceedings of the 2008 IEEE International Conference on Distributed Human-Machine Systems (DHMS 2008). March 9-12, Athens, Greece.

Meredith, M.A., and B.E. Stein. 1986a. Spatial factors determine the activity of multisensory neurons in cat superior colliculus. Brain Research 365(2):350-354.

Meredith, M.A., and B.E. Stein. 1986b. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology 56(3):640-662.

Motluk, A. 2005. Seeing with your ears. The New York Times Online Edition, December 11, 2005. Available from http://www.nytimes.com/2005/12/11/magazine/11ideas_section3-14.html?ex=1291957200&en=3c72cf9fa46bbb06&ei=5090&partner=rssuserland&emc=rss. Last accessed June 18, 2008.

Murphy, R.R. 2000. Introduction to AI Robotics. Cambridge, MA: The MIT Press.

Nelson, M. 2006. Scientists probe use of the tongue. USA Today Online, April 24, 2006. Available from http://www.usatoday.com/tech/science/discoveries/2006-04-24-tongue-research_x.htm. Last accessed June 18, 2008.

Ng, Lydia, Sayan Pathak, Chihchau Kuan, Chris Lau, Hong-wei Dong, Andrew Sodt, Chinh Dang, Brian Avants, Paul Yushkevich, James Gee, David Haynor, Ed Lein, Allan Jones, and Mike Hawrylycz. 2007. Neuroinformatics for genome-wide 3-D gene expression mapping in the mouse brain. IEEE/ACM Transactions on Computational Biology and Bioinformatics 4(3):382-393.

Oren, Y., A. Bechar, J. Meyer, and Y. Edan. 2008. Performance analysis of human-robot collaboration in target recognition tasks. Paper read at the IEEE International Conference on Distributed Human-Machine Systems (DHMS 2008), Athens, Greece, March 9-12, 2008.

Paivio, A. 1991. Dual coding theory: Retrospect and current status. Canadian Journal of Psychology 45(3):255-287.

Pratt, J.E., B.T. Krupp, C.J. Morse, and S.H. Collins. 2004.
The RoboKnee: An exoskeleton for enhancing strength and endurance during walking. Pp. 2430-2435 in Proceedings of the 2004 IEEE International Conference on Robotics and Automation, Vol. 3. New Orleans, LA. April 26-May 1.

Ptito, M., S.M. Moesgaard, A. Gjedde, and R. Kupers. 2005. Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128(3):606-614.

Raj, A.K., S.J. Kass, and J.F. Perry. 2000. Vibrotactile displays for improving spatial awareness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting 1(4):181-184.

Raj, A.K., R.W. Carff, M.J. Johnson, S.P. Kulkarni, J.H. Higgins, and J.M. Bradshaw. 2005. A multisensory integrated representation architecture for notional display applications (MIRANDA). Proceedings of the 1st International Conference on Virtual Reality, Paper #86 on CD of Vol. 9, Advances in Virtual Environments Technology: Musings on Design, Evaluation, and Application. Mahwah, NJ: Lawrence Erlbaum Associates.

Reefhuis, Jennita, Margaret A. Honein, Cynthia G. Whitney, Shadi Chamany, Eric A. Mann, Krista R. Biernath, Karen Broder, Susan Manning, Swati Avashia, Marcia Victor, Pamela Costa, Owen Devine, Ann Graham, and Coleen Boyle. 2003. Risk of bacterial meningitis in children with cochlear implants. New England Journal of Medicine 349(5):435-445.

Reeves, L.M., D.D. Schmorrow, and K.M. Stanney. 2007. Augmented cognition and cognitive state assessment technology: Near-term, mid-term, and long-term research objectives. In LNAI 4565, Foundations of Augmented Cognition, D.D. Schmorrow and L.M. Reeves, eds. Berlin: Springer.

Saunders, F.A., W.A. Hill, and B. Franklin. 1981. A wearable tactile sensory aid for profoundly deaf children. Journal of Medical Systems 5(4):265-270.

Schmorrow, D.D., and A.A. Kruse. 2004. Augmented cognition. Pp. 54-59 in Berkshire Encyclopedia of Human-Computer Interaction, W.S. Bainbridge, ed. Great Barrington, MA: Berkshire Publishing Group.

Schwartz, A.B.
2004. Cortical neural prosthetics. Annual Review of Neuroscience 27:487-507.
Silver, Rae, Kwabena Boahen, Sten Grillner, Nancy Kopell, and Kathie L. Olsen. 2007. Neurotech for neuroscience: Unifying concepts, organizing principles, and emerging tools. Journal of Neuroscience 27(44):11807-11819.

Still, D.L., and L.A. Temme. 2001. OZ: A human-centered cockpit display. Paper read at the Interservice/Industry Training, Simulation, and Education Conference, Orlando, FL, November 26-29, 2001.

Summey, D.C., R.R. Rodrigues, D.P. DeMartino, H.H. Portmann Jr., and E. Moritz. 2001. Shaping the Future of Naval Warfare with Unmanned Systems. Panama City, FL: Dahlgren Division, Naval Surface Warfare Center.

Tambe, M., W. Shen, M. Mataric, D.V. Pynadath, D. Goldberg, P.J. Modi, Z. Qiu, and B. Salemi. 1999. Teamwork in cyberspace: Using TEAMCORE to make agents team-ready. Presented at the AAAI Spring Symposium on Agents in Cyberspace, Menlo Park, CA. Available from http://www.isi.edu/~modi/papers/aaai-spring99.ps. Last accessed June 18, 2008.

Thomas, J.J., and K.A. Cook. 2005. Illuminating the Path: The Research and Development Agenda for Visual Analytics. Richland, WA: National Visualization and Analytics Center. Available from http://nvac.pnl.gov/agenda.stm#book. Last accessed January 10, 2008.

Trejo, L.J., K.R. Wheeler, C.C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A.D. Hibbs, R. Matthews, and M. Krupka. 2003. Multimodal neuroelectric interface development. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11(2):199-204.

Trejo, L.J., R. Kochavi, K. Kubitz, L.D. Montgomery, R. Rosipal, and B. Matthews. 2004. Measures and models for estimating and predicting cognitive fatigue. Psychophysiology 41:S86.

Veraart, C., F. Duret, M. Brelén, M. Oozeer, and J. Delbeke. 2004. Vision rehabilitation in the case of blindness. Expert Review of Medical Devices 1(1):139-153.

Walcott, E.C., and R.B. Langdon. 2001. Short-term plasticity of extrinsic excitatory inputs to neocortical layer 1.
Experimental Brain Research 136(2):143-151.

Walsh, C.J., K. Endo, and H. Herr. 2007. A quasi-passive leg exoskeleton for load-carrying augmentation. International Journal of Humanoid Robotics 4(3):487-506.

Weiss, P. 2001. Dances with robots. Science News 159(26):407.

Wickens, C.D., and J. Holland. 1999. Engineering Psychology and Human Performance, 3rd Edition. Upper Saddle River, NJ: Prentice Hall, Inc.

Zhou, D., and R. Greenberg. 2005. Microsensors and microbiosensors for retinal implants. Frontiers in Bioscience 10(1):166-179.