images with a camera, process the information (for example, categorizing and locating objects in scenes), and alert the operator.

Computation and Human Cognition

The reality of human perception can be modified (augmented) through the use of cognitive artifacts of varied sophistication. Cognitive artifacts are technological systems that complement and enhance human cognitive abilities. A cognitive artifact does not make a person smarter; instead, it is the combined system of the human and the artifact that is smarter or more capable (Norman, 1993). For example, advanced cognitive artifacts can be worn on the body or implanted in various parts of the body and potentially offer enhancement of biological system performance, memory, sensory abilities, and communication.

Augmented reality (AR) has great potential to improve command choices and decision-making, with external experts providing relevant information and interpretation from remote and geographically dispersed locations. Another application is improved training: through better communication and visualization methods, it is possible to improve the performance of distributed work teams dramatically. By extension, the integration of nonhuman autonomous components with humans in team-like arrangements could enhance the cognitive performance of groups of humans.

Computational Limitations

The enhancement of cognition by computational means is limited by power demands and architecture designs that do not currently support complex cognitive processing. Although information technology (IT) has continued to provide better performance with decreasing power consumption every year, current capabilities are being outpaced by spiraling data and information-processing demands (Izydorczyk, 2010). For example, IBM's Watson is an advanced computing system that “understands” questions in natural language, finds information in relevant sources, determines the confidence level of different options, and responds with factual answers (Ferrucci, 2012). However, Watson's impressive capability for artificial intelligence and cognitive information processing is still far below that of the human brain, which is by comparison orders of magnitude smaller and more efficient. Although advances in data storage and hardware design will improve this situation, computers may need to become more brain-like to meet the requirements of augmented reality.

Reconfigurable computing[2] offers one approach to much more energy-efficient, brain-like computers capable of self-learning and adjusting to tasks and requirements without having to be programmed. Such tools for enhancing cognition will require research and development on neuromorphic devices and circuits in which computing elements and memory are “fused” together or finely interleaved (Indiveri et al., 2011). To become brain-like, computers will require dense interconnections between neuron-like computing units. In addition, challenges posed by space constraints for the logical units and by mapping of the neurosynaptic functions of the brain to configuration requirements will have to be overcome. Such developments could fundamentally change the nature of computing, although it might be 15 years or more before real-world applications could be ready.
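The “fused” compute-and-memory idea can be illustrated with a toy software model (a minimal sketch only, not tied to any real neuromorphic device; all class names and parameters below are illustrative assumptions): each neuron-like unit stores its own synaptic weights locally and both integrates inputs and updates those weights in place, rather than fetching parameters from a separate memory.

```python
# Toy sketch of a neuromorphic-style unit in which memory (synaptic
# weights) is colocated with computation (spike integration).
# Illustrative only; not a model of any specific hardware.

class NeuronUnit:
    """A leaky integrate-and-fire unit whose synaptic weights live
    inside the unit itself, so 'memory' and 'compute' are interleaved."""

    def __init__(self, n_inputs, leak=0.9, threshold=1.0, lr=0.05):
        self.weights = [0.5] * n_inputs  # local synaptic memory
        self.potential = 0.0             # membrane potential (state)
        self.leak = leak                 # decay factor per time step
        self.threshold = threshold       # firing threshold
        self.lr = lr                     # local learning rate

    def step(self, spikes):
        """Integrate incoming spikes (0/1 per input); fire on threshold."""
        self.potential = self.leak * self.potential + sum(
            w * s for w, s in zip(self.weights, spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            # Local, Hebbian-style update: strengthen inputs that
            # contributed to this spike. No external memory is touched.
            self.weights = [min(1.0, w + self.lr) if s else w
                            for w, s in zip(self.weights, spikes)]
            return 1
        return 0

unit = NeuronUnit(n_inputs=3)
outputs = [unit.step([1, 1, 0]) for _ in range(5)]
```

The point of the sketch is architectural: because each unit learns by updating only its own local state, there is no round trip to a separate memory bank, which is the property neuromorphic circuits aim to realize in hardware.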


[2] Reconfigurable computing allows the building of intelligent circuits that can be adjusted on the basis of experience and learning.

Copyright © National Academy of Sciences. All rights reserved.