Barriers, Bridges, and Progress in Cognitive Modeling for Military Applications

Kevin A. Gluck
Air Force Research Laboratory
Mesa, Arizona

The role of the Air Force Research Laboratory (AFRL), like that of the other service laboratories, is to conduct the basic and applied research and advanced technology development necessary to create future technology options for the Department of Defense. At the Warfighter Readiness Research Division of AFRL’s Human Effectiveness Directorate we have initiated a research program focused on mathematical and computational cognitive process modeling for replicating, understanding, and predicting human performance and learning. This research will lead to new technology options in the form of human-level synthetic teammates, cognitive readiness analysis tools, and predictive and prescriptive knowledge-tracing algorithms. Creating a future in which these objectives become realities requires tightly coupled, multidisciplinary, collaborative interaction among scientists and engineers dedicated to overcoming the myriad challenges standing between current reality and our future vision.

BARRIERS AND BRIDGES

There are many barriers to progress in cognitive science in general and to computational cognitive process modeling in particular. I will emphasize just two of them here. The first is a domain barrier. There exists an infinite variety of domains in which humans learn and perform, and in order to simulate human performance and learning in a particular domain, we must provide relevant
100 FRONTIERS OF ENGINEERING

domain knowledge to the simulated human. Transfer from one domain to the next is largely a function of the degree to which the knowledge in the two domains overlaps. The reason this is problematic for scientific progress is that the domains typically used to study human cognitive functioning in the laboratory are very different from the domains of application in the real world. Laboratory domains are mostly simple, abstract, and of short duration, whereas real-world application domains are complex, situated, and of long duration. Thus, in the field of cognitive science we must look for ways to build bridges between laboratory and applied contexts.

The second barrier I will emphasize here is a disciplinary barrier. Cognitive science is a field of study comprising seven subdisciplines: anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, and psychology. These subdisciplines involve very different methods, frameworks, and theories, and it is both challenging and exciting to make progress at disciplinary intersections. For instance, there is a powerful zeitgeist currently associated with neuroscience-based explanations of everything from attentional, perceptual, and related cognitive phenomena (leading to the creation of a field known as computational cognitive neuroscience—see Itti’s paper in this volume) to complex economic decision making (leading to the creation of a field known as neuroeconomics—see Glimcher, 2003). This has led people in some circles to speculate that there ought to be ways to improve the readiness of our military personnel by capitalizing on the tools, methods, empirical results, and theories of neuroscience. Simultaneously, there is interest in bringing together the subdisciplines of anthropology, artificial intelligence, and psychology in order to better understand and prepare for multicultural interaction (see the paper by van Lent and colleagues in this volume).
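The point made earlier in this section, that transfer between domains tracks the degree of knowledge overlap, can be given a crude operationalization: represent the knowledge required by each domain as a set of elements and measure overlap with a Jaccard index. This sketch is purely illustrative; the set representation, the example task inventories, and the use of Jaccard similarity as a transfer proxy are my assumptions, not part of the research program described in this chapter.

```python
def knowledge_overlap(domain_a, domain_b):
    """Jaccard similarity of the knowledge elements attributed to two
    domains -- a crude, illustrative proxy for expected transfer."""
    a, b = set(domain_a), set(domain_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical knowledge inventories for a simple laboratory task and
# an applied domain (unmanned air vehicle reconnaissance):
lab_task = {"track symbol", "respond to cue", "operate joystick"}
uav_recon = {"track symbol", "operate joystick", "plan route",
             "monitor fuel", "follow radio protocol"}

# Two of six distinct elements are shared, so predicted transfer from
# the laboratory task to the applied domain is modest.
overlap = knowledge_overlap(lab_task, uav_recon)
```

On this toy measure, the simple laboratory task covers only a third of the applied domain's knowledge, which is exactly the laboratory-to-field gap the chapter describes.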
Making scientific progress across these disciplinary boundaries requires that we build bridges among the neural, cognitive, and social bands of human experience (Newell, 1990). Anderson and Gluck (2001) noted that the same challenge exists in connecting neuroscience and educational practice and proposed that cognitive architectures are an appropriate formalism for building such bridges. I propose that cognitive architectures are also an appropriate formalism for building bridges from neuroscience, through cognitive phenomena and models, to the military’s cognitive readiness applications.

THE SOLUTION: COGNITIVE ARCHITECTURES

The purpose of all scientific disciplines is to identify invariant features and explanatory mechanisms in order to understand the phenomena of interest in the respective disciplines. Within the cognitive science community there is an approximately 50-year history of empirical research that involves using carefully constructed (usually simple and abstract) laboratory tests to isolate components of the human cognitive system in order to model and understand them. Sometimes optimistically referred to as “divide and conquer,” this approach has
led to comprehensive empirical documentation and sophisticated theories of hundreds of phenomena (e.g., fan effect, framing effect, Stroop effect) and functional components (e.g., attention, perception, memory, cognition, motor movement). A subset of the cognitive science community has become concerned that this divide-and-conquer approach is not leading to a unified understanding of human cognitive functioning, and has proposed cognitive architectures as the solution to that problem (Newell, 1973). Thus, cognitive architectures are intended to serve an integrative, cumulative role within the cognitive science community. They are where the fractionated theories come together in a unifying account not only of the computational functionality of the component processes but also of the architectural control structures that define the relationships among those components and of the representation of the knowledge content that is used by cognition. Gray (2007) explains how these three theoretical spaces (components, control structures, and knowledge) interact and provides numerous case studies of each. Ultimately, it is at the intersection of these theories that cognitive architectures exist.

ONGOING COGNITIVE MODELING RESEARCH

Our cognitive modeling research program at the Air Force Research Laboratory’s Mesa Research Site is organized around a set of methodological strategies with associated benefits. First, we are using and improving on the ACT-R (Adaptive Control of Thought—Rational) cognitive architecture (Anderson et al., 2004), because it provides a priori theoretical constraints on the models we develop; facilitates model reuse among members of the ACT-R research community; and serves the integrating, unifying role described earlier.
Second, we use the architecture, or equations and algorithms inspired by it, to make quantitative predictions, in order to facilitate eventual transition to applications that make accurate, precise predictions about human performance and learning. Third, we develop models both in abstract, simplified laboratory tasks and in more realistic, complex synthetic task environments, in order to begin constructing those bridges between the laboratory and the real world. Fourth, we compare the predictions of our models to human-subject data, in order to evaluate the necessity and sufficiency of the computational mechanisms and parameters that are driving those predictions and to evaluate the validity of the models. We are pursuing this research strategy in several lines of research, which I briefly describe next.

Knowledge tracing. This is our only research line that is entirely mathematical modeling and does not involve a computational modeling component. The current approach is an extension of and (we think) improvement to the general performance equation proposed by Anderson and Schunn (2000); thus, it derives from the computational implementation of learning and forgetting processes in ACT-R. The new equation allows us to make performance predictions or prescribe the timing and frequency of training, both of which will enable tailored training experiences at individual and team levels of analysis (Jastrzembski et al., 2006).
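To make concrete the kind of learning-and-forgetting computation that such knowledge-tracing equations build on, here is a minimal sketch of ACT-R's base-level learning term, in which the activation of a knowledge element (a predictor of speed and accuracy of recall) grows with accumulated practice and decays as a power function of the time since each practice event. The decay value of 0.5 is ACT-R's conventional default; the toy practice schedules are illustrative assumptions, not data or parameters from the research described here.

```python
import math

def base_level_activation(practice_ages, decay=0.5):
    """ACT-R base-level learning: B = ln(sum of t_j^(-d)), where each
    t_j is the time elapsed since a past practice event and d is the
    decay rate. More practice, and more recent practice, both raise
    activation; each event's contribution fades as a power law."""
    return math.log(sum(t ** -decay for t in practice_ages))

# Four practice events for the same item, evaluated at the same moment.
# Massed practice (all events clustered ~100 s ago) versus spaced
# practice (events spread from 100 s to 400 s ago):
massed = base_level_activation([100, 101, 102, 103])
spaced = base_level_activation([100, 200, 300, 400])
# Massed practice yields higher activation right now, but its
# advantage erodes faster as all four events age together.
```

A knowledge-tracing application inverts this logic: given a target activation at some future time, it solves for when and how often to schedule practice.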
Communication. One of the barriers standing between us and human-level synthetic teammates is that we don’t have a valid computational implementation of natural language, verbal or otherwise. This is critical because good teammates adapt their communications in order to facilitate accomplishing the shared mission. Our research in natural language modeling involves extending the Double R computational cognitive linguistic theory to knowledge-rich, time-pressured team performance environments similar to those encountered in real-world situations, such as unmanned air vehicle reconnaissance missions (Ball et al., 2007).

Spatial competence. Spatial cognition has long been a subspecialization within the cognitive science community, but typically individual scientists or research groups adopt particular phenomena to study without worrying about how the pieces of the spatial cognitive system come back together to create a more general competence. It turns out there is no comprehensive theory of the mechanisms and processes that allow for spatial competence. Our research in this area is pushing the field and the ACT-R architecture in the direction of a neurofunctional and architectural view of how spatial competence is realized in the brain and the mind (Gunzelmann and Lyon, 2008).

Fatigue. There is a rich history of sleep-related fatigue research conducted in and sponsored by the military laboratories. We are adding a new twist to that tradition by implementing new architectural mechanisms and processes that allow us to replicate the effects of sleepiness on the cognitive system. The process models are then combined with biomathematical models of the circadian and sleep homeostat systems to create the capacity to predict what the precise effects of sleep deprivation or long-term sleep restriction will be in a given performance context (Gunzelmann et al., 2007).

High-performance and volunteer computing.
As our cognitive modeling research expanded in breadth and depth and our scientific and technical objectives grew more ambitious, we began to exceed the capacity of our local computing resources. In the search first for more resources and subsequently for more intelligent and efficient use of available resources, we have begun to use both high-performance computing and volunteer computing as platforms for processor horsepower. We have demonstrated that such platforms can indeed be used productively for faster progress in cognitive modeling (Gluck et al., 2007), and we are investing in additional software improvements to facilitate the use of these resources.

AN IMPORTANT DIRECTION FOR THE RESEARCH COMMUNITY

I close by mentioning an important research direction for the cognitive modeling community: overcoming the knowledge engineering bottleneck. The key here is not the development of tools for doing manual knowledge engineering more efficiently, although that is a perfectly fine idea in the interim. Instead, I believe it is critical that we develop the ability for our modeling architectures to acquire
their own knowledge without direct human assistance. This will require a variety of learning mechanisms based on a combination of cognitive psychology, machine learning, and Internet search algorithms.

REFERENCES

Anderson, J. R., and K. A. Gluck. 2001. What role do cognitive architectures play in intelligent tutoring systems? Pp. 227-261 in Cognition and Instruction: 25 Years of Progress, D. Klahr and S. M. Carver, eds. Mahwah, NJ: Lawrence Erlbaum Associates.
Anderson, J. R., and C. D. Schunn. 2000. Implications of the ACT-R learning theory: No magic bullets. Pp. 1-34 in Advances in Instructional Psychology: Educational Design and Cognitive Science, Vol. 5, R. Glaser, ed. Mahwah, NJ: Lawrence Erlbaum Associates.
Anderson, J. R., D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin. 2004. An integrated theory of the mind. Psychological Review 111:1036-1060.
Ball, J., A. Heiberg, and R. Silber. 2007. Toward a large-scale model of language comprehension in ACT-R 6. Pp. 163-168 in Proceedings of the 8th International Conference on Cognitive Modeling. New York: Psychology Press.
Glimcher, P. W. 2003. Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge, MA: MIT Press.
Gluck, K. A., M. Scheutz, G. Gunzelmann, J. Harris, and J. Kershner. 2007. Combinatorics meets processing power: Large-scale computational resources for BRIMS. Pp. 73-83 in Proceedings of the Sixteenth Conference on Behavior Representation in Modeling and Simulation. Orlando, FL: Simulation Interoperability Standards Organization.
Gray, W. D., ed. 2007. Integrated Models of Cognitive Systems. New York: Oxford University Press.
Gunzelmann, G., and D. R. Lyon. 2008. Mechanisms of human spatial competence. Pp. 288-307 in Spatial Cognition V: Lecture Notes in Artificial Intelligence #4387, T. Barkowsky, C. Freksa, M. Knauff, B. Krieg-Bruckner, and B. Nebel, eds. Berlin, Germany: Springer-Verlag.
Gunzelmann, G., K. Gluck, J. Kershner, H. P. A. Van Dongen, and D. F. Dinges. 2007. Understanding decrements in knowledge access resulting from increased fatigue. Pp. 329-334 in Proceedings of the 29th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
Jastrzembski, T. S., K. A. Gluck, and G. Gunzelmann. 2006. Knowledge tracing and prediction of future trainee performance. Pp. 1498-1508 in Proceedings of the Interservice/Industry Training, Simulation, and Education Conference. Orlando, FL: National Training Systems Association.
Newell, A. 1973. You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. Pp. 283-310 in Visual Information Processing, W. G. Chase, ed. New York: Academic Press.
Newell, A. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.