Technology-Based Training

Arthur C. Graesser and Brandon King

There undeniably has been an extraordinary change in technology-based training in recent decades. Fifty years ago none of the genres of learning environments that will be addressed in this paper even existed: (1) computer-based training, (2) multimedia, (3) interactive simulation, (4) hypertext and hypermedia, (5) intelligent tutoring systems, (6) inquiry-based information retrieval, (7) animated pedagogical agents, (8) virtual environments with agents, (9) serious games, and (10) computer-supported collaborative learning. All but the first two were unavailable 20 years ago, and most are not mainstream technologies in schools today. Yet the web has either exemplars or mature technologies for all 10 of them, so they are potentially available to all web users. The gap between the potential and the actual state of technology-based training translates into a critical, if not desperate, need for research in the social and behavioral sciences. Before addressing each of the 10 technologies, we discuss learning environments more broadly and their critical role in how technologies are developed, understood, and used.

LEARNING ENVIRONMENTS

Most learners do not know how to use the advanced learning environments effectively; indeed, learners often do not even know how to get started. The learning environments they confront are often limited or disappointing because the developers of the systems have not had sufficient training in cognitive science, pedagogy, behavioral sciences, and learning technologies. There is a shortage of trained professionals in these areas of
the social and behavioral sciences, particularly those who have a background in conducting projects in interdisciplinary research teams, which is what is needed to design and develop an advanced learning environment. Far too many learning environments are launched without the required empirical testing on usability, engagement, and learning gains. The pace of new technologies hitting the market is so fast that there typically is not enough time to adequately test the systems. Therefore, there is a need for basic research, theoretical models, and tools to forecast the quality of learning environment designs before or during their development.

The role of technology in training has had its critics. Cuban (1986, 2001) documented that technology has historically had a negligible impact on improvements in education. Clark (1983) argued that it is the pedagogy underlying a learning environment, not the technology per se, that typically explains learning gains. That conclusion of course suggests that we investigate how particular technologies are aligned with particular pedagogical principles, theories, models, hypotheses, or intuitions. For example, a film clip on how to dismantle an improvised explosive device is not a technology or subject matter naturally aligned with a pedagogical theory that emphasizes active discovery learning. Reading texts on the web about negotiation strategies is not well aligned with a social learning theory that embraces modeling-scaffolding-fading.

It is important to start with a broad perspective on the landscape of learning technologies and learning theories (National Research Council, 2000; O'Neil and Perez, 2003). Any given technology, T, affords a number of cognitive, social, and pedagogical mechanisms, M (Gee, 2003; Kozma, 1994; Norman, 1988).
In addition to these TM mappings, it is essential to consider the goals, G, of the learning environment: Is the learning environment designed for quick training on shallow knowledge about an easy topic or for deep learning about explanations of a complex system? It is essential to consider the characteristics of the learner, L, such as high or low knowledge of the subject matter and high or low verbal ability. The resulting TMGL landscape of cells needs to be explored. Some cells are promising conditions for learning, others are impossible, and groups of cells give rise to interesting interactions.

We advocate a long-term research roadmap that identifies an appropriate TMGL landscape for military training and that selects research projects that strategically cover cells that need attention. For example, there has not been enough research on learning gains from serious games that afford active discovery learning in adults with low reading ability. In contrast, there is a wealth of research on learning gains from intelligent tutoring systems on algebra and physics that spans the gamut of learner characteristics, pedagogical mechanisms, and learning goals (Anderson, Corbett, Koedinger, and Pelletier, 1995; Corbett, 2001; VanLehn et al., 2002). There are
debates about the conditions under which animated pedagogical agents are effective in improving learning and motivation, so the corresponding cells would need attention. A TMGL landscape (or a comparable, perhaps continuous, space) would provide a useful guide for inviting and selecting research projects.

The set of cognitive, social, and pedagogical mechanisms to explore is of course too extensive to identify in this paper. The prominent ones associated with each genre of learning environment are discussed below. Examples of pedagogical mechanisms are mastery learning with presentation-test-feedback-branching; building on prerequisites; practice with problems and examples; multimedia learning; modeling-scaffolding-fading; reciprocal training; problem-based learning and curricula; inquiry learning; and collaborative knowledge construction. Nearly all of these mechanisms emphasize that learners actively construct knowledge and build skills, as opposed to merely being exposed to information delivered by a learning environment.

Learning environments vary significantly in development costs. The approximate cost for a 1-hour training session with conventional computer-based training would be $10,000; for a 10-hour course with conventional computer-based training and rudimentary multimedia, $100,000; for an information-rich hypertext-hypermedia system, $1,000,000; for a sophisticated intelligent tutoring system, $10,000,000; and for a serious game on the web with thousands of users, $100,000,000. These very approximate costs would depend further on detailed parameters of the relevant cells in the TMGL landscape. Moreover, the estimated costs for the newer advanced learning environments are perhaps misleading because they represent the development of initial systems or early designs. Costs are dramatically lower when existing technologies are reused for the development of new material.
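Concretely, the TMGL landscape can be pictured as the cross-product of the four dimensions, with each cell tracking how much research attention it has received. A minimal sketch, in which the dimension values are illustrative stand-ins rather than an exhaustive taxonomy:

```python
from itertools import product

# Illustrative values only; a real roadmap would enumerate many more
# technologies (T), mechanisms (M), goals (G), and learner profiles (L).
technologies = ["computer-based training", "serious game", "intelligent tutor"]
mechanisms = ["mastery learning", "active discovery", "modeling-scaffolding-fading"]
goals = ["quick training on shallow knowledge", "deep learning of a complex system"]
learners = ["low prior knowledge", "high prior knowledge"]

# One cell per T-M-G-L combination; a roadmap could tally the number of
# studies covering each cell to find neglected regions of the landscape.
landscape = {cell: {"studies": 0} for cell in
             product(technologies, mechanisms, goals, learners)}

print(len(landscape))  # 3 x 3 x 2 x 2 = 36 cells
```

A roadmap exercise would then rank the empty or sparsely studied cells, such as the serious-game cells for low-reading-ability adults noted above.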
Given that training systems have costs and that some have nontrivial costs, there have been major efforts to find ways to cut the price, development time, and other resources needed for building systems. At the same time, however, it would be important to preserve the quality of the learning experience. One successful effort is that of the Advanced Distributed Learning initiative (see http://www.adlnet.org [accessed June 2007]; Dodds and Fletcher, 2004; Duval, Hodgins, Rehak, and Robson, 2004; Fletcher, 2003), which was launched by the Department of Defense. Learning content for computer-based training, multimedia, and some of the more advanced learning technologies is standardized by being decomposed, packaged, and organized into learning objects that conform to the standards of SCORM (sharable content object reference model). Each learning object is a package of learning material with a set of meta-tags that identify the relevant contexts of its application. A SCORM-conformant learning object can be used
in most learning management systems, so the content is sharable, interoperable, reusable, and extendable. This feature creates substantial savings in costs: once content is created in a SCORM-conformant fashion, it can be used throughout the electronic learning world. One of the chief challenges now is to get designers of courseware to use SCORM-conformant learning objects (Brusilovsky and Nijhawan, 2002; Sampson and Karampiperis, 2006). Such use will depend on building and indexing large repositories of SCORM-conformant content, as in the Content Object Repository Discovery and Registration/Resolution Architecture, or CORDRA (see http://cordra.net [accessed June 2007]; Rehak, 2005), and on somehow marketing these repositories and encouraging their use. A second major challenge is to develop SCORM standards for the more advanced learning environments (numbers 3-10) now that SCORM is mainstream for computer-based training and most multimedia. We assume that advanced distributed learning (see http://www.adlnet.gov/ [accessed June 2007]) and SCORM will continue to be a priority for military funding.

There are other methods of reducing costs in building learning environments. Authoring tools are available for easy preparation of course content for computer-based training and multimedia, but better authoring tools are needed to build new course content with the more advanced learning environments (Murray, Blessing, and Ainsworth, 2003). The existing authoring tools for advanced systems are very difficult to learn and use. Some of them are so complex that only the most advanced cognitive scientists and computer scientists can use them, often only the original designers of the systems. In order to make authoring tools more widely used by individuals with varying backgrounds, there needs to be systematic research in human factors and human-computer interaction on the process of developing course content with them.
Otherwise, it is difficult to see how these advanced systems will scale up to handle the large volume of training needs in the military. One side benefit is that these authoring tools can also be viewed as learning environments themselves. One way of learning a complex system at a deep level is to build an advanced learning environment on the system with an authoring tool.

Learning environments must be evaluated from the standpoint of learning gains, usage, engagement, and return on investment. Such performance criteria need to have measures that are operationally defined, a task well suited to social and behavioral scientists. For learning gains, the outcome variables include tests of retention for shallow or deep knowledge, problem solving, and transfer of knowledge and skill to different but related contexts. Meta-analyses have revealed that computerized learning environments fare well in comparison with classroom instruction (Dodds and Fletcher, 2004; Wisher and Fletcher, 2004): the effect sizes (i.e., sigma, differences between treatment and control conditions, measured in standard
deviation units) are 0.39 for computer-based training, 0.50 for multimedia, and 1.08 for intelligent tutoring systems. There are few data on learning gains from various classes of learning environments, such as inquiry-based information retrieval, virtual environments with agents, serious games, and computer-supported collaborative learning: research is needed on them. Although learning gains are routinely reported in published studies, there are often incomplete data on use (attrition), engagement (including how much the learners like the system), system development time, study time, and costs. The latter measures are needed to systematically assess return on investment.

TEN GENRES OF LEARNING ENVIRONMENTS

For each of the 10 learning environments discussed in this section, we identify salient theoretical frameworks, empirical findings, and opportunities for future research.

Computer-Based Training

A prototypical computer-based training system involves mastery learning. The learner (a) studies material presented in a lesson, (b) gets tested with a multiple-choice or other objective test, (c) gets feedback on the test performance, (d) restudies the material if the test performance is below a specified threshold, and (e) progresses to a new topic if the test performance exceeds the threshold. The order of topics presented and tested can follow different pedagogical models, such as ordering on prerequisites (Gagne, 1985), a structured top-down hierarchical organization (Ausubel, Novak, and Hanesian, 1978), a knowledge space model that attempts to fill learning deficits and correct misconceptions (Doignon and Falmagne, 1999), or other models that allow dynamic sequencing and navigation (O'Neil and Perez, 2003).

The materials presented in a lesson can vary considerably in computer-based training on the web.
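The mastery cycle in steps (a) through (e) can be sketched as a simple control loop; the scoring function and threshold below are invented for illustration:

```python
def mastery_loop(topics, take_test, threshold=0.8, max_attempts=5):
    """Sketch of the study-test-feedback-branch cycle of mastery learning.

    take_test(topic, attempt) stands in for presenting the lesson and
    scoring an objective test; it returns a score in [0, 1].
    """
    history = []
    for topic in topics:                               # topic order set by some model
        for attempt in range(1, max_attempts + 1):
            score = take_test(topic, attempt)          # (a) study, (b) test
            history.append((topic, attempt, score))    # (c) feedback on performance
            if score >= threshold:
                break                                  # (e) advance to the next topic
            # (d) below threshold: loop back and restudy the same topic
    return history

# Hypothetical learner whose score rises with each restudy of a topic
log = mastery_loop(["topic 1", "topic 2"], lambda topic, attempt: 0.5 + 0.2 * attempt)
```

Here each topic takes two attempts: the first test falls below the 0.8 threshold, so the learner restudies, passes on the second attempt, and advances.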
There can be organized text with figures, tables, and diagrams (essentially, books on the web), multimedia, problems to solve, example problems with solutions worked out, and other classes of learning objects. Computer-based training has been extensively studied over the last few decades and has evolved into a mature technology that is ripe for scaling up at an economical cost. As noted above, meta-analyses show effect sizes of 0.39 sigma in comparison with classroom learning (Dodds and Fletcher, 2004). The amount of time that learners spend studying the material in computer-based training has a 0.35 correlation with learning performance (Taraban, Rynearson, and Stalcup, 2001) and can be optimized by contingencies that distribute practice. Interactions between learner
characteristics and the sequencing of learning objects have been documented. For example, available evidence suggests that for high-knowledge learners, it is best to have problems followed by worked-out solutions; for low-knowledge learners, it is best to have worked-out example solutions followed by problems. Learning researchers will always be discovering and testing theoretically inspired aptitude-treatment interactions.

The nature of the feedback in computer-based training merits careful attention (Kulhavy and Stock, 1989; Moreno and Mayer, 2005; Shute, 2007). A test influences the course of learning in a formative evaluation, but in a summative evaluation it simply scales a learner's mastery (Hunt and Pellegrino, 2002). A test score alone is adequate feedback for informing learners how well they are doing, but it is not useful for clarifying specific deficits in knowledge or skill. There needs to be a better understanding of the conditions under which a learner benefits from feedback in the form of correct answers, explanations of why correct answers are correct, identification of misconceptions, explanations of the misconceptions, and other forms of elaboration. It is important to identify conditions in which it is best to withhold feedback so that learners acquire self-regulated learning strategies.

The nature of the test format calls for additional research. Most multiple-choice questions in actual courses, electronic learning facilities, and commercial test banks tap shallow rather than deep levels of comprehension (Ozuru, Graesser, Rowe, and Floyd, 2005; Wisher and Graesser, 2005). Shallow questions quiz a learner on explicit information in the lessons, definitions of terms, properties of concepts, steps in procedures, and other forms of perception-based and memory-based processes that require little or no reasoning.
Deep-level questions require a learner to understand causal mechanisms, logical justification of claims, explanations of complex systems, mental models, inferences, and applications (Bloom, 1956; Chi, de Leeuw, Chiu, and LaVancher, 1994; Graesser and Person, 1994). An emphasis in training on shallow knowledge has the unfortunate consequence of letting learners settle for shallow standards of comprehension (Baker, 1985; Dwyer, 2005; Otero and Graesser, 2001). As a consequence, learners often perform well on tests with shallow questions but not on tests with deep questions. Experimental investigations need to manipulate the quality of questions affiliated with a course and measure the effects on retention, problem solving, and transfer performance. High-quality assessments need to be developed that satisfy not only psychometric criteria but also pedagogical theory in the cognitive and learning sciences (Dwyer, 2005). This direction is being pursued at Educational Testing Service and the College Board.

There are two potential disadvantages of conventional computer-based training, both of which need confirmation with additional research. First,
some populations of learners are not engaged in the learning process provided by computer-based training, particularly learning environments that lack multimedia. Conventional electronic page turning is fine for motivated learners who want to be trained on easy-to-moderate material in the minimum amount of time, but not for those who lack motivation and need more entertainment. Second, computer-based training seems more appropriate for acquiring inert knowledge than for active application of knowledge (Bereiter and Scardamalia, 1985; National Research Council, 2000) and for shallow knowledge rather than deep knowledge. Other learning environment genres appear to be more appropriate for enhancing engagement, active application of knowledge and skills, and depth of mastery.

Multimedia

In a multimedia learning environment, material can be delivered in different presentation modes (verbal, pictorial), sensory modalities (auditory, visual), and delivery media (text, video, simulations). The impact of different forms of multimedia has been extensively investigated by Mayer and his colleagues (see Mayer, 2005). Meta-analyses reported by Dodds and Fletcher (2004) indicate an effect size of 0.50 sigma for multimedia learning in comparison with traditional instruction; the effect size for the meta-analyses reported by Mayer (2005) is considerably higher, about 1.00. In many of these studies, retention, problem solving, and transfer of training are facilitated by multimedia because the separate modalities offer multiple codes (Paivio, 1986), conceptually richer and deeper representations (Craik and Lockhart, 1972), and multiple retrieval routes. Additional research is of course needed to identify the content and the characteristics of the learners who benefit most from multimedia.
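The sigma values cited here are standardized mean differences: the treatment-control gap scaled by a pooled standard deviation. A minimal computation, with invented post-test scores standing in for real data:

```python
from statistics import mean, stdev

def effect_size(treatment, control):
    """Standardized mean difference (sigma): mean gap over the pooled SD."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

# Invented post-test scores: multimedia condition vs. classroom control
d = effect_size([78, 82, 85, 90], [70, 74, 77, 83])
print(round(d, 2))  # 1.47
```

An effect of 0.50 sigma thus means the average treatment learner scores half a standard deviation above the average control learner.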
It is important that a multimedia presentation not impose so large a cognitive load that it splits a learner's attention (Kalyuga, Chandler, and Sweller, 1999; Sweller and Chandler, 1994). For example, a picture on the screen with a voice that explains highlighted aspects of the picture provides multiple codes without overloading working memory. However, if there is text on the screen that redundantly echoes the spoken explanations, then there may be cognitive overload, interference, and a split-attention effect (between the print and the picture). Inputs in the same sensory modality interfere with each other more than inputs from different modalities.

Mayer (2005) has documented and empirically confirmed a number of principles that predict when different forms of multimedia facilitate learning. Among these are the principles of multimedia, modality, spatial and temporal contiguity, coherence, redundancy, and individual differences. The principles are based on a cognitive model that specifies the processes of selecting, organizing, and integrating information. Mayer's multimedia
learning model attempts to predict when and how to highlight a text or diagram with arrows, lines, color, sound, spoken messages, and so on.

One counterintuitive result of research with multimedia is that noninteractive animations of a complex process often have no effects on learning (Lowe, 2004; Rieber, 1996; Tversky, Morrison, and Betrancourt, 2002). Such animations run a number of risks: not being easy to understand, being transient, moving too quickly, presenting distracting material, placing demands on working memory, and depicting processes in a fashion other than one that the learner would otherwise actively construct (Hegarty, 2004). In contrast, a static picture remains on the screen for inspection, is available for active construction of interpretations at the learner's leisure, and potentially stimulates a mental construction of the dynamic process (Hegarty, Kriz, and Cate, 2003). Although some researchers have documented learning gains from animations, there is a persistent question of whether there is information equivalence between the simulation and control conditions in that research.

There is a need for a formal cognitive model that predicts the effects of particular forms of multimedia on learning at varying levels of depth. What is desired, for example, is a GOMS (goals, operators, methods, and selection rules) model (Card, Moran, and Newell, 1983; Gray, John, and Atwood, 1993) of multimedia learning that has the theoretical scope, analytical precision, and predictive power that GOMS provided for the field of human-computer interaction in the 1980s and 1990s. A satisfactory model would need to consider the cognitive representations of the content, the processes needed to perceive and interpret the multimedia presentations, the knowledge of the learner, and the tasks the learner needs to perform.
A fine-grained cognitive model would resolve at least some of the inconsistent findings in the literature and could be used to make a priori predictions.

Social science research is needed to resolve a number of other questions about multimedia. How can learners be trained to interpret complex multimedia displays? What sort of semiotic theory is needed to explain how pictures and icons are interpreted and integrated with verbal input? How can cognitive theories inform graphic artists? How can multimedia presentations be tailored to the profile of learners, including those with disabilities? How can different forms of content be represented with different types of multimedia? Given that most research on multimedia is based on experiments in which material is presented for less than 1 hour, how well does that research reflect the learning environments that are used for several weeks? Will the razzle-dazzle of exotic multimedia end up being too exhausting to a learner over a longer period of time?
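In the GOMS spirit, such a model would decompose a learning task into goals, primitive operators with unit costs, methods built from operators, and selection rules that choose among methods. A toy sketch, in which all the task names and operator times are invented rather than taken from any published model:

```python
# Assumed unit times (ms) for primitive operators in a multimedia lesson
OPERATORS_MS = {
    "listen to narration": 2000,
    "fixate highlighted part": 300,
    "read on-screen caption": 1500,
    "locate part in diagram": 600,
    "integrate with mental model": 1200,
}

# Two methods for the same goal: understand one labeled part of a diagram
METHODS = {
    "narration-first": ["listen to narration", "fixate highlighted part",
                        "integrate with mental model"],
    "caption-first": ["read on-screen caption", "locate part in diagram",
                      "integrate with mental model"],
}

def predicted_time_ms(method):
    """Sum the unit times of a method's operators (a GOMS-style prediction)."""
    return sum(OPERATORS_MS[op] for op in METHODS[method])

# Selection rule: choose the method with the smaller predicted time
best = min(METHODS, key=predicted_time_ms)
```

A model of this form would be tested by comparing its time and error predictions against observed learner behavior, just as GOMS models were validated in human-computer interaction.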
Interactive Simulation

Interactive simulation allegedly produces more learning than simply viewing simulations because a learner can actively control input parameters and observe the results on the system. A learner can slow down animations to inspect the process in detail, zoom in on important subcomponents of a system during the course of a simulation, observe the system from multiple viewpoints, and systematically relate inputs to outputs (Kozma, 2000). Some studies have indeed shown advantages of interactive simulation on learning, but others have shown no gains of interactive simulation in comparison with various control conditions (Deimann and Keller, 2006; Dillon and Gabbard, 1998; Jackson, Olney, Graesser, and Kim, 2006; Stern et al., 2006; van der Meij and de Jong, 2006). The empirical results are therefore mixed and in need of a meta-analysis, assuming that a sufficient number of empirical studies have been conducted.

Unfortunately, simulations and many other advanced learning environments tend to have complex content and interfaces that are unfamiliar to learners. Learners with low domain knowledge or computer expertise have trouble getting started and managing the human-computer interface. Even learners with medium or high knowledge and expertise often do not understand how to strategically interact with the simulation to advance learning. Consequently, designers of these systems are sometimes disappointed by how little, or how ineffectively, the simulations are used.

There needs to be training, modeling, and scaffolding of the use of complex simulations before they can be used effectively. A game environment with points and feedback (as in the case of Flight Simulator, see http://www.microsoft.com/games/flightsimulator/ [accessed August 2007]) is believed to motivate learners and be effective in promoting learning gains.
Research is needed on the cognitive and motivational mechanisms that encourage intelligent interactions with interactive simulations.

Hypertext and Hypermedia

Hypertext and hypermedia systems provide a large space of web pages with texts, pictures, animations, and other media. Each page has hot spots for the learner to click and explore. The learner has free rein to maneuver through the space, which can be an ideal environment for active learning and inquiry. Unfortunately, most learners do not have the skills of self-regulation and metacognition to intelligently search through a hypertext/hypermedia space (Azevedo and Cromley, 2004; Conklin, 1987; Winne, 2001): they get lost, get sidetracked by seductive details, and lose sight of the primary learning goals.

These known liabilities of this technology have resulted in mixed reports of learning gains from hypertext/hypermedia when compared with a sequence of materials designed by an expert author (Azevedo and Cromley, 2004; Rouet, 2006). Learners benefit from a navigational guide that trains, models, and scaffolds good inquiry strategies (Azevedo and Cromley, 2004). Another aid is an interface that shows the learner an overview of the space and where the learner has visited; a graphical interface or labeled hierarchy may be suitable for providing this global context (Lee and Baylor, 2006). More research is needed on training learners how to effectively use hypertext and hypermedia to achieve specific learning goals. Research is also needed to assess and increase the likelihood that designers of these environments use principles of cognition, human factors, semiotics, and human-computer interaction. Many designers congest their web pages with excessive options, clutter, and seductive details (i.e., feature bloat), which overloads the cognitive system and distracts learners, especially those with low ability.

Intelligent Tutoring Systems

Intelligent tutoring systems track the knowledge states of learners in fine detail and adaptively respond with activities that are sensitive to those knowledge states. The processes of tracking knowledge (called user modeling) and adaptively responding to a learner ideally incorporate computational models from artificial intelligence and cognitive science, such as production systems, case-based reasoning, Bayes networks, theorem proving, and constraint satisfaction algorithms.
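One widely used user-modeling technique in this family is Bayesian knowledge tracing, the approach associated with the Cognitive Tutors (Corbett, 2001): the tutor maintains a probability that the learner has mastered each skill and updates it after every answer. A sketch, with illustrative parameter values rather than values fitted to data:

```python
def bkt_update(p_known, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """One Bayesian knowledge tracing step for a single skill.

    p_known is the prior probability the skill is mastered; the learn,
    slip, and guess rates here are illustrative, not from a fitted model.
    """
    if correct:
        numerator = p_known * (1 - p_slip)
        posterior = numerator / (numerator + (1 - p_known) * p_guess)
    else:
        numerator = p_known * p_slip
        posterior = numerator / (numerator + (1 - p_known) * (1 - p_guess))
    # Account for the chance of learning at this practice opportunity
    return posterior + (1 - posterior) * p_learn

p = 0.3                                    # initial estimate of mastery
for correct in [True, True, False, True]:  # observed answers on one skill
    p = bkt_update(p, correct)
```

The tutor selects the next activity from these per-skill estimates, which is what makes its behavior sensitive to the learner's knowledge state.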
Successful systems have been developed for mathematically well-formed topics, including algebra, geometry, and programming languages (the Cognitive Tutors; Anderson et al., 1995; Koedinger et al., 1997), physics (Andes, Atlas, and Why/Atlas; VanLehn et al., 2002; VanLehn et al., 2007), electronics (Lesgold and Nahemow, 2001), and information technology (Mitrovic, Suraweera, Martin, and Weerasinghe, 2004). These systems show impressive learning gains compared to control instruction (an effect size of approximately 1.00 sigma), particularly for deeper levels of comprehension.

School systems are adopting intelligent tutoring systems at an increasing pace, particularly those developed at LearnLab and Carnegie Learning in the Pittsburgh area. Carnegie Mellon and Pittsburgh have a Science of Learning Center (funded by the National Science Foundation) to scale up their systems in mathematics, physics, and foreign languages.

Intelligent tutoring systems are expensive to build but are now in the phase of scaling up. One challenge in getting widespread use of these systems is that instructors do not know what systems are available, how to access and use them, and how to integrate the systems in course curricula.
Advanced distributed learning networks and projects hold some promise in facilitating more widespread use of intelligent tutoring systems.

A second challenge lies in authoring new subject matter content in a system at a pace that keeps up with the growth of knowledge. Some of the newer systems have attempted to handle knowledge domains that are not mathematically precise and well formed. The Intelligent Essay Assessor (Foltz, Gilliam, and Kendall, 2000; Landauer, Laham, and Foltz, 2000) and e-Rater (Burstein, 2003) grade essays on science, history, and other topics as reliably as experts in English composition. Summary Street (Kintsch, Steinhart, Stahl, and LSA Research Group, 2000) helps learners summarize texts by identifying idea gaps and irrelevant information. AutoTutor (Graesser, Lu et al., 2004; Graesser, Chipman, Haynes, and Olney, 2005) helps college students learn about computer literacy, physics, and critical thinking skills by holding conversations in natural language. AutoTutor shows learning gains of approximately 0.80 sigma in comparison with reading a textbook for an equivalent amount of time (Graesser, Lu et al., 2004; VanLehn, Graesser et al., 2007). These systems automatically analyze language and discourse by incorporating recent advances in computational linguistics (Jurafsky and Martin, 2000) and information retrieval, notably latent semantic analysis (Dumais, 2003; Landauer, McNamara, Dennis, and Kintsch, 2007; Millis et al., 2004).

There are three major reasons for encouraging more research on developing and testing intelligent tutoring systems with tutorial dialogue in natural language. First, the military needs intelligent training on subject matters that involve conceptualizations and verbal reasoning that are not mathematically well formed.
Second, natural language dialogue is a frequent form of communication, as in the case of chat rooms, MUD (multiuser domain) games, MOOs (MUD object oriented), other computer games, and instant messaging (Kinzie, Whitaker, and Hofer, 2005; Looi, 2005). The majority of teenagers in the United States use instant messaging every day. Third, the revolutionary advances in computational linguistics, corpus analyses, speech recognition, and discourse processing (Graesser, Gernsbacher, and Goldman, 2003) make it possible to make significant progress in developing natural language dialogue systems.

However, two points of caution are needed. First, it is important to focus on making the conversational systems more responsive to learners' ideas, threads of reasoning, and questions, rather than merely coaching learners in following the system's agenda. The second caution is that there needs to be a fine-grained assessment of what aspects of natural language dialogue facilitate learning, engagement, and motivation. Learners get irritated with conversation partners who do not seem to be listening at a sufficiently deep level (Mishra, 2006; Walker et al., 2003).
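Latent semantic analysis, which underlies AutoTutor's matching of learner contributions to expected answers, ultimately reduces each comparison to a cosine between two vectors; full LSA derives those vectors by singular value decomposition over a large corpus. The comparison step can be sketched with raw word counts standing in for LSA vectors (the sentences and their interpretation are invented for illustration):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counter objects)."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = sqrt(sum(c * c for c in a.values()))
    norm_b = sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

expected = Counter("force equals mass times acceleration".split())
answer = Counter("the force is the mass times the acceleration".split())

match = cosine(expected, answer)  # 0 = no overlap, 1 = identical distribution
# A tutor would compare match against a threshold to choose its next
# dialogue move, e.g., a hint when the match is only partial.
```

The word-count vectors here capture only literal overlap; LSA's corpus-derived dimensions are what let such systems credit paraphrases that share no content words.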
Inquiry-Based Information Retrieval

One type of inquiry learning consists of asking questions and searching for answers in an information repository (Graesser, Hu, Jackson, Person, and Toth, 2004; Wisher and Graesser, 2005). High-knowledge individuals sometimes do not have the patience to wade through learning materials; they prefer to actively ask questions to achieve their goals. Query-based information retrieval occurs when Google is used to access information on the web. The queries do not need to be well formed semantically and syntactically because the system uses keyword search algorithms. The responses are not direct answers to queries, but rather web pages and documents that may contain the answers. Recently, advances in computational linguistics have made it possible for information retrieval systems to parse and interpret well-formed questions and return answers to users (Harabagiu, Maiorano, and Pasca, 2002; Voorhees, 2001). The information repositories have varied from focal topics (terrorism, finances in the Wall Street Journal) to open searches on the web.

Formal evaluations of these question-answering systems have been held in the TREC QA and ARDA AQUAINT initiatives (see http://www.informedia.cs.cmu.edu/aquaint/index.html [accessed August 2007]). The performance of these query-based information retrieval systems has been quite impressive for short-answer questions (who, what, when, where) but not for questions that require lengthy answers (why, how). For the latter questions, the best that can be accomplished is returning a paragraph from the text that may contain the answer. What has been rare in evaluations of these systems is performance in the context of learning environments.
In one study of a learning environment on research ethics, 95 percent of the paragraphs returned in response to inquiries were judged relevant by the learners, and 50 percent were judged informative (Graesser, Hu et al., 2004). More research is needed to assess the questions that learners ask during learning and the fidelity of the answers delivered by the question answering systems in the learning environments. One challenge that limits the utility of query-based retrieval systems is that most learners ask very few questions, and most of the questions they ask are shallow (Graesser and Person, 1994; Graesser, McNamara, and VanLehn, 2005). Questions are typically asked when learners experience cognitive disequilibrium as a result of obstacles to goals, contradictions, anomalous information, difficult decisions, and salient knowledge gaps (Graesser and Olde, 2003). But even in those situations, most learners need to be trained to ask good questions. Such training of question-asking skills improves both question quality and comprehension (King, 1994; Rosenshine, Meister, and Chapman, 1996). Learners need to be exposed to
good models of question asking, inquiry, and curiosity. A curious learner is all too rarely seen in classrooms and other natural settings.

A different sense of inquiry learning is manifested in learning environments that stimulate reasoning akin to the scientific method, such as Inquiry Island (White and Frederiksen, 2005). Learners are presented with authentic challenges that motivate them to generate hypotheses, plan tests of those hypotheses, report to colleagues, revise hypotheses, and so on. Ideally, learners will be intrinsically motivated by the problem and the affordances of the learning environment so that they become engaged in the inquiry process. However, there is a need to investigate the process of scaffolding effective inquiry for a wide range of learner profiles. Many learning environments fail to stimulate genuine inquiry in most learners, so this is an area greatly in need of research. The time course of learning from these environments spans weeks, months, or years (not 1-hour training sessions), so the research is expensive and adequate evaluations take months or years. A pragmatic skeptic might ask how, when, or whether these broad-scale inquiry learning environments are relevant to military training.

Animated Pedagogical Agents

Embodied animated conversational (pedagogical) agents have become very popular in information and communication technologies, but the most serious applications have been in learning technologies (Atkinson, 2002; Baylor and Kim, 2005; Cole et al., 2003; Graesser, Jackson, and McDaniel, in press; Johnson, 2001; Johnson, Rickel, and Lester, 2000; McNamara, Levinstein, and Boonthum, 2004; Moreno and Mayer, 2004; Reeves and Nass, 1996). These agents speak, point, gesture, walk, and exhibit facial expressions. Some are built in the image of humans, and some are animals or cartoon characters.
The potential power of these agents, from the standpoint of learning environments, is that they can mimic face-to-face communication with human tutors, instructors, mentors, peers, and people in other roles. Ensembles of agents can model social interaction. Single agents can model individuals with different knowledge, personalities, physical features, and styles. Both single agents and ensembles of agents can be carefully choreographed to mimic virtually any social situation: curious learning, negotiation, interrogation, argument, empathetic support, helping, and so on. Agent technologies have the potential for a revolutionary impact on behavioral and social science research. Researchers have investigated the conditions in which single agents promote learning either alone or in the presence of other media (Mayer, 2005). For example, is it better to have information presented in print or spoken by agents? Are realistic agents better than cartoon agents? Does
the attractiveness or conversational style of the agent matter? These and similar questions can be related to the research on multimedia, discourse, and social psychology that was conducted before the emergence of agent technologies. It is of course important to make sure that an agent does not create cognitive overload, a split-attention effect, or a distraction from other, more important information on the display (Moreno and Mayer, 2004). It is also important to make sure that an agent is not so realistic that the learner forms unrealistically high expectations of its intelligence (Norman, 1994; Shneiderman and Plaisant, 2005). The research suggests that it is the content of what is expressed, rather than the aesthetic quality of the speech or face, that best predicts learning from pedagogical agents (Graesser, Moreno et al., 2003). Research also suggests that it is possible to create social presence from facial icons with expressions, a minimalist form of the persona effect.

There are four research directions in this area that merit the attention of social and behavioral scientists. First, ensembles of agents can model learning processes, so researchers can investigate how learning is systematically affected by different theories of social interaction. There can be dyads between peer learners, between a teacher and a student, or between an intelligent tutoring system and a student; there can be triads among teachers, intelligent tutors, and peers (McNamara et al., 2004). Learners can vicariously learn from such interactions (Craig, Gholson, Ventura, Graesser, and the TRG, 2000). The possibilities are endless.

Second, researchers can explore the processes that designers and learners go through when they create agents with the tool kits that have been developed.
In addition to understanding these design processes, researchers will accumulate a broader population of agents to test in their studies (i.e., beyond Microsoft Agents), including agents with the diverse physical appearances, personalities, and styles that specific learner populations are responsive to (Baylor and Kim, 2005).

Third, researchers can develop agents that deeply interpret what learners express in tutorial dialogue or other forms of human-computer interaction. This direction requires integrating advances from computational linguistics, cognitive science, and artificial intelligence.

Fourth, researchers can investigate alternative ways that agents can be responsive to learners as learners make contributions that vary in quality. This is already being done in AutoTutor (Graesser et al., 2005), which holds a mixed-initiative dialogue with a learner. AutoTutor has dialogue moves that are responsive to the learner's knowledge states: short feedback (positive, neutral, negative), prompts for information (What else?), hints, answers to learner questions, and corrections of student misconceptions. Similarly, the iSTART system (see McNamara, Levinstein, and Boonthum, 2004) has groups of agents that adaptively respond to learners who generate self-explanations while reading science texts. These responsive agents, developed at the University of Memphis, require more built-in intelligence than prepared, choreographed agents.

Virtual Environments with Agents

Outstanding examples of virtual environments with agents are those developed at the University of Southern California, particularly Mission Rehearsal (Gratch et al., 2002) and Tactical Iraqi (Johnson and Beal, 2005). These virtual worlds come very close to authentic interactions in war scenarios or interactions between soldiers and people of another culture with a different language. The learner holds a dialogue in natural language, with speech recognition, with multiple agents. These award-winning virtual environments are major milestones and have involved major investments by the military. Continued research on these large-scale virtual environments is of course very prudent. Evaluations of these systems on learning gains, usability, learner impressions, and the fidelity of specific computational modules are highly encouraged. It would also be useful to augment the interdisciplinary development teams with more social and behavioral scientists. These learning environments are not currently on the web, so the feasibility of transporting simpler versions to simpler platforms remains a question. More modest virtual environments with agents are available in MOOs (Slator, Hill, and Del Val, 2004).

Serious Games

The game industry has certainly captured the imagination of this generation of young adults, with revenues larger than those of the movie industry. Serious teenage gamers play approximately 20 hours per week (Yee, 2006). There is a rich taxonomy of games, nearly all of which could be integrated with military training, such as first-person shooter games, multiparty games, and simulations of cities.
A large-scale game like America's Army is extremely engaging for both young and older people because it is fun and weaves in serious content about the Army. The challenge of combining entertainment and pedagogical content is the foundational question of serious games (Brody, 1993). Understanding the mechanisms that lead to fun and learning is an important topic for behavioral and social science research. Although the components of games have been analyzed at considerable depth (Gee, 2003; Salen and Zimmerman, 2004), there has been very little research on the impact of these components on learning gains, engagement, and usability (Cameron and Dwyer, 2005; de Jong and van Joolingen, 1998; Lawrence, 2004; Malone and Lepper, 1987; Moreno and Mayer,
2005; Virvou, Katsionis, and Manos, 2005). Presumably, the success of a game can be attributed to feedback, progress markers, engaging content, fantasy, competition, challenge, uncertainty, curiosity, control, and other factors that involve cognition, emotion, motivation, and art. Investigating the relationships between game features and outcome measures should be an important priority for behavioral and social scientists, because scientific data are sparse and the impact of games on society is enormous.

Computer-Supported Collaborative Learning

In computer-supported collaborative learning, groups of learners collaboratively construct knowledge on a topic in pursuit of project goals that are usually provided by instructors (Lee, Chan, and van Aalst, 2006). For example, in Knowledge Forum (Bereiter, 2002; Scardamalia and Bereiter, 1994), students create messages that others can review, elaborate, critique, and build on. These systems support threads of conversation that involve formulating arguments, problem solving, planning, report writing, and countless other tasks (Gunawardena, Lowe, and Anderson, 1997). In current practice, the length of most of these conversational threads is short (2.2 to 2.7 turns per thread; Hewitt, 2005), so attempts have been made to design the systems to lengthen the threads. There is some evidence that computer-supported collaborative learning systems facilitate deep learning, critical thinking, shared understanding, and long-term retention (Garrison, Anderson, and Archer, 2001; Johnson and Johnson, 1991), but the scale of these distributed learning environments makes it very difficult to perform systematic evaluations. Social and behavioral scientists can improve these systems in several ways (Clark and Brennan, 1991; Dillenbourg and Traum, 2006; Looi, 2005; Mazur, 2004; Soller et al., 1998; Wang, 2005).
How do learners figure out how to use the complex interfaces of multiparty computer-mediated communication systems? How does a potential contributor learn how and when to speak? How is knowledge grounded in such distributed systems? How can moderators guide a group of learners in productive directions?

FUNDING PRIORITIES AND CONCLUSION

This paper identifies 10 genres of learning environments and dozens of research directions for social, cognitive, and behavioral scientists. This section identifies five directions that we believe should have the highest priority for funding in the near future. They are not listed in order of priority.

Blended instruction that assigns the optimal learning environment to a particular learner at the right time. The TMGL landscape (technology, pedagogical mechanism, goals, and learner characteristics) would be
applied to individuals with different learning profiles over long stretches of time (i.e., months or years, not 1-hour training sessions). Cells in the landscape would not only be selected on a principled theoretical basis, but would also be empirically tested as data accumulate from large samples of learners with different learning profiles.

Agents that model and scaffold the use of complex learning environments and human-computer interfaces. Modeling would involve single agents that take on different roles (mentors, tutors, peers, struggling learner agents for the learner to teach) and ensembles of agents that choreograph different patterns of interaction. Scaffolding would require deep interpretation of a learner's contributions, including natural language, multimodal sensing, and dynamic generation of computer actions.

Modeling and testing the conditions under which interactive simulation with multimedia promotes deep learning. There typically are a large number of displays, media, controls, input channels, forms of feedback, icons with particular semiotic functions, and other interface features in interactive simulations. There needs to be a GOMS model (or a similar quantitative model) that generates theoretical predictions about human actions, time, and errors on benchmark tasks. The hope is that the quantitative model would resolve discrepant findings in the literature in addition to generating testable predictions.

Systematic tests of the impact of serious games on learning. There need to be rigorous evaluations of usability, engagement, and learning at different levels of depth and for different types of learners' knowledge and skills.

Examining the process of authoring advanced learning environments. The authoring process needs to be explored for instructors, learners, and the designers of learning environments.
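To indicate what such a quantitative model might look like, the sketch below uses keystroke-level operator estimates from the GOMS family (Card, Moran, and Newell, 1983) to predict execution time on a benchmark task. The operator sequence for the task is invented, and the per-operator times are the commonly cited rounded values, not measurements from any particular study.

```python
# Keystroke-level (KLM) operator time estimates in seconds, after
# Card, Moran, and Newell (1983); values are the commonly cited ones.
KLM_SECONDS = {
    "K": 0.28,  # press a key (average skilled user)
    "P": 1.10,  # point with a mouse at a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def predict_time(operators: str) -> float:
    """Predicted execution time in seconds for a sequence like 'MPBK'."""
    return sum(KLM_SECONDS[op] for op in operators)

# Invented benchmark task: think, point at an input field, click,
# home hands to the keyboard, type three digits.
print(round(predict_time("MPBHKKK"), 2))
```

Predictions of this kind can then be compared against observed times and errors on benchmark tasks, which is how GOMS analyses have resolved interface questions in practice (e.g., Gray, John, and Atwood, 1993).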
Many experts are convinced that learning gains from technologies are best attributed to the underlying pedagogies rather than the technologies per se. At the same time, we all recognize that various technologies have affordances (i.e., properties, constraints) that support the opportunity for learners to benefit from specific pedagogies. One strong but arguable claim is that the social, cognitive, and behavioral sciences will provide the most incisive mapping between technology and pedagogy. However, it will be necessary for these social, cognitive, and behavioral scientists to be part of interdisciplinary teams of learning environment designers, developers, and deliverers.
References

Anderson, J.R., Corbett, A.T., Koedinger, K.R., and Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4(2), 167-207.
Atkinson, R.K. (2002). Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology, 94(2), 416-427.
Ausubel, D., Novak, J., and Hanesian, H. (1978). Educational psychology: A cognitive view, second edition. New York: Holt, Rinehart and Winston.
Azevedo, R., and Cromley, J.G. (2004). Does training on self-regulated learning facilitate students' learning with hypermedia? Journal of Educational Psychology, 96(3), 523-535.
Baker, L. (1985). Differences in standards used by college students to evaluate their comprehension of expository prose. Reading Research Quarterly, 20, 298-313.
Baylor, A.L., and Kim, Y. (2005). Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education, 15, 95-115.
Bereiter, C. (2002). Education and mind in the knowledge age. Mahwah, NJ: Lawrence Erlbaum Associates.
Bereiter, C., and Scardamalia, M. (1985). Cognitive coping strategies and the problem of "inert knowledge." In S.F. Chipman, J.W. Segal, and R. Glaser (Eds.), Thinking and learning skills: Current research and open questions (vol. 2, pp. 65-80). Hillsdale, NJ: Lawrence Erlbaum Associates.
Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals: Handbook I, cognitive domain. New York: Longmans, Green.
Brody, H. (1993). Video games that teach? Technology Review, 96(8), 50-58.
Brusilovsky, P., and Nijhawan, H. (2002). A framework for adaptive e-learning based on distributed re-usable learning activities. In M. Driscoll and T.C.
Reeves (Eds.), Proceedings of world conference on e-learning in corporate, government, healthcare, and higher education, E-Learn 2002 (pp. 154-161). Montreal, Canada, Oct. 15-19. Norfolk, VA: Association for the Advancement of Computing in Education.
Burstein, J. (2003). The E-rater scoring engine: Automated essay scoring with natural language processing. In M.D. Shermis and J.C. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 113-122). Mahwah, NJ: Lawrence Erlbaum Associates.
Cameron, B., and Dwyer, F. (2005). The effect of online gaming, cognition and feedback type in facilitating delayed achievement of different learning objectives. Journal of Interactive Learning Research, 16(3), 243-258.
Card, S., Moran, T., and Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Chi, M.T.H., de Leeuw, N., Chiu, M., and LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
Clark, H.H., and Brennan, S.E. (1991). Grounding in communication. In L. Resnick, J. Levine, and S. Teasely (Eds.), Perspectives on socially shared cognition (pp. 127-149). Washington, DC: American Psychological Association.
Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445-460.
Cole, R., van Vuuren, S., Pellom, B., Hacioglu, K., Ma, J., Movellan, J., Schwartz, S., Wade-Stein, D., Ward, W., and Yan, J. (2003). Perceptive animated interfaces: First steps toward a new paradigm for human computer interaction. Proceedings of the IEEE, 91(9), 1391-1405.
Conklin, J. (1987). Hypertext: A survey and introduction. IEEE Computer, 20(9), 17-41.
Corbett, A.T. (2001). Cognitive computer tutors: Solving the two-sigma problem.
User modeling: Proceedings of the eighth international conference (pp. 137-147). Berlin, Germany: Springer-Verlag.
Craig, S.D., Gholson, B., Ventura, M., Graesser, A.C., and the Tutoring Research Group (2000). Overhearing dialogues and monologues in virtual tutoring sessions: Effects on questioning and vicarious learning. International Journal of Artificial Intelligence in Education, 11, 242-253.
Craik, F., and Lockhart, R. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College.
Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.
Deimann, M., and Keller, J.M. (2006). Volitional aspects of multimedia learning. Journal of Educational Multimedia and Hypermedia, 15(2), 137-158.
de Jong, T., and van Joolingen, W.R. (1998). Scientific discovery learning with computer simulations of conceptual domains. Review of Educational Research, 68(2), 179-201.
Dillenbourg, P., and Traum, D. (2006). Sharing solutions: Persistence and grounding in multimodal collaborative problem solving. Journal of the Learning Sciences, 15(1), 121-151.
Dillon, A., and Gabbard, R. (1998). Hypermedia as an educational technology: A review of the quantitative research literature on learner comprehension, control, and style. Review of Educational Research, 68, 322-349.
Dodds, P., and Fletcher, J.D. (2004). Opportunities for new "smart" learning environments enabled by next-generation web capabilities. Journal of Educational Multimedia and Hypermedia, 13(4), 391-404.
Doignon, J.P., and Falmagne, J.-C. (1999). Knowledge spaces. Berlin, Germany: Springer-Verlag.
Dumais, S. (2003). Data-driven approaches to information access. Cognitive Science, 27(3), 491-524.
Duval, E., Hodgins, W., Rehak, D., and Robson, R. (2004).
Learning objects symposium special issue: Guest editorial. Journal of Educational Multimedia and Hypermedia, 13(4), 331-342.
Dwyer, C.A. (Ed.). (2005). Measurement and research in the accountability era. Mahwah, NJ: Lawrence Erlbaum Associates.
Fletcher, J.D. (2003). Evidence for learning from technology-assisted instruction. In H.F. O'Neil, Jr., and R. Perez (Eds.), Technology applications in education: A learning view (pp. 79-99). Hillsdale, NJ: Lawrence Erlbaum Associates.
Foltz, P.W., Gilliam, S., and Kendall, S. (2000). Supporting content-based feedback in on-line writing evaluation with LSA. Interactive Learning Environments, 8, 111-127.
Gagne, R.M. (1985). The conditions of learning and theory of instruction (4th ed.). New York: Holt, Rinehart and Winston.
Garrison, D.R., Anderson, T., and Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.
Gee, J. (2003). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.
Graesser, A.C., Chipman, P., Haynes, B.C., and Olney, A. (2005). AutoTutor: An intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 48, 612-618.
Graesser, A.C., Gernsbacher, M.A., and Goldman, S. (Eds.). (2003). Handbook of discourse processes. Mahwah, NJ: Lawrence Erlbaum Associates.
Graesser, A.C., Hu, X., Person, P., Jackson, T., and Toth, J. (2004). Modules and information retrieval facilities of the human use regulatory affairs advisor (HURAA). International Journal on eLearning, 3(4), 29-39.
Graesser, A.C., Jackson, G.T., and McDaniel, B. (in press). AutoTutor holds conversations with learners that are responsive to their cognitive and emotional states. Educational Technology.
Graesser, A.C., Lu, S., Jackson, G.T., Mitchell, H., Ventura, M., Olney, A., and Louwerse, M.M. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, and Computers, 36, 180-193.
Graesser, A.C., McNamara, D.S., and VanLehn, K. (2005). Scaffolding deep comprehension strategies through Point&Query, AutoTutor, and iSTART. Educational Psychologist, 40, 225-234.
Graesser, A.C., Moreno, K., Marineau, J., Adcock, A., Olney, A., and Person, N. (2003). AutoTutor improves deep learning of computer literacy: Is it the dialog or the talking head? In U. Hoppe, F. Verdejo, and J. Kay (Eds.), Proceedings of artificial intelligence in education (pp. 47-54). Amsterdam, Netherlands: IOS Press.
Graesser, A.C., and Olde, B.A. (2003). How does one know whether a person understands a device? The quality of the questions the person asks when the device breaks down. Journal of Educational Psychology, 95(3), 524-536.
Graesser, A.C., and Person, N.K. (1994). Question asking during tutoring. American Educational Research Journal, 31, 104-137.
Gratch, J., Rickel, J., Andre, E., Cassell, J., Petajan, E., and Badler, N. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17, 54-63.
Gray, W.D., John, B.E., and Atwood, M.E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world performance. Human-Computer Interaction, 8(3), 237-309.
Gunawardena, L., Lowe, C., and Anderson, T. (1997). Interaction analysis of a global on-line debate and the development of a constructivist interaction analysis model for computer conferencing. Journal of Educational Computing Research, 17(4), 395-429.
Harabagiu, S.M., Maiorano, S.J., and Pasca, M.A.
(2002). Open-domain question answering techniques. Natural Language Engineering, 1, 1-38.
Hegarty, M. (2004). Dynamic visualizations and learning: Getting to the difficult questions. Learning and Instruction, 14(3), 343-351.
Hegarty, M., Kriz, S., and Cate, C. (2003). The roles of mental animations and external animations in understanding mechanical systems. Cognition and Instruction, 21, 325-360.
Hewitt, J. (2005). Toward an understanding of how threads die in asynchronous computer conferences. Journal of the Learning Sciences, 14(4), 567-589.
Hunt, E., and Pellegrino, J.W. (2002). Issues, examples, and challenges in formative assessment. New Directions for Teaching and Learning, 89, 73-85.
Jackson, G.T., Olney, A., Graesser, A.C., and Kim, H.J. (2006). AutoTutor 3-D simulations: Analyzing user's actions and learning trends. In R. Son (Ed.), Proceedings of the 28th annual meeting of the Cognitive Science Society (pp. 1557-1562). Mahwah, NJ: Lawrence Erlbaum Associates.
Johnson, D.W., and Johnson, R.T. (1991). Classroom instruction and cooperative learning. In H.C. Waxman and H.J. Walberg (Eds.), Effective teaching: Current research (pp. 277-294). Berkeley, CA: McCutchan.
Johnson, W.L. (2001). Pedagogical agent research at CARTE. AI Magazine, 22, 85-94.
Johnson, W.L., and Beal, C. (2005). Iterative evaluation of a large-scale intelligent game for language learning. In C. Looi, G. McCalla, B. Bredeweg, and J. Breuker (Eds.), Artificial intelligence in education: Supporting learning through intelligent and socially informed technology (pp. 290-297). Amsterdam, Netherlands: IOS Press.
Johnson, W.L., Rickel, J., and Lester, J. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11, 47-78.
Jurafsky, D., and Martin, J.H. (2000). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition. Upper Saddle River, NJ: Prentice-Hall.
Kalyuga, S., Chandler, P., and Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351-371.
King, A. (1994). Guiding knowledge construction in the classroom: Effects of teaching children how to question and how to explain. American Educational Research Journal, 31(2), 338-368.
Kintsch, E., Steinhart, D., Stahl, G., and LSA Research Group. (2000). Developing summarization skills through the use of LSA-based feedback. Interactive Learning Environments, 8(2), 87-109.
Kinzie, M.B., Whitaker, S.D., and Hofer, M.J. (2005). Instructional uses of instant messaging (IM) during classroom lectures. Educational Technology and Society, 8(2), 150-160.
Koedinger, K.R., Anderson, J., Hadley, W., and Mark, M.A. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43.
Kozma, R.B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
Kozma, R.B. (2000). Reflections on the state of educational technology research and development. Educational Technology Research and Development, 48(1), 5-15.
Kulhavy, R.W., and Stock, W.A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279-308.
Landauer, T.K., Laham, D., and Foltz, P.W. (2000). The intelligent essay assessor. IEEE Intelligent Systems, 15, 27-31.
Landauer, T.K., McNamara, D., Dennis, S., and Kintsch, W. (Eds.). (2007). Handbook of latent semantic analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
Lawrence, R. (2004). Teaching data structures using competitive games.
IEEE Transactions on Education, 47(4), 459-466.
Lee, E.Y.C., Chan, C.K.K., and van Aalst, J. (2006). Students assessing their own collaborative knowledge building. International Journal of Computer Supported Collaborative Learning, 1, 57-87.
Lee, M., and Baylor, A.L. (2006). Designing metacognitive maps for web-based learning. Educational Technology and Society, 9, 344-348.
Lesgold, A., and Nahemow, M. (2001). Tools to assist learning by doing: Achieving and assessing efficient technology for learning. In D. Klahr and S. Carver (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 307-346). Hillsdale, NJ: Lawrence Erlbaum Associates.
Looi, C. (2005). Exploring the affordances of online chat for learning. International Journal of Learning Technology, 1(3), 322-338.
Lowe, R. (2004). Interrogation of a dynamic visualization during learning. Learning and Instruction, 14(3), 257-274.
Malone, T., and Lepper, M. (1987). Making learning fun: A taxonomy of intrinsic motivations of learning. In R.E. Snow and M.J. Farr (Eds.), Aptitude, learning, and instruction, volume 3: Cognitive and affective process analyses (pp. 223-253). Hillsdale, NJ: Lawrence Erlbaum Associates.
Mayer, R.E. (2005). Multimedia learning. Cambridge, MA: Cambridge University Press.
Mazur, J.M. (2004). Conversation analysis for educational technologists: Theoretical and methodological issues for researching the structures, processes and meaning of on-line talk. In D.H. Jonassen (Ed.), Handbook for research in educational communications and technology (2nd ed., pp. 1073-1098). Mahwah, NJ: Lawrence Erlbaum Associates.
McNamara, D.S., Levinstein, I.B., and Boonthum, C. (2004). iSTART: Interactive strategy trainer for active reading and thinking. Behavior Research Methods, Instruments, and Computers, 36, 222-233.
Millis, K.K., Kim, H.J., Todaro, S., Magliano, J., Wiemer-Hastings, K., and McNamara, D.S. (2004). Identifying reading strategies using latent semantic analysis: Comparing semantic benchmarks. Behavior Research Methods, Instruments, and Computers, 36, 213-221.
Mishra, P. (2006). Affective feedback from computers and its effect on perceived ability and affect: A test of the computers as social actor hypothesis. Journal of Educational Multimedia and Hypermedia, 15(1), 107-131.
Mitrovic, A., Suraweera, P., Martin, B., and Weerasinghe, A. (2004). DB-Suite: Experiences with three intelligent web-based database tutors. Journal of Interactive Learning Research, 15(4), 409-432.
Moreno, R., and Mayer, R.E. (2004). Personalized messages that promote science learning in virtual environments. Journal of Educational Psychology, 96(1), 165-173.
Moreno, R., and Mayer, R.E. (2005). Role of guidance, reflection, and interactivity in an agent-based multimedia game. Journal of Educational Psychology, 97(1), 117-128.
Murray, T., Blessing, S., and Ainsworth, S. (Eds.). (2003). Authoring tools for advanced technology learning environments: Towards cost-effective adaptive, interactive and intelligent educational software. Dordrecht, Netherlands: Kluwer.
National Research Council. (2000). How people learn (expanded ed.). Committee on Developments in the Science of Learning. J.D. Bransford, A.L. Brown, and R.R. Cocking (Eds.). With additional materials from the Committee on Learning Research and Educational Practice. M.S. Donovan, J.D. Bransford, and J.W. Pellegrino (Eds.). Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
Norman, D.A. (1988). The psychology of everyday things.
New York: Basic Books. Norman, D.A. (1994). How might people interact with agents? Communication of the ACM, 37(7), 68-71. OâNeil, H.F., and Perez, R. (Eds.). (2003). Technology applications in education: A learning view. Hillsdale, NJ: Lawrence Erlbaum Associates. Otero, J., and Graesser, A.C. (2001). PREG: Elements of a model of question asking. Cogni- tion and Instruction, 19, 143-175. Ozuru, Y., Graesser, A.C., Rowe, M., and Floyd, R.G. (2005). Enhancing the landscape and quality of multiple choice questions. In R. Roberts (Ed.), Spearman ETS conference proceedings. Mahwah, NJ: Lawrence Erlbaum Associates. Paivio, A. (1986). Mental representations. New York: Oxford University Press. Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press. Rehak, D. (2005). CORDRA: Content object repository discovery and registration/resolution architecture home page. Available: http://cordra.net [accessed May 2007]. Rieber, L.P. (1996). Animation as feedback in a computer-based simulation: Representation matters. Educational Technology Research and Development, 44(1), 5-22. Rosenshine, B., Meister, C., and Chapman, S. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66, 181-221. Rouet, J-F. (2006). The skills of document use: From text comprehension to web-based learn- ing. Mahwah, NJ: Lawrence Erlbaum Associates. Salen, K., and Zimmerman, E. (2004). Rules of play: Game design fundamentals. Cambridge, MA: MIT Press. Sampson, D., and Karampiperis, P. (2006). Towards next generation activity-based learning systems. International Journal on E-Learning, 5(1), 129-149. Scardamalia, M., and Bereiter, C. (1994). Computer support for knowledge-building com- munities. Journal of the Learning Sciences, 3(3), 265-283.
Shneiderman, B., and Plaisant, C. (2005). Designing the user interface: Strategies for effective human-computer interaction (4th ed.). Reading, MA: Addison-Wesley.
Shute, V. (2007). Focus on formative feedback (ETS no. RR-07-11). Princeton, NJ: Educational Testing Service.
Slator, B.M., Hill, C., and Del Val, D. (2004). Teaching computer science with virtual worlds. IEEE Transactions on Education, 47(2), 269-275.
Soller, A., Goodman, B., Linton, F., and Gaimari, R. (1998). Promoting effective peer interaction in an intelligent collaborative learning environment. In Proceedings of the fourth international conference on intelligent tutoring systems (ITS 98) (pp. 186-195), August 16-19, San Antonio, TX. Berlin, Germany: Springer-Verlag.
Stern, F., Xing, T., Muste, M., Yarbrough, D., Rothmayer, A., Rajagopalan, G., Caughey, D., Bhaskaran, R., Smith, S., Hutchings, B., and Moeykens, S. (2006). Integration of simulation technology into undergraduate engineering courses and laboratories. International Journal of Learning Technology, 2(1), 28-48.
Sweller, J., and Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12, 185-233.
Taraban, R., Rynearson, K., and Stalcup, K.A. (2001). Time as a variable in learning on the world-wide web. Behavior Research Methods, 33(2), 217-225.
Tversky, B., Morrison, J.B., and Betrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247-262.
van der Meij, J., and de Jong, T. (2006). Supporting students' learning with multiple representations in a dynamic simulation-based learning environment. Learning and Instruction, 16(3), 199-212.
VanLehn, K., Graesser, A.C., Jackson, G.T., Jordan, P., Olney, A., and Rose, C.P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31, 3-62.
VanLehn, K., Lynch, C., Taylor, L., Weinstein, A., Shelby, R.H., Schulze, K.G., Treacy, D., and Wintersgill, M. (2002). Minimally invasive tutoring of complex physics problem solving. In S.A. Cerri, G. Gouarderes, and F. Paraguacu (Eds.), Intelligent tutoring systems, 2002, 6th international conference (pp. 367-376). Berlin, Germany: Springer-Verlag.
Virvou, M., Katsionis, G., and Manos, K. (2005). Combining software games with education: Evaluation of its educational effectiveness. Educational Technology and Society, 8(2), 54-65.
Voorhees, E. (2001). The TREC question answering track. Natural Language Engineering, 7, 361-378.
Walker, M., Whittaker, S., Stent, A., Maloor, P., Moore, J., Johnson, M., and Vasireddy, G. (2003). Generation and evaluation of user tailored responses in multimodal dialogue. Cognitive Science, 28, 811-840.
Wang, C.-H. (2005). Questioning skills facilitate online synchronous discussions. Journal of Computer Assisted Learning, 21(4), 303-313.
White, B., and Fredericksen, J. (2005). A theoretical framework and approach for fostering metacognitive development. Educational Psychologist, 40, 211-223.
Winne, P.H. (2001). Self-regulated learning viewed from models of information processing. In B. Zimmerman and D. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (pp. 153-189). Mahwah, NJ: Lawrence Erlbaum Associates.
Wisher, R.A., and Fletcher, J.D. (2004). The case for advanced distributed learning. Information and Security: An International Journal, 14, 17-25.
Wisher, R.A., and Graesser, A.C. (2005). Question asking in advanced distributed learning environments. In S.M. Fiore and E. Salas (Eds.), Toward a science of distributed learning and training. Washington, DC: American Psychological Association.
Yee, N. (2006). The labor of fun: How video games blur the boundaries of work and play. Games and Culture, 1(1), 68-71.