A number of training simulations and devices currently fielded go a long way toward meeting the capability needs of tactical small units. This appendix highlights several areas in which simulation technologies spanning the live, virtual, and constructive training spectrum could accelerate training and amplify its effectiveness.
There are now established protocols for performing a cognitive task analysis (Clark et al., 2008). The result of this process is a set of learning objectives and knowledge that can be used directly in a guided experiential learning system. What is needed is a set of authoring tools that support cognitive task analysis and instructional design and that guide training developers through the steps of creating a training package using experiential, realistic live, virtual, and constructive simulations. The authoring tools should enable training developers to author and link learning objectives, assessments, and feedback for coaching in the context of an experiential training scenario.
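To make the linkage concrete, the sketch below shows one way an authoring tool might represent a training package internally, with each learning objective tied to its assessments and coaching feedback. This is an illustrative data model only; all class and field names are assumptions, not part of any fielded system.

```python
from dataclasses import dataclass, field

# Hypothetical data model for an authoring tool that links learning
# objectives (from cognitive task analysis) to assessments and feedback.

@dataclass
class Assessment:
    description: str        # observable trainee behavior to measure
    passing_criterion: str  # e.g., "breach cleared in under 90 seconds"

@dataclass
class Feedback:
    trigger: str   # condition under which coaching is delivered
    message: str   # coaching content shown to the trainee

@dataclass
class LearningObjective:
    objective_id: str
    statement: str  # objective statement produced by the task analysis
    assessments: list[Assessment] = field(default_factory=list)
    feedback: list[Feedback] = field(default_factory=list)

@dataclass
class TrainingPackage:
    scenario_name: str
    objectives: list[LearningObjective] = field(default_factory=list)

    def objectives_without_assessment(self):
        """Flag an authoring gap: objectives with no linked assessment."""
        return [o for o in self.objectives if not o.assessments]
```

A tool built on such a model can validate a package as it is authored, for example by flagging any objective that has no assessment attached before the scenario is exported.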
Authoring tools can also support rapid development of complex scenarios. An issue often raised is the difficulty of authoring complex scenarios for training: scenario authoring currently takes a long time and a great deal of expertise. Given a content library and a virtual environment or game engine, authoring tools should enable training developers to design a new scenario or edit an existing one in a fraction of the time currently needed. To the extent possible, the authoring tools should make use of real-world scenario data, but the scenario should relate to the learning objectives produced during instructional design.
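One simple way a tool could tie a content library to learning objectives is to tag each library asset and select assets whose tags overlap the objectives of the scenario being authored. The sketch below assumes an invented library and tag set purely for illustration.

```python
# Illustrative content library: each asset carries tags that an
# authoring tool could match against a scenario's learning objectives.
CONTENT_LIBRARY = [
    {"asset": "market_square", "type": "terrain", "tags": {"urban", "crowd"}},
    {"asset": "checkpoint_kit", "type": "props", "tags": {"vehicle-search"}},
    {"asset": "civilian_crowd", "type": "entities", "tags": {"crowd", "escalation"}},
]

def build_scenario(name, objective_tags):
    """Select every library asset whose tags overlap the objective tags."""
    assets = [item for item in CONTENT_LIBRARY
              if item["tags"] & objective_tags]  # set intersection
    return {"scenario": name,
            "objective_tags": sorted(objective_tags),
            "assets": [a["asset"] for a in assets]}

scenario = build_scenario("crowd-control-drill", {"crowd", "escalation"})
```

Because every selected asset traces back to an objective tag, the resulting scenario file documents why each piece of content is present, which supports both rapid editing and later review.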
Authoring tools can also improve the content creation pipeline that supports the virtual worlds used in games and simulations for training. One of the greatest expenses in the development of games and simulations is the creation of the artwork and animation needed to bring about the desired learning effects. The cost and time of training development can potentially be reduced by an order of magnitude by content development pipelines that automatically acquire, model, and animate objects and people, reducing the need for artist support. While commercial game developers and special-effects houses are developing tools to make rapid content development less expensive, they typically do not make these tools available to others because of the competitive nature of the marketplace.
TOOLS FOR IMMERSION
An important challenge is immersing the trainee in a virtual world for more effective learning while keeping the cost of the display system low. It is becoming increasingly possible to develop low-cost, small-footprint, ruggedized, wireless, head-mounted display systems that provide the learner with a realistic visual experience in the virtual world. Further, it would be ideal to improve head-mounted displays so that they fit the size and form factor of the soldier's protective eyewear.
Another alternative worthy of further investigation is to bring the virtual world out into the physical world. This concept was demonstrated in the Future Immersive Training Environment Joint Capabilities Technology Demonstration at the Infantry Immersion Trainer (IIT) at Camp Pendleton. The IIT used display screens featuring interactive characters integrated into a physical site used in training for military operations in urban terrain. An alternative to display screens is head-mounted projectors paired with retroreflective material, which give each viewer an individualized view of the world. Training immersion can also be achieved through naturalistic interfaces to computer systems and through autonomous animated nonplayer characters and teams.
For dismounted movement in games and simulations, the current standard interface is a game controller or a joystick, which is not a natural way of moving or using one's body in the virtual world. The training technology community should leverage the trend toward vision-based interfaces that track the body and facial expressions and infer gestural meaning. In addition, the community needs to leverage advances in speech recognition and natural language processing to enable conversational interfaces to games and simulations.
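The idea of replacing the game controller with body tracking can be sketched in a few lines: tracked joint positions, as a body-tracking sensor might report them, are mapped to the locomotion commands a controller would otherwise supply. The joint names, thresholds, and command vocabulary below are all assumptions for illustration.

```python
# Minimal sketch of a vision-based movement interface. Input is a dict
# of tracked joint name -> (x, y) position in meters, with y up and x
# forward; output is a locomotion command for the simulation.

def infer_command(joints):
    """Map a tracked body pose to a movement command."""
    lean = joints["torso"][0] - joints["hips"][0]  # forward lean of torso
    if joints["right_hand"][1] > joints["head"][1]:
        return "halt"            # hand raised above the head
    if lean > 0.15:
        return "move_forward"    # torso leaning forward past threshold
    if lean < -0.15:
        return "move_backward"   # torso leaning backward
    return "stand"

command = infer_command({
    "head": (0.0, 1.7), "torso": (0.2, 1.2),
    "hips": (0.0, 1.0), "right_hand": (0.3, 1.2),
})
```

A real interface would smooth the tracked positions over time and calibrate thresholds per user, but the core mapping from pose to command takes this simple rule-based form.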
Autonomous Nonplayer Characters
One of the limitations of using platforms like VBS2 and other game-based simulations is that the avatars of the opposing forces and civilians have to be controlled by other humans. Like the Janus simulator, which required six people to train one person, this is a costly way to do business. While many of these systems have semiautonomous forces, their capabilities still require supervision by human exercise controllers. An alternative is to develop programmable
autonomous characters and teams that will play the roles of the opposing force and civilians across all the mission sets: offense, defense, and wide-area support operations in the Dismounted Soldier Training System. Autonomous characters should be capable of perceiving objects and entities in the virtual environment, making plans and decisions, taking coordinated action with or against human teams, and providing feedback to the after-action reporting system. They should be capable of being used in game-based environments such as VBS2 and the CryEngine as well as in a Massively Multiplayer Online Game environment such as EDGE, which is currently being developed by the Army Research, Development and Engineering Command. They should also be capable of explaining their actions and decisions during after-action review so that squad members can see how their actions affected the opposing force.
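The capabilities listed above amount to a perceive-decide-act cycle with a decision log that feeds the after-action review. The sketch below is a hedged illustration of that cycle, not an implementation from any of the systems named; the world model, threat rules, and action vocabulary are invented.

```python
# Hypothetical autonomous nonplayer character: it perceives visible
# entities, decides on an action, and logs the reason for each decision
# so the after-action review can explain its behavior to the squad.

class AutonomousCharacter:
    def __init__(self, name):
        self.name = name
        self.decision_log = []  # consumed by the after-action review

    def perceive(self, world):
        """Return the entities this character can currently observe."""
        return [e for e in world["entities"] if e["visible"]]

    def decide(self, observations):
        """Choose an action and record the reasoning behind it."""
        threats = [o for o in observations if o["hostile"]]
        if threats:
            action, reason = "take_cover", f"observed {len(threats)} threat(s)"
        elif observations:
            action, reason = "observe", "contacts present, none hostile"
        else:
            action, reason = "patrol", "no contacts observed"
        self.decision_log.append({"action": action, "reason": reason})
        return action

world = {"entities": [{"visible": True, "hostile": True},
                      {"visible": False, "hostile": True}]}
npc = AutonomousCharacter("opfor_1")
action = npc.decide(npc.perceive(world))
```

The point of the decision log is explainability: after the exercise, each logged action-reason pair can be replayed so squad members see what the opposing force observed and why it reacted as it did.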
Clark, R.E., D. Feldon, J.J.G. van Merriënboer, K. Yates, and S. Early. 2008. Cognitive task analysis. Pp. 577-594 in Handbook of Research on Educational Communications and Technology, edited by J.M. Spector, M.D. Merrill, J.J.G. van Merriënboer, and M.P. Driscoll. Mahwah, N.J.: Lawrence Erlbaum Associates.