Serious Games and Their Role in Defense Modeling, Simulation, and Analysis
DEPARTMENT OF DEFENSE MS&A GAME APPLICATIONS
There are strong driving applications to which the Department of Defense (DoD) could apply a science of games if one existed. In the health domain, we easily envision medical training on vital systems in game form. An example of this is in America’s Army—the full three-lecture series on combat lifesaving is in the game, including the test you take in the real Army! When the player passes that test, he can then act as a combat medic in the game.
In the public policy domain, we foresee games similar to SimCity, maybe SimNavy, where resource allocation and policy change can be explored for their effect before implementation. This makes the very large assumption that the resource models underneath the game are accurate and verified.
The value of games for strategic communication has already been demonstrated by the America’s Army game. With wartime recruitment at an all-time low for the Services, it is clear that more work in this domain is essential to reach eligible youth.
Training and simulation using game technology and creativity are an obvious direction for the DoD; but for this approach to become fully effective, the department needs to coordinate its efforts more closely. Right now, game development for defense training is handed off to individual contractors, who are not necessarily tightly coupled to DoD requirements. Such arm’s-length contracted efforts lead to systems poorly connected to the ever-changing requirements of the department. Additionally, those games are built with company-proprietary technologies and resources that cannot be repurposed for other parts of the department without exorbitant payments.
With a university-affiliated research center (UARC) in place for this technology, we would see game-based rapid mission rehearsal systems: games deployable by the soldier in minutes rather than months. We would see our next-generation combat modeling and analysis systems with gamelike interfaces rather than complicated menus and submenus. We can imagine a training system that recreates a virtual Fallujah, with all the excitement and stress of the real historical battle, followed by a careful and considered game-based take on nation building. We can imagine building in situ resource utilization games that allow us to explore colonizing the Moon in order to control space for defense purposes. There is clearly much potential for a UARC with a game focus.
With the development of the Army’s SIMNET system, starting in 1983, the era of modern modeling and simulation began. The signature of the new era was the inclusion of the three-dimensional (3-D) visual display in subsequent modeling and simulation systems. Mostly gone was the era of building large computational models whose outputs were printouts delivered to analysts for pronouncement of results. 3-D visual displays had become mainstream for the DoD modeling and simulation (M&S) world by 1990.
Beginning in 1997 with the publication of the National Research Council report Modeling and Simulation—Linking Entertainment and Defense, it became clear to DoD that the entertainment community, in particular the videogame community, was generating better-performing visual systems than defense contractors (Zyda and Sheehan, 1997). The entertainment industry was producing highly immersive games and location-based entertainment, with wonderful visual displays, great performing artificial intelligence (AI) characters, and networking scales equal in size to those required by defense. The delivery of this report was a shock to the DoD in that historically defense had been the technology leader but now the entertainment world might be overrunning that position.
By 1999, DoD moved into high gear to attempt to catch up or at least to get access to this new force for its M&S requirements. In early 1999, the chief scientist of the U.S. Army asked the chair of the committee that wrote the report just mentioned to draft an operating plan and research agenda for an organization that was to become the Institute for Creative Technologies at the University of Southern California (USC ICT). The mission for USC ICT was to focus on defense and entertainment immersive technologies for use in Army training. One of the many projects USC ICT began was the development of the Full Spectrum Warrior and Full Spectrum Command videogames, delivered in the summer of 2003.
At the same time and in parallel with the formation of USC ICT, the U.S. Navy formed the MOVES Institute, whose mission was research, application, and education on the grand challenges of modeling, virtual environments, and simulation. The MOVES Institute carries out research on 3-D visual simulation, networked virtual environments, computer-generated autonomy, human performance engineering, and game-based simulation. The MOVES Institute developed its highly successful game, America’s Army, and posted it on the Internet on July 4, 2002. America’s Army became the fastest-growing online game of all time and started a much larger discussion on the use of game technology and creativity inside DoD. Everyone wants their next-generation training system to be as beautiful and as easy to operate as America’s Army. America’s Army became the first delivered serious game to have a major impact, and its continued impact crosses beyond the boundaries of defense to the corporate world, where interest in serious game production is also large.
There is great potential for transforming the military’s MS&A efforts with serious games if that technology and creativity is deployed appropriately and a framework, or science of games, is created to support deployment. Serious games will not achieve their potential if we fall into the same patterns of hype and magic that prevailed in the early days of artificial intelligence and virtual reality. Before we can begin to define a research agenda for the science of games, we need a few definitions.
WHAT IS A GAME, AND WHAT IS A SERIOUS GAME?
The word “game” is emotionally charged, with the strength of the emotion breaking along the generation gap issue: Did you play videogames growing up? We define a videogame as a mental contest according to certain rules, played with a computer, for amusement, recreation, or winning a stake. We define a serious game as a mental contest according to certain rules, played with a computer, which uses entertainment to further the objectives of government or corporate training in fields such as education, health, public policy, and strategic communication.
The typical organization for developing videogames for entertainment and serious purposes is illuminating. Bing Gordon, chief creative officer at Electronic Arts, thinks of games as “story, art, and software.” There is a design team, headed by a lead designer, who is responsible for the story, the entertainment component of the game. Then there is an art team, headed by the lead artist, responsible for the look and feel of the game. Finally, there is a programming team, headed by a lead programmer, responsible for developing code that implements story requirements, interface features, networking, Web connectivity, scoring systems, AI scripting, game engine changes—just about anything technical and programmatic required for the entire development effort (Zyda et al., 2005). Note that serious games have more than just story, art, and software. Serious games have pedagogy too: the activities of educating, instructing, or teaching that impart knowledge.
The activities of educating, instructing, or teaching that impart knowledge or skill are exactly what is added to games to make them serious. Notice that pedagogy has to be subordinate to story: story is the entertainment part, and it comes first; once that is worked out, then we can do the pedagogy. Pedagogy insertion, as it is called, comes from a human performance engineering team that works closely with the design team. That team has its own lead, the lead pedagogist, who is a combination of instructional scientist and subject matter expert for the domain for which we are building the serious game. We cannot build serious games simply by tossing their development to a traditional game team; that team has to interact with the instructional scientists and subject matter experts who make up the larger human performance engineering team.
Clearly, a research agenda that supports serious games also supports the entertainment industry, one of the largest industries in this country. In fact, the serious games research agenda is larger than the research agenda of the entertainment industry in that it has to carefully deal with the issue of merging pedagogy and story in videogame form.
CREATING A SCIENCE OF GAMES
The development and wide release of the America’s Army game began a revolution in thinking about the potential role of videogames in nonentertainment domains and started a discussion on how to advance the state of the art of game technology to support the entertainment and serious games of the future (MOVES Institute, 2004). DoD’s application domain interests for serious games include games for modeling, simulation, and analysis; games for training; and games for strategic communication. To carry out widespread deployment of such games, we need to define a research agenda that will get us to a science of games.
A GAMES RESEARCH AGENDA
To impact the future of serious and entertainment games, we need to undertake an R&D agenda that transforms the game production process from a handcrafted, labor-intensive effort into an effort having shorter, more predictable production timelines, increased complexity, and innovation in the produced games. We see several components of that research agenda:
Infrastructure,
Cognition and games,
Immersion, and
Serious games.
Infrastructure
Infrastructure is the underlying software and hardware necessary for the development of the future of interactive games. Infrastructure includes work on
Massively multiplayer online game architectures,
Game engines and tools,
Next-generation consoles, and
Wireless and mobile devices.
Architectures for massively multiplayer online games (MMOGs) are important for many application domains, including the military, homeland defense, and online education. The fundamental research question is, How do we develop software architectures that are dynamically extensible and semantically interoperable? That is, how do we build game or simulation clients that can connect into a running MMOG, download the appropriate code for display and interaction, and then operate with the other online players? This question interests both the gaming world and the large government game-based simulation world. There are currently no dynamic solutions to this problem, only static solutions that dramatically drive up the cost of large-scale simulation and gaming. We need to solve the MMOG architecture problem not just for game clients but also for large-scale computational architectures such as grid computing.
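One way to picture such a dynamically extensible client is sketched below in Python; the registry, handler protocol, and entity schema are all hypothetical, invented for illustration rather than drawn from any fielded MMOG architecture, and `exec()` stands in for downloading vetted code at join time.

```python
# Toy sketch of a dynamically extensible MMOG client. All names here
# are hypothetical; exec() stands in for downloading vetted
# display/interaction code from the running game.

# "Server side": code the client would download when it encounters an
# entity type it cannot yet display or interact with.
HANDLER_REGISTRY = {
    "tank": (
        "class Handler:\n"
        "    def display(self, state):\n"
        "        return f\"tank at {state['pos']}\"\n"
    ),
}

class GameClient:
    def __init__(self):
        self.handlers = {}  # entity type -> handler instance

    def _download_handler(self, entity_type):
        # Stand-in for fetching handler code over the network.
        source = HANDLER_REGISTRY[entity_type]
        namespace = {}
        exec(source, namespace)  # the dynamic extension point
        return namespace["Handler"]()

    def on_entity_update(self, entity_type, state):
        # Extend the client on first contact with a new entity type.
        if entity_type not in self.handlers:
            self.handlers[entity_type] = self._download_handler(entity_type)
        return self.handlers[entity_type].display(state)

client = GameClient()
print(client.on_entity_update("tank", {"pos": (3, 7)}))  # tank at (3, 7)
```

A production design would replace `exec()` with signed, sandboxed modules; the point here is only the shape of the problem: clients that grow new display and interaction code while the game is running.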
Game engines and tools are an important research area if we are going to attack the problem of lack of reuse in gaming and if we are to move games from crafted systems built by game industry technicians to engineered systems used widely in the government and corporate worlds. Currently the only part of the game world that uses reusable game engines is “first-person shooters.” Some attempts to broaden that usage to other domains have occurred in the America’s Army project, but they have all suffered from major limitations (Zyda et al., 2005). Those limitations include the lack of support for large terrain boxes (many game engines can handle only 1 km × 1 km, and most real-world applications require much larger spaces), onerous and expensive game engine licenses, and the general lack of game engines for the R&D and serious games communities at large. There is a need for an open source game engine, including a development tool set, that is widely available and utilized, much as Linux is. With an open source game engine, we could explore additional capabilities not provided now, including the larger terrain box, dynamic terrain, physical modeling, and other requirements ignored by the entertainment world. In addition, with an open source engine and testbed, other inadequately explored directions, such as the modeling and simulation of computer characters, story, and human emotion, become possible.
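As a toy illustration of one workaround for the small terrain box, the sketch below pages engine-sized terrain tiles in and out around the player; the 1 km tile size and loading radius are assumptions for illustration, not features of any particular engine.

```python
# Toy terrain-paging sketch: treat the engine's small terrain box as a
# tile and keep only the tiles near the player resident in memory.
TILE_KM = 1.0  # one engine-sized terrain box used as a tile (assumed)

def resident_tiles(player_x_km, player_y_km, radius=1):
    """Return (i, j) indices of the tiles to keep loaded around the player."""
    ci = int(player_x_km // TILE_KM)
    cj = int(player_y_km // TILE_KM)
    return {(ci + di, cj + dj)
            for di in range(-radius, radius + 1)
            for dj in range(-radius, radius + 1)}

# A player at (2.5 km, 2.5 km) keeps a 3 x 3 block of tiles loaded;
# everything else can stay on disk, however large the world grows.
print(sorted(resident_tiles(2.5, 2.5)))
```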
Cognition and Games
Research in cognition and games develops theories and methods for modeling and simulating computer characters and story, modeling and simulating human emotion, analyzing large-scale game play, innovating new game genres and play styles, and integrating pedagogy with story in the interactive medium of games. Work in cognition and games includes
Computer-generated autonomy,
Computer-generated story,
Modeling and simulating human emotion,
Understanding and analysis,
Pedagogy/story integration, and
Game play innovation.
Computer-generated autonomy is the modeling of human and organizational behavior in networked games. If we think of taking the technology from a game like The Sims and deploying it for a serious purpose, such as a training aid for nursing, we have the potential to model and simulate, in game form, hospital operations and the like, providing an immersive experience for the nurse trainee.
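A minimal sketch of this kind of computer-generated autonomy, assuming a hypothetical needs-driven character in the style popularized by The Sims (the needs, actions, and 0-to-1 scale are invented for illustration):

```python
# Needs-driven autonomy sketch: each need decays over time, and the
# character picks the action addressing its most urgent need.
# Needs, actions, and the 0..1 scale are illustrative assumptions.
def decay(needs, rate=0.1):
    """Needs drop toward 0 over time, making them more urgent."""
    return {k: max(0.0, v - rate) for k, v in needs.items()}

def choose_action(needs, actions):
    """Pick the action that addresses the lowest (most urgent) need."""
    most_urgent = min(needs, key=needs.get)
    return actions[most_urgent]

needs = {"rest": 0.2, "hunger": 0.6, "social": 0.8}
actions = {"rest": "sleep", "hunger": "eat", "social": "chat"}
print(choose_action(needs, actions))  # sleep
```

For a nursing-training application, the same loop could drive simulated patients and staff, with needs such as medication schedules or triage urgency in place of rest and hunger.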
Computer-generated story is the modeling of a story computationally such that we can build engines and tool suites that dramatically simplify the deployment of a new story for our networked game.
Modeling and simulation of human emotion is the frontier for networked games and simulations. For the entertainment world, the future of gaming includes developing an immersive gaming experience that has an emotional impact on the player. For the military, homeland security and defense, and hospital trauma worlds, we need a similar game-based simulation capability. The fundamental question is, How do we model human emotion such that we can author emotional experiences in game form in a controlled and appropriate manner? There are demanding requirements for such a capability across the spectrum of entertainment and serious-game developers, and it is critical that we perform the research needed to understand the potential human impact.
Understanding and analysis are key elements of any agenda for research into games. When humans are placed into large-scale MMOGs or into single-player modules, the questions become, What happened during game play? What was the impact on the player? Current serious-game usage and large-scale simulation require human monitors to watch networked play. At the end of play, the human monitor comes back and says, “I believe this team won, and here is why.” We need an automated understanding and analysis capability for MMOG play such that we get a high-level
report on what happened during game play over a specified period of time, from a particular viewpoint, with the ability to query that system for additional detailed information on why it reported as it did. There are defense, homeland security, and educational applications that require such automated analyses if we are to extend gaming much further into the serious-game domain.
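A bare-bones sketch of what such automated analysis might look like, assuming play is recorded as timestamped events; the event schema, team names, and report shape are hypothetical:

```python
# Sketch of automated after-action analysis over a recorded event log.
# The event schema, teams, and report shape are illustrative only.
from collections import Counter

events = [
    {"t": 10, "team": "blue", "action": "capture", "target": "bridge"},
    {"t": 25, "team": "red",  "action": "capture", "target": "depot"},
    {"t": 40, "team": "blue", "action": "capture", "target": "depot"},
]

def report(events, start, end, viewpoint=None):
    """Summarize play in [start, end], optionally from one team's viewpoint."""
    window = [e for e in events
              if start <= e["t"] <= end
              and (viewpoint is None or e["team"] == viewpoint)]
    counts = Counter(e["team"] for e in window)
    # Keep the raw events so a follow-up query can ask *why* it reported this.
    return {"events": window, "captures_by_team": dict(counts)}

print(report(events, 0, 30)["captures_by_team"])  # {'blue': 1, 'red': 1}
```

The real research problem is inferring higher-level judgments ("this team won, and here is why") rather than counting events, but even this shape shows the required pieces: a time window, a viewpoint, and retained evidence for drill-down queries.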
Pedagogy/story integration is the insertion of pedagogy into a story, such that the story is immersive and entertaining, with pedagogy remaining subordinate to story. The game industry has experienced the failure of Edutainment, where educational software was sprinkled lightly with gamelike interfaces and cuteness. The story must come first and we must then learn how to insert pedagogy into the story creation and development process in the interactive medium of games.
Immersion
The sense of presence in a game is called immersion. It includes the following:
Computer graphics, sound, and haptics,
Affective computing—sensing the human state and emotion, and
Advanced user interfaces.
R&D is needed on the technologies for engaging the mind of the game player by means of sensory stimulation, for developing theories of presence, and for affective computing to sense human physical state and emotion.
Research on sensory channels is fundamental to the science and technology of games. As we move toward more capable graphics engines, we need to know how to utilize that new capability appropriately in our serious games, and we need to generate new technology that can be put into the next-generation graphics chipsets that industry provides. Spatial and immersive sound are key components of whatever training and educational systems we build with gaming. Sound engineering and human performance engineering need to advance to make sure sound is deployed appropriately and usefully for our serious purposes. Cross-modal sensory conflicts are another area for research. Haptics is also key to the future of games. If we believe that the R&D work we are performing now will be used for the technology that the game industry deploys in 10 to 25 years, there is still much to be done to improve sensory stimulation.
Affective computing entails measuring the physical and emotional state of human beings and transferring it to computer software. In the next 2 years, low-cost sensors will be available that measure the emotional state of the human and provide that as input to the running game. Devices will be needed to read the sensors and input the person’s emotional state to the game. The game will need to be able to use that state as one of many inputs and respond appropriately. We do not really know what this response will look like. We do not have good models of human emotion nor do we have good models of how our computer characters should react to such inputs. We do know that such inputs will have a major impact on both the entertainment games and the serious games of the future. We need to understand that impact and to engineer and author it in a careful and controlled fashion. This type of research effort has the potential to broaden the scope and genres of entertainment and serious games. We may get to the point where a videogame not only makes us cry but knows that we are crying.
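As a hedged sketch of how sensed emotional state might feed a running game, assume a hypothetical arousal sensor normalized to the range 0.0 (calm) to 1.0 (highly stressed); the thresholds and pacing responses below are invented for illustration, since, as noted above, we do not yet have good models of how a game should react.

```python
# Affective-input sketch: a hypothetical arousal reading (0.0 calm ..
# 1.0 highly stressed) becomes one input the game responds to.
# The thresholds and pacing decisions are illustrative assumptions.
def pacing_for_arousal(arousal):
    """Map a sensed arousal level to an illustrative pacing decision."""
    if arousal < 0.3:
        return "raise_tempo"  # player seems under-stimulated
    if arousal < 0.7:
        return "hold_tempo"   # engaged but not overwhelmed
    return "ease_off"         # back off before stress degrades play

print(pacing_for_arousal(0.9))  # ease_off
```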
Presence is the immersive experience offered to the game player or virtual reality explorer. Whether we are building a virtual reality or a game, we are attempting to give players the illusion that they are in a virtual world. We need to be able to engineer presence so that we can create the effect we want rather than just hoping that it turns out as we wish.
Advanced user interfaces become key as we move from the standard desktop PC to the mobile platform. There is much to be gained by studying how the game industry has developed interfaces that are almost universal—for example, if you can play Quake, you can play Unreal Tournament. We need to understand interfaces from the game perspective if we are to make good progress in the deployment of serious games.
Serious Games
Serious games research and simulations for nonentertainment domains include
Serious game development across all application domains—health, public policy, strategic communications, defense, training, and education; and
Human performance engineering.
Serious game development is a fairly new phenomenon in the game world, and if the proper research is conducted it has the potential to eclipse the entertainment world in size over time. Here we are building games that use entertainment principles, creativity, and technology to carry out a government or corporate objective. As we engage in serious game development we need to establish principles, processes, and procedures for such deployment—usually called human performance engineering. If training and education are the objectives of our serious game, then we need to understand how to use the creativity of the entertainment world and combine it with appropriate human performance engineering principles. It is for that reason alone that the first serious games should (in fact must) be constructed in carefully controlled university or laboratory environments.
Zyda, M., and J. Sheehan, eds. 1997. Modeling and Simulation: Linking Entertainment and Defense. Washington, D.C.: National Academy Press.
Zyda, M., A. Mayberry, J. McCree, and M. Davis. 2005. “From Viz-Sim to VR to games: How we built a hit game-based simulation,” Organizational Simulation: From Modeling and Simulation to Games & Entertainment. W.B. Rouse and K.R. Boff, eds. New York, N.Y.: Wiley.