2—
Setting a Common Research Agenda

The entertainment industry and the U.S. Department of Defense (DOD) are both interested in a number of research areas relevant to modeling and simulation technology. Technologies such as those for immersive simulated environments, networked simulation, standards for interoperability, computer-generated characters, and tools for creating simulated environments are used in both entertainment and defense applications. Each of these areas presents a number of research challenges that members of the entertainment and defense research communities will need to address over the next several years. Some of these areas may be amenable to collaborative or complementary efforts.

This chapter discusses some of the broad technical areas that the defense and entertainment research communities might begin to explore more fully to improve the scientific and technological base for modeling and simulation. Its purpose is not to provide answers to the research questions posed in these areas but to help elucidate the types of problems the entertainment industry and DOD will address in the coming years.

Technologies for Immersive Simulated Environments1

Immersive simulated environments are central to the goals and needs of both DOD and the entertainment industry. Such environments use a variety of virtual reality (VR) technologies to enable users to interact directly with modeling and simulation systems in an experiential fashion, sensing a range of visual, auditory, and tactile cues and manipulating objects directly with their hands or voice.

Such experiential computing is best described as using a computer, or interacting with a network of computers, through a user interface that is experiential rather than cognitive: if users have to think about the interface, it is already in the way. Traditional military training systems are experiential computing systems applied to a training problem. VR technologies can allow people to perform tasks and experiments directly, much as they would in the real world. As Jack Thorpe of SAIC pointed out at the workshop, people often learn more by doing and understand more by experiencing than by passively viewing or hearing information. This is why VR is so appealing to user interface researchers: it provides experience without forcing users to travel through time or space, face physical risks, or violate the laws of physics or rules of engagement. Unfortunately, creating effective experiences with virtual environments is difficult and often expensive, requiring advanced image generators and displays, trackers, input devices, and software.

Experiential Computing in DOD

The most prominent use of experiential computing technology in DOD is in personnel training systems for aircraft and ground vehicles. DOD also has a series of initiatives under way to develop advanced training systems for dismounted infantry that rely on experiential computing. Such programs are gaining increased attention in DOD and will become a primary driver behind the military's efforts to develop and deploy technologies for immersion in synthetic environments. They are being undertaken in coordination with attempts to develop computing, communications, and sensor systems that provide individual soldiers with relevant intelligence information.2

Experiential computing, as applied to flight and tank simulation, is a mature science at DOD and has been essential to military training organizations for decades. A number of organizations, including the U.S. Army's Simulation, Training, and Instrumentation Command (STRICOM) and the Naval Air Warfare Center's Training Systems Division, have extensive historical reference information to draw on in specifying the requirements for new immersive training systems.

For traditional training and mission rehearsal functions, the current need is to reduce the cost of immersive systems.

Existing mission rehearsal systems based on image generators like the Evans and Sutherland ESIG-4000 serve the Army's Special Operations Forces well, allowing them to fly at low altitudes above high-resolution geo-specific terrain for hundreds of miles and enabling them to identify specific landmarks along their planned flight path to guide them on their actual mission. Unfortunately, these dome-oriented trainers have typically cost upward of $30 million, making it impractical either to procure many simulators or to train many pilots. Cost reductions would allow more widespread deployment of such systems.

The U.S. Navy is using experiential computing technologies for both training and enhanced visualization. For battleships, an advanced battle damage reporting system allows a seaman on the battle bridge to navigate a three-dimensional (3D) model of the ship to identify where damage has occurred, where the best escape routes are for trapped seamen, and which routes rescue and repair crews should take. In another Navy application, developed at the Naval Command Control and Ocean Surveillance Center's Research, Development, Test, and Evaluation Division (referred to as NRaD), submarines are fitted with an immersive system that generates a view of the outside world for the commander while the boat is submerged. Because submarine crews cannot normally look outside the boat except when it is on the surface, this virtual window provides a view not only of the seafloor (created through the use of digital bathymetric data) but of the tactical environment as well, with other ships, submarines, sonobuoys, and sea life represented clearly and spatially so that the commander can gain a better understanding of the tactical and navigational situation.

In the nonimmersive domain, experiential computing technology is being leveraged by both the Naval Research Lab (NRL) and the Army Research Lab (ARL) in the form of a stereoscopic table-based display, known at NRL as the Responsive Workbench and at ARL as the Virtual Sandtable. The Responsive Workbench was invented in 1992 at the German National Computer Science and Mathematics Institute outside Bonn; NRL duplicated the bench and began exploring how it could be used in a variety of applications. The concept of the workbench is simple. The bench itself is a table 6 feet long and 4 feet wide, standing 4 feet off the floor. The tabletop is translucent, and a mirror sits underneath it at a 45-degree angle. A projector behind the table shines on the mirror and up onto the table surface from below, creating a stereoscopic image on the tabletop. Users wear stereoscopic glasses and a head tracker; as they move their heads, the image changes to reflect that motion, and objects appear to sit on the table like a physical model. An Army application of this technology is a re-creation of the traditional sand table, on which forces are laid out and moved around to plan strategies and tactics or to review a training exercise.
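
The head-coupled rendering that makes workbench imagery appear to rest on the tabletop can be sketched compactly. The following is a minimal illustration of the general technique, not code from NRL or ARL; the table dimensions come from the description above, while the interpupillary distance, near-plane value, and sample head pose are assumptions.

    import numpy as np

    # Tabletop display corners in room coordinates (feet), matching the
    # 6-ft by 4-ft bench standing 4 ft off the floor described above.
    pa = np.array([0.0, 0.0, 4.0])   # lower-left corner of the screen
    pb = np.array([6.0, 0.0, 4.0])   # lower-right corner
    pc = np.array([0.0, 4.0, 4.0])   # upper-left corner

    IPD = 0.21  # interpupillary distance in feet (~65 mm, assumed)

    def eye_positions(head, right):
        """Left/right eye positions: the tracked head position offset by
        half the interpupillary distance along the head's right vector."""
        r = right / np.linalg.norm(right)
        return head - r * IPD / 2.0, head + r * IPD / 2.0

    def offaxis_frustum(eye, near=0.1):
        """Near-plane frustum extents (l, r, b, t) for one eye viewing the
        fixed screen, so imagery stays glued to the physical tabletop."""
        vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right
        vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up
        vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
        va, vb, vc = pa - eye, pb - eye, pc - eye         # eye-to-corner
        d = -va.dot(vn)                                   # eye-screen distance
        return (vr.dot(va) * near / d, vr.dot(vb) * near / d,
                vu.dot(va) * near / d, vu.dot(vc) * near / d)

    # Each frame: read the head tracker, then render the scene twice.
    left_eye, right_eye = eye_positions(np.array([3.0, 2.0, 5.5]),
                                        np.array([1.0, 0.0, 0.0]))
    print(offaxis_frustum(left_eye), offaxis_frustum(right_eye))

Because the frustum is recomputed from the tracked head position every frame, objects drawn in table coordinates hold their apparent positions as a viewer walks around the bench, which is what creates the physical-model illusion.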

Coryphaeus Software of Los Gatos, California, is commercializing a similar product, the Advanced Tactical Visualization System, which operates with the commercial version of the Responsive Workbench, the Immersive Workbench from Fakespace Inc. Because commanders are used to working with scale models of battlefields and maps, they can easily accommodate this type of display.

Experiential Computing in the Entertainment Industry

Effective experiential computing systems demand real-time graphics, and in the entertainment industry the return on that investment must be considered. The high cost of immersive technologies has slowed their expansion into entertainment settings. Nevertheless, an increasing number of location-based entertainment attractions and home systems are emerging. Most systems in operation fall into one of three categories: (1) arcade systems, (2) location-based entertainment centers, and (3) VR attractions at theme parks. Location-based entertainment centers and arcades offer stand-alone systems that allow participants to drive down a race course, ski down a mountain, or play virtual golf, as well as networked flight simulators that allow players to fly interactively through a virtual environment and engage targets (including each other). Disney has developed a VR attraction based on its film Aladdin, and Universal Studios has developed a ride based on Back to the Future. Now that the costs of real-time graphics systems are dropping, the list of VR experiences for entertainment is likely to expand, and home applications will become more prevalent. Three-dimensional graphics are becoming more widely available on home computers, and the number and variety of peripheral devices, such as throttle-like joysticks and mock-ups of fighter cockpits, are expanding. Continued reductions in cost coupled with increases in capability will likely stimulate further expansion of the home market.

Research Challenges

Several areas of experiential computing would benefit from additional research, and much of this work would be applicable to both defense and entertainment applications of experiential computing technology. Technologies for image generation, tracking, perambulation, and virtual presence are of interest to both communities, but research priorities tend to be very different.

For example, the factors guiding development of the microprocessors at the heart of the new Nintendo 64 game machine are very different from those DOD would have set were it specifying a deployable, low-cost, real-time simulation and training device. The Nintendo system was designed for operation with a television and uses an interlaced scanning technique and low-resolution graphics. Most training systems would require higher resolution, both to let participants identify specific features of the environment more easily and to avoid eye strain during periods of extended use, and would likely use a progressive scan system similar to most computer monitors. Thus, for military purposes it might be possible to leverage a variant of the Nintendo 64 processor, but the actual processor would probably not do the job.

Image Generation

Visual simulations in defense and entertainment applications share a common need for image generators with a range of capabilities and costs. On the entertainment side, low-cost platforms such as personal computers (PCs) and game boxes, such as those manufactured by Sega or Nintendo, underlie the video games industry. PCs also serve as the primary point of entry to the Internet and therefore are critical to companies providing on-line entertainment, whether through so-called chat rooms or multiplayer games. Larger location-based entertainment centers, such as the flight simulator centers operated by Virtual World Entertainment and the Magic Edge, are also interested in moving from workstation-based simulators to PC-based simulators as a means of reducing operating costs.

Image generation has long benefited from close linkages between the commercial and defense industries. From its early roots at Evans and Sutherland (E&S) and GE Aerospace, the image generator industry responded largely to defense needs because volumes were low and prices high, typically in the millions of dollars. The high cost limited the use of such simulators outside DOD. Nevertheless, the E&S CT5 (circa 1983) and the GE Compuscene 4 image generators were the benchmarks by which all interactive computer graphics systems were measured for years. At about the same time, interactive 3D graphics began to migrate into commercial applications. Stanford University professor James Clark and seven of his graduate students founded Silicon Graphics Inc. to bring real-time graphics to a broad range of applications. Other companies soon followed, creating the now-pervasive commercial market for real-time 3D graphics. As a result, image generation capabilities that cost over $1 million in 1990 are now available on the desktop for one-thousandth (1/1,000) of that price—a drop of three orders of magnitude in less than a decade.
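
That rate of decline is easily quantified. The short sketch below works out the annual improvement implied by the figures just cited; the seven-year span is an assumption standing in for "less than a decade."

    import math

    # Annual price/performance gain implied by image generation that cost
    # over $1 million in 1990 selling for about 1/1,000 of that price
    # within less than a decade (seven years assumed for illustration).
    factor, years = 1000.0, 7
    annual = factor ** (1.0 / years)                     # ~2.7x per year
    doubling_months = 12.0 * math.log(2) / math.log(annual)
    print(f"~{annual:.2f}x per year, doubling roughly every "
          f"{doubling_months:.0f} months")

The implied doubling time of roughly eight to nine months is consistent with the nine-month doubling for graphics chips predicted below.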

The improvement in price/performance ratios results from both technological advances and a related growth in demand for 3D graphics. By driving up production volumes, increased demand has lowered costs significantly, and the entrance of new competitors into the market has accelerated the pace of innovation and resulted in further declines in cost. As real-time 3D becomes a commodity, the true cost of image generation is shifting to software—the time and resources required to model virtual worlds. As commercial systems become more capable, more opportunities will exist for DOD and the entertainment industry to work together on image generation capabilities, coupling fidelity with the lower costs that stem from producing larger volumes.

A number of existing and emerging technologies could potentially be used for DOD training applications. Low-cost 3D image generators already exist that can support robust dynamic 3D environments, ranging from game machines such as the Nintendo 64 to low-cost graphics boards for PCs manufactured by companies such as 3Dfx and Lockheed Martin. Improvements in low-cost image generators depend on advances in six underlying technologies: processors, 3D graphics boards, communications bandwidth, storage, operating systems, and graphics software. The commercial computer industry will play the leading role in bringing such technologies to market but will continue to draw from a larger national technology base created by both public and private research programs. Advances in high-end DOD systems may also yield capabilities that can be applied in less expensive systems.

Processing power continues to increase with each new generation of microprocessors. Current microprocessors operate at speeds of 200 megahertz or more, and many include multiprocessor logic that allows several (typically four to eight) processors to work together on a common problem. In the area of 3D graphics boards, some 30 to 40 companies currently offer boards for PCs. As a result, David Clark of Intel Corporation predicts that the performance of graphics chips (the number of polygons generated per second) may double every nine months—twice as fast as processors are improving. Inexpensive chips will soon be able to generate upward of 50 million textured pixels per second. New communications architectures for PC graphics, such as Intel's accelerated graphics port, will enable over 500 megabytes per second of sustained bandwidth, letting designers rapidly transfer texture maps from main memory and thus keeping the cost of 3D graphics low. Because of such advances, producers of PC hardware and software see 3D graphics as a growing application area and are moving quickly to commercialize 3D graphics technology.
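
The bandwidth and fill-rate figures above translate directly into per-frame budgets. A minimal sketch follows; the 30-hertz update rate is an assumed target, not a figure from the text.

    # Per-frame budgets implied by the figures above, assuming a 30-Hz
    # real-time update rate.
    agp_bandwidth = 500e6      # sustained texture bandwidth, bytes/second
    fill_rate = 50e6           # textured pixels generated per second
    fps = 30                   # assumed frame rate for a real-time system

    texture_mb_per_frame = agp_bandwidth / fps / 1e6
    pixels_per_frame = fill_rate / fps
    screens_per_frame = pixels_per_frame / (640 * 480)
    print(f"~{texture_mb_per_frame:.1f} MB of texture streamed per frame")
    print(f"~{pixels_per_frame/1e6:.2f} M textured pixels per frame, "
          f"about {screens_per_frame:.1f} redraws of a 640x480 screen")

Budgets of this size, tens of megabytes of texture and several full-screen redraws per frame, are what make PC-class hardware plausible for the dynamic 3D environments described here.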

Both the Windows NT and UNIX operating systems support PC-based graphics, and a number of software vendors are porting their applications from the workstation to the PC environment. Multigen Inc. has announced that it is making products available for Windows NT systems, and Gemini Corporation has ported its Gemini Visualization System. Microsoft Corporation's purchase of Softimage, manufacturer of high-end graphics creation software used by both DOD and the entertainment industry, promises to accelerate the graphics capabilities of PCs.

Tracking

Position and orientation tracking, an area that has seen insufficient innovation in the past decade, continues to hamper advanced development in experiential computing. Today's tracking systems include optical, magnetic, and acoustic systems. The most popular trackers are the AC and DC magnetic systems from Polhemus Corporation and Ascension Technologies, respectively. These systems have fairly high latency, marginal accuracy, moderate noise levels, and limited range. New untethered tracking systems from Ascension reduce the intrusiveness of being wired up but still require the user to wear a large magnet.

Tracking remains a barrier to free-roaming experiences in virtual environments. To meet the goals of the U.S. Army's STRICOM for training dismounted infantry, long tracker range, resistance to environmental effects from light and sound, and minimal intrusion are key to ensuring that tracking does not get in the way of effective training (see position paper by Traci Jones in Appendix D). Similar requirements were expressed at the workshop by Scott Watson of Walt Disney Imagineering. Magnetic tracking is currently used to detect head position and orientation in Disney's Aladdin experience and other attractions, despite the fact that the latency of such systems is roughly 100 milliseconds—long enough to contribute to symptoms of simulator sickness.3 As the graphics engines rendering virtual environments become faster, tracker lag accounts for a growing proportion of total system delay. Some optical trackers currently yield good results but have problems with excessive weight and directional and environmental sensitivity. Experiments with novel tracking technologies based on tiny lasers are showing promise, but much more work needs to be done before untethered, long-range trackers with six degrees of freedom are broadly available in the commercial domain.
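
The cost of the roughly 100-millisecond lag cited above is easy to make concrete. In the sketch below, the head-turn rate is an assumed, moderate value; the latency figures are for illustration.

    # Angular error between where a user's head points and what the
    # display shows, for a given end-to-end tracking latency. The
    # 100 deg/s head-turn rate is an assumed, moderate value.
    def lag_degrees(latency_ms, turn_rate_deg_per_s=100.0):
        """Degrees the rendered world trails a turning head."""
        return turn_rate_deg_per_s * latency_ms / 1000.0

    for latency_ms in (100, 50, 10):
        print(f"{latency_ms:3d} ms latency -> scene trails the head "
              f"by {lag_degrees(latency_ms):.0f} degrees")

A swim of 10 degrees across the entire visual field during an ordinary head turn is easily perceptible, which is why tracker latency grows in relative importance as rendering itself gets faster.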

While untethering the tracker is the current next-step goal, the ideal tracker would be not only untethered but also unobtrusive. Any device that must be worn or held intrudes on the personal space of the individual, and all current tracking systems suffer from this problem except for some limited-functionality video tracking systems. Video recognition systems are typical examples of unobtrusive trackers, allowing users to be tracked without wearing anything (except for the University of North Carolina video tracker, which actually had users wear cameras). While this is the ideal, it is difficult to implement effectively and thus has seen only limited application. Examples include Myron Krueger's VideoPlace and Vincent John Vincent's Mandala system.

Perambulation

Improved technologies are also needed to support perambulation in virtual environments. The U.S. Army's STRICOM has funded the development of an omnidirectional treadmill to explore issues associated with implementing perambulation in virtual environments, a topic that applies to entertainment uses of VR as well. Allowing participants in a virtual environment to wander around, explore, and become part of a story would greatly enhance the entertainment value of an attraction. It would also enable residents of a particular neighborhood to wander around a synthetic re-creation of that neighborhood, with a natural perspective and a natural user interface, to see how a proposed nearby development would affect their area. Research is needed to improve current designs and to create perambulatory interfaces that allow users to fully explore virtual environments with floors of different textures, lumps, hills, obstructions, and other elements that cannot easily be simulated using a treadmill.

Technologies for Virtual Presence4

Virtual presence is the subjective sense of being physically present in one environment when actually present in another.5 VR researchers have hypothesized that inducing a feeling of presence in individuals experiencing virtual environments is important if they are to perform their intended tasks effectively. How to create this sense of presence is not well understood at this time, but among its potential benefits may be (1) providing the specific cues required for task performance, (2) motivating participants to perform to the best of their abilities, and (3) providing an overall experience similar enough to the real world that it elicits the conditioned or desired response while in the real world. Several technologies may contribute to virtual presence:

• Visual stimulus. Visual display is the primary means of fostering presence in most of today's simulators. However, because of insufficient consideration of the impact of granularity, texture, and style in graphics rendering, the inherent capability of the available hardware is not used to the greatest effect. One potential area of collaboration could be to investigate visual stimulus requirements and the design approaches that would improve graphics-rendering devices to satisfy them.

• Hearing and 3D sound. DOD has initiated numerous efforts to improve the production of 3D sound, but the technique has not yet been used effectively in military simulations. More realistic sound in a synthetic environment can improve the fidelity of the sensory cues perceived by participants in a simulation and help them forget they are in a virtual simulated environment.

• Olfactory stimulus. Smell can contribute to task performance in certain situations and to a full sense of presence in a synthetic environment. Certain distinctive smells serve as cues for task initiation: a smoldering electrical fire can trigger particular concerns in individuals participating in a training simulator, and smells such as that of hydraulic fluid can enhance a synthetic environment to the extent of creating a sense of danger.

• Vibrotactile and electrotactile displays. Touch and feel can also be engaged to create an enhanced synthetic environment. Current simulator design has concentrated on moving the entire training platform while often ignoring the importance of surface temperature and vibration in creating a realistic environment.

• Coherent stimuli. An area that has received little research is the coherent application of the stimuli listed above to create an enhanced synthetic environment. Although each stimulus may be valid in isolation, the real challenge is achieving the correct level and intensity of combined stimuli.

Electronic Storytelling

Part of what makes a simulated experience engaging and realistic has nothing to do with the fidelity of the simulation or the technological feats involved in producing high-resolution graphics and science-based modeling of objects and their interactions. These qualities are certainly important, but they must be accompanied by skilled storytelling techniques that help participants in a virtual environment sense that they are in a real environment and behave accordingly. "The problem we are trying to solve here is not exactly a problem of simulation," stated Danny Hillis at the workshop. "It is a problem of stimulation." The problem is to use the simulation experience to help participants learn to make the right decisions and take the right actions.

The entertainment industry has considerable experience in creating simulated experiences—such as films and games—that engage participants and enable them to suspend their disbelief about the reality of the scenario. These techniques involve methods of storytelling: developing an engaging story and using technical and nontechnical mechanisms to reinforce its emotional aspects. As Danny Hillis observed:

If you want to make somebody frightened, it is not sufficient to show them a frightening picture. You have to spend a lot of time setting them up with the right music, with cues, with camera angles, things like that, so that you are emotionally preparing them, cueing them, getting them ready to be frightened so that when you put that frightening picture up, they are startled.

Understanding such techniques will become increasingly important in applications of modeling and simulation in both DOD and the entertainment industry. Alex Seiden of Industrial Light and Magic observed at the workshop that "any art, particularly film, succeeds when the audience forgets itself and is transported into another world." The technology used to create the simulation (such as special effects for films) must serve the story and be driven by it.

DOD recognizes the importance of storytelling in its large-scale simulations. Judith Dahmann of DMSO noted that DOD prepares participants for simulations by laying out the scenario in terms of the starting conditions: Who is the enemy? What is the situation? What resources are available? However, DOD may be able to learn additional lessons from the entertainment industry regarding the types of sensory cues that can help engender the desired emotional response.

Selective Fidelity

One of the primary issues in both entertainment and defense applications of modeling and simulation technology is achieving the desired level of fidelity. How closely must simulators mimic the behavior of real systems to be useful training devices? Designing systems that provide high levels of fidelity can be prohibitively costly, and, as discussed above, the additional fidelity may not greatly improve the simulated experience. As a result, simulation designers often employ a technique called selective fidelity, concentrating resources on improving the fidelity of those parts of a simulation that will have the greatest effect on a participant's experience and accepting lower levels of fidelity in other parts.

Developers of DOD's Simulator Networking (SIMNET) system, a distributed system for real-time simulation of battle engagements and war games, recognized that they could not fool trainees into actually believing they were in tanks in battle, so they put their resources where they thought they would do the most good.6 They adopted an approach of selective fidelity in which only the details that proved important in shaping behavior were replicated; success was measured by the degree to which trainees' behavior resembled that of actual tank crews. As a result, the inside of the SIMNET simulator has only a minimal number of dials and gauges; emphasis was placed instead on sound and on the low-frequency rumble of the tank, delivered directly to the driver's seat to create the sense of driving over uneven terrain. Though users initially reported dismay at the apparent lack of fidelity, they accepted the simulator and found it highly realistic after interacting with it.7

The entertainment industry has considerable experience in developing systems that use selective fidelity to create believable experiences while minimizing costs. Game developers constantly strive to produce realistic games at prices appropriate for the consumer market. They do so by concentrating resources on the parts of their games most important to the simulation. After realizing that game players spent little time looking at the controls in a flight simulator, for example, Spectrum HoloByte shifted resources to improving the fidelity of the view out the window.8 Experiments have shown that even in higher-fidelity systems the experience can be improved by telling a preimmersion background story and by giving participants concrete goals to perform in virtual environments.9

Selective fidelity is important in both defense and entertainment simulations, though it is applied somewhat differently in each domain, reflecting the importance given to different elements of the simulation. For DOD, selective fidelity is typically used to ensure realistic interactions between, and performance of, simulated entities, sometimes at the expense of visual fidelity. Hence a DOD simulation might have a radar system whose performance degrades in clouds and rain, or an antitank round that inflicts damage consistent with the kind of armor on the target, but it might use relatively primitive images of tanks and airplanes if they are not central to the simulation. The entertainment industry tends to place greater emphasis on visual realism, attempting to make simulated objects look real while relaxing the fidelity of motions and interactions. An entertainment simulation is more likely to use tanks that look real but do not behave exactly like real tanks: their motion may not slow when they travel through mud, and their armor may not be thinner in some places than in others. Such differences limit the ability of defense and entertainment systems to be used in both communities. For example, while many modern video games create seemingly realistic simulations, they do not necessarily model the real world accurately enough to meet defense requirements. Granted, there is a genre of video games that strive to be as realistic as . . .

Spectator Roles

Another area in which DOD and the entertainment industry have overlapping interests is in developing technology for incorporating spectators into models and simulations. As Jacquelyn Ford Morie noted during the workshop, not everyone involved in digital forms of entertainment will want to be a direct participant. Some will prefer to engage as spectators, much as in sports such as baseball, football, and tennis, in which only a small percentage of those involved actually play in a match and much of the industry is built around the fans. Morie believes that "there is a potentially huge market to be developed for providing a substantial and rewarding spectator experience in the digital entertainment realm" (see position paper by Morie in Appendix D). As Morie notes, being a spectator does not necessarily mean being passive; it means being a participant with anonymity in a crowd, which provides a less threatening forum in which people can express themselves.

DOD has already expressed an interest in this type of capability. The role of "stealth vehicles" has become increasingly important in defense simulations. Such vehicles are essentially passive devices that allow observers to navigate in virtual environments, attach to objects in those environments, and view simulated events from the vantage point of a participant. As multiplayer games become more sophisticated and interesting, such a capability may evolve into a spectator facility that allows novices to observe and learn from master practitioners. Popular games may evolve to the level of current professional sports, with teams, stars, schedules, commentators, and spectators.
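
In networking terms, a stealth vehicle or spectator is a client that subscribes to entity-state updates but never publishes its own. The sketch below is a hypothetical illustration of that receive-only pattern; the class and method names are invented and do not correspond to any actual DIS or SIMNET interface.

    # A receive-only "stealth" spectator: it consumes entity-state updates
    # and can ride along with any participant's viewpoint, but it never
    # transmits, so the simulation is unaffected by the size of the audience.
    class StealthObserver:
        def __init__(self):
            self.entities = {}        # entity_id -> latest (position, orientation)
            self.attached_to = None   # entity whose vantage point is borrowed

        def on_entity_state(self, entity_id, position, orientation):
            """Handle one state update received from the simulation network."""
            self.entities[entity_id] = (position, orientation)

        def attach(self, entity_id):
            """Follow a participant; pass None to detach and roam freely."""
            self.attached_to = entity_id

        def viewpoint(self):
            """Camera pose for rendering: the attached entity's pose, if any."""
            return self.entities.get(self.attached_to)

    # Note the absence of any send path: network traffic scales with the
    # number of simulated entities, not with the number of spectators.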

Tools for Creating Simulated Environments

Another area in which DOD and the entertainment industry have common interests is the development of software and hardware tools for creating simulated environments. Such tools are used to create and manipulate databases containing information about virtual environments and the objects in them, allowing different types of objects to be placed in a virtual environment and layers of surface textures, lighting, and shading to be added. For games, the result may be a 3D world that is realistic (such as a flight simulator) or fantastic (like a space adventure), in which an individual interacts directly with the synthetic world and its characters. For film and television, simulated models are often used as primary or secondary elements of scenes that involve real actors, while in other cases the entire story is built around synthetic characters, whether traditional two-dimensional (2D) animations or more advanced 3D animations. For DOD, these worlds are synthetic representations of the battle space (ground, sea, and air) and virtual representations of military systems.

Sophisticated hardware and software tools for efficiently constructing large, complex environments are lacking in both the defense and entertainment industries. At the workshop, Jack Thorpe of SAIC stated that existing toolsets are quirky and primitive and require substantial training to master, often preventing designers from including all of the attributes desired in a simulated environment (see position paper by Thorpe in Appendix D). Improved tools would help reduce the time and cost of creating simulations by automating some of the tasks that are still done by hand. Alex Seiden of Industrial Light and Magic argues that software tools are the single largest area on which attention should be focused. Animators and technical directors for films face daunting challenges as shots become more complicated and as new real-time production techniques are developed to model, animate, and render synthetic 3D environments for film and video.

Entertainment Applications and Interests

For digital film and television, special effects and animation are performed during the preproduction and postproduction processes. Preproduction brings together many different disciplines, from visual design to storyboarding, from modeling to choreography, and even complete storyboard simulation using 2D and 3D animations. Postproduction takes place after all of the content has been created or captured (live or otherwise) and uses 2D and 3D computer graphics techniques for painting, compositing, and editing. Painting enables an editor to clean up frames of film or video by removing undesirable elements (such as a microphone or prop unintentionally left in the scene, or an aircraft that flew across the sky) or by enhancing existing elements. Compositing systems enable artists to seamlessly combine multiple separate elements, such as 3D models, animations, effects, and digitized live-action images, into a single consistent world. Matched lighting and motion of computer graphics imagery (CGI) are critical if these digital effects are to be convincing.

In the games world the needs for content-creation tools are similar. Real-time 3D games demand that real-world imagery, such as photographic texture maps, be combined quickly and easily with 3D models to create the virtual worlds in which pilots fly. In the highly competitive market that computer game companies face, time to market and product quality are major factors (along with quality of game play) in the success of new games. This challenge has eased somewhat in the past few years as companies have begun offering predefined 3D models and textures that serve as raw materials that game and production designers can incorporate into their content.

Despite the enormous cost savings that could be achieved by automating these processes, entertainment companies invest little in the development of modeling and simulation tools; most systems are purchased directly from vendors.47 Film production companies using digital techniques tend to write special-purpose software for each production and then attempt to recycle these tools and applications in the next production. Typically, little time or funding is available for exploring truly innovative technologies: production time lines are short, so long-term investments are rare. The leveraging of commercial modeling and animation tools from both the entertainment world (Alias|Wavefront, Softimage, etc.) and DOD simulation (Multigen, Coryphaeus, Paradigm Simulation) is starting to form a bridge between the entertainment industry and DOD.

DOD Applications and Interests

DOD faces an even greater challenge in its modeling and simulation efforts. Because of the large number of participants in defense simulations, the department requires larger virtual environments than the entertainment industry does, and ones in which users can wander at their own volition (as opposed to traditional filmmaking, in which designers need create only those pieces of geometry and texture that will be seen in the final film). Beyond training simulations, content-creation tools are potentially useful for creating simulations of proposed military systems to support acquisition decisions. DOD could use such models to prototype aircraft, ships, radios, and other military systems. The key would be linking conceptual designs, computer-aided engineering diagrams, analysis models, or training representations into a networked environment that would enable DOD to perform "what if?" analyses of new products. Finding some way to allow these varied types of data to fit into a common data model would greatly facilitate this process.

Like the entertainment industry, DOD lacks affordable production tools to update simulation environments and composite numerous CGI elements. While its compositing techniques are useful and efficient for developing certain types of simulation environments, they cannot handle the complexity demanded by some high-fidelity applications. Some models and simulation terrain must be built and integrated using motion, scale, and other perceptual cues. Here, DOD personnel encounter problems similar to those of entertainment companies that set up, integrate, and alter CGI environments. Human operators could be assisted in these iterative tasks by appropriate interactive software tools.

Having better tools to integrate and create realistic environments could play a major role in the overall simulation design of training systems, the exploration of simulation data, and the updating of simulation terrain. Interactive tools could empower more individuals to participate in this process and would increase strategic military readiness.

Research Challenges

Database Generation and Manipulation

Both the entertainment industry and DOD have a strong interest in developing better tools for the construction, manipulation, and compositing of large databases of information describing the geography, features, and textures of virtual environments. Simulations of aircraft and other vehicles, for example, require hundreds or thousands of terrain databases; filmmakers often need to combine computer-generated images with live-action film to create special effects. Most existing systems for modeling and computer-aided design cannot handle the gigabyte and terabyte data sets needed to construct large virtual worlds. As Internet game companies begin to develop persistent virtual worlds and as architectural, planning, and military organizations develop more complete and accurate models of urban environments, the need for software that can create and manipulate large graphics data sets will become more acute. At DOD the data used to create these databases are typically captured in real time from a satellite and must be integrated into a completed database in less than 72 hours to allow rapid mission planning and rehearsal.

Today's modeling tools can be very powerful, allowing users to create real-time models with texture maps and multiple levels of detail using simple menus and icons. Some have higher-level tools for creating large complex features, such as roadways and bridges, using simple parameters and intelligent modeling aids. At the assembly level, new tools use virtual reality technology in the modeling stage to help assemble large complex environments more quickly and intuitively. Still, modeling tools have not reached the point of massive automation: some functions are automated, but overall responsibility for feature extraction, creation, and simplification remains in the hands of the modeler. More research is needed in this area.48

Bill Jepson of UCLA is exploring systems for rapidly creating and manipulating large geo-specific databases for urban planning. With a multidisciplinary research team, he has designed a system capable of modeling 4,000 square miles of the Los Angeles region. It uses a client-server architecture in which several multiterabyte databases are stored on a multiprocessor server system.

Communications between client and server occur via asynchronous transfer mode at about 6 megabytes per second. Actual 3D data are sent to the client based on the location of the observer, incorporating projections of the observer's motion. Additional research is under way to link this system with data from the Global Positioning System so that the motions of particular vehicles, such as city buses, can be tracked and transmitted to interested parties. Similar systems could be useful to the Secret Service or the Federal Bureau of Investigation for security planning, or to U.S. Special Forces and dismounted infantry for training operations in a specific geographic locale. Other work, at the University of California, Berkeley, is exploring the automatic extraction of 3D data from 2D images.49
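
The observer-driven data delivery used in Jepson's system can be illustrated in a few lines. The sketch below is a hypothetical rendering of the idea, not the UCLA implementation; the tile size, look-ahead interval, and fetch radius are invented parameters.

    # View-dependent streaming: request only the terrain tiles around the
    # observer's projected position, so a modest network link can serve a
    # multiterabyte database. All parameters here are assumed values.
    TILE_MILES = 1.0      # edge length of one database tile
    LOOKAHEAD_S = 5.0     # how far ahead to project observer motion
    RADIUS = 2            # fetch tiles within this ring of the projection

    def tiles_to_request(position, velocity):
        """Tile indices covering the observer's predicted neighborhood."""
        px = position[0] + velocity[0] * LOOKAHEAD_S   # projected x (miles)
        py = position[1] + velocity[1] * LOOKAHEAD_S   # projected y (miles)
        cx, cy = int(px // TILE_MILES), int(py // TILE_MILES)
        return {(cx + dx, cy + dy)
                for dx in range(-RADIUS, RADIUS + 1)
                for dy in range(-RADIUS, RADIUS + 1)}

    # Each update, the client requests only tiles it does not already hold;
    # a bus moving east at ~36 mph pulls in the tiles it is about to need.
    needed = tiles_to_request(position=(12.3, 4.7), velocity=(0.01, 0.0))
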
These methods are likely to play a large role in the rapid development of realistic 3D databases in the future.

Another area of possible interest to both the entertainment industry and DOD is the development of technologies that allow image sequence clips to be stored in a database, permitting users in both communities to rapidly store and retrieve video footage for use in modeling and simulation. A prototype system has been developed by Cinebase, a small company working with Warner Brothers Imaging Technology; additional development is required to make the technology more robust and widely deployable.

Efforts to develop more standardized formats for storing the information contained in 3D simulated environments would also benefit both DOD and the entertainment industry. A standard format could allow behaviors, textures, sounds, and some forms of code to be stored with an object in a persistent database. Such efforts could build on the evolving VRML standard. The goal is to devise a common method for preserving and sharing the information inherent in 3D scenes prior to rendering.50

Compositing

Both DOD and the entertainment industry are interested in software tools that facilitate the process of combining (or compositing) visual images from different sources. Such tools must support hierarchy and building at multiple levels of detail: they must allow a user to shape hills, mountains, lakes, rivers, and roads as well as place small items, such as individual mailboxes, and paint words on individual signs. They must also allow designers to develop simulated environments in pieces that can be seamlessly linked together into a single universe, a need that will become more acute as the scale of distributed simulations grows.
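
At the pixel level, the core operation these tools perform is well defined: the standard "over" operator of Porter and Duff layers a foreground element onto a background according to the foreground's coverage. The sketch below assumes premultiplied-alpha images stored as floating-point arrays; it shows only the basic blend, not any production compositing system.

    import numpy as np

    def over(fg_rgb, fg_a, bg_rgb, bg_a):
        """Porter-Duff 'over': composite a foreground layer onto a
        background. Colors are premultiplied by alpha, values in [0, 1]."""
        out_rgb = fg_rgb + bg_rgb * (1.0 - fg_a)
        out_a = fg_a + bg_a * (1.0 - fg_a)
        return out_rgb, out_a

    # Example: a rendered element (foreground) over a digitized live-action
    # plate (background). Each layer is an H x W x 3 color array plus an
    # H x W x 1 alpha (coverage) array; values here are placeholders.
    h, w = 480, 640
    element_rgb = np.zeros((h, w, 3)); element_a = np.zeros((h, w, 1))
    plate_rgb = np.full((h, w, 3), 0.4); plate_a = np.ones((h, w, 1))
    frame_rgb, frame_a = over(element_rgb, element_a, plate_rgb, plate_a)

Stacking many elements is just repeated application of this operator; the hard problems discussed below lie not in the blend itself but in matching lighting and motion across the layers being combined.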

Existing computer-aided design tools do not make it easy to add environmental features, such as rain, dust, wind, storm clouds, and lightning, to a simulated scene. Many unsolved compositing problems in pre- and postproduction work for filmmaking are directly related to simulation and modeling challenges. For example, a need exists for postproduced lighting models for digital scenes and environments. To light composited live-action scenes convincingly, lighting models must operate on digitized images that were captured under variable lighting conditions. The same problem arises when realistic photographic data are composited into simulation data and the lighting must be adjusted interactively from daylight to night during persistent simulations. What is needed are lighting models that image-process photographic data to provide postproduced lighting adjustments after scenes have been captured. Solutions to these problems do not yet exist, and the research would be applicable to both the entertainment industry and DOD.

Opportunities may exist for DOD and the entertainment industry to share some of the advances each has made in designing systems for creating models and simulations. DOD might be able to use some of the advanced compositing techniques the entertainment industry has developed for integrating live-action video with computer graphics models. The entertainment industry's software techniques for matching motion and seamlessly integrating simulated scenes into a virtual environment might also benefit DOD. However, most entertainment software is extremely proprietary, and issues of intellectual property and methods of information exchange will need to be addressed before extensive collaboration can occur between the two communities. Conversely, some DOD technologies might prove beneficial for entertainment applications. At the workshop, Dell Lunceford of DARPA suggested that some of the technologies developed as part of DOD's Modular Semiautomated Forces (ModSAF) program might be useful in creating some of the line drawings used in the preproduction stages of filmmaking. ModSAF cannot support the detailed graphical animation needed for facial expressions, but it could facilitate the simpler early stages of production in which characters are outlined and a story's flow is tested.

Interactive Tools

Interactive tools that facilitate the creation of simulations and models and that can be used for real data exploration could be valuable to both the entertainment industry and DOD. The computer mouse and keyboard are extremely limited tools for creating CGI scenes, and individuals are often impaired or constrained by these traditional input devices.

A recent project of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign produced an interactive virtual reality interface for controlling the computer graphics camera in 3D simulation space. The project created an alternative virtual reality computer system, the Virtual Director, to enhance human operator control and to capture, edit, and record camera motion in real time through high-bandwidth simulation data for film and video recording. This interactive software was used to create the camera choreography of large astrophysical simulation data sets for special effects in the IMAX movie Cosmic Voyage. The project has proven valuable for film production as well as scientific visualization. Such uses of alternative input devices to explore and document very large data sets are nonexistent in commercial production because of the time required to develop the technology, yet this type of tool is extremely important for solving many problems in the entertainment industry as well as in DOD simulation and modeling.

Conclusion

As this chapter illustrates, the defense modeling and simulation community and the entertainment industry have common interests in a number of underlying technologies, ranging from computer-generated characters to hardware to immersive interfaces. Enabling the two communities to better leverage their comparative strengths and capabilities will require that many obstacles be overcome. Traditionally, the two communities have tended to operate independently of one another, developing their own end systems and supporting technologies. Moreover, each community has developed its own modes of operation and must respond to a different set of incentives. Finding ways to overcome these barriers will present challenges on a par with the research challenges identified in this chapter.

Notes

1. For a more comprehensive review of research requirements for virtual reality, see National Research Council. 1995. Virtual Reality: Scientific and Technological Challenges, Nathaniel I. Durlach and Anne S. Mavor, eds. National Academy Press, Washington, D.C.

2. DOD has several ongoing programs to extend the military's command, control, communications, computing, intelligence, surveillance, and reconnaissance systems to the dismounted combatant. These include the Defense Advanced Research Projects Agency's Small Unit Operations program, Sea Dragon, Force XXI, and Army After Next.

3. Latency is not the only factor that causes simulator sickness, and even completely eliminating latency will not eliminate simulator sickness. See position paper by Eugenia M. Kolasinski in Appendix D.

4. This subsection is derived from a position paper prepared for this project by the Defense Modeling and Simulation Office; see Appendix D.

5. Sheridan, T.B. 1992. Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge, Mass.

6. For a more complete description of the SIMNET program, see Van Atta, Richard, et al. 1991. DARPA Technical Accomplishments, Volume II: An Historical Review of Selected DARPA Projects. Institute for Defense Analyses, Alexandria, Va., Chapter 16; and U.S. Congress, Office of Technology Assessment. 1995. Distributed Interactive Simulation of Combat, OTA-BP-ISS-151. U.S. Government Printing Office, Washington, D.C., September.

7. U.S. Congress, Office of Technology Assessment, Distributed Interactive Simulation of Combat, p. 32, note 6 above.

8. Gilman Louie, Spectrum HoloByte Inc., personal communication, June 19, 1996.

9. Pausch, Randy, et al. 1996. "Disney's Aladdin: First Steps Toward Storytelling in Virtual Reality," ACM SIGGRAPH '96 Conference Proceedings: Computer Graphics. Association for Computing Machinery, New York, August.

10. RTime Inc. introduced an Internet-based game system in April 1997 that supports 100 simultaneous players and spectators. See RTIME News, Vol. 1, February 1, 1997.

11. The National Research Council's Computer Science and Telecommunications Board has another project under way to examine the extent to which DOD may be able to make better use of commercial technologies for wireless untethered communications; a final report is expected in fall 1997. Another project, to examine DOD command, control, communications, computing, and intelligence systems, was initiated in spring 1997.

12. Specifications for implementing multicast protocols over the Internet are outlined by S.E. Deering in "Host Extensions for IP Multicasting," RFC 1112, August 1, 1989, available on-line at http://globecom.net/ietf/rfc1112.html. See also Braudes, R., and S. Zabele, "Requirements for Multicast Protocols," RFC 1458, May 1993.

13. As such, multicast stands in contrast to broadcast, in which one designated source sends information to all members of the receiving community, and to unicast, in which a sender transmits a message to a single recipient.

14. This capability is called routing spaces. It will permit objects to establish publish regions to indicate areas of influence and subscription regions to indicate areas of interest. When publish and subscription regions overlap, the RTI will cause data to flow between the publishers and the subscribers. The goal of this effort, and of the larger Data Distribution Management project of which it is part, is to reduce network communications by sending data only when and where needed. See Defense Modeling and Simulation Office, HLA Data Distribution Management: Design Document Version 0.5, February 10, 1997; available on-line at http://www.dmso.mil/projects/hla/.

15. Internet Engineering Task Force, "Large Scale Multicast Applications (lsma) Charter," available on-line at http://www.ietf.org/html.charters/lsma-charter.html.

16. Much of the material in this section is derived from a position paper prepared for this project by Will Harvey of Sandcastle Inc.; see Appendix D.

17. Deployment of a new algorithm for queue management, called Random Early Detection, may help greatly reduce queuing delays across the Internet.

18. Floyd, S., and V. Jacobson. 1993. "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking 1(4):397-413; Wroclawski, J. 1996. "Specification of the Controlled-Load Network Element Service," available on-line at ftp://ftp.ietf.org/internet-drafts/draft-ietf-intserv-ctrl-load-svc-03.txt.

19. Clark, D. 1996. "Adding Service Discrimination to the Internet," Telecommunications Policy 20(3):169-181.

20. Sandcastle Inc., an Internet-based game company, is one source of research on synchronization techniques.

21. DOD defines modeling and simulation interoperability as the ability of a model or simulation to provide services to and accept services from other models and simulations and to use the services so exchanged to enable them to operate effectively together. See U.S. Department of Defense Directive 5000.59, "DOD Modeling and Simulation (M&S) Management," January 4, 1994; and U.S. Department of Defense, Under Secretary of Defense for Acquisition and Technology, Modeling and Simulation (M&S) Master Plan, DOD 5000.59-P, October 1995.

22. All participants in a simulation do not need an identical representation of the environment. Individual combatants, for example, will differ from fighter pilots in the amount of terrain they can see and the sensor data (radar, infrared, etc.) available to them. The key is ensuring that their views of the environment are consistent with one another (e.g., that all players would agree that a given line of trees obstructs the line of sight between two participants in the simulation).

23. DIS conveys simulation state and event information via approximately 29 PDUs. Four of these PDUs describe interactions between entities such as tanks and personnel carriers; the remainder transmit information on supporting actions, electronic emanations, and simulation control. The entity state PDU communicates a vehicle's current position, orientation, velocity, and appearance. The fire PDU contains data on weapons or ordnance that are fired or dropped. The detonation PDU is sent when a munition detonates or an entity crashes. The collision PDU is sent when two entities physically collide. The structure of each PDU is regimented and changed only after testing and subsequent discussion at the biannual DIS workshops convened by the Institute for Simulation and Training at the University of Central Florida.

24. Macedonia, Michael R. 1995. "A Network Software Architecture for Large-Scale Virtual Environments." Ph.D. dissertation, Naval Postgraduate School, June; available from the Defense Technical Information Center, Fort Belvoir, Va.

25. Defense Modeling and Simulation Office. HLA Management Plan: High-Level Architecture for Modeling and Simulation, Version 1.7, April 1, 1996.

26. The Navy alone has over 1,200 simulation systems that do not currently comply with HLA. A compliance monitoring reporting requirement and waiver process, similar to the Ada waiver process, have been put into place. Each affected service is to fund retrofits of simulation systems from its own budget.

27. Ordering information is available on the DMSO Web site at http://www.dmso.mil.

28. The Computer Science and Telecommunications Board workshop provided an opportunity for representatives from Internet game companies to learn more about HLA. Several agreed to review the specifications to see if they would be applicable to them.

29. Lantham, Roy. 1996. "DIS Workshop in Transition to . . . What?," Real Time Graphics 5(4):4-5.

30. National Research Council. 1995. Virtual Reality: Scientific and Technological Challenges, Nathaniel I. Durlach and Anne S. Mavor, eds. National Academy Press, Washington, D.C.

31. Macedonia, Michael R., et al. 1995. "Exploiting Reality with Multicast Groups," IEEE Computer Graphics & Applications, September, pp. 38-45.

32. Brutzman, Don, Michael Zyda, and Michael Macedonia. 1996. "Cyberspace Backbone (CBone) Design Rationale," paper 96-15-99 in Proceedings of the 15th Workshop on Standards for DIS, Institute for Simulation and Training, Orlando, Fla.; and Brutzman, Don, Michael Zyda, Kent Watsen, and Michael Macedonia. 1997. "Virtual Reality Transfer Protocol (vrtp) Design Rationale," accepted for the Proceedings of the IEEE Sixth International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE '97), held June 18-20, 1997, at the Massachusetts Institute of Technology, Cambridge, Mass.

33. Brutzman et al., 1996, "Cyberspace Backbone (CBone) Design Rationale," and Brutzman et al., 1997, "Virtual Reality Transfer Protocol (vrtp) Design Rationale," note 32 above.

34. Macedonia, "Exploiting Reality with Multicast Groups," note 31 above.

35. A standard 14.4-kilobit-per-second modem can transmit or receive a standard DIS packet in approximately 80 milliseconds, meaning that only about five players can participate in a real-time interactive game if each must send and receive messages (updating positions, velocities, etc.) to and from each other player at each stage in the game and latencies must be kept below 100 milliseconds.

36. From this perspective, the code base is analogous to a television studio's sets, props, and sound stages. The code base needs to be data driven so that new episodes can be created in less than a week instead of a couple of years. Programming will be developed using scripting tools that allow writers and designers to quickly develop new stories. These tools will be important in helping writers and designers not only create new environments but also direct automated units and characters to "perform" new roles in new scenarios.

37. For example, a player may be flying an F-15 along with a wingman when a pair of enemy MiGs engages them in battle. As the player breaks into a turn, he or she may realize that the wingman has disconnected (intentionally or unintentionally) from the game.

38. The Aladdin attraction is something of an anomaly in that Walt Disney Imagineering approached it not only as a theme park attraction but also as scholarship, publishing the results of its research in the open literature. See Pausch, Randy, et al. 1996. "Disney's Aladdin: First Steps Toward Storytelling in Virtual Reality," ACM SIGGRAPH '96 Conference Proceedings: Computer Graphics. Association for Computing Machinery, New York, pp. 193-203.

39. Fryer, Bronwyn. "Hollywood Goes Digital," available on-line at http://zeppo.cnet.com/content/Features/Dlife/index.html.

40. Ditlea, Steve. 1996. "'Virtual Humans' Raise Legal Issues and Primal Fears," New York Times, June 19; available on-line at http://www.nytimes.com/library/cyber/week/0619humanoid.html.

41. Magnenat Thalmann, N., and D. Thalmann. 1995. "Digital Actors for Interactive Television," Proceedings of the IEEE, August.

42. An agent that could meet this requirement would satisfy the "Turing test." Alan Turing, a British mathematician and computer scientist, proposed a simple test to measure the ability of computers to display intelligent behavior. A user carries on an extended computer-based interaction (such as a discussion) with two unidentified respondents—one a human and the other a computer. If the user cannot distinguish between the human and the computer responses, the computer is declared to have passed the Turing test and to display intelligent behavior.

43. Chandrasekaran, Rajiv. 1997. "For Chess World, A Deep Blue Sunday: Computer Crushes Kasparov in Final Game," Washington Post, May 12, p. A1.

44. U.S. Congress, Office of Technology Assessment. 1995. Distributed Interactive Simulation of Combat, OTA-BP-ISS-151. U.S. Government Printing Office, Washington, D.C., September, pp. 123-125.

45. Genetic algorithms are computer programs that evolve over time in a process that mimics biological evolution. They can evolve new computer programs through processes analogous to mutation, cross-fertilization, and natural selection. See Holland, John H. 1992. "Genetic Algorithms," Scientific American, July, pp. 66-72.

46. The National Research Council is conducting another project on the representation of human behaviors in military simulations. See National Research Council. 1997. Representing Human Behavior in Military Simulations—Interim Report, Richard W. Pew and Anne S. Mavor, eds. National Academy Press, Washington, D.C.

47. Paul Lypaczewski of Alias|Wavefront estimates that the market for off-the-shelf modeling and simulation tools is about $500 million per year.

48. See National Research Council, Virtual Reality, note 30 above.

49. Debevec, P.E., C.J. Taylor, and J. Malik. 1996. "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-based Approach," Proceedings of SIGGRAPH '96: Computer Graphics. Association for Computing Machinery, New York, pp. 11-20.

50. Rendering is the process of generating displayable frames from objects and their motion so that a scene can be shown by the computer or image generator.