Page 115

D—
Position Papers

Prior to the Computer Science and Telecommunications Board's October 1996 workshop on modeling and simulation, participants were asked to submit a one- to three-page position paper that responded to three questions:

1. How do you see your particular industry segment evolving over the next decade (i.e., how will markets and products evolve)?

2. What technological advances are necessary to enable the progress outlined in your answer to question 1? What are the primary research challenges?

3. Are you aware of complementary efforts in the entertainment or defense sectors that might be applicable to your interests? If so, please describe them.

This appendix reproduces a number of these position papers. The papers examine technologies of interest to the entertainment industry and the U.S. Department of Defense, as well as some of the barriers to collaboration. Several of the papers are cited in the body of the report; substantial portions of some have also been incorporated there.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





Page 116

Brian Blau—
VRML: Future of the Collaborative 3D Internet

Introduction

VRML (virtual reality modeling language) is the three-dimensional computer graphics interchange file specification that has become the standard for Internet-based simulations. It is being used in many industries, and the momentum of the standard and industry acceptance continues to grow at a fast pace. Most of the major software and hardware corporations are now starting serious efforts to build core VRML technologies directly into business applications, scientific and engineering tools, software development tools, and entertainment applications. One of the most significant developments in the history of VRML was its adoption by Silicon Graphics Inc. (SGI), Netscape, and Microsoft during 1995-1996. This broad level of industry acceptance continues to challenge the VRML community to provide an official international standard so that wide adoption will be possible. Given that creation of VRML came from a unique and open consensus-based process, its future depends on continued innovation in the direction of true distributed simulations as well as efforts to keep the standards process moving forward.

Historical Development of VRML

Over the past two years the development of a standard for distributing 3D computer graphics and simulations over the Internet has taken the quick path from idea to reality. In 1994 a few San Francisco cyberspace artisans (Mark Pesce, Tony Parisi, and Gavin Bell) combined their efforts to start the VRML effort. Their intention was to create a standard that would enable artists and designers to deliver a new kind of content to the browsable Internet. In mid-1995 VRML version 1.0 emerged as the first attempt at this standard. After an open Internet vote, VRML 1.0 was to be based on Silicon Graphics' popular Open Inventor technology. VRML was widely evaluated as unique and progressive but still not usable.
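For readers unfamiliar with the format Blau describes, a minimal VRML 2.0 world file looks like the following (an illustrative fragment, not taken from his paper):

```vrml
#VRML V2.0 utf8
# A single red sphere; a VRML browser renders this
# as a navigable 3D scene.
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }  # RGB values: red
  }
  geometry Sphere { radius 1.0 }
}
```

The point of the format is that this same small text file can be served over the Web and rendered identically by any conforming browser.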
At this point broad industry support for VRML was coalescing in an effort to kick-start a new industry. Complementary efforts were also under way to deliver both audio and video over the Internet. The general feeling was that broad acceptance of distributed multimedia on the Internet would soon be a real possibility and that VRML would emerge as the 3D standard. After completion of the VRML 1.0 standard, the VRML Architecture Group (VAG) was established at SIGGRAPH 1995 and consisted of eight Internet and 3D simulation experts. In early 1996 VAG issued a request for proposals on the second round of VRML development. The call was answered by six industry leaders. Through an open vote SGI emerged as the winner with its Moving Worlds proposal. By this time over 100 companies had publicly endorsed VRML, and many of them were working on core technologies, browsers, authoring tools, and content.

At SIGGRAPH 1996 VAG issued the final VRML 2.0 specification and made a number of other significant announcements. To help maintain VRML as a standard, VAG made several concrete moves. First, it started the process of creating the VRML Consortium, a nonprofit organization devoted to VRML standard development, conformance, and education. Second, VAG announced that the International Standards Organization (ISO) would adopt VRML and the consensus-based standardization process as its starting place for an international 3D metafile format.

Distributed and Multiuser Simulations Using VRML

Based on the current state of technology, it is now clear that distributed 3D simulations are possible for a wide audience. Distributed simulation is the broad term for 3D applications that communicate by standards-based communications protocols. Military training, collaborative design, and multiuser chat are examples of such applications. Widespread adoption of this technology depends on the following key technology factors: platforms, rendering, multimedia, and connectivity.

Today, the most popular platforms for accessing the Internet are desktop machines—namely, Windows 95/NT and the Macintosh PowerPC family. These operating systems run on computing platforms powerful enough to display complex 3D-rendered scenes. The tools are readily available as well, thanks to Microsoft's DirectX media integration APIs and ActiveX Internet controls as well as Netscape's Live3D and LiveConnect developer platforms.
These software tools, combined with powerful desktop processors, make it easy for software developers to create VRML technologies and products. Another key aspect of development is the tight integration of multimedia into these platforms. Hollywood and the video games industry see the desktop PC as the next major platform for delivery of multimedia content. This means VRML technology development will be accessible to developers of all types of integrated Internet-based media.

The final key is development of open-protocol communications standards suited for Internet use. Currently, the military uses distributed interactive simulation (DIS) as the communications protocol for training applications and has been successful to date. The integration of DIS with Internet technology is key but not the entire solution. DIS was developed only for military applications. Its broader acceptance by industry depends on significant changes to its infrastructure, including the simulation model, numerical representation, integration with VRML, and dependence on Department of Defense initiatives.

Another complementary area of interest is multiuser VRML spaces. These applications are the next step in on-line human-to-human communication and are enabled by the Internet and VRML. Several companies have products that let individuals directly interact with others. In these on-line worlds each person views a fully interactive 3D VRML world, including moving graphical avatars that are the virtual representations of their human counterparts. Some of these applications also include real-time voice that is synchronized with movements of the avatar's eyes and mouth. It is very compelling to communicate with someone while seeing only their virtual representation. Several companies and organizations are now starting to collaborate on a standard for VRML-based avatars. These groups are still in the formative stages and are being led by fairly small companies. The first avatar standard will roll out later in 1996.

Future Directions

VRML technology and content development in 1996-1997 will focus on several areas. On the standards front, the VRML Consortium and ISO will continue to broaden acceptance of VRML. The VRML Consortium will have its first official meetings in late 1996. Creating the organization and filling it with technical, creative, process-oriented people will be a goal. The VAG will continue to serve as the focus for standards-based VRML work until the consortium is self-sustaining. Also during 1997, ISO will officially adopt VRML as the only international 3D metafile format for the Internet.
Once the VRML Consortium is operational, the focus of activities will be on continued development of the VRML specification and the creation of working groups. On the software and hardware development fronts many advances will be made. VRML 2.0 browsers will emerge and will integrate directly into the popular HTML-based browsers. Manufacturers of three-dimensional hardware accelerators will add features that directly support basic VRML graphics. Makers of tools such as polygonal modelers and scene creation tools will incorporate VRML read-and-write capabilities. Integration of DIS and other distributed simulation communications protocols will quickly help content authors build multiuser capabilities into their worlds. Finally, content developers will enjoy the flood of new modeling and programming tools.

Given all of these advances, there are still three immediate technical areas that need to be addressed before VRML becomes widely adopted: a common scripting language, an external API, and a binary file format. Currently, these areas are quite controversial, but it is clear within the VRML community that solutions to the problems are within reach.

VRML Resources on the Internet

http://vag.vrml.org—Official home of the VRML spec and the VAG
http://sdsc.vrml.org—Very comprehensive list of VRML resources
http://www.intervista.co—Popular VRML browser
http://www.microsoft.com/ie/ie3/vrml.htm—Popular VRML browser
http://www.sgi.com/cosmo—Popular VRML browser
http://home.netscape.com/eng/live3d—Popular VRML browser
http://www.blacksun.com—Multiuser 3D application
http://www.onlive.com—Multiuser application with real-time voice
http://www.dimensionx.com—Java-based VRML tools
http://www.ktx.com—VRML tools

Page 120

Mark Bolas

Introduction

If the National Aeronautics and Space Administration's VIEW laboratory marks the beginning of the virtual reality (VR) industry, the industry is just about to pass its 10-year mark. There is a rule of thumb stating that it takes about 20 years for a new technology to find its way into the mainstream economy. Applied here, this means another 10 years before VR is in the mainstream economy. This prediction seems completely reasonable, or even pessimistic. Consumers can currently purchase VR headsets with integrated tracking for less than $800. A handful of automotive manufacturers and aerospace contractors use VR on an ongoing basis to solve design and engineering problems. However, early adopters are incorporating the technology into their work and lives. They face all of the frustrations and challenges typically associated with being on the cutting edge.

The next 10 years will see the VR industry evolve in a straightforward and boring fashion—early adopters will have paved the way for easy use by the mainstream. This evolution will require a fundamental shift in the way VR technology is viewed and used. The technology must cease to stand apart; it needs to become an invisible part of a user's lifestyle and work habits. This requires progress on two basic fronts: First, the technology must be integrated into the user's physical environment. Second, it must be integrated into the user's software environment.

Evolution

For mainstream users to benefit from VR technologies, the technologies must become pervasive. They must extend throughout our industries and lives. They must diligently work for their users and quietly become part of their lifestyle. The facsimile machine is an example of a technology that has accomplished this. Walkmen, dishwashers, televisions—all of these have become pervasive by thoroughly changing the way people do things. A person does not talk about using a walkman, or a dishwasher, or a television.
If anything, a person discusses the content or end result as opposed to the actual device: "I heard a good song," "The dishes are clean," "Did you see that stupid show last night?"

NOTE: The industry segment described here is defined as industries that benefit from immersive human-computer interfaces. The term virtual reality is intended to include this definition.

There is little question that three-dimensional (3D) graphics and simulation are on the way to becoming pervasive. In industry the design process is being transformed to demand 3D models and simulations. This Christmas consumers will be choosing between the Sony and Nintendo platforms, with 3D graphics capability being assumed. However, the VR industry must evolve to provide such 3D systems with immersive interfaces that multiply the utility and effect of the 3D graphics. Currently, most 3D graphics are shown on a 2D screen and manipulated via a 2D mouse. These interfaces effectively remove much of the value present in the 3D environments. The VR industry must maintain the utility and comfort present in a user's natural ability to perceive and manipulate 3D environments and objects.

Advances

For VR to become a pervasive tool, it must become integrated into both the user's physical and software environments. Seamless integration with a user's physical environment is not simple because immersive interfaces tend to immerse—that is, they surround and envelop the user. This can easily intrude on a user's physical and mental environment. The VR industry needs to minimize this intrusion to the point where immersive interfaces are as natural to use as a telephone or mouse. It is interesting to note that neither of these examples is inherently natural, but both have been integrated into users' workspaces and lifestyles. To achieve a natural interface, paradigms that transcend the standard goggles-and-gloves paradigm need to be pursued. The fact that people collaborate, multitask, and eat while they work is a down-to-earth aspect that must be considered in the design of immersive tools. Equally challenging is the integration of these new interfaces in the software environment. Application software packages have typically been written for 2D screens and interfaces.
As a result, most immersive interfaces are poor retrofits onto existing packages that were never designed to incorporate them. This lack of integration severely cripples the utility of immersive interfaces. This integration is probably best achieved by starting with a "top down/bottom up" design approach on a number of key applications. For example, the entertainment industry could use an immersive set design and preview system, while the Defense Department would benefit from a simulation-based design and modeling system that fully utilizes a human's ability to think, design, and manipulate 3D space.

Page 122

Peter Bonanni

The U.S. armed forces have created the most advanced training systems in the world. Some segments of the armed forces, however, are facing true training shortfalls for the first time in decades. These training deficiencies are being caused by worldwide deployments. U.S. Air Force active duty and reserve squadrons, for example, have experienced a reduction in training sorties of up to 25 percent. This reduction is a direct result of deployments in support of contingency operations over Iraq and Bosnia. Pilots are most proficient and able to fight when they are first deployed to these areas. As the deployment wears on, with little or no training opportunities, pilot proficiency slips. The same problem is occurring in other combat arms as the trend to use U.S. forces in peacekeeping roles accelerates.

Since conducting realistic training is impossible on most of these missions, simulators provide the only realistic training alternative. Unfortunately, most of the simulators in use today are very expensive, are limited to single-crew training, and are not deployable. Emerging commercial simulation technology, however, may provide a near-term solution to this military training problem. Some fighter pilot skills, for example, cannot be practiced in simulation, regardless of the fidelity. The most important (and perishable) skills, however, can be honed by very-low-cost simulators. The computer game Falcon 4.0 is an example of a commercial product that is shattering the fidelity threshold and providing a model for very-low-cost simulation. There are several key components to Falcon 4.0 that allow this type of breakthrough. Falcon 4.0 features "SIMNET-like" networking protocols that create a large man-in-the-loop environment. These features of Falcon 4.0 provide the basic building blocks for producing a simulator that will be low in cost and deployable and that will provide pilots with team training opportunities.
In the near term this capability will be enhanced with the development of commercial head-mounted displays and voice recognition systems.
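The "SIMNET-like" networking mentioned above amounts to peers periodically broadcasting entity-state updates and dead reckoning between them. A highly simplified sketch of the idea follows; the message layout here is invented for illustration and is not the actual DIS or Falcon 4.0 wire format (the real DIS protocol, standardized as IEEE 1278, is far more elaborate):

```python
import struct

# Hypothetical, simplified entity-state message: entity id,
# position (x, y, z), and velocity (x, y, z), network byte order.
ENTITY_STATE_FORMAT = "!I3d3d"

def pack_entity_state(entity_id, position, velocity):
    """Serialize one entity's state for broadcast to other simulators."""
    return struct.pack(ENTITY_STATE_FORMAT, entity_id, *position, *velocity)

def unpack_entity_state(payload):
    """Deserialize a received entity-state message."""
    fields = struct.unpack(ENTITY_STATE_FORMAT, payload)
    return fields[0], fields[1:4], fields[4:7]

def dead_reckon(position, velocity, dt):
    """Extrapolate an entity's position between updates, as DIS-style
    simulations do to keep network traffic low."""
    return tuple(p + v * dt for p, v in zip(position, velocity))
```

Each simulator broadcasts such packets a few times per second and extrapolates every remote entity with `dead_reckon` in between, which is what makes a large man-in-the-loop environment feasible over ordinary networks.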

Page 123

Defense Modeling and Simulation Office:
DOD Modeling and Simulation Overview and Opportunities for Collaboration Between the Defense and Entertainment Industries

The U.S. Department of Defense (DOD) is building a robust modeling and simulation (M&S) capability to evaluate weapons system requirements and courses of action; to reduce the time line, risk, and cost of complex weapons system development; to conduct training; and to support realistic mission rehearsal. Part One of this paper describes the current and envisioned application of M&S in the training, analysis, and acquisition support functional areas. It also summarizes the plan that is in place to help achieve DOD's M&S vision. Part Two is a list of technology areas that DOD believes have potential for collaborative development with the entertainment industry.

Part One: DOD Modeling and Simulation Overview

Vision and Application

The foundation for the above set of DOD M&S capabilities will be the development of a common technical framework to maximize interoperability among simulations and the reuse of simulation components. The cornerstone of the common technical framework (CTF), the High Level Architecture (HLA), has just been adopted as DOD-wide policy. Together with the other elements of the CTF, data standards, and a common understanding (or conceptual model) of the real world, the HLA will enable DOD to use and combine simulations in as-yet unimagined ways. Establishment of a commercial standard will follow as applications spread to training for natural disaster response, weather and crop forecasting, and a host of other business and social problems. Common services and tools also will be provided to simulation developers to further reduce the cost and time required to build high-fidelity representations of real-world systems and processes. Realistic simulations, interacting with actual war-fighting systems, will enable combatants to rehearse missions and "train as we fight."
Virtual prototypes developed in a collaborative design environment using the new integrated product and process development concept will be evaluated and perfected with the help of real war fighters before physical realizations are ever constructed. DOD will enforce recently approved policies and procedures for the verification, validation, and accreditation of models and simulations to ensure accuracy, thereby enhancing the credibility of simulation results.

The advanced M&S capability envisioned by DOD will be a rapidly configured mix of computer simulations, actual war-fighting systems, and weapons systems simulators, geographically distributed and networked and involving tens of thousands of entities to support training, analysis, and acquisition. Not only is there a desire to quickly scale the size and mix of simulations, but DOD also is pursuing the capability whereby both groups and individuals can interact equally well with a synthetic environment. The major challenge in providing scalability, as well as the group and individual experience, is achieving consistency and coherence of both time and space. Other areas of ongoing research in DOD that show promising results are the accurate representation of human behavior, systems, and the natural environment (air, space, land, sea, weather, and battle effects).

DOD's efforts are focused on just-in-time generation of integrated and consistent environmental data to support realistic mission rehearsals anywhere in the world, including inaccessible or operationally dangerous locations. Investments in the rapid extraction of land and water surfaces, features existing on those surfaces, and features derived from ocean, air, and space gridded fields have begun to yield results. The goal is to develop a capability to generate feature-integrated surfaces that require minimal editing and model-based software for feature extraction. Achieving this will, for example, ensure that weather fronts that bring rain or snow change the characteristics of the ground so that transit rate is affected and the associated wind patterns move trees, create waves, and alter dispersal patterns of smoke and dust.
The resulting realism will add significantly to training, analysis, and acquisition. These effects, when coupled with dial-up capability to create custom correlated conditions, can provide year-round training.

Training

Warriors of every rank will use M&S to challenge their skills at the tactical, operational, or strategic level through the use of realistic synthetic environments for a full range of missions, including peacekeeping and providing humanitarian aid. Huge exercises, combining forces from all services in carefully planned combined operations, will engage in realistic training without risking injury, environmental damage, or costly equipment damage. Simulation will enable leaders to train at scales not possible in any arena short of full-scale combat operations, using weapons that would be unsafe on conventional live ranges. Simulation will be used to evaluate the readiness of our armed forces as well. The active duty and reserve components of all forces will be able to operate together in synthetic environments without costly and time-consuming travel to live training grounds.

In computer-based training, both the friendly and opposition forces, or computer-generated forces (CGFs), are highly aggregated into large command echelons and carry out the orders resulting from staff planning and decision making. CGFs fall into two categories: (1) semiautomated forces (SAFs), which require some direct human involvement to make tactical decisions and control the activities of the aggregated force, and (2) automated forces, which are associated with autonomous agent (AA) technology. AAs are in early development phases and will find extensive applications in M&S as the technology matures.

There is now a diverse and active interest throughout the DOD M&S community, academia, and the software industry in the development of CGFs and AAs. The Defense Advanced Research Projects Agency is sponsoring the development of Modular Semi-Automated Forces for the Synthetic Theater of War program, which includes both intelligent forces and command forces. This effort also involves development of the command and control simulation interface language, designed for communications between and among simulated command entities, small units, and virtual platforms. The services, more specifically the Army's Close Combat Tactical Trainer program, are now developing opposing forces and blue forces to be completed in 1997. The British Ministry of Defence also is developing similar capabilities using command agent technology in a program called Command Agent Support for Unit Movement Facility.
Academic and industrial interest in this technology has led to the First International Conference on Autonomous Agents, which will take place in Marina del Rey, California, on February 5-8, 1997.

Analysis

M&S will provide DOD with a powerful set of tools to systematically analyze alternative force structures. Analysts and planners will design virtual joint forces, fight an imaginary foe, reconfigure the mix of forces, and fight the battle numerous times in order to learn how best to shape future task forces. Not only will simulation shape future force structure, but it will also be used to evaluate and optimize the course of action in response to events that occur worldwide. M&S representations will enable planners to design the most effective logistics pipelines to supply the warriors of the future, whether they are facing conventional combat missions or operations other than war.
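The semiautomated-force concept described above can be made concrete with a toy sketch. This is entirely illustrative, not the design of any real CGF system such as ModSAF: a unit that moves autonomously along ordered waypoints but halts and waits for a human tactical decision on contact with a threat.

```python
class SemiAutomatedForce:
    """Toy SAF unit: automated movement, human-in-the-loop tactics."""

    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.orders = []        # waypoint queue from the human controller
        self.engaged = False

    def receive_order(self, waypoint):
        """Human controller issues a movement order."""
        self.orders.append(waypoint)

    def step(self, threat_detected=False):
        """Advance one simulation tick.

        Movement along ordered waypoints is automated, but contact with
        a threat halts the unit pending a human tactical decision."""
        if threat_detected:
            self.engaged = True
            return "awaiting human decision"
        if self.orders:
            self.position = self.orders.pop(0)
            return f"moved to {self.position}"
        return "holding position"
```

A fully automated force (the AA category) would replace the "awaiting human decision" branch with its own decision logic, which is exactly the step that requires the maturing autonomous agent technology the paper mentions.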

Page 171

certainly the Web will become the preeminent forum for the exchange of commercial and scientific information; its significance will exceed that of the cellular phone, the automated teller machine, the fax machine, and the Home Shopping Network combined. This is not a trivial development. Whether storytelling itself will be fundamentally changed depends on a paradigm shift that I would contend is much larger than for other emerging media. To fully evaluate the likelihood and meaning of such a shift requires a careful distinction between what we think of now as a "story" and what we consider a "game" or "environment." A full appraisal of the differences between the cognitive processes involved is beyond the scope of this paper and is an excellent subject for further research.

Page 172

Steven Seidensticker—
Distributed Simulation: A View from the Future

The battle date is August 17, 1943. I am the ball turret gunner of Luscious Lady, a brand new B-17F of the 427th squadron, 303rd Bombardment Group, of the Eighth Air Force. Our takeoff from Molesworth was without incident, but as soon as we were off the ground the pilot asked me to check the wheels. He had an indication that the left main gear had not retracted fully. I hopped into the ball and spun it until I had a good view of the wheel. It looked OK. We chalked it up to a bad indicator in the cockpit. Although the ball with its twin 50s is primarily intended to protect a B-17 from enemy fighters approaching from below, the view from beneath the aircraft comes in handy for other chores. We climb out and begin a long lazy circle. I keep tabs on and report other squadron aircraft as they join our formation.

We are on our second mission and our first over Germany. Our first mission was to bomb a Luftwaffe airfield near Paris. The target was partly obscured by weather. Opposition was light. A few Me-109s came up to meet us. They were not particularly aggressive or well coordinated. Nevertheless, we lost one of our squadron. I saw Old Ironsides get most of her rudder shot off. The pilot was obviously losing control and chose to abandon his ship. I saw 10 good chutes. The debriefing team called the mission a "milk run." The missions would become much tougher as we gained more experience. We were happy to get this far.

My pilot and copilot are in Milwaukee. The navigator/bombardier is in Montreal. Other crew members are in Seattle, San Jose, Denver, and Green Bay. We cannot see or touch each other, but we communicate via what appears to be a B-17's standard intercom. In fact, we are part of a wide-area high-speed data network that connects all crew stations of all aircraft, both friendly and hostile. I don't know the total number of nodes on this network, but it must be in the thousands.
The number of spectators who can tap into the net is in the millions. In addition to our voices, this network carries all the data that our individual crew station simulators need to show other aircraft, the terrain over which we fly, the weather, and other elements of our environment. To participate in these missions each of us simply dials into the network at the time scheduled for the mission, gets the standard crew briefing on our screens, and waits for our turn to take off. The pilots, bombardiers, and navigators get a detailed briefing on the target and expected weather. The rest of the crew gets briefed on expected opposition. The briefings are, of course, the same as (or as close as possible to) the original briefings given to the original crews. As in the original briefings, we can ask questions and get answers.

Not all the crew stations on Luscious Lady are manned by humans. The waist gunners and the radio operator are computer-generated entities. They do their jobs reasonably well. They even respond to us when we talk to them over the intercom. However, if the conversation strays from simple orders or reports they quickly become confused and start spouting gibberish. Some of the other friendly aircraft on the mission and some of the opposing Luftwaffe fighters have no human crews at all. It's getting harder to tell who is human and who is computer-generated, because the programmers keep tweaking their behavior algorithms. But my personal feeling is that they will never get to the point where these simulations are totally indistinguishable from real people. I hope they don't.

Over the Channel the pilot gives us the order to test our guns. This is a ritual that ensures that the guns are working and marks the real beginning of the mission for us gunners. From here we are in harm's way. I cock both guns, point to a clear area, and let loose with a short burst. The tracers arc away gracefully. I have managed not to hit anyone else in the formation. To do so is considered very bad form. It also requires the hapless shooter to buy dinner for the shootee's crew at our next annual convention. Of course, the computers that run this whole operation keep track of everything, so there is no arguing or hiding.

The target today is the Me-109 plant in Regensburg. We know that the Luftwaffe was out in force that day. The Eighth Air Force lost 24 B-17s out of a force of 147. Shortly after we cross the French coast the nose gunner shouts "four 109s at 12 o'clock low." The control yoke feels comfortable in my hands as I spin the turret forward. They are coming at our formation four abreast from dead ahead. The winking lights on the leading edge of their wings show that they are firing. I mash the right pedal hard to tell the lead computing gun sight to use maximum range.
The left pedal goes to the third notch to input the wingspan of an Me-109. I line the sight's pipper on the number two plane and fire short bursts, trying to adjust the range as they close. My shots appear low. Just about everyone in our formation is firing. A puff of smoke bursts from the number three fighter. It continues to smoke as their formation passes right through ours.

This line-abreast head-on attack was developed by the Luftwaffe in early 1943. It took a lot of courage and discipline on the part of the German pilots, but it was very effective. The idea was not only to get the best shots possible but also to intimidate the bomber pilots and break up the formation. It was probably the greatest game of chicken ever, and it frequently ended in collision.

The right waist gunner reports another formation at the four o'clock level. But they are out of our range and overtaking us on a parallel course, no doubt moving up for another head-on pass through the bomber stream. I can see their yellow cowlings and

know that they belong to JG 26, the "Abbeville Kids," one of the best Luftwaffe fighter wings. The attacks continue sporadically until we are about 30 miles from the target. At that point we start seeing the dreaded flak. The small black clouds bloom innocently in the distance, but we know that as the ground gunners adjust the aim of their 88s, the bursts will be right around us. There is little evasive action that a formation of B-17s can take. We are near the IP (initial point) that the pilot must fly over if we are to get our bombs anywhere near the target. At that point the bombardier takes over and actually flies the plane to the bomb release point, using autopilot controls on the Norden bomb sight, probably one of the most famous but overrated technical developments of World War II.

The flak rounds get closer. The concussion from one of them is louder than the fifties going off next to my ears. The pilot reports that the number four engine is starting to vibrate and that the manifold pressure is dropping. Bad news. If it fails we will have to drop out of the formation. Like the weak separated from the herd, we will be on our own. We may have to fight packs of fighters as we try for the coast and the protection of friendly Spitfires. Most who have been through this say that it can be the most exciting part of an afternoon of simulation, but the B-17 seldom survives. Those that do get an award at the next convention, and, of course, their battle with the fighters is replayed on the large screen.

We finally reach the target, the bombardier hits the pickle switch, and I watch the bombs fall away. I lose sight of them after a few seconds, but shortly thereafter see a string of explosions on the ground. The bombs land in a rail yard just east of the target complex. But that's closer than the original crew came in 1943. The flight back was challenging. For two hours we endured more flak and almost constant fighter harassment.
Our pilot managed to coax enough power out of the number four engine to maintain our position in the formation. The rest of the formation was not so lucky. Stric Nine took an 88-mm round in the right wing root and the whole wing came off. There were no chutes. Wallaroo lost an engine and had to drop back, but we were close to the coast and a flight of P-47s escorted her back. Once we got over the Channel I turned over my role to an automatic ball turret simulation and had a quick dinner in the kitchen with my wife. I doubt that the rest of my crew even noticed I was gone. I rejoined the simulation for the debriefing. The colonel told us that we had done reasonably well for a second-mission crew.

My ball turret is a medium-priced model from RealSim Inc., one of the rising companies in this field. It provides a lot of fidelity for the price and has a lot of update options. I'm very happy with it. The ball spins

and rotates vertically much the way the original did and takes up less than half of my garage. The visual scenes are presented on panels built right into the ball. Sound and vibration are provided by some large but ordinary speakers. RealSim sells the basic turret dirt cheap but knows how sim-heads get hooked on fidelity, and so it offers a large range of add-ons that can become really expensive. Some of my colleagues have mounted their units on electrically driven motion platforms. I don't know if that is worth the extra cost. Maybe next year. Many other simulated crew stations are built around virtual reality goggles. Those are a lot less expensive but work quite well. One enthusiastic crew has built a whole B-17 fuselage in a warehouse.

As in most simulations, visual scenes provide the dominant cues. The simulation industry long ago reached its holy grail of creating visual images that are indistinguishable from the real thing. The processing power needed to create them is so cheap that image generators are no longer a cost factor in most simulators. Databases that represent the terrain of any portion of the earth are readily available at any resolution desired. Specialty "period" databases (Dunkirk or Waterloo, for instance) for groundpounders are becoming available but are very expensive.

The key factor that made this kind of group simulation possible was the development of the DIS (distributed interactive simulation) standards about 25 years ago. Once these standards were in place, the designers and builders of simulator components no longer had to spend any more time thinking about linking them together than the designer of a railroad car spends worrying about how to couple it to a train. The DIS standards allowed the simulation industry to concentrate on functionality, performance, and cost reduction.

My wife used to ask me why I spend so much time and money on this. There are a number of reasons.
I, like most middle-aged guys, have often fantasized about going into battle to test my wits and skill against a comparably equipped enemy. In this fantasy I support my comrades and in turn depend on their support. I yearn to experience the heat of battle, victory over my adversary, or a narrow escape from the reach of his weapons. However, I have no desire to shed any of my blood.

I also love history, great battles in particular. I know of no greater battle than that between the U.S. Eighth Air Force and the German Luftwaffe in 1943 and 1944. The leaders of the American forces felt that they could win the war with heavy bombing of German military and industrial targets. To be accurate, this had to be done in daylight. Escort fighters of the day did not have sufficient range to cover the bombers, so the bombers had to depend on their own defensive weapons.

Participation in these re-created battles is available at a number of levels. I started as a spectator. The magic carpet mode of my computer

let me observe operations from any point in space. It also let me attach myself to any aircraft in the battle and listen to the radio and intercom traffic for that aircraft. Running commentary is available from experts. Previews and schedules of upcoming battles are carried by the major sports pages. Reports of completed battles also are carried. These tend to dwell on the personalities involved and the shoot-em-up aspects. How close the reenactment came to the original battle seems to be getting lost.

After watching several of the major raids, I was hooked and wanted to play an active role. My first desire was to be a Luftwaffe pilot, but the requirement for fluency in German eliminated that. Rumors are that an English-speaking Luftwaffe wing is forming. My second choice was to sit in the cockpit of a B-17. But, like the original aircrews, I needed training. The training course for all pilot positions is long and demanding. I opted for the less ambitious role of gunner. Fortunately, the simulator technology that I own trains me more efficiently and quickly than did similar training programs in 1943. After a few intense weekends, I passed the qualification tests and was assigned to my present crew. We are not the most proficient crew on today's raid, but neither were the new crews in 1943.

As I become more serious in this avocation, I wonder where it is going. Some social commentators are starting to decry the "glorification of war." Others counter with statements about "harmless outlets of male aggression," despite the fact that at last year's convention the Best B-17 Crew Award went to an all-female crew. Some critics are worried that the super-realistic simulation available today is going to replace drugs as the national addiction. Who knows!

The raid on the ball-bearing factories in Schweinfurt is scheduled for next week. It was the bloodiest for the Eighth Air Force. I think my crew and I are good enough and lucky enough to survive.
I can hardly wait to find out.

Jack Thorpe—
Research Needs for Synthetic Environments

Purpose

This paper introduces one approach for thinking about the technical challenges of constructing synthetic environments and some of the related research issues. The paper is designed to stimulate discussion, not to be a comprehensive treatise on the topic.

Discussion

Simulation, virtual reality, gaming, and film share the common objective of creating a believable artificial world for participants. In this context, believability is less about the specific content of the environment and more about the perception that there exists a world that participants can port themselves into and be active in—that is, exert behavior of some sort. In film, this is vicarious. In simulation, virtual reality, and gaming it tends to be active, even allowing participants to choose the form for porting into the environment: as an occupant of a vehicle moving through the environment, as a puppet (proxy) of himself or herself controlled from an observation station, or as a fully immersed human. The iconic representation, or avatar, can assume whatever form is appropriate for the environment.

When the participant is an audience member in a single venue and is not required to interact overtly with audience members in the same venue or in other connected venues, the issues of large-scale interactivity and distributed locations are minimal. On the other hand, when tens or hundreds of remotely located participants are ported into the same world and begin to interact freely (and unpredictably), as demonstrated in recent advances in distributed interactive simulation, not only are the environments more interesting but the technical challenges are also more difficult. It is likely that these will also be the next-generation commercial application for this technology, and so addressing the technical issues is timely.
In designing and building these more complex worlds, the following major tasks have proved useful as classifications of the work to be done and the tools required to perform it, which in turn point to the research and development needed to construct those tools. For each task a few of the research issues are identified, though this is far from a comprehensive treatment:

• Efficient fabrication of the synthetic environment;
• Design and manufacture of affordable porting devices that allow humans to enter and/or interface with these environments;
• Design and management of a worldwide simulation Internet to connect these porting devices in real time;
• Development of computational proxies (algorithms) that accurately mimic the behavior of humans unable to be present;
• Staffing, organization, and management of realistic, validated sentient opponents (or other agents), network based, for augmenting the world; and
• Development of innovative applications and methodologies for exploiting this unique capability.

Efficient Fabrication of the Synthetic Environment

Artificial worlds are usually three-dimensional spaces whose features are sensed by the participants in multiple modes, almost always visual but possibly auditory, tactile, whole-body motion, infrared, radar, or via a full range of notional sensor or information cues. For each of these modes of interaction, the attributes can be specified in a prebuilt database ahead of time, or calculated in real time, or both. The challenge is to construct interesting three-dimensional environments efficiently. Cost rises as a function of the size of the space (in some military simulations it can be thousands of square miles of topography), resolution, detail (precision cues needed for interaction), dynamic features (objects that can interact with participants, like doors that can open or buildings that can be razed), and several other factors. As a general observation, the tools needed to efficiently construct large complex environments are lacking, a particularly serious shortfall when fine-tuning environments for the specific goals of a simulation or game. Toolsets are quirky and primitive, require substantial training to master, and often prohibit the environment architect from including all of the attributes desired.
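The cost scaling described above can be made concrete with a back-of-envelope storage estimate for raw terrain data. The area, post spacings, and 2-byte height samples in the sketch below are illustrative assumptions, not figures from the paper:

```python
# Illustrative estimate of raw elevation-grid storage for a terrain database.
# All numbers are assumptions for illustration only.

SQ_METERS_PER_SQ_MILE = 2.59e6  # 1 sq mile is roughly 2.59 million m^2

def elevation_grid_bytes(area_sq_miles, post_spacing_m, bytes_per_post=2):
    """Storage for a regular grid of height posts (2 bytes per post)."""
    sq_meters = area_sq_miles * SQ_METERS_PER_SQ_MILE
    posts = sq_meters / (post_spacing_m ** 2)
    return posts * bytes_per_post

# Halving the post spacing quadruples storage: cost grows with the square
# of resolution, before adding feature detail and dynamic objects.
for spacing in (100, 10, 1):  # meters between height posts
    mb = elevation_grid_bytes(10_000, spacing) / 1e6
    print(f"10,000 sq mi at {spacing:>3} m post spacing: {mb:,.0f} MB")
```

Even this crude model shows why "thousands of square miles at any resolution desired" was a hard promise: each tenfold increase in resolution multiplies storage a hundredfold, and elevation is only one layer of a usable database.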
This is a serious problem, one that seems to get relatively little attention. It is an area that needs continual research and development focus.

Design and Manufacture of Affordable Porting Devices That Allow Humans to Enter and/or Interface with These Environments

The manner in which the human enters the synthetic environment continues to undergo rapid change. Flight simulators are a good example. Twenty years ago a sophisticated flight simulator cost $20 million to

$40 million. Ten years ago technology allowed costs to drop by a factor of 100. Today there has been another one or two orders of magnitude decrease. Further, each new generation is more capable than its more costly predecessor. This drop in cost, with an increase in the richness of the participant's ability to interact with the environment and with other people and agents similarly ported there, is especially important as large-scale simulations are constructed—that is, those that might have 50 or more participants (some military simulations have thousands of participants). The cost per participant (cost per seat) can be a limiting factor no matter how rich the interface.

The research issues include the design methodology that leads to good functional specifications for the simulation or game (the work on selective fidelity by Bob Jacobs at Illusion Inc. is relevant), the design and fabrication approaches for full-enclosure simulators (vehicles) and caves (individuals), the porting facade at the desktop workstation (partly manifested by the graphical user interface), and other means of entering the environment, such as while mobile via a wireless personal digital assistant.

Design and Management of a Worldwide Simulation Internet to Connect These Porting Devices in Real Time

Small-scale as well as large-scale distributed interactive environments have baseline requirements for latency, which are compounded when a requirement for worldwide entry into environments is added. Latency is influenced by the type of interaction a participant is involved with in the specific synthetic environment. The requirement is that the perception of "real timeness" not be violated, that is, that participants do not perceive a rift in the time domain (a stutter, momentary freeze, or unnatural delay in consequence of some action that should be a seamless interaction).
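One classic technique for protecting the perception of real timeness over a laggy network, standard in DIS-era systems though not detailed in this paper, is dead reckoning: every host extrapolates remote entities from their last reported state, and an entity broadcasts a fresh update only when its true state drifts past an agreed threshold from that extrapolation. The sketch below is a minimal constant-velocity illustration with invented names and thresholds, not an implementation of the actual DIS protocol:

```python
# Minimal dead-reckoning sketch (constant-velocity extrapolation).
# Class name, fields, and threshold are illustrative assumptions.
import math

class DeadReckonedEntity:
    def __init__(self, pos, vel, threshold=1.0):
        self.reported_pos = list(pos)  # last state broadcast to the network
        self.reported_vel = list(vel)
        self.report_time = 0.0
        self.threshold = threshold     # max tolerated drift, in meters

    def extrapolate(self, t):
        """Position remote hosts assume at time t (constant velocity)."""
        dt = t - self.report_time
        return [p + v * dt for p, v in zip(self.reported_pos, self.reported_vel)]

    def maybe_send_update(self, t, true_pos, true_vel):
        """Broadcast new state only if drift exceeds the threshold."""
        drift = math.dist(self.extrapolate(t), true_pos)
        if drift > self.threshold:
            self.reported_pos, self.reported_vel = list(true_pos), list(true_vel)
            self.report_time = t
            return True   # an entity-state message would go out here
        return False      # remote extrapolations are still close enough

# An entity flying straight needs no updates; one that turns triggers them.
e = DeadReckonedEntity(pos=[0.0, 0.0], vel=[100.0, 0.0])
print(e.maybe_send_update(1.0, [100.0, 0.0], [100.0, 0.0]))  # straight: False
print(e.maybe_send_update(2.0, [195.0, 20.0], [90.0, 20.0])) # turned: True
```

The design trade is perceptual, exactly as the paragraph above argues: a looser threshold cuts message traffic but lets remote views drift further from truth before snapping back.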
Because this is a perceptual issue, it is dependent on the nature of the interaction and the participant's expectations. It becomes a technology issue as the number of independently behaving participants grows, the number of remote sites increases, and the diversity of the types of interactions coming from these sites and participants grows. It has been demonstrated that unfiltered broadcasting of interaction messages ("I am here doing this to you") quickly saturates the ability of every participant to sort through all the incoming messages, the majority of which are irrelevant to a specific participant. The functionality needed in this type of large interactive network is akin to dynamically reconfigurable multicasting, as yet unavailable as a network service.

It could turn out that as the Internet expands it will provide the dedicated protected speed and addressing for these types of interactions, but this is not the case to date, and dedicated networks have had to be installed to support large exercises. Further, it is conceivable that the appetite of the simulation or game designer for more complex and interactive environments will outpace the near-term flexibility and capacity of network providers. Networks are going to have to be smarter, a continuing research issue.

Development of Computational Proxies (Algorithms) That Accurately Mimic the Behavior of Humans Unable to Be Present

Late 1980s experimentation with distributed interactive simulations brought constant pressure to grow the environments in numbers of participants, but there were never enough porting devices, or people to man them, to satisfy this growth. Because these environments began as behaviorally rich human-on-human, force-on-force experiences, players demanded that any additional agents brought in via computer algorithm have all the characteristic behaviors of intelligent beings, that is, that they pass the Turing test and be indistinguishable from real humans—a tall order. The result was a series of developments of semiautomated and fully automated forces capable of behaving as humans and interacting alongside or against other humans ported into the simulation. These developments have met with mixed success. In some cases computer algorithms have been constructed that are excellent mimics of actual individuals and teams, particularly in vehicles, but in other cases the problem is more difficult, especially in mimicking cognition, as in decision making. Nonetheless, commercial as well as defense applications of large-scale interactive environments will require large-scale synthetic forces behaving correctly. Given that understanding, predicting, and "generating" human behavior transcends simulation and gaming, this will continue to be a major research area.
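The mixed success described above is visible in how early semiautomated forces were typically built: hand-scripted state machines that mimic vehicle-level behavior convincingly but fall apart outside their scripted situations. The toy sketch below, with invented states and transition rules rather than any fielded system's logic, illustrates both the approach and its brittleness:

```python
# Toy semiautomated-force agent: a hand-scripted finite-state machine.
# States, events, and transition rules are invented for illustration.

RULES = {
    # (current_state, observed_event) -> next_state
    ("patrol",   "contact"):      "attack",
    ("attack",   "low_fuel"):     "withdraw",
    ("attack",   "lost_contact"): "patrol",
    ("withdraw", "safe"):         "patrol",
}

class SAFAgent:
    def __init__(self):
        self.state = "patrol"

    def observe(self, event):
        # Look up a scripted response; any event the designer did not
        # anticipate leaves the state unchanged. That unhandled-event
        # behavior is exactly the brittleness that made such agents easy
        # to confuse outside their scripted situations.
        self.state = RULES.get((self.state, event), self.state)
        return self.state

agent = SAFAgent()
print(agent.observe("contact"))   # scripted: attack
print(agent.observe("low_fuel"))  # scripted: withdraw
print(agent.observe("taunted"))   # unscripted event: state unchanged
```

Scripted rules like these can look convincingly human inside a vehicle engagement, while the cognitive mimicry the paper flags as hard (open-ended decision making) lies entirely outside the rule table.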
Staffing, Organization, and Management of Realistic, Validated Sentient Opponents (or Other Agents), Network Based, for Augmenting the World

Where environments require teams of people acting in concert to augment the synthetic environment for participants, for example, teams of well-trained and commanded competitors, the opportunity presents itself for the establishment of network-based teams. These could be widely dispersed themselves, even though they would be perceived as being

ported into the synthetic environment at a single location. The challenge of establishing these teams is less technical and more organizational, typical of military operations, except where the teams are required to faithfully portray forces of different backgrounds, languages, and value systems. Technology can assist with real-time language generation and translation. Behaving as someone from a different culture is more difficult.

Development of Innovative Applications and Methodologies for Exploiting This Unique Capability

The capabilities created through the design and instantiation of a synthetic environment can be unprecedented, making conventional applications and methodologies obsolete. This task recognizes that research is needed on how to characterize these new capabilities and systematically exploit them.