10 Virtual Reality Comes of Age

Virtual reality (VR) is a highly multidisciplinary field of computing that emerged from research on three-dimensional interactive graphics and vehicle simulation in the late 1960s and early 1970s.1 For much of its early development, VR often seemed more like science fiction than science, but it is now transforming fields such as military training, entertainment, and medicine. Applications range from navigation systems that enable pilots and air traffic controllers to operate in dense fog2 to fully digital design environments for creating new car models3 (see Box 10.1).

This chapter focuses on research and development (R&D) in computer graphics and related technologies that contributed to the emergence of VR as a practical technology. In particular, it examines the diversity of funding agencies, missions, and environments, as well as the strong interactions between public and private research and personnel, that have promoted advances in the field. The analysis is not intended to be comprehensive but rather concentrates on selected topics that illuminate the R&D process. It highlights medical and entertainment applications of VR because they demonstrate interesting aspects of the innovation process. The emphasis on head-mounted displays is not meant to downplay the significance of other VR technologies that are not addressed, such as the large projection environments at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.4 The research on head-mounted displays is but one illustration of the many ways in which federally sponsored research programs have influenced the VR field.
BOX 10.1 What Is Virtual Reality?

Virtual reality (VR) refers to a set of techniques for creating synthetic, computer-generated environments in which human operators can become immersed. In VR systems, human operators are connected to computers that can simulate a wide variety of worlds, both real and imaginary, and can interact with those worlds through a variety of sensory channels and manipulators (National Research Council, 1995, pp. 247-303). Simple VR systems include home video games that produce three-dimensional (3D) graphical displays and stereo sound and are controlled by an operator using a joystick or computer keyboard. More sophisticated systems—such as those used for pilot training and immersive entertainment experiences—can include head-mounted displays or large projection screens for displaying images, 3D sound, and treadmills that allow operators to walk through the virtual environment.

Such systems are increasingly being used in a variety of applications, from telecommunications and information visualization to health care, education and training, product design, manufacturing, marketing, and entertainment. Among other things, they enable operators to explore foreign cities from the comfort of their own homes, train for hazardous missions, develop new surgical procedures, and test new product designs.

VR is the outcome of a complex alignment of research fields that include computer graphics, image processing, computer vision, computer-aided design, geometric modeling, user-interface design, and physiological psychology. It also incorporates robotics; haptics and force feedback; computer architectures and systems development; entire new generations of processors, graphics boards, and accelerators; and a host of software applications converted to firmware in computers for rendering data visually. Finally, VR also involves work on high-speed data transmission and networks.
This case history demonstrates that federal support has been the single most important source of sustained funding for innovative research in both computer graphics and VR. Beginning in the 1960s with its investments in computer modeling, flight simulators, and visualization techniques, and continuing through current developments in virtual worlds, the federal government has made significant investments in military, civilian, and university research that laid the groundwork for one of today's most dynamic technologies. The commercial payoffs have included numerous companies formed around federally funded research in graphics and VR.

The first section of the chapter briefly outlines the origins of VR. The next seven sections, which are organized in roughly chronological order, discuss early development of the academic talent pool, the private sector's cautious initial approach, the role of synergy in launching visionary VR
research, a breakthrough that provided initial building blocks for a commercial VR infrastructure, the mixture of research projects that led to biomedical applications, the role of entertainment applications in expanding use of VR, and the growing role of military R&D in producing commercial spin-offs. The last section of the chapter summarizes the lessons learned from history.

Launching the Graphics and Virtual Reality Revolution

The earliest use of a computer-generated graphical display on a cathode ray tube (CRT) was in Project Whirlwind, a project sponsored by the U.S. Navy to develop a general-purpose flight simulator (see Chapter 4). By the late 1940s, Robert Everett at the Massachusetts Institute of Technology (MIT) had developed a light gun that allowed an operator to select items displayed on the CRT. Researchers on SAGE, the successor to Whirlwind, made extensive use of interactive graphics consoles whose displays were equipped with light guns capable of sending signals coordinated with the display. By 1955, U.S. Air Force personnel working on SAGE were using light guns for data manipulation. These and other early projects convinced a number of researchers that the capability to interact with a computer in real time through a graphical representation was a powerful tool for making complex information understandable.

In the late 1950s and early 1960s, several government agencies, including the National Science Foundation (NSF), National Institutes of Health (NIH), National Aeronautics and Space Administration (NASA), and various divisions within the Department of Defense (DOD), began funding research to address an array of computer graphics problems, including the development of input/output devices and programming. The total funding for these early programs was comparatively small. For example, the NSF allocated about 8 percent of its annual computing research budget to computer graphics between 1966 and 1985.
Its graphics-related expenditures rose from $93,000 to $1.8 million annually during this period.5

Another source of funding for computer graphics research during these years was the Information Processing Techniques Office (IPTO) of the DOD's Defense Advanced Research Projects Agency (DARPA, known at times as ARPA). IPTO support for the development of interactive graphics was concentrated at MIT, Carnegie Mellon University, and especially the University of Utah, which received $10 million in IPTO support for interactive graphics research between 1968 and 1975 (Stockham and Newell, 1975; Van Atta et al., 1991a,b). University programs were only loosely coupled to deliverable systems but supported visionary ideas and the training of students to pursue them.
The eventual payoffs from these small initial investments were enormous. The government support established an infrastructure for the computer graphics field through university-based research and training in fundamental science. These centers identified key research and technical problems, developed sample solutions, created tools and methods, and, above all, produced cadres of students, researchers, and teachers who became the leading practitioners in the field. The graduates of the federally supported academic programs have made substantial contributions not only to many areas of science, technology, and medicine, but also to the intellectual and artistic culture of the late 20th century. They have also launched companies that laid the foundations for a worldwide market for computer graphics worth $40 billion in 1997.

Seeding the Academic Talent Pool

Among the greatest contributions of the federal government has been support for the development of human resources. (Associations also played a role in building the graphics community, as illustrated in Box 10.2.) An early pioneer, Steven Coons, benefited from federal support of research at MIT that helped realize his vision of interactive computer graphics as a powerful design tool. During World War II, Coons worked on the design of aircraft surfaces, developing the mathematics to describe generalized surface patches. An early advocate of the use of computers in mechanical engineering, Coons taught in the Mechanical Engineering Department at MIT during the 1950s and 1960s, where he inspired his students with the vision of creating interactive computer graphics to assist design (Coons, 1967). Among the students he inspired were Ivan Sutherland and Lawrence Roberts, both of whom went on to make numerous contributions to computer graphics and (in Roberts' case) to computer networks. Both men also served as directors of IPTO.
Working in the early 1960s on the TX-2 at MIT's Lincoln Laboratory, which was equipped with an interactive display tube, Sutherland developed a graphics system called Sketchpad as his dissertation in 1963. Sketchpad was an interactive design tool for the creation, manipulation, and display of geometric objects in two-dimensional (2D) or three-dimensional (3D) space. With the system, a user could sketch with a light pen on the face of the CRT, position objects, change their size, square up corners, create multiple copies of objects, and paste them into an evolving design. Sketchpad was the first system to explore the data management techniques required for interactive graphics.

Roberts, meanwhile, wrote the first algorithm to eliminate hidden or obscured surfaces from a perspective picture (Roberts, 1963). In 1965, Roberts implemented a homogeneous coordinate scheme for transformations and perspective. His solutions to these problems prompted attempts over the next decade to find faster algorithms for generating hidden surfaces (Roberts, 1965).

BOX 10.2 Community Building

Many researchers credit the group SIGGRAPH with helping to build a strong community of graphics researchers that propelled the field forward rapidly. SIGGRAPH, which is the Association for Computing Machinery's Special Interest Group on Graphics, facilitates the exchange of ideas among researchers and technology developers through conferences and publications in an attempt to advance the technology of computer graphics and interactive techniques. It introduces the latest topics in computer graphics through conference courses and other educational activities, including development and distribution of curriculum materials.

SIGGRAPH attracts a diverse range of members, from computer scientists specializing in computer graphics and visualization, to business leaders and artists who use graphics as a means to further their craft. Interaction among such diverse members can help technology developers better understand the needs of users and promote advances in the capabilities of graphics technology. An annual conference has become a central location for the exchange of ideas and demonstration of developmental systems. Numerous academic and industry researchers publish papers in SIGGRAPH journals and conference proceedings. Edwin Catmull has called SIGGRAPH a "tremendous community" and credits its collaborative spirit and broad-based constituency with helping to accelerate the development of computer graphics.1

1 Presentation by Edwin Catmull, chief technical officer, Pixar Animation Studios, at the Computer Science and Telecommunications Board workshop, "Modeling and Simulation: Competitiveness Through Collaboration," October 19, 1996, Irvine, CA.

Sutherland expanded the talent pool everywhere he went.
First MIT, then Harvard University (especially after Sutherland's return from his stint as IPTO director in 1966), and, following Sutherland's move there in 1968, the University of Utah became the major academic centers of early work in interactive graphics. In particular, the period from the late 1960s through the late 1970s was a golden era of computer graphics at Utah. Students and faculty in Utah's ARPA-funded program contributed to the growth of a number of exploratory systems in computer graphics and the identification of key problems for future work (Table 10.1). Among their notable activities were efforts to develop fast algorithms for removing hidden surfaces from 3D graphics images, a problem identified as a key computational bottleneck (Sutherland et al., 1974). Students of the Utah program made two important contributions in this field, including an area search method by Warnock (1969) and a scan-line algorithm that was developed by Watkins (1970) and implemented in hardware.

TABLE 10.1 Selected Alumni of the University of Utah's Computer Graphics Program

Alan Kay (Ph.D. 1969): Developed the notion of a graphical user interface at Xerox PARC, which led to the design of Apple Macintosh computers. Developed Smalltalk. Fellow at Apple Computer.

John Warnock (Ph.D. 1969): Worked on the ILLIAC 4 Project, a spaceflight simulator, and airplane simulators at Evans & Sutherland. Developed the Warnock recursive subdivision algorithm for hidden surface elimination. Founder of Adobe Systems, which developed the PostScript language for desktop publishing.

Nolan Bushnell (B.S. 1969): Developed the table tennis game Pong, which in 1972 launched the video game industry. Founder of Atari, which became the leading company in video games by 1982.

Charles Seitz (Faculty, 1970-1973): Pioneer in asynchronous circuits. Co-designer of the first graphics machine, the LDS-1 (Line Drawing System). Designed the Cosmic Cube machine as a research prototype that led to the design of the Intel iPSC. Founder of Myricom Corp.

Henri Gouraud (Ph.D. 1971): Developed the Gouraud shading method for polygon smoothing, a simple rendering method that dramatically improved the appearance of objects.

Edwin Catmull (Ph.D. 1974): Pioneer in computer animation. Developed the first computer animation course in the world. Co-founder of Pixar Animation Studios, a leading computer graphics company that has worked for LucasFilm and was recently involved in the production of the movie Toy Story. Received a technical Academy Award (with Tom Porter, Tom Duff, and Alvy Ray Smith) in 1996 for "pioneering inventions in Digital Image Compositing."

James Clark (Ph.D. 1974): Rebuilt the head-mounted display and 3D wand to see and interact with three-dimensional graphic spaces. Former faculty member at Stanford University. Founder of Silicon Graphics Incorporated and chairman of Netscape Communications Corporation.

Bui Tuong-Phong (Ph.D. 1975): Invented the Phong shading method for capturing highlights in graphical images by modeling specular reflection. Phong's lighting model is still one of the most widely used methods for illumination in computer graphics.

Henry Fuchs (Ph.D. 1975): Federico Gil Professor, University of North Carolina at Chapel Hill. Research in high-performance graphics hardware; three-dimensional medical imaging; and head-mounted displays and virtual environments. Founder of Pixel Planes.

Martin Newell (Ph.D. 1975; Faculty, 1977-1979): Developed procedural modeling for object rendering. Co-developed the Painter's algorithm for surface rendering. Founder of Ashlar Incorporated, which develops computer-assisted design software.

James Blinn (Ph.D. 1978): Invented the first method for representing surface textures in graphical images. Scientist at the Jet Propulsion Laboratory, where he worked on computer animation of the Voyager flybys.

James Kajiya (Ph.D. 1979): Developed the frame buffer concept for storing and displaying single-raster images.

Perhaps the most important breakthrough was Henri Gouraud's development of a simple scheme for continuous shading (Gouraud, 1971). Unlike polygonal shading, in which an entire polygon (a standard surface representation) was rendered at a single level of gray, Gouraud's scheme interpolated between points on a surface to produce continuous shading across a single polygon, thus achieving a closer approximation of reality. The effect made a surface composed of discrete polygons appear to be continuous.
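Gouraud's interpolation idea can be illustrated with a short sketch. The barycentric formulation below is a modern restatement (Gouraud's original method interpolated intensities along polygon edges and then along scanlines, which is equivalent for triangles); the triangle coordinates and vertex intensities are invented for the example:

```python
def flat_shade(_x, _y, face_intensity=0.5):
    # Polygonal (flat) shading: every point on the polygon gets the
    # same intensity, so facet boundaries remain visible.
    return face_intensity

def gouraud_shade(x, y, tri, intensities):
    # Gouraud shading: compute the barycentric weights of (x, y) in the
    # triangle and blend the three vertex intensities, producing a
    # smooth gradient across the polygon.
    (x0, y0), (x1, y1), (x2, y2) = tri
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    w0 = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / d
    w1 = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / d
    w2 = 1.0 - w0 - w1
    i0, i1, i2 = intensities
    return w0 * i0 + w1 * i1 + w2 * i2

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# At a vertex the interpolated intensity equals that vertex's intensity;
# along an edge it blends smoothly between the two endpoint values.
print(gouraud_shade(0.0, 0.0, tri, (0.2, 0.8, 0.5)))  # vertex 0 -> 0.2
print(gouraud_shade(0.5, 0.0, tri, (0.2, 0.8, 0.5)))  # edge midpoint -> 0.5
```

Rendering an image simply evaluates this interpolation at every pixel covered by the polygon, which is why the scheme was cheap enough to be practical on 1971 hardware.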
The work of these individuals alone reflects the high level of fundamental research performed under federal sponsorship in a variety of graphics fields, including surface rendering, simulations, computer animation, graphical user interface design, and early steps toward VR. No fewer than 11 commercial firms, several of which ship more than $100 million in products annually, trace their origins to the Utah program.6
Virtual Reality in the Private Sector: Approach with Caution

Industry and private research centers played an important role in the early development of interactive graphics. But an examination of several key players—Bell Laboratories, the Mathematical Applications Group Incorporated (MAGI), and General Electric Company (GE)—illustrates that the private sector, even when it has federal funding for isolated projects, cannot support development of nascent technologies requiring high-risk research with uncertain payoffs. Indeed, even when a company contributes lucrative new technologies to the field, the government is often the key to sustaining progress over time (see Box 10.3).

BOX 10.3 The Rise and Fall of Atari

Atari, founded by University of Utah graduate Nolan Bushnell, was once the fastest-growing company in the United States. Started in 1972 with an initial investment of $500, Atari attained sales exceeding $500 million in 1980. During the late 1970s and early 1980s, Atari was a center for exciting developments in software and chip design for the home entertainment market. A joint venture with LucasFilm in 1982, in which Atari licensed and manufactured games designed by LucasFilm, established cross-pollination between video games and film studios.

Several pioneering figures in the VR field got their start at Atari. For instance, Warren Robinett, who has directed the head-mounted display and nano-manipulator projects at the University of North Carolina at Chapel Hill, developed the popular video game Adventure at Atari in the late 1970s. Jaron Lanier got his start by creating the video game Moondust. He used the profits to launch VPL Research in 1984, the first commercial VR company.

In 1980 Atari created its own research center, directed by Alan Kay, who came from Xerox PARC and assembled a team of the best and brightest in the field of interface design and VR research. But Atari fell on hard times. Not long after its banner year in 1980, Atari registered $536 million in losses for 1983. The Atari Research Laboratory was a casualty of the economic crash in the video game industry (and the computer industry more generally). Most of the people working in VR at Atari either migrated to work on VR projects in federal laboratories or, like Jaron Lanier, landed government contracts. Lanier won a contract to build the DataGlove for NASA.

Industry was clearly not prepared, after sustaining such a big economic blow, to continue the development of VR technology on its own. Indeed, Lanier's failed efforts to market a consumer entertainment version of the DataGlove, called the Power Glove, for Nintendo demonstrated that the 1980s was not the right time for a sustained industry push. Federal support was crucial to building the array of hardware and software necessary for industry to step in and move VR forward.

Bell Laboratories had one group of researchers, including Michael Noll, Bela Julesz, and C. Bosche, working on computer-animated stereo movies, and another group, including Ken Knowlton, Leon Harmon, and Manfred Schroeder, working on pixel graphics methods for digitizing still images, gray-scale techniques, and rule-directed animation. Knowlton also produced an important animation language, called BEFLIX, which permitted the creation and modification of gray-scale pixel images.

MAGI, headed by Phillip Mittleman, was supported by military contracts for projects simulating equipment behavior. MAGI developed a hidden-surface algorithm along with a user language, Synthavision, which sent output to a specially built monitor for microfilming through color filters. The system provided a user-oriented syntax for making computer animation, and it was important for creating film footage for advertising.

The GE group built the first real-time, full-color, interactive flight simulator, a project funded by a NASA contract for the manned space program (Rouselot and Schumacker, 1967).7 The simulator, completed in 1967, permitted up to 40 solid objects to be displayed in full color, with hidden surfaces removed and visible surfaces shaded to approximate reflected illumination. The entire display was updated in real time, depending on a trainee's actions on the controls. This GE system was the prototype for a new generation of training simulators that integrated computer-driven synthetic visual environments with interactive tactile feedback.

Although GE had a well-endowed in-house research infrastructure of venerable standing, the company took a cautious approach to this new area of research. GE Aerospace did not market its early image-generating systems to customers other than the federal government, nor did it initiate its own program to develop VR. GE did spin off a commercially successful system called Genigraphics, a full-color, interactive, 2D slide-generating system aimed at the commercial audiovisual market.
And of course, GE did continue contract work on image generators for flight simulators, including its highly rated Compu-Scene IV system, which "practically stole the market in high-end military flight simulation and training in 1984 when [it] introduced photographic-quality texturing to real-time graphics."8

GE also pursued medical imaging. Its Medical Systems Laboratory has been a major manufacturer of medical imaging systems, from x-ray machines to ultrasound, computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) systems. In addition, GE scientists have made distinguished contributions to the published literature on scientific and medical visualization. For example, the "marching cubes" algorithm developed by William E. Lorensen and Harvey E. Cline of the Electronic Systems Laboratory at the GE Research and Development Center is one of the most fundamental algorithms for high-resolution, 3D surface reconstruction from CT, MRI, or SPECT data
(Lorensen and Cline, 1987).9 Graphics work of this sort has been regarded by GE as central to the development of new imaging systems.

Significantly, GE's achievements in this area have benefited from university collaborations and federal support. An example is the recent arrangement between GE Medical Systems and the University of Chicago involving the GE digital detector system, a 10-year, $100 million R&D effort that has been the basis of a portfolio of medical imaging and computer-aided detection systems involving more than 100 scientists and resulting in 80 patents (General Electric, 1997). The GE technology will be used by the University of Chicago Medical Center in a long-term project supported by the National Cancer Institute, American Cancer Society, U.S. Army, and Whitaker Foundation to develop a platform for computer-aided diagnosis, which provides the radiologist with guidance for reading a mammographic image.10

The GE experience demonstrates the difficulty faced by private firms in funding long-term research that is not directly related to ongoing product development efforts. Industry seldom funds research that is expected to take more than 5 to 7 years to produce tangible results, although firms can misjudge how long it will take to develop a marketable product from new technology. And some firms do support limited research with longer time horizons (see Chapter 5 for a discussion of long-term research). In its press releases on the digital detector system, GE emphasizes that this 10-year project is the largest development project in company history. Commercial VR, by comparison, has taken 30 years to mature. None of the companies discussed in this section (Bell Laboratories, MAGI, or GE) pursued commercial applications of VR.
MAGI left the graphics field completely, failing to sustain a research capability in computer animation and simulation even though it helped launch the field.11 Both Bell Laboratories and GE abandoned work on commercial simulation systems in spite of commanding early positions in the field. It is not difficult to see why. VR is one of those fields that Ivan Sutherland would christen "holy grails"—fields involving the synthesis of many separate, expensive, and risky lines of innovation in a future too far distant and with returns too unpredictable to justify the long-term investment.

Synergy Launches the Quest for the "Holy Grail"

Work on head-mounted displays illustrates the synergy between the applications-focused environments of industry and government-funded (both military and civilian) projects and the fundamental research focus of university work that spills across disciplinary boundaries. Work on head-mounted displays benefited from extensive interaction and cross-fertilization of ideas among federally funded, mission-oriented military
projects and contracts as well as private-sector initiatives. The players included NASA Ames, the Armstrong Aerospace Medical Research Laboratory of the Air Force, Wright-Patterson Air Force Base, and, more recently, DOD programs on modeling and simulation, such as the Synthetic Theater of War program. Each of these projects generated a stream of published papers, technical reports, software (some of which became commercially available), computer-animated films, and even hardware that was accessible to other graphics researchers. Other important ideas for the head-mounted display came from Knowlton and Schroeder's work at Bell Laboratories, the approach to real-time hidden-line solutions by the MAGI group, and the GE simulator project (Sutherland, 1968).

Early work on head-mounted displays took place at Bell Helicopter Company. Designed to be worn by pilots, the Bell display received input from a servo-controlled infrared camera, which was mounted on the bottom of a helicopter. The camera moved as the pilot's head moved, and the pilot's field of view was the same as the camera's. This system was intended to give military helicopter pilots the capability to land at night in rough terrain. The helicopter experiments demonstrated that a human could become totally immersed in a remote environment through the eyes of a camera.

The power of this immersive technology was demonstrated in an example cited by Sutherland (1968). A camera was mounted on the roof of a building, with its field of view focused on two persons playing catch. The head-mounted display was worn by a viewer inside the building, who followed the motion of the ball, moving the camera by using head movements. Suddenly, the ball was thrown at the camera (on the roof), and the viewer (inside the building) ducked. When the camera panned the horizon, the viewer reported seeing a panoramic skyline.
When the camera looked down to reveal that it was "standing" on a plank extended off the roof of the building, the viewer panicked!

In 1966, Ivan Sutherland moved from ARPA to Harvard University as an associate professor of applied mathematics. At ARPA, Sutherland had helped implement J.C.R. Licklider's vision of human-computer interaction, and he returned to academe to pursue his own efforts to extend human capabilities. Sutherland and a student, Robert Sproull, turned the "remote reality" vision systems of the Bell Helicopter project into VR by replacing the camera with computer-generated images.12 The first such computer environment was no more than a wire-frame room with the cardinal directions—north, south, east, and west—initialed on the walls. The viewer could "enter" the room by way of the "west" door and turn to look out windows in the other three directions. What was then called the head-mounted display later became known as VR. Sutherland's experiments built on the network of personal and pro-
Both the Stanford and the Berkeley groups were interested in designing a simple machine that could be built as a microchip within the university environment. Hennessy played a key role in transferring this technology to industry. During a sabbatical from Stanford in 1984-1985, he co-founded MIPS Computer Systems (acquired by Silicon Graphics Incorporated in 1992), which specialized in the production of computers and chips based on these concepts.

In 1986 the computer industry began to announce commercial processors based on RISC technology. Hewlett-Packard Company (HP) converted its existing minicomputer line to RISC architectures. IBM never turned the 801 into a product but adapted the ideas for a new low-end architecture that was incorporated into the IBM RT-PC. This machine was a commercial failure, but subsequent RISC processors with which IBM has been involved (e.g., the Apple/IBM/Motorola PowerPC) have been highly successful. In 1987 Sun Microsystems, Inc. began delivering machines based on the SPARC architecture, a derivative of the Berkeley RISC-II machine. In the view of many, it was Sun's success with RISC-based workstations that convinced the remaining skeptics that RISC was commercially significant. Sun's success sparked renewed interest at IBM, which announced a new RISC architecture in 1990, as did Digital Equipment Corporation in 1993. By 1995, RISC had become the foundation of a $15 billion industry in computer workstations.

RISC computers advanced the field of interactive graphics and promoted the development of VR. Silicon Graphics Incorporated (SGI), co-founded by James Clark in 1982, was an early adopter of RISC processors and has been a leader in the recent development of high-end graphics, including VR. Clark joined the Stanford engineering faculty in 1979 after completing his doctorate with Ivan Sutherland on problems related to the head-mounted display.
Clark worked with Hennessy and Forest Baskett in the Stanford VLSI program and was supported by DARPA in the Geometry Engine project, which attempted to harness the custom chip technology of MIPS to create cost-effective, high-performance graphics systems. In 1981, Clark received a patent for his Geometry Engine—the 3D algorithms built into the "firmware" that enable the unit to serve up real-time, interactive 3D graphics. The patent formed the basis of SGI. Clark also invented the Graphics Library, the graphics interface language used to program SGI's computers.

Silicon Graphics is part of the commercial infrastructure for interactive graphics and VR that finally took root in the fertile ground prepared by early federal funding initiatives. Companies such as SGI, Evans & Sutherland, HP, and Sun Microsystems have generated products that have enabled simulations of all sorts, scientific visualizations, and computer-aided design programs for engineering. They also helped create the film and video game industries, which have stimulated advances in graphics by providing jobs, markets, and substantial research advances.16 In 1997, SGI reported revenues of $3.66 billion (McCracken, 1997).17

Biomedical Applications

The basic technologies developed through VR research have been applied in a variety of ways over the last several decades. One line of work led to applications of VR in biochemistry and medicine. This work began in the 1960s at the University of North Carolina (UNC) at Chapel Hill. The effort was launched by Frederick Brooks, who was inspired by Sutherland's vision of the ultimate display as one that would enable a user to see, hear, and feel in the virtual world. (Flight simulators had incorporated sound and haptic feedback for some time.) Brooks selected molecular graphics as the principal driving problem of his program.

The goal of Project GROPE, started by Brooks in 1967, was to develop a haptic interface for molecular forces (Brooks, 1990). The idea was that, if the force constraints on particular molecular combinations could be "felt," then a designer of molecules could more quickly identify combinations of structures that could dock with one another. GROPE I was a 2D system for continuous force fields. GROPE II was expanded to a full six-dimensional (6D) system with three forces and three torques. The computer available for GROPE II in 1976 could produce forces in real time only for very simple world models: a table top, seven child's blocks, and the tongs of the Argonne Remote Manipulator (ARM), a large mechanical device. For real-time evaluation of molecular forces, Brooks and his team estimated that 100 times more computing power would be necessary. After the GROPE II system was built and tested, the ARM was mothballed and the project was put on hold for about a decade, until 1986, when VAX computers became available. GROPE III, completed in 1988, was a full 6D system.
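The servo loop at the heart of a GROPE-style haptic display can be sketched in a few lines: at each cycle, evaluate the molecular force field at the probe's position, then scale, clamp, and command the result as a force on the arm. The sketch below is illustrative only; the Lennard-Jones pair potential stands in for the full molecular force field, and the gain and force cap are invented values, not the GROPE implementation.

```python
import numpy as np

def lennard_jones_force(probe, atoms, epsilon=1.0, sigma=1.0):
    """Net force on a probe atom from fixed site atoms: the negative
    gradient of the 12-6 Lennard-Jones potential, summed over pairs."""
    force = np.zeros(3)
    for atom in atoms:
        r_vec = probe - atom                # points from atom toward probe
        r = np.linalg.norm(r_vec)
        # -dU/dr of 4*eps*((sigma/r)**12 - (sigma/r)**6):
        magnitude = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
        force += magnitude * (r_vec / r)    # positive = repulsive
    return force

def haptic_loop_step(probe_pos, atoms, gain=0.1, max_force=5.0):
    """One servo cycle: evaluate the field, scale, and clamp the force
    before commanding the arm (clamping keeps the device stable)."""
    f = gain * lennard_jones_force(probe_pos, atoms)
    norm = np.linalg.norm(f)
    if norm > max_force:
        f *= max_force / norm
    return f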
Brooks and his students then went on to build a full molecular-force-field evaluator and, with 12 experienced biochemists, tested it in the GROPE IIIB experiments of 1990. In these experiments, the users changed the structure of a drug molecule to get the best fit to an active site by manipulating up to 12 twistable bonds. The test results on haptic visualization were extremely promising (Ouh-Young et al., 1988, 1989; Minsky et al., 1990). The subjects saw the haptic display as a fast way to test many hypotheses in a short time and to set up and guide batch computations. The greatest promise of the technique, however, lay not in saving time but in improving situational awareness. Chemists using the method reported better comprehension of the force fields in the active site and of exactly why each particular candidate drug docked well or poorly. Based on this improved grasp of the problem, users could form new hypotheses and ideas for new candidate drugs.

The docking station is only one of the projects pursued by Brooks's group at the UNC Graphics Laboratory. The virtual world envisioned by Sutherland would enable scientists or engineers to become immersed in the world rather than simply view a mathematical abstraction through a window from outside. The UNC group has pursued this idea through the development of what Brooks calls "intelligence-amplifying systems." Virtual worlds are a subclass of intelligence-amplifying systems, which are expert systems that tie the mind in with the computer rather than simply substituting a computer for a human.

In 1970, Brooks's laboratory was designated an NIH Research Resource in Molecular Graphics, with the goal of developing virtual-world technology to help biochemists and molecular biologists visualize and understand their data and models. However, because of budget cutbacks and a reorientation of the program, support from the NIH National Center for Research Resources has declined by more than 50 percent since 1979. Fortunately, a variety of other federal agencies, including NIH's National Cancer Institute, DARPA, and the NSF, have continued to support the virtual worlds project since the early 1980s. Collaboration with the Air Force Institute of Technology on image-delivery systems has also been an important part of the work at UNC since 1983 (U.S. Congress, 1991).
During the 1990s, UNC has collaborated with industry sponsors such as HP to develop new architectures incorporating 3D graphics and volume-rendering capabilities into desktop computers (HP later decided not to commercialize the technology).18 Since 1985, NSF funding has enabled UNC to pursue the Pixel-Planes project, with the goal of constructing an image-generation system capable of rendering 1.8 million polygons per second and a head-mounted display system with a lag time under 50 milliseconds. This project is connected with GROPE and with a large software project for mathematical modeling of molecules, human anatomy, and architecture. It is also linked to VISTANET, in which UNC and several collaborators are testing high-speed network technology for joining a radiologist planning cancer therapy in his clinic with a virtual-world system, a Cray supercomputer at the North Carolina Supercomputer Center, and the Pixel-Planes graphics engine in Brooks's laboratory.

With Pixel-Planes and the new generation of head-mounted displays, the UNC group has constructed a prototype system that transforms the notions explored in GROPE into a wearable virtual-world workstation. For example, instead of viewing a drug molecule through a window on a large screen, a chemist wearing a head-mounted display sits at a computer workstation with the molecule suspended in space before him. The chemist can pick it up, examine it from all sides, and even zoom into interior regions of the molecule. Instead of an ARM gripper, the chemist wears a force-feedback exoskeleton that enables the right hand to "feel" the spring forces of the molecule being warped and shaped by the left hand. In a similar use of this technology, a surgeon can rehearse a simulation of a delicate procedure to be performed remotely.

A variation on the approach taken in the GROPE project is being pursued by UNC medical researcher James Chung, who is designing virtual-world interfaces for radiology. One approach is data fusion, in which a physician wearing a head-mounted display in an examination room could, for example, view an ultrasound image of a fetus, rendered in 3D by a workstation and superimposed on the body of the patient. The physician would see these data fused with the patient's body. In related experiments with fusion of MRI and CT scan data, a surgeon has been able to plan localized radiation treatment of a tumor.

In the UNC case, funding of VR research by several different agencies has sustained the laboratory through changing federal priorities and enabled it to pursue a complementary mix of alternative approaches, basic and applied research, and prototype development. Although federal agencies have different mission objectives, a synergy evolved among the various projects, and a common base of knowledge and personnel was established. Over the years, the government's investment has greatly expanded the range of tools available to both the research community and industry.
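The data fusion pursued in Chung's work is, at bottom, a registration problem: a point measured in the patient's coordinate frame must be mapped through the scanner, head-tracker, and display calibrations before it can be drawn over the physician's view. A minimal sketch of that transform chain follows; the frame names, the pinhole camera model, and the +z viewing convention are all illustrative assumptions, not UNC's implementation.

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

def perspective_project(point_eye, focal=1.0):
    """Pinhole projection of an eye-space point onto the image plane
    (assumes the viewer looks down the +z axis)."""
    x, y, z = point_eye
    return np.array([focal * x / z, focal * y / z])

def overlay_point(p_patient, patient_to_world, world_to_head, head_to_eye):
    """Chain calibrated transforms so a point measured in patient
    coordinates (e.g., an ultrasound sample) lands at the correct
    spot in the head-mounted display."""
    p = np.append(p_patient, 1.0)  # homogeneous coordinates
    p_eye = head_to_eye @ world_to_head @ patient_to_world @ p
    return perspective_project(p_eye[:3])
```

If the head tracker reports a fresh world_to_head pose every frame, re-running overlay_point keeps the rendered image registered to the patient as the physician moves, which is what makes the fused display feel anchored to the body.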
Virtual Reality and Entertainment: Toward a Commercial Industry

At a 1991 Senate hearing, several VR pioneers noted that commercial interests, with their need for quick returns, could not merge the substantially different technologies needed to create virtual worlds while those technologies remained at a precompetitive stage for so many years (U.S. Congress, 1991). But a sustained mixture of government, industry, and university-based R&D and the synergistic development of several applications helped bring VR to the marketplace. In particular, the nexus between public research and privately developed entertainment systems made VR technology more affordable and scaled it up for large consumer markets, thereby promoting the rapid adoption and widespread use of imaging technology in science and medicine.

An example is RenderMan, developed by Pixar Animation Studios. Edwin Catmull, an alumnus of the Utah graphics program, joined Alvy Ray Smith at LucasFilm in 1979. Catmull and Smith had worked together at the New York Institute of Technology (NYIT). To realize the dream of constructing an entire film from computer-generated material, Smith and Catmull recruited a number of young computer graphics talents to LucasFilm. Among them was Loren Carpenter of the Boeing Company, who had studied Mandelbrot's research and modified it to create realistic fractal images. In 1981, Carpenter wrote the first renderer for LucasFilm, REYES (Renders Everything You Ever Saw), which was the beginning of RenderMan. In 1986, the computer graphics division of LucasFilm's Industrial Light and Magic was spun off as Pixar, with Catmull as president and Smith as vice president. Under their direction, Pixar worked on developing a rendering computer.

Also joining the REYES machine group at Pixar in 1986 was Patrick Hanrahan, who worked with Robert Drebin and Loren Carpenter to develop the first volume-rendering algorithms for the Pixar image computer (Drebin et al., 1988). These algorithms created images directly from 3D arrays, without the typical intermediate step of converting the data to standard surface representations such as polygons. Hanrahan was the principal architect of the interface and was responsible for the rendering software and the graphics architecture of RenderMan. The rendering interface evolved into the RenderMan standard now widely used in the movie industry. The standard describes the information the computer needs to render a 3D scene: the objects, light sources, cameras, and atmospheric effects. Once a scene is converted to a RenderMan file, it can be rendered on a variety of systems, from Macintoshes to personal computers to SGI workstations, which opened up many possibilities for 3D computer graphics software developers. RenderMan was used in creating Toy Story, the first feature-length computer-animated film; the dinosaurs in Jurassic Park; and the cyborg in Terminator 2.
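Volume rendering of the kind Drebin, Carpenter, and Hanrahan described marches rays through the 3D array, classifying each interpolated sample into an emitted intensity and an opacity and compositing front to back, with no polygonal surface ever extracted. The sketch below illustrates the idea only; the transfer function, step size, and grayscale simplification are assumptions for this example, not the Pixar algorithm itself.

```python
import numpy as np

def classify(density):
    """Map a raw density sample to (emitted intensity, opacity).
    This transfer function is a made-up example; the published work
    derived color and opacity per material from classification."""
    opacity = np.clip(density, 0.0, 1.0) * 0.1
    return density, opacity

def trilinear(volume, p):
    """Trilinearly interpolate the volume at continuous point p."""
    i = np.clip(np.floor(p).astype(int), 0, np.array(volume.shape) - 2)
    f = p - i
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                value += w * volume[i[0] + dx, i[1] + dy, i[2] + dz]
    return value

def render_ray(volume, origin, direction, step=0.5, n_steps=200):
    """Front-to-back emission-absorption compositing along one ray."""
    color, transmittance = 0.0, 1.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        if np.all(p >= 0) and np.all(p <= np.array(volume.shape) - 1):
            intensity, opacity = classify(trilinear(volume, p))
            color += transmittance * opacity * intensity
            transmittance *= 1.0 - opacity
            if transmittance < 1e-3:  # ray is effectively opaque
                break
        p = p + step * d
    return color
```

A full renderer repeats render_ray once per pixel; the appeal of the approach is that soft or amorphous data (clouds, tissue boundaries in medical scans) never has to be forced into a hard polygonal surface.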
RenderMan has also contributed to visualization and volume rendering in a number of fields of science, engineering, and medicine. In addition, the hardware and software components and the individuals involved have circulated between industry and academe. Hanrahan, after moving from NYIT to Pixar, returned to academic research, first as an associate professor at Princeton University and more recently as a professor at Stanford University, where he has contributed to several areas of graphics. One was the development of applications for the Responsive Workbench, a 3D, interactive virtual-environment workspace for scientific visualization, architecture, and medicine. The workbench has been a cooperative project between Stanford and the German Institute for Information Design, supported by grants from Interval Research Corporation, DARPA (for visualization of complex systems), and NASA Ames (for the virtual wind tunnel). Silicon Graphics and Fakespace Incorporated donated equipment.
The Right Mix: Virtual Reality in the 1990s

Continued improvements in computer processors and new chip architectures have stimulated the growth of commercial markets for VR technology, fueling the revenues of companies such as SGI and drastically cutting the prices of graphics workstations. The resulting improvements in the price-performance ratios of computer graphics technologies have, in turn, increased demand for these products. Furthermore, potential markets for multimedia products have driven the search for new architectures for image caching and for compression techniques that greatly reduce bandwidth and memory requirements. This convergence of high-end computer architectures, graphics-rendering hardware, and software with low-end commercial markets for computer graphics expands the opportunities for using VR technologies in a variety of commercial applications. It also motivates further technical advances that benefit commercial and military customers alike. As SGI chief executive officer Ed McCracken once explained, "Our entertainment customers drive our technological innovation. And technological innovation is the foundation of Silicon Graphics."19

As civilian research has proceeded and the DOD has come under increasing pressure to operate effectively on reduced budgets, the traditional relationship between military and commercial VR research projects has changed. The DOD continues to be a major consumer of VR technology, but it can now draw increasingly on commercial technologies. A number of reforms have been enacted to enable the DOD to procure products from the commercial industrial base more easily, and a number of defense contractors have diversified into commercial applications of VR technology (see Box 10.4). In 1998, the DOD expected to spend more than $2.5 billion on programs for modeling and simulation (U.S. Department of Defense, 1997).
Such considerable resources will likely stimulate further development of graphics and VR technologies. Directive 5000.1 (U.S. Department of Defense, 1996) mandates that models and simulations be required of all proposed systems and that "representations of proposed systems (virtual prototypes) shall be embedded in realistic, synthetic environments to support the various phases of the acquisition process, from requirements determination and initial concept exploration to the manufacturing and testing of new systems, and related training."

BOX 10.4
Real3D Emerges from Military-Commercial Linkage

Real3D, one of several companies offering real-time three-dimensional (3D) graphics products for commercial systems, traces its origins to the first GE Aerospace Visual Docking Simulator for the Apollo lunar landings. In 1991, GE Aerospace began exploring commercial applications of its real-time 3D graphics technology, leading to a contract with Sega Enterprises, Limited, of Japan, which wanted to improve its arcade graphics hardware so that games would present more realistic images. GE Aerospace adapted a miniaturized version of its real-time 3D graphics technology specifically for Sega's Model 2 and Model 3 arcade systems, which incorporated new algorithms that provided a visual experience far exceeding expectations.1 To date, Sega has shipped more than 200,000 systems that include what is today Real3D technology.

In 1993, GE Aerospace was acquired by Martin Marietta, another leader in the field of visual simulation. Martin Marietta not only advocated expanding the relationship with Sega but also encouraged further research and analysis of other commercial markets, such as personal computers (PCs) and graphics workstations. In 1995, Martin Marietta merged with Lockheed Corporation and shortly thereafter launched Real3D to focus solely on developing and producing 3D graphics products for commercial markets. To that end, in November 1996, a strategic alliance was formed between Real3D and Chips and Technologies Incorporated, aimed at selling Real3D R3D/100 two-chip graphics accelerators to the PC industry and bringing world-class 3D applications to professionals who use the Windows NT environment.2 Finally, in December 1997, Lockheed Martin established Real3D Incorporated as an independent company and announced that Intel Corporation had purchased a 20 percent minority stake in Real3D.

Real3D thus builds on more than three decades of experience in real-time 3D graphics hardware and software, going back to the Apollo Visual Docking Simulator. This experience has led to more than 40 key patents on 3D graphics hardware and software. Strategic relationships with various companies provide opportunities to transition high-end graphics technology from leading-edge research environments to the desktops of physicians, engineers, and scientists. Conversely, the company may also be able to transfer technology developed for video games to developers of military training simulators.

1 See the discussion by Jeffrey Potter in CSTB (1997b), pp. 163-164. Additional information is available online at <http://www.real3d.com/sega.html>.
2 The R3D/100 chipset directly interfaces with Microsoft-compliant application programming interfaces, such as OpenGL.

More interestingly, attempts have been made to better coordinate the efforts of military and commercial research programs in VR technologies. The Defense Modeling and Simulation Office, for example, asked the National Research Council to examine areas of mutual interest to the defense modeling and simulation community and the entertainment industry. The resulting report identified five broad areas of common interest: fundamental technologies for immersive environments, networked simulation, standards for interoperability across systems, computer-generated characters, and tools for creating simulated environments (Computer Science and Telecommunications Board, 1997b). Already, the DOD has work under way in many of these areas. It is exploring ways of improving representations of human behaviors in synthetic environments and has developed a High-Level Architecture (HLA) to facilitate the interoperability of distributed simulation systems.20 Commercial entertainment companies are exploring related areas of research and may benefit from—and contribute to—defense-related activities.

The growing linkages between the commercial and military VR communities are also apparent in the movement of experts between the two sectors. For example, Robert Jacobs, director and president of Illusion Incorporated, a company that derives some 80 percent of its revenues from the commercial entertainment industry, is an inventor of DARPA's Defense Simulation Network (SIMNET) program and has been a technical contributor to most of the related training programs. Eric Haseltine, now vice president and chief scientist of research and development at Walt Disney Imagineering, was previously an executive at Hughes Aircraft Company, a defense contractor he joined after completing a postdoctoral fellowship in neuroanatomy and a doctorate in physiological psychology. Real3D senior software engineer Steven Woodcock began his career developing game simulations at Martin Marietta, where he was responsible for weapons code development, testing, integration, and documentation for the Advanced Real-time Gaming Universal Simulation (ARGUS).21 ARGUS is a real-time, distributed, interactive command-and-control simulation focusing on ballistic missile defense and theater missile defense, running on a network consisting of a Cray-2 supercomputer and more than 50 SGI workstations.
Woodcock has noted that his Martin Marietta experience with distributed applications, real-time simulations, and artificial intelligence has proved invaluable in the real-time, 3D, multiplayer game environments he has been designing recently.

These examples demonstrate the complex and changing relationship between federally funded research and commercial innovation. Yet even as the commercial industry has grown, federal funding has continued to play a critical role, advancing technologies that serve the government's own needs and supporting underlying fundamental research. Indeed, DARPA, the NSF, the Department of Energy (DOE), and other federal agencies continue to invest in VR and graphics-related research. The NSF's funding of the Science and Technology Research Center in Computer Graphics and Scientific Visualization supports collaborative research on computer graphics among participants from five universities. The DOE's Advanced Strategic Computing Initiative, although aimed at supporting the development of models of nuclear weapons, includes funding for university research on fundamental techniques for computer graphics and scientific visualization. Such programs may ultimately help build a self-sustaining technological infrastructure for VR.

Lessons from History

Federal funding has played a critical role in developing VR technology. It funded early, precompetitive research on topics such as CRTs that industry had few incentives to support. As the technology advanced and practical applications emerged, federal funding continued to complement industry support, as illustrated by the work on head-mounted displays and the continuing government support of the field after the collapse of Atari. Federal support has enabled universities to create and maintain leading-edge computer graphics and VR research centers, which have contributed to the information revolution. Industry sectors and companies that generate billions of dollars in annual revenues (SGI is but one example) trace their roots to federally funded research. A primary benefit of federal funding, particularly of university research, has been the creation of human resources that have carried out, and driven advances in, VR research. A number of graduate students and academic researchers who received federal support have made significant contributions to the field and have established leading companies (see Table 10.1).
Research in computer graphics and VR has benefited from multiple sources of federal support, which have enabled the simultaneous pursuit of various approaches to technical problems, funded a complementary mix of basic and applied research, developed a range of applications, provided a funding safety net that has sustained emerging technology despite changes in federal mission priorities, and offered the flexibility needed to pursue promising new ideas. The success of this approach is evidenced by the rich selection of VR products now available across the aerospace, military, industrial, medical, education, and entertainment sectors. Finally, this case study demonstrates that advances in computing and communications seldom proceed along a linear or predictable path. Progress in VR technologies has benefited from varied interactions among government, universities, and industry and from the fusion of ideas from different areas of research, such as computer graphics, computer architectures, and military simulation.
Notes

1. Such statements are invariably subject to the "back to the ancients" process of identifying precursors, such as Edwin Link's work on vehicle simulation in the 1920s. See Ellis (1991, 1994).
2. This project and others are listed on the Advanced Displays and Spatial Perception Laboratory page on the NASA Ames Research Center Web site at <http://duchamp.arc.nasa.gov:80/adsp.html>.
3. See Rowell (1998) and an article posted on the Silicon Graphics Web site at <http://www.sgi.com/features/1998/aug/chrysler/>.
4. The contributions of this center to scientific visualization and work in VR are discussed by Cruz-Neira et al. (1992).
5. These estimates are based on data compiled from NSF's annual report Summary of Grants and Awards for the years cited.
6. Another noteworthy graduate of the Utah program in the late 1970s was Gary Demos, who started several major computer graphics production companies and had a big impact on the introduction of computer graphics technology in the film industry.
7. The equipment was installed at the Manned Spacecraft Center in Houston.
8. Jeffrey Potter, Intel Corporation, as quoted in CSTB (1997b). For an evaluation of one of the GE systems, see Brown et al. (1994). This document is also available online at <http://tspg.wpafb.af.mil/programs/documents/asctr94.htm>.
9. This algorithm could run on Sun, VAX, or IBM systems with conventional graphics displays, such as the GE Graphicon 700. Additional information about Lorensen's work is available online at <http://www.crd.ge.com/~lorensen/>, as is information about the GE Computer Graphics Systems Program at <http://www.crd.ge.com/esl/cgsp/index.html>.
10. Like the spellcheck program on a word processor, which helps writers avoid typographical errors, the aim of this project is to develop a CAD program that provides "another set of 'eyes' in reviewing images, alerting a radiologist to look closer at specific areas of an image," according to Dr. Martin J. Lipton, chairman of the Radiology Department at the University of Chicago Medical Center, where the CAD technology is being developed. See "GE and EG&G Sign Collaboration Pact to Produce Digital X-Ray Detectors," August 21, 1997, available online at <http://www.ge.com/medical/Media/msxrldd>.
11. Along with Triple I, MAGI was involved in making the film Tron.
12. Other head-mounted display projects using a television camera system were undertaken by Philco in the early 1960s, as discussed by Ellis (1996).
13. Ivan E. Sutherland in "Virtual Reality Before It Had That Name," a videotaped lecture before the Bay Area Computer History Association.
14. See National Research Council (1995), especially Figure 8.4, "The History of Workstation Computation and Memory."
15. Hennessy et al. (1981) published a description of the Stanford MIPS machine, also developed under DARPA sponsorship.
16. Scott Fisher, "Current Status of VR and Entertainment," presentation to the National Research Council's Committee on Virtual Reality Research and Development, Woods Hole, MA, August 1993, as cited in National Research Council (1995).
17. Also see the comparative financial data reported for 1993 through 1997 at <http://www.sgi.com/company_info/investors/annual_report/97/fin_sel_info.html>.
18. This collaboration is described on the Web site of PixelFusion at <http://www.pixelfusion.com>.
19. See McCracken (1997). McCracken also noted: "While there have been incredible advances across many areas of science and technology--the new Craylink architecture for supercomputers, new improvements on the space shuttle, sheep cloning--no advance has been more prolific, more ubiquitous, more wide-reaching than consumer-oriented entertainment developments."
20. The program description is available online at <http://www.stricom.army.mil/STRICOM/PM-ADS/ADSTII/>.
21. Steven Woodcock's biography is available online at <http://www.cris.com/~swoodcoc/stevegameresume.html>. Also see Wall Street Journal Interactive Edition (May 19, 1997) and Coco (1997), available online at <http://www.cgw.com/cgw/Archives/1997/07/07story1.html>.