Review of Relevant Commercial Technologies

In this chapter, the committee describes trends in commercial multimedia building block technologies. The technologies that are described were selected by the committee in the context of a layered architecture that is relevant to generic multimedia applications. In addition, there is discussion of some commercial, system-level applications of multimedia information technologies and some important lessons learned in the application of multimedia technologies in commercial venues. This chapter serves as a technical foundation for recommendations made later in this report.

MULTIMEDIA ARCHITECTURE

The committee configured a generic layered architecture as a basis for identifying building block technologies that are relevant to Army multimedia communications. This multimedia architecture is depicted in Figure 3-1. The purpose of the committee's generic architecture is primarily for discussions of how a set of relevant building block technologies relate to each other. It does not represent a fully fleshed-out technical architecture.¹ The importance of a technical architecture is discussed in Chapters 4 and 6 and in the related recommendations in Chapter 7 of this report.

¹A fully fleshed-out technical architecture would not merely say that certain building block technologies lie in certain levels of the architecture. It would specify, for illustration, the Internet protocol or one or more alternative protocols as the protocols to be used for specific applications; it would specify a graphical user interface (e.g., Motif or one or more alternatives) as the graphical user interface to be used. Where more than one alternative is specified as acceptable for a specific building block, the documentation supporting the technical architecture would explain why there is more than one acceptable alternative, provide guidance regarding which alternatives should be used in which types of applications, and explain how interoperability is to be achieved between applications using different building block alternatives.

[FIGURE 3-1 Generic architecture for multimedia communications. Five layers, bottom to top: I. Physical Platforms; II. System Software; III. Middleware; IV. Generic Applications/Enablers; V. Specific Applications.]

The committee's architecture consists of several generic layers. The naming of the various layers of the architecture explicitly reflects the fact that multimedia technologies are strongly dependent upon software. With the generic architecture as a framework, the committee selected relevant multimedia technologies and overlaid them onto the various layers of the architecture (see Figure 3-2). Generally, but not always, higher layer technologies employ the services of lower layer technologies. Note that the building block technologies are numbered in Figure 3-2 from bottom layers to top layers in order to facilitate later discussions.

The bottom layer of Figure 3-2 (Layer I) includes physical devices, subsystems, and systems (e.g., lightweight portable terminals, storage systems, and communications subsystems and systems to support people on the move). The next level (Layer II) provides protocols for interconnecting subsystems, systems, networks, and gateways; operating systems for managing computational resources; and distributed computing environments for managing distributed software processes. The middleware (Layer III) is built on top of the lower level system software and provides capabilities such as information filtering, database management, and user-friendly multimedia user interfaces. Layer IV provides generic applications/enablers, such as multimedia teleconferencing capabilities and groupware, which can be tailored for specific applications (e.g., simulation) at the top layer (Layer V). Woven through the architecture are network management and security technologies to provide reliable, secure information processing (Layer VI).

[FIGURE 3-2 Building block technologies in the generic multimedia architecture.
Layer V (Specific Applications): 14. Simulation: systems and applications.
Layer IV (Generic Applications/Enablers): 13. Multimedia messaging capabilities; 12. Decision support tools, groupware, multimedia teleconferencing; 11. Multimedia information access capabilities.
Layer III (Middleware): 10. Multimedia information analysis and processing building blocks and middleware services; 9. User-friendly multimedia user interfaces; 8. Multimedia database management systems; 7. Information filtering systems.
Layer II (System Software): 6. Distributed computing environments and operating systems; 5. Protocols and related functionality to support communications.
Layer I (Physical Platforms): 4. Information capture technologies; 3. Communications platforms that support people on the move; 2. Storage systems for multimedia information; 1. Lightweight, rugged, portable appliances and terminals.
Network management and security technologies (Layer VI) are woven through all layers.]

The description of building block technologies in the sections below follows the arrangement of Figure 3-2. The value of doing this will become more apparent in Chapter 4, where it will be shown that these technologies and layered architecture concepts can be used to clarify the recommendations for how the Army should proceed to acquire the technologies to meet its operational needs and functional requirements.

The building block technologies described range from physical technologies, such as hand-held multimedia appliances and physical storage subsystems, to technologies that are embodied in algorithms and software (e.g., speech recognition and distributed computing technologies). They are discussed in the order of the six layers of the generic technical architecture (Figure 3-2). In the discussions of these building block technologies, the focus is on the current status and likely trends in each technology, with particular emphasis on how large the respective commercially driven research and development (R&D) efforts are likely to be. These discussions have been kept brief to avoid unnecessary technical detail and include only what is needed to support the recommendations in Chapter 4.
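Read as a data structure, the Figure 3-2 mapping is simply a table from numbered building blocks to layers. The sketch below is an illustration added here, not part of the report; Python is used only for concreteness.

```python
# Illustrative only: the Figure 3-2 mapping of numbered building block
# technologies to the layers of the committee's generic architecture.
# (Layer VI, network management and security, is woven through all layers.)
LAYERS = {
    "I. Physical Platforms": (1, 2, 3, 4),
    "II. System Software": (5, 6),
    "III. Middleware": (7, 8, 9, 10),
    "IV. Generic Applications/Enablers": (11, 12, 13),
    "V. Specific Applications": (14,),
}

TECHNOLOGIES = {
    1: "Lightweight, rugged, portable appliances and terminals",
    2: "Storage systems for multimedia information",
    3: "Communications platforms that support people on the move",
    4: "Information capture technologies",
    5: "Protocols and related functionality to support communications",
    6: "Distributed computing environments and operating systems",
    7: "Information filtering systems",
    8: "Multimedia database management systems",
    9: "User-friendly multimedia user interfaces",
    10: "Multimedia information analysis and processing building blocks "
        "and middleware services",
    11: "Multimedia information access capabilities",
    12: "Decision support tools, groupware, multimedia teleconferencing",
    13: "Multimedia messaging capabilities",
    14: "Simulation: systems and applications",
}

def layer_of(tech: int) -> str:
    """Return the Figure 3-2 layer that contains a numbered technology."""
    for layer, members in LAYERS.items():
        if tech in members:
            return layer
    raise KeyError(tech)

# Generally (but not always) a technology employs the services of the
# technologies in the layers below it:
print(layer_of(8))   # -> III. Middleware
```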

BUILDING BLOCK TECHNOLOGIES (LAYER I - PHYSICAL PLATFORMS)

Building block technologies discussed under Layer I, Physical Platforms, of the generic architecture (Figure 3-2) include lightweight, rugged, portable appliances and terminals; storage systems for multimedia information; communications platforms that support people on the move; and information capture technologies.

Lightweight, Rugged, Portable Appliances and Terminals

During the first half of this decade, portable laptop and palmtop computers have grown into a multibillion dollar industry (based on sales of these computers in 1994). At the same time, these appliances have shrunk to the point where laptop computers provide the full functionality of bulkier desktop computers, except for the smaller display and keyboard, and weigh less than five pounds. Personal digital assistants (PDAs) for general purpose and special purpose applications are emerging in the marketplace. For example, they are being used by rental car agencies to expedite check-in and by delivery services to track the real-time status of packages in transit. In this section we examine the hardware component technologies (the processing chips, memories, storage devices, displays, and batteries) underlying these portable computing appliances and terminals, to illustrate the rapid technological evolution of these appliances and terminals that is enabled by the underlying technology trends and driven by commercial market opportunities.

Processors

At the heart of every computing device is a central processing unit (CPU), a processing chip that "executes" the instructions the computer is programmed to perform. While CPU capabilities can be characterized by various metrics, the MIPS (millions of instructions per second) that a processor can execute is one commonly accepted measure of performance. In 1994, microprocessor CPUs were announced with a rating of 1,200 MIPS, an impressive figure given that CPUs found in many personal computers produced in the late 1980s and early 1990s were rated in tens of MIPS. Such remarkable, order-of-magnitude increases in processor speeds have been commonplace over the past decade, and there is every indication that processor capabilities will continue to increase in the foreseeable future (Geppert, 1995).

Memory

The increasingly sophisticated operating systems and application programs used in today's computers are often characterized as being "memory hungry." They require more dynamic random access memory (DRAM) than their earlier counterparts in order to operate efficiently. One indication of this trend is the fact that the average amount of DRAM used in a personal computer has grown from 0.5 megabytes in 1984 to 8 megabytes in 1994, a 16-fold increase (Geppert, 1995). Fortunately, this increase in memory needs has been matched by a 20-fold decrease in memory costs over the 1982-1992 period.

Trends in increasing memory density (i.e., the number of bits on a single chip) are expected to continue for the foreseeable future, with a factor-of-four increase every few years. Sixteen-megabit DRAM chips are now common, and memory chip producers are assembling manufacturing facilities for 64-megabit chips. Hitachi and NEC announced early designs for a 1-gigabit memory chip at a conference in February 1995. While such gigabit DRAMs are not expected to be produced in volume until the first years of the next century, a continued dramatic trend in increasing memory densities is evident.
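A quick worked projection shows that the cited factor-of-four steps are consistent with the gigabit time frame above. The three-year generation interval below is an assumption chosen here for illustration; the text says only "every few years."

```python
# Sketch of the DRAM density trend described above: starting from the
# 16-megabit chips common in 1995 and applying the cited factor-of-four
# increase per generation, with an assumed three years per generation.
capacity_mbit = 16           # 16-megabit DRAM, common in 1995
year = 1995
YEARS_PER_STEP = 3           # assumption; the text says "every few years"

while capacity_mbit < 1024:  # 1 gigabit = 1,024 megabits
    capacity_mbit *= 4
    year += YEARS_PER_STEP
    print(f"{year}: {capacity_mbit}-megabit DRAM")
# Prints 1998: 64, 2001: 256, 2004: 1024, placing volume production of
# gigabit parts in "the first years of the next century," as stated.
```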
Permanent Storage: Disk Drives and Flashcards

The miniaturization trends for CPUs and memory noted above are also evident in the area of disk drives, the traditional storage media for data that must be stored for extended periods of time. Disk drives for laptops are now available in so-called PCMCIA cards (Personal Computer Memory Card International Association), which are lightweight, credit-card-sized devices that can be easily plugged into a laptop computer.

Disk drives have mechanical parts and thus require sophisticated technology (such as liquid-bearing motors) for use in laptops, where ruggedness is a concern. A recent competitor to disk drive technology, and one that is based on semiconductor technology containing no movable parts, is the so-called PCMCIA "flashcard." Currently, flashcards have less storage capacity than disk drives and are significantly more expensive, costing about 15 times more than comparable disk drives. However, as flashcard technology is relatively new, prices may fall as the technology matures. Current flashcards can store 16 megabytes of data; 256-megabyte flashcards are expected to be available in 1997.

Display Technology

The need for lightweight and rugged displays for portable computers and the quest for flat screen television sets are the driving forces behind display technology. Laptop display technology can broadly be classified into passive liquid crystal display (LCD) technology and active LCD technology. Active matrix LCD (AMLCD) technology is the more recent of the two and was developed to overcome some of the difficulties associated with passive displays. Companies have spent an estimated $3 billion to commercialize AMLCDs. The cost of a manufacturing facility for AMLCDs is very substantial; it is estimated that a single state-of-the-art AMLCD production line costs more than $100 million (Werner, 1993). In addition to ongoing commercial AMLCD research, there is continuing commercial research on developing portable displays by improving upon passive LCD technology. Another commercial initiative of interest is a push to develop lightweight and durable displays based on plastic LCDs.

The emergence of high definition television (HDTV) as a consumer technology will fuel consumer demand for low-cost, high resolution displays, particularly in the larger sizes usually associated with television viewing. This demand should, in turn, further stimulate investments by commercial display manufacturers in all types of high resolution displays, including flat panel displays using the technologies described above. The large physical size of conventional cathode ray tube (CRT) HDTV receivers (particularly the depth) will make them impractical for many households. The commercial market opportunity available to any company that can create a large flat panel display technology suitable for residential entertainment applications is enormous, and this drives the large investments being made in research on new manufacturing methods and entirely new approaches to creating displays.

The underlying display technologies described above are being incorporated into novel commercial products such as virtual reality glasses or goggles and automotive "heads-up" displays. Virtual reality glasses or goggles use small liquid crystal elements and combinations of lenses and mirrors to create a virtual image that appears to the wearer to be projected in front of the wearer as a large image at a distance of several feet. In some cases, these virtual reality glasses or goggles are used to immerse the viewer in a visual environment that fills the viewer's sensory visual field of view and thus creates the sensation that the viewer is part of the three-dimensional environment perceived. Heads-up displays use projected images to superimpose information on a window or screen through which the viewer can observe other important information (e.g., instrumentation information superimposed on an automobile windshield). These emerging technologies tend to be at the high end of normal mass market consumer price points (e.g., more than several hundred dollars) but are expected to experience rapidly declining prices as mass production and competition take hold.

Power and Batteries

Power consumption and better battery technology are also key technology factors in lightweight, portable computing devices. Many microprocessors, application-specific integrated circuits, and memory chips are now being manufactured with decreased power requirements. Many now run at almost half the power requirement of previous chips, and many processors have a sleep mode in which only a minimum amount of power is consumed.

Batteries are used in a wide variety of applications, ranging from small batteries in toys and hand-held consumer appliances to large storage batteries in automobiles and for backup power systems. Although not as dramatic as the progress that occurs each year in the performance of semiconductor-based components, there has been a steady improvement in battery technology and associated performance over the last several decades. Alkaline batteries have become very popular as primary sources for small appliances. A variety of new battery types, such as nickel-cadmium and lithium-ion batteries, have emerged as rechargeable power sources for appliances such as cellular telephones, cordless telephones, and notebook computers. Recent advances, such as plastic lithium-ion batteries, show promise of increased energy densities, improved safety and environmental friendliness, ruggedness, and low cost. Research continues on fuel cells and alternative large energy-storage batteries for automobiles and industrial applications.

The worldwide market for batteries is $26 billion per year, of which almost 40 percent is for consumer single-use batteries. Because of the economic impact of battery technology on automobiles, telecommunications, consumer electronics, and ultimately trade, the major industrialized countries have been funding battery research via national consortia. As an example, the United States Advanced Battery Consortium, which includes General Motors, Ford, Chrysler, and the U.S. Department of Energy, is a $260 million (total over several years) joint government-industry effort to develop advanced batteries for electric vehicles (Shokoohi, 1995).

Personal Digital Assistants

Personal digital assistants (PDAs) are extremely lightweight and compact hand-held computers. Rather than relying on a keyboard, PDAs use a stylus together with a touch-sensitive screen for input. Long-term data storage is provided via PCMCIA cards. PDAs can be used to maintain a small database (e.g., an address book), write and store notes, and send or receive electronic mail and facsimiles when plugged into a phone jack (or connected via a wireless modem).

PDAs were introduced to the marketplace in 1993. Difficulties with the handwriting recognition system in the first PDA may have limited its widespread acceptance. Other PDAs were introduced into the market in 1994 by many of the major international commercial consumer electronics, computer, and communications companies.

PDA technology is still relatively immature. No standard chip sets or common architectures have yet been adopted in the computer industry. From a user's point of view, PDAs also have a way to go. It has been predicted that it will take 10 years for PDA technology to meet its expectations (Halfhill, 1993).

Epilogue: The Personal Computer Industry

The component hardware in portable laptop and palmtop computers is closely tied to developments in the larger personal computer industry. To indicate the momentum and scale of the resources being invested in this area, this section concludes with some brief figures indicating the current and projected size of the personal computer market.

It is estimated that 16 million personal computers will be sold in the United States in 1995, with 34 million more machines being sold worldwide. The estimated installed base of personal computers in the United States is 80 million (approximately one personal computer for every three people) as of 1994, with 200 million worldwide. It has been estimated that the annual sales of personal computers will surpass 100 million units worldwide by the end of this decade (see Juliessen, 1995).

Storage Systems for Multimedia Information

Storage systems are used to store information that is needed for performing computational tasks, the results of which may be displayed or presented to end users or re-stored for subsequent use. As discussed above, storage systems are used in lightweight, portable appliances and terminals. They are also used as the physical platform for storing information in distributed, networked applications and for archiving information.

Storage systems range from archival storage systems such as magnetic tapes and associated tape drives (whose information may take relatively long to access) to magnetic disks and optical disks (whose information may take milliseconds to access) to semiconductor memory (whose information may take tens of nanoseconds to access).

Multimedia applications stress storage systems because multimedia objects such as images, voice clips, audio clips, and video clips contain many more bytes of information than text. For example, the storage requirements of various information objects can be compared as shown in Table 3-1 and described below.

A typical electronic mail message might contain 256 words. If we assume an average of eight letters per word, including spaces, and 1 byte (eight bits) to represent each letter, then we require 256 x 8 = 2,048 bytes to store one E-mail message. A high resolution image might require 1,000 resolvable picture elements per line and 1,000 lines. This results in 1,000 x 1,000 = 1 million picture elements (pixels), each of which may require 3 bytes of data to represent all of the color and intensity information. The result is a requirement to store 3 million bytes of information to represent one high resolution picture. However, a lower resolution picture requiring only 256 pixels per line and 256 lines would reduce the storage requirement by a factor of 16. Furthermore, by using image compression techniques to reduce redundant information, an additional factor of 20 in storage requirements might be possible. Thus images requiring 1 byte per pixel may be represented with acceptable quality with only 3,277 bytes of storage (for example). Speech can be represented with reasonably good quality at data rates of approximately 16 kilobits per second (kbps). A speech clip lasting 60 seconds and represented at 16 kbps would require 60 x 16,000 bits, or 120,000 bytes of storage.

TABLE 3-1 Comparison of Storage Requirements

  Media                                                           Bytes
  Text (256 words)                                                2,048
  Image (1,000 x 1,000 pixels, 3 bytes per pixel, uncompressed)   3,000,000
  Image (256 x 256 pixels, 1 byte per pixel, 20:1 compression)    3,277
  Speech (60 seconds at 16 kbps)                                  120,000
  Video (60 seconds at 384 kbps)                                  2,880,000
  Video (60 seconds at 1.5 Mbps)                                  11,250,000

NOTE: pixel = picture element; kbps = kilobits per second; Mbps = megabits per second.

Video is the most bandwidth- and storage-intensive media type of commercial interest. Conference quality video can be supported at 64 to 384 kbps using current digital video encoding technologies, with 64 kbps representing marginal quality for desktop teleconferencing applications and 384 kbps being suitable for large screen teleconferencing or high quality desktop teleconferencing. Data rates of between 1.5 megabits per second (Mbps) and 6 Mbps are suitable for entertainment quality television comparable to today's standards for home entertainment systems, while data rates of up to roughly 40 Mbps are needed for HDTV. Some examples of storage requirements for 60-second video clips are given in Table 3-1.

There is intense competition in the computer industry to create improved storage systems, which (a) increase the amount of storage per unit area or per unit volume; (b) reduce power requirements; (c) reduce the time required to access stored information, in terms of both the time it takes to begin accessing information and the rate at which information can be delivered from the storage system; and (d) have increased ruggedness.

With the increasing activity in commercial telecommunications focused on what is called "video-on-demand" (i.e., the ability to remotely access stored video information with individualized interactive control), there has been a surge of commercial efforts to develop the multimedia servers (storage systems) needed to store hundreds of full-length videos that can be simultaneously and independently accessed by thousands of users. A variety of architectures, ranging from arrays of small disks and inexpensive multimedia-capable computers operating in parallel to supercomputer-like approaches, are being tested in commercial technology and market trials. Commercial and industrial server and storage systems for medical image archiving, electronic home shopping, educational applications, interactive games, and other applications that involve multimedia information are being driven by a perceived market measured in billions of dollars per year in server and storage system sales. These developments include the compact, rugged multimedia storage systems needed by notebook computers (discussed above), personal compact disk players, set-top boxes (as described under Residential Information Systems later in this chapter), and other consumer appliances.
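The entries in Table 3-1 follow directly from the assumptions stated above. A minimal sketch reproducing the arithmetic (note that 256 x 256 / 20 is 3,276.8, which the report rounds to 3,277):

```python
# Reproducing the Table 3-1 arithmetic from the assumptions in the text.
BITS_PER_BYTE = 8

# Text: 256 words x 8 letters (including spaces) x 1 byte per letter.
text_bytes = 256 * 8                                  # 2,048

# Uncompressed image: 1,000 x 1,000 pixels x 3 bytes per pixel.
image_hi_bytes = 1_000 * 1_000 * 3                    # 3,000,000

# Lower resolution image: 256 x 256 pixels x 1 byte per pixel,
# followed by 20:1 compression (integer division gives 3,276;
# the report rounds to 3,277).
image_lo_bytes = 256 * 256 * 1 // 20

# Speech: 60 seconds at 16 kbps.
speech_bytes = 60 * 16_000 // BITS_PER_BYTE           # 120,000

# Video: 60 seconds at 384 kbps and at 1.5 Mbps.
video_conf_bytes = 60 * 384_000 // BITS_PER_BYTE      # 2,880,000
video_tv_bytes = 60 * 1_500_000 // BITS_PER_BYTE      # 11,250,000

for label, n in [
    ("text (256 words)", text_bytes),
    ("image, uncompressed", image_hi_bytes),
    ("image, compressed", image_lo_bytes),
    ("speech, 60 s", speech_bytes),
    ("video, 60 s at 384 kbps", video_conf_bytes),
    ("video, 60 s at 1.5 Mbps", video_tv_bytes),
]:
    print(f"{label:>26}: {n:>12,} bytes")
```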

Communications Platforms That Support People on the Move

"Wireless personal communications" is a commercial telecommunications industry term for networking services and associated applications that support people on the move (IEEE, 1995). Cordless telephones, cellular telephony, and paging systems are the most popular current manifestations of wireless personal communication systems.

Cordless telephony started as a low-cost, low-power, short-range (a few hundred feet) home appliance intended to eliminate the tether to the telephone network. The concept has blossomed into cordless telephones that people can carry away from home and operate anywhere within reach of a compatible base station. The CT2 Common Air Interface is a standard used in the United Kingdom and Canada and in Hong Kong and other parts of Southeast Asia. These systems have been optimized for cost, and the handsets are extremely light and portable. The Digital European Cordless Telecommunications system provides an advanced design that supports higher user densities. It uses small "picocells" and resembles a cellular system as much as it does a cordless system. It supports data transmission as well as voice.

The Personal Handyphone System is specific to Japan. The system is designed to provide a single, small, portable phone that can be used at home or in the office (already launched) or as a public access device (to be launched this year). The system will support fax as well as voice. The potential subscriber base is estimated at 5.5 million in 1998, with up to 39 million by 2010 (Padgett et al., 1995).

In the United States, Bell Communications Research has developed an air interface for a Wireless Access Communications System (WACS). By combining parts of the Personal Handyphone System with WACS, the proposal is now called Personal Access Communications Services. The goal is to provide wireless access to the wireline networks of local exchange carriers. Base stations are expected to be shoe-box-sized enclosures mounted on telephone poles about 600 meters apart.

In contrast to cordless systems, which were developed for people walking around, cellular systems were originally intended for vehicle applications. The first generation systems, called Advanced Mobile Phone Service (AMPS), use analog modulation for speech and base stations with coverage of 10 km or less (in some cases as little as 0.5 km). These systems have been widely deployed (see the discussion of cellular systems elsewhere in this report). As the cost of digital electronics has continued to drop and low-rate digital speech coding techniques have continued to improve, digital versions of cellular have begun to appear. The European Global System for Mobile Communications (GSM) is expected to improve quality over systems like AMPS and to provide pan-European roaming. The GSM standard also includes specifications for synchronous and asynchronous data services at 9.6, 4.8, and 2.4 kbps.

In the United States, the Electronic Industries Association and the Telecommunications Industry Association adopted a standard called IS-54, where IS stands for Interim Standard. IS-54 equipment is operational in most of the top U.S. cellular markets, and customer adoption is increasing. A second Interim Standard, IS-95, is based on a different frequency sharing scheme called code division multiple access (CDMA), which was originally developed to increase jamming resistance for military applications. This is a relatively new approach, with the first systems expected to be deployed in California this year.

Manufacturers of cellular hand-held units must address two fundamental markets. The first, sometimes called the "road warrior," is a user whose livelihood actually depends on phone contacts made while on the move. Sales people are an obvious example. These users demand high quality voice transmission and reliable connections. They are also the leading buyers of premium services and features, and they are relatively insensitive to prices. The second major group is the "casual user." These users are more concerned about price and are more tolerant of lower voice quality or occasional dropped connections. Manufacturers sell far more casual-user phones than road-warrior phones, but the latter are very important to carriers because they generate many more minutes of usage. The net result is that manufacturers are driven to produce both a simple, high-volume, low-cost product line and a lower-volume, higher-cost, feature-rich line of handsets.

In contrast to voice, there are fewer systems and standards (so far) for wireless data services. Wireless local area networks (LANs) are usually privately owned and operated and cover very small areas. These are generally intended for high data rates (greater than 1 Mbps) and operate in unlicensed spectrum.

Standards for wireless LANs are being developed under the Institute of Electrical and Electronics Engineers (IEEE) 802.11 committee in the United States and under the European Technical Standards Institute RES10 committee in Europe (HIPERLAN). Wireless LANs can be used to form ad hoc and quasi-static networks, although full roaming mobility is not yet available. Generally, the subscriber units require a hub with which to communicate, although there are exceptions. Some operate at data rates of several megabits per second, approaching that of a wired Ethernet. Many are sized to fit on a PCMCIA card.

The Advanced Radio Data Information Service (ARDIS) in the United States covers much wider areas using specialized mobile radio (SMR) frequencies in the 800 to 900 MHz range. There are over 50,000 subscribers today, and service is offered in over 400 metropolitan areas. The prevailing data rate is 4.8 kbps, with 19.2 kbps available in some areas. Cellular digital packet data is another approach that reuses existing analog cellular networks. It is a transparent overlay to AMPS systems, taking advantage of idle time on analog channels. The European GSM infrastructure, already digital, is developing the General Packet Radio Service to handle data.

Satellite communications systems allow for a rapid expansion of communications infrastructure and provide connectivity to isolated locations. Commercial satellite services are designed to provide coverage to predetermined geographic areas, and their ability to redirect or expand this capacity is very limited.

More Ku-band (14/12 GHz) systems are being deployed to augment those at C-band (6/4 GHz). Also, Ka-band (30/20 GHz) has been set aside for commercial satellite communications, and equipment for this band is in the experimental stage in the United States. Utilization of Ka-band began in Japan and Italy in 1990. These trends imply that significantly more capacity will be available in orbit (IEEE, 1990; Manders and Wu, 1991). Very Small Aperture Terminal systems are being used to provide two-way data services (T1 rates, 1.544 Mbps) to small terminals. Direct Broadcast Services (DBS) now transmit data at rates of tens of megabits per second to small terminals (less than about 2 feet in size). However, both of these latter services are available only in selected coverage areas.

There are several commercial satellite systems for personal communications currently in planning and development phases; examples are IRIDIUM, Odyssey, Globalstar, and Inmarsat-P. These systems are intended to support users anywhere in the world. The Teledesic system of low-orbiting satellites would provide a very large number of medium-to-high data rate links (several megabits per second) to small terminals. These emerging satellite systems will represent more than $10 billion in development and construction (including launch) costs when deployed (CDMA, 1994).

In addition to providing wireless links to end users, communications platforms that support people on the move must include backbone and feeder transmission and switching facilities that interconnect the wireless access nodes. Terrestrial nodes (e.g., cell sites) are typically connected with cable-based fiber optic or copper cable facilities and occasionally with point-to-point microwave facilities. The cable-based facilities that have been used to date have relatively low bit-error rates. The point-to-point microwave facilities used in commercial applications generally have relatively low bit-error rates as well (10⁻⁶ or lower) (Ivanek, 1989). Recently, there has been a great deal of commercial interest in the use of upgraded cable television systems to interconnect small cell sites in emerging personal communications networks. Since the cable systems carry combinations of frequency-division-multiplexed analog television, digitized television, and other digital information streams, and because of the characteristics of the existing cable facilities, the digital error rates on these systems are expected to be somewhat higher than in other commercial facilities. As a result, certain protocols, like the asynchronous transfer mode (ATM) protocol, that were designed for low bit-error rate transmission facilities will not work properly unless steps are taken to encapsulate them inside an error-tolerant transmission format. Steps are being taken by a number of standards bodies to create a transmission format that will allow ATM to be carried over facilities with relatively high error rates, like cable television facilities. This is considered a high priority issue because of the investments that existing cable system operators have in their current facilities and because of the announced plans of some local exchange carriers to deploy new multipurpose broadband networks based on combinations of fiber optic and coaxial cable. The ATM protocol is discussed at length later in this chapter under Protocols and Related Functionality to Support Communications.

Information Capture Technologies

Information capture technologies interface to the physical world to capture (sense) sounds, images, moving scenes, etc., electronically and to convert the captured information to digital forms that can be used by automated information systems.

In the past ten years, information capture technologies have become consumer products in the form of sophisticated, but moderately low-priced and highly usable, camcorders. These commercial devices can operate in a wide range of light levels under control of their internal microprocessor-based systems and can provide features such as zooming and integrated sound capture (microphones and associated electronics).

These devices can also communicate with videocassette recorders via interfaces provided for that purpose, and it is anticipated that next-generation camcorders will appear shortly with their video and audio outputs encoded in standard digital formats (e.g., the Motion Picture Experts Group (MPEG) MPEG-2 standard). Camcorders are generally priced in the range of several hundred to a thousand dollars, although advances in mass production techniques and the desirability of reaching price points below $300 to address mass consumer markets will lead to lower prices in the future. The emergence of desktop multimedia teleconferencing, including video and image capture, has led to the appearance of low-cost charge-coupled device (CCD) video and still image cameras on the commercial market in the $50 price range.

Associated with the emergence of these consumer devices, and with the continued demand for sophisticated multimedia programming by movie and television audiences as well as users of multimedia computers (e.g., games), is the continuing evolution of video processing systems that can be used to perform such functions as editing video and still images and combining video with superimposed text and images. Some of these are transitioning into consumer products as increasingly powerful computer technologies allow these functions to be performed on general purpose personal computers with appropriate software.

Another example of a sensor that has found widespread commercial use is the infrared sensor technology used in such things as motion detectors. These devices are available commercially in products costing only a few dollars and are used by consumers in applications including home security systems and automated driveway lighting.

Analog-to-digital conversion and other specialized sampling and data conversion tasks associated with sensors can now be performed easily with one or a few low-cost commercial chips, and associated signal processing can be accomplished with commercial single-chip digital signal processors. This development means that analog multimedia information such as audio, images, and video can be captured and converted to digital form easily and cheaply using commercial technologies. For example, IBM recently announced low-cost encoder ($700) and decoder ($30) chip sets that implement the MPEG-2 video compression coding standard.

BUILDING BLOCK TECHNOLOGIES (LAYER II - SYSTEM SOFTWARE)

Building block technologies discussed under Layer II, System Software, of the generic architecture (Figure 3-2) include (a) protocols and related functionality to support communications, and (b) distributed computing environments and operating systems.

Protocols and Related Functionality to Support Communications

Network protocols are the system-level standards for formatting information, controlling information flows, reserving communications paths, authorizing access to communications resources, managing congestion, recovering from communication or equipment failures, determining routing, and other processing tasks needed to transfer or access data between remote end systems (e.g., users) in a network. Since even a brief overview of network protocols could fill an entire report, we focus here on two leading protocol architectures: the Internet protocol suite and the emerging ATM high-speed network standards.
For each, we begin with a brief history and architectural overview and then consider the types of network services they support, with an emphasis on multimedia communication. Also, a "technology forecast" is provided for each. This section concludes with a discussion of software support for mobility.

The Internet Protocol Suite

The Internet protocol suite is the result of 25 years of evolutionary development, much of which has been sponsored by the Advanced Research Projects Agency (ARPA). In the Internet architecture, there is a clear distinction between the services to be provided by the network-level infrastructure and the services to be built on top of these network services by the communicating end systems. The division of functionality between the network and the end systems has proven to be a remarkably robust and prescient one.

At the heart of the Internet architecture is a design philosophy in which the network-level infrastructure provides only a simple, minimal packet delivery service. The Internet Protocol (IP) and the associated route selection protocols that provide this network-level service make no guarantees that (a) the packets sent by one end system will be received by the other end system, or (b) the network will deliver transmitted packets to the destination in the order in which they were sent. IP does not recognize any notion of a "connection" between two endpoints.

Any additional functionality (e.g., reliable data transfer) required by an end application is provided by "end-to-end" protocols, which execute on the end systems.

For example, the Transmission Control Protocol (TCP) is an end-to-end service that uses IP to provide in-order, reliable delivery of data between two end systems. From a service standpoint, the Internet protocol suite provides for both reliable (via TCP) and unreliable (via the user datagram protocol (UDP)) data transfer between two end systems. Multicasting of data (i.e., the copying and delivery of a single data packet to multiple receivers) is also supported. All Internet applications (e.g., file transfer protocol, remote login (telnet), E-mail, World Wide Web (WWW)) are built upon either TCP or UDP data transfer.

The Internet protocols provide no explicit support for real-time communication (e.g., interactive audio or video) or for applications that send information at a constant bit rate and require the stream of bits to be delivered with very low delay variability. It should be noted, however, that several efforts are under way in the Internet Engineering Task Force (IETF), the Internet standards body, to develop protocols for supporting such services. Notable among these efforts is the RSVP resource reservation protocol (Zhang et al., 1993). It should also be noted that several efforts have demonstrated the possibility of multimedia teleconferencing over the current Internet (Casner and Deering, 1992; Macedonia, 1994). In principle, even the current Internet protocols can provide for teleconferencing services, as long as the network's traffic load remains low.

Since there is increasing interest in extending the use of the Internet to applications that require guaranteed packet delivery within a specified range of delays, such as real-time interactive multimedia teleconferencing, it is anticipated that the next-generation Internet protocol (IP version 6) will include mechanisms that enable network resources to be reserved and packets to be assigned priorities for transport through intermediate routers. Thus the Internet protocol suite is moving from its traditional connectionless "best effort" delivery approach toward a set of approaches that includes the equivalent of connection-oriented transport.

Internet protocols were initially developed for relatively low speed (e.g., 56 kbps) communication links. However, they have been shown to be able to handle traffic at rates of hundreds of megabits per second (Borman, 1989). The current Internet protocols were developed for fixed-terminal (i.e., nonmobile) network users; this is reflected in the design of IP (and to a lesser extent, TCP and UDP). However, there has been considerable recent activity and interest in developing a new IP that will support mobility. Several efforts are currently under way in the IETF to develop Internet standards in support of mobility.

Nineteen ninety-four can well be characterized as the year that the Internet moved into the public consciousness. The reach of the Internet and the installed base are indeed impressive. There were 3,864,000 host computers in 154 countries connected to the Internet as of December 1994; almost 100 billion packets passed through a single National Science Foundation (NSF) net site in a single month (Bell, 1995). Industry and commerce are now relying on the Internet for services.
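The TCP/UDP distinction described above is visible directly at the socket interface. A minimal sketch, in modern Python for concreteness (the loopback address and port are hypothetical; both endpoints live in this one script so that it can be run as-is):

```python
import socket

# Hypothetical port on the local host, chosen for illustration.
HOST, PORT = "127.0.0.1", 9999

# --- TCP: connection-oriented, reliable, in-order byte stream over IP ---
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))          # explicit connection setup (handshake)
conn, _ = server.accept()
tcp.sendall(b"delivered reliably and in order")
print("TCP received:", conn.recv(1024))
tcp.close(); conn.close(); server.close()

# --- UDP: connectionless datagrams, matching IP's "best effort" model ---
# In general a datagram may be lost or reordered; no connection exists.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"no delivery or ordering guarantee", (HOST, PORT))
print("UDP received:", receiver.recvfrom(1024)[0])
udp.close(); receiver.close()
```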
Security is also now a serious and legitimate concern for the Internet. Commercial needs and the increasing reliance on the Internet as part of our national infrastructure are fueling efforts in the area of network security, both from the network standpoint (i.e., protecting the network from tampering) and from the end-user point of view (e.g., secure credit card transactions over the Internet and so-called "digital cash").

Asynchronous Transfer Mode (ATM) Networks

The development of ATM networks represents a meeting of the minds of the traditionally circuit-oriented (e.g., telephony) and packet-oriented (e.g., computer-related) communities. The push for ATM networks arose because of the recognized need to support high data transfer rates and a diverse set of applications (e.g., ranging from file transfer to real-time interactive multiparty videoconferencing) in a common integrated network.

Within an ATM network, data are carried in fixed-length packets known as cells. The cell format and associated network switching methods have been designed to facilitate very high speed operation (e.g., at 155 Mbps and above). The ATM architecture has been designed specifically to support a multitude of applications, ranging from bursty data to constant-bit-rate, real-time video. Consequently, ATM networks provide four communication service classes: (1) constant-bit-rate, real-time service (audio and video, roughly equivalent to today's telephone service); (2) variable-bit-rate, real-time service (audio or video); (3) connection-oriented data (roughly equivalent to TCP in the Internet); and (4) connectionless data (roughly equivalent to UDP in the Internet). These four services can be offered over a single common network.

ATM has been designed to operate over transmission facilities that have relatively low bit-error rates (10⁻⁶ or less). Recently, consideration has been given to the transport of ATM over transmission facilities with relatively high bit-error rates, such as upgraded cable television facilities and certain types of wireless links. Standards activities have begun in the ATM Forum and the IEEE 802 Standards Committee to produce a standard method for encapsulating ATM cells, along with error-correcting mechanisms, for transport over facilities with relatively high bit-error rates.
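For concreteness, an ATM cell is 53 bytes long (a 5-byte header plus a 48-byte payload); these are standard ATM figures supplied here for illustration, since the text above says only "fixed-length packets known as cells." A minimal sketch of segmenting a message into cells:

```python
# Standard ATM cell dimensions (not stated in the text above):
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES   # 53 bytes per cell

def segment(data: bytes) -> list:
    """Split a message into fixed 48-byte cell payloads, padding the last."""
    cells = []
    for i in range(0, len(data), PAYLOAD_BYTES):
        chunk = data[i:i + PAYLOAD_BYTES]
        cells.append(chunk.ljust(PAYLOAD_BYTES, b"\x00"))  # zero-pad tail
    return cells

print(len(segment(b"x" * 1000)), "cells for a 1,000-byte message")  # -> 21

# At the 155 Mbps rate mentioned above, a fully loaded link carries about
# 155_000_000 / (CELL_BYTES * 8) ~= 365,000 cells per second, which is why
# the fixed, small cell format suits very high speed hardware switching.
```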

Before 1994, ATM products and prototypes were being supplied by only a handful of vendors and installed primarily in research institutions and government labs. In 1994, however, more than two dozen vendors brought ATM switching products to market. Membership in the ATM Forum (the standards body most aggressively pushing ATM forward) has grown considerably. Nonetheless, ATM is a new technology that has not yet been demonstrated on nearly the same scale as the Internet protocols (ATM Forum, 1995).

Two visions exist regarding the use of Internet technology and ATM networks in the emerging national information infrastructure. In one view, interconnected ATM networks will provide seamless, uniform, end-to-end transport of ATM cells between any two endpoints. In the other view, ATM wide area networks will be used to connect islands of local area or campus networks, with the individual islands running Internet (or other) communication protocols. Both views are likely to be correct, in that applications exist in which the solutions implied by both views are viable. The balance between the two approaches will be determined in the emerging commercial marketplace, more by the speed at which commercial products emerge that meet user needs than by any fundamental advantage of one approach over the other.

Software Support for Mobility

One can consider three phases in the evolution toward the support of mobility in commercial telecommunications networks. The first, associated with traditional "plain old telephone service" (POTS), supports essentially no mobility at all. In this case the terminal equipment has a fixed relationship to the network to which it is attached, and the user has a fixed relationship to the terminal (i.e., one attempts to call a specific person by dialing a number associated with a specific physical location). There is limited support for mobility via extension telephones and cordless telephones, but these do not require any network-based intelligence. The second phase, the familiar case of today's cellular services, allows one of the relationships to be dynamic: namely, the physical location of the telephone. However, the relationship between user and terminal equipment remains fixed (i.e., a particular cellular telephone and associated telephone number are still used to attempt to reach a specific user). In the third phase, the relationship between the user and the terminal is also allowed to vary, and it is the user, rather than a specific terminal, to whom calls are directed. This is called "personal mobility." The latter two phases require that software-controlled functionality be added to both the terminal equipment and the network.

Smart-card technology offers still further flexibility for personal mobility. Smart cards contain user-specific data that turn a terminal into a de facto peripheral for the card. With the smart card, the terminal takes on the personality and behavior that the user wishes (and has paid for), including feature sets, custom dialing control, and authentication passwords.

For systems in phase two, represented by today's cellular telephony, the approach to locating users (i.e., their moving terminals) who move from place to place is to maintain a system of home and visited databases called Home Location Registers (HLRs) and Visited Location Registers (VLRs). The HLR is the place to which an incoming call for a roaming user is initially directed, based on the user's telephone number. The HLR will contain an entry that shows the VLR associated with the network in which the user is currently known to be located. This VLR knows that the user is in its domain because the user's telephone has communicated recently with one of its cellular nodes. The call will be forwarded to that network, where the VLR will arrange to have the call delivered to the proper roaming user, based on its stored information regarding the user's current location in its associated network. The VLRs will also notify the HLRs when roaming users move in and out of their domains.
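The HLR/VLR lookup just described reduces to two chained database queries. A minimal sketch, with all numbers and network names invented for illustration:

```python
# Sketch of the HLR/VLR call-delivery lookup described above.
# All numbers and network names are invented for illustration.

# Home Location Register: maps a user's number to the visited network
# (i.e., the VLR) where the user was last known to be located.
HLR = {"+1-555-0100": "network-B"}

# Visited Location Registers: each visited network records the current
# cell of roaming users whose phones have recently contacted its nodes.
VLRS = {"network-B": {"+1-555-0100": "cell-17"}}

def route_call(number: str) -> str:
    """Deliver an incoming call: home database lookup, then visited."""
    visited_network = HLR[number]            # step 1: ask the HLR
    cell = VLRS[visited_network][number]     # step 2: ask that VLR
    return f"forward call for {number} to {visited_network}, {cell}"

def register(number: str, network: str, cell: str) -> None:
    """A roaming phone contacts a new network: VLR learns, HLR is notified."""
    VLRS.setdefault(network, {})[number] = cell   # VLR records location
    HLR[number] = network                         # VLR notifies the HLR

print(route_call("+1-555-0100"))
register("+1-555-0100", "network-C", "cell-3")    # user moves on
print(route_call("+1-555-0100"))
```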
The HLR is the place to which an incoming call for a roaming user is initially directed, based on the user's telephone number. The HLR will contain an entry that shows the VLR associated with the network in which the user is currently known to be located. This VLR knows that the user is in its domain because the user's telephone has communicated recently with one of its cellular nodes. The call will be forwarded to that network, where the VLR will arrange to have the call delivered to the proper roaming user, based on its stored information regarding the user's current location in its associated network. The VLRs will also notify the HLRs when roaming users move in and out of their domains.

To support full personal mobility, including smart cards, commercial network-based software will be upgraded and supplemented to include functionality not required in traditional fixed-terminal networks. In particular, fast inquiry and response database systems will be deployed to (a) interrogate terminal units and databases for user authentication, (b) interrogate databases for number translations, and (c) transfer service profile information from one information database to another as users and their terminals travel from place to place. Networks will be (re)programmed to recognize mobility-related numbers and respond to personal feature profiles associated with individual users.
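The two-step HLR/VLR lookup described above can be sketched in a few lines of C. The tables, telephone numbers, and function names below are invented for illustration; a real cellular network distributes these registers across signaling databases rather than holding them in in-memory arrays.

    #include <stdio.h>
    #include <string.h>

    /* Toy HLR: maps a subscriber's number to the VLR currently serving it. */
    struct hlr_entry { const char *subscriber; const char *serving_vlr; };
    static struct hlr_entry hlr[] = {
        { "555-0100", "vlr-east" },
        { "555-0199", "vlr-west" },
    };

    /* Toy VLR: maps a subscriber to the cell where it last registered. */
    struct vlr_entry { const char *vlr; const char *subscriber; const char *cell; };
    static struct vlr_entry vlr[] = {
        { "vlr-east", "555-0100", "cell-17" },
        { "vlr-west", "555-0199", "cell-03" },
    };

    static void route_call(const char *dialed)
    {
        /* Step 1: the home register says which visited network to try. */
        for (size_t i = 0; i < sizeof hlr / sizeof hlr[0]; i++) {
            if (strcmp(hlr[i].subscriber, dialed) != 0) continue;
            /* Step 2: the visited register knows the current cell. */
            for (size_t j = 0; j < sizeof vlr / sizeof vlr[0]; j++) {
                if (strcmp(vlr[j].vlr, hlr[i].serving_vlr) == 0 &&
                    strcmp(vlr[j].subscriber, dialed) == 0) {
                    printf("deliver call to %s via %s at %s\n",
                           dialed, vlr[j].vlr, vlr[j].cell);
                    return;
                }
            }
        }
        printf("subscriber %s not reachable\n", dialed);
    }

    int main(void) { route_call("555-0100"); return 0; }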

Distributed Computing Environments and Operating Systems

Operating Systems Development

Operating systems are software systems that manage the hardware resources of a computer system to provide services needed by applications. They evolved from earlier input-output control systems, which were loaded into early computer systems before an application began to run; this was typically done with a deck of punch cards placed immediately ahead of the cards used for the application. It became clear that there was a common set of functions needed by many applications, and this gave rise to early operating systems.

Early machines in many cases were dedicated to a single use. Later machines were multipurpose, but the input-output control system scheme made for sequential execution of jobs, one after another. A major advance came from the idea of multiprogramming, which took advantage of the fact that the expensive processor was often wasted as slow input and output devices (such as printers, card punches, and tape machines) were accessed by an application. Multiprogramming used the idle periods of the processor to perform other computational work until the input and output were completed. A variety of multiprogramming techniques were developed, with fixed and variable numbers of tasks, priorities, etc. Timesharing is a multiprogramming technique that allows interactive access to the multiprogrammed resources.

Multiprogramming also involves sharing of processor memory resources. Modern multiprogramming technologies have almost uniformly employed the technique called "demand paging." Demand paging divides the storage of the machine into fixed-size units called pages. Application storage is also divided up into page-sized address ranges, which are mapped to the machine's storage through a technique known as virtual memory; a minimal sketch of this translation appears below. All commercial operating systems for workstation or larger computers (e.g., MVS, UNIX and its derivatives) now incorporate these techniques. Smaller systems, such as personal computers, have been evolving in the same fashion; the popular Windows application support software is essentially a single-user multiprogramming system overlaid on an extremely simplistic device-management core (MS-DOS). Newer generations of personal computer software will support more advanced memory management techniques, such as demand-paged virtual memory. The lack of modern memory management technology in the popular MS-DOS software has been a major limitation in using these commodity machines for more complex applications and a major source of failures. These difficulties have provided opportunities for alternative personal computer operating systems (e.g., OS/2), as well as penetration of the personal computer market by UNIX technology.
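The following C fragment sketches the address translation at the heart of demand paging: a virtual address is split into a page number and an offset, a page table maps resident pages to physical frames, and a reference to a non-resident page triggers a fault that loads it. The page size, table size, and fault handler here are simplified inventions for illustration; real operating systems do this work in hardware-assisted kernel code.

    #include <stdio.h>
    #include <stdbool.h>

    #define PAGE_SIZE  4096   /* illustrative page size in bytes */
    #define NUM_PAGES  16     /* illustrative size of one address space */

    struct pte { bool present; unsigned frame; };   /* one page-table entry */
    static struct pte page_table[NUM_PAGES];

    /* On a real system the fault handler reads the page in from disk;
       here assigning the next free frame stands in for that work. */
    static unsigned handle_page_fault(unsigned page)
    {
        static unsigned next_frame = 0;
        page_table[page].present = true;
        page_table[page].frame = next_frame++;
        printf("page fault: loaded page %u into frame %u\n",
               page, page_table[page].frame);
        return page_table[page].frame;
    }

    static unsigned translate(unsigned vaddr)
    {
        unsigned page   = vaddr / PAGE_SIZE;
        unsigned offset = vaddr % PAGE_SIZE;
        unsigned frame  = page_table[page].present
                            ? page_table[page].frame
                            : handle_page_fault(page);
        return frame * PAGE_SIZE + offset;
    }

    int main(void)
    {
        printf("vaddr 5000 -> paddr %u\n", translate(5000)); /* faults, then maps */
        printf("vaddr 5004 -> paddr %u\n", translate(5004)); /* hits the same page */
        return 0;
    }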
A major challenge remaining for operating systems is the efficient processing of multimedia data (Nahrstedt and Steinmetz, 1995). Multiprogramming systems have embedded assumptions about scheduling jobs that they inherited from their predecessor technologies. For example, they often schedule job execution in a "round-robin" fashion to preserve a fair allocation of processing resources between jobs. This scheduling creates a "virtual time" model in which each job's real processing time (wall-clock time) is dilated in proportion to the amount of competition for processing resources. Unfortunately, continuous media such as voice and video are characterized by their real-time requirements; 30 frames per second of video are required to preserve perceptual smoothness in spite of competing demands for resources. These real-time constraints suggest that the requirements of multiprogramming must be balanced against the application requirements for effective multimedia support in operating systems (Nahrstedt and Steinmetz, 1995). Substantial commercial R&D efforts are under way to improve the support for multimedia applications in commercial operating systems. Examples include the XMedia toolkit from DEC and the Hewlett-Packard MPower toolset.

Interoperability and Distributed Computing Environments

In the 1970s and before, computer programs were usually written for only one hardware and software platform at a time. "Porting" a large application to another platform was a difficult task, rarely undertaken. The UNIX operating system, which blossomed in the 1970s, owes much of its popularity to the fact that it was explicitly designed to run on multiple platforms and was closely wedded to the C programming language, which was also designed for portability. In the 1980s, portability was among the most desirable of attributes sought for computer applications.

In the 1990s, portability remains an important issue, but interoperability (the ability of computer applications developed by different vendors to cooperate on the same or different computing endeavors, and to share data between such applications) across software and hardware platforms has become the more sought-after attribute. Interoperability and the improving price-performance trend of small computer systems have led to intense interest in what is known as a distributed computing environment: a set of standard interfaces, software components, and tools that permit computer applications developed by different vendors to cooperate on the same or different computing environments interconnected by appropriate communication links.

In the past, incompatible software applications have proliferated. This has occurred because of (a) mixing programs written in different languages, and (b) the use of different ways to communicate between programs running on the same or different computers that are connected by a communications system. Cooperation across incompatible platforms was so difficult to achieve that applications were commonly designed with no intention of ever cooperating with other applications, thus further exacerbating the problem.

Today, there are two major commercial approaches to distributed computing: client-server architectures and the Distributed Computing Environment (DCE) from the Open Software Foundation (OSF).

In a client-server architecture, client applications communicate with a server application in a structured manner in order to obtain a service from that server. In the Internet, for example, a WWW client might request that a multimedia document be sent to it by the server which stores, and controls access to, that document (a minimal sketch of such an exchange appears below). Client-server solutions are typically used to build database applications; processing is distributed across a network of multiple clients using personal computers or workstations and one or more server computers where the database is hosted. Microsoft Visual Basic and Powersoft PowerBuilder are examples of commercial products that use the client-server architecture. Both of these products are limited to use on personal computers and employ proprietary languages, but they can communicate with servers that recognize SQL, a standard database query language.

The Open Software Foundation Distributed Computing Environment consists of tools and software components that can be used to build distributed applications in any language and on multiple-vendor computer systems. The DCE can be used to implement client-server architectures that are not limited to database applications. Digital Equipment Corporation's Distributed Computing Environment is the best-known example of a DCE product.

Standards for distributed computing specified by corporate alliances and based on object-oriented technology are just now emerging. Examples are the Object Management Group (OMG) Common Object Request Broker Architecture (CORBA), the IBM System Object Model, and Microsoft Object Linking and Embedding (OLE). These standards offer the promise of a market of interchangeable software components (i.e., not just whole applications), residing almost anywhere, that can be mixed and matched as needed. Competition between these alliances and uncertainties concerning the business case for software components remain as obstacles to the fulfillment of the promise.
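The client side of the WWW exchange mentioned above can be sketched with ordinary POSIX sockets in C: the client connects to a server, sends a structured request, and reads back the document. The host name and the HTTP/1.0-style request line are illustrative stand-ins, not a complete implementation of the protocol.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Resolve an illustrative server name to an address. */
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("www.example.org", "80", &hints, &res) != 0) {
            fprintf(stderr, "cannot resolve server\n");
            return 1;
        }

        /* Connect to the server application. */
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            return 1;
        }

        /* Send a structured request for a document... */
        const char *req = "GET /document.html HTTP/1.0\r\n\r\n";
        write(fd, req, strlen(req));

        /* ...and read whatever the server returns. */
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }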
BUILDING BLOCK TECHNOLOGIES (LAYER III - MIDDLEWARE)

Building block technologies discussed under Layer III - Middleware of the generic architecture (Figure 3-2) include information filtering systems; multimedia database management systems; user-friendly multimedia user interfaces (e.g., speech, graphical user interfaces (GUIs)); and multimedia information analysis and processing building blocks and middleware services.

Information Filtering Systems

One of the challenges of the information age is to use computing technologies to overcome the problems of information overload. More and more information, including multimedia information containing graphics, photographs, and audio and video clips, is being stored in digital form and made accessible over digital telecommunications networks. The question arises, "How will people find what might be most useful to them and sort through it all to obtain potentially useful information that is directed at them by others?"

While many challenges remain, there has been progress in using computing technologies to manage information overload. In fact, the astounding rate of progress in the power of low- and moderately priced computers has made it possible to do such things as index every word in a document and therefore to be able to perform such tasks as "Find and display every place in this document where the words 'video clip' or the words 'audio clip' appear." Likewise, directories have been created on the Internet WWW that allow one to ask such questions as "Tell me the location of every document that has the words 'battlefield digitization' or the words 'digitizing the battlefield'." Using word spotting and more general speech recognition technologies, it is becoming increasingly possible (but still very difficult) to index voice clips. Using pattern recognition technologies, it is becoming possible (but still very difficult) to automatically index images and photographs. However, because these technologies are still somewhat primitive, indexing of audio, images, and photographs must be done primarily manually today, using keywords and other text to describe their contents.

There are commercially available products, and commercial products under development, that can "read" incoming electronic mail, including facsimiles (by using scanners to convert printed text to digital form), in order to sort messages by such characteristics as the name of the sender, the list of recipients, or keywords in the content. Likewise, electronic news feeds and incoming telephone calls can be sorted automatically according to a profile specified by an end user. As an example, an electronic mail filtering agent can be installed on a mail server that sorts mail into three priorities. Mail from certain specified individuals (determined by their originating E-mail address) is sorted into the highest priority bin. Mail directed to a large number of recipients is sorted into the lowest priority bin. Everything else gets medium priority, except mail that contains the word "urgent" anywhere in the subject header or the body of the message. Such "urgent" mail is inserted into the highest priority bin. This very simple sorting profile, specified by the user, is remarkably accurate in sorting messages according to their actual priority.
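The three-bin profile just described translates almost directly into code. The sketch below, with invented header fields, addresses, and thresholds, applies one reading of those rules (with "urgent" taking precedence over the bulk-mail test) to assign each message to a high, medium, or low priority bin.

    #include <stdio.h>
    #include <string.h>

    enum priority { LOW, MEDIUM, HIGH };

    struct message {
        const char *sender;
        int         recipient_count;
        const char *subject;
        const char *body;
    };

    /* Senders whose mail always goes to the highest-priority bin. */
    static const char *vip[] = { "commander@example.mil", "spouse@example.com" };

    #define MANY_RECIPIENTS 20   /* invented threshold for "a large number" */

    static enum priority classify(const struct message *m)
    {
        for (size_t i = 0; i < sizeof vip / sizeof vip[0]; i++)
            if (strcmp(m->sender, vip[i]) == 0)
                return HIGH;                    /* specified individuals */
        /* "urgent" anywhere in subject or body (case-sensitive here). */
        if (strstr(m->subject, "urgent") || strstr(m->body, "urgent"))
            return HIGH;
        if (m->recipient_count >= MANY_RECIPIENTS)
            return LOW;                         /* broadly addressed mail */
        return MEDIUM;                          /* everything else */
    }

    int main(void)
    {
        struct message m = { "newsletter@example.org", 500, "Weekly digest", "..." };
        printf("priority bin: %d\n", classify(&m));  /* prints 0 (LOW) */
        return 0;
    }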

A relatively new concept in information overload management is the mobile agent. A mobile agent is an executable computer process or application that can be launched into a network by an end user to perform specified tasks for that end user as the agent travels through the network from server to server. For example, a mobile agent could be launched to search databases for automobiles having a set of specified characteristics that are available for purchase. This exciting technology has been embodied in some early products, such as the General Magic Telescript product, but there are many issues that need to be resolved before such mobile agents are widely used. In addition to standards that define the interface and the interactions between mobile agents and the servers they attach themselves to, there are still open technical issues regarding (a) the potential impact of agents on network and server congestion, and (b) more generally, the security impacts of executing processes or application software received over a network.

Multimedia Database Management Systems

The role of a database management system is to provide reliable and efficient data storage and reliable and efficient access (e.g., query, retrieval, update) to the data store by a potentially large user population. As such, it must provide for (a) the integrity and reliability of the data; (b) concurrent use of the data (e.g., "locking" a piece of data being modified so that concurrent users do not access a partially updated/modified version of the data); and (c) easy application and user access, creation, or organization of the data.

Current database systems typically employ a so-called relational data model, in which data are viewed as being stored in columns and rows. A relational database management system (RDBMS) provides access and querying over these data in terms of this row and column structure. The RDBMS was well suited for business data processing, where applications such as payroll and accounting fit naturally into the relational data model. For more sophisticated forms of data, particularly multimedia data, where the item being managed is more complex, the relational model begins to break down. An example of such a complex multimedia object might be a meeting or briefing consisting of synchronized audio segments, video segments, and textual display.

Object-oriented database management systems (OODBMS) are considered to be better suited for managing such complex objects. In an OODBMS, a user sees a higher level of abstraction than in an RDBMS; data are encapsulated within an object and can only be accessed via OODBMS procedures associated with that object. For example, a multimedia briefing object might be accessed via OODBMS playout procedures, which provide for retrieval of media from the underlying storage device and synchronized delivery of the meeting's component media streams.
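The encapsulation idea behind an OODBMS can be illustrated in C by pairing an object's state with the procedure through which it is meant to be accessed. The briefing object and playout function below are invented for illustration; a real OODBMS adds persistence, concurrency control, and a query interface on top of this pattern.

    #include <stdio.h>

    /* A complex multimedia object: state is meant to be touched only
       through the procedures bundled with it, never directly. */
    struct briefing {
        const char *title;
        int audio_segments;
        int video_segments;
        void (*playout)(const struct briefing *self);  /* the object's "method" */
    };

    /* Playout procedure: in a real system this would fetch each media
       stream from storage and deliver the streams in synchrony. */
    static void briefing_playout(const struct briefing *self)
    {
        printf("playing '%s': %d audio and %d video segments, synchronized\n",
               self->title, self->audio_segments, self->video_segments);
    }

    int main(void)
    {
        struct briefing b = { "Operations brief", 3, 2, briefing_playout };
        b.playout(&b);   /* access only through the object's procedure */
        return 0;
    }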
Another important aspect of an OODBMS is the notion of an object request broker, which provides coordinated access to objects that may be distributed among many computers in a networked environment. ODMG-93 is the object-oriented database standard advocated by the OMG, and CORBA (the Common Object Request Broker Architecture) is the platform-independent object broker service supported by the Object Management Group. Although OODBMS are relatively new, they are already being considered for deployment in large-scale environments. For example, NASA plans to incorporate object request brokers into its Earth Observing System Data and Information System.

Because multimedia OODBMS manipulate media streams (e.g., voice and video) that have stringent timing, playout, and synchronization requirements, the underlying operating system and object stores must themselves support these requirements. The earlier subsection on protocols and the related functionality to support communications discusses these continuous media requirements in more detail.

Digital libraries are an important emerging application that will make considerable use of distributed multimedia OODBMS (Fox et al., 1995). Envisioned digital library applications currently being considered are thus likely to push forward the development of distributed multimedia OODBMS. These applications include querying and information filtering of multimedia data (see the prior subsection on information filtering systems) and hypertext/hypermedia browsing capabilities, which allow users and applications to navigate through the information in a nonlinear fashion (i.e., jumping from one piece of information to another piece of information in any desired sequence). Although commercial, networked, multimedia database technology is currently not available to support these applications, the significant efforts currently under way suggest that this component technology will mature as digital library efforts continue. For example, NSF, ARPA, and NASA have recently begun cooperating on a $24 million digital library research program. Both the U.S. Library of Congress and the British Library are undertaking significant efforts to provide multimedia access to their information (Purday, 1995; Becker, 1995). The Encyclopedia Britannica has recently begun providing an on-line multimedia access service to its encyclopedia through the WWW.

User-Friendly Multimedia User Interfaces

While early computer users were tolerant of arcane means of communication with the machine (i.e., punch cards, paper tape, teletypewriters, and character-based displays), today's users expect and require ease of use, implying ease of interaction. This has given rise to the mouse or trackball as a representative point-and-select device and bit-mapped displays (capable of displaying graphics and images) as a means of visual representation. Since there are many situations where hands-free operation is desirable (e.g., operation of a car phone), speech recognition technology is becoming increasingly important and popular.

Origins of Speech Recognition Technology

Speech recognition research began at a number of industrial research labs and universities in the 1950s. Early systems could only be used for the recognition of isolated or discrete speech, consisting of a single or a very limited number of words spoken slowly and deliberately. High levels of accuracy also required a complicated tuning of the system for an individual speaker's characteristics. In the early 1960s, computers were capable of only millisecond instruction execution times and had very limited memory capacities. To perform any significant level of analysis of speech in a pseudo-real-time manner requires computer instruction execution times closer to the nanosecond range and substantial random access memory. Thus, based on the complexity of the analysis, the amount of computation involved in handling a fully continuous speech pattern from any random speaker, and the relatively limited amount of computing power available in the 1950s, 1960s, and even 1970s, practical speech recognition was far from a reality during this period.

New Directions

As computing capabilities increased, it became much more practical for a useful voice recognition system to be built. Major research advances in the fundamental algorithms for speech recognition occurred during this time. By the early 1980s it became possible for a small mainframe system about the size of a desk, supported by three special-purpose array processors in individual equipment racks and coordinated from a single workstation, to recognize a vocabulary of roughly 5,000 words. Such a system was used to generate office-style correspondence. In the mid-1980s, it was possible for a personal computer, aided by a dozen special-purpose or "feature" cards housed in a separate chassis, to perform at the same level of speech recognition as the small mainframe had a few years earlier. As the power of personal systems continued to improve, a desktop personal computer with a single additional feature card housed within the system unit could perform office-style speech recognition with a vocabulary of 20,000 words.

Current Speech Recognition Systems

Personal-computer speech-recognition packages have been available on the commercial market for about five years. Initial offerings were of most interest to the very curious, highly skilled users with a significant discretionary budget. All required additional specialized hardware to operate. The most recent wave of product releases, however, has broken that barrier, running on standard platforms without expensive additional hardware.
For the moment, many retain some level of speaker-dependence, which requires some degree of training of the system for each speaker and limits the ability to handle truly natural, continuous speech. However, there are product offerings that are capable of handling continuous, speaker-independent input, with a vocabulary capacity of 25,000 words.

Programs permit dictation of text directly into a voice-aware application, working in conjunction with word processors and the like. They incorporate on-line error correction based on context and typically incorporate a learning feature that allows the system to improve its recognition of the user's speech with continued usage.

Another variety of speech-recognition software is the "voice navigator," which in conjunction with a simple sound card permits the user to interact with common operating system commands, menus, etc., through voice control rather than mouse and keyboard input. The navigator takes the spoken input commands to pull down menus, make selections, and launch and close applications. Products available also provide verbal input to many common applications or can be taught how to handle new ones. This has found application, for example, in voice-controlled dialing of cellular phones in automobiles, where hands-free operation prevents distractions which may interfere with driving.

The last category of speech recognition systems available today is the development tool kits for C or Basic. These software packages permit the programmer to construct voice-enabled applications and add voice recognition to existing applications for specific tasks. Such applications might be of a forms type, requiring that specific fields of user information be filled in with verbal input.

Speech recognition has advanced far beyond the early experimental systems of the 1950s; however, it is not yet a mature and widely adopted application like spreadsheets or word processors. The next few years will probably see that change, as these mature applications in use on millions of personal computers in both industrial settings and personal environments are opened up to include voice annotation and support.

Graphical User Interfaces

Graphical user interfaces have found heavy use in industry, as they have allowed people to interact effectively with computers without mastering a huge vocabulary of obscure syntax. Further, standardization of these interfaces to provide a common "look and feel" has proven to be a powerful technique for speeding user learning of the system. Much of this work has come out of the computer-human interface community, and a good deal of the fundamental work sprang from early systems explorations at the Xerox Palo Alto Research Center. For example, the Apple Lisa and its commercially successful derivative, the Macintosh, had their origins in this work. The success of icon-based windowing environments, such as that of the Macintosh, led to the realization that graphical user interfaces were a key enabler of widespread and effective use of computer-based applications. The creation by Microsoft of its MS-Windows operating system is indicative of this realization.

Industry has focused on selecting a standard environment in which to build applications. This has been accomplished by standardizing and using X-Windows and Motif-based graphical user interface libraries.
Multimedia Information Analysis and Processing Building Blocks and Middleware Services

As the information technology industry has matured, there has been a trend toward moving what may begin as application-specific software out of applications and into a set of generically useful capabilities that application developers can draw upon and reuse. These capabilities can be combined into applications as reusable generic building blocks, or they can be accessed as remote resources provided by servers.

For example, if one wishes to convert text to speech or text to facsimile format, one could start from "scratch" and write application software to perform these tasks. However, since this ability is required by a large number of different applications, it makes sense to implement the functionality as generally usable building blocks, or middleware, rather than as application-specific software. Other examples of middleware include directory functionality; protocol interface functionality (e.g., functionality that allows stored data to be formatted and sent over the Internet protocol); security and access control functionality; video and image compression or decompression functionality (e.g., Motion Picture Experts Group (MPEG) and Joint Photographic Experts Group (JPEG)); user interface functionality; multimedia format conversion functionality; and various approaches to automatically sorting or indexing multimedia information (Wallace, 1991).

There are two important advantages to implementing generic information analysis and processing building blocks in the form of middleware. The most self-evident advantage is the efficiency that results from the reuse of generic building blocks. The other advantage is in interoperability. When generic building blocks are reused, there is much less likelihood of incompatibilities resulting from different design approaches or different implementations of what is intended to be equivalent functionality. The reuse of generic functionality, embodied in middleware, will likely be critical to achieving interoperability in large, distributed systems supporting a wide variety of applications.

BUILDING BLOCK TECHNOLOGIES (LAYER IV - GENERIC APPLICATIONS/ENABLERS)

Building block technologies discussed under Layer IV - Generic Applications/Enablers of the generic architecture (Figure 3-2) include multimedia information access capabilities; decision support tools, groupware, and multimedia teleconferencing; and multimedia messaging capabilities.

Multimedia Information Access Capabilities

One large class of applications of multimedia information networks is in making stored multimedia information accessible to others. Information access is a generic enabling application that can support a wide range of specific application domains, ranging from electronic libraries to home shopping to medical applications involving stored patient records.

To make information accessible, it must be stored in a database server capable of storing the types of information desired (e.g., videos in the case of video-on-demand applications) and capable of managing the information to make it accessible to multiple users. Storage and database management technologies are building blocks that support information access. Likewise, information access requires directory and filtering capabilities to make information discoverable by users and their applications and to sort through information to find what is needed. Information access requires the existence of useful information in formats suitable for electronic storage and retrieval, and thus the existence of information creation (authoring) tools. Information access also requires (a) communications networks to allow information to be retrieved by users from remote servers; (b) security mechanisms to control access; (c) billing mechanisms to allow users to be charged for accessing the intellectual property of others; and (d) interworking capabilities that allow a diversity of terminal types (i.e., end-user appliances) with a diversity of communications capabilities to access stored information with the maximum degree of facility permitted by the limitations associated with those terminals and communications capabilities. For example, users of the Internet WWW may or may not want to receive graphics and images, depending upon the speed of their access arrangement (e.g., dial-up modem vs. T1 line) and whether or not their terminal is configured with a bit-mapped display.

The aggregation of appropriate building block technologies, the establishment of standards for interfaces, formats, and other aspects of protocols, and the integration of these into generic information application building blocks for widespread use across a wide range of application domains are currently being pursued by the emerging information networking industry to meet existing and anticipated market demand. Although a full suite of industry standards has not yet emerged, the products that are currently appearing to support access to the Internet WWW and its commercial counterparts (e.g., America On-Line, Prodigy, the Microsoft Network, MCInet), as well as the efforts under way to build multimedia client-server application software for interactive video-on-demand applications, illustrate the substantial level of commercial R&D resources being applied.

Decision Support Tools, Groupware, Multimedia Teleconferencing

Traditional decision support tools have included rule-based systems, simulations, and spreadsheet-like analyses. These have typically developed as single-machine and single-user tools, although they have, in some cases, incorporated sharing of data. They are widely used to support operations (e.g., the optimized routing of trucks and couriers, the ordering of materials and components, and the testing of "what-if" questions that can be quantified in a form that a spreadsheet or other analytical tool can manipulate).

These tools have extended value if they can be used to support collaborative decision making. This potential has been recognized by industry, which uses "shared information" in corporate information systems. The financial industry allows for automated decision making for some investment decisions (e.g., so-called "program trading," which monitors real-time stock market data looking for a trading opportunity).
Advances in networking technology, coupled with increases in the visual and audio input and presentation capabilities of end-user terminals and appliances, have enabled much greater collaborative capabilities. In particular, several commercial systems (such as BBN's Slate, Lotus Notes, and Intel's ProShare) provide collaboration environments, and there are now tools emerging for supporting collaboration on the Internet. In particular, the "wb" program supports a "shared whiteboard" model of communications for sharing of graphical information, and the "vat" tool allows for audio teleconferencing over the Internet.

Widely available software tools for video teleconferencing include the CU-SeeMe tool available from Cornell University, and commercial tools such as Sun Microsystems' ShowMe tool, IBM's Person-to-Person multimedia conferencing software, and Silicon Graphics' InPerson desktop conferencing software. Internet technology, called the Multicast Backbone (MBone), is now able to carry limited numbers of multicasts of video across the existing global Internet using slow frame rates, compression, and overlay support in Internet routers.

This technology is maturing at an extremely rapid rate, as many entrepreneurs as well as large vendors see opportunities for sales of software, multimedia peripherals, and services. No standard has yet emerged for many of these environments, and it is reasonable to expect that it will take several years for the commercial world to converge on either one "de facto" standard (likely, because these collaboration tools must easily and reliably interoperate) or a very small number of such standards.

Multimedia Messaging Capabilities

Voice mail, fax, and text E-mail (electronic mail) messaging are popular generic applications that allow people to communicate on a non-real-time basis and to communicate images. The technology for multimedia messaging is similar to the technology used for current text-based electronic mail and has similar requirements for addressing, security, and management of messages. The difference is that the information in the message can have components representing a variety of data types (e.g., text, image, graphics, audio clip, video clip, data structure), with each type possibly occurring in a variety of standard formats.

The simplest way to construct a multimedia message is to encode the multimedia information components, such as voice, in a set of formats that are recognized by the intended recipient. The encoded message file, while possibly voluminous, is then transported intact between source and destination. This end-to-end method is commonly used on the Internet today. Examples might include the end-to-end transmission of compact-disk-standard digitized audio, PostScript images, and GIF and JPEG images. The recipient (or the recipient's mail reader) accepts and converts the encoded data back into the appropriate form for replay or display (e.g., audio, images).

The message can be augmented with indicators of the types of components and standard formats used for each component within the multimedia message. This is done, for example, with the Eudora mailer, which provides a facility for multimedia "attachments" to textual message contents so that the text and the multimedia or program executable accompany each other through the mail transport subsystem. By structuring the multimedia message this way, with explicit indicators of its components and their formats, it is possible for intermediate systems to assist the recipient's end system in converting various components of the message into formats that are compatible with the end system's capabilities (e.g., E-mail text might be converted to synthesized voice or to fax format).

While multimedia messaging capabilities have not been completely standardized to date, it is expected that many features will be standardized as mailers and mail readers evolve to accommodate multimedia. A recent multimedia messaging standard, Multipurpose Internet Mail Extensions (MIME), which has been adopted by the Internet Engineering Task Force (IETF), defines how components of a multimedia message and their formats can be explicitly identified within the message, as described above (the sketch below shows the shape of such a message). Multimedia mail is expected to be a widely used, popular generic application supporting a wide variety of commercial, residential, and institutional application domains.
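The program below emits the skeleton of a MIME multipart message so the labeling idea is visible: a top-level header declares the MIME version and a boundary string, and each component carries its own Content-Type. The addresses, boundary string, and base64 placeholder are illustrative; a real mailer would also encode the actual audio bytes.

    #include <stdio.h>

    /* Print the skeleton of a MIME multipart message: each component of
       the multimedia message is explicitly labeled with type and format. */
    int main(void)
    {
        const char *boundary = "==example-boundary==";  /* illustrative */

        printf("From: sender@example.mil\n");
        printf("To: recipient@example.mil\n");
        printf("Subject: Mission briefing\n");
        printf("MIME-Version: 1.0\n");
        printf("Content-Type: multipart/mixed; boundary=\"%s\"\n\n", boundary);

        /* First component: plain text. */
        printf("--%s\n", boundary);
        printf("Content-Type: text/plain\n\n");
        printf("Briefing notes attached as audio.\n\n");

        /* Second component: an audio clip in a standard format. */
        printf("--%s\n", boundary);
        printf("Content-Type: audio/basic\n");
        printf("Content-Transfer-Encoding: base64\n\n");
        printf("...base64-encoded audio bytes would appear here...\n\n");

        printf("--%s--\n", boundary);  /* closing boundary */
        return 0;
    }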
BUILDING BLOCK TECHNOLOGIES (LAYER V - SPECIFIC APPLICATIONS)

General Observations

Specific Applications, which form the top layer (Layer V of Figure 3-2) of the committee's generic architecture, draw upon and combine the functionality of lower-layer building blocks and add additional software-based functionality to those building blocks to enable a set of useful tasks to be accomplished. Applications can be designed to be generically useful in a wide range of application domains; they can be tailored (programmed) by users to meet their specific needs; or they can be designed from the outset to serve a narrow set of purposes for a specific application domain or even a specific individual user. For example:

· The Microsoft Word application is a generically useful word processing application that can be used by almost anyone in almost any business, institution, school, or library. It makes use of a graphical user interface and a database management system as underlying building blocks, as well as the basic operating system and physical computing hardware upon which it runs.

· The Intuit TurboTax application is specifically targeted toward the preparation of income tax returns, but it can be used by a wide number of individuals having a wide variety of tax situations.

· A financial management, payroll, and accounting application might be designed or customized for an individual firm, or it might take on a more generic form specifically targeted, for example, to lawyers in private practices.

Whether an organization develops a customized application or adopts a generic off-the-shelf application depends upon how unique its needs are perceived to be, and whether it wishes to use that application to obtain an advantage over its competitors. For example, a lawyer would not likely view an accounting application as a path toward obtaining a competitive advantage, but a major banking institution might view an accounting application as a principal source of competitive advantage.

It is useful to note that it is not unusual for applications to migrate from Layer V of the generic architecture into Layer IV to become generic/enabling applications. So, for example, as spreadsheets have become embedded as building blocks in more complex or specialized applications, they have taken on Layer IV functionality, while at the same time general-purpose spreadsheets such as Microsoft Excel are also viewed as stand-alone applications in Layer V of the architecture.

There are many specific commercial multimedia applications, some of which were alluded to above and others of which are discussed later in this chapter. Since our discussions with Army personnel highlighted the important role of simulation in training, analysis of alternatives, and pre-engagement practice, the committee has specifically focused on simulation as an application to be discussed.

Simulation: Systems and Applications

This section reviews the basic categories of models and simulations needed to understand where commercial technologies are currently positioned and where they are headed. We begin with basic definitions (DoD Modeling and Simulation (M&S) Management, 1994):

· A model is a physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, or process. Typically, computerized implementations take the form of rule-based, stochastic, or deterministic models.

· Simulation is a class of techniques for testing, analysis, training, or entertainment in which real-world systems may be combined with computerized models.

Simulations have been categorized as:

· Constructive, which involve primarily computer calculations and representations combining selected aspects of the real world that have been previously analyzed and reduced to a mathematical model.

· Virtual, which are simulations in which people play a central role as themselves. Virtual reality is provided by computer-driven displays and interaction interfaces. Such simulations can be conducted at a single site or distributed across a network.

· Live, which attempt to make the action seem as real as possible to the participants by including live elements in the simulation.

Distributed Interactive Simulation (Bouwens et al., 1993) is defined as the execution of models at disparate sites, with humans in the loop, linked for a common purpose and having a common view of that purpose. Information and data between sites (nodes) are passed using predefined protocols. The nodes can be distributed anywhere in the world, in any number, and they can operate with different hardware and software.

Constructive and virtual simulations are discussed below. Because live simulations do not have noteworthy commercial interest, they are not discussed here.

Constructive Simulation

Commercial simulation software has for the most part been concerned with constructive simulations in manufacturing, design of communication networks, and a wide variety of other application domains. For example, using modern computer-aided design and manufacturing (CAD-CAM) software, one can create simulations of components and assemblies of automobiles, aircraft, machine tools, and other physical objects made up of pieceparts in order to experiment with alternative designs, to test functional performance, and to ensure that pieceparts fit together properly.

In recent years, there has also been an increased use of constructive simulation in medical applications, where powerful computers and associated application software can be used to create two- and three-dimensional renderings of human organs based on data obtained from X-ray and ultrasonic scanners.

Many simulations are written in general-purpose languages such as the C programming language for numerically based science and engineering problems and Common LISP, a language designed for manipulation of lists of symbols and used for applications that operate on symbolic representations in ways functionally similar to human thinking (artificial intelligence).
Object-oriented extensions to these general-purpose languages, C++ and the Common LISP Object System, enhance the power of the respective basic languages considerably, but they do not provide the full range of useful programming facilities that are available in today's specialized simulation languages and environments. A wide variety of commercial simulation languages can be found on the market, ranging from highly specialized program generators (e.g., SIMFACTORY, COMNET) to languages providing general discrete event modeling facilities (e.g., GPSS, MODSIM, SIMAN). Environments for expert system development such as ART-Enterprise and ProKAPPA contain comprehensive facilities for knowledge representation and reasoning (Hayes-Roth et al., 1983). Simkit combines some discrete event simulation features with knowledge representation and reasoning. Neural networks (Hammerstrom, 1993) and genetic algorithms (Goldberg, 1989) for optimization have also appeared in commercial software packages.
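At the core of the discrete event modeling facilities mentioned above is a simple mechanism: a clock and a time-ordered event list, with each event scheduling its successors. The sketch below shows that loop in C with an invented two-event arrival/service model; commercial simulation languages wrap this core in modeling, statistics, and animation facilities.

    #include <stdio.h>
    #include <string.h>

    /* A discrete event simulation advances its clock from event to event
       rather than in fixed time steps. */
    #define MAX_EVENTS 64

    struct event { double time; const char *name; };
    static struct event queue[MAX_EVENTS];
    static int n_events = 0;
    static double clock_now = 0.0;

    static void schedule(double t, const char *name)
    {
        queue[n_events].time = t;
        queue[n_events].name = name;
        n_events++;
    }

    /* Remove and return the earliest pending event. */
    static struct event next_event(void)
    {
        int min = 0;
        for (int i = 1; i < n_events; i++)
            if (queue[i].time < queue[min].time) min = i;
        struct event e = queue[min];
        queue[min] = queue[--n_events];
        return e;
    }

    int main(void)
    {
        /* Invented model: arrivals every 4.0 time units, each followed
           by a service completion 1.5 time units later. */
        schedule(0.0, "arrival");
        while (n_events > 0) {
            struct event e = next_event();
            if (e.time > 10.0) break;      /* simulation horizon */
            clock_now = e.time;
            printf("t=%5.2f  %s\n", clock_now, e.name);
            if (strcmp(e.name, "arrival") == 0) {
                schedule(clock_now + 1.5, "service complete");
                schedule(clock_now + 4.0, "arrival");
            }
        }
        return 0;
    }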

Virtual Simulation

Virtual reality can be loosely characterized as a combined hardware and software system that allows users the capability to simulate any real or imaginary situation and to interact with it (Marefat, 1993). Virtual reality can be traced as far back as the work of Ivan Sutherland on the design of a head-mounted display, but much of the modern virtual reality work came about after the super-cockpit work at Wright-Patterson Air Force Base. Depending on the application, virtual reality has also been referred to as immersion simulation, virtual environments, telepresence, and artificial reality. About 50 small companies are in the business of developing the components to support virtual reality for business, training, and entertainment applications. Such components include: (a) input devices for sensory feedback, such as head-mounted position trackers, data gloves, joysticks, keyboards, and body suits; (b) computer elements, including high-speed graphic processors and data managers; and (c) output devices, including monitors, audio, and head-mounted displays.

Large companies, such as Lockheed and Boeing, are also involved in virtual reality through the development of software environments to support virtual reality-based simulations. Such environments must integrate (a) visual elements involving geometric models and their appearance; (b) auditory and tactile elements (although such capabilities are still in early stages, the integration of the visual, auditory, and tactile sensory information is critical to true immersion sensation); and (c) behavior models. The difference between computer animation and virtual reality is that virtual worlds are interactive (i.e., the objects in the virtual worlds move, react, and are affected by external events) (Marefat, 1993).

Recently, the focus of virtual reality research has started to move rapidly toward entertainment, engineering, and medical applications of interest to the entertainment, industrial and architectural simulation, and health care communities. Research into telepresence, in which the user is projected into remote environments through sensor/actuator interfaces, is accelerating both in academia and in potential commercial applications involving hazardous operations.

Trends in Commercial Development

Commercial systems are moving toward support of integrated modeling and simulation following many years of academic research (Oren and Zeigler, 1979; Henriksen, 1983). Such systems provide support for the model development process in addition to the run-time execution facilities provided by current simulation languages. The tools of artificial intelligence and knowledge-based systems are used to support aspects of the simulation enterprise (Balmer, 1986; Murray and Shepherd, 1987; Reddy et al., 1986; Rozenblit et al., 1990). Fundamental requirements for standardized simulation environments have been formulated (Tanir and Sevinc, 1994; Zeigler et al., 1994). The various kinds of support are illustrated in Table 3-2.
Although none of the existing systems can satisfy all of the requirements implied by Table 3-2, Object-Oriented Knowledge-Based Simulation Environments come closest to meeting them (Zeigler, 1990; Ruiz-Mier and Talavage, 1989).

TABLE 3-2 Simulation Support

Various Kinds of Support for Simulations

Software Development Support
· Enhanced model quality integral through development process
· Comfortable prototyping/development/reification
· Stand-alone modules
· System composition from components
· Top-down design/bottom-up testability
· Configuration management
· Visual interface access to environment
· Output analysis, visualization, animation
· Integration with virtual reality immersion

Model-Database Management
· Integrate with standard database management systems
· Support hierarchical synthesis process and model object hierarchies
· Exploit model components resident in a model object base
· Support organization and reuse of objects and models

Model Specification/Execution Simulation
· Support continuous, discrete event, and related dynamic system formalisms
· Support artificial-intelligence knowledge representation formalisms: goal planning, rules, frames
· Support multiple concurrent agents
· Support model evolution through hierarchical, modular construction
· Stable real-time simulation

Model-to-Architecture Mapping Support
· Support mappings of models onto diverse computer architectures
· Facilitate upgrading to higher-capability platforms through software compatibility
· Assure correct simulation process
· Generate alternative architectures and mappings
· Experiment with/select efficient (time/space) alternatives

Support for Multiple Resolution Levels
· Refine models to provide high-fidelity representation
· Coarsen models to meet execution time and space constraints
· Integrate into the model base via abstraction mechanisms with explicit links for state and parameter databases
· Cross-validate models with respect to other (high or low resolution) models

SOURCE: Zeigler et al., 1994.

BUILDING BLOCK TECHNOLOGIES (LAYER VI - MANAGEMENT/SECURITY)

Building block technologies discussed under Layer VI - Management/Security of the generic architecture (Figure 3-2) include security technologies; network management systems; and general-purpose languages, tools, and development environments.

Security Technologies

Security is a major issue and a major concern among all providers and users of information networking applications and services. In the United States, privacy is one of our most cherished rights, and privacy concerns are a major impediment to successful realization of the vision of a national information infrastructure. Health care providers and patients are concerned about the protection of patient-identifiable patient records. Educators are concerned about the unauthorized disclosure of student records and also about the uncontrolled or unauthorized access of students to pornographic materials. Individuals do not want their messages and real-time communications to be disclosed, nor do they want information about their buying habits, what they read, their tax records, or other personal information to become available without their permission. Electronic commerce cannot flourish without the ability to conduct secure transactions and protect intellectual property from unauthorized uses.

Security includes:

· Protection of stored information or information in transit through a network from unauthorized access in usable form (eavesdropping);
· Protection of stored information or information in transit from unauthorized modification;
· Authentication that what has been received over a network has not been modified, and that the source of the information is who he or she claims to be;
· Protection of resources from unauthorized access;
· Protection of users' identities from those who are not authorized to know their identities;
· Protection of information about users' usage patterns: what they access, how often, from where; and
· Protection against denial-of-service attacks that prevent authorized users from accessing resources and information when they need to do so.

Recent trends in commercial systems and applications have been to enhance security. There is a tradeoff between the level of protection that can be provided and the ease with which legitimate users can use the services they are authorized to use. This tradeoff is shifting in time as advances in technology make such things as powerful smart cards available. Standards are the pacing factor here, because the industry recognizes that users will not wish to carry around a variety of different types of smart cards for different purposes. Thus, much effort is currently being employed to identify the broad range of applications and the corresponding capabilities that must be embedded in smart cards. Smart cards are essentially credit-card-sized miniature computers that contain secret encryption keys, the ability to execute security (encryption) algorithms, and input/output capabilities to interface with terminals or appliances. They can store personal information such as medical records, and they can also be loaded with electronic money.

To illustrate the trends in commercial practice, one can consider the following examples. First-generation cellular networks are relatively susceptible to eavesdropping.
Next-generation digital cellular networks will employ encryption techniques to make eavesdropping more difficult. Emerging wireless personal communications networks will, in some cases, employ spread-spectrum code division multiple access (CDMA) techniques and even more powerful encryption algorithms. These networks may also employ public key cryptography algorithms to allow accessing users to authenticate themselves to the network without disclosing their identities to eavesdroppers. CDMA systems are not only intrinsically eavesdrop-resistant but also resistant to direction finding, because their low-power signals are spread over a broad range of frequencies, which are shared by all other users.

When remote users log on to their host terminals over networks, there are opportunities for passwords to be compromised to eavesdroppers, particularly with wireless networks. This has led to the use of special credit-card-sized tokens that generate and display pseudo-random numbers that are used as one-time passwords. Using a variety of approaches, the passwords generated by these tokens can be authenticated by host security systems that know the secret information residing in the token that generates the passwords. Eavesdroppers cannot make use of these passwords after their initial use by authorized users. The use of these single-purpose tokens may be eliminated when their functionality is absorbed into general-purpose smart cards in the future.
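The token scheme just described can be sketched as follows: token and host share a secret and a counter, and each new password is a one-way function of the two, so an overheard password is useless afterward. The mixing function below is a toy stand-in for a real cryptographic one-way function, and all names and constants are invented for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy one-way mixing function standing in for a cryptographic hash;
       both token and host compute it over (secret, counter). */
    static uint32_t mix(uint32_t secret, uint32_t counter)
    {
        uint32_t x = secret ^ (counter * 2654435761u);
        x ^= x >> 16; x *= 2246822519u; x ^= x >> 13;
        return x % 1000000;   /* six-digit display, like a hand-held token */
    }

    int main(void)
    {
        uint32_t secret = 0x5EC2E7u;   /* invented shared secret */
        uint32_t token_ctr = 0, host_ctr = 0;

        /* Token displays a password; user types it; host recomputes it. */
        uint32_t shown = mix(secret, token_ctr++);
        printf("token displays: %06u\n", (unsigned)shown);
        printf("host %s the login\n",
               shown == mix(secret, host_ctr++) ? "accepts" : "rejects");

        /* Replaying the overheard password fails: counters have advanced. */
        printf("replay %s\n",
               shown == mix(secret, host_ctr) ? "accepted (bad!)" : "rejected");
        return 0;
    }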

Standards for digital signatures, based on public key cryptography, make it possible not only to verify the source of a piece of multimedia information that has been stored or transmitted in digital form, but also to validate that no change has been made to the digital multimedia object subsequent to its being signed by the originator (a toy sign-and-verify example appears below). Extensions of this approach have resulted in the ability also to authenticate the exact time that a document was signed (digital notary service) in ways that are acceptable in legal settings.

In general, cryptography makes it possible to make multimedia information inaccessible to unauthorized users by placing it in a form that is not usable without the secret cryptographic decoding key. Commercial methods for implementing cryptography are widely available, although export restrictions, difficulties in negotiating terms with respect to the use of patented methods, and certain federal government initiatives with respect to encryption methods, which contain "back doors" to allow government access under specified circumstances, have temporarily hampered progress in converging on commercial standards for strong encryption methods.

There are initiatives under way to create cryptographic methods to support electronic commerce, including the exchange of credit information over public networks. Several vendor-specific approaches are currently being employed in commercial networks to support electronic shopping and the sale of intellectual property over networks.

Firewalls to prevent attacks on network enclaves (i.e., networks within a specified administrative domain, such as those of a company or a university) from determined intruders are available and are continually being upgraded as more sophisticated attacks are developed and employed. Applications exist for automatically detecting network security vulnerabilities against known attacks due to improper configurations of networks and their attached hosts. However, protection of networks against determined attackers remains an ongoing problem for commercial and institutional system administrators. It has been described as a journey, rather than a destination, where the objective is to minimize risks and to detect quickly and limit the damage associated with attacks.

In summary, with network and information security as one of the pacing factors in the successful realization of the applications associated with a national information infrastructure, which represent a commercial market opportunity measured in hundreds of billions of dollars per year by most estimates, there is a very large commercial R&D effort under way to create, standardize, and deploy easy-to-use, powerful, and inexpensive security technologies and methodologies.
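Returning to digital signatures: the sign-and-verify mechanics can be shown with textbook-sized RSA numbers, where the originator raises a digest of the message to a private exponent and anyone can check the result with the public exponent. The primes, exponents, and checksum "digest" below are illustrative toys; real systems use keys hundreds of digits long and a cryptographic hash.

    #include <stdio.h>
    #include <string.h>

    /* Textbook RSA toy: n = 61 * 53 = 3233, public e = 17, private d = 2753. */
    static const unsigned long N = 3233, E = 17, D = 2753;

    static unsigned long modpow(unsigned long base, unsigned long exp,
                                unsigned long mod)
    {
        unsigned long result = 1;
        base %= mod;
        while (exp > 0) {
            if (exp & 1) result = (result * base) % mod;
            base = (base * base) % mod;
            exp >>= 1;
        }
        return result;
    }

    /* Toy "digest": a checksum of the message reduced mod n. A real
       signature scheme would use a cryptographic hash here. */
    static unsigned long digest(const char *msg)
    {
        unsigned long h = 0;
        for (size_t i = 0; msg[i]; i++)
            h = (h * 31 + (unsigned char)msg[i]) % N;
        return h;
    }

    int main(void)
    {
        const char *msg = "Move supplies to grid 345678";

        unsigned long sig = modpow(digest(msg), D, N);  /* sign with private key */
        printf("signature: %lu\n", sig);

        /* Anyone holding the public key can verify source and integrity. */
        printf("original verifies: %s\n",
               modpow(sig, E, N) == digest(msg) ? "yes" : "no");
        printf("tampered verifies: %s\n",
               modpow(sig, E, N) == digest("Move supplies to grid 999999")
                   ? "yes" : "no");
        return 0;
    }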
In summary, because network and information security is one of the pacing factors in realizing the applications associated with a national information infrastructure, a commercial market opportunity that most estimates place at hundreds of billions of dollars per year, a very large commercial R&D effort is under way to create, standardize, and deploy security technologies and methodologies that are easy to use, powerful, and inexpensive.

Network Management Systems

Network management systems are used to manage large, distributed, heterogeneous information systems. Management functions range from authorizing users to have access to specific services and applications to recovering from faults or attacks. Typically, management is accomplished in a layered fashion to make the management process itself manageable. Individual network components such as communications nodes (e.g., switches, multiplexers, routers) and links (e.g., fiber optic systems), servers, and end systems contain self-diagnostic functionality and the ability to remotely configure or reconfigure their capabilities.

The network management functionality devoted to monitoring and controlling these individual components is referred to as residing in the element management layer. Collections of components work together to perform such functions as the provision of communications paths or accessible databases. While individual components may fail, redundancies can make it possible to maintain these functions. For example, a communications path can be maintained by using an alternative route through a multiconnected network, and a backup server can take over for a damaged server. The network management layer responsible for maintaining such things as communications paths and database capabilities is known as the resource management layer. Higher layers in the network management stack are responsible for providing specific types of services and applications to specific users and user classes.

As distributed, multipurpose, multiprovider, heterogeneous networks have proliferated in the commercial world, network management has become a major commercial market. Downsizing and reengineering of commercial firms and industries have placed ever more importance on the elimination of manual tasks and the use of automated systems to configure, troubleshoot, and control networks. The increasing dependence of society on information networks in such areas as commerce, health care, and air traffic control places a premium on reliable systems that can quickly control and isolate problems.
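The backup-server behavior of the resource management layer can be sketched briefly in Python. The host names, port, and plain-text request below are placeholders, and a real implementation would add health monitoring and alarm reporting to the element management layer.

    import socket

    SERVERS = ["primary.example.net", "backup.example.net"]   # hypothetical hosts

    def fetch(path: str, port: int = 80) -> bytes:
        """Keep a 'database service' available by failing over to a backup
        server when the primary element is unreachable."""
        last_error = None
        for host in SERVERS:
            try:
                with socket.create_connection((host, port), timeout=2.0) as s:
                    s.sendall(f"GET {path}\r\n".encode())
                    return s.recv(4096)
            except OSError as exc:      # element failure: try the next server
                last_error = exc
        raise ConnectionError("all servers unreachable") from last_error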

General Purpose Languages, Tools, Development Environments

Any piece of computer software, whether used in an operating system, a multimedia database, multimedia teleconferencing software, or a network management system, is a program. Any such program, in turn, must be written in a programming language. While the programming language in which software is written may be unimportant to a user of an application embodied in a program, if the software is to be extended, modified, or customized, the programming language in which this is done becomes critically important.

Early computer programming languages such as FORTRAN were designed primarily for numerical calculation and were aimed at freeing the programmer from having to consider the details of a computer's hardware when writing a program. Modern computer programming languages such as C++ and Ada were designed to support a spectrum of application domains. They also recognize large-scale software development as a continuing group process involving many individuals and thus support widely recognized software engineering principles: (a) programming is a human activity, and (b) software should be maintainable, portable, modular, and reusable.

Hundreds of computer programming languages have been developed, and yet only a relatively small handful have found widespread use. Rather than provide a comprehensive survey of languages, the committee examines here two of the most important languages in use today for application programming and system software development: Ada95 and C++.

Ada traces its birth back to 1975, when the Department of Defense (DoD) established requirements for a high-level language that could be used in all defense projects. In 1976, 23 existing languages were formally reviewed, and none was found to meet the requirements. It was concluded that a new language was needed, and the Ada language was born. Ada became an American National Standards Institute (ANSI) standard in 1983 and an International Organization for Standardization (ISO) standard in 1987. The 1983 Ada standard was updated in early 1995 and, like the original Ada, is intended for embedded and real-time systems. It also has a number of built-in features to support distributed computing. A major improvement found in Ada95 is its support for object-oriented programming and enhanced support for real-time systems.

The so-called "Ada mandate," Public Law 101-511 Sec. 8092, states that Ada should be used for all DoD software: "Notwithstanding any other provisions of law, after June 1, 1991, where cost effective, all Department of Defense software shall be written in the programming language Ada, in the absence of special exemption by an official designated by the Secretary of Defense." Thus, Ada has considerable visibility within the defense contracting industry. The extent to which Ada is used in non-government-sponsored software development is the subject of continual debate. Numerous commercial uses of Ada are documented (IIT, 1995).

C++ is another modern general-purpose language of roughly similar power to Ada. It is object-oriented and also has many features that modern software engineering practice considers important. It is a descendant of the C programming language, which was developed in the early 1970s at Bell Laboratories and has found widespread use since then. Standardization efforts are under way in both the ANSI (American) and ISO (international) groups to develop a C++ standard. It has been stated that C++ is by far the most popular object-oriented programming language and that the number of C++ users is doubling every 7.5 to 9 months. Trade magazines contain numerous reviews of compilers and development environments for C and C++, attesting to the widespread interest in this language.

Associated with these languages are a number of tools and development environments. These tools attempt to ease the programming task, organize teams of programmers for large projects, and "debug" programs effectively.
Examples of such tools include syntax-directed editors, source-code control systems, and symbolic debuggers.

A syntax-directed editor provides programming-language syntax checking and language-specific structuring as the program is typed in by the programmer. The advantage of this approach is that many of the "bugs" common in the early stages of program development can be eliminated before the first trial compilation of the program.

Systems for controlling source code, such as the Source Code Control System (SCCS) and the Revision Control System (RCS), serve as repositories for the source code comprising a program. They provide facilities for change management, which is critical in the management of large-scale software projects.

Symbolic debuggers allow a failed program to be analyzed in terms of the symbols used by the programmer to write the program. The advantage of this technology is that the programmer can more quickly isolate conceptual errors, because the error report is expressed in the semantic structures used by the programmer rather than those of the lower-level "object code" used by the machine.

SYSTEMS

In this section the committee gives examples of system-level applications of multimedia information technology existing or emerging in the commercial domain. These examples will provide substantiation for the recommendations in Chapter 4 as to which building block technologies the Army should adopt or adapt from the commercial domain and which building blocks are candidates for Army-specific development to produce proprietary advantages over its adversaries. This section covers four major systems: cellular and wireless telecommunications, electronic commerce, intelligent transportation systems, and residential information services.

Cellular and Wireless Telecommunications Systems

Revenue growth and subscriber growth in cellular systems have exceeded even the most optimistic projections of their early proponents.

Yearly revenues in North America have grown from $500 million in 1985 to over $4.6 billion by year-end 1993 (Leeper, 1995), an average annual revenue growth of 32 percent. Revenue growth rates are retreating slightly but are still expected to exceed 20 percent annually through 1996. Globally, in 1993 alone, the number of subscribers went from 21.1 million to 33.1 million, a growth of 56 percent. In North America, the growth was 47 percent, from 12.1 million to 17.7 million (Leeper, 1995). Note that revenues are not growing as fast as subscribers because of declining prices.

With more advanced electronic, battery, and antenna technologies, there has been a marked move toward personal, portable handsets. In 1987 only 5 percent of handset sales could be called "portable"; vehicular sets accounted for 78 percent of all sales and "transportable" units for 17 percent. In 1993, portable sales had jumped to a 36 percent share of the total and transportables to 35 percent; vehicular sales had declined to 29 percent of the overall market (Leeper, 1995).

Prices on cellular subscriber units have dropped to well within the means of mass-market consumers. In 1993, portable units in the United States had an average "walk-away" price of $343, with some units sold for as little as $43. Vehicular units averaged $264 and transportables $187. Despite their higher average price, portables remain the fastest growing segment of the market (Leeper, 1995).

Customers appear willing to pay a premium for portability and convenience, and technology has made very small and lightweight phones possible. The leading handset manufacturer has recently introduced a "flip-phone" model weighing only 3.9 ounces. The phones are becoming as small as is practical for human fingers to operate; further reduction in size may require a paradigm shift in packaging and other means of input and output.

Since it is still inconvenient in many circumstances to "wear" a phone and to answer it every time it rings, many users today wear a vibrating pager and use it to screen calls. The portable cellular phone stays in the briefcase until it is actually needed. This practice portends the day when a person may "wear" a personal wireless LAN (local area network) that links a pager, phone, and PDA.

Cellular and wireless users, particularly business users, are increasingly demanding more reliable, secure, nearly ubiquitous service with the ability to move around freely. They are demanding lighter, more reliable handsets with longer operation between battery charges. They are also demanding multipurpose units that can operate as cordless telephones, cellular telephones, and telephones that can access emerging personal communication networks.

Electronic Commerce

Electronic commerce refers to the conduct of business using distributed information networks that connect geographically distributed locations of the same firm, firms and their suppliers, firms and their customers, and multiple firms jointly creating and marketing products.

The banking industry is at the leading edge of electronic commerce in its use of information networks to conduct billions of dollars of transactions on a daily basis. The banking industry uses information networks to move money among accounts distributed worldwide and to monitor critical information needed to make financial decisions, such as the granting of loans and lines of credit.
These networks are also used to collect and process credit and debit card information from hundreds of thousands of merchants who accept these cards, to clear hundreds of millions of checks on a daily basis, and to operate automatic teller machine networks on a worldwide basis.

All major stock exchanges depend upon information networks to conduct hundreds of millions of trades each day. This dependency has become increasingly evident during recent outages at the NASDAQ exchange.

For more than a decade, electronic data interchange (EDI) has been used between firms and their suppliers to place orders, send invoices, and make payments on accounts payable. Some large firms will not deal with suppliers who cannot conduct their business using EDI.

Recently, there have been successes in various forms of electronic shopping (e.g., the Home Shopping Channel), and this success is fueling the emergence of on-line shopping services over which purchases can be transacted. Such transactions may involve the use of credit cards, debit cards, electronic checks, or anonymous electronic money.

In all of these existing and emerging applications, network integrity, network reliability, and security are major, ongoing concerns. Not only are these networks susceptible to theft of services, fraud, compromise of private information, and attempts to steal or counterfeit electronic funds, but they are also susceptible to disruptive attacks and accidents that can cause billions of dollars per day of economic damage.

Intelligent Transportation Systems

Departments of transportation at the federal and state levels have concluded that it will be increasingly difficult, if not impossible, to construct new roads to accommodate increasing traffic loads over the next several decades. Meanwhile, there is a need to increase highway safety, to improve traffic flows to decrease congestion resulting from accidents and stochastic traffic surges, and to track the locations of commercial and public vehicles. In response to this realization, an initiative known as Intelligent Transportation Systems (ITS, formerly known as Intelligent Vehicle/Highway Systems) has been established.

Consensus estimates are that government and private investments in ITS will total $210 billion cumulatively by the year 2011 (IVHS, 1992). In fiscal year 1995, the U.S. Department of Transportation budget includes $227.5 million in funds allocated to ITS research and development, operational tests, and other ITS-related initiatives and applications. These applications include highway sensors (including cameras) that will monitor traffic and send traffic information over wireless and wired communications networks to centralized traffic control nodes; traveler information systems that will distribute traffic reports to travelers in automobiles, trucks, and their homes and offices; positioning systems that will allow vehicles to track and report their locations to centralized nodes; 911-emergency systems that will allow travelers to report problems, including their precise locations (currently a serious problem in cellular emergency calls); map delivery systems that will guide travelers to their destinations; and others that are less relevant to this report.

These distributed systems and their associated appliances and applications will have to be reliable, secure, easy to use, and affordable in mass-market applications.

Residential Information Services

In 1994 the sales of home-based personal computers equaled those of television sets ($8 billion) (Markoff, 1995). It is anticipated that, over the next decade and beyond, the use of residential multimedia computers to access information (education, health care, personal finance) and to shop for and purchase information and consumer products will become commonplace.

In order for this vision to become a reality, residential applications must be intuitive and easy to use. There is an increasing awareness in government and industry that universal service will not be solely a matter of financial means but also a matter of the usability of information services and applications by those members of society who are not technologically oriented and have limited time to invest in learning how to use new technologies. Thus there is an ongoing, major R&D effort to achieve increasingly user-friendly graphical (and other) user interfaces and so-called plug-and-play capabilities.

For example, there is a large amount of commercial activity related to the design of set top boxes for interactive multimedia applications in the home. The terminology "set top box" refers to a piece of equipment, used in conjunction with a standard television set, that acts as an interface between the standard antenna input of the television set and an interactive or noninteractive multimedia communication service provided via a coaxial cable, a pair of copper wires, a satellite or terrestrial microwave antenna, or an optical fiber. The set top box may contain powerful processing and information storage capabilities, and it may provide a sophisticated user interface that allows the user to do such things as navigate menus of available programs and other information and to interact with the application the user has selected.
Much of this current activity relates to the design of a user interface that is easy and intuitive to use for the more than 95 percent of the general population that owns television sets. In addition, in many architectures for connecting end users to the information servers that provide multimedia to these set top boxes, the upstream (user-to-network) bandwidth is very limited, yet the response time to user requests (e.g., program changes) must be very short; there is therefore a strong emphasis on maximizing performance in the context of bandwidth limitations.

LESSONS LEARNED IN THE COMMERCIAL WORLD

The major focus of this section is on lessons learned in the commercial world in the application of multimedia information technologies. These lessons support the committee's recommendations that appear later in this report. The following sources of lessons learned will be addressed: architecture; standards; vertical versus horizontal structures; leveraging commercial off-the-shelf (COTS) technology; how business meets special technology requirements; leveraging legacy investments and fostering rapid acceptance of information technology; and adopting a spiral model of development.

Architecture

Because we can observe its entire life cycle, the IBM System 360 serves as an excellent case history from which to draw a few essential lessons about architecture. In the late 1950s and early 1960s, IBM was facing a problem. IBM was fielding an ever-widening variety of systems, few of them compatible with one another and each separately optimized for a particular set of applications. Further, each system required a separate training regimen for IBM's field support staff, leading to very high maintenance costs.

To solve the growing problem, IBM's executives commissioned the design of a single, logical architecture from which an integrated family of systems could be built. The result was the now famously successful System 360 (and its follow-on, System 370) family of systems.

What are the lessons to be learned from this successful commercial experience with architecture? Fred Brooks, System 360 Development Manager, says (Brooks, 1975):

System 360 architects had two almost unprecedented advantages: enough time to work carefully, and political clout equal to that of the implementors. The provision of enough time came from the schedule of the new technology; the political equality came from the simultaneous construction of multiple implementations. The necessity for strict compatibility among these served as the best possible enforcing agent for the specifications.

Regarding the architecture design, Brooks writes:

I will contend that conceptual integrity is "the" most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have many good but independent and uncoordinated ideas. Conceptual integrity does require that a system reflect a single philosophy and that the specification as seen by the user flow from a few minds.

The principal lessons here are that the creation of a communications and computing architecture requires that (a) a few resonant minds create the architecture, (b) they be given time to work, and (c) the architecture be enforced not only by edict but also by simultaneously constructing several of the system implementations that use the architecture.

The committee notes that cultural separations among existing functional groups, profit centers, divisions, and the like exist in all commercial companies and other institutions. Pride and esprit de corps within these groups are typically long-standing and well cultivated, and they typically have produced very positive results in the past. Unfortunately, they are also major obstacles to developing an integrated "enterprise" or "information" architecture. The challenge is to overcome these obstacles by taking steps like those taken at IBM in the context of System 360. Such successes are, to date, quite rare.

Standards

The commercial world places great value on the existence and widespread use of standards. Standards consist of sets of rules with which conformance to the standard can be evaluated. These rules can be applied at many layers in systems, ranging from physical connectors to the graphical user interfaces discussed elsewhere in this chapter.

Standards have the business advantage that, once a standard is defined, all commercial enterprises that wish to compete for the provision of components of an integrated system can exploit whatever competitive advantages they possess or can create, without having to be vertically integrated suppliers of the end-to-end system. Thus, standards are pro-competitive. The consumer derives advantage from the fact that technologies adhering to a standard are interoperable. Interoperability means that one of a set of interoperable components can be procured or upgraded independently of the others. For example, all compact disc players use the same compact disc, although significantly different sampling schemes and signal processing technologies can be applied, resulting in a variety of consumer choices, from low quality to audiophile quality.
Industry standards emerge in two ways, which can be and often are interrelated. The first is through the use of a standards body. The purpose of the standards body is to provide an impartial design and selection of a standard. The most effective standards bodies rely on groups of technical experts in an area to define a useful and effective standard. Examples are the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and the ATM (Asynchronous Transfer Mode) Forum. IEEE standards usually relate to computer and communications devices and their functions. Examples include standard formats for the computer representation of floating point numbers (IEEE 754) and standard interfaces for a portable operating system (IEEE 1003, POSIX).
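The practical meaning of such a standard is bit-for-bit interoperability: every conforming implementation encodes a given number identically. A quick illustration, using a present-day scripting language rather than anything specific to the standards bodies discussed here (the choice of 0.15625 is arbitrary):

    import struct

    # Encode 0.15625 as an IEEE 754 single-precision value and show its bits.
    bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]
    print(f"{bits:032b}")   # 00111110001000000000000000000000
    # sign 0 | exponent 01111100 (124, i.e., -3 after the bias of 127)
    # | fraction 0100... -> value = +1.25 * 2**-3 = 0.15625
    # Any conforming implementation, on any machine, produces these same bits.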

ISO standards include the Open Systems Interconnection (OSI) standard; this standard defines a multilayer protocol model that was carefully defined and accepted as a standard before implementation began. This case illustrates a risk of standardization by committee: the committee can be bypassed by a second form of standardization, the de facto standard based on user preference. In the case of OSI, implementation of the Internet protocol described earlier in this chapter proceeded without a complete formal standardization process, and yet it has become the de facto standard for Internet communications.

De facto standards are a result of market dynamics. If a clear standard is not established when a company wishes to enter a market, it can either wait for a standards body to put forth a standard to which it will adhere, or it can take its own approach and presume that it will achieve sufficient market share to become one of a small set of accepted solutions. An example where this has occurred is in the design of command sets for asynchronous modems, where a manufacturer (Hayes) developed a command set that is a de facto standard. Such standards are sometimes developed as a byproduct of other competitive advantages possessed by a company. In the Hayes case it was a flexible microprocessor-augmented modem called the SmartModem, which was a huge commercial success; the Hayes command set has outlived the company. Once established, such standards are violated at considerable commercial risk.

Official standards and de facto standards can be the same if the official standard is available early enough that companies see an advantage in adhering to it, or if the de facto standard becomes officially recognized by a standards body. The former case is exemplified by the ATM Forum, which specifies standards for a variety of protocol layers in ATM networks. The latter case, while pragmatic, can be fraught with difficulty, as the company that originated the de facto standard may be given a further advantage by ratification of its technology as a standard. Standards bodies have traditionally been reluctant to ratify a situation that might, by giving advantage to a particular vendor, give the appearance that they are not impartial, although recently there has been a trend toward the adoption of de facto standards by standards forums like the Open Software Foundation.

Companies address their concerns with standards by becoming active participants in standards bodies when technological standards may affect them or be positively influenced by their input. Companies put their concerns into the deliberative process of the standards body. For example, computer manufacturers were highly influential in the design of the ATM Adaptation Layer 5 standard, which allowed for the overlapped operation of checksum computation and data movement that is highly desirable in computer networking environments.

Vertical Versus Horizontal Industry Structures

In the first several decades of its existence, the computer industry was vertically integrated. Each firm (e.g., IBM, Digital Equipment Corporation) designed, developed, and sold all of the hardware and software needed by its customers to implement their computing applications. In the past 15 years, the computer industry has assumed a horizontal structure. Intel, Motorola, and others make microprocessors and memory chips. Compaq, IBM, Apple, and many others make personal computers and a wide variety of plug-in boards and peripherals. Microsoft, IBM, Apple, and others make operating systems. A large number of firms sell middleware and application software (The Economist, 1993).

The transition to a horizontal structure has been driven by several factors. Customers demanded open system solutions that would allow them to mix and match products from multiple suppliers; this necessitated the opening of interfaces, which allowed competing firms to sell horizontally structured products. Economies of scale and a very competitive marketplace made it necessary for firms to focus on their core strengths and to sell into as large a market as possible.

This same transition from a vertical structure to a horizontal structure is affecting many other industries. Global competition is causing firms to focus on their differentiating advantages and to outsource what they can get better or cheaper from others. For example, an airline may determine that its reservation system should be a separate business rather than a vertically integrated part of a business that includes the component that actually flies passengers. The airline may also outsource its maintenance and meal preparation services.
It is not clear that each airline needs to maintain its own baggage handling staff. What to keep and what to outsource is a critical decision regarding where one wants to differentiate from competitors.

In the long distance telephone industry, competing firms have been differentiating themselves via the capabilities of their billing systems to support complex discount plans. It is conceivable that someday telephone companies will outsource their networks and differentiate themselves on the basis of marketing and customer support services.

A lesson learned is that, to achieve superiority (beat the competition) in information-technology-intensive businesses, one should focus development efforts on areas where one intends to achieve a differentiating advantage and should outsource everything else.

Leveraging Commercial Off-the-Shelf Technology

The commercial telecommunications industry is one of the largest consumers of multimedia information technologies. It is therefore useful to examine recent trends within the telecommunications industry in leveraging COTS multimedia information technologies. Much can be learned from successful companies in this industry.

For example, MCI and SPRINT, two of the largest providers of inter-exchange ("long distance") telecommunications services ("carriers") in the United States, conduct only limited R&D activities. They focus on defining the overall architectures of the networks they wish to deploy and the associated management systems, and on tracking technology trends. They carefully determine how they wish to differentiate themselves from their competitors (e.g., in such areas as billing systems and customer service), and they commission the development of those differentiating capabilities using commercial off-the-shelf technologies (i.e., they focus on implementing applications of commercial off-the-shelf technologies, not the underlying technologies themselves).

The providers of cellular telecommunications services have relied on their suppliers to produce innovations in technology, while they (the providers) have focused on applying that technology in their networks. When members of the cellular telecommunications industry determine the need for a new capability (e.g., inter-network signaling to enable nationwide roaming), they call upon their supplier community to produce proposals for how this might be implemented. Cable television companies follow a strategy similar to that of the cellular companies, maintaining only a modest R&D effort focused on defining requirements for new system architectures and capabilities.

Recently, the local exchange carriers (Ameritech, Bell Atlantic, and others) have been moving their R&D focus more toward applications of technology and differentiation from their competitors based on lower cost structures and superior customer service enabled by the skilled application of commercial off-the-shelf technologies obtained from their suppliers. They are placing less emphasis on investing in the creation of the underlying technologies themselves and are relying instead on their suppliers to make those investments. However, they do spend considerable effort in understanding technology trends in order to anticipate both opportunities and competitive threats that might result from lower costs or new capabilities enabled by advances in underlying technologies in all of the layers of the generic technical architecture described earlier in this chapter.

In the telecommunications marketplace, a specific example of this approach involves the introduction of new fiber optic systems based on Synchronous Optical Network (SONET) standards. These systems are more cost-effective and more easily reconfigured than the prior generation of fiber optic systems. The supplier community produces these systems and makes them available to all carriers. The carriers focus on applying these systems in their evolving network architectures to reduce their costs and to obtain the benefits of more flexible and reliable networks. Where carriers attempt to differentiate themselves is in the use of management systems that allow them to be more responsive than their competitors in filling orders for new services carried on their networks and in quickly responding to service interruptions caused by cable cuts and equipment failures.

How Business Meets Special Technology Requirements

Business tends to solve problems using as much commercial technology as possible, since business is loath to engage in R&D to solve immediate problems. It is worth studying an example in detail to understand the approach.

A major investment bank, Morgan Stanley, needed a system to support trading operations in its New York City trading areas. The reliability requirements of the system were extremely high. The Morgan Stanley approach to this problem was at the system level (i.e., a system of systems to provide high reliability using commercial components). In this case, the commercial systems were redundant engineering workstations connected by dual Ethernet LANs. System software was written to automatically reroute work and network traffic in the case of failure. Thus the system was created from commercial technology, using redundant commercial components in a nonstandard way. The nonstandard result was almost exactly twice as expensive, but it achieved a multiplicative gain in reliability for this cost, plus the addition of a small amount of software and some management discipline.
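The "multiplicative gain" can be made concrete with a simple independence assumption. The failure probability below is illustrative only, not Morgan Stanley's actual figure:

    p_single = 0.01              # assumed outage probability of one workstation
    p_pair = p_single ** 2       # an independent pair is down only if both fail
    print(p_single, p_pair)      # 0.01 -> 0.0001, roughly 100x fewer outages

Doubling the hardware thus buys a hundredfold reduction in outage probability under this assumption; the gain erodes to the extent that failures are correlated (e.g., shared power or software faults).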
Thus a somewhat ad hoc and opportunistic approach led to a solution that met Morgan Stanley's needs via the innovative application of COTS technologies. The key to success was in focusing on meeting the need, while leaving the solution (the detailed requirements) flexible.

Leveraging Legacy Investments and Fostering Rapid Acceptance of Information Technology

Corporations and institutions have been deploying computer-based systems and applications for 40 years. These systems are based on a wide variety of diverse technologies and architectures and were typically not designed to interoperate with each other in the context of an overall enterprise-wide architecture. Collections of such systems, which represent an embedded investment by the organization or enterprise, are typically referred to as "legacy systems." The issue of what to do with legacy systems is an old one in the commercial world, but it is growing in importance as the number and complexity of legacy systems increase and as the accompanying maintenance costs and update backlog grow. In addition, the allure of more modern systems with updated technologies has made the weaknesses of legacy systems more prominent.

The technical problem of designing a new system to replace a legacy system is usually the easiest part of the problem. Much more difficult are the cost justification of replacement, the management of risk (at first, the new system might not work as well as the old), and the reluctance of users and system operators to learn the way a new system works. On the other hand, most engineers prefer to work on new-system design rather than upgrading old systems, and legacy-system expertise becomes more and more scarce as time goes by.

While there is no single preferred method of dealing with the legacy system dilemma, the following are suggested alternatives.

Alternative 1

The first alternative would develop a new-technology, wholesale replacement for the legacy system, with no change in functionality or user interface. This approach has the advantages that the requirements may be well understood (see below) and that there is minimal retraining for end users. Ostensibly there will be attractive future savings in maintenance costs, and the new system will accept upgrades more quickly and gracefully.

The difficulty is that, in any given year, it is always cheaper to carry the legacy system a bit further than to undertake a new development. In addition, all of the requirements being met by the old system may not be well documented; therefore, the new system may initially fall short of meeting all current business requirements. Moreover, for large systems, such "big bang" approaches to the replacement of legacy systems have almost always failed to meet schedules and budgets and have often resulted in major project failures where hundreds of millions of dollars of development have been written off.

Alternative 2

This alternative would develop a new-technology replacement for the legacy system, with new features and capabilities. This approach is similar to Alternative 1 above, except that it has the additional advantage of offering new features that may answer long-standing requests for legacy system upgrades. Such new features may add risk, delay, cost, and new user-training requirements.

Alternative 3

The third alternative would freeze changes to the legacy system and "surround" or encapsulate it within a new system. Over time, legacy system functions can be replaced by new-technology elements until the legacy system is totally replaced. This approach has the advantage of leveraging capabilities already present in the legacy system without making further direct investments in it. It has the disadvantage that few legacy systems can be subsumed easily within a new system. (A sketch of this encapsulation pattern follows the list of alternatives below.)

An example of this approach is to make existing legacy system data accessible via modern graphical user interfaces, which can access multiple legacy systems and new systems in an intuitive, easy-to-use manner. This approach has been successfully employed to transition large legacy systems used to manage and automate telephone company operations.

Alternative 4

The last alternative would (a) continue to use and maintain a legacy system but "cap" the number of users, and (b) develop a new system for new users (or some subset of the old users) and develop interworking arrangements with the legacy system as required. This approach has the advantage of limiting the expansion of the legacy system while simultaneously limiting the risk associated with wholesale replacements. If the new systems truly offer lower costs and increased capabilities, then it becomes easier to plan for the legacy system replacement because the benefits will be known in advance. This approach has the disadvantage that it may not be appropriate for large, tightly integrated systems. In particular, the interworking problems with the legacy system could be substantial.

Deciding which path to pursue is ultimately based on such things as cost-benefit trades and the culture of the organization facing the problem. In any given budget year, it is almost always cheaper and politically safer to "get one more year" out of a legacy system than to attempt replacing it. Alternatives 3 and 4 above can be used to control risk, but ultimately it takes farsighted managers who encourage risk taking by subordinates to pursue a legacy system replacement program.
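The following Python sketch shows the encapsulation idea from Alternative 3. The legacy interface, field names, and data are hypothetical stand-ins for whatever a real legacy system exposes.

    class LegacyInventory:
        """Stand-in for a frozen legacy system; only its existing call is used."""
        def qry(self, code):
            # Terse legacy-style call returning a legacy-format record.
            return {"CODE": code, "QTY": 42}

    class InventoryService:
        """New-system interface that surrounds and hides the legacy system."""
        def __init__(self, backend):
            self._backend = backend
        def quantity_on_hand(self, part_number: str) -> int:
            record = self._backend.qry(part_number)   # legacy call hidden here
            return record["QTY"]

    # Callers program against InventoryService; when the legacy element is
    # finally retired, only the backend object changes, not the callers.
    service = InventoryService(LegacyInventory())
    print(service.quantity_on_hand("A-100"))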
Adopting a Spiral Model

In recent years, industry has moved from its traditional model of software development, sometimes pejoratively referred to as a "waterfall" model, to a new model of software development referred to as a "spiral" model (Boehm, 1987).

In the traditional waterfall model, development proceeds in one sequence through the following phases: system requirements specification, system design, software coding, and system testing, with any problems found in system testing generally repaired by iterating back to the design or coding phases. The waterfall metaphor derives from the one-way flow of this process down a sequence of, for the most part, irreversible steps.

The difficulty with this process is that, in complex systems, requirements that are set early on may not adequately capture the needs of real users. In addition, some requirements may imply development difficulties and corresponding costs that are out of proportion to

their user benefits. Those who formulate the requirements may not be aware of the latest emerging technologies and their associated or potential capabilities, and thus they may specify requirements that cannot leverage these capabilities. As a result, large systems may be developed that fail to meet user needs, take longer to develop, and are more costly than necessary.

To address this problem, a "spiral" model of development has been adopted by most developers of large, complex software systems. In the spiral model, one iterates quickly through a cycle of requirements specification, development of a prototype that captures the most important aspects of the requirements (prototyping), and testing with real users. In this iterative process, one can quickly discover user needs that are not met (e.g., the system is hard for real users to use), and one can quickly discover requirements that drive cost and total development time out of proportion to their intended benefits. The spiral metaphor derives from the rapid cycling that occurs through the phases of requirements specification (and respecification), prototype development, and testing.

Experience shows that the spiral model of development leads to lower development costs, more rapid development, and substantially greater satisfaction of real user needs. Key to this process is the use of prototypes that simulate the most important aspects of the system under development but do not implement all of the detailed requirements on each cycle through the spiral model. As an illustration, an early mock-up of a user interface could be done with something as simple as Post-it notes stuck on a board to simulate pull-down menus. A simulation of a database access capability need not be connected to the real database system. It could, instead, be connected to a simulated database system that imitates the delays that will occur in returning an answer to a query and illustrates how the answer will be presented to the user.
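Such a stub might look like the following Python sketch. The delay range and the canned answer are assumptions chosen only so that the prototype's user interface can be exercised realistically before any real database exists.

    import random
    import time

    class SimulatedDatabase:
        """Prototype stand-in for a real database: imitates query latency and
        the shape of an answer without any database actually existing yet."""
        def query(self, request: str):
            time.sleep(random.uniform(0.2, 2.0))   # assumed realistic delay
            return [{"part": "EXAMPLE-PART", "quantity_on_hand": 42}]

    db = SimulatedDatabase()
    print(db.query("quantity on hand for part EXAMPLE-PART"))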
Process Improvement

For all its importance, the production of software, especially large-scale system software, is still as much art as it is science. To address the problem, the Software Engineering Institute (SEI) of Carnegie Mellon University developed a Capability Maturity Model for software organizations wishing to improve their proficiency (Humphrey, 1989). The approach provides an explicit road map for change and a way for an organization to keep score on its progress.

Specifically, the SEI Capability Maturity Model allows an organization to rate itself and track its progress through five successive "levels" of proficiency. Level 1, the lowest level, is characterized by chaos and unpredictability in cost, schedule, and quality. Level 5, the highest level, is one in which cost, schedule, and quality have become highly predictable, based on quantitative, repeatable measurements and well-established procedures. The intermediate levels allow an organization to track its evolution toward Level 5. The SEI Capability Maturity Model has become well established in the software industry. Most large software organizations conduct self-evaluations, and many are evaluated by outside consultants who specialize in doing so.

The approach the SEI took is quite general: it is based on the writings of P. B. Crosby and the "quality maturity structure" that he defined (Crosby, 1979). The fundamental (and common sense) notion taught by Crosby is that an organization wishing to make a positive change in the way it does business "must" find a way to treat its processes as measurable, trackable, and controllable.

SUMMARY

This chapter has outlined commercial multimedia technologies to provide support for the analysis contained in Chapter 4. The principle was to examine building block technologies selected on the basis of a generic layered architecture, which was introduced at the beginning of this chapter. The intent was to describe each of these building blocks, with a focus on their current status and likely trends. In addition, there was discussion of examples of commercial, system-level applications of multimedia technologies. Finally, there was a review of some important lessons learned in the commercial world with respect to these technologies.

This chapter has shown that multimedia information technologies and the capabilities they enable are evolving rapidly under the pressure of commercial market forces and underlying technological advances. This status portends well for the availability of solutions from the commercial world, which will be addressed in Chapter 4.

REFERENCES

ATM Forum. 1995. ATM basics. Universal Resource Locator (URL) http://www.atmforum.com/atmforum/atm_basics/ (accessed April 28, 1995).
Balmer, D. 1986. CASM: The right environment for simulation. Journal of the Operational Research Society 37:443-452.
Becker, H. 1995. Library of Congress digital library effort. Communications of the ACM 38(4):23-28.
Bell, T. 1995. Technology 1995. IEEE Spectrum 32(1):24-25.
Boehm, B. W. 1987. Software Engineering Economics. Englewood Cliffs, N.J.: Prentice-Hall.

Borman, D. A. 1989. Implementing TCP/IP on a Cray computer. ACM Computer Communications Review 19(2):11-15.
Bouwens, C., J. Brann, B. Butler, S. Knight, J. Lethert, M. McAuliffe, B. McDonald, D. Miller, D. Pace, B. Sottilare, and K. Williams. 1993. The DIS vision: A map to the future of distributed simulation (comment draft). Institute for Simulation and Training (prepared by the DIS Steering Committee), October.
Brooks, F. P., Jr. 1975. The Mythical Man-Month. Reading, Mass.: Addison-Wesley.
Casner, S., and S. Deering. 1992. First IETF (Internet Engineering Task Force) internet audiocast. ACM Computer Communication Review 22(3):92-97.
CDMA. 1994. Global mobile satellite systems comparison. CDMA Technology Forum, San Diego, March.
Crosby, P. B. 1979. Quality Is Free: The Art of Making Quality Certain. New York: McGraw-Hill.
DoD Modeling and Simulation (M&S) Management. 1994. DoD Directive 5000.59. Washington, D.C.: Office of the Under Secretary of Defense (Acquisition).
The Economist. 1993. Reboot system and start again. 326(7800). February 27.
Fox, E., R. Akscyn, R. Furuta, and J. Leggett. 1995. Digital libraries. Communications of the ACM 38(4).
Geppert, L. 1995. Solid state. IEEE Spectrum 32(1):35-39.
Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, Mass.: Addison-Wesley.
Halfhill, T. 1993. PDAs arrive. Byte 18(11):66.
Hammerstrom, D. 1993. Working with neural networks. IEEE Spectrum 30(7):46-53.
Hayes-Roth, F., et al., eds. 1983. Building Expert Systems. Reading, Mass.: Addison-Wesley.
Henriksen, J. 1983. The integrated simulation environment. Operations Research 31:1053-1073.
Humphrey, W. S. 1989. Managing the Software Process. Reading, Mass.: Addison-Wesley.
IEEE (Institute of Electrical and Electronics Engineers). 1990. Special Issue on Satellite Communications. Proceedings of the IEEE 78(7).
IEEE. 1995. Special Issue on Wireless Personal Communications. IEEE Communications Magazine 33(1).
IIT. 1995. Ada at work. IIT Research Institute, Lanham, Md. Prepared for the Ada Joint Program Office, Arlington, Va., January.
Ivanek, F., ed. 1989. Terrestrial Digital Microwave Communications. Norwood, Mass.: Artech House.
IVHS (Intelligent Vehicle Highway Society of America). 1992. Strategic plan for intelligent vehicle highway systems in the United States. IVHS America [now ITS America] 2(5).
Juliessen, E. 1995. Small computers. IEEE Spectrum 32(1):35-39.
Leeper, D. 1995. Motorola Market Research Data. (Unpublished internal company reports.)
Macedonia, M. R. 1994. MBone (Multicast Backbone) provides audio and video over the Internet. Computer 27(4):30-36.
Manders, C., and W. Wu. 1991. A performance measure for ISDN. ITU Telecom 91 Technical Proceedings, Geneva, October.
Marefat, M. 1993. Virtual Teaching Environments: A Framework, Current Bottlenecks, and Research Vision. Technical Report. ECE Department, University of Arizona.
Markoff, J. 1995. Approaching a digital milestone. New York Times. January 7.
Murray, K., and S. Shepherd. 1987. Automatic synthesis using automatic programming and expert systems techniques toward simulation modeling. Proceedings of the Winter Simulation Conference, Institute of Electrical and Electronics Engineers, New York.
Nahrstedt, K., and R. Steinmetz. 1995. Resource management in networked multimedia systems. IEEE Computer 28(5):52-63.
Oren, T., and B. P. Zeigler. 1979. Concepts for advanced simulation methodologies. Simulation 32(3):69-82.
Padgett, J. E., C. G. Gunther, and T. Hattori. 1995. Overview of wireless personal communications. IEEE Communications Magazine 33(1):28-41.
Purday, J. 1995. The British Library initiatives for access projects. Communications of the ACM 38(4).
Reddy, Y., M. S. Fox, N. Husain, and M. Roberts. 1986. The knowledge-based simulation system. IEEE Software 3(2):26-37.
Rozenblit, J. W., J. Hu, T. Gon Kim, and B. P. Zeigler. 1990. Knowledge-based design and simulation environment (KBDSE): Foundational concepts and implementation. Journal of the Operational Research Society 41(6):475-489.
Ruiz-Mier, S., and J. Talavage. 1989. A hybrid paradigm for modeling of complex systems. In Artificial Intelligence, Simulation and Modeling. New York: Wiley.
Shokoohi, F. 1995. Personal communication to S. D. Personick, Chairman, Committee on Future Technologies for Army Multimedia Communications.
Steinmetz, R., and K. Nahrstedt. 1995. Multimedia: Computing, Communications, and Applications. Englewood Cliffs, N.J.: Prentice-Hall.
Tanir, O., and S. Sevinc. 1994. Defining requirements for a standard simulation environment. Computer 27(2):28-34.
Wallace, G. K. 1991. The JPEG still picture compression standard. Communications of the ACM (Association for Computing Machinery) 34(4):30-44.
Werner, K. 1993. The flat panel's future. IEEE Spectrum 30(11):18-26.
Zeigler, B. P. 1990. Object-Oriented Simulation with Hierarchical, Modular Models. San Diego: Academic Press.
Zeigler, B. P., S. Vahie, and D. Kim. 1994. Alternative Analysis for Computational Holon Architectures. Bolt Beranek and Newman Technical Report. Cambridge, Mass.
Zhang, L., S. Deering, D. Estrin, S. Shenker, and D. Zappala. 1993. RSVP: A new resource ReSerVation protocol. IEEE Network 7(5):8-18.
