Review of Relevant Commercial Technologies

In this chapter, the committee describes trends in commercial multimedia building block technologies. The technologies that are described were selected by the committee in the context of a layered architecture that is relevant to generic multimedia applications. In addition, there is discussion of some commercial, system-level applications of multimedia information technologies and some important lessons learned in the application of multimedia technologies in commercial venues. This chapter serves as a technical foundation for recommendations made later in this report.

MULTIMEDIA ARCHITECTURE

The committee configured a generic layered architecture as a basis for identifying building block technologies that are relevant to Army multimedia communications. This multimedia architecture is depicted in Figure 3-1.

FIGURE 3-1 Generic architecture for multimedia communications. Layers, from bottom to top: I. Physical Platforms; II. System Software; III. Middleware; IV. Generic Applications/Enablers; V. Specific Applications.

The purpose of the committee's generic architecture is primarily for discussions of how a set of relevant building block technologies relate to each other. It does not represent a fully fleshed-out technical architecture.¹ The importance of a technical architecture is discussed in Chapters 4 and 6 and in the related recommendations in Chapter 7 of this report.

The committee's architecture consists of several generic layers. The naming of the various layers of the architecture explicitly reflects the fact that multimedia technologies are strongly dependent upon software.

With the generic architecture as a framework, the committee selected relevant multimedia technologies and overlaid them onto the various layers of the architecture (see Figure 3-2). Generally, but not always, higher layer technologies employ the services of lower layer technologies. Note that the building block technologies are numbered in Figure 3-2 from bottom layers to top layers in order to facilitate later discussions.

The bottom layer of Figure 3-2 (Layer I) includes physical devices, subsystems, and systems (e.g., lightweight portable terminals, storage systems, and communications subsystems and systems to support people on the move). The next level (Layer II) provides protocols for interconnecting subsystems, systems, networks, and gateways; operating systems for managing computational resources; and distributed computing environments for managing distributed software processes. The middleware (Layer III) is built on top of the lower level system software and provides capabilities such as information filtering, database management, and user-friendly multimedia user interfaces. Layer IV provides generic applications/enablers such as multimedia teleconferencing capabilities and groupware, which can be tailored for specific applications (e.g., simulation) at the top layer (Layer V). Woven through the architecture are network management and security technologies to provide reliable, secure information processing (Layer VI).

¹A fully fleshed-out technical architecture would not merely say that certain building block technologies lie in certain levels of the architecture. It would specify, for illustration, the Internet protocol or one or more alternative protocols as the protocols to be used for specific applications; it would specify a graphical user interface (e.g., Motif or one or more alternatives) as the graphical user interface to be used.
Where more than one alternative is specified as acceptable for a specific building block, the documentation supporting the technical architecture would explain why there is more than one acceptable alternative, provide guidance regarding which alternatives should be used in which types of applications, and explain how interoperability is to be achieved between applications using different building block alternatives.

FIGURE 3-2 Building block technologies in the generic multimedia architecture:

Layer V, Specific Applications: 14. Simulation: systems and applications.
Layer IV, Generic Applications/Enablers: 13. Multimedia messaging capabilities; 12. Decision support tools, groupware, multimedia teleconferencing; 11. Multimedia information access capabilities.
Layer III, Middleware: 10. Multimedia information analysis and processing building blocks and middleware services; 9. User-friendly multimedia user interfaces; 8. Multimedia database management systems; 7. Information filtering systems.
Layer II, System Software: 6. Distributed computing environments and operating systems; 5. Protocols and related functionality to support communications.
Layer I, Physical Platforms: 4. Information capture technologies; 3. Communications platforms that support people on the move; 2. Storage systems for multimedia information; 1. Lightweight, rugged, portable appliances and terminals.

The description of building block technologies in the sections below follows the arrangement of Figure 3-2. The value of doing this will become more apparent in Chapter 4, where it will be shown that these technologies and layered architecture concepts can be used to clarify the recommendations for how the Army should proceed to acquire the technologies to meet its operational needs and functional requirements.

The building block technologies described range from physical technologies, such as hand-held multimedia appliances and physical storage subsystems, to technologies that are embodied in algorithms and software (e.g., speech recognition and distributed computing technologies). They are discussed in the order of the six layers of the generic technical architecture (Figure 3-2).

In the discussions of these building block technologies, the focus is on the current status and likely trends in each technology, with particular emphasis on how large the respective commercially driven research and development (R&D) efforts are likely to be. These discussions have been kept brief to avoid unnecessary technical detail and include only what is needed to support the recommendations in Chapter 4.

BUILDING BLOCK TECHNOLOGIES (LAYER I - PHYSICAL PLATFORMS)

Building block technologies discussed under Layer I, Physical Platforms, of the generic architecture (Figure 3-2) include lightweight, rugged, portable appliances and terminals; storage systems for multimedia information; communications platforms that support people on the move; and information capture technologies.

Lightweight, Rugged, Portable Appliances and Terminals

During the first half of this decade, portable laptop and palmtop computers have grown into a multibillion dollar industry (based on sales of these computers in 1994). At the same time, these appliances have shrunk to the point where laptop computers provide the full functionality of bulkier desktop computers, except for the smaller display and keyboard, and weigh less than five pounds. Personal digital assistants (PDAs) for general purpose and special purpose applications are emerging in the marketplace.
For example, they are being used by rental car agencies to expedite check-in and by delivery services to track the real-time status of packages in transit. In this section we examine the hardware component technologies (the processing chips, memories, storage devices, displays, and batteries) underlying these portable computing appliances and terminals to illustrate the rapid technological evolution of these appliances and terminals that is enabled by the underlying technology trends and driven by commercial market opportunities.

Processors

At the heart of every computing device is a central processing unit (CPU), a processing chip that "executes" the instructions the computer is programmed to perform. While CPU capabilities can be characterized by various metrics, the MIPS (millions of instructions per second) that a processor can execute is one commonly accepted measure of performance. In 1994, microprocessor CPUs were announced with a rating of 1,200 MIPS, an impressive figure given that CPUs found in many personal computers produced in the late 1980s and early 1990s were rated in tens of MIPS. Such remarkable, order-of-magnitude increases in processor speeds have been commonplace over the past decade, and there is every indication that processor capabilities will continue to increase in the foreseeable future (Geppert, 1995).

Memory

The increasingly sophisticated operating systems and application programs used in today's computers are often characterized as being "memory hungry." They require more dynamic random access memory (DRAM) than their earlier counterparts in order to operate efficiently. One indication of this trend is the fact that the average amount of DRAM memory used in a personal computer has grown from 0.5 megabytes in 1984 to 8 megabytes in 1994, a 16-fold increase (Geppert, 1995). Fortunately, this increase in memory needs has been matched by a 20-fold decrease in memory costs over the 1982-1992 period.

Trends in increasing memory density (i.e., the number of bits on a single chip) are expected to continue for the foreseeable future, with a factor-of-four increase every few years. Sixteen-megabit DRAM chips are now common, and memory chip producers are assembling manufacturing facilities for 64-megabit chips. Hitachi and NEC announced early designs for a 1-gigabit memory chip at a conference in February 1995. While such gigabit DRAMs are not expected to be produced in volume until the first years of the next century, a continued dramatic trend in increasing memory densities is evident.

Permanent Storage: Disk Drives and Flashcards

The miniaturization trends for CPUs and memory noted above are also evident in the area of disk drives, the traditional storage media for data that must be stored for extended periods of time. Disk drives for laptops are now available in so-called PCMCIA cards (Personal Computer Memory Card International Association), which are lightweight, credit-card-sized devices that can be easily plugged into a laptop computer.

Disk drives have mechanical parts and thus require sophisticated technology (such as liquid-bearing motors) for use in laptops, where ruggedness is a concern. A recent competitor to disk drive technology, and one that is based on semiconductor technology containing no movable parts, is the so-called PCMCIA "flashcard." Currently, flashcards have less storage capacity than disk drives and are significantly more expensive, costing about 15 times more than comparable disk drives. However, as flashcard technology is relatively new, prices may fall as the technology matures. Current flashcards can store 16 megabytes of data; 256-megabyte flashcards are expected to be available in 1997.
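The growth figures quoted above imply steady compounding. A small back-of-the-envelope calculation (a sketch only, using the numbers cited in the text) makes the implied rates explicit:

```python
import math

# Average PC memory grew from 0.5 MB (1984) to 8 MB (1994), a 16-fold increase.
years = 1994 - 1984
growth = 8.0 / 0.5
annual = growth ** (1.0 / years)              # compound annual growth factor, ~1.32
doubling = math.log(2) / math.log(annual)     # implied doubling time, ~2.5 years

# Chip density: a factor-of-four increase "every few years" takes the 16-megabit
# parts common in 1995 through 64 Mbit and 256 Mbit to the 1-gigabit designs
# announced in early 1995 in just three further generations.
density_generations = [16 * 4 ** g for g in range(4)]   # [16, 64, 256, 1024] megabits

print(f"average DRAM growth: {annual:.2f}x per year, doubling every {doubling:.1f} years")
print("density generations (Mbit):", density_generations)
```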
Display Technology

The need for lightweight and rugged displays for portable computers and the quest for flat screen television sets are the driving forces behind display technology. Laptop display technology can broadly be classified into passive liquid crystal display (LCD) technology and active LCD technology. Active matrix LCD (AMLCD) technology is the more recent of the two and was developed to overcome some of the difficulties associated with passive displays. Companies have spent an estimated $3 billion to commercialize AMLCDs. The cost of a manufacturing facility for AMLCDs is very substantial; it is estimated that a single state-of-the-art AMLCD production line exceeds $100 million (Werner, 1993). In addition to ongoing commercial AMLCD research, there is continuing commercial research on developing portable displays by improving upon passive LCD technology. Another commercial initiative of interest is a push to develop lightweight and durable displays based on plastic LCDs.

The emergence of high definition television (HDTV) as a consumer technology will fuel consumer demand for low-cost high resolution displays, particularly in the larger sizes usually associated with television viewing. This demand should, in turn, further stimulate investments by commercial display manufacturers in all types of high resolution displays, including flat panel displays using the technologies described above. The large physical size of conventional cathode ray tube (CRT) HDTV receivers (particularly the depth) will make them impractical for many households. The commercial market opportunity available to any company that can create a large flat panel display technology suitable for residential entertainment applications is enormous, and this drives the large investments being made with respect to research on new manufacturing methods and entirely new approaches to creating displays.

The underlying display technologies described above are being incorporated into novel commercial products such as virtual reality glasses or goggles and automotive "heads-up" displays. Virtual reality glasses or goggles use small liquid crystal elements and combinations of lenses and mirrors to create a virtual image that appears to the wearer to be projected in front of the wearer as a large image at a distance of several feet. In some cases, these virtual reality glasses or goggles are used to immerse the viewer in a visual environment that fills the viewer's sensory visual field of view and thus creates the sensation that the viewer is part of the three-dimensional environment perceived. Heads-up displays use projected images to superimpose information on a window or screen through which the viewer can observe other important information (e.g., instrumentation information superimposed on an automobile windshield). These emerging technologies tend to be at the high end of normal mass market consumer price points (e.g., more than several hundred dollars) but are expected to experience rapidly declining prices as mass production and competition take hold.

Power and Batteries

Power consumption and better battery technology are also key technology factors in lightweight, portable computing devices. Many microprocessors, application-specific integrated circuits, and memory chips are now being manufactured with decreased power requirements. Many now run at almost half the power requirement of previous chips, and many processors have a sleep mode in which only a minimum amount of power is consumed.

Batteries are used in a wide variety of applications ranging from small batteries in toys and hand-held consumer appliances to large storage batteries in automobiles and for backup power systems. Although not as dramatic as the progress that occurs each year in the performance of semiconductor-based components, there has been a steady improvement in battery technology and associated performance over the last several decades. Alkaline batteries have become very popular as primary sources for small appliances. A variety of new battery types, such as nickel-cadmium and lithium-ion batteries, have emerged as rechargeable power sources for appliances such as cellular telephones, cordless telephones, and notebook computers. Recent advances, such as plastic lithium-ion batteries, show promise of increased energy densities, improved safety and environmental friendliness, ruggedness, and low cost. Research continues on fuel cells and alternative large energy-storage batteries for automobiles and industrial applications.

The worldwide market for batteries is $26 billion per year, of which almost 40 percent is for consumer single-use batteries. Because of the economic impact of battery technology on automobiles, telecommunications, consumer electronics, and ultimately trade, the major industrialized countries have been funding battery research via national consortia. As an example, the United States Advanced Battery Consortium, which includes General Motors, Ford, Chrysler, and the U.S. Department of Energy, is a $260 million (total over several years) joint government-industry effort to develop advanced batteries for electric vehicles (Shokoohi, 1995).
Personal Digital Assistants

Personal digital assistants (PDAs) are extremely lightweight and compact hand-held computers. Rather than relying on a keyboard, PDAs use a stylus together with a touch-sensitive screen for input. Long-term data storage is provided via PCMCIA cards. PDAs can be used to maintain a small database (e.g., an address book), write and store notes, and send or receive electronic mail and facsimiles when plugged into a phone jack (or connected via a wireless modem).

PDAs were introduced to the marketplace in 1993. Difficulties with the handwriting recognition system in the first PDA may have limited its widespread acceptance. Other PDAs were introduced into the market in 1994 by many of the major international commercial consumer electronics, computer, and communications companies.

PDA technology is still relatively immature. No standard chip sets or common architectures have yet been adopted in the computer industry. From a user's point of view, PDAs also have a way to go. It has been predicted that it will take 10 years for PDA technology to meet its expectations (Halfhill, 1993).

Epilogue: The Personal Computer Industry

The component hardware in portable laptop and palmtop computers is closely tied with developments in the larger personal computer industry. In order to indicate the momentum and scale of the resources being invested in this area, the section concludes with some brief figures indicating the current and projected size of the personal computer market.

It is estimated that 16 million personal computers will be sold in the United States in 1995, with 34 million more machines being sold worldwide. The estimated installed base of personal computers in the United States is 80 million (approximately one personal computer for every three people) as of 1994, with 200 million worldwide.

It has been estimated that the annual sales of personal computers will surpass 100 million units worldwide by the end of this decade (see Juliessen, 1995).

Storage Systems for Multimedia Information

Storage systems are used to store information that is needed for performing computational tasks, the results of which may be displayed or presented to end users or re-stored for subsequent use. As discussed above, storage systems are used in lightweight, portable appliances and terminals. They are also used as the physical platform for storing information in distributed, networked applications and for archiving information.

Storage systems range from archival storage systems such as magnetic tapes and associated tape drives (whose information may take relatively long to access) to magnetic disks and optical disks.
Educational applications, interactive games, and other applications that involve multimedia information are being driven by a perceived market measured in billions of dollars per year in server and storage system sales. These developments include the compact, rugged multimedia storage systems needed by notebook computers (discussed above), personal compact disk players, set top boxes (as described under Residential Information Systems later in this chapter), and other consumer appliances.

Communications Platforms That Support People on the Move

"Wireless personal communications" is a commercial telecommunications industry term for networking services and associated applications that support people on the move (IEEE, 1995). Cordless telephones, cellular telephony, and paging systems are the most popular current manifestations of wireless personal communication systems.

Cordless telephony started as a low-cost, low-power, short-range (a few hundred feet) home appliance intended to eliminate the tether to the telephone network. The concept has blossomed into cordless telephones that people can carry away from home and operate anywhere within reach of a compatible base station. The CT2 Common Air Interface is a standard used in the United Kingdom, Canada, and Hong Kong and other parts of Southeast Asia. These systems have been optimized for cost, and the handsets are extremely light and portable. The Digital European Cordless Telecommunications system provides an advanced design that supports higher user densities. It uses small "picocells" and resembles a cellular system as much as it does a cordless system. It supports data transmission as well as voice. The Personal Handyphone System is specific to Japan. The system is designed to provide a single, small, portable phone that can be used at home or office (already launched) or as a public access device (to be launched this year). The system will support fax as well as voice. The potential subscriber base is estimated at 5.5 million in 1998, with up to 39 million by 2010 (Padgett et al., 1995).

In the United States, Bell Communications Research has developed an air interface for a Wireless Access Communications System (WACS). By combining parts of the Personal Handyphone System with WACS, the proposal is now called Personal Access Communications Services. The goal is to provide wireless access to the wireline networks of local exchange carriers. Base stations are expected to be shoe-box-sized enclosures mounted on telephone poles about 600 meters apart.

In contrast to cordless systems that were developed for people walking around, cellular systems were originally intended for vehicle applications. The first generation systems, called Advanced Mobile Phone Service (AMPS), use analog modulation for speech and base stations with coverage of 10 km or less, in some cases as little as 0.5 km. These systems have been widely deployed (see discussion of cellular elsewhere in this report). As the cost of digital electronics has continued to drop and low-rate digital speech coding techniques have continued to improve, digital versions of cellular have begun to appear. The European Global System for Mobile Communications (GSM) is expected to improve quality over systems like AMPS and to provide pan-European roaming. The GSM standard also includes specifications for synchronous and asynchronous data services at 9.6, 4.8, and 2.4 kbps.
In the United States, the Electronic Industries Association and the Telecommunications Industry Association adopted a standard called IS-54, where IS stands for Interim Standard. IS-54 equipment is operational in most of the top U.S. cellular markets, and customer adoption is increasing. A second Interim Standard, IS-95, is based on a different frequency sharing scheme called code division multiple access (CDMA), which was originally developed to increase jamming resistance for military applications. This is a relatively new approach, with the first systems expected to be deployed in California this year.

Manufacturers of cellular hand-held units must address two fundamental markets. The first, sometimes called the "road warrior," is a user whose livelihood actually depends on phone contacts made while on the move. Sales people are an obvious example. These users demand high quality voice transmission and reliable connections. They are also the leading buyers of premium services and features, and they are relatively insensitive to prices. The second major group is the "casual user." They are more concerned about price and are more tolerant of lower voice quality or occasional dropped connections. Manufacturers sell far more casual-user phones than road-warrior phones, but the latter are very important to carriers because they generate many more minutes of usage. The net result is that manufacturers are driven to produce both a simple, high-volume, low-cost product line and a lower-volume, higher-cost, feature-rich line of handsets.

In contrast to voice, there are fewer systems and standards (so far) for wireless data services. Wireless local area networks (LANs) are usually privately owned and operated and cover very small areas. These are generally intended for high data rates (greater than 1 Mbps) and operate in unlicensed spectrum. Standards for wireless LANs are being developed under the Institute of Electrical and Electronics Engineers (IEEE) 802.11 committee in the United States and under the European Telecommunications Standards Institute RES10 committee in Europe (HIPERLAN).

Wireless LANs can be used to form ad hoc and quasi-static networks, although full roaming mobility is not yet available. Generally, the subscriber units require a hub with which to communicate, although there are exceptions. Some operate at data rates of several megabits per second, approaching that of a wired Ethernet. Many are sized to fit on a PCMCIA card.

The Advanced Radio Data Information Service (ARDIS) in the United States covers much wider areas using specialized mobile radio (SMR) frequencies in the 800 to 900 MHz range. There are over 50,000 subscribers today, and service is offered in over 400 metropolitan areas. The prevailing data rate is 4.8 kbps, with 19.2 kbps available in some areas. Cellular digital packet data is another approach that reuses existing analog cellular networks. It is a transparent overlay to AMPS systems, taking advantage of idle time on analog channels. The European GSM infrastructure, already digital, is developing the General Packet Radio Service to handle data.

Satellite communications systems allow for a rapid expansion of communications infrastructure and provide connectivity to isolated locations. Commercial satellite services are designed to provide coverage to predetermined geographic areas, and their ability to redirect or expand this capacity is very limited. More Ku-band (14/12 GHz) systems are being deployed to augment those at C-band (6/4 GHz). Also, Ka-band (30/20 GHz) has been set aside for commercial satellite communications, and equipment for this band is in the experimental stage in the United States. Utilization of Ka-band began in Japan and Italy in 1990. These trends imply that significantly more capacity will be available in orbit (IEEE, 1990; Manders and Wu, 1991). Very Small Aperture Terminal systems are being used to provide two-way data services (T1 rates, 1.544 Mbps) to small terminals. Direct Broadcast Services (DBS) now transmit data at rates of tens of megabits per second to small terminals (less than about 2 feet in size). However, both of these latter services are available only in selected coverage areas.

There are several commercial satellite systems for personal communications currently in planning and development phases; examples are IRIDIUM, Odyssey, Globalstar, and Inmarsat-P. These systems are intended to support users anywhere in the world. The Teledesic system of low-orbiting satellites would provide a very large number of medium-to-high data rate links (several megabits per second) to small terminals. These emerging satellite systems will represent more than $10 billion in development and construction (including launch) costs when deployed (CDMA, 1994).

In addition to providing wireless links to end users, communications platforms that support people on the move must include backbone and feeder transmission and switching facilities that interconnect the wireless access nodes. Terrestrial nodes (e.g., cell sites) are typically connected with cable-based fiber optic or copper cable facilities and occasionally with point-to-point microwave facilities. The cable-based facilities that have been used to date have relatively low bit-error rates.
The point-to-point microwave facilities used in commercial applications generally have relatively low bit-error rates as well (10^-6 or lower) (Ivanek, 1989). Recently, there has been a great deal of commercial interest in the use of upgraded cable television systems to interconnect small cell sites in emerging personal communications networks. Since the cable systems carry combinations of frequency-division-multiplexed analog television, digitized television, and other digital information streams, and because of the characteristics of the existing cable facilities, the digital error rates on these systems are expected to be somewhat higher than in other commercial facilities. As a result, certain protocols, like the asynchronous transfer mode (ATM) protocol that was designed for low bit-error rate transmission facilities, will not work properly unless steps are taken to encapsulate these protocols inside an error-tolerant transmission format. Steps are being taken by a number of standards bodies to create a transmission format that will allow ATM to be carried over facilities with relatively high error rates, like cable television facilities. This is considered a high-priority issue because of the investments that existing cable system operators have in their current facilities and because of the announced plans of some local exchange carriers to deploy new multipurpose broadband networks based on combinations of fiber optic and coaxial cable. The ATM protocol is discussed at length later in this chapter under Protocols and Related Functionality to Support Communications.
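To see why a protocol engineered for very clean links is sensitive to these error rates, one can estimate how often a standard 53-byte ATM cell would suffer at least one bit error. The calculation below is a simplified sketch that assumes independent bit errors; the 10^-6 figure is the rate quoted above, while the higher rate is a hypothetical value chosen only to represent a noisier cable plant.

```python
# Probability that a 53-byte (424-bit) ATM cell contains at least one bit error,
# assuming independent bit errors at a given bit-error rate (BER).
CELL_BITS = 53 * 8

def cell_error_probability(ber: float) -> float:
    return 1.0 - (1.0 - ber) ** CELL_BITS

for ber in (1e-6, 1e-4):   # 1e-6 is quoted in the text; 1e-4 is a hypothetical noisier link
    print(f"BER {ber:g}: about {cell_error_probability(ber):.2%} of cells damaged")
```

At a bit-error rate of 10^-6 only about 0.04 percent of cells are affected, but a link two orders of magnitude noisier damages roughly 4 percent of cells, which is why an error-tolerant encapsulation is needed before ATM can be carried over such facilities.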

Information Capture Technologies

Information capture technologies interface to the physical world to capture (sense) sounds, images, moving scenes, etc., electronically and to convert the captured information to digital forms that can be used by automated information systems.

In the past ten years, information capture technologies have become consumer products in the form of sophisticated, but moderately low-priced and highly usable, camcorders. These commercial devices can operate in a wide range of light levels under control of their internal microprocessor-based systems and can provide features such as zooming and integrated sound capture (microphones and associated electronics). These devices can also communicate with videocassette recorders via interfaces provided for that purpose, and it is anticipated that next-generation camcorders will appear shortly with their video and audio outputs encoded in standard digital formats (e.g., the Motion Picture Experts Group (MPEG) MPEG-2 standard). Camcorders are generally priced in the range of several hundred to a thousand dollars, although advances in mass production techniques and the desirability of reaching price points below $300 to address mass consumer markets will lead to lower prices in the future. The emergence of desktop multimedia teleconferencing, including video and image capture, has led to the appearance of low-cost charge-coupled device (CCD) video and still image cameras on the commercial market in the $50 price range.

Associated with the emergence of these consumer devices, and with the continued demand for sophisticated multimedia programming by movie and television audiences as well as users of multimedia computers (e.g., games), is the continuing evolution of video processing systems that can be used to perform such functions as editing video and still images and combining video with superimposed text and images. Some of these are transitioning into consumer products as increasingly powerful computer technologies allow these functions to be performed on general purpose personal computers with appropriate software.

Another example of a sensor that has found widespread commercial use is the infrared sensor technology used in such things as motion detectors. These devices are available commercially in products costing only a few dollars and are used by consumers in applications including home security systems and automated driveway lighting.

Analog-to-digital conversion and other specialized sampling and data conversion tasks associated with sensors can now be performed easily with one or a few low-cost commercial chips, and associated signal processing can be accomplished with commercial single-chip digital signal processors. This development means that analog multimedia information such as audio, image, and video can be captured and converted to digital form easily and cheaply using commercial technologies. For example, IBM recently announced low-cost encoder ($700) and decoder ($30) chip sets that implement the MPEG-2 video compression coding standard.

BUILDING BLOCK TECHNOLOGIES (LAYER II - SYSTEM SOFTWARE)

Building block technologies discussed under Layer II, System Software, of the generic architecture (Figure 3-2) include (a) protocols and related functionality to support communications, and (b) distributed computing environments and operating systems.

Protocols and Related Functionality to Support Communications

Network protocols are the system-level standards for formatting information, controlling information flows, reserving communications paths, authorizing access to communications resources, managing congestion, recovering from communication or equipment failures, determining routing, and other processing tasks needed to transfer or access data between remote end systems (e.g., users) in a network. Since even a brief overview of network protocols could fill an entire report, we focus here on two leading protocol architectures: the Internet protocol suite and the emerging ATM high-speed network standards.
For each, we begin with a brief history and architectural overview and then consider the types of network services they support, with an emphasis on multimedia communication. Also, a "technology forecast" is provided for each. This section concludes with a discussion of software support for mobility.

The Internet Protocol Suite

The Internet protocol suite is the result of 25 years of evolutionary development, much of which has been sponsored by the Advanced Research Projects Agency (ARPA). In the Internet architecture, there is a clear distinction between the services to be provided by the network-level infrastructure and the services to be built on top of these network services by the communicating end systems. The division of functionality between the network and the end systems has proven to be a remarkably robust and prescient one.

At the heart of the Internet architecture is a design philosophy in which the network-level infrastructure provides only a simple, minimal packet delivery service. The Internet Protocol (IP) and the associated route selection protocols that provide this network-level service make no guarantees that (a) the packets sent by one end system will be received by the other end system, or (b) the network will deliver transmitted packets to the destination in the order in which they were sent. IP does not recognize any notion of a "connection" between two endpoints.
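This minimal, connectionless delivery service is essentially what an application sees when it uses the datagram (UDP) interface introduced below. The following fragment, written in Python for brevity and run over the local loopback interface so that it is self-contained, is a sketch of that service: a packet is handed to the network with no connection set up and no promise about delivery or ordering.

```python
import socket

# Bind a receiver, send one datagram to it, and read it back. Nothing in the
# datagram service guarantees that the packet arrives, arrives once, or arrives
# in order; any such guarantees must be added end to end (e.g., by TCP).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))               # let the operating system pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"position report #42", addr)   # fire and forget; no connection state

data, _ = receiver.recvfrom(2048)             # whatever arrives, in whatever order
print(data)

sender.close()
receiver.close()
```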

Any additional functionality (e.g., reliable data transfer) required by an end application is provided by "end-to-end" protocols, which execute on the end systems. For example, the Transmission Control Protocol (TCP) is an end-to-end service that uses IP to provide in-order, reliable delivery of data between two end systems.

From a service standpoint, the Internet protocol suite provides for both reliable (via TCP) and unreliable (via the user datagram protocol (UDP)) data transfer between two end systems. Multicasting of data (i.e., the copying and delivery of a single data packet to multiple receivers) is also supported. All Internet applications (e.g., file transfer protocol, remote login (telnet), E-mail, World Wide Web (WWW)) are built upon either TCP or UDP data transfer.

The Internet protocols provide no explicit support for real-time communication (e.g., interactive audio or video) or for applications that send information at a constant bit rate and require the stream of bits to be delivered with very low delay variability among the bits in the stream. It should be noted, however, that several efforts are under way in the Internet Engineering Task Force (IETF), the Internet standards body, on developing protocols for supporting such services. Notable among these efforts is the RSVP resource reservation protocol (Zhang et al., 1993). It should also be noted that several efforts have demonstrated the possibility of multimedia teleconferencing over the current Internet (Casner and Deering, 1992; Macedonia, 1994). In principle, even the current Internet protocols can provide for teleconferencing services, as long as the network's traffic load remains low.

Since there is increasing interest in extending the use of the Internet to applications that require guaranteed packet delivery within a specified range of delays, such as real-time interactive multimedia teleconferencing, it is anticipated that the next-generation Internet protocol (IP version 6) will include mechanisms that enable network resources to be reserved and packets to be assigned priorities for transport through intermediate routers. Thus the Internet protocol suite is moving from its traditional connectionless "best effort" delivery approach toward a set of approaches that includes the equivalent of connection-oriented transport.

Internet protocols were initially developed for relatively low speed (e.g., 56 kbps) communication links. However, they have been shown to be able to handle traffic at rates of hundreds of megabits per second (Borman, 1989).

The current Internet protocols were developed for fixed-terminal (i.e., nonmobile) network users; this is reflected in the design of IP (and to a lesser extent, TCP and UDP). However, there has been considerable recent activity and interest in developing a new IP that will support mobility. Several efforts are currently under way in the IETF to develop Internet standards in support of mobility.

Nineteen ninety-four can well be characterized as the year that the Internet moved into the public consciousness. The reach of the Internet and the installed base is indeed impressive. There were 3,864,000 host computers in 154 countries connected to the Internet as of December 1994; almost 100 billion packets passed through a single National Science Foundation (NSF) net site in a single month (Bell, 1995). Industry and commerce are now relying on the Internet for services. Security is also now a serious and legitimate concern for the Internet.
Commercial needs and the increasing reliance on the Internet as part of our national infrastructure are fueling efforts in the area of network security, both from the network standpoint (i.e., protecting the network from tampering) and from the end-user point of view.
correcting mechanisms for transport over facilities with relatively high bit-error rates.

Before 1994, ATM products and prototypes were being supplied by only a handful of vendors and installed primarily in research institutions and government labs. In 1994, however, more than two dozen vendors brought ATM switching products to market. Membership in the ATM Forum (the standards body most aggressively pushing ATM forward) has grown considerably. Nonetheless, ATM is a new technology that has not yet been demonstrated on nearly the same scale as the Internet protocols (ATM Forum, 1995).

Two visions exist regarding the use of Internet technology and ATM networks in the emerging national information infrastructure. In one view, interconnected ATM networks will provide seamless, uniform, end-to-end transport of ATM cells between any two endpoints. In the other view, ATM wide area networks will be used to connect islands of local area or campus networks, with the individual islands running Internet (or other) communication protocols. Both views are likely to be correct, in that applications exist in which the solutions implied within both views are viable. The balance between the use of one approach over the other will be determined in the emerging commercial marketplace, more by the speed at which commercial products emerge that meet user needs than by any fundamental advantage of one approach over the other.

Software Support for Mobility

One can consider three phases in the evolution toward the support of mobility in commercial telecommunications networks. The first, associated with traditional "plain old telephone service" (POTS), supports essentially no mobility at all. In this case the terminal equipment has a fixed relationship to the network to which it is attached, and the user has a fixed relationship to the terminal (i.e., one attempts to call a specific person by dialing a number associated with a specific physical location). There is limited support for mobility via extension telephones and cordless telephones, but these do not require any network-based intelligence. The second, the familiar case of today's cellular services, allows one of the relationships to be dynamic; namely, the physical location of the telephones is dynamic. However, the relationship between user and terminal equipment remains fixed (i.e., a particular cellular telephone and associated telephone number are still used to attempt to reach a specific user). In the third phase, the relationship between the user and the terminal is also allowed to vary, and it is the user, rather than a specific terminal, to whom calls are directed. This is called "personal mobility."
The HER is the place to which an incoming call for a roaming user is initially directed based on the user's telephone number. The HER will contain an entry that shows the VLR associated with the network in which the user is currently known to be located. This VLR knows that the user is in its domain because the user's telephone has communicated recently with one of its cellular nodes. The call will be forwarded to that network, where the OR will arrange to have the call delivered to the proper roaming user, based on its stored information regarding the user's current location in its associated network. The VLRs will also notify the FILRs when roaming users move in and out of their domains. To support full personal mobility, including smart cards, commercial network-based software will be up- graded and supplemented to include functionality not required in traditional fixed-terminal networks. In par- ticular, fast inquiry and response database systems will be deployed to (a) interrogate terminal units and data- bases for user authentication, (b) interrogate databases for number translations, and (c) transfer service profile information from one information database to another as users and their terminals travel from place to place. Networks will be (reprogrammed to recognize mobility- related numbers and respond to personal feature profiles associated with individual users. Distributed Computing Environments and Operating Systems Operating Systems Oeve/opment Operating systems are software systems that manage the hardware resources of a computer system to provide seIvices needed by applications. They evolved from earlier input-output control systems, which were loaded into early computer systems before an application began

Distributed Computing Environments and Operating Systems

Operating Systems Development

Operating systems are software systems that manage the hardware resources of a computer system to provide services needed by applications. They evolved from earlier input-output control systems, which were loaded into early computer systems before an application began to run; this was typically done with a deck of punch cards placed immediately ahead of the cards used for the application. It became clear that there was a common set of functions needed by many applications, and this gave rise to early operating systems.

Early machines in many cases were dedicated to a single use. Later machines were multipurpose, but the input-output control system scheme made for sequential execution of jobs, one after another. A major advance came from the idea of multiprogramming, which took advantage of the fact that the expensive processor was often wasted as slow input and output devices (such as printers, card punches, and tape machines) were accessed by an application. Multiprogramming used the idle periods of the processor to perform other computational work until the input and output were completed. A variety of multiprogramming techniques were developed, with fixed and variable numbers of tasks, priorities, etc. Timesharing is a multiprogramming technique that allows interactive access to the multiprogrammed resources.

Multiprogramming also involves sharing of processor memory resources. Modern multiprogramming technologies have almost uniformly employed the technique called demand paging. Demand paging divides the storage of the machine into fixed-size units called pages. Application storage is also divided up into page-sized address ranges, which are mapped to the machine's storage through a technique known as virtual memory. All commercial operating systems for workstation or larger computers (e.g., MVS, UNIX and its derivatives) now incorporate these techniques. Smaller systems, such as personal computers, have been evolving in the same fashion; the popular Windows application support software is essentially a single-user multiprogramming system overlaid on an extremely simplistic device-management core (MS-DOS). Newer generations of personal computer software will support more advanced memory management techniques, such as demand-paged virtual memory. The lack of modern memory management technology in the popular MS-DOS software has been a major limitation in using these commodity machines for more complex applications and a major source of failures. These difficulties have provided opportunities for alternative personal computer operating systems (e.g., OS/2), as well as penetration of the personal computer market by UNIX technology.

A major challenge remaining for operating systems is the efficient processing of multimedia data (Nahrstedt and Steinmetz, 1995). Multiprogramming systems have embedded assumptions about scheduling jobs that they inherited from their predecessor technologies. For example, they often schedule job execution in a "round-robin" fashion to preserve a fair allocation of processing resources between jobs. This scheduling creates a "virtual time" model where each job's real processing time (wall-clock time) is dilated in proportion to the amount of competition for processing resources. Unfortunately, continuous media such as voice and video are characterized by their real-time requirements; 30 frames per second of video are required to preserve perceptual smoothness in spite of competing demands for resources.
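The tension between fair round-robin scheduling and a fixed frame rate can be made concrete with a little arithmetic. The sketch below assumes a per-frame decoding cost chosen purely for illustration; the point is only that, as competing jobs dilate the video task's wall-clock time, the 33-millisecond frame budget is eventually missed.

```python
# How many equally scheduled, compute-bound jobs can share a processor before a
# 30-frame-per-second video task misses its deadline under round-robin scheduling?
FRAME_RATE = 30.0
frame_period_ms = 1000.0 / FRAME_RATE     # ~33.3 ms of wall-clock time per frame

decode_ms_per_frame = 10.0                # assumed CPU cost to decode one frame

for competing_jobs in range(1, 6):
    # Round-robin gives the video task roughly 1/N of the processor, so the CPU
    # time it actually receives within each frame period shrinks with N.
    cpu_share_ms = frame_period_ms / competing_jobs
    verdict = "meets" if cpu_share_ms >= decode_ms_per_frame else "misses"
    print(f"{competing_jobs} job(s): {cpu_share_ms:.1f} ms of CPU per frame "
          f"-> {verdict} the deadline")
```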
These real-time constraints suggest that the requirements of multiprogramming must be balanced against the application requirements for effective multimedia support in operating systems (Nahrstedt and Steinmetz, 1995). Substantial commercial R&D efforts are under way to improve the support for multimedia applications in commercial operating systems. Examples include the XMedia toolkit from DEC and the Hewlett-Packard MPower toolset.

Interoperability and Distributed Computing Environments

In the 1970s and before, computer programs were usually written for only one hardware and software platform at a time. "Porting" a large application to another platform was a difficult task, rarely undertaken. The UNIX operating system, which blossomed in the 1970s, owes much of its popularity to the fact that it was explicitly designed to run on multiple platforms and was closely wedded to the C programming language, which was also designed for portability. In the 1980s, portability was among the most desirable of attributes sought for computer applications.

In the 1990s, portability remains an important issue, but interoperability (the ability of computer applications developed by different vendors to cooperate on the same or different computing endeavors, and to share data between such applications) across software and hardware platforms has become the more sought-after attribute. Interoperability and the improving price-performance trend of small computer systems have led to intense interest in what is known as a distributed computing environment: a set of standard interfaces, software components, and tools that permit computer applications developed by different vendors to cooperate on the same or different computing environments interconnected by appropriate communication links.

In the past, incompatible software applications have proliferated. This has occurred because of (a) mixing programs written in different languages, and (b) the use of different ways to communicate between programs running on the same or different computers that are connected by a communications system. Cooperation across incompatible platforms was so difficult to achieve that applications were commonly designed with no

come closest to meeting them (Zeigler, 1990; Ruiz-Mier and Talavage, 1989).

BUILDING BLOCK TECHNOLOGIES (LAYER VI - MANAGEMENT/SECURITY)

Building block technologies discussed under Layer VI, Management/Security, of the generic architecture (Figure 3-2) include security technologies; network management systems; and general purpose languages, tools, and development environments.

Security Technologies

Security is a major issue and a major concern among all providers and users of information networking applications and services. In the United States, privacy is one of our most cherished rights, and privacy concerns are a major impediment to successful realization of the vision of a national information infrastructure. Health care providers and patients are concerned about the protection of patient-identifiable patient records. Educators are concerned about the unauthorized disclosure of student records and also about the uncontrolled or unauthorized access of students to pornographic materials. Individuals do not want their messages and real-time communications to be disclosed, nor do they want information about their buying habits, what they read, their tax records, or other personal information to become available without their permission. Electronic commerce cannot flourish without the ability to conduct secure transactions and protect intellectual property from unauthorized uses.

Security includes:

• Protection of stored information or information in transit through a network from unauthorized access in usable form (eavesdropping);
• Protection of stored information or information in transit from unauthorized modification;
• Authentication that what has been received over a network has not been modified, and that the source of the information is who he or she claims to be;
• Protection of resources from unauthorized access;
• Protection of users' identities from those who are not authorized to know their identities;
• Protection of information about users' usage patterns: what they access, how often, from where; and
• Protection against denial-of-service attacks that prevent authorized users from accessing resources and information when they need to do so.

Recent trends in commercial systems and applications have been to enhance security. There is a tradeoff between the level of protection that can be provided and the ease with which legitimate users can use the services they are authorized to use. This tradeoff is shifting in time as advances in technology are making such things as powerful smart cards available. Standards are the pacing factor here, because the industry recognizes that users will not wish to carry around a variety of different types of smart cards for different purposes. Thus, much effort is currently being employed to identify the broad range of applications and the corresponding capabilities that must be embedded in smart cards. Smart cards are essentially credit-card-sized miniature computers that contain secret encryption keys, the ability to execute security (encryption) algorithms, and input/output capabilities to interface with terminals or appliances. They can store personal information such as medical records, and they can also be loaded with electronic money.

To illustrate the trends in commercial practice, one can consider the following examples. First-generation cellular networks are relatively susceptible to eavesdropping.
Next-generation digital cellular networks will employ encryption techniques to make eavesdropping more difficult. Emerging wireless personal communications networks will, in some cases, employ spread-spectrum code division multiple access (CDMA) techniques and even more powerful encryption algorithms. These networks may also employ public key cryptography algorithms to allow accessing users to authenticate themselves to the network without disclosing their identities to eavesdroppers. CDMA systems are not only intrinsically eavesdrop-resistant but also resistant to direction finding because their low-power signals are spread over a broad range of frequencies, which are shared by all other users.

When remote users log on to their host terminals over networks, there are opportunities for passwords to be compromised to eavesdroppers, particularly with wireless networks. This has led to the use of special credit-card-sized tokens that generate and display pseudo-random numbers that are used as one-time passwords. Using a variety of approaches, the passwords generated by these tokens can be authenticated by host security systems that know the secret information residing in the token that generates the passwords. Eavesdroppers cannot make use of these passwords after their initial use by authorized users. The use of these single-purpose tokens may be eliminated when their functionality is absorbed into general-purpose smart cards in the future.
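The token scheme can be sketched as a shared secret plus a counter: the token and the host each derive the next password from secret material that never crosses the network, so a password captured in transit is worthless once it has been used. The fragment below is a simplified illustration, loosely in the spirit of later HMAC-based one-time-password schemes, and is not the algorithm of any particular commercial token.

```python
import hashlib
import hmac

SECRET = b"secret-embedded-in-the-token"   # known only to the token and the host

def one_time_password(counter: int) -> str:
    """Derive a short one-time password from the shared secret and a counter."""
    digest = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return digest[:8]                      # only a short, human-typable portion is shown

# The token displays password number N; the host, tracking the same counter,
# recomputes it and compares. After a successful login the counter advances,
# so an eavesdropper who captured the password cannot replay it.
host_counter = 41
typed_by_user = one_time_password(41)      # what the user reads off the token
assert hmac.compare_digest(typed_by_user, one_time_password(host_counter))
host_counter += 1                          # password 41 is now useless to an eavesdropper
```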

Standards for digital signatures, based on public key cryptography, make it possible not only to verify the source of a piece of multimedia information that has been stored or transmitted in digital form, but also to validate that no change has been made to the digital multimedia object subsequent to its being signed by the originator. Extensions of this approach have resulted in the ability to also authenticate the exact time that a document was signed (digital notary service) in ways that are acceptable in legal settings.
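The integrity check rests on a cryptographic hash of the object: any change to the signed bits changes the hash, so a verifier who recomputes it can detect tampering. The sketch below shows only the hashing step; in an actual digital signature the originator additionally encrypts the hash with a private key so that anyone holding the matching public key can check both integrity and origin. The object contents here are invented for illustration.

```python
import hashlib

def digest(multimedia_object: bytes) -> str:
    """Hash the object; this digest is what the originator would sign."""
    return hashlib.sha256(multimedia_object).hexdigest()

original = b"map overlay, version 3, signed 1995-06-01"
signed_digest = digest(original)        # in practice, encrypted with the signer's private key

# A recipient later recomputes the digest over what was actually received.
received = b"map overlay, version 3, signed 1995-06-02"   # altered in transit
assert digest(original) == signed_digest                   # the unmodified object verifies
assert digest(received) != signed_digest                   # any modification is detected
```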
In general, cryptography makes it possible to make multimedia information inaccessible to unauthorized users by placing it in a form that is not usable without the secret cryptographic decoding key. Commercial methods for implementing cryptography are widely available, although export restrictions, difficulties in negotiating terms with respect to the use of patented methods, and certain federal government initiatives with respect to encryption methods, which contain "back doors" to allow government access under specified circumstances, have temporarily hampered progress in converging on commercial standards for strong encryption methods.

There are initiatives under way to create cryptographic methods to support electronic commerce, including the exchange of credit information over public networks. Several vendor-specific approaches are currently being employed in commercial networks to support electronic shopping and the sale of intellectual property over networks.

Firewalls to prevent attacks on network enclaves (i.e., networks within a specified administrative domain, such as those of a company or a university) from determined intruders are available and are continually being upgraded as more sophisticated attacks are developed and employed. Applications exist for automatically detecting network security vulnerabilities against known attacks due to improper configurations of networks and their attached hosts. However, protection of networks against determined attackers remains an ongoing problem for commercial and institutional system administrators. It has been described as a journey, rather than a destination, where the objective is to minimize risks and to detect quickly and limit the damage associated with attacks.

In summary, with network and information security as one of the pacing factors in the successful realization of the applications associated with a national information infrastructure, which represent a commercial market opportunity measured in hundreds of billions of dollars per year by most estimates, there is a very large commercial R&D effort under way to create, standardize, and deploy easy-to-use, powerful, and inexpensive security technologies and methodologies.

Network Management Systems

Network management systems are used to manage large, distributed, heterogeneous information systems. Management functions range from authorizing users to have access to specific services and applications to recovering from faults or attacks. Typically, management is accomplished in a layered fashion to make the management process itself manageable. Individual network components such as communications nodes (e.g., switches, multiplexers, routers) and links (e.g., fiber optic systems), servers, and end systems contain self-diagnostic functionality and the ability to remotely configure or reconfigure their capabilities.

The network management functionality that is devoted to monitoring and controlling these individual components is referred to as residing in the element management layer. Collections of components work together to perform such functions as the provision of communications paths or accessible databases.
While individual components may fail, redundancies can make it possible to maintain these functions. For example, a communication path can be maintained by using an alternative route through a multiconnected network. A backup server can be used to take over for a damaged server. The network management layer that is responsible for maintaining such things as communications paths and database capabilities is known as the resource management layer. Higher layers in the network management stack are responsible for providing specific types of services and applications to specific users and user classes.

As distributed, multipurpose, multiprovider, heterogeneous networks have proliferated in the commercial world, network management has become a major commercial market. Downsizing and reengineering of commercial firms and industries have placed ever more importance on the elimination of manual tasks and the use of automated systems to configure, troubleshoot, and control networks. The increasing dependence of society on information networks in such areas as commerce, health care, and air traffic control places a premium on reliable systems that can quickly control and isolate problems.

General Purpose Languages, Tools, Development Environments

Any piece of computer software, whether used in an operating system, a multimedia database, multimedia teleconferencing software, or a network management system, is a program. Any such program, in turn, must be written in a programming language. While the programming language in which software is written may be unimportant to a user of an application embodied in a program, if the software is to be extended, modified, or customized, the programming language in which this is done becomes critically important.

Early computer programming languages such as FORTRAN were designed primarily for numerical calculation and were aimed at freeing the programmer from having to consider the details of a computer's hardware when writing a program. Modern computer programming languages such as C++ and Ada were designed to support a spectrum of application domains. They also recognize large-scale software development as a continuing group process involving many individuals and thus support widely recognized software engineering principles: (a) programming is a human activity, and (b) software should be maintainable, portable, modular, and reusable.

Hundreds of computer programming languages have been developed, and yet only a relatively small handful have found widespread use. Rather than provide a comprehensive survey of languages, the committee examines here two of the most important languages in use today for application programming and system software development: Ada95 and C++.

Ada traces its birth back to 1975, when the Department of Defense (DoD) established requirements for a high-level language that could be used in all defense projects. In 1976, 23 existing languages were formally reviewed, and none were found to meet the requirements. It was concluded that a new language was needed, and the Ada language was born. Ada became an American National Standards Institute (ANSI) standard in 1983 and an International Standards Organization (ISO) standard in 1987. The 1983 Ada standard was updated in early 1995 and, like the original Ada, is intended for embedded and real-time systems. It also has a number of built-in features to support distributed computing. A major improvement found in Ada95 is its support for object-oriented programming and enhanced support for real-time systems.

The so-called "Ada mandate," Public Law 101-511 Sec. 8092, states that Ada should be used for all DoD software: "Notwithstanding any other provisions of law, after June 1, 1991, where cost effective, all Department of Defense software shall be written in the programming language Ada, in the absence of special exemption by an official designated by the Secretary of Defense." Thus, Ada has considerable visibility within the defense contracting industry. The extent to which Ada is used in non-government-sponsored software development is the subject of continual debate. Numerous commercial uses of Ada are documented (IIT, 1995).

C++ is another modern general-purpose language of roughly similar power to Ada. It is object-oriented and also has many features that modern software engineering practice considers important. It is a descendant of the C programming language, which was developed in the early 1970s at Bell Laboratories, and has found widespread use since then. Standardization efforts are underway in both the ANSI (American) and ISO (International) groups to develop a C++ standard. It has been stated that C++ is by far the most popular object-oriented programming language and that the number of C++ users is doubling every 7.5 to 9 months. Trade magazines contain numerous reviews of compilers and development environments for C and C++, thereby attesting to the widespread interest in these languages.
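As a purely illustrative sketch of the object-oriented style that both Ada95 (through tagged types) and C++ (through classes) support, the following fragment is given in C++; the class names are invented for this example and do not come from the committee's material.

    #include <iostream>
    #include <memory>
    #include <vector>

    // Abstract base class: callers depend only on this interface.
    class MultimediaObject {
    public:
        virtual ~MultimediaObject() = default;
        virtual void render() const = 0;   // each medium renders itself
    };

    class StillImage : public MultimediaObject {
    public:
        void render() const override { std::cout << "displaying still image\n"; }
    };

    class AudioClip : public MultimediaObject {
    public:
        void render() const override { std::cout << "playing audio clip\n"; }
    };

    int main() {
        std::vector<std::unique_ptr<MultimediaObject>> scene;
        scene.push_back(std::make_unique<StillImage>());
        scene.push_back(std::make_unique<AudioClip>());

        // New media types can be added without modifying this loop,
        // one of the maintainability benefits claimed for the object-oriented style.
        for (const auto& obj : scene) obj->render();
        return 0;
    }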
Associated with these languages are a number of tools and development environments. These are attempts to ease the programming task, organize teams of programmers for large projects, and "debug" programs effectively. Examples of such tools include syntax-directed editors, source-code control systems, and symbolic debuggers.

A syntax-directed editor provides programming language syntax checking and language-specific structuring as the program is typed in by the programmer. The advantage of this approach is that many of the "bugs" common in early stages of program development can be eliminated before the first trial compilation of the program.

Systems for controlling source code, such as the Source Code Control System (SCCS) and Revision Control System (RCS), serve as repositories for the source code comprising a program. They provide facilities for change management, which is critical in the management of large-scale software projects.

Symbolic debuggers allow a failed program to be analyzed in the form of the symbols used by the programmer to write the program. The advantage of this technology is that the programmer is able to isolate conceptual errors more quickly because the error report is expressed in the semantic structures used by the programmer rather than those of the lower-level "object code" used by the machine.

SYSTEMS

In this section the committee gives examples of system-level applications of multimedia information technology existing or emerging in the commercial domain. These examples provide substantiation for the recommendations in Chapter 4 as to which building block technologies the Army should adopt or adapt from the commercial domain and which building blocks are candidates for Army-specific development to produce proprietary advantages over its adversaries. This section covers four major areas: cellular and wireless telecommunications, electronic commerce, intelligent transportation systems, and residential information services.

Cellular and Wireless Telecommunications Systems

Revenue growth and subscriber growth in cellular systems have exceeded even the most optimistic

projections of its early proponents. Yearly revenues in North America have grown from $500 million in 1985 to over $4.6 billion by year-end 1993 (Leeper, 1995). This is an average annual revenue growth of 32 percent. Revenue growth rates are retreating slightly but are still expected to exceed 20 percent annually through 1996. Globally, in 1993 alone, the number of subscribers went from 21.1 million to 33.1 million, a growth of 56 percent. In North America, the growth was 47 percent, going from 12.1 million to 17.7 million (Leeper, 1995). Note that revenues are not growing as fast as subscribers because of declining prices.

With more advanced electronic, battery, and antenna technologies, there has been a marked move toward personal, portable handsets. In 1987 only 5 percent of handset sales could be called "portable"; vehicular sets accounted for 78 percent of all sales, and "transportable" units 17 percent. In 1993, portable sales had jumped to a 36 percent share of the total and transportables to 35 percent; vehicular sales had declined to 29 percent of the overall market (Leeper, 1995).

Prices on cellular subscriber units have dropped to well within the means of mass market consumers. In 1993, portable units in the United States had an average "walk-away" price of $343, with some units sold for as little as $43. Vehicular units averaged $264 and transportables $187. Despite their higher average price, portables remain the fastest growing segment of the market (Leeper, 1995).

Customers appear willing to pay a premium for portability and convenience, and technology has made very small and lightweight phones possible. The leading handset manufacturer has recently introduced a "flip-phone" model weighing only 3.9 ounces. The phones are becoming as small as is practical for human fingers to operate; further reduction in size may require a paradigm shift in packaging and other means of input and output. Since it is still inconvenient in many circumstances to "wear" a phone and to answer it every time it rings, many users today wear a vibrating pager and use it to screen calls. The portable cellular phone stays in the briefcase until it is actually needed. This practice portends the day when a person may "wear" a personal wireless LAN (local area network) that links a pager, phone, and PDA.

Cellular and wireless users, particularly business users, are increasingly demanding more reliable, secure, nearly ubiquitous service with the ability to move around freely. They are demanding lighter, more reliable handsets with longer operation between battery charges. They are also demanding multipurpose units that can operate as cordless telephones, cellular telephones, and telephones that can access emerging personal communication networks.

Electronic Commerce

Electronic commerce refers to the conduct of business using distributed information networks that connect geographically distributed locations of the same firm, firms and their suppliers, firms and their customers, and multiple firms jointly creating and marketing products.

The banking industry is at the leading edge of electronic commerce in its use of information networks to conduct billions of dollars of transactions on a daily basis. The banking industry uses information networks to move money among accounts distributed worldwide and to monitor critical information needed to make financial decisions, such as the granting of loans and lines of credit.
These networks are also used to collect and process credit and debit card information from hundreds of thousands of merchants who accept these cards, to clear hundreds of millions of checks on a daily basis, and to operate automatic teller machine networks on a worldwide basis. All major stock exchanges depend upon information networks to conduct hundreds of millions of trades each day. This dependency has become increasingly evident during recent outages at the NASDAQ exchange.

For more than a decade, electronic data interchange (EDI) has been used between firms and their suppliers to place orders, send invoices, and make payments on accounts payable. Some large firms will not deal with suppliers who cannot conduct their business using EDI.

Recently, there have been successes in various forms of electronic shopping (e.g., the Home Shopping Channel), and this success is fueling the emergence of on-line shopping services over which purchases can be transacted. Such transactions may involve the use of credit cards, debit cards, electronic checks, or anonymous electronic money.

In all of these existing and emerging applications, network integrity, network reliability, and security are major, ongoing concerns. Not only are these networks susceptible to theft of services, fraud, compromise of private information, and attempts to steal or counterfeit electronic funds, but they are also susceptible to disruptive attacks and accidents that can cause billions of dollars per day of economic damage.

Intelligent Transportation Systems

Departments of transportation at the federal and state levels have concluded that it will be increasingly difficult, if not impossible, to construct new roads to accommodate increasing traffic loads over the next several decades. Meanwhile, there is a need to increase highway safety, to improve traffic flows to decrease congestion resulting

from accidents and stochastic traffic surges, and to track the locations of commercial and public vehicles. In response to this realization, an initiative known as Intelligent Transportation Systems (ITS, formerly known as Intelligent Vehicle/Highway Systems) has been established. Consensus estimates are that government and private investments in ITS cumulatively up to the year 2011 will be $210 billion (IVHS, 1992).

In fiscal year 1995, the U.S. Department of Transportation budget includes $227.5 million in funds allocated to ITS research and development, operational tests, and other ITS-related initiatives and applications. These applications include highway sensors (including cameras) that will monitor traffic and send traffic information over wireless and wired communications networks to centralized traffic control nodes; traveler information systems that will distribute traffic reports to travelers in automobiles, trucks, and their homes and offices; positioning systems that will allow vehicles to track and report their locations to centralized nodes; 911 emergency systems that will allow travelers to report problems, including their precise locations (currently a serious problem in cellular emergency calls); map delivery systems that will guide travelers to their destinations; and others that are less relevant to this report. These distributed systems and their associated appliances and applications will have to be reliable, secure, easy to use, and affordable in mass market applications.

Residential Information Services

In 1994 the sales of home-based personal computers equaled that of television sets ($8 billion) (Markoff, 1995). It is anticipated that, over the next decade and beyond, the use of residential multimedia computers to access information (education, health care, personal finance) and to shop for and purchase information and consumer products will become commonplace. In order for this vision to become a reality, residential applications must be intuitive and easy to use. There is an increasing awareness in government and industry that universal service will not be solely a matter of financial means but also a matter of the usability of information services and applications by those members of society who are not technologically oriented and have limited time to invest in learning how to use new technologies. Thus there is an ongoing, major R&D effort to achieve increasingly user-friendly graphical (and other) user interfaces and so-called plug-and-play capabilities.

For example, there is a large amount of commercial activity related to the design of set top boxes for interactive multimedia applications in the home. The terminology "set top box" refers to a piece of equipment, used in conjunction with a standard television set, that acts as an interface between the standard antenna input of the television set and an interactive or noninteractive multimedia communication service being provided via a coaxial cable, a pair of copper wires, a satellite or terrestrial microwave antenna, or an optical fiber. The set top box may contain powerful processing and information storage capabilities, and it may provide a sophisticated user interface that allows the user to do such things as navigate menus of available programs and other information and to interact with the application the user has selected.
Much of this current activity relates to the design of a user interface that is easy and intuitive to use for the more than 95 percent of the general population that owns television sets. In addition, since the upstream (user-to-network) bandwidth is very limited in many architectures for connecting end users to the information servers that provide multimedia to these set top boxes, yet the response time to user requests (e.g., program changes) must be very short, there is a big emphasis on maximizing performance in the context of bandwidth limitations.

LESSONS LEARNED IN THE COMMERCIAL WORLD

The major focus of this section is on lessons learned in the commercial world in the application of multimedia information technologies. These lessons support the committee's recommendations that appear later in this report. The following sources of lessons learned will be addressed: architecture, standards, vertical versus horizontal structures, leveraging commercial off-the-shelf (COTS) technology, how business meets special technology requirements, leveraging legacy investments and fostering rapid acceptance of information technology, and adopting a spiral model of development.

Architecture

Because we can observe its entire life cycle, the IBM System 360 serves as an excellent case history from which to draw a few essential lessons about architecture. In the late 1950s and early 1960s, IBM was facing a problem. IBM was fielding an ever-widening variety of systems, few of them compatible with one another and each separately optimized for a particular set of applications. Further, each system required a separate training regimen for IBM's field support staff, leading to very high maintenance costs.

To solve the growing problem, IBM's executives commissioned the design of a single, logical architecture from which an integrated family of systems could be built. The result was the now famously successful System 360 (and its follow-on, System 370) family of systems.

What are the lessons to be learned from this successful commercial experience with architecture? Fred Brooks, System 360 Development Manager, says (Brooks, 1975):

System 360 architects had two almost unprecedented advantages: enough time to work carefully, and political clout equal to that of the implementors. The provision of enough time came from the schedule of the new technology; the political equality came from the simultaneous construction of multiple implementations. The necessity for strict compatibility among these served as the best possible enforcing agent for the specifications.

Regarding the architecture design, Brooks writes:

I will contend that conceptual integrity is "the" most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have many good but independent and uncoordinated ideas. Conceptual integrity does require that a system reflect a single philosophy and that the specification as seen by the user flow from a few minds.

The principal lessons here are that creation of a communications and computing architecture requires that (a) a few resonant minds create the architecture, (b) they be given time to work, and (c) the architecture be enforced not only by edict but also by simultaneously constructing several of the system implementations that use the architecture.

The committee notes that cultural separations among existing functional groups, profit centers, divisions, etc., exist in all commercial companies and other institutions. Pride and esprit de corps within these are typically long-standing and well cultivated, and they typically have produced very positive results in the past. Unfortunately, they are also major obstacles to developing an integrated "enterprise" or "information" architecture. The challenge is to overcome these obstacles by taking steps like those taken at IBM in the context of System 360. Such successes are, to date, quite rare.

Standards

The commercial world places great value on the existence and widespread use of standards. Standards consist of sets of rules with which conformance to the standard can be evaluated. These rules can be applied at many layers in systems, ranging from physical connectors to the graphical user interfaces discussed elsewhere in this chapter.

Standards have the business advantage that, once defined, all commercial enterprises that wish to compete for the provision of components of an integrated system can exploit whatever competitive advantages they possess or can create without having to be vertically integrated suppliers of the end-to-end system. Thus, standards are pro-competitive. The consumer derives advantage from the fact that technologies adhering to a standard are interoperable. Interoperability means that one of a set of interoperable components can be procured or upgraded independently of the others. For example, all compact disc players use the same compact disc, although significantly different sampling schemes and signal processing technologies can be applied, resulting in a variety of consumer choices, from low quality to audiophile quality.
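The compact disc example can be made concrete with a small C++ sketch; the data format and class names below are hypothetical and illustrative only. The point is that once the data format is standardized, independently developed players from different vendors interoperate, each free to apply its own signal processing.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // The "standard": a fixed sample format that every conforming disc uses.
    struct DiscAudio {
        std::vector<int16_t> samples;   // 16-bit PCM, as on an audio CD
        int sampleRateHz = 44100;
    };

    // Any vendor may build a player, as long as it accepts the standard format.
    class Player {
    public:
        virtual ~Player() = default;
        virtual void play(const DiscAudio& disc) const = 0;
    };

    class BudgetPlayer : public Player {
    public:
        void play(const DiscAudio& disc) const override {
            std::cout << "Budget player: basic D/A conversion of "
                      << disc.samples.size() << " samples\n";
        }
    };

    class AudiophilePlayer : public Player {
    public:
        void play(const DiscAudio& disc) const override {
            std::cout << "Audiophile player: oversampling and noise shaping on "
                      << disc.samples.size() << " samples\n";
        }
    };

    int main() {
        DiscAudio disc{{0, 512, -512, 1024}, 44100};  // the same disc works in both players
        BudgetPlayer cheap;
        AudiophilePlayer fancy;
        cheap.play(disc);   // components adhering to the standard are interchangeable
        fancy.play(disc);
        return 0;
    }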
Industry standards emerge in two ways, which can be interrelated and often are. First is through the use of a standards body. The purpose of the standards body is to provide an impartial design and selection of a standard. The most effective standards bodies rely on groups of technical experts in an area to define a useful and effective standard. Examples are the Institute of Electrical and Electronics Engineers (IEEE), the International Standards Organization (ISO), and the ATM (Asynchronous Transfer Mode) Forum. IEEE standards usually relate to computer and communications devices and their functions. Examples include standard formats for computer representation of floating-point numbers (IEEE 754) and standard interfaces for a portable operating system (IEEE 1003, POSIX).

ISO standards include the Open Systems Interconnection (OSI) standard; this standard defines a multilayer protocol model which was carefully defined and accepted as a standard before implementation was begun. This latter case illustrates a risk with standardization by committee. The risk is that the committee will be bypassed by the second form of standardization, the de facto standard based on user preference. In the case of OSI, implementation of the Internet protocol described earlier in this chapter proceeded without a complete formal standardization process, and yet it has become the de facto standard for Internet communications.

De facto standards are a result of market dynamics. If a clear standard is not established when a company wishes to enter a market, it can either wait for a standards body to put forth a standard to which it will adhere, or it can take its own approach and presume that it will achieve sufficient market share to become one of a small set of accepted solutions. An example where this has occurred is in the design of command sets for asynchronous modems, where a manufacturer (Hayes) developed a command set that is a de facto standard. Such standards are sometimes developed as a byproduct of other

competitive advantages possessed by a company. In the Hayes case it was a flexible, microprocessor-augmented modem called the SmartModem, which was a huge commercial success; the Hayes command set has outlived the company. Once established, such standards are violated at considerable commercial risk.

Official standards and de facto standards can be the same if the official standard is available early enough so that companies see an advantage in adhering to it, or if the de facto standard becomes officially recognized by a standards body. The former case is exemplified by the ATM Forum, which specifies standards for a variety of protocol layers in ATM networks. The latter case, while pragmatic, can be fraught with difficulty because the company that originated the de facto standard may be given a further advantage by ratification of its technology as a standard. Standards bodies have traditionally been reluctant to ratify a situation that might, by giving advantage to a particular vendor, give the appearance that they may not be impartial, although recently there has been a trend toward the adoption of de facto standards by standards forums like the Open Software Foundation.

Companies address their concerns with standards by becoming active participants in standards bodies when technological standards may affect them or be positively influenced by their input. Companies put their concerns into the deliberative process of the standards body. For example, computer manufacturers were highly influential in the design of the ATM Adaptation Layer 5 standard, which allowed for the overlapped operation of checksum computation and data movement that is highly desirable in computer networking environments.

Vertical Versus Horizontal Industry Structures

In the first several decades of its existence, the computer industry was vertically integrated. Each firm (e.g., IBM, Digital Equipment Corporation) designed, developed, and sold all of the hardware and software needed by its customers to implement their computing applications. In the past 15 years, the computer industry has assumed a horizontal structure. Intel, Motorola, and others make microprocessors and memory chips. Compaq, IBM, Apple, and many others make personal computers and a wide variety of plug-in boards and peripherals. Microsoft, IBM, Apple, and others make operating systems. A large number of firms sell middleware and application software (The Economist, 1993).

The transition to a horizontal structure has been driven by several factors. Customers demanded open system solutions that would allow them to mix and match products from multiple suppliers; this necessitated the opening of interfaces, which allowed competing firms to sell horizontally structured products. Economies of scale and a very competitive marketplace made it necessary to focus on one's core strength and to sell into as large a market as possible.

This same transition from a vertical structure to a horizontal structure is affecting many other industries. Global competition is causing firms to focus on their differentiating advantages and to outsource what they can get better or cheaper from others. For example, an airline may determine that its reservation system should be a separate business rather than a vertically integrated part of a business that includes the component that actually flies passengers. The airline may also outsource its maintenance and meal preparation service.
It is not clear that each airline needs to maintain its own baggage handling staff. What to keep and what to outsource is a critical decision regarding where one wants to differentiate from competitors.

In the long distance telephone industry, competing firms have been differentiating themselves via the capabilities of their billing systems to support complex discount plans. It is conceivable that someday telephone companies will outsource their networks and differentiate themselves on the basis of marketing and customer support services.

A lesson learned is that to achieve superiority (beat the competition) in information-technology-intensive businesses, one should focus development efforts on areas where one intends to achieve a differentiating advantage and should outsource everything else.

Leveraging Commercial Off-the-Shelf Technology

The commercial telecommunications industry is one of the largest consumers of multimedia information technologies. It is therefore useful to examine recent trends within the telecommunications industry in leveraging COTS multimedia information technologies. Much can be learned from successful companies in this industry.

For example, MCI and SPRINT, two of the largest providers of inter-exchange ("long distance") telecommunications services ("carriers") in the United States, conduct only limited R&D activities. They focus on defining the overall architectures of the networks they wish to deploy and the associated management systems, and on tracking technology trends. They carefully determine how they wish to differentiate themselves from their competitors (e.g., in such areas as billing systems and customer service), and they commission the development of those differentiating capabilities using commercial off-the-shelf technologies (i.e., they focus on implementing applications of commercial off-the-shelf technologies, not the underlying technologies themselves).

The providers of cellular telecommunications services have relied on their suppliers to produce innovations in technology, while they (the providers) have focused on applying that technology in their networks. When members of the cellular telecommunications industry determine the need for a new capability (e.g., inter-network signaling to enable nationwide roaming), they call upon their supplier community to produce proposals for how this might be implemented. Cable television companies follow a similar strategy to that of the cellular companies, maintaining only a modest R&D effort focused on defining requirements for new system architectures and capabilities.

Recently, the local exchange carriers (Ameritech, Bell Atlantic, and others) have been moving their R&D focus more toward applications of technology and differentiation from their competitors based on lower cost structures and superior customer service enabled by the skilled application of commercial off-the-shelf technologies obtained from their suppliers. They are placing less emphasis on investing in the creation of the underlying technologies themselves and are relying instead on their suppliers to make those investments. However, they do spend considerable effort in understanding technology trends in order to anticipate both opportunities and competitive threats that might result from lower costs or new capabilities enabled by advances in underlying technologies in all of the layers of the generic technical architecture described earlier in this chapter.

In the telecommunications marketplace, a specific example of this approach involves the introduction of new fiber optic systems based on Synchronous Optical Network (SONET) standards. These systems are more cost-effective and more easily reconfigured than the prior generation of fiber optic systems. The supplier community produces these systems and makes them available to all carriers. The carriers focus on applying these systems in their evolving network architectures to reduce their costs and to obtain the benefits of more flexible and reliable networks. Where carriers attempt to differentiate themselves is in the use of management systems that allow them to be more responsive than their competitors in filling orders for new services that are carried on their networks and in quickly responding to service interruptions caused by cable cuts and equipment failures.

How Business Meets Special Technology Requirements

Business tends to solve problems using as much commercial technology as possible, since business is loath to engage in R&D to solve immediate problems. It is worth studying an example in detail to understand the approach.

A major investment bank, Morgan Stanley, needed a system to support trading operations in its New York City trading areas. The reliability requirements of the system were extremely high. The Morgan Stanley approach to this problem was at the system level (i.e., a system of systems to provide high reliability using commercial components). In this case, the commercial systems were redundant engineering workstations connected by dual Ethernet LANs. System software was written to automatically reroute work and network traffic in the case of failure. Thus the system was created from commercial technology using redundant commercial components in a nonstandard way.
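A minimal C++ sketch of this system-of-systems idea follows; the component names are hypothetical and are not drawn from Morgan Stanley's actual software. Work is routed to a primary workstation and automatically rerouted to a redundant one when the primary fails, which is essentially the "small amount of software" layered on top of duplicated commercial hardware.

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // One commercial, off-the-shelf workstation (or, equally, a network path).
    class Workstation {
    public:
        explicit Workstation(std::string name) : name_(std::move(name)) {}
        void fail() { up_ = false; }
        bool isUp() const { return up_; }
        void process(const std::string& trade) const {
            std::cout << name_ << " processed " << trade << "\n";
        }
    private:
        std::string name_;
        bool up_ = true;
    };

    // Route each unit of work to the first healthy workstation in the pool.
    class RedundantRouter {
    public:
        void add(Workstation& w) { pool_.push_back(&w); }
        bool route(const std::string& trade) const {
            for (Workstation* w : pool_) {
                if (w->isUp()) { w->process(trade); return true; }
            }
            std::cout << "no healthy workstation available\n";
            return false;
        }
    private:
        std::vector<Workstation*> pool_;
    };

    int main() {
        Workstation primary("primary");
        Workstation backup("backup");
        RedundantRouter router;
        router.add(primary);
        router.add(backup);

        router.route("trade #1");   // handled by the primary
        primary.fail();             // simulate a failure
        router.route("trade #2");   // automatically rerouted to the backup
        return 0;
    }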
The nonstandard result was almost exactly twice as expensive, but for this cost, plus a small amount of software and some management discipline, it achieved a multiplicative gain in reliability. Thus a somewhat ad hoc and opportunistic approach led to a solution that met Morgan Stanley's needs via the innovative application of COTS technologies. The key to success was in focusing on meeting the need, while leaving the solution (detailed requirements) flexible.

Leveraging Legacy Investments and Fostering Rapid Acceptance of Information Technology

Corporations and institutions have been deploying computer-based systems and applications for 40 years. These systems are based on a wide variety of diverse technologies and architectures and were typically not designed to interoperate with each other in the context of an overall enterprise-wide architecture. Collections of such systems, which represent an embedded investment by the organization or enterprise, are typically referred to as "legacy systems." The issue of what to do with legacy systems is an old one in the commercial world, but it is growing in importance as the number and complexity of legacy systems increase and as the accompanying maintenance costs and update backlog grow. In addition, the allure of more modern systems with updated technologies has made the weaknesses of legacy systems more prominent.

The technical problem of designing a new system to replace a legacy system is usually the easiest part of the problem. Much more difficult are the cost justification of replacement, the management of risk (at first, the new system might not work as well as the old), and the reluctance of users and system operators to learn the way a new system works. On the other hand, most engineers prefer to work on new-system design rather than upgrading old systems, and legacy-system expertise becomes more and more scarce as time goes by.

While there is no single preferred method of dealing with the legacy system dilemma, the following are suggested alternatives.

Alternative 1

The first alternative would develop a new-technology, wholesale replacement for the legacy system, with no change in functionality or user interface. This approach has the advantages that the requirements may be well understood (see below) and there is minimal retraining for end users. Ostensibly there will be attractive future savings in maintenance costs, and the new system will accept upgrades more quickly and gracefully.

The difficulty is that, in any given year, it is always cheaper to carry the legacy system a bit further than to undertake a new development. In addition, all of the requirements that are being met by the old system may not be well documented. Therefore, the new system may initially fall short of meeting all current business requirements. In addition, for large systems, such "big bang" approaches to the replacement of legacy systems have almost always failed to meet schedules and budgets and have often resulted in major project failures where hundreds of millions of dollars of development have been "written off."

Alternative 2

This alternative would develop a new-technology replacement for the legacy system, with new features and capabilities. This approach is similar to Alternative 1 above, except it has the additional advantage of offering new features that may answer long-standing requests for legacy system upgrades. Such new features may add risk, delay, cost, and new user-training requirements.

Alternative 3

The third alternative would freeze changes to the legacy system and "surround" or encapsulate it within a new system (a minimal code sketch of this approach appears after the list of alternatives). Over time, legacy system functions can be replaced by new-technology elements until the legacy system is totally replaced. This approach has the advantage of leveraging capabilities already present in the legacy system without making further direct investments in it. It has the disadvantage that few legacy systems can be subsumed easily within a new system. An example of this approach is to make existing legacy system data accessible via modern graphical user interfaces, which can access multiple legacy systems and new systems in an intuitive, easy-to-use manner. This approach has been successfully employed to transition large legacy systems used to manage and automate telephone company operations.

Alternative 4

The last alternative would (a) continue to use and maintain a legacy system but "cap" the number of users, and (b) develop a new system for new users (or some subset of the old users) and develop interworking arrangements with the legacy system as required. This approach has the advantage of limiting the expansion of the legacy system while simultaneously limiting the risk associated with wholesale replacements. If the new systems truly offer lower costs and increased capabilities, then it becomes easier to plan for the legacy system replacement because the benefits will be known in advance. This approach has the disadvantage that it may not be appropriate for large, tightly integrated systems. In particular, the interworking problems with the legacy system could be substantial.

Deciding which path to pursue is ultimately based on such things as cost-benefit trades and the culture of the organization facing the problem.
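As a concrete illustration of the encapsulation approach in Alternative 3, the following C++ sketch uses invented names rather than any fielded system. A frozen legacy component is "surrounded" by a new, stable interface, so new applications never touch the legacy code directly; when a new-technology element eventually replaces it, only the adapter changes.

    #include <iostream>
    #include <string>

    // Frozen legacy component: no further direct investment is made in it.
    class LegacyCustomerDatabase {
    public:
        std::string fetchRecord(int id) const {
            return "legacy record " + std::to_string(id);
        }
    };

    // The new, stable interface that all new applications are written against.
    class CustomerService {
    public:
        virtual ~CustomerService() = default;
        virtual std::string customer(int id) const = 0;
    };

    // Encapsulation: the legacy system is wrapped behind the new interface.
    // Replacing the legacy system later means replacing only this adapter.
    class LegacyCustomerAdapter : public CustomerService {
    public:
        std::string customer(int id) const override {
            return legacy_.fetchRecord(id);
        }
    private:
        LegacyCustomerDatabase legacy_;
    };

    int main() {
        LegacyCustomerAdapter service;
        // New applications see only the modern interface.
        std::cout << service.customer(42) << "\n";
        return 0;
    }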
In any given budget year, it is almost always cheaper and politically safer to "get one more year" out of a legacy system than to attempt replacing it. Alternatives 3 and 4 above can be used to control risk, but ultimately it takes farsighted managers who encourage risk taking by subordinates to pursue a legacy system replacement program.

Adopting a Spiral Model

In recent years, industry has moved from its traditional model of software development, sometimes pejoratively referred to as a "waterfall" model, to a new model of software development referred to as a "spiral" model (Boehm, 1987).

In the traditional waterfall model, development proceeds in one sequence through the following phases: system requirements specification, system design, software coding, and system testing, with any problems found in system testing generally repaired by iterating back to the design or coding phases. The waterfall metaphor derives from the one-way flow of this process down a sequence of, for the most part, irreversible steps.

The difficulty with this process is that, in complex systems, requirements that are set early on may not adequately capture the needs of real users. In addition, some requirements may imply development difficulties and corresponding costs that are out of proportion to

their user benefits. Those who formulate the requirements may not be aware of the latest emerging technologies and their associated or potential capabilities, and thus they may specify requirements that cannot leverage these capabilities. As a result, large systems may be developed that fail to meet user needs, take longer to develop, and are more costly than necessary.

To address this problem, a "spiral" model of development has been adopted by most developers of large, complex software systems. In the spiral model, one iterates quickly through a cycle of requirements specification, development of a prototype that captures the most important aspects of the requirements (prototyping), and testing with real users. In this iterative process, one can quickly discover user needs that are not met (e.g., the system is hard for real users to use), and one can quickly discover requirements that drive cost and total development time out of proportion to their intended benefits. The spiral metaphor derives from the rapid cycling that occurs through the phases of requirements specification (and respecification), prototype development, and testing.

Experience shows that the spiral model of development leads to lower development costs, more rapid development, and substantially greater satisfaction of real user needs. Key to this process is the use of prototypes that simulate the most important aspects of the system under development but do not implement all of the detailed requirements on each cycle through the spiral model. As an illustration, an early mock-up of a user interface could be done with something as simple as Post-it notes stuck on a board to simulate pull-down menus. A simulation of a database access capability need not be connected to the real database system. It could, instead, be connected to a simulated database system that imitates the delays that will occur in returning an answer to a query and illustrates how the answer will be presented to the user.
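A prototype stub of this kind might look like the following C++ sketch, which is hypothetical and intended only to show how little code such a simulation requires: it returns a canned answer after an artificial delay, letting real users exercise the interface long before a real database exists.

    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    // Stand-in for the real database: returns a canned answer after a delay
    // comparable to what the production system is expected to exhibit.
    class SimulatedDatabase {
    public:
        explicit SimulatedDatabase(std::chrono::milliseconds delay) : delay_(delay) {}

        std::string query(const std::string& request) const {
            std::this_thread::sleep_for(delay_);   // imitate response latency
            return "canned answer for \"" + request + "\"";
        }

    private:
        std::chrono::milliseconds delay_;
    };

    int main() {
        SimulatedDatabase db(std::chrono::milliseconds(750));  // assumed 750 ms latency
        // The prototype user interface calls the stub exactly as it would call the
        // real system, so users can judge responsiveness and presentation early.
        std::cout << db.query("unit locations near checkpoint 4") << "\n";
        return 0;
    }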
Process Improvement

For all its importance, the production of software, especially large-scale system software, is still as much art as it is science. To address the problem, the Software Engineering Institute (SEI) of Carnegie Mellon University developed a Capability Maturity Model for software organizations wishing to improve their proficiency (Humphrey, 1989). The approach provides an explicit road map for change and a way for an organization to keep score on its progress.

Specifically, the SEI Capability Maturity Model allows an organization to rate itself and track its progress through five successive "levels" of proficiency. Level 1, the lowest level, is characterized by chaos and unpredictability in cost, schedule, and quality. Level 5, the highest level, is one in which cost, schedule, and quality have become highly predictable based on quantitative, repeatable measurements and well-established procedures. The intermediate levels allow an organization to track its evolution toward Level 5. The SEI Capability Maturity Model has become well established in the software industry. Most large software organizations conduct self-evaluations, and many are evaluated by outside consultants who specialize in doing so.

The approach the SEI took is quite general; it is based on the writings of P. B. Crosby and the "quality maturity structure" that he defined (Crosby, 1979). The fundamental (and common sense) notion taught by Crosby is that an organization wishing to make a positive change in the way it does business "must" find a way to treat its processes as measurable, trackable, and controllable.

SUMMARY

This chapter has outlined commercial multimedia technologies to provide support for the analysis contained in Chapter 4. The approach was to examine building block technologies selected on the basis of a generic layered architecture, which was introduced at the beginning of this chapter. The intent was to describe each of these building blocks, with a focus on their current status and likely trends.

In addition, there was discussion of examples of commercial, system-level applications of multimedia technologies. Finally, there was a review of some important lessons learned in the commercial world with respect to these technologies.

This chapter has shown that multimedia information technologies and the capabilities they enable are evolving rapidly under the pressure of commercial market forces and underlying technological advances. This status bodes well for the availability of solutions from the commercial world, which will be addressed in Chapter 4.

REFERENCES

ATM Forum. 1995. Universal Resource Locator (URL): http://www.atmforum.com/atmforum/atm_basics/. 04/28/95.
Balmer, D. 1986. CASM: The right environment for simulation. Journal of the Operational Research Society 37:443~52.
Becker, H. 1995. Library of Congress digital library effort. Communications of the ACM 38(4):23-28.
Bell, T. 1995. Technology 1995. IEEE Spectrum 32(1):24-25.
Boehm, B. W. 1987. Software Engineering Economics. Englewood Cliffs, N.J.: Prentice-Hall.

Borman, D. A. 1989. Implementing TCP/IP on a Cray computer. ACM Computer Communications Review 19(2):11-15.
Bouwens, C., J. Brann, B. Butler, S. Knight, J. Lethert, M. McAuliffe, B. McDonald, D. Miller, D. Pace, B. Sottilare, and K. Williams. 1993. The DIS vision: A map to the future of distributed simulation (Comment Draft). Institute for Simulation and Training (prepared by the DIS Steering Committee). October.
Brooks, F. P., Jr. 1975. The Mythical Man-Month. Reading, Mass.: Addison-Wesley.
Casner, S., and S. Deering. 1992. First IETF (Internet Engineering Task Force) internet audiocast. ACM Computer Communication Review 22(3):92-97.
CDMA. 1994. Global mobile satellite systems comparison. CDMA Technology Forum, San Diego, March.
Crosby, P. B. 1979. Quality Is Free: The Art of Making Quality Certain. New York: McGraw-Hill.
DoD Modeling and Simulation (M&S) Management. 1994. DoD 5000.59. Washington, D.C.: Office of the Under Secretary of Defense (Acquisition).
The Economist. 1993. Reboot system and start again. 326(7800). February 27.
Fox, E., R. Akscyn, R. Furuta, and J. Leggett. 1995. Digital libraries. Communications of the ACM 38(4):2~28.
Geppert, L. 1995. Solid state. IEEE Spectrum 32(1):35-39.
Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, Mass.: Addison-Wesley.
Halfhill, T. 1993. PDA's arrive. Byte 18(11):66~6.
Hammerstrom, D. 1993. Working with neural networks. IEEE Spectrum 30(7):46-53.
Hayes-Roth, F., et al., eds. 1983. Building Expert Systems. Reading, Mass.: Addison-Wesley.
Henriksen, J. 1983. The integrated simulation environment. Operations Research 31:105~1073.
Humphrey, W. S. 1989. Managing the Software Process. Reading, Mass.: Addison-Wesley.
IEEE (Institute of Electrical and Electronics Engineers). 1990. Special Issue on Satellite Communications. Proceedings of the IEEE 78(7).
IEEE. 1995. Special Issue on Wireless Personal Communications. IEEE Communications Magazine 33(1).
IIT. 1995. Ada at work. IIT Research Institute, Lanham, Md. Prepared for the Ada Joint Program Office, Arlington, Va. 22041. January.
Ivanek, F., ed. 1989. Terrestrial Digital Microwave Communications. Norwood, Mass.: Artech House.
IVHS (Intelligent Vehicle Highway Society of America). 1992. Strategic plan for intelligent vehicle highway systems in the United States. IVHS America [now ITS America]. 2(5).
Juliessen, E. 1995. Small computers. IEEE Spectrum 32(1):35-39.
Leeper, D. 1995. Motorola Market Research Data. (Unpublished internal company reports.)
Macedonia, M. R. 1994. MBone (Multicast Backbone) provides audio and video over the Internet. Computer 27(4):3~36.
Manders, C., and W. Wu. 1991. A Performance Measure for ISDN. ITU Telecom 91 Technical Proceedings, Geneva, October.
Marefat, M. 1993. Virtual Teaching Environments: A Framework, Current Bottlenecks, and Research Vision. Technical Report. ECE Department, University of Arizona.
Markoff, J. 1995. Approaching a digital milestone. New York Times. January 7.
Murray, K., and S. Shepherd. 1987. Automatic synthesis using automatic programming and expert systems techniques toward simulation modeling. Proceedings of the Winter Simulation Conference, Institute of Electrical and Electronics Engineers, New York.
Nahrstedt, K., and R. Steinmetz. 1995. Resource management in networked multimedia systems. IEEE Computer 28(5):52~3.
Oren, T., and B. P. Zeigler. 1979. Concepts for advanced simulation methodologies. Simulation 32(3):69~2.
Padgett, J. E., C. G. Gunther, and T. Hattori. 1995. Overview of wireless personal communications. IEEE Communications Magazine 33(1):2~41.
Purday, J. 1995. The British Library initiatives for access projects. Communications of the ACM 38(4):2~28.
Reddy, Y., M. S. Fox, N. Husain, and M. Roberts. 1986. The knowledge-based simulation system. IEEE Software Engineering 3(2):26-37.
Rozenblit, J. W., J. Hu, T. Gon Kim, and B. P. Zeigler. 1990. Knowledge-based design and simulation environment (KBDSE): Foundational concepts and implementation. Journal of the Operational Research Society 41(6):475~89.
Ruiz-Mier, S., and J. Talavage. 1989. A hybrid paradigm for modeling of complex systems. In Artificial Intelligence, Simulation and Modeling. New York: Wiley Publishers.
Shokoohi, F. 1995. Personal communication to S. D. Personick, Chairman, Committee on Future Technologies for Army Multimedia Communications.
Steinmetz, R., and K. Nahrstedt. 1995. Multimedia: Computing, Communications, and Applications. Englewood Cliffs, N.J.: Prentice-Hall.
Tanir, O., and S. Sevinc. 1994. Defining requirements for a standard simulation environment. Computer 27(2):2~34.
Wallace, G. K. 1991. The JPEG still picture compression standard. Communications of the ACM (Association for Computing Machinery) 34(4):31~4.
Werner, K. 1993. The flat panel's future. IEEE Spectrum 30(11):1~26.
Zeigler, B. P. 1990. Object-Oriented Simulation with Hierarchical, Modular Models. San Diego: Academic Press.
Zeigler, B. P., S. Vahie, and D. Kim. 1994. Alternative Analysis for Computational Holon Architectures. Bolt, Beranek and Newman Technical Report. Cambridge, Mass.
Zhang, L., S. Deering, D. Estrin, S. Shenker, and D. Zappala. 1993. RSVP: A new resource ReSerVation protocol. IEEE Network 7(5):8-18.