The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
6 What Is Computer Science and Engineering?

Chapter 1 provided a brief sketch of computer science and engineering as an intellectual discipline. This chapter elaborates on that discussion, discusses some key structural features of the field, and provides some history on some of the major intellectual accomplishments of the field in a few selected areas. For the reader's convenience, the Chapter 1 section "Computer Science and Engineering" is reproduced in its entirety here.

COMPUTER SCIENCE AND ENGINEERING

Computational power, however measured, has increased dramatically in the last several decades. What is the source of this increase? The contributions of solid-state physicists and materials scientists to the increase of computer power are undeniable; their efforts have made successive generations of electronic components ever smaller, faster, lighter, and cheaper. But the ability to organize these components into useful computer hardware (e.g., processors, storage devices, displays) and to write the software required (e.g., spreadsheets, electronic mail packages, databases) to exploit this hardware are primarily the fruits of CS&E. Further advances in computer power and usability will also depend in large part on pushing back the frontiers of CS&E.

COMPUTING THE FUTURE

Intellectually, the "science" in "computer science and engineering" connotes understanding of computing activities, through mathematical and engineering models and based on theory and abstraction. The term "engineering" in "computer science and engineering" refers to the practical application, based on abstraction and design, of the scientific principles and methodologies to the development and maintenance of computer systems, be they composed of hardware, software, or both. Thus both science and engineering characterize the approach of CS&E professionals to their object of study.

What is the object of study? For the physicist, the object of study may be an atom or a star. For the biologist, it may be a cell or a plant. But computer scientists and engineers focus on information, on the ways of representing and processing information, and on the machines and systems that perform these tasks.

The key intellectual themes in CS&E are algorithmic thinking, the representation of information, and computer programs. An algorithm is an unambiguous sequence of steps for processing information, and computer scientists and engineers tend to believe in an algorithmic approach to solving problems. In the words of Donald Knuth, one of the leaders of CS&E:

CS&E is a field that attracts a different kind of thinker. I believe that one who is a natural computer scientist thinks algorithmically. Such people are especially good at dealing with situations where different rules apply in different cases; they are individuals who can rapidly change levels of abstraction, simultaneously seeing things "in the large" and "in the small."2

The second key theme is the selection of appropriate representations of information; indeed, designing data structures is often the first step in designing an algorithm.
Much as with physics, where picking the right frame of reference and right coordinate system is critical to a simple solution, picking one data structure or another can make a problem easy or hard, its solution slow or fast. The issues are twofold: (1) how should the abstraction be represented, and (2) how should the representation be properly structured to allow efficient access for common operations?

A classic example is the problem of representing parts, suppliers, and customers. Each of these entities is represented by its attributes (e.g., a customer has a name, an address, a billing number, and so on). Each supplier has a price list, and each customer has a set of outstanding orders to each supplier. Thus there are five record types: parts, suppliers, customers, prices, and orders. The problem is to organize the data so that it is easy to answer questions like: Which supplier has the lowest price

for part P?, or, Who is the largest customer of supplier S? By clustering related data together, and by constructing auxiliary indices on the data, it becomes possible to answer such questions quickly without having to search the entire database.

The two examples below also illustrate the importance of proper representation of information:

A "white pages" telephone directory is arranged by name: knowing the name, it is possible to look up a telephone number. But a "criss-cross" directory that is arranged by number is necessary when one needs to identify the caller associated with a given number. Each directory contains the same information, but the different structuring of the information makes each directory useful in its own way.

A circle can be represented by an equation or by a set of points. A circle to be drawn on a display screen may be more conveniently represented as a set of points, whereas an equation may be a better representation if a problem calls for determining if a given point lies inside or outside the circle.

A computer program expresses algorithms and structures information using a programming language. Such languages provide a way to represent an algorithm precisely enough that a "high-level" description (i.e., one that is easily understood by humans) can be mechanically translated ("compiled") into a "low-level" version that the computer can carry out. The execution of a program by a computer is what allows the algorithm to come alive, instructing the computer to perform the tasks the person has requested. Computer programs are thus the essential link between intellectual constructs such as algorithms and information representations and the computers that enable the information revolution.
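The clustering-and-index idea can be made concrete with a short sketch in modern notation (Python). The record fields, sample data, and function name here are invented for illustration; the point is only that an auxiliary index keyed by part answers the lowest-price question without a scan of the entire database.

```python
# Sketch: an auxiliary index answers "which supplier has the lowest
# price for part P?" by examining only the relevant cluster of records.
# (Field names and sample data are invented for illustration.)

price_list = [
    {"supplier": "Acme", "part": "bolt", "price": 0.10},
    {"supplier": "Acme", "part": "nut", "price": 0.05},
    {"supplier": "Zenith", "part": "bolt", "price": 0.08},
]

# Cluster related data together: group all price records by part.
index = {}
for record in price_list:
    index.setdefault(record["part"], []).append(
        (record["price"], record["supplier"])
    )

def cheapest_supplier(part):
    # Consult only the cluster for this part, not the whole list.
    return min(index[part])[1]

print(cheapest_supplier("bolt"))  # Zenith
```

The same data supports the query either way; the index merely restructures it so that the common operation is fast, exactly as with the two telephone directories.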
Computer programs enable the computer scientist and engineer to feel the excitement of seeing something spring to life from the "mind's eye" and of creating information artifacts that have considerable practical utility for people in all walks of life. Fred Brooks has captured the excitement of programming:

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds castles in the air, creating by the exertion of the imagination.... Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.3

Programmers are in equal portions playwright and puppeteer, working as a novelist would if he could make his characters come to life simply by touching the keys of his typewriter. As Ivan Sutherland, the father of computer graphics, has said,

Through computer displays I have landed an airplane on the deck of a moving carrier, observed a nuclear particle hit a potential well, flown in a rocket at nearly the speed of light, and watched a computer reveal its innermost workings.4

Programming is an enormously challenging intellectual activity. Apart from deciding on appropriate algorithms and representations of information, perhaps the most fundamental issue in developing computer programs arises from the fact that the computer (unlike other similar devices such as non-programmable calculators) has the ability to take different courses of action based on the outcome of various decisions. Here are three examples of decisions that programmers convey to a computer:

Find a particular name in a list and dial the telephone number associated with it.

If this point lies within this circle, then color it black; otherwise color it white.

While the input data are greater than zero, display them on the screen.

When a program does not involve such decisions, the exact sequence of steps (i.e., the "execution path") is known in advance. But in a program that involves many such decisions, the sequence of steps cannot be known in advance. Thus the programmer must anticipate all possible execution paths. The problem is that the number of possible paths grows very rapidly with the number of decisions: a program with only 10 "yes" or "no" decisions can have over 1000 possible paths, and one with 20 such decisions can have over 1 million.

Algorithmic thinking, information representation, and computer programs are themes central to all subfields of CS&E research. Box 6.1 illustrates a typical taxonomy of these subfields.
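The second and third decisions above, and the doubling of execution paths with each yes/no decision, might be sketched as follows (the function names and sample data are invented for illustration):

```python
# Sketch of two of the decisions described in the text.
# (Function names and data are invented for illustration.)

def classify(point, center, radius):
    # "If this point lies within this circle, then color it black;
    # otherwise color it white."
    x, y = point
    cx, cy = center
    inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    return "black" if inside else "white"

def show_positive(data):
    # "While the input data are greater than zero, display them."
    shown = []
    for value in data:
        if value <= 0:
            break
        shown.append(value)
    return shown

# Each yes/no decision doubles the number of possible execution paths:
# 10 decisions allow 2**10 = 1024 paths, 20 allow 2**20 = 1048576.
print(2 ** 10, 2 ** 20)  # 1024 1048576
```

Each call to `classify` or `show_positive` follows one path among many; the programmer must reason about all of them in advance.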
Consider the subarea of computer architecture. Computer engineers must have a basic understanding of the algorithms that will be executed on the computers they design, as illustrated by today's designers of parallel and concurrent computers. Indeed, computer engineers are faced with many decisions that involve the selection of appropriate algorithms, since any programmable algorithm can be implemented in hardware. Through a better understanding of algorithms, computer

engineers can better optimize the match between their hardware and the programs that will run on them.

[Box 6.1, a taxonomy of CS&E subfields that includes programming languages and computer architecture, is not legible in the scanned text.]

Those who design computer languages (item two in Box 6.1) with which people write programs also concern themselves with algorithms and information representation. Computer languages often differ in the ease with which various types of algorithms can be expressed and in their ability to represent different types of information. For example, a computer language such as Fortran is particularly convenient for implementing iterative algorithms for numerical calculation, whereas Cobol may be much more convenient for problems that call for the manipulation and the input and output of large amounts of textual data. The language Lisp is useful for manipulating symbolic relations, while Ada is specifically designed for "embedded" computing problems (e.g., real-time flight control).

The themes of algorithms, programs, and information representation also provide material for intellectual study in and of themselves,

often with important practical results. The study of algorithms within CS&E is as challenging as any area of mathematics; it has practical importance as well, since improperly chosen algorithms may solve problems in a highly inefficient manner, and problems can have intrinsic limits on how many steps are needed to solve them (Box 6.2). The study of programs is a broad area, ranging from the highly formal study of mathematically proving programs correct to very practical considerations regarding tools with which to specify, write, debug, maintain, and modify very large software systems (otherwise called software engineering). Information representation is the central theme underlying the study of data structures (how information can best be represented for computer processing) and much of human-computer interaction (how information can best be represented to maximize its utility for human beings).

ABSTRACTIONS IN COMPUTER SYSTEMS

While algorithmic thinking, computer programs, and information representation are the key intellectual themes in the study of CS&E, the design, construction, and operation of computer systems require talents from a wide range of fields, such as electrical engineering for hardware, logic and mathematical analysis for writing programs, and psychology for the design of better user interfaces, to name just a few. This breadth reflects the fact that computer systems are among

the most complicated objects ever created by human beings. For example, the fastest computers today have around 100 billion transistors, while the personal computer on one's desk may have "only" a few million. People routinely use computer programs that are hundreds of thousands of lines long, and the largest software systems involve tens of millions of lines; printed at 50 lines per paper page, such a system might weigh several tons.

One of the most effective ways to cope with such complexity is to use abstractions. Abstraction is a generic technique that allows the human scientist or engineer to focus only on certain features of an object or artifact while hiding the others. However, while scientists in other disciplines typically use abstractions as a way to simplify calculations for purposes of analysis, those in CS&E are concerned with abstractions for purposes of synthesis: to build working computer systems. Other engineering disciplines also use abstractions as the basis of synthesis, but the "stuff" of these disciplines (engineered and created artifacts) is ultimately governed by the tangible reality of nature, which itself imposes structure on these abstractions. Computer programs are not similarly limited; instead, they are built out of ideas and information whose structuring is constrained only by human imagination. This extraordinary flexibility in structuring information has no analog in the material world.

The focus of the computer scientist or engineer in creating an abstraction is to hide the complexity of operation "underneath the abstraction," while offering a simple and useful set of services "on top of it." Using such abstractions is CS&E's principal technique for organizing and constructing very sophisticated computer systems. One particularly useful abstraction uses hardware, system software, and application software as successive layers on which useful computer systems can be built.
At the center of all computer systems is hardware, i.e., the portion of a computer system that one can see and touch. Hardware is divided into three principal components:

Processors. Processors perform the arithmetic and logical operations, much like the arithmetic operations available in a hand-held calculator. Processors also handle conditional behavior, executing one or another set of operations depending on the outcome of some decision.

Memory (short term and long term). Memory hardware is like the memory function of a calculator, since it can be used to save and later retrieve information. Such information may be lost in a calculator when the power is turned off, as it is in the short-term memory of

most computer systems. Computer systems also need a long-term memory that doesn't forget when the power is lost.

Communication (user-machine and machine-machine). For computers to be useful, they must be able to communicate with people, and so display screens and keyboards are critical components of computer systems. For maximum usefulness, computers must be able to communicate with other computers, and so computers are often equipped with modems or other network connections that can transmit and receive data to and from other computers.

Readers familiar with personal computers are likely to have encountered the technical names or brand names for these three types of hardware. Personal computers might use an Intel 80486 processor as the processor hardware, dynamic random access memory (DRAM) as the short-term memory hardware, and disks as the long-term memory hardware. The user-computer communication hardware for personal computers is the keyboard, mouse, and video screen, while examples of machine-machine communication hardware are telephone modems and networks such as Appletalk and Ethernet.

While the characteristics of the underlying hardware are what ultimately determine the computational power of a computer system, direct manipulation of hardware would be cumbersome, difficult, and error-prone, a lesson learned in the earliest days of computing when programmers literally connected wires to program a computer. Thus computer scientists and engineers construct a layer of "system software" around the hardware.

System software hides the details of machine operation from the user, while providing services that most users will require, services such as displaying information on a screen, reading and writing information to and from a disk drive, and so on. System software is commonly divided into three components:

Operating system: the software that controls the hardware and orchestrates how other programs work together.
The operating system may also include network software that allows computers to communicate with one another.

Tools: the software (e.g., compilers, debuggers, linkers, database management systems) that allows programmers to write programs that will perform a specific task. Compilers and linkers translate "high-level" languages into machine language, i.e., the ones and zeros that govern machine operation at the lowest level. Debuggers help programmers to find errors in their work. Database management systems store, organize, and retrieve data conveniently.

User interface: the software that enables the user to interact with the machine.
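The layering of hardware, system software, and applications software might be sketched, in highly simplified form, as follows (all class and function names here are invented for illustration; each layer asks the layer beneath it to do something without knowing how it is done):

```python
# Sketch of successive layers: a "hardware" disk, a "system software"
# file system, and an "application" that saves a document.
# (All names are invented for illustration.)

class Disk:                          # hardware layer
    def __init__(self):
        self.blocks = {}
    def write_block(self, number, data):
        self.blocks[number] = data

class FileSystem:                    # system software layer
    def __init__(self, disk):
        self.disk = disk
        self.directory = {}          # file name -> block number
        self.next_block = 0
    def write_file(self, name, text):
        # Hides block allocation and disk details from applications.
        self.disk.write_block(self.next_block, text)
        self.directory[name] = self.next_block
        self.next_block += 1

def save_document(fs, name, text):   # application layer
    # The application asks *that* the document be saved, not *how*.
    fs.write_file(name, text)

fs = FileSystem(Disk())
save_document(fs, "letter.txt", "Dear reader...")
print(fs.directory)  # {'letter.txt': 0}
```

The application never touches disk blocks; the file system never decides what counts as a document. Each layer offers services on top while hiding operation underneath.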

WHAT IS COMPUTER SCIENCE AND ENGINEERING ? 171 Once again, personal computer users are likely already familiar with the brand names of these pieces of system software. MS-DOS in IBM Personal Computers and the System in Macintoshes are exam- ples of operating system software; Novell is the brand name of one type of widely used networking software; Dbase IV is a popular da- tabase management system; Microsoft Basic and Borland C are exam- ples of compiler system software; and Microsoft Windows and the Macintosh Desktop are examples of user interface system software. While the services provided by system software are usually re- quired by all users, they are not in themselves sufficient to provide computing that is useful in solving the problems of the end user, such as the secretary or the accountant or the pilot. Software de- signed to solve specific problems is called applications software; ex- amples include word processors, spreadsheets, climate models, au- tomatic teller machines, electronic mail, airline reservation systems, engineering structural analysis programs, real-time aircraft control systems, and so on. Such software makes use of the services provid- ed by system software as fundamental building blocks linked togeth- er in such a way that useful computing can be done. Examples of applications software include WordPerfect and Word (word-process- ing applications) and Lotus 1-2-3 and Excel (spreadsheet applications), and there are as many varieties of applications software as there are different computing problems to solve. The frequent use of system software services by all varieties of applications software underscores an important economic point: pro- viding these services in system software means that developers of applications software need not spend their time and effort in devel- oping these services, but rather can concentrate on programming that is specifically related to solving the problem of interest to the end user. 
The result is that it becomes much easier to develop applications, leading to more and higher-quality computing applications than might otherwise be expected.

This description of layering a computer system into hardware, system software, and applications software is a simple example of abstraction.5 (Box 6.3 explains why abstraction is so powerful a tool for the computer scientist and engineer.) But it suffices to illustrate one very important use of abstraction in computer systems: each layer provides the capability to specify that certain tasks be carried out without specifying how they should be carried out. In general, computing artifacts embody many different abstractions that capture many different levels of detail.

A good abstraction is one that captures the important features of an artifact and allows the user to ignore the irrelevant ones. (The features decided to be important collectively constitute the interface


of the artifact to the outside world.) By hiding details, an abstraction can make working with an artifact easier and less subject to error. But hiding details is not cost free: in a particular programming problem, access to a hidden detail might in fact be quite useful to the person who will use that abstraction. Thus deciding how to construct an abstraction (i.e., deciding what is important and irrelevant) is one of the most challenging intellectual issues in CS&E.

A simple example of this issue is the writing of a system program that displays data on a screen. The program may allow the user to specify the size of the screen, or it may assume that "one screen size fits all." In the first case, the user is given control over the screen size, at the cost of having to remember to specify it every time this program is used. In the second case, the user does not need to choose the screen size, but also loses the flexibility to use the program on screens of different size.

A second challenging issue is how to manage all of the details that are hidden. The fact that they are hidden beneath the interface does not mean that they are irrelevant, but only that the computer scientist or engineer must design and implement approaches to handle these details "automatically," i.e., without external specification. Decisions about how best to handle these details are subject to numerous trade-offs. For example, a designer of computer systems may face the question of whether to implement a certain function in hardware or in software. By implementing the function in hardware, the designer gains speed of execution, but at the cost of making the function very difficult to change. A function implemented in software executes more slowly, but is much easier to change.

Abstractions enable computer scientists and engineers to deal with large differences of scale.
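The screen-size trade-off can be sketched directly (the function names, the default width of 80, and the line data are invented for illustration):

```python
# Sketch of the screen-size trade-off: hiding the detail keeps the
# interface simple; exposing it gives the user control.
# (Names, the default width, and data are invented for illustration.)

def display_fixed(text):
    # "One screen size fits all": the width is hidden and unchangeable.
    width = 80
    return [text[i:i + width] for i in range(0, len(text), width)]

def display(text, width=80):
    # Exposed with a default: the common case stays simple, but a user
    # with an unusual screen may still specify the hidden detail.
    return [text[i:i + width] for i in range(0, len(text), width)]

line = "x" * 100
print(len(display_fixed(line)), len(display(line, width=40)))  # 2 3
```

A default parameter is one common compromise: the detail is hidden from users who do not care and available to those who do.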
At the highest level of abstraction, a person editing a document in a word-processing program needs only to mark the start and the end of the block of text and then press the DEL key to delete the block of text. But these few keystrokes can initiate the execution of thousands or tens of thousands of basic instructions in the machine's hardware. Only by inserting abstractions intermediate between the user's keystrokes and the basic machine instructions can those keystrokes be predictably translated into the correct set of instructions. Thus the programmer who writes the word-processing program will provide one abstraction (in this case called a subroutine) that will delete the block from the screen, a second to reformat and redisplay the remaining text, and a third to save the changed document to disk. Within each of these abstractions will be lower-level abstractions that perform smaller tasks; each succes-
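The word-processing example can be sketched as three cooperating subroutines (all names, the line width, and the sample text are invented for illustration):

```python
# Sketch: one DEL keystroke invokes a high-level subroutine, which
# delegates to three lower-level ones. (All names and behavior are
# invented for illustration.)

def remove_text(document, start, end):
    # Lower level: delete the marked block of text.
    return document[:start] + document[end:]

def reformat(document, width=20):
    # Lower level: break the remaining text into screen lines.
    return [document[i:i + width] for i in range(0, len(document), width)]

saved = []
def save_to_disk(document):
    # Lower level: stand-in for writing the changed document to disk.
    saved.append(document)

def delete_block(document, start, end):
    # Higher level: what the DEL keystroke ultimately invokes.
    remaining = remove_text(document, start, end)
    screen = reformat(remaining)
    save_to_disk(remaining)
    return remaining, screen

text, screen = delete_block("Hello, cruel world!", 6, 12)
print(text)  # Hello, world!
```

Each subroutine could itself be decomposed further, down to the basic machine instructions, which is precisely the successive layering the text describes.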

available to the user (unlike typing, in which the user may type anything).

A Bit of History

The history of interactive computer graphics goes back to the 1950s: pioneers using MIT's Whirlwind computer in the 1950s were the first to make use of an experimental display screen, and these graphics were later elaborated with the TX-2 computer and its extensive collection of input mechanisms: knobs, dials, keyboard, and light pen. In 1963, Ivan Sutherland developed a system called Sketchpad for the TX-2, which introduced the notion of interactively constructing hierarchies of objects on a virtual piece of paper, any portion of which could be shown at arbitrary magnification on the screen. Users could specify simple constraints on picture elements to have the computer automatically enforce mathematical relationships among lines drawn with a light pen. This "constraint satisfaction" allowed a kind of free-hand sketching, with the computer "straightening out" the approximate drawings. Sketchpad was the opening round in an era of computer-driven displays as tools for specifying geometry for CAD/CAM in the automotive and aerospace industry and flight simulators for pilot training. MIT's Steven Coons and other researchers in academia and industry developed various kinds of spline "patches" to define free-form surfaces for vehicle design.

In the 1960s, Douglas Engelbart at SRI pursued the notion of using computer displays to "augment human intellect." His group built the first hypermedia systems and office automation tools, including word processors, outline processors, systems for constructing and browsing hypermedia, software for tele-collaboration, as well as various input devices, including the mouse.
In the early 1970s, the Xerox Palo Alto Research Center (PARC) pioneered bitmap raster graphics workstations with graphics in the user interface, both for office automation tools such as word processors and illustration programs and for software development (e.g., Smalltalk). These developments, coupled with the reduction in price of display subsystems from $100,000 in the mid-1960s to $10,000 in the early 1970s to about $1,000 today, are the underpinning for today's graphical user interfaces. The Macintosh personal computer and Windows (for IBM-compatible personal computers) have brought the experience of SRI and Xerox to the popular marketplace, dramatically lowering knowledge barriers to computer usage, and the commercial success of these products

testifies to their significance and impact. As users have required more and more interactive capability with their computers, the popularity of graphical user interfaces and graphical displays has grown by leaps and bounds. Now, millions of users have graphical interfaces on their desks in the office or at home, and it is often possible for users to operate new applications with little or no reference to a manual. Even young children are comfortable using paint programs and illustration programs, not to mention video games and multimedia electronic books and encyclopedias. With these developments, graphics at long last has begun to fulfill in earnest the promise of the Sketchpad days as a standard means of human-computer communication.

Scientific and Engineering Visualization

The science and engineering community has also made good use of graphics technology. An NSF-sponsored report15 in 1987 emphasized the need of computational science and engineering for graphics to help make sense of "firehoses of data" generated by supercomputers and high-powered workstations. In such data-rich situations, graphics are not merely desirable; they are essential if users are to design objects or perceive subtle patterns and trends. Coupled to interactive computing that enables the user to use the visualization of intermediate stages in the computation to direct further exploration, intuition and insight about phenomena and objects can be developed that would not otherwise be possible.

Scientific visualizations make increasing use of three-dimensional graphics. Depth perception is greatly enhanced by real three-dimensional stereo images that are possible only when the visual inputs to each eye are slightly different, and so some special input mechanism (e.g., glasses) is necessary to present each eye with the information necessary for the brain to construct a three-dimensional image.
Technology such as stereo head-mounted displays with miniature display screens for each eye can control all of the visual input received by the wearer and is the basis for "virtual-reality" presentations that completely immerse the user in a synthetic world. Coupled with the ability to "walk through" such a presentation, suitable immersive representations can give the user an even better and more visceral understanding of the spatial relationships involved than is possible even with a non-immersive stereo presentation, provided that the user can control the vantage point from which the scene is viewed.

Graphics are also beginning to assist scientists in developing programs to analyze their data. Traditionally, scientists had to choose between two styles of software use. They could write their own programs from scratch, one line at a time, until the entire program was written, or they could use a prewritten "canned" application. Writing from scratch gives the scientist tight control over what the program does but is time consuming and prone to error. Using a canned application is faster but lacks flexibility.

However, in recent years, a style of "visual programming" has begun to emerge that is intermediate between these two choices. The key construct within visual programming is the explicit specification of data and control flow between preexisting processing modules; such flows are implicitly specified when a program is written in the conventional manner. Using a screen and a mouse, the scientist specifies data and control flows between different modules, and the system automatically assembles the corresponding program from these modules. The scientist has the freedom of specifying the flows as appropriate to the problem at hand and also has some freedom to customize each module to a limited extent by adjusting a few parameters (typically using a WIMP interface to do so). The result is that scientists can rapidly assemble systems that will perform a given task, much as a music buff can assemble stereo components into a working sound system without being much of an electrical engineer. Further, while this style of programming is currently limited to high-end scientific users, it is likely that visual programming will become increasingly important in the commercial world as well. Microsoft's Visual Basic is a good example of such an application.

Engineering visualizations are at the heart of computer-aided design today. Computer-aided design now includes design of mechanical parts, architectural or engineering structures, and even molecules.
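The essence of the visual-programming idea, explicitly wiring preexisting modules together by their data flows, might be sketched as follows (the module names, their behavior, and the sample data are all invented for illustration):

```python
# Sketch: preexisting processing modules are connected by explicitly
# specified data flows, and the system assembles the program.
# (Module names and data are invented for illustration.)

def load(raw):
    # Module: parse raw text into numbers.
    return [float(x) for x in raw.split()]

def smooth(values, window=2):
    # Module: moving average; 'window' is a user-adjustable parameter.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def peak(values):
    # Module: find the largest value.
    return max(values)

def assemble(*modules):
    # The "system": chain the modules in the order the user wired them.
    def pipeline(data):
        for module in modules:
            data = module(data)
        return data
    return pipeline

analyze = assemble(load, smooth, peak)
print(analyze("1 5 3 7"))  # 5.0
```

The scientist chooses and orders the modules; the assembly of the working program from those choices is automatic, much like wiring stereo components together.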
Large structures such as airplanes or buildings can be assembled "virtually," i.e., as information artifacts stored in a computer. Parts, substructures, and entire structures can be designed and fitted together entirely on a display and are usually fed to on-line simulations that can analyze and virtually "test" these artifacts. The result is a dramatic shortening of the time required from concept to product.

Touch, Sound, Gestures

Since vision is the communications channel with the highest bandwidth for information transmission, it is not surprising that a great deal of work has been done in graphics. But human beings also make use of sound, touch, and gestures to communicate information. Computer scientists and engineers have thus developed a variety of

devices that allow users to communicate with computers using these channels as well: head or eye trackers, force and tactile feedback devices, gesture recognition via wands and data gloves provided with many-degree-of-freedom sensors, and so on. These devices are especially useful in virtual-reality environments.

Tactile feedback is illustrated well in the problem from biochemistry of understanding molecule "docking." How a complex organic molecule physically "fits" into another is often a key to understanding its biological function. It is possible, although demanding, to calculate the molecular forces that determine docking positions and orientations. But as an aid to developing the biochemist's intuition and feel for docking behavior, computer scientists and engineers have developed ways to depict molecules visually. By controlling their orientation and position through the use of a specially designed joystick capable of responding with graduated forces, the user can orient and move the molecule along the path of least resistance as he or she tries to fit the molecule into the receptor site.

Sound will become a more important medium for interaction with computers in the future. Audio feedback is widely used in keyboards today to inform the user that a key has actually been pressed. Some computer systems have voice synthesis systems that make information accessible to users without demanding their visual attention. Such output would be particularly useful when the demands of a task (e.g., driving) would prohibit the diversion of visual attention; thus an intelligent on-board navigator for automobiles will almost surely provide directions (e.g., "turn left on Willow Street") by voice. Speech input is a more challenging problem discussed in Chapter 3.

Finally, human beings use gestures to communicate meaning and manipulate objects.
The Dataglove is a glove with sensors that can indicate the extent to which the fingers of a hand are bent. Appropriate processing of this information allows the computer to construct a representation of the hand's configuration at any given moment. Coordinated with a virtual reality presented via stereo goggles, the Dataglove allows the user to perform certain manipulations of a synthetic object with his or her hand as though it were real; however, the glove does not provide the tactile feedback that real objects provide.

Intellectual Challenges

Computer graphics and interactive computing pose many intellectual challenges for CS&E. In the earliest days of computing, most of the computational power in a computer system could be devoted

to executing the algorithm to solve a problem of interest, and developers of computer systems paid a great deal of attention to making the best use of available computer resources. But with interactive computing that places the user at the center of the process and ever cheaper computational power, a larger and larger fraction of the computational power will be devoted to making the specific application as easy and comfortable to use as possible for the user. Developers of computer systems will pay much more attention to making the best use of human cognitive resources such as attention span and comprehension time.

This trend is reflected in the development of computer hardware. Real-time user interaction requires that the time between user input (e.g., pointing to a screen icon) and computer response (updating of the displayed scene) be very short, on the order of a few tenths of a second. This limit places a premium on very fast processors and also drives the development of special-purpose hardware (e.g., graphics accelerators) that can relieve the central processing unit of some of the responsibility for servicing the user interface.

The importance of special-purpose hardware is complemented by an increasing emphasis on investigating better representations of data and information. While a "better" algorithm to solve a problem is generally one that runs faster, a "better" representation of information in the context of user interaction is one that provides insights not previously accessible.

For example, consider the difference between looking at an orthographic engineering diagram of a metal bracket and holding a metal bracket in one's hand. An orthographic diagram provides front, top, and side views of an object. In some sense, the orthographic diagram is "equivalent" to the metal bracket in hand, since the engineering diagram can be used to manufacture the bracket.
But very few individuals can look at an arbitrary object in an engineering diagram and comprehend just what the object is. A much better sense for the object is obtained by manipulating it, looking at it from different angles, turning it in hand. Today's computers provide the next best thing: a visual image that can be viewed from different perspectives under user control. Using such a display provides a better intuition for time-varying behavior than does viewing a sequence of static images, because the viewer need not cope with the cognitively demanding process of identifying correspondences between the images but rather can follow the correspondences from one moment to the next. The best of today's CAD systems provide images on screen that can be rotated, exploded, collapsed, and otherwise manipulated on command.

The techniques of computer graphics offer a host of different choices to represent information, as demonstrated by modern molecular graphics. An important aspect of modern biochemistry is to understand how complex molecules fit together. Among other things, such understanding involves knowledge of the outside contours of several molecules in their "fitted-together" configuration, as well as knowledge of how surfaces from different molecules fit together "on the inside." To take a simple example, imagine two cubical dice that are brought together so that they are face to face. It would be important to know which surfaces were exposed to the outside (e.g., the 1, 2, 3, 4, 5 of die 1 and the 1, 2, 4, 5, 6 of die 2) and which surfaces were touching (the 6 of die 1 and the 3 of die 2).

Visualizations of molecule fittings could use so-called space-filling models, the equivalent of models made of clay that were fitted together. But the surfaces in contact would then be invisible from the outside, and since the precise geometry of the proper fit is determined in a very complex manner, it is not as simple, as in the case of the two dice, to determine what surfaces from each molecule are in contact. An alternative, the dot-surface representation, addresses this problem. The dot-surface representation replaces the space-filling model with a shell representation. However, the shell's surface is not solid; rather it is composed of many dots. These dots are spaced far enough apart that it is possible to see past them, but close enough that the eye can perceive the surface of which they are a part. Using these dot shells instead of space-filling models to represent molecules enables the user to see not only the outside contours (as before) but also "through" them to see formerly invisible surfaces in contact with each other.
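The dot-shell idea can be sketched in a few lines of code. In this hedged illustration (atom positions, radii, and dot counts are all made-up values, and real molecular-graphics systems use more sophisticated surface definitions), each atom is a sphere, dots are scattered evenly over every sphere, and dots buried inside a neighboring atom are discarded, leaving a see-through shell:

```python
import math

def sphere_dots(center, radius, n=200):
    # Spherical Fibonacci spiral: roughly even dot spacing on a sphere.
    cx, cy, cz = center
    golden = math.pi * (3.0 - math.sqrt(5.0))
    dots = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # runs from ~1 down to ~-1
        r = math.sqrt(max(0.0, 1.0 - y * y))
        theta = golden * i
        dots.append((cx + radius * r * math.cos(theta),
                     cy + radius * y,
                     cz + radius * r * math.sin(theta)))
    return dots

def dot_surface(atoms, n=200):
    # Keep only dots that lie outside every *other* atom, so the
    # surviving dots form a shell around the union of the spheres.
    surface = []
    for j, (center, radius) in enumerate(atoms):
        for d in sphere_dots(center, radius, n):
            buried = any(
                math.dist(d, c2) < r2 - 1e-9
                for k, (c2, r2) in enumerate(atoms) if k != j)
            if not buried:
                surface.append(d)
    return surface

# Two overlapping "atoms": dots on the inside-facing caps are removed,
# which is exactly the region a solid space-filling model would hide.
atoms = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
dots = dot_surface(atoms)
print(len(dots), "dots survive out of", 2 * 200)
```

Because the shell is made of discrete dots rather than a solid surface, a renderer drawing these points lets the viewer see past the near side of the shell to surfaces behind it, the effect described above.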
In many situations, the availability of different representations provides an unparalleled degree of flexibility in depicting information. Computer graphics-based representations can be created, modified, and manipulated with such speed that the utility of different representations can be rapidly determined, and the user can choose which to use under any particular set of circumstances. These representations are limited only by the imagination of their creator, since any representation that can be imagined can be implemented. Thus computer graphics can be said to have spawned a new set of concepts for depicting information that were not practical prior to the computer.

Finally, finding ways to provide perceptual coupling between computer-generated image and user is important. Evolution has provided human beings with rich and coordinated senses of sight, sound,

and touch; immersive virtual-reality presentations, data gloves, audio output, and force feedback are the first steps toward the full exploitation of human senses.

SYNERGY LEADING TO INNOVATIONS AND RAPID PROGRESS

As anyone who has recently purchased a personal computer knows all too well, computer technology advances at an extraordinarily rapid rate. To a large degree, the rapidity of technological change arises from a high degree of interconnectedness among the various subdisciplines of CS&E. The resulting synergy is intellectually powerful and results in many practical benefits as well.

Synergy arises in many contexts. Box 3.1 offered two examples of the impact of synergy between subdisciplines of CS&E. A third example, with more obvious and tangible economic impact, is that of workstations. It is common today to see a multimillion-dollar mainframe computer replaced by workstations that offer much better cost-performance ratios. The growth in the use of workstations is fueled by advances in many areas: networking (that enables workstations to be interconnected and thus enables users to share resources), reduced-instruction-set computers (that speed up computations by large factors), portable operating systems (that can run on computers made by different vendors), and a better theoretical understanding of how compilers can best use hardware resources. No single one of these areas is solely responsible for growth in the workstation industry, but taken together they have produced a multibillion-dollar industry.

The benefits of synergy are not limited to innovation. Synergy also speeds up developments in the field. Consider, for example, the development of the 80X86 processor chip family by the Intel Corporation. (The 80X86 processor is the heart of IBM-compatible personal computers.)
In designing the 80286 processor chip, Intel engineers had to use seven different brands of computers depending on the step of the design process. This was before portable operating systems were available, and so a designer had to learn seven different command languages and seven different text editors to complete a microprocessor.

However, shortly after the 80286 was released, portable operating systems came into use at Intel. This meant that designers of the later processor chips had to learn only one command language and only one text editor, and this contributed to the shortening of the time between the 80486 and the 80586 compared to the time between

the 8086 and the 80286.17 Other reasons for the reduced development time include a graphical user interface to the design software (making the designer more productive), better compilers and algorithmic improvements to the design software (making the design software faster), and simply faster computers (so that the designer can perform even unchanged tasks in less time). And in a fast-moving technology like electronics, shorter design time corresponds directly to better performance and lower cost. Hence the designers of the Intel 80486 and 80586 (as well as their customers) have benefited and will benefit from advances in hardware technology, operating systems, compilers, user interfaces, design software, and theory of algorithms.

INTELLECTUAL AND STRUCTURAL CHARACTERISTICS OF CS&E AS A DISCIPLINE

CS&E is an entirely new entity, neither simply science nor simply engineering. Its historical roots derive from mathematics and electrical engineering, and this parentage gives CS&E its distinctive flavor.

Perhaps most important in creating this flavor is the close relationship between the intellectual content of CS&E and technology. Technological developments have a major influence in defining what the interesting problems of the field are; problems in CS&E often become interesting because the technology necessary to implement solutions becomes available. Put another way, new generations of technology open up and make interesting a variety of new and challenging questions. This intimate link to technology creates a deep connection between the research and educational activities of academic CS&E and a multibillion-dollar computer industry, with intellectual progress strongly influenced by and at times strongly influencing commercial development.

Intellectually, CS&E includes programming, which allows a programmer's thoughts to spring to life on the screen.
As a result, computing technology can create virtual realities, universes unconstrained by the laws of physics, and this ability to take part in creating new worlds is part of the excitement of programming. In other words, the objects of study in CS&E (ways to process and represent information) are created by those who study those objects, whereas other scientists study what nature gives them (atoms, cells, stars, planets). This new form of expression also offers opportunities for important intellectual contributions to the formal study of information.

Theory in CS&E often develops after years of practice, with experiments largely establishing the feasibility of new systems. By contrast, experiments conducted by physicists and biologists play an important role in proving or disproving their theories. The experimental construction of algorithms as programs is more likely to uncover practical issues that were ignored in the mathematical proofs rather than flaws in the proofs themselves. Feedback from experimentation can then lead to the creation of more realistic models that in turn give rise to more relevant theories. In other words, CS&E experiments often demonstrate the irrelevance and inadequacy of previous theoretical work, rather than proving that theoretical work wrong.

The commercial and industrial side of computing is also unusual. Whereas most industries focus on producing tangible goods, an increasingly large part of computing involves software and information. Software and information are unlike other products in that they are primarily an intangible intellectual output, more akin to a useful report than a useful widget. By far the largest cost of software and information involves research, development, documentation, testing, and maintenance (primarily intellectual activities) rather than manufacturing (typically, copying disks and manuals). As a result, capital requirements for software production are relatively much lower than those for widget production, while personnel requirements are relatively much higher.

When the recognition of CS&E's intellectual heritage is combined with understanding of the rapid pace of change, the youth of the field, the impact of a large industry, the magical implications of programming, and the mathematical implications of algorithms, the exciting and enticing nature of CS&E is revealed.

NOTES

1. The notion of CS&E as a discipline based on theory, abstraction, and design is described in Peter Denning, Douglas E. Comer, David Gries, Michael C. Mulder, Allen Tucker, Joe Turner, and Paul R. Young, "Computing as a Discipline," Communications of the ACM, Volume 32(1), January 1989, pp. 9-23.

2.
Personal communication, Donald Knuth, March 10, 1992 letter.

3. Frederick Brooks, The Mythical Man-Month, Addison-Wesley, Reading, Mass., 1975, pp. 7-8.

4. Ivan Sutherland, "Computer Displays," Scientific American, June 1970, p. 57.

5. This division isn't even necessarily unique, since abstractions in CS&E can be created with a great deal of flexibility. A grammar checker builds on a word-processing program, and thus the word processor plays a "system software" role for the grammar checker. But the word processor builds upon the operating system, so that the word processor is applications software from the perspective of the operating system.

6. John E. Hopcroft and Kenneth W. Kennedy, eds., Computer Science Achievements and Opportunities, Society for Industrial and Applied Mathematics, Philadelphia, 1989.

7. In practice, the trade-off is even more favorable than this discussion implies. In

the former case, the area of the chip would be more than 100 times larger, due to second-order effects that are not accounted for in the VLSI complexity model. In the latter case, it could be possible to reduce the increase in total chip area to less than a factor of ten by designing its functional elements to perform multiple operations in a "pipeline" arrangement. (A pipeline architecture is analogous to a bucket brigade; when it is finished with an elementary operation, an element passes its result to the next element in sequence, thereby allowing the entire array of elements to be kept busy nearly all of the time.)

8. The discussion that follows is taken largely from Lui Sha and John B. Goodenough, Real-Time Scheduling Theory and Ada, Technical Report CMU/SEI-89-TR-14, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, April 1989.

9. The reader may find it amusing to multiply two arbitrary 2 x 2 matrices with a minimal number of multiplications. The standard method of multiplying these matrices requires four additions and eight ordinary multiplications to compute the matrix product. It turns out that it is possible to compute the matrix product with only seven scalar multiplications, though at the expense of additional additions (and subtractions). The algorithm for multiplying 2 x 2 matrices is the basis for a more general algorithm for multiplying matrices of any size that requires considerably fewer arithmetic operations than would be expected from the standard definition of matrix multiplication. The details of the seven-multiplication algorithm are described on p. 216.

10. Actually, LP is applicable to a much broader class of problems.

11. A problem's complexity depends on the model of computation used to derive it.
An area of active investigation today within theoretical computer science concerns the use of different models that differ in the fidelity with which they match actual computers. For example, some theoretical computer scientists consider so-called random access machine models, on the grounds that these models can access any memory location in a constant number of steps, while a Turing machine can access a given memory location only by stepping through all preceding locations.

12. The function 2^n is called "exponential" in n. The function n^2 is polynomial in n, as would be n raised to any exponent. An algorithm that executes in a time proportional to a function that is polynomial in n, where n is the size of the problem, is said to take "polynomial time" and may be feasible for large n, whereas an exponential algorithm is not. The reason is that for sufficiently large values of n, an exponential function will increase much faster than any polynomial function.

13. These algorithms would be nearly optimal in the sense that a given algorithm working on problems of a particular nature would consume no more than a certain amount of computational resources while generating solutions that are guaranteed to be very close to optimal with very high probability.

14. Batch computing refers to a mode of computing in which the user submits a program to a computer, waits for a while, and then obtains the results. If the user wishes to correct an error or change a parameter in the program, he or she resubmits the program and repeats the cycle.

15. Bruce H. McCormick, Thomas A. DeFanti, and Maxine D. Brown, eds., Visualization in Scientific Computing, ACM Press, New York, July 1987.

16. Virtual reality, the object of both serious intellectual work and far-fetched hyperbole, will be the subject of a forthcoming NRC project to be conducted jointly by the NRC's Computer Science and Telecommunications Board and the Committee on Human Factors.
As a mode of information representation, virtual reality is still in its infancy compared to modes such as ordinary stereo or two-dimensional graphics.

17. The time between the 8086 and the 80286 was five years, while the time between the 80486 and the 80586 is expected to be three years (the 80586 will be released in

1992). See John L. Hennessy and David A. Patterson, Computer Architecture: A Quantitative Approach, Morgan Kaufmann Publishers, San Mateo, California, 1990, inside front cover.

Solution to the problem posed in Note 9 above. Let the elements of matrix A be denoted by

    A = [ a11  a12 ]
        [ a21  a22 ]

and similarly for B. If we define

    x1 = (a11 + a22)(b11 + b22)
    x2 = (a21 + a22) b11
    x3 = a11 (b12 - b22)
    x4 = a22 (b21 - b11)
    x5 = (a11 + a12) b22
    x6 = (a21 - a11)(b11 + b12)
    x7 = (a12 - a22)(b21 + b22)

then the entries of C = AB are given by

    c11 = x1 + x4 - x5 + x7
    c12 = x3 + x5
    c21 = x2 + x4
    c22 = x1 + x3 - x2 + x6

The significance of this algorithm is not its utility for 2 x 2 matrices per se, but rather that it is the basic building block for an algorithm to multiply N x N matrices. Such a matrix can be treated simply as a 2 x 2 matrix whose elements are themselves matrices of size N/2 x N/2. The result is that instead of requiring on the order of N^3 multiplications and additions (as would be expected from the definition of matrix multiplication), the matrix multiplication requires on the order of N^2.81 arithmetic operations. While such an algorithm involves some additional overhead, it turns out that for sufficiently large values of N, it will run faster than one based on the definition of matrix multiplication. SOURCE: This method was first published by V. Strassen, "Gaussian Elimination Is Not Optimal," Numerische Mathematik, Volume 13, 1969, pp. 354-356.
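The seven-multiplication scheme above (Strassen's algorithm) translates directly into code. The sketch below implements the 2 x 2 base case exactly as given in the solution and checks it against the ordinary eight-multiplication definition; function names are our own.

```python
# Strassen's seven-multiplication product for 2 x 2 matrices,
# following the x1..x7 and c11..c22 formulas in the text.

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    x1 = (a11 + a22) * (b11 + b22)
    x2 = (a21 + a22) * b11
    x3 = a11 * (b12 - b22)
    x4 = a22 * (b21 - b11)
    x5 = (a11 + a12) * b22
    x6 = (a21 - a11) * (b11 + b12)
    x7 = (a12 - a22) * (b21 + b22)
    return ((x1 + x4 - x5 + x7, x3 + x5),
            (x2 + x4,           x1 + x3 - x2 + x6))

def direct_2x2(A, B):
    # The textbook definition: eight multiplications, four additions.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    return ((a11*b11 + a12*b21, a11*b12 + a12*b22),
            (a21*b11 + a22*b21, a21*b12 + a22*b22))

A = ((1, 2), (3, 4))
B = ((5, 6), (7, 8))
print(strassen_2x2(A, B))  # agrees with direct_2x2(A, B)
```

Applied recursively to N x N matrices split into four N/2 x N/2 blocks, the recurrence T(N) = 7 T(N/2) + O(N^2) yields the O(N^log2 7) = O(N^2.81) operation count cited above.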