1 The Need for Continued Performance Growth

Information technology (IT) has become an integral part of modern society, affecting nearly every aspect of our lives, including education, medicine, government, business, entertainment, and social interactions. Innovations in IT have been fueled by a continuous and extraordinary increase in computer performance. By some metrics, computer performance has improved, on average, by a factor of 10 every 5 years over the past 2 decades.

A sustained downshift in the rate of growth in computing performance would have considerable ramifications both economically and for society. The industries involved are responsible for about $1 trillion of annual revenue in the United States. That revenue has depended on a sustained demand for IT products and services that in turn has fueled demand for constantly improving performance. Indeed, U.S. leadership in IT depends in no small part on its driving and taking advantage of the leading edge of computing performance. Virtually every sector of society—manufacturing, financial services, education, science, government, military, entertainment, and so on—has become dependent on the continued growth in computing performance to drive new efficiencies and innovations. Moreover, all the current and foreseeable future applications rely on a huge software infrastructure, and the software infrastructure itself would have been impossible to develop with the more primitive software development and programming methods of the past. The principal force allowing better programming models, which emphasize programmer productivity over computing efficiency, has been the growth in computing performance. (Chapter 4 explores implications for software and programming in more detail.)
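
To make the growth rate cited above (a factor of 10 every 5 years) concrete, a back-of-the-envelope calculation (an illustrative sketch, not a figure taken from this report) converts it into an implied annual growth factor and into the cumulative improvement over 2 decades:

    # Implied compound growth for "a factor of 10 every 5 years."
    factor_per_period = 10.0   # improvement factor per period
    years_per_period = 5       # length of the period in years

    annual_growth = factor_per_period ** (1.0 / years_per_period)
    print(f"implied annual growth factor: {annual_growth:.3f}")  # about 1.585, i.e., ~58% per year

    total_over_20_years = factor_per_period ** (20 / years_per_period)
    print(f"implied improvement over 2 decades: {total_over_20_years:,.0f}x")  # 10,000x, about 4 orders of magnitude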

This chapter first considers the general question of why faster computers are important. It then examines four broad fields—science, defense and national security, consumer applications, and enterprise productivity—that have depended on and will continue to depend on sustained growth in computing performance. The fields discussed by no means constitute an exhaustive list,1 but they are meant to illustrate how computing performance and its historic exponential growth have had vast effects on broad sectors of society and what the results of a slowdown in that growth would be.

1 Health care is another field in which IT has substantial effects—in, for example, patient care, research and innovation, and administration. A recent National Research Council (NRC) report, although it does not focus specifically on computing performance, provides numerous examples of ways in which computation technology and IT are critical underpinnings of virtually every aspect of health care (NRC, 2009, Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions, Washington, D.C.: The National Academies Press, available online at http://www.nap.edu/catalog.php?record_id=12572). Yet another critically important field that increasingly benefits from computation power is infrastructure. "Smart" infrastructure applications in urban planning, high-performance buildings, energy, traffic, and so on are of increasing importance. That is also the underlying theme of two of the articles in the February 2009 issue of Communications of the ACM (Tom Leighton, 2009, Improving performance on the Internet, Communications of the ACM 52(2): 44-51; and T.V. Raman, 2009, Toward 2W: Beyond Web 2.0, Communications of the ACM 52(2): 52-59).

WHY FASTER COMPUTERS ARE IMPORTANT

Computers can do only four things: they can move data from one place to another, they can create new data from old data via various arithmetic and logical operations, they can store data in and retrieve them from memories, and they can decide what to do next. Students studying computers or programming for the first time are often struck by the surprising intuition that, notwithstanding compelling appearances to the contrary, computers are extremely primitive machines, capable of performing only the most mind-numbingly banal tasks. The trick is that computers can perform those simple tasks extremely fast—in periods measured in billionths of a second—and they perform them reliably and repeatably. Like a drop of water in the Grand Canyon, each operation may be simple and may in itself not accomplish much, but a lot of them (billions per second, in the case of computers) can get a lot done.

Over the last 60 years of computing history, computer buyers and users have essentially "voted with their wallets" by consistently paying more for faster computers, and computer makers have responded by pricing their systems accordingly: a high-end system may be, on average, 10 percent faster and 30 percent more expensive than the next best.

That behavior has dovetailed perfectly with the underlying technology development in the computers—as ever-faster silicon technology has become available, faster and faster computers could be designed. It is the nature of the semiconductor manufacturing process that silicon chips coming off the fabrication line exhibit a range of speeds. Rather than discard the slower chips, the manufacturer simply charges less for them. Ever-rising performance has been the wellspring of the entire computer industry. Meanwhile, the improving economics of ever-larger shipment volumes have driven overall system costs down, reinforcing a virtuous spiral2 by making computer systems available to lower-price, larger-unit-volume markets.

2 A small number of chips are fast, and many more are slower. That is how a range of products is produced that in total provide profits and, ultimately, funding for the next generation of technology. The semiconductor industry is nearing a point where extreme ultraviolet (EUV) light sources—or other expensive, exotic alternatives—will be needed to continue the lithography-based steps in manufacturing. There are a few more techniques left to implement before EUV is required, but they are increasingly expensive to use in manufacturing, and they are driving costs substantially higher. The future scenario that this implies is not only that very few companies will be able to manufacture chips with the smallest feature sizes but also that only very high-volume products will be able to justify the cost of using the latest generation of technology.

For their part, computer buyers demand ever-faster computers in part because they believe that using faster machines confers on them an advantage in the marketplace in which they compete.3 Applications that run on a particular generation of computing system may be impractical or not run at all on a system that is only one-tenth as fast, and this encourages hardware replacements for performance every 3-5 years. That trend has also encouraged buyers to place a premium on fast new computer systems because buying fast systems will forestall system obsolescence as long as possible. Traditionally, software providers have shown a tendency to use exponentially more storage space and central processing unit (CPU) cycles to attain linearly more performance, a tradeoff commonly referred to as bloat. Reducing bloat is another way in which future system improvements may be possible. The need for periodic replacements exists whether the performance is taking place on the desktop or in the "cloud" in a Web-based service, although the pace of hardware replacement may vary in the cloud.

3 For scientific researchers, faster computers allow larger or more important questions to be pursued or more accurate answers to be obtained; office workers can model, communicate, store, retrieve, and search their data more productively; engineers can design buildings, bridges, materials, chemicals, and other devices more quickly and safely; and manufacturers can automate various parts of their assembly processes and delivery methods more cost-effectively. In fact, the increasing amounts of data that are generated, stored, indexed, and retrieved require continued performance improvements. See Box 1.1 for more on data as a performance driver.

BOX 1.1 Growth of Stored and Retrievable Data

The quantity of information and data that is stored in a digital format has been growing at an exponential rate that exceeds even the historical rate of growth in computing performance, which is the focus of this report. Data are of value only if they can be analyzed to produce useful information that can be retrieved when needed. Hence, the growth in stored information is another reason for the need to sustain substantial growth in computing performance.

As the types and formats of information that is stored in digital form continue to increase, they drive the rapid growth in stored data. Only a few decades ago, the primary data types stored in IT systems were text and numerical data. But images of increasing resolution, audio streams, and video have all become important types of data stored digitally and then indexed, searched, and retrieved by computing systems.

The growth of stored information is occurring at the personal, enterprise, national, and global levels. On the personal level, the expanding use of e-mail, text messaging, Web logs, and so on is adding to stored text. Digital cameras have enabled people to store many more images in their personal computers and data centers than they ever would have considered with traditional film cameras. Video cameras and audio recorders add yet more data that are stored and then must be indexed and searched. Embedding those devices into the ubiquitous cell phone means that people can and do take photos and movies of events that would previously not have been recorded.

At the global level, the amount of information on the Internet continues to increase dramatically. As static Web pages give way to interactive pages and social-networking sites support video, the amount of stored and searchable data continues its explosive growth. Storage technology has enabled this growth by reducing the cost of storage at a rate even greater than the rate of growth in processor performance.

The challenge is to match the growth in stored information with the computational capability to index, search, and retrieve relevant information. Today, there are not sufficiently powerful computing systems to process effectively all the images and video streams being stored. Satellite cameras and other remote sensing devices typically collect much more data than can be examined for useful information or important events. Considerably more progress is needed to achieve the vision described by Vannevar Bush in his 1945 paper about a MEMEX device that would collect and make available to users all the information relevant to their life and work.1

1 Vannevar Bush, 1945, "As we may think," Atlantic Magazine, July 1945, available online at http://www.theatlantic.com/magazine/archive/1969/12/as-we-may-think/3881/.

All else being equal, faster computers are better computers.4 The unprecedented evolution of computers since 1980 exhibits an essentially exponential speedup that spans 4 orders of magnitude in performance for the same (or lower) price. No other engineered system in human history has ever achieved that rate of improvement; small wonder that our intuitions are ill-tuned to perceive its significance. Whole fields of human endeavor have been transformed as computer system capability has ascended through various threshold performance values.5

4 See Box 1.2 for a discussion of why this is true even though desktop computers, for example, spend most of their time idle.

5 The music business, for example, is almost entirely digital now, from the initial sound capture through mixing, processing, mastering, and distribution. Computer-based tricks that were once almost inconceivable are now commonplace, from subtly adjusting a singer's note to be more in tune with the instruments, to nudging the timing of one instrument relative to another. All keyboard instruments except acoustic pianos are now digital (computer-based) and not only can render very accurate imitations of existing instruments but also can alter them in real time in a dizzying variety of ways. It has even become possible to isolate a single note from a chord and alter it, a trick that had long been thought impossible. Similarly, modern cars have dozens of microprocessors that run the engine more efficiently, minimize exhaust pollution, control the antilock braking system, control the security system, control the sound system, control the navigation system, control the airbags and seatbelt retractors, operate the cruise control, and handle other features. Over many years, the increasing capability of these embedded computer systems has allowed them to penetrate nearly every aspect of vehicles.

The impact of computer technology is so widespread that it is nearly impossible to overstate its importance. Faster computers create not just the ability to do old things faster but the ability to do new things that were not feasible at all before.6 Fast computers have enabled cell phones, MP3 players, and global positioning devices; Internet search engines and worldwide online auctions; MRI and CT scanners; and handheld PDAs and wireless networks. In many cases, those achievements were not predicted, nor were computers designed specifically to cause the breakthroughs. There is no overarching roadmap for where faster computer technology will take us—each new achievement opens doors to developments that we had not even conceived.

6 Anyone who has played state-of-the-art video games will recognize the various ways in which game designers wielded the computational and graphics horsepower of a new computer system for extra realism in a game's features, screen resolution, frame rate, scope of the "theater of combat," and so on.

We should assume that this pattern will continue as computer systems get faster yet.7 There is no reason to think that it will not continue as long as computers continue to improve. What has changed—and will be described in detail in later chapters—is how we achieve faster computers. In short, power dissipation can no longer be dealt with independently of performance (see Chapter 3). Moreover, although computing performance has many components (see Chapter 2), a touchstone in this report will be computer speed; as described in Box 1.3, speed can be traded for almost any other sort of functionality that one might want.

7 Some of the breakthroughs were not solely performance-driven—some depend on a particular performance at a particular cost. But cost and performance are closely related, and performance can be traded for lower cost if desired.

BOX 1.2 Why Do I Need More Performance When My Computer Is Idle Most of the Time?

When computers find themselves with nothing to do, by default they run operating-system code known as the idle loop. The idle loop is like the cell-phone parking lot at an airport, where your spouse sits waiting to pick you up when you arrive and call him or her. It may seem surprising or incongruous that nearly all the computing cycles ever executed by computers have been wasted in the idle loop, but it is true. If we have "wasted" virtually all the computing horsepower available since the beginning of the computer age, why should we worry about a potential threat to increased performance in the future? Is there any point in making machinery execute the idle loop even faster?

In fact, there is. The reason has as much to do with humans as it does with the computing machines that they design. Consider the automobile. The average internal-combustion vehicle has a six-cylinder engine capable of a peak output of around 200 horsepower. Many aspects of the engine and drivetrain reflect that peak horsepower: when you press the pedal to the floor while passing or entering a highway, you expect the vehicle to deliver that peak horsepower to the wheels, and you would be quite unhappy if various parts of the car were to leave the vehicle instead, unable to handle the load. But if you drive efficiently, over several years of driving, what fraction of the time is spent under that peak load condition? For most people, the answer is approximately zero. It only takes about 20 horsepower to keep a passenger car at highway speeds under nominal conditions, so you end up paying for a lot more horsepower than you use.

But if all you had at your driving disposal was a 20-horsepower power plant (essentially, a golf cart), you would soon tire of driving the vehicle because you would recognize that energy efficiency is great but not everything; that annoying all the other drivers as you slowly, painfully accelerate from an on-ramp gets old quickly; and that your own time is valuable to you as well. In effect, we all accept a compromise that results in a system that is overdesigned for the common case because we care about the uncommon case and are willing to pay for the resulting inefficiency.

In a computing system, although you may know that the system is spending almost all its time doing nothing, that fact pales in comparison with how you feel when you ask the system to do something in real time and must wait for it to accomplish that task. For instance, when you click on an attachment or a file and are waiting for the associated application to open (assuming that it is not already open), every second drags.1 At that moment, all you want is a faster system, regardless of what the machine is doing when you are not there. And for the same reason that a car's power plant and drivetrain are overdesigned for their normal use, your computing system will end up sporting clock frequencies, bus speeds, cache sizes, and memory capacity that will combine to yield a computing experience to you, the user, that is statistically rather rare but about which you care very much.

The idle-loop effect is much less pronounced in dedicated environments—such as servers and cloud computing, scientific supercomputers, and some embedded applications—than it is on personal desktop computers. Servers and supercomputers can never go fast enough, however—there is no real limit to the demand for higher performance in them. Some embedded applications, such as the engine computer in a car, will idle for a considerable fraction of their existence, but they must remain fast enough to handle the worst-case computational demands of the engine and the driver. Other embedded applications may run at a substantial fraction of peak capacity, depending on the workload and the system organization.

1 It is worth noting that the interval between clicking on most e-mail attachments and successful opening of their corresponding applications is not so much a function of the CPU's performance as it is of disk speed, memory capacity, and input/output interconnect bandwidth.

Finding: The information technology sector itself and most other sectors of society—for example, manufacturing, financial and other services, science, engineering, education, defense and other government services, and entertainment—have become dependent on continued growth in computing performance.

The rest of this chapter describes a sampling of fields in which computing performance has been critical and in which a slowing of the growth of computing performance would have serious adverse repercussions. We focus first on high-performance computing and computing performance in the sciences. Threats to growth in computing performance will be felt there first, before inevitably extending to other types of computing.

BOX 1.3 Computing Performance Is Fungible

Computing speed can be traded for almost any other feature that one might want. In this sense, computing-system performance is fungible, and that is what gives it such importance. Workloads that are at or near the absolute capacity of a computing system tend to get all the publicity—for every new computing-system generation, the marketing holy grail is a "killer app" (software application), some new software application that was previously infeasible, now runs adequately, and is so desirable that buyers will replace their existing systems just to buy whatever hardware is fast enough to run it. The VisiCalc spreadsheet program on the Apple II was the canonical killer app; it appeared at the dawn of the personal-computing era and was so compelling that many people bought computers just to run it. It has been at least a decade since anything like a killer app appeared, at least outside the vocal but relatively small hard-core gaming community. The reality is that modern computing systems spend nearly all their time idle (see Box 1.2 for an explanation of why faster computers are needed despite that); thus, most systems have a substantial amount of excess computing capacity, which can be put to use in other ways.

Performance can be traded for higher reliability: for example, the digital signal processor in a compact-disk player executes an elaborate error-detection-and-correction algorithm, and the more processing capability that can be brought to bear on that problem, the more bumps and shocks the player can withstand before the errors become audible to the listener. Computational capacity can also be used to index mail and other data on a computer periodically in the background to make the next search faster. Database servers can take elaborate precautions to ensure high system dependability in the face of inevitable hardware-component failures. Spacecraft computers often incorporate three processors where one would suffice for performance; the outputs of all three processors are compared via a voting scheme that detects if one of the three machines has failed. In effect, three processors' worth of performance is reduced to one processor's performance in exchange for improved system dependability.

Performance can be used in the service of other goals, as well. Files on a hard drive can be compressed, and this trades computing effort and time for better effective drive capacity. Files that are sent across a network or across the Internet use far less bandwidth and arrive at their destination faster when they are compressed. Likewise, files can be encrypted in much the same way to keep their contents private while in transit.
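
As a minimal illustration of the voting scheme that Box 1.3 describes for spacecraft computers, the sketch below (a hypothetical Python example; the values and function are invented, not taken from any flight system) reduces three redundant results to one trusted answer, trading two processors' worth of performance for tolerance of a single faulty unit:

    from collections import Counter

    def majority_vote(results):
        """Return the value that at least two of the three redundant results agree on."""
        value, votes = Counter(results).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority: all three redundant results disagree")
        return value

    # One of the three redundant computations returns a corrupted value;
    # the voter masks the fault, and the two agreeing results win.
    print(majority_vote([42, 42, 17]))  # prints 42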

THE IMPORTANCE OF COMPUTING PERFORMANCE FOR THE SCIENCES

Computing has become a critical component of most sciences and complements the traditional roles of theory and experimentation.8 Theoretical models may be tested by implementing them in software, evaluating them through simulation, and comparing their results with known experimental results. Computational techniques are critical when experimentation is too expensive, too dangerous, or simply impossible. Examples include understanding the behavior of the universe after the Big Bang, the life cycle of stars, the structure of proteins, functions of living cells, genetics, and the behavior of subatomic particles.

8 See an NRC report for one relatively recent take on computing and the sciences (NRC, 2008, The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering, Washington, D.C.: The National Academies Press, available online at http://www.nap.edu/catalog.php?record_id=12451).

Computation is used for science and engineering problems that affect nearly every aspect of our daily lives, including the design of bridges, buildings, electronic devices, aircraft, medications, soft-drink containers, potato chips, and soap bubbles. Computation makes automobiles safer, more aerodynamic, and more energy-efficient. Extremely large computations are done to understand economics, national security, and climate change, and some of these computations are used in setting public policy. For example, hundreds of millions of processor hours are devoted to understanding and predicting climate change, one purpose of which is to inform the setting of international carbon-emission standards.

In many cases, what scientists and engineers can accomplish is limited by the performance of computing systems. With faster systems, they could simulate critical details—such as clouds in a climate model or mechanics, chemistry, and fluid dynamics in the human body—and they could run larger suites of computations that would improve confidence in the results of simulations and increase the range of scientific exploration.

Two themes common to many computational science and engineering disciplines are driving increases in computational capability. The first is an increased desire to support multiphysics or coupled simulations, such as adding chemical models to fluid-dynamics or structural simulations. Multiphysics simulations are necessary for understanding complex real-world systems, such as the climate, the human body, nuclear weapons, and energy production. Imagine, for example, a model of the human body in which one could experiment with the addition of new chemicals (medicines to change blood pressure), changing structures (artificial organs or prosthetic devices), or effects of radiation. Many scientific fields are ripe for multiphysics simulations because the individual components are understood well enough and are represented by a particular model and instantiation within a given code base.

The next step is to take two or more such code bases and couple them in such a way that each communicates with the others. Climate modeling, for example, is well along that path toward deploying coupled models, but the approach is still emerging in some other science domains.

The second crosscutting theme in the demand for increased computing performance is the need to improve confidence in simulations to make computation truly predictive. At one level, this may involve running multiple simulations and comparing results with different initial conditions, parameterizations, simulations at higher space or time resolutions or numerical precision, models, levels of detail, or implementations. In some fields, sophisticated "uncertainty quantification" techniques are built into application codes by using statistical models of uncertainty, redundant calculations, or other approaches. In any of those cases, the techniques to reduce uncertainty increase the demand for computing performance substantially.9

9 In 2008 and 2009, the Department of Energy (DOE) held a series of workshops on computing and extreme scales in a variety of sciences. The workshop reports summarize some of the scientific challenges that require 1,000 times more computing than is available to the science community today. More information about these workshops and others is available online at DOE's Office of Advanced Scientific Computing Research website, http://www.er.doe.gov/ascr/WorkshopsConferences/WorkshopsConferences.html.
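
As a toy illustration of the ensemble-style uncertainty quantification described above (a sketch with invented numbers and a trivial stand-in model, not code from any simulation package), the snippet below perturbs initial conditions, runs the model repeatedly, and reports the spread of outcomes; an N-member ensemble costs roughly N times the computation of a single run:

    import random
    import statistics

    def toy_model(initial_value, steps=100):
        """Stand-in for an expensive simulation: relax toward 15.0 with small noise."""
        x = initial_value
        for _ in range(steps):
            x += 0.1 * (15.0 - x) + random.gauss(0.0, 0.05)
        return x

    ensemble_size = 32  # confidence costs compute: 32 runs instead of 1
    outcomes = [toy_model(random.gauss(14.0, 0.5)) for _ in range(ensemble_size)]

    print(f"ensemble mean:   {statistics.mean(outcomes):.2f}")
    print(f"ensemble spread: {statistics.stdev(outcomes):.2f}")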

High-Energy Physics, Nuclear Physics, and Astrophysics

The basic sciences, including physics, also rely heavily on high-end computing to solve some of the most challenging questions involving phenomena that are too large, too small, or too far away to study directly. The report of the 2008 Department of Energy (DOE) workshop on Scientific Grand Challenges: Challenges for Understanding the Quantum Universe and the Role of Computing at the Extreme Scale summarizes the computational gap: "To date, the computational capacity has barely been able to keep up with the experimental and theoretical research programs. There is considerable evidence that the gap between scientific aspiration and the availability of computing resource is now widening. . . ."10

10 DOE, 2009, Scientific Grand Challenges: Challenges for Understanding the Quantum Universe and the Role of Computing at the Extreme Scale, Workshop Report, Menlo Park, Cal., December 9-11, 2008, p. 2, available at http://www.er.doe.gov/ascr/ProgramDocuments/ProgDocs.html.

One of the examples involves understanding properties of dark matter and dark energy by analyzing datasets from digital sky surveys, a technique that has already been used to explain the behavior of the universe shortly after the Big Bang and its continuing expansion. The new datasets are expected to be on the order of 100 petabytes (10^17 bytes) in size and will be generated with new high-resolution telescopes that are on an exponential growth path in capability and data generation. High-resolution simulations of type Ia and type II supernova explosions will be used to calibrate their luminosity; the behavior of such explosions is of fundamental interest, and such observational data contribute to our understanding of the expansion of the universe. In addition, an improved understanding of supernovae yields a better understanding of turbulent combustion under conditions not achievable on Earth. Finally, one of the most computationally expensive problems in physics is aimed at revealing new physics beyond the standard model, described in the DOE report as "analogous to the development of atomic physics and quantum electrodynamics in the 20th century."11

11 Ibid. at p. vi.

In addition to the data analysis needed for scientific experiments and basic compute-intensive problems to refine theory, computation is critical to engineering one-of-a-kind scientific instruments, such as particle accelerators like the International Linear Collider and fusion reactors like ITER (which originally stood for International Thermonuclear Experimental Reactor). Computation is used to optimize the designs, save money in construction, and reduce the risk associated with these devices. Similarly, simulation can aid in the design of complex systems outside the realm of basic science, such as nuclear reactors, or in extending the life of existing reactor plants.

Chemistry, Materials Science, and Fluid Dynamics

A 2003 National Research Council report outlines several of the "grand challenges" in chemistry and chemical engineering, including two that explicitly require high-performance computing.12 The first is to "understand and control how molecules react—over all time scales and the full range of molecular size"; this will require advances in predictive computational modeling of molecular motions, which will complement other experimental and theoretical work. The second is to "learn how to design and produce new substances, materials, and molecular devices with properties that can be predicted, tailored, and tuned before production"; this will also require advances in computing and has implications for commercial use of chemical and materials engineering in medicine,

12 NRC, 2003, Beyond the Molecular Frontier: Challenges for Chemistry and Chemical Engineering, Washington, D.C.: The National Academies Press, available online at http://www.nap.edu/catalog.php?record_id=10633.

be set up and initiated via the Internet, and the havoc that it could potentially wreak on businesses and government could be catastrophic. It is not out of the question that such an eventuality could lead to physical war.

The Internet was not designed with security in mind, and this oversight is evident in its architecture and in the difficulty with which security measures can be retrofitted later. We cannot simply dismantle the Internet and start over with something more secure. But as computer-system technology progresses and more performance becomes available, there will be opportunities to look for ways to trade the parallel performance afforded by the technology for improved defensive measures that will discourage hackers, help to identify the people and countries behind cyberattacks, and protect the secrets themselves better.

The global Internet can be a dangerous place. The ubiquitous connectivity that yields the marvelous wonders of search engines, Web sites, browsers, and online purchasing also facilitates identity theft, propagation of worms and viruses, ready platforms for staging denial-of-service attacks, and faceless, nearly risk-free opportunities for breaking into the intellectual-property stores and information resources of companies, schools, government institutions, and military organizations. Today, a handful of Web-monitoring groups pool their observations and expertise with a few dozen university computer-science experts and many industrial and government watchdogs to help to spot Internet anomalies, malevolent patterns of behavior, and attacks on the Internet's backbone and name-resolution facilities. As with video surveillance, the battle is ultimately human on human, so it seems unlikely that humans should ever be fully removed from the defensive side of the struggle. However, faster computers can help tremendously, especially if the good guys have much faster computing machinery than the bad guys.

Stateful packet inspection, for example, is a state-of-the-art method for detecting the presence of a set of known virus signatures in traffic on communications networks; on detection, the offending traffic can be shunted into a quarantine area before damage is done. Port-based attacks can be identified before they are launched. The key to those mitigations is that all Internet traffic, harmful or not, must take the form of bits traversing various links of the Internet; computer systems capable of analyzing the contents over any given link are well positioned to eliminate a sizable fraction of threats.
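
The sketch below gives a simplified sense of that kind of signature-based inspection (the signatures and helper names are invented for illustration; production deep-packet-inspection engines use multi-pattern automata such as Aho-Corasick and track per-connection state rather than scanning naively):

    # Illustrative signature scan: flag any payload containing a known signature.
    KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef", b"MALWARE-TEST-STRING"}  # placeholder signatures

    def is_suspicious(payload: bytes) -> bool:
        """Return True if the payload contains any known signature."""
        return any(signature in payload for signature in KNOWN_SIGNATURES)

    def filter_traffic(payloads):
        """Split packet payloads into clean traffic and traffic to quarantine."""
        clean, quarantined = [], []
        for payload in payloads:
            (quarantined if is_suspicious(payload) else clean).append(payload)
        return clean, quarantined

    clean, quarantined = filter_traffic([b"GET /index.html", b"xxMALWARE-TEST-STRINGxx"])
    print(len(clean), len(quarantined))  # prints: 1 1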

Data Analysis for Intelligence

Vast amounts of unencrypted data are generated outside intelligence agencies and are available in the open for strategic data-mining. Continued performance improvements are needed if the agencies are to garner useful intelligence from raw data. There is a continuing need to analyze satellite images for evidence of military and nuclear buildups, evidence of emerging droughts or other natural disasters, evidence of terrorist training camps, and so on. Although it is no secret that the National Security Agency and the National Reconnaissance Office have some of the largest computer complexes in the world, the complexity of the data that they store and process and of the questions that they are asked to address is substantial. Increasing amounts of computational horsepower are needed not only to meet their mission objectives but also to maintain an advantage over adversaries.

Nuclear-Stockpile Stewardship

In the past, the reliability of a nuclear weapon (the probability that it detonates when commanded to do so) and its safety (the probability that it does not detonate otherwise) were established largely with physical testing. Reliability tests detonated sample nuclear weapons from the stockpile, and safety tests subjected sample nuclear weapons to extreme conditions (such as fire and impact) to verify that they did not detonate under such stresses. However, for a variety of policy reasons, the safety and reliability of the nation's nuclear weapons are today established largely with computer simulation, and the data from nonnuclear laboratory experiments are used to validate the computer models.

The simulation of a nuclear weapon is computationally extremely demanding in both computing capability and capacity. The already daunting task is complicated by the need to simulate the effects of aging. A 2003 JASON report21 concluded that at that time there were gaps in both capability and capacity in fulfilling the mission of stockpile stewardship—ensuring nuclear-weapon safety and reliability.

21 Roy Schwitters, 2003, Requirements for ASCI, JSR-03-330, McLean, Va.: The MITRE Corporation.

Historically, the increase in single-processor performance played a large role in providing increased computing capability and capacity to meet the increasing demands of stockpile stewardship. In addition, parallelism has been applied to the problem, so the rate of increase in performance of the large machines devoted to the task has been greater than called for by Moore's law because the number of processors was increased at the same time that single-processor performance was increasing. The largest of the machines today have over 200,000 processors and LINPACK benchmark performance of more than 1,000 Tflops.22

22 For a list of the 500 most powerful known computer systems in the world, see "Top 500," available online at http://www.absoluteastronomy.com/topics/TOP500.

The end of single-processor performance scaling makes it difficult for those "capability" machines to continue scaling at historical rates and so makes it difficult to meet the projected increases in demands of nuclear-weapon simulation. The end of single-processor scaling has also made the energy and power demands of future capability systems problematic, as described in the recent DARPA ExaScale computing study.23 Furthermore, the historical increases in demand in the consumer market for computing hardware and software have driven down costs and increased software capabilities for military and science applications. If the consumer market suffers, the demands of science and military applications are not likely to be met.

23 Peter Kogge, Keren Bergman, Shekhar Borkar, Dan Campbell, William Carlson, William Dally, Monty Denneau, Paul Franzon, William Harrod, Kerry Hill, Jon Hiller, Sherman Karp, Stephen Keckler, Dean Klein, Robert Lucas, Mark Richards, Al Scarpelli, Steven Scott, Allan Snavely, Thomas Sterling, R. Stanley Williams, and Katherine Yelick, 2008, ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems, Washington, D.C.: DARPA, available online at http://www.er.doe.gov/ascr/Research/CS/DARPA%20exascale%20-%20hardware%20(2008).pdf.

THE IMPORTANCE OF COMPUTING PERFORMANCE FOR CONSUMER NEEDS AND APPLICATIONS

The previous two sections offered examples of where growth in computing performance has been essential for science, defense, and national security. The growth has also been a driver for individuals using consumer-oriented systems and applications. Two recent industry trends have substantially affected end-user computational needs: the increasing ubiquity of digital data and growth in the population of end users who are not technically savvy. Sustained growth in computing performance serves not only broad public-policy objectives, such as a strong defense and scientific leadership, but also the current and emerging needs of individual users.

The growth in computing performance over the last 4 decades—impressive though it has been—has been dwarfed over the last decade or so by the growth in digital data.24 The amount of digital data is growing more rapidly than ever before. The volumes of data now available outstrip our ability to comprehend them, much less take maximum advantage of them.

24 A February 2010 report observed that "quantifying the amount of information that exists in the world is hard. What is clear is that there is an awful lot of it, and it is growing at a terrific rate (a compound annual 60%) that is speeding up all the time. The flood of data from sensors, computers, research labs, cameras, phones and the like surpassed the capacity of storage technologies in 2007" (Data, data, everywhere: A special report on managing information, The Economist, February 25, 2010, available online at http://www.economist.com/displaystory.cfm?story_id=15557443).

According to the How Much Information project at the University of California, Berkeley,25 print, film, magnetic, and optical storage media produced about 5 exabytes (EB) of new information in 2003. Furthermore, the information explosion is accelerating. The market research firm IDC estimates that 161 EB of digital content was created in 2006 and that the figure will rise to 988 EB by 2010. To handle so much information, people will need systems that can help them to understand the available data. We need computers to see data the way we do, identify what is useful to us, and assemble it for our review or even process it on our behalf. This growing end-user need is the primary force behind the radical and continuing transformation of the Web as it shifts its focus from data presentation to end-users to automatic data-processing on behalf of end-users.26 The data avalanche and the consequent transformation of the Web's functionality require increasing sophistication in data-processing and hence additional computational capability to be able to reason automatically in real time so that we can understand and interpret structured and unstructured collections of information via, for example, sets of dynamically learned inference rules.

25 See Peter Lyman and Hal R. Varian, 2003, How much information?, available online at http://www2.sims.berkeley.edu/research/projects/how-much-info-2003/index.htm, last accessed November 2, 2010.

26 See, for example, Tim Berners-Lee's 2007 testimony to the U.S. Congress on the future of the World Wide Web, "The digital future of the United States. Part I: The future of the World Wide Web," Hearings before the Subcommittee on Telecommunications and the Internet of the Committee on Energy and Commerce, 110th Congress, available at http://dig.csail.mit.edu/2007/03/01-ushouse-future-of-the-web.html, last accessed November 2, 2010.

27 Of course, a computer system's aggregate performance may be limited by many things: the nature of the workload itself, the CPU's design, the memory subsystem, input/output device speeds and sizes, the operating system, and myriad other system aspects. Those and other aspects of performance are discussed in Chapter 2.

A computer's ability to perform a huge number of computations per second has enabled many applications that have an important role in our daily lives.27 An important subset of applications continues to push the frontiers of very high computational needs. Examples of such applications are these:

· Digital content creation—allows people to express creative skills and be entertained through various modern forms of electronic arts, such as animated films, digital photography, and video games.
· Search and mining—enhances a person's ability to search and recall objects, events, and patterns well beyond the natural limits of human memory by using modern search engines and the ever-growing archive of globally shared digital content.

· Real-time decision-making—enables growing use of computational assistance for various complex problem-solving tasks, such as speech transcription and language translation.
· Collaboration technology—offers a more immersive and interactive 3D environment for real-time collaboration and telepresence.
· Machine-learning algorithms—filter e-mail spam, supply reliable telephone-answering services, and make book and music recommendations.

Computers have become so pervasive that a vast majority of end-users are not computer aficionados or system experts; rather, they are experts in some other field or discipline, such as science, art, education, or entertainment. The shift has challenged the efficiency of human-computer interfaces. There has always been an inherent gap between a user's conceptual model of a problem and a computer's model of the problem. However, given the change in demographics of computer users, the need to bridge the gap is now more acute than ever before.

The increased complexity of common end-user tasks ("find a picture like this" rather than "add these two numbers") and the growing need to be able to offer an effective interface to a non-computer-expert user at a higher level of object semantics (for example, presenting not a Fourier transform data dump of a flower image but a synthesized realistic visual of a flower) have together increased the computational capability needed to provide real-time responses to user actions. Bridging the gap would be well served by computers that can deal with natural user inputs, such as speech and gestures, and output content in a visually rich form close to that of the physical world around us. A typical everyday problem requires multiple iterations of execute and evaluate between the user and the computer system. Each such iteration normally narrows the original modeling gap, and this in turn requires additional computational capability. The larger the original gap, the more computation is needed to bridge it. For example, some technology-savvy users working on an image-editing problem may iterate by editing a low-level machine representation of an image, whereas a more typical end-user may interact only at the level of a photo-real output of the image with virtual brushes and paints.

Thanks to sustained growth in computing performance over the years, more effective computer-use models and visually rich human-computer interfaces are introducing new potential ways to bridge the gap. An alternative to involving the end-user in each iteration is to depend on a computer's ability to refine model instances by itself and to nest multiple iterations of such an analytics loop for each iteration of a visual computing loop involving an end-user. Such nesting allows a reduction in the number of interactions between a user and his or her computer and therefore an increase in the system's efficiency or response.

However, it also creates the need to sustain continued growth in computational performance so that a wider variety of more complex tasks can be simulated and solved in real time for the growing majority of end-users. Real-time physical and behavioral simulation of even simple daily-life objects or events (such as water flow, the trajectory of a ball in a game, and summarizing of a text) is a surprisingly computationally expensive task and requires multiple iterations or solutions of a large number of subproblems derived from decomposition of the original problem.

Computationally intensive consumer applications include such phenomena as virtual world simulations and immersive social-networking, video karaoke (and other sorts of real-time video interactions), remote education and training that require simulation, and telemedicine (including interventional medical imaging).28

28 For more on emerging applications and their need for computational capability, see Justin Rattner, 2009, The dawn of terascale computing, IEEE Solid-State Circuits Magazine 1(1): 83-89.

THE IMPORTANCE OF COMPUTING PERFORMANCE FOR ENTERPRISE PRODUCTIVITY

Advances in computing technology in the form of more convenient communication and sharing of information have favorably affected the productivity of enterprises. Improved communication and sharing have been hallmarks of computing from the earliest days of time-sharing in corporate or academic environments to today's increasingly mobile, smart-phone-addicted labor force. Younger employees in many companies today can hardly recall business processes that did not make use of e-mail, chat and text messaging, group calendars, internal Web resources, blogs, Wiki toolkits, audio and video conferencing, and automated management of workflow. At the same time, huge improvements in magnetic storage technology, particularly for disk drives, have made it affordable to keep every item of an organization's information accessible on line.

Individual worker productivity is not the only aspect of an enterprise that has been affected by continued growth in computing performance. The ability of virtually every sort of enterprise to use computation to understand data related to its core lines of business—sometimes referred to as analytics—has improved dramatically as computer performance has increased over the years. In addition, massive amounts of data and computational capability accessible on the Internet have increased the demand for Web services, or "software as a service," in a variety of sectors. Analytics and the implications of Web services for computing performance needs are discussed below.

Analytics

Increases in computing capability and efficiency have made it feasible to perform deep analysis of numerous kinds of business data—not just off line but increasingly in real time—to obtain better input into business decisions.29 Efficient computerized interactions between organizations have created more efficient end-to-end manufacturing processes through the use of supply-chain management systems that optimize inventories, expedite product delivery, and reduce exposure to varying market conditions.

29 IBM's Smart Analytics System, for example, is developing solutions aimed at retail, insurance, banking, health care, and telecommunication. For more information, see the IBM Smart Analytics System website, available online at http://www-01.ibm.com/software/data/infosphere/smart-analytics-system/.

In the past, real-time business performance needs were dictated mostly by transaction rates. Analytics (which can be thought of as computationally enhanced decision-making) were mostly off line. The computational cost of actionable data-mining was too high to be of any value under real-time use constraints. However, the growth in computing performance has now made real-time analytics affordable for a larger class of enterprise users.

One example is medical-imaging analytics. Over the last 2 decades, unprecedented growth has taken place in the amount and complexity of digital medical-image data collected on patients in standard medical practice. The clinical necessity to diagnose diseases accurately and develop treatment strategies in a minimally invasive manner has mandated the development of new image-acquisition methods, high-resolution acquisition hardware, and novel imaging modalities. Those requirements have placed substantial computational burdens on the ability to use the image information synergistically. With the increase in the quality and utility of medical-image data, clinicians are under increasing pressure to generate more accurate diagnoses or therapy plans. To meet the needs of the clinician, the imaging-research community must provide real-time (or near real-time) high-volume visualization and analyses of the image data to optimize the clinical experience. Today, nearly all use of computation in medical imaging is limited to "diagnostic imaging." However, with sufficient computational capability, it is likely that real-time medical interventions could become possible. The shift from diagnostic imaging to interventional imaging can usher in a new era in medical imaging. Real-time medical analytics can guide medical professionals, such as surgeons, in their tasks.

For example, surface extractions from volumetric data coupled with simulations of various what-if scenarios accomplished in real time offer clear advantages over basic preoperative planning scenarios.

Web Services

In the last 15 years, the Internet and the Web have had a transformational effect on people's lives. That effect has been enabled by two concurrent and interdependent phenomena: the rapid expansion of Internet connectivity, particularly high-speed Internet connections, and the emergence of several extraordinarily useful Internet-based services. Web search and free Web-based e-mail were among the first such services to explode in popularity, and their emergence and continuous improvements have been made possible by dramatic advances in computing performance, storage, and networking technologies. Well beyond text, Web-server data now include videos, photos, and various other kinds of media. Users—individuals and businesses—increasingly need information systems to see data the way they do, identify what is useful, and assemble it for them. The ability to have computers understand the data and help us to use it in various enterprise endeavors could have enormous benefits. As a result, the Web is shifting its focus from data presentation to end-users to automatic data-processing on behalf of end-users. Finding preferred travel routes while taking real-time traffic feeds into account and the rapid growth in program trading are examples of real-time decision-making.

Consider Web search as an example. A Web search service's fundamental task is to take a user's query, traverse data structures that are effectively proportional in size to the total amount of information available on line, and decide how to select from among possibly millions of candidate results the handful that would be most likely to match the user's expectation. The task needs to be accomplished in a few hundred milliseconds in a system that can sustain a throughput of several thousand requests per second. This and many other Web services are offered free and rely on on-line advertisement revenues, which, depending on the service, may bring only a few dollars for every thousand user page views. The computing system that can meet those performance requirements needs to be not only extremely powerful but also extremely cost-efficient so that the business model behind the Internet service remains viable.
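
To give a feel for the scale implied by those latency and throughput figures, the back-of-the-envelope sketch below applies Little's law to assumed, illustrative numbers (none of them are figures from this report):

    import math

    # Little's law: requests in flight = throughput x latency.
    throughput_rps = 4000        # assumed sustained queries per second
    latency_s = 0.25             # assumed per-query latency budget (250 ms)
    concurrency_per_server = 16  # assumed queries one server can work on at once

    in_flight = throughput_rps * latency_s
    servers_needed = math.ceil(in_flight / concurrency_per_server)
    print(f"{in_flight:.0f} queries in flight -> about {servers_needed} servers")
    # prints: 1000 queries in flight -> about 63 servers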

The appetite of Internet services for additional computing performance does not appear to have a foreseeable limit. A Web search can be used to illustrate that, although a similar rationale could be applied to other types of services. Search-computing demands fundamentally grow in three dimensions: data-repository increases, search-query increases, and service-quality improvements. The amount of information currently indexed by search engines, although massive, is still generally considered a fraction of all on-line content even while the Web itself keeps expanding. Moreover, there are still several non-Web data sources that have yet to be added to the typical Web-search repositories (such as printed media). Universal search,30 for example, is one way in which search-computing demands can dramatically increase as all search queries are simultaneously sent to diverse data sources. As more users go online or become more continuously connected to the Internet through better wireless links, traffic to useful services would undergo further substantial increases.

30 See Google's announcement: "Google begins move to universal search: Google introduces new search features and unveils new homepage design," Press Release, Google.com, May 16, 2007, available online at http://www.google.com/intl/en/press/pressrel/universal-search_20070516.html.

In addition to the amount of data and types of queries, increases in the quality of the search product invariably cause more work to be performed on behalf of each query. For example, better results for a user's query will often be satisfied by searching also for some common synonyms or plurals of the original query terms entered. To achieve the better results, one will need to perform multiple repository lookups for the combinations of variations and pick the best results among them, a process that can easily increase the computing demands for each query by substantial factors.

In some cases, substantial service-quality improvements will demand improvements in computing performance along multiple dimensions simultaneously. For example, the Web would be much more useful if there were no language barriers; all information should be available in every existing language, and this might be achievable through machine-translation technology at a substantial processing cost. The cost would come both from the translation step itself, because accurate translations require very large models or learning over large corpora, and from the increased amount of information that then becomes available for users of every language. For example, a user search in Italian would traverse not only Italian-language documents but potentially documents in every language available to the translation system. The benefits to society at large from overcoming language barriers would arguably rival any other single technologic achievement in human history, especially if they extended to speech-to-speech real-time systems.
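
Returning to the query-expansion example above, the sketch below (a toy inverted index with invented documents and an invented expansion table, not how any production engine is built) shows why expansion multiplies the work per query: every synonym or plural variant triggers its own repository lookup.

    from collections import defaultdict

    documents = {1: "cheap flights to rome", 2: "budget flight deals", 3: "hotels in rome"}

    # Toy inverted index: term -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.split():
            index[term].add(doc_id)

    # Hypothetical expansion table mapping a query term to its variants.
    expansions = {"cheap": ["cheap", "budget"], "flight": ["flight", "flights"]}

    def search(query_terms):
        hits, lookups = set(), 0
        for term in query_terms:
            for variant in expansions.get(term, [term]):
                lookups += 1                      # one index lookup per variant
                hits |= index.get(variant, set())
        return hits, lookups

    print(search(["cheap", "flight"]))  # ({1, 2}, 4): twice the lookups of the unexpanded query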

The prospect of mobile computing systems—such as cell phones, vehicle computers, and media players—that are increasingly powerful, ubiquitous, and interconnected adds another set of opportunities for better computing services that go beyond simply accessing the Web on more devices. Such devices could act as useful sensors and provide a rich set of data about their environment that could be useful once aggregated for real-time disaster response, traffic-congestion relief, and as-yet-unimagined applications. An early example of the potential use of such systems is illustrated in a recent experiment conducted by the University of California, Berkeley, and Nokia in which cell phones equipped with GPS units were used to provide data for a highway-conditions service.31

31 See the University of California, Berkeley, press release about this experiment (Sarah Yang, 2008, Joint Nokia research project captures traffic data using GPS-enabled cell phones, Press Release, UC Berkeley News, February 8, 2008, available online at http://berkeley.edu/news/media/releases/2008/02/08_gps.shtml).

More generally, the unabated growth in digital data, although still a challenge for managing and sifting, has now reached a data volume large enough in many cases to have radical computing implications.32 Such huge amounts of data will be especially useful for a class of problems that have so far defied analytic formulation and been reliant on a statistical data-driven approach. In the past, because of insufficiently large datasets, the problems have had to rely on various, sometimes questionable heuristics. Now, the digital-data volume for many of the problems has reached a level sufficient to revert to statistical approaches. Using statistical approaches for this class of problems presents an unprecedented opportunity in the history of computing: the intersection of massive data with massive computational capability.

32 Wired.com ran a piece in 2008 declaring "the end of science": "The Petabyte Age: Because more isn't just more—more is different," Wired.com, June 23, 2008, available online at http://www.wired.com/wired/issue/16-07.

In addition to the possibility of solving problems that have heretofore been intractable, the massive amounts of data that are increasingly available for analysis by small and large businesses offer the opportunity to develop new products and services based on that analysis. Services can be envisioned that automate the analysis itself so that the businesses do not have to climb this learning curve. The machine-learning community has many ideas for quasi-intelligent automated agents that can roam the Web and assemble a much more thorough status of any topic at a much deeper level than a human has time or patience to acquire. Automated inferences can be drawn that show connections that have heretofore been unearthed only by very talented and experienced humans.

On top of the massive amounts of data being created daily and all that portends for computational needs, the combination of three elements has the potential to deliver a massive increase in real-time computational resources targeted toward end-user devices constrained by cost and power:

· Clouds of servers.
· Vastly larger numbers of end-user devices, consoles, and various form-factor computing platforms.
· The ubiquitous connectivity of computing equipment over a service-oriented infrastructure backbone.

The primary technical challenge in taking advantage of those resources lies in software. Specifically, innovation is needed to enable the discovery of the computing needs of various functional components of a specific service offering. Such discovery is best done adaptively and under the real-time constraints of available computing bandwidth at the client-server ends, network bandwidth, and latency. On-line games, such as Second Life, and virtual world simulations, such as Google Earth, are examples of such a service. The services involve judicious decomposition of computing needs over public client-server networks to produce an interactive, visually rich end-user experience. The realization of such a vision of connected computing will require not only increased computing performance but also standardization of network software layers. Standardization should make it easy to build and share unstructured data and application programming interfaces (APIs) and enable ad hoc and innovative combinations of various service offerings.

In summary, computing in a typical end-user's life is undergoing a momentous transformation: from useful but nonessential software and products to the foundation of vital, around-the-clock services delivered by tomorrow's enterprises.