abstractions match his or her domain expertise well. Higher-level abstractions and domain-specific toolkits, whether for technical computing or World Wide Web services, have allowed software developers to create complex systems quickly and with fewer common errors. Implicit in this approach, however, has been an assumption that hardware performance would continue to increase (hiding the overhead of these abstractions) and that developers need not understand how the abstractions map onto hardware in order to achieve adequate performance.3 As these assumptions break down, achieving high performance from software will become more difficult, requiring hardware designers and software developers to work together much more closely and exposing increasing amounts of parallelism to software developers (discussed further below). One possible avenue is the use of computer-aided design tools for hardware-software co-design. Another source of continued improvements in delivered application performance is the efficient implementation of high-level programming-language abstractions.
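To make concrete how the mapping of an abstraction onto hardware can dominate performance, consider the following illustrative C sketch (ours, not the report's; the matrix size and timing method are arbitrary choices). The two loops compute the same sum, but only the first traversal matches the row-major memory layout and cache-line behavior of typical hardware; on most machines it runs several times faster than the column-order traversal.

/* Illustrative sketch: identical high-level logic, very different
 * hardware behavior.  The row-order loop touches memory sequentially
 * (cache-friendly); the column-order loop strides across rows
 * (cache-hostile). */
#include <stdio.h>
#include <time.h>

#define N 2048

static double a[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;

    clock_t t0 = clock();
    double sum_row = 0.0;
    for (int i = 0; i < N; i++)        /* row order: sequential access */
        for (int j = 0; j < N; j++)
            sum_row += a[i][j];
    clock_t t1 = clock();

    double sum_col = 0.0;
    for (int j = 0; j < N; j++)        /* column order: strided access */
        for (int i = 0; i < N; i++)
            sum_col += a[i][j];
    clock_t t2 = clock();

    printf("row order:    %.3f s (sum %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, sum_row);
    printf("column order: %.3f s (sum %.0f)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum_col);
    return 0;
}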

1.1.2 Problems in Scaling Nanometer Devices

Early in the 2000s, semiconductor scaling (the process of refining fabrication technology so that the same functions can be implemented at ever smaller scales) encountered fundamental physical limits that now make it impractical to continue along the historical paths to ever-increasing performance.4 The improvements in performance and power delivered by technology scaling have slowed from their historical rates, even as implicit expectations persisted that chip speed and performance would continue to increase dramatically. There are deep technical reasons for (1) why scaling worked so well for so long and (2) why it is no longer delivering dramatic performance improvements. See Appendix E for a brief overview of the relationship between slowing processor performance growth and Dennard scaling, and of the powerful implications of this slowdown.
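The standard argument behind Dennard scaling can be summarized in a few lines (our sketch of the classical scaling rules, not text from Appendix E). With a linear scale factor k > 1 per generation, device dimensions, supply voltage, and capacitance all shrink by 1/k while frequency rises by k, so power density stays constant:

\begin{align*}
  L,\; V_{dd},\; C &\;\to\; \frac{L}{k},\; \frac{V_{dd}}{k},\; \frac{C}{k}, &
  f &\;\to\; k f,\\
  P = C\,V_{dd}^{2}\,f &\;\to\; \frac{C}{k}\cdot\frac{V_{dd}^{2}}{k^{2}}\cdot k f \;=\; \frac{P}{k^{2}}, &
  \frac{P}{A} &\;\to\; \frac{P/k^{2}}{A/k^{2}} \;=\; \frac{P}{A}.
\end{align*}

Once the supply voltage Vdd can no longer be reduced (because of leakage and threshold-voltage limits), the k-squared savings from the voltage term disappears and power density grows with each generation, which is the slowdown described above.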

In fact, scaling of semiconductor technology hit several coincident roadblocks that led to this slowdown, including architectural design constraints, power limitations, and chip lithography challenges (both the high costs of patterning ever smaller integrated-circuit features and limits imposed by fundamental device physics). As described below, the combination of these challenges can be viewed as a perfect storm of difficulty for microprocessor performance scaling.

With regard to power, through the 1990s and early 2000s the power needed to deliver performance improvements on the best-performing microprocessors grew from about 5–10 watts in 1990 to 100–150 watts in 2004 (see Figure 1-2). This growth in power stopped in 2004 because cooling and heat dissipation had become inadequate at those levels. Furthermore, the exploding demand for portable devices, such as phones, tablets, and netbooks, increased the market importance of low-power, energy-efficient processor designs.
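The arithmetic behind that growth follows from the dynamic-power relation P_dyn ≈ aC(Vdd)^2 f, where a is the switching activity factor. With rough, illustrative numbers (ours, not values taken from Figure 1-2), clock frequency rose from tens of megahertz around 1990 to roughly 3 GHz by 2004, while supply voltage fell only from about 5 V to about 1.2 V:

\[
  \frac{P_{2004}}{P_{1990}} \;\approx\;
  \frac{C_{2004}}{C_{1990}}
  \cdot \left(\frac{1.2\,\text{V}}{5\,\text{V}}\right)^{2}
  \cdot \frac{3000\,\text{MHz}}{33\,\text{MHz}}
  \;\approx\; 5 \cdot \frac{C_{2004}}{C_{1990}}.
\]

The voltage-squared savings (about 17x) could not offset the roughly 90x rise in frequency, and once the several-fold growth in switched capacitance from larger, denser chips is included, an increase on the order of the observed 5–10 W to 100–150 W follows.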


FIGURE 1-2 Thirty-five years of microprocessor trend data. SOURCE: Original data collected and plotted by M. Horowitz, F. Labonte, O. Shacham, K. Olukotun, L. Hammond, and C. Batten; dotted-line extrapolations by C. Moore: Chuck Moore, 2011, “Data processing in exascale-class computer systems,” presented at the Salishan Conference on High Speed Computing, April 27, 2011, www.lanl.gov/orgs/hpc/salishan.

In the past, computer architects increased performance with clever architectural techniques such as instruction-level parallelism (ILP), exploited through deep pipelines, multiple instruction issue, and speculation, and memory locality, exploited through multiple levels of caches. As the number of transistors per unit area on a chip continued to increase (as predicted by Moore’s Law), microprocessor designers used these transistors, in part, to increase the potential to exploit ILP by increasing the number of instructions executed in parallel (IPC, or instructions per cycle), as sketched in the example below.
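As an illustration of why exposing independent instructions matters (a sketch of ours, not code from the report), the two C functions below compute the same array sum. The first forms a single serial chain of dependent additions; the second keeps four independent partial sums that a pipelined, multiple-issue processor can execute concurrently, typically yielding a noticeably higher IPC on superscalar hardware.

#include <stddef.h>

/* One serial dependence chain: each add must wait for the previous one. */
double sum1(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Four independent chains: the hardware can issue these adds in parallel.
 * (Floating-point reassociation may change the result in the last bits.) */
double sum4(const double *x, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++)   /* handle any leftover elements */
        s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}

A compiler can sometimes perform this transformation itself, but only when it can prove the reordering safe (floating-point addition is not associative), which is one reason hardware-aware source structure still matters.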

_______________

3Such abstractions may increase the energy costs of computation over time; a focus on energy costs (as opposed to performance) may have led to radically different strategies for both hardware and software. Hence, energy-efficient software abstractions are an important area for future development.

4In “High-Performance Processors in a Power-Limited World,” Sam Naffziger reviews the Vdd limitations and describes various circuit and architectural approaches to future processor design given the voltage-scaling limitations: Sam Naffziger, 2006, “High-performance processors in a power-limited world,” Proceedings of the IEEE Symposium on VLSI Circuits, Honolulu, HI, June 15–17, 2006, pp. 93–97.


