repeated doublings of single-core performance, CMPs inherit the mantle as the most obvious alternative, and industry is motivated to devote substantial resources to moving compatible CMPs forward. The downside is that the core designs being replicated are optimized for serial code with support for dynamic parallelism discovery, such as speculation and out-of-order execution, which may waste area and energy for programs that are already parallel. At the same time, they may be missing some of the features needed for highly efficient parallel programming, such as lightweight synchronization, global communication, and locality control in software. A great deal of research remains to be done on on-chip networking, cache coherence, and distributed cache and memory management.
One important role for academe is to explore CMP designs that are more aggressive than industry's. Academics should project both hardware and software trends much further into the future in search of possible inflection points, even when it is unclear when, or whether, a technology will transition from academe to industry. Moreover, researchers have the opportunity to break the shackles of strict backward compatibility. Promising ideas should be nurtured to see whether they can either create enough benefit to be adopted despite a lack of portability or enable portability strategies to be developed later. There needs to be an intellectual ecosystem in which ideas can be proposed, cross-fertilized, and refined and, ultimately, the best approaches adopted. Such an ecosystem requires sufficient resources to support contributions from many competing and cooperating research teams.
Meeting the challenges will involve essentially all aspects of computing. Focusing on a single component—assuming a CMP architecture or a particular number of transistors, concentrating on data parallelism or on heterogeneity alone, and so on—will be insufficient to the task. Chapter 5 presents recommendations for research aimed at meeting the challenges.