
Green Clouds: The Next Frontier

PARTHASARATHY RANGANATHAN

Hewlett Packard Research Labs


We are entering an exciting era for computer-systems design. In addition to continued advances in performance, next-generation designs are also addressing important challenges related to power, sustainability, manageability, reliability, and scalability. At the same time, new combinations of emerging technologies (e.g., photonics, non-volatile storage, and 3D stacking), and new workloads (related to cloud computing, unstructured data, and virtualization) are presenting us with new opportunities and challenges. The confluence of these trends has led us to rethink the way we design systems—motivating holistic designs that cross traditional design boundaries.

In this article, we examine what this new approach means for the basic building blocks of future systems and how to manage them. Focusing on representative examples from recent research, we discuss the potential for dramatic (10 to 100X) improvements in efficiency in future designs and the challenges and opportunities they pose for future research.

PREDICTING THE FUTURE OF COMPUTING SYSTEMS

What can we predict for computing systems 10 years from now? Historically, the first computer to achieve terascale computing (10^12, or one trillion computing operations per second) was demonstrated in the late 1990s. About 10 years later, in mid-2008, the first petascale computer was demonstrated at 1,000 times more performance capability. Extrapolating these trends, one can expect an exascale computer by approximately 2018. That is a staggering one million trillion computing operations per second and a thousand-fold improvement in performance over any current computer.
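
To make the extrapolation concrete, the short calculation below reproduces it using the approximate milestone dates quoted above (the exact years are illustrative, not precise records):

```python
import math

# Approximate milestones from the text: terascale in the late 1990s (~1998),
# petascale in mid-2008 -- a factor of 1,000 in roughly a decade.
terascale_ops, terascale_year = 1e12, 1998
petascale_ops, petascale_year = 1e15, 2008

growth = petascale_ops / terascale_ops          # ~1,000x
years = petascale_year - terascale_year         # ~10 years

# Extrapolate the same exponential rate forward to 1e18 ops/s (exascale).
exascale_year = petascale_year + years * math.log(1e18 / petascale_ops) / math.log(growth)
print(round(exascale_year))  # ~2018, matching the projection in the text
```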


Moore’s law (often described as the trend that computing performance doubles every 18 to 24 months) has traditionally been a reliable guide to performance scaling, through terascale and, more recently, petascale computing. The transition from petascale to exascale computing, however, is likely to pose new challenges that we will need to address going forward.

CHALLENGES

The Power Wall

The first challenge is related to what is commonly referred to as the power wall. Power consumption is becoming a key constraint in the design of future systems. This problem is manifested in several ways: in the amount of electricity consumed by systems; in the ability to cool systems cost effectively; in reliability; and so on.

For example, recent reports indicate that the electricity costs for powering and cooling cloud datacenters can be millions of dollars per year, often more than was spent on buying the hardware (e.g., Barroso and Hölzle, 2007)! IDC, an industry analyst firm, has estimated that worldwide investment in power and cooling was close to $40 billion last year (Patel, 2008).

This emphasis on power has begun to have a visible impact on the design of computing systems, as system design constraints are shifting from optimizing performance to optimizing energy efficiency or performance achieved per watt of power consumed in the system. This shift has been partly responsible for the emergence of multi-core computing as the dominant way to design microprocessors.

In addition, recognition has been growing that designers optimizing systems for energy efficiency must take into account not only the power consumed by the computing equipment itself, but also the power consumed by the supporting infrastructure. For example, for every watt of power consumed in the server of a datacenter, an additional half to one watt of power is consumed in the equipment responsible for power delivery and cooling (often referred to as the burdened cost of power and cooling, and captured by the power usage effectiveness [PUE] metric [Belady et al., 2008]).
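
A quick back-of-the-envelope illustration of what this overhead means for PUE, using only the half-watt-to-one-watt range quoted above:

```python
# PUE = total facility power / IT equipment power.
# Overhead range from the text: 0.5-1.0 W of power delivery and cooling
# for every watt consumed by the servers themselves.
it_power_w = 1.0
for overhead_w in (0.5, 1.0):
    pue = (it_power_w + overhead_w) / it_power_w
    print(f"{overhead_w} W overhead per IT watt -> PUE = {pue:.1f}")
# Prints PUE = 1.5 and PUE = 2.0: at the high end, half of the electricity
# entering the facility never reaches the computing equipment.
```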

Sustainability

Sustainability is also emerging as an important issue. The electricity consumption associated with information technology (IT) equipment is responsible for 2 percent of the total carbon emissions in the world, more than the emissions of the entire aviation industry. More important, IT is increasingly being used as the tool of choice to address the remaining 98 percent of carbon emissions from non-IT industries (e.g., the use of video conferencing to reduce the need for travel or the use of cloud services to avoid transportation or excess manufacturing costs) (Banerjee et al., 2009).


One way to improve sustainability is to consider the total life cycle of a system—including both the supply and demand side. In other words, in addition to the amount of energy used in operating a system, it is important to consider the amount of energy used in making the system.

Manageability

Sustainability is just one of the new “ilities” that pose challenges for the future. Another key challenge pertains to manageability, which can be defined as the collective processes of deployment, configuration, optimization, and administration during the life cycle of an IT system.

To illustrate this challenge, consider, as an example, the potential infrastructure in a future cloud datacenter. On the basis of recent trends, one can assume that there will be five global datacenters with 40 modular containers each, 10 racks per container, 4 enclosures per rack, and 16 blade servers per enclosure. If each blade server has two sockets with 32 cores each and 10 virtual machines per core, this cloud vendor will have a total of 81,920,000 virtual servers to operate its services. Each of these more than 80 million virtual servers, in turn, will require several classes of operations—for bring-up, day-to-day operations, diagnostics, tuning, and other processes, ultimately including retirement or redeployment of the system. Although a lot of work has been done on managing computer systems, manageability on such a large scale poses new challenges.
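
The scale estimate works out as follows (all counts are the illustrative assumptions above, not data about any particular vendor):

```python
# Back-of-the-envelope scale estimate, reproducing the figures in the text.
datacenters  = 5    # global datacenters
containers   = 40   # modular containers per datacenter
racks        = 10   # racks per container
enclosures   = 4    # enclosures per rack
blades       = 16   # blade servers per enclosure
sockets      = 2    # sockets per blade server
cores        = 32   # cores per socket
vms_per_core = 10   # virtual machines per core

blade_servers   = datacenters * containers * racks * enclosures * blades
virtual_servers = blade_servers * sockets * cores * vms_per_core
print(f"{blade_servers:,} blade servers")      # 128,000
print(f"{virtual_servers:,} virtual servers")  # 81,920,000
```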

Reliability

Trends in technology scaling at the circuit level and increased on-chip integration at the micro-architectural level lead to a higher incidence of both transient and permanent errors. Consequently, new systems must be designed to operate reliably and provide continued up-time, even when they are built from unreliable components.

Business Trends

Finally, these challenges must be met within the constraints of recent business trends. One important trend is the emphasis on reducing total costs of ownership for computing solutions. This often translates to a design constraint requiring the use of high-volume commodity components and avoiding specialization limited to niche markets.

OPPORTUNITIES

We believe that the combination of challenges—low power, sustainability, manageability, reliability, and costs—is likely to influence how we think about system design to achieve the next 1,000-fold increase in performance over the next decade. At the same time, we recognize that interesting opportunities are opening up as well.

Data-Centric Workloads

A fundamental shift has taken place in terms of data-centric workloads. The amount of data being created is increasing exponentially, much faster than Moore’s law would suggest. For example, the size of the largest data warehouse in the Winter Top Ten Survey has been growing at a compound annual growth rate of 173 percent (Winter, 2008). The amount of online data is estimated to have increased nearly 60-fold in the last seven years, and data from richer sensors, digitization of offline content, and new applications such as Twitter and search will surely increase data growth rates further. Indeed, it is estimated that only 5 percent of the world’s offline data has so far been digitized or made available through online repositories (Mayer, 2009).
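
As a rough indication of what these figures imply year over year (a simple calculation on the numbers quoted above, not additional data):

```python
# Implied annual growth from the ~60-fold increase in online data over seven years.
growth_factor, years = 60, 7
annual = growth_factor ** (1 / years) - 1
print(f"~{annual:.0%} per year")
# Prints ~79% per year: data volume nearly doubling annually, which is faster
# than the 18-24 month doubling commonly attributed to Moore's law.
```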

The emergence and rapid increase of data as a driving force in computing has led to a corresponding increase in data-centric workloads. These workloads focus on different aspects of the data life cycle (capture, classify, analyze, maintain, archive, and so on) and pose significant challenges for the computing, storage, and networking elements of future systems.

Among these, an important recent trend (closely coupled with the growth of large-scale Internet web services) has been the emergence of complex analysis on an immense scale. Traditional data-centric workloads like web serving and online transaction processing (e-commerce) are being superseded by workloads like real-time multimedia streaming and conversion; history-based recommendation systems; searches of texts, images, and even videos; and deep analysis of unstructured data (e.g., Google Squared).

Emerging data-centric workloads have changed our assumptions about system design. These workloads typically operate at larger scale (hundreds of thousands of servers) and on more diverse data (e.g., structured, unstructured, rich media), with input/output (I/O)-intensive, often random data-access patterns and limited locality. Another characteristic of data-centric workloads is a great deal of innovation in the software stack aimed at increasing scalability on commodity hardware (e.g., Google MapReduce/BigTable).

Improvements in Throughput, Energy Efficiency, Bandwidth, and Memory Storage

Recent trends suggest several potential technology disruptions on the horizon (Jouppi and Xie, 2009). On the computing side, recent microprocessors have favored multi-core designs that emphasize multiple simpler cores for greater throughput. This approach is well matched with the large-scale distributed parallelism in data-centric workloads. Operating cores at near-threshold voltage has been shown to significantly improve energy efficiency. Similarly, recent advances in networking, particularly related to optics, show strong growth in bandwidth for communication among computing elements at various levels of the system.

Significant changes are also expected in the memory/storage industry. Recently, new non-volatile RAM (NVRAM) memory technologies have been demonstrated that significantly reduce latency and improve energy efficiency compared to flash and hard disk. Some of these non-volatile memories, such as phase-change RAM (PCRAM) and memristors, have shown the potential to replace DRAM with competitive performance and better energy efficiency and technology scaling. At the same time, several studies have postulated the potential end of DRAM scaling (or at least a significant slowing down) over the next decade, which further increases the likelihood that NVRAM will replace DRAM in future systems.

INVENTING THE FUTURE—CROSS-DISCIPLINARY HOLISTIC SYSTEM DESIGN

We believe that the confluence of all these trends—the march toward exascale computing and its associated challenges, opportunities related to emerging large-scale distributed data-centric workloads, and potential disruptions from emerging advances in technology—offers us a unique opportunity to rethink traditional system design.

We believe that the next decade of innovation will be characterized by a holistic emphasis that cuts across traditional design boundaries—across layers of design from chips to datacenters; across different fields in computer science, including hardware, systems, and applications; and across different engineering disciplines, including computer engineering, mechanical engineering, and environmental engineering.

We envision that in the future, rather than focusing on the design of single computers, we will focus on the design of computing elements. Specifically, future systems will be (1) composed of simple building blocks that are efficiently co-designed across hardware and software and (2) composed together into computing ensembles, as needed and when needed. We refer to these ideas as designing disaggregated dematerialized system elements bound together by a composable ensemble management layer. In the discussion below, we present three illustrative examples from our recent research demonstrating the potential for dramatic improvements.

Cross-Layer Power Management

In the past few years, interest has surged in enterprise power management. Given the multifaceted nature of the problem, the solutions have correspondingly focused on different dimensions. For example, some studies have focused on average power reduction for lower electricity costs, while others have focused on peak power management for lower air-conditioning and power-delivery costs.

Previous studies can also be categorized by (1) their approaches (e.g., local resource management, distributed resource scheduling, virtual machine migration); (2) their options for controlling power (e.g., processor voltage scaling, component sleep states, turning systems off); (3) their specific level of implementation—chip, server, cluster, or datacenter—and whether they are realized in hardware, software, or firmware; and (4) the objectives and constraints of the optimization problem—for example, whether or not we allow performance loss and whether or not we allow occasional violations of power budgets.

In the future, many (or all) of these solutions are likely to be deployed together to improve coverage and increase power savings. Currently, emergent behavior from the collection of individual optimizations may or may not be globally optimal, or even stable, or even correct! A key need, therefore, is for a carefully designed, flexible, extensible coordination framework that minimizes the need for global information exchange and central arbitration.

In this first example, we explain how a collaborative effort among computer scientists, thermo-mechanical engineers, and control-engineering experts led to a novel coordination solution. The solution is summarized in Figure 1 and elaborated further in Raghavendra et al. (2008). Briefly, the design carefully connects and overloads the abstractions in current implementations so that individual controllers learn about and react to the effects of other controllers the same way they respond to variations in workload demand. This enables formal mathematical analysis of stability and provides flexibility in the face of dynamic changes in the controllers and the system environment. A specific coordination architecture for five individual solutions, which use different techniques and actuators to optimize for different goals at different system levels across hardware and software, demonstrates that such a cross-layer solution can achieve significant advantages in correctness, stability, and efficiency over the existing state of the art.
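
To make the overloading idea concrete, the sketch below is a deliberately simplified illustration, not the architecture of Raghavendra et al. (2008); the class name, gains, and power figures are hypothetical. It shows a power-capping controller that folds the actions of other controllers into the same measurement signal it already uses to track workload variation:

```python
class PowerCapController:
    """Simple proportional controller that steers a power budget toward a cap."""

    def __init__(self, cap_watts, gain=0.5):
        self.cap = cap_watts
        self.gain = gain
        self.budget = cap_watts

    def step(self, measured_watts):
        # The measurement already reflects whatever other controllers did
        # (VM consolidation, per-server voltage scaling, ...), so this loop
        # reacts to them exactly as it reacts to workload variation --
        # no global arbiter or shared state is required.
        error = self.cap - measured_watts
        self.budget = max(0.0, self.budget + self.gain * error)
        return self.budget

# Hypothetical usage: an enclosure-level cap responding to measurements that
# include the effect of a server-level power manager running underneath it.
enclosure = PowerCapController(cap_watts=2000.0)
for measured in (2200.0, 2100.0, 2020.0, 1990.0):
    print(round(enclosure.step(measured)))
```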

Although this illustrative design has shown the potential of a cross-disciplinary approach to improving power management for the cloud, many more opportunities have yet to be explored. Specifically, how do we define the communication and coordination interfaces to enable federated architectures? How do we extend solutions to adapt to application-level semantics and to heterogeneity in the systems space (Kansal et al., 2009)? How do we design federation at the larger scale typical of cloud systems? Finally, although our discussion has focused on power management, the “intersecting control loops” problem is representative of a larger class of management problems, raising the question of how such architectures generalize to broader resource-management domains.

FIGURE 1 A coordinated power-management architecture. The proposed architecture coordinates different kinds of power-management solutions (multiple levels, approaches, time constants, objective functions, and actuators). Key features include (a) a control-theoretic core to enable formal guarantees of stability and (b) intelligent overloading of control channels to include the impact of other controllers, reduce the number of interfaces, and limit the need to access global data.

Dematerialized Datacenters

Our second example is a collaborative project by computer scientists, environmental engineers, and mechanical engineers to build a sustainability-aware new datacenter solution. Unlike prior studies that focused purely on operational energy consumption as a proxy for sustainability, we use the metric of life-cycle exergy destruction to systematically study the environmental impact of current designs for the entire life cycle of the system, including embedded impact factors related to materials and manufacturing.

A detailed description of exergy is beyond the scope of this article, but briefly, exergy corresponds to the available energy in a system. Unlike energy, which is neither created nor destroyed (the first law of thermodynamics), exergy is continuously consumed in the performance of useful work by any real, entropy-generating process (the second law of thermodynamics). Previous studies have shown that the destruction (or consumption) of exergy is representative of the irreversibility associated with various processes. Consequently, at a first level of approximation, exergy can be used as a proxy for studying environmental sustainability.
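
For readers unfamiliar with the term, one standard textbook expression for the exergy of a closed system relative to a reference ("dead") state at temperature T_0 and pressure p_0 is

X = (U − U_0) + p_0 (V − V_0) − T_0 (S − S_0),

where U, V, and S are the system's internal energy, volume, and entropy, and the subscript 0 denotes their values at the reference state. This is offered only to make "available energy" concrete; it is not the specific life-cycle accounting used in the study (Chang et al., 2010).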

Studying exergy-efficient designs leads to several new insights (Chang et al., 2010). First, focusing on the most efficient system design does not always produce the most sustainable solution. For example, although energy-proportional designs are optimal in terms of operational electricity consumption, virtual machine consolidation is more sustainable than energy proportionality in some cases. Next, the ratio of embedded exergy to total exergy has been steadily increasing over the years, motivating new optimizations that explicitly target embedded exergy (e.g., recycling or dematerialization). Finally, performance and embedded, operational, and infrastructure exergy are not independent variables. Sustainability must be addressed holistically to include them all.

Based on insights provided by the study just described, we propose a new solution (Figure 2) that is co-designed across system architecture and physical organization/packaging. This solution includes three advances that work together to improve sustainability: (1) new material-efficient physical organization, (2) environmentally efficient cooling infrastructures, and (3) effective design of system architectures to enable the reuse of components.

A detailed evaluation of our proposed solution, which includes a combination of sustainability models, computational fluid-dynamics modeling, and full-system computer-architecture simulation, demonstrates significant improvements in sustainability, even compared to an aggressive future configuration (Meza et al., 2010). The proposed design illustrates the opportunities that lie ahead. New silicon-efficient architectures, system designs that explicitly target up-cycling, and datacenters with renewable energy sources are the subjects of ongoing research that can bring us closer to truly sustainable datacenters (Patel, 2008).

FIGURE 2 Conceptual sketch of a design for a dematerialized green datacenter. This specific design illustrates a container for cloud workloads that incorporates several optimizations co-designed with each other, including (1) new material-efficient physical design, (2) component reuse enabled by a disaggregated system architecture, (3) sharing among collections of systems to reduce the amount of material used in building the system, (4) environmentally efficient cooling that leverages ambient air, and (5) thermal density clustering for lower cooling exergy.

From Microprocessors to Nanostores

The third example is a cross-disciplinary collaboration among device physicists, computer engineers, and systems software developers to design a disruptive new system architecture for future data-centric workloads (Figure 3). Leveraging the memory-like and disk-like attributes of emerging non-volatile technologies, we propose a new building block, called a nanostore, for data-centric system design (Ranganathan, in press).

A nanostore is a single-chip computer that includes 3-D stacked layers of dense silicon non-volatile memory with a layer of compute cores and a network interface. A large number of individual nanostores can communicate over a simple interconnect and run a data-parallel execution environment like MapReduce to support large-scale distributed data-centric workloads.

FIGURE 3 The combination of emerging data-centric workloads and upcoming non-volatile and other technologies offers the potential for a new architecture design—“nanostores” that co-locate power-efficient compute cores with non-volatile storage in the same package in a flatter memory hierarchy.

The two most important aspects of nanostores are (1) the co-location of power-efficient computing with a single-level data store, and (2) support for large-scale distributed design. Together, they provide several benefits.

The single-level data store enables improved performance by providing faster data access (in latency and bandwidth). Energy efficiency is also improved by the flattening of the memory hierarchy and the increased energy efficiency of NVRAM over disk and DRAM. The large-scale distributed design, which increases parallelism and overall data/network bandwidth, allows for higher performance. This design also improves energy efficiency by partitioning the system into smaller elements that can leverage more power-efficient components (e.g., simpler cores).
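
The toy sketch below (our illustration, not the actual nanostore design; all names and data are hypothetical) captures the programming model this implies: each nanostore runs data-parallel work against its own local store, and only small results cross the interconnect.

```python
from dataclasses import dataclass, field

@dataclass
class Nanostore:
    """Toy model of one nanostore: compute cores plus a local non-volatile store."""
    node_id: int
    local_store: list = field(default_factory=list)  # stands in for 3D-stacked NVRAM

    def map_local(self, map_fn):
        # Compute runs next to the data; only (key, value) pairs leave the node.
        return [map_fn(record) for record in self.local_store]

def distributed_word_count(nodes):
    # The "reduce" side of a MapReduce-style job across the simple interconnect,
    # modeled here as nothing more than a dictionary merge.
    counts = {}
    for node in nodes:
        for word, n in node.map_local(lambda record: (record, 1)):
            counts[word] = counts.get(word, 0) + n
    return counts

nodes = [Nanostore(0, ["green", "cloud"]), Nanostore(1, ["cloud"])]
print(distributed_word_count(nodes))  # {'green': 1, 'cloud': 2}
```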

Our results show that nanostores can achieve orders-of-magnitude higher performance with dramatically better energy efficiency. More important, they have the potential to be used in new architectural models (e.g., leveraging a hierarchy of computes [Ranganathan, in press]) and to enable new data-centric applications that were previously not possible. Research opportunities include systems-software optimizations for single-level data stores, new endurance optimizations to improve data reliability, and architectural balance among compute, communication, and storage.


CLOSING

Although the research described in these examples shows promising results, we believe we have only scratched the surface of what is possible. Opportunities abound for further optimizations, including for hardware-software co-design (e.g., new interfaces and management of persistent data stores [Condit et al., 2009]) and other radical changes in system designs (e.g., bio-inspired “brain” computing [Snider, 2008]).

Overall, the future of computing systems offers rich opportunities for more innovation by the engineering community, particularly for cross-disciplinary research that goes beyond traditional design boundaries. Significant improvements in the computing fabric enabled by these innovations will also provide a foundation for innovations in other disciplines.

REFERENCES

Banerjee, P., C.D. Patel, C. Bash, and P. Ranganathan. 2009. Sustainable data centers: enabled by supply and demand side management. Design Automation Conference, 2009: 884–887.

Barroso, L.A., and U. Hölzle. 2007. The case for energy proportional computing. IEEE Computer 40(12): 33–37.

Belady, C., A. Rawson, J. Pflueger, and T. Cader. 2008. Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. Green Grid White Paper #6. Available online at www.greengrid.org.

Chang, J., J. Meza, P. Ranganathan, C. Bash, and A. Shah. 2010. Green Server Design: Beyond Operational Energy to Sustainability. Paper presented at HotPower 2010, Vancouver, British Columbia, October 3, 2010.

Condit, J., E.B. Nightingale, C. Frost, E. Ipek, D. Burger, B. Lee, and D. Coetzee. 2009. Better I/O Through Byte-Addressable, Persistent Memory. Presented at Symposium on Operating Systems Principles (SOSP ’09), Association for Computing Machinery Inc., October 2009.

Jouppi, N., and Y. Xie. 2009. Emerging technologies and their impact on system design. Tutorial at the International Symposium on Low Power Electronics and Design, 2009.

Kansal, A., J. Liu, A. Singh, R. Nathuji, and T. Abdelzaher. 2009. Semantic-less Coordination of Power Management and Application Performance. In Hotpower 2009 (co-located with SOSP 2009), USENIX, October 10, 2009.

Mayer, M. 2009. The Physics of Data. Talk given at Xerox PARC, August 13, 2009. Available online at http://www.parc.com/event/936/innovation-at-google.html.

Meza, J., R. Shih, A. Shah, P. Ranganathan, J. Chang, and C. Bash. 2010. Lifecycle-Based Data Center Design. Presented at ASME International Mechanical Engineering Congress & Exposition, Vancouver, British Columbia, November 14–17, 2010.

Patel, C. 2008. Sustainable IT Ecosystem. Keynote address at the 6th USENIX Conference on File and Storage Technologies, February 26–29, 2008, San Jose, California.

Raghavendra, R., P. Ranganathan, V. Talwar, Z. Wang, and X. Zhu. 2008. No “Power” Struggles: Coordinated Multi-level Power Management for the Data Center. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Seattle, Wash., March 2008.

Ranganathan, P. In press. From microprocessor to nanostores: rethinking system building blocks for the data-centric era. IEEE Computer.


Snider, G. 2008. Memristors as Synapses in a Neural Computing Architecture. Presented at the Memristor and Memristive Systems Symposium, University of California, Berkeley, November 21, 2008.

Winter, R. 2008. Why Are Data Warehouses Growing So Fast? An Update on the Drivers of Data Warehouse Growth. Available online at http://www.b-eye-network.com/view/7188.
