
Opportunities and Challenges of Cloud Computing

ARMANDO FOX

University of California, Berkeley


Computer science is moving forward so quickly and is so focused on its recent history that we are often surprised to learn that visionary ideas were articulated long before the technology for their practical implementation was developed. The following vision of “utility computing” is excerpted from an overview of the pioneering and highly influential MULTICS computing system (Corbató and Vyssotsky, 1965):

One of the overall design goals is to create a computing system which is capable of meeting almost all of the present and near-future requirements of a large computer utility. Such systems must run continuously and reliably 7 days a week, 24 hours a day in a way similar to telephone or power systems, and must be capable of meeting wide service demands … [T]he importance of a multiple access system operated as a computer utility is that it allows a vast enlargement of the scope of computer-based activities, which should in turn stimulate a corresponding enrichment of many areas of our society.

Today, 45 years later, that vision appears close to becoming reality. In 2008, Amazon announced the availability of its Elastic Compute Cloud (EC2), making it possible for anyone with a credit card to use the servers in Amazon’s datacenters for 10 cents per server hour with no minimum or maximum purchase and no contract (Amazon AWS, 2008b). Amazon has since added options and services and reduced the base price to 8.5 cents per server hour. The user is charged only for as long as he or she uses the computer, rounded up to the next hour.
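
To make the billing model concrete, the sketch below (Python, using the 8.5-cent rate and the round-up-to-the-next-hour rule described above) estimates the charge for a job. It is an illustration of the pricing structure, not Amazon's actual billing logic, and ignores the many other options and fees a real bill would include.

```python
import math

def estimated_charge(servers, hours_used, price_per_server_hour=0.085):
    """Illustrative pay-as-you-go charge: each server is billed per
    server-hour, with partial hours rounded up to the next whole hour."""
    billed_hours = math.ceil(hours_used)   # partial hours round up
    return servers * billed_hours * price_per_server_hour

print(estimated_charge(servers=1, hours_used=2.5))   # 0.255 (billed as 3 hours)
print(estimated_charge(servers=10, hours_used=24))   # 20.4 for a day on 10 servers
```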

The essence of cloud computing is making datacenter hardware and software available to the general public on a pay-as-you-go basis. Every user enjoys the illusion of having virtually infinite capacity available instantaneously on demand. Hence the term utility computing is used to describe the “product” sold by a cloud-computing provider.

Of course, by 2008, companies such as Google and Microsoft were already operating extensive “private clouds” that delivered proprietary SaaS (software as a service) offerings such as Google Search and Hotmail. These companies had found it necessary to develop the programming and operational expertise to run such installations.

In contrast, EC2 was the first truly low-cost utility computing that was not bundled with a particular SaaS application. Users of EC2 were allowed to deploy applications of their choice, which greatly increased the popularity of the system. Private-cloud operators Google and Microsoft soon followed suit and now provide public-cloud services in addition to their proprietary services.

At first, skeptics were hard pressed to believe that Amazon could operate such a service at a profit. But, as leading software architect James Hamilton observed (2008), because of economies of scale, the costs of bandwidth, storage, and administration for warehouse-scale datacenters are five to seven times lower than for medium-sized datacenters (see Table 1). With Amazon’s retail-to-consumer operational expertise, the company found a profitable way to pass these savings along to individual users.

COST ASSOCIATIVITY AND ELASTICITY

The cloud-computing service model, which represents a radical departure from conventional information technology (IT), enables fundamentally new kinds of computation that were previously infeasible. For example, in 2008, the National Archives released 17,481 pages of documents, including First Lady Hillary Clinton’s daily schedule of activities. Peter Harkins, a senior engineer at The Washington Post, using 200 computers in EC2 for less than nine hours, produced a searchable corpus of the documents and made it publicly available on the World Wide Web less than a day later (Amazon AWS, 2008b).

TABLE 1 Comparative Economies of Scale in 2006 for a Medium-Sized Datacenter (~1,000 servers) and a Warehouse-Scale Datacenter (~50,000 servers)

Technology     | Medium-Sized Datacenter            | Warehouse-Scale Datacenter           | Ratio
Network        | $95 per Mbit/sec/month^a           | $13 per Mbit/sec/month               | 7.1
Storage        | $2.20 per GByte/month^b            | $0.40 per GByte/month                | 5.7
Administration | 1 administrator per ≈140 servers   | 1 administrator for >1,000 servers   | 7.1

^a Mbit/sec/month = megabit per second per month.

^b GByte/month = gigabyte per month.

Source: Hamilton, 2008.


The server time cost Harkins less than $150—the same cost as using a single server for 1,800 hours, and far less than the cost of purchasing a single server outright. Being able to use 200 servers for nine hours for the same price as using one server for 1,800 hours is an unprecedented capability in IT that can be called cost associativity.
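
The arithmetic behind cost associativity is simple. The sketch below is illustrative only: it assumes a flat rate of 8.5 cents per server-hour and ignores rounding and data-transfer charges, and shows that 200 servers for 9 hours and 1 server for 1,800 hours cost the same while the elapsed time differs by a factor of 200.

```python
RATE = 0.085  # assumed flat price per server-hour, for illustration only

def cost_and_elapsed_time(servers, hours_per_server):
    """Under pay-as-you-go pricing, cost depends only on total server-hours,
    while elapsed (wall-clock) time shrinks as more servers are used."""
    return servers * hours_per_server * RATE, hours_per_server

print(cost_and_elapsed_time(200, 9))    # (153.0, 9)    -> ~$150, answer in 9 hours
print(cost_and_elapsed_time(1, 1800))   # (153.0, 1800) -> same cost, 75 days
```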

That same year, 2008, programmers at the Web startup company Animoto developed an application that creates music videos from a user’s photo collection. When the application was made available to the more than 200 million users of Facebook, it became so popular so quickly that its user base doubled every 12 hours for the next three days, and the number of servers it used grew from 50 to 3,500. After the peak subsided, demand fell to a much lower level and the unneeded servers were released.

Elasticity, the ability to add and remove servers in minutes, rather than days or weeks, is also unprecedented in IT. Elasticity is financially appealing because it allows actual usage to closely track demand on an hour-by-hour basis, thereby transferring the risk of making a poor provisioning decision from the service operator to the cloud-computing provider.

But elasticity is even more important for handling spikes and data hot spots resulting from unexpected events. During the terrorist attacks of September 11, 2001, for example, viewer traffic on the CNN website increased by an order of magnitude in just 15 minutes (LeFebvre, 2001). In another case, when entertainer Michael Jackson died unexpectedly in 2009, the number of Web searches about Jackson spiked to nearly 10 times the average so suddenly that Google initially mistook the event for a malicious attack on its search service.

According to Tim O’Reilly, founding editor of O’Reilly Media, a leading technical publisher, the ability to deal with sudden surges is particularly important for mobile applications that “respond in real time to information provided either by their users or by non-human sensors” (quoted in Siegele, 2008). Such services are accessible to the more than 50 percent of the world’s population equipped with cell phones, the most ubiquitous Internet access devices.

OPPORTUNITIES AND CHALLENGES

Scaling Down

Before the advent of cloud computing, scaling up was considered a permanent change, because it usually meant buying and installing new hardware. Consequently, extensive research was conducted on scaling up systems without taking them offline. The idea of subsequently scaling them down—and then possibly back up again—was not even considered.

Since cloud computing involves borrowing machines from a shared pool that is constantly upgraded, scale-up and scale-down are likely to mean that hardware will be more heterogeneous than in a conventional datacenter. Research is just beginning on software, such as scalable consistency-adjustable data storage (SCADS), which can gracefully scale down as well as up in a short time (Armbrust et al., 2009).

At the other extreme, fine-grained pricing may enable even cheaper utility computing during demand troughs. California power companies have already introduced demand-based pricing models in which power is discounted during off-peak times. By analogy, Amazon EC2 has introduced a new mechanism whereby otherwise unused machines are made available at a discounted rate on a “best-effort” basis. However, the user might be forced to give up the machine on short notice if demand increases and a priority customer is willing to pay a premium for it.

This leads to a relatively new situation of clusters whose topologies and sizes can change at any time and whose cycles may be “reclaimed” on short notice for higher priority applications. Research on scheduling frameworks, such as Mesos, is addressing how applications on cloud computing can deal gracefully with such fluctuations (Hindman et al., 2010).

The ability to scale down also introduces new motivations for improving the energy efficiency of IT. In traditional research proposals, energy costs are usually absorbed into general institutional overhead. With cloud computing, a customer who uses fewer machines consumes less energy and, therefore, pays less. Although warehouse-scale datacenters are now being built in locations where cheaper power (e.g., hydroelectric power) is available (Table 2), the pay-as-you-go model of cloud computing introduces a direct financial incentive for cloud users to reduce their energy usage.

Several challenges, however, may interfere with this opportunity for “greener” IT. Today’s servers consume nearly half as much energy when they are idle as when they are running at full utilization. Barroso and Hölzle (2007) have argued that design improvements will be needed at all levels, from the power supply to energy-aware software, to achieve “energy-proportional” computing, in which the amount of energy consumed by a server is proportional to how much work it does.
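
A simple linear power model makes the energy-proportionality argument concrete. In the sketch below (a rough illustration with assumed wattages, not measurements of any particular server), a machine that draws half of its peak power while idle uses far more energy per unit of work at the low utilizations typical of datacenters than an ideal energy-proportional machine would.

```python
def conventional_power(utilization, p_peak=250.0, p_idle=125.0):
    """Linear power model (watts): idle draw plus a share that scales with load.
    The wattages are assumed, illustrative values."""
    return p_idle + (p_peak - p_idle) * utilization

def energy_proportional_power(utilization, p_peak=250.0):
    """Ideal energy-proportional server: power scales directly with work done."""
    return p_peak * utilization

for u in (0.1, 0.3, 0.5):
    print(u, conventional_power(u), energy_proportional_power(u))
# At 30% utilization the conventional server draws 162.5 W versus 75 W for the
# ideal machine: more than twice the energy for the same amount of work.
```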

TABLE 2 Price of Kilowatt Hours (kWh) of Electricity

Cents per kWh | Region     | Factors
3.6           | Idaho      | Hydroelectric power; no long-distance transmission
10.0          | California | Long-distance transmission; limited transmission lines in Bay Area; no coal-fired electricity allowed in the state
18.0          | Hawaii     | Fuel must be shipped to generate electricity

Source: EIA, 2010.

Better and Faster Research

Cost associativity means that “embarrassingly parallel” experiments—experiments that require many trials or tasks that can be pursued independently—can be accelerated to the extent that available cloud resources allow. For example, an experiment that requires 100,000 trials of one minute each would take more than two months to complete on a single server. Cost associativity makes it possible to harness 1,000 cloud servers and finish in about two hours for the same cost. Researchers in the RAD Lab working on datacenter-scale computing now routinely run experiments involving hundreds of servers to test their ideas at realistic scale. Before cloud computing, this was impossible for any university laboratory.
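
The acceleration comes from straightforward server-hour arithmetic. The sketch below (illustrative only, assuming perfectly independent one-minute trials, no startup overhead, and the same hypothetical 8.5-cent hourly rate used earlier) compares the elapsed time and cost of the 100,000-trial experiment on 1 server and on 1,000 servers.

```python
RATE = 0.085  # assumed price per server-hour, for illustration only

def parallel_experiment(trials, minutes_per_trial, servers):
    """Elapsed time and cost when independent trials are spread evenly
    across a fixed number of servers (ignores scheduling overhead)."""
    total_server_hours = trials * minutes_per_trial / 60.0
    elapsed_hours = total_server_hours / servers
    return elapsed_hours, total_server_hours * RATE

print(parallel_experiment(100_000, 1, servers=1))     # ~1,667 hours (69 days), ~$142
print(parallel_experiment(100_000, 1, servers=1000))  # ~1.7 hours, same ~$142
```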

Tools like Google’s MapReduce (Dean and Ghemawat, 2004) and the open-source equivalent, Hadoop, give programmers a familiar data-parallel “building block” and encapsulate the complex software engineering needed to handle resource scheduling and machine failures in the cloud environment. However, because many problems cannot be easily expressed as MapReduce tasks, other frameworks, such as Pig, Hive, and Cascading, have emerged to provide higher-level languages and abstractions for cloud programming.
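
To give a flavor of the programming model, here is a minimal, single-process sketch of the map/reduce pattern (word counting) in Python. It is not Hadoop or Google MapReduce code; a real framework expresses the same two functions but shards the input across many machines, schedules the map and reduce tasks, and transparently re-executes tasks lost to machine failures.

```python
from collections import defaultdict
from itertools import chain

def map_fn(document):
    # Map phase: emit a (word, 1) pair for every word in one input record.
    return [(word, 1) for word in document.split()]

def reduce_fn(word, counts):
    # Reduce phase: combine all the counts emitted for one key.
    return word, sum(counts)

def word_count(documents):
    groups = defaultdict(list)                              # the "shuffle" step
    for key, value in chain.from_iterable(map(map_fn, documents)):
        groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(word_count(["the cloud", "the datacenter and the cloud"]))
# {'the': 3, 'cloud': 2, 'datacenter': 1, 'and': 1}
```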

Indeed, Amazon’s recently introduced “Elastic MapReduce” service, which provides a “turnkey” version of the MapReduce framework, allows jobs to be written using not only those frameworks but also statistical modeling packages such as R. At the level of the cloud infrastructure itself, the goal of the Berkeley BOOM project (boom.cs.berkeley.edu) is to simplify the creation of new cloud programming frameworks by applying principles from declarative networking.

Progress is being made on all of these fronts, and some new systems are in regular use in production environments. However, these artifacts and the ecosystem around them are still a long way from “turnkey” systems that would allow domain-expert programmers to combine the abstractions seamlessly in their applications.

HIGH-PERFORMANCE COMPUTING

The scientific and high-performance computing (HPC) community has recently become more interested in cloud computing. Compared to SaaS workloads, which rely on request-level parallelism, HPC workloads typically rely on thread- or task-level parallelism, making them more communication-intensive and more sensitive to communication latency. These properties make HPC workloads particularly vulnerable to “performance noise” artifacts introduced by the pervasive use of virtualization in cloud environments (Armbrust et al., 2010b).

Legacy scientific codes often rely on resource-scheduling approaches, such as gang scheduling, and make assumptions about the network topology that connects the servers. Such design decisions make sense in a statically provisioned environment but not in the cloud. Thus, not surprisingly, early benchmarks of existing HPC applications on public clouds were not encouraging (Evangelinos and Hill, 2008; Walker, 2008).

However, cloud providers have been quick to respond to the potential HPC market, as illustrated by Amazon’s introduction in July 2010 of “Cluster Compute Instances” tuned specifically for HPC workloads. Experiments at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory measured an 8.5X performance improvement on several HPC benchmarks when using this new instance type rather than conventional EC2 instances. Amazon’s own measurements show that a “virtual cluster” of 880 HPC instances can run the LINPACK linear algebra benchmark faster than the 145th-fastest supercomputer in the world, as ranked by the Top500 list. These results have encouraged more scientists and engineers to try cloud computing for their experiments. Installations operated by academic/industrial consortia, such as the Google/IBM/NSF CluE cluster that runs Hadoop (NSF, 2009), Yahoo’s M45 cluster (http://labs.yahoo.com/Cloud_Computing), and OpenCirrus (opencirrus.org), are other examples of cloud computing for scientific research.

Even if a problem runs more slowly in the cloud than on a dedicated supercomputer, the total time-to-answer might still be shorter with cloud computing, because, unlike at traditional HPC facilities, the user can provision a “virtual supercomputer” in the cloud almost instantly rather than waiting in a queue behind other users (Foster, 2009).
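
This time-to-answer argument reduces to a simple comparison of waiting time plus running time; the figures in the sketch below are made up purely for illustration.

```python
def time_to_answer(hours_waiting_for_resources, hours_running):
    """Total time-to-answer = time spent waiting for machines + execution time."""
    return hours_waiting_for_resources + hours_running

# Hypothetical numbers: the shared supercomputer runs the job 4x faster,
# but its batch queue is backed up for two days, whereas cloud servers
# can be provisioned in minutes.
print(time_to_answer(48.0, 5.0))   # 53.0 hours on the supercomputer
print(time_to_answer(0.2, 20.0))   # 20.2 hours in the cloud
```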

Longtime HPC veteran Dan Reed, now head of the eXtreme Computing Group (XCG) at Microsoft Research, also believes cloud computing is a “game changer” for HPC (West, 2009). He points out that while cloud infrastructure design shares many of the challenges of HPC supercomputer design, the much larger volume of the cloud infrastructure market will influence hardware design in a way that traditional HPC has been unable to do.

TRANSFERS OF BIG DATA

According to Wikipedia, the Large Hadron Collider could generate up to 15 petabytes (15 × 10^15 bytes) of data per year, and researchers in astronomy, biology, and many other fields routinely deal with multi-terabyte (TB) datasets. A boon of cloud computing is its ability to make tremendous amounts of computation available on demand for use with such large datasets. Indeed, Amazon hosts some large public datasets for free, perhaps hoping to attract users who will purchase nearby cloud computing cycles (Amazon AWS, 2008a).

The key word here is nearby. Transferring 10 TB over a network connection at 20 megabits per second—a typical speed observed in measurements of long-haul bandwidth in and out of Amazon’s S3 cloud storage service (Garfinkel, 2007)—would take more than 45 days and incur transfer charges of $100 to $150 per TB.
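
The 45-day figure follows directly from the stated bandwidth. The short sketch below redoes that calculation using decimal terabytes and the 20 Mbit/s figure from the text; actual transfer times and fees would of course vary with the provider and the link.

```python
def transfer_days(terabytes, megabits_per_second):
    """Days required to move a dataset over a sustained network link."""
    bits = terabytes * 1e12 * 8                  # decimal TB -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86_400                      # seconds per day

print(transfer_days(10, 20))   # ~46 days to move 10 TB at 20 Mbit/s
```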


In the overview of cloud computing by Armbrust et al. (2010b), we therefore proposed a service that would let users instead ship crates of hard drives containing large datasets overnight to a cloud provider, which would physically incorporate them directly into the cloud infrastructure. The idea was based on the experience of the late Jim Gray, the Turing Award-winning computer scientist who was instrumental in promoting the use of large-scale computation in science and engineering. Gray reported using this technique reliably; even if disks are damaged in transit, well-known RAID-like techniques can mitigate the effects of such failures (Patterson, 2003).

Shortly after the overview was published, Amazon began offering such a service and continues to do so. Because network cost/performance is improving more slowly than any other cloud computing technology (see Table 3), the “FedEx a disk” option for large data transfers is likely to become increasingly attractive.

TABLE 3 Update of Gray’s Costs of Computing Resources from 2003 to 2008

                              | Wide-Area (Long-Haul) Network Bandwidth/Month | CPU Hours (all cores)                       | Disk Storage
Item in 2003                  | 1 Mbps WAN^a link                             | 2 GHz CPU, 2 GB DRAM                        | 200 GB disk, 50 MB/s transfer rate
Cost in 2003                  | $100/month                                    | $2,000                                      | $200
What $1 buys in 2003          | 1 GB                                          | 8 CPU hours                                 | 1 GB
Item in 2008                  | 100 Mbps WAN link                             | 2 GHz, 2 sockets, 4 cores/socket, 4 GB DRAM | 1 TB disk, 115 MB/s sustained transfer
Cost in 2008                  | $3,600/month                                  | $1,000                                      | $100
What $1 buys in 2008          | 2.7 GB                                        | 128 CPU hours                               | 10 GB
Cost/performance improvement  | 2.7x                                          | 16x                                         | 10x
Cost to rent                  | $0.27–$0.40                                   | $2.56                                       | $1.20–$1.50
What $1 buys on AWS^b in 2008 | $0.10–$0.15/GB × 3 GB                         | 128 × 2 VMs @ $0.10 each                    | $0.12–$0.15/GB-month × 10 GB

^a WAN = wide-area (long-haul) network.

^b AWS = Amazon Web Services.

Source: Armbrust et al., 2010a.


LICENSING AND CLOUD PROVIDER LOCK-IN

Amazon’s EC2 represents one end of a spectrum in that its utility computing service consists of a bare-bones server built around the Intel x86 processor architecture. Cloud users must provide all of the software themselves, and open-source building blocks, such as the Linux operating system, are popular starting points. However, scientific and engineering research also frequently requires the use of proprietary software packages, such as Matlab.

Although some publishers of proprietary software (including MathWorks, the publisher of Matlab) now offer a pay-as-you-go licensing model like the one used for the public cloud, most software is still licensed in a “cloud-unfriendly” manner (e.g., per seat or per computer). Changing the structure of software licenses to approximate the public-cloud pricing model is a nontechnical but real obstacle to the increased use of the cloud in scientific computing.

In addition, if other providers, such as Google AppEngine or Microsoft Azure, provide value-added software functionality in their clouds, users might become dependent on such software to the point that their computing jobs come to require it. An example is Google AppEngine’s automatic scale-up and scale-down functionality, which is available for certain kinds of user-deployed applications. If such applications were migrated to a non-Google platform, the application authors might have to create this functionality themselves.

The potential risk of “lock-in” to a single provider could be partially mitigated by standardizing the application programming interfaces and data formats used by different cloud services. Providers could then differentiate their offerings by the quality of their implementations, and migration from one provider to another would result in a possible loss of performance, rather than a loss of functionality. The Data Liberation Front, a project started by a group of Google engineers, is one group that is actively pursuing data standardization.

CONCLUSION

In 1995, researchers at Berkeley and elsewhere argued that networks of commodity workstations (NOWs) offered potential advantages over high-performance symmetric multiprocessors (Anderson et al., 1995), including better scalability, cost-effectiveness, and high availability through inexpensive redundancy.

At that time, software could not deal with important aspects of the NOW architecture, such as the possibility of partial failure. Nevertheless, the economic and technical arguments for NOWs seemed so compelling that, over the course of several years, academic researchers and commercial and open-source software authors developed tools and infrastructure for programming this idiosyncratic architecture at a much higher level of abstraction. As a result, applications that once took engineers years to develop and deploy on a NOW can be prototyped today by Berkeley undergraduates as an eight-week course project.

Given this rapid evolution, there is good reason to be optimistic that in the near future computer-based scientific and engineering experiments that take weeks today will yield results in a matter of hours. When that time arrives, the necessity of purchasing and administering one’s own supercomputer or computer cluster (and then waiting in line to use it) will seem as archaic as text-only interfaces do today.

ACKNOWLEDGMENTS

This work was done at the University of California Berkeley Reliable Adaptive Distributed Systems Laboratory (RAD Lab), and much of what is reported here builds on a survey article written by Michael Armbrust, the present author, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. Support was provided by Sun Microsystems, Google, Microsoft, Amazon Web Services, Cisco Systems, Cloudera, eBay, Facebook, Fujitsu, Hewlett-Packard, Intel, Network Appliance, SAP, VMWare, and Yahoo! Inc., as well as matching funds from the State of California MICRO Program (Grants 06-152, 07-010, 06-148, 07-012, 06-146, 07-009, 06-147, 07-013, 06-149, 06-150, and 07-008), the National Science Foundation (Grant CNS-0509559), and the University of California Industry/University Cooperative Research Program (UC Discovery) Grant COM07-10240.

REFERENCES

Amazon AWS. 2008a. Public Data Sets on AWS. Available online at http://aws.amazon.com/publicdatasets/.

Amazon AWS. 2008b. AWS Case Study: Washington Post. Available online at http://aws.amazon.com/solutions/case-studies/washington-post.

Anderson, T.E., D.E. Culler, and D. Patterson. 1995. A case for NOW (networks of workstations). IEEE Micro 15(1): 54–64.

Armbrust, M., A. Fox, D.A. Patterson, N. Lanham, B. Trushkowsky, J. Trutna, and H. Oh. 2009. SCADS: scale-independent storage for social computing applications. In CIDR Perspectives 2009. Available online at http://www.db.cs.wisc.edu/cidr/cidr2009/Paper_86.pdf.

Armbrust, M., A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. 2010a. Above the Clouds: A Berkeley View of Cloud Computing. Technical Report EECS-2009-28, EECS Department, University of California, Berkeley. Available online at www.abovetheclouds.cs.berkeley.edu.

Armbrust, M., A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. 2010b. A view of cloud computing. Communications of the ACM 53(4): 50–58.

Barroso, L.A., and U. Hölzle. 2007. The case for energy-proportional computing. IEEE Computer 40(12): 33–37.


Corbató, F.J., and V.A. Vyssotsky. 1965. Introduction and overview of the Multics system. P. 185 in Proceedings of the Fall Joint Computer Conference, 1965. New York: IEEE.

Dean, J., and S. Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. Pp. 137–150 in Proceedings of the 6th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’04), December 5–8, 2004, San Diego, Calif. Berkeley, Calif.: USENIX.

EIA (Energy Information Administration). 2010. State Electricity Prices, 2006. Available online at http://www.eia.doe.gov/neic/rankings/stateelectricityprice.htm.

Evangelinos, C., and C.N. Hill. 2008. Cloud computing for parallel scientific HPC applications: feasibility of running coupled atmosphere-ocean climate models on Amazon’s EC2. In First ACM Workshop on Cloud Computing and its Applications (CCA’08), October 22–23, 2008, Chicago, Ill. New York: ACM.

Foster, I. 2009. What’s faster—a supercomputer or EC2? Available online at http://ianfoster.typepad.com/blog/2009/08/whats-fastera-supercomputer-or-ec2.html.

Garfinkel, S. 2007. An Evaluation of Amazon’s Grid Computing Services: EC2, S3 and SQS. Technical Report TR-08-07. Harvard University. Available online at http://simson.net/clips/academic/2007.Harvard.S3.pdf.

Hamilton, J. 2008. Internet-Scale Service Efficiency. Presentation at 2nd Large-Scale Distributed Systems and Middleware (LADIS) Workshop, September 15–17, 2008, White Plains, NY. Available online at http://www.cs.cornell.edu/courses/cs5410/2008fa/Slides/MSCloud.pdf.

Hindman, B., A. Konwinski, M. Zaharia, A. Ghodsi, A.D. Joseph, R.H. Katz, S. Shenker, and I. Stoica. 2010. Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center. Technical Report UCB-EECS-2010-87. University of California, Berkeley. Available online at http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-87.html.

LeFebvre, W. 2001. CNN.com: Facing a World Crisis. In Proceedings of the 15th Conference on Systems Administration (LISA 2001), San Diego, Calif., December 2–7, 2001. Berkeley, Calif.: USENIX.

NSF (National Science Foundation). 2009. National Science Foundation Awards Millions to Fourteen Universities for Cloud Computing Research. Available online at http://www.nsf.gov/news/news_images.jsp?cntn_id=114686&org=NSF.

Patterson, D. 2003. A Conversation with Jim Gray. ACM Queue 4(1). Available online at http://queue.acm.org/detail.cfm?id=864078.

Siegele, L. 2008. A Survey of Corporate IT: Let It Rise. The Economist (October 23). Web edition only.

Walker, E. 2008. Benchmarking Amazon EC2 for high performance scientific computing. ;login 33(5): 18–23. Available online at http://www.usenix.org/publications/login/2008-10/openpdfs/walker.pdf.

West, J. 2009. Twins Separated at Birth: Cloud Computing, HPC and How Microsoft is Trying to Change How We Think About Scale. Available online at http://www.hpcwire.com/features/Twins-Separated-at-Birth-41173917.html.
