Opportunities and Challenges of Cloud Computing, by Armando Fox
Pages 5-14

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 5...
..., making it possible for anyone with a credit card to use the servers in Amazon's datacenters for 10 cents per server-hour, with no minimum or maximum purchase and no contract (Amazon AWS, 2008b).
From page 6...
... COST ASSOCIATIVITY AND ELASTICITY

The cloud-computing service model, which represents a radical departure from conventional information technology (IT), enables fundamentally new kinds of computation that were previously infeasible.
From page 7...
... Elasticity is financially appealing because it allows actual usage to closely track demand on an hour-by-hour basis, thereby transferring the risk of making a poor provisioning decision from the service operator to the cloud-computing provider. But elasticity is even more important for handling spikes and data hot spots resulting from unexpected events.
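
To make the cost side of this concrete, here is a minimal Python sketch (not from the chapter) comparing fixed peak provisioning with hourly pay-per-use. Only the 10-cents-per-server-hour rate comes from page 5; the 24-hour demand profile is invented for illustration.

    RATE = 0.10  # dollars per server-hour (Amazon AWS, 2008b)

    # Hypothetical hourly demand (servers needed) over one 24-hour day,
    # including a short two-hour spike to 500 servers.
    demand = [40] * 8 + [100] * 4 + [500] * 2 + [100] * 6 + [40] * 4

    # Fixed provisioning: capacity sized for the peak, paid for around the clock.
    fixed_cost = max(demand) * len(demand) * RATE

    # Elastic provisioning: pay only for the server-hours actually consumed.
    elastic_cost = sum(demand) * RATE

    print(f"fixed:   ${fixed_cost:.2f}")    # $1200.00 (500 servers x 24 h)
    print(f"elastic: ${elastic_cost:.2f}")  # $248.00  (2,480 server-hours)

Under these assumed numbers the elastic operator pays roughly a fifth as much, and a provisioning error is absorbed hour by hour rather than locked in as idle hardware.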
From page 8...
... In traditional research proposals, energy costs are usually absorbed into general institutional overhead. With cloud computing, a customer who uses fewer machines consumes less energy and, therefore, pays less.
From page 9...
... Cost associativity makes it possible to harness 1,000 cloud servers for two hours for the same cost. Researchers in the RAD Lab working on datacenter-scale computing now routinely run experiments involving hundreds of servers to test out their ideas at realistic scale.
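
The arithmetic behind cost associativity is simply that pure pay-per-use pricing depends only on the product of servers and hours. A short Python sketch, again using the page-5 rate of $0.10 per server-hour for illustration:

    RATE = 0.10  # dollars per server-hour

    def cost(servers: int, hours: float) -> float:
        # Pure pay-per-use: no minimum purchase and no volume discount,
        # so cost depends only on servers * hours.
        return servers * hours * RATE

    # 2,000 server-hours cost the same no matter how they are arranged.
    assert cost(1000, 2) == cost(1, 2000) == cost(2, 1000)
    print(f"${cost(1000, 2):.2f}")  # $200.00 either way

Because cost is linear in server-hours, a researcher can trade wall-clock time for parallelism at no extra charge, which is what makes hundred-server experiments routine.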
From page 10...
... . He points out that while cloud infrastructure design shares many of the challenges of HPC supercomputer design, the much larger volume of the cloud infrastructure market will influence hardware design in a way that traditional HPC has been unable to do.
From page 11...
... (2010b), we therefore proposed a service that would enable users to instead ship crates of hard drives containing large datasets overnight to a cloud provider, who would physically incorporate them directly into the cloud infrastructure.
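
A back-of-the-envelope calculation shows why shipping disks can beat the network. The dataset size and link speed below are assumptions chosen for illustration, not figures from the chapter:

    DATASET_TB = 10   # assumed dataset size, terabytes
    LINK_MBPS = 20    # assumed sustained WAN bandwidth, megabits per second

    bits = DATASET_TB * 10**12 * 8                        # dataset size in bits
    wan_days = bits / (LINK_MBPS * 10**6) / 86_400        # days to push it over the WAN

    ship_days = 1                                         # overnight freight
    effective_mbps = bits / (ship_days * 86_400) / 10**6  # crate's effective bandwidth

    print(f"WAN transfer:    {wan_days:.0f} days")         # ~46 days
    print(f"overnight crate: {effective_mbps:.0f} Mbit/s") # ~926 Mbit/s

At these assumed rates the overnight crate delivers an effective bandwidth dozens of times higher than the network link, which is the economic case for physically incorporating shipped drives into the infrastructure.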
From page 12...
... Providers could then differentiate their offerings by the quality of their implementations, and migration from one provider to another would result in a possible loss of performance, rather than a loss of functionality. The Data Liberation Front, a project started by a group of Google engineers, is one group that is actively pursuing data standardization.
From page 13...
... Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. Support was provided by Sun Microsystems, Google, Microsoft, Amazon Web Services, Cisco Systems, Cloudera, eBay, Facebook, Fujitsu, Hewlett-Packard, Intel, Network Appliance, SAP, VMware, and Yahoo!
From page 14...
... 2008. Benchmarking Amazon EC2 for high-performance scientific computing.

