3 Implications for Cloud Services and Isolation
Pages 28-38



From page 28...
... When Google moved to create its own browser and eventually a public cloud service, a much trickier set of security concerns emerged, Grosse said. For the first time, Google's servers would be running code from external users.
From page 29...
... are ultimately hardware issues, they have important implications for cloud computing systems such as Google's. Public cloud managers are heavily invested in security, which, Grosse emphasized, is a shared responsibility between the cloud vendor and the user.
From page 30...
... With live migration, Google engineers were able to patch the physical hosts underneath virtual machines (VMs) as needed, even down to the firmware, without customers noticing any disruptions to service or even requiring any of the VMs to be rebooted, all before Spectre became public knowledge.
From page 31...
... It is hard to attack workloads directly, because the zones are too big and the scheduling is unpredictable. As a result, bad actors have to run attacks continuously and dig very deep to reach valuable data.
From page 32...
... He concluded with the observation that one additional positive aspect of the cloud is that cloud services tend to cycle through hardware very quickly, with some machines lasting only a few years. While these hardware issues get sorted out across subsequent generations of CPUs, refreshing hardware, he said, is good for security.
From page 33...
... The more the customer can customize the behavior of a service by uploading some kind of executable content, the more these kinds of attacks become possible. The most vulnerable cloud service is, therefore, always going to be a VM service, because there the customer has the most complete ability to run arbitrary code down to the operating system level and observe the time characteristics of that execution on shared infrastructure.
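The risk of observing "the time characteristics of that execution" can be illustrated with a deliberately simplified sketch. The function names are invented, and the per-byte sleep is an artificial stand-in for the microarchitectural timing differences a real attacker would measure; this is a conceptual illustration, not an actual exploit.

```python
import time

def insecure_equals(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: running time grows with the number of
    leading bytes of the guess that match the secret."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
        # Simulated per-byte work so the timing difference is visible here;
        # on real hardware the leak comes from the comparison itself.
        time.sleep(0.001)
    return True

def measure(secret: bytes, guess: bytes, trials: int = 5) -> float:
    """Return the best (minimum) observed running time over several trials."""
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        insecure_equals(secret, guess)
        best = min(best, time.perf_counter() - t0)
    return best

secret = b"spectre"
wrong_early = measure(secret, b"xpectre")  # mismatch at byte 0: fast
wrong_late = measure(secret, b"specxxx")   # four leading bytes match: measurably slower
```

An observer who can only time the call can still rank guesses by how many leading bytes are correct, which is the essence of the concern: arbitrary code on shared infrastructure turns execution time into a signal.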
From page 34...
... For the T family of instance types, which do provide better prices based on some degree of oversubscription, AWS devised a "CPU credits" model that still provides predictable behavior. Importantly, these efforts designed to provide consistency and predictability ended up being natural mitigations for a lot of the security issues seen on the hardware side.
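The "CPU credits" idea described above can be sketched as a simple token-bucket model: credits accrue at a fixed rate, are banked up to a cap, and are spent whenever the instance runs above its baseline. All numbers below (earn rate, baseline, cap, one credit equaling one vCPU-minute) are illustrative assumptions, not AWS's actual T-instance parameters.

```python
class CpuCreditBucket:
    """Minimal sketch of a burstable-instance credit model (hypothetical parameters)."""

    def __init__(self, earn_rate_per_hour: float, baseline_util: float, cap: float):
        self.earn_rate = earn_rate_per_hour  # credits earned per hour
        self.baseline = baseline_util        # fraction of a vCPU always available
        self.cap = cap                       # maximum credits that can be banked
        self.credits = 0.0

    def tick(self, hours: float, requested_util: float) -> float:
        """Advance the clock and return the CPU utilization actually granted."""
        # Credits accrue at a fixed rate regardless of usage, up to the cap...
        self.credits = min(self.cap, self.credits + self.earn_rate * hours)
        # ...and are spent when the instance runs above its baseline.
        # Assumption: 1 credit = 1 vCPU-minute of usage above baseline.
        burst = max(0.0, requested_util - self.baseline)
        spend = burst * hours * 60
        if spend <= self.credits:
            self.credits -= spend
            return requested_util  # full burst granted
        granted_burst = self.credits / (hours * 60)
        self.credits = 0.0
        return self.baseline + granted_burst  # throttled back toward baseline

bucket = CpuCreditBucket(earn_rate_per_hour=6, baseline_util=0.10, cap=144)
idle = bucket.tick(hours=12, requested_util=0.05)  # idle period: credits accumulate
busy = bucket.tick(hours=1, requested_util=1.0)    # burst to a full vCPU on banked credits
```

The design goal this sketch captures is the predictability Ryland emphasized: the instance's behavior is a deterministic function of its own usage history, not of what co-tenants happen to be doing.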
From page 35...
... Ryland next described Nitro, Amazon's latest computing architecture for its VM service. Nitro has several security features, including hardware and firmware validation of the Intel chip and system firmware at every reboot, no interactive shell (all privileged access is done by APIs)
From page 36...
... Building on themes raised earlier in the workshop, they also discussed trust issues and potential areas for improvement.

Security at Google and AWS

Grosse asked Baker to comment on the security implications of live migration, given that operating systems or applications may never actually need to be rebooted.
From page 37...
... Ryland agreed that it is a challenge, and he noted that AWS applies current technology judiciously to keep its environments as trustworthy as possible. He added that AWS itself makes its own chips for certain use cases, such as the Nitro hardware, which offers a certain level of customization.
From page 38...
... Ryland noted that thieves have to work very hard even to find the data worth stealing; splitting data across microservices makes the attacker's task even harder.

