3 Standards and Security Implications

The workshop’s third session focused on standards for cryptographic agility as well as the security implications for various levels and types of agility. Russ Housley described the process and perspective of the recent Request for Comments (RFC) 7696 guidelines from the Internet Engineering Task Force, and David McGrew drew on real-world experiences and data to highlight lessons learned and future directions.

RFC 7696: GUIDELINES FOR CRYPTOGRAPHIC ALGORITHM AGILITY AND SELECTING MANDATORY-TO-IMPLEMENT ALGORITHMS

Russ Housley, Vigil Security, LLC

Russ Housley, founder of Vigil Security, LLC, and past chair of the Internet Architecture Board (IAB), presented on RFC 7696: Guidelines for Cryptographic Algorithm Agility and Selecting Mandatory-to-Implement Algorithms.

RFC 7696, which is also known as Best Current Practices 201, was initiated by IAB under its Privacy and Security Program. After the early draft development and initial review and comment process, the project was transferred to the Internet Engineering Task Force (IETF) Security Area Advisory Group. Before publishing the document in November 2015, IETF shepherded it through a broader review process and achieved consensus.

The goal of RFC 7696 is simple to state but not so simple to achieve: to ensure that security protocols can migrate from one algorithm or suite of algorithms to a newer, stronger one when needed. Tackling this, Housley said, requires viewing the problem from two perspectives: that of the protocol designer and that of the protocol implementer.

Protocol implementers need to be able to add a new algorithm, which requires modularity that would not otherwise be needed, and they need a way to detect if the new algorithm has been successfully added, which is something many protocols currently lack. “We are hoping that one of the things that will result from this guidance is that we will get that capability,” Housley said.

The protocol designer has a different goal: to identify which algorithms are old, which are new, and which are mandatory to implement to achieve interoperability. The answers to these questions change each time a new algorithm is added, which increases confusion. A registry of identifiers for all known protocols, algorithms, and algorithm suites would help both designers and implementers, Housley said, noting that the general consensus is that such a registry should be created and maintained by the Internet Assigned Numbers Authority.
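
To make the registry-and-detection idea concrete, the following Python sketch uses hypothetical identifiers and algorithm names (not actual IANA assignments) to show how a shared table of identifiers lets two implementations discover that a newer algorithm is available on both sides:

```python
# Illustrative sketch (not IANA's actual registry): a table of algorithm
# identifiers plus a negotiation step that detects whether a peer has
# successfully added support for a newer algorithm.

# Hypothetical numeric identifiers, in the spirit of an IANA registry.
ALGORITHM_REGISTRY = {
    0x0001: "legacy-cipher",      # older algorithm, slated for retirement
    0x0002: "current-cipher",     # mandatory-to-implement
    0x0003: "next-gen-cipher",    # newly added, not yet widely deployed
}

def negotiate(local_supported: set[int], peer_offered: set[int]) -> int:
    """Pick the strongest algorithm both sides support.

    Identifiers are assumed to be ordered so that larger values denote
    newer, stronger algorithms; a real protocol would carry an explicit
    preference list instead.
    """
    common = local_supported & peer_offered
    if not common:
        raise RuntimeError("no common algorithm; interoperability fails")
    return max(common)

# Detecting that the new algorithm was successfully added: the peer now
# offers identifier 0x0003, so negotiation selects it.
chosen = negotiate({0x0001, 0x0002, 0x0003}, {0x0002, 0x0003})
print(ALGORITHM_REGISTRY[chosen])  # next-gen-cipher
```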

Selecting Mandatory-to-Implement Algorithms

Housley emphasized that an important role of RFC 7696 concerns the selection of algorithms that are mandatory to implement and necessary for interoperability. Mandatory-to-implement algorithms should be included in any implementation of a particular protocol, although there can be debate over which algorithm to select. For example, there is currently a debate over whether to default to a probabilistic signature scheme (a randomized method for encoding a message before the signature operation is applied) as part of the Transport Layer Security (TLS) 1.3 protocol. Either way, once the decision is made, all implementations must include the common algorithm in order to achieve interoperability and to be compliant.

Prompted by Bob Blakley, CitiGroup, Inc., Housley clarified that if the mandatory-to-implement algorithm is not implemented, the product would be considered non-compliant with IETF standards, though he noted that IETF has no policing power to enforce these standards. “It is more of a ‘Hall of Shame’ kind of a consequence,” he said.

Although IETF provides guidance that allows for either choosing algorithm suites or taking a piece-by-piece approach (in which a cryptographic mechanism is built from a collection of known elements), Housley noted that sometimes a piece-by-piece approach can result in a mechanism that looks reasonable but is not actually secure. Housley stressed the importance of building a cryptographic mechanism in which each piece is roughly equal in strength. A weak key agreement paired with a strong cipher, for example, is still vulnerable to attack at its weakest point—another downside to a piece-by-piece approach. Echoing a sentiment expressed by other speakers, Housley emphasized that simple implementations are essential to strong security.
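
The weakest-link point can be illustrated with a small Python sketch; the bit-strength figures below are rough, commonly cited approximations rather than normative values:

```python
# A rough sketch of the "weakest link" point: a suite built piece by piece
# is only as strong as its weakest component.
COMPONENT_STRENGTH_BITS = {
    "RSA-1024 key agreement": 80,    # weak by today's standards
    "AES-256 cipher": 256,
    "SHA-256 hash": 128,             # collision resistance
}

def effective_strength(suite: list[str]) -> int:
    # The attacker goes after the cheapest target, so the suite's
    # effective strength is the minimum over its pieces.
    return min(COMPONENT_STRENGTH_BITS[c] for c in suite)

suite = ["RSA-1024 key agreement", "AES-256 cipher", "SHA-256 hash"]
print(effective_strength(suite))  # 80 -- the strong cipher does not help
```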

Ideally, Housley noted, when a new, stronger algorithm is developed and introduced, the older algorithm is retired. But it takes a lot of work and coordination to make this happen; there must be a mechanism in place to reject the older (often flawed) algorithm before the new one is rolled out to all implementations. The challenge is compounded when an algorithm is implemented in hardware. In that case, a piece of equipment such as a chip or board has to be replaced (rather than distributing software patches over the network). “No matter what we do,” said Housley, “transition is going to be hard.” And when interoperability fails, the one who turns off the old algorithm becomes “the bad guy” who is blamed when the two systems are no longer able to work together.

The IETF process of selecting mandatory-to-implement algorithms was designed to “let the experts in this technology do the picking,” Housley said. The number of algorithms or implementations is also important. If there are too many algorithms or implementations available, several may be rarely used; if these have flaws or breaches, they can go unnoticed, and thus unfixed and exploited, for a long time. On the other hand, if only one algorithm or implementation is chosen and it turns out to have a flaw, that is clearly a problem, too. RFC 7696 therefore recommends choosing two algorithms for implementation. The second choice can act as a backup if the first is found to be flawed.

During the discussion, Blakley inquired about what constraints might be imposed by fixed-length fields in hardware and protocol headers in the context of agility. Housley responded that in his experience this is less and less of a problem as it gets easier to accommodate variable field lengths. Although the field length might affect the speed of a communication, he said, it is not likely to drop the communication altogether, as might have been the case in the past.

Opportunistic Security and the Need for Deliberate Security Decisions

RFC 7696 came about when the idea of “opportunistic security” was gaining traction. Under this model, communications are encrypted, possibly without any authentication. Basically, any encryption technique that is common between the implementations is used, which increases the effort for an adversary to intercept communications. In this sense, opportunistic security makes pervasive, passive surveillance difficult because many users employing a weaker algorithm or shorter keys can force an entity trying to do covert surveillance to break all of those endpoints, which is a difficult and time-consuming task.
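
One common realization of this model is opportunistic upgrading of a cleartext protocol, as in SMTP's STARTTLS. The following standard-library Python sketch shows the pattern: encrypt if the peer offers it, skip authentication, and fall back to cleartext otherwise. It illustrates the model rather than endorsing it:

```python
import smtplib
import ssl

# A sketch of opportunistic encryption in the style of SMTP STARTTLS:
# encrypt if the peer offers it (with relaxed authentication), otherwise
# fall back to cleartext.
def open_opportunistic(host: str) -> smtplib.SMTP:
    conn = smtplib.SMTP(host, 25, timeout=10)
    conn.ehlo()
    if conn.has_extn("starttls"):
        # Accept whatever the peer can do; certificate verification is
        # deliberately disabled, which is what makes this "opportunistic"
        # rather than authenticated security.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        conn.starttls(context=ctx)
        conn.ehlo()
    return conn  # encrypted if the peer supports it, cleartext otherwise

# open_opportunistic("mail.example.com") would return an encrypted session
# when the peer offers STARTTLS and a plaintext one when it does not.
```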

In the discussion, Eric Grosse, Google, Inc., said that when most communication was clear text, he was in favor of opportunistic security. However, he now opposes it for two reasons: (1) people can become overly reliant on it and neglect true security; and (2) state actors or enterprises can take advantage of the relative weakness of opportunistic encryption to do their own content injection.

Housley replied that he does believe opportunistic security increases the amount of work required to conduct surveillance or push content. For example, if a weak cipher is being used, decrypting it still takes time. However, he agreed that data are not well protected with opportunistic encryption, and he cited the difficult situation faced by communications carriers. They want to offer their users the fastest service, while also keeping communication secure. There is currently a debate between the IETF and the Groupe Speciale Mobile Association about how wireless service would be affected if everything becomes encrypted. Email and web encryption might be hardly noticeable to users, creating only tiny delays in the transmission of information, but users “certainly would notice if their voice call had sporadic sputters in it.”

Housley said that a mechanism is essential to determine what needs encryption and what does not. A one-bit covert channel might be enough to enable the right distinctions, he suggested, although he noted that many firewalls currently being used would need to be updated.

It is also crucial that cryptography experts be actively engaged in designing and evaluating systems. Prompted by a question from Donna Dodson, National Institute of Standards and Technology, Housley pointed to instances where cryptographers were not involved, but they could have been of use. For example, Wired Equivalent Privacy (WEP) was flawed in ways that would have been relatively obvious to a cryptographer, but “no cryptographer cared to look” because WEP used 40-bit keys, which were already known to be breakable. When a 128-bit key variant was introduced, the system was still highly flawed because the security decisions had been made by a committee with little cryptographic expertise. “I would argue that we need educated people involved in the protocol design to avoid another WEP,” Housley said.

The International Context

Housley touched on how the international context influences standard-setting. Offering an example from his past role as IETF Security Area Director, he recalled receiving a complaint from Russian financial institutions that there was no support for the GOST algorithms (an alternative to Data Encryption Standard) in the existing TLS protocol. They wanted to use TLS to secure online banking, but the Russian regulations required the use of GOST for all financial transactions in that country. As a consequence, the entire country was unable to bank online. Housley and Steven Bellovin, the two IETF Security Area Directors at the time, agreed to tackle the problem. “I was not willing to be the guy that said the Russians cannot do online banking,” Housley said.

Peter Swire, Georgia Institute of Technology, asked how many countries have national encryption standards and how that landscape affects agility. Housley noted that, in addition to Russia and China, many other countries, including Korea and Japan, have such standards. Each is different; some focus only on the encryption algorithm (as in the case of Korea’s SEED algorithm) and others cover a whole suite of algorithms, key management, digital signatures, hashes, and so on (as in the case of Russia’s GOST family of algorithms).

Different countries have different priorities when it comes to investing in cryptography and algorithms, and technologists have no control over these political decisions. “All we can do is come up with a way that lets [the algorithms] be used in the policy environment where they are required,” said Housley.

Noting that this diverse international context creates modularization of security mechanisms, Swire asked whether these differences might be a “blessing in disguise.” Housley categorized it as a double-edged sword: while modularity has its benefits, configuration around national algorithms makes it harder to maintain interoperability and creates a tough situation when flaws are found. Then, the question becomes how to change the configuration or implementation to eliminate the flawed module without losing interoperability.

Paul Kocher, Cryptography Research Division, Rambus, Inc., raised the ethical side of the international context. Given that governments have differing reasons for imposing requirements and that sometimes those reasons are at odds with the ability for individuals to obtain adequate security, he asked about the role of standard-setting bodies like the IETF: Do we give these governments the cryptography (and “back door”) they want, or can we force them to accept something that “gives everybody well understood security properties?”

In response, Housley said that first, the mandatory-to-implement algorithm needs to be as strong as possible so as to give all Internet users a baseline level of security. Beyond that, each country is operating within its own political context, which is beyond the purview of the IETF. “I think we have to accommodate the local decision—I do not see how we cannot,” he said. A possible balance could be for local profiles to recognize the identifiers of the mandatory-to-implement algorithms even if they do not implement them. He also noted a downside to refusing to comply with international requests. If he had refused the Russians’ request regarding GOST and the TLS protocol, someone would likely have built an alternative TLS, probably with flaws, to accommodate the Russian demand for GOST. That kind of “forking,” Housley said, is harmful to the Internet overall.

Building on this point, Fred Schneider, Cornell University, noted that while one cannot necessarily prevent a government from acting according to its best interests, the IETF could still benefit users by requiring transparency. Transparency would allow people to see how secure the algorithms are before making their decisions. Such transparency also could discourage the creation of intentionally insecure algorithms. Housley said that the IETF has required that certain specifications be readily available for review, and some nations have gone even further, translating their algorithm specifications to English so that the IETF and other standards organizations can make their own judgments.

CRYPTOGRAPHIC AGILITY IN THE REAL WORLD

David McGrew, Cisco Systems, Inc.

David McGrew is a fellow in the Advanced Security Research Group at Cisco Systems, Inc. While cryptographic agility is broadly accepted, it is not broadly understood, he said. Suggesting that there are in fact different types of agility, he structured his talk to focus first on principles, then on real-world experiences, and finally on conclusions to inform future efforts.

Principles Relevant to Cryptographic Agility

McGrew began by highlighting what he sees as key principles related to cryptographic services, implementations, and agility. The first is that agility is essential for protecting against future threats and supporting backward compatibility. McGrew acknowledged quantum computing as an important threat but emphasized that it is not the only one, underscoring the need to be “future proof.” Agility is also essential for implementations, not just algorithms, he said.

A second principle is a non-technical one: customers should be able to choose what cryptography they use. He referred to Housley’s story of Russian online banking as an example of this idea. Cisco, as a multinational company, must balance support for international standards with support for the needs of its international customers. “We do not want to force any particular national agenda,” he said. For example, Cisco supports Japan’s use of its own national cipher, Camellia, as long as Japanese customers are willing to bear the economic costs of that decision. A related principle is that one country should not be able to force its cryptographic choices on the rest of the world, McGrew said.

The final principle is that complexity should not be forced upon users. Agility can be complex, as Paul Kocher outlined in his talk. “There is a concentration of interests and a diffusion of cost,” McGrew said. He continued that if any country wants to add cryptography to the current standard to increase agility, that act increases the complexity for all implementers, testers, users, and policy makers, resulting in a “hidden cost” each time a new option or cipher is added. “I think this is a really important point that might get lost when talking about the benefits of agility,” McGrew said.

Understanding the Real-World Context

Cryptography is implemented in many different places in today’s systems. The level of agility that is appropriate or feasible depends on the context. McGrew described how hardware, field programmable gate arrays (FPGAs), operating systems, applications, and scripts can be arrayed on a scale ranging from most conservative/least flexible to most agile. In hardware, the cryptography is built in, but with FPGAs, there is slightly more flexibility. Operating systems have even more flexibility, applications can be very agile, and scripting languages such as JavaScript offer tremendous agility.

It is important to keep this spectrum in mind when designing cryptography to be agile or conservative, McGrew said. In his view, hash-based signatures can provide the best security against advances in cryptanalysis, potentially even against quantum computing. He emphasized the importance of considering how long the equipment or product is expected to be used. Being conservative is more important than being agile in the context of products that are expected to be in the field a decade or more, he said, pointing to devices in the Internet of Things as an important example.
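
As a concrete illustration of why hash-based signatures are considered conservative, the sketch below implements a Lamport one-time signature using nothing but a hash function; production systems use many-time schemes such as XMSS or SPHINCS+, but the security argument rests on the same ingredient:

```python
import hashlib
import secrets

# A minimal Lamport one-time signature, the simplest hash-based scheme.
# Security rests only on the hash function; the key must never be reused.

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]  # reveal one preimage per bit

def verify(message: bytes, signature, pk) -> bool:
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(hashlib.sha256(sig).digest() == pk[i][b]
               for i, (sig, b) in enumerate(zip(signature, bits)))

sk, pk = keygen()
sig = sign(b"firmware image v1.0", sk)
print(verify(b"firmware image v1.0", sig, pk))  # True
```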

Elucidating the Different Types of Agility

McGrew posited that there is not one, but three types of agility: algorithm agility, protocol agility, and implementation agility. Algorithm agility means being able to swap ciphers or algorithms in and out easily; often this is done by swapping in a new block cipher. While in some senses it is the easiest form of agility, the variety of algorithm designs makes it more difficult than it sounds. He pointed to GOST, a cipher with a 64-bit block, and Advanced Encryption Standard, one with a 128-bit block. “Swapping the two is not as easy as you might think,” he said. Differences in modes of operation also affect potential swapping.
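
The block-size mismatch can be made concrete with a short sketch; the block sizes are factual (GOST 28147-89 uses 64-bit blocks, AES uses 128-bit blocks), while the 64-bit header field is a hypothetical example of the kind of assumption that gets baked into a protocol:

```python
from dataclasses import dataclass

# Why "just swap the block cipher" is not trivial: modes, IV sizes, and
# padding all depend on the block size.
@dataclass
class BlockCipherSpec:
    name: str
    block_size_bits: int

GOST_28147 = BlockCipherSpec("GOST 28147-89", 64)
AES = BlockCipherSpec("AES", 128)

IV_FIELD_BITS = 64  # hypothetical protocol header field sized for the old cipher

for cipher in (GOST_28147, AES):
    fits = cipher.block_size_bits <= IV_FIELD_BITS
    print(f"{cipher.name}: CBC IV needs {cipher.block_size_bits} bits; "
          f"{'fits' if fits else 'does NOT fit'} in a {IV_FIELD_BITS}-bit field")
```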

McGrew defined protocol agility as moving to a new version of a protocol, such as 1.1 to 1.2 for TLS. Protocol agility can be easy or difficult, but in McGrew’s view, it is more important, in many ways, than algorithm agility.
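
From an implementation's point of view, one small piece of protocol agility is simply being able to raise the protocol-version floor, as in this standard-library sketch:

```python
import ssl

# A client-side illustration of protocol agility: moving to a newer
# protocol version can be as simple as raising the floor the
# implementation is willing to negotiate.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and older
# Sockets wrapped with this context will negotiate TLS 1.2 or newer only.
```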

McGrew defined the final type, implementation agility, as the ability to update or replace software found to have a security flaw. This might not seem the same as cryptographic agility, but McGrew argued that if the user can bring a new implementation himself, through Debian Linux apt-get or Windows Update, that counts as a situation requiring agility.

Lessons Learned from Discovered Flaws

Pointing to an analysis he conducted of OpenSSL common vulnerabilities and exposures (CVEs), McGrew described five categories of flaws: (1) algorithm flaws, including block ciphers and cryptographic primitives; (2) protocol problems, meaning fixes that require changes to the protocol; (3) side channel attacks, where secret keys leak out; (4) padding attacks, which may overlap protocols and implementations; and (5) implementation flaws, where there is a bug in the code that is often not revealed until it is too late to rewrite it. Of CVEs logged from 2002 to 2016, implementation flaws accounted for far and away the greatest number, a finding that underscores Paul Kocher’s concern about implementation flaws.
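
A toy version of this kind of bucketing, with fabricated placeholder descriptions rather than real OpenSSL CVE data, might look like the following:

```python
# Illustrative keyword bucketing of CVE descriptions into the five
# categories McGrew described; the records are fabricated placeholders.
CATEGORIES = {
    "algorithm": ("cipher", "primitive", "key size"),
    "protocol": ("handshake", "renegotiation", "downgrade"),
    "side channel": ("timing", "cache"),
    "padding": ("padding", "oracle"),
}

def categorize(description: str) -> str:
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "implementation"  # default bucket: bugs in the code itself

sample = [
    "heap buffer overflow in certificate parsing",
    "timing leak in RSA decryption",
    "downgrade during handshake renegotiation",
]
for entry in sample:
    print(categorize(entry), "->", entry)
```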

The analysis also revealed what McGrew categorized as “a horrifying number” of flaws introduced when trying to fix previous flaws. Although agility allows for known bugs to be fixed quickly, all of these “fixes” increase complexity and run the risk of introducing yet more flaws. “There may be an implementation bug in the fix, or somebody did not think the fix through,” McGrew said. “I worry a lot about complexity.”

CVEs are not the only measure of successful security. An analysis of the use of network algorithms and key sizes revealed that obsolete cryptography is still available and in use, he said, “including some traffic that really should not be using it.” How to get people to stop using such obsolete ciphers is “a fascinating question,” he said. With TLS up to version 1.2, he continued, it is possible to block particular ciphers known to be insecure, perhaps through a “smart rule” that denies connections on the basis of poor security. However, as Kerry McKay had noted in her presentation, it would not necessarily be fair to block a senior citizen from accessing the Social Security Administration website when the person might not know how to stop using the obsolete cipher. “That is a really hard problem,” he said.
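
On an endpoint one controls, a crude form of such a rule can be expressed directly in the TLS configuration, as in this sketch (the OpenSSL-style cipher string is illustrative, not a recommended policy):

```python
import ssl

# Configure a server context so that ciphers known to be insecure simply
# cannot be negotiated; peers that offer only the excluded ciphers will
# fail the handshake, which is exactly the usability trade-off described.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers("HIGH:!aNULL:!eNULL:!RC4:!3DES:!MD5")
```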

Data on client key lengths also revealed that very few people were using optimum key sizes to maximize security, a finding that surprised McGrew. In addition, McGrew expressed serious concern about the large number of people using software and hardware that is no longer secure or supported by the manufacturer. Another problem is that the people who originally bought, sold, installed, or configured a piece of software may no longer be available to answer questions or make repairs for the organization that is using it.

Looking Forward

Looking ahead, McGrew noted the importance of fungibility—the ability for security components to be swapped easily. Block ciphers, he said, are fungible because they have a simple interface. Standards regarding authenticated encryption, for example, should also encourage fungibility.
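
Modern AEAD interfaces illustrate this kind of fungibility. The sketch below uses the third-party cryptography package, whose AEAD classes share one encrypt/decrypt interface, so the algorithm can be swapped without touching the calling code:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

def roundtrip(aead, plaintext: bytes, aad: bytes) -> bytes:
    nonce = os.urandom(12)                       # 96-bit nonce for both AEADs
    ciphertext = aead.encrypt(nonce, plaintext, aad)
    return aead.decrypt(nonce, ciphertext, aad)

# The same helper works for either algorithm; only the construction differs.
for aead in (AESGCM(AESGCM.generate_key(bit_length=128)),
             ChaCha20Poly1305(ChaCha20Poly1305.generate_key())):
    assert roundtrip(aead, b"fungible payload", b"header") == b"fungible payload"
```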

He also posited that the rapid pace of generating the new TLS 1.3 standard was making it difficult for engineers to design for better agility, though he noted that it could be debated whether or not it would be worth slowing the deployment of TLS 1.3 for this purpose.

In closing, McGrew suggested that “implementation agility is the most important thing of all,” and implementation is where security needs to be as conservative as possible. In response to a question from Bob Blakley, McGrew elaborated on this point, arguing that system integrity should be a constrained, conservative area because it is best to have a small, trusted computing base. In part, this is why he supports hash-based signatures. Later in the discussion, Eric Grosse agreed that such signatures are the best bet for firmware updates because they are expected to be solid for the many years that firmware lasts.

Agility in the Context of Data at Rest

Tadayoshi Kohno, University of Washington, raised a question about the agility context for data at rest as opposed to data in transit. McGrew hypothesized that a storage-bounded model could work to keep stored data secure while it is decrypted out of a weak security system and reencrypted with stronger security. Russ Housley added that a working group known as Long-term Archive and Notary Standards was looking at this question in terms of the security of digital signatures, but he did not know whether it had considered confidential stored data.

Noting that he is struggling with this issue on a design right now, Grosse said his team’s approach is to encrypt stored data on discs with fairly conservative symmetric ciphers. For metadata, which is a smaller amount of data, the team is using public-key algorithms that are more agile and likely to change in the face of a post–quantum computing world. Assuming that the current cryptography is secure enough, he estimated that this system provides more agility to reencrypt data as necessary, although it has not been proven. Housley shared that he has seen a similar approach work, but not for changing the algorithms used for encrypted data stored on discs.
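
One simplified, symmetric-only way to realize this kind of layering is envelope encryption, sketched below with the third-party cryptography package and illustrative names: the bulk data stays under one data key, and only the small wrapped key is touched when the outer algorithm or key needs to change:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(kek: bytes, data_key: bytes) -> bytes:
    # Encrypt (wrap) the data key under a key-encryption key (KEK).
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, data_key, b"key-wrap")

def unwrap(kek: bytes, blob: bytes) -> bytes:
    return AESGCM(kek).decrypt(blob[:12], blob[12:], b"key-wrap")

data_key = AESGCM.generate_key(bit_length=256)  # protects the bulk data on disc
old_kek = AESGCM.generate_key(bit_length=128)
wrapped = wrap(old_kek, data_key)

# Rotating to a stronger outer key touches only the small wrapped blob;
# the bulk data never has to be decrypted and rewritten.
new_kek = AESGCM.generate_key(bit_length=256)
wrapped = wrap(new_kek, unwrap(old_kek, wrapped))
```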

Envisioning a Worst-Case Scenario

Steven Lipner, independent consultant, speculated about how it might be possible to design for a potential catastrophic future event, whether it is a quantum computer breaking all cryptographic code or another sort of catastrophe that could prompt the failure of an algorithm that we rely on, creating global computing chaos. “I do not think we can get away with ignoring it or hoping it will not happen,” he contended. Emphasizing the importance of implementation agility, McGrew said that implementation agility can complement algorithm agility when interface designers anticipate future flaws, whether in the anti-replay, freshness checking, padding, or another aspect of cryptography. Whatever it is, it needs to be properly addressed and easily replaceable before such a flaw is detected, he said. With true implementation agility, deploying new code offers a lot of flexibility to fix the problem correctly, without hastily plugging a leak and introducing a new flaw. Lipner argued that McGrew was envisioning the fairly quotidian experience of finding and fixing a flaw, whereas Lipner was speaking of a more extreme event. At some point in the future, Lipner hypothesized, “there are going to be some algorithms that are not part of the implementation you are going to want to replace. You are going to have to interface with the implementation [ . . . and . . . ] manage things at that interface,” which is, perhaps, a far more challenging task.
