5

Research, Industry, and Policy Implications

The final two presentations of the workshop explored the reasons for pursuing cryptographic agility as well as some of the tricky research questions facing the field. Steven Bellovin focused on particular challenges in regard to creating agility in embedded devices, and John Manferdelli explored sources of resistance to agility, potential paths forward, and the threat of quantum computing. Both speakers touched briefly on the policy context.

AGILITY IS ESSENTIAL (BUT EXTREMELY CHALLENGING)

Steven Bellovin, Columbia University

Steven Bellovin, currently a professor of computer science at Columbia University, previously worked as a fellow at AT&T Labs and served as chief technologist for the U.S. Federal Trade Commission. He opened with the assertion that agility is essential: “It is not even worth discussing whether or not we need it,” he said. Declaring that little of the cryptography that was used 20 years ago is particularly useful anymore, he suggested the same may well be true 20 years hence. Public-key algorithms, for example, will not pass muster in a post-quantum world. Agility is necessary, he emphasized, because an algorithm cannot simply be turned off and replaced with a new one overnight.

Key Agility Challenges

Cryptography itself is already very difficult, and, like other workshop speakers, Bellovin acknowledged that adding agility invites complications such as new failure modes or downgrade attacks. Transport Layer Security (TLS) illustrates the point: many TLS errors stem from continued support for legacy export ciphers, required by policies that were changed more than 15 years ago. “Most organizations cannot get this stuff right, even the very best,” Bellovin said. “It is just inherently a really hard problem.”

He pointed to the oldest cryptographic protocol in the open literature1 as another example of these challenges; 18 years after the paper was published, a new attack on the protocol was discovered. Later, when newly discovered weaknesses in Message-Digest Algorithm-5 (MD5) and Secure Hash Algorithm-1 (SHA-1) required that new hash algorithms be deployed, Bellovin and Eric Rescorla reported that all the major Internet Engineering Task Force (IETF) protocols made mistakes with hash function negotiation, in part because only those two algorithms existed when the protocols were designed.2 Put simply, the negotiation process required knowledge of the newer algorithms, but older systems would not know them. In this case even the IETF, representing the field’s elite experts, “tried hard and got it wrong,” he said, underscoring the risks for companies attempting to design their own cryptographic agility approaches.
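As a rough illustration of the failure mode Bellovin describes, consider the minimal sketch below (not from the talk; the algorithm lists and function names are hypothetical). A “pick the strongest mutually supported algorithm” negotiation quietly lands on a weak hash whenever the peer is old, and if the offer lists themselves are not authenticated, an active attacker can force the same result between two modern peers.

```python
# A minimal, illustrative sketch of unauthenticated algorithm negotiation.
# Names and lists are hypothetical, not a real protocol.

PREFERENCE = ["sha-256", "sha-1", "md5"]  # strongest first

def negotiate_hash(our_algs, peer_algs):
    """Pick the strongest hash both sides claim to support."""
    for alg in PREFERENCE:
        if alg in our_algs and alg in peer_algs:
            return alg
    raise ValueError("no common hash algorithm")

# An old peer simply does not know the new algorithm, so negotiation
# silently lands on the weak one -- the failure Bellovin and Rescorla found:
print(negotiate_hash({"sha-256", "sha-1", "md5"}, {"sha-1", "md5"}))  # sha-1

# Worse, if the offer lists are not authenticated, an attacker can strip
# "sha-256" from either list and force the same result even between two
# modern peers (a downgrade attack):
tampered_peer_offer = {"sha-1", "md5"}  # attacker removed "sha-256"
print(negotiate_hash({"sha-256", "sha-1", "md5"}, tampered_peer_offer))  # sha-1
```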

Bellovin also expressed doubt that negotiation toolkits, code, or application programming interfaces (APIs) designed by standards bodies would necessarily fully address these challenges. Citing a study that found 80 percent of mobile applications contain cryptography errors in the TLS protocol alone,3 he asked, is it likely those designing new mobile applications would understand the even more complex systems under discussion today?

A further complication is that when code or protocols are designed successfully, they eventually are used by people all over the world, and so the possibility of transition from one cryptosuite to another must be factored into the design. Transition, he emphasized, is a very difficult event to anticipate. He suggested finding ways to test a protocol transition in a limited domain before the system is deployed broadly. This can help illuminate whether there are issues not only with the syntax of the transition but also with the semantics when there are multiple security systems involved. When signing executable programs, for example, the semantics of the signatures should signal whether or not the code is secure, he explained. He also noted that there have been a number of cases in which trouble stems from the version numbers of the protocols.

___________________

1 R.M. Needham and M.D. Schroeder, 1978, Using encryption for authentication in large networks of computers, Communications of the ACM 21 (12): 993-999, http://dl.acm.org/citation.cfm?id=359659.

2 S. Bellovin and E. Rescorla, 2006, “Deploying a New Hash Algorithm,” paper presented at the NIST Cryptographic Hash Workshop, August 24-25, Santa Barbara, Calif., http://www.internetsociety.org/sites/default/files/deploying_new_hash_algorithm.pdf.

3 Veracode, 2015, State of Software Security Report: Focus on Application Development (Supplement to Volume 6), https://www.veracode.com/sites/default/files/Resources/Reports/state-of-software-security-focus-on-applicationdevelopment.pdf.
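Returning to the executable-signing example above, the sketch below illustrates the syntax-versus-semantics distinction. A signature can verify mathematically and still not mean “the code is secure” if it was made with a deprecated algorithm. The policy and names here are illustrative assumptions, not a real code-signing API.

```python
# A minimal sketch: signature *semantics* (what a valid signature proves)
# versus signature *syntax* (whether the digest matches). Illustrative only.

import hashlib

DEPRECATED = {"md5", "sha1"}  # algorithms no longer trusted for signing

def signature_means_secure(sig_alg: str, claimed_digest: bytes, code: bytes) -> bool:
    # Syntax: does the digest actually match the code?
    digest_ok = hashlib.new(sig_alg, code).digest() == claimed_digest
    # Semantics: a matching digest under a broken hash proves little,
    # because an attacker may have found a colliding executable.
    return digest_ok and sig_alg not in DEPRECATED

code = b"#!/bin/sh\necho hello\n"
print(signature_means_secure("sha256", hashlib.sha256(code).digest(), code))  # True
print(signature_means_secure("md5", hashlib.md5(code).digest(), code))        # False
```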



In the discussion, Bellovin shared his calculation that a comprehensive cryptographic upgrade takes about 12 to 15 years from algorithm design to deployment. That timeline accounts for the time it takes for older systems to die off and be replaced, which he estimated at about 5 years on average for general-purpose computers (shorter for items like phones and longer for items like cars), and about 10 years on average for the engineering, certification, protocol work, design, coding, and testing. Though this upgrade process takes a long time, Bellovin said, “I cannot see how to lower that number.”
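Bellovin’s estimate can be restated as back-of-envelope arithmetic. In the sketch below, the ~10-year engineering pipeline and the ~5-year die-off for general-purpose computers are his figures; the specific numbers for phones and cars are hypothetical placeholders consistent with his “shorter for phones, longer for cars” remark.

```python
# Back-of-envelope restatement of Bellovin's upgrade-timeline estimate.
# Pipeline and general-purpose figures are from his remarks; the phone and
# car numbers are illustrative assumptions, not quoted values.

ENGINEERING_PIPELINE_YEARS = 10  # engineering, certification, protocol work,
                                 # design, coding, and testing
DIE_OFF_YEARS = {                # time for deployed old systems to be replaced
    "phone": 3,                       # assumed: shorter than the ~5-year average
    "general-purpose computer": 5,    # Bellovin's stated average
    "car": 15,                        # assumed: embedded systems far outlive it
}

for device, die_off in DIE_OFF_YEARS.items():
    total = ENGINEERING_PIPELINE_YEARS + die_off
    print(f"{device}: ~{total} years from algorithm design to full deployment")
```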

Embedded Systems and the Internet of Things

Bellovin shared his perspective that the biggest problem in agility today is embedded systems such as the computers that are now being built into cars and many other Internet of Things devices. Many connected items in the Internet of Things world lack an update path. While the product itself may need to last for many years, the upgrade lifetime for such systems is actually quite short. Rather than supporting upgrades to existing chips and devices, vendors have a financial incentive to abandon old systems and focus on selling new ones with new chips. Today’s connected cars will likely be running cryptography that is 15-plus years old at the end of their useful lives, he noted, which raises a host of questions: How can security and functionality be balanced? Can the system even be updated? If it cannot, how could a new algorithm be added? Bellovin added, “It is especially serious when it comes to agility because algorithms age in a way that other software does not—because new attacks are discovered.”

Pointing to Dan Geer’s suggestion of a “suicide date” for embedded systems,4 Bellovin suggested that a guaranteed lifespan of 5 years—not shorter, not longer—might be one potentially viable approach. Such a system could be appealing to vendors, who would sell consumers replacement products after 5 years, and potentially palatable for consumers, who typically are limited to short warranty periods and may benefit from a longer period of guaranteed support, even if it comes at the cost of a firm end date.
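A minimal sketch of how a guaranteed-lifespan device might behave follows; the field names, the safe-mode behavior, and the exact 5-year window as code are illustrative assumptions layered on Geer’s and Bellovin’s idea, not a real firmware interface.

```python
# A minimal sketch of a "suicide date": a device ships with a guaranteed
# support window and refuses to operate after it expires. Illustrative only.

from datetime import date, timedelta

GUARANTEED_LIFETIME = timedelta(days=5 * 365)  # "not shorter, not longer"

class Device:
    def __init__(self, manufactured: date):
        self.end_of_life = manufactured + GUARANTEED_LIFETIME

    def boot(self, today: date) -> str:
        if today > self.end_of_life:
            # Past the promised window: stop running cryptography that may
            # be many years behind current attacks.
            return "refuse to start: support window expired"
        return "normal operation (updates still guaranteed)"

d = Device(manufactured=date(2017, 1, 1))
print(d.boot(date(2019, 6, 1)))   # normal operation
print(d.boot(date(2023, 6, 1)))   # refuse to start
```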

Bellovin considered potential approaches to making such devices updatable. Would it be possible, he wondered, to decouple the algorithm and protocol update capability from the more general software update in order to facilitate upgrades? Another problem emerges when vendors go out of business, a somewhat common occurrence in the high-tech world. An open API might allow users to make updates themselves in such situations, though an open API also invites its share of problems, not least of which is who maintains the public keys needed to authenticate such updates.

___________________

4 D. Geer, 2014, “Security of Things Keynote Address,” address presented at Security of Things Forum, May 7, Cambridge, Mass., http://geer.tinho.net/geer.secot.7v14.txt.



More promising, he suggested, are solutions that include parameterized algorithms that allow a higher iteration count, different round counts, and different substitution boxes, for example. Downgrade attacks and correctness errors could still cause problems, but these solutions could introduce more agility for embedded systems. New algorithms could be designed to allow for negotiable parameters, which might affect field lengths but would address the upgrade problem, he said.
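The sketch below illustrates what such a parameterized design might look like: the cipher is described together with negotiable strength knobs, and negotiation only ever raises them. The structure, names, and numbers are illustrative assumptions, and a real negotiation would also have to confirm that both endpoints can actually run the raised parameters.

```python
# A minimal sketch of a parameterized cipher description with negotiable
# strength knobs. Illustrative only; "example-cipher" is a placeholder.

from dataclasses import dataclass

@dataclass(frozen=True)
class CipherParams:
    algorithm: str
    rounds: int          # negotiable: can be raised without a new algorithm
    kdf_iterations: int  # likewise for the key-derivation work factor

FLOOR = CipherParams("example-cipher", rounds=12, kdf_iterations=100_000)

def negotiate(ours: CipherParams, theirs: CipherParams) -> CipherParams:
    """Agree on the stronger of each parameter, never below the floor.

    Taking the maximum (rather than the minimum) of each knob is the point:
    a peer or attacker offering weak parameters cannot drag us down.
    (A real protocol must also verify both sides support the raised values.)
    """
    if ours.algorithm != theirs.algorithm:
        raise ValueError("algorithm mismatch")
    return CipherParams(
        ours.algorithm,
        rounds=max(ours.rounds, theirs.rounds, FLOOR.rounds),
        kdf_iterations=max(ours.kdf_iterations, theirs.kdf_iterations,
                           FLOOR.kdf_iterations),
    )

print(negotiate(CipherParams("example-cipher", 16, 400_000),
                CipherParams("example-cipher", 12, 100_000)))
```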

Considering Unintended Consequences

Agility can allow for weaker cryptography to persist, which Bellovin emphasized was problematic for many reasons: “Backwards compatibility can be ‘bug-wards compatibility,’ and that is a threat we have to meet as well.” While downgrade attacks need to be prevented, he acknowledged that sometimes it is necessary to roll back to an older version of a security mechanism because the newer one is not working. Agility requires thinking seriously about consequences and making unpopular decisions. In addition, he said, out-of-band knowledge is important when transitioning to a new algorithm. For instance, it can help a service declare its security intentions, such as “I will never again accept TLS 1.1 or below. It is not secure. If you see this from me, it is wrong.” Yet such capability is not typically supported and is challenging in systems containing disparate embedded systems (for example, there are at least 50 in a modern car), which typically do not have a centralized system administrator.
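A minimal sketch of such a declared security intention follows, in the spirit of HSTS-style pinned policy: the client remembers the strongest floor a host has ever declared and treats anything below it as wrong. The storage scheme and version encoding are illustrative assumptions, not a real TLS mechanism.

```python
# A minimal sketch of "I will never again accept TLS 1.1 or below" as a
# remembered, out-of-band declaration. Illustrative only.

POLICY_STORE = {}  # hostname -> minimum protocol version the host promised

def record_declaration(host: str, min_version: tuple):
    """Remember the strongest floor the host has ever declared."""
    POLICY_STORE[host] = max(POLICY_STORE.get(host, (0, 0)), min_version)

def accept_connection(host: str, offered_version: tuple) -> bool:
    floor = POLICY_STORE.get(host, (0, 0))
    # Anything below the declared floor "is wrong" -- treat it as an attack
    # or a misconfiguration, not as a peer to accommodate.
    return offered_version >= floor

record_declaration("example.org", (1, 2))        # never again TLS 1.1 or below
print(accept_connection("example.org", (1, 2)))  # True
print(accept_connection("example.org", (1, 1)))  # False: reject, no fallback
```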

New algorithms being written should be open to small changes without having to go through major overhauls to the algorithm, its field sizes, or its iteration counts, Bellovin said. Designers need to prove not just the algorithm but also the iteration counts and key sizes that would work. In addition, negotiating a stronger mode of operation could help keep ciphers useful, as long as the data structures support that. Bellovin concluded, “We still need to protect the negotiation for algorithm agility.” Although this is challenging when even the primitives are under threat, Bellovin suggested that research on privacy protection and secure “ratcheting” (always increasing the security levels) to protect against downgrade attacks could help address these challenges.
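The ratcheting idea can be made concrete with a short sketch: a peer’s negotiated security level may only increase, so a downgrade attempt after a stronger success is refused outright. The level names and persistence model are hypothetical illustrations.

```python
# A minimal sketch of security "ratcheting": the negotiated level for a
# peer only ever increases. Levels are hypothetical placeholders.

class SecurityRatchet:
    """Persisted per peer: remembers the best level ever negotiated."""

    LEVELS = {"legacy": 0, "modern": 1, "post-quantum": 2}

    def __init__(self):
        self.best = "legacy"

    def negotiate(self, offered: str) -> str:
        if self.LEVELS[offered] < self.LEVELS[self.best]:
            # Never step down: a lower offer after a higher success is
            # either an attack or a fault worth surfacing loudly.
            raise ValueError(f"downgrade from {self.best} to {offered} refused")
        self.best = offered
        return offered

r = SecurityRatchet()
r.negotiate("modern")
try:
    r.negotiate("legacy")
except ValueError as e:
    print(e)  # downgrade from modern to legacy refused
```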

Peter Swire, Georgia Institute of Technology, asked whether there could be something to learn from the design of highly trusted critical systems, such as the software used in aviation. Bellovin responded that highly trusted systems in fact raise additional challenges in the context of agility because there is often not a full understanding of the consequences of making changes. That is one reason, he said, that hospitals are not updating their Windows XP–dependent magnetic resonance imaging machines (and, he noted, putting their systems at risk as a result).



Wrapping up, Bellovin expanded on the privacy and human rights implications of certain approaches to cryptographic agility. One, he said, is that if agility weakens overall security, authoritarian governments will be able to spy on people more easily. Another concern is that too much flexibility for individuals to choose their cryptography could lead to “fingerprinting” users based on their preferred security profiles. Similarly, if a certain website requires an unusual combination of cryptographic parameters, this could make it easy to expose and track the activities of a specific user. A post-quantum world cannot rely on today’s public-key algorithms, because they would be broken too easily. The alternative, using symmetric encryption with a universal key distribution center (KDC), would be a challenge, Bellovin said, because it creates a single vulnerable point that puts everyone’s privacy and security at risk if it were broken. A KDC could also make it possible to track every website a user visits, creating, in Bellovin’s view, “a very serious privacy threat.”

AGILITY AND THE NEED TO PREPARE FOR FAILURE

John Manferdelli, Google, Inc.

John Manferdelli is a mathematician and engineering director at Google, Inc., and has also worked at Intel (as senior principal engineer) and at Microsoft. He began by reiterating that agility is important—not only in the context of cryptography, but also more broadly—because nothing lasts forever. Key management systems and implementations eventually fail, and operational errors eventually arise. “You have to be able to change things quickly,” Manferdelli said, noting that at the same time, “You really cannot anticipate what you have to change.”

Aiming for Agility While Acknowledging Its Limitations

Manferdelli noted that agility applies to more than just cryptography: it is not always clear in advance what will need to be changed, and changed quickly. For example, implementation agility, or the ability to change things in the field, is crucial. Manferdelli pointed to a 2002 event in which a certification authority issued a certificate that looked like it belonged to Microsoft but had in fact been issued to another party. With sufficient agility, a company could simply revoke such a fraudulent certificate; instead, Microsoft had to reconfigure the operating system explicitly to render the certificate void.
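The sketch below illustrates the capability Manferdelli describes: with an agile trust store, voiding one fraudulent certificate is a data change pushed to clients rather than an operating-system reconfiguration. The fields and serial numbers are illustrative, not any vendor’s actual mechanism.

```python
# A minimal sketch of revocation as a data update. Illustrative only.

REVOKED_SERIALS = set()  # distributed to clients, e.g., as a signed list

def revoke(serial: str):
    REVOKED_SERIALS.add(serial)

def certificate_trusted(subject: str, serial: str, chain_valid: bool) -> bool:
    # A chain can validate perfectly and still be fraudulent -- revocation
    # is the out-of-band correction channel.
    return chain_valid and serial not in REVOKED_SERIALS

# A certificate that *looks* like Microsoft's but was issued to someone else:
print(certificate_trusted("Microsoft Corporation", "0xBAD", chain_valid=True))  # True
revoke("0xBAD")  # one pushed data update, rather than patching every OS
print(certificate_trusted("Microsoft Corporation", "0xBAD", chain_valid=True))  # False
```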


Manferdelli expressed doubt that perfectly agile cryptography could ever be designed, in part because it is not possible to fully anticipate all user needs. He acknowledged that designers are “actually pretty good” at substituting cryptographic algorithms such as symmetric-key systems, but as requirements become more complicated (e.g., wanting both secrecy and integrity), the challenges of designing agile cryptographic systems increase.

Commenting on an earlier statement from Butler Lampson, Manferdelli agreed with the advice to “always be a little circumspect about what you want to make secure.” There are some things we can get right, and we should try to do so—implementation agility is an important part of accomplishing this—but it must be acknowledged that no device or interface will work for “everything in the world forever and ever,” he said. This idea comes into play in the debate over cipher negotiation, for example. On the one hand, negotiation can be a bad thing because there is a chance that a developer or user could make a mistake. On the other hand, developers and users need tools. Trying to create a solution that completely satisfies all scenarios creates an argument with no right answer, and no winner, Manferdelli said.

Another reason we need agility, he emphasized, is because anything that is used long enough will eventually be cracked by attackers. “Really, the best line of defense is being able to change things,” he said, suggesting that modest goals are preferable to trying to create something to last 50 years. Even algorithms change, he said, noting that this has not always, in his experience, been well appreciated: When he first began working on security for Windows, for example, there was no one working on cryptography because it was considered so unlikely that anyone would ever need to move on from MD5, Data Encryption Standard (DES), or RSA 1024.

Facing the Quantum Threat

Quantum computing is seen as perhaps the biggest reason public-key algorithms might need to change. Quantum computing would also bring new opportunities, he noted: computers have been built on essentially the same model since the 1940s, and quantum computing would offer strikingly different capabilities. For that reason alone, it would be wise to plan for agility and to take advantage of cryptographic advances as they emerge. Whatever the likelihood that practical quantum computers will be built, and however uncertain their timing, Manferdelli emphasized that it is important to plan for the quantum threat because of its potentially catastrophic consequences. On the positive side, he is optimistic that quantum-resistant algorithms will be created and that they can be added to existing cryptographic systems. While recognizing that dealing with quantum computing would be a huge change (key sizes alone would need to be much larger), Manferdelli expressed confidence that the computing world can meet these challenges.


During the discussion, William Sanders, University of Illinois at Urbana-Champaign, explored the topic of implementation agility, questioning whether it would be straightforward to create a framework now that would be usable if quantum computing capabilities were realized. Manferdelli responded that while he does believe agility is important and improving, it is essential to recognize that errors will inevitably occur. “You are just not going to get it right the first time,” he said. Testing and fully understanding the agile layer before deployment will save money and configuration problems in the long run. He suggested creating and testing quantum-resistant algorithms now, perhaps inside a cipher suite. Either an upgrade strategy or a short service life could help when a new protocol inevitably goes wrong.
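One common way to field quantum-resistant algorithms “inside a cipher suite” today is a hybrid construction, sketched below: the session key is derived from both a classical and a post-quantum exchange, so the session stays safe unless both fail. The two secret-generating functions here are stand-ins (random bytes), not real key-exchange implementations.

```python
# A minimal sketch of a hybrid classical + post-quantum key derivation.
# The shared-secret functions are placeholders, not real KEM/ECDH code.

import hashlib
import os

def classical_shared_secret() -> bytes:
    return os.urandom(32)   # stands in for, e.g., an ECDH result

def pq_shared_secret() -> bytes:
    return os.urandom(32)   # stands in for a quantum-resistant KEM result

def hybrid_session_key() -> bytes:
    # Concatenate-and-hash both secrets: breaking the session requires
    # breaking the classical exchange AND the post-quantum one.
    return hashlib.sha256(classical_shared_secret() + pq_shared_secret()).digest()

print(hybrid_session_key().hex())
```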

Why Resist Agility?

Manferdelli considered the reasons for resistance to the idea of using or requiring agility. Besides the quantum threat, there are plenty of good reasons to use agility today—for example, to address the mistakes that are causing today’s keys and information to leak. Having fully agile systems is ideal, but even “just changing keys every once in a while would not be such a bad idea,” he said.

In the past, he said, one reason for not pursuing agility was that code was so brittle that it easily broke when changes were attempted; another was a pervasive resistance to adding new software. Manferdelli said both of these issues are now more nuanced: code libraries have improved, and algorithms can now be switched out successfully as long as the format of the output is not changed too much.

Managing keys is another reason people avoid agility. Noting a rumor that Apple had not allowed any new certificates for 6 months, he explained that root keys and certificates last a long time, which causes problems. He pointed to some improvements in this area, including key pinning (a security technique used to prevent “man-in-the-middle” attacks) and certificate transparency. Key pinning carried across upgrades allows a secure, trusted key to persist through the upgrade; such a system allows a measured, reliable security device to travel with the user.
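The sketch below shows one way pinning can survive a key change: the client pins the current key plus a pre-announced backup, so the server can rotate at the next upgrade without a window of blind trust. This mirrors common pinning practice but is an illustration, not any specific product’s behavior.

```python
# A minimal sketch of key pinning that persists through upgrades.
# Fingerprint strings are placeholders. Illustrative only.

PINS = {"current-key-fingerprint", "next-key-fingerprint"}  # shipped with the app

def connection_allowed(server_key_fingerprint: str) -> bool:
    # A man-in-the-middle presents a valid-looking but unpinned key and is
    # rejected even if a certificate authority vouches for it.
    return server_key_fingerprint in PINS

print(connection_allowed("current-key-fingerprint"))  # True
print(connection_allowed("mitm-key-fingerprint"))     # False

# At the next software upgrade, the server rotates to the pre-pinned backup
# and a new backup is pinned, so trust travels across the key change.
PINS = {"next-key-fingerprint", "key-after-that-fingerprint"}
print(connection_allowed("next-key-fingerprint"))     # True
```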

Another factor is that some businesses might not want to remove old algorithms because they do not want to reduce their market share, which is not something that technological improvements can necessarily address. Often, businesses do not make changes until a catastrophic event happens; otherwise, they do not see their business as being impacted.

Preparing for Failure

The biggest problem when making upgrades is that, as with cryptography generally, one can never predict what is going to break, and Manferdelli asserted that the debate over agility today will not be solved until upgrades can be guaranteed to happen. Noting that the old Bell telephone system was required to run expensive tests to check periodically for catastrophic failure, he suggested it might be good for computer systems and the people who build them to run similar tests.



Whether one views the glass as half-full or half-empty (or completely empty), it is crucial to prepare for failure, Manferdelli asserted. For example, key management might be configured so that keys are expected to change frequently. “Any prudent person will really want to be able to make sure they can change their keys quickly,” he said. While recognizing that complex systems have a lot of moving parts, he asserted that it is possible to allow for key agility, not just algorithm agility. Embedding certificate transparency in applications, which makes issuing fraudulent certificates more difficult, could also help.
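A minimal sketch of key management that expects keys to change frequently follows; the 90-day interval and the record layout are illustrative assumptions. The design point is that because rotation is exercised routinely, the same machinery works when a compromise forces an immediate change.

```python
# A minimal sketch of routine key rotation. Interval and store layout are
# illustrative assumptions, not a recommendation.

from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)  # rotation is routine, not an emergency

class KeyRecord:
    def __init__(self, key_id: str, created: datetime):
        self.key_id, self.created = key_id, created

def keys_due_for_rotation(keys, now: datetime):
    # Exercised constantly, so an emergency rotation is just an early run.
    return [k.key_id for k in keys if now - k.created > MAX_KEY_AGE]

now = datetime(2017, 6, 1)
inventory = [KeyRecord("signing-key-7", datetime(2017, 1, 1)),
             KeyRecord("tls-key-12", datetime(2017, 5, 1))]
print(keys_due_for_rotation(inventory, now))  # ['signing-key-7']
```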

It is also necessary, Manferdelli noted, to continue monitoring and analyzing keys to make sure that the retired ones are truly retired. Paul Kocher, Cryptography Research Division, Rambus, Inc., expanded on this point in the discussion. One step in deciding whether to retire a protocol is to determine how many people or programs are still using it. Kocher wondered whether that inadvertently allows attackers to encourage the use of an older, unsafe protocol: if they can force enough traffic through it, it might never be turned off. Clarifying that he had spoken specifically about key use, and not protocols, Manferdelli also expressed support for monitoring and analyzing protocol use, as well as creating an inventory of existing protocols. Returning to the focus on keys, he asserted that “there is really no downside in keeping an eye on what keys people are using and going after them and tapping them on the shoulder and saying, ‘This was not a great idea.’” To this, Bob Blakley, CitiGroup, Inc., added that creating a key inventory, especially in a large company, is a very difficult task.
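The monitoring Manferdelli describes can be as simple as auditing observed key use against a list of retired keys, as in the sketch below. The key identifiers and log format are hypothetical.

```python
# A minimal sketch of auditing key use so "retired" keys that are still in
# use get flagged for a tap on the shoulder. Illustrative only.

RETIRED = {"old-rsa1024-key"}
usage_log = [("old-rsa1024-key", "billing-batch-job"),
             ("current-key", "web-frontend")]

def audit(usage):
    for key_id, user in usage:
        if key_id in RETIRED:
            # Retired on paper but not in practice: follow up with the owner.
            print(f"ALERT: {user} still using retired key {key_id}")

audit(usage_log)  # ALERT: billing-batch-job still using retired key old-rsa1024-key
```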

Wrapping up his presentation, Manferdelli shared his hope that a sensible conclusion could be reached on the topic of access to plaintext by law enforcement. Under certain circumstances, he allowed that there should be a way for law enforcement to access evidence in data, but giving law enforcement universal access is not, in his view, the optimal way to achieve it.


The Broader Problem of Software Updates

Participants discussed the broader context of agility. Thinking holistically, the agility problem is not merely one of cryptography; it is a problem for the entire computing system. Steven Bellovin categorized software update problems as “fiendishly difficult” research problems that are much harder than the cryptographic agility problem on its own. He pointed out that every major software vendor has experienced update problems, where a patch either had to be recalled or replaced, or it ended up breaking devices completely. Manferdelli concurred, though he noted signs of progress, including a recent switch in parts of the U.S. government from a policy of not updating software until exhaustive qualification and certification tests are complete to embracing updates as they are released.

Fred Schneider, Cornell University, asked how governance of software updates might fit into the discussion, describing a situation in which a user buys a product that is later updated in a way that affects the product’s security or privacy framework. Who has control over access to our devices, he asked—the manufacturers, the government, or some sort of mixed authority? Agreeing that it presents a complicated research problem, Manferdelli said that the usual practice is to set expectations and legal responses, though the expectations in this context are currently unclear. With regard to the matter of setting a lifetime for embedded devices, for example, Manferdelli said that he did not have a specific answer. “I would certainly like it if there were well-formed, well-thought-out expectations,” he concluded. “Then, you could think about how policy might flow from that.”
