For the final session, Fred Cate wrapped up the workshop by giving each participant a final chance to add comments to the discussion. He invited them to address one or more of the following four questions:
- Is there anything that has not been said that you think needs to be said?
- Is there anything you have heard that has been particularly surprising?
- Do you have any broad suggestions for next steps?
- Can the discussion we have had, which focused largely on law enforcement examples related to encryption technologies and their implications, be applied in the national security environment, or are there special or distinct considerations in that environment that affect these technological questions?
In the following summary, participant comments are grouped by topic.
Kevin Bankston began the discussion by suggesting that there is a need to invest in law enforcement’s technical capabilities. He said that a substantial policy and legal debate is necessary to define responsible, reasonable, and constitutional government hacking. He suggested that the United States can either invest hundreds of millions of dollars to update law enforcement’s investigative capabilities for the 21st century or, if exceptional access is mandated for U.S. products, face economic losses in the billions of dollars. Susan Landau underscored Bankston’s argument that Silicon Valley brings enormous value to the U.S. economy and national security alike, noting that the nation’s defense organizations have relied heavily on commercial, off-the-shelf technology for the past 20 years and that these products have been critically important for national security. This, in Landau’s view, supports the argument that device encryption should be as strong as possible. Bankston indicated his support for raising law enforcement’s technological capabilities rather than diminishing the capabilities of technology for everyone.
Matt Blaze agreed with Bankston’s comments from a technical perspective and posited that the costs of an exceptional access mandate would be enormous. He pointed to the 1990s, when a tremendous amount of effort spent trying to resolve this debate served as a distraction and discouraged the development of critical security infrastructure that could have prevented a large fraction of the digital crimes seen today. Standards, infrastructure, and products are not nearly as robust as they should be and could have been, in his view. He suggested that, if these same mistakes are repeated, the costs will be even higher, and that the stakes will continue to rise. He added that it is critically important to national security and to our public safety that the government not discourage, but instead encourage, the use of the most robust security technology for our increasingly digital world.
Daniel Kahn Gillmor echoed Blaze’s sentiment on the significant downside of exceptional access. In his view, any benefits to law enforcement are likely to be small and diminishing, because such a system would undoubtedly be bypassable. Developers of encryption software are trying to provide better security for everyone by designing systems that minimize risk; any type of exceptional access, he emphasized, would increase risk. In his view, none of the proposed schemes could reliably distinguish between exceptional access for law enforcement and unauthorized access. He emphasized the importance of open discussion of security needs and looked to existing engineering communities that operate based on openness as one possible avenue for future discussions. Finally, Gillmor suggested that if the goal is increased security of communications for law-abiding citizens, security—and in particular strong cryptographic protections—should be enabled as a default setting.
Andrew Sherman asked how much of our economy we are willing to hand over to organized crime if we get security around lawful access wrong. He noted that security concerns are not just about foreign governments trying to hack into our systems, but also about criminals acting for financial gain.
Eric Rescorla added some perspective on software, pointing out that security has become a very high priority for software companies because of its importance to users. Encryption, one of the few tools that software companies have for addressing security, is increasingly expected by end users, he said, and consumer software will increasingly incorporate encryption features. Any requirement for exceptional access would therefore become a large burden, because software companies would have to take on the responsibility of managing the associated systems and processes. Although this would be difficult enough for a large organization, it would be very difficult for the many very small companies or individual app developers, given the relative ease of building software today, Rescorla observed. In addition, the availability of open-source software and operating systems would make it very hard to enforce compliance by app developers. The risk of losing control of an exceptional access asset is especially serious for communications applications compared to storage applications, he added, because communications could be remotely captured without a user’s knowledge.
Orin Kerr predicted that exceptional access mandates are unlikely to happen. As reflected in the discussions at this workshop, there is a strong aversion to the idea of exceptional access in many communities, and from a more practical standpoint, it remains unclear what exceptional access mandates would look like and what the parameters would be.
Rescorla raised another, more specific practical impediment. Given that the best designs for providing exceptional access are escrow systems that, by design, involve long-term access (to provide retrospective access to data once they have been transmitted), not only must these assets be protected, but they must be protected in perpetuity. In many cases, this would involve keeping assets in escrow long after the company has gone out of business and ceased to provide any services to users.
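Rescorla’s point can be illustrated with a toy sketch. In an escrow design, every message key is wrapped both for the recipient and for the escrow agent, so whoever holds the escrow key can decrypt stored traffic retroactively; that key must therefore be guarded for as long as any ciphertext survives. The sketch below uses XOR as a stand-in for real encryption; nothing here is cryptographically sound, and all names are illustrative.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (a toy stand-in for encryption)."""
    return bytes(x ^ y for x, y in zip(a, b))

ESCROW_KEY = secrets.token_bytes(16)  # long-lived escrow secret

def send_message(plaintext: bytes, recipient_key: bytes) -> dict:
    msg_key = secrets.token_bytes(16)        # fresh per-message key
    padded = plaintext.ljust(16)             # toy fixed-size block
    return {
        "ciphertext": xor_bytes(padded, msg_key),
        "key_for_recipient": xor_bytes(msg_key, recipient_key),
        "key_for_escrow": xor_bytes(msg_key, ESCROW_KEY),  # the long-term asset
    }

recipient_key = secrets.token_bytes(16)
msg = send_message(b"hello", recipient_key)

# Years later, whoever holds ESCROW_KEY can still unwrap the message key
# and read the stored ciphertext -- hence the key must be protected forever.
recovered_key = xor_bytes(msg["key_for_escrow"], ESCROW_KEY)
assert xor_bytes(msg["ciphertext"], recovered_key).rstrip() == b"hello"
```

The escrow wrapping gives retrospective access by design, which is exactly why the escrowed asset must outlive the product, and even the company, that created it.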
Butler Lampson suggested that the best way to understand the practical feasibility and security implications of a key escrow system would be to build a test system and study it. He proposed using an approach similar to that used in the Defense Advanced Research Projects Agency’s challenge competitions for self-driving cars and robotics. This could involve, for example, funding five or ten universities to build such a system and then offering a big prize for breaking it. Such an approach might cost $20 million to $40 million, and the results could add a lot of clarity to the discussion, he said.
At several points throughout the workshop, participants discussed the need for the law enforcement and national security communities to enhance their hacking capabilities, either as part of a “Plan A”—that is, alongside the development of an exceptional access scheme—or as a “Plan B”—the only viable option for accessing plaintext if exceptional access is not possible. Cate said that although this might be the inevitable, and perhaps right, thing to do, it could be difficult to sell the idea to policy makers and the public that the best solution is for companies to invest in building stronger and stronger encryption and then for government to invest more and more public money to try to break into it. He noted that it may be important to find a better way of talking about this idea to make it more palatable.
Taking a different perspective, Bankston suggested that such an approach could be considered a win-win situation. “I think that [it] is actually an appropriate use of government resources to both investigate a crime and then prevent many more crimes by getting the vulnerability patched,” he said. Given that fighting cybercrime is also part of broader law enforcement goals, the security gained by allowing vulnerabilities to be patched should be factored into the cost/benefit analysis for lawful hacking.
The compelling arguments on both sides of discussions about encryption and lawful access make it increasingly important to find common ground rather than thinking in absolutes, said Cate. This requires thinking not just at the macro level but also, in very specific terms, about the choices that will have to be made in various situations. Decisions must be made, said Cate, yet it isn’t clear who will make these decisions: Congress, courts, the market, or hackers? Could it be that law enforcement makes decisions because it has not been given any general guidance by Congress in this area? “We may not like the thought that there are decisions, but in the inevitability that there will be, I think it becomes increasingly important to say who makes them and based on what information and what values,” said Cate.
Cate also echoed Lampson’s point that it is important to recognize the intersections between technology, law, and politics. Bad things are going to happen, and political leaders often respond in ways that may not meet our definitions of what is rational or well thought out. The risk is that when a crisis or key event happens, the issues that have been talked about during this workshop in what Cate described as a “fairly erudite and high-flying manner” will be dealt with in a “fairly bold and brash manner.” This means that not only should perfect, best, or ideal solutions be developed, but also “second best, and quick-and-dirty ways of dealing with issues on the fly.” Recognizing that many of the technologists at the workshop feel that it would be impossible to mitigate all of the risks of encryption, Fred Chang pointed out that exogenous factors can sometimes force an issue, and thus it could be good to have already thought through the technology mitigations so that you are not scrambling to respond.
Referencing a paper from the literature on the economics of cybersecurity,1 Chang said that the authors found a positive correlation between a firm’s use of encryption and the likelihood of a reported data breach afterward. In other words, the probability of a reported data breach goes up after encryption is implemented, a counterintuitive finding. The authors speculated that perhaps the firms thought they were secure because they had encrypted and then got careless. One take-home message from that anecdote is that some consequences may be counterintuitive, and some sort of technology forecasting could help us to understand and ward off unintended consequences.
Kerr referred to earlier comments made by Chris Inglis and James Baker, who said that some form of international coordination about how to approach exceptional access mandates is necessary to make such strategies even remotely effective. He noted that this requirement significantly raises the level of difficulty of the task, making a solution even more unlikely. He shared his view that, in the event of a major terrorist attack, the government would be more likely to implement requirements that are just outside of exceptional access mandates, such as data retention requirements.
Brian LaMacchia pointed out that although implementing high-quality encryption is not trivial, the skill to create such implementations exists around the world. High-quality open-source encryption technology is available everywhere. He argued that any type of duty imposed on U.S. companies in terms of the software they write will put this software at a disadvantage relative to the rest of the world because people can acquire similar software overseas. LaMacchia said that there has already been what he views as an irrational shift away from cryptographic standards from the National Institute of Standards and Technology simply because they are U.S.-based. Moreover, users can circumvent controls through superencryption. Trying to block superencryption inevitably leads to bugs because it requires violating the abstractions one depends on when engineering complex yet trustworthy systems, LaMacchia observed.
1 A.R. Miller and C. Tucker, 2010, “Encryption and Data Loss,” Workshop on the Economics of Information Security, http://www.econinfosec.org/archive/weis2010/papers/session1/weis2010_tucker.pdf.
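The superencryption point can be made concrete with a toy sketch (again using XOR as a stand-in for real encryption; nothing here is cryptographically sound): if a user pre-encrypts data under a key only the user holds, then stripping the product’s escrowed outer layer reveals only the user’s inner ciphertext, never the plaintext.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (a toy stand-in for encryption)."""
    return bytes(x ^ y for x, y in zip(a, b))

user_key = secrets.token_bytes(32)    # known only to the user
escrow_key = secrets.token_bytes(32)  # the product's escrowed "lawful access" key

plaintext = b"meet at noon".ljust(32)     # toy fixed-size block
inner = xor_bytes(plaintext, user_key)    # the user's own pre-encryption layer
outer = xor_bytes(inner, escrow_key)      # the compliant product's outer layer

# Exceptional access removes only the outer layer; what it exposes is
# still the user's ciphertext, not the plaintext.
assert xor_bytes(outer, escrow_key) == inner
assert xor_bytes(outer, escrow_key) != plaintext
```

Blocking this behavior would require the product to inspect and reject user data that “looks encrypted,” which is the kind of abstraction-violating check LaMacchia warns leads to bugs.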
Cate observed that if the vast majority of encryption software comes from overseas, and it is getting increasingly accessible and easy to use, then everyone could use software from someplace else, and U.S. regulations would have little impact. However, he noted that similar arguments could apply to a lot of other areas that the United States nonetheless chooses to regulate. For example, the United States imposes environmental regulations within its borders even though other nations take a different approach. He suggested that waiting on or expecting an international compact on encryption is unrealistic. While figuring out how to address the international challenge is beyond the scope of this workshop, it is directly relevant to the issues this workshop is charged with.
Drew Mitnick with Access Now suggested that, while countries such as China and Russia may pursue their own aims (which would likely include government back doors), a lot of other countries may look to the United States on this. He argued that the broader global impact of the U.S. policies under consideration needs to be kept in mind.
Richard Littlehale pointed out that if the national conversation concludes that exceptional access is not appropriate, then it will be necessary to figure out what alternative options can meet the needs of law enforcement. He suggested that a next step could be bringing law enforcement and technologists together for discussions within an academic or institutional setting akin to the present workshop. In such forums, the technical experts could offer their perspectives on alternative sources of the evidence that law enforcement needs, and law enforcement experts could share their perspective on the feasibility of those alternatives when understood in the context of the realities of the criminal justice system. Underscoring the value of such exchange, Littlehale emphasized that leaving technologists out of the discussion when laws are made can result in serious unintended consequences. He suggested that continuing to have calm, productive, and ongoing conversations could help avoid mandates that fail to either resolve problems or address stakeholder needs.
Kerr noted that it is helpful to keep in mind the three contexts in which the issues being debated and discussed arise: (1) national security intelligence, which gets the majority of the attention; (2) federal criminal law enforcement; and (3) state and local law enforcement. The best path forward must address the needs of all three at the same time, he said, but their needs are likely different. For example, device encryption is primarily, though not exclusively, a criminal law issue at the federal, state, and local levels rather than a national security issue.
Commenting on the recommendation to increase FBI funding for technical capabilities, specifically computer network exploitation, James Burrell reminded attendees that computer network exploitation is by no means a singular solution to this challenge. He noted that many discussions around law enforcement requirements focus on a particular problem or particular technical issue, while in reality, a typical investigation usually involves multiple challenges and thus brings more complexities. Burrell also emphasized the importance of scalability and timeliness to the FBI.
He added that FBI Director James Comey has been adamant about the FBI’s engaging more with academia and industry as well as other government agencies, in order to identify potential technical options and to facilitate a better mutual understanding. He added that the FBI is interested in continuing these discussions outside of this particular forum. He also explained that the FBI has a technical advisory board that works with the FBI itself, foreign governments, industry, and academia. The board identifies new technologies, examines their adoption and use by potential adversaries, and considers the implications for the FBI’s own work.
Marc Donner suggested that a fair amount of the challenge faced by law enforcement stems not only from insufficient technological capabilities, but also from staff not having an appropriate level of training in how to handle technical assets and devices. Governance is going to be a key factor in solving this problem, he said, adding that the issue of governance extends far beyond the FBI and broadly into the law enforcement community.
Lampson proposed a possible solution for the budget and resource problems faced by law enforcement. A commercial or government-sponsored organization could bring together expertise so that anybody with a legitimate governmental need for lawful hacking could send that organization a seized smartphone and have the best available resources applied to cracking it. This, he suggested, would be preferable to fragmented efforts where no single entity has sufficient resources to succeed.
Bud Tribble said that much discussion of lawful hacking would be needed to arrive at an equities disclosure process that works across law enforcement. He also believes that increased funding and training for law enforcement, whether local, state, or federal, will help to increase the productivity of discussions between law enforcement, industry, and academia. He also said it was “pleasantly surprising” to hear more from government attendees regarding the issues of collective security versus individual security, instead of framing the issue as being about choices in business models.
Sherman pointed to a blog post by Columbia University computer scientist Steve Bellovin in July 2014, “What Spies Do,”2 that summarizes some of the differences between the law enforcement sector and the national security sector, including places where behavior that is acceptable in one sector is totally unacceptable in the other because of differences in function, levels of proof required, and so forth. At a 90,000-foot level, Landau clarified that national security focuses on investigations, as opposed to collecting evidence, whereas law enforcement has to do both. Also, national security is largely focused outside the United States, while law enforcement is largely focused inside the country. These distinctions bring different needs in terms of the quality of the information and how it is obtained. These two areas also have important differences in the resources that are available to them. Finally, because of where national security and law enforcement activities are focused, she said that there is an argument for changing the Vulnerabilities Equities Process.
Several participants addressed the government’s responsibilities when it discovers a vulnerability in the course of lawful hacking. Kerr noted that many of these issues are only starting to be considered. Given the great expenditure of resources involved in finding vulnerabilities, Kerr argued that it may be unlikely that those vulnerabilities will be routinely disclosed to private companies. He suggested that perhaps these questions might be dealt with using something similar to the Classified Information Procedures Act (P.L. 96-456, 94 Stat. 2025), an effort to deal with the potential disclosure of classified information in prosecutions in the court setting. He said the topic of vulnerabilities deserves more discussion and possibly its own workshop.
LaMacchia said that from a private sector standpoint, it is highly desirable for law enforcement to disclose any vulnerabilities it uncovers. It would put companies in a terrible position to be unable to act on a vulnerability that has been discovered. Landau said that in the context of national security, while she can see why U.S. companies would want to know about vulnerabilities, she also understands the logic of the argument that if there are very few users within the United States and many abroad, especially if they are targets of interest, it would be reasonable to delay informing the company.
Landau added that globalization and the Internet have drastically changed the world in the past 20 to 30 years, yet society and societal structures have not changed at the same pace. Referring to Kerr’s earlier comment that if we are using a technology, we have to explain it to the courts, Landau asked, “Will we explain the vulnerabilities? How will we do that?” If the courts are to use evidence that stems from lawful hacking or related approaches, they will need to know something about how it works and whether or not there are potential errors in it.
Tribble brought up the fact that with the increasing prevalence of user-controlled encryption comes an ever-increasing footprint of data that includes metadata, content, or something in between. He suggested that the size of this broad footprint seems to be increasing faster than the amount of content under encryption. Lampson added that the metadata-versus-content discussion is extremely problematic because the question of what information can be inferred from a given amount of metadata is not one for which theorems can be proved, so it should be assumed that metadata will reveal a lot more information than has been thought.
2 S.M. Bellovin, “What Spies Do,” SMBlog, July 20, 2014, https://www.cs.columbia.edu/~smb/blog/2014-07/2014-07-20.html.
LaMacchia raised the concept of telemetry, that is, data about the operation of a device in combination with its operating system and the cloud services it connects to. It is common practice for vendors—such as Microsoft, Apple, and Google, with Android phones as a key example—to collect data about how a user operates the system and to use these data to make improvements; users can choose whether to opt in to this sharing. For example, Microsoft collects data on what causes the most common “blue screen” crashes in order to fix those problems, and data from people’s browsing histories are used to help block malicious websites. These telemetry data are different from connection-oriented metadata because they are user- or system-oriented. LaMacchia suggested that future conversations could discuss the importance of telemetry, how it differs from metadata and data, and what may be detectable or deducible from it. Telemetry data should also be part of the conversation about access to data and about what types of exceptional access there should be to certain types of data. “The way we think about data that is part of the relationship between the user and service providers, for example, is getting increasingly more complex,” he said, noting that labeling all these types of data as metadata glosses over categories of data whose collection may be useful, or sometimes harmful, in a collective environment.
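One way to make the telemetry-versus-metadata distinction concrete is to contrast the shape of the two kinds of records. The field names below are purely hypothetical and do not reflect any vendor’s actual schema:

```python
# Telemetry describes the device or system's own operation.
telemetry_event = {
    "kind": "crash_report",        # system-oriented record
    "os_build": "10.0.19045",      # hypothetical OS build string
    "faulting_module": "example_driver.sys",
    "opted_in": True,              # users choose whether to share telemetry
}

# Connection metadata describes a communication between parties.
connection_metadata = {
    "kind": "call_record",         # communication-oriented record
    "participants": 2,             # party identifiers, timing, duration
    "duration_s": 312,
}

# Both are "data about data," but they answer different questions:
# telemetry -> how is the system behaving?
# metadata  -> who communicated with whom, when, and for how long?
assert telemetry_event["kind"] != connection_metadata["kind"]
```

Treating both records as simply “metadata” obscures exactly the differences in collection purpose and consent that LaMacchia highlights.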
Littlehale responded to the telemetry discussion by saying that telemetry represents the kind of thing that might be of value to law enforcement that it does not yet know about. Right now, the interaction between law enforcement and large service providers or large technology companies happens through a particular office that uses particular methods for the interaction. Very often, law enforcement does not know what evidence the technology company holds. “For law enforcement to accomplish its mission, we need to think about ways that we can have conversations that reveal information like this,” he said. Looking at the other side, Littlehale did note the potential counterargument of service providers—that is, if users are told or if they find out that such data may be available to law enforcement, users might not choose to have this information collected.
Lampson added that he thinks of telemetry as a special case of a more general phenomenon, which is that more and more processes on a device are inextricably entangled with what goes on in the cloud. Therefore, thinking in an entirely device-centric way is going to become increasingly irrelevant for most purposes. LaMacchia agreed with this by saying that when he thinks about a device now, he thinks about a device plus services in the cloud and that combined ecosystem, of which telemetry is a part.
From the workshop panels, Kerr said he drew the conclusion that device encryption raises a set of cost-benefit trade-offs and options distinct from those raised by other forms of encryption because, at least with current technology, the user has knowledge of the password and physical control of the device.
Cate noted the difficulty of speaking of technology in isolation, in part because everyone has their own values and convictions that are very hard to leave behind. He suggested that all stakeholders in this issue want to do the right thing, and they believe in what they are doing.
Cate also observed that technology is one part of a much bigger societal system, and it is necessary for security and public safety and other things people care about. These technologies are only a part—an important part—of the whole system, and the whole system has to work, he said.
He noted the tendency of people to think that we are on a largely fixed track, where encryption will get better and better, hacking will get better and better, and the encryption and security landscapes will look like advanced versions of today’s. He cautioned that things could look vastly different in the future. For example, he suggested that fraud patterns will be different, and, extrapolating from trends of the past decade, suggested that the way in which people engage with technologies will be fundamentally different. He reemphasized the fact that technologies are not evolving in a vacuum; the societal context in which they work is also changing. He suggested that these changes could be viewed as robust progress or tremendous instability.
He also noted the importance of better educating a large segment of the population about the technologies involved in encryption and in security, suggesting that policy makers, industry leaders, judges, and others need to understand those technologies better than they do today. He noted some of the common analogies and metaphors for talking about encryption and exceptional access—for example, comparison of exceptional access to physical searches, or to providing police with keys to one’s back door—suggesting that they are not always precise or accurate. Cate suggested that it could be useful for the technical people to come up with better agreed-upon, common metaphors to use when communicating about these issues.
Cate noted the importance of building trust between the technology community and law enforcement to encourage dialogue and to increase each group’s understanding of how technologies are used. This, he suggested, could help make technologists’ responses more appropriate, detailed, and specific. “I think we suffer not from a lack of good will, but from a trust deficit, in large part because we have a knowledge deficit,” he summarized.
Cate suggested that it is hard to segregate users and uses when it comes to security. “I think that is going to be a really hard sell to say that we think the nation’s secrets have to be protected, but it is okay if the public’s secrets aren’t so well protected,” he said. However, he noted that there may be some useful ways to segregate technologies and to articulate the right technology for the right purpose.
In closing, Cate noted the exceptional collection of people who participated in the workshop: senior representatives from the FBI, senior representatives from the Office of the Director of National Intelligence, senior representatives from industry, and many of the leading lights in academia. Remarking that the workshop represented an extraordinarily important moment, he expressed his hope that it was just the beginning of a longer process.