The workshop’s fourth session began with a presentation by Susan Landau. She covered a range of issues relevant to lawful hacking, including its benefits, its limitations, and the trade-offs it forces. Toward the end of her presentation, she also offered some solutions to the limitations as she sees them.
Orin Kerr then moderated a discussion covering a range of topics, including legal constraints and trends, considerations for the use of metadata, and the challenges and expectations for local, state, and federal law enforcement.
Landau began her remarks with an acknowledgment that although she had often disagreed with James Baker, the Federal Bureau of Investigation (FBI) general counsel, in the past, their views were moving into closer alignment. Indeed, he had said on the workshop’s first day many of the same things she had planned to say in her presentation. One of the beliefs that they have in common, she said, is that the debate over encryption is not merely about privacy versus security but about privacy and security and how the two goals relate to law enforcement’s ability to conduct investigations. Security is at the heart of both sides of the debate.
Landau noted that cybercrime has topped the U.S. Worldwide Threat Assessment list in both 2015 and 2016.1 Although domestic cyberattacks from international actors are not common currently, she cautioned that they could increase. Iran and North Korea, for example, do not have the same international constraints suppressing such attacks as China and Russia do. As an example, she said, a small, relatively unimportant dam in Westchester County, New York State, was hacked by Iranians in 2013. The attack itself was not particularly dangerous, Landau said—the worst that could have happened was that a few house basements could have been flooded—but it was an example of the type of attacks and attackers that might be faced in the future.
This type of threat means that not only do we need secure communications (the central issue in the first “Crypto War” 20-30 years ago), but we also need secure devices, Landau said. She posited that smartphones are the best form of secure authentication and observed that companies like Google and Facebook have been using smartphone-based, two-factor authentication for their employees and customers. Government agencies also use phones for secure authentication, and Landau emphasized that if such phones themselves are made less secure, they become less secure authenticators.
1 J.R. Clapper, 2016, “Worldwide Threat Assessment of the U.S. Intelligence Community,” https://www.dni.gov/files/documents/SASC_Unclassified_2016_ATA_SFR_FINAL.pdf.
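The smartphone-based two-factor authentication Landau described commonly relies on time-based one-time passwords (TOTP, RFC 6238), in which the phone and the server share a secret key and each independently derives a short code from the current time. The following is a minimal illustrative sketch in Python, not any particular vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte selects a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(key, t)
```

With the RFC 6238 test key `b"12345678901234567890"`, the code for time 59 is "287082" (the last six digits of the RFC’s 8-digit test vector). A phone that can be made to leak this shared secret defeats the scheme, which is Landau’s point about device security.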
To understand the scale at which mitigation is needed, Landau cited several pieces of data from the 2014 U.S. Wiretap Report,2 the most recent report released as of the workshop date. Title III wiretapping covers electronic communication surveillance of criminals, and the report lists a range of information for these wiretaps, including who the district attorney was, who the judge was, how many incriminating conversations were logged, and how many arrests and convictions were made. Since 2000, the report has also listed the number of cases in which encryption was encountered in a wiretapped conversation. Unfortunately, it does not indicate whether a conviction was reached because of a wiretap, which would be virtually impossible to discern because of all the other factors juries must consider in addition to the content of the wiretap when deciding a case.
Despite that drawback, Landau said, the report offers useful data, particularly with regard to trends. In 2014, there were approximately 3,500 Title III wiretap requests, 1,300 from federal law enforcement and 2,300 from state and local law enforcement. Of the 3,500 requests, 3,400 were for mobile devices. This number is important to keep in mind, Landau said, because a mobile phone is more than just a phone. It tracks a user’s location and much more information, some of which is not stored on the phone itself. It’s an incredibly rich data source, as law enforcement is well aware. The data available from smartphones have placed surveillance in a new era, Landau averred, yet smartphones were adopted so quickly that society, Congress, and law enforcement may not have a fully developed understanding of how the situation has fundamentally changed.
Of the 3,500 wiretaps requested, 3,200 were for narcotics investigations. The rest relate to an assortment of other crimes, such as homicide, assault, and kidnapping. Landau mentioned kidnapping specifically because crimes against children are often cited in the context of these debates, but wiretaps are very rarely used in kidnapping (the number is usually about four per year), likely because investigators typically don’t know who has the child. Tapping the phones of the missing child’s family members is known as a “consensual overhear” and does not require a wiretap or involve encryption.
Not every wiretap request is granted. Only 313 federal wiretaps were installed in 2014, far fewer than the number requested, Landau pointed out. Each one costs the federal government about $41,000, most of which is spent on “minimization”—that is, someone to monitor the wiretap and assess its content. Landau noted that the number of wiretaps requested at the federal level increased in the 1990s and 2000s, then dropped by about half in the late 2000s, before increasing again to the current rate of some 1,300 per year. State and local requests are also increasing.
Landau reiterated the point made by other workshop participants that encryption is still a relatively rare element in wiretaps because if law enforcement believes there will be encryption on a phone, they do not bother requesting a wiretap. Obtaining a wiretap warrant is a much more complicated process than obtaining a regular warrant, and it is typically not considered worth the time if a line is encrypted.
The increasing rate of state and local wiretap requests is significant, Landau noted, because those agencies have very different needs and resources than the federal agencies. Landau referred to James Burrell’s statements concerning the FBI’s Operational Technology Division (responsible for digital forensics, tactical operations, and other functions), which he noted has a budget in the hundreds of millions. But the FBI’s budget for responding to the going dark problem is much smaller, Landau observed. Citing numbers provided to her by staff from the House Committee on the Judiciary, Landau said that the FBI’s activities to respond to encryption and anonymization are carried out by 39 staff members (only 11 of whom are agents) and with a budget of $31 million (which may increase to $38 million in 2017, but without an increase in staff). These numbers are much smaller than people may assume, Landau pointed out, and state and local police are most likely spending even less.
During the discussion portion of this session, Burrell clarified that the budget and personnel figures Landau cited for going dark activities were from a budget request and did not necessarily reflect what the FBI or other federal, state, or local agencies actually spent on these activities. As for the personnel, Burrell declined to say whether the numbers she cited were correct. Landau clarified that those figures were the best that House Judiciary Committee staff, who twice tried to clarify this issue with the FBI, could provide. She argued that that kind of information should be made public. In response, Burrell noted that the metrics are in a state of flux, which makes it difficult to quantify the level of effort required to respond to the going dark challenge.
2 U.S. Courts, Wiretap Report 2014, last updated on December 31, 2014, http://www.uscourts.gov/statistics-reports/wiretap-report-2014.
Continuing with her presentation, Landau said that in addition to small budgets, another impediment to investigations is the wide variety of phones. In 2011, before there even were encrypted phones, Landau learned from Mark Marshall, the president of the International Association of Chiefs of Police, that the police’s biggest difficulty was the variety of the devices people use. According to Landau, he said law enforcement does not have the resources to penetrate all the different operating systems they encounter in investigations. Landau wondered how local police might handle a fully encrypted phone today, given the technological changes of the past 5 years.
Citing a paper she coauthored,3 Landau turned to the topic of lawful hacking. She noted that the FBI first started its lawful hacking program in the early 2000s and has created several technologies to facilitate it. She described lawful hacking as a multistep process. First, agents obtain a court order to penetrate the device and determine the operating system and applications (and which versions of each) the device uses. Then, a second court order is needed for agents to actually exploit a vulnerability to install their surveillance software. Some devices are harder to hack than others, depending on how up to date and secure the user keeps them. The processes used to obtain the warrants needed to pursue lawful hacking are important, she emphasized. Some cases have been thrown out because the FBI did not have the necessary two warrants.
Sometimes agents are able to exploit well-known security vulnerabilities, but sometimes they must purchase such knowledge. Landau said that a National Security Agency (NSA) employee once told her that any laptop could be hacked if the hacker wanted to do it badly enough. The San Bernardino iPhone hack brought that notion into the public consciousness.
Landau also noted that lawful hacking has to be particularized to each device. That makes it fundamentally different from previous wiretapping techniques such as alligator clips (surveillance equipment connecting a phone to a listening device, used in the 1930s) or wiretapping done under the Communications Assistance for Law Enforcement Act (CALEA, which uses a standardized wiretap interface and has the cooperation of the telecommunications provider). However, Landau emphasized that vulnerabilities will always exist. Decades ago, Fred Brooks wrote that long, complex programs will always have problems,4 and today’s programs are notably longer and more complex than ever before. Modern operating systems and applications are 10 times larger than the largest systems from the time of the Orange Book (a Department of Defense standard for evaluating the effectiveness of computer security),5 about 20 years after Brooks’s book. Eliminating bugs is a long, slow task, Landau said, and has not yet been accomplished.
Vulnerabilities are also a moving target: once the vulnerability used for an exploit is patched, a substitute must be found. Landau noted, however, that the vulnerabilities that enable hacks can last a very long time; at last count, several years ago, the average lifetime was 312 days. This is important to keep in mind, Landau said, because when the FBI acquires a working exploit, it is liable to be able to use it for a while.
Lawful hacking costs vary. Using a well-known vulnerability could be nearly free, but finding a new one could cost from $10,000 to upwards of $1 million, as in the San Bernardino case, said Landau.
Overall, Landau argued that lawful hacking is a necessary solution. The most sophisticated criminals, such as the Mexican criminal syndicate known as Los Zetas, have extremely good security. In order to investigate such criminals, the FBI needs particularized solutions. Furthermore, as encryption-secured devices become increasingly ubiquitous, the FBI will need particularized solutions every time it conducts an investigation, regardless of the sophistication of the target.
3 S.M. Bellovin, M. Blaze, S. Clark, and S. Landau, 2013, “Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet,” Privacy Law Scholars Conference, June, available at Social Science Research Network, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312107.
4 F.P. Brooks, Jr., 1975, The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley, Reading, Mass.
5 U.S. Department of Defense, 1983, The Trusted Computer System Evaluation Criteria.
Landau then turned to the issue of communications metadata, offering several examples of different ways metadata have been used in criminal investigations. While it once took the U.S. Marshals Service an average of 44 days to track fugitives, Landau noted that with cell phone surveillance, now it takes only 2 days.
Landau provided another example from 2009, when Boston police were able to catch the “Craigslist killer” after analyzing closed-circuit television images and cell phone records from the hotel where the murder occurred. A 2008 Drug Enforcement Administration (DEA) investigation offers another example. In that case, the targets were constantly switching cell phone ownership, making it impossible for the agents to obtain wiretaps. However, after being granted location information for the phones, they were able to crack the case.
Landau noted that she only learned about the DEA story from Baker’s 2010 testimony about the case to the Senate Judiciary Committee.6 In her view, this underscores that there is very little publicly available data about how investigations are run. Although it is understandable that the FBI cannot always share its methods, she posited that such information would be useful in the context of this discussion. Without the appropriate data, Landau stated, technologists, academics, and the government are unable to have a reasonable discussion about the trade-offs that are being made. When the first Crypto Wars were taking place, she said, the Wiretap Report7 data were very helpful. Right now, there are no public data about which techniques the government uses to conduct investigations, such as metadata, lawful hacking, or tools used to track online trafficking of child pornography—nor do we know how effective those techniques are.
Building on the discussion from a previous session, Landau highlighted the additional challenges presented by the proliferation of the Internet of Things (IoT). The concern is not about the security of our refrigerators or toasters, she said, but rather about the information that such devices can unwittingly transmit about their user, such as when they are home or how many people live in a household. The IoT can also offer more avenues into a secured network. She cited as an example the view of Rob Joyce, head of the NSA’s Tailored Access Operations Unit and known as the nation’s “hacker-in-chief,” who told the audience at the USENIX Enigma conference in January 2016 that the most useful information to hackers today is the login credentials of systems administrators and others with a high level of network access.
Looking forward, Landau offered some solutions to the problems she outlined. First and most important, she suggested that the FBI and its Going Dark program should be operating with far more resources. Cyberthreats are real, Landau emphasized, and systems must be secured against them; the added budget would be a necessary expense. Much of the world’s plans, software, and knowledge now exists as computer bits, and those bits are easily stolen, Landau said, making it critical to secure login credentials and communications. In this arena, she said, law enforcement is “badly outgunned.”
Second, she argued that a Vulnerabilities Equities Process (VEP) should be established for law enforcement. Landau referred to Chris Inglis’s comments at the workshop about the VEP and how the NSA reports 93 percent of the vulnerabilities, domestic or international, that it finds (although it is not known how quickly they are reported). The FBI’s unlocking of the 2015 San Bernardino shooting suspect’s iPhone disturbed Landau, she said, because the bureau paid a third party to do it without handing the exploited vulnerability over to Apple. The bureau cannot tell Apple how to fix it (leaving aside the question of whether it should hand over a vulnerability it paid $1 million for when Apple had no “bug bounty” to pay for it themselves), yet it does know that the vulnerability affects the phones of millions of people. Landau called this situation “appalling.”
6 Statement of James A. Baker, Associate Deputy Attorney General, U.S. Department of Justice, before the Senate Judiciary Committee, September 22, 2010, https://www.justice.gov/sites/default/files/testimonies/witnesses/attachments/09/22/10/09-22-10-baker-electronic-commprivacy-act.pdf.
7 U.S. Courts, Wiretap Report 2014, 2014.
A third possible solution is the creation of an official policy of information sharing to assist state and local law enforcement agencies, which are having a much harder time than the FBI even as they are filing more wiretap requests than the federal government. Information sharing, perhaps through regional centers, could help.
More education and better communication could also make a significant impact, Landau said, while being relatively easy to accomplish. She praised Baker for frequently engaging in dialogue with, and listening to, researchers and academics on these issues.
Part of the problem, Landau reiterated, is that the world has shifted, yet law enforcement does not fully understand those shifts. For example, investigators did not realize that iMessage metadata are stored with Apple, rather than with the telecommunications provider. She suggested that officers need to feel comfortable asking for assistance when they run into technology they do not understand, so that their understanding can grow. Companies do cooperate with law enforcement, even without a legal requirement to do so—for example, by voluntarily scanning image hashes to detect known images of child pornography. Although there has been tension over the San Bernardino case, Landau has also seen promising examples of cooperation between the FBI and private companies in the computer industry and believes that there could be more.
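The voluntary hash scanning Landau mentioned amounts to checking each file against a database of hashes of known illegal images. Production systems generally use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding; the sketch below uses a cryptographic hash only to illustrate the membership-check idea, and the database entry is a placeholder:

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known illegal images. A cryptographic hash
# matches only byte-identical files; real deployments use perceptual hashes.
KNOWN_HASHES = {
    # SHA-256 of b"test", standing in for a real database entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_file(path: Path) -> bool:
    """Return True if the file's SHA-256 digest appears in the known-hash set."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```

Because only digests are compared, a provider can run such a check without inspecting or retaining the content of files that do not match.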
Although the variety of communications metadata formats is a problem for law enforcement, Landau said, the research community has tackled similar problems before, as exemplified by an article by Kathleen Fisher and Robert Gruber8 about dealing with unstructured data. Landau asserted that a technological advisory board could help the FBI leverage this sort of work.
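The kind of problem Landau points to can be sketched without the full PADS machinery: call-record metadata arriving in different provider-specific layouts can be normalized into one structure by a small set of declarative patterns. Both formats below are invented purely for illustration:

```python
import re
from typing import NamedTuple, Optional

class CallRecord(NamedTuple):
    caller: str
    callee: str
    seconds: int

# Hypothetical line formats from two different providers (illustrative only).
PATTERNS = [
    re.compile(r"^(?P<caller>\+\d+) -> (?P<callee>\+\d+) \[(?P<seconds>\d+)s\]$"),
    re.compile(r"^CALL,(?P<caller>\+\d+),(?P<callee>\+\d+),(?P<seconds>\d+)$"),
]

def parse_record(line: str) -> Optional[CallRecord]:
    """Try each known provider format; return a unified record or None."""
    for pattern in PATTERNS:
        m = pattern.match(line)
        if m:
            return CallRecord(m["caller"], m["callee"], int(m["seconds"]))
    return None  # unrecognized ad hoc format
```

The research Landau cites generalizes this idea: the per-format descriptions are written declaratively, and the parsing, error handling, and unified output are generated from them.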
In closing, Landau asked the group to look at encryption and lawful hacking in another way—as cost shifting. Moving away from easier wiretapping secures the general public but makes wiretaps more expensive. Expensive wiretaps, however, have always been supported in the past, she said. Supporting them now would mean significantly expanding the FBI’s going dark program as well as establishing better mechanisms to aid state and local law enforcement.
Orin Kerr moderated an open discussion, taking up many of the points Landau presented in her talk. The discussion is arranged here by topic; the order of the contributions does not necessarily match the order in which items were discussed at the workshop.
Kerr expressed his belief that the responses to—and investment in—the Going Dark program relate to an equilibrium between legal and technical protections. When there are more technical protections on data, he suggested, the courts will require less legal protection; if technology weakens data security, the courts tend to step in to increase legal protections.
In the current encryption landscape, he argued that technological protection will likely grow, and so the courts will give data less legal protection. He cited as an example the treatment of metadata. Metadata are especially tricky because the courts are not sure whether they are covered by the Fourth Amendment. He believes that as encryption gets stronger and more widespread, making investigations more difficult, the courts will respond by limiting the constitutional protection for metadata. He challenged attendees to consider whether that is a trade-off we are comfortable with. In response to this idea, Bankston posited that there are many areas where protection would not be lost, because the item or data type (such as metadata) is already unprotected, although he declined to take a firm stance on the legal/technical balance Kerr had suggested.
8 K. Fisher and R. Gruber, 2005, “PADS: A Domain-Specific Language for Processing Ad Hoc Data,” PLDI ‘05: Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, Association for Computing Machinery, pp. 295-304.
Kerr raised another, perhaps more troubling, prospect. Before iOS 8, the government could obtain a warrant and easily decrypt a phone’s data. Given the difficulty of opening a post-iOS 8 phone or a similarly protected device, investigators are now forced to turn to other methods. Kerr pointed to a Third Circuit Court of Appeals case involving a former police officer, allegedly dealing in child pornography, who would not divulge his computer passcode; he was found in contempt of court and jailed until he discloses the password. Kerr noted that it is quite possible for someone to withhold a passcode purposefully, but it is also possible for someone to genuinely forget it, and it is left to a judge to decide which is the case. If “failure to decrypt your device” can lead to indefinite jail time, Kerr asked, is that an outcome we are willing to accept?
Matt Blaze suggested that the risk of such outcomes will lessen as the technology continues to evolve. The actual encryption keys are generally derived from passwords or passphrases, so it is rare for someone to have direct knowledge of the encryption key that unlocks stored data. Moreover, because people cannot remember long sequences of characters, passcodes that humans can remember are highly vulnerable to a computerized search. He said future approaches are more likely to use a passcode to unlock a security module that actually holds the encryption key, which either stays within the module or is deleted immediately after use. Thus, if law enforcement cannot force a suspect to divulge the passcode, investigators can instead attack the security module to extract the key.
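Blaze’s point about computerized search can be made concrete: even with heavy key stretching, a key derived from a 6-digit PIN lives in a space of only one million candidates. A minimal sketch, with PBKDF2 parameters chosen arbitrarily for illustration:

```python
import hashlib

ITERATIONS = 100_000  # key stretching slows each guess but cannot enlarge the keyspace

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Derive a 256-bit key from a passcode via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, ITERATIONS)

def brute_force(target_key: bytes, salt: bytes):
    """Exhaust all 6-digit PINs; at most 10**6 derivations recovers the passcode."""
    for pin in range(10 ** 6):
        guess = f"{pin:06d}"
        if derive_key(guess, salt) == target_key:
            return guess
    return None
```

A hardware security module changes this calculus: if the stored key is rate-limited or erased after repeated failures, the exhaustive search above cannot be run offline.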
Weighing in later in the discussion, Bankston argued that this is clearly an issue of due process and one that he believes the courts can adjudicate. It belongs in that realm, he reasoned, and not in the discussion about encryption, where decisions are being made that could affect the information security of billions of people. He suggested that far fewer people are likely to be detained for refusing to provide passcodes than are likely to be adversely affected by insecure phones.
Daniel Kahn Gillmor suggested that an important aspect of this scenario is that the ability of investigators to enter the suspect’s computer depends on the suspect actively divulging his or her passcode. On the other hand, if law enforcement can exploit a vulnerability to get in, a suspect would not necessarily know that his or her encryption has been broken. Gillmor argued that targets of investigations should be notified when the government breaks into their software or device through lawful hacking. Taking a lesson from wiretapping procedure, Landau replied that the law requires that a person wiretapped in a law enforcement investigation be notified 90 days after the tap has ended.9 For an intelligence wiretap, the target is notified only if content from it is used in court.
Another factor that is potentially more disruptive to law enforcement than encryption is file deletion, Blaze pointed out. There are, of course, perfectly legitimate reasons to delete files, such as when someone wants to sell their computer to someone else, but erasing or overwriting files containing evidence of criminal activity is very common. Nevertheless, there has been little discussion about restricting such behavior or weakening deletion capabilities.
Gillmor reiterated his view that these questions will continue to come up, because all of the proposed systems for exceptional access are still bypassable, and some passwords will be unbreakable. People will always want to protect their data, and there will always be software to enable that. Even the threat of indefinite jail time will not be effective against the world’s worst criminals, he argued, because they will always find a way to protect their data.
Richard Littlehale countered that most criminals are not very sophisticated, so even a bypassable system could be useful. In addition, he suggested that perhaps technological trends such as biometrics could help law enforcement avoid indefinite jailing of suspects who refuse to give up their passcodes. Although not a perfect solution, a method of unlocking a phone that could be compelled without undue harm to its owner could be very helpful, he said.
Landau noted that in the communications world, there has so far been a clear distinction between message content and message metadata. Metadata, which include information on dialing, signaling, routing, and addressing, have generally not been well protected and can be obtained via subpoena. However, when Landau and her colleagues looked deeper into the issue in the modern context, they found that it was technically very muddled and that there was a vast middle area between content and metadata. Sometimes, what is not itself content can still be revealing of content.
9 18 USC Sec. 2518(8)(d).
Returning to the theory that greater technical protection on our phones would lead to less legal protection in other areas, Kerr suggested that the courts could wind up defining all those data in the middle as metadata and giving the government access to them. Although the debate is often framed as privacy versus security, he said that he sees it more as being about different kinds of security, different kinds of privacy, and different civil liberties.
Reiterating a point discussed in the workshop’s third session, Landau pointed out that because phones contain so much more data than they did 10 or 20 years ago, they on balance still provide investigators with a tremendous amount of material. Kerr countered that the courts could argue that that material is also not protected by the Fourth Amendment. Such an outcome would give everyone less protection, he argued, because the courts would be saying that the government can access data that are not on the device, and can even access the device itself with the threat of jail time, as previously discussed. Kerr and Landau agreed that the real distinction is between data that can be accessed easily and data that cannot be accessed at all.
Pointing to the 2014 Wiretap Report,10 Bankston noted that out of approximately 25 instances where encryption was detected, law enforcement was unable to extract plaintext only 4 times. Landau noted Bankston and Soltani’s paper on United States v. Jones,11 which Bankston described as underscoring the fact that as technology evolves, it is getting exponentially easier and cheaper for law enforcement to conduct surveillance. Gillmor emphasized the flip side of the trade-off Kerr posited: if the courts tend toward providing less legal protection for metadata, then technology will evolve to provide more. Noting that there are already messaging services that protect metadata, he cautioned that those who want less legal protection of metadata should be aware that they are encouraging stronger encryption technology overall. Gillmor then questioned whether Kerr’s balance theory is right, noting that he has not seen an increase in legal protections for metadata even as people develop more technologies to protect it. In response, Kerr noted some examples of legal protection of metadata, such as the California Electronic Communications Privacy Act, which requires a warrant for electronic surveillance, including access to metadata (several courts have also stated that metadata are protected by the Fourth Amendment).
Burrell addressed some nuances related to the value and use of metadata in the law enforcement context. Although metadata format is not an issue for the FBI, he said, data management presents a far greater challenge. The growth of metadata has brought much more data to bear on every investigation, but he cautioned that that does not mean much of it is useful as evidence. Burrell said that the FBI has discussed metadata standards with various communications providers and metadata storage organizations, which are usually aware that their data could be useful to law enforcement. However, they do not all collect the same information, which is a key difference from the past. Building on this point, Landau pointed out that 30 years ago there was only one communications provider, whereas now there are multiple providers and multiple communication methods. Burrell concurred and noted that there are a number of providers now that, for various reasons, are not covered by CALEA. Despite a general perception that law enforcement always gets the data it wants, that is not actually the case, he said.
Landau reiterated the point that a technical advisory board could help the FBI identify and better utilize all the information channels available to it. Some companies are extremely good at instrumenting and measuring things, she said, and those skills could help the FBI.
Littlehale noted that a further complication in the use of metadata by law enforcement relates to the difference between knowing something and being able to prove it in court. The police may know where a phone was at a certain time, but it is extremely difficult to prove in court exactly who was holding that phone at the time, especially in the absence of video evidence or a direct observation of the event.
Kerr returned to Landau’s argument about the value of encryption for supporting smartphones as authenticators. In the physical world, he said, authentication standards are not very secure; a driver’s license or the last four digits of a social security number are often accepted. In that context, he asked, why does phone authentication in particular require such strong security?
10 U.S. Courts, Wiretap Report 2014, 2014.
11 K.S. Bankston and A. Soltani, 2014, “Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones,” The Yale Law Journal Online, Volume 123, January 9, http://www.yalelawjournal.org/forum/tiny-constables-and-the-cost-of-surveillance-making-cents-out-of-united-states-v-jones.
Landau replied that it is needed because of the scale of crime that digital platforms make possible. In the physical world, you are limited by how many humans can pull off a scheme such as a bank robbery. Online, many banks could be targeted simultaneously using machines. Even one phone that falls into the wrong hands—for example, when someone is separated from their phone during airport screening or in a confidential meeting—could offer an adversary important access points for valuable information or account credentials.
Kerr pointed out that the question under discussion was not whether certain “VIPs” need phones with good security but whether everyone should have them. The rule should be for everyone or for no one, he said. Given that, he asked, how much security would be needed to have secure authentication? Landau emphasized that in her view all consumer phones should have a high level of encryption. Many high-level government employees use consumer phones for secure authentication (sometimes in order to hide the fact that they are government employees, she noted), as do VIPs in many other industries.
Underscoring this point, Donner emphasized that government agencies and others are strongly dependent on consumer technology because history has proven that developing customized technology is a dead end. Because consumer technology is far more widely used than customized technology, companies like Apple and Google can take advantage of economies of scale. It is also far easier to find and fix bugs if millions of people are using your products. Gillmor added that using consumer devices for official work also requires far less training and is easier on users.
When the government hacks into a system, must it divulge the exploited vulnerability so that it can be fixed? After all, if the FBI has found a vulnerability, criminals can find it too. Landau argued that the vulnerability must be disclosed to protect others using that software or device. This is an argument about security, she said, not privacy. One caveat, as mentioned in the lawful hacking paper cited above,12 is that the obligation for immediate disclosure might be stronger in the context of law enforcement than in the context of national security, she said. This is because law enforcement investigations are generally within the United States; therefore, a vulnerability that works for accessing one criminal’s phone is likely to also be present in many other phones in use by legitimate users in the United States, she explained.
Picking up on topics raised earlier in the workshop, Kerr noted that disclosing vulnerabilities raises the cost of government hacking. Landau countered that when viewed in the context of the cost of cybercrime in the United States, a much higher budget for federal, state, and local law enforcement is justified. Furthermore, she said, if it takes an average of 312 days to fix a known vulnerability, then law enforcement should be able to use it much more than once, even after disclosure. In response to a question from Littlehale, several participants noted that the number of potential vulnerabilities in an operating system is essentially infinite.
Rescorla posited that lawful hacking presents a threat to other users of the targeted software or hardware: the longer a vulnerability is used by the government, the more likely it is that criminals will discover and exploit it. There are three primary ways in which government possession and use of a vulnerability increases this risk, Rescorla suggested. First, when the government uses the vulnerability, it inherently discloses it to its targets, who might discover how it works. Second, if the vulnerability is purchased rather than found by the government, it could be sold again to someone else. Third, if the government discloses a vulnerability in court, that information might leak. Any lawful hacking program needs to take these risks into account when crafting disclosure policies, he added.
Picking up on the question of whether and how an exploited vulnerability might be understood in the court system, Littlehale pointed to a criminal defendant’s right to challenge the foundation of evidence offered against him or her. In some cases it may be that the defendant or the court will want to know exactly how the information was obtained or require proof that the information was not altered.
12 S.M. Bellovin et al., 2013.