The workshop concluded with a period of open, moderated discussion, giving participants a chance to raise additional issues and circle back to matters raised during the preceding presentations and discussions. This chapter, organized into thematic areas, describes the content of the final discussion and also integrates crosscutting points made during presentations and earlier discussions, highlighting some of the broader themes that emerged throughout the workshop.
Many participants touched on the complexities of defining harm in the context of data breaches. Data breaches can cause harm at many levels, including harm to individuals, groups, companies, governments, and nations. Participants described examples of harms to individuals that include, among other possibilities, identity theft, the exposure of financial and medical information, damage to personal reputation, endangerment of personal safety, and psychological harms related to fear, loss of trust, and inconvenience. Several noted that data breaches have repercussions not just for individuals, but also for business practices and trade secrets, the economy, and national security.
David Clark, Massachusetts Institute of Technology, reiterated the need for a taxonomy of harms. A breach raises many unanswered questions; when one occurs, it is
generally difficult to determine with certainty the motives of the attacker or the consequential harms that might occur months or years later, he said. When determining how to quantify, insure against, and recover from breaches, Clark pointed out that various types of harms—financial, medical, reputational—are all different in important ways.
At several points throughout the workshop, debate emerged about the degree to which disclosures of certain types of information, or the burden of dealing with the repercussions of a data breach—for example, the time and inconvenience of replacing a credit card—constitutes harm to a breach victim. Noting that it is easy to acquire, for a small fee, a large amount of public information about an individual, Butler Lampson, Microsoft Corporation, questioned the need for concern over disclosure of personally identifying information. Apart from special cases like Ashley Madison, he contended that disclosure of identifying information does not constitute a significant harm. In his presentation, James Harvey, Alston & Bird, suggested that having a credit card number stolen may not in itself constitute tangible loss if the consumer does not have to pay for any fraudulent charges and only has to have the credit card replaced. Others, including Fred Schneider, Cornell University, and Anita Allen, University of Pennsylvania, countered that the costs and harms in these situations are indeed real, even if the victim does not pay money out of pocket, not least because of the time it takes to update credit card numbers and take other necessary steps to protect one’s finances or identity following a breach. Several participants commented that technological advances could help to reduce these burdens by allowing consumers to more easily replace a credit card and update all of their routine or automatic charges.
To fully understand the harms of data breaches and build better models for remediation, it may be important to consider the specific circumstances of the victims. Allen pointed out that not everyone is equally equipped to understand and appropriately respond to becoming the victim of a data breach, suggesting that some groups, such as intellectually disabled adults, younger people, and the elderly, may be particularly vulnerable. “I think that a lot of us just don’t have the means, the resources, to cope with the results of someone getting ahold of our data,” she said.
Allen also shared her view that the conversation about defining harm from the legal perspective has stagnated over the past several years, with continuing debates over whether harm should be defined broadly, as it is throughout the rest of privacy law, or more narrowly to restrict liability, and whether or when a “no harm/no foul” approach might be appropriate.
The need for better mechanisms and incentives to prevent data breaches was a recurrent theme throughout the workshop, and discussants also noted the importance of learning from breaches so that post-breach analysis and remediation can help prevent future incidents.
Tony Sager, Center for Internet Security, emphasized the importance of learning lessons from previous incidents to implement good practices before another incident occurs. For example, he said it can make a big difference which parties are assigned administrative rights and who controls privileges. Poor security practices not only leave openings for a breach to occur, they also make it impossible or extremely costly to respond quickly when one does occur, Sager said. As Deirdre Mulligan, University of California, Berkeley, put it, “Your security practices upfront are about prevention, but they’re also basically the architecture for incident response.”
Suggesting that preventing data breaches should be thought of as an infrastructure issue, Fred Cate, Indiana University, expressed his view that there has been an inappropriate reliance on complicated password systems, which he called “silly infrastructures.” “We have been giving and listening to presentations for what, a decade now, about how worthless passwords are—so what does every company and government agency represented in [this workshop] do? It requires a password,” he said.
In her wrap-up comments, Mulligan pointed to a general sense among workshop attendees that “we’re standing on a pretty faulty [technical] foundation” and that systems need to be re-architected with an eye toward preventing breaches. Paul Kocher, Cryptography Research, Inc., described current systems as being “in the middle of a scaling problem.” Our technical foundations are expected to hold up an ever more complex array of data and systems, and they are cracking under the weight, he said, adding that this burden will only increase in the future. With the advent of the Internet of Things, individuals may have to update and maintain dozens of connected devices in their homes rather than just a few; a similar process is under way at the levels of companies and governments, he observed. “There’s this path that we’re kind of on by default, where more and more balls are going to get handed to the juggler, and we’re dropping some, and we’ll drop more,” Kocher said.
A partial solution, in Kocher’s view, is to invest more time and money in data protection at the early stages by investing in basic technical components that can have a big impact. For example, he said, instead of designing processors “where every device driver is one bug away from compromising the whole device,” software architectures need to be built so that some portions of data security stay intact even in the face of inevitable human error on the part of developers.
Steve Lipner built on these themes. In his view, conducting a thorough root
cause analysis after a breach is not only crucial to fixing the weakness behind the current breach, but essential for learning how to avoid making the same mistake again. Whether the incentives to conduct such analyses come from the insurance industry or from standards bodies or other sources, “I think that feedback loop, given that security is imperfect, is super important and something that organizations ought to be building in,” Lipner said.
Perhaps more than on any other issue, workshop attendees expressed broad agreement that current remediation measures are insufficient to address the harms caused by today’s data breaches.
Credit monitoring is the overwhelmingly predominant remediation, a measure that many attendees viewed as not only inadequate protection against financial and identity theft—since it can only help victims detect identity theft but not prevent that theft from occurring—but completely inappropriate for a wide variety of other types of harms that can result from data breaches. David Vladeck, Georgetown University, reiterated the fact that there is no pathway for remediation of medical identity theft, for example, and emphasized the need to address such gaps. Schneider expanded on this point, suggesting that prevention, deterrence, and remediation might need to be wielded in different ways for different types of breaches. “We should stop thinking that data breach and identity theft are coupled—we should start thinking that there are lots of harms that could happen,” Schneider said. “There are lots of remediations that are possible. Some harms may not have remediation, [in which case] you have to count on prevention and deterrence.”
Cate suggested that in designing remediation measures, there is a need to be more specific about the type of data involved and the context of the breach. Lumping all breaches together, as is typically done in relevant laws, he said, fails to adequately address the variety of breaches that occur and also contributes to breach fatigue among victims, who have grown so accustomed to breach notices that they routinely ignore them and take no steps to protect themselves.
Because it is challenging to identify harms from a breach, it is similarly challenging to quantify losses for the purposes of informing remediation. Bob Blakley, CitiGroup, Inc., raised the question of whether it is best to focus on losses to individuals or losses in the aggregate and whether the tort process or the statutory damages framework is better suited to addressing remediation. Although it is difficult to quantify the damage from a data breach, Joel Reidenberg, Fordham University, noted that the ability to do so could open up consumer remediation via individual payouts, provide the basis for a new type of insurance to cover losses from data breaches, or help a court set statutory damage criteria. The outcome of a case currently before the Supreme Court, Spokeo, Inc. v. Robins, could influence some of these issues, he noted.
In terms of describing harms, designing remediation, and assigning liability, Mulligan posited that some types of information may need to be considered under the legal standard of strict liability because there is no way for a victim to fully recover from the repercussions of the breach. She cited as an example the Ashley Madison breach, which she said could be viewed as the equivalent of “an inherently dangerous product.” She noted that in that case, the company had actually encrypted all of its customers’ financial data, but the financial information did not constitute the totality of the sensitive information they were sitting on.
Allen contended that even breaches like Ashley Madison can and should be remedied in some way. “Privacy law in general has always, [in] the last 100 years, been there to deal with embarrassment, humiliation, shame. So we lawyers have a lot of examples and a lot of precedent to build on for why we should take this kind of harm very seriously and why we can address it through law.” Mulligan noted that one of the weaknesses of this approach is that it requires the injured party to publicly acknowledge private matters in court, which can present a barrier, and she wondered whether it would be possible for such victims to be made whole in some other way.
Circling back to the question of companies’ motives in disclosing or concealing information following a breach, Eric Grosse, Google, Inc., underscored the viewpoint that many companies do or should notify victims after a breach, not only because they “have to” in certain circumstances, but because it’s the right thing to do—a point Heather Adkins, Google, Inc., had emphasized in her presentation. Of course, companies also listen to the guidance of their legal teams about what must be said and when it’s perhaps wiser to minimize communications. But Grosse contended that companies ought to notify victims whenever possible because doing so helps them understand the cyber risks we all live with every day, helps to alleviate uncertainty, and, perhaps most important, gives victims a chance to respond. “I sort of resent the fact that it seems like the legal system is forcing us to minimize what we say,” Grosse said. “I think that is not the right policy outcome.”
Cate emphasized that there is a long way to go to adequately address breaches: “I’m not sure anyone has much to be proud of,” he said of the current system. Offering a pointed analogy, he highlighted the inadequacy of breach notifications toward resolving the harms suffered by victims. “Our first and primary response, as a matter of legal systems or our society, has been to tell the person who has been victimized that they have been victimized,” he said. “Imagine that in any other area—imagine we had a law for murder that said the first thing we’re going to do is not try to arrest the person, not prosecute him, but let’s tell the person who has been killed they have been killed.”
Cate also expressed disappointment in the government’s ability to appropriately respond to breaches, citing the federal Office of Personnel Management breach as an example. “The most extraordinary breach of our lives of the most sensitive information of a group of people to whom the government owes the most, and [the government] has done nothing, absolutely nothing,” he said. “It took [the government] 3 months to admit it had even happened. For the next 3 months, it argued over how big it was. And now, it has offered credit monitoring!” Based on this track record, he said that in his view it would not be productive to rely on the government to provide adequate remedies. He recognized that the FTC may provide some solutions in the consumer space, although he noted that it lacks the necessary regulatory authority. Moreover, the discussion throughout the day made clear that there is no clear consensus on what the remediable harms are.
Given the general sense among attendees that the current information security framework has not been effective at preventing, deterring, or adequately remediating data breaches, participants explored how the situation could be improved. In her closing comments, Mulligan noted that while there has been “a lot of money changing hands” and many lawyers and insurance companies getting involved in addressing data breaches, there is still a sense that there hasn’t been much progress on the day-to-day protection of information. To move forward, she called for deeper thinking about harms and how problems develop, to allow the field to focus on creating incentives and infrastructure improvements where they can have the greatest impact.
Understanding the Problem
Attendees discussed different conceptual frameworks for viewing data breaches and explored the need for data collection, sharing, and research.
Lampson suggested refocusing the discussion from data breaches in particular to the broader issues of “data flow”—a framework that encompasses all of the pathways through which data collected by one entity can be transferred to other entities—and how such flows can be traced and controlled. “You might take the view that data breach is just one extreme case of data flow, where there’s a minimal amount of control involved,” he said. Lampson also emphasized the “fractal” nature of information security, noting that the difficulty of solving subsets of the problem often seems just as difficult as solving the entire problem. As a result, making progress will require taking the broad view and being “much more ruthless” about setting priorities and addressing weaknesses, he said.
William Sanders, University of Illinois, Urbana-Champaign, and others emphasized the need for more data to fully understand the problem. He pointed to the illuminating data that had been presented at the workshop and said more studies were needed to understand both the policy and technical issues and how they fit together. Lampson expressed agreement but noted that there is a question about who is going to pay for collecting the needed data, because such an undertaking is likely to be difficult and potentially expensive. Vladeck noted that there is currently no comprehensive requirement to report data breaches to the government and suggested that some system of mandatory reporting could help address data collection issues as well as facilitate better remediation.
While attendees broadly expressed support for collecting more data and conducting more research on breaches, several noted inherent weaknesses in such research. For example, in data breaches and other cyber events, there are a lot of unknown unknowns—potential events that go undetected or unreported and cannot be appropriately accounted for. In reference to Sasha Romanosky’s research on the costs and causes of cyber incidents, Grosse pointed out that phishing might account for a much higher percentage of breached data than the analysis recognized. In addition, Kocher said that comparing the costs of cyber incidents to other sources of loss, such as stolen intellectual property or bad debt, relies on a variety of studies using different methodologies and different data sources, making any such comparison at best an approximation.
Mulligan summarized the general viewpoint that thorough root cause analyses after breaches would be beneficial in identifying harms, risks, and threat models, but that these analyses are not conducted often enough. Another key issue, participants noted, is how to transfer lessons learned from one breach to help inform standards or practices that can help others avoid the same mistakes. At several points, workshop attendees discussed the potential role of Information Sharing and Analysis Centers (ISACs) in facilitating such exchange, but views on the merits of ISACs were mixed, and some favored other frameworks for information exchange.
Bob Belair, Arnall Golden Gregory, LLP, noted that while workshop attendees expressed widespread agreement that reforms are needed to better address data breaches—for example, “as to what standards folks that are holding data ought to adhere to, what the process ought to be in terms of auditing and compliance with the policies, and then what happens if there is a breach and remediation”—he expressed doubts that these reforms would come in the form of federal legislation. The question then becomes, What is the right venue for these changes?
Belair suggested that giving rulemaking authority to the Federal Trade Commission
(FTC) would be one obvious approach, but in the absence of that, there are still ways for the FTC to take leadership by convening a multiagency effort, perhaps involving the National Academies of Sciences, Engineering, and Medicine, to give the private sector guidance. Lampson also noted that a consensus study from the Academies could help bring clarity on data breach harms, prevention, and remediation and help to establish standards. “I think it has been clear from this discussion that it would be good for the National Academies to undertake a study that can produce recommendations on this subject,” he said.
Mulligan said there was enthusiasm for developing guidance about technical data security protection, such as configurations, defaults, and management. Belair concurred: “Really, you know, the truth is, except for outliers who you don’t need to worry about, the private sector wants guidance. They want rules. They may not entirely like those rules, but getting rules creates certainty and is far better than being in a situation where you don’t quite know what constitutes an appropriate approach to data security and to avoiding breaches,” he said.
Allen said it is likely that the solution lies in some combination of technical fixes and strong incentives for the business sector. Allen and others noted that the increasing role of the insurance industry in incentivizing and even developing data security standards is an interesting recent development that warrants more consideration.
Cate reiterated that it seems clear that incentives are needed to ensure that industry bears the cost of its breaches. It is only natural, he said, that industry and government would fail to adequately protect data when breaches do not have a major impact on the bottom line. Lampson agreed with this point but noted that there was no clear consensus on what form those incentives ought to take. One challenge, Lampson said, is that “there is this conflict between the desire to punish people for behaving badly and the desire to not stamp out innovation. And because a lot of this stuff is so new, it’s extremely unclear, in my mind anyway, about how you can reconcile that conflict.”
Many agreed that a more thorough empirical and theoretical understanding of the problem would help to reconcile the difficult questions surrounding standards, incentives, and technology for better data breach prevention and response. Moving forward, Schneider said, “I believe we’re going to probably come up with some principles that would justify remediations for different classes of harms.” He noted that “if we got to that point, then we would really have a more powerful way to talk about laws that might compel people to do the right thing.”