Deirdre Mulligan, University of California, Berkeley
Deirdre Mulligan, associate professor at the School of Information at the University of California, Berkeley, framed the challenge of software updates with a broad overview of the tangle of issues at the intersection of public values and private infrastructure.
Mulligan opened with her proposal, first posited along with co-author Fred Schneider in 2011, for a new doctrine of cybersecurity—an approach that considers cybersecurity as a public good and stresses greater attention to security issues during software development and managing vulnerabilities after deployment, as opposed to chasing hackers or making software invulnerable.1 No matter how attentive software designers are to security issues, she said, new threats and insecurities will inevitably emerge. Figuring out how to deal with that ongoing insecurity deserves some portion of our attention from both a technical and policy standpoint.
1 D.K. Mulligan and F.B. Schneider, 2011, Doctrine for cybersecurity, Daedalus 140(4):70-92, doi:10.1162/daed_a_00116.
While current policies that recognize and reduce vulnerabilities are beneficial, Mulligan suggested that there should be incentives, or even coercive measures, to induce companies to produce and maintain more secure software and thereby better protect individuals against threats.
Mulligan identified three main aspects of a potential policy infrastructure for software updates. The first is a need for a shared definition of the term “cybersecurity.” For example, whose “security,” exactly, does it refer to? Also, what would justify the use of a security update channel? Second, she noted that cybersecurity is embedded within an ecosystem of related issues such as individual privacy, consumer protection, industry competition, and intellectual property. Finally, software updates encompass the tension between society’s collective interests, on one hand, and the rights and interests of individuals and corporations on the other.
Channels used to convey updates must remain secure themselves.
In Mulligan’s view, situations like the Mirai attack (described briefly on page 4) demonstrate why policy intervention is warranted. “When we have externalities—when my choices are impacting you in ways that are negative and in ways that are really difficult for you to address—that is a justifiable reason for looking to public policy to intervene, to create incentives, or to punish people for failure,” she asserted. A public policy intervention, in this case, could create incentives for effective software updates, or even punishments for failure, while remaining mindful of the balance between the rights of individuals and the needs of society.
An update infrastructure that allows the monitoring—and addressing—of threats and vulnerabilities would require extensive information, Mulligan noted. Monitoring and surveillance would be necessary to identify specific threats and contribute to a shared knowledge of evolving risks or attacks. This requires knowledge about devices, such as their properties, the applications used on them, and their network connections. An update infrastructure, she noted, would also require the ability to update a device “in the wild,” and channels used to convey such updates must remain secure themselves.
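The requirement that an update channel remain secure itself can be made concrete with a minimal sketch: a device accepts an update only if its payload authenticates against a key provisioned by the legitimate vendor. The sketch below, in Python, uses the standard library’s hmac module; the key, payload, and function names are illustrative assumptions, not part of any system discussed at the workshop, and real deployments typically use asymmetric signatures so devices hold only a public key.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned on the device at manufacture.
VENDOR_KEY = b"example-device-provisioning-key"

def sign_update(payload: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: tag an update payload before sending it over the channel."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Device side: accept the update only if the tag authenticates."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(expected, tag)

firmware = b"firmware v1.1: security fix"
tag = sign_update(firmware)
assert verify_update(firmware, tag)          # legitimate update accepted
assert not verify_update(b"tampered", tag)   # altered payload rejected
```

Without such authentication, the update channel itself becomes the attack surface: anyone who can write to it can push arbitrary code to every device that listens.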
Developing an infrastructure for updates raises several key challenges from the perspective of both businesses and consumers, and these challenges are especially acute as we move into the Internet of Things (IoT) space, said Mulligan.
Making accurate vulnerability assessments of software and fixing associated vulnerabilities could potentially require the collection of private data from consumers’ devices, where there is an expectation of privacy and control. As a result, it is important to consider consumers’ expectations and the privacy implications of a software update infrastructure, Mulligan said.
An update infrastructure policy would also, of course, affect companies. There remain many unanswered questions. For example, should the initial seller have exclusive power to update device security or functionality, or should other companies offering similar, potentially more secure, updates be allowed to compete in this space? Stability is another concern, because security updates could destabilize a device or service, frustrating the consumer or the manufacturer.
In the IoT space, there is a need to carefully consider how traditional update channels, long used to update other kinds of computers, should apply to IoT devices. Should a new channel exclusively for security updates be created, and could it truly be secured? As an example, Mulligan pointed to Tesla, which sends updates to its cars via an over-the-air update channel. One recent update included a new feature, Summon, which enables owners within a certain distance to press a button and “summon” the car to them. The initial software had safety issues that the company later corrected, but the feature delivered through that update channel could have posed a serious risk to bystanders near a “summoned” car,2 Mulligan noted.
It is also possible that update channels could be abused to downgrade product functionality, which could make consumers unhappy and mistrustful of the update channel altogether. Update channels highlight the tensions between the market, which may want to use the channel as an avenue to increase sales, and consumer advocates, who want to make sure it is used to deliver necessary security improvements.
Defining what, exactly, is a security upgrade is another nettlesome question, as is the question of who gets to define cybersecurity. Mulligan cited the example of Sony’s use of DRM-protected compact discs (CDs) to prevent theft of its intellectual property; while the CDs may have advanced security in the eyes of the company, they also provided the company with backdoor access to the consumer’s machine, arguably not supporting security in the eyes of the consumer. Gatekeeping is also an issue: Would individual companies control update channels exclusively, or could other companies use them as well?

2 J. Fisher, 2016, “Tesla to Fix Self-Parking Feature After Consumer Reports Raises Safety Concern,” Consumer Reports, February 10, http://www.consumerreports.org/car-safety/tesla-fixes-self-parking-feature-after-consumerreports-raises-safety-concern.
Consumers would have their own questions about update capabilities, such as whether they are mandatory or optional. If they are mandatory, consumers might have to reveal private data to receive the security benefit. Yet if the security channel isn’t properly restricted, consumers could be exposed to identity theft in addition to unwanted downgrades or modifications.
IoT products in particular have poorly defined security requirements and support timelines, said Mulligan. What, if any, obligations do manufacturers have for maintaining or updating their security? Given that many of these devices are so low cost as to be considered almost disposable, for how long can a consumer expect them to remain secure? If IoT devices aren’t upgraded but are still in use, what economic costs does that impose and on whom?
All these questions and concerns led Mulligan and Schneider to their guiding principles for cybersecurity. First, they posit that cybersecurity is a public good, and thus private or individual choices could have negative consequences for the public as a whole. Second, cybersecurity is a political construct whose goals and means must be clearly defined and generally agreed on through a series of conversations. The variety of domains in which cybersecurity matters and software updates take place means that solutions may need to be adjusted at times. Finally, if cybersecurity is defined as a public good to be protected, it follows that it has to be at the forefront of the design stage of software and update channels instead of an afterthought for others to deal with.
Conversations about the definition and governance of software updates could involve a range of players, Mulligan said. The Federal Trade Commission has a long history of protecting consumers. The National Telecommunications and Information Administration is beginning to discuss these issues as they relate to IoT devices. The National Highway Traffic Safety Administration has also started discussing these questions. Mulligan expressed her hope that agencies increase their focus on individual privacy, especially in
the context of automotive over-the-air updates. These and other existing institutions can be leveraged to take on this task.
The bigger obstacle, to Mulligan, will be convincing industry to more effectively integrate cybersecurity protections into their product design objectives. The technical community has certainly embraced security as a design issue, and there have also been privacy improvements, but Mulligan noted that engineers by and large do not feel they have the expertise to make privacy decisions when creating software.
Building software with security updates in mind requires genuine dialogue between technical experts and policy experts. Both groups might at times be out of their depth, but effectively addressing issues around consumer protection, industry competition, and software engineering demands a high level of coordination and commitment on both sides. Bringing both technical and policy expertise to the design process is an important piece of the puzzle in supporting true cybersecurity.
Such a strategy also requires an educational commitment from universities—a commitment to discussing the legal and ethical consequences of software updates when teaching software design, Mulligan argued. Workplaces also need to prioritize such thinking so that engineers know that policies are in place to ensure that cybersecurity is addressed at every stage of design. Such policies should rely on an interdisciplinary approach to solving complex problems like software updates, Mulligan concluded.
Richard Danzig, Johns Hopkins University Applied Physics Laboratory, asked Mulligan to address the speed at which technology changes and proliferates. Given that this is a dynamic and not a static system, big problems are generated quickly. He likened the situation to the evolution of cars: As cars and their use changed over the decades and the challenges increased in scale, policy responses (licenses, roads, policing, and so forth) evolved in response. He wondered if public policy is destined to lag behind technological innovation, and, further, the degree to which public policy regulations might actually stifle innovation. He also raised the additional concern that policy makers may inevitably lag behind the technical reality in terms of their understanding of the issues.
Building on the example of cars, Mulligan noted how the automotive industry deals with recalls and other safety risks. Historically, car makers have relied on owners to have their vehicles serviced, and the turnout is usually below what car makers hope. In this respect, a technical breakthrough like Tesla’s over-the-air channel theoretically improves vehicle safety by sending updates in a timely way. Getting the policy part
right, along with the technical part, could lead to faster, safer patches, a “potential win-win,” she said.
But while most automotive product recalls have to do with reducing the risk to passengers, there is another security risk: software vulnerabilities that could turn a car into a weapon. The vulnerabilities discovered in Jeep Cherokees, which allowed remote hacking of the entire system, engine included, make these issues all the more urgent.3 As the potential dangers increase, Mulligan said, cybersecurity becomes a necessary public good, which means requiring manufacturers to update car software automatically instead of putting the onus on consumers. The risk now is not one of mere malfunction but of actual malfeasance—cars could be leveraged in real time to create an attack.
Regulations may not always be able to keep up with technological innovations, Mulligan acknowledged, but she said that right now, the marketplace is not keeping up either. The development of standards and definitions will make it easier to design, develop, and deploy secure products and patches. Regulation, in this case, can happen alongside the speed of technological change as long as both camps are committed to learning from the past, fixing current problems, and addressing future challenges. The policy and technology communities can collaborate on the values and issues—public, private, individual, corporate, and legal—and on finding the right balance among them. Furthermore, while these conversations might be easier to have in private settings, they need to happen in public, Mulligan said, because cybersecurity is a public good.
Several participants raised nuances of the business environment, particularly for IoT devices, that warrant consideration in the policy space.
Tadayoshi Kohno, University of Washington, noted the proliferation of companies launching products with funding from Kickstarter.com or similar services. After producing an IoT device, for example, and attracting perhaps hundreds of thousands of users, such companies can sometimes go out of business quickly. Where does that leave customers, who are using devices with software that is never going to be maintained or updated, he asked.
Mulligan shared two ideas: Devices could come with a kill switch that is activated once the software for the device is no longer being maintained (although depending on the device, this might pose safety concerns), or the software could become open source after the manufacturer ceases supporting it, so that others could take on the security support themselves.

3 A. Greenberg, 2015, “Hackers Remotely Kill a Jeep on the Highway—With Me in It,” Wired, July 21, https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway.
Drilling deeper into the example of the hacked Jeep that Mulligan raised in her presentation, one participant asked what policies could protect against this type of situation without infringing on other areas. Mulligan noted an important distinction between the Jeep case, which she believes involved an unintentional vulnerability on the part of the manufacturer (a mistake), and cases such as the Volkswagen test-mode code, which appears to have been a deliberate decision to deceive regulators.
New capabilities of IoT devices may require new certification and testing methods.
A policy framework would need to account for both types of cases, Mulligan said. One lesson that applies across the board is that the agencies responsible for standards and compliance need increased technical expertise, she suggested. Mulligan noted that there has been progress in this area. The Federal Automated Vehicle Policy focuses on increasing technical expertise within government and also creates an external advisory board made up of industry and academic experts to enable the agency to better understand new trends, research, and potential threats.4
These examples also raise the prospect that the new capabilities of cars and other IoT devices may require new certification and testing methods, Mulligan said. This might look similar to the level of regulation used by the U.S. Food and Drug Administration. While she noted that this might not be appropriate for all areas of IoT, “When we’re talking about large pieces of metal hurtling around at really fast speeds, it’s something we should at least be considering,” she said.
The discussion wrapped up with a question from another participant about the potential policy implications of the distinction between hardware and software products that reside on a customer’s device, where the user often has some control over software updates, and cloud-based services, where the user typically has no choice as to which version they are using. Mulligan noted that consumer protection agencies have historically focused on products sold directly to consumers—for example, stepping in if a company’s behavior seems deceptive or egregious—but said that cloud-based software and services will demand closer scrutiny in the future.