The workshop concluded with an overarching reflection by Deirdre Mulligan, University of California, Berkeley, followed by a short open discussion session.
Despite the clear need for an update infrastructure to support security throughout the software life cycle, the workshop underscored just how complex that proposition is, Mulligan said. The presentations demonstrated a great deal of variation in the approaches being used as well as the perceptions of what’s possible, from both a technical and a managerial perspective. There also was a great deal of variation in how different people and organizations perceive what they do and do not have control over, as well as how the external landscape influences their decision-making.
Mulligan reflected that she was surprised to learn how little overlap there seems to be in the types of software update challenges faced by different sectors. While some challenges are universal, the list of challenges seemed to grow longer and longer with each presentation.
The workshop surfaced some important differences in terms of the technical factors and the business factors at play in various types of situations, she noted, highlighting, as an example, the differences between Linux and Microsoft in terms of how updates are created and deployed. While open-source development allows for a lot of innovation and iteration, staging and rolling out updates in a closed system, such as Microsoft's, has its own distinct advantages. Working in a tightly controlled business environment and producing single-purpose products can also bring advantages, as exemplified by the experience of Schweitzer Engineering Laboratories, Inc. (SEL). Mulligan suggested that makers of other single-purpose devices, such as medical devices, could learn from the SEL approach.
Mulligan highlighted the idea that if actual software updates to a particular device aren’t feasible, then perhaps signatures, firewalls, patches, or other mechanisms that affect other parts of the system could be used to help limit attacks or the damage they can cause. Alternatively, at the other end of the spectrum, if a device or software can’t be updated, it is useful to explore ways for it to be disabled or at least disconnected from the network to limit the damage of a potential breach while retaining the device’s core functionality. For example, a “gatekeeper” could contain vulnerable software and compel a user to patch it, giving people both an education and an incentive to attend to updates while adding a layer of protection. While “cutting off” a device could frustrate users, it might be necessary to craft a cybersecurity policy that defines the situations in which doing so is justified in order to protect society as a whole, Mulligan suggested.
If a device can’t be updated, explore ways to limit the damage of a potential breach while retaining core functionality.
The appropriate timeline for supporting a product once it is sold, an issue that arose on multiple occasions, is also an important factor to consider, Mulligan reflected. She noted, for example, that it is useful to hear that Microsoft generally supports updates for 10 years on its operating systems, and Cisco supports updates for 5 years on its routers. It would be helpful to know how companies come to those decisions and how that could be applied in other markets, she suggested. While it’s not reasonable to expect everything to be maintained for 20 years, like SEL’s products are, some kind of guidance or transparency around these support timelines for products could be very helpful for consumers, consumer advocates, and decision makers.
Mulligan also highlighted the discussion around software inventories and other technical solutions that would allow users to assess the state of their software stack and determine whether updates are needed, an area that the National Institute of Standards and Technology (NIST) is supporting with its Software Identification (SWID) project. This idea, she suggested, could be very helpful for both businesses and customers. In addition, another important factor that arose multiple times is that updates are often released in a way that can alert hackers to the vulnerabilities they are fixing, underscoring the need to think about ways to mitigate that risk.
One aspect not covered very extensively in the workshop was the process by which companies discover software vulnerabilities and what influences their update process. Mulligan suggested this topic could perhaps be the basis for a future workshop. Other important related issues that were only touched on briefly but that could warrant further discussion include privacy implications, future business models, new software trends such as ephemeral code, and the need for a mechanism to involve the public in advancing conversations about what the appropriate limits to monitoring might be, Mulligan concluded.
Launching the discussion session, Butler Lampson, Microsoft Corporation, suggested that the workshop focused too much on how software operated in the past, when it was downloaded and expected to perform one function. Today, he said, software is embedded into complex, cloud-based systems that run across multiple devices. “I think most of what was said at the very least would need to be rethought in a fairly serious way in order to meet this new reality,” he said. While he acknowledged that in the future, many devices will indeed operate on the model described in today’s presentations, “they’re just going to be an increasingly small fraction of everything that’s going on,” he emphasized.
Bob Blakley, CitiGroup, said he was struck by the lack of a “baseline threat model” on the part of manufacturers and consumers alike. If such a model existed, he suggested, “It might be more obvious why [software] should be designed properly for updates, and it also might be more obvious why you want to update them.” For example, it’s not obvious to a consumer why a network-connected light bulb might need a software update, but there are nonetheless potential threats. A hacker could gain control of an individual light bulb and program it to flash in such a way as to induce seizures. Or, a large number of light bulbs could be controlled and used to originate a distributed denial-of-service (DDoS) attack. “But,” he cautioned, “in the absence of at least some notion of what each of these devices do, it’s hard [for consumers] to become psychologically fond of the idea that updating them is important.”