Nicko van Someren, Linux Foundation
Nicko van Someren is the chief technology officer of the Linux Foundation and a fellow of the Royal Academy of Engineering in the United Kingdom. His talk focused on his role as head of the Linux Foundation’s Core Infrastructure Initiative (CII), a nonprofit effort funded by the information technology industry that provides training, testing, and financial support for software security projects. Its mission is to improve the security of open-source software, an essential part of the Internet ecosystem.
CII was created after the 2014 Heartbleed bug found in OpenSSL, which left approximately 70 percent of the HTTPS services on the Internet vulnerable to security breaches of sensitive information. The Heartbleed bug was eventually patched, but 7 percent of Internet servers today remain vulnerable to Heartbleed, despite the fact that the patch was released 3 years ago and the bug was considered a critical security vulnerability. The experience sparked the creation of CII and underscores the fundamental challenge of disseminating software updates in the open-source environment, said van Someren.
CII focuses on a variety of methods to secure open-source software. One method is to devote resources to projects focused on open-source security, such as OpenSSL, OpenSSH, and GnuPG. CII also invests in infrastructure like the Network Time Protocol daemon (NTPD), timing software used until recently by many of the world’s major stock exchanges, which, van Someren mentioned, happens to be maintained part-time by
a single developer. In response to a question from Bob Blakley, CitiGroup, Inc., who wondered if there had been a risk analysis of the consequences if the NTPD stopped working or, worse, if it came under the control of a hacker, van Someren noted that CII had conducted a security audit on the NTPD code and that maintainers of NTPD have established the Network Time Foundation to support its continued maintenance. In addition, CII established a project known as Census that attempts to identify at-risk components of NTPD and similar infrastructure to determine the factors that contribute to risks, gauge vulnerabilities, and identify the community of people who could step in to maintain it if circumstances should change.
Elaborating on the NTPD story later in the discussion, van Someren described CII’s approach to risk assessment for maintenance of open-source software. His team looks at several characteristics, such as how many people maintain it, how vibrant a community it has, and how long it takes for bugs to be fixed, to calculate a risk score for each component. For products that are found to lack consistent and robust maintenance, CII is creating a “software orphanage” where engineers can monitor this untended software, respond to bug reports, and create and deploy updates.
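The risk-scoring approach van Someren describes can be illustrated with a short sketch. The specific factors (maintainer count, community activity, bug-fix latency) come from his description, but the thresholds, weights, and function name below are purely illustrative assumptions, not CII’s actual formula:

```python
# Hypothetical sketch of a maintenance-risk score along the lines CII's
# Census project uses. Factors follow the talk (maintainers, community
# vibrancy, time to fix bugs); the weights are illustrative assumptions.

def risk_score(maintainers: int, commits_last_year: int,
               median_days_to_fix_bug: float) -> float:
    """Return a 0-10 risk score; higher means more at risk."""
    score = 0.0
    # Very few maintainers is the dominant risk factor (the "bus factor").
    if maintainers <= 1:
        score += 4
    elif maintainers <= 3:
        score += 2
    # A quiet repository suggests a fading community.
    if commits_last_year < 12:
        score += 3
    elif commits_last_year < 50:
        score += 1
    # Slow bug fixes leave known vulnerabilities exposed longer.
    if median_days_to_fix_bug > 90:
        score += 3
    elif median_days_to_fix_bug > 30:
        score += 1
    return min(score, 10.0)

# An NTPD-like project: one part-time maintainer, modest activity.
print(risk_score(maintainers=1, commits_last_year=20,
                 median_days_to_fix_bug=60))
```

A component scoring near the top of such a scale would be a candidate for the “software orphanage” van Someren describes.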
Open-source software is not inherently any more or less secure than other software, but it does have a few different characteristics that affect the security landscape. First, its development process is fragmented, not streamlined, and often spread out across more people than in a closed-source development team. Also, instead of having top-down management pushing specifications, open source operates with a more collaborative spirit and an emphasis on running code.
These differences add to the complexities of software updates. Eric Grosse, an independent consultant, commented during Kevin Fu’s presentation that it might be helpful to see all the components within a software program in order to assess the need for updates on each of them. In open-source software, van Someren said, there is
often such a deep, layered stack of components, highly specialized to the user’s unique needs, that such transparency may not be feasible. These deeply layered setups also make sending updates difficult because of all the testing that has to be done for each component, in order, from the bottom of the stack to the top. While in theory open-source users can reconfigure their stacks at any time, in practice they are more likely to wait until another user has tested the piece one level down, which adds delays to any software update deployment as updates filter up the stack.
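The bottom-to-top retesting constraint is essentially a topological ordering of the dependency stack. The sketch below shows the idea using Python’s standard-library `graphlib`; the component names and dependency graph are hypothetical:

```python
# Sketch of why a patch at the bottom of the stack "filters up": each
# layer can only be retested after everything beneath it. The component
# names and dependencies here are invented for illustration.
from graphlib import TopologicalSorter

# deps maps each component to the components it builds on.
deps = {
    "openssl": set(),
    "libcurl": {"openssl"},
    "webserver": {"libcurl", "openssl"},
    "app": {"webserver"},
}

# A fix to openssl forces retesting in dependency order, bottom first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # bottom of the stack first, the application last
```

Each hop in that ordering is a point where, as van Someren notes, a user may wait for someone else to test the layer below, compounding the deployment delay.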
This collaborative, distributed model also means that it can take longer for the open-source community to identify and respond to vulnerabilities. Once a patch is built and made available, it still has to go through several layers of distribution, such as GitHub, Red Hat, and others, before reaching users. “We have got a bunch of extra layers, I think, that make the handling of software updates a bit more complicated and a bit different,” van Someren said.
Open-source software is not one monolithic entity, and it is deeply embedded across commercial software, numerous operating systems, and other crucial products, making the challenges of open-source software updates both diverse and pervasive. There is no one-size-fits-all solution, van Someren said, “because pretty much every project has its own unique way of doing things.”
Each project even has its own way of describing software updates, he observed. For example, when OpenSSL releases an update, developers call it a “security update” and detail all the changes it involves. With the Linux kernel, on the other hand, there are no specific “security bugs,” because the philosophy in that community is that all bugs could affect security. A user could reasonably wonder, however, whether it is necessary to install all the new kernels, which are released weekly, even if they are not specified as improving security, since there is a risk that changing kernels frequently could destabilize their system. By not indicating how severe each vulnerability is, the Linux kernel community could be inadvertently turning users away from deploying essential updates.
Another major difference with open-source software is the matter of liability. Commercial software often involves legal obligations for service and updates; with open-source software, there is no such agreement between developers and users. While some businesses offer service-level agreements to support open-source software, that is not feasible for every case or every organization. Those responsible for the Linux kernel, for example, do not want to take on any level of liability. “There are
a whole host of problems and opportunities that crop up from that abrogation of liability,” van Someren concluded.
“Free software,” a subset of open-source software that is free to use and alter, also brings a host of unique challenges. The most common example of this is the GNU General Public License (GPL), under which the Linux kernel is licensed. TiVo at one point used code released under the GPL, but its devices restricted users’ ability to update the software themselves by running only code signed by TiVo. When the next version of the GPL was drafted, its designers added “anti-TiVo-ization” provisions to prevent this behavior. While this was considered a victory for free software, it raises new software update challenges, such as the question of who is authorized to sign an update before it runs on a device.
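The “who is authorized to sign an update?” question can be made concrete with a toy sketch. Real devices use asymmetric signatures (e.g., RSA or Ed25519); the standard-library HMAC below is only a self-contained stand-in, and all the key names are invented:

```python
# Toy model of signed-update authorization. A device runs an update only
# if its signature verifies against a key on the device's authorized
# list. HMAC stands in for a real asymmetric signature scheme here.
import hashlib
import hmac

AUTHORIZED_KEYS = {b"vendor-key"}  # keys this device trusts
# The anti-TiVo-ization debate is, in effect, about whether the owner's
# own key may ever be added to this set.

def sign(update: bytes, key: bytes) -> bytes:
    return hmac.new(key, update, hashlib.sha256).digest()

def device_accepts(update: bytes, signature: bytes) -> bool:
    return any(hmac.compare_digest(signature, sign(update, k))
               for k in AUTHORIZED_KEYS)

firmware = b"firmware v2.0"
print(device_accepts(firmware, sign(firmware, b"vendor-key")))  # True
print(device_accepts(firmware, sign(firmware, b"owner-key")))   # False
```

In the TiVo case, only the vendor’s key was on the device’s list, which is precisely the behavior the later GPL provisions targeted.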
Van Someren suggested that those security liabilities are a good reason to avoid GPL on open-source products, despite some technical solutions that can improve security. In general, he noted, it is said that software can be secure or software can be free. “You can have one or the other—but you can’t have both,” van Someren summarized.
Another CII project is working to create a global platform of trusted execution engines for mobile devices. Some options being pursued would allow users a certain amount of control, and van Someren expressed optimism about their potential value.
Right now, the biggest impediment van Someren sees to software updates for open source is users’ common concern about whether a software update will cause instability across their systems. This concern partly stems from the fact that users aren’t sure exactly who made the patches or how they were tested. He told the story of a Wall Street chief information officer (CIO) who “nearly had a heart attack” when van Someren pointed out that many of the bank’s front-end Web services came from GitHub, which means that an attacker could commit code to a project used by the bank and thus insert vulnerabilities into its live online banking system. The CIO’s first instinct, to stop applying security patches, is not necessarily the best move. The story illustrates a key question that everyone is wrestling with in open source, said van Someren: Who has authority to determine what code is “good” and what code is not?
In the discussion, Forum Chair Fred Schneider reiterated the sense that fear of system destabilization is a main reason users may ignore updates. Van Someren noted that several Linux distributions do offer rollbacks if an update causes destabilization. Vendors of open-source distributions (as opposed to the community using free software)
also offer restabilization, because their business model depends on keeping customers satisfied.
Open-source creators should, in theory, retest any existing components that they add to new software, but that takes a long time to percolate up through all levels of software, especially when components are stacked so deeply. Open-source distributors like Red Hat have done this extra testing, which should increase stability, van Someren noted, but he emphasized that there really isn’t a single solution that would work in all cases. Because the open-source world is so varied, many projects have come up with their own systems for handling the destabilization issue.
Carlos Picoto, Microsoft Corporation, noted that using a public or well-known interface to deploy security updates does not always prevent destabilization. For example, he pointed to software that is either not using the common Microsoft API or is using it incorrectly; he noted that Linux kernels can be particularly problematic for Microsoft updates, for example, as can vendors who do not use properly compatible software.
Van Someren wrapped up with a discussion of the role of open source in Internet of Things (IoT) devices. As noted at several points in the workshop, startup IoT manufacturers often assume their products have a short life span and are, essentially, disposable. Van Someren countered that while that may be true for more esoteric products like a Wi-Fi-connected hairbrush, it is not for household necessities like water heaters and lighting systems, and it is certainly not for cars or industrial machines. “We need to think about what to do when, not if, the [IoT] vendor goes bust,” van Someren said.
One idea CII is considering for these situations is “code escrow,” in which vendors put the last version of their code, along with dedicated funding, into a trust that can then be tapped to continue to support the software when the company goes under. How exactly this would work—and whether companies would sign up for it—are open questions, van Someren said. In the discussion, Bob Blakley, CitiGroup, Inc., delved deeper into how companies could potentially be convinced to get on board with the idea of code escrow. Noting that the Dodd-Frank bill required financial institutions to set aside capital in case of emergencies, he speculated that regulations requiring software firms to escrow code on the basis of public interest might be worth considering.
Peter Swire, Georgia Institute of Technology, questioned how such an escrow system would work in the context of bankruptcy law, which prevents companies from
holding on to assets amassed within 6 months preceding a declaration of bankruptcy. Van Someren clarified that, ideally, an escrow would be established at the outset of launching a product, or, failing that, when the reality of bankruptcy is at least 6 months away.
In response to a follow-up question from Blakley, who suggested that such a system would require regulation to enforce it, van Someren clarified that he would prefer a mechanism such as a checklist or badge-earning system through which open-source IoT projects could self-assess their ability to create a secure development life cycle, similar to the Linux Foundation’s existing Best Practices Badge Program for open-source software. Such criteria would improve security outcomes by allowing companies to self-certify, but then make that certification public, where it could be scrutinized and verified. The best outcome, in van Someren’s view, would be to get the industry’s buy-in on such a program before approaching the Federal Trade Commission and asking it to “put some teeth behind it.”
One other possible mechanism for protecting consumers from the dangers of outdated IoT devices, van Someren suggested, is to take advantage of what’s known as a watchdog, a system configured to detect critical failures and trigger a complete reset of the system. In this vein, software updates could be seen as “food to feed the watchdog,” allowing a device to maintain its Internet connectivity. For this to work, the devices themselves—such as garage door openers, lights, and hot water heaters—would need to be able to perform their functions without Internet connectivity, although they would stop communicating with their mobile and Internet-based controls. This limits damage when devices stop receiving software updates. “We need to make sure that the mode of failure is disconnection, rather than the product stopping working,” van Someren said.