Kevin Fu, University of Michigan
Kevin Fu is an associate professor of computer science at the University of Michigan and co-founder of Virta Labs, a health-care cybersecurity company serving health-care providers and medical device manufacturers. He explored the technical aspects of software updates, discussing the evolution of updates over time before delving into contemporary problems, with a particular focus on the area of embedded medical devices.
As a graduate student 17 years ago, Fu focused on creating secure, scalable, high-performance software updates. At that time, software updates were distributed as RPM Package Manager (RPM) packages downloaded from websites. Each package was individually signed, which meant that the update had been authenticated and thus could be assumed to be trustworthy. “The beauty was you could download it and not have to worry about the package being secure, because it was signed,” Fu reflected.
However, although each package may have been signed, the overall system was not signed, which could lead to problems. One such problem was what Fu referred to as the “freshness factor,” in which users could be tricked into installing old, validly signed updates with known vulnerabilities.
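The rollback problem can be illustrated with a minimal sketch (not any actual package manager's logic; the HMAC here merely stands in for the public-key signature a real system would use). A per-package signature proves who produced an update but not when, so an installer must also track a monotonically increasing version to reject stale, signed packages:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # placeholder; real systems use asymmetric keys

def sign(payload: bytes) -> bytes:
    # Stand-in for the vendor's package signature.
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, signature: bytes,
                  version: int, last_installed_version: int) -> bool:
    # Step 1: authenticity -- was this package produced by the vendor?
    if not hmac.compare_digest(sign(payload), signature):
        return False
    # Step 2: freshness -- refuse any version at or below the one already
    # installed; otherwise a signed-but-vulnerable old package still installs.
    return version > last_installed_version

old_pkg = b"pkg-v3"  # genuinely signed, but obsolete
ok = verify_update(old_pkg, sign(old_pkg),
                   version=3, last_installed_version=5)
print(ok)  # False: valid signature, but stale version, so it is rejected
```

Note that step 1 alone accepts the replayed package; only the version check in step 2 stops the “freshness” attack Fu describes.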
Turning to antivirus updates as an example, Fu noted that today, it is common for companies and universities to require that users regularly update antivirus DAT files to detect new viruses. In an experiment 11 years ago, Fu’s students demonstrated how easy it is to establish a root shell on a computer using an antivirus software update; the experiment revealed that McAfee, the popular antivirus software company, was not checking the cryptographic signatures on its updates. In essence, McAfee’s update channel, which was meant to increase software security, actually created new risks.
Unfortunately, many software products still do not properly authenticate their updates, leaving this channel open to exploitation, Fu said. Antivirus software can cause other problems as well: Fu described an instance in which antivirus software at a Rhode Island hospital misclassified a critical Windows DLL as malicious, and the hospital’s admission systems ground to a halt, forcing the hospital to stop admitting patients except for those with gunshot wounds.
Embedded medical devices raise particular—and often overlooked—concerns, Fu explained. Before wireless updates to pacemakers, changing a device’s settings required inserting a needle into the pacemaker through a wearer’s armpit and manually adjusting a dial. Now, software is updated over the air, which is far easier for wearers, although this creates new types of vulnerabilities.
When medical device makers expressed their belief, at a Food and Drug Administration (FDA) workshop, that malware was not a concern for an automated external defibrillator being developed, Fu recruited his colleagues Dawn Song and Steve Hanna to prove them wrong. The team created a custom firmware update that included malware that could spread from a hospital computer, onto the defibrillator, and back onto more hospital computers, revealing that there appeared to be no authentication at all on the firmware going into the defibrillator.
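The missing safeguard in that experiment can be sketched as follows. This is a hypothetical illustration, not the defibrillator's actual update path: the device refuses to flash any firmware image whose authentication tag does not verify, and an HMAC with a provisioned key stands in for the public-key signature a real device would use.

```python
import hashlib
import hmac

DEVICE_KEY = b"device-provisioned-key"  # placeholder secret

def package_firmware(image: bytes) -> bytes:
    # Vendor side: prepend a 32-byte authentication tag over the image.
    tag = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return tag + image

def accept_firmware(blob: bytes) -> bool:
    # Device side: recompute the tag and flash only if it matches.
    tag, image = blob[:32], blob[32:]
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

genuine = package_firmware(b"defib-fw-v2")
tampered = genuine[:-1] + bytes([genuine[-1] ^ 1])  # flip one bit of the image
print(accept_firmware(genuine), accept_firmware(tampered))  # True False
```

With no check of this kind in place, any image the device receives is treated as genuine, which is how the team's malware-laden update could propagate.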
Fu also related the story of a ventilator company that, after an FDA recall, instructed customers to download a software update from its website. Although initially encouraged that the company was providing an update, when Fu tried to download it himself, his Web browser presented a malware warning. Digging deeper, Fu discovered that hackers had laced the update with drive-by downloads through an opening left by an outdated version of Microsoft IIS and unmaintained server scripts, leading the Google Safe Browsing service to flag the software download as suspicious. Fu was alarmed to learn that he was the first to report the problem, even though the update had, in theory, been downloaded by many actual users of the software.
Despite these examples, manufacturers are making progress, Fu said. For medical devices, for example, there are now detailed regulatory guidelines that spell out the obligations of manufacturers and consumers (in this case, the hospitals). Philips, a maker of medical devices and other electronics, is considering providing consumers with a list of the software components used in its devices, so that they have a better understanding of the risks they take when using each product, Fu noted. That may not completely solve the problem, but it would help to know exactly what is inside these complex systems, including third-party software, said Fu. A fully linked vulnerability database, he suggested, could also help consumers figure out where their risk lies and who is responsible for managing it.
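The combination Fu sketches, a component list cross-referenced against a linked vulnerability database, can be illustrated in a few lines. The component names, versions, and the vulnerability identifier below are invented for illustration only:

```python
# Hypothetical software bill of materials for one device.
sbom = [
    {"name": "openssl", "version": "1.0.1"},
    {"name": "busybox", "version": "1.31.0"},
]

# Hypothetical vulnerability database keyed by (component, version).
vuln_db = {
    ("openssl", "1.0.1"): ["CVE-EXAMPLE-0001"],  # invented identifier
}

def assess(sbom, vuln_db):
    # For each listed component, collect any known vulnerabilities,
    # giving the consumer a concrete picture of where the risk lies.
    findings = []
    for comp in sbom:
        key = (comp["name"], comp["version"])
        for vuln_id in vuln_db.get(key, []):
            findings.append((comp["name"], comp["version"], vuln_id))
    return findings

print(assess(sbom, vuln_db))
```

Even this toy version shows why transparency matters: without the component list, the consumer has no key with which to query the vulnerability database at all.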
Fu noted that the Association for the Advancement of Medical Instrumentation, the standards body for medical device safety, is also working on this problem. Last summer the group released the first FDA-recognized standards for premarket security on embedded devices, and it is now tackling standards for software updates. However, he said, crucial questions remain to be answered, including how to define the responsibilities of the producer versus the purchaser.
Although cryptography can help address key security properties, such as integrity, authenticity, and freshness, in practice it is difficult to ensure all of these solutions are implemented flawlessly, Fu said. In the scheme of current vulnerabilities, what is most crucial, he argued, is safety—particularly for “cyber-physical” devices with moving parts, such as cars, satellites, and medical devices. He also stressed the importance of human factors. Engineers must focus on the end users, he argued, who, more often than not, are not trained information technology security professionals. “Putting the onus on the user is a pretty big ask for the average American,” he said.
Other questions concern who is responsible for applying the software update—the manufacturer or the user—and whether the update is optional or required. The degree of risk can be a key factor in these decisions, Fu observed. Several years ago, he said, Medtronic discovered a faulty electrode in its defibrillators. A hardware recall, requiring surgical removal of the device, carried a small but real risk of death. Faced with this conundrum, Medtronic came up with a clever solution: it developed a software update, delivered wirelessly, that would measure the fitness of the electrode and make a risk-based decision as to whether the patient should accept the extra risk of removal and reimplantation, or whether the risk was small enough to live with.
The timing of update deployment is also important to consider, Fu noted; if safety is paramount, then additional verification, validation, and other engineering would be required. Rather than asking “Can we be hacked?,” Fu believes the more important questions are these: How well can we survive these attacks? What kind of tolerance do we have? How can we fail gracefully?
In the discussion, Eric Grosse, an independent consultant, wondered whether it would be legally possible to attach a copyright notice to open-source tools in such a way that a user could extract a list of all the software components operating in a system, along with their version numbers, and thus more accurately assess whether updates are needed.
While recognizing that the legal questions involved in such a solution are better answered by a lawyer, from a technical perspective, Fu suggested such a solution would find some support in the National Institute of Standards and Technology (NIST) Cybersecurity Framework. He pointed to three pillars for industrial control systems within the framework: First, it’s important to know exactly what assets are involved and what the risk is. Second, the proper controls should be deployed depending on the risk. And finally, it is important to continuously measure the effectiveness of those controls. Making all software components transparent, as Grosse suggested, would answer the first question. This is an important step, Fu emphasized, because often even product managers themselves aren’t aware of all the software involved in a complex system.
Deirdre Mulligan, University of California, Berkeley, weighed in with a legal perspective. She suggested that Grosse’s goal would perhaps be better fulfilled using open-source licensing or the Creative Commons approach, which grant the user more rights than copyright alone as a legal mechanism. That could be helpful, she said, as could creating a set of standardized software disclosures. The National Telecommunications and Information Administration project that Mulligan mentioned in her presentation is researching similar ideas. Whatever path that group takes, Mulligan stressed that detailed disclosures about software updates, maintenance, and other issues would be essential to consumers.
Nicko van Someren, Linux Foundation, mentioned that a Linux project, the Software Package Data Exchange, could also provide a mechanism for making what is inside these products more visible.
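The Software Package Data Exchange (SPDX) can express a bill of materials in a simple tag-value format. A simplified, illustrative fragment (the package name, version, and document fields below are invented) might look like:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-device-firmware

PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.0.1
PackageDownloadLocation: NOASSERTION
```

A real SPDX document carries additional required fields (creator, timestamp, namespace), but even this fragment conveys the core idea: a machine-readable record of what is inside the product.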
Tadayoshi Kohno, University of Washington, asked the group to consider whether software updates could be one element of a broader set of solutions to vulnerabilities, rather than the whole solution. As an example, he pointed to Microsoft’s Shield, a system designed to detect attack signatures that exploit known vulnerabilities while engineers are working on a patch. He noted that other kinds of instrumentation and mitigations are also important in improving cybersecurity.
Recognizing the reality that software updates are sometimes an afterthought in the software community, Fu noted that concepts of “controlled risk” versus “uncontrolled risk” might be useful for framing the challenges after real-world deployment. Software updates should be balanced between those kinds of risks, he said, bearing in mind that there is always the risk that an update could do further harm. This risk balance might also shift depending on whether you are considering harm to a few individuals or harm to a larger population.
Another potential remedy to consider is turning off the software, which could halt intrusions but have other negative consequences. The question of providing software updates is probably not a yes-or-no question, Fu said, but one in which different risks must be weighed.
Fred Schneider added that it may sometimes be feasible to update an associated part of a software system rather than the device itself. In the recent Mirai attack, for example, firewalls at the Internet service providers could have been used to block the large amount of traffic coming from the compromised devices, thus compensating for the fact that the device manufacturers had not built sufficient protections into the devices themselves.