During the next day’s forum, Mike Walker was joined by three other privacy and computer security experts: Batya Friedman, professor in the Information School at the University of Washington; Aanchal Gupta, director of security at Facebook; and Lea Kissner, global lead of privacy technologies at Google. Their wide-ranging discussion of security and privacy was moderated by Ali Velshi, anchor and business correspondent for NBC News and MSNBC.
Kissner began by noting that people do not agree on exactly what privacy means in the context of computing. Some people think it means that no one will learn anything about them. Others think it means that they have control over who learns what and when. Still others think it means that they will not have intrusions into their devices. In effect, people have different needs, desires, and expectations about computing, said Kissner, and these values need to be built into the devices they use. For her part, she associates privacy with respect. A secure system is built in such a way that it does what people think it is going to do. Security and privacy also enable people to interact with each other well in a system; she noted that “we have seen lots of places on the internet where people don’t necessarily behave well toward each other.”
Friedman elaborated that people typically interpret privacy in terms of the control of information. But one of the first papers to discuss privacy in the United States, “The Right to Privacy” by Samuel Warren and Louis Brandeis published in the Harvard Law Review in 1890, interpreted privacy as the right to be left alone. As people bring computing into their devices, their homes, their workplaces, and their schools, it raises again the question about the right to be left alone, not to be interrupted, and to experience a sense of self. “If our technologies are continually poking at [us], we are being deprived of our ability to experience ourselves,” said Friedman.
More broadly, privacy is related to other human values, such as trust, a sense of agency, identity, respect, and dignity, Friedman continued. For example, early wiretapping cases involving the collection of communications data were based on the claim that if information is collected for one reason, the organization that has collected it cannot arbitrarily decide to use it in some other way. Part of the argument hinged on what people expected the data collected about them to be used for. An important question now is whether, with new business models, people have different expectations.
Friedman also made the observation that questions about privacy and security do not involve only computer scientists. As devices become ubiquitous, whether in people’s surroundings or on/in their bodies, virtually all engineers will need to be concerned with privacy and security. “Whatever your field of engineering and your practice as an engineer, these issues are there.”
In characterizing the range of threats to privacy and security, Walker likened the current situation to a pyramid. The width of the pyramid is the number of actors, and its height is their expertise and access to resources. At the bottom of the pyramid are numerous low-tier actors who try to do things like insert ads on webpages to install malware. In the middle are agents who, for example, use encryption to lock people’s data and then sell the keys back to them—global parasites, Walker called them. At the very top are nation-states and their equivalents that have the ability to target the large, sophisticated systems that perform essential functions in society, such as counting votes or running electrical grids. “There are very few organizations that look like that,” he said, but experience has revealed that they exist and that modern enterprises have to deal with them.
Good engineering makes it costlier for these individuals to attack, said Walker, lowering the number of individuals who do so. But different approaches are needed for the professionals at the tip of the pyramid who are using sophisticated tools to break into systems. These tools do not rely on guessing passwords, and no alert pops up on a system to say that it has been compromised when an adversary is nevertheless in the system. “It is very difficult to make any promise of absolute immunity against career professionals who are studying the system,” he said. For example, threats may exist from people who are recruited within an organization. “The comforting thing that I would like people to take away about that threat model of highly advanced actors who can defeat the entire system in depth is that that [part of the] pyramid is very small.”
According to one point of view, the technology sector is an independent and neutral entity in these contests at the tip of the pyramid, Walker continued. It is the responsibility of the international legal community to come to an agreement about nation-state forces involved at this level, but Walker commended Microsoft president Brad Smith’s proposal for a “digital Geneva Convention,” which would commit governments to protecting civilians from nation-state attacks.
Gupta pointed out that the situation has changed over time, with new kinds of threats appearing at all levels of the pyramid. “Our
attackers are getting smarter and smarter. They are going to use nontraditional ways. There will be times when you will be caught off-guard. It is important for us to have our plans in place when we are in such situations.”
For example, a person might buy a device to turn a house’s Christmas lights on and off with a phone. But that system is connected to a wireless home network, which creates a vulnerability. “If you are expecting end users to think through these things as they buy this simple plug to turn on and off their Christmas lights, that is the wrong expectation,” she said. “We really need to get in front of this. We need to make sure that these devices are protected by design.”
Attackers are building strong tools, and the numbers of devices are proliferating, especially as the Internet of Things takes shape. The designers of such systems need to build into them the capacity to patch them without a user’s intervention. People should be able to go about their daily business while their devices are upgraded and protected. “We, as the security industry, are not there yet. We have a long way to go. We have to mature our systems so we are not laying this burden on users and can do it for them transparently.”
Kissner made the similar point that malicious actors develop new methods over time, such as the targeted phishing attacks known as spear phishing, which were used for some of the interference with elections. She said that the security of election systems “is not something that I am as comfortable with as I would like.” Good reports have been issued on how to improve the security of voting machines, but she warned about other aspects of elections, such as voter registration systems. Hackers could unregister people from voting rolls or insert their names on lists of felons, which would cause them to be removed from the rolls.
“We need to look at all these different areas and invest in them, even the ones that people don’t think about as much, like the voter registration systems.” In such cases, it would be helpful to put the entire internet into a machine learning model that is powerful enough to detect the subtle signals generated by attackers. “They won’t set off a giant alarm, but there will be some little traces that you need to be able to find. You need a lot of skill and a lot of data to be able to do that.”
Walker, too, noted that new threats can be unexpected and remain undetected. In World War II, Abraham Wald examined the distribution of bullet holes in airplanes to make a recommendation to the Air Force about where planes should be armored. He saw that there were fewer reports of bullet holes in the engines than in the rest of the plane. He therefore recommended that armor be concentrated around the engines, because the airplanes being shot in the engines had crashed and could not be sampled. Said Walker, “It’s sometimes hard to see the holes in the data.”
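Wald’s survivorship logic is easy to demonstrate with a toy simulation (the hit areas and loss probabilities below are invented purely for illustration): planes hit in the most vulnerable spot rarely return, so that spot looks deceptively clean in the data gathered from returning aircraft.

```python
import random

random.seed(0)

# Toy model: each plane takes one hit, uniformly across three areas.
# Engine hits down the plane with high probability, so engine hits are
# underrepresented among the planes that return to be inspected.
AREAS = ["engine", "fuselage", "wings"]
LOSS_PROB = {"engine": 0.8, "fuselage": 0.1, "wings": 0.1}  # invented numbers

all_hits = {a: 0 for a in AREAS}
surviving_hits = {a: 0 for a in AREAS}

for _ in range(100_000):
    area = random.choice(AREAS)
    all_hits[area] += 1
    if random.random() > LOSS_PROB[area]:  # the plane made it home
        surviving_hits[area] += 1

for a in AREAS:
    print(f"{a:8s} hits seen on returning planes: {surviving_hits[a]:6d} "
          f"(hits actually taken: {all_hits[a]:6d})")
```

Although each area is hit about equally often, the returning planes show far fewer engine hits, which is exactly the hole in the data that Wald spotted.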
The nature of the threats often depends on the characteristics of the user. In an exchange with moderator Velshi, Kissner pointed out that journalists, along with human rights workers and activists, are considered some of the most difficult people to secure. They have interesting information and often end up having powerful organizations that are angry with them. They do not “speak technology, necessarily, as a first language,” and they are busy, have sensitive information, and “need to be protected really, really strongly.”
Assaults on privacy and security do not come only from malicious actors. For example, in response to a question, the panelists talked about misinformation being disseminated over social media. To address this threat, Facebook has tried to increase the transparency of its news postings by making it possible to click on a link and see the source of any news story, said Gupta. “You can get a lot more information. You can make a judgment call whether this is authentic news or not.”
The panelists also discussed the use of email or other online activities to direct specific ads to people. As Friedman pointed out, when Google first introduced targeted ads with Gmail (its web-based email service), some people using Gmail realized that the ads they were receiving were targeted and thought that a person at Google was reading their email messages. “It was actually a piece of software,” she clarified. “There is no other human being that knows that information,” but, from other perspectives on privacy, it “begins to move into this area of the right to be left alone.”
Kissner pointed out that essentially the same technology is used to remove spam from email inboxes, and people do not object to this use. But they tend to be uncomfortable with ad targeting, even if it does not raise privacy issues. “It is a very similar computation, but people feel differently about it, which is why I think about building these technologies with respect.”
But she acknowledged that predicting what will make people uncomfortable can be very difficult. “For example, some people are
going to be super excited that you just suggested a chicken recipe to them, and some people will be deeply offended by that. It is hard to predict, because humans have all of the diversity and the beauty of the entire world.” Google tries to avoid surprising people and wants people to have an option to prevent surprises.
Finally, in the context of efforts to address the range of threats, Walker briefly mentioned squirrels. “When I say ‘squirrels’ in this room, everybody laughs. If you go to a national power grid conference, nobody laughs about squirrels. Squirrels are deadly serious business, because they are an adversary that is constantly attacking the world’s power grid. They are a very real threat to power delivery. The reason we have a resilient power grid is in large part because it has had to become resistant to squirrel attacks. The next time you see them, thank them for their service.”
Building security and privacy into a device and a system has implications throughout the design process, observed Kissner. It requires many different skill sets and user experiences. It takes people who understand how very large systems work. It even takes philosophers and ethicists. “We have to all come together behind this goal.”
The task of building security into systems is complicated by their pace of development and numbers of users, Gupta added. Facebook, with 2.5 billion users on its platform, is constantly developing its system, and security reviews that take a long time add friction to the process. “What do you do in those circumstances?”
Three things, she said. The first is to build automated technical solutions that designers, users, and security experts can apply. Underlying tools like libraries can provide an infrastructure with which software engineers can write secure code so that they do not need to become encryption experts. Tools such as two-factor authentication and login alerts can empower technology users. The security community can contribute by sharing what they have learned about security risks and solutions across companies.
The second is to build lightweight processes that can perform tasks like anomaly detection and threat modeling to protect data. Security reviews of hardware and software will still be necessary, but some of the burden can be relieved by the right tools.
The third is to develop security programs. These may be internal to a company or external. For example, ThreatExchange is a platform built by Facebook that shares indicators of compromise (known as IOCs in the industry) in an organized fashion. “Now we have more than 700 companies on this platform sharing what they see in their own network,” Gupta reported. “They can collaborate. That makes us a better security community.”
Security needs to be central to everyone’s planning and development, Gupta continued. “You cannot do your work in this day and age unless you secure what you are carrying.” That requires people to know and be able to access their rights on a platform, including how their data will be used. “Be very upfront. Be very transparent about it. Once you have shared that, follow that and also give [users] a choice so that they can opt out.”
On Facebook, for example, people often wonder about the advertisements they see. But they can change their preferences on their Facebook page to indicate that they no longer want ads to be targeted. They will still see ads, but they will not be targeted to their profile. The platform overlays security on top of that choice to make sure it follows through on its users’ expectations.
People need to understand privacy settings so they can make good choices, Kissner observed. If privacy choices become “essentially the control panel from Apollo 11,” so that they are very difficult to use effectively, people will not feel comfortable with that system. On the other hand, if people are able to make choices (e.g., using cryptography so other people cannot see data) a system will be more understandable and predictable, even if one consequence of that choice is to slow the system.
On this topic, Gupta noted that Facebook is beginning to use artificial intelligence to figure out what are the right controls for a person’s account and to surface those controls to the user. For example, with third-party applications on Facebook, the system might remove access to data after 30 to 60 days of nonuse of an application, rather than letting
the application pose a continuing vulnerability. “That way we can protect their account better versus them having to stay on top of this at all times. These are the kinds of advances we have to make across all the features.”
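The stale-access policy Gupta described can be sketched in a few lines (the grant structure, helper name, and 60-day cutoff here are illustrative assumptions, not Facebook’s actual implementation):

```python
from datetime import datetime, timedelta

# Hypothetical sketch: a third-party app's access grant is revoked once the
# app has gone unused for longer than the cutoff, so a forgotten app cannot
# remain a standing vulnerability.
STALE_AFTER = timedelta(days=60)  # assumed cutoff, per the 30-60 days above

def expire_stale_grants(grants, now):
    """Split app grants into (kept, revoked) lists by time since last use."""
    kept, revoked = [], []
    for grant in grants:
        if now - grant["last_used"] > STALE_AFTER:
            revoked.append(grant)
        else:
            kept.append(grant)
    return kept, revoked

now = datetime(2018, 6, 1)
grants = [
    {"app": "quiz-app",  "last_used": datetime(2018, 1, 15)},  # long unused
    {"app": "photo-app", "last_used": datetime(2018, 5, 20)},  # recently used
]
kept, revoked = expire_stale_grants(grants, now)
```

Running such a check periodically shifts the burden of auditing app access from the user to the platform, which is the point Gupta was making.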
New technologies can also make a big difference. For example, the Google security key is a device that plugs into a USB port or talks with a computer via Bluetooth and forms a second layer of authentication. It uses public key cryptography to authenticate to a server, and it cannot be tricked the way that, for example, two-factor authentication based on text messages can be. “Since we deployed these, we haven’t seen any of our employee accounts get compromised, which is a really strong statement given the kinds of adversaries we are dealing with.” Security keys are not right for everyone, Kissner warned, because if they are lost or dropped in water and destroyed, people can lose access to their data and computing ability. “If I show up with just a password and without that security key, how is Google supposed to know that is not an attacker? Account recovery mechanisms are really hard to secure.”
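The security key’s advantage over passwords can be sketched with a toy challenge-response protocol (textbook RSA with tiny fixed primes, for illustration only; real security keys use standardized curves and padding): the server verifies a signature over a fresh random challenge, so there is no reusable secret for a phisher to capture.

```python
import hashlib
import secrets

# Toy challenge-response signatures with textbook RSA (small primes, no
# padding -- illustrative only, not a real security-key protocol).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent stays inside the "key"

def sign(challenge: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# The server issues a fresh random challenge for every login attempt, so a
# signature captured by a phisher is useless for any later session.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
assert not verify(secrets.token_bytes(16), sign(challenge))
```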
Friedman observed that the lower-level architecture strongly shapes what will be easier and harder to do at a higher level. Technologies therefore need a way to reach back into the lower-level architecture and facilitate what people want to do. “As these technologies unfold in society, they will be appropriated in ways that we can’t anticipate,” she said.
“We can anticipate different kinds of harms or biases that might unfold, we can be on the front end of thinking about those things. But good engineering also means that we are fundamentally building architectures that, as they unfold and are appropriated in society, we can easily reach in and make those adaptations.”
She explained that a successful design enables someone to construct a mental model that allows that person to use a technology in a way that reflects how the person wants data to flow or not flow or be shared. People use technologies to apply to a university, communicate with family members, or do a business transaction. As a user, she said, “I don’t want to think about privacy. I don’t want to think about security. I want to know how to make good choices.” Good technologies enable people to construct mental models that let them make good choices. “It is very hard to achieve, but that is the gold standard.”
Walker pointed out that the computer security community is divided over the issue of “cyber hygiene,” which holds that a good user who makes good choices is going to be safe while someone who makes bad choices will have an unsafe outcome. Some members of the computer security community contend that there should be no such thing as cyber hygiene: People should have only good choices available to them. It should not be possible to compromise a person’s computer. “That sounds like a very simple mandate, but,” he echoed Friedman, “it is incredibly difficult.” Thousands of people in security engineering teams across the country are working on that goal.
Engineering requires a moral as well as a technical imagination, Friedman said. Good engineering practice needs to encompass not just the optimal solution but both the average case and the worst case. “We need not just to ask the technical questions—will my bridge fall, will my plane drop out of the sky? We also need to ask about the societal and ethical dimensions of those [questions]: Does my system treat all groups equitably? What are the worst-case security risks?” These social and ethical questions need to be asked of research, practice, and mentoring, she said.
This will often require adopting a longer timeframe. In the computing industry, engineers typically design for 3 weeks to 6 months, deploy for 18 months, and consider a technology obsolete after 5 years. But computing is now going to be integral to many physical and biological materials, with millions and billions of devices generating data for 20,
30, 50 years and more into the future. “How does this scale?” Friedman asked. “What steps should we take now to ensure that in 50 years society is in a healthy, flourishing state, [and] for whom?” Fully considering the scales of time and number will change the kinds of infrastructure being built, the kinds of data being gathered, and the kinds of data being retained, she said.
Friedman also asked whether the computer industry is generating technologies that could put society at risk in the future. For example, digital data are actually quite fragile. The lifetime of digital data on the servers where they are stored is only about 5 years, 10 years at the outside. “If somebody told you that you put everything that was the infrastructure of your society on material that was going to decay within 5 to 10 years, and you couldn’t completely secure that,… would you shake your head and say, ‘What kind of country, what kind of scientific infrastructure, would do such a thing?’ Yet that characterization is not so far from where we are.” Getting lost in the next technical fix or strategy can obscure hard questions like these that need to be framed in terms of relevant historical examples.
Human moral capabilities are lagging behind technological ones, she added. Over the past 200 years, sharp exponential curves describe the augmentation of physical strength, speed, magnification, and manipulation of matter and energy. In comparison, the ethical and moral capacity of individuals and society has changed little during that time. “Our technical abilities, the things that we have built, can be used for good or harm, but they have grown at enormous scale. We could feed the planet; we could destroy the planet with atomic bombs.… If we are honest, our wisdom for using our scientific and technical prowess has not grown apace.”
These observations lead to some hard but important questions: How can technologies and engineering practice be better aligned with ethical and moral capabilities? What checks and balances should be engineered into new technologies? What, if anything, should not be built, and why? What social processes should be put in place to hold society accountable for how technologies are engineered and deployed? “I don’t claim to
have much insight into the answers to these questions, but I think asking them explicitly is very important for us as we go forward,” Friedman said.
The moral dimensions of design also arise in the obligations of companies to protect unsophisticated users from attacks, such as the unsuspecting user who clicks on a phishing email, as noted in one question from the audience. Walker stated that “technology companies certainly have a collective responsibility to protect users. Working in security teams, we talk about that all the time.” During a security incident, “it is always all hands on deck. If you run a global technology company and that company has software installed on hundreds of millions or billions of devices, then any threat to the ecosystem is a threat to that brand.”
Security experts have always worried about how to keep secrets safe. They have worried less about how to protect people from themselves—for example, by posting inappropriate material online. “The security community is brand new to that question,” Walker said. “There we need answers from our futurists.”
Kissner reported that some research has been directed to the problem. For example, a team of hers did research on the security and privacy practices of survivors of intimate partner abuse to design products that are safer for people.
In most such circumstances, no single choice works for everyone, she noted. Someone may benefit from sharing their location on Google Maps, such as someone with a health emergency, while other people may be harmed by that action, such as someone who is being stalked. “I don’t want to make assumptions that everybody in the world is the same,” she said. “We are successful if everybody is in the right place, whether that is using a product or not using a product.”
As Gupta said, technologies need to work for the entire spectrum of people. “That is never easy. You have to literally think like them.”
In response to a question from the audience, the panelists spent some time discussing the European Union’s General Data Protection Regulation (GDPR), which imposes regulations on companies to give greater
control to individuals over the use of their personal data. Kissner said that she has been spending “a lot of time getting Google ready for GDPR.” Google has developed systems that can reliably delete data at scale, download data for inspection, and conduct impact assessments for data protection. GDPR requires more paperwork, she said, but the capabilities it mandates are available not just for people in the European Union but for many people elsewhere.
Gupta made the point that companies tend to do the right things even when there are no regulations. “All of these things that are required by GDPR, companies already do them.” Companies that decide not to do the right thing will still find a way not to do the right thing, she added; regulations do not necessarily help in that regard. For example, anyone who collects credit card information has to go through an auditing process, but providers who go through an audit can still be breached. Putting in place regulations to conduct the audit has not made the problem go away. “We have to think beyond these things. We have to make it our daily habit to do the right things for our users.”
Walker also noted that all the major cloud companies have been working hard on GDPR. “It has been a massive lift,” but the result is that GDPR compliance is “cooked into the DNA” of all the major cloud computing companies. “It was a forcing function that had to happen.”
Yet the consequences of complying with GDPR remain uncertain, he said. As an example, he mentioned the phenomenon called bit squatting. Random events, like a cosmic ray from space, can hit the memory in a computer and flip one bit. If that bit stores a domain name, the domain name will be one bit off. An attacker can register a domain name that is one bit different from a legitimate domain name, and devices whose memory has been corrupted by a random failure will go to that alternate registered domain instead of the intended domain. Security engineering teams need to protect against bit squatting, and to do that they need to know who is registering these domains. However, the GDPR prohibits demanding that people list personally identifiable information on the internet, which can make it impossible to know who is registering a domain. “There were a lot of predictions about what was going to happen. Was there going to be a spam apocalypse when we couldn’t figure out who was registering domain names?” This apocalypse has not occurred, said Walker, although the reasons remain
obscure. “Is it because the attackers haven’t figured out how to take advantage of this? Is it because data mosaic owners, people who work with a whole bunch of different datasets, have other ways of finding out what fraudulent domain names are?… Or was private registration of domains not as big a problem as defenders thought it was going to be?” The answers remain unknown.
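The mechanics of bit squatting are simple enough to sketch: flipping any single bit of an ASCII domain name yields a nearby name, and an attacker registers the flips that are still valid hostnames (the helper below is an illustrative sketch, not a defensive tool):

```python
import string

# Characters allowed in a hostname label (simplified).
VALID = set(string.ascii_lowercase + string.digits + "-.")

def one_bit_variants(domain: str):
    """All domains that differ from `domain` by exactly one flipped bit
    and still consist of valid hostname characters."""
    variants = set()
    data = bytearray(domain, "ascii")
    for i in range(len(data)):
        for bit in range(8):
            flipped = bytearray(data)
            flipped[i] ^= 1 << bit       # simulate a single-bit memory error
            try:
                candidate = flipped.decode("ascii")
            except UnicodeDecodeError:
                continue                 # flip produced a non-ASCII byte
            if candidate != domain and all(c in VALID for c in candidate):
                variants.add(candidate)
    return variants

# "example.com" has dozens of registrable one-bit neighbors, such as
# "ezample.com" (the 'x', 0x78, with its second-lowest bit flipped).
neighbors = one_bit_variants("example.com")
```

A device whose memory has been corrupted by a random bit flip will resolve one of these neighbors instead of the intended domain, which is why defenders want to know who registers them.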
In addition, Walker observed that regulations have the potential to go awry. An example is the requirement that people change their password every month. Many people began to use a shortcut of changing their passwords in simple and easy-to-remember ways. “That actually made things easier” for hackers, he said. “All human beings solve this problem exactly the same way.” But even when this system was abandoned, it remained in the regulations, so that people still have to rotate their passwords. “It has been mandated that we do this, and that mandate became a snapshot of doing things incorrectly.”
As companies move to solve problems, regulations should not make the process more difficult. “We need to be careful not to impede that trajectory and make a snapshot of doing things incorrectly.”
Friedman expressed her appreciation for the European push for GDPR, since it helps individuals control their information. But, like Walker, she pointed out that “the design of regulation, like the design of technology, is a very hard problem.” Just as technologies need to change as problems arise, regulations need an “agile process of making adjustments.”
On the topic of regulation, Friedman also commented on the right to be forgotten. In society, people have ways of taking back things that they say in the heat of the moment. They might remember what was said, but the memory fades with time. But if the same things are said in an email, the mechanisms for social recovery are not as easy. “The words are there. They are recorded.” By and large, people are successful at recovering from conflict in social life. What happened can be forgotten or softened. The right to be forgotten can help bring those elements into the digital domain. “The idea of this right to be forgotten is really important at a time where so much data is recorded.”
However, Friedman questioned whether a piece of data can be deleted with absolute confidence. “Is there any piece of data that we can
with 100 percent confidence know we have removed?” Data are always being replicated. Someone could put data on a flash drive, and efforts to eradicate those data might not eradicate what is on that drive. This relates to the issue of trust, she said. “If the public and the larger society are going to trust us as engineers, we have to be very clear about what we tell them about what can we do technically.… If we tell them we can really remove everything and then, lo and behold, somebody else had a copy on a thumb drive and it shows up, and that happens a few times, then our credibility goes way down fast.”
Great progress has occurred in security and privacy that is often overlooked, Gupta noted. In the early 2000s, web attacks were rampant, cookies were simple to steal, and hackers could easily pretend to be someone else. Now websites take care of many such problems, browsers have become much more secure, and the jobs of attackers have become much harder. “They have to be more creative now,” said Gupta. “They have to look at more venues, [and] not just because we are building these security functions. As a community we are coming together. Users are much smarter. They know how to secure their accounts with all of the features they have been provided.”
Walker made a similar point, noting that in the early 2000s computers were plagued by what were called worms, self-reproducing viruses that exploited flaws to make copies of themselves. They could go through the internet in a matter of hours and cause tens of billions of dollars in damage. The computer security community got worms under control “not through any particular single innovation or magic bullet but through very careful engineering, software armoring, a whole series of advances,” which have so far kept a global catastrophe from happening.
Phishing emails, which try to trick people into giving up their secrets, provide another example of progress in security. The world is leaving the password behind, Walker observed. For example, hardware authentication makes it impossible for someone to give up a secret that can be exploited. “Now there are no phishing attacks at Google because there is nothing to convince anyone to give up.” These and other “little victories” come not through fixing an architectural flaw that should have been spotted earlier but through reacting to the cleverness of attackers and through careful engineering by dedicated teams of hundreds of people.
Kissner pointed out that improvements are still needed in many areas. People still use lots of passwords, and many get phished to give up those passwords. Google routinely scans the entire internet for phishing attacks and publishes a feed that provides warnings for all kinds of applications and browsers. The system is not perfect, she said, since a phishing attack can occur in seconds, which is faster than Google can scan the internet, but improvements continue to be made.
Friedman cautioned that pursuing a single solution with a single business model is inherently dangerous. In any kind of ecosystem, greater diversity enables a broader range of ideas, choices, and solutions. From that perspective, she said, “corporations are too powerful in owning the territory of the solutions that are being explored.” Addressing that monopoly on the solution space may require more public sector activity or different kinds of solutions coming from the scientific community, she said, which could require a different role for regulation. “Is there a way in which regulation could work to diversify, to require companies to pursue multiple kinds of paths?” An example is the advertising business model that currently dominates the sector, which requires that activities and transactions be observed, recorded, and analyzed to generate revenues. Many other kinds of business models exist that would not require extracting as much information or data from people to succeed. “That is a choice. Corporations could make different choices. The public doesn’t have an opportunity right now to go to companies that are offering different kinds of choice.”
On the issue of diversity, Kissner noted that individuals and small groups are not, on their own, going to be able to solve many pressing security questions. She has built a team that is extremely diverse, including neuroscientists, social workers, ex-military personnel, members of LGBT+ groups, former teachers, and former lawyers. "We have people who have done all sorts of things. People who were born on every continent except Antarctica. We have people who have seen all of these sorts of things and have different skill sets. I brought them together so that we can try to understand this stuff better." Even then, her team will need to talk with others to solve the problems they are confronting.
Technologies that are currently on the drawing board will continue to yield progress, all the panelists observed, and some have tremendous potential. Walker mentioned such possibilities as DNA-based data storage and Project Natick, a second-generation underwater data center that his laboratory is building north of Scotland, which is powered and cooled by the ocean and can store data for as long as 20 years. He invited audience members to check out the center's live fish cam.
Walker also noted that every major cloud vendor is working on technology that would lock the vendor completely out of its customers' data, because the demand for such systems is strong. "Those technologies are on the roadmap. Everyone is working on them." As a specific example, he cited a technology he saw demonstrated while working at DARPA. It used a technique called homomorphic encryption, which can perform operations on data while they remain encrypted, to handle six voices in a conference call with echo cancellation and voice mixing. The problem, he said, is that the technology is incredibly demanding of computing power. The "long march" between a technology being possible and a global product with that technology inside "can be decades," he said. "It has often been engineering constraints—the tyranny of the possible and the tyranny of the now—that has selected the model we have that works."
Kissner agreed that the technical capability does not yet exist to make all the choices that people might want to make. For example, she, too, pointed out that homomorphic encryption, which allows computation without seeing the underlying data, can be overwhelmingly computationally intensive for even relatively simple tasks, such as using the contacts on a person's phone to find people on a platform. Performing this task in a reasonable time would have taken all of Google's available computers and more. "We literally couldn't run that one feature at scale. We had to give up."
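The property both panelists describe, computing on data that stays encrypted, can be seen in a toy additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This sketch uses the Paillier cryptosystem with tiny hardcoded primes purely for illustration; real deployments use 2048-bit moduli, which is part of why the cost Walker and Kissner describe is so high.

```python
from math import gcd
import secrets

# Toy Paillier parameters: insecure, illustrative primes only.
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # modular inverse of lam mod n

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n); randomness r hides repeats."""
    r = secrets.randbelow(n - 2) + 2
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Recover m using the private values lam and mu."""
    x = pow(c, lam, n2)
    return (x - 1) // n * mu % n

# Homomorphic addition: the server multiplies ciphertexts and never sees
# the plaintexts 12 and 30, yet the result decrypts to their sum.
c1, c2 = encrypt(12), encrypt(30)
assert decrypt(c1 * c2 % n2) == 42
```

Note that even this addition-only scheme requires exponentiations modulo n squared for every value touched, which hints at why fully homomorphic computation at the scale Kissner describes remains out of reach.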
At the same time, the solutions are not just technological, Kissner added. In particular, technology companies, regulators, and the public "need to develop societal and economic mechanisms so that we have robust ways of interacting," she said. "We don't have all of the tools we need to make all of the options available." This relates even to how power relationships work in society. "So much of how people interact is based on power between groups and power between individuals. How can we build the technical mechanisms that we need in order to make some of these mechanisms work?"
Walker made a similar point in the context of concerns about election security. "All technology adoption by human civilization is push and pull," he said. "The technologists create new solutions. The market or the public demands it." Technologies exist that can safeguard election security and privacy. "There are end-to-end verifiable voting systems that exist to preserve ballot box secrecy while allowing you to carry public proof that your vote was reported accurately." Companies in the United States have built open-source, highly secure, formally verified voting systems. But there has not been much demand for these systems. "Pull as hard as you can," he urged members of the audience. "Democracy matters."
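The core receipt idea behind the end-to-end verifiable systems Walker mentions can be sketched with a simple cryptographic commitment: the voter keeps a secret nonce, a hiding commitment to the ballot is published, and the voter can later check inclusion without the public record revealing the vote. This is a hypothetical simplification; the actual systems he refers to add mechanisms such as homomorphic tallying or mixnets so that the tally itself is publicly provable.

```python
import hashlib
import secrets

def cast_ballot(vote: str):
    """Commit to a vote: the published digest hides the choice because the
    random nonce is known only to the voter."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()
    return commitment, nonce

# Public bulletin board: anyone can read it, but commitments reveal no votes.
bulletin_board = []

receipt, nonce = cast_ballot("candidate-A")
bulletin_board.append(receipt)

# The voter verifies their ballot was recorded (inclusion on the board) and
# that the receipt really binds to their choice, without disclosing it.
assert receipt in bulletin_board
assert hashlib.sha256(f"{nonce}:candidate-A".encode()).hexdigest() == receipt
```

The receipt is safe to carry publicly, which is the property Walker highlights: secrecy of the ballot box combined with public proof that the vote was recorded.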
The future of computer security will continue to involve a complex interplay between technological capabilities and human expectations, the panelists observed. Similarly, the competition between new technological capabilities and continued assaults on privacy will persist. “The reason that research is underway is because people are concerned,” Walker said. “The question is going to be whether we can get these technologies out of prototype, out of research, and into the core internet fabric.… It really does always feel like a race.”