Some Potential Research Directions for Furthering the Usability, Security, and Privacy of Computer Systems
A principal goal for the workshop was to identify research questions and areas within the emerging field of usability, security, and privacy that would assist in increasing the security of computer systems used by individuals and organizations. Limiting the discussion to research questions was, perhaps not surprisingly, a challenge. Participants approached the problem from a multitude of perspectives, reflecting the many disciplines represented at the workshop and the involvement of academic, industry, and government researchers and practitioners. And because many participants were deeply engaged with the usability-security-privacy challenge, there was a natural temptation to explore possible solutions as well as fruitful research areas. The following sections summarize research directions that emerged from the questions posed to workshop participants, the breakout sessions and their reports back, and the plenary presentations and discussion.
DIMENSIONS OF USABILITY, SECURITY, AND PRIVACY
Breakout session participants spent a considerable amount of time grappling with how to define usable security, working under the belief that one cannot improve something that cannot be measured and that one cannot measure something without a good definition for what one seeks to measure. Indeed, definitions were discussed in every breakout
session in some form, leading the committee to identify the need for better agreement on terminology and definitions as one of the four overarching research challenges at the intersection of usability, security, and privacy (see Chapter 5).
Usability for Whom?
Although usability is often equated with the experience of end users of IT systems—and this was indeed the focus of many presentations and discussions at the workshop—usability concerns for other groups were also discussed. Notably, administrators of IT systems also contend with systems that are difficult to understand and configure. The security or privacy consequences of a misconfiguration or other error by a system administrator can, of course, be much more serious and wider in scope than the consequences of an error of a single user. However, the line between administrator and end user is somewhat blurry because every home user is in effect the administrator of his or her own home network and the computers and other devices attached to it, which suggests that both system administrators and home users stand to benefit from improvements aimed at either group.
Usability also matters for system developers. More usable tools would make it easier for them to avoid or detect design and coding errors that affect security and privacy. Moreover, there is an opportunity to improve the usability and security of systems by introducing better usable security and privacy features to development environments and libraries.
To what extent do demographic and cultural differences affect usability, security, and privacy? One particular question that came up repeatedly during the workshop was whether it was true that younger generations are more security-savvy and less privacy-sensitive. A related question, assuming that younger users are less privacy-sensitive, was whether they would retain that perspective as they grew older.
Finally, participants cautioned that academic studies of usability are not necessarily representative of the user population. Such studies typically employ small groups of college students, a weak experimental design on two counts: the samples are too small, and college students are not representative of the broader population. Companies can make the same mistake in the usability studies they use to test new services.
Is Usability for Security and Privacy Special?
How might usability for security and privacy be distinct from the broader topic of usability of information technology? One difference of
possible significance is that security inherently involves an actor other than the user—the active adversary who will try to take advantage of usability flaws and may also attempt to mislead the user through “social engineering.” Another is that security involves focusing the user’s attention not only on the task at hand but also on the future consequences and aftereffects of the task. Yet another is that security is generally not the end user’s primary concern. Further investigation of the similarities and differences might yield insights as to what lessons can be transferred directly from other usability work and where the issues are in fact different.
METRICS, EVALUATION CRITERIA, AND STANDARDS
Related to metrics is the question of what criteria should be used in evaluating and accepting the usability and security of an IT system and how one might go about certifying a system as aligning security, privacy, and usability. How might such criteria be instantiated as future guidelines? Are there exemplar software applications that could be identified as benchmarks for security and usability and therefore serve as a source for creating a set of criteria for usable, yet secure, systems? Several discussions considered how such criteria might vary according to application, context, or perspective. For example, how might one divide applications into categories in which similar weights would be given to security and usability? Despite the likely differences among the categories, might it be possible to develop a common checklist that contains a core set of usability and security criteria that would cover 80 percent of all applications?
Workshop participants also grappled with the question of perspective. How might criteria for a usable and secure system differ for people in different roles, including system administrators, security professionals, system owners, end users, security designers, and developers?
Another question raised was whether compliance with usability and security standards might become a condition of connecting to enterprise or public networks. Finally, with respect to the development of standards, it was observed that such efforts would be challenging today given the limited understanding of what constitutes a system that is usable and secure and that appropriately protects personal information. What would be required to develop useful standards? What organizations and institutions are best positioned to develop them?
UNDERSTANDING USERS
Central to the topic of usability is a better understanding of users. An approach known as user-centered design addresses the needs, desires, and limitations of users. The related field known as human-centered computing concerns information technology artifacts and their relationship to people. Both approaches are informed by and depend on observation of human behavior. Workshop presentations and discussions approached this topic from several perspectives: user mental models, risk perception and communication, and user incentives. (Incentives, another important topic with respect to understanding users and their motivations, are considered separately below, because they also apply to other actors.)
User Mental Models
“Mental models” describe people’s thought processes and understanding. (A related term used by some speakers was “user metaphors.”) Workshop participants suggested that work to understand and enhance models of security and privacy would be valuable.
A first research topic and logical starting point is to gain a better understanding of the mental models that people apply to security and privacy today. What are the best ways to elicit these current mental models? What do they tell us that could be used to make improvements in today’s systems and in the design of future systems? What specifically do system designers and developers need to know about user mental models to design systems and applications that are usable yet secure?
A second research topic is the development of better models that could be adopted in system design. For example, are there models for security or privacy that have the concreteness and usefulness of the now-familiar desktop and folder scheme? This nearly ubiquitous metaphor
has been enormously successful in making computing accessible to a broad population. What abstractions might make security and privacy more usable?
A third topic is how to deploy better models—that is, how best to introduce new models to users and incorporate these new models in future system design. (This issue also relates to the topic of user education, discussed below.) One specific suggestion was that it might be useful to develop “user stories” describing appropriate use of IT that highlights the importance of security and privacy. Such user stories could be created after the development of a better understanding of how users make use of security indicators and interfaces. Taking an epidemiological perspective, it would be useful to understand how many individual users’ mental models would have to be changed to make a noticeable impact in improving computer security “for the masses.”
A fourth topic is to study how well users understand their own mental models. Can they assess their technical proficiency well enough to know whether or not they are capable of making informed security decisions? One suggestion for assessing this understanding is to compare self-reports with test results in order to determine how well different types of users can make informed security decisions.
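One way such a self-report-versus-test comparison might be structured can be sketched as follows; the rating scale, the mapping between the two scales, the threshold, and all of the data are hypothetical illustrations, not instruments discussed at the workshop:

```python
# Sketch of comparing self-reported proficiency (1-5 scale) with a tested
# score (0-100 scale) to flag miscalibrated users. All names, numbers, and
# the linear mapping between scales are invented for illustration.
def calibration(self_rating, test_score, gap_threshold=20):
    """Map a 1-5 self-rating onto the 0-100 test scale and compare."""
    expected = (self_rating - 1) * 25   # rating 1 -> 0, rating 5 -> 100
    gap = expected - test_score
    if gap > gap_threshold:
        return "overconfident"          # may make uninformed decisions
    if gap < -gap_threshold:
        return "underconfident"
    return "calibrated"

# Hypothetical study participants: (name, self-rating, test score).
participants = [("alice", 5, 45), ("bob", 2, 70), ("carol", 4, 80)]
for name, rating, score in participants:
    print(name, calibration(rating, score))
```

Overconfident users (here, "alice") would be the group of greatest concern, since they may believe themselves capable of security decisions that testing suggests they are not.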
Risk Perception and Communication
Do people understand how secure (or insecure) their computers are? Do they understand the concept of risk—that is, the probabilities and consequences—and the risks associated with particular actions? Do people understand the implications for themselves and others of a lack of attention to security? Do they understand the risks associated with system failure, disclosure of confidential information, or the release of their private information? Do people care less about damage to others if they themselves do not pay or incur damages for security breaches caused to others? What role does the information source play in getting users to change their behavior? What impact do disclosures about the use of personal information have on the use of security functionality? How might the literature on risk communication developed largely in other domains be applied and extended to enhancing security and privacy? How can what is known about how people understand and react to risk be used to induce them to do things that are good for them and for society?
A closely related set of issues involves the languages and processes that can best be used to communicate with users, including those within particular organizations as well as the general public. How can best practices be transferred to those who compose training materials, documentation, and user messages?
Learning About and From Mistakes
A number of comments at the workshop related to the importance of understanding users’ mistakes that play a role in security incidents—mistakes that are often a direct result of usability problems. A better understanding of these mistakes could be fed back into better designs and better user education. One suggestion was to develop a taxonomy of human security errors and mistakes, which would help with identifying general classes of problems and thus a set of general solutions that would influence behavior. A first step would be to conduct a literature review and meta-analysis of past studies.
Participants noted that it is not easy to gather information on user mistakes. How does one get users to figure out that they have made mistakes? How can users be convinced to report mistakes (and how are the associated privacy issues to be dealt with)? How does one create an environment in which users are motivated to report errors (so that design and user education can be improved), yet maintain a culture of user accountability? What can be learned from the records that organizations (e.g., enterprises or Internet service providers) keep about security incidents?
It was also noted that individual users may have quite different definitions of what constitutes an error; there may be many security incidents in which the end user would not view something as an error even though others might. What are useful definitions for developers, managers, and users to adopt?
User Education
Users who better understand how to use systems and who appreciate the security and privacy implications of their actions are better positioned to protect security and privacy. Better education can help overcome usability challenges; however, workshop participants cautioned that an emphasis on education should not be used as an excuse for failing to improve usability. One suggested area for research is to gain a better understanding of the knowledge that users currently have and how they attained that knowledge.
User education was also suggested as a way of influencing values associated with security and privacy. How can one influence norms for acceptable and/or appropriate behavior with respect to security and privacy? How is a “culture of security” to be created among different user groups? What can be learned from such fields as social psychology or social marketing?
Participants also suggested examining the limits of user education as a way of improving security and privacy. For example, to what extent is it valid to assert that “if they understood why they are being inconvenienced, users would follow the directions”? The discussion of incentives, below, suggests that there are significant limits. Another limiting factor may be that security is generally not the end user’s primary concern.
A final set of questions relates to curriculum and institutionalizing education. What are core concepts that one should teach? How could user education best be incorporated into specific settings such as kindergarten through grade 12 education or employee training programs? How might user education be introduced into informal learning settings such as libraries? How might other informal learning techniques be used—techniques such as videos that play while software is loading or online games that teach about security and privacy? Under what circumstances should user education be mandated, and by whom?
INCENTIVES FOR BETTER SECURITY AND PRIVACY
Many workshop participants observed that incentives are an important force in shaping the behavior related to security and privacy. Incentives can be applied to different actors. (For example, should the onus for security be placed on a home Internet user or on that user’s ISP or on both?) One might even consider how incentives apply to adversaries. (For example, if the cost of mass-scale attacks is increased, will adversaries instead conduct targeted attacks?)
Incentives can take both positive and negative forms. For example, employees can be given positive incentives through the use of awards for maintaining good security, or they can be given negative incentives through reprimands or poorer evaluations for security failures. In the marketplace, positive incentives might include favorable reviews of products with better security, whereas negative incentives would include liability for inadequate security or negative reports in the press.
Importantly, incentives for usability, security, and privacy are not necessarily aligned. To take a simple example, an employee who faces pressure to meet a deadline may choose to sidestep security measures that slow his or her work. Conversely, a system administrator who fears being sanctioned for a possible security breach may impose onerous restrictions on user activity that reduce usability.
Externalities play an important role in considering incentives. Individuals can easily take steps that have little consequence for themselves but negatively affect many others. For example, household computer users do not face the cost of damage that poorly secured computers may have across the Internet when those household users fail to take simple steps to prevent their computers from being infected. Nor does an employee incur the total cost of allowing a virus to infect a corporate network. The result is that individual users will tend to pay less attention
to security than is desirable from an organizational or societal perspective. How can the right incentives be created so that users choose a level of security that better protects everyone else? What fraction of such failures can be attributed to inadequate incentives, a lack of information, or the poor usability of today’s security tools?
More generally, participants noted a misalignment of individual, corporate, and societal incentives. Modern computer systems, especially in a network setting such as the Internet, exhibit very significant differences between the effects of an insecure computing environment on an individual and its effects on society. Each individual often faces only small negative consequences from a lack of security on his or her own computer system, but when such a lack of security is widespread, the aggregate negative effects can be very large, even catastrophic. This divergence between private and public incentives to secure computing systems predictably leads to a less secure IT environment as computing systems become more interconnected and more complex, making better alignment of private and public incentives an important challenge for policy makers.
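The externality argument can be made concrete with a toy calculation; every number below is invented purely for illustration. A self-interested owner secures a machine only when the expected private loss exceeds the cost of securing it, even when the expected harm to others would easily justify the effort:

```python
# Toy model of the security externality. All values are hypothetical:
# a user weighs the cost of securing a machine against only the private
# expected loss, ignoring the expected loss imposed on everyone else.
def secures(cost_of_securing, p_compromise, private_loss):
    """A self-interested owner secures only if it pays off privately."""
    return cost_of_securing < p_compromise * private_loss

COST = 50.0           # effort/price of securing the machine
P = 0.10              # probability of compromise if left unsecured
PRIVATE_LOSS = 200.0  # expected direct loss to the owner
SOCIAL_LOSS = 5000.0  # expected loss to others (e.g., botnet traffic)

private_payoff = P * PRIVATE_LOSS                 # about 20: below the cost
total_payoff = P * (PRIVATE_LOSS + SOCIAL_LOSS)   # about 520: far above it

print(secures(COST, P, PRIVATE_LOSS))   # the owner's rational choice: False
print(COST < total_payoff)              # the socially optimal choice: True
```

Under these (invented) numbers, the rational owner leaves the machine unsecured even though securing it would be worthwhile for society by an order of magnitude, which is exactly the incentive gap the participants described.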
With respect to incentives for businesses, participants asked where the money is in usable security. How might business models be adjusted to make usable security profitable? How might regulatory models be adjusted to make unusable security less profitable? They also pointed to a particular problem of users who continue to use systems even though their subscriptions to security updates have expired. Are there viable business models in which security subscriptions never expire?
Behavioral aspects surfaced repeatedly in the workshop discussions, notably in the observation that sometimes individuals seem not to act in a fully rational way in protecting security. Such seemingly irrational behavior can have multiple explanations—actors not being well informed, actors considering a wider range of outcomes than have been anticipated by the system designer, or such ideas as “bounded rationality” that have been developed in behavioral economics.
Finally, participants observed that it is hard to develop appropriate incentives when little is known about costs or impacts. For example, relatively little is known about the cost of identity theft or cybersecurity breaches. This is due in part to the inherent difficulty in obtaining access to the relevant data. Neither private firms nor the government is incentivized to share such data (see Chapter 5). The ironic result is that it may be necessary to address the issue of incentives to share data in order to acquire a better understanding of how to increase incentives to enhance security and privacy.
APPROACHES TO CONSTRUCTING SYSTEMS WITH “USABLE SECURITY”
One specific approach to improving the usability of systems is to reduce the burden on the end user through automation. People may be more satisfied with systems when they have more control; but in the context of security, the more control the user is given, the greater the opportunities for introducing vulnerabilities or security breaches. To what extent and when should usable security aim to automate security decision making and remove the human from the loop entirely, rather than provide a more usable interface for the human? Despite the appeal of taking the human out of the loop, participants cautioned that there are limits, because automation cannot handle unexpected, novel events—and the one thing that is known about such events is that they are certain to occur at some point.
Several specific ideas were proposed. One was to use machine learning from context to come up with an acceptable security policy for a user without the user’s directly having to adjust security or privacy parameters. Another idea was to have a user establish policy by specifying desired outcomes and having the system express those outcomes as a set of security rules. The system would then verify that the rules derived from those outcomes are consistent and complete, and only ask the user for additional instructions in the event that they are not. Research could help shed light on the feasibility of such approaches.
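The second idea might be prototyped along the following lines; the rule format, the resource names, and the consistency and completeness checks are all hypothetical illustrations of the approach, not a design proposed at the workshop:

```python
# Hypothetical sketch: user-specified outcomes compile into
# (subject, resource, decision) rules; the system then checks the rules
# for contradictions and for gaps, and asks the user only when needed.
def compile_outcomes(outcomes):
    """Translate desired outcomes into a set of access rules."""
    return {(subject, resource, decision)
            for subject, resource, decision in outcomes}

def conflicts(rules):
    """Find (subject, resource) pairs that are both allowed and denied."""
    return {(s, r) for (s, r, _) in rules
            if (s, r, "allow") in rules and (s, r, "deny") in rules}

def uncovered(rules, resources):
    """Find resources no rule mentions: the user must be asked about these."""
    covered = {r for (_, r, _) in rules}
    return set(resources) - covered

outcomes = [("guest", "photos", "allow"),
            ("guest", "photos", "deny"),   # contradictory user wishes
            ("owner", "photos", "allow")]
rules = compile_outcomes(outcomes)
print(sorted(conflicts(rules)))                       # [('guest', 'photos')]
print(sorted(uncovered(rules, ["photos", "email"])))  # ['email']
```

The sketch illustrates the division of labor the participants described: the system does the mechanical verification, and the user is consulted only about the genuine contradiction and the uncovered resource.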
Authentication Beyond Passwords
Many participants noted the well-known shortcomings of passwords with respect to security and usability. Simply put, the effort spent entering passwords and recovering or resetting them when they are forgotten was noted to be a significant waste of time. Passwords that are easy to remember are also easy to guess, whereas passwords that are hard to remember are more easily forgotten or subject to compromise if they are written down. Systems often require users to change passwords periodically, which may also lead users to write them down or to use guessable mnemonic schemes for generating their passwords. Systems typically require their own passwords, often with conflicting rules about acceptable user names and passwords, meaning that users must keep track of a wide array of credentials.
Alternatives that address these shortcomings have been developed. They are used for certain applications but have not enjoyed widespread support and use. These alternatives include hardware token authentication, which provides stronger authentication than do passwords, and (primarily within enterprises) federation-based authentication schemes that free users from keeping track of multiple passwords. Several barriers to these alternatives were mentioned, including a lack of awareness about alternatives, the cost of implementing a new approach, and the lack of off-the-shelf “drop-in” replacement technology. Another barrier is the potential impact on privacy arising from the potential for the use of alternatives to link activities across multiple systems. Several techniques have been proposed to reduce the likelihood of such linkage, but they may nonetheless be susceptible to determined attack.2
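The essence of a federation-based scheme can be sketched in a few lines: an identity provider (IdP) issues a signed assertion about the user, and a relying party (RP) verifies it, so the user keeps a single credential. The sketch below is purely illustrative; real protocols such as SAML and OpenID Connect are far richer, use public-key signatures rather than the shared HMAC key used here for brevity, and include expiry, nonces, and replay protection:

```python
import hmac, hashlib, json, time

# Minimal illustration of federation-style authentication. The key name,
# claim fields, and services are hypothetical; a shared HMAC key stands
# in for the digital signatures real federation protocols use.
SHARED_KEY = b"demo-key-shared-by-idp-and-rp"

def idp_issue(user, audience):
    """Identity provider: issue a signed assertion for one relying party."""
    claims = {"sub": user, "aud": audience, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def rp_verify(payload, tag, audience):
    """Relying party: check the signature and the intended audience."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return None                      # forged or corrupted assertion
    claims = json.loads(payload)
    return claims["sub"] if claims["aud"] == audience else None

payload, tag = idp_issue("alice", "photos-service")
print(rp_verify(payload, tag, "photos-service"))  # alice
print(rp_verify(payload, tag, "other-service"))   # None
```

The audience check also hints at the privacy concern the participants raised: because the same identity provider vouches for the user everywhere, it is naturally positioned to link the user's activities across relying parties.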
Participants offered a number of open questions that research could address:
What obstacles have been encountered to the deployment of alternatives to passwords? What can be learned from data and research collected by industry groups such as the OpenID Foundation?
What have been the barriers to the adoption of federation-based authentication schemes? Would standardizing the rigor of the systems used for authentication help?
Suppose that authentication schemes were to be considered as they related to the needs of sets of users: How would one even begin to classify what the different sets of users are?
There are populations of users that have already been issued strong authenticators (e.g., the federal government’s Personal Identity Verification card and its predecessor, the Common Access Card). What has prevented their use outside the workplace?
Suppose that users had a single authenticator that could be used universally. Would they prefer to have that supplied by the government or by private industry? How aware and concerned are people about the potential for the linkage of activities across multiple systems? What approaches are best suited for preventing linkage across multiple systems, and what would it take for them to be widely deployed?
Processes and Tools
Participants suggested a number of development and management processes and tools that would help advance usable security and privacy—as well as associated research challenges:
Creating better developer support tools. Guidelines, principles, and design patterns can all help support developers in building systems that provide usable security and privacy. Research questions include how well usable security can be built into such elements as integrated development environments or libraries and how one would evaluate the effectiveness of support tools.
Dealing with dynamic threats that develop between design iterations. Security threats involve adversaries who seek to exploit weaknesses—often more rapidly than the typical design-cycle time. How are threats to be dealt with that arise between typical design iterations? Can the design process be sped up?
Making recovery more usable. Recovery from security breaches, where the extent of the damage done may be difficult to determine, is a major challenge. How can recovery processes be made more secure and usable?
Simplifying user decisions. Complexity impedes usability. How can one make the best use of such approaches as establishing useful bundles of security settings or secure default settings in order to reduce the burden on users?
Redesigning infrastructure. Are there ways that key infrastructure such as the Internet or operating systems (which can be difficult to change in major ways given their enormous installed base) might be redesigned to provide more usable security and privacy? How might barriers to making such changes be overcome?
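The "simplifying user decisions" item above might be made concrete with a small sketch of bundled, secure-by-default settings: instead of adjusting many individual toggles, the user picks one named bundle, and anything unrecognized falls back to the most protective choice. The setting names and bundle names below are hypothetical illustrations:

```python
from dataclasses import dataclass

# Hypothetical sketch of "bundled" security settings: a handful of named
# bundles replaces many individual toggles, and the defaults are secure.
@dataclass(frozen=True)
class SecuritySettings:
    auto_update: bool = True      # secure defaults: protection is opt-out,
    firewall: bool = True         # exposure is opt-in
    telemetry: bool = False
    remote_login: bool = False

BUNDLES = {
    "strict":   SecuritySettings(),
    "balanced": SecuritySettings(telemetry=True),
    "open":     SecuritySettings(telemetry=True, remote_login=True),
}

def choose(bundle_name):
    # An unknown name falls back to the most secure bundle, not the least.
    return BUNDLES.get(bundle_name, BUNDLES["strict"])

print(choose("balanced").telemetry)   # True
print(choose("typo").remote_login)    # False: the fallback fails safe
```

The design choice worth noting is the failure mode: a mistyped or missing preference yields the strict bundle, so user error reduces exposure rather than increasing it.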
Usability Through the “Stack”
Computer systems are often thought of in terms of layers—for example, the commonly used Open Systems Interconnection model for communications networks consists of the physical, data link, network, transport, session, presentation, and application layers. Similarly, software runs on top of operating systems that provide abstractions for accessing computing, storage, and display resources. Such layering hides the details below each layer from the layers above. Much of the work in usable security has focused on advances at the topmost layers—in user interfaces. But it was suggested at the workshop that changes to this conventional model might enhance usable security and privacy, and participants offered several questions regarding how these conventional abstractions might be reconsidered.
How “far down the stack”—that is, how far down into the design of the underlying system—is it necessary to go to provide usable security?
Can one enhance usable security by tweaking the abstractions that are used today? What are possible improvements that might result from rethinking the abstractions? How might lower layers be redesigned to support metaphors that would improve usable security? What ambitious new usable security goals could be achieved by redesigning the stack?
What information is needed from lower levels to interact with the user about security errors? How can the application developers at upper levels be helped to understand and use the security information from lower levels?
What if the abstractions were to be changed, say from hosts in the network to user data? How would one express protocols in those terms? Would this help with users’ control of their information? Does moving the security abstractions to the data make them safer? How can a life-cycle view of user data be incorporated—that is, who can it be sent to, who can store it, how is it protected, and how is it controlled?
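One way the last set of questions might be explored is to prototype data-centric protection, in which policy travels with the data rather than residing in host or network configuration. The following sketch is purely illustrative; the class, its fields, and the enforcement model are hypothetical:

```python
import time

# Hypothetical sketch of data-centric security: each item carries its own
# policy (allowed readers, expiry), giving the owner a life-cycle view of
# who can access the data and for how long, independent of which host
# happens to store it.
class ProtectedData:
    def __init__(self, value, readers, expires_at):
        self._value = value
        self._readers = set(readers)
        self._expires_at = expires_at   # epoch seconds

    def read(self, principal, now=None):
        """Every access is mediated by the policy attached to the data."""
        now = time.time() if now is None else now
        if now >= self._expires_at:
            raise PermissionError("data expired")
        if principal not in self._readers:
            raise PermissionError(f"{principal} may not read this")
        return self._value

item = ProtectedData("medical record", readers={"alice", "dr_b"},
                     expires_at=2_000_000_000)
print(item.read("alice", now=1_700_000_000))   # medical record
try:
    item.read("mallory", now=1_700_000_000)
except PermissionError as e:
    print(e)                                   # mallory may not read this
```

A real system would also need the policy to survive copying and transmission—typically via encryption with policy-bound keys—which is precisely where the open research questions about expressing protocols in data-centric terms arise.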
Other Opportunities for Improving Systems
Presentations and discussions advanced a number of specific opportunities for improving the usability, security, and privacy of IT systems:
Distinguishing green and red machines. Butler Lampson’s talk (Chapter 2) suggests enhancing security and privacy by using separate “green” and “red” machines for conducting activities that are safe and not safe, respectively. The green machine, used for important things, would demand accountability, whereas the red one would not. This approach immediately raises a usability question: How does a user readily identify green and red machines and understand their distinct purposes? More generally, what are the potential advantages of more-specialized machines, and what are the usability challenges associated with using multiple machines?
“Scarlet letter” option. Is it helpful to inform users that they are interacting with a system or service that is following unsafe practices? What can be learned about the effectiveness of such capabilities that have been included in browsers and search engine result pages? How does one deal with the risk of spoofing? How does one address the privacy issues introduced because the service identifying unsafe activities knows what systems or services the user is interacting with?
Building systems that assume worst-case scenarios. No matter how usable computer systems are, no matter how well users are trained and motivated, and no matter what precautions are taken, errors will occur and systems will be compromised. How should systems be built to cope with these inevitable problems?