4
Security and Usability

Previous chapters describe abstract notions of authentication and privacy. Fully understanding the implications of authentication for privacy requires considering authentication systems as a whole. No working authentication technology exists in a vacuum. How is the technology deployed? What policies are in place with respect to its use? Which resources is the system meant to protect? What are the goals of the system? Understanding the technology as part of a larger system is key to evaluating its privacy implications. In this chapter, authentication is examined within a broader systems context. Two important systems-level characteristics of authentication systems are discussed: security and usability.

As noted previously, security is a primary reason for the deployment of authentication systems. Security is also vital to the preservation of privacy in that one must make use of security technology in order to protect privacy-related information. It is not simply the technical mechanisms of security that matter but also the processes and policies governing who has access (and how) to sensitive data. It is therefore essential to understand the security requirements and properties of both the authentication system itself and the resources it is protecting. To that end, a discussion of threat models and how to think about security risks is presented. The people using these systems are an important component of them, and their needs and behaviors must be taken into account. Accordingly, the committee develops the notion of user-centered design, with particular emphasis on the authentication context. Finally, it remarks on
and makes recommendations about secondary and unplanned-for uses of authentication systems.

THREAT MODELS

As noted previously, a significant motivator for authentication technologies is increased system security. Ultimately, understanding the context in which the system will be deployed and the threats likely to be faced will enable determining whether authorization, accountability, and/or identification (as well as authentication) will be required. While authentication technologies are generally used to increase system security, security is not a binary property of a system. A system is secure, or insecure, only relative to a perceived threat.2 To understand this concise characterization, some definitions are required.

Threats

The terms "attack" and "threat" are often used interchangeably in security discussions, and in informal discussions this is an acceptable practice. However, in this report the committee adopts more precise definitions for these terms and other, related security terms to facilitate an understanding of the security issues related to authentication systems:

· A vulnerability is a security-relevant flaw in a system. Vulnerabilities arise as a result of hardware or software bugs or procedural, personnel, or physical security problems.
· An attack is a means of exploiting a vulnerability. Attacks may be

1As part of overall system security, the security of the authentication component itself (separate from broader system security issues) is crucial, because without it, a primary purpose of the authentication process is undermined. For any authentication technology, the possible vulnerabilities, present and future, must be evaluated. Apart from flaws particular to a given method, there are several questions that can be asked of any scheme, such as whether the authentication credentials can be shared (and if so, whether the original owner still retains the ability to use them), whether the credentials can be forged, and which sorts of errors can be due to human limitations (such as forgetting a secret or losing a token). Another question that bears on the security of the authentication system is whether a secret value is transmitted to or stored by the verifier. In some sense, a proper understanding of the vulnerabilities is even more important than the vulnerabilities themselves. Do system security administrators understand the failure modes? Do users understand the weak points?
2For an in-depth discussion of computer and information security, see Computer Science and Telecommunications Board, National Research Council, Trust in Cyberspace, Washington, D.C.: National Academy Press, 1999. Available online at <http://cstb.org/pub_trust/>.
technical, procedural, physical, and so on, corresponding to the type of vulnerability being exploited. Passive wiretapping, buffer overflows, and social engineering (for example, deceiving individuals such that they reveal privileged information) are examples of attacks.
· An adversary is an entity (an individual or an organization) with hostile intent. Hackers, criminals, terrorists, and overly aggressive marketers are examples of adversaries.
· A threat is a motivated, capable adversary. The adversary is motivated to violate the security of a target (system) and has the ability to mount attacks that will exploit vulnerabilities of the target.3
· A countermeasure is a security mechanism or procedure designed to counter one or more types of attack. A countermeasure does not remove a vulnerability but instead prevents some types of attack from effectively exploiting one or more vulnerabilities. Secure Sockets Layer (SSL), for example, can be used to encrypt communication between a browser and a server in order to counter passive wiretapping attacks that could disclose a static password.4

In practice, every system contains vulnerabilities of some sort, when viewed in a broad context. The existence of a vulnerability does not in itself make a system insecure. Rather, a system is insecure only in the context of a perceived threat, because that threat is motivated and capable of exploiting one or more vulnerabilities present in the system. For example, in order to exploit a vulnerability in an implementation of a user-authentication system, an adversary might have to possess a very sophisticated technical capability. If likely adversaries do not possess that capability, then the system may be considered adequately secure for an intended application context. Of course, the adversary could also bribe one or more insiders. All vulnerabilities must be considered.
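The definitions above can be encoded in a small sketch. The class names, field names, and attack kinds below are our own illustrative assumptions, not terms from this report or from any library; the sketch simply captures the point that a system is insecure only relative to a threat able to exploit an uncountered vulnerability.

```python
from dataclasses import dataclass

# Illustrative model of the chapter's definitions. All names here are
# invented for teaching purposes.

@dataclass(frozen=True)
class Vulnerability:
    name: str                 # e.g., a static password sent in cleartext
    attack_kinds: frozenset   # kinds of attack that can exploit the flaw

@dataclass(frozen=True)
class Threat:
    adversary: str            # a motivated, capable adversary
    capabilities: frozenset   # kinds of attack the adversary can mount

def is_adequately_secure(vulns, countered, threats):
    """A system is insecure only relative to a perceived threat: it is
    inadequate when some threat can mount an attack kind that exploits
    a vulnerability no deployed countermeasure addresses."""
    for v in vulns:
        exposed = v.attack_kinds - countered   # attack kinds still open
        for t in threats:
            if exposed & t.capabilities:
                return False
    return True

# A static password is exposed to passive wiretapping and to guessing.
pw = Vulnerability("static password", frozenset({"wiretap", "guessing"}))
hacker = Threat("hacker", frozenset({"wiretap"}))

# SSL counters the wiretap but, as footnote 4 notes, not the guessing;
# against a threat that can only wiretap, that is still adequate.
print(is_adequately_secure([pw], countered={"wiretap"}, threats=[hacker]))  # True
```

Note how the model reproduces the chapter's conclusion: adding a guessing-capable adversary to the threat list makes the same system, with the same countermeasures, inadequate.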
To understand threats, one usually begins with a list of common adversaries and a discussion of their possible motivations, capabilities, and degree of aversion to detection. The following examples illustrate this notion:

· Hackers represent a class of adversaries who tend to be opportunistic. That is, a target often is selected because of its vulnerability rather than for its strategic value. Hacker capabilities are primarily in the form of attacks launched via network access, as opposed to the exploitation of

3This definition is consistent with the use of the term in political and military contexts, such as references to the "Soviet threat" during the Cold War.
4Of course, this protection does not address other fundamental vulnerabilities of static passwords, such as ease of guessing.
physical or personnel vulnerabilities. Often these attacks are not stealthy, and many hackers do not seem to be especially averse to detection. Individual hackers do not tend to possess significant resources, but groups of them may collaborate to bring to bear significant computing resources against a target (some distributed denial-of-service attacks are an example, although such attacks may also be carried out by individuals). A small number of hackers are highly skilled, and these individuals create attack tools that are distributed to a much larger, less-skilled hacker community. Because of the opportunistic nature of hackers and because the hacker community is so large, all systems with network connectivity should consider hackers as threats. Many hackers seem to place little value on their time, and thus may be willing to expend considerable personal time on what might seem a trivial target, perhaps motivated more by the desire for bragging rights than by the value of the data accessed.
· Insiders are authorized users in some organizational context. Thus, they have legitimate access to some set of computers and networks within that context and usually have physical access to computers employed by other users in that context. The threat from insiders can arise in one of two ways. First, benignly intended insiders may behave inappropriately out of either curiosity or error, causing damage. Second, malicious insiders may intend to cause damage. In both cases, the set of things that could be damaged or taken usually is constrained by the organizational context in which an insider operates. Insiders are usually averse to detection, although a disgruntled employee who is being fired may be less so. Malicious insiders typically have limited resources, but their intimate knowledge of systems and physical access to them give malicious insiders advantages relative to external attackers.
· Industrial spies, in contrast to hackers, select targets on the basis of the perceived value of some aspect of the target (for example, content), and they are highly averse to detection. They tend to employ stealthy online attacks to reduce the risk of detection, but they also may employ attacks (for example, bribery) against personnel. Because these adversaries are paid to conduct attacks, their methods take personnel and materiel costs into account. Industrial spies may also take jobs in order to acquire insider access (see above).
· Criminals often select targets for financial gain and thus often prefer stealthy attacks that minimize the risk of detection. (An exception might be denial-of-service attacks used to extort.) They may be willing to exploit personnel or physical security vulnerabilities as well as to engage in technical attacks. They may employ considerable financial resources.
· Activists launch attacks whose goal might be to generate publicity (for example, by disrupting services) or to serve a more subtle purpose (for example, to acquire data used to embarrass the target). They are not especially averse to detection, nor do they generally possess significant resources, but they may exploit ideological biases in personnel (insiders) and may engage in physical attacks.
· Nation-state spies and terrorists typically select targets with great care and employ very stealthy techniques to avoid detection and attribution. They may bring significant financial and technical resources to bear against the target. They may employ a full range of attacks to exploit physical, procedural, and personnel vulnerabilities. Even state-sponsored espionage budgets are not infinite, so the cost of attacking a target is balanced against the expected gains.

In the context of user-authentication technologies, it is important to understand the threats against which the technologies are effective and under what circumstances the technologies will fail.5 If security is compromised, privacy is likely to be compromised as well. In addition, as the rest of the report describes, even with sufficient security there still may be threats to privacy. Choices about which kinds of authentication systems to deploy need to take these factors into account.

Recommendation 4.1: In designing or choosing an authentication system, the first step should be to articulate a threat model in order to make an intelligent choice among competing technologies, policies, and management strategies. The threat model should encompass all of the threats applicable to the system. Among the aspects that should be considered are the privacy implications of using the technologies.

Dealing with Threats

Assuming (perhaps unrealistically) that a serious and explicit evaluation of the security of a system relative to a perceived threat model has been carried out, the next question is what to do in response: How should one protect the system?
The answer will depend on the potential losses (including the potential damage to reputations) if attacks were to succeed, as well as on the risk-management strategies adopted by those making the decisions. Several fundamental approaches to securing systems have been adopted over time, and multiple approaches usually are employed that complement one another.6

If a vulnerability cannot be eliminated, security countermeasures are often deployed to mitigate the risk that the vulnerability will be exploited. Countermeasures usually thwart some specific, known class of attacks and thus may not offer protection against new attacks that exploit the vulnerability. Although deploying countermeasures is not as attractive as eliminating underlying vulnerabilities, it is an essential part of the security technology arsenal, since countermeasures can be added into an existing system to complement the security capabilities of that system.

The minimization of vulnerabilities and deployment of countermeasures generally will improve system security, but rarely will they ensure system security in the face of a wide range of attacks. Thus, it is also prudent to engage in monitoring in order to detect traffic patterns or system behavior that may be indicative of an attack. This process is referred to as "intrusion detection" and is commonly employed in many environments today. Of course, it implies being able to differentiate the behavior of intruders from that of normal authorized users. Even if an intrusion-detection system detects an attack only after the attack has been successful, the detection can be useful for evaluating the extent or mode of the attack.

Responses to attacks usually take the form of applying new software releases or updates, deploying new countermeasures, or adjusting intrusion-detection systems to better monitor newly discovered vulnerabilities. Rarely is it practical to engage in any form of retaliatory action against an attacker; this is true for several reasons, including the difficulty of locating the source of an attack and legal constraints on such activities.

Once a sufficient threat analysis has been undertaken, the security requirements of the system should be more explicit.7 It is at this point that decisions can be made about whether authentication is necessary and

5For example, an authentication technology that uses a one-time password list may be very effective against hackers using network-based attacks but ineffective against an insider who might have ready access to the list taped to a monitor.
6The concise mantra adopted by the Department of Defense in the 1990s, "Prevent, detect, respond," reflects this multifaceted theme. Another commonly accepted approach to security is captured by the phrase "defense in depth." The notion here is that multiple, independent security mechanisms provide greatly improved security, since all must be circumvented in order to breach security. Defense in depth is compatible with efforts to "prevent, detect, respond," though it also can be pursued independently. Preventative measures attempt to prevent attacks from succeeding. For example, it is obviously desirable to remove vulnerabilities from a system whenever that proves feasible. Security patches (security-relevant software updates) issued by vendors are an example of this process. Applying security patches generally is viewed as a preventative measure since it prevents later exploitation of a vulnerability. But in many cases a patch is remedial; that is, the patch is distributed after an attacker has already exploited a vulnerability.
7One of the reasons that e-commerce has blossomed is that banks assume the risk of bad credit card transactions (perhaps passing some portion of this risk on to merchants). Risk analysis and apportionment must therefore be part of a careful threat and security requirement analysis.
what requirements the authentication subsystem would need to satisfy. The overall security requirements of the system will determine whether an authentication component is required, and, as indicated previously, the authentication system itself will have its own security needs. Finally, the perceptions and actualities of ease of use and usefulness (in this case, the prevention of perceived harm) play important roles in how secure systems of countermeasures are in practice. This is discussed further in the next section.

AUTHENTICATION AND PEOPLE: USER-CENTERED DESIGN

Authentication and privacy schemes will fail if they do not incorporate knowledge of human strengths and limits. People cannot remember all of their passwords, so they write them down (typically under the keyboard or on their handheld computers) or make them easy to remember (using pets' names or their own birth dates) and consequently easy to guess. People do not change passwords, because it makes them too hard to remember. Furthermore, people do not want to delete a file containing pages and pages of unintelligible characters: its purpose is unclear, and there is a fear of disabling some application on the computer. Similarly, although much private information is stored in the cookies on computers (information about personal identification numbers, preferences, and time-stamped indicators of which Web sites were visited), people do not remove these after every transaction, even in cases where the cookies provide no real benefit to the user.8 Two possible reasons for this are that since there are no visible traces of their existence and use, people forget that the cookies are there, and if they do remember, it is too hard to find and delete them.
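The "too hard to find and delete" problem is partly a tooling gap: stored cookies can in fact be enumerated and removed programmatically. The sketch below uses Python's standard-library cookie jar on a small Netscape-format cookie file; the file path, domains, and values are invented for illustration.

```python
import os
import tempfile
from http.cookiejar import MozillaCookieJar

# Cookies accumulate invisibly, but nothing prevents a tool from
# listing them and flushing the ones the user does not want.
netscape_file = (
    "# Netscape HTTP Cookie File\n"
    ".example.com\tTRUE\t/\tFALSE\t2147483647\ttracker_id\tabc123\n"
    ".shop.test\tTRUE\t/\tFALSE\t2147483647\tsession\txyz\n"
)

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
with open(path, "w") as f:
    f.write(netscape_file)

jar = MozillaCookieJar(path)
jar.load()                          # read every stored cookie
print(sorted(c.name for c in jar))  # the invisible state, made visible

jar.clear(".example.com")           # delete one tracker's cookies
jar.save()                          # persist the cleaned jar
print(len(jar))                     # one cookie remains
```

The usability point stands, of course: nothing in the stored file tells an ordinary user which entries are worth keeping, which is precisely why such cleanup rarely happens.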
Similarly, people find it difficult to read the "fine print" of privacy notices that companies are now required to send out, both because the print itself may be too small and because the notices are usually written in an obfuscated style.9 Therefore, in order to work effectively, authentication and privacy schemes need to be designed with the same consideration of human strengths and limits as any other technology, maybe more. To do this, one can borrow heavily from the practice of user-centered design and

8Rarely are there negative consequences from cookie contents, so people continue to trust the system. This potentially misplaced trust leads to other vulnerabilities and raises a philosophical problem. Like civil liberties, society gains from an atmosphere of trust. Is the desire to preserve that atmosphere enough to accept a few vulnerabilities, or is it so important to prevent the vulnerabilities that the atmosphere of trust will be sacrificed?
9The problem of providing effective privacy notice is appreciated by consumer protection experts.
from some additional aspects of psychology. The following section outlines the practice of user-centered design and discusses how it applies to the design of authentication and privacy schemes. Some other facts about human behavior that are relevant to these issues are noted. A suggested design and evaluation process is presented for considering various alternatives for authentication and privacy that are proposed in this report.

Lessons from User-Centered Design

User-centered design puts the user's needs and capabilities at the forefront. Much of today's technology is driven by the pursuit of a technology solution, without regard to whether the technology provides the full functionality that the user needs in the context of its use or whether it is learnable, usable, and pleasant. After the failure of a number of products and systems (notably the Apple Newton, the Mars orbiter, and the Florida 2000 election ballot), more and more software engineers are putting "user-experience engineers," or people with human factors experience, on their teams.10 User-experience engineers use various methods to understand the user's needs and to design an interface that is easy to learn, usable, and enjoyable. There is evidence that the software is measurably better and the products more successful in the marketplace when these methods are used.

User-experience engineers rely on a number of design principles that are based on the known strengths and limits of human cognition. Cognitive psychologists have known for years, for example, that human short-term memory is limited and fragile, that learning takes time, and that people make moment-by-moment decisions about cost or effort and perceived benefit. However, people are also remarkably perceptive, in both the literal and conceptual sense. Humans have great powers of understanding visual input, particularly if it follows certain principles of grouping and use of color and dimension. People make errors by misremembering, by not attending to signals, and by simple mistaken acts such as hitting a key adjacent to the one intended.

On the basis of knowledge accumulated from decades of research in cognitive psychology, user-experience engineers have developed a core set of design principles, a common set of which are shown in Table 4.1. How these lessons apply to systems employing authentication and privacy schemes is explored next.

TABLE 4.1 Key Design Principles in User-Centered Design

Principle: Build on what the user knows.
Practices: Do not tax the memory of users by having them learn too many new things. Use their words. Build the system interaction with a story or metaphor in mind that the user will easily understand.

Principle: Simplify.
Practices: Do not add features that aren't useful. Package large feature sets in terms of clusters of things for a particular task, not jumbled together to cover all possibilities.

Principle: Allow users to do things in the order in which they think of them.
Practices: Do not make users do things in the order that the computer needs if it is different from what users would do naturally. (The computer can keep track of things better than the user can.)

Principle: Display information in clustered, meaningful visual displays.
Practices: Place things that go together conceptually near each other in the display and in an order that fits what users are trying to achieve.

Principle: Design for errors.
Practices: People make errors. Design so that errors are not costly to the user or difficult to correct. Allow each action to be undone once the user has seen the consequence of the error.

Principle: Pace the interaction so that the user is in control.
Practices: The user can be slow, but the system shouldn't be. Slow reaction to a user action discourages acceptance of a technology.

10Extracted from several articles by user-experience expert Donald Norman. Norman provides the example of the Soviet Union's Phobos 1 satellite in a 1990 article in Communications of the ACM. The orbiter was lost on its way to Mars not long after launch. Later, it was found that the cause of the error was an operator who sent a sequence of digital commands to the satellite but mistyped a single character. Unfortunately, this error triggered a test sequence stored in the read-only memory that was supposed to be executed only when the spacecraft was on the ground. The wrong sequence set the satellite in rotation, and it was no longer possible to resume control over it; it was lost in space (D.A. Norman, "Commentary: Human Error and the Design of Computer Systems," Communications of the ACM 33 (1990): 4-7). Norman commented on the 2000 Florida ballot in an interview with Kenneth Chang of the New York Times ("From Ballots to Cockpits, Questions of Design," New York Times, January 23, 2001).
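The "design for errors" row of Table 4.1 can be made concrete with a minimal sketch. The class and method names below are our own invention for illustration, not an interface from this report: each destructive action records the prior state so the user can undo it after seeing the consequence.

```python
# A sketch of "design for errors": every action is recorded with enough
# state to reverse it, so a mistake is cheap to correct. The tiny text
# editor here is purely illustrative.

class UndoableEditor:
    def __init__(self):
        self.text = ""
        self.history = []          # stack of previous states

    def type_text(self, s):
        self.history.append(self.text)
        self.text += s

    def delete_all(self):          # a destructive, error-prone action
        self.history.append(self.text)
        self.text = ""

    def undo(self):
        """Restore the state before the most recent action."""
        if self.history:
            self.text = self.history.pop()

ed = UndoableEditor()
ed.type_text("draft of my notes")
ed.delete_all()                    # the user slips
ed.undo()                          # the error is not costly
print(ed.text)                     # the draft is back
```

The design choice is that the system, not the user, carries the burden of remembering what just happened, which is exactly the division of labor the table recommends.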
Issues of Limited Memory

The issue of forgetting passwords is clearly an issue of limited memory and of the first two principles shown in Table 4.1. Learning and recall are hard and take time. Because of this limitation, people will augment their memories by writing things down. Recently, new services have been offered on the Internet and in some operating systems to alleviate the problem that many people have with many different passwords: for example, one for ordering tickets online, one for access to digital libraries, and others for getting financial information online. This type of service will build for a person a single portal to which he or she signs on with one password. It stores all of that person's passwords and automatically accesses the services for which he or she has signed up. Many of the user's resources are consequently subject to theft with the loss of a single password, the one used to access such a resource.

Simplicity

There are systems that have the desired functionality of allowing the user fine-grained control but that tend to be far too complicated to use. Some e-mail systems, for example, allow the user to formulate rules to sort incoming e-mail into folders for examination later in some order of priority. There are two major problems with these systems. First, people find that the rules that they have to write are maddeningly difficult because they must specify their preferences in very specific terms having to do with searchable fields in e-mail. For example, when a correspondent has more than one e-mail account, the rule writer must remember not just a particular person's name, but all of the character strings that might appear in that person's e-mail "From" line. Second, if the rules designate things to be automatically deleted, the user may have no trace of unintended consequences. Similar effects follow when users attempt to program their telephones to deny calls from telemarketers.
They must specify their rules not by naming people from whom they want to hear but by designating telephone numbers. The system was designed without regard for the difficulty that the user will incur at the interface. Technologies that are difficult to learn will exclude people who are not willing to or cannot expend the effort needed to learn them. Of note, a recent study of people's understanding of Pretty Good Privacy (PGP), a communications encryption technology, showed that many users failed outright to understand what they were supposed to do and made catastrophic errors, such as sending the private key instead of the public
key.11 However, criticisms of user-interface problems are usually more accurately applied to an implementation than to a building-block technology per se.

Making Things Visible

Associated with people's ability to understand complex visual displays is their inattention to things not visible: "Out of sight, out of mind." The example of cookies earlier in this report is an instance of invisibility. Things invisible are often not attended to. The files in "temp" folders similarly fail with regard to visibility and readability. But, since the possible consequences of the loss of privacy are not brought to the user's attention on a daily basis, it can be easy to forget about their possible harm. The fact that people ignore things that are not visible may explain why they claim to be concerned with privacy and then do not protect their privacy when given the opportunity.

Indirect Effects of Bad User-Centered Design

Poorly designed systems can also have indirect effects on privacy, accuracy, and control. Many of the databases in use at large financial companies, motor vehicle bureaus, social service agencies, and so on are legacy systems with badly designed interfaces and poor checks on the accuracy of data entered, resulting in errors in database records. Many of these errors are caused by poorly designed interfaces, poor training for the high-turnover workforce (high turnover caused by the boredom of such a low-level job), and low motivation. And, since it costs an organization money to ensure accuracy (for example, verifying by doubly entering the data and finding mismatches, or building algorithms that will check on accuracy and duplication, and so on), errors are accepted. It is the victim of an error who seems to have to bear the cost of correction.
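The double-entry verification just mentioned can be sketched in a few lines. The function name, field names, and records below are invented for illustration: the same record is keyed in twice by independent operators, and only fields on which the two entries agree are accepted.

```python
# A sketch of double-entry accuracy checking: disagreements between two
# independent keyings of one record are flagged for re-entry rather
# than silently stored.

def double_entry_check(entry_a, entry_b):
    """Compare two independent keyings of one record.
    Return (accepted_fields, mismatched_field_names)."""
    accepted, mismatches = {}, []
    for field in entry_a:
        if entry_a[field] == entry_b.get(field):
            accepted[field] = entry_a[field]
        else:
            mismatches.append(field)   # send back for re-keying
    return accepted, sorted(mismatches)

first  = {"name": "J. Smith", "license": "D1234567", "dob": "1970-02-01"}
second = {"name": "J. Smith", "license": "D1234567", "dob": "1970-02-10"}

accepted, mismatches = double_entry_check(first, second)
print(mismatches)   # ['dob'] -- the date-of-birth typo is caught, not stored
```

As the text notes, the reason such checks are often skipped is cost: every mismatch doubles the labor for that field, and the organization rather than the data subject pays for it.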
Lessons from Cognitive and Social Psychology

Other aspects of human behavior, including the following, affect the success of authentication and privacy schemes:

· How people make decisions,
· The basis on which they trust other people and institutions,

11A. Whitten and J.D. Tygar, "Usability of Security: A Case Study," Proceedings of the 9th USENIX Security Symposium, August 1999.
· Their assumption that the physical world does not apply to the virtual world, and
· How a person's behavior changes when that person is not visible to others.

These aspects of human behavior are all relevant to the design of authentication and privacy schemes.

Decision Making

Throughout daily life, people make small decisions about whether to take actions or not. Often such decisions are based on simple calculations of cost and benefit. In making decisions, people frequently overvalue things that have immediate value and undervalue actions that may have a long-term payoff. And, sometimes the cost is clear and the benefit unknown. This tendency causes people to not make the effort to do something that would protect their long-term interests.

For example, knowing that people do not like having their private information accessible to others unknown to them, some entrepreneurs have built software called "cookie cutters" that automatically preserves the cookies a person chooses to preserve and flushes the rest when a Web session is over. Unfortunately, not only are such programs generally expensive, but they also require a particular level of skill for installation and require users to specify their preferences in much the same way as e-mail filtering programs and telephone screening devices do. The cost is high, both in time and in the effort to understand and specify things, and the benefits are really unclear. It is hard to articulate the benefit of keeping this information private, both now and in an unforeseen future.

The fact that people often see immediate benefits but not long-term consequences also explains why they are willing to divulge private information for short-term freebies. For example, some people will allow their Web activity to be monitored if they are given a free e-mail account.

Trust

Much of our society is based on trust. People take actions that make them vulnerable, believing that others will do them no harm.
They accept payment in checks or credit cards, assuming that the payer has the resources and will pay when asked. They follow traffic rules in general, expecting others to do so as well. People buy things sight unseen on the Web, expecting delivery of the goods and/or services as advertised. People believe, often wrongly, that when they share their
e-mail address or Social Security number, this information will not be sold to third parties who will either solicit them to buy unwanted things or defraud them in some way (steal from their accounts). In normal, everyday behavior, people do not authenticate one another. They spend little effort and make assumptions based on commonly available data. Furthermore, people are usually not proved wrong after doing this, so their experience with the system encourages them to behave this way.

On what information do people base their judgments with respect to trust and lying? The best information that someone is trustworthy comes from direct experience: a person was trusted in the past, and all behavioral evidence is that he or she caused the trusting person no harm or, better yet, looked after his or her interests. The second source of information when making such judgments is a set of physical and behavioral attributes that suggest someone is similar to the trusting person. The chain of reasoning goes like this: If I am trustworthy and you are like me, then you must be trustworthy, too. The third source is endorsements, either from someone a person knows, such as a friend who has had direct experience with the person or institution in question, or from some famous person who would lose reputation if he or she lied about another's trustworthiness. The fourth source is assessment of reputation directly: for example, the reputation of a person in a powerful position could be lost if he or she were not trustworthy, or an organization that has been around for a long time might not continue to exist if it were less trustworthy.

Knowing how people make decisions, some people and institutions hide behind false information. Some individuals might buy from a Web site that has an endorsement symbol from the Better Business Bureau, even though there is not necessarily an immediate way to authenticate the valid use of the symbol.
It could simply have been copied from another site. Some people might choose to enter credit card information into a site that seems to be designed well and not into one that seems to be slapped together, making the assumption that a well-designed site costs money and could not have been afforded by a fly-by-night vendor. Because people do not spend the time and effort to investigate authenticity, and the shortcut attributes that they use are well known, they are left open to fraud at many levels. The Federal Trade Commission is charged with consumer protection broadly and has been investigating conduct on the Web. More information on the FTC, privacy, and fair information practices is presented in Chapter 3.
SECURITY AND USABILITY

Assumptions from the Real World That Do Not Transfer to the Virtual

In our social interaction in the physical-spatial world, some behaviors are correlated: for example, when I see you, you can see me. When I get closer to you, you get closer to me; when I hear you, you can hear me. The digital world has uncorrelated these attributes. With Web cameras or one-way videoconferencing, I can see you, but you can't see me. A whole host of surveillance devices allows detection without obvious detectability. What used to be reciprocal is now one-way. However, people behave as if the virtual world had the same reciprocity as the real world.

One day, for example, as a demonstrator was explaining online videoconferencing on a public site, she was startled when someone she could not see spoke to her. That person, in turn, could see her but, because of the camera angle, could not see that she had an audience of high-level executives. Thinking that she was alone, he made several lewd remarks. The demonstrator and the executives were surprised and offended by the encounter. They had wrongly assumed that the only others online were those they could see. There are a number of cases today in which the person using a service or technology is told the conditions (for example, that others may monitor an interaction or that the numbers a person types in will be used for other purposes), but they forget. The many years that humans spend learning physical reality are hard to dismiss when one is merely told that things might be otherwise.

Accountability

A related issue is accountability. There is mounting evidence that when people are anonymous or hidden, they will behave in ways that are very different from normal. For example, when feedback for a product or service is anonymous, people are more likely to say negative things, knowing that they are not held accountable for their behavior. They can get away with more.
A similar effect comes into play in e-mail flaming. When the recipient is not visible to the writer of the e-mail, the writer tends to say more emotionally charged things, things that he or she would soften if the recipient were visible. There is evidence from experiments on trust that if people cannot be seen by others, they will behave in a more self-serving, as opposed to cooperative and trusting, way.

Actions

There are at least two approaches to accommodating the known limits of human behavior when considering authentication and privacy schemes: designing to fit those known limits and training people to be cautious when caution is warranted.
Finding 4.1: People either do not use systems that are not designed with human limitations in mind or they make errors in using them; these errors can compromise privacy.

Recommendation 4.2: User-centered design methods should be integral to the development of authentication schemes and privacy policies.

Training, Public Education Campaigns, and Regulations

It is unlikely that technologies and the policies associated with them will be transparent to everyone. People have a hard time changing expectations and assumptions in a world that is invisible to them. Consequently, some protective measures may have to be put in place to make up for these human shortcomings. In many cases, it is not apparent to the user what information is being given to the verifier. Most users do not know what information is on magnetic-stripe cards, smart cards, and bar-coded cards. Printed information on the cards is more apparent to the user, but information learned by way of back-end processes (for example, a check of a credit report) can be invasive of privacy, yet invisible to the user.

As for protective measures at a micro level, Web sites could have visible statements of the sort heard in many recorded messages on customer service hotlines: namely, that the conversation (site use) may be monitored to ensure quality. Similarly, small explanatory or warning signs might appear when one moves the mouse pointer over a field that asks for information from the user. More broadly, a public education campaign may be necessary to keep people from unknowingly making themselves vulnerable, just as people are warned of new scams. The goal of such education should not be to cause people to mistrust others but to train them to understand the nature of information systems and how they can and should protect themselves.
Ultimately, most individuals do not understand the privacy and security aspects of the authentication systems that they are required to use in interactions with commercial and government organizations. As a result, they may behave in a way that compromises their privacy and/or undermines the security of the authentication systems. To remedy these problems, system interfaces should be developed that reveal the information collected and how it is used. Research is needed to explore how to do this effectively. In addition, as part of the deployment of any new system that includes the collection of privacy-sensitive data, individuals should be educated about the privacy and security aspects of the authentication systems that they use, including the risks associated with the systems.
FACTORS BEHIND THE TECHNOLOGY CHOICE

A crucial factor that will encourage or discourage the use of any authentication technology is ease of deployment. A scheme that relies on something that users already have (or already "are") is easier to deploy than one that requires shipping (and perhaps installing) new equipment. Smart cards are relatively hard to deploy, however, since few people have the smart card readers and associated software on their computers. In addition, most card issuers insist on owning their own cards, which is why cards that could technically "share" issuers (for example, a payment card that is also an airline affinity-program card) have not been successful in the market. With respect to possession of the correct hardware, however, most computers sold in the past several years have Universal Serial Bus (USB) ports; a USB-based token that functioned like a smart card might require less effort to deploy, although the software barrier would still exist. This observation is not an endorsement of USB tokens but rather points out how many factors, cost among them, may inhibit or facilitate the deployment of hardware-based authentication technologies.

The building blocks from which different authentication systems are constructed may be used to protect or invade privacy. Among the privacy issues that arise are what data are revealed to the issuer upon initial interaction with the system and what data are created and stored at that time, as well as when authentication events occur. A system in which personal data are retained after an authentication transaction is more intrusive of privacy than one in which no data are collected or retained. One of the most crucial issues with respect to privacy and authentication systems, as discussed in Chapter 2, is linkage: Can the results of an authentication process be linked to other sets of data?
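As a concrete illustration of such linkage, consider how two otherwise unrelated transaction records become joinable whenever they carry the same persistent token. The sketch below is purely hypothetical: the merchant names, token values, and record layout are invented for illustration and do not describe any particular system.

```python
# Hypothetical transaction logs at two unrelated merchants. Card-based
# purchases carry the card's stable token; cash purchases carry none.
bookstore = [
    {"payment": "card", "token": "SV-4821", "item": "book"},
    {"payment": "cash", "token": None, "item": "magazine"},
]
pharmacy = [
    {"payment": "card", "token": "SV-4821", "item": "prescription"},
    {"payment": "cash", "token": None, "item": "aspirin"},
]

def linkable_pairs(log_a, log_b):
    """Join two logs on any persistent token their records share."""
    by_token = {r["token"]: r for r in log_a if r["token"]}
    return [(by_token[r["token"]], r) for r in log_b
            if r["token"] in by_token]

links = linkable_pairs(bookstore, pharmacy)
# The two stored-value-card purchases join into one profile of the
# (otherwise anonymous) cardholder; the cash purchases join to nothing.
print(links)
```

Note that the join succeeds even though neither log contains a name: a stable token is enough to build a dossier, which is exactly the linkage question posed above.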
If the same individual uses cash for two activities, those activities are not linkable through the cash that was used; however, two uses of a stored-value card are linkable even if the card itself was purchased anonymously.

Cost is another factor in deciding which authentication technologies to deploy. Authentication can be expensive. There may be hardware, software, and procedural-development costs for the presenter, the verifier, and the issuer. Integrating authentication procedures into existing systems and procedures may be expensive as well. Data collection and verification, both for initial registration and for recovery from lost credentials, can impose additional overhead. After deployment, authentication systems have ongoing costs. New users must be added to the system and old users deleted. Hardware and software require maintenance and upgrades. Procedural updates and user and administrator training and education are important (and costly) too; many systems fail not for technical reasons, but because someone did not follow proper procedures.
That, in turn, raises the issue of skills: What does it cost to obtain the services of enough people with appropriate abilities to deploy and operate an authentication system? Finally, costs have to be apportioned. Who pays for all of this? What is the balance of costs between the user, the issuer, the verifier, and any back-end systems?

Balanced against the costs of using an authentication system are the perceived and actual gains from deploying the system. Since vulnerabilities are hidden and threats are generally not observable except to those who have done ample analysis of the particular system and its context, people using the system may underestimate the benefit of risk mitigation. This can result in inappropriate choices of authentication systems or in inappropriate usage models that defeat the purpose of the system. For example, a recent study found that many users did not adhere to the rules about not making their passwords guessable, or about keeping them secret and changing them often. People had a number of excuses for ignoring such rules: they were not the kind of people who keep secrets, they shared passwords because doing so showed how they trusted other people, they thought they were not targets because they had nothing of value, and so on.13

Other factors besides cost and security have a bearing on technology choices. Not all authentication systems work for all population sizes. Some are too expensive on a per-user basis to work with large numbers of users; others have too high an up-front cost to be suitable for small populations. Furthermore, all systems have error rates encompassing false positives and/or false negatives.
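The trade-off between these two error types can be made concrete with a simple expected-cost calculation. The sketch below uses entirely hypothetical rates and costs; the point is the structure of the comparison, not the numbers.

```python
def expected_cost_per_attempt(p_impostor, false_accept_rate,
                              false_reject_rate, cost_accept_impostor,
                              cost_reject_legit):
    """Expected cost of one authentication attempt.

    An impostor is wrongly accepted with probability
    p_impostor * false_accept_rate; a legitimate user is wrongly
    rejected with probability (1 - p_impostor) * false_reject_rate.
    """
    return (p_impostor * false_accept_rate * cost_accept_impostor
            + (1 - p_impostor) * false_reject_rate * cost_reject_legit)

# Hypothetical numbers: impostors make 0.1 percent of attempts, but a
# false accept costs 10,000 times as much as a false reject.
loose  = expected_cost_per_attempt(0.001, 0.05, 0.01, 10000.0, 1.0)
strict = expected_cost_per_attempt(0.001, 0.01, 0.05, 10000.0, 1.0)

# With these assumptions the stricter policy is cheaper overall, even
# though it inconveniences five times as many legitimate users.
print(f"loose: {loose:.2f}  strict: {strict:.2f}")
```

Changing the impostor prior or the cost ratio can reverse the conclusion, which is why the a priori estimates mentioned in the text matter as much as the raw error rates.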
The likelihood and cost of each type of error, along with a priori estimates of the number of potential impostors, must be assessed to determine what trade-offs should be made.14 In addition, the consequences of privacy invasions represent costs that may be hard to monetize (absent actual damages) and may be incurred by someone other than the system owner.

Finally, an authentication system must be matched to the needs of the context in which it is employed, keeping in mind that authentication may be excessive if simple authorization will suffice. Notwithstanding that stronger authentication is often suggested as an initial step to improve security, the issues are often much subtler. Deciding whether and how to employ authentication systems (or any other security technology) requires

13. D. Weirich and M.A. Sasse, "Persuasive Password Security," Proceedings of CHI 2001 ACM Conference on Human Factors in Computing Systems, Seattle, Wash., April 2001.

14. Creation of exception-handling procedures for dealing with incorrect decisions opens up additional vulnerabilities for the system, as impostors might claim to have been falsely rejected and request handling as an exception.
careful thought and decisions about what types of risk-management strategies are acceptable. Avoiding expedient responses and knee-jerk choices is critical, both for security and for privacy protection. This complexity, the multitude of contexts in which authentication systems could be deployed, and the ultimate need for someone to make policy decisions about security and privacy requirements are why simple cost-benefit analyses are unlikely to be effective in guiding the needed choices.

A final factor that must be considered when deciding how to proceed is the recognition that authentication systems must be adequate to protect the resources they are guarding against the perceived threats, while at the same time remaining simple enough for administrators and others to use. This important point is often overlooked when developing technologies. Authentication systems will ultimately be used by people, so, as described in the previous section, their ease of use and understandability will have a major impact on their effectiveness.

SYSTEMS AND SECONDARY USE

Understanding only the underlying technologies is insufficient for appreciating the ramifications of authentication systems. It is important to know how the systems will work in context to determine their security and privacy implications. As discussed previously, some of the gravest privacy violations come about because of the inappropriate linking of data within or across systems. This can happen because the same identifier is used in multiple systems, because efforts were made to correlate data that have had the identification information removed, or for other reasons. Once data have been collected about an individual, dossiers (anonymous or identified) can be created. The creation of dossiers coupled with an authentication system may pose a privacy risk.
Avoiding this risk requires awareness of the broader context in which the authentication systems and the data associated with authentication events will be used.

Finding 4.2: The existence of dossiers magnifies the privacy risks of authentication systems that come along later and retroactively link to or use dossiers. Even a so-called de-identified dossier constitutes a privacy risk, in that identities often can be reconstructed from de-identified data.

Finding 4.3: The use of a single or small number of identifiers across multiple systems facilitates record linkage. Accordingly, if a single identifier is relied on across multiple institutions, its fraudulent or inappropriate use (and subsequent recovery actions) could have far greater ramifications than if it is used in only a single system.

Recommendation 4.3: A guiding principle in the design or selection of authentication technologies should be to minimize the linking of user information across systems unless the express purpose of the system is to provide such linkage.

In addition to dossier creation through planned-for uses of authentication systems, secondary use and unplanned-for uses increase the risk of privacy violations. Unintended uses of a given technology or information system can always have inadvertent side effects, and there are numerous examples in the literature of a system meeting its design specifications but failing in the field because it was used in ways not anticipated by its designers. In the realm of authentication technology and individual privacy, unplanned-for secondary uses can have grave consequences for users. Arguably, much identity theft is accomplished through secondary uses of authentication or identification data. See Box 4.1 for a more in-depth discussion of identity theft.

An authentication system could be designed and deployed in a secure, privacy-sensitive fashion. However, if some aspect of the system were to be used in ways not originally intended, both security and any privacy protection available could be at risk. The driver's license is a canonical example of inappropriate secondary use. Its primary function is to identify those authorized to drive motor vehicles. It is quite unlikely that the designers of the original processes by which driver's licenses are issued anticipated that they would be used as a security document needed to board a commercial airliner or drink in bars. Most information systems, including authentication systems, do not explicitly guard against secondary uses, although occasionally there are contractual relationships that limit secondary use (such as credit card agreements).
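One well-known way to act on the minimize-linkage principle of Recommendation 4.3 is for an issuer to give each relying system a different, stable pseudonym for the same user, derived with a keyed hash so that no two systems share a joinable identifier. The sketch below is a minimal illustration under assumed names; the secret, user names, and relying-party labels are all hypothetical, and a real deployment would involve far more (key management, revocation, and so on).

```python
import hashlib
import hmac

def pairwise_id(master_secret: bytes, user_id: str,
                relying_party: str) -> str:
    """Derive a stable pseudonym unique to one (user, system) pair.

    The issuer holds master_secret; each relying party sees only its
    own pseudonym, so two parties cannot join records on a shared ID.
    """
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).hexdigest()[:16]

secret = b"issuer-master-secret"  # illustrative value only

at_bank    = pairwise_id(secret, "alice", "bank.example")
at_library = pairwise_id(secret, "alice", "library.example")

# Different systems see different identifiers for the same user...
assert at_bank != at_library
# ...but each system's identifier is stable across visits.
assert at_bank == pairwise_id(secret, "alice", "bank.example")
print(at_bank, at_library)
```

The issuer can still link the pseudonyms if required (for example, for fraud recovery), but linkage becomes a deliberate act by one accountable party rather than a free by-product of a shared identifier.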
In some cases, the credential presented may be used for additional verification purposes in contexts unrelated to the original purpose (such as with a driver's license, or when an e-commerce Web site gives permission for the cookies that users have allowed it to place on their hard drives to be used by other entities). In other instances, the data collected prior to, during, or subsequent to authentication may be used in ways that have little to do with the authentication step itself. A simple example would be information used to register at a Web site in order to access content later being used for marketing purposes by that Web site. A more insidious form of unplanned-for usage would involve a technology designed for a certain security context, user population, and so on that is later (intentionally or unintentionally) used in a new context
without a determination as to whether the security and privacy safeguards still hold. Abstractly, the difficulties that arise from secondary use are primarily due to incorrect assumptions. Secondary uses implicitly rely on whatever assurances, security models, and privacy protections the original designers and implementers were working with. These may not align
well with the needs of the secondary user. In addition, the original system was probably designed with a particular threat model in mind. However, that threat model may not be appropriate for secondary uses. This incongruity can make it difficult to respond to an attack on the primary system, since with widespread secondary use, the universe of motivations behind the attack is much larger. Another problem is that the data collected for primary purposes may not be the data that are needed by secondary uses, or they may not be of appropriate quality or reliability. In addition, secondary uses can facilitate information leakage from the original system, which can cause both security and privacy problems. As noted in the committee's first report,15 understanding and clearly articulating the goals of the system is crucial and may help mitigate any problems that might arise from secondary uses. Nonetheless, the risks of secondary use are significant and must be considered.

Finding 4.4: Current authentication technology is not generally designed to prevent secondary uses or mitigate their effects. In fact, it often facilitates secondary use without the knowledge or consent of the individual being authenticated.

Finding 4.5: Secondary uses of authentication systems, that is, uses for which the systems were not originally intended, often lead to privacy and security problems. They can compromise the underlying mission of the original system user by fostering inappropriate usage models, creating security concerns for the issuer, and generating additional costs.

Recommendation 4.4: Future authentication systems should be designed to make secondary uses difficult, because such uses often undermine privacy, pose a security risk, create unplanned-for costs, and may generate public opposition to the issuer.

CONCLUDING REMARKS

As is evident from this and preceding chapters, neither authentication nor privacy is a simple issue.
Instead, the issues interact in complex ways with the total system. This is seen clearly in the case of knowledge-based authentication, which relies on some prior history of contact and on reasonably private and unguessable knowledge that the authentic client

15. Computer Science and Telecommunications Board, National Research Council, IDs - Not That Easy: Questions About Nationwide Identity Systems, Washington, D.C.: National Academy Press, 2002.
will nevertheless have access to. Seen from this perspective, password-based authentication is popular because it demands so little up front: initial passwords can easily be chosen or assigned without any need for history, special hardware, custom software on the client's computer (if any), and so on. By the same token, the weaknesses of passwords (guessability, the cost of recovering forgotten passwords, and so on) stem from many of the same roots.

Privacy has similar attributes. Many organizations that rely on identity-based authorization do so not because they wish to, but because it is easy: they can rely on an infrastructure that someone else has built. This accounts for the ubiquity of driver's licenses as a de facto "official" identification and age-authorization card: the departments of motor vehicles (DMVs) are paying the cost of issuing the cards; everyone else can ride free. Furthermore, it is precisely this overloading of function that gives rise to privacy violations: many different transactions can be linked back to the same individual. Switching to better, privacy-protecting technologies is thus not an easy task.

Finally, computer security in many organizations is not as strong as it could be or needs to be.16 In the federal government, there are numerous efforts to document federal agency computer security plans and practices, and some that find glaring weaknesses in these plans and practices.17 Many reports over the years have described security issues, concerns, and research challenges.18 While security and privacy are often discussed as though they were in opposition to one another, in many ways adequate security is a prerequisite for privacy. If data are not well protected, they may compromise the privacy of the individual to whom they pertain. However, achieving information security sometimes requires the disclosure of personal information (for example, by requiring authentication).
At the same time, insufficient privacy protection may mean that personal information about others is easily discovered, calling into question the reliability of authentication systems that depend on such information. While this report urges that care be taken to avoid unnecessary authentication and identification (and therefore avoid unnecessary privacy risks),

16. See Computer Science and Telecommunications Board, National Research Council, Cybersecurity Today and Tomorrow: Pay Now or Pay Later, Washington, D.C.: National Academy Press, 2002.

17. See Representative Stephen Horn's report card on federal agency computer security efforts as one measure of the state of preparedness in this area; available online at <http://www.house.gov/reform/gmit/hearings/2000hearings/000911computersecurity/000911reportcard.htm>. Another measure is the Office of Management and Budget's scorecards, available at <http://www.whitehouse.gov/omb/budintegration/scorecards/agency_scorecards.html>.

18. See CSTB's reports on security at the Web site <http://cstb.org/topic_security/>.
the interplay between achieving privacy protection and security in the development of information systems should be carefully considered. Privacy and security, while often in tension, are complementary as well. Security can protect private data, and maintaining privacy can aid in avoiding security breaches. Usability is a key component of this mix, since hard-to-understand or hard-to-use systems will be prone to errors and may drive an individual to work around either the security mechanisms or the mechanisms that would protect privacy. Lessons learned in trying to create secure, usable systems therefore apply when seeking to develop systems that protect privacy.

Finding 4.6: Privacy protection, like security, is very poor in many systems, and there are inadequate incentives for system operators and vendors to improve the quality of both.

Finding 4.7: Effective privacy protection is unlikely to emerge voluntarily unless significant incentives to respect privacy emerge to counterbalance the existing incentives to compromise privacy. The experience to date suggests that market forces alone are unlikely to sufficiently motivate effective privacy protection.

Recommendation 4.5: System designers, developers, and vendors should improve the usability and manageability of authentication mechanisms, as well as their intrinsic security and privacy characteristics.