5
Authentication Technologies

This chapter describes the basic technologies used as building blocks for authentication systems, especially those employed in computer and network environments. First, it describes technological choices that determine a dimension of authentication separate from the different kinds of authentication described in Chapter 2. Then, technological instantiations of the three main authentication mechanisms (something you know, have, are) are described. Multifactor authentication is considered, and decentralized and centralized systems are compared. Finally, security and cost considerations for individual authentication technologies are discussed. Throughout, this chapter also touches on the privacy implications of specific technologies in the context of authentication systems, as appropriate.

TECHNOLOGICAL FLAVORS OF AUTHENTICATION

"Individual authentication" is defined in Chapter 1 as the process of establishing an understood level of confidence that an identifier refers to a specific individual. In an information systems context, it often is useful to distinguish among several types or modes of authentication, both for individuals (often referred to as "users") and for devices (such as computers). This is a dimension distinct from the individual/attribute/identity authentication types discussed in Chapter 2.

In the security literature, these modes are often referred to as one-way as opposed to two-way authentication, initial versus continuous authentication, and data origin versus peer-entity authentication.

Much individual authentication in an information system takes place in a client/server context, in which the individual user is the client (a "presenter" in the terminology introduced in Chapter 2) and some computer is a form of server (the "verifier"). A user is required to authenticate his or her identity to a computer, usually as a prerequisite for gaining access to resources (access control or authorization). This is typically an explicit one-way authentication process; that is, the user authenticates himself or herself to the computer. If the user is authenticating to a computer directly (for example, when sitting at a desktop or laptop computer), there is an implicit two-way authentication; the user sees the computer with which he or she is interacting and presumably knows that it is the one he or she wishes to use.1

However, if the user is authenticating to a computer accessed via a communication network, there is often no way to verify that the computer at the other end of the communication path is the one that the user is trying to contact. The user typically relies on the communication infrastructure operating properly and thus connecting him or her to the intended computer. This assumption may be violated by any of a number of attacks against the communication path, starting with the computer that the user is employing locally. This lack of explicit, secure, two-way authentication can subvert many types of individual authentication mechanisms. If a presenter provides an identifier and authenticator to the wrong verifier, both security and privacy are adversely affected. Thus, two-way authentication is preferred so that a presenter can verify the identity of the verifier to which a secret may be disclosed.

Initial authentication takes place when an individual first establishes a connection of some sort to a system. This may be a direct, very local connection, such as logging in to a desktop or laptop computer, or it may be a remote connection to a computer via a communication network. In either case, there is an assumption that future communication, for some period of time, is taking place between the two parties who were initially authenticated. For a direct connection, as defined here, this assumption usually relies on physical and procedural security measures; there is an assumption that the user will log out when leaving the computer unattended and in a place where others might access it. This is a form of implicit, continuous authentication.

1Looks can be deceiving, and even visual inspection of a proximate device is not always sufficient to authenticate it.

This assumption may not always be valid, and sometimes users are required to reauthenticate themselves explicitly to the computer periodically, to verify that they are still present. This periodic reauthentication requirement is an explicit attempt at continuous authentication, although it is not really continuous. Periodic reauthentication is also burdensome for the user and thus not commonly employed.

When the connection between the user and a computer is through a network, there are many more opportunities for the connection to be "hijacked," that is, for an attacker to inject traffic into the connection or to seize the connection from the legitimate user. In remote-access contexts, it is appropriate to employ explicit measures to ensure continuous authentication. Typically, this continuity is effected using cryptographic means, based on a secret (a cryptographic key) shared between a local computer employed by the user and a remote computer being accessed by the user, for the life of the connection. In this latter context, the technical term for the security service being provided is "data origin authentication." Continuous authentication is generally a result of a transition from initial, individual authentication to data origin authentication. It is the source (origin) of the data sent between two systems (for example, between a user's desktop and a server) that is being authenticated, rather than the user per se. A further technical distinction is sometimes applied. If the authentication mechanism ensures the timeliness of the communication and thus provides protection against attacks that replay old messages, the service is referred to as "peer-entity authentication."

Individual authentication increasingly takes place in the context of information systems, and thus all of the flavors of authentication described above are relevant to this discussion of individual authentication technologies.

BASIC TYPES OF AUTHENTICATION MECHANISMS

By the mid-1970s, three basic classes of authentication technologies for use with information systems had been identified.2 They are colloquially characterized as "something you know, something you have, and something you are" and were discussed abstractly in Chapter 2. This section focuses on specific technological examples of each of these basic classes. In the first class are authentication technologies based on what an individual can memorize (know).

2D.E. Raphael and J.R. Young. Automated Personal Identification. Stanford Research Institute International, 1974; National Bureau of Standards. "Evaluation Techniques for Human Identification." FIPS PUB 48, April 1977.

Passwords and personal identification numbers (PINs) are the canonical examples of such technology. In the "something you have" class are physical objects that are (assumed to be) hard to forge or to alter, such as magnetic-stripe cards, smart cards, SecurID cards, and so on. The object is issued to an identified individual and retained by the individual, so that possession of the object serves to identify the individual. In the last class are biometric authentication technologies, which measure physical and behavioral characteristics of an individual. Each of these classes of authentication technologies has advantages and limitations with regard to security, usability, and cost.

Something You Know

Simple, password-based authentication is the most common form of initial, one-way authentication used in information systems. A user remembers a short string of characters (typically six to eight) and presents the character string to a system for verification when requested. The string of characters is reused many times in authenticating to the same system; hence the passwords are usually referred to as "static." This sort of system is susceptible to many forms of attack. Because users have trouble choosing and remembering values with a significant number of "random" bits, passwords generally are vulnerable to guessing attacks. Unless passwords are protected (usually this means encrypted) for transmission over communication paths, they are subject to interception and subsequent use by a wiretapper. The lack of two-way authentication means that a user can be tricked into revealing a password if he or she connects to an attacker instead of to the desired system.

Password-based authentication is cheap to implement; it may not require any explicit software purchases. It is easy for users and developers to understand, so training costs are low. But, on a life-cycle basis, passwords are expensive for an organization to administer, largely because of the costs of help-desk support for users who forget passwords. Users find passwords difficult to manage as they deal with a growing number of systems that require them. This leads to password reuse (that is, using the same password for multiple systems) and insecure storage of passwords (for example, in unencrypted files on computers). Both of these practices undermine the security of password-based systems. In the former case, if one system is compromised and the passwords used in it become known, other systems in which the user employs the same passwords could be compromised. In the latter case, compromise of the user's desktop or laptop or personal digital assistant (PDA) compromises all of the passwords employed by that user to access many other systems.
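To make the storage point above concrete, the following is a minimal, illustrative sketch (mine, not drawn from the report) of how a verifier can avoid keeping passwords in recoverable form by storing only a salted, iterated hash; the function names and parameters are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical helper: create the record a verifier stores instead of the password itself.
def enroll(password: str) -> dict:
    salt = os.urandom(16)  # a random salt defeats precomputed guessing tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}

# Hypothetical helper: check a presented password against the stored record.
def verify(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 200_000)
    return hmac.compare_digest(candidate, record["digest"])

record = enroll("correct horse battery staple")
assert verify("correct horse battery staple", record)
assert not verify("guess1234", record)
```

Even with such protections on the verifier side, a weak password remains guessable offline if the stored record is exposed, which is why the findings below focus on the limits of static passwords rather than on storage details alone.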

Many of the common recommendations for improving the security of passwords without changing the fundamental mechanisms involved trade one form of insecurity for another. For example, if users are encouraged to choose passwords that are hard to guess, they will probably have to record the passwords somewhere (because these passwords are not easily remembered), making them vulnerable to attacks against the stored passwords. Users are encouraged to change passwords periodically, which also increases the likelihood of recording the passwords in vulnerable locations.

Passwords are easily shared. A user can tell others his or her password; in many situations this is common and even encouraged as a sign of trust.3 Passwords can also be shared inadvertently, as they are often written down in semipublic places.4 This is not always a serious problem if the threat model focuses primarily on outsiders, but insiders represent threats in many contexts, and users often do not consider this type of threat when sharing passwords or recording them.

In principle, passwords can offer a great deal of anonymity. In practice, however, most people cannot remember many different passwords, and they tend to reuse the same passwords for different purposes. Moreover, if allowed to select an identifier as well as a password, the user may choose to use the same values for multiple systems. This makes it potentially easy to link multiple accounts to the same user across system boundaries, even though the base technology does not necessarily impose such linkages. Additionally, the recovery mechanisms for lost passwords generally require one's mother's maiden name, an e-mail address, or some other form of personal information. Thus, the infrastructure for password maintenance often requires sharing other forms of information that is personal and so presumed to be less likely to be forgotten (see Box 5.1). This, too, potentially undermines privacy.

Finding 5.1: Static passwords are the most commonly used form of user authentication, but they are also the source of many system security weaknesses, especially because they are often used inappropriately.

3D. Weirich and M.A. Sasse. "Persuasive Password Security." Proceedings of CHI 2001 Conference on Human Factors in Computing Systems, Seattle, Wash., April 2001.
4J. Nielsen. "Security and Human Factors." Useit.com's Alertbox, November 26, 2000. Accessed on March 26, 2002, at .

Recommendation 5.1: Users should be educated with respect to the weaknesses of static passwords. System designers must consider trade-offs between usability and security when deploying authentication systems that rely on static passwords to ensure that the protections provided are commensurate with the risk and harm from a potential compromise of such an authentication solution. Great care should be taken in the design of systems that rely on static passwords.

More secure authentication technologies can be based on password technology at some levels.

For example, schemes such as encrypted key exchange (EKE)5 and Kerberos (a network authentication protocol)6 also make use of static passwords. These schemes employ sophisticated cryptographic mechanisms and protocols to counter many of the attacks that are effective against static passwords. They typically provide one-way, initial authentication, which may transition to two-way, data-origin and peer-entity authentication for subsequent communication. These are not, per se, password-based authentication technologies. The section "Multifactor Authentication" discusses in more detail authentication protocols of this sort.

Something You Have

The "something you have" class of authentication technologies is based on the possession of some form of physical token that is presumed to be hard to forge or alter. Many forms of physical tokens are used for authentication and for authorization outside the context of information systems, and they exhibit varying degrees of resistance to forgery and alteration. For example, many driver's licenses and credit cards make use of holograms as a deterrent to forgery, relying on visual verification by a human being when they are presented. Yet credit cards are now used extensively for purchases by mail or telephone or over the Web. In these contexts, there is no visual verification of the credential, so the antitamper security mechanisms are ineffective.

In information systems, the security of hardware tokens, to first order, is usually based on the ability of these devices to store, and maybe make direct use of, one or more secret values. Each of these secrets can be much larger and more random than typical passwords, so physical tokens address some of the vulnerabilities, such as guessability, cited above for passwords. Nonetheless, the simplest forms of tokens share some of the same vulnerabilities as passwords; that is, they both deal with static, secret values.

A magnetic-stripe card is an example of a simple physical authentication token. Tokens of this sort offer the primary benefit of storing larger secrets, but they offer almost no protection if the token is lost or stolen, because readers are readily available and can extract all data (secrets) from the magnetic stripe. After a secret is read from the card (even in the context of a legitimate authentication process), the secret is vulnerable in the same ways that a password is (for example, it can be intercepted if transmitted via an insecure communication channel or compromised while held in storage in a computer7).

5S. Bellovin and M. Merritt. "Encrypted Key Exchange: Password-Based Protocols Secure Against Dictionary Attacks." Proceedings of the IEEE Symposium on Security and Privacy, Oakland, Calif., May 1992, pp. 72-84. Available online at .
6More information on Kerberos is available online at .

When a magnetic-stripe card is swiped, all the data can be read from the card and become accessible to malicious software in the system. This possible misuse argues against storing the secrets used to authenticate multiple, distinct identities on one card. (The storage space on cards of this sort also is very limited.) Conversely, requiring a user to carry multiple physical cards to maintain multiple secrets is inconvenient for the user and adds to overall costs. Although magnetic-stripe cards and their readers are not very expensive, computer systems (other than in retail sales contexts) generally do not offer readers as standard equipment, so there are cost barriers to the use of such cards for individual authentication in home or corporate environments. This is a good example of trade-offs among competing goals of security, user convenience, and cost. Sometimes applications for magnetic cards focus on authorization rather than authentication, as in the case of transit fare cards (see Box 5.2).

7In a well-designed system, a secret read from a card would not be retained in storage on the system for very long, and the vulnerability here would be much less than when passwords are stored in a file.

A not-so-obvious form of "something you have" authentication is a Web cookie. The need for cookies arises because the protocol used for Web access, hypertext transfer protocol (HTTP), is "stateless"; HTTP does not provide a reliable way for a server to know that a particular request is coming from the same user as a previous request.8 Pure stateless operation would make it difficult to provide functions such as browsing through an online catalog and collecting a "shopping cart" full of items that the user has decided to purchase. Also, if the user is browsing through information that needs authentication (such as information about the user's bank account), it would be inconvenient if the user had to type a name and password each time a different page was viewed. The solution to this problem, designed by Netscape, is called a "cookie."

A cookie is data given by a Web server to the client to maintain state. Each time a client makes a request to a server, any cookies provided to the client by that server are sent to the server along with the request. Thus, for example, the identifier and password provided by a user for initial authentication may be transformed into a cookie to facilitate continuous authentication of the HTTP session. Sometimes a cookie is a bigger secret than an individual could remember. It may be like a secret stored in a token; in this case, the token is the user's computer, with all the attendant security problems that arise from storing secrets in a file on a computer and the problems that arise if the secret is transmitted across a communication network without encryption. Sometimes a cookie is used to track an individual's authorization in the context of an HTTP session. In such cases, the cookie itself may be a cryptographically protected value in order to prevent a user from tampering with it and thus fooling the Web server.

The use of cookies is often criticized as a mechanism that violates privacy, but it depends on how they are used. If they are used solely to effect session continuity, overcoming the limitations of HTTP, and if the server does not maintain information about the user, they can be a privacy-neutral or even privacy-enhancing technology. But cookies are sometimes used to track a user's movements through multiple sites. Many sites that do not require authentication will set a cookie on the first visit. This lets the site track return visits by presumably the same user, even though the site operators do not know who that person is in a larger context.

8The statelessness of the HTTP protocol implies that each request sent for a page is completely independent of any requests that came before. Thus preserving information from one click to the next requires additional technology (D. Kristol and L. Montulli, "HTTP State Management Mechanism," Request for Comments (RFC) 2965).
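As an illustration of the "cryptographically protected value" mentioned above, here is a minimal sketch (mine, not drawn from the report) of how a server might sign a session cookie so that tampering is detectable; the key, cookie format, and function names are hypothetical.

```python
import hashlib
import hmac

SERVER_KEY = b"long-random-server-secret"  # hypothetical key, held only by the server

def make_cookie(session_id: str) -> str:
    # Append a keyed MAC so the server can detect client-side tampering.
    tag = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def check_cookie(cookie: str) -> str | None:
    session_id, _, tag = cookie.rpartition(".")
    expected = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(tag, expected) else None

cookie = make_cookie("session-42")
assert check_cookie(cookie) == "session-42"
assert check_cookie("session-99." + "0" * 64) is None  # a forged cookie is rejected
```

A scheme like this protects only integrity; it does nothing about interception, which is why cookies that carry authentication state should also travel over an encrypted channel.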

Often, this technique is employed by third-party advertising sites; this use of cookies permits tracking users (and their interests) across multiple Web sites. This is a form of covert identification (see Chapter 1); the user's identity as a Web site visitor and a dossier of his or her activity are compiled and retained, and an identifier in the form of the cookie is assigned. It is not necessary to use cookies to track user activity, however. Even if cookies were banned, it would still be possible to track a user's Web history through other mechanisms such as log files, browser caches, and browser history files.9

Smart cards are credit-card-size tokens that contain memory and, often, a processor. Smart cards that act only as memory devices are essentially as vulnerable as magnetic-stripe cards in terms of extracting the secrets stored on the cards, because readers are widely available, and malicious software can extract stored values from the card. The costs for these cards are somewhat higher than those for magnetic-stripe cards, and smart card readers are more expensive as well, but smart storage cards offer more data storage than magnetic-stripe cards do, and they resist wear better.10 Universal Serial Bus (USB) storage tokens are another hardware storage token format. They have a potential advantage in that many PCs offer USB interfaces, thus eliminating reader cost and availability as barriers to deployment.

Tokens that act only as storage devices may be used to provide initial, one-way authentication analogous to static passwords. However, because these devices can hold larger, "more random" secret values (that is, an arbitrary collection of bits as opposed to something meaningful or mnemonic to a person), they can provide somewhat better security. Increasingly, tokens of this sort are being used to bootstrap continuous data-origin authentication schemes that are implemented using the processing capabilities of a computer to which the token is (locally) connected. (Recall that the authentication taking place here is authenticating a local computer to a remote computer, not a person to a remote computer.) These schemes are often challenge/response protocols, as described below.

9So-called "Web bugs" are another mechanism used to surreptitiously observe an individual's actions online. They are objects, usually one-pixel-square graphic images, embedded within the HTML source on a Web site that cause part of the displayed Web page to be retrieved from another Web site, thereby transmitting information about the requester to a third party. Web bugs are used on a surprisingly large number of sites, primarily for statistical purposes and to gauge the effectiveness of advertising. The information transmitted to the "bugger" includes an IP address and the last site visited and may be linked to cookies to collect individual Web surfing profiles. Web bugs are also embedded in e-mail messages by spammers, who use them to validate live addresses.
10The magnetic stripe can abrade, and the data records on it may be degraded by exposure to magnetic fields.

Since these protocols are executed in the computer, not the token, they also can make use of secrets stored in the computer rather than on separate hardware storage tokens. The term "software token" has been coined to refer to the use of secrets stored on a computer and employed in conjunction with an authentication protocol. Software tokens are not as secure as hardware storage tokens, since the secrets used by the software are held in files in a computer on a long-term basis. At best, these secrets typically are protected by a password. Thus, any attack against the computer that compromises these files allows an attacker to retrieve the stored secrets through password-guessing attacks. In contrast, a well-designed authentication technology that uses a hardware storage token would read the secret(s) stored on the token, use them, then erase them from the computer memory as quickly as possible. These actions present a smaller window of opportunity for the compromise of the secret(s), making the use of hardware storage tokens potentially more secure. The main attraction of software tokens is the low cost; the software may be free or inexpensive, and there is no need to buy token readers.

Some of the earliest hardware authentication tokens11 and some of the most popular ones employed today, such as SecurID (see Box 5.3), do not interface directly with an authentication system. Instead, the user is required to act as an interface, relaying information between an information system and the token. Tokens of this sort typically implement a type of authentication known as algorithmic challenge/response, or just challenge/response. Challenge/response schemes operate much like human-enacted authentication scenarios. Most moviegoers would recognize the words "Halt! Who goes there?" as the beginning of a challenge/response exchange between a guard and an individual approaching a guarded area. The password in such a scenario would usually change daily, consistent with human limitations for adapting to new passwords. In an online authentication technology, the challenge can change every time, making the corresponding response unique in order to thwart eavesdropping attacks.

Challenge/response schemes are a generic technique to prove knowledge of a secret, sometimes even without disclosing it to the party performing the authentication check.12 Challenge/response schemes are analogous to identification, friend or foe (IFF) systems originally developed

11J. Herman, S. Kent, and P. Sevcik. "Personal Authentication System for Access Control to the Defense Data Network." Proceedings of the 15th Annual IEEE Electronics and Aerospace Systems Conference (EASCON), September 1982.
12Research into a class of algorithms known as "zero knowledge algorithms" is moving work forward in this area. As a starting point for what this work involves, see S. Goldwasser, S. Micali, and C. Rackoff, "The Knowledge Complexity of Interactive Proof-Systems," in Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing,
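To make the challenge/response idea above concrete, the following is a minimal, illustrative sketch (mine, not from the report) of a symmetric-key exchange in which the verifier issues a fresh random challenge and the token-side computation returns a keyed response; the shared key and function names are hypothetical.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"secret-provisioned-into-the-token"  # hypothetical shared key

# Verifier side: issue a fresh, unpredictable challenge for each attempt.
def issue_challenge() -> bytes:
    return secrets.token_bytes(16)

# Token/client side: prove knowledge of the secret without sending it.
def respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

# Verifier side: recompute the expected response and compare.
def check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert check(challenge, respond(challenge))
assert not check(issue_challenge(), respond(challenge))  # a replayed response fails a new challenge
```

Because each response is bound to a one-time challenge, an eavesdropper who records one exchange cannot reuse it, which is the property the text describes.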

FIGURE 5.1 Kerberos:
1. User provides a principal (user name) and password to the client system.
2. Client queries the Initial Ticket Service of the Kerberos key distribution center (KDC) for a ticket-granting ticket (TGT), which will allow the client to request tickets for specific services later on. The client's request includes a derivative of the user's password, which the Initial Ticket Service verifies.
3. The KDC's Initial Ticket Service provides the client with a dual-encrypted initial TGT containing a log-in session key. The client system converts the user's password into an encryption key and attempts to decrypt the TGT.
4. The client uses the TGT and the log-in session key to request tickets to specific services from the KDC's Ticket-Granting Service.
5. The Ticket-Granting Service decrypts the TGT with its own key, and then decrypts the service request using the TGT's session key. If decryption is successful on both counts, the Ticket-Granting Service accepts the user's authentication and returns a service ticket and a service-session key (encrypted with the log-in session key) for the targeted service. This result can be cached and reused by the client.
6. The client uses the log-in session key provided in step 3 to decrypt the service ticket, gaining access to the service-session key. This key is then used to request access to the target service. This request is accompanied by an encrypted time stamp as an authenticator.
7. Access to the target service is granted.
Steps 4 through 7 can be repeated when access to other services is needed; service messages can be encrypted with the service-session key. A time limit is built into the log-in session in steps 3 and 5; the user will need to enter the password again when the log-in session has timed out.
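The following is a heavily simplified, illustrative sketch (mine, not part of the report, and not real Kerberos) of the ticket-granting-ticket idea in steps 1 through 3 above: the KDC never sends the password back but wraps a fresh session key once for the Ticket-Granting Service and once under a key derived from the user's password. The names, the toy key derivation, and the use of the `cryptography` package's Fernet primitive are assumptions made for illustration.

```python
import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken

def key_from_password(password: str) -> Fernet:
    # Derive a symmetric key from the user's password (toy derivation, not Kerberos string2key).
    digest = hashlib.sha256(password.encode()).digest()
    return Fernet(base64.urlsafe_b64encode(digest))

TGS_KEY = Fernet(Fernet.generate_key())              # key known only to the Ticket-Granting Service
USER_DB = {"alice": key_from_password("alice-pw")}   # the KDC's copy of the user's long-term key

# Steps 2 and 3 (KDC side): issue a dual-encrypted TGT containing a fresh log-in session key.
def issue_tgt(principal: str) -> bytes:
    session_key = Fernet.generate_key()
    tgt = TGS_KEY.encrypt(principal.encode() + b"|" + session_key)   # readable only by the TGS
    return USER_DB[principal].encrypt(session_key + b"|" + tgt)      # readable only with the password

# Step 3 (client side): convert the password into a key and try to open the KDC's reply.
def client_decrypt(reply: bytes, password: str) -> bytes:
    session_key, _, _tgt = key_from_password(password).decrypt(reply).partition(b"|")
    return session_key

reply = issue_tgt("alice")
print(client_decrypt(reply, "alice-pw"))   # the correct password recovers the log-in session key
try:
    client_decrypt(reply, "wrong-pw")
except InvalidToken:
    print("wrong password cannot decrypt the TGT reply")
```

Real Kerberos adds time stamps, realms, and careful key derivation, but the core pattern, a long-term secret that never crosses the wire and a ticket the client cannot read, is the same.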

Certainly a very large-scale PKI would have very serious privacy implications, as it might provide a single, uniform identifier that an individual would employ in transactions with many different organizations. (See Box 5.6 for a brief description of public key cryptography.) Since each public key certificate carries a clearly visible identifier for the person represented by the certificate, it is easy to link different uses of the same certificate to that person's identity.

The General Services Administration's Access Certificates for Electronic Services (ACES) program, described more fully in Chapter 6 in this report,22 has this flavor for citizen interactions with the U.S. government. In Japan, plans call for the creation of a national-level PKI that would be used not only for individual interactions with the government but also for a wide range of private sector interactions. VeriSign and other so-called trusted third party (TTP) certificate authorities (CAs) in both the United States and Europe promote the notion of using a single public key certificate as the universal personal authenticator for a wide range of transactions.

For example, if citizens were issued a single "interact with the government" public key certificate, it might be relatively easy to determine if, say, the individual who had a reservation to visit Yosemite National Park was the same person who had sought treatment in a Department of Veterans Affairs (VA) hospital for a sexually transmitted disease. By contrast, if the VA and the National Park Service each issued their own certificates, or if they relied on some other decentralized authentication mechanism, such linkage would be harder to establish. Thus, it is not the use of PKI per se (except as it is an authentication system, with all of the privacy implications intrinsic to authentication itself; see Chapters 2, 3, and 7 in this report) but rather the scope of the PKI that influences the privacy of the authentication system.

PKI technology does not intrinsically require large scale or use across multiple domains in order to be useful or cost-effective to deploy. This report has already argued that individuals typically have multiple identities and that most identities are meaningful only in limited contexts, which suggests that many PKIs could arise, each issuing certificates to individuals in a limited context, with an identifier that is meaningful only in that context.23 PKIs of this sort can be privacy-preserving, in contrast to very large-scale PKIs.

22See .
23For another view of PKI, digital certificates, and privacy, see Stefan Brands, Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy, Cambridge, Mass., MIT Press, 2000.


Proposals have been made to use PKIs in a highly decentralized fashion24,25 that supports this notion of multiple identities for an individual and thus supports privacy. However, multiple PKIs might impose burdens on users, who would be required to manage the multitude of certificates that would result. In a sense, this is not too different from the common, current situation in which an individual may hold many physical credentials and has to manage their use. If individuals are going to accept and make use of a multitude of PKIs, software needs to provide a user interface that minimizes the burden on users.

24S. Kent. "How Many Certification Authorities Are Enough?" Proceedings of MILCOM (unclassified papers) 97(1) (November 1997): 61-68.
25S. Kent. "Security Issues in PKI and Certification Authority Design." Advanced Security Technologies in Networking, NATO Science Series, Burke, Va., IOS Press, 2001, pp. 33-52.

Finding 5.3: Public certificate authorities and trusted third parties present significant potential privacy and security concerns.

Finding 5.4: Public key infrastructures have a reputation for being difficult to use and hard to deploy. Current products do little to dispel this notion.

Finding 5.5: Many of the problems that appear to be intrinsic to public key infrastructures (as opposed to specific public key infrastructure products) seem to derive from the scope of the public key infrastructures.

Recommendation 5.3: Public key infrastructures should be limited in scope in order to simplify their deployment and to limit adverse privacy effects. Software such as browsers should provide better support for private (versus public) certificate authorities and for the use of private keys and certificates among multiple computers associated with the same user to facilitate the use of private certificate authorities.

This analysis suggests that authentication technologies that imply some degree of centralization can be operated over a range of scales with vastly differing privacy implications. Thus, neither Kerberos nor PKI intrinsically undermines privacy (beyond the fact that they are authentication systems and as such can affect privacy), although each could be used in a way that would do so. In general, decentralized systems tend to be more preserving of privacy: No single party has access to more than its own transaction records. An individual may use the same password for two different Web sites; for a third party to verify this, the party would need at least the cooperation of both sites and (depending on the precise password storage technology being used) perhaps special-purpose monitoring software on both sites. But if users employ the same identifiers at each site, the potential for privacy violations is significantly increased. This same observation applies to any form of decentralized authentication system.

An essential requirement for preserving privacy in authentication systems is allowing an individual to employ a different identifier when he or she asserts a different identity, for example, in different organizational contexts. The use of different identifiers makes it harder to correlate the individual's activities across systems, which helps preserve privacy. This goal can be achieved with technologies ranging from passwords to PKIs.
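One way to see how distinct identifiers frustrate cross-system correlation is the following illustrative sketch (mine, not a mechanism proposed by the report), in which a user-held secret deterministically derives a different identifier for each organization; the names and format are hypothetical.

```python
import hashlib
import hmac

USER_MASTER_SECRET = b"per-user secret held only by the user"  # hypothetical

def site_identifier(site: str) -> str:
    # Derive a stable identifier for one site that cannot be linked to another site's
    # identifier without knowledge of the master secret.
    return hmac.new(USER_MASTER_SECRET, site.encode(), hashlib.sha256).hexdigest()[:16]

print(site_identifier("parks.example.gov"))     # differs from the identifier below,
print(site_identifier("hospital.example.gov"))  # so the two accounts are hard to correlate
```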

Also, if each system collects less personal information on its users (only what is required to satisfy the requirements of that system), this, too, is privacy-preserving.

Finding 5.6: Core authentication technologies are generally more neutral with respect to privacy than is usually believed. How these technologies are designed, developed, and deployed in systems is what most critically determines their privacy implications.

SECURITY CONSIDERATIONS FOR INDIVIDUAL AUTHENTICATION TECHNOLOGIES

Authentication technologies are often characterized by the security that they offer, specifically in terms of resistance to various types of attack. Many authentication technologies rely on the use of secret values such as passwords, PINs, cryptographic keys, and so on. Secrets may be vulnerable to guessing attacks if they are selected from a set of values that is too small or predictable. Passwords, when selected by individuals and not subject to screening, often exhibit this vulnerability.26 Secrets also may be compromised by computational attacks, even when the secrets are chosen from large sets of values. For example, a large, randomly chosen cryptographic key would generally be immune to guessing attacks. But this key could be used in an authentication protocol in a manner that permits an attacker to perform computations that reveal the value of the key.

If secrets are transmitted across a communication network, from presenter to verifier, as part of an authentication process, they are vulnerable to interception unless otherwise protected (e.g., by encryption). An encrypted communication path is often necessary, but it is not sufficient to protect secrets that are being transmitted. An attacker might masquerade as a system that a user wants to access and thus trick the user into revealing an authentication secret, even though the secret was encrypted en route.27

A secret need not be transmitted across a network to be subject to attacks of this sort.

26In this context, the user (presenter) also is acting as the issuer, and as an issuer is doing a poor job.
27This is an example of why two-way authentication is important. A form of this attack sometimes takes place when a user employs Secure Sockets Layer to encrypt communication between a browser and a Web server. The user may reveal credit card account information (account number, expiration date, shipping address, and so on) to a sham merchant, who then can use this information to carry out unauthorized transactions.

Several years ago, thieves installed a fake ATM in a shopping mall.28 Unsuspecting individuals inserted ATM cards and entered PINs, which were collected by the thieves and used to make unauthorized withdrawals from the users' accounts. Thus, even physical proximity and an ability to see a verifier does not ensure that it is the device it appears to be and one to which authentication information should be presented!

Secret values that are too big for individuals to remember must be stored. The way in which the secrets are stored may make them vulnerable. For example, passwords written on a note stuck on a monitor in the workplace may be observed by other employees, custodial staff, or even visitors. Secret values stored in a file on a computer can be compromised by a wide variety of attacks against the computer, ranging from physical theft to network intrusions. Even secret values stored in hardware dedicated to authentication can be extracted illicitly, with varying degrees of difficulty, depending on the technology used to store the secrets.

Often there is a requirement to prevent individuals from sharing authentication data in support of individual accountability. If authentication data are known to individuals or are easily extracted from storage, then individuals may voluntarily make copies and thus circumvent this system goal (see Chapter 4). Even when secrets are stored in physical tokens, the tokens may be loaned to others, in violation of procedural aspects of a security policy.

Sometimes authentication is based not on the possession of a secret value but on the possession of a physical item that is presumed to be resistant to tampering and forgery. An authentication system may be attacked successfully if the assumptions about its tamper- or forgery-resistance prove to be false. In many cases, the security of the credential is derived from the integrity of the data associated with the credential, rather than from the physical characteristics of the credential. For example, a physical credential might contain digitally signed data attesting to a name and employee ID number. Verification of the credential, and thus authentication of the individual possessing the credential, would be based on successful validation of the digital signature associated with the data. Careful use of public key cryptography can make the digital signature highly secure, protecting against modification of the signed data or creation of new, fake signed data. However, it may be quite feasible to copy the data to additional physical credentials. These duplicate credentials represent a form of forgery.

28In 1993 in Connecticut, a fraudulent ATM was installed in a shopping center. See the RISKS digest for more details; available online at .
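As a small illustration of the signed-credential idea in the preceding paragraph, here is a hedged sketch (mine, not drawn from the report) using the `cryptography` package's Ed25519 signatures; the field layout and names are hypothetical. Note that the check proves only that the data were signed by the issuer, not that the presenter is the person the data describe.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign the attribute data placed on the credential.
issuer_key = Ed25519PrivateKey.generate()
credential_data = b"name=Jane Doe;employee_id=12345"
signature = issuer_key.sign(credential_data)

# Verifier side: validate the signature with the issuer's public key.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, credential_data)
    print("credential data accepted")
except InvalidSignature:
    print("credential data rejected")

# Tampering with the signed data is detected ...
try:
    issuer_public.verify(signature, b"name=Jane Doe;employee_id=99999")
except InvalidSignature:
    print("altered data rejected")
# ... but a bit-for-bit copy of (credential_data, signature) onto another card still verifies,
# which is the duplication risk the text goes on to describe.
```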

Unless the signed data are linked directly to the holder of the credential (for example, by means of biometrics), this sort of forgery by duplication is a security concern.

Biometric authentication also relies on the possession of a physical item that is presumed to be resistant to tampering and forgery, namely some measurable part of an individual's body or behavior. Examples include fingerprints, voiceprints, hand geometry, iris patterns, and so on. Biometric values are not secrets; we leave fingerprints on many items that we touch, our voices and facial images may be recorded, and so on.29 Thus, the security of biometric authentication systems relies extensively on the integrity of the process used to capture the biometric values and on the initial, accurate binding of those values to an identifier. It is critical that later instances of the biometric capture process ensure that it is a real person whose biometric features are being captured; this may mean requiring biometric sensors to be continuously monitored by humans. Biometric authentication systems may be fooled by fake body parts or photographs created to mimic the body parts of real individuals.30 They also may be attacked by capturing the digitized representation of a biometric feature for an individual and injecting it into the system, claiming that the data are a real scan of some biometric feature.

The preceding analysis of the security vulnerabilities of classes of authentication technologies, while accurate, does not determine whether any of these technologies is suitable for use in any specific context. Instead, a candidate technology must be evaluated relative to a perceived threat in order to determine whether the technology is adequately secure.

29"The physical characteristics of a person's voice, its tone and manner, as opposed to the content of a specific conversation, are constantly exposed to the public. Like a man's facial characteristics, or handwriting, his voice is repeatedly produced for others to hear. No person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world." Justice Potter Stewart for the majority in U.S. v. Dionisio, 410 U.S. 1, 1973.
30See T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, "Impact of Artificial Gummy Fingers on Fingerprint Systems," Proceedings of the International Society for Optical Engineering (SPIE) 4677 (January 2002), available online at ; L. Thalheim, J. Krissler, and P. Ziegler, "Biometric Access Protection Devices and Their Programs Put to the Test," c't Magazine 11 (May 21, 2002): 114, available online at ; T. van der Putte and J. Keuning, "Biometrical Fingerprint Recognition: Don't Get Your Fingers Burned," Proceedings of the IFIP TC8/WG8.8 Fourth Working Conference on Smart Card Research and Advanced Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000, pp. 289-303, available online at ; and D. Blackburn, M. Bone, P. Grother, and J. Phillips, Facial Recognition Vendor Test 2000: Evaluation Report, U.S. Department of Defense, January 2001, available online at .

Nonetheless, it is important to understand these vulnerabilities when evaluating the security characteristics of individual authentication technologies.

COST CONSIDERATIONS FOR INDIVIDUAL AUTHENTICATION TECHNOLOGIES

Costs are an important factor in the selection of authentication technologies. These costs take many forms. Capital costs are associated with the acquisition of any hardware or software needed for an authentication technology. The hardware and software costs may be a function of the number of individuals being authenticated, or of the number of points at which authentication takes place, or both. For example, an authentication system that makes use of hardware tokens has a per-user cost, since each user must have his or her own token, and each device that will authenticate the user (for example, each desktop or laptop computer) must be equipped with a reader for the token. A biometric authentication system might typically require readers at each point where individuals are authenticated, and there would be a per-device, not a per-person, cost. A software-based authentication system may impose costs only for each computer, not each individual, although licensing terms directed by a vendor might translate into per-user costs as well.

Many authentication systems also make use of some common infrastructure, which also has associated hardware and software acquisition costs. The infrastructure may be offline and infrequently used, or it may be online and require constant availability. In the online case, it may be necessary to acquire replicated components of the infrastructure, to geographically disperse these components, and to arrange for uninterruptible power supplies, in order to ensure high availability. The key distribution center component of a Kerberos system (see Box 5.7) and the ACE/Server used by the SecurID system are examples of the latter sort of infrastructure. A certificate authority in a PKI is an example of the online type of infrastructure component.

Operation of an authentication system involves labor costs of various types. Help desks must be manned to respond to users' questions and problems. If the system relies on secret values that users are required to remember, the help desk will have to interact with users to reset forgotten secret values. If the system makes use of hardware tokens, provisions will have to be made to replace lost or stolen tokens. Users and system administrators must be trained to work with an authentication technology and with that technology's interaction with varying operating systems and applications. Application developers must learn how to make use of an authentication technology and to integrate it into their applications.

This brief discussion illustrates how complex it can be to evaluate the cost of an individual authentication system. Initial capital outlays are greater for some types of systems; ongoing costs of other types of systems may eventually outweigh these capital outlays. Different contexts merit different levels of security and will tolerate different costs for authentication technology. Thus, there can be no single right answer to the question of how much authentication technology should cost.

CONCLUDING REMARKS

The preceding chapters describe three different conceptual types of authentication (identity, attribute, and individual),

and this chapter focuses on the technologies that go into building an authentication system and some of the technology-related decisions that must be made. Some of these decisions will bear on the privacy implications of the overall system. In general, decentralized systems tend to be more preserving of privacy, but the core authentication technologies that make up authentication systems tend to be privacy-neutral. What matters most in terms of privacy are design, implementation, and policy choices, as described elsewhere in this report.