Like the topics examined in previous chapters, privacy issues take on unique aspects in the electronic environment. The discussion of Scenario 1 below centers on four issues: the obligation to notify authorities of death threats, general provider practices and responsibilities, liability for violating privacy, and ethical and social issues. The later discussion of Scenario 2 focuses on legal privacy protection for subscriber information, informed consent, and blocking unwanted sales pitches. Both scenarios involve a relatively large commercial or university provider. However, a wide range of sizes and types of providers is possible, and so it is unlikely that one type of analysis or policy will fit all.
SCENARIO 1: OPERATOR LEARNS OF POSSIBLE DEATH THREAT
John and Sally "meet" on a bulletin board provided by a commercial network operator and then begin corresponding through e-mail. The operator has a "standard" set of acceptable use policies that prohibit sending (through e-mail) or posting (on either type of bulletin board) any "defamatory, obscene, threatening, sexually explicit, ethnically offensive, or illegal material."1 John receives a private e-mail note from Sally's husband saying, "I know what's going on between you
two, and I'm not putting up with it. It's too late to save her. She's a goner." John notifies the operator and forwards a copy of this note, requesting that the operator notify the local police in Sally's home town (since John has no way of knowing where Sally lives).
Issue: Obligation to Notify Authorities of Death Threats
Panel members generally agreed that authorities should be notified of the threat in Scenario 1, although they expressed different views on the legal ramifications and requirements.
Speaking from the perspective of the common carriers, Kenneth R. Raymond, director of Technology Strategies Analysis for NYNEX Telesector Resources Group, noted that telephone users complaining of annoying or threatening telephone calls are given a special telephone company number and then referred to local law enforcement authorities. Law enforcement officers provide the paperwork needed to obtain information from the telephone company. "In this way, the common carrier provides the referral and the information required by law enforcement authorities, placing the issue squarely in the hands of those that are best able to deal with it."
Although Raymond acknowledged that "Scenario 1 is somewhat different, in that two end users are communicating through a third party," he asserted that authorities still need to be notified; the question is, What information can be obtained in a timely manner? Notification of police is already mandatory in New York, he said, where state law requires that telephone operators who overhear certain types of communications, such as those involving fraud or threats, report them to law enforcement authorities. Thus, said Raymond, the telephone operator's duty to notify police is not a question here. Rather, the question is whether others have a duty, need, or requirement to facilitate the provision of information to help save Sally.
James E. Tobin, vice president for international consumer affairs at American Express Company, agreed with Raymond that authorities need to be notified in Scenario 1. He said the network operator can set up the expectation that information will be kept private unless certain events occur, but "pragmatism tends to override. In fact, I would say that, as important as privacy is, it would be difficult to argue that a life is subservient to that important right of privacy. And I think, leaving legalities aside, there are moral questions that come into play."
Tobin argued that John is the recipient and "owner" of the message2
and therefore could report it to the police directly; then the police could find out from the network operator where Sally lives. Even if the operator came across the threat without John's help (e.g., in performing system maintenance), the police should be notified, Tobin argued. "If there is any question, turn it over to the police and let the chips fall where they may," he said. "If there is a reason to believe that someone's life may be in danger, I think you just have to grit your teeth, call your attorney, and turn it over to the police, or you are going to face the consequences."
Ann Harkins, then chief counsel to the Senate Judiciary Subcommittee on Technology and the Law, said the system operator has no clear duty under the law but would be well advised to react practically, because there would be a significant risk of a lawsuit for negligence if Sally were killed. Therefore, even though the language of the threat is ambiguous, the operator should contact police without disclosing any additional information about John or Sally, Harkins said. She added that it was for all practical purposes irrelevant that the rules of the system prohibited threats, because the life-threatening nature of the information was the dominating issue.
One forum participant argued that the operator should do nothing. The operator's position differs from that of John, who is arguably the owner of the message, and the law is not clear as to the operator's obligation, according to Marc Rotenberg of Computer Professionals for Social Responsibility. Using the telephone analogy, the operator is acting as a common carrier and thus, according to the Electronic Communications Privacy Act (ECPA), should not interfere, Rotenberg said, emphasizing that the service is supposed to be confidential. He added that the situation would be very different if the message involved had been posted in a publicly accessible area. Finally, if the operator had monitored the communications directly (rather than being told of the contents by one of the participants), the operator might still be exposed to liability for that action even if John were ultimately convicted of wrongdoing.
As to whether the operator would be liable for failing to act if Sally were killed, Rotenberg suggested that the duty is not clear enough to invoke liability. He argued that it may not be easy to identify the source of trouble; perhaps, for example, John is the guilty party, having
set up the situation to find out where Sally lives so that he can harm her.
Issue: General Provider Practices and Responsibilities
The mere existence of system rules does not guarantee that the system operator will know the content of messages, as Raymond pointed out. One approach, he noted, would be to remove from the network those users about whom the system operator received complaints from other users. "[I]f there is a complaint that somebody does something that is inappropriate, and you then investigate and find [that] … it's not something that requires law enforcement or anything like that, you would then tell that customer that he or she is no longer welcomed on this service."
Raymond further wondered how knowledge of questionable or potentially reportable activities might reach system operators in the absence of a specific report by a user (as was made in the scenario). Because supervising user activities and message traffic would be difficult and costly, "I would never be monitoring this traffic in any way, shape, or form, and I can't imagine how you would do it," he said. The same problem affects network service providers, who carry such large amounts of message traffic that it is for the most part impractical to monitor the contents of all messages transmitted. In such cases, the only practical approach is for the network service provider to establish rules and guidelines on how its services are to be used and to rely on end-user action to bring questionable activities to its attention and/or to monitor public areas of message postings. Once the provider's attention is drawn to a questionable activity, the provider is in a position to take action (such as contacting the appropriate law enforcement authorities).
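The complaint-driven process Raymond describes (rules stated up front, end-user reports rather than monitoring, provider action only once a report arrives) can be sketched as a simple triage routine. The categories and responses below are invented for illustration and imply nothing about any real provider's procedures.

```python
def handle_report(report: dict) -> str:
    """Triage a user report under a complaint-driven model: the provider
    does not monitor traffic, but acts once a report reaches it."""
    if report.get("possible_crime"):       # e.g., an apparent death threat
        return "refer to law enforcement"
    if report.get("violates_use_policy"):  # inappropriate but not criminal
        return "warn or remove the user"
    return "no action"

# A report of apparent criminal activity is escalated; a mere
# acceptable-use violation is handled by the provider itself.
escalation = handle_report({"possible_crime": True})
in_house = handle_report({"violates_use_policy": True})
```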
Other forum participants expressed various views on how operators should handle these situations. David Hughes of Old Colorado City Communications said he would "communicate with the guy that's making the threat and say, 'You made a threat that I interpret as a death threat. Do you understand the consequences of that?'" Vinton G. Cerf, president of the Internet Society, agreed that operators should not specify responses in advance, explaining that he would "rather say nothing and hold the ethical sense internally, and when a situation arises, judge that myself."
On the other hand, Murray Turoff of the New Jersey Institute of Technology said, "I would like to argue once again for more responsibility on the other side. If you are going to have any policies on what sort of traffic you are going to allow or not allow, you should
have clear-cut policies on what you're going to do when those policies are violated."
Issue: Liability for Violating Privacy
A number of participants suggested that the current legal regime, including the ECPA, probably would protect the operator from liability for violation of privacy if he or she notified the police. Harkins argued that Section 2702 (which states that an operator may report such a communication to the police but is not obligated to do so) would allow John to dispose of the message as he saw fit without liability; in particular, he could be regarded as consenting explicitly to the operator's disclosure of the message to the police, and both John and the operator would be held blameless for that act. Justice Department prosecutor Scott Charney said Section 2702 states that a service provider who inadvertently learns the contents of a communication that pertains to the commission of a crime may give it to the police; the statute thus may preempt any civil liability for violation of privacy.
The protection might hold even if the threat turned out to be a hoax, according to Rotenberg. If the operator notified police and the husband were arrested, but the note turned out to be a prank played by Sally, then the operator probably still would not be liable, Rotenberg said. He said a good samaritan law3 would protect individuals who offer help with good intentions but inadvertently cause some harm.
Michael Godwin of the Electronic Frontier Foundation and David Hughes praised Section 2702 of the ECPA. Godwin said that "it strikes a balance between the legal traditions of not imposing an obligation on individuals to prevent crime with benevolence and an allowance for someone who is a good samaritan, who does inadvertently discover that there is crime and wants to disclose it pursuant to the ethical principle just articulated." Hughes called this distinction "extremely important" and desirable.
Issue: Ethical and Social Issues
Oliver Reed Smoot, Jr., an attorney who serves as executive vice
president and treasurer of the Computer and Business Equipment Manufacturers Association, said that the threat of murder clearly outweighs privacy interests in Scenario 1, but that a more ambiguous situation (i.e., one that involved a lesser threat) might be more difficult to handle. He said the legal system provides a formal, disinterested process for resolving conflicting values. In particular, Smoot noted that under the criminal statute, "in order for the [operator's] obligation to be created, what has to happen is that a prosecutor has to decide that this is a serious enough problem to get a subpoena, or perhaps convince a magistrate that a warrant is appropriate. And I think maybe that's a better way to handle this, to resolve the conflicting values and to handle the ambiguity, than it is just to say, 'Oh well, the system operator obviously has an ethical obligation to disregard the privacy interest and pass this along,' because that's not at all obvious to me."4
Forum participants offered several views of the relationship between ethics and the law. Patrick Sullivan, executive director of the Computer Ethics Institute, suggested that ethics should take precedence in Scenario 1 and that any action taken should be the same as in a nonelectronic environment. "In terms of the ethics of the scenario, there is really no question," he said. "There is an obligation to warn, and one isn't going to split hairs over linguistic interpretations of the message. There will be a duty to prevent harm. …"
But Alan Westin of Columbia University contended that legal questions still must be answered. "So you frame [the question] first as an ethical dilemma, but in a litigious society in which numbers of people have said that whoever has the deepest pockets is going to face the lawsuit, and the system operator is the one typically with the deepest pockets in this situation, you can't disconnect the question of what is the legal liability from what is the ethical choice."
Attorney David Johnson suggested that the real issue, whether in an ethical or a legal context, is not what actions are required but rather whether the operator considers the situation carefully. "The ultimate duty may be to simply pay attention to the question and be
thoughtful about it," Johnson said. "… Stone's book5 on corporate responsibility in a sense makes the point that we can probably, in an organized context like this … go a long way by simply having a sense of procedures for deliberating thoughtfully on these questions." Johnson further argued that it would be appropriate for a defendant to assert in court that "even though he made a bad call, he was thoughtful about it."
SCENARIO 2: PROVIDER SELLS USER PROFILES TO MERCHANDISERS
A commercial network operator collects information about the interests and purchases of its users by keeping track of the forums and bulletin boards they use and the purchases they make; it then sells this information to other merchandisers. Users are not asked if they wish to participate in the redistribution of such information.
Issue: Legal Privacy Protection For Subscriber Information
The redistribution of information in Scenario 2 is neither permitted nor forbidden under current law, Harkins said. She asserted that the ECPA allows companies to sell general information about subscribers to electronic services but not the contents of their communications.
By contrast, the law is more protective of the privacy of cable subscribers. The Cable Communications Policy Act of 1984 states that cable operators must notify subscribers annually, in writing, concerning what personal information is collected and how it is used. Subscribers may request that their information not be used for these purposes. George Perry of the Prodigy Services Company noted that, ironically, this provision would cover users of emerging interactive cable services but not users of similar services provided through personal computers and telephones.
Rotenberg advocated applying the cable privacy model to electronic commercial transactions. He argued that intrusion can be greater in the electronic world than outside it, noting that mere participation in a forum or discussion may be disclosed. "The problem is not only
that there are commercial transactions which are generated through the use of the network disclosed to others, but that one's participation in a forum or discussion group [can be made available]. The facts of those exchanges or those inquiries may be disclosed to others, and that takes concerns about privacy to a higher level."
Tobin said he would support use of the cable privacy model for electronic networks but not necessarily a legal requirement for such notification. He did stress that apart from legal constraints, companies have a commercial incentive to avoid annoying their customers, who may take their business elsewhere. Providers may use information about their customers so long as the latter know about and approve it, Tobin said. "They may not find out," he added. "And if you want to take that chance as a network operator, then you take that chance; but you could end up with some serious trouble on your hands. …" Despite such incentives, however, what types of behavior constitute "annoyance" of customers is the subject of much debate.
Harkins said privacy notice and consent mechanisms are not likely to be legislated for electronic networks because the issues are so complicated and cut across so many jurisdictional lines; five or six Senate committees, for instance, deal with privacy statutes. "The instinct, at least [from] what I've seen in the last 5 years, is to push and put a lot of pressure on industry to have its own watchdog system …," she said.
Issue: Informed Consent
Participants generally agreed that users should have control over dissemination of their personal information, but they also suggested that this protection may be difficult to assure. Westin asserted that individuals should be allowed to make their own decisions and choices about their personal privacy. He suggested that electronic networks can place unique pressures on privacy, in that electronic networks may be able to compile a "richer, more detailed profile" of a user than can individual companies preparing one-dimensional lists of their customers.
Rotenberg argued that the public should be able to choose in advance whether their personal information may be disclosed, just as they do in everyday life, and that users should set their own risk levels, just as people do in the physical world. "But unfortunately, in the electronic world, the default is in the wrong setting, particularly in this area of electronic communications privacy," Rotenberg said. "The default [setting for] the transactional data is that it may be
publicly disclosed, sold, [and] collected unless someone says something to the contrary. So I would pull us back in the other direction and give people a choice about disclosing data [i.e., an opt-in choice6]." Providers should assume all information is sensitive and therefore provide increased protection, Rotenberg said.
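Rotenberg's point about defaults can be made concrete with a small sketch. The field and function names here are invented for illustration and imply nothing about any real provider's system; the substantive policy question is simply which value a subscriber's data-sharing flag starts with.

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    name: str
    # Whether transactional data about this subscriber may be disclosed.
    share_transactional_data: bool

def new_subscriber(name: str, opt_in_regime: bool) -> Subscriber:
    # Opt-out regime (the status quo Rotenberg criticizes): data may be
    # disclosed unless the subscriber says something to the contrary.
    # Opt-in regime (his proposal): data stays private unless the
    # subscriber affirmatively consents.
    return Subscriber(name, share_transactional_data=not opt_in_regime)

status_quo_user = new_subscriber("sally", opt_in_regime=False)
opt_in_user = new_subscriber("john", opt_in_regime=True)
```

The code paths are identical; only the initial value of the flag differs, which is exactly why Rotenberg frames the issue as "the default is in the wrong setting."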
The need to obtain informed consent varies in salience and urgency, depending on the nature of the information and society's attitudes about it, Tobin said. "Opt-in" consent probably is not needed in direct marketing, he said, because the risk of harm is minimal if a provider releases, for example, information about who subscribes to Time magazine.7 But, he argued, "as you move down the spectrum toward medical information, certainly something as sensitive as being HIV-positive or [one's] sexual lifestyle—the potential damage to somebody of that information getting into the wrong hands or even hands that they don't want it in is so great that I think that consent in advance, probably in writing, is required if you are going to do it at all."
Obtaining informed consent is not as easy as might be expected, Tobin said. According to Tobin, at American Express,
every 6 months there is an attitudinal survey done of customers. We actually tell them what we do with the information, but in a very superficial way. We tell them what it results in. We tell them we take information on what they purchase and merge it with other commercially available information, including some credit bureau information, [and] put their names on mailing lists to offer them products and services in which we believe they may be interested. And so what we try to tell them is what happens. We don't tell them how it happens. We also try to elicit from them, in their own words, what they expect, and we find that they basically understand that we take subscription lists. Americans actually have a pretty
good understanding of what happens in very broad terms. But the level of sensitivity to it is surprisingly lower than you would expect.
At the same time, Tobin felt that the definitions of "informed" and "consent" were debatable, and that the potential damage resulting from collection and use of information is difficult to determine. "We don't want to make that decision [about the use of information]. … We don't want Congress to make that decision. We want the customer to make that decision," Tobin said. "But the trick here is how do you explain to these people the complexity of information that is available to us, what we do with it, and what they get out of it? … I'm convinced that the vast majority of our customers don't really understand technically what's going on."
Cerf said individuals tend to value convenience over privacy.
I think, judging from my own behavior and that of others, that we have a remarkable ability to ignore risk in favor of convenience. And so when you're confronted even with the understanding of how much information is floating around about your personal habits, the convenience of ordering things over the telephone with your credit card or sending something on e-mail … almost invariably overcomes most people's reason for concern about privacy and other kinds of risks. And so this raises an interesting question about whether we have to contemplate saving people from themselves. I have no position at this point, but I raise it as a very interesting subject.
Sara Kiesler of Carnegie Mellon University pointed out that "the way people perceive risks is not always in direct parallel with the actual risk. That is, people will get very concerned about things like getting brain tumors from using cellular phones, and they will probably overevaluate the probability of damage. … It's difficult for people to consider negative consequences that they can't actually see in their minds, and the sharing of information about them is hard to visualize. [As a result, people do not ask themselves] what bad things could happen to them as a result of people knowing all these things about them." This, she said, explains why convenience overrides risk: people are unable to visualize the negative consequences. Kiesler also felt this phenomenon explained why people tend to habituate to warnings (e.g., a message displayed daily on a user's computer stating that electronic mail is not private) and to behave as though those warnings were never displayed.8
The result is what Kiesler called the illusion of privacy. "When you don't have cues around you about who else is there," Kiesler explained, "and especially in electronic forums where you have lurkers [people who only receive messages and never send them] and so on, what happens is that you have an illusion that you are much more private than you really are. … [People] continue to say things on networks that they wouldn't say otherwise, even though you warn them." Perry concurred, saying many PRODIGY users are new to computers and do not understand the difference between public bulletin board notes and private e-mail. "And even after you explain it to them, they don't understand. So it really is a very critical problem because the messages look so much alike and users treat them kind of alike." William Dutton of the University of Southern California questioned whether there is any legitimate expectation of privacy on electronic networks yet, in that the courts have ruled out any such expectation on cordless telephones.9
Others felt that user education was a reasonable option, even while advocating individual choice about disclosure. Raymond noted that personal communications services will keep track of not only telephone and account numbers but also location data. "Of necessity, additional information will be in that database," he said, "and I think the informed-choice kind of approach—educate consumers as to what goes in there and how it is going to be used—is one reasonable way to handle it, because then they can decide whether it is worth it to have that service or whether they would prefer not having that kind of information collected."
Steven Metalitz, vice president and counsel to the Information Industry Association, argued that the issue went beyond "convenience versus risk" to the broader question of "overall benefit versus risk," posing the possibility that a user might get "cheaper use of a system if he consented or if he did not opt out of your information collection." Metalitz was troubled by "people sitting in this room deciding that the public at large is not sufficiently educated about this. …
[W]e have to avoid the temptation to conclude that other people just can't figure out what their privacy is worth, or … that we have to decide for them or put our thumb on the scales for them as to how they should decide these things."
Issue: Blocking Unwanted Sales Pitches
Some forum participants believed that current techniques for blocking unwanted sales pitches, whether in physical or electronic form, are inadequate. For the sake of discussion, Westin suggested adapting an approach used by the Direct Marketing Association (DMA), which maintains a list of consumers who ask to be removed from mailing lists: each electronic network could maintain a master list of users who do not wish their personal information circulated, and these lists could be shared among the networks.
A number of speakers, however, felt the DMA system did not apply to blocking sales calls or the sending of junk e-mail. They also noted other problems. Allan Adler noted that use of the DMA list is voluntary and applies only to national—not regional or local—mailers. Tobin said the list is effective only if a mailer uses the exact name and address provided by the consumer. Sullivan noted that consumers who are unaware of the list appear by default to want the mailings, when in fact they may not. Mitchell Kapor called the DMA list an example of an ineffective private-sector privacy code, arguing that the association does not enforce it.
Adler noted that the Congress, recognizing the problems with the DMA list, enacted the Telephone Consumer Protection Act of 1991 (Public Law 102-243), which calls for the Federal Communications Commission (FCC) to establish a method whereby consumers may elude unsolicited telephone sales calls. The sponsors of the law originally wanted to establish a national database of consumers who do not want to receive unsolicited sales calls, but merchandisers opposed this idea. So the FCC decided that every solicitor must keep an in-house "do not call" list, Adler said. If consumers who ask to be placed on the list are called again within a certain time period, they may bring a civil suit, Adler said.
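The in-house list the FCC rule calls for can be sketched in a few lines. This is only an illustration of the mechanism, not of the statute's actual requirements (time periods, record-keeping, and liability rules are omitted), and it normalizes numbers before matching to avoid the exact-match weakness Tobin described for the DMA mail list.

```python
def normalize(number: str) -> str:
    """Reduce a phone number to its digits so that formatting
    differences, e.g. '(202) 555-0100' vs. '202-555-0100', do not
    defeat the match."""
    return "".join(ch for ch in number if ch.isdigit())

class DoNotCallList:
    """A solicitor's in-house "do not call" suppression list."""

    def __init__(self):
        self._blocked = set()

    def add(self, number: str) -> None:
        self._blocked.add(normalize(number))

    def may_call(self, number: str) -> bool:
        return normalize(number) not in self._blocked

dnc = DoNotCallList()
dnc.add("(202) 555-0100")   # consumer asks to be placed on the list
```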
The electronic environment presents both unique problems and singular solutions regarding sales pitches, which forum participants indicated are an increasing problem. On the negative side, Cerf noted that whereas unwanted junk mail may simply be thrown out, unwanted "junk e-mail" could clog an electronic mailbox, blocking messages of higher priority from entering. "On some commercial e-mail services, the size of the mailbox is of some finite length. It runs out
after 100 messages and you get this little message that comes back saying there is no room left. If my e-mail runs out of space and the important messages don't make it, I'm going to get annoyed."
On the positive side, several speakers suggested that marketplace mechanisms could resolve these problems. Turoff proposed that merchandisers be required to pay consumers to whom they advertise, with the price set by the recipients. He also suggested that users employ a screening feature to delete material from a particular mailer. David Farber of the University of Pennsylvania, for instance, runs an "invisible script" that quietly discards e-mail from specific individuals and specific bins.
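The screening feature Turoff suggested, and the kind of "invisible script" attributed to Farber, can be sketched as follows. The addresses are invented for illustration and the 100-message capacity is taken from Cerf's example; this is not Farber's actual script.

```python
MAILBOX_CAPACITY = 100                       # Cerf's finite-mailbox example
BLOCKED_SENDERS = {"junkmail@example.com"}   # hypothetical blocked mailer

def deliver(mailbox: list, message: dict) -> bool:
    """Append the message unless its sender is blocked or the box is
    full. Returns True if the message was delivered."""
    if message["sender"] in BLOCKED_SENDERS:
        return False                         # quietly discarded
    if len(mailbox) >= MAILBOX_CAPACITY:
        return False                         # "there is no room left"
    mailbox.append(message)
    return True

inbox = []
deliver(inbox, {"sender": "colleague@example.edu", "body": "meeting at 3"})
deliver(inbox, {"sender": "junkmail@example.com", "body": "BUY NOW"})
```

Because the blocked message is discarded before delivery, it can no longer crowd higher-priority messages out of the finite mailbox.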
First, the dialogue revealed that many providers and attorneys agreed on several important points, including the efficacy of the ECPA (in particular Section 2702, codified in Title 18 of the United States Code) and the prudence of allowing users to make their own decisions and choices regarding their personal privacy. Second, it was emphasized that models of ethical practice with regard to privacy can be drawn from networked communities.
On the more general issues about privacy, it is clear that such concerns are not new, and Americans have a strong tradition of wishing to be left alone. At the same time, the right to privacy has never been an absolute one, and its costs and benefits must be balanced against those of public disclosure and/or surveillance.10 Moreover, electronic networks have a number of characteristics that magnify some of the traditional concerns.
A legal regime has developed around information stored on paper. How this regime should be interpreted when information is stored electronically is problematic. For example, in a technological environment in which the content of a document can be assembled instantaneously from a multitude of sources, what counts as a "record"? What about institutions that deliberately refrain from the collection
of certain types of electronic information to forestall requests for such records? Many electronic networks produce vast and unique traces of both verbatim communications and transactional data (e.g., who called whom when, network usage statistics, user credit histories). What traces count as a record?
A second issue is that the many network interconnections give rise to potential conflicts across international boundaries. Different nation-states have different approaches to the protection of privacy (e.g., the United States takes a sectoral case-by-case approach, protecting different types of information in distinct ways, whereas Europeans focus on umbrella rules covering all personal information, according to Metalitz), and there is no international privacy law.11
A third issue is that the meaning of the legal term "the reasonable expectation of privacy" (the foundation for U.S. privacy law as enunciated by the Supreme Court in 1967) is not clear in the electronic environment. For example, it is clear that electronic networks tend to encourage frank if not imprudent speech, thus magnifying the confusion over what circumstances provide a "reasonable expectation of privacy." Even today, conversations on cordless telephones tied to a base station are not afforded the same legal protection from eavesdropping as cellular telephones that are truly mobile. The perceptions of the "man on the street" about what is "fair" may conflict with basic assumptions regarding the conduct of business. Matters such as the extent of computer literacy in the public may affect profoundly what constitutes reasonable expectations of privacy.
Given all the dilemmas, a variety of new measures—both technical and legal—likely will be needed to ensure electronic privacy and security. Technology will not be able to provide perfect privacy or security, but technological fixes can provide preventive measures to help reduce the range of risks that are faced by users of electronic networks. Legal measures will be necessary to help deal (remedially, if nothing else) with those circumstances in which it is impractical or undesirable to use the technical measures.