
Engaging Privacy and Information Technology in a Digital Age (2007)

Chapter: Part III Privacy in Context, 6 Privacy and Organizations

Suggested Citation:"Part III Privacy in Context, 6 Privacy and Organizations." National Research Council. 2007. Engaging Privacy and Information Technology in a Digital Age. Washington, DC: The National Academies Press. doi: 10.17226/11896.

Part III
Privacy in Context

Chapters 2-5 sketch out the intellectual tools with which the committee addresses privacy in specific contexts. As noted in Chapter 1, privacy in the abstract is an ill-defined concept. However, privacy in specific contexts is much easier to define and talk about.

Chapter 6 (“Privacy and Organizations”) discusses how organizations of various kinds use personal information and looks at some of the implications for privacy of such use. In particular, Chapter 6 focuses on the education sector (both K-12 and university), financial institutions, retail establishments, data brokers and aggregators, nonprofit institutions and charities, mass media and publishing companies, and statistical and research agencies. The diversity of these sectors suggests that the interaction of technology and privacy is not an issue that can be limited to only some isolated areas of our society. What this quick look at various sectors of society makes clear is that it is often not the gathering of information itself that is perceived as a violation of privacy, but rather a specific use of that information. In addition, a number of generic questions are suggested by the privacy issues these domains raise, questions that set the stage for a more detailed analysis of three important sectors: health care, libraries, and law enforcement and national security.

Chapter 7 (“Health and Medical Privacy”) notes the importance of personal information for providing effective health care, and it describes four approaches for protecting such information: industry self-regulation, legislation and regulation, consumer/patient awareness and self-help, and official advocacy. The chapter also notes that issues related to the privacy of health information will become more salient and important as electronic medical records become more widely deployed.

Chapter 8 (“Libraries and Privacy”) addresses the long tradition of sensitivity to privacy issues in the library community. In addition, the library community has been an early adopter of information technology as a way of furthering its mission, and thus the impacts of technological change have manifested themselves very clearly in the library context. Thus, many of the most basic questions about policy can be seen in libraries and among librarians.

Chapter 9 (“Privacy, Law Enforcement, and National Security”) addresses some of the starkest polarities in the privacy debate. Since the terrorist attacks on September 11, 2001, some of the loudest arguments have been heard about the appropriate balance between counterterrorism efforts and privacy. Although this is not a new tension, new information technologies make it possible for privacy to be eroded far more extensively than ever before. Chapter 9 identifies a number of reasons that citizens might be concerned about privacy in a law enforcement/national security context. First, these individuals may be concerned that such information might be abused. Second, government knowledge about certain activities often has a chilling effect on individuals’ participation in such activities, even if such activities are entirely legal. Third, many individuals do not want government authorities to collect personal information simply on the theory that such collection raises their profiles and makes it more likely that they might be erroneously singled out in some manner to their detriment even if they have done nothing illegal.


6
Privacy and Organizations

Privacy is an issue in many sectors of society. This report addresses privacy concerns in depth in the areas of health care (Chapter 7), libraries (Chapter 8), and law enforcement and national security (Chapter 9). However, tensions between the access to information that technology makes possible and the privacy of the individual are not restricted to those clearly sensitive areas. In recent years, technology has transformed organizations and institutional practice across the board. Our lives are intimately tied to organizations and institutions that gather, collate, and use information about us. Whether those organizations are for-profit corporations, educational institutions, media and content providers, or not-for-profit organizations, they all gather, store, and use information about their customers, users, and clients.

This chapter presents a brief overview of several institutional sectors and their use of information and information technology particularly as that use relates to the privacy of the individuals involved. It points out some of the difficult tradeoffs required in applying the technology and shows how concerns about privacy can arise even when the technology user’s intent is to help the customer or client. The purpose of this chapter is not to examine any of the areas in depth or to solve any of the problems being discussed, but rather to indicate the difficulty of sorting them out and addressing them even when it would seem that answers should be easy to find.


6.1
INSTITUTIONAL USE OF INFORMATION

Information is an enabler for modern businesses and other institutions. Because technology allows increasing amounts of information to be brought to bear, it raises the possibility that decision making can be improved.1 For example, an insurance company can use more and better information about customers as a basis for improving the judgments it makes about risks. A retail firm can use more and better information about customers to target advertising to those who are most likely to respond to it. Businesses and organizations that know more about their customers are better able to offer enhanced services, or even completely new services.

At the same time, the personal information collected about customers can be used for many different purposes, and information that was initially gathered for a benign purpose can be used in different ways for other purposes—sometimes with undesirable or inappropriate results. For example, information gathered to study the correlations between the financial well-being of residents and their place of residence can be used for redlining by lenders, that is, denying financial services to people based solely on the shared attribute of where they live rather than on a full consideration of their individual situations. Information gathered to keep patients informed about new therapies can be misused to market antidepressants. The same techniques that can be used to offer higher levels of service can also be used to target products to particularly susceptible individuals, to generate and send larger quantities of both electronic and physical junk mail, or to deny financial or other kinds of services inappropriately to individuals based on their place of residence, their ethnicity, or a host of other factors that are only weakly correlated, if at all, with increased risk.

A key aspect of the use of information by businesses is the practice of record linkage: linking databases on individuals that were collected for apparently unrelated purposes. For example, a small amount of information collected at the drugstore about your purchase can become quite valuable to businesses if linked to your medical records, which contain much more information.
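The drugstore example can be made concrete with a minimal sketch of record linkage. All records, names, and field names below are invented for illustration; real linkage systems handle misspellings, missing fields, and probabilistic matches, which this sketch does not.

```python
# Illustrative sketch (not from the report): linking two record sets
# on shared quasi-identifier fields. All data here are invented.

def link_records(purchases, medical, key_fields=("name", "dob")):
    """Join two lists of dicts on the given quasi-identifier fields."""
    index = {tuple(rec[f] for f in key_fields): rec for rec in medical}
    linked = []
    for rec in purchases:
        key = tuple(rec[f] for f in key_fields)
        if key in index:
            # Merge the two records; the combined profile is far richer
            # than either source on its own.
            linked.append({**rec, **index[key]})
    return linked

purchases = [{"name": "J. Doe", "dob": "1970-03-14",
              "item": "glucose test strips"}]
medical = [{"name": "J. Doe", "dob": "1970-03-14",
            "condition": "diabetes"}]

print(link_records(purchases, medical))
```

Even this toy join shows the privacy concern: neither dataset alone reveals that a named person buying test strips has diabetes, but the linked record does.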

1

Of course, technology-based presentations of information can hide inadequacies in that information. Beyond the dangers of drowning in the data, the information age offers an abundance of unsubstantiated theories and bogus data, and unquestioning faith in “the computer that said so” has been the downfall of many a decision maker. Data entries and the means of analyzing them are not given in nature but reflect human decisions at a multitude of levels. Interesting though these considerations are, they are unfortunately outside the scope of this report.

As a point of departure, consider the issue of privacy as it relates to businesses linking records on their customers. Using the anchoring vignette approach described in Section 2.4 (see Box 2.2), a possible survey question might be, How much do businesses respect [your/“Name’s”] privacy? Here are a number of possible vignettes:

  1. [Jerry] signs up for an account at the local video store. The rental record is shared with the affiliated video store in a neighboring city.

  2. [Suzanne] signs up for an account at the local video store. The store shares her rental record with the affiliated local music store, which begins to send [Suzanne] coupons for soundtrack CDs of movies that she has rented.

  3. [Roderick] sees a doctor to treat an illness. The doctor calls in the prescription to the pharmacy via a shared database. [Roderick] begins to receive advertisements from the pharmacy for drugs that treat his illness.

  4. [Anne’s] bank shares information about its customers’ income and spending habits, including those of [Anne], with its investment division. [Anne] now regularly receives investment advertisements related to her recent purchases.

  5. A parent company creates a database with consumer information obtained from its subsidiary companies. The database contains information on people’s spending habits at grocery stores, cable TV usage, telephone calls, and the Internet surfing of many consumers, including [Marie]. The company offers this information for free on its Web site, although in a de-identified form.
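The fifth vignette turns on release “in a de-identified form.” As a purely illustrative sketch (the fields and data below are invented), stripping direct identifiers does not necessarily remove linkability, because quasi-identifiers remain in the released records.

```python
# Illustrative sketch (not from the report): naive "de-identification"
# that drops direct identifiers but keeps everything else.
# All data here are invented.

DIRECT_IDENTIFIERS = {"name", "phone"}

def deidentify(records):
    """Remove direct identifier fields; leave all other fields intact."""
    return [{k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
            for rec in records]

released = deidentify([
    {"name": "M. Smith", "phone": "555-0100",
     "zip": "02138", "birth_date": "1975-07-22", "sex": "F",
     "grocery_spend": 412.50},
])

print(released[0])  # no name or phone, but zip/birth_date/sex remain
```

The released record names no one, yet the combination of ZIP code, birth date, and sex is often distinctive enough that matching against a public roster (such as a voter list) could re-identify the person, which is why de-identified release does not automatically settle the privacy question.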

As indicated in the above vignettes, information originally collected for one reason can be used for many different reasons—a practice known as repurposing. Individuals may be unaware of how their information is used or what the fine print they supposedly have agreed to actually means.2 The information collector may be disingenuous in describing how information will be used. Information may be fraudulently obtained (as in cases of identity theft) and used for purposes clearly unanticipated by its original provider. And, in many instances, a new use for information occurs simply because a clever individual or an innovative organization discovers or invents a way that information already collected and on file can be used in some novel way to solve some problem or to advance some interest.3

2

The “fine print” of published privacy policies is a well-known issue. Many privacy policies are written in a way that requires college-level reading skills to interpret. See, for example, Mark Hochhauser, “Lost in the Fine Print: Readability of Financial Privacy Notices,” Privacy Rights Clearinghouse, July 2001, available at http://www.privacyrights.org/ar/GLB-Reading.htm.
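Footnote 2 refers to reading-level measures for privacy notices. One widely used measure is the Flesch-Kincaid grade level; the sketch below is only indicative, since it counts syllables with a crude vowel-group heuristic, and the two sample notices are invented.

```python
# Illustrative sketch (not from the report): Flesch-Kincaid grade level,
# a common readability formula applied to privacy-notice text.
# The syllable counter is a rough heuristic, so grades are approximate.

import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (minimum 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

plain = "We share your data. You can opt out."
legalese = ("Notwithstanding the foregoing, personally identifiable "
            "information may be disseminated to affiliated organizations "
            "for legitimate operational purposes absent explicit objection.")

print(fk_grade(plain), fk_grade(legalese))
```

Run on these two invented notices, the dense legalistic sentence scores far above a college reading level while the plain version scores in the early grades, which is the pattern the readability studies describe.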

The repurposing of information is not necessarily wrong. But it often happens that information collected and used in one domain, and expected by the individual to be used in that domain, turns up in another. Even if such use is entirely legal, the surprise and sometimes shock that individuals feel on learning about this use of information about them can generate not only personal angst but also political issues that surface in democratic elections, judicial litigation, and public debate.

Similar issues arise in an Internet context. Consider the issue of privacy as it relates to businesses and the behavior of their Internet customers. Using the anchoring vignette approach, a possible survey question might be, How much privacy [do you/does “Name”] have about information that [you/he/she] disclose[s] while surfing the Internet? Here are a number of possible vignettes:

  1. [Sandra] is diagnosed with diabetes and consults the Web for related information. She begins to receive e-mail advertisements offering diabetes supplies.

  2. [Jamie] is diagnosed with diabetes and consults the Web for related information. He begins to receive catalogs for products related to diabetes.

  3. [Ricardo] is diagnosed with diabetes and consults the Web for related information. He begins to receive catalogs for products related to diabetes. Some of the catalogs are too big to fit into his mailbox, and his neighbors see them.

  4. [Alicia] is diagnosed with diabetes and participates in an online diabetes support group. She reads and posts anonymous e-mail to the support group from her computer at work. Her employer monitors all Web usage from work computers and learns that she has diabetes.

3

For example, the use of the SWIFT banking communications network as a tool for tracing international banking system transfers of funds to and from terrorists was an innovative way to use existing information for a new purpose. For more information, see Jennifer K. Elsea and M. Maureen Murphy, Treasury’s Terrorist Finance Program’s Access to Information Held by the Society for Worldwide Interbank Financial Telecommunication (SWIFT), Congressional Research Service, Report Code RS22469, July 7, 2006, available at http://www.fas.org/sgp/crs/natsec/RS22469.pdf.

A broader though related issue is how businesses advertise their goods and services to prospective customers. Consumers often find advertising, particularly targeted advertising based on personal information, infuriating, but they also find some advertisements, catalogues, and so on to be of considerable information value. Businesses normally want more information to reduce the costs of advertising by better targeting, but they do not want a backlash from consumers. A different set of vignettes might pose the following survey question: How much privacy [do you/does “Name”] have from business solicitations?

  1. [Elizabeth] has an unlisted telephone number and address. She never receives any advertisements or telemarketing calls.

  2. [Jasper] occasionally receives “pop-up” advertisements while browsing the Web.

  3. [George] occasionally receives e-mail advertisements.

  4. [Mark] occasionally receives catalogues from department stores that he has shopped at.

  5. [Grace] frequently receives phone calls from telemarketers asking her to purchase various household items.

  6. Door-to-door salesmen frequently come to [Derek’s] home and attempt to sell household items to him.

These vignettes suggest some of the variability in this issue and leave room for consumers, businesses, public policy makers, and others to identify scenarios that they find appropriate and inappropriate.

Yet another dimension of organizational conduct involves the relationship between the supervision of employees in the workplace and the nature and extent of surveillance of those employees.4 It is broadly accepted that employers have rights and even obligations to supervise employees. In one sense, any kind of employee supervision might be regarded as surveillance. As a point of departure, consider the possible survey question, How much privacy [do you/does “Name”] have at work from [your/his/her] employer? Here are a number of possible vignettes:

  1. [Alex] works without supervision. He sets his own schedule and takes breaks whenever he wants.

  2. [Bob] submits a time sheet summarizing how he has spent his day. He may take breaks as long as they are listed.

  3. [Carol] punches a clock to check in and out of work. Her boss checks in on her frequently and uses a monitoring system to record how many keystrokes she types per minute.

  4. [Jane’s] employer keeps lists of every Web site visited by each employee who uses a computer at work. Her boss occasionally reviews the lists.

4

Additional discussion of privacy issues related to worker surveillance can be found in Mark Jeffery, ed., “Information Technology and Workers’ Privacy,” Comparative Labor Law and Policy Journal 23(4):251-280, 2002.

  5. [Gordon’s] employer hires a company to search the Web for information about all employees, including their posts to Web boards and chat rooms. The employer reviews this information to see if employees are criticizing the company.

  6. [Debbie’s] boss frequently listens in on her phone conversations at work and reads her e-mail, whether work-related or not.

  7. [Ed’s] boss monitors all forms of communications in the office, whether work-related or not, and uses a video camera system to track work activity. [Ed] must bring a letter from his doctor to be paid for sick leave, and breaks are timed to the minute.

Government collection of personal information presents special issues by virtue of government’s unique status: it faces no competitors, it has coercive capabilities, and many of its data requests are mandatory. Governments are involved in many activities of daily life, and they collect a great deal of personal information pursuant to such involvement. This provides many opportunities for repurposing. For example, states issue driver’s licenses, for which they collect personal information. Such information is manifestly necessary for the purpose of enforcing state laws about driving—but states have also sold driver’s license information, including names and addresses, to obtain additional revenue. Such actions have had tragic consequences, as in the 1989 Rebecca Schaeffer shooting discussed in Section 4.3.1. Government agencies also collect large amounts of personal information for statistical purposes, such as the census.

The scenarios discussed above and below are not necessarily instances of “good” technology or information being misappropriated by “bad” people, or of “bad” technology that is being used only for the invasion of privacy. Looking at particular cases shows the range of purposes and motives for both the technology and the institutions using that technology. There is often a difference in perception about whether a given application of technology offers more or less privacy and whether the outcome of the use is good or bad. Indeed, there are conflicting desires by both the targets of the information gathering and those who are doing the gathering. Understanding these issues gives a picture of a privacy landscape that is painted not in black and white but in multiple shades of gray.

To the extent that businesses and other organizations see fit to develop and implement privacy policies, these policies are informed to varying degrees by the principles of fair information practice described in Section 1.5.4. Fair information practices were originally developed in a context of government use of information, but over the past 30 years, they have proven relevant to private sector use of information as well. This is not to say that businesses and other organizations have fully embraced them in practice—only that they are an important point of departure for these organizations in formulating relevant privacy policies.

6.2
EDUCATION AND ACADEMIC RESEARCH INSTITUTIONS

6.2.1
Student Information Collected for Administrative Purposes

Educational institutions at all levels maintain enormous quantities of information about their students. Indeed, school children often learn that information about them is being kept and accumulated in a “permanent record” that potentially follows them throughout life. This record contains not only grades but also standardized testing scores, comments from teachers, and a record of behavioral and developmental changes, challenges, and observations. Although all educational institutions at all levels collect a rich store of information about the students who have attended the institution, the amount of information increases with the level of education. Elementary and secondary schools have considerable information about the grades, behaviors, and capabilities of their current and former students. Colleges and universities usually have richer (although, perhaps, less centrally aggregated) stores of information about their students. Indeed, colleges and universities could be regarded as conglomerates of different “businesses” that need personal information for different purposes (e.g., student health services, registration, management of facilities such as dormitories, issuing transcripts and parking permits, providing food service, and so on) in addition to the primary purposes of educating students and performing research. In the course of their everyday functioning, they may collect information on students’ movements on campus (as ID cards are used to unlock doors electronically), library use (as they check out books), and even some forms of consumption (as their ID card is used to debit their account when they purchase a meal at the cafeteria or condoms at the campus book store).

Much of this information is gathered to chart the progress of the individual student. Grades, standardized test scores, and various evaluations are used to track the progress of the individual student and to determine future placements and admissions as well as past accomplishments. Most of this information is considered confidential—it is available only to the student and possibly that student’s parents, teachers who can demonstrate a need to see the information, and the administrators and counselors in the school itself. Some information, such as scores on diagnostic or capabilities testing, may not even be available to the student or parents of the student.


While the original goal of gathering information about students was to aid in the education of the student, much of that information is now being used for secondary purposes. Standardized test scores are now often aggregated and used to evaluate the effectiveness of teachers, the curriculum, or the school itself. Demographic information about students is gathered and used to show compliance with federal regulations concerning equal opportunity. Information about students—sometimes individually and sometimes in the aggregate—is used internally and externally by schools for fund-raising purposes and for marketing of the school itself.

Colleges and universities also gather large amounts of information about students as a by-product of the educational process. Such information ranges from the mundane (e.g., student telephone listings and class schedules) to the potentially quite intrusive (e.g., records of e-mail and chat sessions conducted via school computer networks, records of Web browsing activities). In a desire to exploit the full educational potential of computer networks, many institutions now provide “wired” dormitories, classrooms, and even campuses. Information collected within such systems may include records of whereabouts on campus generated by networked ID cards or laptop Ethernet access logs, purchase records generated by multipurpose student ID/debit cards, and so on. University libraries contain records of varying degrees of completeness of what students have borrowed or checked out or used online. Meal plan records may contain detailed information on the eating habits of individual students. Student health services collect detailed medical histories. As more and more educational institutions begin using access control mechanisms for entry to their facilities, even the location of individual students will become more traceable at various levels of granularity—and information about who entered and left a given location in a given time window could facilitate the identification of social networks.
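The point about entry and exit logs facilitating the identification of social networks can be made concrete with a small sketch: counting how often two ID cards appear at the same card reader within a short time window. All identifiers, locations, and timestamps below are invented.

```python
# Illustrative sketch (not from the report): inferring possible social
# ties from door-access logs by counting co-occurrences of card swipes
# at the same location within a time window. All data here are invented.

from collections import Counter
from itertools import combinations

def co_occurrences(swipes, window_minutes=10):
    """Count pairs of IDs seen at the same location within the window.

    `swipes` is a list of (student_id, location, minutes_since_midnight).
    """
    pairs = Counter()
    by_location = {}
    for sid, loc, t in swipes:
        by_location.setdefault(loc, []).append((sid, t))
    for loc, events in by_location.items():
        events.sort(key=lambda e: e[1])
        for (a, ta), (b, tb) in combinations(events, 2):
            if a != b and abs(ta - tb) <= window_minutes:
                pairs[tuple(sorted((a, b)))] += 1
    return pairs

swipes = [
    ("s1", "library", 600), ("s2", "library", 605),  # together at 10:00
    ("s1", "gym", 900),     ("s2", "gym", 902),      # together at 15:00
    ("s3", "library", 700),                          # alone
]
print(co_occurrences(swipes).most_common(1))
```

Pairs that repeatedly co-occur across locations and days stand out immediately, which is why even "mere" access-control logs carry social information beyond their security purpose.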

Much of the academic information about students is subject to the protection of the Family Educational Rights and Privacy Act (FERPA), sometimes known as the Buckley Amendment, which drastically limits the range of information that schools and colleges can release about their students. This act bars nonconsensual release of student records, and little beyond a student’s enrolled status and (if it appears in a published source like a campus directory) address and phone number can be revealed, except to persons within the institution who have a demonstrable “need to know.” It also ensures that students (and the parents of students who are minors) will have access to those records and a right to correct or amend those records. Other information, such as medical records generated in university hospitals, is often subject to other legal protections, such as those mandated by the Health Insurance Portability and Accountability Act (HIPAA) of 1996 (as discussed in Chapter 7).


The implementation of appropriate data management and security procedures to fulfill external and internal privacy requirements would be a challenge in the best of cases, and is doubly so in many educational contexts where the network of computers storing personal information tends to develop out of systems that were originally self-contained and unavailable over a network. Adding appropriate security to such applications is difficult on a case-by-case basis. Ensuring that all of the applications containing confidential information are appropriately protected is far more difficult.

This is especially so as these institutions try to use the Internet to allow easy access for staff, faculty, and students. Simply designing appropriate authentication procedures is a challenge; coupling those with the correct authorization and auditing mechanisms is an additional technical challenge.5 Media accounts frequently report on weaknesses in such systems, whether it be the inappropriate use of Social Security numbers for identification (as was done at Princeton University in 20026) or the hacking into a third-party admissions records site for the MBA programs of such schools as Harvard University, MIT, and Carnegie Mellon University (Box 6.1).

If nothing else, these cases illustrate the fact that implementing appropriate data management procedures has always been a challenge; doing so securely in a digital networked environment is even more difficult. The desire to provide secure but accessible online services that would simplify the application process for students and allow those within the educational institutions ready access to the submitted material led to a situation in which the security of the overall system could be compromised. Similar worries have arisen over other systems used by educational institutions, whether they have to do with applicants, current students, or alumni. In all cases, allowing anyone access to these online repositories raises the additional security concern that those who should not have access will somehow gain it, violating the privacy of the individuals involved. The issues range from the proper design of authentication procedures to parameters for enabling access from publicly accessible terminals.

This case also points to disagreements about the proper balance between technical and non-technical approaches to guaranteeing security and privacy. Those who argue that the applicants should not be penalized since they only exploited a hole in the system advance a position that anything that can be done in such systems is permissible. The schools,

5

See National Research Council, Who Goes There? Authentication Through the Lens of Privacy, Stephen T. Kent and Lynette I. Millett, eds., The National Academies Press, Washington, D.C., 2003.

6

“Cybercrime? Princeton Spies on Yale,” CBS News, July 26, 2002, available at http://www.cbsnews.com/stories/2002/07/27/tech/main516598.shtml.

Suggested Citation:"Part III Privacy in Context, 6 Privacy and Organizations." National Research Council. 2007. Engaging Privacy and Information Technology in a Digital Age. Washington, DC: The National Academies Press. doi: 10.17226/11896.
×

BOX 6.1

A Case Study in the Ethics of Privacy

The discovery, reaction, and counter-reaction to the 2005 compromise of information in records for several universities’ business-school admissions add up to an interesting case study in the area of privacy, technology, and education. A number of business schools subscribed to a service that allowed a single admissions dossier to be shared among the schools, which is a convenience for the students applying to these schools. The service also allowed the schools to manage their own admissions procedure, which is a way for the schools to gain efficiency. However, the security of the service was compromised by a person who published a way for those who had used the service to get access to their own records (which could in principle contain information about the disposition of their applications). A number of the applicants did so. However, using an audit procedure, the schools were able to determine which records had been observed in this way, and a number of those schools decided that anyone who had accessed the records would be denied admission to the program on the basis of a lack of ethics. A privacy issue arises because applicants could also, in principle, gain access to the records of other applicants, although none were known to do so in this case.

This decision by the schools caused considerable controversy. There were some who agreed with the schools, pointing out that such a lapse was just the sort of bending of the rules that had led to scandals such as those surrounding Enron and WorldCom. Others claimed that the schools had acted far too harshly, arguing that the breach of security was the fault of the service used by the schools, and the use of the mechanism by the applicants was no worse than looking at their records if those records had been left out in public.


SOURCE: Geoff Gloeckler and Jennifer Merritt, “An Ethics Lesson for MBA Wannabes,” Business Week, March 9, 2005, available at http://www.businessweek.com/bschools/content/mar2005/bs2005039_7827_bs001.htm.

The schools, on the other hand, took the position that even though looking at the sites was technically feasible, actually looking shows a flaw in character that counts against the applicant. The schools are enforcing security by other than technical means: by showing that violating the integrity of the admissions process entails a penalty, they hope to deter such actions in the future. This case also raises the question of when individuals should have access to information about themselves and just whose information it is. The schools maintain that the information about the applicants was properly withheld, while others argued that the information (including admissions status), being about a given student, should properly be accessible to that student.

As a condition for the use of campus IT resources, many institutions require students to sign and abide by acceptable use policies under which


students agree that their Internet activities are subject to monitoring under certain circumstances. Although students sign these agreements routinely, 6 months later they often have no memory of having signed them, let alone what the agreement actually said.7 As a result, these institutions find themselves in the forefront of debates about the values and costs of surveillance, and must develop policies about handling the vast amounts of information they cause to be generated.

6.2.2
Personal Information Collected for Research Purposes

Along with all of the information gathered and stored concerning potential, current, and future students, educational institutions involved in research gather and store large amounts of data as part of that research. Some of this information (especially in the social sciences) may be confidential but not refer to any person actually associated with the educational institution (e.g., data on public responses to a questionnaire). Other information can have considerable worth in terms of intellectual property.

Unlike information about the students who attend such institutions, information gathered in the course of research is not clearly covered by the general laws and regulations regarding the privacy of student records. However, extensive federal and international statutes, policies, and guidelines govern the use of human subjects in research. These regulations, many of which trace their heritage back to the Nuremberg Code,8 govern not only what information can be gathered as part of research but also when and how that information can be released to protect the privacy of the individuals who were the subjects of the study.

However, there are competing interests in such information: the institution holding it must weigh the cost of release, both to the privacy of the subjects and in possible lost revenue from patent rights and other intellectual property fees, against the value of open research results based on repeatable experimentation.

The tradeoff between the value of privacy to an individual and the

7

Janet W. Schofield and Ann L. Davidson, Bringing the Internet to School: Lessons from an Urban District, Jossey-Bass, New York, 2002, pp. 319-320.

8

The Nuremberg Code was developed in the wake of the Nuremberg Tribunals after World War II. Briefly, the Nuremberg Code articulates 10 points that define legitimate and permissible medical research. Prior to the Nuremberg Tribunals, no international law or informal statement differentiated between legal and illegal human experimentation. See Trials of War Criminals Before the Nuremberg Military Tribunals Under Control Council Law No. 10, Vol. 2, pp. 181-182, U.S. Government Printing Office, Washington, D.C., 1949, available at http://www.hhs.gov/ohrp/references/nurcode.htm.


value of the individual’s information to the researcher and ultimately to society is real, substantial, and not resolvable in any final sense. It will always remain an important tension, no matter how society’s rules govern any particular research project, at any one time, in any one institution, under any one set of policies, and as governed by any given granting institution. The benefits here are so contextual and dependent on the type of privacy and the value of the information to the individual and society that there will be a continuing need to make decisions about the tradeoff in each research situation.

6.3
FINANCIAL INSTITUTIONS

Financial organizations (including insurance companies) gather and maintain enormous amounts of sensitive information about individual adults. Financial organizations such as banks, credit card issuers, investment houses, and loan originators (all of which may be part of the same organization) know how much we make, how much we save, what investments we make, and how much (and to whom and for what) we owe money. Such organizations seek information about their customers and potential customers, both so that the organizations can offer new services to those customers and so that the organizations can more completely manage the risks involved in a customer’s use of those services (such as the use of credit). Insurance companies seek and maintain information on the health, possessions, security provisions, and other habits of their customers, both to keep track of what is insured and to determine the prices they will charge based on the actuarial information that can be derived from such information.

The amount, sensitivity, and importance of this information have long been known. The financial sector was one of the first to be subject to broad-ranging privacy legislation with the passage of the Fair Credit Reporting Act of 1970, and many of the considerations cited in the landmark study Records, Computers, and the Rights of Citizens originated in concerns regarding the gathering and use of financial information.9 Many of these regulations have as their main goal ensuring that the information gathered and used by these institutions is accurate. However, recent worries have also centered on how that information is used and shared between various parts of the financial institution.

The need for accuracy is clear; inaccurate information that makes a person appear to be a higher risk than would otherwise be the case can

9

U.S. Department of Health, Education, and Welfare, Records, Computers, and the Rights of Citizens, Report of the Secretary’s Advisory Committee on Automated Personal Data Systems, MIT Press, Cambridge, Mass., 1973.


slow the delivery of financial services, increase the cost to the person of those services, or even keep the person from receiving those services at all.10 The Fair Credit Reporting Act allows consumers to see the information on which financial institutions base their decisions concerning lending or the offer of other services, and provides mechanisms by which those credit records can be corrected or amended.

In 1978, the U.S. Congress passed the Right to Financial Privacy Act, which is intended to protect the confidentiality of personal financial records. Today, the act covers financial records held by covered institutions, including banks, credit card issuing institutions, credit unions, and securities firms, among others. The act forbids most federal authorities from obtaining access to these records unless the individual(s) in question has granted access or an appropriate legal authorization has been explicitly sought. In addition, under most circumstances, the individual in question has the right to challenge the government’s request before the access occurs.

However, the act also immunizes covered institutions and their employees against civil liability for the voluntary filing of suspicious activity reports (SARs) with the Financial Crimes Enforcement Network of the Department of Treasury. The USA PATRIOT Act, passed in 2001, also expanded the circumstances under which covered institutions must file an SAR and established identification requirements for customers. The Right to Financial Privacy Act also does not apply to state or local governments, private organizations, or individuals—and to the extent that covered institutions do not comply with requests for such records originating from these entities, their refusal is based on constraints other than the act (e.g., rules of business practice, auditing requirements, state or local law, and so on).

More recent worries center on information gathered for one purpose being used for a completely different one. For example, information gathered to determine the risk of offering loan or credit services could be used to market other, unrelated services to particularly creditworthy customers, such as additional credit cards or lines of credit. Payment records indicating international travel could be used to market travel insurance or loss protection. Such repurposing of information has led many consumers to feel that their privacy is being violated, and led to the passage of the privacy protections contained

10

The Sarbanes-Oxley Act, also known as the Public Company Accounting Reform and Investor Protection Act of 2002, was intended to increase management accountability in private firms, and has had the effect of increasing the need for high-quality personal information before it is aggregated or de-identified and transformed into “financial data.”


in the Gramm-Leach-Bliley Act (also known as the Financial Services Modernization Act) of 1999.

The primary purpose of the Gramm-Leach-Bliley Act was to eliminate distinctions between commercial banking and investment banking. It allowed the creation of financial service companies that could hold commercial banks, investment banks, and insurance companies as affiliated subsidiaries, and it permitted those subsidiaries to sell each other’s products where such sales had not been permitted in the previous regulatory regime established by the 1933 Banking Act (also known as the Glass-Steagall Act). More important from a privacy standpoint, financial service companies were allowed to use personal information obtained from one subsidiary to further the sales of another subsidiary’s products. For this reason, the Gramm-Leach-Bliley Act required financial service companies to state their policies having to do with privacy, especially with respect to sharing information among subsidiaries and selling that information to third parties. The act also gave consumers the right to opt out of various forms of information sharing that would result in the use of that information for purposes other than the originally intended purpose.

The success of the Gramm-Leach-Bliley Act is uncertain, at best. From the consumer standpoint, the privacy statements that are required by the law are detailed and technical. They are hard to understand, and as a result there are a number of public Web sites that attempt to explain to consumers what the privacy statements mean,11 and some regulators are pushing to rationalize privacy notices in order to increase their clarity and usefulness to customers. While the law allows consumers to choose not to allow sharing of certain kinds of information, some studies have shown that relatively few consumers who could make such a choice have actually done so. This could be an indication of a lack of interest in blocking such sharing, or it could be an indication of the complexity of the mechanism created by the law for making such a choice. The exercise of formulating these notices, however, has arguably forced financial institutions to review their privacy policies and data-handling practices in a way that they otherwise would not have done, and thus reduced the likelihood of egregious privacy practices that might have slipped through the cracks.

Even if opting out were easier, it is not clear that making use of the mechanism would have the intended effect. The worry of privacy advocates is that by sharing this information across divisions and subsidiaries, and with partners, the companies doing the marketing are adding to the number of useless catalogs, mass mailings, and solicitations received by consumers. However, those within the industry argue that such sharing actually reduces the amount of extraneous marketing a consumer receives, by enabling solicitations targeted only to those more likely to respond, as determined by the interests shown in the shared information. The alternative is solicitations sent to everyone rather than to a targeted set.

11

See, for example, “Fact Sheet 24(a): How to Read Your ‘Opt-Out’ Notices,” Privacy Rights Clearinghouse, available at http://www.privacyrights.org/fs/fs24a-optout.htm.

Finally, the financial sector presents a number of good examples to illustrate the need for tradeoffs. The discussion above makes clear at least a rough societal consensus that financial information is sensitive and is deserving of considerable privacy protection. At the same time, criminal elements often interact with financial institutions, and law enforcement authorities have found financial information to be of enormous value in apprehending and prosecuting criminals. Thus, a number of laws enable law enforcement agencies to obtain personal financial information under appropriately authorized circumstances. Laws related to the reporting of large cash transactions (in excess of $10,000) are also intended to discourage money laundering, even though a privacy interest might well be asserted in such transactions.

6.4
RETAIL BUSINESSES

The attempts by financial institutions to provide better service (and to cut their own costs) through the gathering and mining of information about their customers have been mirrored by similar efforts in the retail industry. Whether in online e-commerce or the bricks-and-mortar retail trade, the gathering of information about the buying habits and past histories of customers has led to efficiencies in retail businesses, but also to concerns about the privacy of the individuals about whom the information is gathered.

Although many different schemes have been used to collect information about consumers, the dimension of privacy that these affect is fairly straightforward to understand. As a point of departure, consider the issue of privacy as it relates to merchants collecting information from shoppers. Using the anchoring vignette approach, a possible survey question might be, How much privacy [do you/does “Name”] have from merchants while shopping? Here are a number of possible vignettes:

  1. [Susan] pays cash at a large, crowded department store and provides no information about herself to the cashier.

  2. [Mary] pays with cash at a convenience store. The clerk insists on recording her zip code on the computer-generated receipt.

  3. [Carmen] pays with her credit card at the convenience store. The clerk insists that she provide picture identification, as well as her telephone number to record on the transaction slip.

  4. [Horace] goes to a drugstore to buy film, which was advertised to be on sale. He finds out at the store that in order to receive the discount, he must apply for a courtesy card, which entails an application requiring home address, work, and marital status.

  5. [Julio] applied for a courtesy card at the drugstore, which entailed an application requiring home address, work, and marital status. Whenever he shops he receives by mail advertisements and coupons for alternatives to the drugs that he usually purchases.

  6. [Evelyn] applied for a courtesy card at the drugstore, which entailed an application requiring home address, work, and marital status. Evelyn used the middle initial “Q” on her application, even though that is not her real middle initial. She now receives catalogs in the mail from businesses that she has never patronized, all with mailing labels that include the middle initial “Q.”

  7. [Rosco] applies to join a local gym. The membership application includes questions about his health, income, and criminal background. In addition, he is required to grant permission for the search of public records and undergo a credit check.

Of course, although these vignettes define one dimension of privacy by example, consumers and different businesses have markedly different preferences about which level of privacy among the seven, listed from most privacy preserving to least, is acceptable or even should be legal.

One information-enabled marketing effort that has come to the attention of consumers is the online bookstore Amazon.com’s use of historical information to suggest items that might interest a visitor. When customers log in to the Amazon.com Web site, they are greeted with a series of recommendations for items they might like. These recommendations are based on the purchase history of the customer and the purchase histories of other customers who resemble the one logging on. Many people find the recommendations helpful, and Amazon.com finds that they help its business. Nevertheless, some find the recommendations an indication of how much information has been gathered about them, and wonder what else this customer database reveals.
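The logic behind such recommendations can be sketched as a very simple form of collaborative filtering. The customer names, item identifiers, and function below are purely illustrative assumptions; production systems (including Amazon.com’s) use far more sophisticated techniques.

```python
from collections import defaultdict

# Hypothetical purchase histories: customer -> set of items bought.
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_b", "book_d"},
    "carol": {"book_b", "book_d", "book_e"},
}

def recommend(customer, purchases, top_n=3):
    """Suggest items bought by customers with overlapping histories."""
    mine = purchases[customer]
    scores = defaultdict(int)
    for other, theirs in purchases.items():
        if other == customer:
            continue
        overlap = len(mine & theirs)   # shared purchases measure similarity
        for item in theirs - mine:     # consider only items the customer lacks
            scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

On these toy data, Alice shares two purchases with Bob and one with Carol, so items owned by Bob that she lacks rank highest.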

A similar trend can be seen in bricks-and-mortar retail businesses such as grocery stores and pharmacies that use customer loyalty cards. These cards are used to identify customers, allowing the purchases made by those customers to be tracked and aggregated. Some stores use the information to give out discount coupons differentially depending on the interests and history of different customers, which can be thought of as a variation on the recommendations made by the online retail sites. In addition to the accumulation of information that these cards allow, there


have been some who are troubled by the fact that the use of such cards can be a condition for a discount on certain purchases, meaning that those who do not want this information gathered about them are forced to pay higher prices than those who allow the information to be gathered.12

The change, in both the online and the bricks-and-mortar retail case, is not necessarily in the information being gathered. For some time now, individual merchants have had their own credit cards, and all purchases made with such cards were recorded and the records made available to those merchants. What has changed is the use of that information. With new data-mining software, this information has become an important input to decisions ranging from what suggestions to make to particular customers, to how to lay out a retail store, to what items to put on sale.

As the above discussion suggests, retailers, credit card companies, and manufacturers often collect and make subsequent use of purchase information, and because they do not go out of their way to remind consumers that they are doing so, such collection and use are unlikely to be foremost in a consumer’s mind. To the extent that individual consumers are not aware of information-based marketing, they may find such marketing helpful and benign or intrusive and inappropriate.

In the helpful and benign category are marketing offers that consumers value, such as a discount on the next purchase of an item previously purchased, a coupon for a competitor’s product, a more convenient online shopping experience, or a suggestion for a different purchase that the customer might find useful and of interest. Indeed, some consumers seek out information-based marketing services and knowingly provide information about themselves to improve the operation of recommendation systems.

In the intrusive and inappropriate category are sales techniques that make individuals feel that their privacy has been violated, such as advertisements for undesired sexually oriented material or drugs for socially stigmatizing diseases. More troubling is the use of information-based marketing to avoid certain demographic groups in offering an advantageous deal or to target certain groups with fraudulent intentions in mind.

12

In some cases, a customer without an individual loyalty card is supplied with a “register” card upon checking out, thereby enabling the customer to receive the discount. However, the existence of this practice does not negate the potential privacy concerns raised by customer loyalty cards in the first place. Although even a customer with a loyalty card can request that the register card be used, the customer must know about that option to exercise it, and it is not accidental that there is generally no sign at the register indicating that customers may use the register card. In addition, the customer may lose any benefits associated with aggregate purchases over a period of time (e.g., a coupon for a 10 percent savings after $500 in purchases).


Some have asserted that one way to solve, or at least substantially mitigate, the intrusive aspects of information-based marketing is to collect even more personal information, so that offers can be targeted more precisely to those who are likely to appreciate getting them and other customers can then be left alone. But the notion that preserving privacy could depend on providing even more personal information is ironic and counterintuitive at best. Indeed, much of the objection to marketing uses of personal and transactional information is based on the fact that many people simply do not believe that marketers are agents working in their interest. By contrast, sharing personal and sensitive information with someone known to have the information provider’s interests at heart is likely to be undertaken with much greater comfort and ease.

The latest extension of worries in the retail privacy context is the introduction of radio-frequency identification (RFID) tags that carry an identifier able to differentiate retail goods at the item level (as opposed to just the kind of item, which is the case with barcodes). RFID tags respond to transmissions in the radio frequency range by replying with their unique identifier. Their use is not yet widespread, but both the Department of Defense and Wal-Mart have active plans for deploying the technology in the near future.

Privacy advocates have argued that RFID tags will allow anyone with a reader to determine all of the items carried or worn by an individual, and would allow someone with a reader to take a complete inventory of the contents of a house from outside it. Correlating the items worn by an individual could make it possible to determine that individual’s identity. Moreover, a tag wearer’s or holder’s movements and personal contacts might become more traceable.13 Tags placed in books could allow inferences about the reading habits of the individual.

To date, the use of RFID technology in retail applications has, in almost all cases, been confined to the supply chain, in keeping with RFID’s original purpose of ensuring smooth movement of goods from the manufacturer, through the warehouse, to the final retail space. Automating the identification of pallets and items through this process is expected to save considerable cost and improve the detection of lost, misplaced, and stolen items. Even in the retail store, the main intended use of RFID technology is to reduce the inventory that the store must carry—pilot programs (such as that done in the United Kingdom by

13

Once an individual is identified and associated with the serial numbers of tagged items he or she possesses, the individual is subject to identification whenever passing an RFID-monitoring point. If two such people meet—deliberately or by chance—near a monitoring point, a de facto record of their meeting can be made.


Marks and Spencer14) show considerable cost savings in the use of this technology.

RFIDs also have marketing significance. For example, it is relatively straightforward to embed a different product serial number in each product, so that every shirt sold by any store has its own serial number. A consumer who bought an RFID-tagged shirt at Store A could then be identified every time she wore that shirt in Store A. Since the array of personal items she would be carrying on each visit would vary, it would be possible over time to develop an inventory of many of the personal items that she owned. Furthermore, it is possible that the databases of different stores could be networked together, which means that every store in the network would have such information available. With an inventory of personal items in hand, albeit incomplete, stores could deploy recommender systems to suggest other items that an individual might be likely to purchase, with suggestions transmitted to the consumer as text messages on a cell phone.
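The inventory-building scenario described above amounts to accumulating, across visits, the set of tag serials a reader observes. The sketch below illustrates the idea with hypothetical serial numbers; it is a toy model, not a description of any deployed system.

```python
def accumulate_inventory(visits):
    """Union the RFID serials observed on each visit into a profile.

    Each visit is the set of tag serial numbers a store's reader saw
    on the shopper; over time the union approximates the set of
    tagged personal items the shopper owns."""
    profile = set()
    for seen in visits:
        profile |= seen
    return profile

# Hypothetical serials read on three separate visits.
visits = [
    {"shirt-0001", "bag-0042"},
    {"shirt-0001", "shoes-0007"},
    {"bag-0042", "watch-0099"},
]
```

Even though no single visit reveals more than two items, three visits suffice to infer four distinct possessions, which is why the cross-visit (and cross-store) correlation is what worries privacy advocates.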

To date, no retailer has announced any such plans, and any retailer that did would receive abundant negative press. The subject has become “radioactive” because of privacy concerns, and retailers are currently using RFID technology only for inventory, supply chain management, and theft control. An individual retailer might well be reluctant to expose itself to such risk, and it is even less likely that retailers would do so en masse, as the networked scheme described above would require. But as privacy advocates point out, it could someday be possible.

Although few people object to the use of RFID tags in retail stores before an item becomes the property of the consumer, post-sale privacy concerns have been raised regarding just such scenarios. As in the case of the Amazon.com book recommender systems, targeted marketing can be considered a benefit to the willing consumer or an intrusion to the unwilling consumer. One technical approach to address post-sale privacy concerns involves deactivating the tags at the point of sale.15 Such an operation would make the tags permanently unresponsive to any request; in effect the tag would become inoperable.
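The point-of-sale deactivation described above can be modeled as a tag that permanently stops answering reader queries once a kill command with the correct password is accepted. This is a deliberately simplified sketch of the idea, not the actual EPC Gen2 kill protocol, and all names are illustrative.

```python
class RfidTag:
    """Minimal model of a tag that can be permanently deactivated."""

    def __init__(self, serial, kill_password):
        self.serial = serial
        self._kill_password = kill_password
        self._killed = False

    def query(self):
        """A live tag replies with its serial; a killed tag is silent."""
        return None if self._killed else self.serial

    def kill(self, password):
        """Permanently silence the tag if the password matches."""
        if password == self._kill_password:
            self._killed = True
        return self._killed
```

The password check matters: without it, anyone with a reader could silently deactivate tags still in the supply chain, which is why real kill commands are password protected.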

Most of the controversy around the collection of information at the retail level seems to stem, on analysis, from a concern about the amount of information that could be gathered about everyone during even the most mundane of tasks. Today, even the seemingly anonymous shopping over

14

See generally, Simson Garfinkel and Beth Rosenberg, eds., RFID Applications, Security and Privacy, Addison Wesley, 2006. See especially Chapters 4-6.

15

Some RFID tags can also be deactivated by microwaving them for several seconds. Consumer deactivation of an RFID tag has the advantage of verification, as vendors themselves have little inherent economic incentive to kill the tag.


the Internet results in enough information to allow merchants to make suggestions that seem uncomfortably accurate to some. New technologies seem to allow ever greater collections of information about what we buy, wear, and do, and about our associates as well. The information can be gathered at a distance (and thus without our knowledge) and might even be gathered well after the action that led to the acquisition of the item giving out the information (as would be the case with RFID tags being read in the clothing we were wearing instead of buying). Multiple worries arise from the volume and variety of information being gathered, the known uses to which that information is being put, and the unknowns about what other uses there are now or may be in the future.

6.5
DATA AGGREGATION ORGANIZATIONS

In addition to allowing the collection, retention, and analysis of information by existing organizations, advances in technology have led to the creation of the data aggregation business. This business might be thought of as the networked world’s equivalent of the traditional private detective agency, in that it is built around being able to supply information to those who need it. Unlike the detective agencies of the past, however, these new-age businesses attempt to aggregate and repackage already-available information.

Data aggregation services obtain their information in a number of ways. Much of the information is gathered from public sources, such as the records held by various governmental bodies. These records are public by law, and many of these records are now available in digital form, either by request or directly over the Internet. Other forms of information come from partner businesses, or from businesses that want to use the information supplied by the data aggregation service. Such information can include the history of insurance claims made or the jobs held by an individual. In addition, customers of a data aggregation service supply it with some information about an individual of interest, which can then be used to find still more information about that individual. When the work for the client who has supplied the “seed” information is done, the seed data are added to the aggregator’s store of information.
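The seed-and-aggregate workflow described above can be sketched as merging a client-supplied seed record with matching records drawn from the aggregator's existing stores, then retaining the seed itself. Every field, record, and source below is hypothetical.

```python
def aggregate(seed, sources):
    """Merge fields from any source record matching the seed's key.

    The seed itself is folded into the result, mirroring the practice
    of retaining client-supplied data after the client's job is done.
    Matching here is naive (exact name); real services use many keys."""
    merged = dict(seed)
    for records in sources:
        for rec in records:
            if rec.get("name") == seed.get("name"):
                for field, value in rec.items():
                    merged.setdefault(field, value)
    return merged

# Hypothetical public-record and partner-business stores.
public_records = [{"name": "J. Doe", "property": "123 Elm St"}]
partner_data = [{"name": "J. Doe", "insurance_claims": 2}]
```

Starting from nothing but a name and phone number, the merged dossier picks up a property record and an insurance-claim history, illustrating how each new source compounds what the aggregator "knows."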

Unlike search engine companies, which index information about individual users as a by-product of the overall indexing of the World Wide Web, the main business of data aggregation companies is the gathering and indexing of information about individuals, and the amount of information that can be gathered in the ways described above is staggering. And the more information acquired concerning an individual, the more valuable the services data aggregators can provide.

Data aggregation services are businesses: one must pay for, and be credentialed by the data aggregation company to use, those services.16 The services offered by data aggregators are used by businesses for pre-employment background checks, by law enforcement agencies for investigations, and by financial, insurance, and other companies to check the backgrounds of potential customers and associates. However, the quality of the data available through these services is variable. In a 2005 study, Pierce and Ackerman found that 67 to 73 percent of data records on individuals obtained from two data aggregation services contained incorrect biographical data, and that between 13 and 25 percent contained errors in basic biographical data (name, date of birth, Social Security number, current address, phone number).17

In a sense, data aggregators can be seen as an extension of companies such as Equifax, Experian, and Trans Union Corporation—credit bureaus that have made a business of amassing financial information about individuals and businesses for years. But data aggregators are made possible by the advances in technology over the past decade. Only because of the amount of information that is available on the network, the amount that can be easily stored, and the advances in hardware and software that allow analysis of that information can data aggregators offer information services that cover almost everyone in the United States.

That these companies can collect enough information that they can “know” a person well is noteworthy to many and troubling to some. Perhaps a greater concern is the fact that many of the activities of these companies are not clearly covered by the laws and regulations that cover financial institutions, such as the Fair Credit Reporting Act. Unless they are in fact covered by such laws and regulations, there is no requirement that the companies make known to individuals information gathered about them, nor are individuals guaranteed by law a means for challenging, changing, correcting, or amending that information.

Indeed, the public was generally uninformed about the existence of data aggregation services until one company (ChoicePoint) disclosed that it had provided large amounts of personal information on many individuals to fraudulently constituted businesses. ChoicePoint has always marketed itself to business and the government rather than consumers, thereby escaping much public notice. In February 2005, most likely as the result of a California law mandating notice in the event of improper disclosures of personal information, ChoicePoint reported that it had sold information about individuals to fraudulent front companies.18

16. Some data aggregation services are free, although the amounts of data made available for free are quite limited. For example, Zabasearch (www.zabasearch.com) makes available for free personal information regarding name, address, phone number, and year of birth.

17. Deborah Pierce and Linda Ackerman, Data Aggregators: A Study of Data Quality and Responsiveness, May 19, 2005, available at http://www.privacyactivism.org/docs/DataAggregatorsStudy.pdf#search=%22data%20brokers%20choicepoint%20acxiom%22.

ChoicePoint’s response to this breach was to tighten its mechanisms for credentialing a company; from its point of view the problem was one of fraud in obtaining the services it offered. But many observers argued that the incident showed a basic problem with the relatively unregulated and unrestrained companies that collect and store personal information about individuals without their knowledge and without direct benefit to them. These observers argued that the appropriate response was greater regulation of the data aggregation industry along the lines of how the financial and health sectors are now regulated.

It is not known at this writing what the ultimate reaction to this disclosure will be. Calls have been made for new legislation and regulation of data aggregators. Many people have expressed shock that such businesses even exist. However, without such services it would be more difficult for businesses to perform required checks on potential employees, such as validating claims about educational background or the absence of a prior criminal record. At the same time, data aggregation companies hold far more personal information on individuals than is needed for background checks on criminal records and educational history.

What and how much information should be collected about citizens by private businesses has generally not been the subject of regulation in the United States, where worries have generally focused on the potential for privacy violations by the government. Knowledge of the existence of data aggregation services, and of the dangers posed by the compromise of the information held by such services, potentially changes that, and concerns may increase about the possibility of privacy violations by private firms, especially as the data aggregation industry grows. In addition, an increasing tendency for government agencies to contract with data aggregation companies to provide otherwise unavailable data could easily lead to more intense concern as the line between the public and private sectors becomes more blurred.19 Box 6.2 lists the data that are easily accessible to ChoicePoint customers; although most of the information is available from public sources, the service provided is that of one-stop shopping on a relatively short time scale.

BOX 6.2
The AutoTrackXP Service of ChoicePoint

A typical information set (AutoTrackXP) from ChoicePoint offers the following information on a given individual subject:

  • Aliases for the subject
  • Social Security numbers associated with the subject
  • Other names and associated Social Security numbers linked with the subject
  • Driver licenses held
  • Addresses associated with the subject
  • Risk classification for the subject’s address
  • Infractions
  • Phone listings for the subject’s addresses
  • Sexual predator status
  • Felony/probation/parole status
  • Real-property ownership and deed transfers
  • Property owners of subject’s addresses
  • Deed transfers
  • Vehicles registered at subject’s addresses
  • Real-time vehicle registrations
  • Criminal offenses
  • Watercraft owned
  • Federal Aviation Administration (FAA) aircraft registrations
  • Uniform Commercial Code (UCC) filings
  • Bankruptcies, liens, and judgments
  • Professional licenses
  • FAA pilot licenses
  • Drug Enforcement Administration controlled-substance licenses
  • Hunting and fishing licenses
  • Business affiliations (including officer name match)
  • Fictitious business names (doing business as, or dba)
  • Names of relatives
  • Other people who have used the same addresses as the subject
  • Licensed drivers at the subject’s addresses
  • Neighbor listings for the subject’s addresses

18. “Consumer Data Company Warns 145,000 of Possible Identity Theft,” AP News, February 17, 2005, available at http://sfgate.com/cgi-bin/article.cgi?f=/n/a/2005/02/17/state/n041832S59.DTL.

19. For example, Hoofnagle found that law enforcement authorities can quickly obtain a broad array of personal information about individuals from data aggregation companies. Indeed, in 2004, ChoicePoint had designed a Web site, www.cpgov.com, as a one-stop shopping point for obtaining a compilation of personal information on almost any adult (Chris Jay Hoofnagle, “Big Brother’s Little Helpers: How ChoicePoint and Other Commercial Data Brokers Collect, Process, and Package Your Data for Law Enforcement,” University of North Carolina Journal of International Law & Commercial Regulation 29:595, 2004). At this writing, this site has been replaced by another site, www.atxp.com, the entry point for a new service known as AutoTrackXP (see Box 6.2). The www.cpgov.com Web site notes that the ChoicePoint Online public records interface is no longer available and directs users to the new site, www.atxp.com, with instant access to “ChoicePoint’s Premier Web-based investigative information solution, AutoTrackXP®.” The site further notes that AutoTrackXP “provides the extensive public record content you are accustomed to obtaining through ChoicePoint Online.”

Observers critical of data aggregators are specifically concerned about the ways in which private firms in the business of collecting, aggregating, and aggressively marketing services that depend on the secondary use of information about individuals have managed to avoid compliance with fair information principles.20 They suggest that implementing fair information practices in a way that would protect legitimate privacy interests in this growing sphere of activity would require the following:

  1. Some mechanism for providing notice to the general public about the kinds of information gathered for use by organizations that are the clients and customers of these firms;

  2. A centralized resource that would allow individuals to consent to, or at the very least opt out of, particular kinds of secondary uses of their personal information; and

  3. Reduction of the government’s reliance on private firms as adjuncts that enable agencies to bypass statutory limitations on access to personal information. Of particular importance would be the development of rules governing the kinds of contracts that can be let by agencies for data-mining efforts.

6.6
NONPROFITS AND CHARITIES

Nonprofit organizations and charities have become increasingly sophisticated in the information that they gather about their contributors, members, and potential contributors and members. Many of these organizations use some of the techniques of for-profit businesses, such as keeping track of those who visit their Web sites or make use of the services they offer. In many respects, the personal information stored by noncommercial entities is much the same as the information stored by for-profit enterprises. Credit card information, for example, is often stored to allow ease of contribution in the future, or private financial information is stored over time to enable automatic payments from bank accounts.

At times the information acquired by noncommercial entities about their members or contributors is even more sensitive than that kept by for-profit businesses. While tracking the clothing stores an individual patronizes and the purchases made at those stores can generate information about that individual’s taste and style, knowing the charities to which an individual contributes and the nonprofit organizations of which one is a member can reveal political or religious views, intellectual interests, and personal opinions that are far more telling and private. The Supreme Court recognized the connection between the privacy of organizational membership and the right to free association in NAACP v. Alabama (357 U.S. 449 (1958)), holding that public identification could not be forced on members of an organization engaged in the dissemination of ideas, as such identification could limit the right of free association. Some nonprofit organizations also seek to raise money from wealthy individuals, and they often compile information relevant to estimating a potential donor’s net worth. Such information also may be regarded as sensitive in many contexts.

20. Daniel J. Solove and Chris Jay Hoofnagle, “A Model Regime of Privacy Protection (Version 2.0),” GWU Law School Public Law Research Paper No. 132, GWU Legal Studies Research Paper No. 132, April 5, 2005, available at http://ssrn.com/abstract=699701.

Unlike for-profit entities such as financial institutions, noncommercial collectors of information are not governed by laws concerning either the privacy of those about whom they collect information or the uses to which they can put that information. They are exempt from the Do Not Call Registry on First Amendment grounds. Further, such organizations are often resource constrained, and thus unable or unwilling to invest in a security infrastructure that protects the information they have gathered from acquisition by third parties. The combination of the information gathered and the weaker security found in many noncommercial undertakings makes them lucrative targets for those seeking information for identity theft, or for monitoring individuals for political purposes, although to the committee’s knowledge such things have happened only rarely, if at all.

6.7
MASS MEDIA AND CONTENT DISTRIBUTION INDUSTRIES

Whether they distribute information through the printed page or broadcast media or the Internet, mass media and content distribution companies gather information about their customers both to hone the content they offer and to determine the rates that they charge the advertisers that are often their main source of revenue.

Customer databases maintained by providers of subscription- or membership-based content, originally created to ensure delivery and billing, have evolved to include the age, sex, income level, and other demographic and personal details of each subscriber. Content providers keep and use this information to determine how best to serve their customers and to measure the size and demographics of their audience, which in turn allows them to attract advertisers and set rates. The more information that can be gathered, the better providers can plan the content they offer in the future.

Newspapers, radio and television news, and Internet sites all try to provide content of interest to subscribers, viewers, and readers, and such content often includes information of a personal nature about individuals that might be considered private. Although libel laws provide some protections against the publication of untrue information, it is difficult to claim invasion of privacy when the information is truthful (as discussed in Section 4.2).

For many people, privacy from the media is important. Of concern to them is the surprise factor: without their knowledge or permission, individuals can suddenly find themselves in the public view far more than they had realized was possible. As a point of departure, consider the issue of privacy as it relates to the media collecting personal information about individuals. Using the anchoring vignette approach, a possible survey question might be, How much privacy [do you/does “Name”] have from the media? Here are a number of possible vignettes:

  1. [Claudio] just got divorced from his spouse. He calls his close friends to tell them and they keep this confidential.

  2. [Pamela] just got divorced from her spouse. The local newspaper publishes a list of all civil divorce filings, including [Pamela’s], in its back section.

  3. [Mary] just got divorced from her spouse. Her college alumni magazine publishes an article about her divorce, speculating what the disagreement was about.

  4. [Christopher] just got divorced from his spouse. Without his permission, CNN runs a feature story on divorce in America, which includes interviews with his ex-spouse and friends about his divorce.

The range here is quite clear, and the diverse interested parties involved will often have different preferences about where on this scale the media should be allowed to go. Developing consensus positions is especially difficult because people’s views often change once they themselves are affected. The difficulty will only grow as marketing becomes more focused, as personal information about the audience for a particular form of mass media becomes ever more important, and as the risk thus increases that individual information will be exposed in unexpected ways.

Information about the number of people who might be reached through a particular program or publication is no longer sufficient to attract advertisers. Instead, advertisers want assurance that the “right” kind of people for their product will be attracted by the content. Advertisers attracted to the Web site www.Collegehumor.com are very different from those that advertise on network telecasts of a golf tournament. As information about a viewer becomes more detailed and more personalized, advertising can be more targeted. Internet sites that sell advertising, for example, can now determine which advertisement to show based on the viewing and browsing habits of the individual visiting the site.
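The selection step described above can be sketched in a few lines. The interest categories, ad inventory, and best-overlap rule below are all hypothetical; real ad servers run auctions over far richer behavioral models.

```python
# Minimal sketch of interest-based ad selection: choose the ad whose
# topics best overlap a visitor's interest profile, where the profile
# is (hypothetically) inferred from browsing history.

def choose_ad(interests, ads):
    """Return the ad sharing the most topics with the visitor's interests."""
    return max(ads, key=lambda ad: len(interests & ad["topics"]))

visitor = {"golf", "travel", "finance"}  # inferred from browsing habits
ads = [
    {"name": "energy-drink", "topics": {"gaming", "music"}},
    {"name": "retirement-fund", "topics": {"finance", "travel"}},
]
print(choose_ad(visitor, ads)["name"])  # retirement-fund
```

Even this toy version makes the privacy trade-off concrete: the targeting works only because a per-visitor interest profile has been collected and retained.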

Another dimension of personal privacy has emerged as the result of the digitization of entertainment content such as music, video, and movies over the past decade. Digitization has allowed new mechanisms for distributing these forms of content, but it has also made possible the perfect copying of a digital work. Such perfect copying, in which no information (and thus no quality) is lost, was not economical with analog versions of such content. But with a standard computer, it is possible to make an unlimited number of copies of digital content without degrading the original in any physical manner, a capability that has led the owners of the intellectual content of such works to worry that they are losing (or have already lost) control of that property. The result has been an attempt to reassert the property rights of these owners, and such reassertion has privacy implications. In particular, content owners have sought to create new technologies for digital rights management (DRM) that allow the owners of the intellectual property to control copies of that property even when it has been sold to the consumer and is no longer physically under the direct control of the initial owner. These technologies may have a serious impact on the privacy of the consumers of the content.

DRM technologies would allow the original content owners (such as the producers of a movie or the distributor of a music CD) to control when and where that content could be used and, more importantly, how that content could be copied. The privacy concern, which is discussed more fully in Chapter 8, is that ensuring such control means that the content owner will be able to trace what content a person buys, what devices are used to view or listen to the content, how often the content is accessed, what parts the user finds most interesting, and perhaps even where the content is accessed, all in a manner that is entirely impossible with traditional media.

There is also the worry that information gathered in the name of protecting intellectual property will in fact be repurposed to other ends, since that information will be gathered and owned by the companies producing the content. Such information, unobtainable from content that lacks digital rights management, could lead to the establishment of even more invasive databases for marketing purposes.

6.8
STATISTICAL AND RESEARCH AGENCIES21

A large number of federal agencies have a role in collecting data from individuals, households, farms, businesses, and governmental bodies and in disseminating those data for a variety of statistical purposes, including the development and dissemination of large, general-purpose data sets based on censuses, surveys, and administrative records. These purposes also include the collection and analysis of personal data in experimental research with human subjects. A few federal statistical agencies conduct general or multipurpose programs (e.g., the Bureau of the Census), but many others conduct specialized programs or activities (e.g., the Bureau of Labor Statistics and the National Center for Education Statistics). Some programmatic agencies also conduct some statistical activities (e.g., the Federal Aviation Administration and the Internal Revenue Service). The data collected by these agencies help policy makers understand the state of the nation—from the national economy to household use of Medicare—and support both the evaluation of existing programs and the development of new ones.

Agencies work with both statistical and administrative data. To carry out their basic functions, government agencies collect enormous amounts of data, most of which are used directly for various administrative purposes and much of which is personally identifiable information. Data collected exclusively for statistical and research purposes form a tiny fraction of the total. Data collected for administrative purposes (which include matters such as determination of benefit eligibility and amounts) are often useful and appropriate for statistical purposes, as when patterns of Food Stamp applications are used to trace the effects of program changes. In contrast, data collected for research and statistical purposes are inappropriate for administrative uses, and privacy concerns can arise if data subjects worry that data they provided for statistical purposes might be used administratively. (For example, a Census survey respondent might worry that his or her survey answers could be turned over to the Internal Revenue Service and make him or her more vulnerable to a tax audit.)

All of the statistical agencies work to protect individual respondents (data subjects) against the use of statistical data for administrative purposes. In some cases, these protections are provided through statutes. Government-wide legislation includes the Privacy Act of 1974, the Freedom of Information Act of 1966, and the Paperwork Reduction Act of 1980. Agency-specific legislation further specifies the confidentiality and data access policies that particular agencies must follow (e.g., the Bureau of the Census and the National Center for Health Statistics). However, the confidentiality policies of some agencies are not backed by statutory provisions. Instead, these agencies rely on persuasion, common-law tradition, and other means to protect identifiable statistical records from mandatory disclosure for nonstatistical uses, and such means may not always be successful.

21. This section is based largely on National Research Council, Expanding Access to Research Data: Reconciling Risks and Opportunities, The National Academies Press, Washington, D.C., 2005; and National Research Council, Private Lives and Public Policies: Confidentiality and Accessibility of Government Statistics, National Academy Press, Washington, D.C., 1993. Another useful reference is G.T. Duncan, “Exploring the Tension Between Privacy and the Social Benefits of Governmental Databases,” in Peter M. Shane, John Podesta, and Richard C. Leone, eds., A Little Knowledge: Privacy, Security, and Public Information after September 11, The Century Foundation, New York, 2004.

In part to ensure that statistical data are not used for administrative purposes, agencies give data subjects pledges of confidentiality, both explicit and implicit. But when those pledges are not backed by statutory assurances, the pledges may not necessarily be honored (and statutory assurances can themselves be changed retroactively).

These pledges of confidentiality also lead to another set of privacy concerns. For analytical purposes, it is sometimes valuable to release microdata sets to the public, that is, data sets consisting of some of the individual survey responses that were collected for statistical purposes. But the confidentiality pledges require that these data sets be released in a form that does not allow individual identification in any way, and promoting access to microdata increases the risks of breaching the confidentiality of the data.

One approach to honoring the confidentiality pledge in this context is the use of statistical disclosure limitation techniques (discussed in Section 3.8.2.2) to transform data in ways that limit the risk of identity disclosure. Use of such a procedure is called masking the data, because it is intended to hide personal information associated with data subjects. Some statistical disclosure limitation techniques are designed for data accessed as tables, and some are designed for data accessed as records of individual data subjects (microdata). Statistical disclosure limitation techniques almost always degrade data to some extent, although the degradation involved may not matter for a given purpose.
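A toy illustration may help. The sketch below applies two standard disclosure limitation techniques to a single microdata record: top-coding (capping an extreme value that could identify an outlier) and generalization (coarsening a quasi-identifier into a range). The field names and thresholds are invented for illustration; production disclosure control is far more involved.

```python
# Toy sketch of statistical disclosure limitation ("masking") applied
# to one microdata record. Top-coding caps rare extreme values;
# generalization coarsens a quasi-identifier into a band. Field names
# and thresholds here are hypothetical.

def mask_record(record, income_cap=150_000, age_band=10):
    masked = dict(record)
    del masked["name"]  # drop the direct identifier entirely
    # Top-code income: very high incomes are rare, hence identifying.
    if masked["income"] > income_cap:
        masked["income"] = income_cap
    # Generalize age into a band, e.g. 37 -> "30-39".
    low = (masked["age"] // age_band) * age_band
    masked["age"] = f"{low}-{low + age_band - 1}"
    return masked

record = {"name": "J. Doe", "age": 37, "income": 480_000}
print(mask_record(record))  # {'age': '30-39', 'income': 150000}
```

The example also shows the degradation the text mentions: after masking, an analyst can no longer distinguish an income of $480,000 from one of $150,000, which may or may not matter for a given analytical purpose.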

6.9
CONCLUSION

Many types of organizations face privacy issues. Some, like financial institutions, have long been recognized as holding large amounts of sensitive information about individuals and have faced some scrutiny of their handling and use of that information. Other organizations, such as retail merchants, data aggregation services, and noncommercial groups, are not so clearly identified with privacy issues, either in the public eye or through regulation, but are beginning to be seen as gathering many of the same kinds of information and as having many of the same vulnerabilities that can lead to concerns about privacy.

This brief examination of a variety of privacy issues centering on institutions and organizations makes it clear that the interaction of information technology and privacy is not an issue in only some isolated areas of society. Large amounts of information are being gathered by organizations in many areas of U.S. society, whether in government, the private commercial sector, or the noncommercial sector. This information is being aggregated, mined, and exchanged in ways about which most of us are unaware. Although some situations (such as the use of RFID tags) have attracted public attention, others (such as the aggregation of information by data services such as ChoicePoint) are not known about until a major breach is announced.

Another feature of privacy that this chapter illustrates is that often it is not the gathering of information itself that is a violation of privacy, but rather the use of that information. Schools, for example, need to gather large amounts of information about their students to be able to design classes for those students, among other purposes. But that same information can be used to provide marketing information, and that secondary use can lead to the perception of a violation of privacy.

BOX 6.3

Questions for Judgments and Policies About Privacy

  1. Goals—Have the goals been clearly stated, justified, and prioritized? Are they consistent with the values of a democratic society?

  2. Accountable, public, and participatory policy development—Has the decision to apply the technique been developed through an open process, and if appropriate, with the participation of those to be surveilled? This involves a transparency principle.

  3. Law and ethics—Are the means and ends not only legal but also ethical?

  4. Opening doors—Has adequate thought been given to precedent-creation and long-term consequences?

  5. Golden rule—Would the watcher be comfortable in being the subject rather than the agent of surveillance if the situation were reversed? Is reciprocity or equivalence possible and appropriate?

  6. Informed consent—Are participants apprised of the system’s presence and the conditions under which it operates? What exceptions to informed consent are deemed legitimate? Is consent genuine (i.e., beyond a response to deception or unreasonable seduction) and can participation be refused without dire consequences for the person?

  7. Truth in use—Where personal and private information is involved, does a principle of unitary usage apply, whereby information collected for one purpose is not used for another? Are the announced goals the real goals?

  8. Means-ends relationships—Are the means clearly related to the end sought and proportional in costs and benefits to the goals?

  9. Can science save us?—Can a strong empirical and logical case be made that a means will in fact have the broad positive consequences its advocates claim?

  10. Competent application—Even if in theory it works, does the system (or operative) using it apply it as intended?

Suggested Citation:"Part III Privacy in Context, 6 Privacy and Organizations." National Research Council. 2007. Engaging Privacy and Information Technology in a Digital Age. Washington, DC: The National Academies Press. doi: 10.17226/11896.
×

There can even be differences in the perceived threat to privacy for the same action seen from different viewpoints. For example, information on past purchases can be used by marketing organizations to send out more targeted catalogs; this is seen by the marketing organizations as a way of cutting down the number of catalogs that a consumer receives (and thus is a way of giving that customer better, more personalized service). But some customers see this use of information as a mechanism to build a dossier of the customer’s likes and dislikes, and therefore as a violation of the privacy of the customer.

Abstracting across the domains outlined in Sections 6.2 to 6.8, a number of generic questions are suggested by the privacy issues these domains raise (Box 6.3). Asked about any proposed collection of personal information, these questions can help to indicate the complexity of these issues.

  1. Human review—Are automated results with significant implications for life chances subject to human review before action is taken?

  2. Minimization—If risks and harm are associated with a tactic, is it applied to minimize risk and harm with only the degree of intrusiveness and invasiveness that is absolutely necessary?

  3. Alternatives—Are alternative solutions available that would meet the same ends with lesser costs and greater benefits (using a variety of measures, not just financial measures)?

  4. Inaction as action—Has consideration been given to the principle that sometimes it is better to do nothing?

  5. Periodic review—Are there regular efforts to test the system’s vulnerability, effectiveness, and fairness and to review policies?

  6. Discovery and rectification of mistakes, errors, and abuses—Are there clear means for identifying and fixing these (and in the case of abuse, applying sanctions)?

  7. Right of inspection—Can individuals see and challenge their own records?

  8. Reversibility—If evidence suggests that the costs outweigh the benefits, how easily can the surveillance be stopped (e.g., extent of capital expenditures and available alternatives)?

  9. Unintended consequences—Has adequate consideration been given to undesirable consequences, including possible harm to watchers, the watched, and third parties? Can harm be easily discovered and compensated for?

  10. Data protection and security—Can data collectors protect the information they collect? Do they follow standard data protection and information rights as expressed in the Code of Fair Information Protection Practices and the expanded European Data Protection Directive?

SOURCE: G.T. Marx, “Seeing Hazily (But Not Darkly) Through the Lens: Some Recent Empirical Studies of Surveillance Technologies,” Law and Social Inquiry 30(2):339-400, 2005.

There are no simple answers in this complex of issues surrounding privacy. The principles implied in these issues are not necessarily of equal weight and are sometimes even in tension. They touch on other issues involving innovation, property rights, the desire to provide customers better and more personalized services, and improvement of efficiency and profitability by gathering more information. Furthermore, their applicability will vary depending on perceptions of crisis and across contexts (e.g., public health, law enforcement, and national security may involve exemptions that would be inappropriate for the private sector or individuals). A snapshot view of these institutions selectively illustrates some of the problems that will have to be addressed in thinking about privacy in an age of information.
