A Short History of Surveillance and Privacy in the United States
Routine surveillance is an inescapable feature of daily life in the United States at the start of the 21st century. We all leave a trail of electronic traces that are picked up for processing by a variety of organizations and agencies; exchanges involving personal information thus occur constantly. Sometimes people are able to negotiate the amount of data they are willing to disclose—for instance, when filling out a product registration form for a new computer. At other times, an individual’s leverage over the organization is minimal, as when people apply for business or government services such as loans or insurance. The interaction becomes even more unidirectional when personal data are collected from a distance, without the direct participation of the citizen or consumer.
“Surveillance” may be thought of as systematic attention to personal details for the purposes of influence, management, or control. Some surveillance is personal, face-to-face supervision, but the main concern in what follows is situations in which data are routinely collected. Such systematic collection of personal information is not a new phenomenon in American society. As far back as the colonial era, organizations were actively interested in the details of people’s daily lives. While the intensity and scope of surveillance have varied since the 17th century, the same factors shape them: the interests of those in positions of power, the technology available to them, and the legal frameworks within which they operate.

NOTE: This appendix is based largely on work performed on contract to the committee by David Lyon, Queen’s University, Canada (with extensive research assistance from Bart Bonikowski, Sociology, Duke University).
The history of surveillance in the United States can be divided into five time periods, each characterized by particular political, legal, and technological developments. While these divisions are arbitrary, they highlight some of the main trends that have characterized the institutional collection of information and the corresponding moral and legal responses:
The first phase, which spans the decades from the mid-17th century to the American Revolution, is dominated by the Puritan ethic of colonial New England, with its emphasis on the enforcement of a strict moral code by means of neighborly “watchfulness.”
The second covers the early American Republic, from its confederation to the Civil War—a time of rapid social and political change and of a marked shift in surveillance from being an informal practice based in religious dogma to becoming an embryonic political tool of the government.
The third era stretches from the Civil War to the mid-20th century and is characterized by rapid technological growth, an increased reliance of government and business on surveillance, and the initial formulations of privacy as a legal right.
The fourth stage is that of post–World War II America from 1950 to 1980, which gave rise to computerized and centralized surveillance but also to a first concerted social effort at developing a legal right to privacy as an effective countermeasure.
The fifth period encompasses the main technological and political developments in surveillance and privacy from 1980 to the present, including the growth of computer interconnectivity, wireless technology, and the emergence of antiterrorism as the primary justification for intensified surveillance.
COLONIAL NEW ENGLAND (CA. 1650-1776)
Tensions between privacy and information gathering were present during the earliest colonial times. In New England, for example, the physical conditions of frontier life played an important role in shaping surveillance and privacy in the late 17th and early 18th centuries. The population of the Colonies in the 17th century was scattered among small settlements. Large families lived in small, crowded homes, many of which faced the town square or the main road, and frequently opened their doors to servants and lodgers, further reducing the amount of personal space available. It is not difficult to imagine how little personal information the inhabitants of New England were able to keep secret. In small, isolated communities where every face was familiar and information from the outside world was largely unavailable, people were involved primarily with one another’s lives.
Surveillance and privacy were also affected by the strict adherence to the form of religious morality that lay at the heart of colonial New England’s Puritan ethic. Members of the community were expected to divide their time between their occupation (usually farming), family obligations, and religious duties. Tobacco use, card playing, cursing, idleness, premarital and extramarital sex, breaking the Sabbath, and excessive drinking were seen as sinful and met with religious and criminal sanctions. The moral stigma also extended to solitary activities such as “night-walking” and living alone. Puritan congregations expected their followers not only to eschew those vices themselves, but also to keep watch on others to prevent them from wrongdoing. This mutual watchfulness was central to the colonial system of law enforcement and church discipline. Ironically, given the Puritans’ aversion to the involvement of the state in religious matters, stemming from their persecution by English authorities in the 17th century, church and government law were closely intertwined in colonial New England.
It would seem reasonable, based on the description thus far, to conclude that the lives of New England settlers were under constant and intense surveillance. However, surveillance was partly restricted by the legal system of colonial New England. As early as 1647, Rhode Island adopted the principle that “a man’s house is his castle,” originally formulated by English jurist Sir Edward Coke. The colony outlawed “forcible Entry and Detainer” into a private dwelling, except by law enforcement officers acting under exceptional circumstances.1 Massachusetts followed suit in 1659.
Meanwhile, a slow shift away from forced and self-incriminating testimony in the courtroom and the church, which laid the groundwork for the eventual construction of the Fifth Amendment of the Bill of Rights, was symptomatic of deeper changes in colonial society. America was gradually abandoning the strict ethic of the Puritan movement, a slow transition that continued well into the 19th century. During the 18th century, the Colonies also experienced rapid population growth, which increased the size of most towns, thereby altering the physical conditions that once facilitated mutual surveillance.
It is also important to note that the type of surveillance widely conducted within the Puritan society differed significantly from surveillance in the 19th and 20th centuries. The political and religious institutions of colonial America were largely informal and unstructured. As a result, surveillance was less an institutional practice than a communal one. No organized police force was charged with investigating individuals, and no widespread enumeration of the populace took place. In fact, official records were nearly nonexistent, with the exception of court files, internal administrative records, and vital records (such as birth and death certificates). Instead, surveillance was an unsystematic activity carried out by particularly zealous members of the community, with varying repercussions for those under its gaze. Unlike its more concentrated bureaucratic form, which emerged in the following centuries, the power of colonial surveillance was widely dispersed among the population.2
THE EARLY REPUBLIC (1776-1861)
The influence of Puritan values on surveillance and privacy in America diminished throughout the 18th and early 19th centuries. The increasing mobility of individuals and the growth of urban centers made keeping tabs on one’s neighbors increasingly difficult. Furthermore, improvements in literacy and education, combined with the ready availability of the press, meant that the interests of some extended beyond the neighborhood to the national and international political forum. Although these privileges were principally limited to white males,3 they contributed to changes in the social priorities of society as a whole, since it was these white males who set social norms and government policies. The turning point in this phase of surveillance and privacy history was the American Revolution and the subsequent formation of an independent republic of the United States of America.
The origins of institutional surveillance in Western society can be generally traced back to the establishment of the modern bureaucracy, which had its beginnings in the military organization, the bureaucratic state, and the capitalist enterprise.4 All three of these modern institutions came into existence in the United States at the end of the 18th century.
The escalating tensions between the Colonies and the British government over the arbitrary levying of taxes and the stationing of British troops in New England led to the outbreak of the first skirmishes of the Revolutionary War at Lexington, Concord, and Bunker Hill. In response, the first modern American army was born, under the command of George Washington and by direction of the Continental Congress, the newly established system of revolutionary self-government. The new army replaced the scattered militia and came complete with army drill, regular roll call, and punishment for disobedience.5
A year earlier, in 1774, the early bureaucratic structure of American government had emerged with the First Continental Congress, which organized local committees in most towns, cities, and counties of the Colonies.6 During the Constitutional Convention of 1787, which followed the War of Independence, this temporary structure was replaced by the present form of the U.S. government, with three branches of power and a hierarchical organizational structure. The U.S. government soon became the primary site of institutional surveillance, as it gradually developed record-keeping practices to monitor its citizens.
While U.S. industry did not take shape until the latter half of the 19th century, the roots of capitalist enterprise in the United States can be found in early-19th-century shipping trade, the Southern plantations, the emergence of banking, and the westward search for gold. As U.S. businesses continued to grow, they developed bureaucratic record-keeping practices to keep track of contracts, loans, assets, and taxable revenues. These records of course included some information that referred to identifiable individuals.
The revolutionary era also marked the first instance of political-loyalty surveillance in the interest of national security. In the past, American colonists had felt endangered almost exclusively by “outside” forces, such as Native American tribes or the French and the Spanish. This sense of endangerment shifted during the Revolution, as the perceived threat began emanating from within the ranks of colonial society. As the tensions between the colonists and the British mounted in the 1770s, resentment against domestic supporters of British rule (Tories or Loyalists) became widespread. With the commencement of the Revolution, this antipathy transformed into overt discrimination and abuse, as most Tories were ostracized and many fell victim to mob violence. The revolutionaries also persecuted Quakers, many of whom were pacifists opposed to the Revolution on moral and religious principle.
The surveillance capabilities of the new state were also used for other purposes. Since the Constitution of the United States called for the establishment of democratic popular elections and mandated a decennial census, the government needed to create organizational procedures that would regulate the proper fulfillment of these responsibilities. While electoral registration was an erratic practice whose application varied from state to state, the popular census of 1790 was the first attempt by the U.S. government to conduct systematic and universal gathering of information about its citizens. The census collected basic data on the gender, color, and identity of free males above the age of 16 years.7 With time, this crude enumeration tool evolved into a sophisticated source of demographic information employed by social scientists, policy makers, and government officials throughout the country.
While the post-Revolution era gave rise to early surveillance practices by the government and the military, its primary contribution to the history of surveillance in the United States was the codification in the Constitution and the Bill of Rights of rules restricting invasive information gathering. Although the ideological bases for the American Republic are complex, some of the basic values contained in the Constitution require mention, since they bear directly on surveillance in the new Republic. Drawing largely on the philosophy of John Locke and the heritage of Puritanism, the U.S. Constitution sought to protect the rights of the individual, to ensure the limited role of government in American society, and to reinforce the central importance of private property for the exercise of individual liberty.8 The Bill of Rights that emerged from these values was intended to shield citizens from unrestricted surveillance by those in positions of power and thus to avoid the development of authoritarian society, such as that of feudal Europe. The judicial interpretation of the Bill of Rights throughout the 19th and early 20th centuries rarely found solid grounds (especially based on the vague right to privacy) for curtailing government and corporate surveillance. Nonetheless, the sentiment of the founders against institutional intrusion in the life of citizens is unmistakable.
It must be stressed that the laws described above pertained to those members of society who were viewed by the state as free citizens, which generally meant white male adults. The plight of those deprived of citizenship rights, such as African American slaves and Native Americans, was drastically different. Their individual rights were not protected by U.S. laws, since the former were seen as private property belonging to slaveholders, while the latter were treated as savages not worthy of legal protection. Consequently, slaves continued living in a state of almost total surveillance in which their every action was subject to scrutiny by overseers. Undesirable behavior was punished severely, with no legal recourse. Native Americans were banished to reservations and faced persecution and death if they resisted. Thus, when constitutional protections and restricted practices of state surveillance after the Revolution are spoken of, it must be remembered that such conditions were the norm only for a limited fraction of the U.S. population.
THE MODERN REPUBLIC (1861-1950)
Between the Revolution and the Civil War, surveillance and individual rights coexisted in a fragile balance. The growing desire of U.S. bureaucracies to keep track of citizens was offset by limited technological means of information gathering and by the constitutional, statutory, and common-law regulations developed in the new Republic. This is not to say that surveillance did not exist in those decades; however, its scope and intensity were relatively limited. The average free male’s interaction with surveillance systems rarely went beyond reporting basic information in census surveys and furnishing land deeds during elections. This changed drastically in the years that followed the Civil War. The balance of surveillance and individual rights was upset by unprecedented technological development, the rapid growth of bureaucratic institutions (both governmental and commercial), and the failure of lawmakers to formulate adequate legal protections against surveillance practices.
Many contemporary surveillance technologies owe their existence to late-19th-century American enterprise. The steam engine enabled unprecedented rates of travel, and mechanization shifted work from the farm to the factory, a harbinger of greater mobility to come. Yet no technological developments had as much impact on the practice of surveillance in the 19th century as the inventions of the telegraph, the Dictograph recorder, the instantaneous photographic camera, and the punch-card-tabulating machine.
The telegraph, invented in the 1840s, created a completely new means of communication. For the first time, communication was separate from transport. However, as would be the case through much of U.S. history, the new medium also facilitated the interception of information sent through its channels. As early as the Civil War, during which the telegraph was first used on a wide scale, Confederate and Union forces tapped each other’s telegraph lines to remain informed of their adversary’s strategic decisions.9
Photographic and sound-recording technology made it possible to record people’s actions and words from a distance and, on occasion, without their consent.10 Photographic images also replaced the easily forged signature as a common identifier for bureaucracies eager to keep track of growing urban populations. With time, however, an efficient source of bodily identification—the fingerprint—further revolutionized the identification of citizens.
By 1902, fingerprints were systematically used by U.S. authorities, beginning with the New York Civil Service Commission. The practice quickly spread to prisons, the U.S. military, and police departments throughout the country. In 1924, the Bureau of Investigation (later the Federal Bureau of Investigation, or FBI) received a legislative mandate to manage a national fingerprint card database, which contained 100 million records by 1946. Since that time, fingerprinting has been the predominant method of identification used by U.S. law enforcement agencies.
The final technological development of this era that had a marked impact on surveillance practices was the punch-card-tabulating machine. Invented by Herman Hollerith in 1889, the machine was designed to streamline the processing of the census. Hollerith’s invention, which aggregated information from patterns of holes punched into cardboard cards, was first tested in the 1890 census, shortening its tabulation and analysis from 18 to 6 weeks. The device was an instant success, as it revolutionized record keeping, enabling quick information input and retrieval and decreasing the amount of space necessary for storing records.
The new surveillance technology was both a driving force in the growth of institutional surveillance and a product of increasing bureaucratic needs for information gathering. Both the bureaucratic institutional model and the technologies that it employed were the products of the pervasive pursuit of efficiency that dominated modern American society. The social fabric of the country changed dramatically in the late 19th century owing to immigration and industry, and a continent-wide railroad system allowed increasing mobility. Workers seeking employment were looking beyond their neighborhoods and towns, and the proportion of those who could provide for themselves and their families by living off their land decreased rapidly.
U.S. bureaucracies began relying more heavily on records to keep track of the growing, increasingly mobile population. They collected vital records, school records, employment records, land and housing records, bank and credit records, professional licensing records, military records, church records, law enforcement records, and many others. Some of these information practices were not new—birth and death data and church records had been collected as far back as during the colonial era. However, the compilation of records became significantly more sophisticated at the end of the 19th century. Record keeping became more universal, more systematic, and more thoroughgoing than ever before in American history. Yet the records of this era differed from those of the mid-20th century in an important way: They were maintained predominantly at the local level. The lack of centralized management of record keeping limited its social control function, since a person could move to a different area to escape bad credit or a criminal investigation, although to do so would require the necessary resources.
Despite the predominance of uncoordinated local records in 19th-century America, it would be a mistake to conclude that no national surveillance practices existed before the 20th century. The decennial census had been the site of widespread information collection since 1790. By 1880, it was a sophisticated demographic tool under the jurisdiction of a newly established Census Office within the Department of the Interior.11,12 While the early surveys collected only the most rudimentary information regarding the classes of people inhabiting a household, the 1880 census featured questions about the age, gender, marital status, place of birth, education, occupation, and literacy status of all household members.
Loyalty surveillance also played a role during the Civil War, and it reemerged during World War I, although this time the lens of surveillance was focused on German Americans and antiwar activists.13 Concerns about the war caused the government to pass the Espionage Act in June 1917. Peace protests were put down by police and the military; newspapers publishing antiwar articles were refused circulation by the Post Office Department; films with ostensible antiwar content were banned; and many professors critical of their universities’ pro-war stance were fired. Interestingly, the Espionage Act remains codified in U.S. law to this day, though enforcement of its provisions has been reserved for times of war.14
After World War I, similar loyalty-surveillance tactics were used against Socialists and labor unions, and such tactics later re-emerged in full force during World War II against Japanese-Americans. According to scholars William Seltzer and Margo Anderson,15 the Bureau of the Census assisted U.S. law enforcement authorities in carrying out the presidentially ordered internment of Japanese-Americans. Thus, a surveillance practice established for ostensibly benign statistical purposes was used for the implementation of the most oppressive domestic government action in U.S. history, aside from the treatment meted out to African American slaves and Native Americans. Although loyalty surveillance would never reach such overt extremes again, its presence would continue to dominate American political life from the 1950s to the late 1970s.
Another government agency highly dependent on gathering information from most U.S. citizens was the Bureau of Internal Revenue (which became the Internal Revenue Service in 1952). Initially set up as the office of the Commissioner of Internal Revenue, the agency was responsible for the collection of the first income tax in the United States between 1862 and 1872. However, the authority of the U.S. Congress to levy an income tax was not established until 1913, with the passage of the Sixteenth Amendment. Income tax in that year was graduated, and so the commissioner needed to keep track of the income of all taxpayers, giving rise to one of the first centralized document databases of the U.S. government.
By the 1930s, personal identification documents, whose proliferation was initially prompted by the outbreak of World War I, were important means for distinguishing those who were eligible for state programs from those who were not. Franklin D. Roosevelt’s New Deal offered Americans new benefits, including Social Security and labor standards, in order to pull the country out of the Great Depression. Yet, at the same time, the New Deal substantially increased the government’s administrative burden, requiring new surveillance procedures to keep track of the millions of new benefit recipients and minimize fraudulent claims. This uneasy combination of social benefits and regulatory mechanisms would come to define the nature of bureaucratic surveillance in the 20th century, as it continually oscillated between the provision of care and the exercise of control. The Social Security Board (later to become the Social Security Administration), established in 1935 under the New Deal, embodied both of these contradictory values.
The development of surveillance was not limited to the political arena. In fact, some of the most overt uses of workplace behavior monitoring and record keeping took place in the burgeoning private sector. As would be the case for the remainder of the 20th century, early business surveillance focused on two distinct objectives: the monitoring of the worker and, increasingly, the investigation of consumer behavior. One could add a third objective, credit reporting, although this task was quickly taken over from individual businesses by a dedicated industry. Whatever the objective, private businesses were quick to recognize the potential profits to be made from consumer information.
Public policy and jurisprudence posed few constraints on the intensification of surveillance in bureaucratic record keeping, immigration, law enforcement, and the workplace. Whatever resistance to surveillance was mounted by private property rights before the Civil War largely failed to slow the spread of surveillance in industrial America. New surveillance technologies often did not breach private property. Microphones could be installed in adjacent apartments, telephone taps could be installed outside the home, and photographs could be taken from afar, thus upholding property rights. In the meantime, the less-intrusive forms of surveillance, such as bureaucratic record keeping, were simply seen by the law as necessary elements of a developing nation-state and were afforded few protective regulations.
The period from the late 19th to the early 20th centuries was a formative one for thinking about privacy rights. A key moment was Samuel Warren and Louis Brandeis’s definition of privacy as the “right to be let alone.”16 Their article described the progression of common law from the protection of property and persons to the defense of spiritual and emotional states, and made the prescient observation that technology would soon lend such discussions new urgency.17 It is not clear that their warning was heeded until the 1960s, 70 years after they offered it.
Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4(5):195, 205, December 15, 1890, available at http://www.lawrence.edu/fac/boardmaw/Privacy_brand_warr2.html.

Alan F. Westin, Privacy and Freedom, 1967, p. 246.

Another important development was the passage of the Communications Act of 1934, specifically Section 605, which provided that “no person not being authorized by the sender shall intercept any communication and divulge or publish the existence, contents, substance, purport, effect, or meaning of such intercepted communication to any person.” In two subsequent decisions, the U.S. Supreme Court held that the plain language of this section applied to federal agents,18 that evidence obtained from the interception of wire and radio communications was inadmissible in court,19 and that evidence indirectly derived from such interceptions was inadmissible as well.20 Note, however, that the federal government subsequently continued to use wiretapping for purposes other than the collection of evidence to be introduced in court.21
COLD WAR AMERICA (1950-1980)
The three decades that followed World War II brought issues of surveillance and privacy into the light of serious public debate for the first time in U.S. history. Stories of excesses of government surveillance were featured prominently in the mass media, congressional hearings resulted in the passage of privacy laws, and new regulations emerged to govern the information practices within some private industries. Movies featured surveillance, and social scientists started to analyze it.22
Amidst all the attention given to privacy, surveillance was becoming ever more ubiquitous. Fueled by unprecedented rates of consumption, a new relationship developed between the individual and the retail sector, one governed by credit lending and surveillance-based marketing practices. Despite the advances of organized labor in the 1930s, the vast new middle class was under pressure in the workplace from hiring practices that demanded personal information and strict performance monitoring after the point of hiring. The political loyalty of citizens was questioned on a scale never before witnessed in American society as the anti-Communist mood swept the United States. And all these surveillance practices were facilitated by rapid technological development.
Beginning in the late 1950s, the computer became a central tool of organizational surveillance. It addressed problems of space and time in the management of records and data analysis and fueled the trend of centralization of records. The power of databases to aggregate information previously scattered across diverse locations gave institutions the ability to create comprehensive personal profiles of individuals, frequently without their knowledge or cooperation. The possibility of the use of such power for authoritarian purposes awakened images of Orwellian dystopia in the minds of countless journalists, scholars, writers, and politicians during the 1960s, drawing wide-scale public attention to surveillance and lending urgency to the emerging legal debate over privacy rights.
One of the sectors that immediately benefited from the introduction of computer database technology was the credit-reporting industry. As was the case with most bureaucratic record systems, credit reporting began as a decentralized practice. In 1965, the newly established Credit Data Corporation (CDC)—a for-profit, computerized central agency—became the first national credit-reporting firm in the United States. It was soon followed by other firms, such as the Retail Credit Company, Hooper-Holmes, and the Medical Information Bureau (MIB), which served the insurance industry.
But the credit and insurance industries were not alone. Banks, utility companies, telephone companies, medical institutions, marketing firms, and many other businesses were compiling national and regional dossiers about their clients and competitors in quantities never before seen in the United States. The public sector was equally enthusiastic about the new capabilities of computers. Most federal, state, and local government agencies collected growing volumes of data and invested vast resources in the computerization of their systems. The U.S. military, the Internal Revenue Service, the Social Security Administration, and the Bureau of the Census were among the largest consumers of information and were thus some of the first to become computerized.
While record keeping was growing in all segments of society, the federal government continued its long-standing practice of loyalty surveillance—now increasingly computer-assisted. In the 1950s, the enemies were Communists; in the 1960s, black rights activists; and in the late 1960s and early 1970s, antiwar protesters. The existence of these groups was believed to justify the federal government’s development of security records to monitor anyone deemed a threat. It used these security records for two purposes: to monitor the suitability of federal employees and to monitor subversive activity outside the government.
During the 1950s and 1960s, negative reactions to the growing centralization and computerization of records and the continued abuse of surveillance power by law enforcement authorities began to mount. Critics emerged from all sectors of society, including the academy, the mass media, churches, the arts community, and even the corporate world. Some politicians, who received increasing numbers of complaints from their
constituents, began raising the issue in Congress. As a result, over the course of the 1960s and early 1970s a number of groundbreaking congressional committees investigated the use of surveillance practices by the federal government and the private sector, most notably in connection with the Watergate scandal. Led by figures such as Senator Sam J. Ervin, Representative Cornelius Gallagher, and Senator Edward Long, the committees interviewed hundreds of public- and private-sector officials and analyzed thousands of internal documents, revealing the immense scope of surveillance in American society.
Aside from its direct impact on practices, the work of the congressional committees helped create public awareness of, and support for, resistance to surveillance. As a result of the hearings, the regulation of surveillance became one of the priorities of the U.S. government. With continued lobbying by individuals such as Senator Ervin and Alan Westin, a leading expert on information privacy, the first concrete federal antisurveillance statutes were passed. Beginning in 1966, Congress responded to the widespread calls for the regulation of surveillance. The Freedom of Information Act (FOIA), passed in 1966 and amended in 1974 and 1976; the Omnibus Crime Control and Safe Streets Act of 1968; and the Fair Credit Reporting Act of 1970 were important steps, respectively, in giving people access to information held about them, placing limits on police surveillance, and legislating accuracy and confidentiality for credit bureaus.23
In 1974, the Privacy Act was passed. For the first time, legislation explicitly identified and protected the right to privacy as a fundamental right. Although the original draft called for the regulation of information practices in federal, state, and local government as well as in the private sector, the final bill extended only to the federal government and the private companies with which it does business.24 This continues to be the case, since private corporations generally answer only to self-regulation or sector-specific laws. The Privacy Act governed the collection of personal information and outlawed its disclosure without the consent of the individual in question. The exceptions to the antidisclosure clause included standard intra-agency use, disclosure under FOIA, routine use for original purposes, and use for the purposes of the census, statistical research, the National Archives, law enforcement, health and safety administration, Congress, the Comptroller General, and court orders.25
A major challenge faced by the new privacy legislation was its proper enforcement. Even Senator Ervin, whose work led directly to the passing
of the Privacy Act, was less than enthusiastic about its impact: “The Privacy Act, if enforced, would be a pretty good thing. But the government doesn’t like it. The government has an insatiable appetite for power, and it will not stop usurping power unless it is restrained by laws they cannot repeal or nullify.”26 Indeed, failures to comply with surveillance regulations penetrated even the top tiers of the federal government. During the presidential campaign of 1972, five burglars with links to the Nixon administration were caught breaking into the Democratic National Committee offices in order to install surveillance equipment. The resulting Watergate affair revealed a wide array of secret government surveillance practices aimed at political opponents, journalists, and antiwar activists.27 All this took place 4 years after the enactment of the Omnibus Crime Control and Safe Streets Act.
Court cases continued to bring privacy and freedom of information issues to the fore. In the 1965 case of Griswold v. Connecticut, the Supreme Court struck down a statute forbidding the use of contraceptives and the provision of birth control advice, recognizing an implicit constitutional right to privacy. Building on this precedent, the Roe v. Wade ruling of 1973 eased the way to legal abortion by grounding the decision in that same right to privacy. Whether invoked directly or indirectly, privacy would continue to dominate the surveillance discourse. Over the next two decades its role would become ever more crucial, as government and business surveillance continued to increase in intensity and scope, despite the modest legal victories of the 1960s and 1970s.
GLOBALIZED AMERICA (1980-PRESENT)
The end of the 20th century was a time of increasing globalization of America’s economy. Computer interconnectivity allowed the leading corporations to expand their manufacturing and marketing bases to countries around the world, forming immense, multinational business networks, coordinated in real time.28 This process was bolstered by the consolidation of many industries, as countless mergers and takeovers created business conglomerates in many sectors. With the convergence of corporate management came the convergence of company records and technologies. In the meantime, the rise of the personal computer in the 1980s, and networking along with wireless communication in the 1990s, were contributing to change in the daily lives of Americans. People’s
interactions with technology became routine, allowing them to accomplish many tasks from a distance. The growth of the electronics industry added momentum to ever-expanding consumerism, which was itself facilitated by computer networking as credit and debit purchases became commonplace. As virtually all financial transactions became electronic, they were automatically tracked and recorded in computer databases.
Advances in science and technology redefined the human body as a site of information, making it a prime tool for surveillance practices. Closed-circuit cameras emerged in many retail locations to monitor customer and employee behavior. With time, they also became commonplace in public areas for the purpose of crime control. In the 1990s, the development of biometrics, the automatic identification technique based on bodily characteristics, suggested the possibility of identifying people without the need for documents. The emergence of DNA analysis and the subsequent mapping of the human genome promised revolutionary possibilities for identification and medical testing.
The marketing industry was transformed by information gathering. Demographic profiling based on consumer-behavior records gave rise to targeted marketing, which allowed companies to focus their promotional dollars on consumers they deemed desirable. The detailed information about consumer preferences and habits that made such targeting possible became a valuable commodity in its own right. So-called customer relationship marketing, which relies on sophisticated profiles built from purchasing and preference data, was developed as a major software tool for matching consumers to products and services.
In all its applications, surveillance was becoming increasingly rhizomic.29 No longer were national data centers necessary, since information from decentralized databases could be aggregated and analyzed through the use of computer networks. And the rhizomes of surveillance systems were permeating every facet of American society. Highway toll systems, automatic teller machines, grocery store checkouts, airport check-ins, and countless other points of interaction automatically fed information into computer systems, although not in ways that were interoperable or that allowed easy correlation of data among systems.
Against the backdrop of intensifying practices, Congress passed a number of laws to regulate surveillance within specific sectors, as described in Section 4.3.1. These bills have restricted the disclosure and misuse of personal information within particular industries, but such
legislation has generally been narrowly drawn, and so problems outside of the specific purview of these bills have gone unaddressed.
Notwithstanding these limited regulatory efforts, many commentators believed that the role of the state in surveillance was weakening in the 1990s. Some went as far as to dismiss the very concept of a nation-state as an anachronism that would not survive the age of globalization. Both arguments became moot after September 11, 2001, however. After the terrorist attacks on New York and Washington, D.C., the Bush administration forcefully reasserted the power of the state, launching the United States on a “war against terrorism.”30 One of the major components of this war was state-sponsored surveillance.
In the days immediately after September 11, the power of rhizomic surveillance was demonstrated to the public as the actions of the terrorists before the attack were reconstructed from bank records, closed-circuit television cameras, and airport systems. In order to enhance the existing surveillance infrastructure, the President and the Congress enacted the USA Patriot Act of 2001. The act gave the government greater surveillance power over citizens of the United States in order to increase security.