
17
Constitutional Law and the Law of Cyberspace

Larry Lessig

17.1 INTRODUCTION

I am a professor at Stanford Law School, where I teach constitutional law and the law of cyberspace. I have been involved from the beginning in this debate about how best to solve the problem of controlling children’s access to pornographic material. I got into a lot of trouble for the positions I initially took, which made me confident that I must be on to something right.

This is, necessarily, a question about the interaction between a certain technological environment and certain rules that govern that environment. This question about children’s access to materials deemed harmful to minors obviously was not raised for the first time in cyberspace; it was raised many years prior in the context of real space. In real space, as Justice O’Connor said in Reno v. ACLU, 521 U.S. 844, 887 (1997), a majority of the states expressly regulate the rights of purveyors of pornography to sell it to children. This regulation serves an important purpose because of certain features of the architecture of real space.

It is helpful to think this through. You could suppose a community that has a law that says that if you sell pornography or other material harmful to minors, then you must assure that the person purchasing it is above the age of 18. But in addition to a law, there are clearly also norms that govern even the pornographer in his willingness to sell pornography to a child. The market, too, participates in this zoning of pornography from children; pornography costs money, and children obviously do not have a lot of money. Yet the most important thing facilitating this regulation is that, in real space, it is relatively difficult to hide the fact that you are a child. A kid might use stilts and put on a mustache and dark coat, but when the kid walks into a pornography store, the pornographer probably knows that this is a kid. In real space, age is relatively self-authenticating.

This is the single feature of the architecture of cyberspace that makes this form of regulation difficult to replicate there. Even if you have exactly the same laws, exactly the same norms, and a similar market structure, the character of the original architecture or technology of cyberspace is such that age is not relatively self-authenticating.

17.2 REGULATION IN CYBERSPACE

The question, then, is how to interact with this environment in a way that facilitates the legitimate state interest of making sure that parents have the ability to control their children’s access to this stuff, while continuing to preserve the extremely important First Amendment values that exist in cyberspace. The initial reaction of civil libertarian groups was to say the government should do nothing here—that if the government did something, it would be censorship, which is banned by the First Amendment. Instead, we should allow the private market to take care of this problem.

Although the U.S. Congress passed the Communications Decency Act (CDA) of 1996, civil liberties organizations uniformly supported striking it down for that very reason. When Bruce Ennis argued the case before the Supreme Court, he said, “Private systems, these private technologies for blocking content, will serve this function just as well as law.” And the Court credited the fact that there exists private technology that could serve this purpose as well as law.

But the thing to keep in focus is that just as law regulates cyberspace, so does technology regulate cyberspace. Law and code together regulate cyberspace. Just as there is bad law so, too, there is bad code for regulating cyberspace. In my code-obsessive state of California, we say there is bad East Coast code—this is what happens in Congress—and bad West Coast code, which is what happens when people write poor technology for filtering cyberspace. The objective of someone who is worried about both free speech in cyberspace and giving parents the right type of control should be to find the mix between good East Coast and good West Coast code that gives parents this ability while preserving the maximum amount of freedom for people who should not be affected by this type of regulation.

In my view, when the civil liberties organizations said government should do nothing, they were wrong. They were wrong because it created a huge market for the development of bad West Coast code—blocking software, or censorware, which made it possible for companies to filter out content on the Web. The reason I call this type of technology “bad code” is that it filters much too broadly relative to the legitimate state interest in facilitating the control of parents over their children’s access to materials that are harmful to them.

There is a lot of good evidence about how poorly this technology filters cyberspace: how it filters the wrong type of material. There are also more insidious examples of what the companies that release this software do. For example, if you become known as a critic of that software, mysteriously your Web site may appear on the list of blocked Web sites, which becomes an extraordinary blacklist of banned books. The problem with this blacklist of banned books is that the public cannot look at it. It is a secret list—a secret list of filtered sites that is being sold to the public on account of parents’ legitimate desire to find a way to protect their children.

17.3 POSSIBLE SOLUTIONS

My view is that there is a mixture of government and market actions that could help facilitate the type of control that parents deserve while minimizing the bad effects of this West Coast code. I will describe two versions of it. One is more problematic; the other is more invasive.

Imagine a browser that allows you to select G-rated surfing. As the browser peruses the Web, the client signals to the server that this person wants G-rated browsing. This means that, if you have material that is harmful to minors on your site, you cannot serve that G-rated browser this material. The necessary law to make the regime work is simply a requirement that sites respect the request that only G-rated material be sent to a particular client. All that is required is that you forbid people from sending so-called “harmful-to-minors” material to a browser that says, “I want G-rated material.”
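As a rough sketch of how this signaling might work at the protocol level, consider the following, which assumes a hypothetical X-G-Rated request header (no such header is standardized; the header name and the sample pages are purely illustrative). The legal obligation in this scheme rests on the server: once the client announces that it wants G-rated material, the server may not send harmful-to-minors material in response.

    # Minimal sketch of the first scheme. The "X-G-Rated" header is a
    # hypothetical signal, not a real standard; the page contents are
    # placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ADULT_PAGE = b"<html>material that may be harmful to minors</html>"
    SAFE_PAGE = b"<html>Only G-rated material is served to this browser.</html>"

    class GRatedAwareHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Under the proposed law, the server must honor the client's
            # request to receive only G-rated material.
            wants_g_rated = self.headers.get("X-G-Rated") == "1"
            body = SAFE_PAGE if wants_g_rated else ADULT_PAGE
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), GRatedAwareHandler).serve_forever()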

If there were such a law—and only that law—then there would be a strong incentive for the market to develop many browser technologies that would signal efficiently, “I want G-rated material.” A family in a particular house could have many different accounts on the browser, so that children have G-rated accounts and the parents do not. The market would provide the technology to make that system work.

One problem with this system is that, by going around and raising your hand and saying, “I want G-rated browsing material,” you are also saying, “I am likely to be a child.” People who want to abuse children can then take advantage of that hand-waving in ways that we obviously do not want. There is a way around this problem, but let us move to the second solution, which I think solves it more directly.

Suggested Citation:"17 Constitutional Law and the Law of Cyberspace." National Research Council and Institute of Medicine. 2002. Technical, Business, and Legal Dimensions of Protecting Children from Pornography on the Internet: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/10324.
×

Imagine a law that says, “You must, if you have a Web site, have a certain tag at the server or the page level that signals the presence of material that is harmful to minors.” This is the type of judgment that bookstores have to make now; it is not an easy judgment, but it is one already entrusted to booksellers. An incentive is thereby created in the market for the development of a G-rated browser, but this time it does not signal its use by a child. It simply looks for this particular tag. If it finds this tag, then it does not give the user access to the Web site.
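Here is a sketch of the browser side of this second scheme, under the assumption that the tag takes the form of a meta element named "harmful-to-minors" (again hypothetical; no such name is standardized). Note that the request itself reveals nothing about the user's age; the browser simply declines to render pages that carry the label.

    # Sketch of a G-rated browser for the tag-based scheme. The meta tag
    # name "harmful-to-minors" is an illustrative assumption.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class HarmfulTagDetector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.tagged = False

        def handle_starttag(self, tag, attrs):
            # Look for the hypothetical self-label:
            #   <meta name="harmful-to-minors" content="yes">
            if tag == "meta":
                attributes = dict(attrs)
                if (attributes.get("name") == "harmful-to-minors"
                        and attributes.get("content") == "yes"):
                    self.tagged = True

    def fetch_g_rated(url):
        """Return the page, or a block notice if the page labels itself."""
        html = urlopen(url).read().decode("utf-8", errors="replace")
        detector = HarmfulTagDetector()
        detector.feed(html)
        if detector.tagged:
            return "Blocked: this site labels itself as harmful to minors."
        return html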

This, too, is a mixture of a certain amount of regulation, which says “you must tag this content,” and a certain expectation about how the market will respond. To the extent that parents want to protect their children, they will adopt versions of the browser that facilitate this blocking on the basis of age. To the extent they do not want to protect their children, they will not use these types of browsers. But the power either to adopt the technology to block access or not will be within the hands of parents. Obviously, browsers—at least in the current browser war—are inexpensive; Microsoft has promised they will be free forever. Thus, the cost of the technology implemented from the parents’ side is very low.

The advantage to this approach is that the only people blocked by this system are either parents who opt to use the blocking or schools that adopt browsers that facilitate blocking to protect children from harmful content while at school. It does not have the over-inclusiveness problem that the other solutions tend to have. Because the incentive is structured so that all we need to worry about is material harmful to minors, it does not create an incentive to block much more broadly than what the law legitimately can require.1

If Geoff Stone2 were here, he would say, “Yes, but aren’t you forcing Web sites to speak, by forcing them to put these little tags on their systems? And so isn’t this compelled speech, and isn’t that a violation of the Constitution?” I think the answer is no, because the relevant compelled speech is not that you must display on your Web site a banner that says, “This is material harmful to minors.”

1. Milo Medin said he likes this scheme because there are many ways of implementing it—not only in a browser, but also as a service that a user could buy from a network provider. The provider would be able to look at the tags as part of the caching process, and people would not be subject to the usual workarounds on the software side. Another appealing aspect is that it puts all the people who want to cooperate on one side of the issue. The other people do not want to cooperate and do not want their stuff to be restricted. The question is, what incentives do these people have? Many personal publishers, who publish just because it is fun, would be affected directly by this. It would not affect the large companies, because they would act rationally.

2. Geoff Stone, from the University of Chicago, spoke on the First Amendment at the committee’s first meeting, in July 2000.


It is not that you must, in any public way, advertise this characteristic. You simply enable the Web site to label itself properly through the HTML code in the background. The Supreme Court has upheld the right of states to force providers of material harmful to minors to discriminate in the distribution of this material. It seems to me perfectly consistent with that opinion to say that sites that have this type of material must put a hidden tag in it that facilitates the type of blocking that would enable parents to regain some kind of control.

Geoff Stone taught me the First Amendment, so I understand his perspective toward it. But I think he is undercounting how this action looks in light of the other things Congress has done. There is a certain pragmatic character to how the Supreme Court decides cases; the Court will not say that Congress can never do anything until the end of time. This type of regulation seems to me to be a relatively slight intrusion that would facilitate a better free-speech environment than would exist in the absence of any federal regulation. If we had no federal regulation at all, the result would be, for example, the blocking by private filters of many sites about contraception. In this way, the First Amendment world is worse without this regulation than with it.

The necessary condition for success is not an agreement about what material is harmful to minors but rather an agreement about what the language of the harmful-to-minors tag would be. The former would be left to the ordinary system of self-rating, letting people decide for themselves what the character of their material is. The standard imposed by the Supreme Court is that you must adopt the least-restrictive means. CDA-1 failed because it was overly broad in trying to regulate things that were clearly not speech harmful to minors and because it created too much of a burden on users by requiring them to carry IDs around if they wanted to use the Web. I think CDA-2 will be struck down because it continues to require that you carry an ID. These burdens would have to be borne by everyone who wanted to use the Web, just so that children could be protected.

In my scenario, the burden is borne by Web site administrators, who already are spending extraordinary amounts of money developing their Web pages. It is just one more tag. No one can argue that the marginal cost of one more tag is prohibitive. What is expensive is making a judgment about your Web site. But if you are in the pornography business, then it is an easy judgment. If you are in the business of advising children about access to contraception, then I think it is an easy judgment. The Starr report3 is not harmful to minors. There would be difficult cases, but the law passed by Congress requires these difficult decisions anyway.

3. This is a reference to the 1998 report by Independent Counsel Kenneth Starr on President Clinton’s relationship with a White House intern.


I envision the G-rating feature as an opt-in setting on a browser. It could be a default instead,4 but I contend that if parents do not know how to turn on the G-rating feature, then they ought to learn. Constitutionally, opting out clearly is different from opting in. The way to analyze the constitutional balancing test is as follows: Is the additional burden placed on the 100 million people who do not have children and do not care about protecting children worth the advantage of making sure that the 60 million people who do want to protect children do not have to take any extra steps? I cannot predict how this type of judgment would be made. But as the market develops, people will start branding themselves, much as AOL has done. One reason why AOL likes the existing system so much is that the company draws a lot of parents to its content, because it has taken many steps to provide for them.

Age verification would be performed by the family, in switching the browser’s G-rating setting on or off. This is the big difference between this type of solution and the CDA type of solution, in which age verification is done over the Internet. With age verification over the Internet, the incentives for cheating are big, so the system needs to be sophisticated enough to prevent it.
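A small sketch of what family-side age verification might look like in a multi-account browser, with all names illustrative: the parent sets the G-rating flag per account, and the browser attaches the hypothetical signal only for accounts so marked.

    # Sketch of per-account G-rating in a browser profile store. The
    # account names and the "X-G-Rated" header are illustrative.
    from dataclasses import dataclass

    @dataclass
    class BrowserProfile:
        name: str
        g_rated: bool  # set by a parent; this is the "age verification"

    PROFILES = {
        "parent": BrowserProfile("parent", g_rated=False),
        "child": BrowserProfile("child", g_rated=True),
    }

    def request_headers(profile):
        """Headers the browser would attach for this account."""
        return {"X-G-Rated": "1"} if profile.g_rated else {}

    assert request_headers(PROFILES["child"]) == {"X-G-Rated": "1"}
    assert request_headers(PROFILES["parent"]) == {}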

My proposal suggests a two-tier system in a library setting,5 with one tier available to children and the other available to adults. Just as libraries now might have an adult section that is not accessible to children, you can imagine having some browsers that are G-rated and others that are not. It is difficult to know the library’s role in enforcing the rule on children, however. Some libraries have adopted the practice of requiring a child’s library card to be marked. I am less concerned about libraries enforcing this rule when only a tiny fraction of speech is being regulated, as opposed to many types of speech. It does suggest some minimal role for librarians.6

4. Linda Hodge noted that most parents are not using filters and suggested that the G-rating feature be a default, requiring action to opt out: to disable the feature, a user would change the default setting. Milo Medin said that ISPs supply browsers and could provide an option, either at startup or in an upgrade panel, that asks the user to “check this or that.”

5. Marilyn Mason said that one of the most troublesome things about the current legislation is that it puts the burden of deciding what is harmful to minors on the shoulders of every school and library. She said aspects of Lessig’s proposal are appealing: the least-restrictive setting becomes the norm, the list of what is G-rated or not is public, a challenge is a public event, public agencies are removed from the middle, and millions of people are relieved of the burden of deciding what “harmful to minors” means.

6. Marilyn Mason said the tier system could be handled with a library card or smart card. An adult has an adult card so there is no problem. Children have their parents sign for their cards. If a parent wants a child to have unlimited access, then the card can be so coded. The cards can be read by machine. David Forsyth said librarians have told the committee that they already monitor library activity and discourage users who are making others uncomfortable or behaving inappropriately. It might not be necessary for a library to require children to identify themselves before using the Internet; the “tap on the shoulder” mechanism probably can deal with it. Milo Medin said this approach moves the incentive for labeling or doing the labor to the content publishers, as opposed to the people who do not want to be affected. This localizes the problem and trims a wide range of responsibility. Labeling provides the negative incentive needed for the system to work.


There are problems with the system I have described, but they are only the problems that attend any state regulation attempting to guarantee that material harmful to minors is not handed over to children without the permission of parents. My version is more complicated because it involves the Internet, but it presents the same type of problem that exists in the majority of states now, when material like this is distributed.

Sites would have to do self-rating. Importantly, the self-rating would not go beyond this category of harmful to minors. PICS technology, the Platform for Internet Content Selection, enables site rating in a wide range of circumstances; it shares its underlying labeling approach with P3P, the Platform for Privacy Preferences Project, which applies similar machine-readable labels to privacy practices. (I am skeptical of PICS because it enables general labeling, which is much broader than the legitimate interest at issue when dealing with material harmful to minors. Its architecture is such that the label or filter can be imposed anywhere in the distribution chain. If the world turned out the way the PICS authors wanted, you would have many rich filtering systems that could become the tools of censors who wanted to prevent access to speech about China or the like. My proposal involves a much narrower label.)
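To make the contrast concrete, here is an illustrative comparison of the two kinds of labels. The PICS-style string loosely follows the shape of a PICS-1.1 label but is not copied from any real rating service; the narrow label is the hypothetical one-bit tag used in the sketches above.

    # A general-purpose label can carry a whole rating vocabulary, which
    # is what makes it usable by any censor anywhere in the distribution
    # chain (both strings are illustrative):
    pics_style_label = (
        '<meta http-equiv="PICS-Label" content=\''
        '(PICS-1.1 "http://example.org/ratings" '
        'ratings (violence 2 sex 3 language 1))\'>'
    )

    # The proposal needs only one bit of vocabulary: is this material
    # harmful to minors, yes or no?
    narrow_label = '<meta name="harmful-to-minors" content="yes">'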

To avoid asking a site to slander itself, the label could be the equivalent of the one on cigarette packets. That label does not say, “I think this is harmful to your health.” It says only that the Surgeon General thinks cigarettes are harmful to your health. An equivalent entity could make the finding that material is harmful to minors. The label would not actually say this—it would be computer code, of course. On the other hand, anyone could inspect the code and see the label, so you might say that this is equivalent to self-slander, although I am not sure where the harm is. The label means only that the speech is of a class that can be restricted. We could make up a word and call it “XYZ speech.” I can be required to block children’s access to XYZ speech. The law cannot force me to keep the speech away from my own children. All this does is improve the vocabulary of the space so that people can make decisions in a relatively consistent frame.



We cannot simply create a dot-xxx space for material harmful to minors because there are other types of potentially harmful speech besides hardcore pornography. Here Geoff Stone would appear in full force, and I am behind him now. The fact that you force me to go into a dot-xxx space is harmful to me if I do not convey hardcore pornography but rather other material that perhaps should not be given to children. You are forcing me to associate with a space that has a certain kind of meaning. If that were the only option, then maybe it would be constitutionally acceptable. But there is no reason to force me to associate with the hardcore pornographers when an invisible filtering/zoning system, such as PICS-style labels in the HTML code, can be employed instead. I can be a dot-com and be tagged. Some of my Web pages would be blocked to a child, whereas others would not. Because I have both types of content, I contend that I should be free to be a dot-org or dot-com and not be forced into the dot-xxx ghetto.

Of course, a site might take the position that the First Amendment protects it in delivering its material to children, regardless of what the parents think. The parents might have a different view, thinking they should be allowed to block access to that site. The point of this structure is that the question would be resolved in a public context. If the parents believe that this material properly is considered harmful to minors, and the site refuses to label it as such, then there would be an adjudication of whether this is material harmful to minors. I am much happier to have this adjudication in the context of a First Amendment tradition, which does limit the degree to which you can restrict speech, as opposed to a corporate board meeting, where the real issue is, “How is this going to play in the market if people think we’re accepting this kind of speech?” In my view, we can ensure more protection of free speech if we have that argument in front of adjudicators who understand the tradition of free speech that we are trying to protect.

I want to emphasize that it would be stupid and probably unconstitutional to make the requirement to label punishable through a criminal sanction. We want to keep the punishment low in order to preserve this proposed system against constitutional challenge. To the extent that you raise the punishment, the Supreme Court is likely to say, “This is too dangerous, and it will chill speech if you threaten 30 years in jail because someone failed to properly tag a site.” Alternatively, I like causes of action; I push this in the context of spam all the time. A cause of action might be one in which bounty hunters were deployed to find sites that they believe are harmful to minors. They would then employ some system for adjudicating this issue. Then you would get lots of efficient enforcement technology out there, for people who really care about this issue, and enforcement would take place in a context in which the First Amendment is the constraint, as opposed to a corporate context in which the board worries about public relations.

You have to implement this solution step by step. You have to be open to the fact that we do not understand well enough how the different factors interact. We can make speculations, but we need real data to analyze them, and this requires some experience in taking one step and evaluating it. The Web is the first place to worry about. You could play with that for a year or more and see what works, and then decide where else you need to deploy this solution. Usenet is a network that uses the NNTP protocol. An ISP can decide which protocols to allow across its network. It might say, “I am a G-rated ISP and will not allow any Usenet services to come across.” Sometimes people get access to Usenet through the Web. In these cases, you can still require the same kind of filtering. It is only in the context of getting access to Usenet outside of the Web that a problem arises.7
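Protocol-level filtering of the kind a “G-rated ISP” might do is straightforward in principle, because NNTP runs over its own well-known TCP port (119). The sketch below is purely illustrative; a real ISP would enforce this in routers or firewalls rather than in application code.

    # Sketch of a G-rated ISP's policy check: refuse connections to the
    # NNTP (Usenet) port while letting Web traffic through.
    BLOCKED_PORTS = {119}  # 119 = NNTP

    def allow_connection(dest_host, dest_port):
        """Outbound-connection policy a G-rated ISP might apply."""
        return dest_port not in BLOCKED_PORTS

    assert not allow_connection("news.example.com", 119)  # Usenet refused
    assert allow_connection("www.example.com", 80)        # Web allowed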

17.4 PRACTICAL CONSIDERATIONS

Let me map out a sample proceeding. Let us say there has been a failure to properly tag something that is, in fact, harmful to minors. Imagine that something like a bounty is available. The bounty hunter brings an action: hopefully not a federal court action. In principle, anyone could bring the action. The person says, “This site by Playboy has material that is properly considered harmful to minors, and they have not implemented this tag.” Then there has to be a judgment about whether the material is, in fact, harmful to minors. A court must make this type of judgment, as courts always have done. It is difficult in some cases, but the public has long survived this judgment being made in real space. If the court finds that this is material harmful to minors and the site has not put up this tag, then there would be some sanction. I think the sanction should be a civil sanction, such as a fine, sufficient to achieve compliance; that is, set at a level such that a rational businessperson thinks, “It’s cheaper for us to comply.”

7. Dick Thornburgh said the person doing the conversion from Usenet to the Web would end up doing the labeling, not the person who posts the content. In this example, the problem is not difficult to solve. But the generic issue is that there is some level of restriction on the connection; it is not necessarily a complete removal of either an intermediary or software on the PC, although it greatly facilitates things. There is no reason why you could not enforce the same type of labeling requirement on the publisher. There is usually a way of labeling files available via file transfer protocol or other types of protocols, for example. It could apply to chat groups, instant messaging traffic, and so on. The key point is to shift the burden, make it general enough that people have an incentive to cooperate, and enable bounty hunters so the marketplace can police it and you would not necessarily need law enforcement.



You could assume that no one would comply with this law, that there would be thousands of these prosecutions, and that it would bog down the courts and end up like the war on drugs. This situation would be similar to a denial-of-service attack8 and would prove that this system is terrible. On the other hand, you could assume that people will behave rationally based on what they expect the consequences and the cost of compliance to be. Then the world segregates into a vast majority that is willing to comply, because it is cheaper and they do not wish to violate the law anyway, and a smaller number that we have to worry about controlling.
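The rational-compliance assumption can be stated as a simple expected-cost comparison; all numbers below are illustrative, not drawn from the workshop.

    # Comply when the cost of rating and tagging a site is lower than
    # the expected cost of being caught without the tag.
    cost_of_compliance = 500.0        # one-time cost of self-rating and tagging
    probability_of_judgment = 0.10    # chance a bounty action is brought and won
    civil_fine = 25_000.0             # sanction on a finding of liability

    expected_cost_of_noncompliance = probability_of_judgment * civil_fine  # 2500.0
    assert cost_of_compliance < expected_cost_of_noncompliance  # rational: comply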

A bounty action could be structured so that the first to file gets to litigate, and, after a judgment is rendered, that is the end of it. If a frivolous action is filed, it should be punishable by a filing for malicious prosecution. A class action analogy is possible, but the cumbersome nature of class actions now might make it simpler to have just a single action. I do not think it is possible to eliminate the possibility of a proliferation of actions, but there are ways to try. For instance, we could limit actions by geographic district, to avoid the problem of trying to sue someone across the country and imposing that type of burden. A lot of creative thinking will be needed. A qui tam action9 could be troubling constitutionally. There are people who believe that a party should be found to lack standing unless there is a demonstration of harm.10 But there is such a long tradition of qui tam that, like bounty actions, it will survive.

The one area of this jurisprudence that has not been developed is whether and how the community standards component of the traditional obscenity doctrine applies in the context of material harmful to minors. There is a need for the courts to figure out something new. The decision in the Third Circuit, ACLU v. Reno, 217 F.3d 162 (3rd Cir. 2000), striking down the most recent action of Congress, made it sound as if there is no possible way to get over the community standards problem when trying to regulate this material in cyberspace, because there are so many different communities and problems associated with applying different types of tests. What if the architecture requires you to label or unlabel depending on where, geographically, a person is coming from? The way the architecture is now, it is relatively difficult to figure out where a user is located. This is where the additional layer of community standards becomes difficult to architect. I confess that I do not know how to solve this problem.

8. David Forsyth sought to draw an analogy to a denial-of-service attack, in which a large number of people do a small inappropriate thing on a network and overload the system administrator. In the legal context, a sufficient number of small bounty-seeking actions from enough different people would bring the system to a halt.

9. A qui tam action is one filed in court by a private individual who sees some misconduct that is actionable under the law. If the individual prevails in court, he or she is entitled to some of the proceeds that the transgressor must pay.

10. David Forsyth questioned whether bounty hunters could participate in civil actions, because he thought that some harm had to be demonstrated in order to sue. Dick Thornburgh said that, in a qui tam case, the evidence brought forth as the basis of the action must be something peculiar to the individual. A person cannot walk in off the street and bring a qui tam claim by showing a simple fact such as the lack of a tag on a program. These claims are numerous within an industry where evidence has been accumulated and there is only one person or a small group of people who could bring an action.



The Supreme Court is difficult to predict. My confidence in predicting what this Court will do has dropped dramatically in the last year, so I will not predict how the Court will resolve this issue. But I cannot believe that it will decide that nothing can be done. The resolution will not be that one standard fits the whole nation either; the Court will instead attempt to find some compromise. In a sense, it has struck the same balance in real space through the same legal standard applied to real-space materials.

This leads to the question of how the community standards issue would play out in a place like a library, which serves a wide range of people, presumably with different ideas of what is harmful. If there were thousands of lawsuits, this could create a chilling effect on free speech, because people would think, “Well, every time I have a certain type of speech on my site, I’m going to get into a lawsuit. It will be blocked, so I’m not going to have that speech on this site (without labeling).” Yet we often forget that, with the existing censorware, Web sites already make the same judgment. They say, “Hmm. I want to avoid getting on the CYBERSitter list. I want to include this interesting information about how to get contraception in certain cases, but it’s too dangerous, because this speech will be filtered. When my speech is filtered in the context of CYBERSitter, there is no court to which I can go to order that it is improper to filter my speech. I am stuck.”

In other words, there is already a chilling effect on free speech created by these invisible blacklists that spread across cyberspace. I do not think we can avoid some chilling effect. The question is how to minimize it. Focusing on a legal standard that is interpreted in a legal context is a way to minimize the chilling effect and maximize the amount of speech that can be protected.

“Chill” has a more precise meaning than just causing you not to post material. It means that you are uncertain and afraid of punishment, so you choose not to post what otherwise you should be allowed to post. It is the variance (the uncertainty in application) that we are concerned about. Given the range of private censors, the variance that we need to consider is much greater than it would be if there were a single standard defining material harmful to minors. Thus, I think that “chill” has greater meaning in the private censorship context than in the government context. This is not to say that we could not imagine the court developing a doctrine such that people are terrified and do not do anything. That is an unavoidable consequence if you screw it up, and it would be terrible for free speech. Maybe this is a lawyer-centric view, but I am much happier if that battle occurs in court, because then I have the right to argue that this standard is wrong and inconsistent. When it is done in the private censorship context, I do not have the right to make that argument.

Here is the disingenuous part of my scenario. It is extremely difficult to say what the standard “harmful to minors” means. The burden is on the government or prosecutor to demonstrate that this material is harmful to minors; I have the right to free speech until the state can demonstrate this. But what does the government actually have to show? The government does not need to show data that demonstrate the harm. The way these cases are typically litigated involves comparisons to “like kinds” of material. Obscenity is harmful to minors. As the court said, the sort of sexually explicit speech that appropriately is kept from children is like obscenity as to children.

To date, “harmful to minors” has been interpreted by the Supreme Court to include sexually explicit speech only. It does not include hate speech, for example. There is a lower court judgment that expands the interpretation, but I don’t believe that interpretation will be sustained. Therefore, in my view, the legitimate interest of the government has been confined to sexually explicit speech. I am sure that people will try to bring other types of speech to the courts. But I am also sure that the Supreme Court would look at Ku Klux Klan (KKK) speech, for example, and say, “It is terrible speech, I agree, but this is the core type of First Amendment speech that we must protect.” We will get into an argument about whether 6-year-olds should see KKK speech, and this will be difficult for the court.

I have no kids and I do not look at this material. I have no way of figuring out how to draw the line. But part of the solution is to realize that no one will have a complete solution. We depend on the diversity of institutions to contribute their parts. Some part has to be contributed by people making judgments. In a paper that I wrote with Paul Resnick, who was originally on this committee, we described techniques for minimizing the cost of determining what “harmful to minors” means. Geoff Stone would look at some of these techniques and say, “No, no, the Constitution would forbid them.”

Imagine a site asking a government agency, “Can you give me a sign that this material is okay?” This is like a promise not to prosecute, and it is done now. It amounts to preclearance of material that is on the borderline. It is not saying that you cannot publish unless you get permission. All it means is that if you get preclearance, there is a guarantee that you will not be punished. It is a safe harbor—it takes care of the “chill” problem. If the government says, “We can’t give you a safe harbor here,” then you have a problem. Then you must decide whether it is worth the risk to speak. But, again, this is a problem we face now. People currently make this decision when they decide how to distribute material in more than half of the states. We should minimize the cost of that problem, but I do not think we can say the Constitution requires us to make that cost zero.

As times and standards change, crude standards help, because a fine-grained system would become out-of-date.11 Because this discriminator is so crude, I think that what happens in cyberspace would mirror what would happen in real space—people only worry about and prosecute the extreme cases. There is a lot of material floating around that nobody wastes time worrying about. But, in principle, we would have to worry about how things are updated over time. In cyberspace, 10 years is a long time. I am not sure what the burden of that is. My personal preference is that we do as little as possible but enough to avoid the problem of too much private censorship. The system also needs to be sensitive to what we learn about the consequences of what we do.

This solution will not eliminate all private filtering. But my view is that a significant amount of demand for private filtering results from the lack of any less-restrictive alternative. If you asked the filtering companies, 90 percent of them would say, “What Lessig is talking about is terrible and unconstitutional”—because it would drive 90 percent of them out of business. But there still would be parents who are on the Christian Right, for example, and who want to add another layer of protection on top. We will not go from a world of perfect censorship to perfect free speech, but a balance is needed between the two. Under the existing system, we have so many examples of overreaching and private censoring that some way to undermine it is needed.

Given the international context for the Internet, this solution is not a complete one. But our nation is very powerful. When you set up a simple system for people to comply with, and there is some threat that they will be attacked by the United States if they are not in compliance, then it will be easier for most people to comply. Tiny sanctions and tiny compliance costs actually have a significant effect on convincing people to obey.

11. Bob Schloss asked who would label orphan content, which is floating around on the Internet or on hard disks but whose publisher is dead or not paying attention, and how the binary indicator—a yes or no answer to the question of whether something is harmful to minors—would hold up over time as community standards changed. It might work for 10 years, but in the end, to deal with the problem of both shifting standards and orphan content, the system could end up with a third-party rating process again.




