The Industrial Green Game. 1997. Pp. 200–211.
Washington, DC: National Academy Press.
Public Perception, Understanding, and Values
M. GRANGER MORGAN
As with many other problems in engineering, what the public knows and thinks can have important implications for the design and the success of the various systems and activities that are loosely termed industrial ecology. This paper begins by considering when public perceptions matter and briefly discusses why they can be difficult to measure. It describes and illustrates some improved methods for measuring public perception that have recently been developed by the author and colleagues at Carnegie Mellon University (CMU) in Pittsburgh. The author illustrates how these can be used in developing public communications materials and closes by suggesting some hypotheses about public perceptions as they relate to the field of industrial ecology.
WHY DO PUBLIC PERCEPTIONS MATTER?
Suppose we can identify a set of changes in the design and operation of consumer products or in industrial processes that would place fewer and lower loadings on the environment. Can we just go ahead and make the changes? In many cases the answer is yes. However, where the changes require modifying public policy or public behavior, public perceptions become important, and ignoring them may result in the failure of technically good innovations. For example, a different regulatory approach or different regulatory priorities may be necessary. Changes in tax or other incentive structures may be needed to induce behavioral change. Government-supported research or demonstration activities may be required. In such circumstances, public perceptions do matter, and those trying to introduce changes ignore them at their peril.
DIFFICULTIES IN MEASURING PUBLIC UNDERSTANDING, PERCEPTIONS, AND VALUES
The standard method for learning what the public knows and thinks is survey research. This method has several problems. The two most serious are differentiating what respondents actually know from what they infer from the content of the questions they are asked, and framing effects, which can lead to different responses depending on how a question is posed (e.g., number of lives lost vs. number of deaths prevented).
Standard methods for learning public values include contingent valuation and utility elicitation. The first involves posing questions about willingness to pay, either to avoid having some condition occur or to make it go away once it has occurred. The second involves asking questions about the importance the respondent attaches to various valued attributes, so as to construct a normative model of choice among outcomes that involve different levels of those attributes. Both approaches assume that people have well-articulated values on the questions of interest and that the problem is simply one of measuring those values. In many instances relevant to a topic such as industrial ecology, this is most unlikely (Fischhoff, 1993).
Two examples illustrate the types of problems that arise. The first has to do with the way questions are framed. A few years ago, the U.S. Environmental Protection Agency (EPA) decided to do an internal study that looked across the agency and asked, "Are we paying attention to the right set of environmental risks?" It was an internal risk-ranking exercise, but the agency also commissioned a Roper survey that gave respondents a long list of scary-sounding things and asked how serious they thought each one was. The New York Times summarized the results as follows: "The American public and the EPA rank environmental threats quite differently, with the public's fear focused most sharply on hazardous waste sites, which the government views as much less serious" (Stevens, 1991). EPA's Frederick W. Allen wrote, "The most obvious reason for the difference is that the general public simply does not have the information." EPA has argued that it is these public perceptions that have driven the agency to focus on things like Superfund, the program designed to clean up hazardous waste sites (Stevens, 1991).
The way framing works is best illustrated by the results of a different survey on the same topic conducted a few years later. The survey instructions asked respondents to take 7 minutes to list as many risks as they could think of. When the questionnaire was first piloted, participants voiced concerns such as "I'm worried about the risk of losing my job," "I'm worried about the risks of my love life going on the rocks," "I'm worried about the risks of my kids flunking out of school," "I'm worried about the risks of eternal damnation," and so on. That seems like a pretty reasonable set of concerns, but it was not the set the surveyors were interested in. The instructions were revised to ask people to list risks to health, safety, and the environment. Respondents were then asked to identify the five risks they were most concerned about and answer a series of detailed questions about those risks.
The second approach represents a very different strategy, framing the question differently from the Roper survey. The results have been published elsewhere (Fischer et al., 1991) and are summarized here. Environmental risks were a significant concern (mentioned by 44 percent of respondents, compared with 23 percent who cited health risks, 22 percent who were concerned about safety, and 11 percent who noted socially based risks, such as crime). Concerns about traditional pollutants were cited more frequently than concerns about "exotic" pollutants (21 percent vs. 13 percent). This was a three-generation study, and there were interesting generational effects. Younger people and women were slightly more concerned about the environment; everybody was worried about AIDS; middle-aged people were worried about on-the-job risks; older people were worried about things like cancer and heart attacks. In short, a picture emerged that is very different from the one painted by the EPA Roper survey. The general insight is that care must be taken in how questions are asked and how inferences are drawn from the answers.
The second example is from a study undertaken several years ago by the Chemical Manufacturers Association (CMA). CMA commissioned three of the country's leading experts in risk communication to review the literature and extract from it advice for chemical plant managers on how to communicate risk information, focusing particularly on risk comparison. On the basis of this very careful reading of the literature, the experts offered advice on good and bad ways of comparing risk (Covello et al., 1988). They concluded by providing 14 examples of text illustrating good and bad comparisons for a specific ethylene oxide plant.
These 14 pieces of text were presented in a CMU study to several different samples of Americans with the following scenario: "You have a friend. He's the manager of an ethylene oxide plant in the Midwest. He's about to get up and give a talk to a community group, and here's a bunch of text that he is proposing to use. If it overlaps, he'll edit that out in the final version. Here are several factors that he's concerned about. Rank each of these pieces of text on the basis of each of those factors."
Again, the results are in the literature (Roth et al., 1990). In summary, despite the use of various analytical strategies, no correlation was found between the acceptability judgments predicted by the manual (i.e., by the literature) and those produced by the subjects of the survey. The insight here is not that the experts did a bad job summarizing the literature, but that even experienced professionals have limited predictive abilities with respect to the design of risk communication. There is no substitute for an iterative empirical approach. Studies must be done with actual people in order to see what effect the message is having; the state of the art is such that predictions cannot be made.
IMPROVED METHODS FOR STUDYING PUBLIC UNDERSTANDING, PERCEPTIONS, AND VALUES
Baruch Fischhoff, among others, has worked extensively on the problem of measuring public perceptions of risks (Kahneman et al., 1982; Slovic et al., 1980). More recently, as part of a large project on improving risk communication supported by the National Science Foundation, we have developed a method based on a "mental model" that can be useful for getting a better sense of what the public knows and thinks about issues relevant to various aspects of industrial ecology (Morgan et al., 1992).
It is easiest to illustrate how the mental model works with an example. Suppose the objective is to devise a risk-communication brochure for lay people about radon in homes. Using the traditional approach, a radiation health physicist would be asked, "What should people be told about radon in their home?" The information would be relayed to a public relations or communications firm that would package it for a lay audience. Then, to add sophistication, the firm might be asked to evaluate the brochure's impact on public perceptions.
There are two key pieces missing in this traditional approach. First, it does not establish what people already know about the subject. People generally have some existing mental model, some knowledge structure relevant to the subject. Because anything they are told will be processed through that mental model, it is important to know what that mental model is before designing a communications strategy.
Second, people have to make decisions and choices. It is critical to know, in decision-analytic terms, what minimum information is necessary for people to make certain choices. Of course, people are not decision analysts. A number of other things matter in the communication besides the information that a conventional decision analyst would need. If that information is not supplied, the communication is going to fail at the most elementary level. Surprisingly, many risk communications fail to provide the minimum necessary information.
Our four-step approach to risk communication is based on people's mental models of risk processes. The steps are open-ended elicitation of people's beliefs about a hazard, allowing expression of both accurate and inaccurate concepts; structured questionnaires designed to determine the prevalence of these beliefs; development of communications based on both a decision-analytic assessment of what people need to know to make informed decisions and a psychological assessment of their current beliefs; and iterative testing of successive versions of those communications using open-ended, closed-form, and problem-solving instruments administered before, during, and after the receipt of messages.
The approach starts with open-ended elicitation of people's beliefs about a hazard, allowing them to express both accurate and inaccurate concepts. In other words, the interviewer says to the interviewee, "Tell me about radon," and the interviewee starts talking. After a while, the interviewer says, "You said that radon comes in through the basement. Would you tell me more about that?" That is, the interviewer keeps playing back bits of information that the interviewee has already supplied and asks for elaboration. A conversation ensues that may go on for some time before the interviewee has finished telling the interviewer about the subject. This is followed by a more structured portion in which the interviewer supplies a bit more information and walks the interviewee through exposure and effects processes.
This type of interview is relatively difficult to do. The first time it was done for radon, several engineering graduate students were sent out with tape recorders to interview a few of their friends. The stack of transcripts that came back was quite hilarious. Just a few minutes into the interview, most of the lay people had figured out that the graduate student knew a lot about radon and were busily extracting information about radon from the interviewers. To deal with this problem, a public health nurse was hired and trained in "Eliza-like" interview techniques, that is, techniques that supply no new information but simply play back details the respondent has already provided and ask for elaboration.
Interviews of this kind are an indispensable first step to find out what is in people's minds. The difficulty is that they are also very labor intensive. Twenty such interviews can result in a thick stack of transcripts; conducting enough interviews to get a statistical picture of the prevalence of these various ideas in the general population is not practical. However, it is important to get a sense of that distribution, because some of what emerges in these interviews will not be representative of general public views. If these exceptions are not sorted out, a lot of time may be invested in trying to deal with issues that are not relevant.
A similar set of interviews done on 60-hertz (Hz) electromagnetic fields is illustrative. In one of these early interviews, one respondent said something like, "Oh yeah, radiation from power lines. That happens because the radiation leaks out of the nuclear power plant and travels out along the power lines and then leaks off onto the people." We thought we had a serious problem, but later studies showed that this is not a common view (Morgan et al., 1990).
The second stage of the approach uses a closed-form survey to determine the prevalence in the sample population of the concepts uncovered in the mental-model interviews. Many of these concepts would never be imagined by investigators sitting in an office. For example, it turns out that about one-third of all Americans believe that radon in their home permanently contaminates the home and that nothing can be done. Given what people know, that is not an implausible view. People have heard about long-lived radionuclides, and they know about long-term contamination from pesticides. Nobody has told them, and they do not understand, that radon-decay products have very short half-lives. If you devise a citizen's guide to radon and mail it out by the hundreds of thousands all across the country without knowing that fact, as EPA did with its first version, then you have a problem. (Incidentally, EPA did know this by the time it put out its much-improved second version, because we had briefed the agency at length.) If this type of information is lacking when a communication is being devised, the communication will likely misfire. For example, the first EPA brochure told householders that they should measure radon. Householders believed that if they made the measurement and found a problem, there was nothing they could do about it, and that if they ever wanted to sell the house they would face a moral dilemma. They may have quite rationally concluded that they would be better off not measuring.
The third stage of the approach uses the results of the mental-model interviews and an assessment of the choices people actually face to develop the best communication possible. However, it does not end there, as it is unlikely that everything will be right the first time. The message has to be subjected to careful empirical evaluation by groups of lay people using such methods as read-aloud protocols and focus groups. This can be a bit hard on the editorial ego after much effort has been invested in devising the communication. However, in every evaluation we have run, we have found important problems that needed to be addressed.
Three examples of the opening exchanges in several mental-model interviews conducted on the topic of climate change are given in the Appendix. Note that the interviewer is careful not to supply additional information. In each case, the respondent mentions a number of specifics, each of which can be followed up on in subsequent questions.
Detailed results from our mental model studies of climate change have been published (Bostrom et al., 1994a; Read et al., 1994). The respondents in the studies regarded global warming as bad and highly likely, and many believed that warming has already occurred. Respondents tended to confuse stratospheric ozone depletion with the greenhouse effect and weather with climate. Automobile use, industrial-process heat and emissions, pollution in general, and aerosol spray cans were perceived as the main causes of global warming. The greenhouse effect was often interpreted literally to mean a hot and steamy climate. Respondents described global climate change effects that included increased incidence of skin cancer and changes in agricultural yields. Many of the mitigation and control strategies proposed focused on general pollution control and regulation, with emphasis on automobile and industrial emissions. Specific links to carbon dioxide and energy use were relatively infrequent. Respondents appeared to be relatively unfamiliar with recent regulatory developments regarding the environment, such as the ban on chlorofluorocarbons for nonessential uses, which includes use as a propellant in spray cans.
The mental-model interviews are very labor intensive. However, when the number of new concepts encountered is plotted as a function of the number of interviews, with respondents drawn from a reasonably well-defined target audience, the rate at which new concepts are picked up drops to a very low level after about 20 interviews.
This figure of about 20 interviews is borne out by formal analysis of the first-stage results, using expert influence diagrams and mapping the lay concepts onto the expert framing (Atman et al., 1994; Bostrom et al., 1992; Bostrom et al., 1994b). It is therefore probably safe after about 20 such interviews to move on to the second stage.
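The flattening described above can be illustrated with a small simulation. Everything in the sketch is hypothetical (the concept pool, the frequency weights, the number of concepts mentioned per interview); the point is only that when a few beliefs are widespread and the rest are rare, the cumulative count of distinct concepts grows quickly at first and then saturates.

```python
import random

def cumulative_new_concepts(num_interviews, concept_weights,
                            concepts_per_interview=8, seed=0):
    """Return, after each simulated interview, how many distinct
    concepts have been heard at least once so far."""
    rng = random.Random(seed)
    ids = list(range(len(concept_weights)))
    seen, curve = set(), []
    for _ in range(num_interviews):
        # Each interviewee mentions a handful of concepts, drawn with
        # probability proportional to how widespread each belief is.
        mentioned = rng.choices(ids, weights=concept_weights,
                                k=concepts_per_interview)
        seen.update(mentioned)
        curve.append(len(seen))
    return curve

# Hypothetical population of beliefs: 10 widely shared, 30 rare.
weights = [10.0] * 10 + [0.5] * 30
curve = cumulative_new_concepts(60, weights)
```

Plotting `curve` against the interview number yields the behavior described in the text: the common beliefs surface within the first handful of interviews, and later interviews mostly repeat what has already been heard.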
Having identified plausible sets of concepts and misperceptions in the first stage of the process, one must determine just how prevalent they are, which requires a larger sample. Incidentally, developing appropriate questions is sometimes not easy. For example, a series of studies on public understanding of the physics of 60-Hz electromagnetic fields (Morgan et al., 1990) showed that it is a fairly challenging job to produce questions about electromagnetic field theory that are in lay language but are precise enough to elicit an unambiguously correct answer from every physicist or electrical engineer. Expert language adds precision; translating it into lay language can often be challenging.
The second stage of the mental-model approach is best illustrated with results from the climate-change example. The results from the open-ended interviews were used to develop a questionnaire that was administered to two well-educated samples of lay people (177 people in total). Most subjects did not understand that if significant global warming occurs, it will result primarily from increases in the concentration of carbon dioxide in the earth's atmosphere, and that the single most important source of carbon dioxide added to the atmosphere is the combustion of fossil fuels.
The mental model-based procedures have now been applied to a number of topics, including radon in homes (Bostrom et al., 1992); 60-Hz electric and magnetic fields (Morgan et al., 1990); climate change (Bostrom et al., 1994a; Read et al., 1994); dams and floods (Lave and Lave, 1991); and space launch of nuclear energy sources (Maharik and Fischhoff, 1992). In addition, Sarah Thorne of Dow Canada is making extensive use of mental-model methods to develop public communications and, internally at Dow, to support training and facilitate institutional change.
The traditional assumption in economics is that people have well-articulated values. Contingent valuation is a procedure for measuring those values. Much work in modern cognitive psychology and behavioral decision theory suggests that although we may have well-articulated values for things we deal with regularly, we probably do not have such values for many of the sorts of issues that are relevant in policy contexts. In these cases, Baruch Fischhoff argues that what we should be asking is, "What values would a typical member of the public construct if they were given all the relevant facts and sufficient time and support to think about them?"
Some earlier work on citizen groups advising on transmission line siting suggests that lay groups will work very hard to do a good job, and that when there is high substantive content the results may be fairly independent of group dynamics (Hester et al., 1990). Fischhoff is now running a pilot study using fuel taxes as
the case context to develop new methods to help lay groups construct such values in an information-rich environment. His method alternates between individual measurement and group activities.
In work recently done for the White House Office of Science and Technology Policy to develop a procedure that federal risk-management agencies might use to rank risks (Morgan et al., 1994), the Carnegie Mellon team proposed a six-step procedure that can be summarized as follows: (1) define and categorize the risks that will be ranked; (2) identify the attributes that should be considered; (3) describe the risks in terms of the attributes; (4) select the groups that will do the ranking; (5) perform the rankings and merge the results; and (6) provide a reasonably rich description of the results. Of course, ranking risks does not by itself solve the risk-management problem. A ranking says nothing about what it will cost to do something about the risks. For this and other reasons, ranks may not translate directly into budgetary priorities.
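Merging individual rankings into a group result, as in step 5, can be done in many ways. The sketch below uses a simple average-rank (Borda-style) rule purely for illustration; the risk names and the three participants' orderings are invented, and the procedure of Morgan et al. (1994) does not prescribe this particular aggregation rule.

```python
def merge_rankings(rankings):
    """Combine several participants' rankings (most serious risk first)
    into one group ordering by average position."""
    positions = {}
    for ranking in rankings:
        for pos, risk in enumerate(ranking):
            positions.setdefault(risk, []).append(pos)
    avg = {risk: sum(p) / len(p) for risk, p in positions.items()}
    # A lower average position means the group ranks the risk as more serious.
    return sorted(avg, key=avg.get)

# Three hypothetical participants rank four hypothetical risks.
group = merge_rankings([
    ["indoor air", "ozone depletion", "waste sites", "radon"],
    ["ozone depletion", "indoor air", "radon", "waste sites"],
    ["indoor air", "radon", "ozone depletion", "waste sites"],
])
# group → ["indoor air", "ozone depletion", "radon", "waste sites"]
```

A real exercise would have to decide how to break ties and whether average rank is the right rule at all; as the text notes, the merged ranking is only an input to, not a substitute for, budgetary decisions.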
DEVELOPING RISK COMMUNICATION MATERIALS
We have used the mental-model procedures outlined above to develop risk communication materials for the general public, including two communications on radon, four on 60-Hz fields, and one on climate change. Let us use the climate change data to illustrate how one may proceed. Society cannot have an intelligent debate on spending hundreds of billions of dollars on solutions to the specific problems of climate change unless the lay public is better informed than it is today. The clarifications needed to produce adequate public understanding are, we believe, well within the capabilities of modern risk communication.
We have just published a climate brochure that is arranged hierarchically in three levels of detail. An initial two-page spread provides a brief overview and lists and corrects common misunderstandings. Then three two-page spreads systematically explore the questions, "What is climate change?"; "If climate changes, what might happen?"; and "What can be done about climate change?" Finally, in pouches at the back of each of these three sections are detailed booklets that discuss the topic in much greater depth.
The brochure was developed from the results from our mental-model studies and an analysis of the kinds of private and public choices that lay people face. We developed the best communication that we could, circulated it among colleagues at Carnegie Mellon, and then made extensive corrections. Next, we sent it out for review by a large number of outside experts and made an additional set of extensive revisions. At this point, we had a communication that was in pretty good shape technically but still had not been subjected to lay evaluation. To obtain such an evaluation we conducted a series of read-aloud protocol studies and conducted a number of focus groups in which the materials were reviewed section by section and line by line. These reviews resulted in a number of major revisions. For example, the overview section was added after a blue-collar evaluation group
found the second-level material too substantively dense and reported getting lost.
SOME HYPOTHESES ABOUT PUBLIC PERCEPTIONS AND VALUES AS THEY RELATE TO INDUSTRIAL ECOLOGY
Relatively little work has been done on public perceptions and values in the context of industrial ecology, but several general hypotheses can be advanced about what may be found when that work is done. These hypotheses are based on my own work, which drew on unpublished data supplied by the anthropologist Willett Kempton.
The first hypothesis is that the public supports improving environmental performance but primarily frames the issue in moral terms (i.e., good versus evil) and in terms of command and control (i.e., forcing people to be good). Support for the environment is shown in national poll results, and we have seen signs of the good-versus-evil thinking pattern in our mental-model interviews. Willett Kempton has reported similar results from his ethnographic and survey studies.
The second hypothesis is that the public thinks of energy conservation as morally correct, a way to save money, and requiring sacrifice. Often the link between conservation and reduced emissions of pollutants or carbon dioxide is not made. Both Kempton and we at CMU have seen evidence for this view in our open- and closed-form studies.
A correlate to hypothesis 2 is that the public associates environmental protection with sacrifice. At least in this context, people do not recognize that alternative designs can result in similar or better services or products with much lower externalities. This is supported by unpublished survey data provided by Kempton (personal communication).
The third hypothesis is that the public does not clearly see taxes as a plausible vehicle for inducing desired environmental behaviors. The public believes that the price elasticity of demand for goods such as gasoline is close to zero. At least in this context, little understanding is shown of the difference between short- and long-run elasticity. We have seen hints of this in our various mental-model interview transcripts, and there is clear evidence in our results on carbon taxes. Kempton et al. (1995) report specific findings that support the hypothesis in the context of fuel taxes.
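The short- versus long-run distinction can be made concrete with a constant-elasticity demand curve, Q = Q0 (P/P0)^e. The elasticity values below (-0.1 short run, -0.7 long run) are illustrative assumptions, not estimates from this paper; the point is only that the same tax produces a small immediate response but a much larger one once people can change vehicles, commutes, and housing.

```python
def demand_response(price_increase, elasticity):
    """Fractional change in quantity demanded for a given fractional
    price increase, under constant-elasticity demand Q = Q0 * (P/P0)**e."""
    return (1.0 + price_increase) ** elasticity - 1.0

# A hypothetical 20 percent fuel tax:
short_run = demand_response(0.20, elasticity=-0.1)  # roughly -1.8 percent
long_run = demand_response(0.20, elasticity=-0.7)   # roughly -12 percent
```

Seen this way, a consumer who looks only at next month's gasoline bill will conclude that a fuel tax "doesn't work," which is consistent with the hypothesis above.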
A Dow Jones wire story (Goldman, 1994) reported on a survey of 22,516 readers who looked at 300 "green" advertisements that had run in 186 journals since 1991. The survey found that appeals to the general good can misfire and advised advertisers to be specific about products' benefits to the consumer, not society in general, and to exploit the inherent visual power of environmentalism.
A further commentary on market signals is that current residential energy bills send relatively few disaggregated market signals. Kempton and Layne (1994) studied customers' use of the information in such bills and noted both more extensive data collection and analysis by residential consumers than one might expect and clear limits to the inferences consumers can make with existing data. They concluded that there is a need for better data and analysis, and that customers would use these if they were provided.
Richard Sonnenblick and Mark Levine of Lawrence Berkeley Laboratory studied two major demand-side management incentive programs involving electric lighting (Levine and Sonnenblick, 1994). They found high levels of participant satisfaction, low participant direct costs, good reduction in participant hidden costs, and substantial learning effects likely to lead to further conservation. A similar study (Eto et al., 1994) found little evidence of a "take-back effect" in lighting retrofit rebate programs and clear evidence of additional energy-efficiency installations by participants (spillover) and nonparticipants (free drivers), due to expanded awareness.
The fourth hypothesis is that because the public frames environmental problems in moral terms rather than in terms of social structure and design choices, theories based on self-interest and conspiracy will be popular, and views about the possibilities for and promise of structural and system redesign will be confused.
ACKNOWLEDGMENTS
The author thanks Willett Kempton and Richard Sonnenblick for supplying unpublished data for the speech on which this paper is based. Principal collaborators in the work on mental models and risk communication were Jack Adams, Ann Bostrom, Baruch Fischhoff, Keith Florig, Gordon Hester, Lester Lave, Indira Nair, Daniel Read, and Tom Smuts. Thanks are also due to Cindy Atman, Concepción Cortés, Hadi Dowlatabadi, Greg Fischer, Max Henrion, Urbano Lopez, Michael Maharik, Kevin Marsh, Fran McMichael, Jon Merz, Denise Murrin-Macey, Karen Pavlosky, Daniel Resendiz-Carrillo, Emilie Roth, Mitchell Small, Patti Steranchak, Joel Tarr, and Jun Zhang of Carnegie Mellon University, and Paul Slovic and Don MacGregor of Decision Research. The work described in this paper was supported by grant SES-8715564 and others from the National Science Foundation, several contracts from the Electric Power Research Institute, a grant from the Scaife Family Foundation, and various other grants.
REFERENCES
Atman, C. J., A. Bostrom, B. Fischhoff, and M. G. Morgan. 1994. Designing risk communications: Completing and correcting mental models of hazardous processes, part I. Risk Analysis 14:779–788.
Bostrom, A., B. Fischhoff, and M. G. Morgan. 1992. Characterizing mental models of hazardous processes: A methodology and an application to radon. Journal of Social Issues 48(4):85–100.
Bostrom, A., M. G. Morgan, B. Fischhoff, and D. Read. 1994a. What do people know about global climate change? Part 1: Mental models. Risk Analysis 14:959–970.
Bostrom, A., C. J. Atman, B. Fischhoff, and M. G. Morgan. 1994b. Evaluating risk communications: Completing and correcting mental models of hazardous processes, part II. Risk Analysis 14:789–798.
Covello, V. T., P. M. Sandman, and P. Slovic. 1988. Risk Communication, Risk Statistics, and Risk Comparisons: A Manual for Plant Managers. Washington, D.C.: Chemical Manufacturers Association.
Eto, J., E. Vine, L. Shown, R. Sonnenblick, and C. Payne. 1994. The Cost and Performance of Utility Commercial Lighting Programs. LBL-34967. Berkeley, Calif.: Lawrence Berkeley Laboratory.
Fischer, G. W., M. G. Morgan, B. Fischhoff, I. Nair, and L. B. Lave. 1991. What risks are people concerned about? Risk Analysis 11:303–314.
Fischhoff, B. 1993. Value elicitation: Is there anything in there? Pp. 36–42 in The Origin of Values, M. Hechter, R. E. Michod, and L. Nadel, eds. New York: Aldine de Gruyter.
Goldman, K. 1994. Wall Street Journal, April 1, B8:1.
Hester, G., M. G. Morgan, I. Nair, and K. Florig. 1990. Small group studies of regulatory decision making for power-frequency electric and magnetic fields. Risk Analysis 10:213–228.
Kahneman, D., P. Slovic, and A. Tversky, eds. 1982. Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Kempton, W., J. S. Boster, and J. Hartley. 1995. Environmental Values in American Culture. Cambridge, Mass.: MIT Press.
Kempton, W., and L. Layne. 1994. The consumer's energy analysis environment. Energy Policy 22:657–665.
Lave, T. R., and L. B. Lave. 1991. Public perception of the risk of floods: Implications for communication. Risk Analysis 11:255–268.
Levine, M. D., and R. Sonnenblick. 1994. On the assessment of utility demand-side management programs. Energy Policy 22(10):848–856.
Maharik, M., and B. Fischhoff. 1992. The risks of nuclear energy sources in space: Some activists' perceptions. Risk Analysis 12:383–392.
Morgan, M. G., H. K. Florig, I. Nair, C. Cortés, K. Marsh, and K. Pavlosky. 1990. Lay understanding of power-frequency fields. Bioelectromagnetics 11:313–335.
Morgan, M. G., B. Fischhoff, A. Bostrom, L. Lave, and C. J. Atman. 1992. Communicating risk to the public. Environmental Science and Technology 26:2048–2056.
Morgan, M. G., B. Fischhoff, L. Lave, and P. Fischbeck, with S. Byram, K. Jenni, G. Louis, S. McBride, L. Painton, S. Siegel, and N. Welch. 1994. A Procedure for Risk Ranking for Federal Risk Management Agencies. Manuscript prepared for the Office of Science and Technology Policy.
Read, D., A. Bostrom, B. Fischhoff, and M. G. Morgan. 1994. What do people know about global climate change? Part 2: Survey studies of educated lay people. Risk Analysis 14:971–982.
Roth, E., M. G. Morgan, B. Fischhoff, L. B. Lave, and A. Bostrom. 1990. What do we know about making risk comparisons? Risk Analysis 10:375–392.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1980. Facts and fears: Understanding perceived risks. Pp. 181–214 in Societal Risk Assessment: How Safe Is Safe Enough? R. Schwing and W. A. Albers, Jr., eds. New York: Plenum Press.
Stevens, W. K. 1991. What Really Threatens the Environment? New York Times, January 29, C4:1.
APPENDIX
Three Examples of Opening Responses in Mental-Model Interviews on Climate Change
Interviewer: I'd like you to tell me all about the issue of climate change.
Subject: Climate change. Do you mean global warming?
Interviewer: Climate change.
Subject: OK. Let's see. What do I know. The earth is getting warmer because there are holes in the atmosphere and this is global warming and the greenhouse effect. Um … I really don't know very much about it, but it does seem to be true. The temperatures do seem to be kind of warm in the winters. They do seem to be warmer than in the past … and … hmm … That's all I know about global warming.
Interviewer: Tell me all about the issue of climate change.
Subject: I'm pretty interested in it … The ice caps are melting—the hole in the ozone layer. They think pollution from cars and aerosol cans are the cause of all that. I think the space shuttle might have something to do with it too, because they always send that up through the earth, to get out in outer space. So I think that would have something to do with it, too.
Interviewer: Tell me all about the issue of climate change.
Subject: Climate change? Like, what about it? Like, as far as the ozone layer and ice caps melting, water level raising, rain forest going down, oxygen going down because of that? All of that kind of stuff?
Interviewer: Anything else?
Subject: Well, erosion all over the place. Um, topsoils going down into everywhere. Fertilizer poisoning.
Interviewer: Anything else that comes to mind related to climate change?
Subject: Climate change. Winters ain't like they used to be. Nothing's as severe. Not as much snow. Nothing like that.