or (3) promotion of a decisionmaking partnership, to involve people actively in risk management and decisionmaking, including structuring the problem and selecting management options (National Research Council, 1989). The method and content of any particular instance of risk communication depend on the goals of the communicator. For example, an organization whose goal is to promote immunization may tend in its public communication efforts to emphasize the benefits and minimize the risks associated with that intervention. On the other hand, an organization whose goal is to alert the public to the risks of an intervention such as vaccination may tend to emphasize the risks and minimize the benefits of the intervention.
Studies of effectiveness for both behavioral and informational goals suggest that to be effective, a communication must evoke a sense of personal relevance in the recipient and convey that the recipient can do something to reduce or control the risk.
Risk Perception and Decisionmaking
There are many influences on how people perceive and respond to risks. Several participants noted that individuals' values, beliefs, and attitudes as well as the wider social or cultural values or dispositions strongly influence how risks are perceived or accepted. A better understanding of risks, consequently, will not lead to a uniform response to them. As an expert in risk communication noted, information alone does not resolve controversy. Good risk communication depends on understanding more than quantitative risks and benefits; background experiences and values also influence the process. For example, people who have a general mistrust of government or big business may be less likely to accept the vaccine risk estimates published by government health agencies or vaccine manufacturers.
Decisions about health risks were described by one speaker as being made not only on a rational basis but also on emotional, psychological, religious, spiritual, philosophical, and intuitive bases. This "cultural rationality" recognizes a richer range of influences on decisionmaking than does the narrower concept of rationality commonly used by experts in the field, according to a speaker who studies risk communication.
Studies show that voluntary, natural, and controllable risks are generally more accepted than risks that are imposed, not within an individual's control, or due to human-made causes. Risks that are familiar are also usually more accepted than
those that are unfamiliar or hypothetical (Slovic et al., 1979; Lichtenstein et al., 1978; Fischhoff et al., 1978). Morgan (1993) uses observability and controllability as the two dimensions that characterize a hazard's "dreadfulness" and the degree to which it is understood (see Figure 1).
Heuristics and Biases
Cognitive shortcuts or rules of thumb known as heuristics affect people's quantitative estimates of risk. Risk scientists have shown that these heuristics operate in regular and predictable patterns, and that their use can bias quantitative estimates of risk.
Anchoring refers to a lack of feel for absolute frequency and a tendency for people to estimate frequencies for a new event on the basis of the frequencies presented for other events. For example, if a person is told that 1,000 people a year die from electrocution and then is asked to estimate how many people die from influenza, his or her number is likely to be lower than if the person is first told that 45,000 people a year die in automobile accidents (Kahneman and Tversky, 1972). The tendency is to "anchor" on the first number and not adjust far enough from it. Consequently, which probability estimates of risk are presented, how they are presented, and in what order may all affect how risks are perceived because of anchoring effects.
Compression is the overestimation of low-frequency risks and the underestimation of high-frequency risks (Fischhoff et al., 1993). If this applied to vaccine risks, people would behave as if the risk of rare adverse effects from vaccines were higher than reported.
Availability means that events that are easily remembered or imagined are more accessible or "available" to people, so that their frequencies are overestimated (Tversky and Kahneman, 1973). If, for example, a particular risk has recently or often been reported in the popular press, people may well overestimate its frequency. A science writer commented that people pay more attention to dramatic, new, or unknown risks, or to risks conveyed within the context of a personal story. Most people will give proportionally more weight to the dramatic risk of dying in an airplane crash, for example, than to the risk of dying from lung cancer due to smoking, even though the latter is more likely. Drama, symbolism, and identifiable victims, particularly children or celebrities, the science writer said, also make a risk more memorable.
Verbal probabilities. When risks are given as verbal probabilities (e.g., likely, unlikely, rare, and common), interpretation depends on the context (Budescu and Wallsten, 1985; Wallsten et al., 1986). The phrase "likely to catch a cold" will be interpreted differently from "likely to become infected with HIV," for example.
Exposure refers to the fact that people tend to underestimate the cumulative effect of multiple exposures to a risk (Linville et al., 1983). In many instances of risk, the concern is about exposure over time, not necessarily from a single exposure alone. Communication of cumulative risk can be helpful in these instances. Cigarette smoking is an example of an exposure in which cumulative risk is important.
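The compounding that people tend to underestimate can be made concrete with a short calculation. The sketch below assumes independent exposures and uses a hypothetical per-exposure probability; neither the probability nor the scenario comes from the studies cited above.

```python
# Cumulative risk of at least one adverse outcome over n independent
# exposures, each with per-exposure probability p: 1 - (1 - p) ** n.
# The value of p here is hypothetical, chosen only for illustration.

def cumulative_risk(p: float, n: int) -> float:
    """Probability of at least one adverse outcome in n independent exposures."""
    return 1.0 - (1.0 - p) ** n

p = 0.001  # hypothetical 1-in-1,000 risk per single exposure
for n in (1, 100, 1000):
    print(f"{n:>5} exposures -> cumulative risk {cumulative_risk(p, n):.3f}")
```

Under these assumptions, a risk that seems negligible per exposure (0.1 percent) approaches two in three after a thousand exposures, which is why communicating cumulative rather than single-exposure figures matters for hazards like smoking.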
Comparisons. Risk is multidimensional, but when a communicator makes a risk comparison on the basis of one or two dimensions, people may assume that many dimensions are being compared and draw conclusions based on the broader comparison rather than the one intended. For instance, experts may say that the risk of an environmental exposure is inconsequential because on average it is low, but ordinary people might call for action because they fear that the risk falls disproportionately, and thus unfairly, on vulnerable groups.
Omission bias is the tendency to believe that an error of omission is less serious than an error of commission. That is, people tend to be more averse to a risk incurred by taking an action than one incurred by taking no action. For example, a University of Pennsylvania study found that nonvaccinators (parents who chose not to vaccinate their children) were more likely to accept deaths caused by a disease (that is, omitting vaccination) than deaths caused by vaccination (an act of commission) (Meszaros et al., 1996).
Framing, the way in which information is presented or the context into which it is placed, affects how risk communication messages are received. Studies show that a different framing of the same options can induce people to change their preferences among options (Tversky and Kahneman, 1973; Lichtenstein and Slovic, 1971). This is known as a preference reversal. For example, the data on lung cancer treatment suggest that surgical treatment has a higher initial mortality rate but radiation has a higher 5-year mortality rate. In one illustration, 10 percent of surgery patients die during treatment, 32 percent will have died one year after surgery, and 66 percent will have died by five years. For radiation, 23 percent die by one year and 78 percent die by five years. When people are given these mortality statistics, they tend to be evenly split between preferring radiation and preferring surgery. When the same statistics are given as life expectancies (6.1 years for surgery and 4.7 years for radiation), there is an overwhelming preference for surgery (McNeil et al., 1982).
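The effect of framing can be seen by restating the same figures as survival rather than mortality. The sketch below uses only the mortality percentages quoted above; the survival numbers are simply their complements, not additional data from the study.

```python
# Mortality percentages for lung cancer treatments as quoted in the text
# (McNeil et al., 1982). Radiation's during-treatment figure is not given
# in the text, so it is omitted here.
mortality = {
    "surgery":   {"during treatment": 10, "1 year": 32, "5 years": 66},
    "radiation": {"1 year": 23, "5 years": 78},
}

def survival_frame(mortality_pct: dict) -> dict:
    """Reframe mortality percentages as survival percentages (100 - mortality)."""
    return {when: 100 - pct for when, pct in mortality_pct.items()}

for treatment, points in mortality.items():
    print(treatment, survival_frame(points))
```

The same facts read very differently in each frame: "34 percent of surgery patients alive at five years" versus "66 percent dead by five years" is exactly the kind of restatement that shifts stated preferences.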
How information is framed can also affect whether people allow an omission bias to be a prime motivator of a decision not to vaccinate. One study of university students found that when the issue of responsibility was removed, subjects were more likely to opt for vaccination. Responsibility was removed by reframing the question as "if you were the child, what decision would you like to see made" (Baron, 1992).