V. Risk Communication
A Mental Model Approach to Risk Communication

M. Granger Morgan

When the word risk is mentioned, many of us first think in terms of a single value, such as the expected number of deaths or injuries. However, a simple thought experiment can quickly convince us that things are more complicated. Suppose that we are considering introducing a new product. After careful market research we have determined that we can sell a number of them and make a profit, but there will be some net impact, D, on overall US mortality. What sorts of things do we need to know before we decide whether we are justified in introducing this product?

In addition to the sign and magnitude of D, most people want to know things such as whether the risk is immediate or delayed, how equally or unequally it is distributed among different people, whether those at risk have any control over their exposure, whether there are intergenerational effects, how well the risk is understood, whether it is similar to other risks society already accepts, what the product does, who uses it, and how responsibility and liability will be distributed. As we pursue this simple question we quickly come to understand that risk is a multi-attribute concept. We care about more than just some measure of the number of deaths and injuries.

Slovic, Fischhoff, and Lichtenstein82 have shown that one can group such attributes of risk into three broad factors, which allow us to reliably sort risks into a "factor space." People's perceptions of risks, including their beliefs about the need for regulatory intervention, are a strong function of where a risk falls in this space.

82Slovic, P, B Fischhoff, and S Lichtenstein (1980). Facts and fears: Understanding perceived risk. In Schwing, R, and W Albers (eds.), Societal Risk Assessment. New York: Plenum.

Risks posed by such common and well-known objects and activities as skiing, bicycles, and automobiles appear in the lower left corner
BLOOD AND BLOOD PRODUCTS: SAFETY AND RISK

of the space. In contrast, risks such as those associated with nuclear power, asbestos, and pesticides lie in the upper right-hand corner of the space.

Experimental psychologists have discovered that in making judgments, such as estimating the number of deaths from a chance event, people use simple mental rules of thumb called "cognitive heuristics."83 In many day-to-day circumstances these serve us very well, but in some instances they can lead to biases in the judgments we make. This can be a problem both for laypeople and for experts. Three such heuristics are particularly common:

Availability: the probability of an event is judged in proportion to the ease with which people can think of previous occurrences of the event or can imagine such occurrences.

Anchoring and adjustment: the probability judgment is driven by a starting value (anchor) from which people typically do not adjust sufficiently as they consider various relevant factors.

Representativeness: the probability that an object belongs to a particular class is judged in terms of how much it resembles that class.

The design of effective risk communication requires a recognition of the multi-attribute nature of risk, an awareness of the psychology of risk perception and judgment under uncertainty, and an analysis of the information needs of the people for whom the communication is intended.

Developing a risk communication has traditionally been a two-step process. First, you find some health or safety specialist who knows a lot about the risk and you ask them what they think people should be told. Then you find someone who is called a "communications expert," usually either a writer or someone who works in public relations. You give them the information you got from the health or safety specialist, and they decide how they think it should be packaged and delivered.
If you think about it for a while, you will notice that two key things are missing from this traditional approach. First, it doesn't determine systematically what people already know about the risk. People's knowledge is important because they interpret anything you tell them in light of what they already believe. If some of those beliefs happen to be wrong, or misdirected, your message may be misunderstood. It may even lead people to draw conclusions that are exactly the opposite of what you intended. Second, the traditional method doesn't determine systematically the precise information that people need to make the decisions they face. There are formal methods,

83Kahneman, D, P Slovic, and A Tversky (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press. Dawes, RM (1988). Rational Choice in an Uncertain World. Orlando, Florida: Harcourt Brace.
decision analysis, that can provide a precise answer to a question such as "What is the minimum set of information I need to make the decisions I care about?" Of course, most of us have never heard of decision analysis, and we don't use it in our daily decision making. For various reasons we typically require more than the minimum set of information to make the decisions we face. Yet, when you start reviewing traditional risk communication messages, it is amazing how many of them fail to provide even this minimum set of information.

Most of us already have some relevant knowledge and beliefs about any risk that a communication is designed to inform us about. Often we've already heard some specific things about the risk. If we haven't, we've heard about other risks which sound pretty similar. In any event, we have a lot of knowledge about the world around us. We have beliefs about how things work, and about which things are more and less important. When someone tells us about a risk, we use all our previous knowledge and beliefs, called our "mental model," to interpret what we are being told.

Finding out what someone already knows about a risk means learning about their "mental model." That's easier said than done. We could administer a questionnaire, but people aren't stupid. I have to ask questions about something. As soon as I start putting information in my questions, people are going to start using that information to make inferences and draw conclusions. Pretty soon I'm not going to know whether the answers I am getting are telling me about the mental model that the person already had before I started quizzing them, or the new mental model that the person is building because of all the information I am supplying in my questions.

To overcome these and other problems, we have developed a five-step method for creating, testing, and refining risk communication messages:84

1.
Carefully review scientific knowledge about the risk, and summarize it in terms of a formal diagram called an "influence diagram."

2. Conduct open-ended elicitations of people's beliefs about the hazard, allowing expression of both accurate and inaccurate concepts. Use a "mental model interview protocol" that has been shaped by the influence diagram.

3. Administer structured questionnaires to a larger set of people in order to determine the prevalence of the beliefs encountered in the "mental model" interviews conducted in Step 2.

84Morgan, MG, B Fischhoff, A Bostrom, L Lave, and C Atman (1992). Communicating risk to the public, Environmental Science & Technology, 26: 2048-2056. Bostrom, A, B Fischhoff, and MG Morgan (1992). Characterizing mental models of hazardous processes: A methodology and an application to radon, Journal of Social Issues, 48: 85-100.
4. Develop a draft risk communication message based on both a decision-analytic assessment of what people need to know in order to make informed decisions and a psychological assessment of their current beliefs.

5. Iteratively test and refine successive versions of the risk communication message using open-ended interviews, closed-form questionnaires, and various problem-solving tasks, administered before, during, and after people receive the message.

Suppose that I want to learn about the mental model that someone has for a risk such as the safety of the blood supply. In response to a question like "Tell me about the safety of the blood supply," most people can only talk for a few sentences before they run out of steam. However, those few sentences often contain five or ten different ideas. If the interviewer has been trained to keep track of all the things that are mentioned, they can then go on to ask questions that follow up on each one. For example, they might say, "You mentioned that screening blood donors can improve safety. Tell me more about that." By systematically following up on all the concepts that the subject introduces, a well-trained interviewer can often sustain a conversation about the risk for 10 to 20 minutes, introducing no new ideas of their own. Only in a later stage of the interview will the interviewer go on to ask questions about other key ideas which the subject did not bring up on their own.

By conducting a number of interviews of this sort, we can begin to build up some sense of what people know and believe about a risk. Then, in step three of the process, using a closed-form questionnaire, we can determine the relative frequency with which various beliefs actually occur in the general public.
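That tabulation step is statistically simple: each belief's prevalence is an estimated proportion, with a sampling margin of error that shrinks as more people are surveyed. A minimal sketch of the idea in Python (the survey data, belief labels, and function name are hypothetical illustrations, not from the study):

```python
from collections import Counter
from math import sqrt

def belief_prevalence(responses, z=1.96):
    """Estimate how common each belief is across respondents, with a
    normal-approximation 95% confidence interval (analysis sketch only)."""
    n = len(responses)
    counts = Counter()
    for answers in responses:          # each respondent: set of endorsed beliefs
        counts.update(answers)
    out = {}
    for belief, k in counts.items():
        p = k / n
        half = z * sqrt(p * (1 - p) / n)  # margin of error
        out[belief] = (p, max(0.0, p - half), min(1.0, p + half))
    return out

# Hypothetical closed-form responses: which beliefs each respondent endorsed.
survey = [
    {"screening helps", "radon is permanent"},
    {"screening helps"},
    {"radon is permanent", "testing is easy"},
    {"screening helps", "testing is easy"},
]
for belief, (p, lo, hi) in sorted(belief_prevalence(survey).items()):
    print(f"{belief}: {p:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```

With only four respondents the intervals are very wide, which is exactly why step 3 moves from a handful of interviews to a larger structured sample.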
Every time that we have conducted mental model interview studies to prepare a risk communication, we have learned surprising and important things which have had a major effect on the message we developed. Some of the most important insights have come from common misconceptions. For example, in studies of radon, we learned that a significant number of Americans believe that once they get radon in their house, the house becomes permanently contaminated and there is nothing they can do about it. Of course, this is not true. Radon is a radioactive gas that decays into various particles which become nonradioactive in a matter of hours. Thus, if the source of radon gas can be closed off, all of the resulting radioactivity will soon be gone from the home. This is an important part of any risk communication message, because if you test your home for radon and find elevated concentrations, you can take various steps to prevent the radon from entering the home and thus reduce or eliminate the risk.

How could people think that a house that has radon is permanently contaminated? Probably they
are extrapolating from other things they know. They have heard about radioactive waste from power plants and bomb factories that remains dangerous for 100,000 years. They also may have heard of houses that became chemically contaminated when they were sprayed with very long-lasting pesticides. They make a reasonable extrapolation (which in this case happens to be wrong) that radon is like these other cases.

This is the sort of misconception that it is critical to know about if you are going to design an effective risk communication. When EPA designed their first Citizen's Guide to Radon, which was mailed to citizens all over the country, they didn't know about this common misconception. Their central message was "You should test your house for radon." However, many people who believe that radon permanently contaminates a house would probably be inclined to ignore this message, figuring they are better off not knowing whether their house has a high concentration of radon. For example, if they don't know, they could sell their house some time in the future with a clear conscience.

In summary, in order to develop an effective risk communication one must recognize that risk is a multi-attribute concept. Building on the literature on risk perception and judgment under uncertainty, one should learn what people already know about the risk at hand, first through open-ended mental model interviews, and then through closed-form questionnaires. On the basis of this, and a careful assessment of the information people need to make the decisions they face, a first draft of the communication can be developed. However, there is no such thing as an expert in risk communication. The only way to be certain that a message works, that it is understood in the way that it is intended, is to try it out on real people. By using a variety of methods, including one-on-one read-aloud protocols and focus groups, the message can be iteratively refined.
Test it; refine it; test it again. Don't stop until it works.

DISCUSSION

Henrik Bendixen: How do you know when risk communication works effectively?

M. Granger Morgan: One strategy is to present people with a hypothetical situation and ask them what actions they would take. For example, we did a three-way study with the first version of the EPA Citizen's Guide to Radon and two brochures we developed from the results of our work. In terms of simple recall, regurgitating the facts, the three performed roughly comparably. Then we asked participants to respond to a question such as: "You have a neighbor who has just measured radon levels in their home of 10 picocuries per liter. What advice do you give them?" The people who read either of the brochures
we developed were able to give direct, cogent, and correct advice, but the people who had read only the EPA brochure had a lot of trouble. They basically were not able to provide an answer.
Risk Communication: Building Credibility

Caron Chess

The following true story illustrates the difficulty of relying only on numbers to explain risk. A high-ranking government official was speaking to a public meeting of hundreds of people about a proposal for a hazardous waste incinerator in their neighborhood. The audience was told that the incinerator would pose only a 1 in 1 million risk of an increased death from cancer. The crowd's response: "We hope you are the one."85

If this agency representative had been able to explain the 1 in a million risk more eloquently, would people have said, "Oh, now we understand. We will graciously accept your incinerator"? The answer, of course, is no. Nonetheless, senior scientists, administrators, and public health officials seek to improve their explanations of the risk numbers in hopes that the public will yield to experts' judgments. These experts are overemphasizing the impact of explanations of the mortality and morbidity data. I suggest that this and other environmental communication issues apply to communication about blood safety. In the following few pages I discuss three myths that scientists and policy makers believe about explaining risk and propose alternative approaches.

ROLE OF INFORMATION

One myth is that information changes behavior, while in reality, information has little association with behavior. Although scientific experts are very careful to limit extrapolation beyond the data when dealing with their own disciplines, they tend to go beyond the social science data (and sometimes go where no social scientist has gone before).

85Hance, BJ, C Chess, and PM Sandman (1988). Improving Dialogue with Communities: A Risk Communication Manual for Government. Trenton: New Jersey Department of Environmental Protection.
Experts who are not social scientists have in mind a model about the role of information that is flawed. Many non-social scientists erroneously believe that if you give people information about risk, they will change their attitudes and then they will change their behaviors. However, empirical research suggests that increased knowledge about a technology is not necessarily associated with support of that technology. For example, in a review of research on perception of the risks of nuclear energy, about one-half of the studies show that supporters of nuclear power have more factual knowledge than opponents. The other studies show that opponents of nuclear power know more than supporters or that opponents and supporters have equal amounts of information.86

In the health education field there is an evolving consensus that knowledge has a weak link to behavior. For example, high school kids right now can pass tests of their AIDS knowledge, but they have not changed their behavior dramatically. Likewise, women may know that they are at risk for AIDS if they have unprotected sex with a partner who is HIV infected. That knowledge does not mean that women will insist that their partners use condoms.

Advertisers who want to change behavior talk about messages and appeals rather than information: appeals to status, social approval, sex appeal, and so forth. Similarly, those who attempt to change behavior for the social good, an approach dubbed social marketing, develop communication strategies and messages based on what motivates people to change their behavior.87 For instance, years ago antismoking campaigns were replete with anatomically explicit photographs of damage to lungs caused by smoking. Somewhat later, there was a complete change in message, based on research about what motivates teenagers' behavior: television ads featured Brooke Shields telling kids that it was uncool to smoke.
When those of you in the blood industry are considering how to increase the pool of donors, you need to think in terms of social marketing. For example, the message that donating blood cannot lead to HIV infection, while a seemingly logical response to public fears, needs to be supported by empirical research on questions such as: To what extent does fear of HIV affect those people who have a record of donating blood? Does that fear largely affect those whose psychological portrait suggests that they would be donors? Or does it largely affect those who would be unlikely to give blood under any conditions? If the fear does affect people otherwise likely to give

86Johnson, BB (1993). Advancing understanding of knowledge's role in lay risk perception. Risk: Issues in Health, Safety, and the Environment, 4: 189-212.

87For example, Rice, RE, and CK Atkin (1989). Public Communication Campaigns. Newbury Park: Sage.
blood, how well is their fear reduced by the message about the lack of connection between HIV and blood donation?

Some existing empirical research begins to answer this question by distinguishing the attitudes of those who donate blood from those who do not. According to one study,88 people who give blood tend to associate blood donation with generosity, civic-mindedness, and usefulness, as well as feelings of assurance and relaxation. Those who do not donate blood connect donation with illness and discomfort. This study suggests that motivating donors might build on their positive associations, not merely on factual information about the lack of connection between giving blood and AIDS. Attracting more donors requires more such research to determine feelings about blood donation, to develop potential messages based on that research, and to test those messages.

If you want to change behavior, for example, encouraging people to donate blood or to become repeat donors, you need to understand their motivations, biases, and beliefs. This same understanding is also essential to discouraging people who continue to donate blood even though they know they are HIV-positive or that they may be at risk of HIV infection. If you are going to communicate with those individuals, you want to know what their attitudes and beliefs are and what motivates them to behave in this manner. When you do provide information, you should consider what your audience wants to know, not merely what you want to tell them. To provide this information, you need to know what your audience thinks is important, and knowing your audience requires further research.

THE ROLE OF RISK NUMBERS

Scientific experts tend to subscribe to a second myth: that the public is influenced largely by the risk numbers. However, as the previous discussion suggests, individuals are influenced by factors other than the risk data.
A true story about a prominent Environmental Protection Agency risk assessor illustrates the influence of other variables.89 The risk assessor was in the hospital for tests, including one that had a slight risk of causing kidney failure. He found that more sophisticated diagnostic equipment, without the potential risk, existed at a hospital across town. He decided to take an ambulance to the other hospital to use the more sophisticated diagnostic equipment even

88Breckler, SJ (1989). Scales for the measurement of attitudes toward blood donation. Transfusion, 29: 401-404.

89Siegel, B (1987). Managing risks: Sense and science. Los Angeles Times, July 5, I, 28.
though his wife pointed out to him that the risk of the ambulance trip across town was greater than the risk of kidney failure. The risk assessor chose a familiar risk (a ride across town) rather than the less familiar one (the unfamiliar, potentially harmful technology). The risk assessor acknowledged that he did not make his personal decision by the numbers, even though he knew it was illogical.

Now how does this story relate to the field of blood donation? People tend to be less fearful of familiar risks than of unfamiliar risks, regardless of the risk numbers.90 According to the literature on risk perception, there are also psychological rules of thumb that influence perceptions more than the risk numbers do. One rule of thumb, "the availability heuristic," suggests that the most vivid images on our psychic landscape influence how we see risk.91 Therefore, because homicides get a great deal of publicity, most of us see homicide as a more frequent cause of death than it is. Similarly, people are less influenced by numbers such as a 1 in 400,000 risk of contracting HIV from a blood transfusion than they are by the powerful images of famous AIDS victims such as Elizabeth Glaser or Ryan White.

THE ROLE OF RISK COMPARISONS

Risk comparisons reflect the tremendous desire on the part of scientists to compare risks that are not part of daily life (such as the risk of contracting HIV from blood donation) with familiar risks that people take every day. Often, the goal is to suggest that everyday risks, such as driving, are more risky than the risks that people fear, such as donating or receiving blood. However, despite the interest in using risk comparisons, few studies have explored the impacts of such comparisons.
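Part of what makes per-exposure figures like the 1 in 400,000 quoted above hard to interpret is that people face repeated exposures. A short calculation converts a per-event probability into a cumulative one (the 1-in-400,000 figure is the one quoted in the text; the 20-unit scenario and the independence assumption are illustrative only):

```python
def cumulative_risk(per_event_risk: float, n_events: int) -> float:
    """P(at least one adverse outcome) over n exposures, assuming
    independent events: 1 - (1 - p)^n (a simplifying assumption)."""
    return 1.0 - (1.0 - per_event_risk) ** n_events

p = 1 / 400_000  # per-transfusion figure quoted in the text

# A single unit leaves the risk at 1 in 400,000; a hypothetical
# 20-unit course raises it to roughly 1 in 20,000.
for units in (1, 20):
    print(f"{units:2d} unit(s): {cumulative_risk(p, units):.8f}")
```

Even restated this way, the argument of this section stands: such numbers carry less psychological weight than vivid images, so the calculation informs but does not persuade.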
The results of one such study suggested that a risk comparison did not have any effect when it was subjected to emotional criticism.92 Another study asked subjects to vote on their willingness to accept a hazardous waste incinerator.93 The risk of the facility was compared to the risk of smoking a dozen cigarettes. When the comparison was used, more people said they would oppose the incinerator than when the comparison was not used.

90For example, Slovic, P (1987). Perception of risk. Science, 236: 280-285.

91Kahneman, D, P Slovic, and A Tversky (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

92Slovic, P, N Kraus, and V Covello (1990). What should we know about making risk comparisons? Risk Analysis, 10: 389-392.

93Freudenburg, WR, and JA Rursch (1994). The risks of putting numbers in context: A cautionary tale. Risk Analysis, 14: 949-958.
The research did not suggest why the risk comparison had this impact. However, the lesson is that if you want to use a risk comparison, you had better conduct research on its influence.

SOURCES OF CONFLICT

Yet another myth subscribed to by scientific experts is that conflict is due to the public's lack of understanding of a subject, while in reality, trust and values often play much larger roles. One model for societal decision making values information as the most critical element, while another values the importance of participatory decision making. The premise of the first model is that cost-benefit analysis and risk trade-offs are acceptable ways to make societal decisions. This model assumes that decisions should be made on a scientific basis and that science is objective. The second model suggests that decisions should be premised on equity and fairness rather than merely on risk.94 These are obviously two very different equations for decision making. Is one right and the other wrong? I would submit to you that the difference in the models is a question of values, not of rightness or wrongness.

The second model of decision making explains how conflict about scientific issues may have roots in a lack of trust in those who have decision-making authority. According to one review of the literature, trust is asymmetrical.95 It is easy for experts to lose trust; it is far harder to gain it. Once trust is broken, it is difficult to recover. Also, sources of bad news are more credible than sources of good news, at least in our culture, and bad news carries more weight than good news. These findings from the literature were applied to a study in which participants were asked to respond to statements concerning activities of a hypothetical local nuclear power plant.96 The one condition that was found to increase trust significantly referred to a local board with the power to shut down the nuclear power plant if it did not function as promised.
This finding underlines the importance of control to individuals confronted with situations they view as risky. Similarly, a literature review concerning siting of hazardous waste facilities, which was conducted for the trade association for

94For example, Vaughan, E (1995). The significance of socioeconomic and ethnic diversity for the risk communication process. Risk Analysis, 15: 169-180.

95Slovic, P (1993). Risk, trust, and democracy. Risk Analysis, 13: 675-682.

96Ibid.
the nuclear power industry, concluded that public participation seemed to increase the likelihood of acceptance of siting.97

What are the implications of such research for the blood industry? One, you need to focus on increasing trust, not merely conveying data. Two, you need to discuss more about the process of how decisions are made and who makes them. And three, you need to consider involving those who are affected by the decisions in the decision-making process. Extrapolation from the environmental data can make this link more explicit: if I were a hemophiliac, I might be less interested in hearing from you the current risk data than in knowing what you are doing to reduce my risk, to protect me from whatever risk is out there. I am not going to merely weigh the benefit of my getting plasma versus the risk of not getting it. I am also likely to consider: To what extent did the blood industry protect my life? Did they do as much as they could have?

If you want to build trust, one approach may be involving other credible sources in the decision making. If you think you need trust in order to make these policy decisions, you may have a hard time if you make decisions solely on the basis of risk data without consideration of the elements that may affect trust. Paul Slovic's research suggests that credibility is made up of two components: competence and trust.98 Do you know how to do what you do? Can you be trusted? I submit to the blood industry that you may need to change perceptions of your competence and your trustworthiness and that information alone will not change those perceptions.

DISCUSSION

Question from the audience: So are you saying we need someone like Brooke Shields as a spokesperson?

Caron Chess: I do not know enough about perceptions of the blood supply to know what image is going to change people's behavior toward donating blood or the blood industry.
But I would also consider how you are framing the problem. I would look not only at how to increase the donor pool but also at whether to educate doctors to reduce the number of transfusions (since the Institute of Medicine briefing book suggested that doctors may overuse transfusions). I also might look at people who are repeat donors and try to

97Richards, M (1993). Siting industrial facilities: Lessons from the social science literature. International Conference for the Advancement of Socioeconomics, March 1993, New York.

98Slovic, P (1993), op. cit.
find out what type of person gives blood routinely. It might be easier to increase the number of repeat donors than to increase the donor pool. That is, I try to look at the range of questions before I frame the research or the campaign.

Question from the audience: What you are saying is that we can increase donation by appropriate techniques?

Caron Chess: Yes, but, and this is a major but, I think social marketing campaigns can only be done effectively when there is societal consensus. We can conduct social marketing campaigns about smoking cessation and using seat belts; we can market routine mammograms and blood donation. But you cannot conduct social marketing, I think, ethically or in a way that would increase trust, to fight more regulation or more testing of the blood supply. Of course, as an industry you can battle those issues in other ways, which might further erode your credibility. You can, however, deal with the availability of blood as a social marketing issue.

Question from the audience: I think there are cases where risk-taking behavior changes after the fact. I am trying to figure out why that happens. Changes in eating habits might be an example. Has that happened, or is social marketing always there as the driving force for positive change?

Caron Chess: In order to conduct smoking cessation campaigns, I suspect that researchers had to talk to some kids who stopped smoking to try to figure out why they did, although social marketing alone is not sufficient to explain all behavioral changes.

M. Granger Morgan: I would argue for a slightly larger role for information than you have. I do not disagree at all in terms of immediate action, but I would argue that in the context of something such as smoking, there has been a long, gradual diffusion effect, and in the absence of the information the diffusion effect probably would not have occurred. We probably would not have seen the social transformation we saw.
I doubt anybody can describe the actual causal mechanisms that gave rise, about 8 or 10 years ago, to the social tipping that suddenly resulted in the decision that the time had come to do something about smoking. Many of us are hoping that we are seeing the early signs of a similar social tipping with respect to handguns in this society.

Caron Chess: I agree. Certainly there is a diffusion effect with information. Information also has an agenda-setting effect. What I am suggesting is that people who are scientists sometimes overrely on conveying information to
transform people's consciousness and behavior. I am not saying throw the information out the window.

Question from the audience: You said that once trust is lost it is nearly impossible to regain. I think the public has largely lost trust in the American blood supply system, for whatever reason. Are there strategies that have been used to regain trust? Is there a stepwise process that over time one can engage in to regain trust, given that you said it is nearly impossible?

Caron Chess: Human beings are difficult, because we don't respond like rats. Something like trust is fairly complex. In the environmental field the question of regaining trust is also a very live one, because our environmental institutions have lost credibility. The question is: How do institutions regain credibility? One of the elements being looked at is stakeholder involvement. If there is not trust in the decision makers, maybe you need to involve the skeptics. Different approaches to participation are being tried around the country.

Question from the audience: Do you know of any example that has been studied where trust was lost and then regained? Does Tylenol count?

Caron Chess: If Tylenol had come out and said, "It is not us; we are the good guys; there is nothing to worry about," they might have faced an erosion of trust and then needed to increase their credibility. Instead, right from the beginning their chief executive officer, and this is a classic case that is used in the literature time and time again, said, "I do not care about the statistics. I do not care what you lawyers say. We are going to be more proactive." So Tylenol lost some of its market share but then regained it. I think that loss in market share was due more to people's fear of being victims of random poisoning than to loss of trust in the people who make Tylenol.
Question from the audience: Ford recovered from the Pinto, and they did it by behaving in a trustworthy way for a long, long time and also pounding away on safety messages.

Henrik Bendixen: A more modern example is Intel. They may be having some trouble.

Caron Chess: Yes, and that shows the more cynical approach taken by companies such as Exxon and Union Carbide, the "It's not my fault" approach that goes something like, "What? Me? Us? No, you are mistaken." Then comes the backpedaling and saying, "Okay, well, maybe we
do have some responsibility." Both companies lost a tremendous amount of credibility. The so-called issue-attention cycle99 is also involved in retrieving credibility. I think if people do not focus on an issue for a long enough period of time, the involved company is not on people's minds, and over time the memory of the company's actions fades. Finally, acting in ways people perceive as trustworthy may be at least as important as talking about safety. However, you people in the blood industry really blew it. The infected populations are not going to forget too soon. I think that what you need to do is to listen to your detractors' messages. In environmental situations where win-win solutions are being sought, issues have been going to mediation. I am not suggesting that the blood industry should necessarily go to mediation. What I am suggesting is that perhaps you need to look at alternative approaches to problem solving and alternative ways of dealing with your detractors. What I would not be doing if I were in the blood industry is beating up on the Food and Drug Administration. Why? Because people are going to feel safer if they feel that there is a good watchdog out there. Also, if I were in your shoes I would be funding researchers to find out how people perceive blood donation and the blood industry.

Richard K. Spence: Is it possible to overcome some of the problem of trust by making the risk more acceptable to people? If you point out the benefits of transfusion, how do people buy into it? We know it is risky to drive a car, less so than to fly an airplane, but perhaps people drive because the benefit is better.

Caron Chess: The benefits to whom and under whose value system? There are situations in which all the money imaginable, according to empirical research, would not get people to accept a hazardous waste facility because money was not as important to them as control of their lives.
There were other issues at stake beyond the more traditional weighing of costs and benefits. While I understand your point about the benefits of transfusion, if I were a hemophiliac I am not sure that your statistics about benefits would give me comfort. What I would want to know is: "What are you doing to make me safe these days? What kinds of decisions are you making and on what basis?"

99Downs, A. (1972). Up and down with ecology: The issue-attention cycle. The Public Interest, 28: 38-50.
Attitudes Toward Risk: The Right to Know and the Right to Give Informed Consent

Jonathan D. Moreno

I am a philosopher by training. Because I work in medical ethics at a medical center, I often think about issues of trust. To a very great extent my field is directly a result of the gradual erosion of trust in the medical profession that has occurred in the American community at large. The buzzword that is used in bioethics is paternalism, to refer to doctors who think that they know better than patients do what is good for the patients. The peculiar consequence has been that another expert group has been created, namely, the bioethicists, who so far have not been burned by any horrible scandals that have undermined the public confidence in what bioethicists have to say. The social scientist is often supposed to give a prescription that would solve the problem. Bioethicists are also expected to give such a prescription in a way that a natural scientist would not be expected to do. When that happens I think of my mother-in-law's advice to me just before I was married. She said to me, "Look, I am not going to tell you what to do, I am just going to tell you the right thing to do." I try not to tell my colleagues what to do.

I want to share with you a couple of thoughts about something I am doing this year that has made me think about risk in a new way. This year I am working for the President's Advisory Committee on Human Radiation Experiments, a group of 13 experts and 1 nonexpert, to give it legitimacy, in the fields of nuclear medicine, radiation oncology, ethics, history, and law. There has been a great deal of public attention to 50 years of human radiation experiments, as they are somewhat awkwardly called, paid for with federal funds. A lot of people in medicine use radioisotopes, and they are very unhappy about all the attention.
Many feel that people will have a misunderstanding about the actual risks of radiation compared to all the other risks to which we are exposed. I have had people in nuclear medicine complain to me that because radioisotopes are relatively very safe, especially when used in tracer
doses, it is really the people who use external radiation, X-rays, who ought to be studied more carefully.

Along these lines, besides such straightforward clinical uses of radiation, the Advisory Committee is also studying the fact that federal nuclear sites like the one in Hanford, Washington, and possibly others, had engaged over the years in what are called intentional releases of ionizing radiation, radioactive iodine and other substances, into the atmosphere. This has caused a great deal of concern in places such as Seattle, whose populations might have received some of this extra radiation in the atmosphere. It turns out that an intentional release called the Green Run probably contributed only 1 percent of the radiation that was added to the atmosphere from Hanford in that year. Yet, what is important to people is not necessarily that 1 percent but the reason for that 1 percent. Many believe that the reason had something to do with secret government activity, with learning how to measure radiation when we were worried about radiation warfare, or maybe the way that radiation would be taken up into plant life and get into the food chain. It is not so much the numbers in many cases that matter to people as it is the background story, the reason that people were exposed to risk, how things happened the way they did, and the decision process that people went through. The fact that it was done in secret simply makes people more anxious and thus affects the way that people assess the nature of these specific risks.

I will very briefly say some things about what people in medical ethics have to say about risk issues and the so-called right to know. I feel very insecure when I give talks before people in the blood services because in bioethics we have done a fair job of dealing with issues in the clinical setting and in research ethics.
Theoretically, we are pretty good at dealing with those kinds of issues analytically. However, when you get into public health ethics we are not doing as well. We have a very limited framework of analysis. The blood area is one that is very difficult for us to cope with because the risks and benefits associated with transfusion or with retrospective studies such as look-backs are so hard to know. In many cases, harms as well as benefits are remote, and the bioethical conceptual framework is not well designed for those kinds of problems. We are struggling with the same kinds of uncertainty problems that you are struggling with in regard to blood. That we in bioethics are limited in our ability to help you does not mean that I will not tell you what I think you ought to do.

The conceptual framework that people in bioethics have developed might be called an autonomy-based framework. It is founded on the idea of personal self-determination. The principles that you hear from bioethicists are often called our mantra: autonomy, beneficence, nonmaleficence, and justice. Autonomy
comes first. The ethical framework that we have derived from the clinical and research settings is an autonomy-based framework. It is very difficult to know theoretically how to trump self-determination. That is very important from your point of view, because if autonomy, also called self-determination, is more important than any other ethical consideration, then clearly people have a right to know just about anything and everything that could possibly go into their decision making so that they can continue to preserve and promote their own self-determination.

This kind of framework seems to work fairly well in the clinical setting. It probably works even better in research, especially when you are talking about research that will not benefit the subjects, normal healthy volunteers, for example. However, in the public health area autonomy can generate terrible problems unless you are at the extreme case of a highly contagious disease in which it is quite clear that the benefit to the wider community ought to trump personal autonomy. In New York we recently had the example of HIV-related multiple-drug-resistant tuberculosis (TB). This was an easy one to resolve because most of these people were prisoners. There was not a lot of argument about the need to isolate them. There was not even a lot of argument about invading their privacy and doing direct observation of therapy to make sure that they actually took this unpleasant medication to be certain that we do not develop new strains of drug-resistant TB.

When you get to an area such as AIDS, for example, a disease that has to do with intimate behaviors that are infectious but not contagious, you are dealing with a condition that strains what classical liberalism understands as the boundary between the private and the public. Is decision making with respect to the control of HIV a private matter or a public matter?
Diseases that are infectious but not highly contagious tend to straddle the public-private boundary. We do not really know very well how to deal with those kinds of issues.

An example of the way that bioethics tried to deal with this public-private boundary problem and the dominance of autonomy in its typical analytic framework in the AIDS era concerned a problem that came up about 10 years ago. What would you do if you had a coded list of blood donors and you knew which donors had given you HIV-positive units? You had collected that list for contact tracing or to develop a sense of the epidemiology of the disease. Once you have such a list, do you have to go to the donors and inform them that they are HIV positive? I am sure you remember these days very well. The autonomy-based conclusion was that if you have such a list, if as the person in authority you know the name of someone who has given such a unit, then you have an obligation to respect the autonomy of that person and share the information with that person.
However, what if you had never organized such a list and you are only considering making such a list? Do you have an obligation to make a list, to identify the units that are infected so that you can tell these donors that they were infected? The answer given to that was no. You may have an obligation for public health reasons to find out who gave the HIV-positive units, but you do not have a moral obligation. Why not? Because there is nothing you can necessarily do for that person therapeutically once you tell him or her. The counterargument was that maybe you could at least prevent that person from infecting somebody else. That goes back to a public health obligation. We got to this very peculiar point in an autonomy-based analysis in which once you had the list of identified HIV-positive donors then you had to go tell them. But if you had not made such a list in the first place, then autonomy did not obligate you to go out and make that list. To many people this kind of analysis is and continues to be unsatisfactory, but it was the conclusion that was generally reached in the bioethics community at the time that these lists were being assembled.

Let me walk you through the way that an analysis of right-to-know issues is supposed to take place in contemporary bioethics. I want to precede this walkthrough by just pointing out that the data on the desire among cancer patients to know their diagnosis indicate that the desire is very high. Basically, 90 percent of cancer patients say that they want to know their diagnosis, especially if it is a grave diagnosis.100 A day or two after they are told the information relative to their diagnosis, half of cancer patients retain one or two relevant facts from the clinical interview in which the nature of their disease was explained to them.
There is a significant gap between how much people want to know and feel that they have a right to know and how much they can actually retain. Nonetheless, the vast majority of patients in this culture say that they want to know virtually everything that is relevant to their disease.

So with that background, how would we understand the right to know in bioethics? The answer is that people have a right to run the course of their own lives and they need information in order to do that. That does not mean that you have the obligation to go out and get the information. It does mean that if you have it, you have to share it with them because it is something they need to know. It will change the way that they think about their future and change their decision making.

The problem with this idea of autonomy is that it comes from multiple sources. When you get to finely tuned problems, such as whether we should continue syphilis testing as a surrogate for lifestyles, this multiply sourced

100Alfidi, R. J. (1971). Informed consent: A study of patient reaction. Journal of the American Medical Association, 216: 1325.
notion of autonomy is not necessarily all that helpful. One source for the idea of autonomy is legal cases. There was a string of legal cases, mostly having to do with surgery, from the early 1900s to the 1960s, in which surgeons removed tissues without telling people they were going to do that or why they were going to do that. Gradually, the doctrine of informed consent came along through the courts. Strangely, but importantly, the notion that people have a right to be informed before surgery is not necessarily followed by the notion that failure to inform is something that has to be paid for by the surgeon or his or her insurance company. In other words, it is clear that, in theory at least, patients are supposed to be informed before surgical procedures. It is not at all clear that a patient who is not informed can collect damages unless he or she suffers some injury at the same time in that procedure.

We also get the notion of autonomy, of course, from various philosophers, especially from Immanuel Kant. What Kant thought autonomy is differs from what most of us think it is. Kant thought that autonomy meant following the moral law. Not too many Americans think that autonomy means following the moral law; rather, they think that it means following their preferences. That is not what the philosophical tradition thinks of as autonomy. We also get notions concerning autonomy from the many political and sociological changes we have gone through, from the gradual transformation of patients into consumers. We also get it, of course, from research ethics scandals: Tuskegee, Willowbrook, and the Brooklyn Jewish Chronic Disease Hospital are among the most famous of these.101

Let me apply some of the elements of this multiply sourced notion of autonomy to the notion of risk.
The term risk is to a very great extent a placeholder for the likelihood of something bad happening, or the likelihood of something bad happening to me, which is even worse. The idea of risk is also ambiguous, not differentiating between the likelihood of a harm and the severity of that harm. In fact, many people who write about risk-benefit these days prefer to talk about harm-benefit.

Our thin theory of autonomy that is multiply sourced does reasonably well with risks that are likely, severe, and known to others, such as getting an HIV-positive unit of blood, especially when the potential offsetting benefits of not being informed are remote or ambiguous. Thus, in the clinical setting cancer patients must be informed that a bone marrow transplant is unlikely to succeed (although they often do not appreciate that), will cause discomfort, and may itself lead to death. In the research setting one must inform subjects that an innovative treatment will probably not help them (although they very often

101Faden, R. and T. Beauchamp (1986). A History and Theory of Informed Consent. New York: Oxford.
appreciate that that is true), but that what will be learned may help somebody else.

How then does an autonomy-based bioethics that is multiply sourced and pretty thin assess the very refined specific questions that come up in the field of blood services? How does a thin theory of autonomy help people to assess informing people about harms that are remote, albeit severe, and perhaps known to others? Recall one of the other principles in the bioethics mantra, which is beneficence. Imagine that we have a case in which the benefits of informing will be very great for a few, though minor when spread out over the aggregate of the community, and justice, the third principle in the mantra, will perhaps suffer because health care resources will now have been maldistributed. They will have been spent on great benefits for a very few but have relatively no effect on the vast majority of people. Even in this kind of situation, in which the benefits are great for a few, justice is perhaps violated because only a few people enjoy these very expensive benefits, and the money could be better spent elsewhere, an autonomy-based bioethic is not necessarily conclusive. An autonomy-based bioethical theory may obligate people to spend a lot of money on a very few and therefore benefit them but leave the vast majority untouched, and risk the problem of maldistribution of precious social resources.

That is pretty much where we are. Our bioethical theory is no better than the general public's attitude. When the general public reads about or learns about an exemplary individual case, there is a reflex to rescue, and that reflex to rescue the known victim does not have a clear answer in what bioethical theory has to offer in these kinds of problematic situations. A number of different criteria have been used in the clinical setting to determine the limits of the right to know.
Doctors want to know how meager a risk they have to talk to their patients about. How much do they have to tell them, and so forth? Informing about every possible risk looks crazy beyond a certain point. A number of other standards have been used, because autonomy-based bioethics leads to this potentially crazy result in which even the most remote risks have to be disclosed. One is the reasonable person standard, that is, what a reasonable person would want to know. However, we know, of course, that risk assessment is not reasonable and is mostly apprehended through famous, exemplary cases. The second standard is what this particular person would want to know. This is popular in bioethics because it conforms to an autonomy-based theory, in that you respect the autonomy of this particular patient. That might work if you have a long-term doctor-patient relationship and so know the values of your patient, but it is unlikely to work in the health maintenance organization (HMO) setting. In public health it does not have any usefulness at all.
The third possibility is whatever the experts think, plus the precedent of what they decided to do before. That has some value legally. Arguably it has moral importance because it suggests a policy is consistent from one case to another. That is important. Similar cases are being treated similarly. Unfortunately, it might also beg the question and perpetuate unreasonable previous policies and standards.

Suppose we accept the last standard: what the experts think, plus what they decided before. That criterion may be modified by a special fact that makes blood services so interesting and unique. That is, in your field trust is perhaps uniquely important. You know pretty fast if you have lost the public trust, because people stop showing up to donate or because the various interest groups that are recipients of blood give you feedback pretty fast. It is the case that expert judgment, plus precedent, plus bending over backwards prudently, if not morally, to overinform about risks is the best formula that we can come up with for the issues that concern you. I want to leave a place for expert judgment. I want to encourage you to sustain a level of consistency about the way that you approach each case, and to bend over backwards, if needed, to "hyperinform," because in the last instance the issues of blood are more sensitive to problems of trust than are other issues in health care.

I do have one thing to add to the issues of expert judgment, precedent, and hyperinforming. There are some indications that financial issues are affecting health policy recommendations in a way that they have not explicitly affected them before. As an example, a few months ago the National Cancer Institute (NCI) changed its policy on recommendations concerning routine annual breast cancer screening, increasing the recommended starting age from 40 to 50. Some people have argued in the literature that this decision was transparently a cost-containment policy initiative.
They note that when Samuel Broder, the Director of the NCI, testified on this before Congress, he seemed to be uncomfortable. During questioning he said that that was the policy that he recommended in his position at the NCI. When he was asked what he would recommend to a 40-year-old patient who perhaps had a family history of breast cancer, he said he would tell her to get screened. This whole discussion may be different 5 years from now. When we are talking especially about doing look-backs, cost issues may be much more salient than they ever have been before in this kind of analysis.

In summing up, I would say, first, that in blood services, warnings of prospective risks to recipients of blood should probably favor what many would regard as overinforming, because of society's autonomy orientation and expectations, prudence, and the fact that trust is such a delicate quality. Second, studies of retrospective risks are only morally obligatory when the problem is serious and an effective therapy is available. When such
studies are undertaken and specific at-risk individuals are identified, then those individuals have to be informed. Finally, I suggest to you that decisions to undertake look-backs will undoubtedly be more affected by cost considerations in the future than they have been in the past.

DISCUSSION

Henrik Bendixen: I would offer one comment. Several years ago Joshua Lederberg chaired a committee102 which predicted that there would be more new diseases coming along. As we speak about trust and communicating risk, the most important thing that we could do in the next few years is to look to the future, to think about what the situation might be 5 or 10 years from now, and to mobilize in the best possible way to be ready for the next one when it comes, because come it will.

M. Granger Morgan: Suppose, in fact, a medical facility and a blood system in some region starts providing elaborate information about risk, while another facility and system does not provide that elaborate information. Suppose I then run a parallel study and discover that for a set of standard procedures the mortality risk is significantly higher in the informed population because many people are foregoing the use of blood. Do you want to tell me about that from the point of view of a medical bioethicist?

Jonathan Moreno: This is very much the same kind of question that clinicians have raised over the years when people have argued that they ought to tell people about the downside of chemotherapy, for example. Not only is it potentially inhumane to do that when there are very few alternatives, but you might scare them away and create more disease and a worse outcome. I do not know if the experience that we have had in the clinical setting can be applied to the kind of aggregate communities that we are talking about in blood.
However, in the clinical setting patients feel that they know the potential harms of what the physician is recommending and have decided to accept the physician's recommendation. It is a good research question. Maybe we could do a controlled study at two centers and try it out.

Thomas Zuck: There may be a way that we can get at it with the Gann Act in California, which requires a very detailed formulation of what you have to

102Institute of Medicine (1992). Emerging Infections: Microbial Threats to Health in the United States. Washington, D.C.: National Academy Press.
tell patients and what the alternatives are. Three or four states now have Gann Acts. Could we somehow compare outcomes in those places that have Gann Acts for specific kinds of procedures with outcomes in states that do not?

Kenrad Nelson: I would like to raise a question related to blood bank practices that follows on the previous question. Autologous donations have now been promoted by physicians and the general public. This may be an important way to reduce the risks of transfusion. But also there has been some movement to increase the use of directed donation. That is, you have a relative or somebody who specifically donates for you or for a given person. Some studies suggest that the prevalence of markers of hepatitis, HIV, and other infectious agents is significantly higher among the directed donors than it is among the general pool of donors. It relates to the fact that if you have pressure from a relative to donate a unit of blood for the family you may be less inclined to admit a behavioral risk for HIV. Therefore, the risk from a directed donation is higher.

I was recently visited by a lawyer who was representing a patient who developed hepatitis C after a transfusion and now had chronic hepatitis and cirrhosis. His legal argument was going to be that the person was not told by the blood bank or by the surgeon about the possibility of a directed donation. If he had known, he could have made the decision to ask a relative to donate and this would have been safer. I said, yes, he could have done that, but the risk would have been higher from a directed donation. I know that when many blood banks have a directed donation and the intended recipient does not need the unit, the blood bank will not use the blood for another patient, even though it has been tested and has gone through the same screening procedure as all other donations. What is the ethical duty of the blood bank with regard to directed donations?
Jonathan Moreno: Do you accept directed donation in New York? What are your ethics?

Comment from the audience: What do you do when the patient does not need it? It is discarded.

Kenrad Nelson: But you feel a duty to inform people of that option, and if you did not inform them of that option, would that not be unethical? You did not inform them that something was more dangerous, essentially.

Celso Bianco: We do, but there is conflicting information. Maybe the issue is not just the right to know, but the right to do something about it when you know. That is what the hemophiliacs have demanded. It is very interesting because although you gave the example of tuberculosis versus HIV, I think a slightly better example is syphilis and AIDS. New York Public
Health Law requires that when we identify an individual who is positive for syphilis, we must notify the City Department of Health within 24 hours, but we do not notify for HIV.

Jonathan Moreno: We do for AIDS.

Celso Bianco: But for HIV-positive individuals, whom blood banks see far more often than AIDS patients, there is no communication. Why is it ethically acceptable not to notify (and thus not protect the partner from infection) in the case of HIV, even though notification is a legal requirement in the case of syphilis?

Jonathan Moreno: The blood bankers in my hospital in New York have said to me that they do not like autologous donation because it means there are more quantities of blood around that must be stored. They also think that it sends the wrong message because you reinforce the notion that people must protect themselves from the blood supply. That may be something like Dr. Broder's answer: I might oppose it as a matter of policy but prefer it in my own case. I cannot justify the course that New York City has decided to take except to say, with respect to HIV or AIDS informing, as well as syphilis at this point, that if you are talking about contact tracing for HIV, the fact is that you are talking about hundreds of thousands of people. While it might have been a practical issue 8 or 10 years ago, I do not think anyone would find it practical anymore. It is ironic because, as many people who work with AIDS patients have said to me, when you are symptomatic, you feel like sex a lot less than when you are just infected. It does seem to turn things upside down a little bit.

Richard K. Spence: I would like to comment on the first issue that was raised. Tom Zuck mentioned that California has the Gann Act. New Jersey also has a Blood Safety Act that is an unmodified version of what California had enough good sense to change.
In thinking about what I have to do in my daily practice in informed consent in terms of transfusion risk and alternatives, I see a dichotomy between what you talk about, Dr. Moreno, with hyperinforming patients, and what Dr. Chess talked about in terms of knowing your audience, with practical implications for practice. What I find is that patients who are going to have a surgical procedure basically want to know whether they are going to get through this procedure without something harmful happening to them. We start with whether or not they are going to need a transfusion. If I can tell them that I am going to take out their gallbladder and that, in my experience, they are not going to need a transfusion, they are thankful that they
do not have to worry about that. However, if I am going to repair an aneurysm, they very possibly will need a transfusion. Rather than hyperinform them, because it will take half an hour and scare them out of the room, I tell them that there are risks associated with this. They could die from the transfusion, or get an infection. These things could happen. I find that people know that because they have read about it. What they want to know is what we can do to minimize the risk. That is when we talk about autologous donation and so forth. I do not hyperinform them. I do not want to try to change their perceptions with numbers, because they know there are risks of dying, getting AIDS, whatever. There is no point arguing that getting hepatitis is more important than getting HIV from this. I find it much easier to comply with the requirements of the law by knowing what the audience wants, listening to them, and responding to them with feedback rather than giving them a whole detailed list of what goes on.

Jonathan Moreno: I did not mean anything in particular by hyperinforming with respect to the way that the message should be communicated. I will leave that to the people who know how to communicate the message. I did not mean just throwing in the numbers, for example. I meant that if there is a question about whether people in our society would want to know about a risk, err on the side of caution and let them know about the risk, however that message is framed.

Caron Chess: Ethically, if one has an obligation to hyperinform, does that mean that there is an obligation on the part of the source to make sure that he or she is understood, or only to inform, whether it is understood or not?

Jonathan Moreno: There is certainly an obligation to try to ascertain within a reasonable range of confidence and certainty whether somebody understands the information communicated or not.
The question is to what extreme does one have to go, how much confidence does one have to have, and how much information does one need to be convinced that the individual understands. Those are not questions to which I have any simple answers. What we say in the clinical setting could perhaps be applied here and follows from what you just said. If people can understand the implications of what you have told them for their possible alternative futures, they can express themselves with respect to what they want for their lives, how they want to function, and that is about as much as we can expect. It probably satisfies our obligation. Obviously, we should not expect that people will be able to recite all the data in exactly the way that they have been given it in order to understand it.

Caron Chess: Various people have various needs to know. For instance, if you were the physician for an epidemiologist and you offered more
138 BLOOD AND BLOOD PRODUCTS: SAFETY AND RISK

information, you would probably be taken up on your offer. Someone else might never want that information. Often we think of these situations in terms of dichotomies when there are multiple options.

Jonathan Moreno: Some individuals in bioethics have suggested recently that one way to deal with the issue of how much to inform is to think of this as a tiered process of one, two, or even three tiers. For a surgical procedure, for example, you would give a relatively concise explanation, with the patient then given an explicit invitation to ask questions. If the patient asked questions, there is another tier to this information. Perhaps the third tier could even involve written material. With this tiered approach you can satisfy what most people want to know, that is, how can they get fixed, and be consistent with your moral obligations in our society to inform them, without going to an extreme.

M. Granger Morgan: I want to make three simple observations. First, there were a number of requests for prescriptive advice with respect to the issue of communication. My advice is to review studies of what people know and believe about the blood supply and the risks associated with transfusion. I would look very carefully at the quality of the methods that underlie those studies. Before I could say anything much more prescriptively, I would want some empirical studies of that sort to be done. Second, I would add a warning to the advice Dr. Chess gave about going to the public health communication community. A lot of that community has not adopted the empirical style that I was talking about. If you do go to that community, be certain that you find a member who has a strong empirical commitment. Third, a comment about the role of information. Both at the individual level and certainly at the organizational level, decision making is a noisy, stochastic process influenced by many different things.
For example, my carry-on bag has a lightweight smoke hood that I carry with me when I fly on airplanes. It was about a decade between the time I first learned what a smoke hood was and when I finally exerted the effort to find where they were commercially available and order one. There are a couple of other things like that I know I ought to do but have not. The point is, I would never take the second step absent the first, that is, the knowledge. Information does have an effect. It may be an effect roughly like the imposed field in Brownian motion. It is all moving around, but then the information can, in the context of all of this movement, produce a slow and perceptible drift. I would argue that telling your story repeatedly is important, even if it does not produce any obvious short-term consequence.

Question from the audience: If I understood you correctly, you place autonomy, the rights of the individual, first. How do you balance that in the
national arena where we are constantly dealing with the rights of society as a whole?

Jonathan Moreno: I tried to convey my discomfort with the contemporary bioethical framework at the same time I was suggesting that it is pretty much all we have in terms of a well-developed conceptual scheme at this point. An autonomy-based ethical theory, in fact, comports pretty much with what people think about these things. The American public arguably is going to change over the years, but for the most part it does conform to the way people think about ethical issues in our society. That being the case, it is important to be sensitive to the way people think about rights as compared with communal interests. However, at the same time, we are having to be more sensitive to problems, costs, and the equitable distribution of health care services from resources that are strained. It is not the ethicists who are going to solve the problem you have brought up. It is the fiscal realities that will drive us to find rationales within our liberal political framework that will help us to explain why what we are doing is okay. For example, we have already rationalized constraining the health care options that people have in HMO settings by saying that it is consistent with the liberal marketplace. We found a rationale that is consistent with what we think as liberal people in the marketplace. HMOs compete and that is okay. Individual choice has to be constrained because that is the way the market works. We find ways to adapt our public philosophy to real-world constraints. However, we are not very good either in bioethical theory or in our society at figuring out how to strike the balance that you point out needs somehow to be struck.

M. Granger Morgan: Nor are we likely to be internally consistent in the way we behave.
We may behave in one way much of the time in the name of efficiency, but then when a particular event is identified, we behave in economic terms quite differently because we are trying, as a demonstration of our humanity or something, to make some different point. Just as decision making is a stochastic process in the context of the kinds of decisions I was talking about, policy on issues of this sort is a stochastic process. You can be absolutely certain that the one thing we will not be is consistent in the way in which we deal with all issues, nor is it even absolutely clear that we ought to be.

Jonathan Moreno: An example of how this has worked fairly successfully recently is the way that Oregon designed its health care decisions bill that finally became law. First, they asked doctors to rank procedures in terms of benefit. Then they reranked them in terms of a benefit/cost ratio analysis. They had a long list of 400 or 500 procedures, the idea being that the ones
higher on the list would be funded first. Then they went to groups of people in the communities and asked them how they would rank these procedures. This step rankled a lot of health care experts because what looked like a really scientific process was now becoming infused with the irrational personal preferences of people in the community. But it proved to be a consensus builder. It is very successful and popular in Oregon. They found a way to marry these communal considerations with individual preferences and it built a consensus, convincing people that the system was responding to their wishes.

M. Granger Morgan: To recast it in multiattribute utility terms, it says that things other than efficiency matter. Equity and other concerns also count.

Question from the audience: Under most circumstances blood bankers have no contact with the recipients of the blood that they collect and manage. Instead, that communication is in the hands of others who have their own professional relationship. Do you see either any practical obligations or practical opportunities for information sharing between the blood banking community and the recipient community?

Jonathan Moreno: A few years ago this was a subject of great discussion at a meeting of the American Association of Blood Banks and there was a general agreement that that ought to happen. It seems to be needed.

Beatrice Pierce: A lot of what you have talked about in terms of trust, how that was lost and how it can be rebuilt, really strikes home to the hemophilia community. We in that community are hyperreactive now. Some want to know every little thing. Others are back to where it is all up to the physician. It is an individual process as to how much a patient wants to know, how much needs to be known, and how much is maybe just too confusing. A lot of times it is a long process to make a decision, and the amount of information that is retained in one visit can be very small.
It is very important to have the avenues of communication open so that when the patient calls for the fifth time and asks a slight variation on the same question, that patient gets a very respectful answer. Within our community a statistic of transmission of HIV of one in hundreds of thousands is not real. What is real is having a friend, a family member, or someone in the community die, on the average of one a week. So, it is biased, and it is skewed, and those skews are very hard to overcome with your statistics. When you talk about rebuilding trust, you cannot really talk about numbers. You have to talk about open communication. If you do not know, admit that you do not know. Look at the fear that is there and the fact that a lot of trust was put in physicians and the system but for various
reasons those physicians and that system did not come through for their patients. You have to look at all that when you are talking about rebuilding the trust within the community. You are seeing a rebound effect as we saw with the Creutzfeldt-Jakob disease (CJD) items that have come up. Within the hemophilia community it was a kneejerk response. Now there is something else in there to worry about. Even though we have the data indicating that CJD probably is not transmitted through plasma derivatives, we cannot forget that we have heard these reassurances before.

Celso Bianco: There is a large dissociation between what we disclose and what people get through the filters that they have. Take, for example, the question of the directed donation. We may think it is worse; therefore, we disclose to the people that it is not as safe, probably, as the autologous or the homologous transfusion. However, the people are using other filters. How do we communicate the right information? How can we convince the hemophilia community that CJD is very different from what HIV or other viruses were when all the mistrust is there? How do we turn it around?

Question from the audience: I have a question having to do with the statement that bad news has more credibility than good news and that sources of bad news are more reliable than sources of good news. I wonder what it is about our profession that has resulted in this sad state of affairs. What do we understand about the inherent cynicism in the people to whom we are speaking? How do we account for that? Do we understand something about human nature that would explain that, and if we do, short of press censorship, how can we make sure that bad news is not the news that gets all the spotlights?

Caron Chess: Based on the research that I know, it is at least Western, or American, human nature. The question is, how can you deal with the positive information you have so that people are aware of it?
I think a number of different ways have been mentioned. Notwithstanding the fact that I said information is not the end-all and be-all, people do continue to need the information. Other things have also been mentioned: the effects of physicians in dealing with their patients, the willingness to respond to questions, and the willingness to disclose information even when it may not be economically or in other ways beneficial to the source of the information. I think that transmitting information and using the media as your primary conduit may not serve you with certain audiences. You may need to think about how you convey information or develop a dialogue with audiences who are better reached outside of the media. There is not only a responsibility to disclose information, but in terms of trust, the process of disclosure may also be important. Too often agencies go to reporters, and the people who are at
risk find out about it through the media, rather than the agencies going to the local community that has a vested interest. Thus, the agencies lose trust because people ask, "Why do I have to find out about this through the newspaper?" The best way to handle disclosure is an empirical question and it needs empirical testing. The process, not just the information, is probably quite critical.

Jonathan Moreno: One strategy for rebuilding trust in your services is suggested by Theodore Lowi, a historian and political scientist. He points out that in the modern world, communities organize themselves around common characteristics that are nontraditional as a basis for community. Hemophiliacs are an interesting example. The more traditional ways of organizing are in terms of geographical proximity, ethnic or racial similarity, or perhaps class similarity. It is important for us to be aware of that fact. What would happen 2 years from now if people in the Hemophilia Foundation who are active in that community were enthusiastically supportive and confident in the blood donation system? What if they felt so much a part of what was going on and had such great confidence in what you were doing that they became ambassadors for the system? It would be quite impressive to hear people who do have routine experience be enthusiastic about the safety of the system, the openness of the professionals, and the competence of the management of these services. To take from Lowi's view, it is important in every field to know who the relevant communities are and appreciate that they may not be the ones that you would traditionally identify as a community. They may be the ones who organize themselves around characteristics that are especially important to your field, and they may be the first group with whom you forge an alliance and the first group with whom you might in fact be able to rebuild trust.

M.
Granger Morgan: The previous question implied that there was something wrong with paying more attention to bad information or bad news than to good news. We know that most individuals and organizations always try to cast themselves in positive and good lights. When you do hear a bit of bad news, it tends to be a somewhat rare event, and so people tend to pay attention. Reporters are aware of this, and they use it as the vehicle to make stories more interesting. At the individual level, there is a certain underlying efficiency in paying a bit more attention to the bad news when it comes our way, as it may serve to protect us from individuals and organizations who are overly committed to their positive images.
Patients, Informed Consent, and the Health Care Team

David J. Rothman

The patient belongs inside the calculation of risk, not simply because at this point in time one really has no choice, but because it may well expand the nature of the deliberations. The patient will bring a different perspective and may help to clarify some of the choices. I will explore two cases, one of which involves trying to minimize risk and the other of which illustrates how patient involvement may have the impact of having patients live with more risk, provided that other benefits are available. The two cases that I am going to take are, first, the model of the institutional review board (IRB), in which risk calculation is fundamental both to the origins of IRBs and to their deliberations. The second concerns the shift from radical mastectomy to lumpectomy in terms of patient involvement and risk tolerance. Let me start with human experimentation. Until the early 1970s the calculation of risk to the subject in terms of any investigational intervention was left completely in the hands of the investigator. At the National Institutes of Health (NIH) there were some rules governing the use of normal controls. Normal controls should be informed about the research, and so forth, and there were some rudimentary comments about what the subjects should be told, but for the most part the subject was thought of as a patient. The investigator was thought of as physician, and the calculus on risks and benefits was left in the hands of a single investigator. There was really no need to collaborate with colleagues and certainly no expectation of collaborating with subjects. The investigator knew the risks, was to make a risk/benefit calculus, and proceed accordingly. By the mid-1960s this calculation of risk by the investigator alone was under attack from Beecher, Pappworth, and others.103

103Rothman, D.J. (1991). Strangers at the Bedside. New York: Basic Books.
One of the classic cases that became notorious involved Chester Southam from Cornell Medical School, New York Hospital, conducting research at the Brooklyn Jewish Chronic Disease Hospital.104 He was interested in the body's rejection of cancer cells and thought that he could begin to solve some of the puzzles of cancer. He injected demented senile old men with foreign cancer cells. When asked why he did this without telling the subjects that he was going to be injecting them with cancer cells, he said that if he would have told them that it was cancer cells, they would have become much too frightened, would have exaggerated the risk, and would have refused to join the protocol. Because he didn't want to lose his subjects, he did not communicate to them. At one point he was asked why he didn't also inject himself, as was frequently the case with researchers. He responded that, "There aren't enough good cancer investigators." This is one of the problems of leaving it to the investigator to determine the risks. It was decided, with NIH as the driving force, that we would no longer leave risk calculus to the investigator on the grounds that the investigator was not a neutral figure in this calculation. The investigator's first charge was to accumulate knowledge. Whether the motive was knowledge, the next grant, or the prize, investigators as a rule underestimated the risks to which they exposed their subjects. The two major functions of the IRB are to calculate the risk-benefit ratio and ultimately to ensure that the consent process is implemented in a way that those risks and benefits are clearly communicated to the subject. What we have done in the area of human experimentation is to require a collective judgment on the risk by the investigator's peers, with a few other people also involved.
Equally important, the IRB insists that that finding on risk and benefit be told to the subject, so that the subject has the last word. The difficulties, the imperfections, and the impossibility of fully informing the subject are clear and apparent. The question is how far down the disclosure of risks do you go? When consent in this process was first started in the early 1970s, you could find spoofs in the major journals. The spoofs derided consent by listing every conceivable risk, using circumcision as the test case. In the end, we now require communication about risk to the subject, and we are going about it in a very decentralized way, leaving it to local IRBs, and to individual patients to calculate the nature of how much risk should be taken. Were Creutzfeldt-Jakob disease part and parcel of an experimental protocol, knowing what we know now, would that risk be communicated to the subject? The answer to that would really depend ultimately on individual

104Ibid., pp. 77-93.
IRBs. They might bend over backwards in an experimental protocol and say, "Yes, that ought to be included." They probably would not rule out an experiment simply on the basis of that unknown risk being present, but they might. The key point is that they certainly would want the subject ultimately to make the decision about the risk. You are not going to find an existing national body that is going to make that decision on its own. You will find in elements of risk calculus that we have accepted informed consent as the basis for approval. We anticipate through IRB and consent that ultimately we do share risk analysis collectively and ultimately trust the choice made by the subject. We now tend on the whole in human experimentation to be risk minimizing. In the area of research, communication of risk, sharing information about risk, and calculating risk against benefit is not merely standard, it is mandated, and however imperfect in practice, it is altogether the requirement. If you look at the clinical setting and allow me my choice of the case of radical mastectomy versus lumpectomy, you begin to find a second and very interesting model in terms of risk calculus. The decision in favor of radical mastectomy as almost the only procedure to be used in all cases of breast cancer lasted well into the late 1970s and even into the early 1980s. The ultimate reason was the surgeon's response of risk minimization. The shift from radical mastectomy to lumpectomy is complicated. Parts of the story involve radiology and oncology, as well as psychiatry. No roster of participants who moved us from mastectomy to lumpectomy would be complete without talking about women and patients' groups themselves. It would not be an exaggeration to say that although the change certainly was abetted, encouraged, and finally conclusively established in the territory of medicine, the role of women activists was absolutely critical.
There the risk calculus shifted the other way, weighed against the psychological and physical pains of a radical mastectomy. Where the choice turned out to be a realistic one, many women were prepared to carry the additional burden of risk in return for the benefit of not undergoing radical mastectomy. That conversation between a surgeon or oncologist and woman patient is as complicated a conversation, medically and emotionally speaking, as one can imagine. The variables that go into the risks of each procedure are very exquisite. It is very difficult, intellectually and emotionally, to parse out the risks, and yet it is being done all the time. In this day and age for a breast surgeon in this country to say, as I recently heard from a physician in Israel, "This is much too complicated, I am simply going to make the decision and not involve the woman," would be absurd. It may well be that an older patient or a patient overwhelmed by the diagnosis may turn to her surgeon and say, "I don't want to hear about it. Do what you want to do." That certainly happens. The right to say no to information certainly is to be respected, but
the standard is surely to share the information about risk, although it is complicated. In the area of clinical practice, we have clearly reached the point where sharing the decision-making about risk after intensive conversation is standard. It is fairly apparent that clinically many physicians have great difficulty communicating risks to their patients. Some are paternalistic. Physicians on the whole do not do a very good job communicating about risk. There is a fear that if they tell a patient all the risks, the patient won't be compliant. Physicians may also be trying to be protective of patients by trying not to worry them. It seems if you tell the patient there is a side effect, that patient will call you tomorrow and report the side effect. In summary, it is easy to spoof risk disclosure and point to the flaws, but I would suggest on the basis of my two models that the benefits of doing it far outweigh whatever the problems are. Furthermore, physicians probably don't have a lot of choice any longer. In experimentation, they certainly don't. Dealing with women and breast cancer, they don't. The list is going to get longer, not shorter. You may agree or disagree with one or another of my arguments, but in the end it is moving in exactly the direction I have described.

DISCUSSION

Paul Russell: There is no question that we must strive toward informed consent. There is also no doubt that it is a very flawed process and that we cannot get to total informed consent. Our responsibility lies with the issue of diminishing risks as you get out toward the end of the risk curve. It is difficult, but it is an important thing to do. I am concerned about it. How far should I go in telling people about risks?

David Rothman: In an experimental setting, I would think you would want to go further. In a therapeutic setting, the patient gives you cues. I am sure older patients, or maybe those less educated, might press you less.
Paul Russell: I go as far as the patient wants to go.

David Rothman: You are not going to bludgeon the patient with information, but if the patient is a long-distance runner and there is a 5 percent risk that this surgery is going to keep him off the track for the rest of his life, you might well want to tell him that. If the patient that comes in is rather corpulent, beer drinking, and poker playing, the 5 percent risk to a running career would be irrelevant. You will tailor it. You are going to go down the major items and
then, depending on that patient's tolerance for information, you go down below the normal list to include those that you think would be particularly relevant for the patient.

Paul Russell: I think that is a helpful guide, but it doesn't get me all the way.

Edward A. Dauer: In those instances in which a patient's perception or evaluation of a risk is not consonant with its actual quantity, is it the physician's responsibility to respect the patient's evaluation as it is or to try to correct that perception and bring it back into line with the actual quantification?

David Rothman: The answer to that is absolutely yes, you have to bring it back into line. No patient is going to ever get to exercise informed consent on his own unless the physician helps him realize it. You have to correct. A patient may have a fear that is completely out of perspective. You will give the information. What do you then do when a patient still says, "No thank you"? You try a second time, but it may not be appropriate to try the fifth time. That you must make an effort is absolutely clear, but in the end when you are content that that patient has understood you, then it remains the patient's choice.

Llewellys Barker: By telling patients what the risks are, the practical approach is very appropriate. I have been on both sides of this process. In your mind, does what we tell patients the risks are equate with "these are the risks we tolerate"?

David Rothman: I have a lot of trouble with "we." Is that the five people who are sitting in the lab or the five people at an NIH consensus conference? I might prefer "we" to become "I." The point of my remarks is to personalize it, individualize it, and communicate it. The "we tolerate" formulation may make it too easy not to communicate.
Communication of Risk and Uncertainty to Patients

Donald Colburn

I wear a lot of hats in the world of hemophilia. I have hemophilia, severe Factor VIII deficiency. I also have HIV disease, and I happen to be president and chief executive officer of American Homecare Federation, which is a company that provides service to people with hemophilia. In addition, I am currently serving as a member of the National Hemophilia Foundation's (NHF) Blood Products Monitoring Committee, and I am chairing the legislative campaign for passage of the Ricky Ray Relief Act. I am going to discuss the general concept of blood safety and take a look at how we currently educate folks. Having been on the receiving end of that education, I would say that I have not experienced the inclusion of the patient in decision making that you have discussed. The attitude of the physicians is still, "Do what I say. This is good for you. If you don't do it, you will be more ill." With respect to blood and blood products, I believe we should be moving toward a signed informed consent. We can certainly talk about how far we go in levels of exploring uncertain risks. Unfortunately, the process of sharing the information necessary to institute informed consent is time-consuming, and, as we know, the health care system is already operating with a lack of quality time spent with patients. The challenge we have is to devise a mechanism that will allow blood bankers, for instance, who are very interested in blood safety issues to serve a broader role in the process of educating patients. There has been a great deal of discussion about the pamphlet that is put out by the American Association of Blood Banks (AABB) in conjunction with the Council of Community Blood Centers (CCBC) and the Red Cross. Designed for recipients of cellular blood products, it is supposed to mimic a package insert as far as information.
To date I have not found one person with hemophilia or another condition who received this pamphlet at the time of receiving a cellular product. Even that mechanism isn't being utilized. What can we do that will allow some "right of information" for that patient? For
starters, we can provide signed and informed consent with chronic users of blood products on an annualized basis. In addition, we first must make information understandable for people with various levels of education. The second thing that we have to do is to present the information in an unbiased manner. Then we have to be certain that the patient is up to date with the information when he makes that decision. Once there has been uncertainty about a particular product, that uncertainty hinders communication even more, as we experienced with HIV. The patient, as the blood products' consumer, should have the ultimate role in the decision. Without that type of participation, without informed consent that somebody has on file somewhere, the whole thing becomes bogus if a disease is transmitted. To sum up, it is important, tremendously important, that we present materials to people in an unbiased fashion that they can understand. One would like to think that the patient would have many more questions, especially in a situation involving chronic use. As far as communicating the risk of uncertainty, that is a tough one. Blood goes through various processes which eradicate most diseases most of the time. However, there are many conditions that are not tested for and that we don't know about. Those are unknown risks. It is important that we don't panic the person, but that he or she understands that there will always be risks in any medication. Even recombinant products have a risk factor. That is part of life. We have to make a sincere effort to bring the patient into the loop, and in some areas it works very well. It works really well in discussion groups like this, but when you get back out into the field you often hear remarks from folks like, "Doc just said that this is what I should do," remarks which reinforce the continuing absence of shared information and informed consent.
At NHF we are working very hard to empower our consumers to come back to the medical community with a lot of questions. We think it is important. We have had a very severe lesson as a result of our own complacency.

DISCUSSION

Paul Russell: Having the patient specifically sign makes a lot of sense, and that is really pretty much what we are trying to do throughout. In the case of transfusion it could be applied a little bit more broadly and repeatedly in the case of patients like those you are familiar with.

Harvey Klein: Informed consent for blood transfusion has been national policy since 1986. I don't know whether that is standard of care in the outside world and whether every institution practices that, but the AABB, for example,
has had that written form widely disseminated to every one of its members since 1986. The circular of information, the so-called package insert, was not really developed for patient use any more than the package inserts in the drugs that you get. The package inserts were developed to inform the physicians, whose job it is to inform the patient who receives a blood transfusion.

Paul Schmidt: There was a study about 3 years ago in which a group of investigators doing informed consent talked to patients after surgery and then some months later. The patients didn't remember what had been discussed, and their understanding was different.

Donald Colburn: You are always going to have that obstacle. My emphasis is on the chronic user of medical services. There the time is importantly spent, because that patient is going to take up a lot of the medical services' time. The better informed that person is, the better the relationship will be. The difficulty that we have observed in the hemophilia community is that when there is a perception that there has not been the opportunity to participate in the therapeutic regimen decision, then all the other systems become distrusted. That breach, as perceived by the individual, causes a great deal of harm, even if the patient would not have remembered everything he or she was told.

Henrik Bendixen: The point is that informed consent is not a freestanding event but should be part of the continuum of interaction between physician and patient.

David Rothman: The World Wide Web on the Internet is becoming a real player. A colleague of ours talked about the fact that his patients ask where the newest protocol is. The AIDS community brought us into that consumer-based knowledge concerning protocol availability. If you don't have a protocol going, they are not coming to your institution.
Although I know those stories about how if you poll the patient 3 months later he doesn't remember the consent form, there are lots of newer stories about patient groups that are all on the Internet communicating with each other in terms of where the best protocol is.

Harold Sox: At Dartmouth there has been a reverse epidemic of operations for benign prostatic hypertrophy as a result of Wennberg and Mulley's work.105 They developed video disks that convey in an individualized way

105Kasper, JF, AG Mulley Jr, JE Wennberg (1992). Developing shared decision-making programs to improve the quality of health care (comments). Quality Review Bulletin, 18(6): 183-190.
the risks and benefits of a procedure with respect to the various therapeutic options. As a result patients bring a lot more information to the discussion with the physician. The process of the discussion is at least enlightened and may even be truncated somewhat. This might be another instance in which annual provision of informed consent might be in order.

James Reilly: Whether we are a blood center providing blood to the transfusion center or a product manufacturer providing products to a treatment facility, where in your view is the communication breakdown? Is it that we are not providing enough to the physician, or that the physician is not communicating to the patient?

Donald Colburn: I did ask a number of blood bankers why that circular isn't used, for instance, because it is supposed to go up with every unit. I started to probe in that direction and the responses I got were amazing: "When we first got them we started sending them up, and all of a sudden I got calls from about five or six different hematologists: 'What the hell are you sending up to my patients? If I want them to know anything, I will tell them.'" The overall problem is in physician communication to the patient. I see it as time management, because it takes a long time to be certain someone is educated.

Frederick Manning: One of the projects I have been involved in here at the Institute of Medicine involved an experimental drug and the whole question of informed consent: how much is enough; how much should you tell the patients. It turned out the drug did not work out all that well. There was a lot of talk from patients about what they should have been told. In the process we did a little poll of the committee, who were all relatively eminent clinical researchers.
The question was whether they could recall in their careers ever having a potential subject in one of their experiments who listened to their pitch, read the informed consent form, and said, "No, I am not interested." In fact, none of them could think of a single instance in which that happened. For the audience today, I would ask you to think back and tell us how many times you have had a patient leave the office after you described what the risks were. Maybe there are some inherent limitations on how scared you can make a patient who has already come to you and decided that he or she is going to trust you.

Paul Russell: I have certainly had patients decide against surgery after I explained the risks. We have immensely complicated clinical protocols that have to be applied to almost every patient we see. When they come in for an organ transplant they may have to sign about half a dozen consent forms. It is not uncommon to have one or two of them be inappropriate. Their refusal
may not be because they are scared, but because it is an immensely complicated setting. I think you are thinking of a setting with a simple, straightforward, single question. In the situations that I deal with, there is an awful lot of experimentation of all kinds, one superimposed on top of another. There are patients who are in several protocols at once.

David Rothman: There are protocols out there that don't get to enroll patients. There will be a research team to which the treating physician may or may not dispatch his or her patient. For example, at our institution the investigator who really wants to figure out whether bone marrow transplantation has any efficacy in breast cancer cannot get patients to enroll, because if you want it, it is out there to be had, and if you don't want it, you are not going to leave yourself to a random draw. No one goes into the protocol.

Elaine Eyster: There are two very different types of protocols. One is the type that has been referred to, in which we have a variety of experimental studies for persons who are infected with HIV. We present those and patients may or may not choose to go on that protocol. It doesn't interfere with our relationship with them, but they have that choice. On the other hand, we have a routine informed consent that everybody signs and re-signs on some sort of interim basis saying that they agree to participate in a study in which their blood can be used for certain purposes. Most patients will accept that as part of what we do, but others will say, "No, I don't care to do that." That wish is honored, and they are not included in whatever study you are doing.

William Sherwood: There is also a great deal of informed consent that is really for comfort. Virtually all hospitals instituted an informed consent for transfusion, particularly prior to surgery.
There are a couple of paragraphs given to the patient at a time when that patient has a lot of other things to sign, and the patient is under a good deal of stress, having surgery the next day or the next morning. What we are searching for is just how far to go with informed consent. An interplay with the patient to try to find out how far to go was suggested, but I had trouble with that. I would like to be more uniform. I would like to be able to tell all patients what the potentials are and what I should be doing. Is there a way to stratify these risks (proven, potential, and theoretical), and where should we draw those lines? It is difficult to handpick something for each patient when you know in the long run that the patient will be disaffected in some way and later come back and say, "Why didn't you tell me about that risk?" I cannot come back and say to them, "I didn't think you needed to know it." We need more guidance on how deep to go in the risk spectrum in helping patients understand.
David Rothman: When an institutional review board (IRB) does a risk assessment for informed consent, it lists the major potential side effects by percentage; it keeps going down to maybe include some in the 10 percent range, but it probably doesn't go down to the 1 percent range. I don't know that all the patients read it, but they are certainly going to read the major untoward effects. The kinds of complaints that often end up being told to me are ones in which the patient has been given a prescription, but the ordinary side effect was not mentioned to the patient. The patient experiences it, calls the physician the next day, and the physician says, "Relax, it is one of the side effects," to which the response is, "So, why didn't you tell me?" We need to be between the extremes of parsing it out to 1 percent and telling a patient, "Look, you are going to be taking this. Here are the major side effects."

Donald Colburn: I have signed off on admission to a hospital on a blood form. What you sign is incredible. If you sit there and challenge the document, you become a troublesome patient at that point. If you say, "I don't understand this," the person who is doing the intake to put you into the hospital is a clerk for the most part and responds, "Sign these forms." You ask, "What happens if I don't sign them?" and the clerk says, "We cannot put you in the hospital." You sign the forms. There is some degree of pressure that is put on the patient at that stage, in addition to whatever else may have been communicated.

Harvey Klein: Many good hospitals don't consider informed consent a form given at the time of intake with a clerk. That really isn't informed consent.

David Rothman: Exactly. What could be a worse time? We have some data on hospitals trying to do advance directives at the time of admission. Could there be a worse time to do anything, especially because somebody knows you are coming?
James Allen: When you talk about listing potential adverse side effects in the 10 percent range, but not the 1 percent range, is that a 10 percent risk of occurrence versus a 1 percent risk of occurrence? Surely, for serious effects such as the risk of HIV infection post-transfusion, a risk of one in several tens of thousands is considered highly significant and certainly a "must notify."

David Rothman: You are absolutely right.