The Importance of Context in Healthcare Decision Making
An overarching theme of the workshop discussion was that the ultimate goal of improving benefit–risk assessment and communication is to enable “better” healthcare decision making. Dr. Fendrick observed that there are two outcomes of better decisions: (1) reducing patient risk by decreasing the use of drugs that people want to take but wouldn’t take if they were better informed; and (2) enhancing benefit by increasing the use of drugs that people don’t want to take now but would if they were well informed. Participants identified and debated major constraints of the current system that hinder patients (and physicians) from making optimal decisions.
ON-THE-SPOT DECISION MAKING
Dr. Slovic argued that understanding the psychology of judgment and decision making is critical to effectively designing, presenting, and utilizing pharmaceutical benefit–risk information. To determine how people perceive and assess benefit–risk relationships, he listed several assertions to consider:
There are different types of decisions about the benefits and risks of pharmaceuticals.
Risk is not a well-defined concept, and cavalier use of the word may contribute to the challenges associated with communicating benefit and risk information.
When faced with a benefit–risk decision, people tend to behave intuitively, sensing the qualities of whatever is being decided and then integrating those qualities quickly and automatically.
Patient perception of benefit and risk is just one of many factors at play when a decision is made about drug usage.
When forced to confront trade-offs, people become uncomfortable and may use a simple rule to determine the decision or avoid making the decision altogether.
Lastly, people must acquire and comprehend benefit–risk information before they can process it.
Dr. Slovic raised the question, Assuming that a patient acquires and comprehends this information, how does he or she make decisions involving benefit and risk? He argued that rarely do ordinary individuals explicitly calculate benefit–risk trade-offs when making a decision. Patients make on-the-spot, experiential decisions that are influenced by a complicated set of interacting factors, such as physician decision (the physician is making the decision about what is best), patient perception of benefit and risk (e.g., associating high benefit with low or zero risk), and innumeracy. Patients rely on their own knowledge, feelings, and memories when constructing preferences, and the way that information is presented or framed can readily alter their decisions. There are no neutral frames, so this poses a tremendous challenge to communicating benefit–risk information. Every presentation of information creates a bias one way or another, and whoever frames the decision inevitably manipulates the choice.
Dr. Slovic discussed affect, one of the many powerful elements of preference construction. He defined affect as a valenced feeling (e.g., goodness or badness) associated with a stimulus. It involves the processing of feelings associated with stimuli in what is known as the “experiential mode” of thinking, in contrast to the “analytic mode.” These two types of thinking—experiential and analytic—reside side-by-side in our brains and play off each other in “the dance of affect and reason.” Researchers are currently trying to understand how these two ways of thinking interact and have demonstrated thus far that experiential decision making increases with innumeracy, cognitive load (e.g., the complexity of the task and information), stress (e.g., time pressure), age, and the accompaniment of affect-rich images with the information. Studies have also demonstrated that although, in reality, risk and benefit are generally positively correlated, in people’s minds they tend to be strongly negatively correlated. This is because people judge benefits and risks based on feelings, with beneficial activities typically associated with lower risk.
Innumeracy is another major factor in preference construction—it has been associated with lower comprehension, greater framing effects in decision making, a greater influence of affect and emotion on decision making, and drawing less meaning from numbers. Innumeracy raises the question, How should a clinician convey risk information to a patient—as a relative frequency, percentage, or otherwise? Dr. Slovic suggested that the answer depends on how the communicator wants to bias the patient, for example, whether he or she wants the patient to become worried or remain calm.
Dr. Ubel corroborated Dr. Slovic’s thesis: a range of contextual factors affect people’s perceptions of risk versus benefit and guide decision making. Dr. Ubel posed several hypothetical benefit–risk choices to the audience and described what he and his colleagues have learned from posing similar choices in controlled studies. Their findings reflect the reality that benefit–risk decision making depends on subtle contextual factors:
Feelings (or, as Dr. Slovic called it, “affect”): People do not always make rational use of information about their preferences or the risk(s) at hand. Rather, the way people feel about a decision or risk(s) guides their decision making.
Guessing: If somebody has already guessed or imagined what the risk of something is before knowing what the actual risk is, he or she will feel differently about the risk (e.g., anxious versus relieved) and make a different decision accordingly.
Type of information provided: Patients’ perceptions of risk, and the decisions they make, depend on what they are told about average risks for the population at large and on whether that comparison leads them to perceive their own risk as high or low.
Emotional salience of possible outcomes: Many possible outcomes have emotional salience, which affects how people think about risks and how they use probability information (or don’t use it) in their decision making. (For example, colostomies and diarrhea are “icky” things that elicit emotions and affect people’s decision making about treatment options.)
Labels and words: Some words and terms scare people (e.g., “mad cow disease”), whereas others do not (e.g., “bovine spongiform encephalopathy”), influencing decision making.
Where the risk is located: People make different decisions when the risk is “external” (e.g., risks associated with vaccines) than when the risk is “internal” (e.g., a tumor).
Betrayal aversion: People make different decisions if they have felt betrayed in the past by something that should have protected them.
Knowledge about the risk: As uncertainty about the risk of doing something—or not doing something—increases, the influence of contextual factors increases. If nobody can pin down the probabilities, then all of these other factors are going to drive the decision making even more.
Dr. Ubel pointed to the need for improved risk communication. The fact that context is so important raises the question, How can benefit–risk decision making be improved?
Reframing the Context
Dr. Jasanoff stated that arguing context matters, as Drs. Slovic and Ubel have done, is only the beginning of a discussion. The next question is, What context? She emphasized the importance of understanding where responsibility for benefit–risk decisions lies. While our legal system gives informed, competent patients the ultimate decision-making power with regard to which therapy to choose, there are also some legally regulated associative responsibilities that lie elsewhere. For example, companies are responsible for producing beneficial products, regulatory agencies for enforcing a certain level of safety, and physicians for implementing the standards.
Dr. Jasanoff discussed how several decades of social science research on risk have led to the finding that many regulators and other people with associative responsibilities for benefit–risk decisions operate under the rules of what is known as the “deficit model of the public.” This model is based on several assumptions: (1) Public risk perceptions are influenced by systematic cognitive biases, (2) These cognitive biases produce erroneous assessments of probability, and (3) These erroneous assessments of probability lead to incorrect weightings of relative risks and benefits, which need correction through appropriate expert advice.
By contrast, the legal system presupposes something that does not resemble this deficit model at all. U.S. law operates on the assumption that the public is a constantly learning, evolving entity composed of citizens who are knowledgeable, informed, and capable of absorbing evidence. Based on this notion of the knowledgeable citizen, the rights of the public include the right to know, a patient’s right to give informed consent, the right to demand reasons of our agencies, the right to participate and offer expertise, the right to challenge irrational decision making,
and the right to appeal judicial rulings. This “public-under-law” model is extraordinarily important to the functioning of a democratic society. Under this model, we assume that lay people are capable of understanding and critically evaluating complex technical information. They must continually be learning in order to assert the rights of citizenship in our modern knowledge-based society. We also believe that lay people have nonbiased perspectives, knowledge, and insights that are essential for good decision making and that ought to be incorporated into decision-making processes.
Dr. Jasanoff described two different ways to imagine the public’s involvement in benefit–risk decision making. First is the “education model,” in which somebody knows best or better, and somebody else needs to be brought up to speed. Under this model, the choices are to some extent framed in advance, with the expert controlling the style of communication and the objective being to get the most rational outcome—rationality being defined in relation to quantitative outcome measures. Second is the “engagement model,” under which public involvement is based on the notion that citizens are continually learning. Under this framework, choices are not framed in advance but rather are framed through dialogue. The way that the information is conveyed is targeted toward evolving questions and is not controlled by the expert. For example, in situations where patients are paralyzed by too much benefit–risk information, if the right kind of dialogic environment were selected, then perhaps counseling could help the patient get past this paralysis. The objective is not to get the most rational outcome from the perspective of an agency (e.g., in terms of how much money is appropriate to spend) but to get the most beneficial outcome for the patient.
Dr. Jasanoff proposed that the following contextual factors be considered when thinking about how to improve benefit–risk decision making: view the public as partners, not antagonists, at all levels (regulatory, physician–patient interaction); express uncertainty and ignorance; diversify communication strategies; adopt an experimental approach to approval, communication, and learning (rather than a marketing approach), including a postapproval means of providing feedback and implementing corrections; and improve our sense of responsibility, given that we do not live in a zero-risk world and that people will inevitably get hurt.
Engaging the Patient
The notion of engaging patients in the decision-making process, as Dr. Jasanoff discussed, raised a question about how this could be done. Specifically, who should frame the information, and how should that
information be communicated to the public? Under what circumstances and to what extent should physicians have detailed quantitative discussions with their patients about the risks and benefits of a drug or procedure? Dr. Slovic said that it is a hard question to answer because there are so many different types of publics and patients. Some patients don’t want that information, others wouldn’t know how to use it, and still others want to know everything. Dr. Jasanoff emphasized the lack of time as a limiting factor in decision making, noting that many patients may not have time to consult with their families. She said that many patients may not be given the opportunity to indicate in what context they would like to receive benefit–risk information—for example, hearing it verbally versus seeing it visually. Dr. Ubel reflected on how difficult it is to explain risks and benefits in clinical settings, particularly when office visits are so short. Physicians need to be aware of innumeracy and other factors that influence patient perception of risk, and the medical curriculum needs to be improved to help doctors become better communicators.
Dr. Leiden concluded the session by observing that despite the complexities and difficulties of benefit–risk decision making, there is a need to provide the public with much better education about risk and benefit concepts so that patients become more involved in the decision-making process, regardless of how the information is presented.
RATIONAL DECISION MAKING AND UTILITY ASSESSMENT
Dr. Sox stressed the importance of helping patients make rational decisions. He introduced a model for rational choice known as “expected value decision making” and used a Las Vegas slot-machine metaphor to explain the model. While a gambler’s winnings are unpredictable, given that he or she is playing a game of chance only a few times, the owners of the slot machines are in a different position. Given that their machine is played tens of thousands of times a year, their winnings are predictable. Dr. Sox likened the experience to that of a patient with a given illness. While the patient’s outcome is unpredictable, given that he or she is experiencing an unpredictable situation only once, the physician will pick the treatment that has worked in the largest number of patients over the course of his or her career. While they cannot guarantee an outcome, physicians maximize patients’ chances of having the best possible outcomes by choosing decision options with the highest expected values. Like the slot machine owner, the physician is an expected value decision maker.
Dr. Sox then explained how a “decision tree” is used to make rational decisions based on the expected value decision-making model. He described two ways to present the outcomes—a tree format and a balance-sheet tabular format. The problem with both of these methodologies, however, is that they express outcomes in terms of life expectancies, and a year of health (e.g., after being cured) is given the same value as a year of illness. Based on the premise that sick years are not worth as much as healthy years, however, outcome states should have different values. To account for the difference in quality of life, life expectancy must be multiplied by utility to yield quality-adjusted life-years (QALYs). Utility is a measure of preference that takes into account how the patient feels about the outcome state. Either average utilities (averaged across all patients) or personal utilities (an individual’s personal preferences) can be used to calculate QALYs.
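The arithmetic behind expected-value decision making with quality adjustment can be sketched in a few lines. The following is an illustrative sketch only, not an example presented at the workshop; all probabilities, life expectancies, and utilities are invented for the purpose of showing the calculation.

```python
# Illustrative sketch of expected-value decision making with QALYs.
# All numbers below are hypothetical, chosen only to show the arithmetic.

def expected_qalys(branches):
    """Expected QALYs for one decision option.

    branches: list of (probability, life_expectancy_years, utility) tuples,
    one per chance outcome; the probabilities must sum to 1.
    """
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9
    return sum(p * years * utility for p, years, utility in branches)

# Option A: surgery -- small operative risk, good outcome if it succeeds.
surgery = [
    (0.02, 0.0, 0.0),    # operative death
    (0.88, 12.0, 0.9),   # cure: 12 years at utility 0.9
    (0.10, 12.0, 0.6),   # complication: 12 years at utility 0.6
]

# Option B: medical therapy -- no operative risk, lower quality of life.
medical = [
    (1.00, 11.0, 0.7),   # 11 years at utility 0.7
]

print(f"Surgery: {expected_qalys(surgery):.2f} expected QALYs")
print(f"Medical: {expected_qalys(medical):.2f} expected QALYs")
```

Note that on raw life expectancy alone, medical therapy (11 years) looks close to surgery; weighting each year by utility is what separates the options, which is exactly the point of the QALY adjustment.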
Dr. Sox demonstrated how utilities are used to measure QALYs, using data from a study by Nease and colleagues (Nease et al. 1995). He discussed different methods for estimating utility, including the “standard reference gamble,” the “time trade-off method,” and the use of a linear scale. He concluded by discussing the challenges of measuring utility and emphasizing that despite these challenges, quantifying attitudes toward health states (measuring utility) is doable.
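The elicitation methods named above reduce to simple arithmetic once the patient's indifference point is known. The sketch below is illustrative only; the indifference answers (8 years, a probability of 0.85) are hypothetical.

```python
# Two common utility-elicitation methods, reduced to their arithmetic.
# The indifference points used here are hypothetical.

def time_tradeoff_utility(healthy_years, ill_years):
    """Time trade-off: the patient is indifferent between `healthy_years`
    in full health and `ill_years` in the illness state. The implied
    utility of the illness state is the ratio of the two."""
    return healthy_years / ill_years

def standard_gamble_utility(p_full_health):
    """Standard reference gamble: the patient is indifferent between the
    illness state for certain and a gamble offering full health with
    probability `p_full_health` (otherwise immediate death). The
    indifference probability itself is the utility."""
    return p_full_health

# A patient who would trade 10 years of illness for 8 healthy years
# implicitly values the illness state at utility 0.8.
print(time_tradeoff_utility(8, 10))

# A patient indifferent at an 85 percent chance of cure values the
# illness state at utility 0.85.
print(standard_gamble_utility(0.85))
```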
Patient-Centered Approach to Decision Making
Dr. Spetzler elaborated on the principles that underlie benefit–risk decision making:
The system should be patient-centered and should empower the patient.
Instantaneous consumer responses gathered in an experiment are not necessarily the same decisions that would be made by that consumer as a patient. Most treatment decisions include family members and other trusted advisers.
Decision making is not equal across individuals. While some people learn the basics of good decision making through experience, others do not.
Treatment information should be decision-friendly—patients and their advisers should clearly understand the likely consequences of each alternative, and the preferences of the patient should be respected, even if they are judged unstable by their advisers.
In a patient-centered approach, drug benefit–risk decision making is usually within the frame of a broader treatment decision that likely includes non-drug options. Every alternative needs to be considered, including “do nothing.” While it may be impossible to include consideration of all of the alternatives in a package insert, one option might be to have some kind of independent information organization or rating agency provide comparative information.
Lastly, Dr. Spetzler suggested that if the system does not accomplish the above, then the system should be changed.
Dr. Spetzler argued that in order to make a good decision, a patient must be able to answer the following questions:
What is it that I am deciding, and why? If the frame changes, the decision changes.
What are my choices? Within that frame, there must be creative, doable alternatives.
What do I know? There must be meaningful, reliable information for each alternative. The Prescription Drug Facts Box provides information about one but not all alternatives. The information should be forward-looking since, although it is based on past experience, it is there to enable the decision maker to make predictions about the consequences of the decision.
What consequences do I care about? There must be clear values and preferences, or utilities.
Am I thinking straight about this? There must be logically correct reasoning—a way to sort through the alternatives, information, and personal preferences in a world of uncertainty and risk and derive a choice that gives the decision maker the most of what he or she wants. Ultimately, however, decisions are not purely rational; a good decision makes sense and feels right. People combine their heads and hearts, or the psychosocial and analytical, in decision making all the time. We have to know how to line those dimensions up—how to engage patients and go through the reasoning with them.
Will I act? There must be commitment to follow through. Much of this depends on whether a patient owns the decision.
Dr. Spetzler said that financial decision making is a good analogy for medical decision making. He noted that the financial industry has rating agencies, such as Dun & Bradstreet (D&B) and Standard & Poor’s (S&P), and that the drug industry could do the same. He argued that there is no reason that the FDA should bear responsibility for this; in fact, distance from the regulatory agencies might make it easier to ensure that decisions are truly customer-focused. He concluded by arguing that the challenge is to bring this information to people who are not very numerate, because we are not going to change the fact that most people are math-phobic.
How Patients Make Decisions About Therapy
Dr. Schulman presented three case studies representing typical treatment decision-making situations: a 70-year-old healthy female patient
who refuses flu vaccination because she thinks she will get sick from the vaccine; a 40-year-old male patient with new-onset malignancy who chooses experimental therapy with a high risk of toxicity; and a 60-year-old female patient with new-onset heart failure after a previous heart attack who is offered implantable cardiac defibrillator therapy as well as a new experimental heart failure medication. He then went on to describe how these three different people might consider their treatment options.
He noted that there is often a huge difference between a physician’s review of the data and how a patient perceives the data, and that expectations about what is going to happen to a patient’s life change when his or her physician presents new information about the prognosis. He and his colleagues postulated that this type of change in one’s position (receiving a new prognosis) can change the decision-making process, and they constructed a model based on this premise, which they called the “health stock risk adjustment model.”
Dr. Schulman explained how, within a prospect theory framework, the model can be used to predict whether a patient is making a treatment decision under a condition of gains (risk aversion; more interest in avoiding risk than gaining benefit; not much toleration for uncertainty around risk) or losses (risk seeking; more interest in benefits than in risks; will tolerate uncertainty around benefits).
He explained how this approach can be used to predict treatment decisions of his three case study patients: (1) The 70-year-old woman is making a decision under a condition of gains and therefore is going to be incredibly conservative and focused on the toxicity issues. (2) The cancer patient is making a decision under a condition of losses, unless he has accommodated to his prognosis such that the presentation of new information doesn’t change his perception of what life is going to be like. (3) The heart failure patient could be making a decision under either gains or losses, depending on whether she readjusted to her health state following the previous heart attack.
Dr. Schulman concluded by emphasizing that patient expectations and evaluations of risk and benefit vary across disease categories and indications. He suggested that, as clinical trials are performed, research also be conducted to determine how people make trade-offs between risks and benefits, as well as how much uncertainty (in benefit and risk measurements) they will tolerate.
Limiting FDA Authority and Policy
Mr. Hutt argued that the FDA’s authority should be limited to three main functions:
Assess potential harm, as it currently does.
Determine the probability that the drug may benefit one or more patients. The focus should be on the individual patient, not the population.
Require that all of this information be provided in detail in the best way possible (e.g., in a physician brochure, which might include more detailed scientific elements, and in a mandatory patient brochure).
If the FDA’s authority were limited to these functions, a drug would be approved once a point is reached at which there are enough data to assess safety and risk and enough data to evaluate benefit or lack of benefit. The benefit–risk decision would be given back to the patient and the patient’s physician, which is where the decision was originally placed under the 1906 Food and Drugs Act and the 1938 Federal Food, Drug, and Cosmetic Act.
Mr. Hutt provided a brief overview of the history of the FDA statute, noting that it was not until 1962 that the FDA became responsible for making benefit–risk decisions, even though the FDA’s legal and congressional mandate to evaluate benefits and risks did not change. He argued that the transfer of benefit–risk judgment back to the individual patient would not change anything the FDA does with respect to analyzing either safety or effectiveness. Indeed, it would increase the amount of information made available to the American public and the people who need the information in order to make personal decisions. There would be full, complete disclosure to physicians and patients, with the FDA retaining power to prevent the marketing of outright poisons and to prohibit the marketing of drugs with no efficacy data or where there is no difference in outcome between the test drug and a placebo. Efficacy data would be presented to the public such that individual patients could decide whether they want to accept the risk in order to gain the possibility—not probability—of benefit.
Mr. Hutt discussed several reasons why this approach of limited FDA authority should be adopted. First, it respects the autonomy and humanity of every individual citizen. Paternalism is not a high value in our country, and the FDA has lost eight straight court cases because it has been accused of unnecessary paternalism. As Dr. Hall pointed out in his discussion of food risks, we each make our own decisions when we choose which foods to eat, given the information on nutrition made available. Drugs should be no different. Just because a patient doesn’t follow advice doesn’t mean that the patient is making a wrong decision. It may be the right choice for that patient. Second, when the FDA decides that it is not in the public interest to permit a drug to be marketed—taking the decision away from the individual—this can be a death warrant for that
individual. Third, it would encourage industry to pursue the development of products that may have unanticipated uses, which often become the most important uses of many products. Under the current system, the development of drugs that demonstrate slight toxicity or that do not show great benefit early on is discontinued. Fourth, it would eliminate the current stranglehold of statistics over drug development. Patients don’t care whether the p-value is 0.05 or 0.5. While we do need to let the consumer know what the odds are, if we calculate and present that information in a brochure, patients should be allowed to make their own choice. Finally, he asserted that a shift in paradigm would take the FDA out of the uncomfortable position it is currently in—of deciding who lives and who dies. That was never intended, and it is one of the contributing factors to the serious downgrading of the FDA’s credibility and trust in this country.
Paternalism Versus Libertarianism
Mr. Hutt’s presentation elicited much debate. Dr. Strom remarked that he has dramatically less confidence than Mr. Hutt does in an altered, “libertarian” approach for several reasons. Dr. Strom said he has much less faith in the ability of physicians to understand the data. He remarked that he spends much of his time educating physicians to use safety data rationally. The problem stems partly from physicians not being aware of the data, partly from marketing pressure, and partly from physicians not knowing how to interpret the data. Additionally, Dr. Strom said he does not think that patients are able to balance the benefit and risk information correctly. In fact, this is why physicians go through the training that they do—to be able to make that kind of judgment. He pointed to Vioxx and said that the problem was poor prescribing. Most of the people who were prescribed the drug were not in the patient group for whom it was intended—patients who could not take NSAIDs (nonsteroidal anti-inflammatory drugs). If the patients taking Vioxx had taken NSAIDs instead, they would not have been exposed to the risk and Vioxx would not have been withdrawn. He argued that we cannot rely on the marketplace to make decisions about benefit and risk.
Mr. Hutt responded by arguing that, under his proposed changes, drug products would still need to go through an FDA approval process, which would include determination of risk and benefit. The only difference between the current system and his proposed system is in who makes the judgment as to whether a drug can or cannot be used. With regard to whether consumers and physicians can understand all of the benefit–risk information, he observed that the same argument was made with regard to nutrition labeling—that people will not understand the information and will eat the wrong foods for the wrong reasons. Yet if
you do not put the information on the label, Mr. Hutt argued, nobody will have a chance to understand the information. He referred to Dr. Jasanoff’s argument about people having personal views about what is right and wrong and that our citizens are capable of understanding basic issues.
The discussion turned to statistics. Mr. Hutt argued that while the current system should retain its rigorous evaluation of safety and benefit, it must break out of its “statistical stranglehold.” When statistics dominate the entire drug regulatory approval process, he argued, the end result is distorted because it does not account for individual variability: people who could benefit from a drug never benefit. He argued that if, instead of statistics, we could rely more on labeling, people would have greater free choice.
Dr. Strom responded that while he agrees about the concern for losing variability in the “tyranny of the mean,” it is important to differentiate between throwing out all statistics and throwing out incorrectly done statistics. He cited pharmacogenomics as a good example of where there are a priori reasons why you would expect a subgroup to react differently and where analyses of means would give the wrong answer. With respect to labeling, he argued that studies have shown that current labeling does not change behavior. He said that other approaches may change behavior but we cannot assume that they work (as we assumed for so many decades that labels worked). The burden is on those who want to use an approach to prove that it works before pursuing it and expecting that physicians are going to prescribe correctly. Even in controlled settings, such as university hospitals, educational efforts to change prescribing behavior do not work. Until proven otherwise, the only way we can change prescribing is by changing availability of the drug.
Mr. Hutt asked whether a cancer drug with limited statistical efficacy (for example, a 0.2 p-value) would automatically be disapproved, even if there were no other drug available for that cancer. Dr. Strom replied that the drug should be available on a compassionate investigational new drug basis to select individuals, and that those individuals should be included in studies to determine whether the drug works or not. Mr. Hutt expressed concern that if the manufacturer were a small biotech company without the resources to provide the drug at no cost, patients who would otherwise benefit would be dying. Dr. Strom said that under those circumstances, society would decide to make the drug available through the National Cancer Institute, for example, or another organization. Mr. Hutt replied that, still, patients would need to wait, so that is not a good enough answer.
There was a comment that nobody had challenged Mr. Hutt in his assertion that we should approve drugs even if just one patient has the possibility of gaining benefit. The questioner argued that this means that
essentially every drug would be approved but without any reassurance of a reasonable expectation of benefit, and that we would be exposing people to the burden of risk with potentially a false hope of efficacy. Trust in the process would erode, and companies could have greater liability in situations where they had not rigorously evaluated benefit. The question then becomes, “Who in society bears the cost?” Mr. Hutt responded by emphasizing, again, that approval would require separation in the clinical trial between the active agent and a placebo. Dr. Strom remarked that, by chance alone, 50 percent of studies of a drug that does nothing would be positive in that direction—showing benefit. Mr. Hutt said that the p-value ought to be in the labeling so that patients know the results of the clinical trial. With regard to false hope, Mr. Hutt reemphasized that his proposal is based on freedom of choice and the assumption that people are intelligent and capable of being educated.
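Dr. Strom's point about chance findings can be checked with a quick simulation (not part of the workshop discussion; the trial size and outcome measure here are arbitrary): when a drug has no effect at all, its arm still outperforms placebo numerically in about half of trials.

```python
# Simulate trials of a drug with NO true effect: both arms draw the same
# continuous outcome. By symmetry, the drug arm's mean beats the placebo
# arm's mean in roughly half of trials -- a "positive direction" by chance.
import random

random.seed(0)

def trial_favors_drug(n_per_arm=100):
    """One trial of an ineffective drug; True if the drug arm's mean
    outcome happens to exceed the placebo arm's mean."""
    drug_mean = sum(random.gauss(0, 1) for _ in range(n_per_arm)) / n_per_arm
    placebo_mean = sum(random.gauss(0, 1) for _ in range(n_per_arm)) / n_per_arm
    return drug_mean > placebo_mean

n_trials = 10_000
favoring = sum(trial_favors_drug() for _ in range(n_trials))
print(f"{favoring / n_trials:.0%} of null trials favored the drug")
```

This is why a directional trend alone, without a conventional significance threshold, offers little reassurance that a drug does anything.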
Dr. Slavin remarked that the debate between Mr. Hutt and Dr. Strom misses the point. Rather than dissipating our energies in deciding who is going to make decisions about whether drugs can be used or not, there are more immediate issues such as risk communication and trust that can and should be addressed now.