Conflict of Interest in Medical Research, Education, and Practice

D

How Psychological Research Can Inform Policies for Dealing with Conflicts of Interest in Medicine

Jason Dana*

* Jason Dana, Ph.D., is professor of psychology in the Department of Psychology, University of Pennsylvania, Philadelphia.

Physicians take an altruistic pledge to put their patients' interests ahead of their own in clinical practice. Likewise, medical researchers have a professional obligation to conduct their research ethically in their search for truth. A conflict of interest is a set of circumstances that creates a substantial risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest. Although the information in this report can be applicable to many types of conflict of interest, it focuses on financial conflicts of interest, which can occur when medical professionals interact with the pharmaceutical industry. For example, when physicians accept support for clinical research or continuing education programs, accept consultantships and appointments to industry-sponsored speakers bureaus, or have informal meetings with pharmaceutical sales representatives who buy lunch and bring drug samples, there is concern about the impact of these relationships on prescribing behaviors and professional responsibilities (Marco et al., 2006).

The purpose of this paper is to bring basic psychological research to bear on understanding financial conflicts of interest in medicine and on dealing with those conflicts effectively. A particular focus will be research on self-serving biases in judgments of what is fair. This research shows that when individuals stand to gain by reaching a particular conclusion, they tend to unconsciously and unintentionally weigh evidence in a biased fashion that favors that conclusion. Furthermore, the process of weighing evidence can happen beneath the individual's level of awareness, such that a biased individual will sincerely claim objectivity. Application of this research to medical conflicts of interest suggests that physicians who strive to maintain objectivity and policy makers who seek to limit the negative effects of physician-industry interaction face a number of challenges. This research explains how even well-intentioned individuals can succumb to conflicts of interest and why the effects of conflicts of interest are so insidious and difficult to combat.

The section Unconscious and Unintentional Bias describes the psychological research on bias in more detail and makes clearer its relevance to financial conflicts of interest. The section Parallel Evidence in the Medical Literature then provides a brief review demonstrating the correspondence between the findings from studies of conflicts of interest in the medical field and the findings from basic studies of bias in the field of psychology. The section Implications for Policies Dealing with Medical Conflicts of Interest details for policy makers how approaches including educational initiatives, mandatory disclosure, penalties, and limits on the size or type of gifts can be informed by the psychological bias literature. The section Methods and Limitations of the Data briefly addresses the propriety of applying psychological experiments to professionalism in medicine. Finally, a conclusions section summarizes what can be learned from the psychological literature.

UNCONSCIOUS AND UNINTENTIONAL BIAS

One intuitive view of financial conflicts of interest is that the physicians who are swayed by them are corrupt. Physicians have taken an oath to put their professional obligations first, so if they are indeed influenced by private financial incentives, they have chosen not to uphold that oath.
Although there may indeed be a minority of individuals who are fundamentally corrupt, most physicians certainly try to uphold ethical standards. This intuition is implicit in the guidelines set forth by the American Medical Association, the American College of Physicians, and the self-imposed guidelines of the Pharmaceutical Manufacturers Association, all of which stress that gifts accepted by physicians should primarily entail a benefit to patients and should not be of substantial value, suggesting that the temptation to provide or accept large or personal gifts is a concern. This view perhaps explains why suggestions that physician relationships with the pharmaceutical industry are problematic can elicit hostility from some physicians. Understandably, most physicians see themselves as ethical people who would not place their objectivity for sale, and so they believe that they can be trusted to navigate these conflicts when dealing with industry. Compounding matters, many enticements from industry are of relatively small financial value. This prompts responses that physicians are "above sacrificing their self-esteem for penlights" (Hume, 1990) or that if panelists on a scientific committee are influenced by receiving reimbursement for travel and expenses, someone "bought their opinions" and "they obviously come cheap" (Coyne, 2005).

This view is also compatible with an orthodox economic approach, which casts succumbing to conflicts of interest as the rational output of a cost-benefit calculation. In that case, solutions to problems of conflicts of interest would involve better monitoring and punishment, ideally to the point at which ethical lapses would be too costly to indulge.

Evidence from psychology offers a different view, one in which our judgments may be distorted or biased in ways of which we are unaware. Some of the most compelling evidence of bias comes in the domain of optimism about the self. There is, for example, much evidence that people engage in self-deception that enhances their views of their own abilities (Gilovich, 1991). One of the most oft-cited and humorous examples of self-enhancement comes from a study reporting that 90 percent of people thought they were better drivers than the average driver (Svenson, 1981). Such biases have been dubbed "self-serving" (Miller and Ross, 1975) when they lead one to take credit for good outcomes and blame bad outcomes on external sources. Although an unrealistic optimism about the self is sometimes adaptive and healthy (Taylor and Brown, 1988), these biases can lead to judgments that are unwise or unjust in situations in which we are epistemically responsible for being correct. Perhaps most relevant to the issue of financial conflicts of interest are well-known self-serving biases in the interpretation of what allocations are fair or just.
A classic demonstration of self-serving bias in fairness judgments comes from a study by van Avermaet (reported by Messick, 1985). Subjects were instructed to fill out questionnaires until they were told to stop. When the subjects finished, the experimenter left them with money that they could use to pay themselves and, in an envelope, to send pay to another subject who had already left. Each subject was told one of four things: (1) the other subject had put in half as much time and had completed half as many questionnaires, (2) the other subject had put in half as much time but had completed twice as many questionnaires, (3) the other subject had put in twice as much time but had completed half as many questionnaires, or (4) the other subject had put in twice as much time and had completed twice as many questionnaires. It is first worth noting that almost everyone took the trouble to send the other person a share of the money, even though they were free to keep it all; indeed, it was not clear to the author that the rare cases of nonreturn were due to anything more than a mistake or a lost envelope. Clearly, the subjects' sense of ethics served as a powerful constraint on their behavior: keeping all of the money would be unjustifiably selfish and unfair because the other subject had at least done similar work, so most subjects shared it.

How they shared the money, however, provides an interesting insight into human nature. The subjects who worked twice as long and completed twice as much kept twice as much money, on average, a simple application of a merit principle to pay. The subjects also kept more than half of the money, however, both in the condition in which they worked longer but completed less and in the condition in which they completed more work but did not work as long. Again, their behavior was consistent with a merit principle, but the principle chosen, on average, systematically favored the subject making the allocation. Finally, when the subjects completed only half as much work and worked only half as long, they did not, on average, give the other subject twice as much money. Instead, the subjects kept about half of the money, on average, consistent with a rule of equal division rather than merit.

What we can take away from the van Avermaet study is that most people are not unabashedly selfish; they have a sense of what is fair and tend to abide by it. Yet that does not mean that judgments of fairness are not systematically biased to favor the self. When people are free to choose among competing principles of fair behavior, they tend to gravitate toward those principles that most favor their own interests. Other early experiments similarly found that interpretations of fair allocations of pay are self-servingly biased (Messick and Sentis, 1979). One potential shortcoming of these experiments, however, is that they used a survey methodology. Thus, the subjects' self-interest was imagined, and they had no motivation to honestly report what they thought was fair.
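The arithmetic behind the van Avermaet pattern can be made concrete with a short sketch. The code below is illustrative only: the normalized pot, the condition labels, and the "pick the most favorable principle" rule are assumptions used to reproduce the reported averages, not the study's actual materials or procedure. It computes the share the allocator keeps under each defensible principle (merit by time, merit by output, equal split) and shows that choosing the most self-favoring principle yields the pattern described above.

```python
# Illustrative sketch of the van Avermaet allocation conditions.
# Inputs are the other subject's time and output relative to the
# allocator's own (1.0 = same as self); the pot is normalized to 1.

def merit_share(self_input: float, other_input: float) -> float:
    """Share of the pot the allocator keeps under a merit principle."""
    return self_input / (self_input + other_input)

def equal_share() -> float:
    """Share kept under a simple equal-division principle."""
    return 0.5

conditions = {
    "half time, half questionnaires":   {"time": 0.5, "output": 0.5},
    "half time, twice questionnaires":  {"time": 0.5, "output": 2.0},
    "twice time, half questionnaires":  {"time": 2.0, "output": 0.5},
    "twice time, twice questionnaires": {"time": 2.0, "output": 2.0},
}

for name, other in conditions.items():
    by_time   = merit_share(1.0, other["time"])    # merit judged by hours
    by_output = merit_share(1.0, other["output"])  # merit judged by surveys
    # The self-serving pattern: gravitate toward whichever defensible
    # principle leaves the allocator the largest share.
    kept = max(by_time, by_output, equal_share())
    print(f"{name}: time={by_time:.2f}, output={by_output:.2f}, "
          f"equal=0.50 -> self-serving choice keeps {kept:.2f}")
```

In every mixed condition the most favorable merit principle keeps two-thirds of the pot, and in the condition where the allocator did half the work on both dimensions, equal division (keeping half) becomes the most favorable defensible rule, matching the reported results.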
Thus, although it is apparent that the subjects had malleable interpretations of what was fair, it is not always clear whether these interpretations reflected a genuine bias or, for example, a strategic effort on the part of the subjects. In the latter case, one wonders whether sufficient compensation would erase the effect. A series of experiments by behavioral economists (Loewenstein et al., 1992; Babcock et al., 1995) addresses this problem through the use of real money incentives without deception and establishes that self-serving interpretations can arise as unwitting and unintentional biases.

Simulating pretrial bargaining, Loewenstein et al. (1992) conducted bargaining experiments in which subjects were presented with case materials (depositions, police reports, etc.) from an actual lawsuit. The subjects were randomly assigned to the role of either the plaintiff or the defendant and were asked to negotiate a settlement in the form of a payment from the defendant to the plaintiff. At the outset, the experimenters gave the defendants a monetary endowment to finance the settlement, and the division of the endowment that the subjects agreed upon through bargaining was what they took home as pay. The longer it took the parties to agree to a settlement, the more both were penalized by having the endowment of money that they were dividing shrink. If they failed to settle, the defendant's payment to the plaintiff, based on the smaller endowment, was determined by a neutral judge who had reviewed all of the case materials. Before they negotiated, both the plaintiffs and the defendants were asked to predict how the neutral judge would rule in the case and were paid for the accuracy of this prediction.

The subjects in this experiment had every incentive to be objective in seeking a settlement; if their demands were unreasonable, the pot of money would only shrink, and ultimately the award would be determined by a neutral and informed party. If the subjects' estimates of a fair settlement were biased in a self-serving manner, however, they might be inclined to view the other party's offer as unjust and unacceptable. Indeed, the subjects were often unable to settle, to their own detriment. Direct evidence that the self-serving bias played a role in this failure to settle came from the predictions of the judge's ruling. The plaintiffs' predictions of the judge's award were, on average, substantially higher than the defendants', even though the estimates were secret and had no bearing on the settlement and both parties were paid to be accurate. Furthermore, the larger the discrepancy between a particular plaintiff's and defendant's estimates, the lower their likelihood of settlement, and hence both left the experiment worse off in terms of payment. This evidence suggests that self-serving biases are unintentional: people are often unable to avoid being biased even when it is in their best interest to do so.

In subsequent experiments that used the same paradigm (Babcock et al., 1995), the settlement rates were markedly improved by assigning subjects their roles only after they had read the transcripts.
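The link between self-serving estimates and bargaining impasse can be sketched with a toy simulation. Everything in the model below is an assumption for illustration (the award value, the noise level, the settle-on-overlap rule); it is not the design of the Loewenstein et al. experiment. It shows only the qualitative point: as each side's estimate of the "fair" award shifts in its own favor, the zone of agreement shrinks and settlement becomes rarer, even though failing to settle hurts both parties.

```python
import random

def settlement_rate(bias: float, n: int = 10_000, seed: int = 0) -> float:
    """Toy model: each party's estimate of the judge's award is a noisy
    signal around the true value, shifted by `bias` in a self-serving
    direction (plaintiff up, defendant down). The parties settle only
    when the plaintiff's estimate does not exceed the defendant's,
    i.e., when their views of a fair award overlap."""
    rng = random.Random(seed)
    true_award = 50.0  # arbitrary units
    settled = 0
    for _ in range(n):
        plaintiff_est = true_award + bias + rng.gauss(0, 10)
        defendant_est = true_award - bias + rng.gauss(0, 10)
        if plaintiff_est <= defendant_est:
            settled += 1
    return settled / n

for b in (0.0, 5.0, 15.0):
    print(f"self-serving shift {b:>4.1f}: settlement rate {settlement_rate(b):.0%}")
```

With no bias the two estimates overlap about half the time; modest self-serving shifts sharply cut the settlement rate, mirroring the finding that larger plaintiff-defendant discrepancies predicted failure to settle.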
In this way, any motivation to interpret evidence as favorable to one side over the other while reading and evaluating the materials was removed. Without a self-interested conclusion to reach, the subjects' interpretations of fairness, as measured by predictions of the judge's ruling, looked like those of a neutral third party rather than an interested party; in principle, of course, these judgments were exactly a third party's judgments. The finding is important, however, because these subjects still faced the same bargaining task as in the earlier experiments. Thus, one cannot conclude that the majority of failures to settle were due to the subjects being overly competitive or having a poor strategy. Rather, manipulations targeting the objectivity of the fair-ruling judgment increased the settlement rates. This finding suggests that self-serving biases work by distorting the way that people seek out and weigh information when they perceive that they have a stake in the conclusion.

The motivated reasoning displayed by the subjects in the study of Loewenstein et al. (1992) confirms the general findings from social psychology research. Gilovich (1991) describes the different evidential standards that people typically use to evaluate propositions that they wish to be true versus propositions that they wish to be false. When they evaluate an agreeable proposition, people ask, "Can I believe this?" When they evaluate a disagreeable proposition, people ask, "Must I believe this?" The former question implies a more permissive evidential standard because it requires the decision maker only to seek out confirmatory evidence, whereas the latter question implies that the proposition must survive a search for disconfirming evidence.

These different evidential standards are exemplified by studies that use a variant of the classic Wason card selection task (Wason, 1966). The Wason task asks subjects to test an abstract logical rule by choosing which pieces of information they want revealed to them. An overwhelming majority of subjects, even those with high levels of formal education, fail to reason through this task properly. The most common mistake is selecting information that could confirm the rule but that is useless for testing it, while failing to select information that is necessary for testing the rule because it could disconfirm it. Dawson and colleagues (2002) modified the Wason card selection task by having subjects sometimes test hypotheses that they did not want to believe, such as hypotheses that implied their own early death. Providing motivation not to believe in this manner improved the subjects' performance relative to situations in which they were testing nonthreatening or agreeable hypotheses. This finding is interesting because it shows not only that people approach the problem differently when the hypothesis is agreeable or disagreeable but also that the proper motivations can lead them to solve problems that they are otherwise incapable of solving.
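The logic of the Wason task can be made concrete. The version below uses the standard textbook materials (cards showing E, K, 4, and 7, under the rule "if a card shows a vowel on one face, its other face shows an even number"); these specific materials are an assumption here, since the chapter does not reproduce them. The point the code makes is the one in the text: only cards whose hidden face could falsify the rule are worth turning over, and the commonly chosen even-number card is not among them.

```python
# Which cards must be turned over to test the rule
# "vowel on one side -> even number on the other"?

VOWELS = set("AEIOU")

def can_falsify(visible: str) -> bool:
    """True if the card's hidden face could disconfirm the rule."""
    if visible.isalpha():
        # A vowel could hide an odd number (falsifier); a consonant is
        # irrelevant to the rule no matter what is on its back.
        return visible.upper() in VOWELS
    # An even number can hide anything without breaking the rule, but
    # an odd number could hide a vowel (falsifier).
    return int(visible) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if can_falsify(c)])  # -> ['E', '7']
```

The logically correct picks are E and 7; the typical error described in the text is choosing E and 4, because the 4 could confirm the rule even though it can never test it.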
Thus, motivated reasoning appears to operate at a preconscious level. The "Can I?" versus "Must I?" distinction in the motivated evaluation of evidence applies to many financial conflict of interest situations. For example, a physician may evaluate evidence that a particular treatment is effective. If that physician stands to make money by prescribing that treatment, the motivation of financial gain may lead him or her to hold the evidence of the drug's effectiveness to a weaker standard.

In further studies of the self-serving bias, Babcock et al. (1995) attempted to reduce bias by educating subjects: describing to them the behavioral regularities of bias that lead to disagreement and testing them to make sure that they understood. This intervention, on average, had little success in improving settlement rates. It did help the subjects recognize bias, but mostly in their negotiating opponents rather than in themselves. Moreover, those subjects who did concede that they might be somewhat biased tended to drastically underestimate how strong their bias was. This finding suggests not only that bias is unconscious but also that conscious attention alone cannot be expected to remove it.

This finding, that teaching people about bias makes them recognize it in others but not in themselves, has since been confirmed and extended. Several studies of the "bias blind spot" (Pronin et al., 2002) have found that, for any number of cognitive and motivational biases the researchers can describe, subjects will, on average, see themselves as less subject to the bias than the "average American," classmates in a seminar, or fellow airport travelers. That is, the average subject repeatedly sees himself or herself as less biased than average, a logical impossibility in the aggregate, which suggests that self-evaluations of bias are themselves systematically biased. Furthermore, experiments have shown that when people rate themselves as less biased than the average person, they subsequently tend to insist that their ratings are objective (Pronin et al., 2002; Ehrlinger et al., 2005). Much as in the study of Loewenstein et al. (1992), this insistence persists even after the subjects read a description of how they could have been affected by the relevant bias.

Why do people recognize less bias in themselves than in others, and why does education not make this bias go away? Further studies of the bias blind spot (Ehrlinger et al., 2005; Pronin and Kugler, 2007) have identified a mechanism behind this behavior that the authors term an "introspective illusion." Having privileged access to their own thoughts, people use introspection to assess bias in themselves. Because biases like the self-serving bias operate below the level of conscious awareness, people can "see" that they are not biased; at least, they have no experience of bias and so conclude that they are not biased.
When they assess bias in others, however, people do not have the privilege of knowing what another person thought and must rely on inferences from the situation. If another's behavior is consistent with a bias, people will often conclude that the other is biased. Learning about various cognitive and motivational biases can thus exacerbate these "better-than-average" effects: people will often still hold that they are not biased because they "know" their own thoughts, but they will now know what to look for in situations that could bias others. The bias blind spot gives us one way of understanding why such strong disagreements can take place over whether conflicts of interest are problematic.

In summary, psychological research suggests that people are prone to optimistic biases about themselves. Judgments about what is fair or ethical are often biased in a self-serving fashion, leading even ethical people to behave poorly by objective standards. Self-serving bias is unconscious and unintentional, and people often fall prey to it even when they do not want to and do not know they are doing it. The bias works by influencing the way in which information is sought and evaluated when the decision maker has a stake in the conclusion (financial or otherwise). The bias thus leads to the use of laxer evidentiary standards when the decision maker wants to believe something than when the decision maker does not. Teaching about egocentric biases like the self-serving bias does little to mitigate them, because when people examine their own thinking, they do not experience themselves as being biased. People do learn to look for bias in others, however, which can lead them to conclude that others are biased while they themselves are not.

PARALLEL EVIDENCE IN THE MEDICAL LITERATURE

Medical research on conflicts of interest, such as research on attitudes about or the influence of gifts to physicians from industry, has not set out to test whether unintentional bias exists. The findings in the medical literature, however, correspond nicely with the findings from basic psychological studies of bias. This correspondence supports the idea that the model of unconscious and unintentional bias can help us understand conflicts of interest in medicine.

Most prominently, although some physicians may admit to the possibility of being influenced, physicians typically deny that they are influenced by interactions with and gifts from industry, even though research suggests otherwise (Avorn et al., 1982; Lurie et al., 1990; Orlowski and Wateska, 1992; Caudill et al., 1996; Gibbons et al., 1998; Adair and Holmgren, 2005). The question is whether these denials by and large reflect a sincere belief in one's own objectivity. Accumulating evidence suggests that physicians believe that other physicians are more likely to be influenced by gifts than they themselves are (McKinney et al., 1990). A study of medical residents (Steinman et al., 2001) found that 61 percent reported that "promotions don't influence my practice," while only 16 percent believed the same of other physicians.
Findings that residents in general believe that others are more likely than they are to be influenced by interactions with industry have been confirmed in a more recent review (Zipkin and Steinman, 2005). Morgan et al. (2006) found that for all four gifts they asked about, ranging in size from a drug sample to an offer of a well-paid consultancy based only on prescribing volume, physicians rated themselves, on average, as less likely than their colleagues to be influenced by accepting the gift. Even medical students see gifts of equal value as being more problematic for other professions than for their own (Palmisano and Edelstein, 1980).

There is even some direct evidence that physicians do not appreciate industry's influence on them. Orlowski and Wateska (1992) tracked the pharmacy inventory usage reports for two drugs after the companies producing the drugs sponsored 20 physicians at their institution to attend continuing medical education seminars. The rates of use of the drugs described at these seminars increased, both in a time series analysis of usage at the institution and in comparison with the national average rate of use during the same period. Before they attended the seminars, however, all but one of the physicians had denied that the seminars would influence their behavior. Being asked about bias should have made the physicians more aware of the potential for bias entering into the seminar, yet this did not prevent the seminar from apparently having an impact on their decisions. A retrospective study (Springarn et al., 1996) tracked house staff who attended a grand rounds sponsored by a pharmaceutical company and found that they were more likely to indicate that the company's drug was the treatment of choice than were their colleagues who had not attended the session. Interestingly, these same physicians were often not even able to recall the sponsored grand rounds, so they were not consciously aware that it had any influence on their decisions.

If conflicts of interest in medicine can indeed be understood as unconscious and unintentional, how might that affect how policy makers approach dealing with them?

IMPLICATIONS FOR POLICIES DEALING WITH MEDICAL CONFLICTS OF INTEREST

Short of eliminating conflicts of interest altogether, there are several interventions that universities, professional societies, and other policy makers frequently employ to guard against the inappropriate influence of industry on medical practice and research. These interventions may be implicitly predicated on the view that succumbing to conflicts of interest is a conscious choice, however, and thus they may have limited or surprising effects if physicians are subject to unconscious bias. The psychological research reviewed here suggests that policy makers should be cautious in their expectations of success for these policies, as they are not tailored to deal with unconscious bias.
Policy makers may also wish to consider some possible perverse consequences of these interventions.

Education

Educational initiatives can be thought of as taking two forms: substantive education in ethics, and education aimed specifically at describing and explaining institutional policies, their enforcement, and individual responsibilities.

Perhaps the biggest barrier to the effectiveness of teaching about bias specifically is the bias blind spot. Certainly, some value exists in teaching physicians about potential conflicts of interest in dealing with industry. Simply knowing about the potential for bias, however, does not prevent one from being biased. The bias blind spot research described earlier (Pronin et al., 2002) suggests that simply teaching about biases is more likely to help physicians recognize bias in other physicians than in themselves. The blind spot suggests one reason why many physicians deny that they are personally influenced by gifts from industry, despite evidence that gifts and interactions do influence decision making (e.g., Orlowski and Wateska, 1992; Caudill et al., 1996; Wazana, 2000). Even if people are taught about bias, they are still prone to it. Navigating relationships with industry and accepting gifts while remaining completely objective, then, is not a simple imperative that physicians can be easily trained to follow. Indeed, the research of Loewenstein et al. (1992) suggests that knowing about bias is not sufficient to prevent it even if one is determined to be objective. Thus, recommendations for physicians such as "If nominal gifts are accepted, make certain that they do not influence your prescribing or ordering of drugs" (Marco et al., 2006) are not practical. Perhaps a more effective use of education is to help physicians recognize which relationships lead to bias so that those relationships may be preemptively avoided. There is, however, some indication that teaching specifically about the unconscious aspect of bias could help in one respect (Pronin and Kugler, 2007): limited evidence suggests that such teaching reduces the gap between perceptions of bias in self and in others, and thus education could reduce the sharpness of disagreement about whether bias exists.

Education aimed at conveying institutional guidelines about the receipt of gifts has produced mixed results.
On the one hand, after successfully completing such educational initiatives, residents can identify practices that are appropriate and inappropriate under institutional guidelines (Brett et al., 2003; Agrawal et al., 2004; Schneider et al., 2006). On the other hand, these measures, which are mostly self-reports on surveys, do not reveal much about how residents will actually behave, and several authors have raised questions about how long lasting the effects are (Agrawal et al., 2004; Schneider et al., 2006; Carroll et al., 2007). Furthermore, familiarizing students with how to interact with industry appears to have some perverse effects of its own. Although theirs was not a study of education as such, Fitz et al. (2007) found that even though clinical and preclinical students had the same knowledge about industry, their attitudes about the appropriateness of gifts still differed, with clinical students far more likely to believe that accepting gifts is appropriate. Hyman et al. (2007) found that although students generally believed that they were not educated enough to deal with industry, students who reported feeling better educated about the pharmaceutical industry were less skeptical of the industry and more likely to view interactions with the pharmaceutical industry as appropriate. We cannot tell from this sort of self-reporting what the exact nature of the education was.

When guidelines are voluntary, many physicians interact with industry without familiarizing themselves with the guidelines. Morgan et al. (2006) found that although most physicians had contact with the pharmaceutical industry (more than 93 percent of them had received drug samples), less than two-thirds were aware of the guidelines for interaction with industry set forth by the college to which they belonged, and only one-third were familiar with the guidelines of the American Medical Association. Requiring education on the content of the guidelines might therefore be a useful intervention if many physicians are unaware of them.

Penalties

Deterring bias through punishment is more likely to be effective if people are knowingly influenced by financial considerations. The psychological research reviewed above, however, suggests that bias due to conflicts of interest often arises unconsciously and unintentionally, such that people cannot overcome it even when doing so is in their best interest. One concern, then, is that aligning self-interest with guidelines through punishment may not be as effective as we would wish. Perhaps even more difficult is establishing whether a case of bias exists. Research identifies statistical evidence of bias by analyzing aggregated sample information, ideally against some control sample. That is much different from establishing that a particular individual is biased. Law typically requires that each case be considered individually, but without adequate comparisons, it cannot be established that a physician's beliefs and practices were unduly influenced by nonproscribed relationships with industry rather than being genuine and objective.
The prospect of penalties can, of course, help deter cases of blatant corruption and may encourage conformance to policies requiring disclosure of financial interests. The vast majority of industry’s influence on physicians, however, is likely of a more nuanced nature, the result of basically ethical individuals being subtly biased. There are thus serious barriers to effective penalties. Disclosure One common policy response is to require physicians with potential conflicts of interest to disclose them to those whom they advise. In this way, patients or those hearing a presentation can consider the potential for bias, and the physician may perhaps be mindful of this when he or she enters into relationships with industry. For several reasons, this policy is
problematic, and disclosure may be largely ineffective by itself; in some instances it could even have perverse effects. As an example, consider a physician who advises a patient to pursue some treatment and discloses a possible financial conflict of interest. How should the patient rationally discount the physician's advice in light of the disclosure? Even if the physician has private incentives, it does not follow that the advice is not genuine. Furthermore, even if the physician is likely to be biased, that does not mean that the advice is incorrect. Often the patient can only take or ignore the physician's advice, and the disclosure does little to alleviate uncertainty. In addition, patients are often in a vulnerable situation with a need to trust their physicians.

Forcing the physician to disclose a possible conflict of interest may also have perverse effects. For example, once the disclosure has taken place, the physician may expect the patient to be skeptical and respond by making the message more forceful, a sort of strategic exaggeration (Cain et al., 2005). If patients metaphorically cover their ears, physicians who believe that they must get their message across will yell louder. Although the exaggerated advice may be discounted, it may still be followed. Decades of psychological research on anchoring and insufficient adjustment have shown that when judgment begins from even a random anchor that people know is irrelevant, the judgment will not be adjusted sufficiently far from that anchor.
For example, experimenters ostensibly spun a wheel of fortune that actually always landed on 10 or 65 and then asked two questions (Tversky and Kahneman, 1974): "Is the percentage of African nations in the United Nations less than or greater than the number just spun (10 or 65)?" and "What is the percentage of African nations in the United Nations?" The median response when the wheel landed on 10 was much lower (25) than the median response when it landed on 65 (45). Although subjects did adjust away from the implausible anchors they were given, they were still affected by them, even though they knew that the values of the anchors were irrelevant. This effect is one of the strongest in the judgment and decision-making literature. One implication, then, is that even if advisees know that advice is exaggerated, they will still be influenced by it.

An experimental study of the effects of disclosure found just that (Cain et al., 2005). Experimental "advisers" were asked to give advice on the worth of a jar of coins that they could examine up close and hold. Their advisees earned money by accurately guessing the value in the jar, whereas the advisers earned money by inducing higher guesses from the advisees. Perversely, when advisers had to disclose these incentives, advisees were made significantly worse off. This effect arose in part because the advisers exaggerated their advice in light of disclosure, while the advisees were unable to adjust sufficiently down from the inflated advice.
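The anchoring-and-insufficient-adjustment account of the Cain et al. (2005) result can be sketched with a stylized model. The jar value, the advisee's prior, the two advice levels, and the anchor weight below are all assumed for illustration; they are not parameters from the study:

```python
# Stylized model: an advisee's final guess is a weighted average of the
# adviser's stated value (the anchor) and the advisee's own prior estimate.
TRUE_VALUE = 10.00    # actual worth of the coin jar (assumed)
PRIOR = 8.00          # advisee's unaided estimate (assumed)
ANCHOR_WEIGHT = 0.6   # fraction of the advice that "sticks" despite discounting

def advisee_guess(advice, prior=PRIOR, w=ANCHOR_WEIGHT):
    # Insufficient adjustment: the guess never moves all the way
    # back from the anchor to the prior.
    return w * advice + (1 - w) * prior

# Without disclosure the adviser inflates modestly; with disclosure the
# adviser exaggerates to compensate for anticipated skepticism.
guess_no_disclosure = advisee_guess(advice=12.00)
guess_disclosure = advisee_guess(advice=18.00)

error_no_disclosure = abs(guess_no_disclosure - TRUE_VALUE)
error_disclosure = abs(guess_disclosure - TRUE_VALUE)
```

Whenever the anchor weight is positive, a sufficiently exaggerated post-disclosure anchor leaves the advisee further from the truth than undisclosed, modestly inflated advice would, which is the perverse effect observed in the experiment.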
Limiting Gifts by Size or Use

Policies on gifts often suggest that any gifts physicians accept individually should primarily entail a benefit to patients and should not be of substantial value. Certainly, small gifts are preferable to large gifts. Because bias is unintentional and not a matter of corruption, however, small gifts may still exert influence and should not be assumed to be benign. Katz and colleagues (2003) reviewed and synthesized a sizeable body of social science literature suggesting that small gifts induce feelings of reciprocity, get a message across by mere exposure (pens, notepads, etc.), and can be effective in changing behavior. Even the sheer ubiquity of trinkets like pens and notepads suggests that this is true: why else would profit-minded entities, which conduct market research on their practices, continue to supply them if doing so did not fetch a return?

The ethical distinction between a gift that has a primary patient benefit and one that does not, though intuitively appealing, may also be meaningless. The distinction may reveal a lack of appreciation of the fungibility of money, as first pointed out in Thaler's treatise on mental accounting (1980). For example, if a physician receives a $100 anatomical model, then he or she does not have to buy it, and that frees up $100 to spend on something else, such as a golf bag or a nice dinner. This situation is consequentially equivalent to the company giving the physician an inappropriate monetary gift, even though our intuitions may tell us that the latter is much worse because we place it in the "extravagance" account rather than the "patient care" account. The research evidence cannot tell us what is ethical, but policy makers should keep in mind that any gift is still a gift: its economic value is exchangeable whether it is received in the "extravagance" account or the "patient care" account.
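The fungibility point reduces to simple budget arithmetic. The dollar figures and the `disposable_income` helper below are hypothetical, chosen only to mirror the anatomical-model example:

```python
def disposable_income(budget, planned_purchase, in_kind_gift=0.0, cash_gift=0.0):
    # An in-kind gift offsets the planned purchase; cash simply adds to the budget.
    out_of_pocket = max(planned_purchase - in_kind_gift, 0.0)
    return budget - out_of_pocket + cash_gift

# A physician who planned to buy a $100 anatomical model from a $1,000 budget:
model_as_gift = disposable_income(1000, 100, in_kind_gift=100)  # company supplies the model
cash_instead = disposable_income(1000, 100, cash_gift=100)      # company hands over $100

# Either way, the physician ends up with the same money free to spend elsewhere.
```

Because the two outcomes are identical, the "patient benefit" label on the model changes the mental account but not the economic transfer.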
Even gifts with clear patient benefit—like the ubiquitous drug sample—have been associated with problems. Physicians and their staff frequently end up using the samples that are intended for patients (Westfall et al., 1997), which can also provide a covert means for pharmaceutical representatives to supply physicians with free medications for personal or family use. Furthermore, there is evidence that physicians with access to drug samples will end up prescribing more advertised, expensive drugs in the future (Adair and Holmgren, 2005), so that these gifts can also drive up health care costs. Limitations on the size and use of gifts may not be a bad policy in terms of limiting corruption, but there may still be influence associated with gifts that are permitted under many current policies.
METHODS AND LIMITATIONS OF THE DATA

A common problem with data from psychology experiments is that they rely heavily on college undergraduates as a sample of convenience. This reliance raises questions about the generality of the results. Although care should be taken in extrapolating from experiments conducted entirely with college students, there are reasons to take the findings on unconscious bias seriously. First, the phenomenon in question is less likely to suffer from a lack of generality because it is proposed to be a function of the human brain and does not depend much on context or experience. Because college students' brain development is largely complete, these findings should generalize to older adults. Second, absent a theory of how physicians differ from college students, there is no reason to suspect that physicians will not be subject to unconscious bias. As support for this idea, the applicability of the psychological research to another profession (auditing) was similarly called into question when findings of unconscious bias were suggested as a cause of financial malfeasance. Yet, when a study was done with a sample of actual auditors (Moore et al., 2006), the findings of bias were much as would be expected in the laboratory with college students.

Perhaps more importantly, the types of decisions and incentives studied in psychological experiments differ considerably in quality from the treatment decisions made by physicians who have relationships with industry. This paper does not intend to overstate the similarity between the two. That does not mean, however, that the concept of unconscious bias does not raise valid concerns about how to deal with conflicts of interest.
Indeed, the fact that the findings from research on bias in medicine (and other professions) mirror the findings from the psychological research suggests that the concept of unconscious bias is a useful tool for understanding conflicts of interest in medicine.

CONCLUSIONS

Psychological research tells us that people are prone to optimistic biases regarding themselves, including judgments about whether their own behavior is objective. A large body of literature has shown that these biases are unconscious and unintentional: people fall prey to them even when they do not want to and do not believe that they do. Although it may seem intuitive and easily recognized that people are biased in assessing themselves, the fact that these biases are often unconscious and unintentional is not intuitive and is largely underappreciated. The findings of research on the influence of industry on medical practice correspond closely to the
findings of psychological research, suggesting that we might view the biasing effect of conflicts of interest in medicine as the result of unconscious and unintended bias. Although this view is kind to physicians, in that it allows the biased individual to be understood as basically well intended, it is also a cause for concern, in that research suggests that such unconscious biases are quite difficult to combat on a large scale. For example, teaching about egocentric biases does not mitigate them, because when we examine ourselves, we do not experience ourselves as being biased. This is not merely an academic argument about human nature; several policies that we expect to combat the effects of conflicts of interest may not be effective if unconscious bias is an important factor, and their effects could even be perversely counterproductive. Policy makers may benefit from recognizing and accommodating a more psychologically nuanced view of conflicts of interest in their interventions.

REFERENCES

Adair, R., Holmgren, L. (2005). Do drug samples influence resident prescribing behavior? A randomized trial. The American Journal of Medicine, 118, 881–884.
Agrawal, S., Saluja, I., Kaczorowski, J. (2004). A prospective before-and-after trial of an educational intervention about pharmaceutical marketing. Academic Medicine, 79, 1046–1050.
Avorn, J., Chen, M., Hartley, R. (1982). Scientific versus commercial sources of influence on the prescribing behavior of physicians. American Journal of Medicine, 73, 4–8.
Babcock, L., Loewenstein, G., Issacharoff, S., Camerer, C. (1995). Biased judgments of fairness in bargaining. American Economic Review, 85, 1337–1342.
Brett, A., Burr, W., Moloo, J. (2003). Are gifts from pharmaceutical companies ethically problematic? Archives of Internal Medicine, 163, 2213–2218.
Cain, D., Loewenstein, G., Moore, D. (2005). The dirt on coming clean: The perverse effects of disclosing conflicts of interest. Journal of Legal Studies, 34, 1–25.
Carroll, A., Vreeman, R., Buddenbaum, J., Inui, T. (2007). To what extent do educational interventions impact medical trainees' attitudes and behaviors regarding industry-trainee and industry-physician relationships? Pediatrics, 120, e1528–e1535.
Caudill, T., Johnson, M., Rich, E., McKinney, P. (1996). Physicians, pharmaceutical sales representatives, and the cost of prescribing. Archives of Family Medicine, 5, 201–206.
Coyne, J. (2005). Industry funded bioethics articles (letter). The Lancet, 366, 1077–1078.
Dana, J., Weber, R., Kuang, J. X. (2007). Exploiting moral wriggle room: Experiments demonstrating an illusory preference for fairness. Economic Theory, 33, 67–80.
Dawson, E., Gilovich, T., Regan, D. T. (2002). Motivated reasoning and performance on the Wason selection task. Personality and Social Psychology Bulletin, 28, 1379–1387.
Ehrlinger, J., Gilovich, T., Ross, L. (2005). Peering into the bias blindspot: People's assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31, 680–692.
Fitz, M., Homan, D., Reddy, S., Griffith, C., III, Baker, E., Simpson, K. (2007). The hidden curriculum: Medical students' changing opinions toward the pharmaceutical industry. Academic Medicine, 82(10 Suppl), S1–S3.
Gibbons, R., Landry, F., Blouch, D., Jones, D., Williams, F., Lucey, C. (1998). A comparison of physicians' and patients' attitudes toward pharmaceutical industry gifts. Journal of General Internal Medicine, 13, 151–154.
Gilovich, T. (1991). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press.
Hume, A. (1990). Doctors, drug companies, and gifts. Journal of the American Medical Association, 263, 2177–2178.
Hyman, P., Hochman, M., Shaw, J., Steinman, M. (2007). Attitudes of preclinical and clinical medical students toward interactions with the pharmaceutical industry. Academic Medicine, 82, 94–99.
Katz, D., Caplan, A. L., Merz, J. F. (2003). All gifts large and small: Toward an understanding of the ethics of pharmaceutical industry gift giving. The American Journal of Bioethics, 3, 39–46.
Loewenstein, G., Issacharoff, S., Camerer, C., Babcock, L. (1992). Self-serving assessments of fairness and pretrial bargaining. Journal of Legal Studies, 12, 135–159.
Lurie, N., Rich, E., Simpson, D., et al. (1990). Pharmaceutical representatives in academic medical centers: Interaction with faculty and housestaff. Journal of General Internal Medicine, 5, 240–243.
Marco, C., Moskop, J., Solomon, R., Geiderman, J., Larkin, G. (2006). Gifts to physicians from the pharmaceutical industry: An ethical analysis. Annals of Emergency Medicine, 48, 513–521.
McKinney, W., Schiedermayer, D., Lurie, N., Simpson, D., Goodman, J., Rich, E. (1990). Attitudes of internal medicine faculty and residents toward professional interaction with pharmaceutical sales representatives. Journal of the American Medical Association, 264, 1693–1697.
Messick, D. (1985). Social interdependence and decisionmaking. In G. Wright (ed.), Behavioral Decision Making (pp. 87–109). New York: Plenum.
Messick, D., Sentis, K. (1979). Fairness and preference. Journal of Experimental Social Psychology, 15, 418–434.
Miller, D. T., Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82, 213–225.
Moore, D., Tetlock, P., Tanlu, L., Bazerman, M. (2006). Conflicts of interest and the case of auditor independence: Moral seduction and strategic issue cycling. Academy of Management Review, 31, 10–29.
Morgan, M. A., Dana, J., Loewenstein, G., Zinberg, S., Schulkin, J. (2006). Physician interactions with the pharmaceutical industry. Journal of Medical Ethics, 32, 559–563.
Orlowski, J., Wateska, L. (1992). The effects of pharmaceutical firm enticements on physician prescribing patterns. Chest, 102, 270–273.
Palmisano, P., Edelstein, J. (1980). Teaching drug promotion abuses to health profession students. Journal of Medical Education, 55, 453–455.
Pronin, E., Kugler, M. B. (2007). Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot. Journal of Experimental Social Psychology, 43, 565–578.
Pronin, E., Lin, D. Y., Ross, L. (2002). The bias blind spot: Perception of bias in self versus others. Personality and Social Psychology Bulletin, 12, 83–87.
Pronin, E., Gilovich, T., Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111, 781–799.
Schneider, J., Arora, V., Kasza, K., Van Harrison, R., Humphrey, H. (2006). Residents' perceptions over time of pharmaceutical industry interactions and gifts and the effect of an educational intervention. Academic Medicine, 81, 595–602.
Springarn, R., Berlin, J., Strom, B. (1996). When pharmaceutical manufacturers' employees present grand rounds, what do residents remember? Academic Medicine, 71, 86–88.
Steinman, M., Shlipak, M., McPhee, S. (2001). Of principles and pens: Attitudes of medicine housestaff toward pharmaceutical industry promotions. American Journal of Medicine, 110, 551–557.
Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica, 47, 143–148.
Taylor, S., Brown, J. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.
Thaler, R. (1980). Towards a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1, 39–60.
Tversky, A., Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Wason, P. (1966). Reasoning. In B. M. Foss (ed.), New Horizons in Psychology. Harmondsworth, United Kingdom: Penguin.
Wazana, A. (2000). Physicians and the pharmaceutical industry: Is a gift ever just a gift? Journal of the American Medical Association, 283, 373–380.
Westfall, J., McCabe, J., Nicholas, R. (1997). Personal use of drug samples by physicians and office staff. Journal of the American Medical Association, 278, 141–143.
Zipkin, D., Steinman, M. (2005). Interactions between pharmaceutical representatives and doctors in training: A thematic review. Journal of General Internal Medicine, 20, 777–786.