APPENDIX D

Distributive Justice and the Use of Summary Measures of Population Health Status

Norman Daniels, Ph.D.

Tufts University

OVERVIEW OF A MORAL CONTROVERSY ABOUT METHOD

In his contribution to this workshop, Dan Brock has focused on ethical issues that arise in the construction of different summary measures of health status and the benefits of health interventions. The questions he raises about possible age bias or bias against people with disabilities are important whether one is simply surveying the health status of different populations or actually making resource allocation decisions. I focus more narrowly on issues of distributive justice that arise when methods using these measures, such as cost-effectiveness analysis (CEA), are deployed to help us make resource allocation decisions. Obviously, how these measures are constructed has distributive implications in this central use, as does the underlying utilitarian framework of the method itself. These implications mean that the division of labor between us cannot be a line in the sand.

My ultimate conclusion is about both the construction and use of these measures. In contexts where we use CEA—and thus these measures—to make resource allocation decisions, we face a particularly difficult set of distributive issues. How much priority should we give to the sickest or worst-off patients? When should we allow modest benefits to many people to outweigh significant benefits to fewer? When should we allocate resources to produce “best outcomes” and when should we give people fair chances at some benefit? These questions form a family of “unsolved rationing problems.” We have no principled solutions to them (though we may eventually discover some), and there is considerable moral controversy focused on them.

The straightforward use of CEA would, however, push us toward specific, yet contested, answers to these questions (Harris, 1987). The use of these measures in CEA would give no priority to the sickest patients, would permit any aggregation that maximized health benefit per dollar spent, and would always support best outcomes. What is contested—and I believe unacceptable—is the underlying utilitarian thrust to these answers. The central utilitarian assumptions are that a benefit to one always compensates for a loss to others, and that it is always morally desirable to maximize in the aggregate or at the margin. These assumptions, as Rawls (1971) argued, ignore the “separateness” of persons.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Terms of Use and Privacy Statement



To get around this criticism, some might propose building into summary population measures “moral weights” that reflect our preferences or values when we take a stand on these distributive problems. Their goal, perhaps, might be to develop an ethically sensitive tool or decision procedure. Planners could then use it with fewer moral qualms in different resource allocation contexts, perhaps substituting “objective” calculation for “subjective” moral deliberation. I believe that the unsolved nature of these morally contested rationing problems poses a serious obstacle to this strategy. For reasons I develop, we must view these measures and the methodologies that use them—or even highly improved versions of them—as inputs into a fair and deliberative decision making process. Our goal must be better informed and ethically sensitive deliberators making decisions, not methodologies that substitute for them.

To that end I urge a two-pronged research program. One prong explores social attitudes toward the distributive problems that must be addressed in making these resource allocation decisions. For example, Erik Nord’s (1994) “person trade-off” approach explicitly asks people how many health outcomes of one kind (e.g., moving patients from one health state to another) they consider equal in social value to outcomes of a different kind. This approach avoids inferring an answer to this question from the very different question people are asked in standard summary measures, where they assign a personal utility to being in one state rather than another. If developed further in directions Nord suggested, the person trade-off approach could help us learn more about how our society, or subgroups in it, reason about these issues. Serious obstacles confront this approach, however. Nord himself recognized some, and I note others. Still, properly pursued, it might provide an important body of information that could assist decision makers who must allocate resources.

The second prong of the research program explores the requirements of a fair decision making process in the different contexts in which resource allocation decisions must be made. I make some preliminary suggestions about elements of such a process, but much more work needs to be done. Decision makers constrained by such a fair process could then use information from summary health measures, CEA, and information about the attitudes and reasoning people use to think about these distributive issues to arrive at decisions others should view as legitimate and fair.

This argument has a brief history. Five years ago, at an international bioethics conference in Amsterdam, I claimed that the absence of principled solutions to these rationing problems means that we need to develop an account of fair procedures for resolving them (Daniels, 1993). Nord, speaking in the same session, replied that his person trade-off method can tell us how the public solves these problems and gives us a way to produce an instrument that incorporates the values underlying these solutions (Nord, 1993, 1994). I objected then that his method—which Brock and I believe asks the right questions—could not substitute for moral deliberation for various reasons. I continue that line of reply here, but I embrace the effort he makes to find out more about our beliefs about these distributive issues. Early in 1993, the Public Health Service Panel on Cost-Effectiveness Analysis in Health and Medicine began its deliberations about the role and limits of CEA, and the argument I offer here was one of the considerations that led it to recommend that CEA should be viewed as an input to decision making and not a decision making procedure (Gold, Siegel, et al., 1996; Russell, Gold, et al., 1996).
Unfortunately, simply making that recommendation without providing more assistance in making these controversial distributive decisions risks letting people give too much weight to the distributive implications of CEA. Even imperfect or distributively insensitive measures could still act as an important input into a fair deliberative process that was highly sensitized to distributive controversies and our beliefs about them.

WHEN QUESTIONS OF DISTRIBUTIVE JUSTICE ARISE

The unsolved problems of distributive justice that concern us arise in some decision making contexts in which summary measures may be used but not in others, as Brock has suggested in his companion paper.

Consider first a context in which these issues are not raised. In managed care organizations (MCOs), coverage decisions for new technologies (drugs, devices, or procedures) are now generally made on a noncomparative basis (Daniels and Sabin, 1997). In effect, each new technology is compared only to the existing or standard ways of treating the same group of patients with the same condition, and not to other technologies used for treating quite different groups of patients. For the most part, the decision is made to introduce a new technology if it produces a net benefit to that group of patients. To manage its costs and assure its quality, a coverage decision is then usually coupled with decisions that limit who may get the technology and who may perform it—that is, a “mini practice guideline” is developed. Usually, this coverage decision is uninformed by formal or even informal CEA, except in the case of pharmaceuticals and, in some instances, devices. (Cost-effectiveness studies are generally not available in time to make these coverage decisions.)

Suppose, however, an MCO used a formal CEA—with QALYs or DALYs or some other summary population health measure—to compare the new technology to the standard one. If the CEA shows the same or greater health benefit per dollar spent with the new technology, the MCO can make a completely noncontroversial distributive decision. In this special case the new technology benefits targeted patients, other enrollees, and the organization itself.

Suppose, however, we find that the new technology, when compared to the standard alternative treatment, produces a modest increment in benefit but at a significant increase in cost, yielding a less favorable cost-effectiveness ratio. The classic case in the literature is streptokinase versus TPA. A newer one involves clopidogrel, a blood-thinning agent much more expensive than aspirin but some 8 percent more effective in reducing infarcts among those at high risk. An MCO refusal to cover the less cost-effective technology rests on the judgment that the additional resources it would use can be put to better use elsewhere. This decision does raise distributive questions, but they are most likely to be left unanswered, because the “better use” of those resources will not be specified. (The MCO would be unlikely to specify whether the “savings” would be used solely to deliver more cost-effective treatments or directed to nonmedical organizational uses; for remarks on “closed” or budgeted systems, see Daniels [1986] and Daniels and Sabin [1998a].) Because the alternative uses are not specified and the distributive implications not determined, the judgment may seem noncontroversial. In this context we cannot see what might be problematic about the seemingly uncontroversial (especially among economists and economically trained planners) judgment that “we should act so as to get the most health benefit for every dollar we spend.”

A more typical use of CEA to make a similar kind of judgment in an MCO (or in a task force or professional association) would be in evaluating alternative screening protocols. A CEA might compare screening protocols with different frequencies for mammograms, colonoscopies, or (to take the classic case) stool guaiacs. A recommendation to adopt one protocol because it is
appropriately cost-effective, and to reject an alternative because it reduces extra risks at too high a cost, has distributive implications. It imposes those extra risks on the screened population on the promise that there are better ways to use the extra resources that would be involved in the protocol with the worse CE ratio. Here, too, the “better” uses of the extra resources are unspecified, and so we cannot say exactly which issue of distributive justice is raised. Again, it might seem that getting the most benefit for our marginal health dollar ought to be completely noncontroversial, but the response of advocacy groups to recommendations about screening protocols, sometimes dismissed as uninformed special pleading, should give us some pause. Those who face the identified extra risks may feel that reducing them is more important than assisting the unidentified others who may benefit from other, unspecified uses of the extra resources. In effect, they portray themselves as “identified victims” and the others as completely indeterminate “statistical ones.”

Now, however, suppose that MCOs make coverage decisions for new technologies in a directly comparative way, adopting a strict budget for the introduction of all new treatments and thus being able to adopt only some of those competing for adoption. (As I noted earlier, there is no such budgeting now; whether continued downward pressure on premiums and the failure to squeeze savings out of traditional sources will force such budgeting remains to be seen.) They must then compare as competitors, for example, a new treatment for people facing a life-threatening condition, a new procedure for the rehabilitation of patients with a debilitating injury, a surgical regimen for improving the quality of life of patients who have a chronic, disabling disease, and a pharmaceutical that reduces modest depression for a broad class of patients.
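The kind of comparison such a budget would force can be made concrete with a small illustration. The sketch below (all figures are hypothetical and invented for this example, not drawn from any actual study) computes incremental cost-effectiveness ratios for candidate treatments and ranks them as a strictly maximizing planner would:

```python
# Illustrative only: how a strictly maximizing planner might rank new
# treatments competing under a fixed budget. All numbers are hypothetical.

def icer(cost_new, cost_std, qalys_new, qalys_std):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the new treatment relative to the standard one."""
    return (cost_new - cost_std) / (qalys_new - qalys_std)

def rank_by_icer(candidates):
    """Sort candidate treatments from most to least cost-effective."""
    return sorted(candidates,
                  key=lambda c: icer(c["cost_new"], c["cost_std"],
                                     c["qalys_new"], c["qalys_std"]))

candidates = [
    {"name": "life-threatening condition Rx", "cost_new": 50_000,
     "cost_std": 10_000, "qalys_new": 6.0, "qalys_std": 2.0},
    {"name": "rehabilitation procedure", "cost_new": 12_000,
     "cost_std": 4_000, "qalys_new": 3.0, "qalys_std": 2.0},
    {"name": "depression pharmaceutical", "cost_new": 2_000,
     "cost_std": 500, "qalys_new": 1.2, "qalys_std": 1.0},
]

for c in rank_by_icer(candidates):
    ratio = icer(c["cost_new"], c["cost_std"], c["qalys_new"], c["qalys_std"])
    print(f'{c["name"]}: ${ratio:,.0f} per QALY')
```

Note that on these invented figures the ranking puts the modest, inexpensive benefit ahead of the life-saving treatment, which is exactly the kind of result the distributive questions that follow call into question.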
If an MCO now decides to use CEA to choose among the new treatments, it faces various kinds of distributive questions. (Alternatively, to see the same issues, we could recast this as the problem of a federal or state board [e.g., Oregon’s Health Services Commission] deciding which treatments should be included in a public health insurance benefit package. It is also the problem facing the World Bank when it evaluates which health care investments it should make among alternatives proposed for a developing country, or the problem faced by a country considering whether to accept a World Bank loan that uses CEA to impose certain priorities.)

Some of these distributive questions, as Brock suggests in his paper, result from the fact that different summary measures incorporate distinctive assumptions that have recognized distributive effects on different population groups. For example, if one of the new treatments involves primarily middle-aged patients and another involves very young or very old ones, using QALYs or DALYs might affect their CE ranking. We must then decide whether it is morally desirable to value a life year the same at every age or to give it age-adjusted weights.

The distributive questions I want to focus on arise independently of some of these questions Brock has addressed. We can bring out their force by asking this question: is a QALY (or DALY) worth the same wherever it is distributed? Or should we ascribe a different value or moral importance to it depending on who gets it? Should we give moral priority, for example, to distributing the QALY to patients who are worst off to start with? Should we be neutral between distributing a thousand QALYs to 1,000 people, each of whom improves by one QALY, and distributing those thousand QALYs to 50 people, each of whom gains 20 QALYs? Challenging the assumption that a QALY is always equal (in moral value) to a QALY, or a DALY to a DALY, goes to the heart of the distributive questions that concern us in what follows.
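The neutrality question can be put in one line of arithmetic: a summary measure that simply sums QALYs cannot distinguish the two distributions just described. A minimal sketch (the two allocations are the hypothetical ones from the text):

```python
# A purely aggregative measure scores these two allocations identically,
# even though one spreads a small gain widely and the other concentrates
# large gains on a few people.

broad = [1] * 1000   # 1,000 people, each gaining 1 QALY
narrow = [20] * 50   # 50 people, each gaining 20 QALYs

# The summary measure sees no difference between them:
assert sum(broad) == sum(narrow) == 1000
print(f"broad total = {sum(broad)}, narrow total = {sum(narrow)}")
```

Any priority for the worst off, or any limit on aggregation, has to come from outside the summed total itself.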

SOME UNSOLVED RATIONING PROBLEMS

The Priorities Problem

To illustrate the moral controversy that surrounds these unsolved rationing problems, consider first the priorities problem: How much priority should we give to treating the sickest or most disabled patients? Imagine two extreme positions. The “Maximin” position (“maximize the minimum”) says that we should give complete priority to treating the worst-off patients. The “Maximize” position says that we should give priority to whatever treatment produces the greatest net health benefit (or greatest net health benefit per dollar spent), regardless of which patients we treat.

Suppose comparable resources could be invested in technology A or in B, but the resources are “lumpy” (we cannot introduce some A and some B) and we can afford only one of A or B in our MCO budget. The Maximin position would settle the matter by determining whether patients treated by A are worse off before treatment than patients treated by B. If so, we introduce A; if patients treated by B are worse off, we introduce B. If the two sets of patients are equally badly off, we can break the tie by considering to whom we can provide the most benefit. The Maximize position chooses between A and B solely by reference to which produces the greatest net benefit.

In practice, most people are likely to reject both extreme positions. If the benefits A and B produce are nearly equal, but patients needing A start off much worse than patients needing B, most people seem to believe we should introduce A. They prefer to provide A even if they know we could produce somewhat more net health benefit by introducing B. But if the net benefit produced by A is very small, or if B produces significantly more net benefit, then most people will overcome their concern to give priority to the worst off and will prefer to introduce B.
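The two extreme positions are simple enough to state as decision rules. The sketch below is illustrative only; the baseline and benefit scores for A and B are hypothetical numbers chosen so that the rules disagree:

```python
# Two extreme allocation rules applied to "lumpy" technologies A and B.
# baseline: how badly off treated patients start (lower = worse off);
# benefit: net health benefit the technology produces. Numbers hypothetical.

def maximin_choice(options):
    """Pick the technology whose patients are worst off before treatment;
    break ties by the greater benefit."""
    return min(options, key=lambda o: (o["baseline"], -o["benefit"]))

def maximize_choice(options):
    """Pick the technology that produces the greatest net benefit."""
    return max(options, key=lambda o: o["benefit"])

options = [
    {"name": "A", "baseline": 0.2, "benefit": 10},  # much worse-off patients
    {"name": "B", "baseline": 0.6, "benefit": 14},  # better-off, larger benefit
]

print(maximin_choice(options)["name"])   # A: its patients start worst off
print(maximize_choice(options)["name"])  # B: it yields the larger benefit
```

Most people, as the text notes, occupy intermediate positions that neither rule captures, and there is no agreed formula for how much benefit may be traded away for priority to the worst off.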
Some people who would give priority to patients needing A temper their preference if those patients end up faring much better than patients needing B. In all situations where groups of students or health professionals have been informally polled on these cases, there is considerable disagreement: a definite but very small minority are inclined to be “maximizers,” and a definite but very small minority are inclined to be “maximiners.” Most people fall somewhere in between, and they vary considerably in how much benefit they are willing to sacrifice to give priority to worse-off patients. Those who argue over these hypotheticals are quite willing to back their conclusions with reasons. Some will say, for example, “Although patients needing B are being asked to forgo a significant benefit, I simply cannot turn my back on patients needing A, since they are so badly off.” In response, someone else will say, “I hate to abandon A, but I simply cannot expect B to sacrifice a much greater benefit just because A starts off so poorly.”

We might hope, faced with this kind of complexity, that a very careful examination of hypothetical cases might reveal some convergence on a complex set of underlying principles. This hope, however, may be unrealistic. The weightings that different people give to different moral concerns, such as helping the worst off versus not sacrificing achievable medical benefits, probably depend on how these moral concerns fit within the wider moral conceptions people hold. If so, there is good reason to think these disagreements will be a persistent feature of the situation. Indeed, some of the kinds of theoretical devices we might appeal to, such as forcing people to choose an allocation scheme from behind a veil of ignorance, are themselves the focus of considerable dispute. Is it reasonable, for example, for such people to gamble on their likelihood of
being one type of patient or the other, or must they somehow identify with each category of patients and refuse to gamble (see Daniels [1993], Kamm [1993], Scanlon [1982])?

The Aggregation Problem

When should we allow an aggregation of modest benefits to larger numbers of people to outweigh more significant benefits to fewer people? In June 1990, the Oregon Health Services Commission (OHSC) released a list of treatment/condition pairs ranked by a cost/benefit calculation. Critics were quick to seize on rankings that seemed completely counterintuitive (other critics argued the problem arose because the OHSC used crude numbers). For example, as Hadorn (1991) noted, tooth capping was ranked higher than appendectomy. The reason was simple: an appendectomy cost about $4,000, many times the cost of capping a tooth. Simply aggregating the net medical benefit of many capped teeth yielded a net benefit greater than that produced by one appendectomy.

As Eddy (1991) pointed out, our intuitions in these cases are based largely on comparing treatment/condition pairs for their importance on a one-to-one basis. One appendectomy is more important than one tooth capping because it saves a life rather than merely reducing pain and preserving dental function. Our intuitions are much less developed when it comes to making one-to-many comparisons. Kamm (1993) explored hypothetical cases showing that we are not straightforward aggregators of all benefits, though we do permit some forms of aggregation. Nevertheless, our moral views are both complex and difficult to explicate in terms of well-ordered principles. Kamm argued quite plausibly that some minor goods are irrelevant to aggregation and should not be weighed at all against significant benefits. She also suggested that benefits we cannot be expected to sacrifice to help someone else may count as significant enough to be aggregated even against saving a life.
Thus, since someone is not required to sacrifice an arm to save someone else’s life, it may be possible to aggregate the saving of some number of arms as opposed to saving a life. Kamm’s discussion thus provides some important structure to the aggregation problem. Nevertheless, the principles that emerged do not constitute an adequate framework for addressing many real aggregation questions. The principles also may have emerged only because Kamm ignored variation in responses to some of her cases, as I suggested previously.

The Best Outcomes/Fair Chances Problem

How much should we favor producing the best outcome with our limited resources as opposed to giving people some fair chance at deriving some benefit from them? For example, which of several equally needy individuals should get a scarce resource, such as a heart transplant? Suppose that Alice and Betty are the same age, have waited the same time, and that each will live only 1 week without a transplant. With the transplant, however, Alice is expected to live 2 years and Betty 20. Who should get the transplant? Giving priority to producing best outcomes, as in some point systems for awarding organs, would mean that Betty gets the organ and Alice dies (assuming persistent scarcity of organs). But Alice might complain, “Why should I give up my only chance at survival—and 2 years of survival is not insignificant—just because Betty has a chance to live longer?” Alice demands a lottery that gives her an equal chance with Betty.
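The gap between an equal-chance lottery, a best-outcomes rule, and the proportional-chances compromise discussed shortly can be shown in a few lines. In this sketch the expected life-years are simply the Alice/Betty figures from the text; the simulation itself is only an illustration of how such a weighted lottery behaves:

```python
import random

# Equal-chance vs. benefit-weighted lotteries for one scarce organ.
# Expected life-years with transplant come from the Alice/Betty example.

candidates = {"Alice": 2, "Betty": 20}

def equal_chance(cands, rng):
    """Every equally needy candidate gets the same chance."""
    return rng.choice(sorted(cands))

def proportional_chance(cands, rng):
    """Chances proportional to expected benefit, in the spirit of the
    proportional-chances proposal discussed in the text."""
    names = sorted(cands)
    return rng.choices(names, weights=[cands[n] for n in names], k=1)[0]

rng = random.Random(0)
wins = sum(proportional_chance(candidates, rng) == "Alice"
           for _ in range(10_000))
# Alice should win roughly 2/22, about 9 percent, of weighted draws.
print(f"Alice wins {wins / 10_000:.1%} of weighted draws")
```

Under a pure best-outcomes rule Betty simply wins; under an equal-chance lottery each wins half the time; the weighted lottery sits between the two, which is why it is offered below as a compromise between best outcomes and fair chances.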

To see the problem in its macro-allocation version, suppose our health care budget allows us to introduce one of two treatments, A or B, which can be given to comparable but different groups. Because A restores patients to a higher level of functioning than B, it has a higher net benefit. We could produce the best outcomes by putting all our resources into A; then patients treatable by B might, like Alice, complain that they are being asked to forgo any chance of a significant benefit. One variation on this scenario raises the question of discrimination against people with disabilities: suppose the disease treatable by A can strike anyone, but the disease treatable by B tends to be associated with people suffering from some other significant disability. Then favoring “best outcomes” by putting our resources into A would clearly discriminate against people with disabilities who need B.

The problem currently has no satisfactory solution at either the intuitive or the theoretical level. Brock (1988) proposed breaking this deadlock by giving Alice and Betty chances proportional to the benefits they can get (e.g., by assigning Alice one side of a 10-sided die). Kamm (1993) proposes a more complex assignment of multiplicative weights. Both suggestions have the advantage of taking what people view as relevant reasons into account. Such weighted lotteries might, for example, be justifiable as the result of deliberation among people about what weights they can agree to assign to these reasons. Viewed in this way, they seem less arbitrary or ad hoc than if the weightings are taken to capture some underlying precision in our moral intuitions.

In the Absence of Consensus on Principles

The best outcomes/fair chances, priorities, and aggregation problems may turn out to have principled solutions on which consensus can be obtained.
My claim here is not that they are unsolvable but only that they are unsolved now and that we have no real prospect of arriving at solutions that would be publicly acceptable in the foreseeable future. My skepticism about any rapid solution rests in part on the fact that these are all problems on which there is moral disagreement. Such disagreement emerges quickly in class or group settings where hypothetical cases are discussed in detail. Some people give more weight to helping the worst off than others do, or permit some forms of aggregating that others think objectionable, or give more weight than others to best outcomes rather than fair chances. Nord found that different subgroups of the Norwegian population tend to have systematically different responses to these problems. When the Swedish government set up a commission to establish principles for setting priorities in its health care system, it gave great weight to helping the sickest or most disabled individuals, probably much more weight than other societies considering the same question would give, and more weight than many of my students polled on the issue were willing to give.

Even if there are principled solutions that philosophical investigation may eventually uncover, there is considerable disagreement now, and among different groups, about how to solve these problems. Typically, for example, a minority will be willing to give significant priority to the sickest patients, trading away much more significant benefits to those who are less sick to obtain some benefits for the sickest (as was the Swedish commission), but the majority is not. These commitments support two distinct criteria for ranking (or rationing) various kinds of treatments that might be applied to the very sick or the mildly sick. Each group is willing to provide reasons for its belief about the correct solution.

How should we decide among policies when there is this kind of disagreement in underlying moral commitments? I suggest in Section 5 that we need to retreat to a form of procedural justice by giving an account of a fair decision making procedure.

On the Generality of These Problems

Though I have discussed these unsolved rationing problems primarily in health care contexts, I want to note that they are quite general. Similar issues would arise if we were allocating legal aid services, special education services, or even income support services. One extremely important context in which they arise is economic growth or development policy.

THE DEMOCRACY PROBLEM: ISSUES FACING PERSON TRADE-OFFS AND VOTING

Brock and I believe that Nord’s (1994) person trade-off approach to valuing alternative health care programs explicitly addresses the distributive questions that need answers. By directly surveying people’s attitudes toward trade-offs between allocating resources to groups that differ in their initial health state and ultimate health outcomes, Nord hoped to uncover the structure of our moral concern, or our values, regarding how much priority to give the sickest patients. Nord (1994, p. 201) noted some methodological problems his “demanding” approach faces, and I add some philosophical worries. Nevertheless, I urge that we pursue a research strategy using such approaches to see if we can develop meaningful information about how society, or subgroups in it, reason about these distributive issues.

Nord’s approach faces questions about reliability. He noted that surveys of small populations using his approach show considerable variance in answers; large populations would be needed to eliminate random errors. Reliability of the tool is also suspect because there are significant “starting point biases.” People are asked how many of one kind of outcome they would sacrifice to achieve N outcomes of another kind. Higher initial values trigger higher responses.
Other kinds of “framing” issues are raised by the attempt to elicit consideration of specific reasons or rationales for making the trades. To control for framing effects, Nord suggested that subjects be taken through various steps in which they are exposed to different arguments that might be relevant to the exercise. These efforts by Nord are the most promising features of his approach, because they may tap into beliefs that function as true reasons. Based on my experience raising hypotheticals like these in classroom discussions (Nord himself uses seminars!), I speculate that the reliability problems Nord found may reflect subgroup disagreements about the weights to give different factors involved in these trades. These disagreements may reflect underlying differences in comprehensive moral views. Nord himself has presented some evidence for differences that vary with Norwegian political party affiliation. Taking means or ranges as a way of addressing this source of divergence may beg the very question raised by moral disagreement. Nevertheless, knowing the magnitude of these differences could help decision makers think about how much weight to give disagreements they encounter.

Suppose we find considerable convergence in a population (or subgroup) on the magnitudes involved in trade-offs. What should we make of it? Should we view it as a prevalent “taste” or “preference,” the equivalent of a predominant taste for chocolate over vanilla ice cream? Or
are the responses the result of a deliberative or reflective process in which people weigh various reasons, proposed principles, and intuitions about particular cases and arrive at a coherent set of moral beliefs that for them are justified?

We risk having uncovered only “tastes” if we carry out a straightforward survey of attitudes toward trade-offs. Nord suggests, however, that we develop and deploy more complex methods that lead subjects in these surveys through a series of questions that import arguments and reasons that might be the basis for making these trades. This complex technique—which is quite demanding—begins to approximate the sort of philosophical exploration that is involved when students are led through the complexities of these issues by posing various hypothetical cases (cf. Kamm, 1993). Such an approach is more likely to uncover some evidence about what reasons people give weight to, and not simply their unconsidered tastes.

To see why it is important to seek reasons and not merely tastes, consider an analogy to democratic procedures. Surveys and voting are both ways of consulting people’s opinions. What gives majority (or plurality) rule its legitimacy as a procedure for resolving moral disputes about public policy and the design of institutions? One prominent answer, which Cohen (1996) referred to as the “aggregative” conception of democracy, holds that the procedure is fair and acquires legitimacy simply because it counts everyone’s interests equally in the voting process: each counts for one, not more or less. The same might be true of a survey of a representative sample of people.

Something important seems to be left out of this proceduralist view of the virtues of aggregation through voting. It allows us to compel people to abide by majority rule, even where there are matters of fundamental moral disagreement, simply by aggregating the preferences of the voters, whatever they happen to be.
If we had the option of buying only one flavor of ice cream, vanilla or chocolate, for a large group, we might settle the matter by voting. In this case, aggregating preferences through the mechanism of voting is a way to achieve the greatest net satisfaction of preferences, since the frustration of the vanilla lovers is offset by the greater pleasure of the chocolate lovers. Abiding by a majority decision that compels people to act in ways that are counter to their fundamental beliefs about what is morally right is not, however, simply like frustrating a taste for vanilla ice cream. A strong craving for vanilla is not a moral conviction. Settling moral disputes simply by aggregating preferences seems to ignore fundamental differences between the nature of values and commitments to them and tastes or preferences. I have the same worry if a person trade-off survey reveals only preferences.

The aggregative conception seems insensitive to how we ideally would like to resolve moral disputes, namely through argument and deliberation. We expect people to offer reasons and arguments for their moral views, and we hope that the better arguments will prove persuasive. We want to be shown what is right by appeal to reasons that we consider convincing. If a good moral argument persuades us that our original belief about what is right is in fact incorrect, we may be chagrined, but we are (or should be) grateful as well. We have been spared doing what is wrong. It is more important to end up knowing what is right and doing it, given our motivation to act in ways that we can justify morally, than it is to get our way. These points help to explain why we are not satisfied in cases of moral disagreement simply to be told, “a majority of people think otherwise.” The problem is not that the majority will simply keep us from getting our way, as it would be if we preferred vanilla but ended up with chocolate, but that majorities can be morally wrong and may make us do the wrong thing. In
addition, they may be moved by reasons that minorities cannot even accept as relevant to resolving the dispute. The aggregative account fails as an account of the legitimacy of a democratic procedure because it ignores the way reasons play a role in our deliberations about what is right.

An alternative account of how a procedure such as majority rule acquires legitimacy depends on emphasizing the deliberative process that may conclude in a vote. Specifically, it imposes some constraints on the kinds of reasons that can play a role in that deliberation. Not just any preferences will do. Reasons must reflect the fact that all parties to a decision are viewed as seeking terms of fair cooperation that all can accept as reasonable. Where their well-being or fundamental liberties or other matters of fundamental value are involved and at risk, people should not be expected to accept binding terms of cooperation that rest on reasons they cannot view as acceptable types of reasons. For example, reasons that rest on matters of religious faith will not meet this condition. Reasonable people differ in their religious, philosophical, and moral views, and yet we must seek terms of fair cooperation that rest on justifications acceptable to all.

Suppose that a deliberation appeals only to reasons that all can recognize as acceptable or relevant kinds of reasons, but that consensus about an outcome is still not achieved. To settle the practical matter, we rely on a majority vote. What can be said in favor of reliance on this voting procedure that could not be said on the purely proceduralist view? The minority is not being compelled to do something for reasons it thinks irrelevant or inappropriate—even if it does not accept the weight or balance given to various considerations by the majority. On the aggregative view, the minority has to accept that it loses only because more people prefer an alternative, for whatever reasons.
On the deliberative democracy view, the minority can at least assure itself that the preference of the majority rests on the kind of reason that even the minority must acknowledge appropriately plays a role in the deliberation. The majority does not exercise brute power of preference, but is constrained by having to seek reasons for its view that are justifiable to all who seek mutually justifiable terms of cooperation.

A research strategy building on Nord’s approach could reveal information about the types of reasons and arguments people give for their distributive choices and the weights they attach to them. This knowledge could inform a further deliberative process in which resource allocations must be made. Still, it is not a substitute for that process.

FAIR, DELIBERATIVE PROCEDURES: THE EXAMPLE OF MCOS

Elsewhere (Daniels and Sabin, 1997) I have discussed a partial account of fair procedures intended to address the kind of moral controversy that pervades decision making about health care resource allocation in MCOs. I can only sketch the approach here. The basic intuition behind it is that institutions making decisions about resource allocation—as MCOs do when they make coverage decisions—should meet several conditions that impose what I call “public accountability for reasonableness.” These conditions connect deliberations about how to address distributive issues made by private organizations (or public agencies) to a broader social deliberation that involves broader democratic processes.

For the sake of specificity, I state these as conditions that must be met for a highly visible and controversial area of decision making, coverage for new technologies, though they can be generalized to cover other forms of limit-setting.
1.  Publicity condition: Decisions regarding coverage for new technologies (and other limit-setting decisions) and their rationales must be publicly accessible.

2.  Relevance condition: These rationales must rest on evidence, reasons, and principles that all parties—managers, clinicians, patients, and consumers in general—can agree are relevant to deciding how to meet the diverse needs of a covered population under necessary resource constraints.

3.  Appeals condition: There is a mechanism for challenge and dispute resolution regarding limit-setting decisions, including the opportunity for revising decisions in light of further evidence or arguments.

4.  Enforcement condition: There is either voluntary or public regulation of the process to ensure that conditions 1–3 are met.

The guiding idea behind the four conditions is to convert private MCO solutions to problems of limit-setting and resource allocation—where highly controversial moral issues are at stake—into part of a larger public deliberation about a major, unsolved public policy problem, namely, how to use limited resources to protect fairly the health of a population with varied needs, a problem made progressively more difficult by the successes of medical science and technology. If met, these conditions help these private institutions to enable a more focused public deliberation that involves broader democratic institutions.

The publicity condition thus provides a public record of the commitments to which the plan adheres in making these kinds of decisions. A case law record such as this improves fairness in decision making because it provides a basis for judging the coherence and consistency of decisions made over time. It gives those affected by decisions—often when they have no real choice to seek alternatives—a way of knowing why they face the restrictions they do.
The publicity condition thus satisfies what many believe is a fundamental requirement of justice: the grounds for decisions that fundamentally affect our well-being must be publicly available to us.

The relevance condition imposes important constraints on the kinds of reasons that should play a role in rationales for coverage decisions. The basic idea is that all parties in an MCO pursue a common goal or common good: they enter into a plan that aims to meet their diverse needs under necessary resource constraints. Since hard choices will have to be made about how to meet those needs fairly, the grounds for those decisions must be ones that all can agree are relevant to that kind of decision. A justification must be based on reasons all accept as relevant.

The relevance condition does not mean that all parties will agree with the specific decisions made. They may agree that reasons are relevant but still give different weight or importance to them. As long as all parties who make and are affected by the decision can accept that the grounds for it are relevant, then even those who do not like or agree with the specific outcome of the decision cannot complain that it is unreasonable.

CONCLUSION

Standard measures of population health status and the benefits of health interventions, coupled with methods like CEA, are themselves insensitive to important questions of distributive justice. These ethical questions must be faced head-on. Unfortunately, they include a family of “unsolved rationing problems.” In the absence of prior consensus on principled moral solutions,
we must develop procedurally fair decision-making processes and rely on them to give us legitimate and fair outcomes.

I urge a two-pronged research strategy to address this problem. One component involves empirical research on the distributive problems. It aims at uncovering the kinds of reasons people employ in making choices about these issues, the weights they give to these reasons, and the magnitude and patterns of divergence in their solutions. This is a very demanding type of survey research, mimicking in some ways highly “qualitative” philosophical exploration of these issues. A better understanding of a population’s beliefs about these questions could then inform deliberators who face allocative choices. A further ethical issue is just how this information about beliefs should be used by decision makers.

A second component involves research—both empirical and ethical—into the design of fair, deliberative procedures for making these decisions in the various contexts and institutions where they must be made. My suggestions here about some constraints on that process in MCOs making decisions about coverage for new technologies are intended to illustrate the solutions that must be sought. The research is empirical, because it must reflect facts about the institutions in which decisions are made and because we may also want to test proposed procedures to see how feasible and effective they are at achieving legitimacy. It is also ethical because procedural fairness itself raises complex issues in ethics and political philosophy.

ACKNOWLEDGMENT

I wish to thank my research assistant, Roxanne Fay, for her extensive help in preparation of this paper.

NOTES

1.  Brock asked the same question to show the age-weighting of DALYs versus QALYs; I ask it more generally here.

2.  The claim is based on observations over several years of how audiences of students and medical personnel vote on hypothetical cases of this sort.
Nord (1993) reported variations in attitudes toward priorities of this sort between different groups of students and professionals. There is some cross-national evidence that people are not straight maximizers in Nord, Richardson, Street, Kuhse, and Singer (1995).

3.  A distinct minority of students and health professionals would argue as follows: if helping the better-off patient B actually returns B to a level of functioning that permits her to work and carry out other social functions, whereas helping the sicker patient A does not accomplish this outcome, then it is more important to help B. Some holding this view give as a reason that B returns more to society than A, but others justify their view by saying that B’s returning closer to normal functioning will make B happier than A is likely to be; this reason focuses on the well-being of B and A, not on social contribution.

4.  Frances Kamm (1993) suggested this may be true. Her brilliant discussion of cases often points to less disagreement than I find in thinking about them myself or with students or public audiences. For some concerns about her methods, see Daniels (1998a).
5.  I had once criticized the proposal for being ad hoc and arbitrarily importing precision in this way. See Daniels (1993).

6.  Personal communication and presentation at the Stockholm Conference on Priorities in Health Care, October 17, 1996.

7.  Suppose an antipoverty program in Bangladesh that has fixed resources might be targeted at the very poorest (VP) segment of a population or at the next poorest (but still poor) subgroup (P). Using the resources on P leads to more people being moved out of poverty and becoming producers capable of contributing in the future to further antipoverty measures. But putting the resources into helping P leaves those in the most dire straits unaided. Is it fair to favor helping P over VP? Here the extra complexity of the problem is that it is more plausible to think not only of benefits to those helped but of the future social contribution of those who are helped. The ethical dimensions of these problems in these different contexts have been largely unexplored and warrant considerable interdisciplinary effort.

REFERENCES

Brock, D. 1988. “Ethical Issues in Recipient Selection for Organ Transplantation.” In D. Mathieu (ed.), Organ Substitution Technology: Ethical, Legal, and Public Policy Issues. Boulder: Westview, pp. 86–99.

Cohen, J. 1996. “Procedure and Substance in Deliberative Democracy.” In Seyla Benhabib (ed.), Democracy and Difference: Contesting the Boundaries of the Political. Princeton, NJ: Princeton University Press.

Daniels, N. 1998. “Kamm’s Moral Methods.” Forthcoming in Philosophy and Public Affairs 26:4:303–350.

Daniels, N. 1993. “Rationing Fairly.” Bioethics 7:2–3:224–233.

Daniels, N. 1986. “Why Saying No Is So Hard in the U.S.” New England Journal of Medicine 314:1381–1383.

Daniels, N., and Sabin, J.E. 1998. “Closure, Fair Procedures, and Setting Limits in Managed Care Organizations.” Journal of the American Geriatrics Society. (In press).

Daniels, N., and Sabin, J.E. 1997.
“Limits to Health Care: Fair Procedures, Democratic Deliberation, and the Legitimacy Problem for Insurers.” Philosophy and Public Affairs 26:4:303–350.

Eddy, D. 1991. “Oregon’s Methods: Did Cost-Effectiveness Analysis Fail?” Journal of the American Medical Association 265:2218–2225.

Gold, M., Siegel, J., Russell, L., and Weinstein, M. 1996. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press.

Hadorn, D. 1991. “Setting Health Care Priorities in Oregon: Cost-Effectiveness Meets the Rule of Rescue.” Journal of the American Medical Association 265:2218–2225.

Harris, J. 1987. “QALYfying the Value of Life.” Journal of Medical Ethics 13:117–123.

Kamm, F. 1993. Morality and Mortality, Volume 1: Death and Whom to Save from It. New York: Oxford University Press.

Kamm, F. 1989. “The Report of the U.S. Task Force on Organ Transplantation: Criticisms and Alternatives.” Mount Sinai Journal of Medicine 56:207–220.
Nord, E. 1993. “The Relevance of Health State after Treatment in Prioritizing between Different Patients.” Journal of Medical Ethics 19:37–42.

Nord, E., Richardson, J., Street, A., Kuhse, H., and Singer, P. 1995. “Maximizing Health Benefits vs. Egalitarianism: An Australian Survey of Health Issues.” Social Science and Medicine 41:10:1429–1437.

Russell, L.B., Gold, M.R., Siegel, J.E., Daniels, N., and Weinstein, M.C. 1996. “The Role of Cost-Effectiveness Analysis in Health and Medicine.” Journal of the American Medical Association 276:14:1172–1177.

Scanlon, T.M. 1982. “Contractualism and Utilitarianism.” In A. Sen and B. Williams (eds.), Utilitarianism and Beyond. Cambridge, England: Cambridge University Press, pp. 103–128.
