APPENDIX D

When and How Should Purchasers Seek to Selectively Refer Patients to High-Quality Hospitals?

R. Adams Dudley, M.D., M.B.A.; Richard Y. Bae, B.A.; Kirsten L. Johansen, M.D.; and Arnold Milstein, M.D., M.P.H.

There has been evidence for years suggesting important variations in quality among doctors,1-3 hospitals,4-7 and health plans,8,9 and significant effort—such as the development of the Health Plan Employer Data and Information Set (HEDIS) and the Consumer Assessment of Health Plans Study (CAHPS)—has been put into measuring some aspects of quality. Despite this, few consumers, purchasers, health plans, or providers have used this information to obtain high quality care or excellent clinical partners.10-12 On the contrary, the available evidence suggests that, when selecting among health plans, purchasers give price information much more weight than quality information.11,13

The absence of initiatives to act on quality information may reflect a variety of factors. The majority of patients, even those who believe quality varies among providers, are convinced that their provider is very good.14,15 These patients may not feel the need to demand quality improvements. Some purchasers lack the expertise or resources to address quality concerns. Other purchasers may believe that U.S. health care is the best in the world and that they need only ensure that their beneficiaries can access the system to guarantee them high quality care. However, recent reports indicate that quality deficiencies in the United States are common and sometimes severe.16 Thus, the clinical benefits achievable by directing patients to high quality providers may be significant.

In this paper, we address the rationale for and implementation of selective referral. By selective referral we mean the establishment of policies requiring that patients be: 1) informed that they should go to high quality providers, 2) given a referral to a high quality provider, and 3) offered coverage for the cost of using the high quality provider, even if that provider is outside the patient's usual network of care. The question of when and how to selectively refer patients can be asked at several levels. Purchasers can seek to encourage patients to select the best clinicians, hospitals, or health plans. Alternatively, health plans can direct patients to the best providers. For the sake of exposition, and because there are ongoing initiatives at this level,17 we will focus on purchasers attempting to ensure that their beneficiaries are referred to high quality hospitals for specific conditions (e.g., the Pacific Business Group on Health's program to ensure patients needing coronary bypass go to the best hospitals). Most of the issues we describe, however, are applicable to initiatives at and among all levels of the health system.

From the Departments of Medicine (R.A.D., K.L.J.) and Epidemiology and Biostatistics (R.A.D., K.L.J.) in the School of Medicine and the Institute for Health Policy Studies (R.A.D., R.Y.B.) at the University of California, San Francisco; the Pacific Business Group on Health (A.M.), and William M. Mercer, Inc. (A.M.).

Corresponding author: R. Adams Dudley, MD, MBA, Institute for Health Policy Studies, University of California, San Francisco, 3333 California St, Suite 265, San Francisco, CA 94118. Phone (415) 476-8617, Fax (415) 476-0705. E-mail: adudley@itsa.ucsf.edu.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

Interpreting the Volume–Outcome Relationship in the Context of Health Care Quality: Workshop Summary

TABLE 1 On Which Hospital Characteristic Should Selective Referral Be Based?
Characteristics that serve as proxies for quality:
- Volume
- Teaching Status
- Level of Care (e.g., Level I Trauma Center)
- Designation by Regulatory Body or Professional Group
- Profit Status
- Designation by Outside Organizations (e.g., U.S. News and World Report 100 “best hospitals”)
- Participation in Clinical Trials

DETERMINING WHICH PROVIDERS ARE HIGH QUALITY: DEFINING THE BASIS FOR SELECTIVE REFERRAL

To optimally direct patients to the highest quality providers, one would need statistically stable risk adjusted measures of both technical quality of care (especially clinical outcomes and processes) and patient satisfaction. Unfortunately, such indices of quality are rarely available. The clinical data needed for risk adjustment are not usually collected for claims or administrative purposes, and patient satisfaction with individual clinicians and hospitals is rarely measured at all (CAHPS, for instance, is a measure of satisfaction with health plans). There is, however, literature suggesting that many variables may be indirect indicators of quality (Table 1). For example, studies have shown correlations between hospitals' outcomes and their volumes,7,18 teaching status,19 level of care,20 profit status,21,22 designation by regulatory body or professional group,23 and participation in clinical trials.24

Both direct measurement of quality and the use of indirect indicators like volume have drawbacks as indices of quality (and hence, as bases for selective referral). Direct measurement of outcomes is limited, for many conditions, by hospital sample sizes that are too small to generate annual hospital-specific quality indices.25,26 For example, in California in 1997, the majority of esophageal cancer surgeries (94 of 169) occurred in hospitals doing 3 or fewer procedures per year. Even a decade of data from such hospitals would not generate stable estimates of an individual hospital's mortality rate. On the other hand, for some conditions, reliance on hospital volume would penalize low volume hospitals that are doing well.27 Hannan et al. found a volume–outcome relationship for coronary artery bypass grafting (CABG) in New York, but some low volume hospitals had risk adjusted mortality rates below the mean for high volume hospitals.7 For reasons such as these, purchasers should consider carefully the circumstances under which they would be willing to embark on a selective referral initiative.

There are also some difficult operational issues in the use of volume as a basis for referral. An individual hospital's volume will change frequently and can be dramatically affected by merger with another hospital or acquisition by a hospital chain. Coupled with the usual delays in reporting, this volatility means purchasers may have difficulty ascertaining a hospital's true current volume. In addition, some hospitals are very closely affiliated (e.g., multiple hospitals whose medical staffs are on the same medical school faculty) and may have the same person or people performing a single procedure at multiple sites. It is not clear whether volumes in such cases should be combined across sites. As more surgical procedures occur on an outpatient basis, hospitalization databases will become less useful for determining the number of procedures performed at a particular institution. Finally, some may consider the definitions of procedures used in research, on which policy must be based, fairly arbitrary. For example, in counting hospital volumes for esophageal cancer surgeries, most researchers use procedure codes for esophagectomy plus a diagnosis of cancer. Hospitals and physicians may argue, however, that esophageal surgeries done for benign disease should count toward total volume.

Between direct measurement of outcomes and using proxies for quality like volume lie the options of measuring compliance with process indicators (or clinical guidelines) or with structural measures (e.g., ensuring the availability of board certified physicians). However, most providers would argue that this approach should be restricted to those aspects of clinical care or structure for which a clear link to outcomes has been established by randomized trials. Even for common, frequently studied conditions like myocardial infarction, there are many clinical process decisions that probably affect outcome but that cannot be made on the basis of evidence from large randomized trials.28 Requirements that providers be certified or licensed are even more tenuously linked to outcomes than process measures.
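The instability of mortality estimates at low volume hospitals can be quantified with a simple confidence interval calculation. The sketch below uses a Wald binomial interval and illustrative case counts and mortality rates; these numbers are our assumptions for illustration, not figures from the paper:

```python
import math

def mortality_ci(deaths, cases, z=1.96):
    """Approximate 95% CI for a hospital mortality rate (Wald interval).

    Illustrative only: small-sample hospitals would in practice need
    exact or Bayesian intervals, but the width comparison holds.
    """
    p = deaths / cases
    half_width = z * math.sqrt(p * (1 - p) / cases)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# A hospital doing 3 esophagectomies per year, pooled over a decade
# (30 cases), versus a 300-case center, both with ~10% observed mortality.
low_volume = mortality_ci(deaths=3, cases=30)
high_volume = mortality_ci(deaths=30, cases=300)
```

Even with ten years of pooled data, the low volume hospital's interval spans from 0% to over 20%, too wide to distinguish an excellent hospital from a poor one, which is the paper's point about direct outcome measurement for rare procedures.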
Thus, focusing quality measurement exclusively on process or structural variables that correlate with outcome would limit the breadth of quality assessment and penalize providers who have identified methods for improving quality that just happen not to have been studied yet.

CRITERIA FOR ACTION

Types of evidence required to make a decision. In Table 2 we list the categories of data that we believe should go into a decision to pursue (or not) selective referral. The most obvious issue is the expected clinical benefit to the patients referred. This can be measured initially by the observed decrease in mortality or complications at higher quality hospitals, but some attempt should be made to consider whether these benefits are outweighed by the clinical risks associated with transfer. In addition, patients may find referral socially isolating if family or friends cannot accompany them. Disruption of long-term clinical relationships with primary care providers is also a risk.

More subtle, but just as important, is the potential for clinical costs and benefits to patients other than those who are selectively referred. Using the volume–outcome relationship for percutaneous coronary angioplasty as an example, if most patients were referred to high volume hospitals, what would be the clinical implications for those patients who, either because they were too sick to transfer or because they did not wish to transfer, remain at an even lower volume hospital? In addition, if the reduction in angioplasty volume makes it impossible for cardiologists to remain at the low volume hospital, what happens to patients with other cardiac problems who lose access to specialty care?
TABLE 2 Types of Evidence Required to Make a Decision Regarding Selective Referral
- Clinical Benefits to Patients Transferred
- Clinical and Social Costs to Patients Transferred
- Clinical Benefits to Other Patients at Referral Hospital
- Clinical and Social Costs to Patients Who Remain at the Transferring Hospital
- Economic Implications of Changing the Marketplace Structure

On the other hand, patients who were at high volume hospitals before selective referral may benefit from the increased volume (if practice makes perfect) or may find that additional services—perhaps a cardiac rehabilitation program—are available given the revenues and economies of scale associated with higher volume. Alternatively, though less likely, the new patients may overburden the referral hospital and quality may decline.

Purchasers should also evaluate the potential effects of selective referral on marketplace structure and competition. Selective referral initiatives will increase the market power of preferred hospitals and, if many patients are involved, may lead to the closure of other hospitals. This could allow preferred hospitals to raise prices. In addition, if a hospital characteristic such as volume or teaching status becomes the basis for referral, it will create barriers to entry for new competitors (who must either start at a high volume or immediately meet all criteria necessary to achieve teaching status).

Level of evidence needed to make a decision. Before considering selective referral, purchasers should decide what degree of statistical certainty they will want and how much clinical benefit they will require to justify selective referral. On the first issue, academic statisticians traditionally prefer 95% confidence before concluding that a particular variable (administration of a new drug, or, in this case, a policy initiative) truly affects outcome. However, selective referral could be construed as a social policy, and the criteria for making social decisions vary depending on the perceived consequences of failing to make the right decision. For instance, in American criminal courts, conviction of a defendant should only occur when guilt is established “beyond a reasonable doubt” (presumably a standard greater than 95% confidence).
In American civil courts, however, rulings are made against the defendant if the jury finds the defendant “more likely than not” to be guilty (a standard requiring only approximately 51% confidence). Thus, in social decisions in the United States, there is substantial variation in the proof required to initiate an action. Purchasers, not bound by academic tradition, may very well prefer some standard other than 95% confidence, but public discourse on this topic has been essentially nonexistent.

A related issue is the scientific characteristics of the studies on which the decision to selectively refer is based. Specialty societies and consensus panels, when making recommendations about the adoption of technologies or establishing clinical guidelines, frequently comment on the “strength” of the evidence supporting a particular recommendation. These grades traditionally reflect primarily study design (with randomized interventional trials given greater weight than nonrandomized interventional trials or observational studies) and sample size, but may include other considerations such as recruitment bias and loss to follow-up.29,30 Similar considerations are relevant for instituting selective referral, which is essentially a change in clinical process. However, since most studies of institutional quality are observational, it will be especially important to identify potential sources of bias, particularly with respect to referral of sicker patients to one type of institution or another.

In terms of the level of clinical benefit required to institute selective referral, purchasers should consider whether mortality is the only outcome of interest. For conditions with low overall mortality, differences in mortality may be difficult to demonstrate. However, common complications or processes of care could be easier to evaluate.
To continue with the angioplasty example, many investigators have been unable to identify a statistically significant relationship between volume and mortality, but do find that the rate of “death or emergency CABG or myocardial infarction” falls with increased volume.31
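The statistical advantage of composite endpoints can be made concrete with a standard two-proportion sample size calculation (Fleiss-style normal approximation, 80% power). The event rates below are purely illustrative assumptions, not figures from the studies cited:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size to detect a difference between
    event rates p1 and p2 with a two-sided two-proportion z-test at
    95% confidence and 80% power (textbook normal-approximation formula)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Mortality alone (hypothetical 1.0% vs 1.5%) versus a composite endpoint
# such as "death or emergency CABG or MI" (hypothetical 4% vs 6%).
n_mortality = n_per_group(0.010, 0.015)
n_composite = n_per_group(0.040, 0.060)
```

Because the composite endpoint occurs several times more often, detecting a proportionally similar difference requires far fewer patients per hospital, which is one reason investigators find volume effects on composite outcomes when mortality differences alone are not statistically significant.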

TABLE 3 Practical and Political Barriers to Selective Referral
- Access to Preferred Hospital in Rural Areas
- Accommodating Patient Preferences for Care Close to Home
- Ensuring Safe Patient Transfer
- Assuring Capacity at Preferred Hospitals
- Provider Acceptance of Proxies for Risk Adjusted Outcome Measures of Quality
- Health Plan Opposition
- Possible Financial Dissolution of Hospitals

It may be that these two issues must be considered together. That is, if there appears to be a mortality benefit, the required level of statistical confidence may be lower than for outcomes of less clinical significance.

POTENTIAL BARRIERS TO SELECTIVE REFERRAL

Table 3 lists both practical and political barriers to the implementation of selective referral. From a practical standpoint, access to preferred hospitals may be difficult, particularly for patients in rural areas. Patient preferences to stay near home, even if this involves using a lower quality hospital, should remain paramount. This necessitates some reporting mechanism so that health plans and medical groups bound by selective referral policies are not penalized by patient decisions. Clinical issues that must be addressed prior to initiation of selective referral include the determination that patients can be transferred safely and that preferred hospitals actually have the capacity to accept the new patients.

Those doctors and hospitals who might lose patients under selective referral are likely to lobby against such initiatives. Their arguments will probably be both technical (especially if an indirect indicator of quality, such as volume, is used) and clinical, focused on the contributions of the less preferred hospitals in other areas that may become collateral damage of a selective referral initiative (e.g., the need to close an emergency room because of loss of profitable cardiac patients).
Health plans may also fight selective referral because it involves another reporting requirement and limits their freedom to negotiate with all hospitals in a market.

ALTERNATIVE STRATEGIES TO IMPROVE QUALITY OF CARE

Table 4 lists several options available to purchasers interested in improving the quality of care. Selective referral is just one such approach, and before adopting it purchasers should consider all alternative strategies. We will discuss regulatory options only briefly, because these approaches have mainly fallen out of favor.27,32 Regulatory options are not within the power of the purchaser to enact directly, but purchasers could choose to lobby for regulation or legislation. Options targeting health care facilities include mandated regionalization and certificate of need legislation. There have been only a few instances of mandated regionalization programs in the United States, though regionalization is more common in Europe. The federal Regional Medical Program of the 1960s was an attempt to mandate regionalization. However, the act offered no structured blueprint for regionalization, instead leaving each region to develop its own plans.33

TABLE 4 Alternative Strategies to Improve Quality of Care
Regulatory Strategies:
- Regulations Targeting Facility Behavior (e.g., certificate of need, regionalization)
- Regulations Targeting Professional Behavior (e.g., scope of practice limitations)
Competition-Based Strategies (produce quality data to stimulate improvement):
- Consumer-Oriented (e.g., report cards, inclusion in the informed consent process)
- Professional-Oriented (e.g., physician education)
- Purchaser Initiatives (e.g., quality withholds or bonuses, selective referral, direct contracting)

Regionalization was pursued primarily in perinatal care and emergency medical services, especially trauma care. Two controlled studies of perinatal regionalization showed no significant improvement in mortality,34,35 though one study did show that the areas with regionalization had lower morbidity.35 Passage of the Emergency Medical Services Act stimulated regionalization of trauma care systems, and early studies suggested reductions in trauma mortality within five years.36-38 However, these studies used historical controls, making it difficult to determine the extent to which reductions in mortality reflected regionalization vs. secular trends in improved trauma care nationally.37 A recent cross-sectional analysis controlling for trends in trauma mortality in areas without regionalization suggests that regionalization does, in fact, lower mortality, but the benefits develop only as the system matures over periods as long as a decade.39

Certificate of need (CON) legislation was developed as a regulatory mechanism at the state level for review and approval of capital expenditures and service capacity expansions by health care facilities.
While CON laws could, in theory, improve access (if used to ensure capacity is well distributed) and prevent waste, they could also limit access (and worsen outcomes) if applied too stringently. In the only study examining the effects of CON laws on hospital mortality rates, states that had more stringent laws had higher risk adjusted mortality rates among Medicare inpatients.40 However, the risk adjustment methodology used was limited to age, sex, and a small number of comorbid conditions obtained from administrative files, so this finding may not be robust.

Scope of practice laws limit the types of clinical activity in which providers can engage, and hence effectively limit the provision of certain services to specific types of providers. In many states, for instance, optometrists can prescribe corrective lenses, but not medications. This effectively restricts the provision of prescriptions for eye medications to physicians, but few states limit this specifically to ophthalmologists. In theory, one could further limit scope of practice in the name of quality (e.g., if data suggested that board certified ophthalmologists had better outcomes than generalists in caring for patients with glaucoma), but we are aware of no such programs.

While the use of regulation to control quality has declined, programs to use physician, hospital, or health plan performance reports to improve quality have become more common over the last decades, and we focus the remainder of the paper on these. In most cases, proponents of these initiatives expect quality improvements to be mediated through competition.41 Unfortunately, there have been few evaluations of these initiatives to guide purchasers in the selection of a strategy. In addition, almost all studies that have been done use historical controls,41-44 and the few studies that do not are limited by lack of risk adjustment45 or significant volume differences between types of hospitals.23 Specific examples will be discussed below, but we first briefly review the positive and negative features of various strategies categorized by target audience.

Consumer-oriented strategies, such as the publication of hospital or health plan report cards, require no intervention by the purchaser into clinical processes. However, report cards can be difficult for consumers to understand and, where they have been published so far, appear to have been used by only a small percentage of consumers.10,46,47 Approaches that target information to clinicians or medical associations—such as offering educational conferences in which the quality of specific hospitals or types of hospitals is discussed or asking medical associations to certify hospitals—offer the benefit of a greater probability that the recipient audience will understand the data. However, such a strategy will inevitably create conflicts of interest for clinicians at lower quality hospitals, because they risk losing patients by referring them to other institutions. While we believe that altruism can be expected of most clinicians, others will be unwilling to participate. Inclusion of a discussion of the quality of alternative hospitals in the informed consent process would combine informing consumers with clinician-directed approaches. When properly carried out, this tactic is consistent with goals of patient education and empowerment; however, it is also dependent on clinician altruism and vulnerable to the method of presentation of the data by the provider.
Finally, there are other methods in addition to selective referral that focus primarily on purchaser-health plan relationships. For example, purchasers could offer quality withholds or bonuses for health plans that can document excellent risk adjusted outcomes, without requiring the use of preferred hospitals.48,49 This option allows health plans more contracting and implementation flexibility. However, because it is a less direct attempt to get patients to the best hospitals, it might also mean slower improvement in clinical results.

INFLUENCE OF THE POLICY CONTEXT ON CHOICE OF STRATEGY

Both the choice of a definition of quality—Does one use risk adjusted outcomes? Process measures? Indirect markers like volume?—and the selection of an implementation strategy from among those listed in Table 4 will be heavily influenced by the policy context in which these decisions are made. In particular, the availability of different types of quality data, the impact a quality initiative might have on market structure, and the willingness of various stakeholders to support quality programs are important.

In most states, hospital licensing and discharge databases already exist and can be used to identify teaching or high volume hospitals. Currently, 42 states collect inpatient data, with 30 of those also collecting outpatient data. All states collect at least five discharge diagnoses and procedures; many collect up to 25.50 However, since hospital outcomes and volume may vary from year to year, policymakers (including purchasers) would be in a better position to categorize hospitals if data from several years were acquired. For a few conditions in a few states, risk adjusted mortality rates by hospital are available. New York, Pennsylvania, and California, for instance, collect the clinical information necessary to calculate risk adjusted hospital-specific CABG mortality rates.
California's system, however, is in its first year, so clear trends in outcome are not yet discernible. Few states have developed reliable reporting of complications or other morbidity, in part because it is not possible to distinguish, from among the list of discharge diagnoses, conditions that were present on admission (which should be considered risk factors for bad outcomes) from those that developed during the admission (which should be considered complications or bad outcomes). California has just begun to experiment with an indicator that diagnoses were present at admission, but preliminary evaluations by the Office of Statewide Health Planning and Development suggest that coding is inconsistent. The ability to measure other health outcomes, such as quality of life and time to return to work, would likely increase consumers', purchasers', and policymakers' interest in performance reports. However, to our knowledge, there is no source for such data at this time.

Expected changes in marketplace behavior and competition should also be considered. If one strategy creates an imbalance of market power, policymakers may be more interested in other alternatives. For example, if a proposal to selectively refer patients with a particular condition only to major teaching hospitals would create markets in which plans could contract with only a single hospital (or in which the nearest preferred hospital was very distant), it might be preferable to develop risk adjusted mortality reporting instead—even at substantial administrative cost. In general, the extent to which local clinicians and hospitals are willing to devote resources to quality improvement will also influence one's choice among initiatives that vary in terms of the administrative demands placed on hospitals and plans.

IMPLEMENTATION CONSIDERATIONS

The barriers to implementation vary with the strategy used to introduce quality-based competition, but we will focus on overcoming barriers to selective referral.
Since much of the difficulty with selective referral stems from the need to move some patients away from their families and primary providers, programs to encourage selective referral should minimize the impact of distance on patients. If the clinical benefits are significant, it may be worthwhile to pay for family members to accompany patients if that will increase patient compliance with referrals. Similarly, reducing the administrative demands on patients and clinicians associated with changing hospitals may increase acceptance of the program. Finally, it may be prudent to focus initial efforts in areas where success is more likely, such as urban areas with dense concentrations of both patients and hospitals. This will reduce travel distance and facilitate communication between the referral hospitals and primary providers.

In undertaking a program that introduces fundamental changes into the health care market, it will be important to be flexible and to recognize the limitations of the various selective referral strategies as well. For example, for some conditions with low overall volume (e.g., esophageal cancer), it may be impossible to selectively refer based on direct measurement of quality. In this case, selective referral based on minimum volume standards may be optimal. On the other hand, for conditions in which high volume has been shown to improve outcome and for which case loads are large enough to support outcomes measurement (e.g., CABG, in which most hospitals do hundreds of cases a year), it may be possible to offer low volume hospitals a choice: accept selective referral based on volume or agree to participate in direct measurement of outcomes. Matching strategies to the clinical characteristics of different conditions will seem more reasonable to clinicians and hospitals than choosing a universal approach.
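The matching logic just described can be sketched as a simple decision rule. The function and the 100-cases-per-year threshold below are hypothetical illustrations of the idea, not a prescription from the paper:

```python
def referral_basis(typical_annual_cases, outcome_measurement_threshold=100):
    """Hypothetical rule: for high-caseload conditions (e.g., CABG),
    base selective referral on directly measured outcomes; for rare
    procedures (e.g., esophagectomy), fall back to minimum volume
    standards. The threshold of 100 cases/year is an assumption."""
    if typical_annual_cases >= outcome_measurement_threshold:
        return "direct outcome measurement"
    return "minimum volume standard"

# CABG-like caseloads versus esophageal cancer surgery-like caseloads
cabg_basis = referral_basis(300)
esophagectomy_basis = referral_basis(3)
```

The point of encoding the rule is that the basis for referral is chosen condition by condition rather than globally, which is what the text argues clinicians and hospitals will find more reasonable.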

EXAMPLES OF THE USE OF QUALITY INFORMATION TO INFLUENCE REFERRAL PATTERNS

Though not yet commonplace, there have been several initiatives to collect and utilize quality information to improve care (Table 5). The organizations sponsoring these projects are primarily state governments, private purchasers, and the Health Care Financing Administration (HCFA), but companies that sell quality data or provide it on the Internet and profit from associated advertising also exist. Most of the programs involve the use of HEDIS, patient satisfaction surveys, or the collection and publication of hospital-specific risk adjusted mortality rates. We will discuss these initiatives in terms of their sponsors, but in Table 5 we sort the programs discussed below by the type of quality information produced, the target audience, and whether the target audience is also given financial incentives to select high quality organizations.

There are several initiatives run by states. Since 1989, the New York State Department of Health has had a Cardiac Surgery Reporting System (CSRS) through which it collects clinical data on all patients undergoing CABG. The Department reports to hospitals both risk adjusted mortality rates and lists of preoperative risk factors that are significantly related to mortality. In addition, risk adjusted mortality rates for each hospital and surgeon that performs CABG are released in public reports. In each of the first four years after CSRS was introduced, actual and risk adjusted mortality decreased. Actual mortality decreased from 3.52% in 1989 to 2.78% in 1992, a 21% reduction.
Risk adjusted mortality decreased 41%, from 4.17% in 1989 to 2.45% in 1992.42 It is unknown whether public reporting of quality was the sole or predominant cause of the mortality reduction, as there was a simultaneous nationwide trend toward lower CABG mortality.51 However, a comparison of the reduction in mortality in New York to national trends (done without risk adjustment) did suggest that mortality fell faster in New York.45 If there is an effect of public reporting, it is also impossible to know how much of this effect comes from reporting hospital data and how much is associated with reporting individual surgeon performance.

Pennsylvania also developed a CABG mortality report in the early 1990s. Risk adjusted mortality dropped from 3.9% to 2.9% (a 26% reduction) between 1990 and 1993.52 In 1994–1995, the decrease in mortality was accompanied by a 3.9% decline in charges for CABG from the previous year, even though the number of CABG surgeries rose 25% from 1991 through 1995.53 Interestingly, though intended for consumers, the report has been more useful to hospitals and providers in improving mortality outcomes than to consumers looking for high quality providers. Only 12% of patients undergoing CABG were aware of the report prior to surgery.10

California produces annual hospital mortality reports for myocardial infarction (MI) and is developing reports for CABG, pneumonia, maternal outcomes, hip fracture, and intensive care unit (ICU) patients. To date, the MI reports have not been used extensively by hospitals54 or purchasers to improve quality. Missouri has an obstetrics quality public reporting system that includes mortality, morbidity, and patient satisfaction measures. Longo et al. found that the implementation of this system was associated with improvements in all measures, but this study used historical controls.44

Private purchasers across the country also have begun to collect quality data.
The 1998 National Business Coalition on Health Survey found that many business coalitions gather data on quality of care, incorporate financial incentives for performance into purchase contracts, or collaborate with plans or providers on continuous quality improvement efforts.49 Having acquired quality data, some coalitions have attempted to stimulate quality-based competition.
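The percentage reductions quoted above for New York and Pennsylvania follow directly from the published rates; a quick check of the arithmetic:

```python
def relative_reduction(before, after):
    """Percent reduction from an initial rate to a later rate."""
    return 100.0 * (before - after) / before

# New York CSRS, 1989 to 1992
print(round(relative_reduction(3.52, 2.78)))  # 21 (actual mortality)
print(round(relative_reduction(4.17, 2.45)))  # 41 (risk adjusted mortality)
# Pennsylvania, 1990 to 1993
print(round(relative_reduction(3.9, 2.9)))    # 26 (risk adjusted mortality)
```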

TABLE 5 Alternative Strategies to Introduce Quality Information as a Determinant of Market Behavior, with Recent Examples

Clinical Outcomes—Mortality
  Information only, to patients: CA, Cleve, DFW, HCFA/Mort, HlthGrade, MO, NY, PA, U.S. News
  Information only, to physicians, hospitals, or health systems: CA, DFW, HCFA/Mort, HlthGrade, MO, NY, PA, U.S. News
  Information only, to health plans: CA, HCFA/Mort, HlthGrade, MO, NY, PA
  Information and financial incentives, to physicians, hospitals, or health systems: Cleve, HCFA/CtrsExcel, HCFA/Heart
  Information and financial incentives, to health plans: PBGH/EBHR for CABG

Clinical Outcomes—Morbidity or Complications
  Information only, to patients: CA, Cleve, DFW, HEDIS, HlthGrade, MO, St L
  Information only, to physicians, hospitals, or health systems: CA, DFW, HEDIS, MO
  Information only, to health plans: CA, HEDIS, MO
  Information and financial incentives, to patients: Digital, GM, GTE
  Information and financial incentives, to physicians, hospitals, or health systems: Cleve, HCFA/CtrsExcel, HCFA/Heart
  Information and financial incentives, to health plans: Digital, GM, GTE, St L

Patient Satisfaction
  Information only, to patients: BHCAG, Cleve, Chicago, CAHPS, DFW, HEDIS, MO
  Information only, to physicians, hospitals, or health systems: CAHPS, DFW, HEDIS, MO
  Information only, to health plans: CAHPS, HEDIS, MO
  Information and financial incentives, to patients: Digital, GM, GTE
  Information and financial incentives, to physicians, hospitals, or health systems: Cleve, HCFA/Heart
  Information and financial incentives, to health plans: Digital, GM, GTE

Process Measures
  Information only, to patients: HEDIS, St L
  Information only, to physicians, hospitals, or health systems: HEDIS
  Information only, to health plans: HEDIS
  Information and financial incentives, to patients: Digital, GM, GTE
  Information and financial incentives, to physicians, hospitals, or health systems: HCFA/Heart, HCFA/CtrsExcel
  Information and financial incentives, to health plans: Digital, GM, GTE, St L

Structural Measures
  Information only, to patients: U.S. News
  Information only, to physicians, hospitals, or health systems: U.S. News
  Information and financial incentives, to physicians, hospitals, or health systems: HCFA/CtrsExcel

Hospital Volume
  Information only, to patients: PBGH/Hlthscope, HlthGrade
  Information and financial incentives, to physicians, hospitals, or health systems: HCFA/CtrsExcel
  Information and financial incentives, to health plans: Leap, GM, PBGH/EBHR for 6 non-CABG procedures

NOTE: (1) In compiling this table, we considered a Caesarean section procedure a complication or morbidity, not a process measure. (2) When a purchaser discounted prices to employees of higher quality plans, this was considered a financial incentive to both patients and plans.

LEGEND: BHCAG = Buyers Health Care Action Group; CA = California Hospital Outcomes Project; CABG = coronary artery bypass grafting; Chicago = Chicago Business Group on Health; Cleve = Cleveland Health Quality Choice; CAHPS = Consumer Assessment of Health Plans Study; DFW = Dallas/Fort Worth Business Group on Health; Digital = Digital Equipment Corp. health benefits plan; GM = General Motors health benefits plan; GTE = GTE, Inc., health benefits plan; HCFA/CtrsExcel = Health Care Financing Administration Centers of Excellence Program; HCFA/Heart = Health Care Financing Administration heart transplantation approval process; HCFA/Mort = Health Care Financing Administration hospital mortality reports; HEDIS = Health Plan Employer Data and Information Set 3.0; HlthGrade = HealthGrades.com web site; Leap = The Leapfrog Group; MO = Missouri Obstetrics Reporting Program; NY = New York State Cardiac Surgery Reporting System; PBGH/Hlthscope = Pacific Business Group on Health's Healthscope web site; PBGH/EBHR = Pacific Business Group on Health Evidence-Based Hospital Referral Program; PA = Pennsylvania Health Care Cost Containment Council; St L = Gateway Purchasing Association; U.S. News = U.S. News and World Report's Top 100 Hospitals.

One of the oldest such programs was Cleveland Health Quality Choice (CHQC). In 1989, a group of large employers in the Cleveland area decided to collect hospital quality information and use it in both public reports and direct contracting. Measures collected included risk adjusted hospital mortality for certain conditions, cesarean section rates, and patient satisfaction. As reports came out in the early 1990s, hospital mortality rates declined,43 but again only historical controls were available. CHQC ended in 1999 when some hospitals, unhappy with the accuracy of the quality data (especially ICU mortality measurement) and the data collection burdens, and unconvinced that purchasers were really using the data to direct patients to high quality hospitals, withdrew from the reporting system.55

A hospital reporting system similar to CHQC has been developed by the Dallas-Fort Worth Business Group on Health, but this program only reports the data back to hospitals and to consumers, without any use of financial or volume incentives. There are plans to develop a program to measure physician quality, but this has not been implemented.56 Since 1996, the Chicago Business Group on Health has provided public reports of patient satisfaction with health plans as a demonstration site for the National Committee for Quality Assurance's satisfaction survey. An evaluation is planned but has not been published.56 In Minneapolis-St. Paul, the Buyers Health Care Action Group also provides patient satisfaction data to consumers, and care systems that score well seem to gain market share.57

The Pacific Business Group on Health (PBGH) has several programs in place. On the PBGH HealthScope web site,58 PBGH offers consumers a synopsis of the literature on volume–outcome relationships and lists several conditions for which higher volume is associated with lower mortality.
The site also allows consumers to look at the condition-specific volume for hospitals in their area. PBGH is also attempting to encourage evidence-based hospital referral through its health plan contracting process. With the state of California, PBGH has developed risk adjusted CABG mortality reports and is requiring that its contracting plans report which hospitals are used for patients undergoing CABG. In addition, based on recent reports of the potential to lower mortality in California through referral of patients away from low volume hospitals,18 PBGH is requiring that health plans ensure patients with selected conditions go to high volume hospitals.

Some large employers have acted on their own to stimulate quality. General Motors (GM), GTE, and Digital Equipment Corporation all discount the premiums employees must pay if they choose higher quality plans from among those the company offers (based on HEDIS scores and patient satisfaction surveys).56 This provides financial incentives both to patients in the form of lower prices and to plans in the form of higher market share. These programs have not been formally evaluated for their impact on quality of care. The Leapfrog Group, a newly formed organization comprising many large employers (GM, Ford, GTE, GE, and others) and purchasing coalitions (including PBGH), is also pursuing evidence-based hospital referral. This organization has developed a set of health plan performance standards that include volume standards for specific conditions, but is also considering exemptions from volume standards if hospitals agree instead to participate in risk adjusted mortality measurement.

We are aware of only one study of the impact of a selective referral program. In 1986, HCFA created a heart transplantation program for Medicare beneficiaries.
This involved a transplant facility approval process that included a volume standard—performing at least 500 cardiac catheterizations and 250 cardiac surgeries (of all types, not just transplants) annually. Other standards involved patient selection and management processes, resources available, reporting requirements, and survival rates.59 In a study of heart transplants done at approved versus non-approved centers, the probability of death at a Medicare-approved center was 7.0 ± 0.4% at 30 days and 16.2 ± 0.6% at one year post transplantation. For non-approved centers, the probability of death was 9.2 ± 0.4% at 30 days and 19.2 ± 0.6% at one year (p < .05 for both).23 This suggests that selective referral for specific procedures can improve outcomes if the selection criteria are legitimate measures of quality.

HCFA has had several other programs aimed at improving quality. In the 1980s, HCFA released hospital-specific mortality rates among Medicare patients. Hospital executives were skeptical about the accuracy and usefulness of the HCFA reports,60 and consumers and purchasers appear to have paid little attention. A study of New York hospitals found no effect of mortality on occupancy rates,61 while a national study found a statistically significant but very small effect on utilization.62 HCFA also has a Centers of Excellence program in which hospitals apply to receive this designation for CABG or orthopedic surgery. In considering applications, HCFA uses a volume threshold, but also considers each hospital's outcomes and evaluates institutional processes and structure.63 At this time, no studies of the impact of this program have been published in the peer-reviewed literature.

Some companies are discovering commercial uses for health care quality information. HealthGrades has created a web site that evaluates hospitals, physicians, health plans, and HMOs.64 In ranking hospitals, HealthGrades uses the HCFA Centers of Excellence demonstration program minimum volume requirements for cardiac surgery and orthopedic surgery.
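The significance claim for the transplant comparison can be checked from the published figures. Assuming the ± values are standard errors of the rates, a simple z comparison of independently reported proportions reproduces p < .05; this is our back-of-envelope check, not the analysis performed in reference 23.

```python
import math

def z_statistic(rate_a, se_a, rate_b, se_b):
    """z for the difference between two independently reported rates,
    treating the published +/- values as standard errors (an assumption)."""
    return abs(rate_b - rate_a) / math.sqrt(se_a**2 + se_b**2)

# 30-day mortality: approved 7.0 +/- 0.4% vs. non-approved 9.2 +/- 0.4%
print(round(z_statistic(7.0, 0.4, 9.2, 0.4), 1))    # 3.9, well beyond 1.96
# One-year mortality: approved 16.2 +/- 0.6% vs. non-approved 19.2 +/- 0.6%
print(round(z_statistic(16.2, 0.6, 19.2, 0.6), 1))  # 3.5
```

Both z values exceed the 1.96 threshold for two-sided significance at the .05 level, consistent with the reported p < .05.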
For the other conditions, HealthGrades.com classifies the hospitals within the lowest quartile with respect to volume as “lower volume hospitals.” HealthGrades then uses HCFA discharge data to calculate condition-specific risk adjusted mortality rates. The data are available to consumers for free, and the site is supported by advertising, only some of which is for health-related products. U.S. News and World Report has been publishing ratings of hospitals since 1991. The rating process includes mortality measurements, structural measures, and hospital volume.65 It purports to include process measures as well, but these are based on reputational surveys. We are aware of no evaluation of the impact of these reports on outcomes.

FUTURE DIRECTIONS

Much research is needed to better understand how best to use quality data to improve health care. The two primary needs are to determine how best to use quality information once it is generated66 and how physicians and hospitals with better outcomes achieve these results.67 As a first step in the evaluation of the use of quality information, studies of the many initiatives shown in Table 5 should be performed. Specific questions include whom to target for dissemination of quality data and whether information alone is enough or financial incentives are necessary. An important issue is whether all patients should receive the same data or whether, for example, patients with chronic diseases might be interested in different information than the general population. In performing these analyses, investigators should attempt to find better controls than the typical “before and after” patient groups. One health plan-oriented approach would be to analyze whether, as the plan implements, for example, selective referral to high volume hospitals for a cancer surgery, the plan-level mortality rate falls faster than the contemporaneous state-wide mortality rate for the same procedure.
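The plan-versus-statewide comparison suggested above is a difference-in-differences design. A minimal sketch, with all mortality rates invented purely for illustration:

```python
def difference_in_differences(plan_before, plan_after, state_before, state_after):
    """Change in the plan's mortality rate minus the contemporaneous
    statewide change; a negative value suggests the plan's referral
    program outpaced the secular trend."""
    return (plan_after - plan_before) - (state_after - state_before)

# Hypothetical rates (%): plan falls 4.0 -> 2.8 while the state falls 4.1 -> 3.7
print(round(difference_in_differences(4.0, 2.8, 4.1, 3.7), 1))  # -0.8
```

The 0.8 percentage-point extra decline is what the statewide trend alone cannot explain; a full analysis would of course also need risk adjustment and a significance test.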
Understanding why high performers do well will also be critical. This strategy can be applied whether physicians or hospitals are identified as superior based on direct measurement of quality
or based on indirect markers like volume. Initial steps to do this have focused on whether high and low performers differ in processes that have been shown by randomized controlled trials to improve outcomes, and this is indeed an important question. However, investigators should also consider the possibility that better outcomes are due to differences in process that have not yet been studied in randomized trials. As the determinants of higher quality are identified, it will be important to assess how easily they can be transferred to institutions with lower quality. For instance, if cancer centers with better surveillance procedures for chemotherapy-related bone marrow suppression have lower mortality, these surveillance protocols can easily be shared with other centers (since they involve only blood tests and diligence). If, alternatively, it appears that the cancer centers with fewer marrow suppression-related fatalities are those that have full-time, on-site, board certified hematologists, it may be difficult for other centers to have enough clinical activity to support that level of hematology coverage. Answering questions like these will be necessary to maximize the quality gain for any given investment in health care improvement.

CONCLUSIONS

The clinical impact of efforts to introduce quality as a basis of competition in the marketplace will depend greatly on the ability of purchasers or policymakers to choose a strategy appropriate to a given location and to address the implementation barriers described above. In theory, referral based on actual outcomes and processes or public reporting of quality data is preferable to referral based on indirect indicators of quality like hospital volume. However, selective referral based on volume is currently much easier to achieve.
Introducing selective referral based on volume as a first step may create an impetus for developing databases that can foster condition-specific outcome or process measurement. Further research is needed to allow comparison of the costs and benefits of the various strategies listed in Table 4 and Table 5. Regardless of the strategy selected, any quality improvement program should include an evaluation component to determine that the estimates of clinical benefit on which the choice of strategy was based were accurate. In addition, an assessment of which patients were transferred—and which were not, and why—must be performed to ensure that the program is implemented optimally and the benefits of referral are distributed fairly. Finally, the presence of significant barriers to selective referral and other approaches designed to introduce quality competition implies that the long-term success of such initiatives will depend on ongoing support from purchasers and policymakers.

REFERENCES

1. Burns LR, Wholey DR. The effects of patient, hospital, and physician characteristics on length of stay and mortality. Med Care. 1991; 29:251–71.
2. Edwards WH, Morris JA, Jr., Jenkins JM, Bass SM, MacKenzie EJ. Evaluating quality, cost-effective health care. Vascular database predicated on hospital discharge abstracts. Ann Surg. 1991; 213:433–8.
3. Ellis SG, Weintraub W, Holmes D, Shaw R, Block PC, King SB, 3rd. Relation of operator volume and experience to procedural outcome of percutaneous coronary revascularization at hospitals with high interventional volumes. Circulation. 1997; 95:2479–84.
4. Glasgow RE, Mulvihill SJ. Hospital volume influences outcome in patients undergoing pancreatic resection for cancer. West J Med. 1996; 165:294–300.
5. Hosenpud JD, Breen TJ, Edwards EB, Daily OP, Hunsicker LG. The effect of transplant center volume on cardiac transplant outcome. A report of the United Network for Organ Sharing Scientific Registry. JAMA. 1994; 271:1844–9.

6. Patti MG, Corvera CU, Glasgow RE, Way LW. A hospital's annual rate of esophagectomy influences the operative mortality rate. J Gastrointest Surg. 1998; 2:186–92.
7. Hannan EL, Kilburn H, Jr., Bernard H, O'Donnell JF, Lukacik G, Shields EP. Coronary artery bypass surgery: the relationship between inhospital mortality rate and surgical volume after controlling for clinical risk factors. Med Care. 1991; 29:1094–107.
8. Miller RH, Luft HS. Managed care plan performance since 1980. A literature analysis. JAMA. 1994; 271:1512–9.
9. Chernew M, Scanlon DP. Health plan report cards and insurance choice. Inquiry. 1998; 35:9–22.
10. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998; 279:1638–42.
11. McLaughlin CG. Health care consumers: choices and constraints. Med Care Res Rev. 1999; 56:24–59; discussion 60–6.
12. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists. N Engl J Med. 1996; 335:251–6.
13. Langreth R. Health-Care Costs Rise for Employers, According to Poll. Wall Street Journal. September 8, 1997, B2.
14. Mundinger MO, Kane RL, Lenz ER, et al. Primary care outcomes in patients treated by nurse practitioners or physicians: a randomized trial. JAMA. 2000; 283:59–68.
15. Murray-Garcia JL, Selby JV, Schmittdiel J, Grumbach K, Quesenberry CP, Jr. Racial and ethnic differences in a patient survey: patients' values, ratings, and reports regarding physician primary care performance in a large health maintenance organization. Med Care. 2000; 38:300–10.
16. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, D.C.: Institute of Medicine; 1999.
17. PBGH. Two New PBGH Initiatives to Make Provider Quality Count More. Pacific Currents. Fall, 1998, 3; 5.
18. Dudley RA, Johansen KL, Brand R, Rennie DJ, Milstein A. Selective referral to high-volume hospitals: estimating potentially avoidable deaths. JAMA. 2000; 283:1159–66.
19. Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. Results of a regional study. JAMA. 1997; 278:485–90.
20. Phibbs CS, Bronstein JM, Buxton E, Phibbs RH. The effects of patient volume and level of care at the hospital of birth on neonatal mortality. JAMA. 1996; 276:1054–9.
21. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989; 321:1720–5.
22. Kuhn EM, Hartz AJ, Krakauer H, Bailey RC, Rimm AA. The relationship of hospital ownership and teaching status to 30- and 180-day adjusted mortality rates. Med Care. 1994; 32:1098–108.
23. Krakauer H, Shekar SS, Kaye MP. The relationship of clinical outcomes to status as a Medicare-approved heart transplant center. Transplantation. 1995; 59:840–6.
24. Wennberg DE, Lucas FL, Birkmeyer JD, Bredenberg CE, Fisher ES. Variation in carotid endarterectomy mortality in the Medicare population: trial hospitals, volume, and patient characteristics. JAMA. 1998; 279:1278–81.
25. Hofer TP, Hayward RA. Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996; 34:737–53.
26. Zalkind DL, Eastaugh SR. Mortality rates as an indicator of hospital quality. Hosp Health Serv Adm. 1997; 42:3–15.
27. Phillips KA, Luft HS. The policy implications of using hospital and physician volumes as “indicators” of quality of care in a changing health care environment. Int J Qual Health Care. 1997; 9:341–8.
28. Ryan TJ, Antman EM, Brooks NH, et al. 1999 update: ACC/AHA guidelines for the management of patients with acute myocardial infarction. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Acute Myocardial Infarction). J Am Coll Cardiol. 1999; 34:890–911.

29. Hayward RS, Wilson MC, Tunis SR, Bass EB, Guyatt G. Users' guides to the medical literature. VIII. How to use clinical practice guidelines. A. Are the recommendations valid? The Evidence-Based Medicine Working Group. JAMA. 1995; 274:570–4.
30. Wilson MC, Hayward RS, Tunis SR, Bass EB, Guyatt G. Users' guides to the medical literature. VIII. How to use clinical practice guidelines. B. What are the recommendations and will they help you in caring for your patients? The Evidence-Based Medicine Working Group. JAMA. 1995; 274:1630–2.
31. Jollis JG, Peterson ED, Nelson CL, et al. Relationship between physician and hospital coronary angioplasty volume and outcome in elderly patients. Circulation. 1997; 95:2485–91.
32. Chassin MR. Assessing strategies for quality improvement. Health Aff (Millwood). 1997; 16:151–61.
33. Marston RQ, Yordy K. A nation starts a program: Regional Medical Programs, 1965–1966. J Med Educ. 1967; 42:17–27.
34. McCormick MC, Shapiro S, Starfield BH. The regionalization of perinatal services. Summary of the evaluation of a national demonstration program. JAMA. 1985; 253:799–804.
35. Siegel E, Gillings D, Campbell S, Guild P. A controlled evaluation of rural regional perinatal care: impact on mortality and morbidity. Am J Public Health. 1985; 75:246–53.
36. Mullins RJ, Veum-Stone J, Hedges JR, et al. Influence of a statewide trauma system on location of hospitalization and outcome of injured patients. J Trauma. 1996; 40:536–45; discussion 545–6.
37. Mullins RJ, Veum-Stone J, Helfand M, et al. Outcome of hospitalized injured patients after institution of a trauma system in an urban area. JAMA. 1994; 271:1919–24.
38. Stewart TC, Lane PL, Stefanits T. An evaluation of patient outcomes before and after trauma center designation using Trauma and Injury Severity Score analysis. J Trauma. 1995; 39:1036–40.
39. Nathens AB, Jurkovich GJ, Cummings P, Rivara FP, Maier RV. The effect of organized systems of trauma care on motor vehicle crash mortality. JAMA. 2000; 283:1990–4.
40. Shortell SM, Hughes EF. The effects of regulation, competition, and ownership on mortality rates among hospital inpatients. N Engl J Med. 1988; 318:1100–7.
41. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000; 283:1866–74.
42. Hannan EL, Kilburn H, Jr., Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994; 271:761–6.
43. Rosenthal GE, Quinn L, Harper DL. Declines in hospital mortality associated with a regional initiative to measure hospital performance. Am J Med Qual. 1997; 12:103–12.
44. Longo DR, Land G, Schramm W, Fraas J, Hoskins B, Howell V. Consumer reports in health care. Do they make a difference in patient care? JAMA. 1997; 278:1579–84.
45. Peterson ED, DeLong ER, Jollis JG, Muhlbaier LH, Mark DB. The effects of New York's bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998; 32:993–9.
46. Hibbard JH, Jewett JJ. Will quality report cards help consumers? Health Aff (Millwood). 1997; 16:218–28.
47. Wicks EK, Meyer JA. Making report cards work. Health Aff (Millwood). 1999; 18:152–5.
48. Fraser I, McNamara P, Lehman GO, Isaacson S, Moler K. The pursuit of quality by business coalitions: a national survey. Health Aff (Millwood). 1999; 18:158–65.
49. Schauffler HH, Brown C, Milstein A. Raising the bar: the use of performance guarantees by the Pacific Business Group on Health. Health Aff (Millwood). 1999; 18:134–42.
50. NAHDO, MEDSTAT. The National Association for Health Data Organizations, The MEDSTAT Group. Statewide Encounter-Level Inpatient and Outpatient Data Collection Activities, Summary Report. Salt Lake City, UT: Agency for Health Care Policy and Research; 1999.
51. Grover FL. The Society of Thoracic Surgeons National Database: current status and future directions. Ann Thorac Surg. 1999; 68:367–73.
52. Moore JD, Jr. The public eye. Published outcomes reports effect change at Pa. hospitals. Mod Healthc. 1997; 27:140, 155.

53. Moore JD, Jr. Real results. Pa. report gives consumers outcomes data they can use. Mod Healthc. 1998; 28:46.
54. Luce JM, Thiel GD, Holland MR, Swig L, Currin SA, Luft HS. Use of risk-adjusted outcome data for quality improvement by public hospitals. West J Med. 1996; 164:410–4.
55. Burton TM. Operation That Rated Hospitals Was Success, But the Patience Died. Wall Street Journal. August 23, 1999, A1.
56. Meyer J, Rybowski L, Eichler R. Theory and Reality of Value-Based Purchasing: Lessons from the Pioneers. Rockville, MD: Agency for Health Care Policy and Research; 1997.
57. Christianson J, Feldman R, Weiner JP, Drury P. Early experience with a new model of employer group purchasing in Minnesota. Health Aff (Millwood). 1999; 18:100–14.
58. PBGH. http://www.HealthScope.org. Accessed April 26, 2000.
59. Medicare program: proposed criteria for Medicare coverage of heart transplants. Fed Register. 1986; 51:37164–70.
60. Berwick DM, Wald DL. Hospital leaders' opinions of the HCFA mortality data. JAMA. 1990; 263:247–9.
61. Vladeck BC, Goodwin EJ, Myers LP, Sinisi M. Consumers and hospital use: the HCFA “death list.” Health Aff (Millwood). 1988; 7:122–5.
62. Mennemeyer ST, Morrisey MA, Howard LZ. Death and reputation: how consumers acted upon HCFA mortality information. Inquiry. 1997; 34:117–28.
63. Wilensky GR. From the Health Care Financing Administration. JAMA. 1991; 266:3404.
64. HealthGrades.com. http://www.healthgrades.com. Accessed April 27, 2000.
65. Hill CA, Winfrey KL, Rudolph BA. “Best hospitals”: a description of the methodology for the Index of Hospital Quality. Inquiry. 1997; 34:80–90.
66. Jencks SF. Clinical performance measurement—a hard sell. JAMA. 2000; 283:2015–6.
67. Hannan EL. The relation between volume and outcome in health care. N Engl J Med. 1999; 340:1677–9.