3 Public Disclosure of Data on Health Care Providers and Practitioners

Previous chapters have discussed a wide array of users, uses, and expected benefits of information held by health database organizations (HDOs). Such organizations are presumed to have two major capabilities. One is the ability to amass credible descriptive information and evaluative data on costs, quality, and cost-effectiveness for hospitals, physicians, and other health care facilities, agencies, and providers. The other is the capacity to analyze data to generate knowledge and then to make that knowledge available for purposes of controlling the costs and improving the quality of health care—that is, of obtaining value for health care dollars spent. Another benefit derived from HDOs is the generation of new knowledge by others.

In principle, the goals implied by these capabilities are universally accepted and applauded. In practice, HDOs will face a considerable number of philosophical issues and practical challenges in attempting to realize such goals. The IOM committee characterizes the activities that HDOs might pursue to accomplish these goals as public disclosure. By public disclosure, this committee means the timely communication, or publication and dissemination, of certain kinds of information to the public at large. Such communication may be through traditional print and broadcast media, or it may be through more specialized outlets such as newsletters or computer bulletin boards. The information to be communicated is of two varieties: (1) descriptive facts and (2) results of evaluative studies on topics such as charges or costs and patient outcomes or other
quality-of-care measures. The fundamental aims of such public disclosure, in the context of this study, are to improve the public's understanding about health care issues generally and to help consumers select providers of health care.1 These elements imply that HDOs should be required to gather, analyze, generate, and publicly release such data and information:

- in forms and with explanations that can be understood by the public;
- in such a manner that the public can distinguish actual events (i.e., primary data) from derived, computed, or interpretive information;
- in ways that reveal the magnitude of any differences among providers as well as the likelihood that differences could be the result of chance alone;
- in sufficient detail that all providers can be easily described and compared, not just those at the extremes;
- with descriptions and illustrations of the steps necessary to predict outcomes in the present or future from information relating only to past experience; and
- with statements and illustrations about the need to particularize information for an individual in the final stages of decision making.

Acceptance of HDO activities and products relating to public disclosure will depend in part on the balance struck for fairness to patients, the public in general, payers, and health care providers. Fairness to patients involves protecting their privacy and the confidentiality of information about them, as examined in Chapter 4. Fairness to the public involves distributing accurate, reliable information that is needed to make informed decisions about providers and health care interventions; the broader aims are to promote universal access to affordable and competent health care, enhance consumer choice, improve value for health care dollars expended, and increase the accountability to the public of health care institutions. Fairness to payers may be a subset of this category.
They should receive the information that is available to the public at large, but perhaps in more detail or in a more timely manner. Finally, fairness to providers entails ensuring that data and analyses are reliable, valid, and impartial; it also means that providers are given some opportunity to confirm data and methods before information is released to the public, and offered some means of publishing their perspectives when the information is released.

This chapter deals chiefly with issues relating to trade-offs between fairness to providers and fairness to the public at large (including patients) insofar as public disclosure of information is concerned. The considerations just noted appear simple and noncontroversial on the surface; in the context of real patients, providers, and data, they become technical, complex, and occasionally in conflict. The appendix to this chapter offers a brief illustration of the difficulties that HDOs might face in discharging their duties of fairness to all groups.

1 SEC. 5003 of the HSA (1993) calls for a National Quality Management Council to develop a set of national measures of quality performance to assess the provision of health care services and access to such services. SEC. 5005 (1) requires health alliances annually to publish and make available to the public a performance report outlining in a standard format the performance of each health plan offered in the alliance and the results of consumer surveys conducted in the alliance.

PREVIOUS STUDIES

This report is not the first treatment of issues related to providing health-related information to the public. Marquis et al. (1985) reviewed what was known about informing consumers about health care costs—considered then and now a less difficult challenge than informing them about quality of care—as a means of encouraging them to make more cost-conscious choices. In an extensive literature review, the authors documented the wide gaps in cost (or price) information available to consumers, especially for hospital care.2 They reported evidence that some programs to help certain consumer groups, such as assisting the elderly in purchasing supplemental Medicare coverage, have had salutary effects on the choices people make.
Despite new efforts at that time by employers, insurers, business coalitions, and states to collect and disseminate such information, the authors concluded that "it remains uncertain whether disclosure of information about health care costs will do much to modify consumers' choices of health plans, hospitals, or other health care providers" (p. xii). The authors emphasized that understanding how consumers use information in making health care choices is critical to the design of effective data collection and disclosure interventions but that such basic knowledge was lacking. It is not clear that the knowledge gap has been closed.

2 In all likelihood, people will have more, and be more attentive to, information about their own health insurance plans than about cost or quality information on health care providers. Marquis (1981) studied consumers' knowledge about their health insurance coverage as part of the RAND Corporation Health Insurance Experiment. She determined that, although most families understand some aspects of their insurance policies, many lack detailed knowledge of benefits, especially about coverage of outpatient medical services. Greater exposure to information about an insurance plan, measured by the length of time the family was insured and whether the family had a choice of plans, increased the family's knowledge, which suggests that more experience with information or formal efforts to educate will improve the general level of knowledge. Left unanswered, however, is the question of the extent to which people will act on that knowledge, especially to change insurance plans. These findings raise cautions, therefore, about what actions people might take in response to receipt of quality and cost information.

More recently, the congressional Office of Technology Assessment (OTA, 1988) produced a signal report on disseminating quality-of-care information to consumers. It examined the rationales that lie behind the call for more public information; evaluated the reliability, validity, and feasibility of several types of quality indicators;3 and advanced some policy options that Congress could use to overcome problems with the indicators. Also presented was a strategy for disseminating information on the quality of physicians and hospitals using the following components:

- stimulate consumer awareness of quality of care;
- provide easily understood information on the quality of providers' care;
- present information via many media repeatedly and over long periods of time;
- present messages to attract attention;
- present information in more than one format;
- use reputable organizations to interpret quality-of-care information;
- consider providing price information along with information on the quality of care;
- make information accessible; and
- provide consumers the skills to use and physicians the skills to provide information on quality of care (OTA, 1988, pp. 40-47).

The OTA study did not wholly endorse any one quality measure or approach, and specifically noted that "existing data sets do not allow routine evaluation of physicians' performance outside hospitals" (p. 30). The report also concluded that "informing consumers and relying on their subsequent actions should not be viewed as the only method to encourage hospitals and physicians to maintain and improve the quality of their care. Even well-informed lay people ...
must continue to rely on experts to ensure the quality of providers. Some experts come from within the medical community and engage in self regulation, while others operate as external reviewers through private and governmental regulatory bodies" (p. 30). It may be said that many, if not most, of the issues raised by the OTA report are germane to today's quite different health care environment, including the development of regional HDOs.

3 Quality indicators in the OTA (1988) report included: hospital mortality rates; adverse events; disciplinary actions, sanctions, and malpractice compensation; evaluation of physicians' performance (care for hypertension); volume of services in hospitals or performed by physicians; scope of hospital services (external standards and guidelines); physician specialization; and patients' assessments of their care.
IMPORTANT PRINCIPLES OF PUBLIC DISCLOSURE

A significant committee stance should be made plain at the outset: the public interest is materially served when society is given as much information on costs, quality, and value for health care dollars expended as can be given accurately and provided with educational materials that aid interpretation of that information. Indeed, public disclosure and public education go hand in hand. Much of the later part of this chapter, therefore, advances a series of recommendations intended to foster active, but responsible, public disclosure of information by HDOs.

One critical element in this position must be underscored, however, because it is a major caveat: public disclosure is acceptable only when it (1) involves information and analytic results that come from studies that have been well conducted, (2) is based on data that can be shown to be reliable and valid for the purposes intended, and (3) is accompanied by appropriate educational material. As discussed in Chapter 2, data cannot be assumed to be reliable and valid; hence, study results and interpretations, and resulting inferences, cannot be assumed always to be sound and credible. Thus, a position supporting public disclosure of cost, quality, or other information about health care providers must be tempered by an appreciation of the limitations and problems of such activities. In Chapter 2 the committee advanced a recommendation about HDOs ensuring the quality of their data so as to minimize the difficulties that might arise from incomplete or inaccurate data.

Apart from these caveats, the committee's posture in this area leads to three critical propositions. First, it will be crucial for HDOs or those who use their data to avoid the harms that might come from inadequate, incorrect, or inappropriately "conclusive" analyses and communications.
That is, HDOs have a minimum obligation of ensuring that the analyses they publish are statistically rigorous and clearly described. Second, HDOs will need to establish clear policies and guidelines on their standards for data, analyses, and disclosure, and this is an especially significant responsibility when the uses in question are related to quality assurance and quality improvement (QA/QI). The committee believes that HDOs can produce significant and reliable information and that the presumption should be in favor of data release. Such guidelines can help make this case to those who would otherwise oppose public disclosure efforts with the argument that reasonable and credible studies cannot be conducted. Third, in line with these principles, the committee advises that HDOs establish a responsible administrative unit or board to promulgate, oversee, and enforce information policies. The specifics of this recommendation are discussed in Chapter 4, chiefly in relationship to privacy protections. The committee wishes here simply to underscore its view that HDOs cannot
responsibly or practically carry out the activities discussed in the remainder of this chapter without formulating and overseeing such policies at the highest levels.

IMPORTANT ELEMENTS OF PUBLIC DISCLOSURE

Several elements are important to the successful public disclosure of health-related information. Among them are the topics and types of information involved, who is identified in such releases, differing levels of vulnerability to harm, and how information might be disclosed. How these factors might be handled by HDOs is briefly discussed below.

Topics for HDO Analysis and Disclosure

In theory, virtually any topic may be subject to the HDO analyses and public disclosure activities under consideration in this chapter. In practice, the topics that figure most prominently in public disclosure of provider-identified health care data thus far have been extremely limited. Perhaps the best-known instance of release of provider-specific information is the Health Care Financing Administration's (HCFA) annual publication (since 1986) of hospital-specific death rates; these have been based on Medicare Part A files for the entire nation (see, e.g., HCFA, 1987; OTA, 1988, Chapter 4; HCFA, 1991; and the discussion in Chapter 2 of this report).4 This activity has had three spin-offs (not necessarily pertaining just to hospital death rates). The first is repackaging and publishing the HCFA data in local newspapers, consumer guides, and other media. The second is similar analyses, perhaps more detailed, more timely, or more locally pertinent, carried out by state-based data commissions.
Examples of statewide work include the published data on cardiac surgery outcomes in New York (cited in Chapter 2), the work of the Pennsylvania Health Care Cost Containment Council on hospital efficiency (PHCCCC, 1989) and on coronary artery bypass graft (PHCCCC, 1992), and the publication of a wide array of information on hospitals, long-term care facilities, home health agencies, and licensed clinics by the California Health Policy and Data Advisory Commission (California Office of Statewide Health Planning and Development, 1991). The files of the Massachusetts Health Data Consortium have been a rich source of information for various health services research projects (Densen et al., 1980; Gallagher et al., 1984; Barnes et al., 1985; Wenneker and Epstein, 1989; Wenneker et al., 1990; Ayanian and Epstein, 1991; Weissman et al., 1992). The third spin-off is exemplified by the special issues of U.S. News & World Report (1991, 1992, 1993) that have reported on top hospitals around the country by condition or specialty. The underpinnings of these rankings, however, are not HCFA mortality data but, rather, personal ratings by physicians and nurses.

4 As this report was being prepared, the HCFA administrator announced a moratorium of indeterminate length on publication of hospital-specific mortality data (Darby, 1993). The main issues appear to be the adequacy of risk adjustors in the statistical model and the concern that mortality-related data do not provide meaningful information about the true levels of quality of care in the nation's hospitals (or at least in certain types, such as inner-city institutions). Some attention may thus be turned to other indicators, such as length of stay or hospital-acquired complications. Even more ambitious goals may involve reporting on volume of services and patient satisfaction. The ultimate desirability of making reliable and valid information available to consumers is not in question.

Longo et al. (1990) provide an inventory of data demands directed at hospitals, some of which originate with entities like the regional HDOs envisioned in this study (e.g., tumor and trauma registries and state data commissions). Those requesting data would like, for example, to compare hospitals or hospital subgroups during a specific calendar period, to control or regulate new technologies or facilities, and to help providers identify and use scarce resources such as human organs. Local activities, such as those for metropolitan areas or counties, are exemplified by the release of the Cleveland-Area Hospital Quality Outcome Measurements and Patient Satisfaction Report (CHQC, 1993), as described in Chapter 2. (Nearly a decade ago, the Orange County, California, Health Planning Council developed a set of quality indicators for local hospitals, which was considered at the time to be a pioneering effort; see Lohr, 1985-86.)
In 1992 (Volume 8, Number 3), Washington Checkbook presented information on pharmacy prices for prescription drugs and for national and store-brand health and beauty care products; it also reported on hospital inpatient care quality (judged in terms of death rates) and pleasantness (evaluated in terms of staff friendliness, respect, and concern) (Hospital Inpatient Care, 1992). In October 1993 The Washingtonian offered a review of top hospitals and physicians serving the Washington, D.C., metropolitan area (Stevens, 1993). Another local publication, Health Pages (1993, 1994), covers selected cities or areas of the country. It tries to help readers choose doctors, pick hospitals, and decide on other services such as home nursing care. Its Spring 1993 issue provides a consumer's guide to several metropolitan areas of Wisconsin; included are practitioners; hospital services, procedure rates, and prices; and an array of other kinds of health care.5 A similar issue released in Winter 1994 focused on metropolitan St. Louis. Sources of the information in these publications include surveys, price checks, and HCFA mortality rate studies; only the last approximates the uses that might be made of the data held by regional HDOs today, but clearly more comprehensive HDOs in the future may have price information, survey data, and the like.

5 Quality of care becomes problematic for these types of publications. Health Pages (1993), for instance, states explicitly about physicians: "There is little objective information available enabling us to judge the quality of care provided" (p. 3).

The brief examples above illustrate areas in which analyses that identify providers have been publicly released. Other calls for public disclosure, however, may actually be intended for more private use by consulting firms; health care plans such as health maintenance organizations (HMOs), independent practice associations (IPAs), and preferred provider organizations (PPOs); and other health care delivery institutions such as academic medical centers or specialized treatment centers. Requests may include analyses of the fees charged by physicians for office visits, consultations, surgical procedures, and the like, and the requests may be for very specific ICD-9-CM (International Classification of Diseases, ninth revision, clinical modification) and CPT-4 (Current Procedural Terminology, fourth edition) codes. Yet other inquiries come from clients concerned with the market share of given institutions or health plans in a region as part of a more detailed market assessment. Questions may also be focused on patterns of resource utilization by certain kinds of patients, for instance, those with advanced or rare neoplastic disease. In general, because these applications are unlikely to lead to studies with published results, they are not discussed here in any detail.

Some internal studies are intended for public release, however, for use by regulators, consumers, employers, and other purchasers. These include the so-called quality report cards being developed by the National Committee for Quality Assurance, by Kaiser Permanente, the state of Missouri, and others.
The Northern California Region of Kaiser Permanente, for instance, has released a "benchmarked" report on more than 100 quality indicators such as member satisfaction, childhood health, maternal care, cardiovascular diseases, cancer, common surgical procedures, mental health, and substance abuse (Kaiser Permanente, 1993a, 1993b).

Who Is Identified

The main objects of such requests and the ensuing analyses tend to be large health plans, hospitals, physician groups, individual physicians, and nursing homes. Most of the debate in the past few years has centered on hospitals, especially in the context of the validity and meaningfulness of hospital-specific death rates (Baker, 1992). Generally, arguments in favor of the principle of release of such information on hospitals have carried the day; controversy persists about the reliability, validity, and utility of such information when the underlying data or the sophistication of the analyses can be called into question.
More recently, the debate has turned to release of information on the hospital-based activities of particular physicians—for example, death rates associated with specific surgical procedures. Here the principle of public disclosure also seems to have gained acceptance, again with caveats about the soundness of the analyses and results. Nevertheless, because of the much greater difficulty of ensuring the reliability and validity of such analyses, especially on the level of individual physicians, many observers remain concerned about the possible downside of releasing information on specific clinicians. This criticism is especially pertinent to the extent that this information is a relatively crude indicator of the quality of care in hospitals or of that rendered by individual physicians, especially surgeons.

In the future, attention can be expected to shift to outpatient care and involve the ambulatory, office-based services of health plans and physician groups in primary or specialty care and of individual physicians. In these cases the stance in favor of public disclosure may become more difficult to adopt fully, for three reasons:

- the problems alluded to above for hospital-based physicians become exponential for office-based physicians;
- the clear, easy-to-count outcomes, such as deaths, tend to be inappropriate for office-based care because they are so rare; and
- quality-of-life measures, such as those relating to functional outcomes and physical, social, and emotional well-being, are more significant but also more difficult to assess, aggregate, and report.

Other types of providers and clinicians also must be considered in this framework. These include pharmacies and individual pharmacists; home health agencies and the registered nurses and therapists they employ; and durable medical device companies, such as those that supply oxygen to oxygen-dependent patients and the respiratory therapists they employ.
Stretching the public-disclosure debate to these and other parts of the health care delivery environment may seem farfetched; to the extent that their data will appear eventually in databases maintained by HDOs, however, the prospect that someone will want to obtain, analyze, and publicize such data is real. This may illustrate the point raised in Chapter 2 that simple creation of databases may lead to applications quite unanticipated by the original creators. Finally, some experts foresee the day when HDOs might do analyses by employer or by commercial industry or sector with the aim of clarifying the causes and epidemiology of health-related problems. Cases in point might be the incidence of carpal tunnel syndrome in banks, accidents in the meatpacking or lumber industry, or various types of disorders in the chemical industry. Here the issue is one of informing the public or specific employers in an economic sector about possible threats to the health and well-being of residents of an area or employees in a particular commercial enterprise.
Vulnerability to Harm

The examples above can be characterized by level of aggregation: large aggregations of health care personnel in, for instance, hospitals or HMOs, as contrasted with individual clinicians. The committee believes that, in general, public disclosure can be defended more easily when data involve aggregations or institutions than when they involve individuals. Vulnerability to harm is the complicating factor in this controversy, and some committee members affirm that it should be carefully and thoughtfully taken into account before data on individuals are published.

To an individual, the direct harms are those of loss of reputation, patients, income, employment, and possibly even career.6 Hospitals and other large facilities, health plans, and even large groups are less vulnerable to such losses than are individuals. Higher-than-expected death rates for acute myocardial infarction or higher-than-expected caesarean section rates are not likely to drive a hospital out of business unless the public becomes convinced that these rates are representative of care generally and are not being addressed. By contrast, reports of higher-than-expected death rates for pneumonia or higher-than-expected complication rates for cataract replacement surgery could disqualify an individual from participating in managed care contracts and eventually spell ruin for the particular physician.

How one regards harms and gains may depend in part on whether one views public disclosure of evaluative information about costs or quality as a zero-sum game. In a highly competitive market, which may have the characteristics of a zero-sum game, clear winners and losers may emerge in the provider and practitioner communities. Furthermore, in theory this is what one would both expect and desire.
Nevertheless, when markets are not highly competitive—for instance, when all hospital occupancy rates are high or when the number of physicians in a locality is small—the information may less directly affect consumer choice, although it may well influence provider behavior by changing consumer perceptions. In this situation, clear winners and losers are neither expected nor likely, but establishing benchmarks that all can strive to attain should, in principle, contribute to better performance across all institutions and practitioners.

6 The prospect that particular institutions, health plans, or individual practitioners might rate less well than others, but not necessarily poorly, and thereby lose patients to others is possible (and perhaps probable), but in the committee's view it did not warrant special attention. Similarly, the possibility of gain, when publicly disclosed data or other ratings are superior and thereby enhance reputation or bring additional patients, seems likely but not of sufficient weight to merit further discussion.
Methodological and Technical Issues

Several factors influence the degree of confidence one can have in the precision of publicly disclosed analyses, and this dictates how securely one can interpret and rely on published levels of statistical significance and confidence intervals and generalize from published information. Two factors involve the quality of the underlying data and the analytic effort, as introduced in Chapter 2. Others, discussed below, involve the level of aggregation in published analyses, the appropriateness of generalizing from published results to aspects of care not directly studied, and the difficulty of creating global indexes of quality of care.

In the committee's view, proponents of public disclosure have an obligation to insist that the information to be published meet all customary requirements of reliability, validity, and understandability for the intended use. Such requirements vary, to some degree, according to the number of cases or individuals included in the report, that is, according to the level of aggregation, from a single case or physician to dozens or hundreds of cases from multiple hospitals. When HDOs cannot satisfy these technical requirements, they should not publish data in either scientific journals or the public media.

The committee was not comfortable with the idea that publication might go forward with explanatory footnotes or caveats, on the grounds that most consumers or users of such information are unlikely to accord the cautions as much importance as they give to the data themselves and may thus be unwittingly led to make erroneous or perhaps even harmful decisions. This position may not be sustainable in all cases, however. The New York Supreme Court rejected the argument "that the State must protect its citizens from their intellectual shortcomings by keeping from them information beyond their ability to comprehend" (Newsday, Inc. and David Zinman v. New York State Department of Health, et al.)
and ruled that physician-specific mortality rate information be made public pursuant to a Freedom of Information Law request. In this particular case it could be argued that the data and analyses met all reasonable expectations of scientific rigor. In the future, however, one cannot assume this will be the case. One solution in problematic circumstances may be for HDOs to disclose information only at a much higher level of aggregation than that at which the original analyses may have been done.7

7 To overcome some of these objections to public disclosure of information with weak reliability and validity, especially stemming from small sample sizes, various statistical disclosure limitation procedures might be considered (NRC, 1993). For example, if data or results are in tabular form and if the data are themselves questionable, then information on individual
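One family of statistical disclosure limitation procedures of the kind the footnote refers to can be illustrated by primary cell suppression, in which tabulated counts below a minimum size are withheld before release. This is a minimal sketch under stated assumptions: the threshold of 5 is a common convention but is assumed here, and the counts are invented.

```python
# Threshold below which a tabulated count is considered too small to
# release; the value 5 is an illustrative assumption, not a standard.
SUPPRESSION_THRESHOLD = 5

def suppress_small_cells(table):
    """Return a copy of the table with sub-threshold counts withheld (None)."""
    return {cell: (count if count >= SUPPRESSION_THRESHOLD else None)
            for cell, count in table.items()}

# Invented counts: Hospital B's cell is too small to publish reliably.
deaths_by_hospital = {"Hospital A": 12, "Hospital B": 3, "Hospital C": 27}
released = suppress_small_cells(deaths_by_hospital)

for cell, count in released.items():
    # Withheld cells are conventionally shown as an asterisk.
    print(f"{cell}: {'*' if count is None else count}")
```

Real disclosure limitation also requires secondary (complementary) suppression so that withheld cells cannot be reconstructed from row and column totals; that step is omitted here for brevity.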
FIGURE 3A-2 Actual (observed) number of patients who died in the hospital after coronary artery bypass graft in each hospital in New York State, 1991.
FIGURE 3A-3 Actual (observed) proportion of patients who died in the hospital after coronary artery bypass graft in each hospital in New York State, 1991.
Releasing data only on actual mortality rates clearly is unfair to the public; it is equally unfair to the providers in question. For example, more and more patients would seek care from Hospital A, and these actions collectively might render Hospital B's cardiac surgical program sufficiently underutilized that it would be closed. This would serve the interests neither of future low-risk patients, for whom expected mortality would be lower in Hospital B than in Hospital A, nor of future high-risk emergency patients, with whom Hospital A has no experience.

This kind of unfairness has led to the process generally referred to as risk adjustment. Risk adjustment may be defined as a process that allows the effect of a variable of interest, such as a hospital or a surgeon, on patient outcomes to be isolated from the effect of all other variables believed to influence that outcome. This commonly is accomplished by multivariate analysis to determine simultaneously the variables that, with a stated degree of uncertainty, determine the outcome in question in the population under study.1 In the example just above, the question would be the effect of hospital on the outcomes of patients undergoing elective or emergency CABG. The analysis would be used to risk-adjust actual hospital mortalities for Hospitals A and B so that they reflect only—or at least largely—the effect of the expertise of the hospitals on patient outcomes. By extension, the same process can be applied to determine physician-specific outcomes; in this example, mortality rates by surgeon.

It can be argued that properly risk-adjusted hospital mortality rates, and their conversion by one or another means to inferences about quality of care, constitute a fair method of comparing providers. Some would counter, however, that random assignment of treatment (e.g., CABG or no CABG in the example at hand) is the only reliable method of risk adjustment.
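Short of randomization, the observed-to-expected comparison that underlies one common form of risk adjustment (indirect standardization) can be sketched as follows. This is a hypothetical illustration only: the per-patient predicted risks are assumed to come from some previously fitted multivariate model, and every number below is invented.

```python
def risk_adjusted_rate(outcomes, predicted_risks, population_rate):
    """outcomes: 1 = died, 0 = survived; predicted_risks: model-based
    probability of death for each of the same patients."""
    observed = sum(outcomes)
    expected = sum(predicted_risks)      # deaths the model would predict
    # Indirect standardization: (observed / expected) * overall rate.
    return (observed / expected) * population_rate

# Hospital A: low-risk elective cases; Hospital B: sicker, emergency-heavy mix.
a_outcomes = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]    # 1 death in 10 cases
a_risks    = [0.02, 0.03, 0.05, 0.02, 0.04, 0.03, 0.02, 0.05, 0.03, 0.02]
b_outcomes = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]    # 2 deaths in 10 cases
b_risks    = [0.30, 0.25, 0.20, 0.35, 0.15, 0.20, 0.25, 0.10, 0.30, 0.20]

population_rate = 0.05   # assumed overall mortality for the procedure

for name, outcomes, risks in (("A", a_outcomes, a_risks),
                              ("B", b_outcomes, b_risks)):
    crude = sum(outcomes) / len(outcomes)
    adjusted = risk_adjusted_rate(outcomes, risks, population_rate)
    print(f"Hospital {name}: crude {crude:.1%}, risk-adjusted {adjusted:.1%}")
```

In this invented example Hospital B's crude rate is the higher of the two, yet after adjustment for its sicker case mix its rate falls below Hospital A's, which is the kind of reversal the Hospital A and B discussion above is meant to illustrate.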
Whether or not this argument is persuasive in theory, random assignment is not practical in many situations, including the one under consideration here. The conclusion to this point might be, however, that HDOs should publicly release not just actual numbers of events but also the results of appropriately risk-adjusted analyses.

1 Risk adjustment can also be approached by polling expert opinion about indications for an intervention (as, for example, in the indications of appropriateness for CABG developed by the RAND Corporation; Chassin et al., 1986b; Leape et al., 1991) and then stratifying groups of (actual) patients in the analysis according to those indications.

Certainty, Probability, and Correct Inferences

Even with all these refinements, all derived information and comparisons inherently have only a degree of certainty, not absolute certainty. The reason is that such analyses yield information and inferences relating to a hypothetical group or population on the basis of investigation of a presumably randomly selected sample of that population. Simple "secure" facts, such as those cited in the figures above for Hospitals A and B, do give some of the information required for fair comparisons, reliable predictions, and secure inferences. Nonetheless, comparisons, predictions, and inferences require something more, and that something more always carries a degree of uncertainty. Thus, comparisons, predictions, and inferences must take into account both the magnitude of any differences displayed by derived information (as well as that shown in actual information) and the related degree of certainty (or uncertainty). This IOM committee has asserted that HDOs have a responsibility to ensure that public disclosures of their data, and of analyses done with their data, clearly portray these considerations. Again, it may suffice to include detailed warnings concerning how not to use the data.

One pitfall that HDOs should avoid is releasing only information they regard as critically (or statistically significantly) important. They may do this when the multiplicity of providers and computations produces such a large amount of information that not all of it can be published, but doing so may be unfair both to providers and to the public. For example, studies might show, with a high degree of certainty, that in a group of 30 hospitals, hospitals X, Y, and Z are the only ones determined to have less good results than the remaining 27 institutions. This degree of certainty, or criterion for differentiating between one subset of the whole group and another, is conventionally defined as a "P-value less than 0.05." Assume, in this example, that no other members of this group of 30 hospitals are shown with a high degree of certainty to be different from one another.
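What such a criterion computes can be made concrete. The snippet below is an illustration rather than the published method: it treats each hospital's model-based expected mortality as a fixed benchmark rate and computes the one-sided exact binomial probability of observing at least the actual number of deaths by chance alone; the two rows are taken from Table 3A-1.

```python
from math import comb

def upper_tail_p(n, deaths, expected_rate):
    """One-sided exact binomial P(X >= deaths), with each of n patients
    assumed to die independently with probability expected_rate."""
    q = 1.0 - expected_rate
    return sum(comb(n, k) * expected_rate**k * q**(n - k)
               for k in range(deaths, n + 1))

# Rows from Table 3A-1: (patients, observed deaths, expected % / 100).
p_st_peters = upper_tail_p(437, 20, 0.0212)  # well below 0.05: flagged
p_bellevue = upper_tail_p(59, 4, 0.0281)     # above 0.05: not flagged,
                                             # despite the highest
                                             # risk-adjusted rate (7.42%)
flagged = p_st_peters < 0.05
```

Bellevue's case illustrates the arbitrariness the text criticizes: a small series (59 patients) can carry the worst risk-adjusted rate in the table yet fall outside the conventional P < 0.05 "ruler."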
A public release containing only this information is an attractive option on practical grounds, but it may not be fair. Some readers will intuitively realize that one or more other institutions may also have somewhat inferior outcomes, and thus differ from the others in the group, but this conclusion will not have as high a degree of certainty as for hospitals X, Y, and Z. In short, the use of P-values and the establishment of criteria based on degrees of certainty rest on arbitrary decisions.

The point is illustrated in Table 3A-1, which shows the usual tabular form of public disclosure of hospital deaths after CABG for the purpose of identifying hospitals with poor outcomes. The footnote marker in the table identifies the only three institutions (here, St. Peter's, St. Vincent's, and University Hospital of Brooklyn) that the analyses showed to be different from all others in risk-adjusted mortality rates with a high degree of certainty (P < 0.05). Although technically this portrayal may be correct, is it fair? That is, should the public be left with the general impression, which may not be accurate, that the other institutions were not different one from the other? As can be seen in Figure 3A-4, three other hospitals (Bellevue, Erie County, and Upstate Medical) had less good outcomes as well, but with slightly less certainty than the three already noted. The arbitrary designation of a P-value (here less than 0.05, but it could be any a priori P-value) has led to the erroneous general impression that only the first-named three hospitals are somehow "different" and are to be regarded as outliers. The criticism can be generalized: Why should "the ruler" (in this case an arbitrary P-value) not be placed so that St. Luke's, Arnot-Ogden, and Long Island Jewish are included as outliers? Where, indeed, ought the ruler, if applied to Figure 3A-4, be brought to rest?

The Educational Content of Public Information Dissemination

Public release of information about the variability in the ranks of various hospitals that is produced by different methods of analysis helps the public to understand the degree of uncertainty in overall inferences about "the best place to go for surgery." Much of this can be expressed in some combined index (in Figure 3A-4, this is the risk-adjusted percentage mortality). Depicting other information, and displaying it in bar diagrams (see, for instance, Figures 3A-5, 3A-6, and 3A-7), encourages, if not forces, the public to see how complex a matter it is to distinguish between the best and the worst (or better and poorer) hospitals. It also portrays the small differences that sometimes separate these facilities. Thus, this committee believes that HDOs must realize that the fairest approach to the public release of evaluative information involves disclosing rankings, of all actual data as well as derived data, along with appropriate explanations.
The knowledge base on effective ways to communicate and disseminate quality-related information to consumers is comparatively scanty, yet much health policy today presumes that health care policymakers and providers understand how to carry out such efforts. To overcome gaps in this area, more than one research agenda in the quality-of-care arena has specifically called for work on information dissemination techniques (IOM, 1990; VanAmringe and Shannon, 1992). The committee endorses these calls for additional research on these topics.
TABLE 3A-1 Hospital Deaths After Coronary Artery Bypass Graft (CABG) in New York State, 1991

                                                             Risk-Adjusted Mortality
                            Patients  Deaths  Actual %   Expected %
Hospital                    (n)       (n)     Mortality  Mortality   %        95% CL

Albany Medical Center          831    26      3.13       2.85        3.38     2.20-4.95
Arnot-Ogden                    466    13      2.79       2.14        4.01     2.13-6.85
Bellevue                        59     4      6.78       2.81        7.42     2.00-19.00
Beth Israel                    169     4      2.37       3.05        2.39     0.64-6.11
Binghamton General             325     8      2.46       3.35        2.26     0.98-4.46
Buffalo General              1,151    26      2.26       2.52        2.76     1.80-4.04
Erie County                    148     5      3.38       1.93        5.39     1.74-12.59
Lenox Hill                     507    18      3.55       3.17        3.45     2.04-5.45
Long Island Jewish             369    13      3.52       2.70        4.02     2.14-6.88
Maimonides                     608    22      3.62       3.63        3.07     1.92-4.64
Millard Fillmore               496    15      3.02       2.44        3.82     2.14-6.30
Montefiore Moses               305     5      1.64       2.82        1.79     0.58-4.18
Montefiore Weiler              196     0      0.00       2.29        0.00a    0.00-2.51
Mount Sinai                    497    15      3.02       2.84        3.28     1.83-5.40
New York Hospital              831    23      2.77       3.77        2.26     1.43-3.39
North Shore                    465    16      3.44       3.25        3.25     1.86-5.28
NYU Medical Center             707    31      4.38       5.38        2.51     1.70-3.56
Presbyterian                   275    10      3.64       2.95        3.80     1.82-6.99
Rochester General              959    22      2.29       3.40        2.08     1.30-3.15
St. Joseph's                   521    11      2.11       2.56        2.53     1.26-4.54
St. Luke's                     734    24      3.27       2.44        4.12     2.64-6.13
St. Peter's                    437    20      4.58       2.12        6.64b    4.06-10.26
St. Vincent's                  481    22      4.57       2.13        6.61b    4.14-10.01
Strong Memorial                331    14      4.23       3.64        3.57     1.95-5.99
Univ. Hosp. Brooklyn           216    12      5.56       2.37        7.21b    3.72-12.59
Univ. Hosp. Stony Brook        277     9      3.25       5.27        1.90     0.87-3.60
Upstate Med. Ctr. Syracuse     266    12      4.51       2.71        5.12     2.64-8.95
Westchester County             605    18      2.98       2.66        3.44     2.04-5.44
Winthrop                       451    11      2.44       4.24        1.77     0.88-3.17
Total                       14,944   460      3.08                            (95% CL = 2.81-3.37)

NOTE: This is the usual New York State tabular form of public disclosure of hospital deaths after CABG, with each hospital identified. Although the ranking here is alphabetical, the media often rerank the institutions according to risk-adjusted mortality. The number of patients and the number of hospital deaths are the only quantities shown that are directly observed and about which there is no uncertainty. The actual percent mortality is a derived datum, obtained from the previous two numbers by computation. The expected percent mortalities and the risk-adjusted percent mortalities depend upon computations using a multivariable logistic risk-factor equation and have a degree of uncertainty. The uncertainty in the risk-adjusted percent mortality is expressed here in the 95% confidence limits. Note that the public's attention is called primarily to three institutions whose risk-adjusted percent hospital mortalities are nearly certainly higher than those of the state as a whole. (This is essentially the same set as those hospitals with nearly certainly larger actual than expected mortality.) The decision as to which hospitals receive the designations in footnotes a and b is entirely arbitrary (although based on conventional criteria), as will be seen later.

a The only hospital whose lower-than-statewide risk-adjusted percent mortality is unlikely to be due to chance alone (P < 0.05).

b P < 0.05.
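The derived columns in the table can be reproduced from the reported inputs. As a check, the sketch below applies the (observed / expected) x statewide formula, treating the published expected percentage (which comes from the logistic risk equation) as given; small discrepancies with the printed figures reflect rounding.

```python
# Reproduce the derived columns of Table 3A-1 from the reported inputs.
STATEWIDE = 100.0 * 460 / 14944  # actual statewide % mortality, about 3.08

def derived_columns(patients, deaths, expected_pct):
    actual_pct = 100.0 * deaths / patients                 # actual % mortality
    risk_adjusted_pct = actual_pct / expected_pct * STATEWIDE
    return actual_pct, risk_adjusted_pct

# NYU Medical Center row: 707 patients, 31 deaths, expected 5.38%.
actual, ramr = derived_columns(707, 31, 5.38)
# actual is about 4.38 and ramr about 2.51, matching the printed row
```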
FIGURE 3A-4 The risk-adjusted percent mortality from coronary artery bypass graft in New York State, 1991, shown numerically in Table 3A-1. The higher the risk-adjusted percent mortality, the less the perceived expertise of the institution.
FIGURE 3A-5 The computed difference between the observed (actual) and the expected percent mortality from coronary artery bypass graft in New York State, 1991. The hospitals with lower (negative) differences can be held to have greater expertise than the state as a whole; those with higher (positive) differences can be held to have less expertise.
FIGURE 3A-6 The computed ratio between observed (actual) and expected (computed) percent mortality. The smaller the ratio, the greater the expertise of the institution.
FIGURE 3A-7 The P-value for the difference between observed and expected mortality from coronary artery bypass graft in each institution in New York State, 1991. This says nothing about the size of the difference shown in Figures 3A-5 and 3A-6; it speaks only to the degree of certainty (believability) that the difference is not due to chance alone.
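The rank instability that Figures 3A-5 and 3A-6 depict can be shown directly from Table 3A-1: the same pair of hospitals trades places depending on whether the difference or the ratio of observed to expected mortality is used as the yardstick. The figures below are the observed and expected percentages from the table; the pairing chosen here is purely illustrative.

```python
# Observed and expected % mortality from Table 3A-1 for two hospitals
# whose relative ranking depends on the comparison metric chosen.
hospitals = {
    "Univ. Hosp. Stony Brook": (3.25, 5.27),  # (observed %, expected %)
    "Winthrop": (2.44, 4.24),
}

difference = {h: o - e for h, (o, e) in hospitals.items()}  # cf. Figure 3A-5
ratio = {h: o / e for h, (o, e) in hospitals.items()}       # cf. Figure 3A-6

best_by_difference = min(difference, key=difference.get)  # Stony Brook
best_by_ratio = min(ratio, key=ratio.get)                 # Winthrop
```

By the difference, Stony Brook (-2.02 points) edges out Winthrop (-1.80); by the ratio, Winthrop (0.58) edges out Stony Brook (0.62). Neither ordering is wrong; the two metrics simply answer slightly different questions, which is the point the text makes about the arbitrariness of any single ranking.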