Performance Measurement: Accelerating Improvement

Appendix H
Commissioned Paper

Efficiency/Value-Based Measures for Services, Defined Populations, Acute Episodes, and Chronic Conditions

Kyle L. Grazier

INTRODUCTION

This paper was commissioned by the Institute of Medicine (IOM) to provide an overview of "value-based" or efficiency measurement in health care. It will define selected terms; provide a brief history of the development of these measurement sets; assemble information on the efficiency measurement sets in current use; identify challenges to applying these in practice and research; and identify gaps in efficiency measurement.

DEFINITION OF EFFICIENCY

Central to this work is the manner in which "efficiency" and "value based" are defined. Among others, the economics, statistics, management science, and health services research literatures have contributed variations on these definitions that differ in their specificity to health care and their generalizability beyond the economic costs of health care services. Specifically, definitions differ as to whether the mix of inputs includes quality, and whether the mix of outputs includes health, health status, or mortality. Economic efficiency is commonly expressed as producing a given quantity and quality of output with the bundle of inputs that minimizes the cost of production. Several different combinations of capital, labor, and raw materials (where each of these can have multiple dimensions, e.g., physician labor, nurse labor, etc.) could feasibly be used as inputs to produce a particular quantity and quality of output. Generally, only one of these combinations will have the lowest associated cost.
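The cost-minimization idea can be made concrete with a small, purely hypothetical sketch: several input bundles (physician labor, nurse labor, capital) are assumed to produce the same quantity and quality of output, and only the cheapest bundle is economically efficient. All prices and quantities below are invented for illustration.

```python
# Hypothetical factor prices (dollars per unit of each input).
PRICES = {"physician_hr": 150.0, "nurse_hr": 45.0, "capital_unit": 30.0}

# Three invented input bundles assumed to yield the identical
# quantity and quality of output.
bundles = {
    "A": {"physician_hr": 2.0, "nurse_hr": 4.0, "capital_unit": 3.0},
    "B": {"physician_hr": 1.5, "nurse_hr": 6.0, "capital_unit": 3.0},
    "C": {"physician_hr": 3.0, "nurse_hr": 2.0, "capital_unit": 2.0},
}

def bundle_cost(bundle):
    """Total cost of an input bundle at the given factor prices."""
    return sum(PRICES[inp] * qty for inp, qty in bundle.items())

costs = {name: bundle_cost(b) for name, b in bundles.items()}
# The economically efficient bundle is the one with the lowest cost.
efficient = min(costs, key=costs.get)  # bundle "A" at $570
```

Technical efficiency can be read off the same kind of table without prices at all: if two bundles produce identical output but one uses strictly fewer inputs, the other is technically inefficient regardless of factor prices.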
Palmer and Torgerson's (1999) definition of efficiency includes both health care inputs and health outcomes. The goals for measurement determine which aspect of efficiency is emphasized. They suggest that "allocative efficiency" should dictate policy decisions focused on resource distribution (Palmer and Torgerson, 1999). This aspect of efficiency requires that a specific outcome be defined in advance, after which a choice is made among alternative interventions or resources based on their relative costs. The resulting costs may not reflect the most efficient combination of inputs and outputs, but they do allow for an allocation strategy. An example: if one is interested in promoting one of two surgical interventions, and the identified criterion for selection is a fixed minimum postsurgical mortality rate, then one can compare the relative costs of each in achieving that fixed mortality threshold. To assess "productive efficiency," one maximizes "health outcome for a given cost," or minimizes "cost for a given outcome." For example, one chooses among different combinations of inputs to achieve the best health outcome for a given cost. "Technical efficiency" is achieved if the physical mix of labor and capital inputs achieves the maximum output. For instance, if surgical procedure A and surgical procedure B produce the identical defined outcome of hospital discharge in 3 days, but procedure A uses less labor and identical amounts of capital, then procedure B is considered technically inefficient. The measurement of the individual inputs and outputs in the efficiency function also varies by setting, goals, and the availability of data. The definition of costs or economic resources has been relatively consistent in health services research: direct and indirect monetary resources that contribute to the institution's costs of providing a service.
However, as the goals of measurement change to incorporate an understanding of system resources, the physician's resource use is included, as are out-of-pocket direct costs and even the indirect costs of lost workplace productivity and reductions in general economic production. Such expansive cost constructs can inhibit practical solutions because of their conceptual and data complexities. For the most part, this paper focuses on the service-related resource costs consumed in the delivery of medical care within the health care system. Over a decade ago, the IOM defined quality as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" (IOM, 1990). But as many authors have noted recently, the definition of quality, as in quality care or quality improvement, has not reached national consensus (Berwick, 2002; McGlynn, 1995; McGlynn et al., 2003; McKee, 2001; Palmer and Torgerson, 1999; Wennberg et al., 2002). Complicating these efforts is the paucity of "gold standards" for health outcomes: definitive levels of health that are measurable, valid, and reliable.
Patient, population, and clinical characteristics introduce variations in outcomes. In addition, the choice of services and the processes for delivering them often have limited clinical evidence of efficacy and effectiveness. Finally, deficits in management costing have limited the ability to measure accurately the resources consumed in the care delivery process and the quantitative outcomes. In the discourse on performance measurement in health care, "efficiency" is used in many contexts and for many purposes. Policy makers at the national, local, plan, and purchaser levels are deliberating how to maximize the health-related outcomes of their enrollees, beneficiaries, or employees receiving services, while minimizing costs for a standard outcome. Maximizing efficiency or reducing expenditures may compete for attention with a target morbidity rate. These challenges influence which measures of value and efficiency to evaluate or support; which methods to endorse for practitioners, services, and resources; and how to implement and integrate efforts to improve intermediate and longer-term population-, firm-, or patient-specific outcomes. Despite these many challenges, considerable effort has advanced thinking and action in the research and practice arenas. While there is not yet consensus on the definition of "efficiency" or "value based," this paper will incorporate both the definition of efficiency in the IOM's landmark report (eliminating waste) and the theoretical economics definition of efficiency (IOM, 2001; Palmer and Torgerson, 1999). For these purposes, efficiency will be broadly defined as the mix of health care resource inputs that produces an optimal quantity and quality of health and health care outputs. In short, the bias is toward measuring the relative production efficiency of health care resources among individual providers, institutions, and groups of providers.
It is important to note here that there are several current initiatives and programs to assess, improve, promote, and reward the delivery of quality health care (AHRQ, 2004; Bridges to Excellence, 2004; Kerr et al., 2004; Leapfrog Group, 2005). Other consortia of employers, purchasers, and health plans are planning programs to measure and reward institutional performance in effectiveness and efficiency (Leapfrog Group, 2005; PBGH, 2005; Worthington, 2004). Although this paper addresses the broader definition of efficiency to include "value," and therefore quality inputs and outcomes, no attempt will be made to discuss all measures of quality, performance, or effectiveness currently in use.

MOTIVATIONS FOR VALUE-BASED MEASURES AND MEASUREMENT

Policy makers, researchers, providers, and others are motivated to seek value-based or efficiency measures for various reasons. In the past two decades, the quality of the available data and the rigor of the analysis have advanced our ability to measure the economic outputs that are derived from resource inputs. As a result, numerous health care institutions and researchers are willing to invest in value-based measurements, with a clear focus on quality-adjusted outcomes. Many purchasing groups, health plans, insurers, and consumer groups are at least as concerned, if not more so, with the cost-efficiency of services. Algorithms for assessing the relative efficiency of providers vary in their transparency to the user, but are widespread among health plans and physician group practices. Outputs from these types of analyses trigger decisions on appointing and reappointing physicians within a practice or network; form the basis for monetary incentive packages for providers and groups; and generally are aimed at the containment and management of the contract and practice costs of physicians delivering inpatient and outpatient, general and specialty care in solo, single-, or multispecialty practices. The following purposes for efficiency measurement have been documented in the literature (Berwick, 2002, 2003; Fiscella et al., 2000; Franks et al., 1993; Galvin and McGlynn, 2003; Iezzoni et al., 1992b, 1994a; IOM, 1990; Kerr et al., 2004; Leatherman et al., 2003; McGlynn, 2003a,b; McGlynn and Brook, 2001; McGlynn and Halfon, 1998; McGlynn et al., 2003; Nauert, 1996; NCQA, 2004; Schield et al., 2000; Shahian and Normand, 2003; Siu et al., 1992).
While extensive, the list is not exhaustive:

- Improve quality of care
- Encourage payer involvement
- Integrate responsibility for employment, payment, and health status
- Reduce waste
- Re/appoint/certify medical staff for network participation
- Increase financial risk associated with practice decisions
- Alter practice patterns
- Assist in cost containment
- Encourage/steer selection of efficient health plans
- Allocate service resources differently
- Deploy alternative labor and capital
- Track/evaluate relationships to health management, health status, and survival

MEASUREMENT CONSIDERATIONS

Validity

There are generic guidelines for selecting measurement criteria, not all of which can be met in the current efforts to measure efficiency. Regardless
of the goals for measuring efficiency, the measure used for efficiency or value must be valid. Unfortunately, gold standards for health care efficiency do not exist, complicating efforts to establish the validity and reliability of a measure. Surrogates for validity in measuring practice efficiency include the notions of "accuracy" of the programs and "consistency" or "stability" across practices and providers (Thomas et al., 2004b). Technical accuracy is assessed by holding an outcome constant and comparing inputs, namely costs, across physicians of the same specialty. By varying the methods used to measure the inputs and comparing the consistency of the outputs, production efficiency is captured. By establishing the "stability" of the output measure over time, across different types of physician specialties, and across patient panel sizes, one can learn more about potential variation in the inputs and outputs and the financial and health consequences.

Unit of Analysis

Currently, the majority of practice efficiency measurement tools rely on the physician as the unit of analysis, rather than the physician group, the individual patient, or the community member. The purpose of this physician-focused measurement is to establish the economic resources consumed by the physician in the delivery of care, relative to physician peers. The visit, service, or case descriptors attempt to bundle patient and clinical care characteristics into discrete, homogeneous categories. These categories are then used to help define the services a patient might expect to receive when presenting with the characteristics defined by a particular resource category (Franks and Fiscella, 2002; Franks et al., 2003). However, there is still considerable variation in which variables contribute to "case" categories and resource use, and in the algorithms for assigning the costs of those resources to providers.
Attribution of Resource Use

Attributing patient-specific resource use to an individual physician is particularly complicated when services are delivered by a team of providers, over an extended period of time, for complex or persistent conditions. Under a gatekeeper model, primary care physicians are held responsible for all services delivered, whether provided by the physician, referred to another approved physician, or provided by other clinical staff within the practice. Although the underlying risk-sharing arrangement within a primary care practice may not be known, many efficiency tools assume that all consumed resources can be attributed to the primary care physician. When evaluating the resources used by nonprimary care physicians, or "specialists," attributing responsibility for services across the providers is usually based on a formula. These formulas differ in their attribution decision rules, and vary the amounts of resources assigned to a responsible provider proportionally or nonproportionally to the primary care, nonprimary care, or total resources consumed across the episode.

Data

The data sources for these efforts have traditionally included encounter and claims data supplied through an employer's, insurer's, or plan's administrative data systems. In some cases, the administrative data have been validated against medical records, but these efforts have been inconclusive in determining which source is better for these purposes (Hannan et al., 2003). Claims or encounter data are currently more accessible and less expensive to analyze than medical charts or patient surveys, although efforts to identify quality and value metrics continue to explore these sources, as well as electronic medical records and online order-entry systems (Birkmeyer et al., 1999, 2002, 2003; Fisher et al., 1990a,b, 1992; Malenka et al., 1994; Thomas et al., 2004a). Different types and amounts of data can be extracted from the same claims data set (Baron et al., 1994; Fisher et al., 1992). Many profiling tools capture and use in their algorithms different numbers of diagnoses and procedures and different time periods for services. Current episoding algorithms vary in the number of episode categories to which diagnoses and procedures are assigned. They also differ in the length of their "clean periods": time periods during which no services for the condition are received, thus triggering the end of one episode and the beginning of another. It is common in profiling methods to aggregate all costs of care that appear within an episode and attribute this total to a provider.
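The clean-period rule can be sketched in a few lines. This is a minimal illustration, not any vendor's actual grouper logic: the 90-day window, the single-condition claim stream, and the date layout are all assumptions made for the example.

```python
from datetime import date, timedelta

CLEAN_PERIOD = timedelta(days=90)  # assumed window; real groupers vary

def group_episodes(service_dates, clean_period=CLEAN_PERIOD):
    """Split service dates for one condition into episodes: a gap of
    at least `clean_period` with no services ends one episode and
    triggers the start of another."""
    episodes, current = [], []
    for d in sorted(service_dates):
        if current and d - current[-1] >= clean_period:
            episodes.append(current)
            current = []
        current.append(d)
    if current:
        episodes.append(current)
    return episodes

claims = [date(2004, 1, 5), date(2004, 1, 20), date(2004, 6, 1), date(2004, 6, 15)]
episodes = group_episodes(claims)  # two episodes: the 133-day gap exceeds 90 days
```

In a profiling tool, the costs of the claims inside each episode would then be summed and assigned to a provider under the tool's attribution rule.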
But there is also variation in the complexity or severity of the case, and in patient characteristics, that is not captured in episode categories defined by time of service (Iezzoni et al., 1992a,b, 1994b). Several risk-adjustment methods, developed for other purposes as well as for physician efficiency profiling, are applied to episodes to better explain the resources identified as inputs in the model.

Risk Adjustment

Risk adjustment is used to adjust claims profiles to account for differences in the health status (and thus expected resource use) of the patients served. Without proper adjustment, the practice patterns of physicians whose patient panels include greater than average proportions of elderly patients or
patients with severe or chronic disease could appear, incorrectly, to reflect inappropriately high levels of resource use (such as office visits, ancillary services, prescription medications, specialist referrals, and hospital days). Different risk-adjustment methodologies, all purporting to "do" the same thing, can produce quite different results. Research on hospital profiling demonstrates that comparative judgments about provider performance can be influenced significantly by the specific risk measurement methodology used (Iezzoni, 1997). Several models of risk adjustment have been tested over time and on various data sets. The vast literature reflects the range of purposes, data sources, algorithms, analytic models, and outputs associated with risk-adjustment methods (Thomas et al., 2004a). Researchers and policy makers see a growing role for risk adjustment in payment models, financing policies, and performance measurement. Patient interviews, surveys, claims records, medical records, or some combination of these have been suggested as sources for data on health or medical risk (Ash et al., 2001; Grazier and Thomas, 2002; Hornbrook and Goodman, 1996; Newhouse et al., 1997; Pope et al., 2004; Street, 2003; Worthington, 2004; Zhao et al., 2001). The costs associated with collection are weighed against the quality and volume of the information from each source.
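The sensitivity to methodology can be shown with a deliberately contrived sketch: the same two physician panels, ranked under two invented risk-weight models, come out in opposite order. Every weight and dollar figure below is fabricated purely for illustration.

```python
# Two hypothetical physician panels with observed annual costs.
panels = {
    "Dr_X": {"observed": 5000.0, "patients": ["frail_elderly", "diabetic"]},
    "Dr_Y": {"observed": 3000.0, "patients": ["healthy_adult", "diabetic"]},
}

BASE_COST = 1500.0  # assumed cost of a weight-1.0 reference patient

# Two invented risk-adjustment models (relative weight per patient type).
model_1 = {"frail_elderly": 2.0, "diabetic": 1.5, "healthy_adult": 0.8}
model_2 = {"frail_elderly": 3.0, "diabetic": 1.0, "healthy_adult": 0.6}

def oe_ratio(panel, weights):
    """Observed/expected cost ratio; a lower value looks 'more efficient'."""
    expected = BASE_COST * sum(weights[p] for p in panel["patients"])
    return panel["observed"] / expected
```

Under model 1 the ratios are roughly 0.95 for Dr_X and 0.87 for Dr_Y, so Dr_Y looks more efficient; under model 2 they are roughly 0.83 and 1.25, reversing the comparison, even though the underlying claims are identical.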
There are many physician profiling and efficiency tools based solely on administrative data, although even in these cases there are significant differences in the data fields used in the algorithms that define risk categories; models may include age, sex, one or more primary, principal, or secondary procedures and diagnoses, and pharmacy National Drug Codes (NDCs) (Ash et al., 2001; de Brantes, 2002; de Brantes et al., 2003; Goldman et al., 2004; Grazier and Thomas, 2002; Pope et al., 2004; Thomas et al., 2002; Worthington, 2004; Zhao et al., 2001). Many efficiency-profiling packages also require specific record layouts and field definitions.

Resource Costs

The service or resource costs used in efficiency measurement are seldom collected from institutional management accounting processes; instead, they rely on the monetary data appearing in the claim record: paid charges, allowable charges, or relative-value-adjusted charges. In some cases, to remove the effects of price variations in the reported charges, charges are standardized to a regional or local mean value for similar procedures or practices. In cases in which detecting price variations and their impact on practice is central to the profiling effort, actual recorded paid charges are used without standardizing for market differences.
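Price standardization as described above amounts to repricing each claim at the regional mean for its procedure. A minimal sketch follows, with invented procedure codes, paid amounts, and regional means:

```python
# Assumed regional mean charge per procedure code (hypothetical values).
regional_mean = {"99213": 60.0, "80053": 25.0}

claims = [
    {"cpt": "99213", "paid": 85.0},   # office visit, above-market price
    {"cpt": "99213", "paid": 55.0},   # office visit, below-market price
    {"cpt": "80053", "paid": 40.0},   # lab panel
]

# The unadjusted total reflects both utilization and local prices.
actual_total = sum(c["paid"] for c in claims)                      # 180.0
# The standardized total reprices every claim at the regional mean,
# so remaining variation reflects volume and service mix, not price.
standardized_total = sum(regional_mean[c["cpt"]] for c in claims)  # 145.0
```

Comparing the two totals is exactly the contrast the text draws: use the standardized figure to profile practice patterns, and the unadjusted figure when price variation itself is the object of study.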
Thresholds

Physician profiling tools assess the extent to which the "costs" of the resources used for an individual type of patient or panel of patients exceed a predetermined percentile, a group-specific median or mean, a national specialty group consensus level, other national benchmarks, or a relative value based on annual budgets or financial targets. Patient or episode cost outliers can influence many of the algorithms for assessing efficiency. Case outliers are often examined separately from the pool to determine what factors affect their occurrence. The width of the threshold bands determines in part how stable efficiency rankings are over time and across specialties.

Outputs

The output from efficiency measurement for individual physicians is most commonly the ratio of observed costs to expected costs (Thomas et al., 2004a,b). The closer a physician comes to using (spending) resources at the levels expected for the clinical risk of the patient or panel of patients, the more efficient he or she is assumed to be. While use of the observed/expected cost ratio is prevalent, users should be cautious when applying the ratio to physicians with small patient panels, since misclassification is in many cases related to panel size. Using a measure of the difference between the standardized expected costs and the standardized observed costs for a patient or panel could dampen this small-sample bias.

MEASURES OF "VALUE-BASED" METRICS (EFFICIENCY MEASURES)

In 2003, the National Quality Forum (NQF) endorsed national voluntary consensus standards for hospital care performance measures.
The initial 39 measures were "intended to promote both public accountability and quality improvement." The Institute for Healthcare Improvement, through several programs and as described in the white papers of its innovation series, has initiated efforts among hospitals to improve the outcomes and experiences of patients and providers on medical/surgical units. Although these measures are not specifically designed to gauge efficiency, their use is promoted as potentially increasing value to patients and providers (Institute for Healthcare Improvement, 2005). The IOM, the NQF, the Agency for Healthcare Research and Quality (AHRQ), and the National Committee for Quality Assurance (NCQA), singly and as parts of consortia, have produced topics, criteria, and measures for clinical conditions and priority areas for health care quality improvement activities (AHRQ,
2004; IOM, 2005; NCQA, 2004; NQF, 2005). These works continue to contribute measures of quality to the value-based efficiency measurement equation. The report on measuring provider efficiency, a collaborative effort of the Leapfrog Group and Bridges to Excellence, notes that "reporting performance on efficiency should be linked to reporting performance on quality to better understand, measure and communicate the value that is delivered by physicians and hospitals" (Bridges to Excellence, 2004; NCQA, 2004). Other organizations and sponsors have begun or are considering using data collected for earlier purposes, such as quality measures for accreditation or internal monitoring, for value measurement. The NCQA monitors health plan performance by collecting and analyzing the Health Plan Employer Data and Information Set (HEDIS). As noted earlier, it has convened technical panels to design efficiency measures for implementation among member health plans. The AHRQ is providing guidance, based on its own research, on how best to use the quality indicators it makes publicly available for performance and, potentially, efficiency measurement (Remus and Fraser, 2004). The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) is considering reporting some of the measures collected during its accreditation processes; AHRQ reports that "JCAHO will be replacing hospital performance reports with quality reports in 2004." Table H-1 presents some of the measures of value and efficiency that have been proposed or are in use either by or under the sponsorship of several of the above-named organizations.
Few of the existing measures endorsed by national organizations are designed specifically to measure efficiency; however, some programs are included here if their documentation notes preparations for expanding quality measurement to "efficiency" or "value."

GAPS IN THE LITERATURE AND EMPIRICAL WORK

Health care value can be viewed as a set of individual and conflated components (e.g., quality, cost, population health, clinical measurement, payment methods, practice patterns, and delivery system). The dynamic nature of the research in each of these areas leads to frequent, important contributions. Recent advances stimulate efforts to identify and fill the gaps remaining in our knowledge of value-based metrics, and in related policies and practices. Several remaining challenges are being addressed in demonstrations, experiments, and practice; some have not yet been rigorously examined; and many remain ripe for rigorous study.
Standardization in the Measures Used to Assess Efficiency

Standardization has been a necessary step in the advancement of numerous technologies and in improvements in production. The need for standardized measures of quality, effectiveness, and efficiency has been documented extensively. Most commercial products on the market, and many of those in development by NCQA and others, measure efficiency by comparing actual observed expenses with the expected expenses incurred in the delivery of services. In some cases, the effect of prices is removed. The price-adjusted (or standardized) measure assumes that the "paid amount" noted on claims reflects volume, type of services, and price. Unless the intent is to assess the impact of price variation on provider efficiency, the ratio of observed to expected costs is standardized to remove this variation. To accomplish this, standard or average regional prices for similar services are applied to the services data. Recent studies have recommended that both price-adjusted and unadjusted observed-versus-expected costs be measured and compared with one another (Bridges to Excellence, 2004; Leapfrog Group, 2005; Thomas et al., 2004a). NCQA efforts to create an efficiency indicator for health plans include examining both standardized and unadjusted efficiency measures, to better understand the extent of variation in outcomes due to regional or price differences. Physicians are obvious stakeholders in the standardization of these measures, and many complain that efficiency performance is being measured and interpreted differently within and across health plans, insurers, health systems, and consumer groups. Policymakers must consider the cost to plans or practices of imposing one particular episoding and/or risk-adjustment commercial product, rather than specifying standardized input and output measures.
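A toy illustration of the observed-versus-expected comparison (all dollar figures invented), including why small panels deserve caution: the same absolute cost excess moves the observed/expected ratio sharply for a small panel but barely for a large one, while the difference measure treats both alike.

```python
def efficiency_outputs(observed, expected):
    """Return (observed/expected ratio, observed - expected difference)."""
    return observed / expected, observed - expected

# Small panel: a $3,000 excess on $6,000 expected pushes the ratio to 1.5.
small_ratio, small_diff = efficiency_outputs(observed=9000.0, expected=6000.0)

# Large panel: the same $3,000 excess on $300,000 expected gives only 1.01.
large_ratio, large_diff = efficiency_outputs(observed=303000.0, expected=300000.0)
```

Both panels show the identical $3,000 difference, which is why a difference-based output measure can dampen the small-sample instability of the ratio.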
Transparency in methods and algorithms aids evaluation of the logic and components that could and should contribute to a standard. To advance understanding and promote progress in standard setting, product details need to be revealed. Examples of the information needed for this purpose include:

- the underlying logic and processes of the algorithms used for preparing data for the application, and for episoding and risk adjustment;
- standard errors and statistical significance of output measures;
- outlier threshold levels;
- frequency and types of omitted cases;
- total member panel size and number of valid episodes per physician per time period; and
- the attribution method used within specialty and across specialties.

Inclusion of Quality Dimensions in the Measures

Significant progress has been made in identifying process and outcomes components of quality care, particularly for certain conditions treated in
certain settings. Experts in clinical care and measurement recommend that recently piloted processes be expanded and that current larger-scale empirical work be tested on other samples and in other venues. For instance, the clinical quality measures for diabetes care and heart/stroke care included in the Bridges to Excellence/NCQA Provider Recognition Programs are available for use in assessing efficiency performance (Tom Lee, personal communication, 2004). The end-of-life metrics developed by the Dartmouth Atlas team (Wennberg et al., 2002) have been proposed as a proxy for hospital system efficiency (Eugene Nelson, personal communication, 2004). Active research programs and demonstrations by the NQF, the NCQA, Bridges to Excellence, the Leapfrog Group, research groups, and others are rapidly advancing the measurement of quality using medical records and administrative data. These efforts need to be shared and combined on an ongoing basis into the measurement of health care value.

Validated Clinical (Medical Service, Pharmacy) and Financial Data

A number of studies have examined the validity of self-reported data, medical records, and administrative data and found that, with some caveats, claims data are adequate for many purposes related to value measurement. Although recent, these studies may not be generalizable to future information systems in which the electronic medical record, integrated services/encounter data, and advanced cost accounting systems are the norm. Concurrent with efforts to measure efficiency and performance are demonstrations and experiments in facility-based standardized records and information systems that can form the basis for reliable measurement of services, quality, and providers across sites and health systems (e.g., Physician Practice Connections for the Bridges to Excellence rewards program, and Physician Office Link, a collaboration between NCQA and Bridges to Excellence).
Although these efforts will undoubtedly lead to important answers and recommendations, ongoing empirical work should include sampling and analysis of:

- medical records for office visits and inpatient stays, to validate data that appear on and are extracted from claims-based files and similar administrative records;
- cost data collected from multiple sources, including facility-specific and payer-specific records of billed charges, allowed charges, paid charges, and retroactive adjustments, to assess the validity of resource measures; and
- physician or group panel member characteristics, including age, sex, race/ethnicity, and zip code (to measure average socioeconomic status), relative to the service area or plan population.

This can serve several purposes.
Berwick DM. 2003. Improvement, trust, and the healthcare workforce. Quality and Safety in Health Care 12(Suppl. 1):i2–i6.
Birkmeyer JD, Lucas FL, Wennberg DE. 1999. Potential benefits of regionalizing major surgery in Medicare patients. Effective Clinical Practice 2(6):277–283.
Birkmeyer JD, Siewers AE, Finlayson EV, Stukel TA, Lucas FL, Batista I, et al. 2002. Hospital volume and surgical mortality in the United States. New England Journal of Medicine 346(15):1128–1137.
Birkmeyer JD, Stukel TA, Siewers AE, Goodney PP, Wennberg DE, Lucas FL. 2003. Surgeon volume and operative mortality in the United States. New England Journal of Medicine 349(22):2117–2127.
Bridges to Excellence (BTE). 2004. Measuring Provider Efficiency: A Collaborative Multi-Stakeholder Effort. Version 1.0.
de Brantes FS. 2002. Bridges to Excellence: A program to start closing the quality chasm in healthcare. Journal of Healthcare Quality 24(2):2–11.
de Brantes FS, Galvin RS, Lee T. 2003. Bridges to Excellence: A business case for quality. Journal of Clinical Outcomes Management 10(8):431–438.
Fiscella K, Franks P, Gold MR, Clancy CM. 2000. Inequality in quality: Addressing socioeconomic, racial, and ethnic disparities in health care. Journal of the American Medical Association 283(19):2579–2584.
Fisher ES, Baron JA, Malenka DJ, Barrett J, Bubolz TA. 1990a. Overcoming potential pitfalls in the use of Medicare data for epidemiologic research. American Journal of Public Health 80(12):1487–1490.
Fisher ES, Malenka DJ, Wennberg JE, Roos NP. 1990b. Technology assessment using insurance claims: Example of prostatectomy. International Journal of Technology Assessment in Health Care 6(2):194–202.
Fisher ES, Whaley FS, Krushat WM, Malenka DJ, Fleming C, Baron JA, et al. 1992. The accuracy of Medicare's hospital claims data: Progress has been made, but problems remain. American Journal of Public Health 82(2):243–248.
Franks P, Fiscella K. 2002. Effect of patient socioeconomic status on physician profiles for prevention, disease management, and diagnostic testing costs. Medical Care 40(8):717–724.
Franks P, Nutting PA, Clancy CM. 1993. Health care reform, primary care, and the need for research. Journal of the American Medical Association 270(12):1449–1453.
Franks P, Fiscella K, Beckett L, Zwanziger J, Mooney C, Gorthy S. 2003. Effects of patient and physician practice socioeconomic status on the health care of privately insured managed care patients. Medical Care 41(7):842–852.
Galvin RS, McGlynn EA. 2003. Using performance measurement to drive improvement: A road map for change. Medical Care 41(Suppl. 1):I48–I60.
Goldman D, Joyce GF, Escarce JJ, Pace JE, Solomon MD, Laouri M, Landsman PB, Teutsch SM. 2004. Pharmacy benefits and the use of drugs by the chronically ill. Journal of the American Medical Association 291(19):2344–2350.
Grazier KL, G'Sell WA. 2004. Group Medical Insurance Claims Database Collections and Analysis. Schaumburg, IL: Society of Actuaries.
Grazier KL, Thomas JW. 2002. A Comparative Evaluation of Risk-Adjustment Methodologies for Profiling Physician Practice Efficiency. A report to the Robert Wood Johnson Foundation.
Hannan EL, Doran DR, Rosenthal GE, Vaughn MS. 2003. Provider profiling and quality improvement efforts in coronary artery bypass graft surgery: The effect on short-term mortality among Medicare beneficiaries. Medical Care 41(10):1164–1172.
Hornbrook M, Goodman M. 1996. Chronic disease, functional health status and demographics: A multi-dimensional approach to risk adjustment. Health Services Research 31(3):283–307.
OCR for page 240
Performance Measurement: Accelerating Improvement Iezzoni LI. 1997. The risks of risk-adjustment. Journal of the American Medical Association 278:1600–1607. Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. 1992a. Comorbidities, complications, and coding bias. Does the number of diagnosis codes matter in predicting in-hospital mortality. Journal of the American Medical Association 267(16):2197–2203. Iezzoni LI, Foley SM, Heeren T, Daley J, Duncan CC, Fisher ES, et al. 1992b. A method for screening the quality of hospital care using administrative data: Preliminary validation results. QRB Quality Review Bulletin 18(11):361–371. Iezzoni LI, Daley J, Heeren T, Foley SM, Fisher ES, Duncan C, et al. 1994a. Identifying complications of care using administrative data. Medical Care 32(7):700–715. Iezzoni LI, Daley J, Heeren T, Foley SM, Hughes JS, Fisher ES et al. 1994b. Using administrative data to screen hospitals for high complication rates. Inquiry 31(1):40–55. Institute for Healthcare Improvement. 2005. [Online]. Available: http://www.ihi.org/IHI/ [accessed November 15, 2004]. Institute of Medicine (IOM). 1990. Medicare: A Strategy for Quality Assurance, Vol. II. Washington, DC: National Academy Press. IOM. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press. IOM. 2005. Pathways to Better Health Services: Measuring Quality. Washington, DC: The National Academies Press. Kerr EA, McGlynn EA, Adams J, Keesey J, Asch SM. 2004. Profiling the quality of care in twelve communities: results from the CQI study. Health Affairs 23(3):247–256. Leapfrog Group. 2005. The Leapfrog Group. [Online]. Available: http://www.leapfroggroup.org/home [accessed November 2, 2005]. Leatherman ST, Hibbard JH, McGlynn EA. 2003. A research agenda to advance quality measurement and improvement. Medical Care 41(Suppl. 1):I80–186. Malenka DJ, McLerran D, Roos N, Fisher ES, Wennberg JE. 1994. 
Using administrative data to describe casemix: A comparison with the medical record. Journal of Clinical Epidemiology 47(9):1027–1032. McGlynn EA. 1995. Quality assessment of reproductive health services. Western Journal of Medicine 163(Suppl. 3):19–27. McGlynn EA. 2003a. Introduction and overview of the conceptual framework for a national quality measurement and reporting system. Medical Care 41(Suppl. 1):I-1–I-7. McGlynn EA. 2003b. Selecting common measures of quality and system performance. Medical Care 41(Suppl. 1):I-39–I-47. McGlynn EA, Brook RH. 2001. Keeping quality on the policy agenda. Health Affairs 20(3): 82–90. McGlynn EA, Halfon N. 1998. Overview of issues in improving quality of care for children. Health Services Research 33(4 Pt. 2):977–1000. McGlynn EA, Cassel CK, Leatherman ST, DeCristofaro A, Smits HL. 2003. Establishing national goals for quality improvement. Medical Care 41(Suppl. 1): I-16–I-29. McKee M. 2001. Measuring the efficiency of health systems. The world health report sets the agenda, but there’s still a long way to go. British Medical Journal 323(7308):295–296. Nauert R. 1996. The quest for value in health care. Journal of Health Care Finance 22(3):52–61. NCQA (National Committee for Quality Assurance). 2004. State of Health Care Quality. Washington, DC: National Committee for Quality Assurance. Newhouse JP, Beeuwkes Buntin M, Chapman JD. 1997. Risk adjustment and Medicare: Taking a closer look. Health Affairs 16(5):26–43. NQF (National Quality Forum). 2005. National Priorities for Healthcare Quality Measurement and Reporting. [Online]. Available: http://www.qualityforum.org/webprioritiespublic.pdf [accessed January 19, 2005].
OCR for page 241
Performance Measurement: Accelerating Improvement Palmer S, Torgerson DJ. 1999. Economic notes: Definitions of efficiency. British Medical Journal 318(7191):1136. PBGH (Pacific Business Group on Health). 2005. Value Based Purchasing. [Online]. Available: http://www.pbgh.org/programs/value_based_purchasing.asp [accessed October 24, 2005]. Pope GC, Kautter J, Randall PE, Ash AS, Ayanian JZ, Iezzoni LI, Ingber MJ, Levy JM, Robst J. 2004. Risk adjustment of Medicare capitation payments using the CMS-HCC model. Health Care Financing Review 25(4):119–141. Remus D, Irene F. 2004. Guidance for Using the AHRQ Quality Indicators for Hospital-level Public Reporting or Payment . [Online]. Available: http://www.qualityindicators.ahrq.gov [accessed October 26, 2005]. Schield JM, Bolnick HJ, Murphy JJ. October 2000. Evaluating Managed Care Effectiveness: A Societal Perspective. Paper presented to the Society of Actuaries, Schaumburg, IL. [Online]. Available: http://www.soa.org/ccm/content/?categoryID=1079102 [accessed January 10, 2005]. Shahian DM, Normand SL. 2003. The volume-outcome relationship: From Luft to Leapfrog. Annals of Thoracic Surgery 75(3):1048–1058. Siu AL, McGlynn EA, Morgenstern H, Beers MH, Carlisle DM, Keeler EB, et al. 1992. Choosing quality of care measures based on the expected impact of improved care on health. Health Services Research 27(5):619–650. Street A. 2003. How much confidence should we place in efficiency estimates? Health Economics 12(11):895–907. Thomas CP, Wallack SS, Lee S, Ritter GA. 2002. Impact of health plan design and management on retirees’ prescription drug use and spending, 2001. Health Affairs Suppl Web Exclusives:W408–W419. Thomas JW, Grazier KL, Ward K. 2004a. Comparing accuracy of risk-adjustment methodologies used in economic profiling of physicians. Inquiry 41(2):218–231. Thomas JW, Grazier KL, Ward K. 2004b. Economic profiling of primary care physicians: Consistency among risk-adjusted measures. Health Services Research 39(4 Pt. 
1):985–1003. Wennberg JE, Fisher ES, Skinner JS. 2002. Geography and the debate over Medicare reform. Health Affairs Supp Web Exclusives:W96–W114. Worthington AC. 2004. Frontier efficiency measurement in health care: A review of empirical techniques and selected applications. Medical Care 61(2):135–170. Zhao Y, Randall PE, Ash AS, Calabrese D, Ayanian J, Slaughter JP, Weyuker L, Bowen B. 2001. Measuring population health risks using inpatient diagnoses and outpatient pharmacy data. Health Services Research 26(6 Pt. 2):180–193.
OCR for page 242
TABLE H-1 "Value-Based" and Efficiency Metrics

Measure: Disease-specific (e.g., CVD) cost-episodes per person
Definition (Input:Output): Person or patient annual episode-specific costs for CVD services; health-related process or outcome measures
Stated Purpose/Function: Measure guideline concordance; aggregate resources consumed; attribute resource use to provider; compare across physician groups; pay for performance
Health Care Setting: Acute care hospital

Measure: Agreement between pairs of efficiency rankings using the weighted kappa statistic
Definition (Input:Output): Relative practice efficiency rankings
Stated Purpose/Function: Physician efficiency profiling; nine clinical specialties selected, based on numbers of episodes managed, numbers of physicians in the specialty, and whether the specialty was medical or surgical
Health Care Setting: Mixed group model/IPA HMO

Measure: NCQA plan efficiency measurement
Definition (Input:Output): Relative resource consumption across plans
Stated Purpose/Function: Measure and report relative resource consumption, risk adjusted for underlying population risk
Health Care Setting: Health plans

Measure: Process and outcome measures related to transitional care across settings (Mary Naylor)
Health Care Setting: Multiple settings: home care, hospital, ED, nursing home
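The weighted kappa entry above compares how consistently two profiling methods rank the same physicians, crediting near-agreement more than distant disagreement. A minimal, self-contained sketch with linear disagreement weights; the quintile coding and the data are invented for illustration, not drawn from the study:

```python
from collections import Counter

def weighted_kappa(r1, r2, k):
    """Linear-weighted kappa for two rankings of the same physicians,
    each coded into one of k ordered categories (0 .. k-1)."""
    n = len(r1)
    obs = Counter(zip(r1, r2))          # observed category pairs
    m1, m2 = Counter(r1), Counter(r2)   # marginal counts per method
    disagree_obs = disagree_exp = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)    # linear disagreement weight
            disagree_obs += w * obs.get((i, j), 0) / n
            disagree_exp += w * (m1.get(i, 0) / n) * (m2.get(j, 0) / n)
    # 1 minus the ratio of observed to chance-expected disagreement
    return 1.0 - disagree_obs / disagree_exp

# Two hypothetical efficiency-profiling methods assigning quintiles (0-4)
method_a = [0, 1, 1, 2, 3, 4, 4, 2]
method_b = [0, 1, 2, 2, 3, 4, 3, 2]
agreement = weighted_kappa(method_a, method_b, 5)
```

Perfect agreement yields kappa of exactly 1; values near 0 mean the two efficiency rankings agree no better than chance.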
TABLE H-1 (continued)

Required Enhancements/Methods: Risk adjustment using ETGs
Data Sources: Hospital-reported data; payer claims paid charges for procedure codes (CPT-9-CMxxxx, …) for episode length of time
Output Measure: Patient health status; provider payment; patient disposition; patient and provider satisfaction

Required Enhancements/Methods: Two different episode definition systems, ETGs and MEGs; three cost outlier tests: Winsorized at 10% and 90%, Winsorized at 90%, and Winsorized at 80%
Data Sources: Four years of claims and membership data
Output Measure: Two measures of costs were used in the analyses: actual costs, as recorded by the HMO, and standard costs, determined by assigning the same costs to all procedures of the same type

Output Measure: Plan costs (total costs vs. disease-specific costs)

Output Measure: 30-day rehospitalization; emergent care for wound infections (Source: OASIS, OBQM); emergent care for improper medication administration or medication side effects (Source: OASIS, OBQM); emergent care for hypo/hyperglycemia (Source: OASIS, OBQM); discharge to the community needing wound care or medication assistance (Source: OASIS, OBQM); acute care hospitalization (Source: OASIS/OBQI)
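The cost-outlier tests above rest on Winsorizing: clamping extreme episode costs to percentile bounds rather than discarding them. A minimal sketch, assuming decile-based bounds; `winsorize` and the episode costs are illustrative, not the study's code:

```python
import statistics

def winsorize(costs, lower_pct=10, upper_pct=90):
    """Clamp each cost to the chosen decile bounds ("Winsorized at
    10% and 90%"); extreme values are pulled in, not dropped."""
    deciles = statistics.quantiles(costs, n=10)  # 9 cut points: 10th-90th pct
    lo = min(costs) if lower_pct == 0 else deciles[lower_pct // 10 - 1]
    hi = deciles[upper_pct // 10 - 1]
    return [min(max(c, lo), hi) for c in costs]

# Hypothetical per-episode costs with one catastrophic outlier
episode_costs = [120, 340, 95, 8800, 410, 275, 60, 530, 215, 47000]
trimmed = winsorize(episode_costs)            # clamp both tails
upper_only = winsorize(episode_costs, 0, 90)  # "Winsorized at 90%"
```

Because Winsorizing caps rather than removes outliers, every episode still contributes to a physician's profile, but a single catastrophic case can no longer dominate the average.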
Measure: "Risk-adjustment accuracy" across primary care physicians; group R2 analyses
Stated Purpose/Function: Compare six different risk-adjustment methods in terms of capacity to explain variations in annual claims cost among HMO members; identification of high-outlier PCPs (family practitioners, general internists, general practitioners, and pediatricians)
Health Care Setting: Physicians in HMO/IPA

Measure: Bridges to Excellence/NCQA Provider Recognition Programs: clinical quality measures for diabetes care (Diabetes Care Link); clinical quality measures for heart/stroke care (Cardiac Care Link); adoption of electronic medical records and other office systems
Stated Purpose/Function: Measure quality processes and outcomes; reporting; recognition and possible financial rewards
Health Care Setting: Hospitals
TABLE H-1 (continued)

Output Measure (transitional care, continued): Unexpected nursing home admission (Source: OBQM); discharge to the community (Source: OASIS/OBQI); emergent care (Source: OASIS/OBQI)

Required Enhancements/Methods: Outlier removal
Data Sources: HMO, one state; member and adjudicated claims files (inpatient, outpatient/professional, and pharmacy) for calendar years 1997 and 1998 for the 156,280 continuously enrolled members
Output Measure: Reasonably consistent estimates of member-level expected costs across a broad range of panel sizes; identification of high-outlier PCPs ranged from 54% to 58% for adult care physicians (family practitioners, general internists, and general practitioners) and from 67% to 77% for pediatricians when rankings were based on the standardized cost difference measure, which accounts for physician panel size

Data Sources: Hospital sampling, self-report
Output Measure: Rates of adherence
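The "standardized cost difference measure, which accounts for physician panel size," suggests an observed-minus-expected cost statistic scaled by panel size: larger panels need smaller per-member differences to be flagged. A hypothetical sketch of that idea; the one-sample z form, the cutoff of 1.645, and the data are my assumptions, not the study's published formula:

```python
import math

def standardized_cost_difference(actual, expected):
    """Mean (actual - expected) member cost for one physician's panel,
    divided by its standard error, so the statistic grows with panel
    size for a given per-member cost difference (illustrative form)."""
    n = len(actual)
    diffs = [a - e for a, e in zip(actual, expected)]
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff / (sd / math.sqrt(n))

# Hypothetical panel: actual vs. risk-adjusted expected annual costs
actual = [1200.0, 950.0, 2100.0, 1800.0, 1500.0, 990.0]
expected = [1000.0, 900.0, 1600.0, 1500.0, 1400.0, 1000.0]
z = standardized_cost_difference(actual, expected)
is_high_outlier = z > 1.645  # one-sided 5% cutoff (assumed)
```

Scaling by the standard error rather than ranking raw mean differences is what keeps small-panel physicians from being flagged on the strength of a few expensive patients.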
Measure: Physician Office Link: Clinical Information Systems/Evidence-Based Medicine (see Bridges to Excellence)

Measure: Leapfrog Group: computerized physician order entry (CPOE) systems
Definition (Input:Output): Presence of systems; use
Stated Purpose/Function: Electronic prescribing systems that intercept errors
Health Care Setting: Hospitals

Measure: Evidence-based Hospital Referral (EHR) Safety Standard
Definition (Input:Output): Combination of outcome, process, and volume
Stated Purpose/Function: Adoption of clinical processes for high-risk procedures; volume of procedures per year; direct outcome measures (i.e., risk-adjusted mortality) for coronary artery bypass graft and percutaneous coronary interventions, using robust and approved measurement systems for the EHR Safety Standards

Measure: ICU physician staffing
Definition (Input:Output): Operate adult and/or pediatric ICUs that are managed or comanaged by intensivists who (1) are present during daytime hours and provide clinical care exclusively in the ICU, and (2) at other times can, at least 95% of the time, (i) return ICU pages within five minutes and (ii) arrange for an FCCS-certified nonphysician effector to reach ICU patients within five minutes
Stated Purpose/Function: Patients are cared for exclusively by critical-care specialists or teams that are close at hand both for fine-tuning routine care and for dealing with emergencies
TABLE H-1 (continued)

Required Enhancements/Methods: Upfront capital
Data Sources: Voluntary hospital self-report; data survey
Output Measure: Explicit: extent to which standards are met, relative to other hospitals; implicit: costs of adverse drug events (mortality, morbidity, other costs)

Required Enhancements/Methods: An EHR standard does not apply to hospitals that do not perform the procedure or treat the condition; patients under 18 are excluded
Data Sources: Hospitals report their volume and process or performance information for these procedures and conditions by responding to the Leapfrog Hospital Patient Safety Survey on the Leapfrog Web site

Data Sources: Hospitals with adult or pediatric ICUs respond to the Leapfrog Group voluntary online survey
Output Measure: Presence/absence of intensivists in ICU; organization of closed/open ICU
Measure: Leapfrog Group: Expert Panel-Endorsed Process Measures
Definition (Input:Output): Cases meeting endorsed process measure : eligible cases (meeting criteria for inclusion)
Stated Purpose/Function: Establish, monitor, and report measures of process-oriented quality
Health Care Setting: Hospitals

Measure: IHI Whole System Measures: efficiency
Definition (Input:Output): Costs per capita; hospital-specific standardized reimbursement

Measure: AHRQ Quality Indicators (QIs)
Definition (Input:Output): The Agency for Healthcare Research and Quality Quality Indicators are measures of health care quality. Prevention QIs identify hospital admissions that evidence suggests could have been avoided, at least in part, through high-quality outpatient care. Inpatient QIs reflect quality of care inside hospitals, including inpatient mortality for medical conditions and surgical procedures. Patient Safety Indicators also reflect quality of care inside hospitals, but focus on potentially avoidable complications and iatrogenic events
Stated Purpose/Function: National tracking or quality improvement
TABLE H-1 (continued)

Required Enhancements/Methods: Exclude transferred patients and expired patients
Data Sources: Hospital: random sample of at least 60 cases with the condition as principal or secondary discharge diagnosis
Output Measure: Rate of adherence to endorsed process measures of quality

Required Enhancements/Methods: Measure health care quality using administrative data; update for ICD codes
Output Measure: Currently being considered for uses other than tracking quality improvement, namely provider payment and public reporting