Leadership by Example: Coordinating Government Roles in Improving Health Care Quality

4 Performance Measures

Summary of Chapter Recommendations

The committee recommends that the federal government accelerate, expand, and coordinate its use of standardized performance measurement and reporting to improve health care quality.

RECOMMENDATION 3: Congress should direct the Secretaries of the Department of Health and Human Services (DHHS), the Department of Defense (DOD), and the Department of Veterans Affairs (VA) to work together to establish standardized performance measures across the government programs, as well as public reporting requirements for clinicians, institutional providers, and health plans in each program. These requirements should be implemented for all six major government health care programs and should be applied fairly and equitably across the various financing and delivery options within those programs. The standardized measurement and reporting activities should replace the many performance measurement activities currently under way in the various government programs.

RECOMMENDATION 4: The Quality Interagency Coordination (QuIC) Task Force should promulgate standardized sets of performance measures for 5 common health conditions in fiscal year (FY) 2003 and another 10 sets in FY 2004.

a. Each government health care program should pilot test the first 5 sets of measures between FY 2003 and FY 2005 in a limited number
of sites. These pilot tests should include the collection of patient-level data and the public release of comparative performance reports.

b. All six government programs should prepare for full implementation of the 15-set performance measurement and reporting system by FY 2008.

The government health care programs that provide services through the private sector (i.e., Medicare, Medicaid, the State Children’s Health Insurance Program [SCHIP], and portions of DOD TRICARE) should inform participating providers that submission of the audited patient-level data necessary for performance measurement will be required for continued participation in FY 2007. The government health care programs that provide services directly (i.e., the Veterans Health Administration [VHA], the remainder of DOD TRICARE, and the Indian Health Service [IHS]) should begin work immediately to ensure that they have the information technology capabilities to produce the necessary data.

The initial set of measures should focus primarily on validated process-of-care measures. Many process measures, such as those in the Diabetes Quality Improvement Project (DQIP) set, can readily be used for quality measurement without adjusting for patients’ demographics or other risk factors. Moreover, compared with outcome measures, many process measures take less time to collect, require smaller samples, and can be collected from data that have already been recorded for other clinical or administrative purposes (Rubin et al., 2001). Process measures can also be easier to benchmark. But the measurement set should not be limited to process measures alone. Over time, incorporating outcome measures and measures of patient perceptions will allow for a richer assessment of the contributions of health care to improved patient and population health status.
The QuIC, an interagency committee with representation from the six major government health care programs, is well positioned to coordinate these activities. QuIC should coordinate its efforts with private-sector groups involved in the promulgation of standardized performance measures, such as the National Quality Forum (NQF), the National Committee for Quality Assurance (NCQA), the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Leapfrog Group, and the Foundation for Accountability (FACCT). The coordinating body should ensure that the design of performance measures and their dissemination reflect the participation of consumers. It should also aim to minimize the number of times providers must report patient-specific performance data. For example, standardized data on patients who are dually eligible for Medicare and Medicaid might be submitted to a clearinghouse, which would then distribute the data to the relevant programs.
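The clearinghouse idea can be sketched in code. The Python sketch below is purely illustrative—the class and field names are invented, not drawn from the report—but it shows the intended effect: a provider submits one standardized record, and the clearinghouse fans it out to every program in which the patient is enrolled, so the provider reports only once.

```python
# Hypothetical sketch of the proposed clearinghouse: one standardized
# submission is delivered to each relevant program. All names (ClearingHouse,
# Record, program keys) are illustrative and not from the report.

from dataclasses import dataclass, field

@dataclass
class Record:
    patient_id: str
    programs: frozenset   # programs the patient is enrolled in, e.g. {"medicare", "medicaid"}
    payload: dict         # standardized performance data elements

@dataclass
class ClearingHouse:
    inboxes: dict = field(default_factory=dict)   # program name -> list of received records

    def submit(self, record: Record) -> int:
        """Accept one submission and deliver a copy to each relevant program."""
        for program in record.programs:
            self.inboxes.setdefault(program, []).append(record)
        return len(record.programs)   # number of deliveries from one submission

house = ClearingHouse()
dual = Record("p001", frozenset({"medicare", "medicaid"}), {"hba1c_tested": True})
deliveries = house.submit(dual)
print(deliveries)              # 2: one submission reached both programs
print(sorted(house.inboxes))   # ['medicaid', 'medicare']
```

For a dually eligible patient, the provider's reporting burden is thus one submission rather than one per program.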
In health care, the notion of measuring the performance of clinicians and institutions to improve outcomes is not new. The Pennsylvania Hospital collected diagnosis-specific data on patient outcomes in 1754 (McIntyre et al., 2001). A century later, Florence Nightingale developed a hospital data collection and analysis system that ultimately led to new insights into how sanitary conditions affect hospital morbidity and mortality (Nerenz and Neil, 2001). In 1910, a Massachusetts General Hospital surgeon proposed an “end result” tracking system to determine whether patients had received effective treatments (McIntyre et al., 2001). The focus in today’s health care environment is increasingly on using performance data to measure quality, to demand accountability, and to cultivate an information-rich health care marketplace (American Medical Association, 2001).

Performance measurement is commonplace in government health care programs; its application, however, is often uncoordinated and duplicative. As a result, health care providers of all types and in all settings are increasingly engaged in costly and often redundant measurement and reporting activities to meet the demands of government agencies, accrediting groups, professional associations, and others. In addition, providers serving patients with multiple sources of coverage are further burdened by having to submit the same data to more than one program administered by the Centers for Medicare and Medicaid Services (CMS), such as Medicare and Medicaid. With each new measure, there are often different and sometimes conflicting methodologies, data requirements, and terminology (Jencks, 2000; Roper and Cutler, 1998).1

This chapter describes some of the leading performance measures used by government health care programs and concludes by setting forth a vision for optimizing the use of performance measurement.
TYPES OF PERFORMANCE MEASURES

Performance measurement in the context of this report is the use of specific quantitative indicators to identify the degree to which providers in the health care system are delivering care that is consistent with standards or acceptable to customers of the delivery system. More than 20 years ago, Donabedian (1980) proposed that quality can be measured by observing its structure, processes, and outcomes.

Structural measures—such as staffing ratios or the presence of a patient safety committee—refer to organizational characteristics that are thought to create the potential for good quality. They are the basis for most current regulations and are often required by government programs through accreditation, licensure, or certification requirements as a way of ensuring a minimal capacity for quality (as described in Chapter 3).

Process measures quantify the delivery of recommended procedures or services that are correlated with desired outcomes in a specific population group. Process measures can be useful for assessing individual practitioners, as well as for comparing institutional providers, communities, or larger geographic areas (Agency for Healthcare Research and Quality, 2002b). For example, the quality of adult diabetes care is often judged by examining the percentage of patients with diabetes who receive recommended services, including hemoglobin A1c tests, low-density lipoprotein cholesterol tests, lipid profiles, and retinal exams (Texas Medical Foundation, 2002). The data needed to develop process measures are typically obtained from medical records, claims data, and patient surveys.

Outcome measures are used to capture the effect of an intervention on health status, control of a chronic condition, specific clinical findings, or patients’ perceptions of care (Nerenz and Neil, 2001). Two core intermediate outcome measures in adult diabetes care, for example, are the percentage of patients whose most recent hemoglobin A1c level is greater than 9.5 percent and the percentage of patients whose most recent low-density lipoprotein cholesterol level is less than 130 mg/dL.

1 The proliferation of measures is well illustrated in a recent review of quality indicators for one diagnosis alone—community-acquired pneumonia (CAP) (Rhew et al., 2001). The authors conducted a systematic search for CAP-specific quality indicators and identified 44 indicators from 10 organizations, including CMS, JCAHO, the Agency for Healthcare Research and Quality (AHRQ), and VHA. They concluded that only 16 of the 44 indicators were based on evidence, able to detect “clinically meaningful differences,” measurable in a clinical practice setting, or sufficiently precise.
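To make the distinction between the measure types concrete, the toy Python sketch below computes one process rate and two intermediate-outcome rates over an invented patient registry. The field names and data are hypothetical; the thresholds follow the diabetes examples above (most recent HbA1c greater than 9.5 percent, most recent LDL cholesterol under 130 mg/dL).

```python
# Illustrative computation of process and intermediate-outcome rates over a
# toy diabetes registry. Data and field names are invented for illustration.

patients = [
    {"id": "a", "hba1c_tested": True,  "last_hba1c": 10.1, "last_ldl": 145},
    {"id": "b", "hba1c_tested": True,  "last_hba1c": 7.2,  "last_ldl": 110},
    {"id": "c", "hba1c_tested": False, "last_hba1c": None, "last_ldl": 125},
    {"id": "d", "hba1c_tested": True,  "last_hba1c": 8.0,  "last_ldl": 150},
]

def rate(numerator_flags):
    """Percentage of patients for whom the flag is true."""
    flags = list(numerator_flags)
    return 100.0 * sum(flags) / len(flags)

# Process measure: percent of patients who received an HbA1c test at all.
process_pct = rate(p["hba1c_tested"] for p in patients)

# Intermediate outcome measures: percent in poor glycemic control (> 9.5)
# and percent with LDL under 130 mg/dL, among all patients in the registry.
poor_control_pct = rate(p["last_hba1c"] is not None and p["last_hba1c"] > 9.5 for p in patients)
ldl_ok_pct = rate(p["last_ldl"] is not None and p["last_ldl"] < 130 for p in patients)

print(process_pct, poor_control_pct, ldl_ok_pct)   # 75.0 25.0 50.0
```

Note that the process rate needs only a yes/no service flag per patient, while the outcome rates depend on clinical values, which is one reason process measures are often cheaper to collect.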
Outcome analysis may require sophisticated statistical techniques, including risk adjustment, to discern the impact of an intervention independent of confounding factors such as comorbidities, socioeconomic characteristics, and local patterns of care (Agency for Healthcare Research and Quality, 2002b; Rubin et al., 2001).

Until the QuIC was established in 1998, there was little coordination of the government’s use of performance measures for quality improvement. The QuIC has initiated projects to address tasks that are key to the use of quality performance measures (Foster, 2002). These include efforts to inventory quality measures; document their uses, strengths, and weaknesses; explore how best to employ risk adjustment methods; encourage all government programs to use the DQIP measures; and identify the most effective ways to communicate with patients about quality, such as establishing a common vocabulary for federal health care agencies (Quality Interagency Coordination, 2002).
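One simple form of risk adjustment is indirect standardization: compare a provider's observed event count with the count expected given reference rates for each risk stratum of its caseload. The sketch below illustrates only the arithmetic, with invented strata, rates, and counts; real risk-adjustment models are considerably more elaborate.

```python
# Minimal illustration of risk adjustment by indirect standardization.
# Reference rates, caseload, and event counts are invented for illustration.

reference_rates = {     # reference population event rate per risk stratum
    "low":    0.01,
    "medium": 0.05,
    "high":   0.20,
}

provider_caseload = {"low": 100, "medium": 40, "high": 10}   # patients per stratum
observed_events = 6

# Expected events if this provider matched the reference rates in each stratum.
expected_events = sum(n * reference_rates[s] for s, n in provider_caseload.items())

# Observed-to-expected ratio: > 1 means more events than the case mix predicts.
oe_ratio = observed_events / expected_events

print(expected_events)      # 5.0
print(round(oe_ratio, 2))   # 1.2
```

Without the stratification, a provider with a sicker caseload would look worse than one with healthier patients even if both delivered identical care.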
COMMONLY USED PERFORMANCE MEASURE SETS

This section describes some of the leading performance measurement sets used by one or more government health care programs (see Table 4-1).

Consumer Assessment of Health Plans

CAHPS is a survey instrument and reporting system developed with funding and direction from the Agency for Healthcare Research and Quality (AHRQ) to help consumers and purchasers choose among health care plans. CAHPS employs primarily outcome measures—specifically, consumers’ perceptions of their health plan and personal providers—and is used by some state Medicaid agencies, the Medicare program, DOD TRICARE, and public and private employers. NCQA requires managed care plans to field CAHPS and to develop quality improvement projects that address problems identified through CAHPS findings. JCAHO similarly encourages, but does not require, some accredited health care organizations, such as health networks, to employ CAHPS. CAHPS was originally conceived as a tool for managed care but has more recently been adapted for fee-for-service (FFS) purposes. Publicly available algorithms can be used to compute composite measures of CAHPS results and report them in standardized formats, and comparative analyses of CAHPS outcomes are greatly enhanced through the National CAHPS Benchmarking Database.

The CAHPS initiative is still a work in progress. It remains uncertain whether satisfaction ratings can meaningfully inform quality improvement (Sofaer, 2002). AHRQ has launched the development of a second generation of CAHPS research to evaluate the system’s utility for quality improvement and to assess its effectiveness in applied settings.
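The general idea behind composite scoring can be illustrated briefly: each survey item yields the proportion of respondents giving a favorable answer, and a composite averages the proportions of its items. The sketch below shows that idea with invented items and responses; it is not the actual CAHPS specification.

```python
# Hedged sketch of composite survey scoring. Items, composites, and responses
# are invented; this does not reproduce the real CAHPS scoring algorithm.

composite_items = {
    "getting_care_quickly":   ["q1", "q2"],
    "provider_communication": ["q3", "q4", "q5"],
}

# responses[q] holds one 1 (favorable) or 0 (not favorable) per respondent.
responses = {
    "q1": [1, 1, 0, 1], "q2": [1, 0, 0, 1],
    "q3": [1, 1, 1, 1], "q4": [0, 1, 1, 1], "q5": [1, 1, 0, 0],
}

def item_score(q):
    """Proportion of favorable responses for one item."""
    votes = responses[q]
    return sum(votes) / len(votes)

# A composite is the mean of its items' proportions.
composites = {
    name: sum(item_score(q) for q in items) / len(items)
    for name, items in composite_items.items()
}

print(composites["getting_care_quickly"])   # 0.625 = (0.75 + 0.5) / 2
```

Reporting at the composite level, rather than item by item, is what makes plan-to-plan comparison tractable for consumers.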
The principal objectives of CAHPS II are to develop innovative reporting formats and to create survey instruments for nursing homes and group practices that can be used by persons with mobility impairments (Agency for Healthcare Research and Quality, 2001).

Diabetes Quality Improvement Project

DQIP is an example of a disease-specific performance measurement set. The project was funded by CMS to develop a national consensus with regard to a set of standardized process and outcome measures for performance reporting related to the care of adults with diabetes (see Appendix B) (Texas Medical Foundation, 2002). Although the DQIP measure set has
TABLE 4-1 Selected Performance Measure Sets Used by One or More Government Health Programs

| Measure set | Setting | Types of measures |
|---|---|---|
| CAHPS | Health plans, FFS | Outcome |
| DQIP | Outpatient, hospital | Process |
| ESRD CPMs | ESRD facilities | Outcome |
| HEDIS | Health plans | Process, outcome, structure |
| MDS | LTC facilities | Process, outcome |
| CMS National Priorities | Outpatient, hospital | Process, outcome |
| OASIS | Home care | Outcome |

Program use: Medicare uses all seven sets; Medicaid, five; SCHIP, three; DOD TRICARE, three; VHA, two; and IHS, one.

NOTE: CAHPS = Consumer Assessment of Health Plans; CMS = Centers for Medicare and Medicaid Services; DQIP = Diabetes Quality Improvement Project; ESRD CPM = end-stage renal disease clinical performance measure; FFS = fee for service; HEDIS = Health Plan Employer Data and Information Set; IHS = Indian Health Service; LTC = long-term care; MDS = Minimum Data Set; OASIS = Outcome and Assessment Information Set; SCHIP = State Children’s Health Insurance Program; VHA = Veterans Health Administration.
been evolving,2 it is being used by all the major government programs, has been incorporated in the Health Plan Employer Data and Information Set (HEDIS) (see below), and is required in CMS managed care contracts (although not in Medicare FFS). DQIP includes abstracting and quality improvement tools, as well as a technical assistance hotline.

End Stage Renal Disease Clinical Performance Measures

This set of process and outcome measures is used by CMS to monitor and improve the care provided by dialysis facilities. The measures include indicators of the adequacy of hemodialysis and peritoneal dialysis, vascular access, and anemia management. The public can obtain from the Medicare Website patient survival outcomes, as well as other information, for any dialysis facility receiving Medicare reimbursement. The ESRD CPMs have been credited with significant improvements in the quality of renal dialysis facilities (Jencks, 2001).

Health Plan Employer Data and Information Set

HEDIS was introduced by NCQA in 1991 and is updated annually to help purchasers and consumers compare the quality of commercial, Medicaid, and Medicare managed care plans. Its measures are used in many government health care programs, particularly in managed care settings. HEDIS incorporates other established standard measure sets, such as CAHPS, DQIP, and the Health Outcomes Survey (HOS). It encompasses the care of common health conditions, including asthma, cancer, depression, diabetes, and heart disease; patients’ perceptions of care received; and structural health plan attributes.

Minimum Data Set

The MDS is an 8-page set of core assessment items introduced by CMS in 1990 in all Medicare- and Medicaid-certified nursing homes, principally for clinical assessment of nursing home residents.
CMS is currently conducting a pilot project that involves regular disclosure of nine risk-adjusted quality measures, derived from the MDS, with the aim of promoting quality improvement in nursing homes in six states. There are six chronic care measures (physical restraints, pressure sores, weight loss, infections, residents with pain, and declines in activities of daily living) and three measures of post-acute care quality (managing delirium, residents with pain, and improvement in walking) (Centers for Medicare and Medicaid Services, 2001c).

The MDS and the Outcome and Assessment Information Set (OASIS) (see below) have been criticized for being overly burdensome to providers and for failing to reflect the care patients experience as they move from one health care setting to another, such as the transitions among home health care, nursing homes, and hospitals (Institute of Medicine, 2001b).3 The Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (Public Law 106-554) mandated that the Secretary of DHHS report to Congress on the development of standard assessment instruments across a wide array of health care settings, including home care and nursing home care.4 CMS has recently taken steps to shorten the MDS for prospective payment system assessments, effective July 2002 (Centers for Medicare and Medicaid Services, 2002d).

National Priorities Project

This is a CMS quality improvement organization (QIO) project to improve statewide Medicare FFS performance. It uses 22 process measures for three inpatient clinical topics (acute myocardial infarction, heart failure, and stroke) and three outpatient clinical topics (early detection of breast cancer, diabetes management, and pneumonia and influenza immunization).

2 DQIP has been a primary focus of NQF. In May 2002, the NQF Diabetes Measures Review Committee issued for public comment a draft set of diabetes measures drawn from the DQIP measures. The draft set was developed by the National Diabetes Quality Improvement Alliance, a collaboration of the American Medical Association, JCAHO, and NCQA.
3 The IOM Committee on Improving Quality in Long-Term Care has recommended that DHHS and others “fund scientifically sound research toward further development of quality assessment instruments that can be used appropriately across the different long-term care settings and different population groups” (Institute of Medicine, 2001b, p. 127).

4 The report to Congress is due January 1, 2005. It will address issues related to the use of standard instruments for acute care hospitals (inpatient and outpatient); rehabilitation hospitals (inpatient and outpatient); skilled nursing facilities; home health agencies; physical, occupational, or speech therapy; ESRD facilities; and partial hospitalization or other mental health services (Johnson, 2001). In 2001, DHHS held a round of initial meetings with more than 200 stakeholders to identify the key issues that should be addressed in the report to Congress. The stakeholders clearly agreed that it would be optimal to use health information standards to collect comparable data (Hines, 2002). Currently, the agency is working to secure funds to extend this effort (Paul, 2002).
OCR for page 87
Outcome and Assessment Information Set

OASIS is a clinical data set that CMS has used for assessing home care since 1999. CMS requires home care agencies to submit OASIS data for most adult Medicare and Medicaid patients.

There have been widespread complaints about the time and expense required to complete the OASIS reporting form, and numerous organizations have called for streamlining the data set to reduce this administrative burden. Critics have maintained that the OASIS reporting requirements are duplicative, that the paperwork involved consumes more nursing time than is devoted to patient care, that the associated administrative costs are inadequately reimbursed, and even that OASIS is partly to blame for the critical shortage of qualified home care nurses (American Hospital Association and American Home Care Association, 2001). However, there is evidence that OASIS has been a useful tool in home health quality improvement projects, resulting in measurably better outcomes for patients (Shaughnessy et al., 2002).

In June 2002, the DHHS Secretary’s Advisory Committee on Regulatory Reform recommended that OASIS be subject to an independent cost–benefit evaluation. The committee also recommended that the reporting form be modernized to, for example, better reflect home health agency operations and current medical practice; eliminate data elements that are duplicative or not used for payment, quality management, or survey purposes; and create the option of using one form for all situations of care or changes in status (DHHS Secretary’s Advisory Committee on Regulatory Reform, 2002). In response to a request from the Secretary, CMS completed an in-depth review of all OASIS elements and has proposed reducing the burden associated with OASIS by approximately 25 percent. CMS estimates that the proposed changes could be implemented by the end of December 2002.
CMS has also convened a technical expert panel and hosted a town hall meeting to assess any additional opportunities for streamlining the OASIS data collection tool (Centers for Medicare and Medicaid Services, 2002e).

OVERVIEW OF CURRENT PERFORMANCE MEASUREMENT ACTIVITIES

Centers for Medicare and Medicaid Services

CMS manages the lion’s share of the federal responsibilities for three of the government health care programs addressed in this report—Medicare, Medicaid, and SCHIP. It thereby influences the quality of health care services provided to more than one in four U.S. residents (an estimated 83 million people).
OCR for page 88
Medicare

Since creating Medicare in 1965, Congress has mandated a series of programs to ensure the quality of care provided to Medicare beneficiaries (Institute of Medicine, 1990). Medicare’s approach to improving quality—like that in the private sector—has evolved differently depending on the clinical context and delivery setting (MedPAC, 1999). By statute, Medicare’s quality improvement resources must be allocated to its FFS and Medicare+Choice (M+C) programs in proportion to beneficiary participation in the two delivery systems (Health Care Financing Administration, 1999).5 Nevertheless, CMS relies much more heavily on regulatory requirements to promote quality in Medicare managed care and in long-term care facilities and programs than in Medicare FFS (MedPAC, 2002).6

In addition, although CMS employs performance measures to stimulate quality improvement across a wide range of clinical settings and delivery systems, it uses those measures in distinctly different ways in managed care and FFS (MedPAC, 2002). For example:

- While M+C plans are held accountable for their performance, FFS contractors are not. As a condition of Medicare participation, M+C plans must implement a quality improvement process and also show evidence of improvement using three sets of measures: the Medicare versions of HEDIS, CAHPS, and HOS (MedPAC, 2002).7 In Medicare FFS, participation in quality improvement projects is voluntary (although hospitals and other health care institutions must respond to QIO data requests).

- CMS publicly discloses the quality improvement efforts of individual M+C plans by, for example, annually reporting each plan’s HEDIS measures on the CMS Website. Only limited information about relatively small subsets of FFS providers (i.e., dialysis facilities and nursing homes) is publicly reported.
Quality Improvement Organizations

QIOs are Medicare’s primary tool for enhancing quality (see Box 4-1). Today’s QIOs reflect more than 30 years’ evolution in CMS efforts to address quality.

5 About 87 percent of Medicare beneficiaries are covered by Medicare fee for service (FFS); 14 percent are enrolled in Medicare+Choice (M+C) and health maintenance organizations (Stuber et al., 2001).

6 This is due in part to the Balanced Budget Act (BBA) of 1997 (P.L. 105-33), which instructed CMS to regulate quality improvement in M+C plans.

7 See Chapter 3 for a discussion of Medicare conditions of participation.
OCR for page 89
BOX 4-1 Quality Improvement Organizations: Objectives, Staffing, and Financing

There are currently 37 QIOs serving the 50 states, the District of Columbia, and the U.S. territories. Medicare’s QIO program has three basic objectives:

- To improve the quality of care for Medicare beneficiaries by ensuring that it meets professionally recognized standards of health care.

- To protect the integrity of the Medicare Trust Fund by ensuring that Medicare pays only for reasonable and medically necessary services that are provided in the most economical setting.

- To protect beneficiaries by expeditiously addressing beneficiary complaints, provider-issued notices of noncoverage, violations of the Emergency Medical Treatment and Active Labor Act (P.L. 99-272—the antidumping statute), payment error prevention, and other mandated responsibilities.

CMS finances QIO projects through competitively awarded contracts that can be renewed every 3 years or canceled and put up for competitive bidding. QIOs are private organizations that vary in their capabilities and in the extent to which they do non-Medicare work. They typically employ a multidisciplinary team that includes physicians, nurses, health care quality professionals, epidemiologists, statisticians, and communications experts. Every QIO contracts with Medicare, but many also work with state Medicaid programs (about two-thirds conduct quality reviews for state Medicaid agencies), as well as with private employers, skilled nursing facilities, and ESRD facilities.

The Medicare–QIO 3-year contracts detail a complex and extensive set of tasks referred to as the Scope of Work (SOW). During the sixth SOW, covering federal fiscal years 2000–2002, QIOs received about $240 million per year from CMS, approximately 0.1 percent of annual Medicare spending. The seventh SOW was issued while this report was being prepared.
SOURCES: Agency for Healthcare Research and Quality, 2002a; Center for Medicare Education, 2001; Centers for Medicare and Medicaid Services, 2002a; Health Care Financing Administration, 2000; MedPAC, 2002.
their performance measures, the specifications to be followed in calculating the measures, and the method and timing that health plans must use for reporting.11

State Children’s Health Insurance Program

Congress established SCHIP in 1997 for low-income uninsured children. As of 2002, most states had operated their programs for only 3 or 4 years. As a consequence, both the federal and state focus for SCHIP has been on enrolling eligible children and making the program operational. More recently, attention has turned to assessing the program’s efforts (Henneberry, 2001).

SCHIP regulations require states to establish performance goals and performance measures, including a written assurance that the state will collect and maintain data and furnish reports to the Secretary of DHHS. Managed care is the dominant delivery system used by SCHIP programs, and the regulations grant CMS the authority to mandate standardized performance measures for managed care plans serving SCHIP enrollees (but not for FFS providers). No specific performance measures or goals are required.

Many states require managed care plans that serve SCHIP enrollees to report HEDIS measures (Henneberry, 2001). However, surveys of SCHIP programs indicate that the programs often modify HEDIS to tailor data collection to their specific program needs, thus making state-to-state comparisons problematic (French and Miele, 2001). Some states are also adapting HEDIS for FFS and primary care case management. Other states have developed their own performance measures. Wisconsin, for example, is developing a new performance measurement system, the “Medicaid Encounter Data Driven Improvement Core-Measure Set,” drawing directly from monthly HMO encounter data (Henneberry, 2001).
CMS and AHRQ are currently collaborating on a Performance Measurement Partnership Project with state Medicaid and SCHIP programs to determine the feasibility of implementing a core set of standardized performance measures, such as HEDIS or CAHPS, for managed care in Medicaid and SCHIP. One aim of the project is to motivate benchmarking and state creativity in using performance measures (Block, 2002).

11 States may choose to develop their own measures or use standardized measures from HEDIS, FACCT, AHRQ’s CONQUEST database, or the measures recommended in A Guide for States to Assist in the Collection and Analysis of Medicaid Managed Care (MEDSTAT, 1998).
DOD TRICARE

DOD TRICARE is in the midst of an ambitious effort to reengineer the military health system (MHS) (Milbank Memorial Fund, 2001). In December 2001, TRICARE Management Activity (TMA), the DOD-level administrator of the MHS, released the Population Health Improvement Plan (PHI) and Guide, a detailed blueprint for making “population health improvement a reality in the DOD” (DOD TRICARE Management Activity, 2001, p. i). In earlier research that contributed to the guide’s development, TMA had concluded that its system was “replete with metrics covering a wide range of uncoordinated indicators of varying usefulness” and had “disparate performance measurement systems” (TRICARE, 1999b, p. 26). The PHI Guide directly addresses this concern and calls for an “enterprise-wide core set of standardized performance measures” to drive improvements in clinical services (DOD TRICARE Management Activity, 2001, p. 67). One of the first steps will be to integrate measure sets that are already collected for mandatory quality assurance programs, such as HEDIS and ORYX.

Today’s TRICARE Website reports numerous performance measurement activities: analyses of HEDIS data used to focus quality improvement efforts related to diabetes, asthma, breast cancer screening, and cervical cancer screening; “report cards” drawn from an array of beneficiary surveys; digests of performance measures called TRICARE Operational Performance Statements (TOPS); and others. One survey, the Health Care Survey of DOD Beneficiaries, is an adapted CAHPS instrument used by TRICARE to monitor consumer satisfaction with and perceptions of the quality of MHS hospitals, clinics, and clinical staff (including how the MHS compares with the care received by the privately insured population) (TRICARE, 1999a).12 The survey responses are aggregated into composite performance measures using CAHPS algorithms.
The resulting measures are benchmarked against the National CAHPS Benchmarking Database, and the findings are released in Web-based interactive report cards.

TOPS is a quarterly digest that disseminates routine analyses of the MHS. Included are performance measures such as beneficiary grievance rates, preventable admission rates for active-duty personnel (e.g., for angina or chronic obstructive pulmonary disease), preventable admission rates for non–active-duty managed care enrollees (e.g., for asthma or congestive heart failure), access to care, and patient satisfaction.

12 The Health Care Survey of DOD Beneficiaries was mandated by the National Defense Authorization Act for fiscal year 1993 (P.L. 102-484).
Veterans Health Administration

VHA’s integrated health information system, including its framework for using performance measures to improve quality, is considered one of the best in the nation. VHA uses performance measures along a number of dimensions—patient satisfaction, functional outcomes, personal health practices, and clinical measures—to drive quality improvement in a wide range of clinical disciplines and across ambulatory, hospital, and long-term care settings (Jones and VHA, 2002; Nerenz and Neil, 2001). One of the most highly regarded VHA initiatives employing performance measures is the National Surgical Quality Improvement Program (NSQIP). NSQIP was implemented to develop comparative risk-adjusted information on surgical outcomes in the VHA’s many medical centers (Daley, 1998). The initiative’s key components are periodic performance measurement and feedback, along with comparative, site-specific, and outcome-based annual reports; self-assessment tools; structured site visits; and dissemination of best practices. From 1991, when NSQIP data were first collected, through 2000, the impact on the outcomes of major surgeries at VHA hospitals was dramatic: 30-day postoperative mortality decreased by 27 percent and 30-day morbidity by 45 percent (Shukri et al., 2002). Many other performance measures are in use, including, for example, several evidence-based quality indices developed by VHA researchers to improve preventive, chronic, and palliative services and commercially available measurement sets such as HEDIS and CAHPS. The Chronic Disease Care Index targets the five most common conditions treated at VHA hospitals: ischemic heart disease, hypertension, chronic obstructive pulmonary disease, diabetes mellitus, and obesity.
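NSQIP’s comparative reporting rests on risk adjustment: a site’s observed event count is compared with the count expected from patient-level risk models. A minimal sketch of an observed-to-expected (O/E) ratio follows; the cohort and predicted probabilities are hypothetical and do not reflect actual NSQIP models:

```python
def observed_to_expected(events: list[int], predicted_risk: list[float]) -> float:
    """Observed-to-expected (O/E) ratio: observed event count divided by
    the sum of model-predicted per-patient probabilities. A ratio below
    1.0 suggests better-than-expected risk-adjusted outcomes."""
    return sum(events) / sum(predicted_risk)

# Hypothetical five-patient cohort: one 30-day death observed, while a
# (hypothetical) risk model predicted probabilities summing to 0.25.
deaths = [0, 0, 1, 0, 0]
risks = [0.02, 0.05, 0.10, 0.03, 0.05]
print(round(observed_to_expected(deaths, risks), 1))  # 4.0
```

Because the denominator reflects each site’s case mix, O/E ratios allow fairer comparison across medical centers than raw mortality rates.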
HEDIS measures have been used to assess diabetes care, heart attack treatment, ambulatory follow-up after inpatient mental health stays, and cervical cancer screening (Jones et al., 2000; Mencke et al., 2000).

Indian Health Service

IHS has developed a performance evaluation system to meet the performance measurement requirements of JCAHO’s ORYX initiative and to comply with the Government Performance and Results Act (Indian Health Service, 2000). The majority of IHS facilities are JCAHO-accredited and thus are required to regularly submit and use performance measures for quality improvement. The performance evaluation system uses quality indicators that have been specifically tailored to Indian health care populations and that focus on 12 priority health problems: diabetes, obesity, cancer, heart disease, alcohol and substance abuse, family abuse and violence,
injuries, dental disease, poor living environment, mental health, tobacco use, and maternal and child health (Indian Health Service, 2002).

OPTIMIZING THE GOVERNMENT’S USE OF PERFORMANCE MEASURES

In its recent comprehensive assessment of how to advance the quality of the MHS, DOD/TMA concluded that a conceptual framework is key for “improving the health of populations” and for guiding the “specific actions and tools that will help to build healthy communities” (DOD TRICARE Management Activity, 2001, p. v). The committee agrees and believes this to be true for all government health care performance measurement efforts. The committee believes further that a conceptual framework for performance measurement should build on efforts already under way. To achieve the continuity required to formulate a conceptual framework for performance measurement, the committee encouraged adoption of the taxonomy developed by the Institute of Medicine’s earlier Committee on the Quality of Health Care in America. That committee identified six dimensions or attributes of quality that should shape government’s use of performance measures (see Box 4-3).

BOX 4-3 Six Attributes of Quality

Safe—avoiding injuries to patients from the care that is intended to help them.
Effective—providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and overuse).
Patient-centered—providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions.
Timely—reducing waits and sometimes harmful delays for both those who receive and those who give care.
Efficient—avoiding waste, in particular waste of equipment, supplies, ideas, and energy.
Equitable—providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.

SOURCE: Institute of Medicine, 2001a.

These six attributes have already been adopted by DHHS as a conceptual framework for the National Health Care Quality Report. They have also been endorsed in whole or in part by various private-sector groups, including the Leapfrog Group and NQF. In addition, another IOM committee has identified a list of 20 priority areas for health system improvement, and these represent excellent candidates for the development of standardized performance measures (Institute of Medicine, 2002). Most of the government programs have identified leading chronic conditions and health concerns for their populations, and there is much overlap among these lists.

NEED TO STANDARDIZE QUALITY PERFORMANCE MEASURES

Government health care programs reflect a growing recognition that measuring quality and using quality performance measures to improve health care are central to the federal government’s roles as regulator, purchaser, and provider of health care for almost half the U.S. population. Yet too many resources are spent on health care measures that are either duplicative or ineffective, and little comparative quality information is made available in the public domain for use by beneficiaries, health professionals, or other stakeholders. Furthermore, potential users of the available measures are often hindered by the lack of reporting standards, conflicting methodologies, and inconsistent terminology (Eddy, 1998; Rhew et al., 2001). Standardizing measures can lessen this confusion. In addition to addressing these problems, the committee believes standardized performance measures could drive quality improvement in numerous other ways:

By drawing attention to best practices and encouraging providers to adopt them.
By facilitating comparisons of accountable entities, such as hospitals, health plans, long-term care facilities, and, potentially, physicians’ practices.
By enabling the development of national benchmarks and helping to identify regional differences.
By supporting efforts to sensibly reward quality through payment or other means.
By expanding the research community’s capacity to identify the factors that drive or diminish health care quality.
By helping to make the link between accountable entities and patient outcomes.
By providing the clinical data needed to formulate workable risk adjustment techniques.
By providing the data needed to identify providers who demonstrate consistently substandard care and to develop strategies for improvement or narrowing of their scope of practice.

Performance measurement is not a perfect solution; its problems and pitfalls must be addressed and guarded against. Any performance measurement approach will focus on only a limited number of areas, and there is a risk that too little attention will be paid to clinical areas that are not the focus of measurement activity. There are numerous methodologic challenges, such as capturing rare events and adjusting for differences in risk or severity of illness (Eddy, 1998). In the case of outcome measures, it must be recognized that almost all outcomes are probabilistic (i.e., doing the right things does not guarantee good outcomes, and good outcomes sometimes occur even when the right things were not done) and that many of the factors determining outcomes lie outside the control of the health system (Eddy, 1998). There must also be ways to identify and deal with missing or incorrect data (McGlynn and Adams, 2001). Although performance measurement is not a perfect solution, the committee believes that its potential benefits are sizable and that the federal government should act expeditiously to promulgate a standardized measurement set and implement it within each of the government programs. At the same time, efforts must be made to address operational and methodologic challenges and to mitigate any unintended adverse consequences.

Implications for Current Activities

Adoption of a central focus on performance measurement and reporting will have significant implications for the way in which the government conducts its quality enhancement activities.
In today’s environment of scarce resources and rising health care costs, it will be imperative for each government health care program to assess carefully how best to realize its objectives. Standardized quality measurement and reporting must not be pursued as an additional government requirement, but rather as a replacement for current quality measurement activities. Moreover, whenever possible, providers should not be burdened with reporting the same patient-specific performance data more than once to the same government agency. There should be a designated government entity responsible for coordinating the government’s performance measurement activities. QuIC has made a strong start in the right direction by convening representatives from the six major government health care programs and initiating various collaborative projects based on voluntary participation, but it lacks a
clear mandate. Congress should grant the statutory authority and provide adequate funding to either QuIC or another existing entity to coordinate and standardize the government’s performance measurement activities. This entity should establish strong working relationships with various private-sector groups, including NQF, NCQA, JCAHO, the Leapfrog Group, and FACCT, to optimize future public–private collaboration and provide structured mechanisms for consumer input. It should be noted that the committee considered and rejected the option of establishing a new oversight authority. It concluded that the existing infrastructure, if applied more rigorously and with adequate resources, has the potential to accomplish the objectives laid out in this report. The costs and organizational challenges of forming a new agency were viewed as substantial, creating the potential for delay in implementation of the substantive activities. The QuIC should move aggressively to establish an initial set of standardized measures. As noted previously, a wealth of measures already exists; in very few instances will it be necessary to develop measures from scratch. Some measure sets, for example, DQIP, are already being used by several or most of the government programs. By starting with this “low-hanging fruit,” it should be possible to identify measure sets for 5 conditions almost immediately, thus allowing the pilot testing process to begin in fiscal year 2003. The remaining 10 sets can then be designated in fiscal year 2004. By moving expeditiously to designate all 15 sets of measures within the first 18 months to 2 years, the federal government will provide important information to providers regarding the necessary capabilities and specifications for their information systems. CMS has historically allocated most of Medicare’s quality improvement budget to its QIO contracts.
The committee strongly recommends the use of standardized measures derived from computerized data and public reporting of comparative quality information. It will be important for CMS to reexamine how best to use the QIOs to enhance quality within this context. For example, should QIOs play a role in the release of public-domain comparative quality reports? Would substantial quality improvements in Medicare be achieved more readily with fewer QIO-like entities operating on a national or larger regional scale? States will also need to relinquish some flexibility in promulgating state-specific performance measures for Medicaid and SCHIP programs. State representatives should be active participants in the QuIC, thus having input into the process of establishing the standardized measure sets. But individual states would be required to apply within their Medicaid and SCHIP programs the standardized measures applicable to the populations served. States would still retain a good deal of flexibility in how they use their regulatory and purchasing powers to act on the performance information provided through standardized reporting mechanisms.

In summary, the six major government health care programs should commit to the use of common sets of standardized performance measures. The current administrative burden on the providers that constitute the foundation of government health care services is unacceptable. The committee believes that standardized metrics and reporting formats would not only help alleviate this burden, but also help ensure meaningful gains in the quality of health care. Finally, effective performance measurement demands real-time access to sufficient clinical detail and accurate data (Schneider et al., 1999). By the time retrospective performance measures reach decision makers, it is often too late for the information to be useful. The current health information environment is far too fragmented, technologically primitive, and overly dependent on paper medical records. The nation’s need for a functional health care information system is examined in the next chapter.

REFERENCES

Agency for Healthcare Research and Quality. 2001. “AHRQ Seeks Applications for Second Phase of CAHPS®. Media Advisory.” Online. Available at http://www.ahrq.gov/news/press/pr2001/cahps2pr.htm [accessed July 10, 2002].
———. 2002a. “Fact Sheet: Medicare QIOs: Improving Patient Safety and Quality of Care for Seniors; A National Network of Quality Improvement Experts: Major Medicare QIO Efforts.” Online. Available at http://www.ahqa.org/pub/media/159_766_2687.cfm [accessed May 13, 2002].
———. 2002b. “Child Health Tool Box: Measuring Performance in Child Health Programs. Understanding Performance Measurement.” Online. Available at http://www.ahrq.gov/chtoolbx/understn.htm [accessed July 10, 2002].
American Hospital Association, and American Home Care Association. 2001. Letter to T. Scully, CMS Administrator (Subject: OASIS).
American Medical Association, Joint Commission on Accreditation of Healthcare Organizations, and National Committee for Quality Assurance. 2001. “Principles for Performance Measurement in Health Care. A Consensus Statement.” Online. Available at http://www.ncqa.org/communications/news/prinpls.htm [accessed May 29, 2002].
Block, R. (CMS). 16 May 2002. Personal communication to Jill Eden.
Center for Medicare Education. 2001. “The Role of PROs, Issue Brief, V2 (2).” Online. Available at http://www.medicareed.org/pdfs/papers53.pdf [accessed May 13, 2002].
Centers for Medicare and Medicaid Services. 2001a. “End Stage Renal Disease (ESRD) Network Organizations.” Online. Available at http://www.hcfa.gov/quality/5d.htm [accessed Feb. 8, 2002].
———. 2001b. “MDS Quality Indicator and Frequencies Reports.” Online. Available at http://hcfa.gov/projects/mdsreports/default.asp [accessed June 17, 2002].
———. 2001c. “Nursing Home Compare—Home.” Online. Available at http://www.medicare.gov/NHCompare/home.asp [accessed May 6, 2002].
———. 2001d. “Protocols for External Quality Review of Medicaid Managed Care Organizations and Prepaid Health Plans.” Online. Available at http://www.hcfa.gov/Medicaid/mceqrhmp.htm [accessed May 15, 2002].
———. 2001e. “Quality of Care: National Projects, ESRD Clinical Performance Measures Project (2000 Annual Report).” Online. Available at hcfa.gov/quality/3m8.htm [accessed Jan. 9, 2002].
———. 2002a. “Quality Improvement Organizations Statement of Work.” Online. Available at www.hcfa.gov/qio/2.asp [accessed Apr. 22, 2002].
———. 2002b. “Quality Indicators.” Online. Available at http://www.cms.hhs.gov/qio/1a1-d.asp [accessed June 14, 2002].
———. 2002c. “Statement of Work, QIOs: 7th Round February 2002 Version.” Online. Available at http://www.hcfa.gov/qio/2b.pdf [accessed May 13, 2002].
———, CMS Office of Public Affairs. 2002d. “Medicare Streamlines Paperwork Requirements for Nursing Homes to Allow Nurses, Other Caregivers to Spend More Time With Patients.” Online. Available at www.CMS.hhs.gov/media/press/release.asp?counter=462 [accessed July 10, 2002].
———. 2002e. “Medicare Program; Town Hall Meeting on the Outcome Assessment Information Set (OASIS).” Online [accessed Aug. 12, 2002].
———. “Nursing Home Quality Initiative.” Online [accessed Aug. 12, 2002].
Daley, J. 1998. About the National VA Surgical Quality Improvement Program. The Forum, VA Office of Research & Development.
DHHS Secretary’s Advisory Committee on Regulatory Reform. 2002. Regional Hearing #5 Meeting Minutes/Summary.
DOD TRICARE Management Activity. 2001. “Population Health Improvement Plan Guide.” Online. Available at http://www.tricare.osd.mil/mhsophsc/DoD_PHI_Plan_Guide.html [accessed May 15, 2002].
Donabedian, A. 1980. The definition of quality and approaches to its assessment. In Explorations in Quality Assessment and Monitoring. Vol. I. Ann Arbor MI: Health Administration Press.
Dyer, M., M. Bailit, and C. Kokenyesi. 2002. Are Incentives Effective in Improving the Performance of Managed Care Plans, Working Paper in the Informed Purchasing Series. Lawrenceville NJ: Center for Health Care Strategies.
Eddy, D. M. 1998. Performance measurement: problems and solutions. Health Aff (Millwood) 17 (4):7-25.
Forum of End Stage Renal Disease Networks. 2002. “What Are the ESRD Networks?” Online. Available at http://www.esrdnetworks.org/networks_defined.htm [accessed Apr. 30, 2002].
Foster, N. (QuIC). 18 April 2002. Personal communication to Jill Eden.
French, J. B., and A. Miele. “Evaluation of HEDIS in Medicaid and SCHIP.” Online. Available at http://www.ncqa.org/Programs/QSG/EvaluationofHEDISinMedicaidandSCHIP.pdf [accessed Dec. 2001].
Health Care Financing Administration. 1999. “QIO SOWs: Request for Proposal, Sixth Round.” Online. Available at http://www.hcfa.gov/qio/2a.pdf [accessed May 13, 2002].
———. 2000. “National Projects Reports: Medicare Priorities.” Online. Available at http://www.hcfa.gov/quality/3k.htm#priority [accessed May 13, 2002].
Henneberry, J. 2001. State efforts to evaluate the progress and success of SCHIP (Issue Brief). NGA Center for Best Practices.
Hines, L. (DHHS). 8 August 2002. BIPA info. Personal communication to Jill Eden.
Indian Health Service. 2000. “Indian Health Performance Evaluation System (PES).” Online. Available at http://www.ihs.gov/NonMedicalPrograms/IHPES/index.cfm?module=content&option=pes [accessed June 15, 2001].
———. “IHS FY 1999 Performance Plan.” Online. Available at http://www.ihs.gov/PublicInfo/Publications/Perfplan2-1-99.asp [accessed Jan. 14, 2002].
Institute of Medicine. 1990. Medicare: A Strategy for Quality Assurance. Washington DC: National Academy Press.
———. 2001a. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington DC: National Academy Press.
———. 2001b. Improving the Quality of Long-Term Care. Washington DC: National Academy Press.
———. 2002. Priority Areas for National Action: Transforming Health Care Quality. Washington DC: National Academy Press.
Jencks, S. F. 2000. Clinical performance measurement—a hard sell. JAMA 283 (15):2015-6.
Jencks, S. F. 2001. “Oct. workshop: Protecting and Improving Safety and Quality for Medicare-HCQIP.” (PP slides).
Jencks, S. F. (Quality Improvement Group, Office of Clinical Standards and Quality, Centers for Medicare & Medicaid Services). 6 August 2002. Re: study period. Personal communication to Jill Eden.
Johnson, D. 2001. HCFA Legislative Summary: Letter to All Interested Parties. Washington DC: CMS.
Joint Commission on Accreditation of Healthcare Organizations. 2002. “ORYX: The Next Evolution in Accreditation; Questions and Answers about the Joint Commission’s Planned Integration of Performance Measures into the Accreditation Process.” Online. Available at http://www.jcaho.org/perfmeas/oryx_qa.html [accessed May 13, 2002].
Jones, D., A. Hendricks, C. Comstock, A. Rosen, B. H. Chang, J. Rothendler, C. Hankin, and M. Prashker. 2000. Eye examinations for VA patients with diabetes: standardizing performance measures. Int J Qual Health Care 12 (2):97-104.
Jones, E., and VHA. 2002. “Quality Resources Newsletter; Three Interlinked Services Available in 2002.” Online. Available at http://www.oqp.med.va.gov/newsletter/newsletter.asp [accessed May 15, 2002].
Kaye, N. 2001. Medicaid Managed Care: A Guide for States. Prepared for the Henry J.
Kaiser Family Foundation, the Health Resources and Services Administration, the David and Lucile Packard Foundation, and the Congressional Research Service. Portland ME: National Academy for State Health Policy.
Matthews, T. L. 2000. Measuring the Quality of Medicaid Managed Care: An Introduction to State Efforts. Lexington KY: Council of State Governments.
McGlynn, E., and J. Adams. 2001. Public release of information on quality. Pp. 183-202. In Changing the U.S. Health Care System: Key Issues in Health Services Policy and Management. 2nd edition. R. Andersen, T. Rice, and G. Kominski, eds. Jossey-Bass, Inc.
McIntyre, D., L. Rogers, and E. J. Heier. 2001. Overview, history and objectives of performance measurement. Health Care Financ Rev 22 (3):7-21.
Medicare. 2002. “Medicare.gov - Dialysis Facility Compare Home.” Online. Available at http://www.medicare.gov/dialysis/home.asp [accessed May 13, 2002].
MedPAC. 1999. Chapter 2: “Influencing Quality in Traditional Medicare.” Report to Congress: Selected Medicare Issues. Washington DC: MedPAC.
———. 2002. “Report to Congress: Applying Quality Improvement Standards in Medicare.” Online. Available at http://www.medpac.gov/publications/congressional_reports/jan2002_QualityImprovement.pdf [accessed Oct. 2, 2002].
MEDSTAT. 1998. A Guide for States to Assist in the Collection and Analysis of Medicaid Managed Care Data (CMS Contract #500-92-0035). Baltimore: CMS.
Mencke, N. M., L. G. Alley, and J. Etchason. 2000. Application of HEDIS measures within a Veterans Affairs medical center. Am J Manag Care 6 (6):661-8.
Milbank Memorial Fund. 2001. “Value Purchasers in Health Care: Seven Case Studies; The Military Health System: Implementing a Vision for Value.” Online. Available at http://www.milbank.org/2001ValuePurchasers/011001valuepurchasers.html#military [accessed May 14, 2002].
Nerenz, D. R., and N. Neil. 2001. “Performance Measures for Health Care Systems, Commissioned Paper for the Center for Health Management Research.” Online. Available at http://depts.washington.edu/chmr/docs/commissioned_papers/performancemeasures_nerenz_2001.doc [accessed June 14, 2002].
Paul, B. (CMS). 8 August 2002. BIPA 2000. Personal communication to Jill Eden.
Quality Interagency Coordination. 2002. “Quality Interagency Coordination (QuIC) Task Force.” Online. Available at http://www.quic.gov/index.htm [accessed July 11, 2002].
Rhew, D. C., M. B. Goetz, and P. G. Shekelle. 2001. Evaluating quality indicators for patients with community-acquired pneumonia. Jt Comm J Qual Improv 27 (11):575-90.
Roper, W. L., and C. M. Cutler. 1998. Health plan accountability and reporting: issues and challenges. Health Aff (Millwood) 17 (2):152-5.
Rubin, H. R., P. Pronovost, and G. B. Diette. 2001. The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care 13 (6):469-74.
Schneider, E. C., V. Riehl, S. Courte-Wienecke, D. M. Eddy, and C. Sennett. 1999. Enhancing performance measurement: NCQA’s road map for a health information framework. National Committee for Quality Assurance. JAMA 282 (12):1184-90.
Shaughnessy, P. W., D. F. Hittle, K. S. Crisler, M. C. Powell, A. A. Richard, A. M. Kramer, R. E. Schlenker, J. F. Steiner, N. S. Donelan-McCall, J. M. Beaudry, K. L. Mulvey-Lawlor, and K. Engle. 2002. Improving patient outcomes of home health care: findings from two demonstration trials of outcome-based quality improvement. J Am Geriatr Soc 50 (8):1354-64.
Shukri, K., J.
Henderson, and W. Daley. 2002. The comparative assessment and improvement of quality of surgical care in the Department of Veterans Affairs. Arch Surg 137:20-27.
Sofaer, S. 2002. Why ask patients? Presentation at the annual meeting of the Academy for Health Services Research and Health Policy, Washington DC.
Stuber, J., G. Dallek, and B. Biles. 2001. Program on Medicare’s Future: National and local factors driving health plan withdrawals from Medicare+Choice. New York: The Commonwealth Fund.
Texas Medical Foundation. 2002. “Diabetes quality improvement project.” Online. Available at www.dqip.org [accessed July 10, 2002].
TRICARE. 1999a. “Health Care Survey of DOD Beneficiaries: Overview.” Online. Available at http://www.tricare.osd.mil/survey/hcsurvey/overview.html [accessed May 13, 2002].
———. 1999b. “MHS Optimization Plan February 1999 Interim Report.” Online. Available at http://www.tricare.osd.mil/mhsophsc/mhs_supportcenter/Library/MHS_Optimization_Plan.pdf [accessed May 14, 2002].
Verdier, J., and R. Dodge. 2002. Other Data Sources and Uses, Working Paper in the Informed Purchasing Series. Lawrenceville NJ: Center for Health Care Strategies.