Summary of Chapter Recommendations
The committee recommends that the federal government accelerate, expand, and coordinate its use of standardized performance measurement and reporting to improve health care quality.
RECOMMENDATION 3: Congress should direct the Secretaries of the Department of Health and Human Services (DHHS), Department of Defense (DOD), and Department of Veterans Affairs (VA) to work together to establish standardized performance measures across the government programs, as well as public reporting requirements for clinicians, institutional providers, and health plans in each program. These requirements should be implemented for all six major government health care programs and should be applied fairly and equitably across various financing and delivery options within those programs. The standardized measurement and reporting activities should replace the many performance measurement activities currently under way in the various government programs.
RECOMMENDATION 4: The Quality Interagency Coordination (QuIC) Task Force should promulgate standardized sets of performance measures for 5 common health conditions in fiscal year (FY) 2003 and another 10 sets in FY 2004.
a. Each government health care program should pilot test the first 5 sets of measures between FY 2003 and FY 2005 in a limited number
of sites. These pilot tests should include the collection of patient-level data and the public release of comparative performance reports.
b. All six government programs should prepare for full implementation of the 15-set performance measurement and reporting system by FY 2008. The government health care programs that provide services through the private sector (i.e., Medicare, Medicaid, the State Children’s Health Insurance Program [SCHIP], and portions of DOD TRICARE) should inform participating providers that submission of the audited patient-level data necessary for performance measurement will be required for continued participation in FY 2007. The government health care programs that provide services directly (i.e., the Veterans Health Administration [VHA], the remainder of DOD TRICARE, and the Indian Health Service [IHS]) should begin work immediately to ensure that they have the information technology capabilities to produce the necessary data.
The initial set of measures should focus primarily on validated process-of-care measures. Many process measures, such as those in the Diabetes Quality Improvement Project (DQIP) set, can readily be used for quality measurement without adjusting for patients’ demographics or other risk factors. Moreover, compared with outcome measures, many process measures take less time to collect, require smaller samples, and can be collected from data that have already been recorded for other clinical or administrative purposes (Rubin et al., 2001). Process measures can also be easier to benchmark. But the measurement set should not be limited to process measures alone. Over time, incorporating outcome measures and measures of patient perceptions will allow for a richer assessment of the contributions of health care to improved patient and population health status.
The QuIC, an interagency committee with representation from the six major government health care programs, is well positioned to coordinate these activities. QuIC should coordinate its efforts with private-sector groups involved in the promulgation of standardized performance measures, such as the National Quality Forum (NQF), the National Committee for Quality Assurance (NCQA), the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Leapfrog Group, and the Foundation for Accountability (FACCT).
The coordinating body should ensure that the design of performance measures and their dissemination reflect the participation of consumers. It should also aim to minimize the number of times providers must report patient-specific performance data. For example, standardized data on patients who are dually eligible for Medicare and Medicaid might be submitted to a clearinghouse, which would then distribute the data to the relevant programs.
In health care, the notion of measuring the performance of clinicians and institutions to improve outcomes is not new. The Pennsylvania Hospital collected diagnosis-specific data on patient outcomes in 1754 (McIntyre et al., 2001). A century later, Florence Nightingale developed a hospital data collection and analysis system that ultimately led to new insights into how sanitary conditions affect hospital morbidity and mortality (Nerenz and Neil, 2001). In 1910, a Massachusetts General Hospital surgeon proposed an “end result” tracking system to determine whether patients had received effective treatments (McIntyre et al., 2001).
The focus in today’s health care environment is increasingly on using performance data to measure quality, to demand accountability, and to cultivate an information-rich health care marketplace (American Medical Association, 2001). Performance measurement is commonplace in government health care programs; its application, however, is often uncoordinated and duplicative. As a result, health providers of all types and in all health care settings are increasingly engaged in costly and often redundant measurement and reporting activities to meet the demands of government agencies, accrediting groups, professional associations, and others. In addition, providers serving patients with multiple sources of coverage are further burdened by having to submit the same data to more than one program administered by the Centers for Medicare and Medicaid Services (CMS), such as Medicare and Medicaid. With each new measure, there are often different and sometimes conflicting methodologies, data requirements, and terminology (Jencks, 2000; Roper and Cutler, 1998).1
This chapter describes some of the leading performance measures used by government health care programs and concludes by setting forth a vision for optimizing the use of performance measurement.
TYPES OF PERFORMANCE MEASURES
Performance measurement in the context of this report is the use of specific quantitative indicators to identify the degree to which providers in the health care system are delivering care that is consistent with standards or acceptable to customers of the delivery system. More than 20
years ago, Donabedian (1980) proposed that quality can be measured by observing its structure, processes, and outcomes. Structural measures—such as staffing ratios or the presence of a patient safety committee—refer to organizational characteristics that are thought to create the potential for good quality. They are the basis for most current regulations and are often required by government programs through accreditation, licensure, or certification requirements as a way of ensuring a minimal capacity for quality (as described in Chapter 3).
Process measures quantify the delivery of recommended procedures or services that are correlated with desired outcomes in a specific population group. Process measures can be useful for assessing individual practitioners, as well as for comparing institutional providers, communities, or larger geographic areas (Agency for Healthcare Research and Quality, 2002b). For example, the quality of adult diabetes care is often judged by examining the percent of patients with diabetes who receive recommended services including hemoglobin A1c tests, low-density lipoprotein cholesterol tests, lipid profiles, and retinal exams (Texas Medical Foundation, 2002). The data needed to develop process measures are typically obtained from medical records, claims data, and patient surveys.
Outcome measures are used to capture the effect of an intervention on health status, control of a chronic condition, specific clinical findings, or patients’ perceptions of care (Nerenz and Neil, 2001). Two core intermediate outcome measures in adult diabetes care, for example, are the percentage of patients whose most recent hemoglobin A1c level is greater than 9.5 percent and the percentage of patients whose most recent low-density lipoprotein cholesterol level is less than 130 mg/dL. Outcome analysis may require sophisticated statistical techniques, including risk adjustment, to discern the impact of an intervention independent of confounding factors such as comorbidities, socioeconomic characteristics, and local patterns of care (Agency for Healthcare Research and Quality, 2002b; Rubin et al., 2001).
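Measures of this kind reduce to simple proportions over a patient sample. The sketch below (synthetic values and hypothetical field names, not drawn from any actual measure-set specification) shows the arithmetic for the two intermediate outcome measures described above:

```python
# Illustrative only: computing two intermediate outcome measures for adult
# diabetes care as proportions over a sample of patients. Field names and
# values are hypothetical.

patients = [
    # most recent HbA1c (%) and most recent LDL cholesterol (mg/dL)
    {"hba1c": 7.2, "ldl": 110},
    {"hba1c": 10.1, "ldl": 145},
    {"hba1c": 8.9, "ldl": 128},
    {"hba1c": 9.8, "ldl": 95},
]

def rate(sample, predicate):
    """Percentage of patients in the sample for whom the predicate holds."""
    return 100.0 * sum(predicate(p) for p in sample) / len(sample)

# Poor glycemic control: most recent HbA1c greater than 9.5 percent.
poor_control = rate(patients, lambda p: p["hba1c"] > 9.5)

# Lipid control: most recent LDL cholesterol below 130 mg/dL.
ldl_controlled = rate(patients, lambda p: p["ldl"] < 130)

print(f"HbA1c > 9.5%: {poor_control:.0f}% of sample")
print(f"LDL < 130 mg/dL: {ldl_controlled:.0f}% of sample")
```

The arithmetic itself is trivial; as the surrounding text notes, the hard part of outcome measurement lies upstream (drawing a representative sample) and downstream (risk-adjusting the resulting rates before comparing providers).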
Until the QuIC was established in 1998, there was little coordination of government’s use of performance measures for quality improvement. The QuIC has initiated projects to address tasks that are key to the use of quality performance measures (Foster, 2002). These include efforts to inventory quality measures; document their uses, strengths, and weaknesses; explore how best to employ risk adjustment methods; encourage all government programs to use the DQIP measures; and identify the most effective ways to communicate with patients about quality, such as establishing a common vocabulary for federal health care agencies (Quality Interagency Coordination, 2002).
COMMONLY USED PERFORMANCE MEASURE SETS
This section describes some of the leading performance measurement sets used by one or more government health care programs (see Table 4-1).
Consumer Assessment of Health Plans
CAHPS is a survey instrument and reporting system developed, with funding and direction from the Agency for Healthcare Research and Quality (AHRQ), to help consumers and purchasers choose among health care plans. CAHPS employs primarily outcome measures—specifically consumers’ perceptions of their health plan and personal providers—and is used by some state Medicaid agencies, the Medicare program, DOD TRICARE, and public and private employers. NCQA requires managed care plans to field CAHPS and to develop quality improvement projects that address problems identified through CAHPS findings. JCAHO similarly encourages, but does not require, some accredited health care organizations, such as health networks, to employ CAHPS.
CAHPS was originally conceived as a tool for managed care but has more recently been adapted for fee-for-service (FFS) purposes. Publicly available algorithms support the development of composite measures of CAHPS results and their reporting in standardized formats. Comparative analyses of CAHPS outcomes are greatly enhanced by the National CAHPS Benchmarking Database.
The CAHPS initiative is still a work in progress. It remains uncertain whether satisfaction ratings can meaningfully inform quality improvement (Sofaer, 2002). AHRQ has launched a second generation of CAHPS research to evaluate the system’s utility for quality improvement and to assess its effectiveness in applied settings. The principal objectives of CAHPS II are to develop innovative reporting formats and to create survey instruments for nursing homes and group practices that can be used by persons with mobility impairments (Agency for Healthcare Research and Quality, 2001).
Diabetes Quality Improvement Project
DQIP is an example of a disease-specific performance measurement set. The project was funded by CMS to develop a national consensus with regard to a set of standardized process and outcome measures for performance reporting related to the care of adults with diabetes (see Appendix B) (Texas Medical Foundation, 2002). Although the DQIP measure set has been evolving,2 it is being used by all the major government programs, has been incorporated in the Health Plan Employer Data and Information Set (HEDIS) (see below), and is required in CMS managed care contracts (although not in Medicare FFS). DQIP includes abstracting and quality improvement tools as well as a technical assistance hotline.

TABLE 4-1 Selected Performance Measure Sets Used by One or More Government Health Programs
End Stage Renal Disease Clinical Performance Measures
This set of process and outcome measures is used by CMS to monitor and improve the care provided by dialysis facilities. The measures include indicators of the adequacy of hemodialysis and peritoneal dialysis, vascular access, and anemia management. The public can obtain from the Medicare Website patient survival outcomes, as well as other information, for any dialysis facility receiving Medicare reimbursement. The ESRD clinical performance measures (CPMs) have been credited with significant improvements in the quality of care provided by renal dialysis facilities (Jencks, 2001).
Health Plan Employer Data and Information Set
HEDIS was introduced by NCQA in 1991, and is updated annually to help purchasers and consumers compare the quality of commercial, Medicaid, and Medicare managed care plans. Its measures are used in many government health care programs, particularly in managed care settings. HEDIS incorporates other established standard measure sets, such as CAHPS, DQIP, and the Health Outcomes Survey (HOS). It encompasses the care of common health conditions, including asthma, cancer, depression, diabetes, and heart disease; patients’ perceptions of care received; and structural health plan attributes.
Minimum Data Set
The MDS is an 8-page set of core assessment items introduced by CMS in 1990 in all Medicare- and Medicaid-certified nursing homes, principally for the clinical assessment of nursing home residents. CMS is currently conducting a pilot project in six states that involves regular public disclosure of nine risk-adjusted quality measures, derived from the MDS, with the aim of promoting quality improvement in nursing homes. There are six chronic care measures (physical restraints, pressure sores, weight loss, infections, residents with pain, and declines in activities of daily living) and three measures of post-acute care quality (managing delirium, residents with pain, and improvement in walking) (Centers for Medicare and Medicaid Services, 2001c).
MDS and the Outcome Assessment and Information Set (OASIS) (see below) have been criticized for being overly burdensome to providers and for failing to reflect the care patients experience as they move from one health care setting to another, such as transitions among home health care, nursing home, and hospital settings (Institute of Medicine, 2001b).3 The Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (Public Law 106-554) mandated that the Secretary of DHHS report to Congress on the development of standard assessment instruments across a wide array of health care settings, including home care and nursing home care.4 CMS has recently taken steps to shorten the MDS for prospective payment system assessments, effective July 2002 (Centers for Medicare and Medicaid Services, 2002d).
National Priorities Project
This is a CMS quality improvement organization (QIO) project to improve statewide Medicare FFS performance. It uses 22 process measures for three inpatient clinical topics (acute myocardial infarction, heart failure, and stroke) and three outpatient clinical topics (early detection of breast cancer, diabetes management, and pneumonia and influenza immunization).
Outcome Assessment and Information Set
OASIS is a clinical dataset that CMS has used for assessing home care since 1999. CMS requires home care agencies to submit OASIS data for most adult Medicare and Medicaid patients. There have been widespread complaints about the time and expense required to complete the OASIS reporting form, and numerous organizations have called for streamlining of the dataset because of this administrative burden. Critics have maintained that the OASIS reporting requirements are duplicative, that the paperwork involved consumes more nursing time than that devoted to patient care, that associated administrative costs are inadequately reimbursed, and even that OASIS is partly to blame for the critical shortage of qualified home care nurses (American Hospital Association and American Home Care Association, 2001). However, there is evidence that OASIS has been a useful tool in home health quality improvement projects, resulting in measurably better outcomes for patients (Shaughnessy et al., 2002).

In June 2002, the DHHS Secretary’s Advisory Committee on Regulatory Reform recommended that OASIS be subject to an independent cost–benefit evaluation. The committee also recommended that the reporting form be modernized to, for example, better reflect home health agency operations and current medical practice; to eliminate data elements that are duplicative or not used for payment, quality management, or survey purposes; and to create the option of using one form for all situations of care or changes in status (DHHS Secretary’s Advisory Committee on Regulatory Reform, 2002).

In response to a request from the Secretary, CMS completed an in-depth review of all OASIS elements and has proposed reducing the burden associated with OASIS by approximately 25 percent. CMS estimates that the proposed changes could be implemented by the end of December 2002.
CMS has also convened a technical expert panel and hosted a town hall meeting to assess any additional opportunities for streamlining the OASIS data collection tool (Centers for Medicare and Medicaid Services, 2002e).
OVERVIEW OF CURRENT PERFORMANCE MEASUREMENT ACTIVITIES
Centers for Medicare and Medicaid Services
CMS manages the lion’s share of the federal responsibilities for three of the government health care programs addressed in this report—Medicare, Medicaid, and SCHIP. It thereby influences the quality of health care services provided to more than one in four U.S. residents (an estimated 83 million people).
Since creating Medicare in 1965, Congress has mandated a series of programs to ensure the quality of care provided to Medicare beneficiaries (Institute of Medicine, 1990). Medicare’s approach to improving quality—like that in the private sector—has evolved differently depending on the clinical context and delivery setting (MedPAC, 1999). By statute, Medicare’s quality improvement resources must be allocated to its FFS and Medicare+Choice (M+C) programs in proportion to beneficiary participation in the two delivery systems (Health Care Financing Administration, 1999).5 Nevertheless, CMS relies much more heavily on regulatory requirements to promote quality in Medicare managed care and in long-term care facilities and programs than in Medicare FFS (MedPAC, 2002).6 In addition, although CMS employs performance measures to stimulate quality improvement across a wide range of clinical settings and delivery systems, it uses those measures in distinctly different ways in managed care and FFS (MedPAC, 2002). For example:
While M+C plans are held accountable for their performance, FFS contractors are not. As a condition of Medicare participation, M+C plans must implement a quality improvement process and also show evidence of improvement using three sets of measures: the Medicare versions of HEDIS, CAHPS, and HOS (MedPAC, 2002).7 In Medicare FFS, participation in quality improvement projects is voluntary (although hospitals and other health care institutions must respond to QIO data requests).
CMS publicly discloses the quality improvement efforts of individual M+C plans by, for example, annually reporting each plan’s HEDIS measures on the CMS Website. Only limited information about relatively small subsets of FFS providers (i.e., dialysis facilities and nursing homes) is publicly reported.
Quality Improvement Organizations
QIOs are Medicare’s primary tool for enhancing quality (see Box 4-1). Today’s QIOs reflect more than 30 years’ evolution in CMS efforts to address quality in the Medicare program. As discussed in Chapter 3, these state- or regional-level organizations initially engaged in retrospective review of paper medical records to identify any incidents of poor-quality hospital care and discipline wrongdoers (Institute of Medicine, 1990). Over time, the review organizations became increasingly responsible for protecting the fiscal integrity of the Medicare program and thus were charged with an array of additional responsibilities, such as lowering admission rates, reducing inpatient lengths of stay, providing prior authorizations for some elective procedures, and, just recently, preventing payment errors.

5. About 87 percent of Medicare beneficiaries are covered by Medicare fee-for-service (FFS); 14 percent are enrolled in Medicare+Choice (M+C) health maintenance organizations (Stuber et al., 2001).

6. This is due in part to the Balanced Budget Act (BBA) of 1997 (P.L. 105-33), which instructed CMS to regulate quality improvement in M+C plans.

7. See Chapter 3 for a discussion of Medicare conditions of participation.

BOX 4-1 Quality Improvement Organizations

There are currently 37 QIOs serving the 50 states, the District of Columbia, and the U.S. territories. Medicare’s QIO program has three basic objectives.

CMS finances QIO projects through competitively awarded contracts that can be renewed every 3 years or canceled and put up for competitive bidding. QIOs are private organizations that vary in their capabilities and in the extent to which they do non-Medicare work. They typically employ a multidisciplinary team that includes physicians, nurses, health care quality professionals, epidemiologists, statisticians, and communications experts.

Every QIO contracts with Medicare, but many QIOs also work with state Medicaid programs (about two-thirds conduct quality reviews for state Medicaid agencies), as well as with private employers, skilled nursing facilities, and ESRD facilities.

The Medicare–QIO 3-year contracts detail a complex and extensive set of tasks referred to as the Scope of Work (SOW). During the sixth SOW, covering federal fiscal years 2000–2002, QIOs received about $240 million per year from CMS, approximately one-tenth of 1 percent of annual Medicare spending. The seventh SOW was issued while this report was being prepared.

SOURCES: Agency for Healthcare Research and Quality, 2002a; Center for Medicare Education, 2001; Centers for Medicare and Medicaid Services, 2002a; Health Care Financing Administration, 2000; MedPAC, 2002.
In the 1990s, in response to congressional direction, CMS moved the QIOs towards a more proactive, population- and evidence-based approach to measuring and sometimes disclosing provider and health plan performance. This approach is a clear departure from the past as it deemphasizes punitive actions and instead emphasizes community outreach and collaboration with health plans, providers, and the long-term care industry at the local and regional levels (Center for Medicare Education, 2001).
This shift became evident in the fifth SOW (1997–1999) and sixth SOW (2000–2002) and is further emphasized in the seventh SOW (2003–2005) (Centers for Medicare and Medicaid Services, 2001e). The heart of the sixth SOW was the National Priorities Project to improve statewide Medicare FFS performance. As noted earlier, this effort involves the use of the same 22 clinical performance measures nationwide for three inpatient clinical topics (acute myocardial infarction [AMI], heart failure, and stroke) and three outpatient clinical topics (early detection of breast cancer, diabetes management, and pneumonia and influenza immunization). Each clinical topic is supported by a Medicare-designated QIO that provides technical support on that topic to QIOs nationwide (see Table 4-2).
The QIOs use the 22 performance measures to determine their state’s or region’s baseline performance for each clinical topic, work with local providers to make improvements, and report state-level results to CMS. They typically offer local providers clinical documentation supporting the performance indicators, feedback data on actual performance, and technical advice on alternatives for improving systems, and also convene meetings to promote collaboration among local stakeholders (Jencks, 2002). Medicare does not require individual clinicians to work with the QIOs on any specific improvement project (MedPAC, 2002). Thus, QIOs must find ways to persuade local providers to collaborate with them if they are to achieve state-level improvements in the performance measures.
The sixth SOW also required every QIO to offer technical assistance to all the M+C plans in its state (Health Care Financing Administration, 1999).8 Much of this assistance is focused on helping the plans to interpret their HEDIS, CAHPS, and HOS results, to identify opportunities for improving care, and to develop and evaluate measurable interventions.9 QIOs are also required to work with ESRD facilities, home health agencies, and long-term care facilities (Centers for Medicare and Medicaid Services, 2002a).

TABLE 4-2 National Medicare QIO Projects in the 6th SOW

Clinical topics are listed with their lead QIO; performance measures are percentages of beneficiaries receiving the indicated service unless otherwise indicated.

Acute Myocardial Infarction (AMI) (Qualidigm, <CTMedicare.org/ami_caspro>)
- Early administration of aspirin after arrival at hospital
- Early administration of beta blocker after arrival at hospital
- Time to initiation of reperfusion therapy
- Aspirin at discharge
- Beta blocker at discharge
- Angiotensin-converting enzyme (ACE) inhibitor at discharge for systolic dysfunction
- Smoking cessation counseling during hospitalization
Data sources: Hospital medical records for AMI patients

Breast Cancer Early Detection (Virginia Health Quality Center, <vhqc.org>)
Settings: Doctors’ offices, outpatient settings
Data sources: Medicare claims for all female beneficiaries

Diabetes Management
Settings: Doctors’ offices, outpatient settings
- Biennial retinal exam by an eye professional
- Annual hemoglobin A1c (HbA1c) testing
- Biennial lipid profile
Data sources: Medicare claims for all diabetic beneficiaries

Heart Failure (Colorado Foundation for Medical Care, <nationalheartfailure.org>)
- Appropriate use/nonuse of ACE inhibitors at discharge (excluding discharges on angiotensin-II receptor blocker)
Data sources: Hospital medical records for heart failure patients

Pneumonia and Influenza Immunization
Settings: Doctors’ offices, outpatient settings
- State influenza vaccination rate
- State pneumococcal vaccination rate
- Influenza vaccination or screening
- Pneumococcal vaccination or screening
- Blood culture before antibiotics are administered
- Administration of antibiotics consistent with current recommendations
- Initial antibiotic dose within 8 hours of hospital arrival
Data sources: Centers for Disease Control and Prevention’s Behavioral Risk Factor data; hospital medical records for pneumonia patients

Stroke (Iowa Foundation for Medical Care, <ifmc.org>)
- Discharged on antithrombotic (acute stroke or transient ischemic attack [TIA])
- Discharged on warfarin (atrial fibrillation)
- Avoidance of sublingual nifedipine (acute stroke)
Data sources: Hospital medical records for stroke, TIA, and chronic atrial fibrillation patients

SOURCE: Adapted from Centers for Medicare and Medicaid Services, 2002b.
As described in the previous chapter, Medicare and most other government programs rely on JCAHO accreditation to help ensure a minimal level of health care quality. Performance measurement has become an integral component of JCAHO accreditation. JCAHO’s ORYX initiative requires accredited hospitals, long-term care facilities, home care providers, and behavioral care organizations to routinely submit patient-level data for performance measurement and to regularly demonstrate how they use performance measures to monitor and improve the quality of their services (see Box 4-2).
End Stage Renal Disease
The legislation that created the ESRD program in 1972 (Section 2991, Public Law 92-603) established ESRD Network Coordinating Councils as the official liaisons between the nation’s ESRD providers and the federal government (Forum of End Stage Renal Disease Networks, 2002). The 19 ESRD networks are CMS’s principal instruments for encouraging quality improvements in ESRD services. The networks’ scope of work is determined by competitively awarded contracts with CMS that delineate specific quality improvement activities, as well as numerous other tasks. The quality improvement efforts are based on the premise that ESRD networks “can do more to improve the quality and cost effectiveness of care by bringing typical care into line with the best practices rather than by inspecting individual cases to identify erred treatment” (Centers for Medicare and Medicaid Services, 2001a, p. 1).
The routine collection and analysis of clinical performance measures constitute a principal initiative of the program. The ESRD clinical performance measures are calculated from annual national random samples of adult dialysis patients. Each year, ESRD facilities with one or more patients in the sample must submit an array of patient-specific data to their respective ESRD network. According to their trade association, the networks maintain the world’s largest comprehensive disease-specific registry, which includes Medicare beneficiaries, non-Medicare patients, Medicare secondary patients, and Veterans Health Administration (VHA) patients (Forum of End Stage Renal Disease Networks, 2002).
CMS maintains a Dialysis Facility Compare Website where members of the public can view selected clinical performance measures, such as adequacy of dialysis and patient survival, for the approved Medicare ESRD facilities in their own geographic area (Medicare, 2002). There has been an apparent steady improvement in a number of the measures (Centers for Medicare and Medicaid Services, 2001e; Jencks, 2001). For example, during the period 1993–1999, the proportion of adult dialysis patients receiving inadequate dialysis treatment declined from 57 to 20 percent. Over the same period, the proportion of adult dialysis patients with anemia dropped from 57 to 32 percent.

BOX 4-2 JCAHO Accreditation and the ORYX Initiative

Although JCAHO is a private accreditation group, it has a significant impact on almost all health care services provided by government health care programs. JCAHO has statutory authority under Medicare and Medicaid to certify hospitals, ambulatory surgical centers, clinical laboratories, home health agencies, and hospices as being in compliance with the government’s minimum standards of participation. JCAHO accreditation is also an important component of the VHA, TRICARE, and IHS health care programs.

ORYX is an evolving initiative, first introduced in February 1997, to support and foster quality improvement in JCAHO-accredited organizations. ORYX integrates outcome and other performance measurement data into the survey and accreditation process for hospitals, long-term care facilities, home care, and behavioral health organizations.

Under the current ORYX program, JCAHO has designated ORYX-certified performance measurement vendors for accredited hospitals, long-term care facilities, home care, and behavioral health organizations. JCAHO requires its accredited organizations to contract with one of the certified vendors. Accredited health care organizations select their performance measures and submit the necessary patient-level data to the vendors, who in turn aggregate and report the performance data to JCAHO. JCAHO staff analyze the data, using control and comparison charts, to identify performance trends and patterns. JCAHO surveyors use these analyses to focus their on-site surveys. The accredited applicants must demonstrate that they use the measures to improve their performance.

Hospitals must select performance measures from two of four core measurement areas: acute myocardial infarction, heart failure, community-acquired pneumonia, and pregnancy and related conditions. Since July 1, 2002, hospitals have been collecting performance data for all patient discharges, and they will begin transmitting data to JCAHO via a certified vendor no later than January 31, 2003. Subsequently, quarterly transmissions must be made no later than 4 months after the close of a calendar quarter. Aggregate data from all JCAHO-accredited hospitals will form the comparison group for JCAHO’s assessment of how each accredited organization uses the performance measurement data for quality improvement.

JCAHO has not yet identified core measures for non-hospital organizations. Until this is done, non-hospital entities may choose their own measures from those offered by certified performance measurement vendors.

SOURCE: Joint Commission on Accreditation of Healthcare Organizations, 2002.
Home Health Care
Since 1999, CMS has used OASIS for its oversight of home health agencies participating in the Medicare and Medicaid programs. All Medicare-certified home care agencies must collect, computerize, and electronically transmit OASIS data at regular intervals to a CMS-approved central source for all their adult Medicare or Medicaid patients receiving personal care or health services (42 Code of Federal Regulations Part 484). CMS’s seventh SOW for QIOs directs them to help home health agencies develop quality improvement projects using OASIS-based performance measures (Centers for Medicare and Medicaid Services, 2002c). Eventually, CMS plans to generate outcome reports for all certified home care agencies.
Skilled Nursing Care
All certified long-term care facilities, such as nursing homes and skilled nursing facilities, must transmit to their state an MDS drawn from residents’ medical records; in turn, the states submit the data to CMS (Centers for Medicare and Medicaid Services, 2001b). Members of the public can now consult the CMS website to view several nursing home quality measures, such as the percent of residents with pressure sores, the percent with urinary incontinence, and summary results from state nursing home inspections for facilities in their own geographic area and throughout the nation (Centers for Medicare and Medicaid Services, 2001c).
In April 2002, CMS initiated a six-state pilot to identify, collect, and publish nursing home quality information in Colorado, Florida, Maryland, Ohio, Rhode Island, and Washington. The project, which draws from CMS’s collaboration with the NQF to identify nine risk-adjusted quality measures for use by beneficiaries (Centers for Medicare and Medicaid Services, 2002f), uses measures that target the quality of both chronic care and post-acute care.
Medicaid
Since the Medicaid program was created by Congress in 1965, states have had great flexibility in how they manage their Medicaid programs. The same is generally true of how states conduct Medicaid quality assurance and improvement activities. Government rules grant states wide latitude in establishing their own goals for Medicaid quality and in choosing the methods they use to achieve these goals. For example, CMS requires states to collect Medicaid encounter data, but the states are free to determine many of the specific features of the data, including the data elements themselves, reporting frequency, and level of aggregation (Matthews, 2000). As a consequence, state-to-state comparisons of Medicaid quality are largely infeasible.
Performance measures have become a popular state tool for assessing and promoting quality improvement in Medicaid managed care, but there are few useful quality performance measures for Medicaid FFS health care. Most states use a combination of publicly available measures and state-developed measures for Medicaid managed care (Kaye, 2001). In 2000, Medicaid HEDIS and Medicaid CAHPS were the most common national measure sets used by the states. However, states usually modify the specifications to tailor data collection to their own specific program needs (French and Miele, 2001). Many states have developed consumer report cards drawing from HEDIS, CAHPS, and other performance measures (Verdier and Dodge, 2002). Many states have also implemented provider incentive programs that employ performance indicators (Dyer et al., 2002).
Despite the variation in states’ HEDIS data specifications, the NCQA and the American Public Human Services Association have established a national database of Medicaid HEDIS statistics. In 2001, the database incorporated 168 individual Medicaid managed care plan HEDIS submissions (for 29 plans the data were unaudited). NCQA reports that although there were across-the-board improvements in commercial plans’ HEDIS performance from 1998 to 2000, Medicaid performance was mixed (French and Miele, 2001).
There may be greater uniformity in performance data for Medicaid managed care once CMS implements related rules under the Balanced Budget Act of 1997, which directed CMS to develop specific protocols to guide the states’ conduct of external quality review of Medicaid managed care plans. In their current form, the protocols assume that states will continue to have flexibility in developing performance measures because they will be required to conduct their performance reviews only in a manner consistent with, but not necessarily identical to, the protocols (Centers for Medicare and Medicaid Services, 2001d). States will be free to specify their performance measures, the specifications to be followed in calculating the measures, and the method and timing that health plans must use for reporting.
State Children’s Health Insurance Program
Congress established the SCHIP program in 1997 for low-income uninsured children. As of 2002, most states had operated their programs for only 3 or 4 years. As a consequence, both the federal and state focus for SCHIP has been on enrolling eligible children and making the program operational. More recently, attention has turned to assessing the program’s efforts (Henneberry, 2001).
SCHIP regulations require states to establish performance goals and performance measures, including a written assurance that the state will collect and maintain data and furnish reports to the Health and Human Services Secretary. Managed care is the dominant delivery system used by SCHIP programs, and the regulations grant CMS the authority to mandate standardized performance measures for managed care plans serving SCHIP enrollees (but not for FFS providers). No specific performance measures or goals are required.
Many states require managed care plans that serve SCHIP enrollees to report HEDIS measures (Henneberry, 2001). However, surveys of SCHIP programs indicate that the programs often modify HEDIS to tailor data collection to their specific program needs, thus making state-to-state comparisons problematic (French and Miele, 2001). Some states are also adapting HEDIS for FFS and primary care case management. Other states have developed their own performance measures. Wisconsin, for example, is developing a new performance measurement system, the “Medicaid Encounter Data Driven Improvement Core-Measure Set,” drawing directly from monthly HMO encounter data (Henneberry, 2001).
CMS and AHRQ are currently collaborating on a Performance Measurement Partnership Project with state Medicaid and SCHIP programs to determine the feasibility of implementing a core set of standardized performance measures, such as HEDIS or CAHPS, for managed care in Medicaid and SCHIP. One aim of the project is to motivate benchmarking and state creativity in using performance measures (Block, 2002).
DOD TRICARE
DOD TRICARE is in the midst of an ambitious effort to reengineer the military health system (MHS) (Milbank Memorial Fund, 2001). In December 2001, TRICARE Management Activity (TMA), the DOD-level administrator of the MHS, released the Population Health Improvement Plan (PHI) and Guide, a detailed blueprint for making “population health improvement a reality in the DOD” (DOD TRICARE Management Activity, 2001, p. i). In earlier research that contributed to the guide’s development, TMA had concluded that its system was “replete with metrics covering a wide range of uncoordinated indicators of varying usefulness” and “disparate performance measurement systems” (TRICARE, 1999b, p. 26). The PHI Guide directly addresses this concern and calls for an “enterprise-wide core set of standardized performance measures” to drive improvements in clinical services (DOD TRICARE Management Activity, 2001, p. 67). One of the first steps will be to integrate measure sets that are already collected for mandatory quality assurance programs such as HEDIS and ORYX.
Today’s TRICARE Website reports numerous performance measurement activities—analyses of HEDIS data used to focus quality improvement efforts related to diabetes, asthma, breast cancer screening, and cervical cancer screening; “report cards” drawn from an array of beneficiary surveys; digests of performance measures called TRICARE Operational Performance Statements (TOPS); and others.
One survey, the Health Care Survey of DOD Beneficiaries, is an adapted CAHPS instrument used by TRICARE to monitor consumer satisfaction with and perceptions of the quality of MHS hospitals, clinics, and clinical staff (including how the MHS compares with the care received by the privately insured population) (TRICARE, 1999a). The survey responses are aggregated into composite performance measures using CAHPS algorithms. The resulting measures are benchmarked against the National CAHPS Benchmarking Database, and the findings are released in Web-based interactive report cards.
TOPS is a quarterly digest that disseminates routine analyses of the MHS. Included are performance measures such as beneficiary grievance rates, preventable admission rates for active-duty personnel (e.g., for angina or chronic obstructive pulmonary disease), preventable admission rates for non–active duty managed care enrollees (e.g., for asthma or congestive heart failure), access to care, and patient satisfaction.
Veterans Health Administration
VHA’s integrated health information system, including its framework for using performance measures to improve quality, is considered one of the best in the nation. VHA uses performance measures along a number of dimensions—patient satisfaction, functional outcomes, personal health practices, and clinical measures—to drive quality improvement in a wide range of clinical disciplines and across ambulatory, hospital, and long-term care settings (Jones and VHA, 2002; Nerenz and Neil, 2001).
One of the most highly regarded VHA initiatives employing performance measures is the National Surgical Quality Improvement Program (NSQIP). NSQIP was implemented to develop comparative risk-adjusted information on surgical outcomes in the VHA’s many medical centers (Daley, 1998). The initiative’s key components are periodic performance measurement and feedback, along with comparative, site-specific, and outcome-based annual reports; self-assessment tools; structured site visits; and dissemination of best practices. From 1991, when NSQIP data were first collected, through 2000, the impact on the outcomes of major surgeries at VHA hospitals was dramatic: 30-day postoperative mortality decreased by 27 percent and 30-day morbidity by 45 percent (Shukri et al., 2002).
Many other performance measures are in use, including, for example, several evidence-based quality indices developed by VHA researchers to improve preventive, chronic, and palliative services and commercially available measurement sets such as HEDIS and CAHPS. The Chronic Disease Care Index targets the five most common conditions treated at VHA hospitals: ischemic heart disease, hypertension, chronic obstructive pulmonary disease, diabetes mellitus, and obesity. HEDIS measures have been used to assess diabetes care, heart attack treatment, ambulatory follow-up after inpatient mental health stays, and cervical cancer screening (Jones et al., 2000; Mencke et al., 2000).
Indian Health Service
IHS has developed a performance evaluation system to meet the performance measurement requirements of JCAHO’s ORYX initiative and to comply with the Government Performance and Results Act (Indian Health Service, 2000). The majority of IHS facilities are JCAHO-accredited and thus are required to regularly submit and use performance measures for quality improvement. The performance evaluation system uses quality indicators that have been specifically tailored to Indian health care populations and focus on 12 priority health problems: diabetes, obesity, cancer, heart disease, alcohol and substance abuse, family abuse and violence, injuries, dental disease, poor living environment, mental health, tobacco use, and maternal and child health (Indian Health Service, 2002).
OPTIMIZING THE GOVERNMENT’S USE OF PERFORMANCE MEASURES
In its recent comprehensive assessment of how to advance the quality of the MHS, DOD/TMA concluded that a conceptual framework is key for “improving the health of populations” and for guiding the “specific actions and tools that will help to build healthy communities” (DOD TRICARE Management Activity, 2001, p. v). The committee agrees and believes this to be true for all government health care performance measurement efforts. The committee believes further that a conceptual framework for performance measurement should build on efforts already under way.
To achieve the continuity required to formulate a conceptual framework for performance measurement, the committee encouraged adoption of the taxonomy developed by the Institute of Medicine’s earlier Committee on the Quality of Health Care in America. That committee identified six dimensions or attributes of quality that should shape government’s use of performance measures (see Box 4-3).
SOURCE: Institute of Medicine, 2001a.
These six attributes have already been adopted by DHHS as a conceptual framework for the National Health Care Quality Report. They have also been endorsed in whole or in part by various private-sector groups, including the Leapfrog Group and NQF. In addition, another IOM committee has identified a list of 20 priority areas for health system improvement, and these represent excellent candidates for the development of standardized performance measures (Institute of Medicine, 2002). Most of the government programs have identified leading chronic conditions and health concerns for their populations, and there is much overlap in all of these lists.
NEED TO STANDARDIZE QUALITY PERFORMANCE MEASURES
Government health care programs reflect a growing recognition that measuring quality and using quality performance measures to improve health care are central to the federal government’s roles as regulator, purchaser, and provider of health care for almost half the U.S. population. Yet too many resources are spent on health care measures that are either duplicative or ineffective, and little comparative quality information is made available in the public domain for use by beneficiaries, health professionals, or other stakeholders. Furthermore, potential users of the available measures are often hindered by the lack of reporting standards, conflicting methodologies, and inconsistent terminology (Eddy, 1998; Rhew et al., 2001). Standardizing measures can lessen this confusion. In addition to addressing these problems, the committee believes standardized performance measures could drive quality improvement in numerous other ways:
By drawing attention to best practices and encouraging providers to adopt them.
By facilitating comparisons of accountable entities, such as hospitals, health plans, long-term care facilities, and, potentially, physicians’ practices.
By enabling the development of national benchmarks and helping to identify regional differences.
By supporting efforts to sensibly reward quality through either payment or other means.
By expanding the research community’s capacity to identify the factors that drive or diminish health care quality.
By helping to make the link between accountable entities and patient outcomes.
By providing the clinical data needed to formulate workable risk adjustment techniques.
By providing the necessary data to identify providers who demonstrate consistently substandard care and to develop strategies for improvement or for narrowing their scope of practice.
Performance measurement is not a perfect solution; its problems and pitfalls must be addressed and guarded against. Any performance measurement approach will focus on only a limited number of areas, and there is a risk that too little attention will be paid to clinical areas that are not the focus of measurement activity. There are numerous methodologic challenges, such as capturing rare events and adjusting for differences in risk or severity of illness (Eddy, 1998). In the case of outcome measures, it must be recognized that almost all outcomes are probabilistic (i.e., doing the right things does not guarantee good outcomes, and good outcomes sometimes occur even when the right things were not done) and that many factors outside the health system’s control also determine outcomes (Eddy, 1998). There must also be ways to identify and deal with missing or incorrect data (McGlynn and Adams, 2001).
While not a perfect solution, the committee believes that the potential benefits of performance measurement and reporting are sizable and that the federal government should act expeditiously to promulgate a standardized measurement set and to implement this set within each of the government programs. At the same time, efforts must be made to address operational and methodologic challenges and to mitigate any unintended adverse consequences.
Implications for Current Activities
Adoption of a central focus on performance measurement and reporting will have significant implications for the way in which the government conducts its quality enhancement activities. In today’s environment of scarce resources and rising health care costs, it will be imperative for each government health care program to assess carefully how best to realize its objectives. Standardized quality measurement and reporting must not be pursued as an additional government requirement, but rather as a replacement for current quality measurement activities. Moreover, whenever possible, providers should not be burdened with reporting the same patient-specific performance data more than once to the same government agency.
There should be a designated government entity responsible for coordinating the government’s performance measurement activities. QuIC has made a strong start in the right direction by convening representatives from the six major government health care programs and initiating various collaborative projects based on voluntary participation, but it lacks a clear mandate. Congress should grant the statutory authority and provide adequate funding to either QuIC or another existing entity to coordinate and standardize the government’s performance measurement activities. This entity should establish strong working relationships with various private-sector groups, including NQF, NCQA, JCAHO, the Leapfrog Group, and FACCT, to optimize future public–private collaboration and provide structured mechanisms for consumer input.
It should be noted that the committee considered and rejected the option of establishing a new oversight authority. It concluded that the existing infrastructure, if applied more rigorously and with adequate resources, has the potential to accomplish the objectives laid out in this report. The costs and organizational challenges of forming a new agency were viewed as substantial, creating the potential for delay in implementation of the substantive activities.
The QuIC should move aggressively to establish an initial set of standardized measures. As noted previously, a wealth of measures already exists. In very few instances will it be necessary to develop measures from scratch. Some measure sets, for example, DQIP, are already being used by several or most of the government programs. By starting with this “low hanging fruit,” it should be possible to identify measure sets for 5 conditions almost immediately, thus allowing the pilot testing process to begin in fiscal year 2003. The remaining 10 sets can then be designated in fiscal year 2004. By moving expeditiously to designate all 15 sets of measures within the first 18 months to 2 years, the federal government will provide important information to providers regarding the necessary capabilities and specifications for their information systems.
CMS has historically allocated most of Medicare’s quality improvement budget to its QIO contracts. The committee strongly recommends the use of standardized measures derived from computerized data and public reporting of comparative quality information. It will be important for CMS to reexamine how best to use the QIOs to enhance quality within this context. For example, should QIOs play a role in the release of public-domain comparative quality reports? Would substantial quality improvements in Medicare be achieved more readily with fewer QIO-like entities operating on a national or larger regional scale?
States will also need to relinquish some flexibility in promulgating state-specific performance measures for Medicaid and SCHIP programs. State representatives should be active participants in the QuIC, thus having input into the process of establishing the standardized measure sets. But individual states would be required to apply within their Medicaid and SCHIP programs the standardized measures applicable to the populations served. States would still retain a good deal of flexibility in how they use their regulatory and purchasing powers to act on the performance information provided through standardized reporting mechanisms.
In summary, the six major government health care programs should commit to the use of common sets of standardized performance measures. The current administrative burden on the providers that constitute the foundation of government health care services is unacceptable. The committee believes that standardized metrics and reporting formats would not only aid in alleviating this burden, but also help ensure meaningful gains in the quality of health care.
Finally, effective performance measurement demands real-time access to sufficient clinical detail and accurate data (Schneider et al., 1999). By the time retrospective performance measures reach decision makers, it is too late for them to be useful. The current health information environment is far too fragmented, technologically primitive, and overly dependent on paper medical records. The nation’s need for a functional health care information system is examined in the next chapter.
REFERENCES
Agency for Healthcare Research and Quality. 2001. “AHRQ Seeks Applications for Second Phase of CAHPS®. Media Advisory.” Online. Available at http://www.ahrq.gov/news/press/pr2001/cahps2pr.htm [accessed July 10, 2002].
———. 2002a. “Fact Sheet: Medicare QIOs: Improving Patient Safety and Quality of Care for Seniors; A National Network of Quality Improvement Experts: Major Medicare QIO Efforts.” Online. Available at http://www.ahqa.org/pub/media/159_766_2687.cfm [accessed May 13, 2002].
———. 2002b. “Child Health Tool Box: Measuring Performance in Child Health Programs. Understanding Performance Measurement.” Online. Available at http://www.ahrq.gov/chtoolbx/understn.htm [accessed July 10, 2002].
American Hospital Association, and American Home Care Association. 2001. Letter to T. Scully, CMS Administrator (Subject: Oasis).
American Medical Association, Joint Commission on Accreditation of Healthcare Organizations and National Committee for Quality Assurance. 2001. “Principles for Performance Measurement in Health Care. A Consensus Statement.” Online. Available at http://www.ncqa.org/communications/news/prinpls.htm [accessed May 29, 2002].
Block, R. (CMS). 16 May 2002. Personal communication to Jill Eden.
Center for Medicare Education. 2001. “The Role of PROs, Issue Brief, V2 (2).” Online. Available at http://www.medicareed.org/pdfs/papers53.pdf [accessed May 13, 2002].
Centers for Medicare and Medicaid Services. 2001a. “End Stage Renal Disease (ESRD) Network Organizations.” Online. Available at http://www.hcfa.gov/quality/5d.htm [accessed Feb. 8, 2002].
———. 2001b. “MDS Quality Indicator and Frequencies Reports.” Online. Available at http://hcfa.gov/projects/mdsreports/default.asp [accessed June 17, 2002].
———. 2001c. “Nursing Home Compare—Home.” Online. Available at http://www.medicare.gov/NHCompare/home.asp [accessed May 6, 2002].
———. 2001d. “Protocols for External Quality Review of Medicaid Managed Care Organizations and Prepaid Health Plans.” Online. Available at http://www.hcfa.gov/Medicaid/mceqrhmp.htm [accessed May 15, 2002].
———. 2001e. “Quality of Care: National Projects, ESRD Clinical Performance Measures Project (2000 Annual Report).” Online. Available at hcfa.gov/quality/3m8.htm [accessed Jan. 9, 2002].
———. 2002a. “Quality Improvement Organizations Statement of Work.” Online. Available at www.hcfa.gov/qio/2.asp [accessed Apr. 22, 2002].
———. 2002b. “Quality Indicators.” Online. Available at http://www.cms.hhs.gov/qio/1a1-d.asp [accessed June 14, 2002].
———. 2002c. “Statement of Work, QIOs: 7th Round February 2002 Version.” Online. Available at http://www.hcfa.gov/qio/2b.pdf [accessed May 13, 2002].
———, CMS Office of Public Affairs. 2002d. “Medicare Streamlines Paperwork Requirements for Nursing Homes to Allow Nurses, Other Caregivers to Spend More Time With Patients.” Online. Available at www.CMS.hhs.gov/media/press/release.asp?counter=462 [accessed July 10, 2002].
———. 2002e. “Medicare Program; Town Hall Meeting on the Outcome Assessment Information Set (OASIS).” Online [accessed Aug. 12, 2002].
———. 2002f. “Nursing Home Quality Initiative.” Online [accessed Aug. 12, 2002].
Daley, J. 1998. About the National VA Surgical Quality Improvement Program. The Forum, VA Office of Research & Development.
DHHS Secretary’s Advisory Committee on Regulatory Reform. 2002. Regional Hearing #5 Meeting Minutes/Summary.
DOD TRICARE Management Activity. 2001. “Population Health Improvement Plan Guide.” Online. Available at http://www.tricare.osd.mil/mhsophsc/DoD_PHI_Plan_Guide.html [accessed May 15, 2002].
Donabedian, A. 1980. The definition of quality and approaches to its assessment. In Explorations in Quality Assessment and Monitoring. Vol. I. Ann Arbor MI: Health Administration Press.
Dyer, M., M. Bailit, and C. Kokenyesi. 2002. Are Incentives Effective in Improving the Performance of Managed Care Plans, Working Paper in the Informed Purchasing Series. Lawrenceville NJ: Center for Health Care Strategies.
Eddy, D. M. 1998. Performance measurement: problems and solutions. Health Aff (Millwood) 17 (4):7-25.
Forum of End Stage Renal Disease Networks. 2002. “What Are the ESRD Networks?“ Online. Available at http://www.esrdnetworks.org/networks_defined.htm [accessed Apr. 30, 2002].
Foster, N. (QuIC). 18 April 2002. Personal communication to Jill Eden.
French, J. B., and A. Miele. 2001. “Evaluation of HEDIS in Medicaid and SCHIP.” Online. Available at http://www.ncqa.org/Programs/QSG/EvaluationofHEDISinMedicaidandSCHIP.pdf [accessed Dec. 2001].
Health Care Financing Administration. 1999. “QIO SOWs: Request for Proposal, Sixth Round.” Online. Available at http://www.hcfa.gov/qio/2a.pdf [accessed May 13, 2002].
———. 2000. “National Projects Reports: Medicare Priorities.” Online. Available at http://www.hcfa.gov/quality/3k.htm#priority [accessed May 13, 2002].
Henneberry, J. 2001. State efforts to evaluate the progress and success of SCHIP (Issue Brief). NGA Center for Best Practices.
Hines, L. (DHHS). 8 August 2002. BIPA info. Personal communication to Jill Eden.
Indian Health Service. 2000. “Indian Health Performance Evaluation System (PES).” Online. Available at http://www.ihs.gov/NonMedicalPrograms/IHPES/index.cfm?module=content&option=pes [accessed June 15, 2001].
———. 2002. “IHS FY 1999 Performance Plan.” Online. Available at http://www.ihs.gov/PublicInfo/Publications/Perfplan2-1-99.asp [accessed Jan. 14, 2002].
Institute of Medicine. 1990. Medicare: a Strategy for Quality Assurance. Washington DC: National Academy Press.
———. 2001a. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington DC: National Academy Press.
———. 2001b. Improving the Quality of Long-Term Care. Washington DC: National Academy Press.
———. 2002. Priority Areas for National Action: Transforming Health Care Quality. Washington DC: National Academy Press.
Jencks, S. F. 2000. Clinical performance measurement—a hard sell. JAMA 283 (15):2015-6.
Jencks, S. F. 2001. “Oct. workshop: Protecting and Improving Safety and Quality for Medicare-HCQIP.” (PP slides).
Jencks, S. F. (Quality Improvement Group, Office of Clinical Standards and Quality, Centers for Medicare & Medicaid Services). 6 August 2002. Re: study period. Personal communication to Jill Eden.
Johnson, D. 2001. HCFA Legislative Summary: Letter to All Interested Parties. Washington DC: CMS.
Joint Commission on Accreditation of Healthcare Organizations. 2002. “ORYX: The Next Evolution in Accreditation; Questions and Answers about the Joint Commission’s Planned Integration of Performance Measures into the Accreditation Process.” Online. Available at http://www.jcaho.org/perfmeas/oryx_qa.html [accessed May 13, 2002].
Jones, D., A. Hendricks, C. Comstock, A. Rosen, B. H. Chang, J. Rothendler, C. Hankin, and M. Prashker. 2000. Eye examinations for VA patients with diabetes: standardizing performance measures. Int J Qual Health Care 12 (2):97-104.
Jones, E., and VHA. 2002. “Quality Resources Newsletter; Three Interlinked Services Available in 2002.” Online. Available at http://www.oqp.med.va.gov/newsletter/newsletter.asp [accessed May 15, 2002].
Kaye, N. 2001. Medicaid Managed Care: A Guide for States. Prepared for the Henry J. Kaiser Family Foundation, the Health Resources and Services Administration, the David and Lucile Packard Foundation, and the Congressional Research Service. Portland ME: National Academy for State Health Policy.
Matthews, T. L. 2000. Measuring the Quality of Medicaid Managed Care: An Introduction to State Efforts. Lexington KY: Council Of State Governments.
McGlynn, E., and J. Adams. 2001. Public release of information on quality. Pp. 183-202. In Changing the U.S. Health Care System: Key Issues in Health Services Policy and Management. 2nd edition. R. Andersen, T. Rice, and G. Kominski, eds. Jossey-Bass, Inc.
McIntyre, D., L. Rogers, and E. J. Heier. 2001. Overview, history and objectives of performance measurement. Health Care Financ Rev 22 (3):7-21.
Medicare. 2002. “Medicare.gov - Dialysis Facility Compare Home.” Online. Available at http://www.medicare.gov/dialysis/home.asp [accessed May 13, 2002].
MedPAC. 1999. Chapter 2: “Influencing Quality in Traditional Medicare.” Report to Congress: Selected Medicare Issues. Washington DC: MedPAC.
———. 2002. “Report to Congress: Applying Quality Improvement Standards in Medicare.” Online. Available at http://www.medpac.gov/publications/congressional_reports/jan2002_QualityImprovement.pdf [accessed Oct. 2, 2002].
MEDSTAT. 1998. A Guide for States to Assist in the Collection and Analysis of Medicaid Managed Care Data (CMS Contract #500-92-0035). Baltimore: CMS.
Mencke, N. M., L. G. Alley, and J. Etchason. 2000. Application of HEDIS measures within a Veterans Affairs medical center. Am J Manag Care 6 (6):661-8.
Milbank Memorial Fund. 2001. “Value Purchasers in Health Care: Seven Case Studies; The Military Health System: Implementing a Vision for Value.” Online. Available at http://www.milbank.org/2001ValuePurchasers/011001valuepurchasers.html#military [accessed May 14, 2002].
Nerenz, D. R., and N. Neil. 2001. “Performance Measures for Health Care Systems, Commissioned Paper for the Center for Health Management Research.” Online. Available at http://depts.washington.edu/chmr/docs/commissioned_papers/performancemeasures_nerenz_2001.doc [accessed June 14, 2002].
Paul, B. (CMS). 8 August 2002. BIPA 2000. Personal communication to Jill Eden.
Quality Interagency Coordination. 2002. “Quality Interagency Coordination (QuIC) Task Force.” Online. Available at http://www.quic.gov/index.htm [accessed July 11, 2002].
Rhew, D. C., M. B. Goetz, and P. G. Shekelle. 2001. Evaluating quality indicators for patients with community-acquired pneumonia. Jt Comm J Qual Improv 27 (11):575-90.
Roper, W. L., and C. M. Cutler. 1998. Health plan accountability and reporting: issues and challenges. Health Aff (Millwood) 17 (2):152-5.
Rubin, H. R., P. Pronovost, and G. B. Diette. 2001. The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care 13 (6):469-74.
Schneider, E. C., V. Riehl, S. Courte-Wienecke, D. M. Eddy, and C. Sennett. 1999. Enhancing performance measurement: NCQA’s road map for a health information framework. National Committee for Quality Assurance. JAMA 282 (12):1184-90.
Shaughnessy, P. W., D. F. Hittle, K. S. Crisler, M. C. Powell, A. A. Richard, A. M. Kramer, R. E. Schlenker, J. F. Steiner, N. S. Donelan-McCall, J. M. Beaudry, K. L. Mulvey-Lawlor, and K. Engle. 2002. Improving patient outcomes of home health care: findings from two demonstration trials of outcome-based quality improvement. J Am Geriatr Soc 50 (8):1354-64.
Shukri, K., J. Henderson, and W. Daley. 2002. The comparative assessment and improvement of quality of surgical care in the Department of Veterans Affairs. Arch Surg 137:20-27.
Sofaer, S. 2002. Why ask patients? Presentation at the annual meeting of the Academy for Health Services Research and Health Policy, Washington DC.
Stuber, J., G. Dallek, and B. Biles. 2001. Program on Medicare’s Future: National and local factors driving health plan withdrawals from Medicare+Choice. New York: The Commonwealth Fund.
Texas Medical Foundation. 2002. “Diabetes quality improvement project.” Online. Available at www.dqip.org [accessed July 10, 2002].
TRICARE. 1999a. “Health Care Survey of DOD Beneficiaries: Overview.” Online. Available at http://www.tricare.osd.mil/survey/hcsurvey/overview.html [accessed May 13, 2002].
———. 1999b. “MHS Optimization Plan February 1999 Interim Report.” Online. Available at http://www.tricare.osd.mil/mhsophsc/mhs_supportcenter/Library/MHS_Optimization_Plan.pdf [accessed May 14, 2002].
Verdier, J., and R. Dodge. 2002. Other Data Sources and Uses, Working Paper in the Informed Purchasing Series. Lawrenceville NJ: Center for Health Care Strategies.