A Research Agenda to Support Quality Enhancement Processes
Summary of Chapter Recommendations
Implementation and evaluation of a national quality enhancement strategy focused on the use of standardized performance measures to monitor and improve quality will require a robust applied health services research capacity. Steps should be taken to ensure that the health services research agendas developed by the various government programs are complementary; address the salient concerns and needs of the populations served and of their care providers; and advance the capabilities of the government health programs in the roles of regulators, purchasers, and providers to promote excellence in health care.
Recommendation 8: The government health care programs should work together to develop a comprehensive health services research agenda that will support the quality enhancement processes of all programs. The Quality Interagency Coordination (QuIC) Task Force (or some similar interdepartmental structure with representation from each of the government health care programs and the Agency for Healthcare Research and Quality [AHRQ]) should be provided the authority and resources needed to carry out this responsibility. This agenda for fiscal years (FY) 2003–2005 should support the following:
a. Establishment of core sets of standardized performance measures
b. Ongoing evaluation of the impact of the use of standardized performance measurement and reporting by the six major government health care programs
c. Development and evaluation of specific strategies that can be used to improve the federal government’s capability to leverage its purchaser, regulator, and provider roles to enhance quality
d. Monitoring of national progress in meeting the six national quality aims (safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity)
The QuIC membership should ensure that the experience of the states and the needs of the populations served by Medicaid and the State Children's Health Insurance Program (SCHIP) are reflected in the research agenda. AHRQ should continue to staff the QuIC and provide the organizational locus of QuIC research activity.
Additional public investments in independent health services research will be critical to both the development and the implementation of the research agenda by the AHRQ and the six major government health care programs. Congress should ensure that the institutional organization and appropriations for health services research are adequate to meet this important objective.
This chapter presents the committee’s view of a research agenda to support the quality enhancement processes of the government health care programs. It begins with an overview of the current research activities supported by the federal government. The need for coordination of these activities is then discussed. The final section outlines what the committee believes to be the critical research priorities in health care quality.
OVERVIEW OF CURRENT RESEARCH ACTIVITIES
The federal government provides extensive support for four types of health research: laboratory research, clinical research, population-based epidemiological and environmental research, and applied health services research. Laboratory and clinical research is conducted mainly by the National Institutes of Health (NIH) (2002a), which operated in 2001 with a budget of approximately $20 billion. The Centers for Disease Control and Prevention (CDC) (2002), with a 2001 operating budget of approximately $5 billion, takes the lead role in applied epidemiological and environmental health research. The Agency for Healthcare Research and Quality (AHRQ) (2002a) provides the locus for applied health services research; its 2001 budget was approximately $270 million.
For the most part, the type of research most relevant to the development and implementation of effective quality enhancement strategies is applied health services research. Health services research “addresses issues of organization, delivery, financing, utilization, patient and provider behavior, quality, outcomes, effectiveness, and cost. It evaluates both clinical services and the system in which these services are provided” (Agency for Healthcare Research and Quality, 2002e, Para. 2). This chapter focuses particular attention on research regarding the development of standardized performance measures, the reporting of comparative quality data, and the provision of financial or other incentives to providers to improve quality.
While AHRQ is the primary engine for this type of research, the Centers for Medicare and Medicaid Services (CMS), the Veterans Health Administration (VHA), CDC, the Health Resources and Services Administration (HRSA), and NIH also engage in relevant applied health services research and demonstration activities. Rather than providing a chronicle of past research that has formed the basis for ongoing quality activities, this section highlights some of the salient research activities currently under way in these agencies. This is not intended to be an exhaustive review of every current quality-related project, but to provide a flavor of the range and types of initiatives being undertaken that are relevant to this report.
Agency for Healthcare Research and Quality
Created by statute in 1989 as the Agency for Health Care Policy and Research, AHRQ administers programs and activities across a range of policy concerns, from access to care, to cost-effectiveness, to quality of care. Its activities are organized under six separate research centers: the Center for Cost and Financial Studies, the Center for Organization and Delivery Studies, the Center for Primary Care Research, the Center for Practice and Technology Assessment, the Center for Outcomes and Effectiveness, and the Center for Quality Improvement and Patient Safety. Much of the work related to the development of performance measures and tools, a small part of AHRQ’s overall mission, is conducted by the last of these centers (Agency for Healthcare Research and Quality, 2001).
AHRQ funds both commissioned and investigator-initiated research efforts designed to enhance quality measurement and improve care. The quality-related research ranges from outcomes research, to performance measurement, to patient safety initiatives. For example, included in the safety agenda are 24 projects examining different methods of collecting and analyzing data to identify factors that create a higher risk of medical errors, 22 projects analyzing how computer technology can be used to reduce errors and improve the quality of care, 8 projects exploring the effects of working conditions on patient safety, and 23 projects focusing on the development of new strategies to improve patient safety at health care facilities (Agency for Healthcare Research and Quality, 2001).
In addition to its line of research on patient safety, AHRQ coordinates research initiatives directly related to performance measurement for quality improvement. These initiatives fall into three general categories: synthesizing the evidence to enable the development of guidelines and performance measures, enabling provider awareness of and response to clinical information, and improving the usefulness of comparative quality information made publicly available.
Evidence-based practice centers operating from 12 research and medical centers around the country (Agency for Healthcare Research and Quality, 2002c) synthesize and distill the clinical evidence on interventions for specified conditions. The objective is to provide organizations with a basis for the development of clinical guidelines and in some cases to enable the translation of the available clinical consensus into valid performance standards (Agency for Healthcare Research and Quality, 2002d). This translation process occurs through Q-Span, a project designed to expand the scope of valid, ready-to-use measures through cooperative research agreements.
Q-Span, due to be completed in FY 2003, develops and tests performance measures for specific conditions, patient populations, and care settings. Measures validated through Q-Span will be added to CONQUEST, an AHRQ compilation of over 1,200 existing performance measures that can be searched by topic by providers, researchers, and patients (Agency for Healthcare Research and Quality, 2002b). The Q-Span project is intended to develop or modify measures for use in different settings and populations, thereby filling identified measurement gaps. Measures from Q-Span, CONQUEST, and other research will be incorporated into the National Quality Measures Clearinghouse, developed through an AHRQ contract.
While not engaged specifically in the development of performance measures, AHRQ’s Patient Outcomes Research Teams (PORTs) evaluate interventions for particular illnesses and conditions and formulate recommendations, based on evidence from multiyear studies, about which strategies achieve the best outcomes. For example, AHRQ has coordinated PORT studies on asthma, low birth weight, pneumonia, depression, schizophrenia, prostate disease, cataract surgery, dialysis care, and breast cancer (Agency for Healthcare Research and Quality, 1998).
The Translating Research Into Practice (TRIP) initiative focuses on developing strategies to shorten the time lag between publication of research findings and their incorporation into routine clinical practice. The average amount of time required for research findings to affect direct patient care can be as long as two decades (Agency for Healthcare Research and Quality, 2000). TRIP, conducted in two phases, evaluates research dissemination models and tools
for their effectiveness in bringing about changes in practice. The 14 projects under TRIP I focus on strategies for collecting data, while the 27 projects under TRIP II examine implementation strategies and their effectiveness in achieving practice changes among providers with different characteristics and clinical populations across diverse settings (Agency for Healthcare Research and Quality, 2000).
In addition to performance and outcome measurement, there has recently been increased interest in improving the use of comparative quality data by patients, purchasers, providers, and policy makers as a quality improvement tool. As interest has grown in evaluating patients’ perceptions of the care they receive to inform the future selection of health plans, AHRQ has begun working to develop and validate surveys of patient perceptions and to display their results to consumers in useful ways. Research examining patient perceptions of care and their relationship to improved quality is still evolving. AHRQ’s development of instruments to measure consumer perceptions is an early step toward creating well-tested, validated instruments in the public domain to inform consumer choices.
AHRQ initially sponsored the Consumer Assessment of Health Plans Survey (CAHPS) to query Medicaid and commercial insurance beneficiaries on their experiences in managed care plans. As the role of managed care plans in Medicare grew, AHRQ and CMS worked collaboratively to ensure that the experience of Medicare beneficiaries would be captured in CAHPS. The CAHPS results are available on the Web and in print (Agency for Healthcare Research and Quality, 2000). The core of the CAHPS surveys is now applied to other federal programs and the private sector, with questions being added to tailor the survey to specific issues that may be more relevant to specific programs or populations.
CAHPS now includes surveys of Medicare beneficiaries who have disenrolled from Medicare+Choice plans to determine their reasons for doing so. A CAHPS survey first released in the fall of 2000 reported the experiences of Medicare beneficiaries in fee-for-service (FFS) Medicare. This survey was designed to enable comparisons of the performance of the FFS and managed care sectors as a whole on selected indicators within a geographic area. Through collaborations with other agencies and private organizations, CAHPS has also been adapted for use by the Federal Employees Health Benefits Program. CAHPS is the most widely used report of consumer ratings of health plans (Hibbard et al., 2002).
Research and development efforts for CAHPS are ongoing, and projects are currently under consideration for the second phase of the initiative (CAHPS II). Research is also under way to better understand how the information from consumer surveys can be used by QIOs to target quality improvement projects for providers (Garg et al., 2000). In the future, efforts need to be directed toward evaluating the usefulness of CAHPS and other types of comparative data and using those evaluations to improve both the substance and the accessibility of the information presented.
In addition to examining and disclosing beneficiaries’ perceptions of care, AHRQ has funded efforts to compile and make publicly available comparative data on clinical quality. For example, AHRQ has published studies on the comparative performance of health plans in cardiac bypass graft surgery, use of beta blockers after heart attacks, and asthma management (Agency for Healthcare Research and Quality, 1998).
Current efforts at AHRQ focus not only on improving the accessibility of publicly available information but also on identifying the elements of care that matter to consumers and purchasers in decision making. Because evidence indicates that the types of quality information currently available in the public domain are infrequently used by consumers and purchasers (Marshall et al., 2000), research now focuses on understanding the extent to which various stakeholders are aware of publicly available quality information, understand it, and find it relevant to the decisions they make. A great deal more research is needed in this area to support the efforts of the various government programs to provide useful information and reports to stakeholders.
Responding to the interest in using financial and other incentives to improve care through performance measurement and public disclosure strategies, AHRQ participates in the Robert Wood Johnson Foundation’s initiative, Rewarding Results: Aligning Incentives with High-Quality Health Care (National Institutes of Health, 2002b). Accordingly, AHRQ has issued a Request for Proposals to evaluate and analyze the impact of financial and nonfinancial incentives on improving the quality of care.
In response to a Congressional mandate, AHRQ is responsible for creating the National Quality Report, to be issued annually beginning in 2003. Developed in collaboration with the National Center for Health Statistics and other federal agencies, this report must identify areas in which health care is improving, declining, or remaining stable; provide evidence to identify care that requires more focused attention; and set forth national performance benchmarks. To develop the content and design of the report, AHRQ formed an interagency work group that includes representatives of CMS, NIH, CDC, the Office of the Assistant Secretary for Planning and Evaluation of the Department of Health and Human Services (DHHS), the National Cancer Institute, and the Substance Abuse and Mental Health Services Administration (Reilly, 2001). This collaboration reflects AHRQ’s organizational and technical assistance experience in working with other agencies within DHHS.
Finally, AHRQ currently provides administrative support to the QuIC
Task Force. In addition to its coordination functions, described in Chapter 4, QuIC sponsors a number of research activities together with AHRQ and other agencies that are funded by the participating agencies. For example, QuIC works with AHRQ to develop risk adjustment methods for performance measurement and collaborates with the Department of Labor in exploring the effects of working conditions in health care institutions on patient safety (Agency for Healthcare Research and Quality, 2001; Eisenberg et al., 2001). By staffing QuIC in its implementation activities, AHRQ has expanded the contexts for collaboration with other agencies. These activities support the committee’s recommended role for AHRQ in working with QuIC to coordinate research.
Centers for Medicare and Medicaid Services
Charged with administering Medicare, Medicaid, and the State Children’s Health Insurance Program (SCHIP), CMS focuses on the conduct of measurement and improvement activities. In addition to its implementation activities, however, CMS engages in a number of quality-related research initiatives, many of which are undertaken in collaboration with AHRQ. Because Medicare is the largest payer in the federal government, CMS has been able to use demonstration projects with providers to test quality improvement and performance models. Its research efforts generally fall into three categories: development and testing of performance measures, of outcome measures, and of more accessible, consumer-oriented comparative quality information on Medicare providers and contractors. Reflecting the increasing prevalence of chronic illness and its implications for future care needs (see Chapter 2), much of this research focuses on quality oversight in nonacute settings, such as nursing homes and home care. Research on nonacute settings presents an opportunity for evaluating the integration of quality oversight across settings and providers.
The Diabetes Quality Improvement Project (DQIP), sponsored by CMS, represents one of the largest demonstration projects on performance measurement. In a collaborative effort involving CMS, patient advocacy groups, private-sector quality organizations, providers, researchers, and other government agencies, DQIP identified seven core measures for diabetes care, streamlining the multiplicity of measures for diabetes (see Appendix B). It then created a toolbox to implement a measurement and reporting process. The DQIP performance measures have been adopted by the larger federal health programs and are implemented in all 50 states (Fleming et al., 2001). The Study of Clinically Relevant Indicators for Pharmacologic Therapy (SCRIPT) is using the same public–private collaboration model in a demonstration project to develop a core set of standardized performance measures for use in a variety of settings for medication management of atrial fibrillation, congestive heart failure, coronary artery disease, diabetes, dyslipidemia, hypertension, and post–myocardial infarction (Fleming, 2001).
As part of the QIO Seventh Scope of Work (see Chapter 4), CMS developed a home care demonstration project to test the Outcomes-Based Quality Improvement Technology, a systematic approach to measuring outcomes and targeting care processes that require improvement in home health agencies. This technology enables the QIOs to work with individual home health agencies to identify areas in which outcomes across the patient census are substandard, identify provider-specific causes of poor outcomes, and compare the practices of the home health agency with a clinical synthesis of best practices. Expanded to a pilot project in five states, the Outcomes-Based Quality Improvement Technology collaboration operates with a 67 percent participation rate by home health agencies (Thoumaian, 2002).
In addition to these major demonstration initiatives, the Health Care Quality Improvement Program, implemented by the QIOs, formulates evidence-based performance measures for use in its initiatives, primarily in Medicare, to improve care. The QIO Support Centers project engages in a synthesis of the clinical literature around targeted conditions as the foundation for developing quality indicators (Centers for Medicare and Medicaid Services, 2002).
Substantial attention has been directed toward enabling better public disclosure of quality information. CMS has worked collaboratively with AHRQ to develop Medicare applications of CAHPS and is continuing research on how to format the results more effectively for beneficiaries. Assessing how better to engage beneficiaries in the public disclosure elements of quality oversight provides a focus for Medicare CAHPS-related research. Accordingly, CMS has developed a research agenda aimed at exploring beneficiaries’ readiness to use comparative information and at tailoring information to the decision-making processes actually employed by users (McPhillips, 2002).
CMS has devoted particular attention to developing tools for public disclosure of comparative quality data for nursing homes. It began a six-state demonstration project in January 2002 to collect and publish quality information on nursing homes in Colorado, Florida, Maryland, Ohio, Rhode Island, and Washington. The data are based on performance measures developed through public–private collaboration by CMS, the industry, consumer representatives, and the National Quality Forum. The data collected were published in April 2002. The pilot is testing alternative approaches for public disclosure of data to determine which approaches
motivate consumers to use the information and reflect the priorities of beneficiaries and their families (Musgrave, 2001, 2002).
Finally, CMS is developing a solicitation for a demonstration project to test ways of financially rewarding physicians for improvement in outcome and process measures. To date, however, the creation of financial incentives to improve quality has not been a focus of CMS research efforts (Klauser, 2002; Treiger, 2002).
Centers for Disease Control and Prevention
Consistent with its public health mission, CDC has developed many projects for tracking the care delivered to patients, particularly where patient safety is involved. For example, it has created a voluntary system through which acute care hospitals report nosocomial infections to CDC. It has also developed performance measures related to health promotion and disease prevention, including measure sets that define expectations for preventive interventions and screenings, such as counseling for smoking cessation, pneumococcal immunization for seniors, and colorectal cancer screening. CDC is also examining structural measures of quality through its Translating Research Into Action for Diabetes (TRIAD) program, which is investigating the association of eight structural factors with quality of care and patient outcomes (Institute of Medicine, 2001b).
Health Resources and Services Administration
HRSA conducts grant and contract funding programs to improve access to health care and serves as an indirect provider of care. It has built an expanding community-based network of primary and preventive health care services. The HRSA Strategic Plan identifies four long-range strategies that are linked to the agency’s research activities: (1) to eliminate barriers to care, (2) to eliminate health disparities, (3) to ensure quality of care, and (4) to improve public health and health care systems.
Accordingly, much of HRSA’s research activity pertains to improving the delivery of primary care for underserved individuals and families, analyzing different delivery mechanisms for care, and identifying strategies for improving access to targeted areas of care. The agency’s quality-related research has involved both its grantees and its direct providers. Current HRSA-sponsored research includes a study of the disparities between what is known about caring for people infected with HIV and current clinical practices, projects that demonstrate the efficacy of interventions for high-risk populations, and studies of service provision to
improve the quality of care. The range of these activities includes an assessment of emergency room services provided to young victims of violence and an evaluation of the effectiveness of a quality improvement initiative for improving HIV care.
As with other agencies, two themes emerge in HRSA’s research: obtaining information on patient perceptions of care and developing performance models for the management of chronic illness. To these ends, HRSA has created a patient satisfaction survey for its direct providers of care that differs somewhat from the CAHPS survey used by Medicaid; consequently, most community health centers (for which Medicaid is a major payer) must administer multiple surveys. Through collaboration with grantees, HRSA has also developed and implemented evidence-based chronic care performance models for the management of diabetes, asthma, and depression. In addition, HRSA conducts evaluations of the efficacy of its patient safety protocols. Significantly, the agency’s research agenda envisions greater collaboration on quality-related research with other agencies within the DHHS (Institute of Medicine, 2001b).
National Institutes of Health
While applied health services research has not been the focus of NIH activities, quality-related health services research is conducted within each of the institutes. For example, research to develop performance measures for the care of depression emanates from the National Institute of Mental Health, while research to develop performance measures for cancer care is supported by the National Cancer Institute (NCI), and for Alzheimer’s disease by the National Institute on Aging. Reflecting its scientific and medical research mission, NIH focuses much of its research on evaluating the relative effectiveness of different clinical interventions and delivery arrangements in producing desired outcomes; generating clinical data that can inform the development of treatment guidelines; and improving public access to medical and clinical information, such as the results of clinical trials. Similar to AHRQ’s TRIP initiative, NIH research efforts have also focused on strategies to improve the assimilation of research findings into community practice.
NCI’s initiative on quality of cancer care includes identifying a core set of outcome measures for use in quality-of-care studies and strengthening the methods and empirical foundations for quality-of-care assessment. An illustrative project is the Cancer Care Outcomes Research and Surveillance Consortium (CanCORS), a 5-year, $34 million cooperative study to monitor and better understand variations in the receipt of quality cancer care and process–outcome relationships among large cohorts of newly diagnosed lung and colorectal cancer patients. CanCORS findings will complement quality-of-care studies based on data from NCI’s Surveillance, Epidemiology, and End Results (SEER) registry program. In addition, NCI conducts an initiative aimed at improving quality-of-care research within the institute’s clinical trials program, enhancing the quality of care by improving the quality of cancer communications, and increasing the extent to which available scientific evidence on quality measures and assessment informs federal decision making on cancer care. The vehicle for this initiative is the NCI-convened Quality of Cancer Care Committee, which currently supports collaborative translation projects with HRSA, CDC, CMS, and the Department of Veterans Affairs.
In addition to research on developing outcome and process measures, NIH examines the relationship of performance measures and guidelines to outcomes across different settings of care, thereby testing the validity of quality measures. For example, the National Institute of Mental Health conducts research involving separate studies to determine how the implementation of treatment guidelines for depression and schizophrenia affects outcomes and processes of care. It also tests whether evidence-based protocols for improving the quality of care for depression are effective across multiple settings and delivery systems.
The National Heart, Lung, and Blood Institute evaluates strategies that can be used in clinical practice to improve the implementation of national, evidence-based clinical practice guidelines for the treatment of heart, lung, and blood diseases and related conditions. Focusing on the delivery of medical care, this research evaluates the factors that affect the adoption of a selected guideline in community practice. The research is designed to identify barriers to the implementation of guidelines and factors that can enhance adherence to guidelines.
The Diabetes Research and Training Centers of the National Institute of Diabetes and Digestive and Kidney Diseases focus on developing and implementing approaches to improving the acceptance of guidelines. The purpose of this translation effort is to develop and test evidence-based diabetes educational modules, targeted professional training, and active community outreach.
Veterans Health Administration
VHA has engaged in a number of research initiatives consistent with its use of informatics in the implementation of quality improvement strategies. Its research is structured through a number of programs and centers, including the Patient Safety Centers of Inquiry and the Quality Enhancement Research Initiative.
The Patient Safety Centers of Inquiry were created to analyze the elements of and develop better tools for improved patient safety. Located in
California, Florida, Ohio, and Vermont, these centers explore the efficacy of different systemic approaches to improving safety in major incident areas, such as patient falls and anesthesia-related complications of surgery. For example, the VHA Midwest Patient Safety Center (known as the GAPS Center, for Getting at Patient Safety) conducts research on the development of strategies for training clinical and administrative staff to create a culture of safety. Accordingly, it is developing and testing a portable training kit that consists of simulations of adverse safety incidents, blueprints for safety meeting discussions and the development of safety minutes, blueprints for team cross-checking to minimize errors by identifying categories of collaboration, and guidelines for developing patient-directed infomercials that enable patients to cross-check readily identifiable elements of different interventions (e.g., correct surgery on correct body part, discharge instructions). The GAPS Center also examines human–computer interactions to evaluate the kinds of errors likely to arise with electronic order entry and to develop mechanisms for overcoming the patterns of potential error identified through the research (Render, 2002).
The Patient Safety Center of Inquiry at Palo Alto focuses on the development of systemic solutions to safety issues found in workforce training, organization, and workload. For example, the center examines data comparing incident responses in hospitals with those of naval aviators to identify baselines for achieving goals associated with changing safety cultures. The center also develops cognitive prompts to avoid perioperative events and examines fatigue effects on clinical performance (Gaba, 2002).
The Patient Safety Center of Inquiry in Vermont investigates the effectiveness of quality enhancement activities organized around themes of intervention, such as reductions in patient falls and adverse drug events. Illustrating an essential element of quality-related research, the center examines whether specific quality enhancement activities actually result in improved care and better outcomes for patients (Weeks, 2002). Relying on self-reporting by quality enhancement teams at identified facilities, the research seeks to determine whether the effects of quality-related interventions are sustained over a period of months (Weeks, 2002; Weeks et al., 2001).
The Quality Enhancement Research Initiative implements evidence-based outcome measures and evaluates the impact of efforts to translate the evidence base into practice. To this end, it creates a systemic approach to developing the translation and measuring its impact. Researchers identify the gaps in knowledge that prevent better outcomes, the reasons certain measures are not used by clinicians, and the manner in which clinicians use different measures. All data are risk-adjusted to enable assessment of outcomes relative to projected outcomes based on risk factors that range from severity of disease to patient compliance. After identifying specific barriers to achieving improved outcomes, researchers develop and test strategies for systemic solutions to closing the gap between actual and desired outcomes. Such solutions may range from improving clinician training to changing technical order specifications. In 2001, the initiative focused on modifying clinical databases to measure outcomes directly, rather than relying on chart abstraction, to enable nationwide assessment of the impact of the translation process (Demakis, 2002; Demakis et al., 2000). This research is facilitated by VHA’s system-wide electronic medical record (see Chapter 5).
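The observed-versus-expected comparison described above can be sketched in a few lines. This is a generic illustration of risk-adjusted outcome assessment, not VHA's actual QUERI methodology; the function names and the per-patient probabilities are hypothetical, standing in for predictions a real risk model would produce from factors such as disease severity.

```python
# Illustrative (hypothetical) risk adjustment: compare a facility's observed
# adverse outcomes with the number expected from each patient's risk factors.

def expected_rate(risk_probs):
    """Mean predicted probability of the adverse outcome across a patient panel."""
    return sum(risk_probs) / len(risk_probs)

def o_to_e_ratio(observed_events, risk_probs):
    """Observed-to-expected (O/E) ratio; values above 1.0 suggest
    worse-than-predicted outcomes after accounting for patient risk."""
    expected_events = sum(risk_probs)  # sum of per-patient predicted probabilities
    return observed_events / expected_events

# Hypothetical panel: predicted probabilities from a risk model.
probs = [0.05, 0.10, 0.20, 0.40, 0.25]
observed = 2  # adverse outcomes actually observed in this panel

ratio = o_to_e_ratio(observed, probs)
print(f"Expected events: {sum(probs):.2f}, observed: {observed}, O/E: {ratio:.2f}")
```

An O/E ratio of 2.0, as in this toy panel, flags twice as many adverse events as the risk model projected, which is the kind of gap between actual and desired outcomes that the initiative's researchers then investigate.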
COORDINATION OF RESEARCH ACTIVITIES
Research efforts in all the programs have focused on synthesizing the clinical evidence base and translating it into quality improvement strategies. Common research themes emerge among the programs: identification of priority areas for quality improvement, usually involving chronic illness or safety; synthesis of the evidence base around those areas; and development of performance measures from the evidence base.
While the research strategies of the various government programs are similar, the committee believes greater coordination would be beneficial in the development of the research agenda to better support the specific roles of government in quality enhancement processes. Some of the research efforts are duplicative or overlapping. For example, HRSA has established protocols for diabetes management and surveys of patient perceptions even though the DQIP protocols and CAHPS instruments are being used in many other government programs. Appropriate applications of the same instruments in HRSA could provide a richer database for assessing the validity of measures across populations.
Programs that conduct relatively less research would benefit from direct access to the research of other programs or agencies. Such a synergistic relationship cutting across all programs would also permit more testing of implementation approaches by providing a broader array of contexts for demonstration projects—for example, to determine how different payment methodologies could be used to improve quality (Anderson, 2002).
Without such coordination of research, the implementation of standardized tools across the government health programs will be much more difficult, since the tools used may not reflect the experience and responsibilities of the programs. In other words, research coordination is an essential precondition for coordination of implementation. Greater coordination also is needed to conduct more retrospective evaluations of the effects of different quality enhancement strategies across the government health
care programs and identify the elements of success or failure, similar to the retrospective research done by VHA.
Greater coordination would enable the identification of opportunities to include standardized quality measures and data elements in the design of other applied health services research. For example, the CanCORS project demonstrates how standardized quality measures and data elements can be applied in controlled clinical trials. Because of the substantial resources available to NIH and the advantages of building on well-designed trials, the committee concludes that spending should be realigned to encourage NIH to identify fields of clinical research for which the inclusion of development and testing of quality indicators and performance measures would be appropriate. NIH should coordinate among its various institutes to shorten the time lag between the development of research findings and their implementation in practice through more effective evaluation and dissemination of its own research. Consistent with Recommendation 8, presented at the beginning of this chapter, these research efforts should be coordinated through QuIC with the support of AHRQ to ensure congruence with the efforts of the government health care programs to strengthen and streamline their quality enhancement processes.
Broad recognition of the need for coordination is already reflected in the establishment of QuIC, formed expressly to coordinate quality improvement efforts among the different departmental health programs and improve the consistency of oversight (Eisenberg et al., 2001). As discussed above, QuIC works cooperatively with all departments sponsoring health quality research through focused work groups (Eisenberg et al., 2001). The development of a comprehensive research agenda responsive to the needs of all programs should be coordinated similarly through QuIC as a complement to its implementation functions.
Such coordination would reinforce an ongoing trend. Indeed, it is the need to maximize reliance on core competencies already demonstrated that drives the committee’s recommendation for AHRQ’s role in staffing and housing QuIC. AHRQ already provides administrative support to QuIC, and AHRQ’s director serves as QuIC’s operating chair. AHRQ’s current mission and existing pattern of collaboration ensure that coordination will reside in the entity with the expertise, infrastructure, and operational focus needed to achieve a coherent research agenda useful to all programs. QuIC should conduct an evaluation every 3 years to assess the usefulness of the research and the application and effectiveness of the new tools developed through this collaborative process.
CRITICAL RESEARCH PRIORITIES
AHRQ is already engaged in areas of research that are critical to implementation of the quality enhancement strategy recommended in this report. For example, the current efforts to better understand the information needs of various stakeholders and to develop reporting formats that respond to these needs should be expanded in scope. There are also new areas of research that should be vigorously pursued and will require additional support. These include:
The development of core sets of standardized performance measures that address important health care needs and reflect efforts to overcome methodological or structural obstacles to quality oversight.
The development and evaluation of specific strategies that can improve the government’s capability to leverage its purchaser, regulator, and provider roles to enhance quality.
The monitoring of national progress in meeting the six national quality aims (safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity) (Institute of Medicine, 2001a).
A wealth of performance measures already exists. In some areas, the challenge is to identify the best measures to be used across all government health care programs. However, there are also gaps in the performance measurement toolbox in such areas as mental health and end-of-life care, areas in which some believe inadequate attention has been devoted to measurement development. Lastly, there are important methodological challenges to measurement that must be addressed. Following are a few research areas the committee believes merit attention:
Technical, organizational, and legal challenges to the assessment of quality in clinically significant areas in which existing performance measures may lack broad acceptance or appropriate data sources, such as mental illness and addiction disorder treatments.
Methodological and organizational challenges to performance measurement for small groups and physicians.
Methodological and organizational challenges to measurement of performance across different settings, types of financing and delivery arrangements, and time, especially for chronic conditions and overall health status.
Development and evaluation of the impact of alternative payment models and specific financial incentives on quality.
Development of mechanisms for useful public access to comparative quality information.
This list is by no means exhaustive but is illustrative of the many types of issues that require substantial applied health services research attention.
Establishing Core Sets of Standardized Performance Measures
As discussed in Chapter 4, development of a core set of performance measures to be used by the government programs based on the common needs of all or most of the populations served would improve the effectiveness of quality enhancement processes. The research agenda must provide for the identification of appropriate measures, elimination of the multiplicity of measures that may exist for a given condition or intervention, and assurance of the clinical validity and credibility of the measures used (Anderson, 2002).
While many performance measures exist covering a broad spectrum of conditions and circumstances, there are areas in which performance measures may be lacking despite the substantial burden of the health condition or the characteristics and size of the populations affected. Adoption of performance measures to evaluate care for mental health/addiction disorders appears to be limited despite the prevalence and burden of these conditions in the government health programs (Anderson, 2002; Meyer and Massagli, 2001). For example, no mental health/addiction disorders are included for focused quality review in Medicare’s Health Care Quality Improvement Program (Davidson, 2001). Similarly, while there is a substantial body of measures for pediatric care generally, the evidence suggests that measures are lacking for the particular screening and counseling needs of adolescents (Foundation for Accountability, 2002). Research should be directed at identifying the reasons for the apparent gaps in adoption of performance measures in these and other areas, and at providing mechanisms for overcoming the barriers to acceptance, including demonstrating greater congruence in the relationship between the performance measured and improved outcomes (Anderson, 2002).
As part of the national quality enhancement strategy, effort should also be directed at assessing the impact of quality enhancement processes in general. It will be important to evaluate whether actual improvements in care are occurring in the clinical areas being monitored and whether the attention devoted to areas of performance measurement deflects attention from areas of care less susceptible to measurement, such as care coordination among providers and between providers and community services (Anderson, 2002). Lastly, as discussed in Chapter 4, performance measures must be updated periodically to reflect the current status of clinical knowledge and to remain responsive to the needs of the populations served.
Addressing Obstacles to More Effective Oversight
In addition to gaps in specific types of performance measures, research can address broader structural issues that impede quality measurement and enhancement. The committee concludes that the following issues require careful analysis.
Performance Measurement for Small Groups of Clinicians
While substantial numbers of performance measures exist, the majority of these apply to facilities or health plans. The Quality Improvement System for Managed Care in Medicare and Medicaid applies by definition only to managed care plans. Most of the efforts of QIOs are directed at hospitals, nursing homes, home health agencies, and managed care plans. The only comparative quality information currently available publicly is for health plans, hospitals, dialysis centers, and nursing homes. Yet, a large proportion of care, particularly in the management of chronic illness, is delivered from the offices of small group practices or individual clinicians—settings for which very little quality measurement exists.
The obstacles to systemic performance measurement at the clinician level are substantial. The decentralization and variation among clinicians, combined with the absence of uniform computerized clinical data, render data collection a complex and burdensome task. In many private-practice settings, it is not even possible to identify patients readily by diagnosis, a necessary first step in the calculation of many performance measures. Logistical problems are compounded by analytical issues, such as how to obtain adequate sample sizes in small-practice settings to derive reliable measurements of performance (Anderson, 2002). These obstacles, however, need not be permanent barriers. Because existing performance measurement techniques preclude evaluation of such a large portion of health care, the need for enhanced research to close the gap is compelling. Accordingly, a substantial effort must be made to identify the most feasible methods for collecting data from small clinical units and to address methodological issues. The committee believes that first steps in overcoming these obstacles could include broader use of patient registries and data systems that permit easy access to clinical and other patient information.
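The sample-size problem described above can be illustrated with a short calculation. The sketch below is hypothetical, using the standard normal approximation for a proportion rather than any program's actual reliability methodology, and the 80 percent example rate is invented for illustration.

```python
# Illustrative sketch: the 95% confidence interval around a measured
# performance rate widens sharply as a practice's eligible-patient count shrinks.
import math

def ci_half_width(rate, n, z=1.96):
    """Half-width of the normal-approximation 95% CI for a proportion,
    given the observed rate and the number of eligible patients n."""
    return z * math.sqrt(rate * (1 - rate) / n)

# A practice scoring 80% on a hypothetical process measure:
for n in (25, 100, 400):
    hw = ci_half_width(0.80, n)
    print(f"n={n:4d}: 80% +/- {100 * hw:.1f} percentage points")
```

With only 25 eligible patients, the interval spans roughly 64 to 96 percent, far too wide to distinguish good performers from poor ones; it takes on the order of 400 patients to narrow the interval to about four points, which is why small-practice measurement demands pooling strategies such as registries.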
Measurement Across Settings, Delivery Systems, and Time: Patient-Centered Care
Most available measures tend to capture responses to time-limited episodes of care (e.g., stroke, heart attacks) rather than the elements of ongoing management of chronic conditions (e.g., ongoing communication between members of the interdisciplinary team and the patient, patient education and guidance for self-management, discharge planning). These “snapshots” fail to reflect the care patients receive as they experience it—from physician’s office to emergency room to hospital admission to nursing facility to home care. Moreover, the data requested for each phase of care may be redundant or may not reflect the total care process. Creation of a common dataset from the nursing home Minimum Data Set and the home health care Outcome and Assessment Information Set represents an important opportunity to reduce administrative burden while providing more coherent information on patient care. Once data have been collected, substantial methodological challenges remain, including how to analyze processes and outcomes according to the distribution of care among providers.
Accordingly, measures that can be applied in multiple settings and at the point of transition between settings must be developed. There are numerous options for the delivery of rehabilitative and long-term care services, including home health, short-term rehabilitation hospitals, and nursing homes. Patients with very similar health care needs may choose different settings. The use of common standardized performance measures across settings would be most helpful in determining which settings are most capable of providing adequate care, and the access to such comparative data would better inform patient decisions.
Alternative Payment Models
Although the preponderance of this report has focused on quality measurement as the stimulus for quality improvement, the committee recognizes that measurement in combination with other strategies has the potential to produce substantial change. The use of payment strategies to reward superior performance (as discussed briefly in Chapter 5, with regard to information technology) has attracted growing interest; some states, private-sector purchasers, and health plans are currently experimenting with these strategies (White, 2002). Seven states provide financial rewards to Medicaid managed care plans that meet administrative, access, or quality/clinical care standards; 26 states employ financial penalties for failure to meet performance standards (Kaye, 2001).1
As discussed in Chapter 3, the committee recommends that the federal government take greater advantage of its position as the largest purchaser of health care services. The means of determining the impact of financial incentives and identifying the amount and structure of payment necessary to effect change remain largely unexplored at the national level, notwithstanding a body of experience at the state level that could inform such research (Kaye and Bailit, 1999). Research is needed to develop different models of compensation (including criteria for qualifying for higher payment) and to test the models to determine whether such strategies actually change performance and outcomes.
The Rewarding Results initiative announced by the Robert Wood Johnson Foundation and AHRQ in early 2002 serves as an example of research designed to explore strategies for creating incentives to improve quality. It provides grants and technical assistance to purchasers and health plans to develop incentive structures that “align incentives with high quality care” (National Health Care Purchasing Institute, 2002). As discussed above, QuIC, in collaboration with other agencies, can identify those programs best suited to such a demonstration that would yield important information for policy makers across the various programs. AHRQ should devote increased attention to the evaluation of alternative options for building incentives to improve quality into payment systems.
Access to Information for Informed Decision Making
Evaluation and testing play a particularly important role in determining the best strategies for providing public access to comparative quality information and for targeting that information to different users so as to ensure the most beneficial impact. While there is substantial activity directed at exploring various approaches to public access, experience and reliable knowledge are limited. Accordingly, this remains a somewhat experimental area, one that will be susceptible to modification and innovation as understanding increases.
Existing evidence indicates that consumers generally rely on comparative quality data only to a limited extent and make choices that do not necessarily correspond to their stated preferences (Hibbard and Jewett, 1996; Hibbard et al., 2001). Studies show that this lack of reliance stems from a lack of understanding or distrust of performance ratings and their perceived lack of relevance or utility (Vaiana and McGlynn, 2002). This conclusion is confirmed by experience with disclosure of comparative quality data on hospitals and health plans (Jencks, 2000; Schneider and Epstein, 1996). In the latter two examples, publicly disclosed data showing variations in the quality of care and patient outcomes have had little impact on consumer choice or health plan contracting with hospitals (Jencks, 2000; Schneider and Lieberman, 2001).
Current research efforts focus on developing presentations of comparative quality information for public disclosure that are more accessible to the consumer, creating greater incentives for consumers to use and act on such information (McPhillips, 2002). Recent studies suggest that performance reports must make cognitive demands on the user that are consistent with “the basic processes by which people make decisions,” an element largely lacking in current public reports (Vaiana and McGlynn, 2002, pp. 3-4). Substantial support should be provided for efforts to improve the congruency between public reporting of data and the needs of users, including research assessing the capabilities of consumers to use the information and adapt it appropriately.
The committee believes that more education on the variability in the quality and safety of care, combined with comparative data in more accessible formats, will likely trigger greater interest and ability on the part of consumers to use comparative information on quality when making decisions. Early exploratory efforts point in this direction. For example, since 1998, PacifiCare of California’s Quality Index profile of physician organization performance has disclosed 58 measures of clinical quality, patient safety, service quality, and affordability to consumers, and PacifiCare enrollees have responded by increasingly selecting better-performing providers (Ho, 2002).
There is evidence that the behavior of providers changes measurably when they are confronted with publicly disclosed comparative data, although the evidence on the effects of such changes is conflicting and the methods used in the different studies vary significantly. Some evidence from studies of the effects of comparative report cards on the quality of care in cardiac surgery in New York and Pennsylvania indicates that public disclosure of comparative risk-adjusted data may have contributed to improved outcomes for patients, including high-risk patients; this suggests that some providers actually changed clinical practices to improve care (Hannan et al., 1994, 1997; Marshall et al., 2000). Similar findings are reflected in a qualitative case study of four of the worst-performing hospitals in New York, indicating that report cards combined with regulatory intervention for conspicuous outliers led to specific clinical improvements in cardiac surgery care. These improvements included increasing the level of specialization of providers and caregivers, changing the physical organization of the facilities, and revising surgical privileges and scheduling (Chassin, 2002). Such a beneficial effect failed to occur in hospitals whose poor or mediocre performance did not qualify for outlier status. Chassin attributes the quality improvement effects to four factors that he characterizes as difficult to duplicate outside of New York: the integration of comparative quality reporting into the routine regulatory processes of a government agency, vigorous involvement of the professional leadership, a continuous commitment to scientific evaluation of the
program, and the “active engagement” of the health department as a “primary force for improvement” in a strong regulatory environment (Chassin, 2002, p. 49).
With respect to changes in provider behavior that affect access to care, Hannan et al. (1997) found that sicker patients did not experience greater exclusion as a result of disclosure of comparative data in New York. These findings differ from the results of an analysis of the effects of disclosure in Pennsylvania, where referring cardiologists reported in surveys that “access to care has decreased for severely ill patients who need CABG [coronary artery bypass graft] surgery” (Schneider and Epstein, 1996). These studies did not track the effects of changed provider behavior on patients who did not receive surgical intervention.
In a detailed, controlled study of comparative cohorts of patients before and after the use of report cards based on nationwide sampling, Dranove et al. (2002) found increased exclusion of sicker patients whose conditions were likely to require surgery to achieve health improvement, better matching of patients with providers (consistent with the findings of Hannan et al.), increased surgery on healthier patients whose conditions would have been more responsive to non-surgical interventions, and poorer outcomes (greater morbidity and mortality) among sicker patients who did not receive surgery compared with similar patients in control groups who did receive surgery. None of the studies examined long-term effects on health status.
The variation in findings among these studies points to the need to test different approaches to report cards and to explore the effects of these approaches in causing a broader range of providers to improve care, minimize unintended/undesired consequences, and support the interest of the consumer in being able to identify and select the safest and most effective sources of care (Hannan et al., 1997). These findings also underscore the importance of developing appropriate means of risk adjusting in publicly disclosed outcome information and promoting provider confidence in the validity of the risk adjustment. Without such risk-adjustment, comparative information could result in misconceptions regarding quality of care as well as incentives for risk selection by providers (Anderson, 2002). Accordingly, research should focus on the development and dissemination of risk adjustment methodologies that accurately reflect patient condition as an essential element of improved access to information.
As discussed in Chapter 4, research should be directed toward ensuring that the measures employed reflect important aspects of quality. Public information should focus on elements of care that reflect consumer priorities, address consumer assumptions about quality, lend themselves to easy and correct interpretation for making choices, and be disclosed in a timely manner (Schneider and Lieberman, 2001).
The committee believes that structuring information to correspond to the core sets of performance measures across the six quality aims should provide the paradigm for research on public disclosure.
The relationship of process measures to better care and outcomes should be a defining consideration in the selection of the measures to be disclosed, and that relationship must be apparent in both surveys and presentation. However, the committee believes that because consumers make choices at the micro level of care (e.g., choosing a clinician), identifying and implementing an information infrastructure that can be used to collect provider-specific information for consumers remains an essential precondition for meaningful public disclosure of quality performance.
In addition to research on how best to design comparative reports to meet the needs of various stakeholders, it will also be important for AHRQ to better understand the potential users and applications that can be supported by the shared data repository (discussed in Chapter 4). The data repository is intended to be a more flexible tool for gaining access to quality information. In addition to requesting specific tailored reports, users might also access the data base directly and generate their own reports. A good deal of research and evaluation will be necessary to determine how best to structure and organize the data in the repository and to identify ways of assisting different types of users in accessing and interpreting data.
REFERENCES
Agency for Healthcare Research and Quality. 1998. “Strategic Plan, November 1998: Center for Outcomes and Effectiveness Research: Mission Statement.” Online. Available at http://www.ahrq.gov/about/coer/coerplan.htm [accessed July 12, 2002].
———. 2000. “Translating Research into Practice: From the Pipeline of Health Services Research-CAHPS. The Story of the Consumer Assessment of Health Plans.” Online. Available at http://www.ahrq.gov/research/cahptrip.htm [accessed May 21, 2001].
———. 2001. “Quality Interagency Coordination Task Force (QuIC) Fact Sheet, AHRQ Publication No. 00-P027.” Online. Available at http://www.ahcpr.gov/qual/quicfact.htm [accessed June 18, 2001].
———. 2002a. “AHRQ Fiscal Year 2002 Budget in Brief.” Online. Available at http://www.ahcpr.gov/about/cj2002/budbrf02.htm [accessed Feb. 4, 2002].
———. 2002b. “CONQUEST Fact Sheet.” Online. Available at http://www.ahrq.gov/qual/conquest/conqfact.htm [accessed Feb. 14, 2002].
———. 2002c. “Evidence-based Practice Centers.” Online. Available at http://www.ahcpr.gov/clinic/epc/ [accessed Apr. 29, 2002].
———. 2002d. “Health Care: Evidence-based Practice Subdirectory Page.” Online. Available at http://www.ahrq.gov/clinic/epcix.htm [accessed Feb. 14, 2002].
———. 2002e. “What Is AHRQ?” Online. Available at http://www.ahrq.gov/about/whatis.htm [accessed Sept. 23, 2002].
Anderson, G. 2002. “Testimony Before the Subcommittee on Health of the House Committee on Ways and Means Hearing on Promoting Disease Management in Medicare.” Online. Available at http://waysandmeans.house.gov/health/107cong/4-16-02/4-16ande.htm [accessed May 3, 2002].
Centers for Disease Control and Prevention. 2002. “CDC - Financial Management Office Budgetary Information.” Online. Available at http://www.cdc.gov/fmo/fmofybudget.htm [accessed Feb. 4, 2002].
Centers for Medicare and Medicaid Services. 2002. “Quality Improvement Organization Support Centers (QIOSCs).” Online. Available at http://cms.hhs.gov/qio/1a1-c.asp [accessed Oct. 2, 2002].
Chassin, M. 2002. Achieving and sustaining improved quality: lessons from New York state and cardiac surgery. Health Aff (Millwood) 21 (4):40-51.
Davidson, E. (CMS) (phone interview). 13 August 2001. Personal communication to Barbara Smith.
Demakis, J. (VHA) (phone interview). 11 February 2002. Personal communication to Barbara Smith.
Demakis, J. G., L. McQueen, K. W. Kizer, and J. R. Feussner. 2000. Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Med Care 38 (6 Suppl 1):I17-25.
Dranove, D., D. Kessler, M. McClellan, and M. Satterthwaite. 2002. Is More Information Better? The Effects of ‘Report Cards’ on Health Care Providers (NBER Working Paper 8697). Cambridge, MA: National Bureau of Economic Research.
Eisenberg, J. M., N. E. Foster, G. Meyer, and H. Holland. 2001. Federal efforts to improve the quality of care: the QuIC. Jt Comm J Qual Improv 27 (2):93-100.
Fleming, B. (CMS). 1 September 2001. Personal communication to Barbara Smith.
Fleming, B., S. Greenfield, M. Engelgau, and L. Pogach. 2001. The Diabetes Quality Improvement Project: moving science into health policy to gain an edge on the diabetes epidemic. Diabetes Care 24 (10):1815-20.
Foundation for Accountability. 2002. “The Child and Adolescent Health Measurement Initiative.” Online. Available at www.facct.org.cahmiweb/tee/teenhome.htm [accessed Feb. 19, 2002].
Gaba, D. M. 11 April 2002. Personal communication to Barbara Smith.
Garg, P., J. Lee, R. Hays, K. Kahn, and P. Cleary. 2000. Strategic plan for quality improvement using Medicare CAHPS FFS information.
Hannan, E., H. Kilburn, M. Racz, E. Shields, and M. Chassin. 1994. Improving the outcomes of coronary artery bypass surgery in New York state. JAMA 271 (10):761-6.
Hannan, E. L., A. L. Siu, D. Kumar, M. Racz, D. B. Pryor, and M. R. Chassin. 1997. Assessment of coronary artery bypass graft surgery performance in New York. Is there a bias against taking high-risk patients? Med Care 35 (1):49-56.
Hibbard, J., N. Berkman, L. McCormack, and E. Jael. 2002. The impact of a CAHPS report on employee knowledge, beliefs, and decisions. Med Care Res Rev 59 (1):104-16.
Hibbard, J. H., and J. J. Jewett. 1996. What type of quality information do consumers want in a health care report card? Med Care Res Rev 53 (1):28-47.
Hibbard, J. H., P. Slovic, E. Peters, M. L. Finucane, and M. Tusler. 2001. Is the informed-choice policy approach appropriate for Medicare beneficiaries? Health Aff (Millwood) 20 (3):199-203.
Ho, S. (PacifiCare). 29 July 2002. Re: Chapters 4 and 5. Personal communication to Barbara Smith.
Institute of Medicine. 2001a. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington DC: National Academy Press.
———. 2001b. An Overview of Major Federal Health Care Quality Programs: Appendix B. Washington DC: IOM.
Jencks, S. F. 2000. Clinical performance measurement—a hard sell. JAMA 283 (15):2015-6.
Kaye, N. 2001. Medicaid Managed Care: A Guide for the States. Portland, ME: National Academy for State Health Policy.
Kaye, N., and M. Bailit. 1999. Innovations in Payment Strategies to Improve Plan Performance. Portland, ME: National Academy for State Health Policy.
Klauser, S. (CMS) (phone interview). 5 February 2002. Personal communication to Barbara Smith.
Marshall, M. N., P. G. Shekelle, S. Leatherman, and R. H. Brook. 2000. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA 283 (14):1866-74.
McPhillips, R. (CMS). February 2002. Personal communication to Barbara Smith.
Meyer, G. S., and M. P. Massagli. 2001. The forgotten component of the quality triad: can we still learn something from “structure”? Jt Comm J Qual Improv 27 (9):484-93.
Musgrave, D. 2001. HHS to Provide Nursing Home Quality Information to Increase Safety and Quality in Nursing Homes: Presentation. DHHS.
Musgrave, D. (CMS). 7 February 2002. Personal communication to Barbara Smith.
National Health Care Purchasing Institute. 2002. “Rewarding Results.” Online. Available at http://www.nhcpi.net/rewardingresults/index.cfm [accessed Apr. 22, 2002].
National Institutes of Health. 2002a. “National Institutes of Health FY 2001 Investments.” Online. Available at http://www.nih.gov/news/BudgetFY2002/FY2001investments.htm [accessed Feb. 4, 2002].
———. 2002b. “NIH Guide: Evaluation of demonstrations: ‘rewarding results’.” Online. Available at http://grants.nih.gov/grants/guide/rfa-files/RFA-HS-02-006.html [accessed Apr. 17, 2002].
Reilly, T. W. (AHRQ). 7 June 2001. National Healthcare Quality Report: Background. Attachment to Edinger, S. Personal communication to Barbara Smith.
Render, M. 2002. Personal communication to Barbara Smith.
Schneider, E. C., and A. M. Epstein. 1996. Influence of cardiac-surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists. N Engl J Med 335 (4):251-6.
Schneider, E. C., and T. Lieberman. 2001. Publicly disclosed information about the quality of health care: response of the U.S. public. Qual Health Care 10 (2):96-103.
Thoumaian, A. (CMS). 6 February 2002. Personal communication to Barbara Smith.
Treiger, S. (CMS). 7 February 2002. Personal communication to Barbara Smith.
Vaiana, M. E., and E. A. McGlynn. 2002. What cognitive science tells us about the design of reports for consumers. Med Care Res Rev 59 (1):3-35.
Weeks, W. (VA). 29 April 2002. Personal communication to Barbara M. Smith.
Weeks, W., P. Mills, R. Dittus, D. Aron, and P. Batalden. 2001. Using an improvement model to reduce adverse drug events in VA facilities. Jt Comm J Qual Improv 27 (5):243-54.
White, R. Jan. 14, 2002. A shift to quality by health plans. Los Angeles Times.