KEY SPEAKER THEMES
- The National Quality Strategy is dedicated to the delivery of better care, improvement of individual and community health, and provision of more affordable care.
- The National Quality Strategy was informed by a variety of stakeholders who developed six main priorities for the strategy: harm reduction, patient engagement, communication, prevention, community involvement, and cost containment.
- The National Quality Strategy has worked to align agency efforts around the three aims to promote coordination and provide comparable results where possible.
- The National Quality Strategy faces a variety of measurement, accessibility, and functionality challenges, all of which will continue to be addressed as the strategy evolves.
- Measurement is not an end in itself, so measures should be developed and implemented around the goal of improving health and health care.
- Assessment, harmonization, and alignment of current measures are necessary to ensure a focus on only those measures that drive improvement.
- Measures will need to evolve to take advantage of new digital sources of health information.
- Measurement should involve a cyclical process of continuous improvement where the measure’s impact is continually assessed and that information is used to improve.
- Moving toward an effective measurement approach requires data harmonization, consistency, timeliness, and parsimony to allow for broad, multilevel progress toward the three-part aim.
- To ensure that measures are useful and actionable, data availability, the effectiveness of metrics in driving change, and the value each measure adds should all be considered in measure development and selection.
- The multiple data sources in the health system, and the varying units of those data, present significant challenges to scaled metrics implementation.
- Metric concepts and specifications must be consistent to reduce and streamline the variability of those metrics.
Measurement capabilities currently vary across the numerous levels of the health and health care system. To delve deeper into the status of current metrics implementation, Carolyn Clancy, director of the Agency for Healthcare Research and Quality (AHRQ), led off her panel’s discussions with an overview of the National Quality Strategy and its current initiatives, challenges, and future work. Helen Burstin, senior vice president for performance measures at the National Quality Forum (NQF), continued the conversation with a presentation on the key challenges and opportunities for current measurement capabilities. Barbara Gage, fellow and managing director of the Engelberg Center for Health Care Reform at the Brookings Institution, concluded the panel’s discussion by focusing on measurement implementation.
In her discussion of the National Quality Strategy, Carolyn Clancy described the broad aims of the strategy. In summary, the strategy seeks to provide
- better care: improving overall care quality by making health care more patient-centered, reliable, accessible, and safe;
- healthy people and healthy communities: improving population health by supporting proven interventions to address behavioral, social, and environmental determinants of health; and
- affordable care: reducing the cost of quality health care for individuals, families, employers, and government.
Building on these broad aims, a range of stakeholders from the private and public sectors formulated specific priorities on which to focus. Those priorities, Clancy explained, are to
- make care safer by reducing harm caused in the delivery of care;
- ensure that each person and the members of his or her family are engaged as partners in that person’s care;
- facilitate effective communication and coordination of care;
- promote the most effective prevention and treatment practices for leading causes of mortality, starting with cardiovascular disease;
- work with communities to promote wide use of best practices for healthy living; and
- make quality care more affordable for individuals, families, employers, and governments by developing and spreading new health care delivery models (HHS, 2011).
As required by the Patient Protection and Affordable Care Act (ACA), the U.S. Department of Health and Human Services (HHS) must submit a progress report on the National Quality Strategy to Congress each year, Clancy noted. She gave an overview of the messages relayed in the 2012 update, which established key measures used to track each of the National Quality Strategy’s priorities, described how the National Quality Strategy has helped align various measurement approaches used by programs to measure quality, and highlighted efforts in Colorado and Ohio to improve quality along the priorities identified by the National Quality Strategy. Clancy also outlined a variety of initiatives that are in line with the National Quality Strategy’s strategies for progress, including the Partnership for Patients, the Million Hearts Campaign, and the Multi-Payer Advanced
Primary Care Practice Demonstration (HHS, 2012). Clancy noted, too, that given AHRQ’s integral role in the National Quality Strategy Annual Progress Report, the agency has reoriented its National Healthcare Quality and National Healthcare Disparities reports to align with the National Quality Strategy priorities.
Future updates to the National Quality Strategy will use key measures to set aspirational targets and to track progress. These measures align with each of the priorities and are designed to evaluate long-term improvement in each priority area. Examples of such measures include monitoring hospital-acquired conditions to reduce preventable hospital admissions and tracking the proportion of adults who are obese in order to promote healthy living in the long term. Moreover, Clancy emphasized, the National Quality Strategy has provided a vital framework to ensure as much alignment with the three aims as possible, both across and within HHS and also with states and private sector initiatives.
In closing, Clancy elaborated on the ongoing uptake hurdles for the National Quality Strategy. Uniformity in data measurement poses critical challenges because national, state, and community-level initiatives may collect data differently. In some instances, however, uniformity may be less critical, she said, which limits the value of imposing broad uniformity requirements. Additionally, lags in the timeliness of data may make it more difficult to track progress or identify problems, and the reported data may not reflect the current conditions at a given site. There is also the question, she said, of whether all the data currently collected are useful for organizations’ efforts to make real-time improvements. On this point, Clancy cited the example of one initiative, a nationwide program to reduce central line-associated bloodstream infections, which saw strong results with a low data collection burden and quarterly feedback (Dixon-Woods et al., 2011; Pronovost et al., 2006, 2010). Over the course of that project, it became clear that the ability to connect current improvement efforts to progress toward their goals, as assessed using a limited amount of collected data, provides a powerful incentive for continued work and engagement.
Looking to the future, Clancy emphasized that the strategy will continue to be refined based on lessons learned, new research findings, and changing health quality priorities. The next version of the National Quality Strategy will include aspirational targets for a greater number of key measures. It also will catalyze action by engaging federal, state, and private-sector stakeholders to identify next steps in the National Priorities Partnership’s three strategic areas: a national strategy for data collection, measurement, and reporting; community-level organizational infrastructure for improvement efforts; and ongoing payment and delivery system reform.
Helen Burstin started her presentation by emphasizing that measurement is not an end unto itself, but rather is valuable for how it can aid in improving the health and health care system overall. She stressed that the measures should be considered with that focus in mind. Burstin proceeded with an overview of how NQF evaluates and categorizes health care measures. Guided by how well they contribute to improving health and health care, NQF assesses measures for importance, usability, feasibility, reliability, and validity. Additionally, NQF considers how a particular measure contributes to the broader ecosystem of measurement in order to avoid directing resources toward redundant or unnecessary measurement activities.
Burstin explained that the stakeholders for measurement can be divided into two broad categories: clinicians and providers on one side, and consumers and purchasers on the other. These two groups bring competing perspectives and concerns about the role of measurement in the health care system. Clinicians and providers share concerns about how measurement might affect their clinical practices, about whether measures focus on important clinical processes, and about the potential administrative burden of additional reporting requirements. Consumers and purchasers, on the other hand, are typically concerned with the impact and value of measures, favoring composite measures that focus on outcomes for various groups and conditions rather than measures of process and compliance.
Combining the perspectives of both groups, a hierarchy of measures was developed in which the highest tier contains outcome measures linked to evidence-based processes, followed by outcome measures of substantial importance supported by plausible processes, then intermediate outcome measures, and finally, process measures that have a proven impact on outcomes. The development of these preferred measures requires substantial evidence, and, at present, there is a dearth of high-quality, consistent data to guide the implementation and validation of these measures.
Furthermore, given what Burstin described as the “tsunami of measurement” inundating the health care system today, it is essential to focus attention and resources on those measures that drive improvement. Providers face numerous and overlapping federal, state, and programmatic measurement requirements, creating a need for harmonization to reduce administrative burden and home in on the most important measures. Moving forward, Burstin said, inappropriate and duplicative metrics that increase burden without adding value must be avoided.
Burstin said that it will also be critical to develop de novo measures that take advantage of clinical data in new formats—such as electronic health records (EHRs), registries, and patient portals—rather than simply trying
to force imperfect older measures toward new purposes. To make the best use of these new technologies, the system needs better interfaces to other data, including patient demographics and costs, as well as interoperable systems to track quality and efficiency across time, sites, providers, and data platforms. At present, though, Burstin pointed out, EHR systems tend to be siloed and incompatible, with provider groups unable to share or compare information with each other or with the public. Furthermore, it is also critical that patients be at the center of data collection efforts, so that there is a comprehensive view of each patient’s health.
In her concluding remarks, Burstin outlined the key challenges that must be addressed to achieve high-quality, accurate core measurement in the health care system. First, measurement today is divided, with one set of measures assessing selection and payment and another set intended to drive improvement; bridging this gap with comprehensive measures is an essential step toward better care. Second, measures should be continually evaluated and improved through the entire measurement cycle based on feedback about impact and accuracy. Third, outcome and composite measures should be prioritized, with a focus on disparities and longitudinal measures across episodes of care. Finally, measurement efforts should coalesce around the needs and values of the patient.
By developing systems for continuous improvement and aligning metrics with anticipated delivery and payment changes, Burstin said that the health system as a whole can ensure that core measurement supports positive change and reform toward better care for patients and populations.
To start her discussion of implementation issues, Barbara Gage gave a broad overview of the current measurement landscape. While there is growing use of performance measures for internal quality improvement, public and private reporting, and different payment methods, several challenges complicate the use of these measures: variations in the data sources used to construct measures, inconsistent measures used across initiatives, different operational specifications for the same metric concept, and the large number of measures in play. All of these factors, she explained, contribute to the current challenges in applying metrics across different organizations and payer groups to move nationally toward the three-part aim.
Gage highlighted the changing measurement landscape and noted a few key points. First, measures will need to be aligned across similar concepts where possible, including the use of similar specifications to reduce administrative costs and enhance comparability of populations in different programs. Moreover, measures should be parsimonious to make large-scale,
consistent implementation feasible. Further, consistency in measure use and timely feedback to clinicians will be key to establishing actionable data, which clinicians will need in order to change behaviors in real time.
Today’s environment for measure development, Gage said, involves a variety of actors. Measures are endorsed as scientifically valid by the National Quality Forum, and payers select among those measures for those best suited for monitoring provider performance and quality of care. Metrics are critical to multiple initiatives nationally, including accountable care organizations (ACOs), Aligning Forces for Quality (AF4Q), regional health collaboratives, and value-based purchasing initiatives. They are also being tested in new health information technology initiatives, such as the Beacon Communities, health information exchanges, and regional collaboratives.
Gage highlighted the work of the Quality Alliance Steering Committee (QASC), which is focused on challenges in performance measurement implementation as it relates to the three-part aim. In concert with others, the QASC has been working to identify and select measures that can determine value. This is critical because both clinical outcomes and the associated costs must be considered in striving to meet the goals of the three-part aim. Two major implementation challenges to consider are data transfer issues and the challenges in merging data across different systems throughout the entire patient episode. Data transfer requires protecting privacy, addressing security issues, and respecting proprietary information while using standardized provider performance measures to create comparable community metrics. Data governance structures need to be designed to ensure neutrality and respect the proprietary nature of the information being transferred. One approach to these challenges may be to use distributed data models, in which the individual-level, or patient-level, data stay with the data owner, but aggregated information and measures can be submitted to a convening organization.
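The distributed data model described above can be illustrated with a minimal sketch. All names, data structures, and the screening measure below are hypothetical, invented only to show the division of labor: patient-level records stay with each data owner, and only aggregate counts flow to the convening organization.

```python
# Minimal sketch of a distributed data model (hypothetical data and
# measure): each data owner computes its aggregates locally, and the
# convener combines only those aggregates into a community metric.

def local_aggregate(records):
    """Runs at the data owner's site; raw patient records never leave."""
    eligible = [r for r in records if r["eligible"]]        # denominator
    met = [r for r in eligible if r["screening_done"]]      # numerator
    return {"numerator": len(met), "denominator": len(eligible)}

def convener_combine(site_aggregates):
    """The convening organization sees only counts, never patient data."""
    num = sum(a["numerator"] for a in site_aggregates)
    den = sum(a["denominator"] for a in site_aggregates)
    return num / den if den else None

# Two hypothetical sites each submit only their aggregate counts.
site_a = local_aggregate([
    {"eligible": True, "screening_done": True},
    {"eligible": True, "screening_done": False},
])
site_b = local_aggregate([
    {"eligible": True, "screening_done": True},
    {"eligible": False, "screening_done": False},
])
community_rate = convener_combine([site_a, site_b])
print(community_rate)  # 2 of 3 eligible patients screened
```

Because the convener receives only numerator and denominator counts, the approach sidesteps much of the privacy, security, and proprietary-data burden that transferring individual-level records would entail.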
In addition to data governance and data transfer, Gage highlighted three further implementation issues. The underlying information technology systems, regardless of the model, need to be affordable—they cannot be cost prohibitive to the providers. Furthermore, the measures need to be effective in helping organizations and individuals meet the three-part aim. Gage emphasized that the measures must provide timely, interpretable feedback to clinicians in order to affect their performance, and they must be valued by clinicians in order to be actionable.
She continued her discussion by highlighting the variation in measures currently used under different insurance programs. The Medicare Shared Savings Program for ACOs requires 33 measures of patient experience, care coordination, safety, prevention, and at-risk populations from three separate sources. The Medicare Advantage Star programs use a different set of 36 measures of prevention, chronic care management, patient experience,
and customer service from a variety of sources. Some of the concepts are similar, but the underlying data and specifications of the measures may vary. The measures most common across private plans, as Gage found from discussions with America’s Health Insurance Plans, are yet a third set of measures and specifications, and those used by regional health collaboratives differ as well. Each initiative can benefit from the lessons learned by others, especially to the extent that each is measuring the quality and value of services provided by the same type of provider or for similar populations.
Gage discussed the range of data sources used to populate measures. Patient experience data are commonly collected through surveys like the Consumer Assessment of Healthcare Providers and Systems (CAHPS) and may be collected more directly and in a more timely fashion from patients in the future. Claims data are typically used to measure service utilization, such as hospital readmissions, admissions for conditions that could be treated with ambulatory care, emergency room use, and cost measures, while clinically enhanced measures offer insights into health improvement through tracking high blood pressure control, screening for average blood glucose levels (HbA1c), eye exams, and other factors extracted from electronic records or medical charts.
In addition to the diversity of data sources, other factors can also make measure harmonization difficult. Harmonizing metrics can be challenging because of a lack of consensus about the best measures to use, and even when common concepts are measured, the specifications of the numerator, denominator, and inclusions and exclusions may vary. This makes it difficult to compare outcomes across providers, payers, initiatives, and communities. For example, initiatives may agree on the need to measure a concept like risk for falls but differ in the specific technical details, with one initiative measuring screening rates and another measuring patient education about falls.
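The falls example can be made concrete with a small sketch. The data and both measure specifications below are entirely hypothetical; the point is only that the same population, measured under two specifications of the same concept, yields rates that cannot be compared directly.

```python
# Hypothetical illustration: two initiatives both measure "risk for
# falls," but with different numerators and denominators, so identical
# patients produce different, non-comparable rates.

patients = [
    {"age": 70, "fall_screening": True,  "fall_education": False},
    {"age": 82, "fall_screening": True,  "fall_education": True},
    {"age": 78, "fall_screening": False, "fall_education": True},
    {"age": 75, "fall_screening": False, "fall_education": False},
]

# Initiative A: denominator = adults 65 and older; numerator = screened.
denom_a = [p for p in patients if p["age"] >= 65]
rate_a = sum(p["fall_screening"] for p in denom_a) / len(denom_a)

# Initiative B: denominator = adults 75 and older; numerator = educated.
denom_b = [p for p in patients if p["age"] >= 75]
rate_b = sum(p["fall_education"] for p in denom_b) / len(denom_b)

# Same concept, same patients, different specifications, different rates.
print(rate_a, rate_b)
```

Without agreement on numerator, denominator, and inclusion criteria, neither number is wrong, but reporting them side by side as "fall risk performance" would mislead.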
Gage concluded with a discussion of these issues as they relate to the current landscape for cost and resource use measures, explaining the inherent challenges stemming from the significant variations in available cost measures. First, the unit of analysis can hinder comparisons; costs per person per year, per member per month, and per episode cannot be directly compared. Second, numerous types of cost measures exist. The metric may vary depending on whether it reflects costs per diagnosis, costs for certain types of clinical services, costs across certain time windows or episode periods, or costs based on numerous other possible factors. Third, risk-adjustment methods can vary across payers, making the measured costs difficult to compare. Fourth, these data are often considered proprietary and may contain individual identifiers that are protected under Health Insurance Portability and Accountability Act (HIPAA) regulations. Finally, multiple definitions of health care cost itself are in use, which further complicates comparison.
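The unit-of-analysis problem can be shown with simple arithmetic. The spending total and enrollment below are hypothetical figures chosen only to illustrate why a cost reported per member per month (PMPM) cannot be compared with one reported per person per year until both are converted to a common unit.

```python
# Hedged sketch (hypothetical figures): identical spending looks very
# different depending on the unit of analysis used to report it.

total_spending = 1_200_000.0   # dollars over one year (hypothetical)
members = 500                  # enrolled for the full year (hypothetical)

per_person_per_year = total_spending / members            # annual unit
per_member_per_month = total_spending / (members * 12)    # PMPM unit

# The raw numbers differ by a factor of 12 even though they describe
# the same spending; comparison requires normalizing the time window.
assert per_member_per_month * 12 == per_person_per_year
print(per_person_per_year, per_member_per_month)
```

Risk adjustment and differing cost definitions compound the problem: even after converting to a common unit, two payers' PMPM figures may still reflect different populations and different notions of "cost."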
To lead off the ensuing discussion, one participant noted that while the idea of having global measures is good, global measures can fail to capture the level of detail that leads to accountability. Other participants also commented on the struggle to strike a balance between global and more detailed measures, such as those based on person-level data. Both Clancy and Gage agreed that there needs to be the right balance between global measures that put minimal strain on those responsible for collecting data and more detailed measures. Gage noted that there needs to be a better understanding of issues involving cost shifting and unintended consequences, particularly with regard to new payment models. That kind of understanding will come with experience and feedback from those on the ground who are actually collecting data.
A participant noted that the ability to use information at the service level, where physicians can begin looking at data from other physicians in the same plan, can run into privacy issues that are covered by state regulations. This complexity raises significant implementation issues for health care systems that operate in multiple states. In a similar vein, participants commented that even when two organizations use the same measure, the way they implement those measures may differ, and even within one organization implementation of a measure may change over time.
Dixon-Woods, M., C. L. Bosk, E. L. Aveling, C. A. Goeschel, and P. J. Pronovost. 2011. Explaining Michigan: Developing an ex post theory of a quality improvement program. Milbank Quarterly 89(2):167–205.
HHS (U.S. Department of Health and Human Services). 2011. Report to Congress: National Strategy for Quality Improvement in Health Care. Washington, DC: Department of Health and Human Services.
———. 2012. Report to Congress: National Strategy for Quality Improvement in Health Care. Washington, DC: Department of Health and Human Services.
Pronovost, P., D. Needham, S. Berenholtz, D. Sinopoli, H. Chu, S. Cosgrove, B. Sexton, R. Hyzy, R. Welsh, G. Roth, J. Bander, J. Kepros, and C. Goeschel. 2006. An intervention to decrease catheter-related bloodstream infections in the ICU. New England Journal of Medicine 355(26):2725–2732.
Pronovost, P. J., C. A. Goeschel, E. Colantuoni, S. Watson, L. H. Lubomski, S. M. Berenholtz, D. A. Thompson, D. J. Sinopoli, S. Cosgrove, J. B. Sexton, J. A. Marsteller, R. C. Hyzy, R. Welsh, P. Posa, K. Schumacher, and D. Needham. 2010. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: Observational study. BMJ 340:c309.