Improving the U.S. health system depends on the ability to measure its performance effectively, along with the factors that shape that performance. Measurement is necessary to learn what works, to guide resources toward effective initiatives, and to promote accountability.
The nation's constellation of health measurement activities has proliferated out of interest in better targeting various initiatives—for example, local disease control, program planning, resource allocation, legislative and regulatory requirements, and monitoring of progress in health and health care. Expanded measurement capabilities have helped focus a variety of interventions across the health system, thereby contributing to positive impacts on health and health care. As understanding of the many factors shaping individual and population health has grown and the technical capacity for tracking has advanced, the scope of health measurement has broadened to include a large number of process and outcome targets relevant to health and health care, from social determinants and programs to physician and hospital performance, patient experience, and costs of care.
Along with this burgeoning measurement capacity have come certain challenges. Like any improvement activity, measurement requires up-front investment to create the necessary capabilities. Assessment needs to be efficient, yielding as much useful information as possible for a given investment of resources, but even so, existing measurement programs do not yet capture all of the key information needed for progress. Significant gaps exist in knowledge and understanding of what works in population health, quality care, cost control, and patient engagement, and those knowledge gaps are paralleled by measurement gaps.
This chapter provides an overview of the current landscape of health and health care measurement in the United States. It begins by summarizing policy initiatives that are drawing increased attention to the need for measurement. The chapter then describes the various purposes for health and health care measurement and the measurement activities that currently serve those purposes. Next is a discussion of the limitations of these current activities. The final section addresses the issue of the measurement burden on health care providers and organizations.
The multiple changes occurring throughout the health system create a compelling need for reassessment and sharpening of existing measurement activities. Rapidly evolving models for delivering, paying for, and organizing health care, as well as collaborations designed to improve health, all require new approaches to measurement for accountability. Some new forces are encouraging the integration of clinical care, while others are driving a community or regional approach whereby stakeholders collaborate to improve health care quality while controlling costs, and partnerships are bringing together health care and community organizations with a broad focus on improving health. These initiatives are occurring at multiple levels—national, state, regional, community, and institutional.
The movement to accountable care is a prominent example of the impact of new models of care on approaches to measurement. The establishment of accountable care organizations (ACOs) is a key feature of the Patient Protection and Affordable Care Act (ACA). ACOs are intended to replace the often fragmented and uncoordinated care system with a system that integrates the care a patient receives, with payment incentives aimed at individual and population health outcomes (Fisher et al., 2007). The Centers for Medicare & Medicaid Services (CMS), the federal agency responsible for implementing the ACO model, has launched several relevant programs, including the Medicare Shared Savings Program, the Pioneer ACO model, the Advance Payment Initiative, and Medicaid ACOs. In addition, private insurers, employers, and others have established ACO programs. Recent estimates suggest that more than 600 ACOs are now in existence (Peterson et al., 2014).
Numerous other care delivery reforms also call for tracking measures. Patient-centered medical homes, clinics devoted to high-risk patients, team-based care models, and retail clinics, for example, are changing the traditional capabilities, roles, and culture of care. Innovations in health care
payment, including bundled payments, pay-for-performance initiatives, global payments, and value-based insurance design, also are driving change throughout the health system.
Another ACA-related development affecting measurement priorities is the law’s creation of insurance marketplaces to expand individual access to affordable health insurance. The marketplaces, or exchanges, established under the ACA are not homogeneous: 16 states and the District of Columbia developed state-based marketplaces, 7 states developed marketplaces that are partnerships between the federal and state governments, and the marketplaces of 27 states are federally facilitated (KFF, 2014). The goal of these marketplaces is to provide a place for people to purchase individual insurance, with easily understandable information to support decisions among coverage options. They are coupled with other changes to insurance, such as the setting of essential benefits, the communication of benefits, and other regulatory requirements. Clearly, participants in the various marketplaces will depend on the generation of reliable data on which to base program operations and improvement priorities.
At the vanguard of the myriad changes occurring in health care delivery is the widespread adoption of electronic health records (EHRs) and other health information technologies, enabling the gathering and use of measurements on a wide range of services, costs, and outcomes. Recent policies, such as the financial incentives offered under the Health Information Technology for Economic and Clinical Health (HITECH) Act, have spurred the adoption and meaningful use of EHRs. The HITECH Act authorized a program of incentives and penalties that, according to Congressional Budget Office (CBO) estimates, amount to as much as $30 billion in additional federal Medicare and Medicaid payments (Blumenthal, 2009; Buntin et al., 2010). The adoption of EHRs has increased since the act’s implementation, yet further change is needed before all providers use interoperable, comprehensive systems. In 2013, 78 percent of office-based physicians used any type of EHR system, and 48 percent reported having a system that met the criteria for a basic system (Hsiao and Hing, 2014). The availability, interoperability, harmonization, and reliable use of EHRs are foundational to a successful national measurement capacity.
States have a key leadership role in reshaping health and health care, and their measurement needs and policies are therefore a priority. For example, Massachusetts enacted plans to expand insurance coverage through a Connector, a forerunner of the insurance exchanges developed under the ACA, and additional coverage options for low-income adults and those ineligible for employer-sponsored insurance (Raymond, 2011). Beyond coverage, the state has implemented programs aimed at improving quality and value, including payment reforms and quality improvement initiatives (McDonough et al., 2008; Raymond, 2011; Song and Landon, 2012).
Other states, such as Utah, also established marketplaces prior to the passage of the ACA through which their residents could purchase individual health insurance policies (Corlette et al., 2011). As another example of state reforms, Vermont has implemented the Vermont Blueprint for Health, which includes patient-centered medical homes, community-based support teams, a statewide health information network, and other enhanced data systems (Bielaszka-DuVernay, 2011). And Oregon is transforming its Medicaid program to deliver care through coordinated care organizations—designed to be advanced versions of ACOs—which have received additional support in exchange for a commitment to reducing per capita Medicaid spending (Stecker, 2013).
Still other initiatives—such as the Aligning Forces for Quality program, the Chartered Value Exchange program, Beacon Communities, and the Triple Aim Initiative—are aimed at driving change at the regional and community levels (Maxson et al., 2010; McCarthy and Klein, 2010; Painter and Lavizzo-Mourey, 2008; Young, 2012). Each has its own measurement requirements and contributes insights for the conversation on measurement. For example, the Aligning Forces for Quality program consists of 16 collaboratives across the country that convene multiple stakeholders to address local challenges in care. The collaboratives employ different strategies for measuring and reporting health system quality, cost, and patient experience; engaging patients in care and care redesign; and testing new payment models (AF4Q, 2013; Mende and Roseman, 2013; Painter and Lavizzo-Mourey, 2008; Roseman et al., 2013; Scanlon et al., 2012). Similarly, two Wisconsin multi-stakeholder groups—the Wisconsin Collaborative for Healthcare Quality and the Wisconsin Health Information Organization—are working to increase the supply of data on health care quality and value to support value-based payment (Toussaint et al., 2011). More than 30 Regional Health Improvement Collaboratives are in place across the United States.
The combination of these changes to care delivery, payment, and coverage necessitates new capabilities for measurement. Measurement programs need to be adjusted to account for new models of care; to respond to the emerging needs of health care improvement, payment, and accountability; and to enable sharing of information with patients and consumers on their care and coverage options. These changes also add to the urgency of the need for broad assessment and streamlining of the measurement system, with a reliable standardized set of measures at the core to guide action and assess results.
As discussed in Chapter 1, the health measurement enterprise has grown significantly over time, with new measures continually being developed and refined. More than 60 years ago, Congress established the National Committee on Vital and Health Statistics to identify the needs for health statistics, data, and information (HHS, 2000). Some 35 years ago, the national Healthy People initiative brought attention to the potential gains to be realized from health promotion and disease prevention activities, providing a view of the overall health of the nation, setting national goals and objectives for health improvement, and underscoring that the focus of measurement should be on matters most important to health outcomes (IOM, 1990). Since the publication of Healthy People: The Surgeon General’s Report on Health Promotion and Disease Prevention, the Healthy People initiative has updated its vision and assessment every decade, most recently with Healthy People 2020 (HEW, 1979; Koh, 2010). Since that time, moreover, the number of organizations involved in assessing the progress of the health system has grown substantially, reflecting the growing national interest in quantifying performance (as illustrated by the examples presented in Table 2-1). These initiatives vary in their scale, considering performance at the county, state, or national level; in their focus, from physicians to hospitals and health plans; and in their data sources, from surveys and registries to clinical and payment records (AHRQ, 2013; Hussey et al., 2009; IOM, 2006; NQF, 2013d; Wold, 2008).

TABLE 2-1 Example Measure Set Sponsors and Users for the Four Domains Influencing Health

|Domain|Responsible Organization (measures/measurement activities)|

SOURCE: Adapted from IOM, 2013a.

Given the diverse sources and purposes of existing data, substantial work is needed to develop high-quality core measures.
Paralleling the diversity of organizations involved in measurement is the variety of uses for health measures: care improvement at multiple levels; disease surveillance, prevention, health promotion, and population health management; costs and outcomes reporting and transparency; health and safety regulation; professional certification and facility accreditation; payment incentives, benefit design, and purchasing decisions; tracking and reporting of grant performance; health services and effectiveness research; and patient and public experience and satisfaction (Berwick et al., 2003; IOM, 2006, 2011a, 2013a,b). Variation exists as well in the application of measures for these different purposes.
One analysis found that measures are used most commonly in health care for quality improvement and public reporting; they are used for payment almost half as frequently, and an even smaller number of measures are used for accreditation, certification, credentialing, and licensure (Damberg et al., 2011). A measure’s intended application is important to consider, as
the requirements placed on measures differ for each type of use. Applying a measure to payment or public reporting demands stronger statistical validity and conceptual accuracy than using it for internal improvement. Measures must therefore be considered in light of their intended application, which determines their suitability. The various uses of measures and related measurement activities are summarized in the remainder of this section, as well as in the appendixes to this report.
The Centers for Disease Control and Prevention’s (CDC’s) National Center for Health Statistics (NCHS) bears primary responsibility at the federal level for monitoring overall population health status. Its maintenance of vital statistics and data on reportable diseases is based on a blend of national standards and local application. Both vital statistics, which include births and deaths, and data on reportable diseases are recorded separately by each of the 50 states, the District of Columbia, and the territories, with national statistics being compiled from these local data through agreements with national entities. The agreements include some requirements for the data, as the decentralized data collection process introduces challenges of data consistency, comparability, quality, and timeliness (NRC, 2009). These data present an almost complete picture of the health status of the nation: one study, for example, found that the vital statistics system captures more than 99 percent of the nation’s births and deaths (Guyer et al., 2000).
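The compilation process described above, in which separately maintained jurisdiction records are rolled up into national totals subject to consistency checks, can be illustrated with a minimal sketch. The jurisdiction names, counts, field layout, and single consistency check below are hypothetical stand-ins, not NCHS's actual submission formats or validation rules:

```python
from collections import Counter

# Hypothetical per-jurisdiction vital statistics submissions. Real
# national vital statistics flows use standardized certificate formats;
# this sketch only illustrates the roll-up and one consistency check.
state_submissions = [
    {"jurisdiction": "State A", "year": 2013, "births": 52000, "deaths": 41000},
    {"jurisdiction": "State B", "year": 2013, "births": 130500, "deaths": 98700},
    {"jurisdiction": "State C", "year": 2013, "births": 88250, "deaths": 67300},
]

def compile_national_totals(submissions, year):
    """Aggregate jurisdiction-level counts into national totals,
    rejecting records from the wrong reporting year (a stand-in for
    the consistency checks applied to decentralized data)."""
    totals = Counter()
    for record in submissions:
        if record["year"] != year:
            raise ValueError(f"Inconsistent year in {record['jurisdiction']}")
        totals["births"] += record["births"]
        totals["deaths"] += record["deaths"]
    return dict(totals)

print(compile_national_totals(state_submissions, 2013))
# {'births': 270750, 'deaths': 207000}
```

In practice, the agreements between NCHS and the jurisdictions cover far more than a reporting-year field: coding standards, timeliness, and completeness requirements all address the consistency and comparability challenges the chapter notes.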
Since the early 1960s, the CDC also has administered the periodic comprehensive National Health and Nutrition Examination Survey (NHANES) and the National Health Interview Survey, which provide data on the health status of adults and children. Besides providing information about a variety of national health issues, the NHANES supports epidemiologic research and assessment of health promotion and disease prevention programs (NCHS, 2013).
In addition to these targeted efforts, as discussed above, the U.S. Department of Health and Human Services (HHS) is in its fourth decade of producing, through the Healthy People initiative, regular national assessments of the nation’s health, as well as progress on goals and objectives established for its improvement. The most recent of these assessments, Healthy People 2020, has a five-part mission: develop priorities for nationwide health improvement; expand awareness of the determinants and factors influencing health, disease, and disability; identify measurable objectives and goals at multiple levels; engage sectors across the health system to improve policies and practice; and describe areas in which knowledge needs to be increased through research and data (IOM, 2011b). Pursuant to these
goals, it is necessary to identify indicators that can be used to gauge meaningful progress on the nation’s health. Healthy People 2020 contains more than 1,200 objectives that can be used to monitor health, and its Leading Health Indicators are a focused set of 26 indicators in 12 categories that collectively capture the major trends in the public’s health (see Appendix D for the full list).
HHS also collects and reports data and monitors progress on key issues related to prevention through the National Prevention Strategy, housed in the office of the Surgeon General of the U.S. Public Health Service. Released in 2011, the National Prevention Strategy presents a vision for the future of prevention in the nation, along with goals, priorities, and associated resources. This initiative and its related activities, in such areas as smoking cessation, addictive behaviors, community health and safety, and health disparities, depend on reliable comparable measures for tracking progress (HHS, 2011).
A number of other measurement activities focus on assessing progress in population and community health. For example, the County Health Rankings project reports status and trends related to physical environment, social and economic factors, clinical care, health behaviors, and overarching health outcomes for nearly every county in the United States. Similarly, America’s Health Rankings, a program administered by United Health Foundation, uses measures of both health outcomes and health determinants to develop assessments of health in different states (United Health Foundation et al., 2012).
Another related initiative is the Key National Indicators project, overseen by the congressionally mandated Commission on Key National Indicators. The Key National Indicators, currently being maintained by the nonprofit State of the USA, encompass the state of the nation more broadly, with a focus on indicators of economic growth, development, and stability, but they also cover the state of American health and related health factors such as environment, education, and employment (The State of the USA, 2014).
Multi-stakeholder collaboratives have developed programs for assessing health in communities with the goal of understanding how to improve their health outcomes. The Network for Regional Healthcare Improvement (NRHI) serves as a national association of Regional Health Improvement Collaboratives, coordinating and advancing initiatives focused on improved health care quality and payment reform across the nation (Rosen et al., 2012).
A rapidly growing source of information for health-related measurement is patient-generated health data (PGHD) and data gathered via personal or remote site digital devices. According to the Office of the National Coordinator for Health Information Technology (ONC), PGHD is information created, recorded, gathered, or inferred by patients or their designees about health-related experiences and concerns (ONC, 2013). Traditionally, this largely historical information was provided by the patient verbally or in writing during clinical encounters, with no systematic processes in place to harness its utility for ongoing self-care management and longitudinal monitoring. The availability and characteristics of PGHD have changed dramatically in the last few years, driven in part by sophisticated technology capable of monitoring domains of wellness (e.g., exercise, diet, sleep) and patient-reported observations of daily life with illness. Additionally, health care reform legislation such as the ACA introduced new payment and delivery models that support the use of home-based sensors and monitoring devices for the collection of biometric data (e.g., blood glucose meters, pacemakers, pulmonary function devices). Federal certification criteria for EHRs qualifying under the Meaningful Use Program of HITECH include patient portals. Recommendations being considered for stage 3 of Meaningful Use include further support for PGHD by 2017. One limitation is that PGHD comes only from people already engaged with the clinical care system, so results based on these data sources may be biased or of limited generalizability.
Although the practice is nascent, some health systems have been experimenting with integrating PGHD into their clinical records. Group Health’s electronic Health Risk Assessment (e-HRA), an early adopter, feeds patient-reported data from the patient portal into the EHR. The Palo Alto Medical Foundation conducted a clinical trial (EMPOWER-D) with wirelessly uploaded glucometer readings as well as patient-entered activity and meal information, finding that a larger proportion of patients contributing PGHD than of controls showed improvement in their A1C readings, demonstrating better control of their diabetes. Partners HealthCare launched a system in 2013 that uploads data from medical devices directly into the patient’s EHR. The Veterans Health Administration began electronic health monitoring a decade ago and in 2013 monitored more than 140,000 veterans with high-risk chronic conditions (e.g., diabetes, hypertension, chronic obstructive pulmonary disease [COPD]), depression, posttraumatic stress disorder (PTSD), weight management issues, substance abuse, and spinal cord injuries (Darkins et al., 2008). And a study using pre-visit electronic journals at Brigham and Women’s Hospital is shedding light on the process of engaging patients in planning ahead for a clinical visit and offers an opportunity to
integrate PGHD into the clinical work flow. Overall, these efforts, along with similar programs in other large health systems, such as Kaiser Permanente, Vanderbilt, and Geisinger, have shown promising results in supporting better health for individuals at lower cost to the system.
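The integration pattern these systems share (device or portal readings are validated, attached to the patient's record, and summarized for clinical review) can be sketched in a few lines. The field names, flagging threshold, and record layout below are hypothetical illustrations, not the schema of any system named above:

```python
from datetime import datetime

# Hypothetical patient-generated glucometer readings (mg/dL), such as
# might arrive from a home device or a patient portal. The field names
# and flagging threshold are illustrative, not any vendor's schema.
readings = [
    {"patient_id": "p-001", "taken_at": "2013-06-01T07:30", "glucose_mg_dl": 112},
    {"patient_id": "p-001", "taken_at": "2013-06-02T07:45", "glucose_mg_dl": 145},
    {"patient_id": "p-001", "taken_at": "2013-06-03T07:20", "glucose_mg_dl": 301},
]

def ingest_pghd(ehr_record, device_readings, flag_above=250):
    """Attach validated device readings to a patient's record and flag
    out-of-range values for clinician review."""
    for r in device_readings:
        datetime.fromisoformat(r["taken_at"])  # reject malformed timestamps
        entry = dict(r, flagged=r["glucose_mg_dl"] > flag_above)
        ehr_record.setdefault("pghd", []).append(entry)
    values = [e["glucose_mg_dl"] for e in ehr_record["pghd"]]
    ehr_record["pghd_mean_glucose"] = sum(values) / len(values)
    return ehr_record

record = ingest_pghd({"patient_id": "p-001"}, readings)
print(record["pghd_mean_glucose"])                 # 186.0
print(sum(e["flagged"] for e in record["pghd"]))   # 1 reading flagged
```

A production system would also need to carry consent, provenance, and device-calibration metadata alongside each reading, which is part of why the utility of PGHD remains largely unstudied.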
Since 2011, ONC has supported a series of reports and expert panels seeking insight into the opportunities and challenges associated with the use of PGHD in health care. These initiatives have explored a range of topics, including potential policy levers; the need for data standards; and the value of PGHD in achieving the three-part aim of better care, lower cost, and better health within a continuously learning health system (Shapiro et al., 2008). Many measurement organizations, including the National Quality Forum (NQF) and the National Committee for Quality Assurance (NCQA), have taken notice of PGHD. Working with the Agency for Healthcare Research and Quality (AHRQ), NQF identified patient-reported outcomes and patient-generated data in EHRs as priorities for the 2012 National Strategy for Quality Improvement in Healthcare (HHS, 2012). NCQA recently completed a comprehensive report on the use of health information technology to support patient and family engagement that includes support for relevant PGHD as a contributor to coordinated care (Paget et al., 2014).
Another growth area for PGHD relates to patient-reported outcomes. Now that many Americans’ health information is captured and accessible electronically—by both providers and patients—the ability to obtain ongoing feedback from patients on their symptoms, pain, and functional status could make important contributions to evaluation of the impact of interventions and assessment of outcomes, although the quality and accessibility of these data are currently limited. Using the digital infrastructure now being established, the sampling of patient-reported outcomes can not only guide treatment of individuals but also provide outcomes for clinical research. Patient-reported outcomes are important measures that matter to people, which is a key consideration in the establishment of core measures.
While rapid growth has occurred in the potential use and value of PGHD, its utility remains largely limited and unstudied. Recently, NQF convened a multi-stakeholder group to provide guidance on priorities for the development and endorsement of performance measures for person-centered care and outcomes, in which PGHD and patient-reported outcome data play an important role. Patient-powered research networks (e.g., PatientsLikeMe, ImproveCareNow) are giving patients, researchers, and clinicians an unprecedented opportunity to capture the full patient experience in data models amenable to measurement development.
Virtually all health care delivery organizations use measures for quality improvement purposes, from improving outcomes for specific procedures to optimizing operations for an entire institution. It is important to note that quality improvement places different demands on measurement than do other uses, such as payment or public reporting. Therefore, quality improvement initiatives can use measures that may not be appropriate for other purposes—depending on the measure’s accuracy, precision, evidence base, or representativeness—and thus present an opportunity to test measures in practice without the consequences of changing financial incentives or impacting an organization’s reputation. For example, Intermountain Healthcare has used care process measures embedded in its clinical data systems and applied across clinical units. One result of this quality improvement effort was reengineering of the organization’s labor and delivery protocol to reduce elective deliveries, unplanned cesarean sections, and admissions to newborn intensive care units, thereby saving an estimated $50 million each year in the state of Utah (James and Savitz, 2011).
Over the past quarter century, a number of organizations have assumed various responsibilities for advancing broad-based quality improvement activities. NQF was founded in 1999 in response to a presidential commission’s recommendation to develop a forum on health care quality measurement and reporting (NQF, 2013a). The organization’s mission comprises three aims: “build consensus on national priorities and goals for performance improvement, and work in partnership with the public and private sectors to achieve them”; “endorse and maintain best-in-class standards for measuring and publicly reporting on healthcare performance quality”; and “promote the attainment of national healthcare improvement goals and the use of standardized measures through education and outreach programs” (NQF, 2013c, p. 68). Three recent NQF initiatives have garnered significant national attention. First, the National Priorities Partnership, a public-private partnership comprising more than 50 organizations, provided stakeholder input into the development of the National Quality Strategy. Second, the Measure Applications Partnership, established under the ACA, seeks to align measures across federal programs and between the public and private sectors. Notably, the Measure Applications Partnership provides pre-rule-making input for federal public reporting and performance payment programs, and it has introduced the concept of families of measures for aligning measurement of specific concepts. Finally, the NQF Buying Value initiative convened private and public purchasers aiming to transition toward paying for value, with the goal of aligning value-focused purchasing efforts to increase their success.
NCQA, a private, nonprofit organization founded in 1990 “to transform health care quality through measurement, transparency, and accountability,” represents the first broad-based attempt at value-based purchasing (NCQA, 2013a). NCQA stewards the Healthcare Effectiveness Data and Information Set (HEDIS), which consists of approximately 80 measures in five domains and is used by more than 90 percent of health plans to measure performance (NCQA, 2013b,c). Beyond this tool, NCQA offers accreditation programs (e.g., for ACOs), certification programs (e.g., for disease management), physician recognition programs (e.g., for patient-centered medical homes), and health plan report cards.
A third organization working outside government to promote quality improvement is the Institute for Healthcare Improvement (IHI), founded in 1989. IHI works closely with health systems to drive down costs and enhance sustainability in both clinical and operational settings by “identifying proven and evidence-based strategies that demonstrate efficiency through the removal of waste, harm, and variation” (IHI, 2014). In the course of its work, IHI has developed a number of measures for use by the organizations within its sphere of activities. Its quality-based programs include diagnostic assessments of measurement methodologies, comprehensive approaches to the scaling up of efficiency efforts, and approaches to improving quality and lowering costs for people with chronic conditions (IHI, 2014). IHI accelerates improvement through its partnerships and integrated strategy objectives by cultivating motivation for transformation and putting strategic plans into action. IHI’s formulation of the Triple Aim of better care, lower cost, and better health has become a standard reference point for many health improvement efforts.
The Joint Commission also plays an important role in assessment of care quality. As an independent nonprofit accreditation body, the Joint Commission administers on-site surveys to thousands of health care systems across the nation. The decision on each health care organization’s accreditation is made public, ensuring transparency to all interested stakeholders and the community at large. In many states, Joint Commission accreditation fulfills state regulatory requirements for health care providers, as well as requirements for Medicare and Medicaid certification (Joint Commission, 2014).
Within the federal government, health care quality improvement efforts have been stewarded by several agencies, in particular CMS, AHRQ, and the CDC, with coordination by the Secretary of HHS. In addition to the NCHS programs described above for assessing population health, the CDC operates a number of categorical clinical preventive service programs (e.g., immunization, cancer screening) with elements aimed at improving the quality of those services, in part through measurement.
CMS has perhaps the greatest impact in the quality measurement arena, leveraging measures for multiple purposes in Medicare, Medicaid, and
the Children’s Health Insurance Program (CHIP). It has applied measures to its payment programs, such as the Medicare Shared Savings Program (ACOs), Medicaid health homes, and Innovation Center projects; public reporting programs, such as Hospital Compare, Physician Compare, and Medicare Advantage Star Ratings; and quality tracking, such as Medicaid Adult Health Care Quality measures and Medicaid/CHIP Children’s Health Care Quality measures. Moreover, CMS provides technical assistance on measurement through the Quality Improvement Organization program and coordinates with a variety of measurement organizations on measure development and accreditation.
CMS also is working with ONC within HHS to spearhead the implementation and application of EHRs and the exchange of health information across the system. To further encourage the adoption of health information technology, two HHS programs—the Medicare EHR Incentive program and Medicaid EHR Incentive program—provide financial incentives for providers and hospitals to use EHRs meaningfully. The capture and reporting of quality measures are required for Meaningful Use.
AHRQ has undertaken a number of projects aimed at improving measurement of health care performance. These include assessments of national health care performance through the National Healthcare Quality Report and National Healthcare Disparities Report, which describe the current status and trends in care effectiveness, patient safety, access, timeliness, and patient-centeredness. AHRQ also has developed a number of indicators for gauging health care quality, including the Prevention Quality Indicators, Inpatient Quality Indicators, Pediatric Quality Indicators, and Patient Safety Indicators. Moreover, AHRQ has supported and overseen the Consumer Assessment of Healthcare Providers and Systems (CAHPS) program, which uses surveys to gather information on patient and consumer care experiences in a variety of settings. Different surveys are available for hospitals, health plans, surgical care, dental care, and a range of other care types and settings. AHRQ further stores evidence-based measures and measure sets in the National Quality Measures Clearinghouse and compiles measures used by HHS in the HHS Measure Inventory.
The U.S. Departments of Defense and Veterans Affairs (DOD and VA) have pursued a variety of initiatives aimed at improving health care performance through measurement. For example, the Military Health System’s Quadruple Aim Innovation Challenge is aimed at promoting innovation in the health system around the quadruple aim of readiness, population health, experience of care, and per capita cost (HIMSS, 2012). At the VA, the Veterans Affairs Hospital Compare program allows patients and others to compare quality and performance at different hospitals and track progress on specific conditions over time (VA, 2011).
Finally, in addition to CAHPS, a variety of innovative projects are under way to further develop and refine the ability of the care system to monitor and assess patients’ perspectives. An example is the CollaboRATE Score, a project of the Dartmouth Institute, which is in pilot testing as a survey tool for gathering feedback on patients’ experience of shared decision making (Barr et al., 2014).
Comparisons offer inherent motivation and focus for progress. Measurement is a key tool for understanding and addressing variations within and among local clinical care practices, health care organizations, and the broader care system, enabling individuals and organizations to identify best practices in terms of positive patient health outcomes and improved value. For example, using a common measurement framework to understand variations in clinical outcomes of cardiac surgery can help identify the best practices of high performers throughout an organization (IOM, 2013a). In its studies of regional variation in health care spending and outcomes, Dartmouth has used benchmarking to show that cost, quality, and health care practice vary markedly across the country (Fisher et al., 2003a,b, 2009). A number of similar analyses of variations are under way.
CMS administers several comparative programs, including accountability systems such as Medicare Hospital Compare and Physician Compare that provide information for the public, and programs that report data on Medicare and Medicaid performance in terms of geographic variation and health care expenditures. CMS also operates a variety of systems that collect monitoring and compliance data to ensure that high-quality care is delivered to Medicare and Medicaid beneficiaries.
Another group active in promoting transparency is the Health Care Cost Institute (HCCI). HCCI, a nonpartisan and nonprofit organization, was established in 2011 to compile research and provide accurate information on costs associated with the U.S. health care system. Focusing on private health insurance claims data, HCCI strives to make transparent important information regarding the health care spending of privately insured individuals in the United States. To this end, HCCI developed a national claims database, populated by the nation’s largest insurers and available to researchers interested in the causes of health care costs and utilization. In addition, HCCI issues biannual reports on regional, state, and national trends in health care spending for the general public, and it also aggregates these trends and conveys their implications and impact at the policy level.
States have a long history of publicly reporting information on health care performance. One of the first state performance reports came from
the New York State Department of Health, which started publishing data on risk-adjusted mortality for cardiac bypass surgery in 1989 (Chassin, 2002). The number of such programs has continued to grow, and half of all states now sponsor a program for public reporting on care quality (Ross et al., 2010). These programs vary considerably as to whether they include information on care processes or health outcomes, whether they describe performance only for common diseases or for other diseases as well, and how their data are generated (Ross et al., 2010). In addition to public reporting, more than half of all states operate a hospital adverse event reporting system that requires hospitals to report the incidence of specific types of patient harm. These systems vary significantly from state to state as to what types of adverse events must be reported (Levinson, 2008; Wright, 2012). One limitation of these systems is that, because they focus on care institutions and providers, they do not cover the state's full population, excluding individuals who are not receiving care.
Publicly reported measures have been correlated with improved performance in the measured area and with organizational improvement activities (Hafner et al., 2011; Hibbard et al., 2003, 2005). Research found, for example, that publicly reported measures were associated with increased compliance with best practices in the use of prophylactic antibiotics for surgical patients (Chassin et al., 2010), improved quality of heart attack care (Werner and Bradlow, 2006, 2010), and improved compliance with recommended pneumonia care (Joint Commission, 2011).
Clinical registries have been used by a number of professional societies for benchmarking across care systems as well as for monitoring and for broader clinical research on health care procedures and outcomes. Registries are intended to collect data for a specific condition, disease, or treatment in a uniform way over time. Thus they can provide a detailed, consistent picture of a certain disease population or treatment that can be used for benchmarking against different regions or other characteristics, as well as over time. The data contained in registries tend to be more detailed and consistent than data available from other sources, which makes registries useful for determining the relative effectiveness of different treatments and interventions. However, these data sources also are limited in scope because their focus is on the subpopulation of people who are receiving care rather than on the total population.
Measurement in health care also is aimed at ensuring compliance or performance on certain dimensions of quality or service—for example, as a condition of accreditation or as a tool for ensuring compliance with payment or safety standards. The Joint Commission, for instance, provides
accreditation for a variety of health care organizations, from hospitals to behavioral health treatment facilities. To be accredited, these institutions must collect and submit to the Joint Commission data on a variety of performance measures. NCQA accredits health plans and offers voluntary programs for new care delivery models (Berenson et al., 2013). Examples of measurement programs from both organizations are included in the appendixes to this report. Similar programs, aimed at maintaining a baseline level of performance across diverse locations, populations, and facilities, are administered by organizations including the Environmental Protection Agency and the Occupational Safety and Health Administration.
Public and private payers have introduced multiple new payment models in an effort to move away from fee-for-service payment and to align incentives toward high-quality, high-value care. These new payment models often require clinicians and hospitals to collect and report multiple measures on care processes and outcomes. In some cases, financial incentives are directly tied to performance on a given measure, while in others a measure is used to ensure that quality and outcomes are not eroded under the new payment method (Schneider et al., 2011).
One recent change to the measurement capabilities of public payers is the Center for Medicare & Medicaid Innovation (CMMI), which has the ability to test, evaluate, and expand care delivery and payment models in Medicare, Medicaid, and CHIP. If these models are found to be successful, the Secretary of HHS has the authority to scale them up nationally. CMMI has flexibility in measuring success in quality and outcomes, although all successful programs must be verified by the CMS actuary as reducing costs without affecting quality or as improving quality without raising costs. Another new measurement capability for public payers is State Innovation Waivers, which will allow states to test new models for their insurance exchanges, qualified health plans, and provisions such as cost sharing and coverage (Alker and Artiga, 2012; Artiga, 2011). Beyond payment, as noted above, organizations such as the Joint Commission, which accredits approximately 20,000 health care organizations and programs, and NCQA play central roles in the accreditation and certification of health care in the United States.
Health-related federal grants to state and local governments have increased over the past three decades, amounting to nearly $300 billion in fiscal year 2011, a figure that includes support for both the state Medicaid
programs and the various categorical initiatives (CBO, 2013). The focus of these grant programs has shifted over time, with funding increasingly concentrated in Medicaid and other health programs and decreasing for other activities.
From a measurement perspective, an especially important trend has been the federal government’s use of its waiver authority to give states more flexibility in program design and to provide federal support for Medicaid and CHIP in return for a commitment to demonstrating progress toward agreed-upon targets. These waivers give states the flexibility to tailor programs to their needs and priorities, such as by expanding coverage to individuals not otherwise eligible, providing coverage for services not typically covered by the programs, or applying delivery system innovations to improve the quality and value of care (Alker and Artiga, 2012; Artiga, 2011).
For research and demonstration waivers, states are required to have an approved evaluation strategy in place (Alker and Artiga, 2012; Artiga, 2011). States have substantial flexibility in how they carry out their evaluation—including experimental and other quantitative and qualitative designs—as long as the final evaluation design is approved by CMS and published publicly (HHS, 2013). One commonality among the areas measured is program cost, as all approved projects must be budget neutral to the federal government over the course of the waiver.
The specific measures and strategies used to assess performance and provide accountability vary, with the details being determined by the authorizing and appropriations legislation; the agency’s grant management processes, such as funding announcements and notification; and government-wide grant management legislation, regulations, and executive orders. While substantial variation exists, recent reviews of federal grants have identified opportunities to improve the measures and data used to track program performance (GAO, 2006, 2012).
Measures also are frequently used by federal agencies in evaluating the results of grants made to states and localities. One prominent example is the Preventive Health and Health Services block grant, which allows states to pursue projects aligned with the Healthy People program. The program incorporates a variety of standardized measures of performance (CDC, 2011). Another example is the CDC’s Immunization Grant Program (Section 317), which provides aid to underinsured and low-income families for whom vaccinations impose a significant cost challenge. The Section 317 program also provides funding for immunization infrastructure (CDC, 2007). Similar grant programs are in place to provide added support in health programs related to cancer screening, community health, and other focal areas.
With any measurement activity, the reliability of the data collected is a function of the ability to guard against hazards that are inevitably encountered in the design, execution, analysis, and interpretation of results. The statistical and analytical challenges associated with health and health care assessment have been a focus of various assessments by the Institute of Medicine (IOM) and are summarized in Table 2-2. These challenges include gaps in coverage, comparability, consistency across sources and time, and statistical power. Other limitations in the ability to use the measures gathered relate to the capability to sustain data collection, the availability of and linkage to accountability levers, data quality and availability, and the programmatic distortions that may occur when an organization’s
TABLE 2-2 Key Considerations in Addressing Statistical and Analytical Challenges of Measurement
| Statistical or Analytical Challenge | Key Considerations |
| --- | --- |
| Attribution | When essential, can patient health outcomes, such as for acute or chronic conditions, be attributed to a specific clinician or health care organization? |
| Data sources | Can a measure be calculated from existing electronic health records or related sources such as survey, claims, and laboratory data? |
| Statistical accuracy and patient samples | For the average provider or health care organization, will there be a sufficient number of patients to enable estimating a performance measure with adequate confidence to support its use in a payment mechanism? |
| Tailoring care | Does a measure exclude patients who should not receive certain care based on clinical practice guidelines? |
| Risk adjustment | When necessary, can performance measures be properly adjusted for different patient populations with different risk factors, demographics, and health conditions? |
| Setting benchmarks | Do sufficient data exist with which to establish a performance benchmark for a measure, as well as for consistent attribution, risk adjustment, and data quality and completeness? |
| Potential for gaming | How difficult is it to change a measure’s score without any improvements to care or health? Will the measure’s value be altered by excluding patients with significant illnesses or health conditions? |
| Validity | How well does a measure capture the process or outcome it is intended to assess? |
SOURCES: Adapted from IOM, 2012, and Schneider et al., 2011.
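The sample-size concern in Table 2-2 (statistical accuracy and patient samples) can be made concrete with a short calculation. The sketch below uses the Wilson score interval for a proportion; the panel sizes and rates are hypothetical illustrative values, not figures from this report.

```python
import math

def rate_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% interval for an observed performance rate.

    More reliable than the normal approximation for the small patient
    panels typical of individual clinicians.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

# A clinician with 25 eligible patients, 20 of whom received recommended care
lo, hi = rate_ci(20, 25)
print(f"25-patient panel:    {lo:.2f}-{hi:.2f}")   # roughly 0.61-0.91

# The same 80 percent rate measured across a 2,500-patient system
lo, hi = rate_ci(2000, 2500)
print(f"2,500-patient panel: {lo:.2f}-{hi:.2f}")   # roughly 0.78-0.82
```

The same underlying 80 percent rate yields a 30-point-wide interval for a small panel but a 3-point-wide interval at the system level, which is why measures tied to payment demand careful attention to sample size.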
compass is drawn to process rather than outcome measures. These issues are discussed below.
With efforts to initiate, require, and collect measures being carried out by many often unconnected and uncoordinated sources, inconsistencies and gaps are inevitable (IOM, 2006; Jacobson and Teutsch, 2012; NQF, 2013b,d; Schneider et al., 2011; Thompson and Harris, 2001). Many measurement initiatives focus on processes of health care, with limited consideration of outcomes (NQF, 2013b). Current measurement programs often do not adequately address key issues related to the leading causes of illness and death (Thompson and Harris, 2001). Examples of the many gaps in current measurement efforts include
- Patient engagement—few capabilities to assess patient-centered care and patient engagement;
- Care quality—limited scope of quality measurement for certain areas, such as special populations (e.g., children/adolescents, patients with multiple chronic conditions, patients with rare diseases, patients dually eligible for Medicare and Medicaid), care access and disparities, care coordination and transitions, and broader longitudinal accountability (such as over a patient’s entire course of treatment or for overall health outcomes);
- Value—limited capacity to assess value, affordability, waste, and overuse; and
- Healthy people—small number of measures that assess population health and well-being outside of the health care system, the use of high-impact clinical preventive services, and childhood development and health (IOM, 2006; Jacobson and Teutsch, 2012; NQF, 2013d; Schneider et al., 2011).
Another factor limiting the efficiency of measurement is the inadequate level of interoperability among different data sources. For instance, measurement for health monitoring is challenged by the limited connection between clinical data sources and public health surveillance systems, except in some pilot initiatives (Klompas et al., 2012a,b). As a consequence, measure results cannot reflect the richness of the data available, or information must be entered redundantly depending on the data sources drawn upon for calculation.
In many areas, moreover, comprehensive measures are lacking for high-level assessment of complex yet easily understood concepts. Gross domestic product, for example, is readily understood as an indicator for the economy, even though it is the product of a complex measurement algorithm that yields a single number. Similar summary measures are needed in areas of health, including social determinants, environmental health, cost burden, care quality, and care safety.
With the health measurement landscape being dominated by measures developed and oriented around the needs and priorities of individual departments, institutions, agencies, and programs, very few measures provide insights on comparative aspects of health or performance when applied at higher or lower levels of aggregation, or even across programs at the same level. Even when health data on a particular issue are available at higher levels of aggregation—nations, states, groups of hospitals—it can be difficult to find timely, meaningful information about health processes, outcomes, or costs at the level of individual hospitals, health care providers, or patients. What may be useful to payers, regulators, accreditors, and others concerned with compliance and with broad mandates may be of limited utility for patients, providers, and other stakeholders in health decision making and quality improvement programs. Even data available for assessing similar parameters may have been analyzed or presented in ways sufficiently different to limit comparison.
Figure 2-1 illustrates the lack of comparability and consistency among measurement programs by summarizing the results from a survey of 48 state and regional measure sets. This study found that only 20 percent of measures were used by more than one program, and none of those measures were used by every program surveyed. Measure alignment is further challenged by the modification of existing measures and the creation of homegrown ones. The study found that more than 30 percent of measures surveyed were either modified or homegrown; 80 percent of programs had modified at least one existing measure, and 40 percent of programs had created at least one new measure (Bazinsky and Bailit, 2013).

FIGURE 2-1 Properties of different state and regional measure sets, highlighting the limited alignment (left) and usage of standard, modified, and homegrown measures (right).

SOURCE: Data drawn from Bazinsky and Bailit, 2013.
Various statistical and analytical challenges limit the development of reliable insights from measures across time, across organizations, and across levels of aggregation. Current measurement efforts have difficulty attributing a patient’s health outcomes to a particular intervention or clinician’s actions. This difficulty is due in part to the often long time lags, sometimes years or decades, between care for some conditions—especially chronic diseases—and changes in a patient’s health. The same is true for population health interventions, in particular for social or environmental interventions. The time lags are long, the relationships complex, and specific attribution virtually impossible. A program aimed at preventing the development of diabetes in children would be difficult to evaluate immediately after implementation, as its effects would not be expected to manifest for several years. Moreover, it can be difficult to separate the impact of care from the impact of other health factors such as diet, physical activity levels, smoking, and substance abuse. For example, a hospital serving a relatively low-income community may have lower scores on quality measures than a hospital serving a relatively high-income community because of differences in the populations served rather than meaningful differences in the quality of care provided. At the same time, differences in quality may be at work: failure to communicate or engage patients effectively, provision of different services to those with less ability to pay, or other reasons for suboptimal delivery of care. As illustrated in Table 2-2, statistical and analytical challenges also include adjusting measures for different populations of people, attributing performance on a measure to a specific clinician or organization, and ensuring that a measure excludes patients who should not receive a given treatment or intervention. 
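The population-mix problem described above is commonly handled with risk adjustment via indirect standardization, comparing each organization's observed event count with the count expected given its patient mix. A minimal sketch (the hospitals, patient counts, and expected risks are hypothetical illustrative values):

```python
def risk_adjusted_rate(observed_events: int, expected_risks: list[float],
                       overall_rate: float) -> float:
    """Indirectly standardized rate: (observed / expected) * overall rate."""
    expected_events = sum(expected_risks)
    return (observed_events / expected_events) * overall_rate

# Hospital A serves a sicker, lower-income population: each of its 100
# patients carries a higher expected risk of the adverse outcome.
hospital_a = risk_adjusted_rate(12, [0.10] * 100, overall_rate=0.08)

# Hospital B serves a healthier population with lower expected risk.
hospital_b = risk_adjusted_rate(6, [0.05] * 100, overall_rate=0.08)

# Raw rates (12% vs. 6%) suggest A is worse; after adjusting for patient
# mix, both hospitals perform identically relative to expectation.
print(f"A adjusted: {hospital_a:.3f}, B adjusted: {hospital_b:.3f}")
```

In this constructed example both adjusted rates come out equal, illustrating how an unadjusted comparison would have penalized the hospital serving the sicker population.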
Many of these considerations are focused on measures used for payment and public reporting, although they remain applicable to other dimensions of the health system and to other uses. Further, quality measures may emphasize errors of omission over errors of commission, such as overutilization, in part because payment systems incentivize volume.
One key challenge for health and health care measurement is to ensure that systems are in place to allow for and encourage continuous improvement as underlying technological capabilities evolve. New technologies, particularly mobile technologies, may augment measurement capabilities in diverse health care settings and should be incorporated into routine practice as they become viable. Emerging devices can continually measure specific aspects of an individual’s physical state, allowing for a more complete picture of the individual’s health status and the impact of various interventions. These evolving technologies also present a challenge for total population data strategies, which often rely on telephone surveys that suffer from increasingly poor response rates and that exclude the subpopulation of people who use cell phones exclusively.
The expected flood of new data from these personal devices will have implications for what is measurable and how actionable different measures are. In addition, new challenges will arise—from the interoperability of different devices, to the capabilities for analysis and use of these new data, to the privacy and security of the data generated. And for mobile and non-mobile technologies alike, any measurement initiative must consider how measures will be updated and integrated as new technologies emerge.
Payment reform may also alter the landscape for health care measurement. Measurement data coupled with supportive financial incentives can be a powerful motivator for system-wide improvements. Recent payment reforms include a shift away from the fee-for-service model through the development of ACOs and other models that reward value rather than volume in health care, and they may encourage more meaningful patient-provider interactions beyond the provision of billable tests and services. At the same time, the move toward bundled or global payments could reduce the amount and type of data collected—particularly claims data—by leading to assessment of care at the event or episode level rather than at the level of individual services rendered.
Lastly, it is important to ensure that a core set of measures is forward looking and reflects continuous learning and improvement. To this end, a process is needed for continuously evaluating the utility of measures and pruning those that prove unnecessary, such as those for which near-universal compliance has been achieved, to prevent the measurement burden from increasing indefinitely. Furthermore, it is important that measurement itself be a learning system that improves over time and leverages advances in science and technology.
Attempts to hold health systems accountable for their performance can pose challenges in terms of the specifications and use of particular measures and the application of measures in certain programs and projects. Many health care consumers or funders, including patients and policy makers, perceive significant potential benefits from programs that tie payment or other resources to performance on specific measures or the achievement of performance targets. From the perspective of the care system, however, there is concern that these sorts of initiatives aimed at accountability, if poorly specified, could have negative consequences or create perverse incentives.
A variety of initiatives and programs under way throughout the nation are aimed at promoting accountability through measurement. They include pay-for-performance initiatives; various federal, state, and private incentive programs; and new models for accountable care. However, the impact of these approaches is not uniformly positive, suggesting that the intuitively appealing concept of incentives for improvement may face particular challenges in the context of the health system. For example, one evaluation of a CMS pay-for-performance pilot project found that participation in the program was not associated with a significant incremental improvement in quality of care or outcomes (Glickman et al., 2007).
While programs designed to promote accountability on the part of individual institutions or providers are being developed and have the potential to lead to improved outcomes, a broader view of accountability, in which a range of providers or stakeholders are held jointly accountable for care outcomes, could benefit the care system by both improving the quality of care and encouraging coordination and efficiency in the delivery of care across the care continuum. The importance of this approach to shared accountability is highlighted in the IOM report Rewarding Provider Performance: Aligning Incentives in Medicare. One of the recommendations in that report is that the Secretary of HHS should be able to aggregate data across care settings to enable an incentive structure in which providers would be rewarded on the basis of shared accountability and coordination (IOM, 2007).
Critical to any effort to measure performance over time or compare health outcomes or care quality across groups is the availability of high-quality, consistent, standardized data. This is particularly true when measures are used for accountability purposes, either because they are tied to financial resources or decisions or because they are publicly reported as indicators of performance.
The availability of high-quality data is limited by a range of factors, including the lack of transparency and interoperability among data systems as well as the range of different measures in use for assessing similar concepts. Further, there are often disconnects between the approaches and data streams available at different levels of the health system. For example, national and state figures on health outcomes and performance often are assessed through large-scale, periodic national surveys, while at the community or institutional level, data on health outcomes may be available through individual EHRs or reporting programs. The ability to monitor the nation’s health and the performance of the health system routinely and accurately will depend on the availability of high-quality data on the outcomes that matter most. Furthermore, making useful comparisons at different levels throughout the health system will require a standardized approach to data collection, reporting, and use.
A significant challenge for the growing health measurement enterprise is the capacity to assess cost and price variation and the affordability of care meaningfully and to identify sources of waste. Because little information is publicly available on the costs of patient care and the associated outcomes, comparisons of health care costs and prices have been minimal. Cost analyses often are segregated at the specialty or department level rather than spanning the full progression of patient care (Kaplan and Porter, 2011). As a result, data on cost are limited and inadequately organized to support consumer choice (RWJF, 2012).
Affordability is also a concept with a malleable definition. There are two generally accepted methods for measuring affordability: one relies on the ratio of expenditures to total household resources and the other on residual income after expenditures (Niens et al., 2012). Often data-intensive, these methods depend on extensive surveys and longitudinal studies. Given the relatively short supply of cost data, these measurement approaches rarely are applied to health care affordability.
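The two affordability methods described above can be sketched in a few lines. The thresholds and household figures below are hypothetical illustrative values, not definitions drawn from Niens et al.:

```python
def expenditure_ratio(health_spending: float, household_resources: float) -> float:
    """Method 1: health spending as a share of total household resources."""
    return health_spending / household_resources

def residual_income(household_resources: float, health_spending: float,
                    subsistence_needs: float) -> float:
    """Method 2: income remaining after health spending and subsistence needs."""
    return household_resources - health_spending - subsistence_needs

# A household with $4,000/month in resources and $600/month in health costs,
# assuming (illustratively) $2,000/month in basic subsistence needs.
ratio = expenditure_ratio(600, 4000)          # 0.15 -> 15% of resources
residual = residual_income(4000, 600, 2000)   # $1,400 left after necessities

# The two methods can disagree: spending judged "unaffordable" by a ratio
# threshold may still leave ample residual income for a wealthy household,
# while a poor household can pass a ratio test yet have negative residual income.
print(f"ratio: {ratio:.0%}, residual: ${residual:,.0f}")
```

Both methods require household-level expenditure data, which is why, as noted above, they depend on extensive surveys and longitudinal studies.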
The lack of transparency of cost and price information also presents a significant challenge. Prices for individual services vary widely across the nation and even among health care institutions serving the same locality. Additionally, the dollar amounts paid by patients and insurers are not disclosed consistently or accessibly, partly because of concerns about competitive advantage or disadvantage. A recent study of commercially insured patients found that, on average, patients who looked at data on cost and quality saved $139 per medical visit, indicating that access to data on price and quality can lead to shifts in consumers’ care choices as well as quantifiable savings (Whaley et al., 2014).
Faced with responsibilities to acknowledge, collect, and assess measures that often are focused on organizational processes rather than meaningful results, program administrators may find it difficult to direct their attention to the most productive activities. These programmatic distortions may have unintended consequences. For example, a poorly specified performance measure could lead clinicians to select healthier patients or avoid less healthy patients (Shen, 2003). One study showed that the implementation of public report cards on coronary artery bypass graft in New York was associated with increased disparity in the use of this procedure between white and black or Hispanic patients (Werner et al., 2005). Considering and accounting for these potential unintended consequences is critical to ensuring that measurement leads to improvement in health and health care.
Furthermore, many measures today fail to reflect factors important to patients. Patients often are interested in the outcomes of their care and how it will impact the length of their lives, their quality of life, and their overall functioning and well-being. Yet many public reporting sites focus on performance for specific clinical processes. If measures are not centered on the most important concepts, improvement will be elusive (IOM, 2006; Werner and Asch, 2007).
The steady proliferation of measurement reporting, both voluntary and mandatory, has led to the collection of thousands of measures, most of which are related to processes of care. The impact of these activities on patient outcomes and the health of the general population has been somewhat limited. Figure 2-2 presents a schematic, including highlighted patient safety measures, to illustrate the growth of measurement in health and health care and the emergence of many variations for similar targets. Many of the measures in use today are collected in isolation with no context beyond a particular patient group, care delivery process, or organization. As a result, health and health care measurement falls short of its potential as a tool for analysis, comparison, and improvement across the various levels and components of the health system.
Many of the individual measures in use today were developed and implemented for a particular purpose or circumstance. The response to these initiatives has streamlined health care processes and led to significant progress on some of the most important clinical problems. For example, the implementation of checklists for central line placement has resulted in a significant reduction in bloodstream infections (Hartman et al., 2014; Pageler et al., 2014; Ranji et al., 2007). Yet the focus of measurement remains quite narrow, often targeting specific screening and documentation activities or care delivery for specific diseases or conditions.

FIGURE 2-2 Schematic illustration of the growth of measurement in health and health care. The column on the left (measure targets) gives examples of elements being assessed in various categories. The column on the right (measures in use) illustrates that many different measures are used to assess the same issue. Highlighted are examples of the target issues and measures used in the safety arena. See Figure 4-19 for figure legend.
An unanticipated consequence of the rapid growth in measurement of quality, safety, and value in the health care system has been a concomitant growth in administrative burden. The 2000 release of the IOM report To Err Is Human and the 2010 passage of the ACA both resulted in an increase in reportable quality measures (IOM, 2000; Panzer et al., 2013). A 2006 study of a sample of hospitals found that each hospital reported to an average of five programs, with the authors identifying 38 unique reporting programs across the sample (Pham et al., 2006). And a 2012 analysis found that a major academic medical center was required to report more than 120 quality measures to regulators or payers, and that the cost of measure collection and analysis consumed approximately 1 percent of net patient service revenue (Meyer et al., 2012).
Not surprisingly, then, measurement activities often are viewed as a largely unquantified, underappreciated, and undercompensated burden on the U.S. health care system and its various stakeholders. As noted above, measure requirements often are overlapping or redundant. The result can be additional administrative burden, with monetary and time costs but no added value. This burden includes the time patients spend filling out questionnaires, providers spend entering quality data for Physician Quality Reporting System (PQRS) payment, hospitals spend reporting for accreditation or Leapfrog participation, and public health organizations spend reporting to state and federal governments. The development and maintenance of the digital infrastructure needed for managing data also can add administrative cost and burden. Excess administrative costs due to measurement and a range of other activities are estimated at $190 billion per year, and continually expanding measurement activities and requirements could drive this figure higher (IOM, 2012). Altogether, the development and validation of measures; the collection, analysis, and maintenance of measurement data; and the reporting of measures have grown increasingly burdensome, with significant financial impact.
Without reorientation, the proliferation of measures is likely to continue, with associated opportunity costs impairing the ability to meet other needs in the health care system. A variety of consequences could result, including the erosion of internal measurement activities and inefficient efforts to improve performance on measures without improving their underlying targets (Meyer et al., 2012). Given the substantial time, effort, and resources these activities demand, it is essential to ensure that they focus on the most important opportunities for improvement and do not divert attention from higher health priorities.
In addition, more concrete financial risks are associated with the current environment of measurement and reporting. The use of measurement by multiple stakeholders in the health system has shifted it from a voluntary activity to one that is mandatory or, at a minimum, carries financial implications. Pay for reporting and value-based purchasing are examples of CMS programs that impose financial penalties for nonreporting. Financial implications also attach to the Meaningful Use incentive program for EHR implementation, with nonreporting penalties anticipated to begin in 2015. In the current financial climate of health care organizations, the financial risks of nonreporting can be significant.
Preliminary results from a survey of leadership in 20 health care organizations, ranging in size from 180 to 3,000 beds, suggest that measurement activities may require the equivalent of 50 to 100 full-time employees, at estimated costs ranging from $3.5 million to $12 million per year. While the providers consulted in developing these preliminary findings believe that quality reporting is valuable and should continue, they also suggested that reporting large numbers of measures may be overwhelming, such that resource-intensive reporting activities may crowd out efforts to improve based on the data produced (Dunlap, 2015).
Beyond the costs of infrastructure, personnel, and information technology associated with measure reporting, there is an additional risk to reputation. Hospitals increasingly are being rated by national organizations, including the Joint Commission, Healthgrades, and U.S. News & World Report, based on quality and safety measures, with significant financial implications. Reputation and brand are important marketing tools for organizations, and a failing grade on these proprietary report cards can directly reduce hospital volume and revenues. Poor ratings can have indirect financial costs as well, impacting recruitment of faculty and residents, potential for research funding, Magnet hospital status, and community standing.
Opportunity costs are high for busy practitioners faced with the increasing burden of measure reporting, as it directly reduces the time they can spend with patients. CMS's PQRS, initiated in 2007, offers incentives for hospitals and for individual physicians and their equivalents to enter data on generally process-related quality measures. In part because of the high opportunity costs entailed, fewer than 30 percent of eligible professionals have been participating in the PQRS (Berenson et al., 2013). Other explanations involve the economics of physician practices: CMS's incentive payments account for only a minimal percentage of practice revenues and, because practices lack hospitals' high overhead, matter less to them than to hospitals. However, as penalties begin to accrue to practices in the form of decreased payments from payers, greater participation in the PQRS and other reporting programs may follow. For large practices, measure reporting entails further costs for outsourcing of data entry, while smaller practices often rely on internal billing staff or on physicians themselves for data entry.
The ACA initiatives emphasize measures for organizations and individual clinicians, but the process of prioritization has lagged, and individual practitioners have been slow to participate. They often perceive quality management and measurement as arbitrary and of marginal relevance to their patients, little more than busywork. Rewards emphasize compliance over quality, and clinicians often perceive limited control over the factors shaping the data, including social and environmental factors beyond their realm of direct influence (Cassel and Jain, 2012; Rosenthal et al., 2004).
Efforts are now under way to improve the collection of data and the alignment and reporting of measures (Conway et al., 2013; Higgins et al., 2013). For individual practitioners, CMS is sponsoring payments for participation in both the PQRS and the EHR incentive programs. ONC has begun an initiative to define standards for sharing data and is partnering with the private sector to enable the technology needed for decision support capabilities. In 2012, HHS established the Measurement Policy Council to reduce the reporting burden by aligning measures across agencies.
Given the paradox of proliferating measure requirements alongside persistent deficiencies in health and health care performance, the potential utility of a core measure set lies in its ability to address both problems. Measurement is necessary for understanding the current state and performance of health and health care, and it necessarily involves costs in time and resources. Those costs and the corresponding benefits, however, are difficult to quantify. Many powerful, high-quality measures are already in use, but the lack of alignment and coordination discussed above limits their potential. Core measures will not displace the measurement activities needed to guide specific organizational priorities, performance improvement activities, and decision making, but properly used, they should substantially streamline and harmonize reporting responsibilities and enhance system performance. As the understanding of health and health care expands beyond independent services to an interrelated health system, measures that account for broader system performance and the alignment of its contributing components are key.
Progress in chronic disease is illustrative. A common concern with current measurement efforts is their poor applicability to complex chronic diseases, whose treatment involves multiple practitioners and is heavily influenced by factors beyond practitioners' control. Chronic illness now affects 45 percent of the U.S. population. Diabetes, for example, occurs in 8.3 percent of the population and accounts for one-third of all hospital stays in California (Meng et al., 2014; Ward and Schiller, 2013). By evaluating factors beyond a specific disease or process, core measures can better represent the complexity of patients in an accessible way. The measures themselves are not the gold standard of care; rather, they direct attention to the many aspects of care for a disease. AHRQ, for example, currently reports 84 measures involving diabetes care or screening, many of them, such as HbA1c measures, involving specific characteristics or groups of patients (AHRQ, 2013). While helpful for defining best-practice standards for HbA1c levels, these measures represent only one of many dimensions of diabetes care leading to good health, which also include blood pressure monitoring, weight and diet education, personal blood glucose testing, and ophthalmologic and podiatric surveillance. To avoid the natural tendency to focus on physiologic parameters at the expense of broader dynamics, patients with diabetes could instead be monitored on the key elements of the core measures, including healthy behaviors, receipt of preventive services, affordability of care, and their own and their community's engagement with their health care.
A conceptual aim of payment reform is to link financial incentives to performance at the population level. Achieving this aim will require core measures that reflect the overall status of the health system, with process measures left largely to the discretion of individual organizations for internal use in improvement efforts. Measuring "door to CT scan" times for stroke patients, for example, provides institutional data useful for managing hospital triage and patient flow so as to minimize time from door to thrombolysis. From a health system perspective, however, what matters most is the outcome of stroke care and its relation to the various processes involved in diagnosis and treatment within the health care system. Such measures might also include the cost of stroke-related services (measured as total cost of care) for individuals and populations. For the creation of a parsimonious core measure set, these indicators have broader utility than the process indicators used by particular hospitals to improve their operations.
Reporting of standardized core measures can therefore help elevate the organizational perspective from individual processes to measures more meaningful to patients. Developing and broadly sharing such measures can help improve patients' participation in their care, as well as related outcomes, as patients see the relevance of the measures to their own lives. For example, a 70-year-old woman with hypertension, obesity, and recently diagnosed diabetes may be less likely to be a no-show if the circumstances of her care have been shaped by a stronger provider and community focus on such core matters as access to care, care match with patient goals, self-management initiatives, personal spending burden, and community support. Ways to improve the impact of measurement are the focus of Chapter 3.
AF4Q (Aligning Forces for Quality). 2013. About us. http://forces4quality.org/about-us (accessed February 11, 2014).
AHRQ (Agency for Healthcare Research and Quality). 2013. National Quality Measures Clearinghouse. http://www.qualitymeasures.ahrq.gov/index.aspx (accessed April 9, 2013).
Alker, J., and S. Artiga. 2012. The new review and approval process rule for section 1115 Medicaid and CHIP demonstration waivers. Washington, DC: Kaiser Family Foundation.
Artiga, S. 2011. Five key questions and answers about section 1115 Medicaid demonstration waivers. Washington, DC: Kaiser Family Foundation.
Barr, P. J., R. Thompson, T. Walsh, S. W. Grande, E. M. Ozanne, and G. Elwyn. 2014. The psychometric properties of CollaboRATE: A fast and frugal patient-reported measure of the shared decision-making process. Journal of Medical Internet Research 16(1):e2.
Bazinsky, K. R., and M. Bailit. 2013. The significant lack of alignment across state and regional health measure sets. http://www.bailit-health.com/articles/091113_bhp_measuresbrief.pdf (accessed December 6, 2013).
Berenson, R. A., P. J. Pronovost, and H. M. Krumholz. 2013. Achieving the potential of health care performance measures. Washington, DC: Urban Institute.
Berwick, D. M., B. James, and M. J. Coye. 2003. Connections between quality measurement and improvement. Medical Care 41(Suppl. 1):I30-I38.
Bielaszka-DuVernay, C. 2011. Vermont’s blueprint for medical homes, community health teams, and better health at lower cost. Health Affairs (Millwood) 30(3):383-386.
Blumenthal, D. 2009. Stimulating the adoption of health information technology. New England Journal of Medicine 360(15):1477-1479.
Buntin, M. B., S. H. Jain, and D. Blumenthal. 2010. Health information technology: Laying the infrastructure for national health reform. Health Affairs (Millwood) 29(6):1214-1219.
Cassel, C. K., and S. H. Jain. 2012. Assessing individual physician performance: Does measurement suppress motivation? Journal of the American Medical Association 307(24):2595-2596.
CBO (Congressional Budget Office). 2013. Federal grants to state and local governments. https://www.cbo.gov/publication/43967 (accessed April 17, 2014).
CDC (Centers for Disease Control and Prevention). 2007. Program in brief: Immunization grant program (section 317). http://www.cdc.gov/vaccines/programs/vfc/downloads/grant-317.pdf (accessed November 3, 2014).
CDC. 2011. Preventive health and health services block grant: A critical public health resource. http://www.cdc.gov/phhsblockgrant/docs/phhs-blockgrant-aag.pdf (accessed July 26, 2014).
Chassin, M. R. 2002. Achieving and sustaining improved quality: Lessons from New York state and cardiac surgery. Health Affairs (Millwood) 21(4):40-51.
Chassin, M. R., J. M. Loeb, S. P. Schmaltz, and R. M. Wachter. 2010. Accountability measures—using measurement to promote quality improvement. New England Journal of Medicine 363(7):683-688.
Conway, P. H., F. Mostashari, and C. Clancy. 2013. The future of quality measurement for improvement and accountability. Journal of the American Medical Association 309(21):2215-2216.
Corlette, S., J. Alker, J. Touschner, and J. Volk. 2011. The Massachusetts and Utah health insurance exchanges: Lessons learned. Washington, DC: Georgetown University Health Policy Institute.
Damberg, C. L., M. E. Sorbero, S. L. Lovejoy, K. Lauderdale, S. Wertheimer, A. Smith, D. Waxman, and C. Schnyer. 2011. An evaluation of the use of performance measures in health care. Santa Monica, CA: RAND Corporation.
Darkins, A., P. Ryan, R. Kobb, L. Foster, E. Edmonson, B. Wakefield, and A. E. Lancaster. 2008. Care coordination/home telehealth: The systematic implementation of health informatics, home telehealth, and disease management to support the care of veteran patients with chronic conditions. Telemedicine and e-Health 14(10):1118-1126.
Dunlap, N. 2015. Reporting quality metrics in healthcare: Observations from the field. Presentation at the members meeting of the IOM Roundtable on Value and Science-Driven Health Care, March 18, Washington, DC.
Fisher, E. S., D. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003a. The implications of regional variations in Medicare spending. Part 1: The content, quality, and accessibility of care. Annals of Internal Medicine 138(4):273-287.
Fisher, E. S., D. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003b. The implications of regional variations in Medicare spending. Part 2: Health outcomes and satisfaction with care. Annals of Internal Medicine 138(4):288-298.
Fisher, E. S., D. O. Staiger, J. P. Bynum, and D. J. Gottlieb. 2007. Creating accountable care organizations: The extended hospital medical staff. Health Affairs (Millwood) 26(1):w44-w57.
Fisher, E. S., J. P. Bynum, and J. S. Skinner. 2009. Slowing the growth of health care costs—lessons from regional variation. New England Journal of Medicine 360(9):849-852.
GAO (U.S. Government Accountability Office). 2006. Grants management: Enhancing accountability provisions could lead to better results. Washington, DC: GAO.
GAO. 2012. Grants to state and local governments: An overview of federal funding levels and selected challenges. Washington, DC: GAO.
Glickman, S. W., F. S. Ou, E. R. DeLong, M. T. Roe, B. L. Lytle, J. Mulgund, J. S. Rumsfeld, W. B. Gibler, E. M. Ohman, K. A. Schulman, and E. D. Peterson. 2007. Pay for performance, quality of care, and outcomes in acute myocardial infarction. Journal of the American Medical Association 297(21):2373-2380.
Guyer, B., M. A. Freedman, D. M. Strobino, and E. J. Sondik. 2000. Annual summary of vital statistics: Trends in the health of Americans during the 20th century. Pediatrics 106(6):1307-1317.
Hafner, J. M., S. C. Williams, R. G. Koss, B. A. Tschurtz, S. P. Schmaltz, and J. M. Loeb. 2011. The perceived impact of public reporting hospital performance data: Interviews with hospital staff. International Journal for Quality in Health Care 23(6):697-704.
Hartman, N., M. Wittler, K. Askew, and D. Manthey. 2014. Delphi method validation of a procedural performance checklist for insertion of an ultrasound-guided internal jugular central line. American Journal of Medical Quality pii:1062860614549762.
HEW (U.S. Department of Health, Education, and Welfare). 1979. Healthy people: The Surgeon General’s report on health promotion and disease prevention. Washington, DC: HEW.
HHS (U.S. Department of Health and Human Services). 2000. National Committee on Vital and Health Statistics 50th anniversary symposium reports. Washington, DC: HHS.
HHS. 2011. National prevention strategy. http://www.surgeongeneral.gov/initiatives/prevention/strategy (accessed November 3, 2014).
HHS. 2012. 2012 Annual progress report to Congress: National strategy for quality improvement in health care. http://www.ahrq.gov/workingforquality/nqs/nqs2012annlrpt.pdf (accessed April 4, 2014).
HHS. 2013. Evaluation requirements. Rule 42 CFR 431.424: 53-54. http://www.gpo.gov/fdsys/pkg/CFR-2013-title42-vol4/pdf/CFR-2013-title42-vol4-sec431-424.pdf (accessed June 4, 2014).
Hibbard, J. H., J. Stockard, and M. Tusler. 2003. Does publicizing hospital performance stimulate quality improvement efforts? Health Affairs (Millwood) 22(2):84-94.
Hibbard, J. H., J. Stockard, and M. Tusler. 2005. Hospital performance reports: Impact on quality, market share, and reputation. Health Affairs (Millwood) 24(4):1150-1160.
Higgins, A., G. Veselovskiy, and L. McKown. 2013. Provider performance measures in private and public programs: Achieving meaningful alignment with flexibility to innovate. Health Affairs (Millwood) 32(8):1453-1461.
HIMSS (Healthcare Information and Management Systems Society). 2012. Military health system announces quadruple aim innovation challenge. http://www.himss.org/News/NewsDetail.aspx?ItemNumber=2809 (accessed September 13, 2014).
Hsiao, C.-J., and E. Hing. 2014. Use and characteristics of electronic health records among office-based physician practices: United States, 2001-2013. NCHS Data Brief 143.
Hussey, P. S., H. de Vries, J. Romley, M. C. Wang, S. S. Chen, P. G. Shekelle, and E. A. McGlynn. 2009. A systematic review of health care efficiency measures. Health Services Research 44(3):784-805.
IHI (Institute for Healthcare Improvement). 2014. Quality, cost, and value. http://www.ihi.org/Topics/QualityCostValue/Pages/Overview.aspx (accessed July 17, 2014).
IOM (Institute of Medicine). 1990. Healthy people 2000: Citizens chart the course. Washington, DC: National Academy Press.
IOM. 2000. To err is human: Building a safer health system. Washington, DC: National Academy Press.
IOM. 2006. Performance measurement: Accelerating improvement. Washington, DC: The National Academies Press.
IOM. 2007. Rewarding provider performance: Aligning incentives in Medicare. Washington, DC: The National Academies Press.
IOM. 2011a. For the public’s health: The role of measurement in action and accountability. Washington, DC: The National Academies Press.
IOM. 2011b. Leading health indicators for Healthy People 2020: Letter report. Washington, DC: The National Academies Press.
IOM. 2012. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press.
IOM. 2013a. Core measurement needs for better care, better health, and lower costs: Counting what counts: Workshop summary. Washington, DC: The National Academies Press.
IOM. 2013b. Toward quality measures for population health and the leading health indicators. Washington, DC: The National Academies Press.
Jacobson, D. M., and S. Teutsch. 2012. An environmental scan of integrated approaches for defining and measuring total population health by the clinical care system, the government public health system, and stakeholder organizations. Washington, DC: National Quality Forum.
James, B. C., and L. A. Savitz. 2011. How Intermountain trimmed health care costs through robust quality improvement efforts. Health Affairs (Millwood) 30(6):1185-1191.
Joint Commission. 2011. Improving America’s hospitals: The Joint Commission’s annual report on quality and safety. http://www.jointcommission.org/assets/1/6/TJC_Annual_Report_2011_9_13_11_.pdf (accessed September 25, 2011).
Joint Commission. 2014. Benefits of Joint Commission accreditation. http://www.jointcommission.org/benefits_of_joint_commission_accreditation (accessed July 29, 2014).
Kaplan, R. S., and M. E. Porter. 2011. How to solve the cost crisis in health care. Harvard Business Review 89(9):46-52, 54, 56-61 passim.
KFF (Kaiser Family Foundation). 2014. State decisions for creating health insurance marketplaces, 2014. Menlo Park, CA: KFF.
Klompas, M., J. McVetta, R. Lazarus, E. Eggleston, G. Haney, B. A. Kruskal, W. K. Yih, P. Daly, P. Oppedisano, B. Beagan, M. Lee, C. Kirby, D. Heisey-Grove, A. DeMaria, Jr., and R. Platt. 2012a. Integrating clinical practice and public health surveillance using electronic medical record systems. American Journal of Preventive Medicine 42(6, Suppl. 2):S154-S162.
Klompas, M., J. McVetta, R. Lazarus, E. Eggleston, G. Haney, B. A. Kruskal, W. K. Yih, P. Daly, P. Oppedisano, B. Beagan, M. Lee, C. Kirby, D. Heisey-Grove, A. DeMaria, Jr., and R. Platt. 2012b. Integrating clinical practice and public health surveillance using electronic medical record systems. American Journal of Public Health 102(Suppl. 3):S325-S332.
Koh, H. K. 2010. A 2020 vision for healthy people. New England Journal of Medicine 362(18):1653-1656.
Levinson, D. R. 2008. Adverse events in hospitals: State reporting systems. Washington, DC: HHS, Office of Inspector General.
Maxson, E. R., S. H. Jain, A. N. McKethan, C. Brammer, M. B. Buntin, K. Cronin, F. Mostashari, and D. Blumenthal. 2010. Beacon communities aim to use health information technology to transform the delivery of care. Health Affairs (Millwood) 29(9):1671-1677.
McCarthy, D., and S. Klein. 2010. The triple aim journey: Improving population health and patients’ experience of care, while reducing costs. Commonwealth Fund Case Study 48. http://www.commonwealthfund.org/~/media/Files/Publications/Case%20Study/2010/Jul/Triple%20Aim%20v2/1421_McCarthy_triple_aim_overview_v2.pdf (accessed December 12, 2013).
McDonough, J. E., B. Rosman, M. Butt, L. Tucker, and L. K. Howe. 2008. Massachusetts health reform implementation: Major progress and future challenges. Health Affairs (Millwood) 27(4):w285-w297.
Mende, S., and D. Roseman. 2013. The aligning forces for quality experience: Lessons on getting consumers involved in health care improvements. Health Affairs (Millwood) 32(6):1092-1100.
Meng, Y. Y., M. Pickett, S. H. Babey, A. C. David, and H. Goldstein. 2014. Diabetes tied to a third of California hospital stays, driving health care costs higher. UCLA Center for Health Policy Research Health Policy Brief 1-7.
Meyer, G. S., E. C. Nelson, D. B. Pryor, B. James, S. J. Swensen, G. S. Kaplan, J. I. Weissberg, M. Bisognano, G. R. Yates, and G. C. Hunt. 2012. More quality measures versus measuring what matters: A call for balance and parsimony. BMJ Quality & Safety 21(11):964-968.
NCHS (National Center for Health Statistics). 2013. National Health and Nutrition Examination Survey, 2013-2014: Overview. http://www.cdc.gov/nchs/data/nhanes/nhanes_13_14/2013-14_overview_brochure.pdf (accessed September 20, 2014).
NCQA (National Committee for Quality Assurance). 2013a. About NCQA. http://www.ncqa.org/AboutNCQA.aspx (accessed June 5, 2013).
NCQA. 2013b. HEDIS 2013 measures. Washington, DC: NCQA.
NCQA. 2013c. HEDIS & performance measurement. http://www.ncqa.org/HEDISQualityMeasurement.aspx (accessed June 5, 2013).
Niens, L. M., E. Van de Poel, A. Cameron, M. Ewen, R. Laing, and W. B. Brouwer. 2012. Practical measurement of affordability: An application to medicines. Bulletin of the World Health Organization 90(3):219-227.
NQF (National Quality Forum). 2013a. History. http://www.qualityforum.org/About_NQF/History.aspx (accessed June 4, 2013).
NQF. 2013b. MAP pre-rulemaking report: 2013 recommendations on measures under consideration by HHS. Washington, DC: NQF.
NQF. 2013c. NQF report of 2012 activities to Congress and the secretary of the Department of Health and Human Services. Washington, DC: NQF.
NQF. 2013d. Report from the National Quality Forum: 2012 NQF measure gap analysis. Washington, DC: NQF.
NRC (National Research Council). 2009. Vital statistics: Summary of a workshop. Washington, DC: The National Academies Press.
ONC (Office of the National Coordinator for Health Information Technology). 2013. ONC issue brief: Patient-generated health data and health IT. http://www.healthit.gov/sites/default/files/pghd_brief_final122013.pdf (accessed October 13, 2014).
Pageler, N. M., C. A. Longhurst, M. Wood, D. N. Cornfield, J. Suermondt, P. J. Sharek, and D. Franzon. 2014. Use of electronic medical record-enhanced checklist and electronic dashboard to decrease CLABSIs. Pediatrics 133(3):e738-e746.
Paget, L., C. Salzberg, and S. H. Scholle. 2014. Building a strategy to leverage health information technology to support patient and family engagement. http://www.ncqa.org/HEDISQualityMeasurement/Research/BuildingaStrategytoLeverageHealthInformationTechnology.aspx (accessed August 25, 2014).
Painter, M. W., and R. Lavizzo-Mourey. 2008. Aligning forces for quality: A program to improve health and health care in communities across the United States. Health Affairs (Millwood) 27(5):1461-1463.
Panzer, R. J., R. S. Gitomer, W. H. Greene, P. R. Webster, K. R. Landry, and C. A. Riccobono. 2013. Increasing demands for quality measurement. Journal of the American Medical Association 310(18):1971-1980.
Peterson, M., D. Muhlestein, and P. Gardner. 2014. Growth and dispersion of accountable care organizations: June 2014 update. Washington, DC: Leavitt Partners.
Pham, H. H., J. Coughlan, and A. S. O’Malley. 2006. The impact of quality-reporting programs on hospital operations. Health Affairs (Millwood) 25(5):1412-1422.
Ranji, S. R., K. Shetty, K. A. Posley, R. Lewis, V. Sundaram, C. M. Galvin, and L. G. Winston. 2007. Closing the quality gap: A critical analysis of quality improvement strategies (Vol. 6: Prevention of healthcare-associated infections). AHRQ Technical Reviews 9.6.
Raymond, A. G. 2011. Massachusetts health reform: A five-year progress report. http://bluecrossfoundation.org/Health-Reform/~/media/0FF9BF33E14E4E089335AD12E8DEB77E.pdf (accessed December 20, 2011).
RWJF (Robert Wood Johnson Foundation). 2012. Counting change: Measuring health care prices, costs, and spending. http://www.rwjf.org/en/research-publications/find-rwjfresearch/2012/03/counting-change.html (accessed August 23, 2014).
Roseman, D., J. Osborne-Stafsnes, C. H. Amy, S. Boslaugh, and K. Slate-Miller. 2013. Early lessons from four “aligning forces for quality” communities bolster the case for patient-centered care. Health Affairs (Millwood) 32(2):232-241.
Rosen, B., A. Israeli, and S. M. Shortell. 2012. Accountability and responsibility in health care: Issues in addressing an emerging global challenge. 1st ed. Singapore: World Scientific Publishing Company.
Rosenthal, M. B., R. Fernandopulle, H. R. Song, and B. Landon. 2004. Paying for quality: Providers’ incentives for quality improvement. Health Affairs (Millwood) 23(2):127-141.
Ross, J. S., S. Sheth, and H. M. Krumholz. 2010. State-sponsored public reporting of hospital quality: Results are hard to find and lack uniformity. Health Affairs (Millwood) 29(12):2317-2322.
Scanlon, D. P., J. Beich, J. A. Alexander, J. B. Christianson, R. Hasnain-Wynia, M. C. McHugh, and J. N. Mittler. 2012. The aligning forces for quality initiative: Background and evolution from 2005 to 2012. American Journal of Managed Care 18(Suppl. 6):S115-S125.
Schneider, E. C., P. S. Hussey, and C. Schnyer. 2011. Payment reform: Analysis of models and performance measurement implications. Santa Monica, CA: RAND Corporation.
Shapiro, M., D. Johnston, J. Wald, and D. Mon. 2008. Patient-generated health data. http://www.rti.org/pubs/patientgeneratedhealthdata.pdf (accessed April 2, 2014).
Shen, Y. 2003. Selection incentives in a performance-based contracting system. Health Services Research 38(2):535-552.
Song, Z., and B. E. Landon. 2012. Controlling health care spending—the Massachusetts experiment. New England Journal of Medicine 366(17):1560-1561.
The State of the USA. 2014. The State of the USA: History. http://stateoftheusa.org/about/history (accessed April 11, 2014).
Stecker, E. C. 2013. The Oregon ACO experiment—bold design, challenging execution. New England Journal of Medicine 368(11):982-985.
Thompson, B. L., and J. R. Harris. 2001. Performance measures: Are we measuring what matters? American Journal of Preventive Medicine 20(4):291-293.
Toussaint, J. S., C. Queram, and J. W. Musser. 2011. Connecting statewide health information technology strategy to payment reform. American Journal of Managed Care 17(3):e80-e88.
United Health Foundation, American Public Health Association, and Partnership for Prevention. 2012. America’s health rankings: A call to action for individuals and their communities. Minnetonka, MN: United Health Foundation.
VA (U.S. Department of Veterans Affairs). 2011. VA hospital compare. http://www.hospitalcompare.va.gov (accessed September 2, 2014).
Ward, B. W., and J. S. Schiller. 2013. Prevalence of multiple chronic conditions among US adults: Estimates from the National Health Interview Survey, 2010. Preventing Chronic Disease 10:E65.
Werner, R. M., and D. A. Asch. 2007. Clinical concerns about clinical performance measurement. The Annals of Family Medicine 5(2):159-163.
Werner, R. M., and E. T. Bradlow. 2006. Relationship between Medicare’s hospital compare performance measures and mortality rates. Journal of the American Medical Association 296(22):2694-2702.
Werner, R. M., and E. T. Bradlow. 2010. Public reporting on hospital process improvements is linked to better patient outcomes. Health Affairs (Millwood) 29(7):1319-1324.
Werner, R. M., D. A. Asch, and D. Polsky. 2005. Racial profiling: The unintended consequences of coronary artery bypass graft report cards. Circulation 111(10):1257-1263.
Whaley, C., J. Schneider Chafen, S. Pinkard, G. Kellerman, D. Bravata, R. Kocher, and N. Sood. 2014. Association between availability of health service prices and payments for these services. Journal of the American Medical Association 312(16):1670-1676.
Wold, C. 2008. Health indicators: A review of reports currently in use. Washington, DC: The State of the USA.
Wright, S. 2012. Few adverse events in hospitals were reported to state adverse event reporting systems. Washington, DC: HHS, Office of Inspector General.
Young, G. J. 2012. Multistakeholder regional collaboratives have been key drivers of public reporting, but now face challenges. Health Affairs (Millwood) 31(3):578-584.