8

Common Themes

KEY QUESTIONS FOR CONSIDERATION

  • What is the principal purpose of core measures?
  • Who needs to be involved in the development of core metrics, and how?
  • What related work is already completed or under way?
  • What framework or model is best suited to the purpose?
  • What criteria should guide the selection of priorities?
  • How might overlaps be resolved among candidate measures?
  • Which measures are most actionable for progress?
  • What are the available data sources at each assessment level?
  • What are the data infrastructure needs?
  • How can the metrics and the process be most future-oriented?

The workshop summarized in this document had broad objectives, including examining a vision for core health metrics; drawing lessons from national, state, community, and organizational efforts; identifying the metrics that could reliably measure care outcomes, costs, and health improvement; and describing implementation strategies for these measures. With a scope this broad, the discussions were similarly wide-ranging. However, certain points emerged multiple times in the presentations and audience discussions and became frequent reference points. In concluding remarks, Michael McGinnis summarized the common themes and potential opportunities for improvement in the measurement infrastructure.



COMMON THEMES

What Is the Principal Purpose of Core Measures?

The workshop participants highlighted several motivations for building a core set of measures. At the most fundamental level, core measures should reflect and emphasize the issues most important to improving care, lowering costs, and improving health. The measures can then be used to improve program management and to develop incentives and payment systems targeted to the most important issues across the board. How, for example, might core measures be used to track progress in states receiving waivers to increase flexibility in managing Medicaid?

At the practice level, a common core set of measures should help reduce the burden imposed by the increasing proliferation of metrics that clinicians and care delivery organizations must collect and report. Several participants noted that the number and scope of metrics have increased steadily over time. These expansive measurement requirements carry costs in money and human effort, and they also spread attention so broadly that individuals cannot focus on the set of actions that are truly important for improving value and health.

A common set of measures will also allow for the identification of variations, whether among different health care delivery organizations, clinicians, treatments, or population health management techniques. One speaker noted that a common measurement framework in cardiac surgery allowed his organization to identify variations in clinical outcomes among different providers and then share the best practices of high performers throughout the organization. Another speaker emphasized that public reporting of performance measures allows organizations to identify areas that need improvement and to track improvement over time.

Several speakers noted that progress toward the three-part aim often requires diverse coalitions, as multiple factors influence health and health care. With such diverse coalitions, there is a need to integrate information from all partners, including county-based health departments, health care delivery organizations, community-based organizations, and employers. Core measure sets can help these diverse groups work together by defining a common target for improvement and identifying the areas where data need to be collected.

Finally, a common set of core measures can be used to guide the creation of a robust, rational digital infrastructure. One speaker highlighted how his organization in Vermont used core measure sets to identify the necessary data elements that its electronic health record systems should capture during routine care. In this example, the core set of measures served as the basis for a data dictionary around which the electronic health record system was designed. The resulting system could export and ingest these key elements, populate the core measures in a dynamic fashion, and assure transmission and exchange of the key data elements. Similar principles can apply to other data systems, from multi-payer claims databases to health surveillance systems.

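The data dictionary idea lends itself to a brief illustration. The Python sketch below is a hypothetical minimal example, not the Vermont organization's actual specification: the measure names and data elements are invented for illustration. It shows how a core measure set can double as a data dictionary that defines which elements a record system must capture, and how a system built around it can check records for completeness at the point of capture.

```python
# Hypothetical sketch: a core measure set expressed as a data dictionary.
# Measure names and element lists are illustrative assumptions only.

CORE_MEASURE_DICTIONARY = {
    "blood_pressure_control": ["systolic_bp", "diastolic_bp", "hypertension_dx"],
    "tobacco_use_screening": ["tobacco_status", "screening_date"],
    "30_day_readmission": ["admission_date", "discharge_date", "prior_discharge_date"],
}

def missing_elements(record: dict) -> dict:
    """For each core measure, list the data elements absent from a patient record."""
    return {
        measure: [elem for elem in elements if elem not in record]
        for measure, elements in CORE_MEASURE_DICTIONARY.items()
    }

if __name__ == "__main__":
    record = {"systolic_bp": 128, "diastolic_bp": 84, "tobacco_status": "never"}
    for measure, missing in missing_elements(record).items():
        print(f"{measure}: missing {missing or 'nothing'}")
```
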
Who Needs to Be Involved in the Development of Core Metrics, and How?

The health and health care system consists of a diverse set of organizations and individuals, each with a different perspective on the three-part aim. For example, the definition of cost varies by stakeholder: patients and consumers may consider out-of-pocket costs, a payer may consider total claims, and the federal government may view budgets and appropriations for health programs. The diversity of perspectives can be seen in the range of stakeholders, which include the following:

  • Patients, consumers, caregivers, and the public
  • Health care professionals (physicians, nurses, pharmacists, and others)
  • Hospitals and health care delivery organizations
  • Payers
  • Public health agencies
  • Regulators
  • Communication professionals and the media
  • Community-based organizations
  • States (legislators, governors, executive agencies)
  • Federal government (legislators, executive agencies)

Understanding these varied perspectives is critical for ensuring the usefulness of any core metric set, and gathering them requires broad engagement across the health and health care system. This broad engagement can also uncover other factors that affect a metric's actionability for different stakeholders, such as a stakeholder's access to the underlying data, a stakeholder's ability to affect the metric, and whether the metric captures the processes or health outcomes most in need of improvement. Furthermore, different groups will need different communication strategies based on their circumstances and needs, their numeracy and health literacy, and their perceptions of the metric. Communicating metrics to many stakeholder audiences requires multiple dissemination methods that may include rankings, media reports, academic publications, publicly reported data, and other techniques.

What Related Work Is Already Completed or Under Way?

For decades, initiatives have been under way to identify core measures in health and health care. More than 60 years ago, Congress founded the National Committee on Vital and Health Statistics to identify the needs for health statistics, data, and information. More than 30 years ago, the Healthy People initiative began with the publication of Healthy People: The Surgeon General's Report on Health Promotion and Disease Prevention; the ongoing initiative has produced four follow-on publications, the most recent being Healthy People 2020. A current effort to advance aligned measures is the Measure Applications Partnership convened by the National Quality Forum, which has identified families of measures that could be used in core measure sets and which provides feedback on federal measurement efforts. In addition, the Institute of Medicine has produced several reports examining various areas of measurement, including Performance Measurement (IOM, 2006) and For the Public's Health: The Role of Measurement in Action and Accountability (IOM, 2010).

Presently, many organizations are involved in measurement along one or more dimensions of the three-part aim (see Table 8-1 for an abbreviated list of example organizations). These initiatives vary in their scale, considering performance at the county, state, or national level; in their focus, from physicians to hospitals and health plans; and in their data sources, from surveys and registries to clinical records and health care payment records. The breadth of initiatives highlights the interest in improving measurement, but it also underscores the challenge of harmonizing the many different initiatives currently under way. As noted by several meeting participants, the number of initiatives contributes to the fact that many stakeholders feel overwhelmed by the quantity of data they are required to collect for measurement as well as by the quantity of measures they must routinely calculate and report. A basic challenge in developing core metrics that can be reliably deployed at national, state, local, and institutional levels will be the design of a process that fairly, equitably, and responsibly ensures stakeholder input from the key perspectives.

TABLE 8-1 Example Organizations, with Several Example Initiatives, Involved in Each Dimension of the Three-Part Aim

Population Health
  • CDC (e.g., Community Health Status Indicators; National Center for Health Statistics; Office of Surveillance, Epidemiology, and Laboratory Services)
  • County Health Rankings (with the University of Wisconsin Population Health Institute and the Robert Wood Johnson Foundation)
  • HHS (e.g., Healthy People 2020—Leading Health Indicators)
  • NIH (e.g., Healthy Communities study [collaboration with CDC and RWJF])
  • NQF (e.g., convenes National Priorities Partnership, Measure Applications Partnership, population health measure endorsement)
  • Private insurers and health plans
  • State and local governments
  • State of the USA project (e.g., State of the USA Health Indicators)
  • UnitedHealth Foundation (e.g., America's Health Rankings)

Health Care
  • AHA (e.g., Committee on Performance Improvement)
  • AHRQ (e.g., National Healthcare Quality Report, National Healthcare Disparities Report, National Quality Measures Clearinghouse, CAHPS)
  • AMA (e.g., convening the Physician Consortium for Performance Improvement)
  • AQA Alliance (e.g., multi-stakeholder collaborative with a focus on using measurement to facilitate improvement and promoting best practices in reporting)
  • CDC (e.g., National Healthcare Safety Network)
  • CMS (e.g., Hospital Compare, Physician Compare, Physician Quality Reporting System, Shared Savings Program [ACO] measures, Medicaid/CHIP Pediatric Health Care Quality Measures)
  • HRSA (e.g., HRSA Clinical Quality Core Measure Set)
  • Institute for Clinical Systems Improvement (e.g., developing evidence-based guidelines and supporting collaborative initiatives for measure development)
  • Joint Commission (e.g., ORYX)
  • Leapfrog Group (e.g., Hospital Safety Score)
  • NCQA (e.g., HEDIS measures)
  • NIH (e.g., Patient-Reported Outcomes Measurement Information System [PROMIS])
  • ONC (e.g., meaningful use measures)
  • OSHA (e.g., health worker safety, injuries)
  • Premier (e.g., QUEST collaborative measures)
  • Private insurers and health plans
  • Quality Alliance Steering Committee (e.g., High-Value Health Care Project)
  • Specialty societies and professional societies (e.g., National Surgical Quality Improvement Program, registries)
  • State and local governments
  • Utilization Review Accreditation Committee (e.g., measurement for accreditation programs)
  • Veterans Health Administration (e.g., ASPIRE, Surgical Care Improvement Project, Linking Information Knowledge and Systems, Medical Home Initiative)

Cost
  • AHA (e.g., AHA Annual Survey of Hospitals and AHA Annual Survey of Hospitals—IT Supplement)
  • AHRQ (e.g., Healthcare Cost and Utilization Project, Medical Expenditure Panel Survey [in conjunction with Census Bureau and CDC])
  • CDC (e.g., National Health Interview Survey [collaboration with Census Bureau], Medical Expenditure Panel Survey [collaboration with Census Bureau and AHRQ])
  • Census Bureau (e.g., National Health Interview Survey [collaboration with CDC], Medical Expenditure Panel Survey [collaboration with CDC and AHRQ])
  • CMS (e.g., National Health Expenditure Data)
  • NQF (e.g., endorsement of resource use and cost-of-care measures)
  • Private insurers and health plans
  • Quality Alliance Steering Committee (e.g., High-Value Health Care Project)

What Framework or Model Is Best Suited to the Purpose?

To consider a measurement framework in more depth, the workshop participants divided into breakout groups for each dimension of the three-part aim: population health, health care, and cost. Each breakout group considered potential priority metric categories that reliably assess outcomes, cost, and overall health improvement, and Table 8-2 summarizes the potential metric categories that were discussed by each group. For population health measurement, the breakout group leader noted that the discussions differentiated between measures that reflect current health and measures that capture factors and contributors to future health. For health care measurement, a number of the breakout group participants observed that prior categorizations of health care quality, such as those in the 2001 IOM report Crossing the Quality Chasm, remained useful frameworks. In the cost breakout group, multiple participants outlined three categories for assessing cost: resource use and overall expenditures, utilization of particular services and treatments, and the overall affordability of health care for different stakeholders.

TABLE 8-2 Example Organizing Framework for Describing the Core Measurement Needs

Population Health
  • Current health
  • Contributors and risks to future health

Health Care
  • Patient-centered
  • Effective
  • Safe
  • Value and efficiency
  • Coordination and communication

Cost
  • Resource use and expenditures
  • Utilization
  • Affordability

Cross-cutting (all domains): equity and variation

In addition to these specific categories, equity and variation were cross-cutting factors across all metric categories and dimensions; properly designed metrics could be analyzed for variation across geography, socioeconomic status, ethnicity and race, age, gender, and other characteristics. This underscores the importance of the population used for calculating metrics: unless a metric draws on data from a broad population, it will be difficult to calculate performance for specific smaller populations.

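As a concrete illustration of the equity-and-variation point, the hypothetical sketch below (the field names, records, and metric are invented for illustration) computes one metric for a broad population and then stratifies it by subgroup; the same pattern applies to geography, socioeconomic status, age, gender, or other characteristics.

```python
# Hypothetical sketch: stratifying a single quality metric by subgroup.
# Records and field names are invented for illustration.
from collections import defaultdict

patients = [
    {"county": "A", "race": "white", "bp_controlled": True},
    {"county": "A", "race": "black", "bp_controlled": False},
    {"county": "B", "race": "white", "bp_controlled": True},
    {"county": "B", "race": "black", "bp_controlled": True},
]

def control_rate(group):
    """Share of patients in the group whose blood pressure is controlled."""
    return sum(p["bp_controlled"] for p in group) / len(group)

def stratify(records, key):
    """Compute the metric separately for each value of a stratifying variable."""
    strata = defaultdict(list)
    for r in records:
        strata[r[key]].append(r)
    return {value: control_rate(group) for value, group in strata.items()}

print("overall:", control_rate(patients))
print("by county:", stratify(patients, "county"))
print("by race:", stratify(patients, "race"))
```
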
What Criteria Should Guide the Selection of Priorities?

Given the importance of accurately assessing progress toward the three-part aim of improved care quality, lower costs, and better population health, the metrics used for this purpose must have several key characteristics. One theme that several participants raised was the need to minimize the overall measurement burden in cost, time, and effort. One speaker described efforts in his measurement work to derive measures from data collected during routine care and health monitoring. Other attendees noted the value of standardization, such as using common technical specifications for calculating metrics, aligning metrics across different initiatives, and using existing measures whenever possible. Given the multiple levels at which measurement occurs, a number of participants underscored the value of metrics that are useful at multiple levels.

Multiple workshop participants emphasized the value of identifying metrics that are important, comprehensive, and meaningful. For example, an important metric is one that has an impact on health, health care, or cost and is tied to overarching goals for the health or health care system, such as reducing disparities. Some attendees noted that useful measures are as comprehensive as possible and bundle individual metrics to describe meaningful concepts in health, health care, or cost. Such a composite measure could include multiple process or intermediate outcome measures to assess progress on important health conditions, and it can capture broader impacts that narrow measures miss. For example, a narrow prescription drug cost metric would show higher costs as adherence improves, while a broader cost measure would include the potential savings from better adherence, such as reduced readmissions or lower hospital costs.

One additional set of criteria centered on the actionability of a measure, defined as how well the actions, policies, or incentives implemented by individuals or organizations can influence the metric. Several attendees noted that actionability depends on the availability of benchmark or comparison data, which allow the measured individuals or organizations to make sense of the measurement results. Another factor supporting actionability is an evidence base establishing the reliability and validity of the metric, which helps ensure that the measure is consistent across individuals and organizations and that it assesses the intended target.

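The adherence example suggests how a composite cost measure might be assembled. The sketch below is a hypothetical illustration (the component measures and dollar figures are invented, not a workshop-endorsed specification) of bundling narrow per-member cost metrics into one broader figure that captures offsetting savings.

```python
# Hypothetical sketch: a composite cost measure bundling narrow components.
# Component names and numbers are invented for illustration.

def composite_cost_per_member(components: dict) -> float:
    """Sum narrow per-member cost components into one broader measure."""
    return sum(components.values())

# A narrow drug-cost metric viewed alone rises as adherence improves...
low_adherence = {"drug_costs": 800.0, "readmission_costs": 2400.0, "inpatient_costs": 5200.0}
high_adherence = {"drug_costs": 1100.0, "readmission_costs": 1500.0, "inpatient_costs": 4600.0}

print("drug costs alone:", low_adherence["drug_costs"], "->", high_adherence["drug_costs"])
# ...while the composite captures the offsetting savings elsewhere.
print("composite:", composite_cost_per_member(low_adherence), "->",
      composite_cost_per_member(high_adherence))
```
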
How Might Overlaps Be Resolved Among Candidate Measures?

In addition to examining potential metric categories, the workshop breakout groups also considered example metrics for each category. Examples that were mentioned during the breakout groups and the subsequent workshop discussions are presented in Table 8-3 below. These example metrics vary in their specificity, comprehensiveness, and actionability. Some workshop participants noted conceptual overlaps between the metric categories, such as between the example metrics for effectiveness in the health care domain and the metrics for current health in the population health domain. Resolving these overlaps will require a deeper examination of the concepts underlying each domain and of the actions that could affect a given metric. While the workshop discussions identified some potential metrics, they underscored the need for further deliberations to develop a full core metric set.

Which Measures Are Most Actionable for Progress?

Metrics do not exist in a vacuum; their value depends on their ultimate use. For example, a metric that aids an organization in quality improvement efforts may not be appropriate when tied to payment for health care services. This fact adds complexity to metric development and selection, as metrics are used today in many ways, including

  • Quality improvement (e.g., organizational, regional, state, national levels)
  • Payment and purchasing decisions (e.g., pay for performance, tiered networks, state exchanges)
  • Reporting and transparency (e.g., internal, clinical practice feedback, rankings, public, exchanges, surveillance)
  • Regulation (e.g., professional certification, facility accreditation)
  • Funding (e.g., organizational and governmental budgets, philanthropy)
  • Scientific and clinical research (e.g., effectiveness research)

There are several challenges in the routine implementation of core measure sets. One issue that several participants raised was defining the population, such as determining whether that population consists of the panel of patients seen by a clinical provider or health care delivery organization, all of the people in a given geographic region, or another grouping of individuals. The choice of population affects which measures are possible to implement and the ultimate use of the measures. If the population definition is overly restrictive, it may not be possible to accurately understand how performance and health outcomes vary for different subpopulations. Furthermore, restrictive population definitions may cause disconnects between measures calculated for the clinical care system and those calculated for the public health system (Gourevitch et al., 2012). Beyond defining the population, additional challenges occur when payment is linked to measurement, as this makes the measure high-stakes and increases attention on the measure's limitations in accuracy or comprehensiveness.

Another implementation issue is how to account for the organizational and social factors necessary for successful measurement strategies. These factors include organizational leadership, culture, the business case or return on investment, knowledge management infrastructure, and workforce competencies. For example, one participant noted that some organizational cultures view measurement and data as a weapon, while others promote the view that regular feedback is a welcome opportunity to improve. Several participants noted that these organizational and social factors can determine whether a metric set actually leads to improvement and is used throughout the health and health care system.

Another implementation question that workshop attendees highlighted is how to roll up metrics from smaller to larger levels of aggregation, such as from local to regional to national levels. One suggested method was to use a dashboard of key metrics to track progress, with a series of more specific measures attached to each dashboard measure. These more specific measures need to be associated with improvement of the dashboard metrics and could be operationalized at local levels. For example, some participants in the cost measures breakout group noted that overall health care spending measures need to be the goal, but progress at the local level will depend on specific utilization measures, such as emergency department use or the utilization of advanced imaging technologies. Other participants noted that families of measures can be useful for ensuring that metrics remain useful at different levels of aggregation.

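The dashboard roll-up can be made concrete with a small sketch. The following hypothetical Python fragment (the localities, populations, and rates are invented) aggregates a local utilization measure into regional and national dashboard values, weighting each locality by its population.

```python
# Hypothetical sketch: rolling a local utilization metric up to higher levels.
# Localities, populations, and rates are invented for illustration.

localities = [
    {"region": "Northeast", "population": 120_000, "ed_visits_per_1000": 410},
    {"region": "Northeast", "population": 80_000, "ed_visits_per_1000": 520},
    {"region": "South", "population": 200_000, "ed_visits_per_1000": 480},
]

def rollup(units, level_key):
    """Population-weighted average of the local rate at a higher level."""
    totals = {}
    for u in units:
        key = u[level_key] if level_key else "national"
        pop, weighted = totals.get(key, (0, 0.0))
        totals[key] = (pop + u["population"],
                       weighted + u["population"] * u["ed_visits_per_1000"])
    return {key: weighted / pop for key, (pop, weighted) in totals.items()}

print("regional dashboard:", rollup(localities, "region"))
print("national dashboard:", rollup(localities, None))
```
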
TABLE 8-3 Example Metrics for Describing the Core Measurement Needs of the Three-Part Aim

Population Health
  Current health
    • Length of life: mortality, life expectancy
    • Quality of life: morbidity, functional status, indicator diseases, self-reported health status
    • Composite measures: QALY, HALY, DALY
  Contributors and risks to future health
    • Extrinsic risks: healthy communities, physical and social environment
    • Intrinsic risks: health risks, health behaviors

Health Care
  Patient-centered
    • Patient engagement and experience, HCAHPS metrics
    • Shared decision making
    • Patient–clinician communication
    • Self-management
    • Timeliness and access to needed care
  Effective
    • Overall mortality, mortality amenable to health care (risk adjusted), overall modifiable risk of death
    • Functional status improvements/changes from treatments and interventions, changes in modifiable risk factors, patient-reported outcomes, clinician-reported outcomes
    • Disease-specific outcome targets, time to recovery or time to return to function
    • Adherence to clinical guidelines, appropriateness of care
  Safe
    • Medical errors, health care–associated infections, overuse/underuse/misuse
    • Composite medical harm measure (including medical errors and health care–associated infections)
  Value and efficiency
    • Utilization: ambulatory care–sensitive admissions and readmissions, care performed in the most appropriate setting
    • Effective management
  Coordination and communication
    • Timeliness
    • Care transitions
    • Information sharing and communication among the care team (including patient and family)
    • Medication reconciliation

Cost
  Resource use and expenditures
    • Actual per capita expenditures for health care (such as a risk-adjusted Total Cost of Care metric) across all conditions
    • Percent of national gross domestic product and/or federal government health care spending as a percent of total federal government spending
  Utilization
    • Emergency room use, advanced imaging services, and other services, treatments, interventions, and diagnostics
  Affordability
    • Percent of household spending on health, premiums

Cross-cutting (all domains): equity and variation

NOTE: DALY = disability-adjusted life year; HALY = health-adjusted life year; HCAHPS = Hospital Consumer Assessment of Healthcare Providers and Systems; QALY = quality-adjusted life year.

What Are the Available Data Sources at Each Assessment Level?

A key practical consideration that was underscored frequently is identifying the data used to populate the core metric set. Various data sources can be leveraged to support measurement, and choosing among them can be a challenge. Data sources vary in the population of individuals included, the purpose for which the data are collected, and the process for collecting them, and these variations affect whether a given source can serve different purposes. The current primary data sources for metrics include

  • Patient-level clinical care data (e.g., electronic health records, registries)
  • Individual-level social data (e.g., social and economic status; demographics; access to social and economic services, children and family services, elderly services, and home health services)
  • Population-level clinical data (e.g., cancer, chronic condition, and screening registries)
  • Population-level safety data (e.g., adverse event reporting registries)
  • Vital statistics (e.g., local, state, and national vital statistics registries)
  • Claims data (e.g., the Medicare claims database, private payer claims databases, multi-payer claims databases)
  • Patient surveys (e.g., experience, health status)
  • Population surveys (e.g., U.S. Census surveys)

What Are the Data Infrastructure Needs?

A prerequisite to assessment is the ability to routinely capture the key data elements that populate core measures and to exchange those elements across data systems. Although progress is being made, there is a significant gap between current capabilities and the necessary data support. For example, despite the investment of significant resources, there is a patchwork of independent electronic health record systems that do not capture the necessary key data elements in consistent formats and do not readily exchange those elements across systems (Chan et al., 2010; Gold et al., 2012; Kern et al., 2013; Parsons et al., 2012). The country faces the possibility of a disjointed digital infrastructure that will meet the needs of neither individuals nor organizations, and that will not establish the capacity for regular assessment across the full landscape of organizations and individuals involved in the health and health care systems (IOM, 2011, 2012).

Beyond the technical infrastructure needs, these data systems must be considered in light of their usability for all people, from patients and families to clinicians. For example, health information technologies and publicly reported information will succeed only if patients are engaged, if the tools are accessible to patients with a range of technological skills, and if patients understand how to apply the tools to their own health and care decisions. Similarly, health information technology will succeed for clinicians only if it accounts for their workflow and assists them in care.

In addition, several policy issues can limit progress. Several participants outlined the regulatory challenges that can prevent access to and use of data for measurement, most notably the real and perceived barriers associated with the Health Insurance Portability and Accountability Act. Another policy challenge that was highlighted is risk adjustment, which is difficult because of the number of potential methods for adjusting measures and because of the role that risk adjustment plays in promoting buy-in among clinical providers.

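One common family of risk-adjustment methods, offered here as context rather than as a workshop recommendation, is indirect standardization, in which observed outcomes are compared with the outcomes expected given each patient's risk. The sketch below is a deliberately simplified, hypothetical illustration (the risk values are invented stand-ins for a real risk model's predictions); actual adjustment models are far richer, which is part of why the choice of method is contested.

```python
# Hypothetical sketch: indirect standardization as simple risk adjustment.
# The "expected_risk" values stand in for a real risk model's predictions.

patients = [
    {"died": 0, "expected_risk": 0.02},  # low-risk patient
    {"died": 1, "expected_risk": 0.30},  # high-risk patient
    {"died": 0, "expected_risk": 0.10},
    {"died": 1, "expected_risk": 0.25},
]

def observed_to_expected_ratio(cases):
    """O/E ratio: values below 1.0 mean fewer deaths than the case mix predicts."""
    observed = sum(c["died"] for c in cases)
    expected = sum(c["expected_risk"] for c in cases)
    return observed / expected

print(f"O/E ratio: {observed_to_expected_ratio(patients):.2f}")
# 2 observed deaths vs. 0.67 expected -> roughly 2.99
```
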
How Can the Metrics and the Process Be Most Future-Oriented?

One basic issue that was raised several times at the workshop is the tension between starting with available metrics and improving them over time versus ensuring a certain level of metric quality before widespread deployment. Participants favoring the former approach highlighted the large number of measures currently available, the urgent need for progress, and the fact that implementation can uncover logistical issues that a planning process may not envision. Those preferring the latter approach noted that inaccurate measures can damage the credibility of the measurement enterprise, that a process for clinician buy-in is important to ensure that metrics are accepted and used, and that incorrect measures can be unfair in high-stakes uses such as payment and regulation. Resolving this tension is important to progress in implementing a core set.

Another issue affecting the ability of measures to improve over time is technological progress. For example, emerging devices can continually assess specific aspects of an individual's physical state, allowing a more complete picture of health status and of the impact of various interventions. The expected flood of new data from these personal devices will have implications for what is measurable and for the actionability of different measures. Yet new challenges will also arise, such as the interoperability of different devices, the capacity to analyze and use these new data, and the privacy and security of the data generated. Any measurement initiative must therefore consider how measures will be updated and implemented as technology advances.

One theme that arose during the discussion was how to ensure that core metric sets are forward looking and continuously learn and improve. Participants noted the need for a process to eliminate measures that are no longer helpful, such as those that have achieved near-universal compliance; without a process to prune unneeded metrics, the measurement burden will only continue to increase. Several workshop attendees underscored the need for measurement itself to become a learning system, so that it improves over time and takes advantage of advances in science and technology. This will help ensure that measurement continually promotes progress in the health of the population, the quality of health care, and the overall value of the health and health care system.

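A pruning process can be stated as a simple decision rule. The sketch below is a hypothetical illustration only (the 95 percent retirement threshold and the performance histories are invented, not a workshop recommendation) of flagging measures for retirement once performance is near-universal and the measure no longer discriminates.

```python
# Hypothetical sketch: flagging "topped-out" measures for possible retirement.
# The threshold and performance histories are invented for illustration.

RETIREMENT_THRESHOLD = 0.95  # assumed cutoff for near-universal compliance

measure_history = {
    "aspirin_at_arrival": [0.92, 0.96, 0.98, 0.99],
    "bp_control": [0.55, 0.58, 0.61, 0.63],
}

def flag_for_retirement(history, years=2, threshold=RETIREMENT_THRESHOLD):
    """Flag measures whose performance stayed above the threshold recently."""
    return [name for name, rates in history.items()
            if len(rates) >= years and all(r >= threshold for r in rates[-years:])]

print("candidates for retirement:", flag_for_retirement(measure_history))
# -> ['aspirin_at_arrival']; bp_control still has headroom and is kept.
```
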
REFERENCES

Chan, K. S., J. B. Fowles, and J. P. Weiner. 2010. Review: Electronic health records and the reliability and validity of quality measures: A review of the literature. Medical Care Research and Review 67(5):503–527.

Gold, R., H. Angier, R. Mangione-Smith, C. Gallia, P. J. McIntire, S. Cowburn, C. Tillotson, and J. E. DeVoe. 2012. Feasibility of evaluating the CHIPRA care quality measures in electronic health record data. Pediatrics 130(1):139–149.

Gourevitch, M. N., T. Cannell, J. I. Boufford, and C. Summers. 2012. The challenge of attribution: Responsibility for population health in the context of accountable care. American Journal of Public Health 102(Suppl 3):S322–S324.

IOM (Institute of Medicine). 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press.

IOM. 2006. Performance Measurement: Accelerating Improvement. Washington, DC: The National Academies Press.

IOM. 2010. For the Public's Health: The Role of Measurement in Action and Accountability. Washington, DC: The National Academies Press.

IOM. 2011. Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care: Workshop Summary. Washington, DC: The National Academies Press.

IOM. 2012. Digital Data Improvement Priorities for Continuous Learning in Health and Health Care: Workshop Summary. Washington, DC: The National Academies Press.

Kern, L. M., S. Malhotra, Y. Barron, J. Quaresimo, R. Dhopeshwarkar, M. Pichardo, A. M. Edwards, and R. Kaushal. 2013. Accuracy of electronically reported "meaningful use" clinical quality measures: A cross-sectional study. Annals of Internal Medicine 158(2):77–83.

Parsons, A., C. McCullough, J. Wang, and S. Shih. 2012. Validity of electronic health record–derived quality measurement for performance monitoring. Journal of the American Medical Informatics Association 19(4):604–609.
