4
Adopting a More Quantitative and Transparent Measure Selection Process

The IOM Future Directions committee recommends changes to AHRQ’s measure selection process in order to focus the outcome of the process on the central aspirations of quality improvement—improving health, value, and equity—by closing performance gaps in health care areas likely to have the greatest population impact, be most cost effective, and have a meaningful impact on eliminating disparities. In order to enhance the transparency of AHRQ’s process for measure selection, a Technical Advisory Subcommittee for Measure Selection is recommended under the existing AHRQ National Advisory Council for Healthcare Research and Quality (NAC) to advise on ranking measures for selection, inclusion in the national healthcare reports, and retirement. As part of this process, this subcommittee should recommend strategies for the development and acquisition of new measures and data sources.

Conceptual models of improving health care quality and eliminating disparities include measurement and reporting as integral to achieving performance goals; performance improvement systems, in turn, depend on the quality of data to support measures (Berwick et al., 2003; Kilbourne et al., 2006; Langley et al., 1996). Over the past decade, growing attention to health care quality measurement has led to the generation of a large number of quality measures that are now available for use. Illustrating the magnitude of the universe of possible quality measures, the National Quality Measures Clearinghouse inventory now contains 1,475 potential quality measures (National Quality Measures Clearinghouse, 2009a,b). Likewise, as of October 2009, the National Quality Forum (NQF) maintained a list of 537 measures meeting its standards for endorsement (NQF, 2009b). The growth in the number of possibilities necessitates a critical assessment of how to prioritize among existing and future measures for use in the NHQR and NHDR. There have been calls to develop a parsimonious common set of measures to “serve policy and frontline information needs” (McGlynn, 2003, p. I-39).

Since 2003, AHRQ has refined its measure set for the national healthcare reports and related products, and the measure set now includes approximately 260 individual measures, including a set of 46 core measures that are more prominently featured in the body of the 2008 NHQR and NHDR. The larger set of 260 measures is featured in online products such as the Web-based State Snapshots, NHQRDRnet, and appendixes to the NHQR and NHDR. The selection of measures for the national healthcare reports by AHRQ has been influenced by the availability of national data sources internal to HHS.

AHRQ has been urged to add more performance measures to the NHQR, NHDR, and related products, and has asked the IOM Future Directions committee for guidance on prioritization among measures so that new measures could be added and highlighted in the reports while other measures could receive less emphasis or be removed entirely from AHRQ’s tracking. AHRQ regards the production of the NHQR and NHDR as having reached capacity given the agency’s current resources for measurement reporting, analysis, and presentation. AHRQ staff has deliberated about retiring some measures to allow for the incorporation of new measurement domains or measures, but the agency has found it difficult to retire measures because of advocacy, both internal and external to HHS, for each of the current measures.

The Future Directions committee reviewed AHRQ’s existing measure selection processes and criteria to shed light on how these processes might be improved, particularly in support of the committee’s overall aim to have the national healthcare reports focus on the areas that matter most and to encourage various stakeholders to take action on the highest impact areas for quality improvement and disparities elimination. In this chapter, the committee describes how AHRQ’s measure selection process might be enhanced by selecting measures that support national priority areas for health care quality improvement (see Chapter 2), by incorporating concepts of value and equity (see Chapter 3), and by applying more explicit quantitative techniques in the selection process. Taking these steps would help direct attention to those performance areas with the greatest potential impact to transform health care quality for the country and for specific populations, and identify key areas for measure and data source development.
AHRQ’S APPROACH TO SELECTING MEASURES

The measure selection process for the national healthcare reports has been undertaken primarily by AHRQ staff in consultation with an HHS Interagency Workgroup consisting of program and data experts, as well as with some limited external feedback from AHRQ’s NAC.

AHRQ’s Initial Measure Selection Process and Criteria

AHRQ’s initial selection approach for measures in the NHQR and NHDR began with a call for measures involving all HHS agencies, as well as substantial input from private-sector entities that were solicited by the IOM during the research for its 2001 Envisioning the National Healthcare Quality Report (IOM, 2001). More than 600 candidate measures were generated through the call (AHRQ, 2003a). Subsequently, the HHS Interagency Workgroup for the NHQR/NHDR reduced the 600 candidate measures for tracking to about 140: (1) by applying three basic criteria recommended by the IOM in 2001—importance, scientific soundness, and feasibility (see discussion in Box 4-1)—to each individual measure; (2) by mapping potential measures to the elements of the earlier quality framework (effectiveness, safety, timeliness, and patient-centeredness); and (3) by selecting clinically important conditions within effectiveness measures (AHRQ, 2003a). During the summer of 2002, public comments were solicited from hospitals, providers, researchers, and others via a public hearing conducted by the National Committee on Vital and Health Statistics (NCVHS) and through a Federal Register notice (AHRQ, 2002; NCVHS, 2002). As the HHS Interagency Workgroup refined the final package of measures for the NHQR and NHDR, input was sought from the HHS Data Council, technical and policy experts within AHRQ, and the Quality Interagency Coordination Task Force, which spanned several federal agencies (Veterans Affairs, Department of Defense, Federal Bureau of Prisons, and others).
A separate review process was held for home health measures, which were not included in the initial public review cycle (AHRQ, 2003b). As a result of this effort, the first edition of the NHQR published by AHRQ reported on 147 measures; of these, effectiveness measures (97 measures; 65 percent of the total) focused on the clinical conditions chosen for Healthy People 2010 (cancer, diabetes, end-stage renal disease, heart disease, HIV/AIDS, maternal and child health, mental health, respiratory disease, and nursing home and home health care) (AHRQ, 2003a; HHS, 2009b).

1 See http://www.quic.gov (accessed November 28, 2009) for a full list of member agencies. The HHS Data Council coordinates all health and non-health data collection and analysis activities of HHS, including an integrated health data collection strategy, coordination of health data standards, and health information and privacy activities. The HHS Data Council consists of senior-level officials designated by their agency or staff office heads, the HHS Privacy Advocate, and the Secretary’s senior advisor on health statistics. It is co-chaired by the Assistant Secretary for Planning and Evaluation and a rotating Operating Division (OpDiv) head; AHRQ is the current OpDiv co-chair. For more information, see http://aspe.hhs.gov/datacncl/ (accessed May 14, 2010).

BOX 4-1
The IOM 2001 Recommendations for Measure Selection Criteria for the NHQR and NHDR

In the IOM’s 2001 report Envisioning the National Healthcare Quality Report, three major criteria were proposed for measure selection:

1. Importance of what is being measured
• Impact on health. What is the impact on health associated with this problem?
• Meaningfulness. Are policy makers and consumers concerned about this area?
• Susceptibility to being influenced by the health care system. Can the health care system meaningfully address this aspect or problem?

2. Scientific soundness of the measure
• Validity. Does the measure actually measure what it is intended to measure?
• Reliability. Does the measure provide stable results across various populations and circumstances?
• Explicitness of the evidence base. Is there scientific evidence available to support the measure?

3. Feasibility of using the measure
• Existence of prototypes. Is the measure in use?
• Availability of required data across the system. Can information needed for the measure be collected in the scale and time frame required?
• Cost or burden of measurement. How much will it cost to collect the data needed for the measure?
• Capacity of data and measure to support subgroup analyses. Can the measure be used to compare different groups of the population?

The 2001 IOM report stipulated that it is desirable for a measure to meet all 10 elements within the three overall criteria, but noted that it is not required that all 10 apply in order for a given measure to be considered for inclusion in the NHQR and NHDR. The 2001 IOM committee indicated that the three criteria, as listed above, provide a hierarchy by which measures should be considered, with priority to be given to measures evaluated for importance and scientific soundness and then by feasibility. For example, the committee stated:

Measures that address important areas and are scientifically sound, but are not feasible in the immediate future, deserve potential inclusion in the data set and further consideration. However, measures that are scientifically sound and feasible, but do not address an important problem area, would not qualify for the report regardless of the degree of feasibility or scientific soundness.

SOURCE: IOM, 2001, pp. 83 and 87.

AHRQ’s Current Measure Selection Process and Criteria

AHRQ reduced the number of measures presented in subsequent editions in response to criticisms that the first edition was unwieldy (Gold and Nyman, 2004). The intent was to “highlight measures with in-depth analysis, rather than broad, but sparse, coverage of all 179 measures” (AHRQ, 2004).2 That basic format is maintained by AHRQ today, with a set of approximately 46 core measures presented in the body of the reports and more detailed tables available online for a larger set of measures.

2 Additional measures were added to the initial full measure set.

To select the 46 core measures for the NHQR and NHDR, AHRQ staff and the HHS Interagency Workgroup prioritized measures by the three original IOM criteria and several additional ones. Usability was added as a new primary criterion—one that is also articulated by NQF in considering the suitability of any measure as a voluntary consensus standard.3 AHRQ’s current criteria and principles for prioritizing measures in the NHQR and NHDR are summarized in Box 4-2. AHRQ gives greater weight to “primary criteria” than to “secondary criteria,” and the “balancing principles” were added to ensure that the final set of core measures covered a variety of conditions and sites of care.

AHRQ also emphasizes health care process measures over health outcome measures because the focus of the reports is health care delivery and because outcome measures are often too distal or rare (e.g., mortality) to be linked to the delivery of a particular service. Whenever a close relationship is deemed to exist (e.g., use of colorectal cancer screening to presentation with more advanced colorectal cancer), AHRQ has tried to present paired process and outcome measures. The Future Directions committee recognizes the limitations of process measures, as does AHRQ, and encourages AHRQ to continue to report paired measures whenever possible. Additionally, the committee encourages AHRQ to develop or adopt outcome measures, as they hold great interest for policy makers, particularly outcomes associated with the implementation of specific programs.
For example, AHRQ already reports on receipt of care for heart attack and inpatient mortality, but it could also report related information on outcomes, such as: “Since the beginning of public reporting on readmission rates for AMI by the Centers for Medicare and Medicaid Services [CMS], the readmission rates have been reduced X percent, yielding a potential savings to the federal Medicare budget of $Y.”

3 The measure evaluation criteria used by NQF for measure endorsement are available at http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008-08-28Final.pdf?n=4701 (accessed March 26, 2009). There is substantial overlap in the criteria for measure endorsement and selection to date, whether past IOM recommendations or current AHRQ processes for selection.

Assessing Importance of Topic Areas for Inclusion

Over time, AHRQ has taken stock of which health conditions or intervention topic areas warranted consideration within the NHQR and NHDR to determine whether there should be measurement additions or deletions. AHRQ provided the Future Directions committee with a side-by-side comparison of the specific factors considered in identifying important topics for national reporting (Appendix E). These factors include: the leading causes of death, disability, or activity limitation; principal hospital diagnoses; costly conditions in general and among hospitalizations specifically; areas with notable Black-White racial and educational-level disparities measured in life years lost; other significant racial and ethnic disparities; and priority areas named in several advisory reports from the IOM and HHS (e.g., HHS strategic plans; the 2003 IOM report Priority Areas for National Action: Transforming Health Care Quality)4 (IOM, 2003). From these sources, AHRQ has identified nationally relevant topics not yet reported in the NHQR, NHDR, or related products. For example, AHRQ added measures, such as obesity and substance abuse measures,5 to the 2008 reports.

4 Similarly, NQF uses factors such as “affects large numbers, leading cause of morbidity/mortality, high resource use (current and/or future), severity of illness, and patient/societal consequences of poor quality” in determining the importance of a measure for endorsement (NQF, 2009a).

5 Obesity-related measures include ones addressing whether adults with obesity ever received advice from a health provider to exercise more, or whether children received advice from a health provider about healthful eating or being physically active. Substance abuse measurement relates to the number of persons age 12 years and over who needed treatment for illicit drug use and received such treatment at a specialty facility in the past 12 months.

AHRQ’s NAC provides advice on content. The NAC and an existing subcommittee consisting of a few NAC members with an interest in the NHQR and NHDR serve as a sounding board for AHRQ staff and provide input to the AHRQ report development process (e.g., recommendations to improve dissemination and to pay increased attention to child health measures; the need to close measurement gaps and set priorities; the need to address cost, waste, and value issues). Thus, the selection of new measures appears to be driven primarily by the need to address new topic areas based on expert opinion (e.g., IOM, NAC, HHS Interagency Workgroup), some general quantitative information about the overall burden of a condition on society and individuals, and the availability of data to report on a topic. In 2008, the NAC observed that the caliber of the NHQR and NHDR has improved with each updating (AHRQ, 2008a), and the Future Directions committee agrees.

BOX 4-2
AHRQ’s Current Criteria and Principles for Prioritizing Measures

Primary Criteria
1. Importance
• impact on health (e.g., clinical significance, prevalence);
• meaningfulness; and
• susceptibility to being influenced by the health system (e.g., high utility for directing public policy, and sensitive to change).
2. Scientific Soundness (assumed because AHRQ only uses consensus-based endorsed measures).
3. Feasibility
• capacity of data and measure for subgroup analysis (e.g., the ability to track multiple groups and at multiple levels so a number of comparisons are possible);
• cost or burden of measurement;
• availability of required data for national and subgroup analysis; and
• measure prototype in use.
4. Usability: easy to interpret and understand (methodological simplicity).
5. Type of Measure: evidence-based health care process measures favored over health outcome measures because most outcome measures were too distal to an identified intervention.

Secondary Criteria
• applicable to general population rather than unique to select population;
• data available regularly/data available recently;
• linkable to established indicator sets (i.e., Healthy People 2010 targets); and
• data source supports multivariate modeling (e.g., socioeconomic status, race, and ethnicity).

Balancing Principles
• balance across health conditions;
• balance across sites of care;
• at least some state data; and
• at least some multivariate models.

SOURCE: AHRQ, 2005.

IMPROVING MEASURE SELECTION

The Future Directions committee concludes that for the NHQR and NHDR to be more strategic and address the most important opportunities for concerted national action, AHRQ’s approach to measure selection needs to be modified. The Future Directions committee recommends broadening the range of input that AHRQ currently receives, making the process transparent, and incorporating a more systematic and quantitative process for ranking measures. The proposed selection process looks more closely at the gap between current and desired performance levels and the relative value of bridging that gap while also taking equity into account. This is a somewhat different approach for AHRQ, one that focuses on closing the quality gap rather than simply selecting conditions and measures based on the highest prevalence and costs.

Focusing on High-Impact Areas

The committee’s definition of high-impact areas for quality improvement builds on previous IOM and NQF guidance on determining what constitutes the criteria of importance in measure selection and endorsement (IOM, 2001, p. 83; NQF, 2009a). Specifically, the committee’s definition refocuses how AHRQ evaluates “impact on health” for the purposes of selecting measures for the NHQR and NHDR.

High-impact areas for health care quality improvement: Ideally, “high-impact” quality improvement and disparity reduction areas would be assessed by quantitatively ranking the population health impact of closing the gap between current performance and desired levels of performance (such as 100 percent of persons in need achieving guideline-recommended care). These could be assessed for the entire population of the nation and/or for specific priority populations when data allow.

The committee’s advice should not be construed to mean that an area would be considered a high-impact area solely based on how large the gap is between current performance and desired performance levels (e.g., a spread of 25 percentage points is not automatically more befitting of attention than one with a spread of 10 percentage points); closure of a smaller gap could be ranked higher than a larger gap if its closure would yield a greater health outcome for the nation’s population. While the committee members’ emphasis is on quantitative assessment, they are cognizant that data limitations will at times require expert opinion to qualitatively rank measures, particularly in the absence of detailed data to allow assessment of equity considerations for different population groups. In these cases, a qualitative assessment of the impact of the intervention targeted by the measure would be combined with a quantitative assessment of the size of the gap or the disparity in order to rank the relative importance of the measure.

The NAC has observed that health care quality measurement in the United States has been “incremental and evolutionary,” unfolding in the absence of a unified performance measurement strategy backed by a plan to obtain data to support key measures. The Future Directions committee hopes that an additional outcome of its proposed measure selection process would be the identification of measure and data needs and the formulation of a strategy for their development.

For the reasons just cited and discussed further below, the committee recommends that AHRQ establish a new Technical Advisory Subcommittee on Measure Selection that can advise the NAC and AHRQ on performance measure selection:

Recommendation 3: AHRQ should appoint a Technical Advisory Subcommittee for Measure Selection to the National Advisory Council for Healthcare Research and Quality (NAC). The technical advisory subcommittee should conduct its evaluation of measure selection, prioritization, inclusion, and retirement through a transparent process that incorporates stakeholder input and provides public documentation of decision-making. This subcommittee should:

• Identify health care quality measures for the NHQR and NHDR that reflect and will help measure progress in the national priority areas for improving the quality of health care and eliminating disparities while providing balance across the IOM Future Directions committee’s revised health care quality framework.
• Prioritize existing and future health care quality measures based on their potential to improve value and equity.
• Recommend the retirement of health care quality measures from the NHQR and NHDR for reasons including but not limited to the evolution of national priorities, new evidence on the quality of the measure, or the attainment of national goals.
• Recommend a health care quality measure and data source development strategy for national reporting based on potential high-impact areas for inclusion in AHRQ’s national quality research agenda.

The committee’s rationale for the establishment of the proposed NAC Technical Advisory Subcommittee for Measure Selection is discussed below. Subsequent sections of this chapter discuss desirable attributes of transparency in AHRQ’s process for selecting performance measures, a stepwise process for applying qualitative and quantitative criteria in prioritizing measures, and quantitative methods that have potential applicability to the process for assessing value and equity.
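The gap-closure ranking described above can be illustrated with a toy calculation. This is a hypothetical sketch, not AHRQ's or the committee's method: the measure names, performance rates, eligible populations, and per-person benefit weights are all invented for illustration. It scores each measure by the estimated population health gained if its performance gap were fully closed, so a measure with a smaller gap can outrank one with a larger gap when closing it yields more total health, exactly the point the committee makes.

```python
# Hypothetical sketch of gap-based measure ranking (all data invented).
# Score = (desired rate - current rate) * eligible population
#         * assumed per-person health benefit of newly receiving the service.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    current_rate: float        # observed performance, 0-1
    desired_rate: float        # target, e.g., 1.0 = all persons in need served
    eligible_pop: int          # persons in need nationally (invented)
    benefit_per_person: float  # assumed health gain per newly served person

    def gap(self) -> float:
        # Performance gap between desired and current levels.
        return max(self.desired_rate - self.current_rate, 0.0)

    def impact_of_closure(self) -> float:
        # Estimated population health gained if the gap were fully closed.
        return self.gap() * self.eligible_pop * self.benefit_per_person

# Invented examples: a wide gap with a modest per-person benefit versus
# a narrower gap whose closure yields more total health.
measures = [
    Measure("counseling_measure", 0.60, 1.0, 2_000_000, 0.001),
    Measure("screening_measure",  0.85, 1.0, 5_000_000, 0.010),
]

ranked = sorted(measures, key=lambda m: m.impact_of_closure(), reverse=True)
for m in ranked:
    print(f"{m.name}: gap={m.gap():.2f}, impact={m.impact_of_closure():,.0f}")
```

In this sketch the screening measure's 15-percentage-point gap outranks the counseling measure's 40-point gap because its closure yields a larger estimated health gain, mirroring the committee's caution that gap size alone should not drive selection. An equity-sensitive variant could compute the same score separately for each priority population subgroup.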

Establishing an Entity for Measure Selection

The Future Directions committee considered several organizational alternatives to take on the responsibility of measure selection, but ultimately recommended the formation of the NAC Technical Advisory Subcommittee for Measure Selection.

Retaining the Status Quo

Retaining the status quo, with responsibility resting with AHRQ staff and HHS Interagency Workgroup members, is considered less desirable, even after possibly supplementing the current process with opportunities for public input and comment, because the process would likely retain its current limitations. The status quo did not appear tenable because AHRQ and HHS Interagency Workgroup members have already acknowledged the difficulty of being able to prioritize and eliminate health care quality measures through the current process (other than plans to semi-retire from the 2009 reports those process measures that have a greater than 95 percent achievement rate [AHRQ, 2008a]).6

Furthermore, a critical parallel can be drawn to the lessons learned from Healthy People 2010. While Healthy People 2010 contains too many “primarily disease-oriented” objectives, it is nonetheless a “challenge to move away from a biomedical model because it is easier to create specific and measurable health targets that are disease specific,” “funding for many of the possible interventions is disease-specific,” and there are “strong constituencies,” both internal and external, for featuring those diseases (Fielding, 2009). Currently, the NHQR and NHDR are heavily weighted to the clinical conditions in Healthy People 2010, and a Future Directions committee concern is that some of the NHQR and NHDR content may be a product of this same history.

Ultimately, the committee felt strongly that the decision-making process about measures needed to be a public one rather than internal to the HHS Interagency Workgroup and AHRQ staff so that decisions are more transparent and justified to those who advocate for the inclusion or exclusion of specific measures. AHRQ could improve the transparency of its existing practices by (1) publicizing on its website the documentation supporting decisions behind the agency’s selection of measures and (2) establishing a public comment period on those decisions. However, the Future Directions committee also believes that AHRQ needs more focused external support to make difficult decisions when ranking among measures, particularly as the selection process may result in a substantial change in the portfolio of measures over time. Furthermore, it needs the technical, quantitative expertise to evaluate candidate measures.

Changing the Status Quo

The NAC provides AHRQ with advice on “the most important questions that AHRQ’s research should address in order to promote improvements in the quality, outcomes, and cost-effectiveness of clinical practice” (AHRQ, 2010). The committee considered whether the existing NAC could perform the necessary assessment of performance measures recommended by the Future Directions committee and concluded that it could not.

The NAC’s advice is solicited for all of AHRQ’s activities and is not solely directed to the content and presentation of the NHQR and NHDR (AHRQ, 2009b). Private-sector members are appointed for three-year terms, and members of seven federal agencies also serve in an ex officio capacity. The NAC currently meets three times a year for one day each time. The NAC, as currently constituted, does not have sufficient technical expertise to systematically apply constructs of clinically preventable burden (CPB), cost effectiveness (CE), and other valuation techniques to measurement prioritization and selection. Adequate expertise is necessary to evaluate any staff or contract work that supports the evaluation exercises; other prioritization and evaluation processes for guidelines and measures have found the need for such technical expertise on the decision-making body itself when employing rigorous grading of recommendations (Baumann et al., 2007; Guyatt et al., 2006). Additionally, the workload associated with quality measure selection and prioritization would be substantial and could interfere with current NAC duties.

A new body to advise AHRQ with no affiliation with the NAC could be formed with the requisite expertise, but this approach raised concerns about lines of communication with AHRQ and disengagement from AHRQ’s overall portfolio of work. Instead, building on precedent, the committee decided to recommend a technical advisory subcommittee to the NAC.

6 Personal communication, Ernest Moy, Agency for Healthcare Research and Quality, October 9, 2009.

Proposed NAC Technical Advisory Subcommittee for Measure Selection

The recommended NAC Technical Advisory Subcommittee for Measure Selection would differ from the current informal NAC subcommittee that provides general advice on the NHQR and NHDR. The current subcommittee is made up of NAC members and has limited face time with AHRQ staff (e.g., approximately one hour prior to the overall NAC meeting). The Technical Advisory Subcommittee for Measure Selection should have a more formal structure and will need more days per year to do its work, as well as the ability to commission and fund studies through AHRQ to support its deliberations.

A precedent for this more formal relationship is the NAC Subcommittee on Quality Measures for Children’s Healthcare in Medicaid and Children’s Health Insurance Programs that was formed for a specific task—namely, the identification of an initial core measure set for children under the Children’s Health Insurance Program Reauthorization Act.7 This NAC subcommittee includes two members from the NAC but meets separately from the NAC for detailed working sessions. The relationship of the NAC Subcommittee on Quality Measures for Children’s Healthcare in Medicaid and Children’s Health Insurance Programs is shown in Figure 4-1, and the Future Directions committee envisions the same relationship for the NAC Technical Advisory Subcommittee for Measure Selection for the NHQR and NHDR. Other NAC subcommittees have previously been formed for specific substantive tasks (e.g., safety).
Individuals chosen to serve on the proposed subcommittee should include people with responsibilities for performance measurement and accountability; experts in measure design and data collection; health services researchers; and subject matter experts in applying quantitative techniques to evaluate gaps between current and desired performance levels, and on issues of disparities, economics, and bioethics. The subcommittee should ensure that membership accounts for both consumer and provider perspectives. A subject matter expert in disparities need not be limited to health services researchers but could also include representation, for example, from communities of color to ensure sensitivity to the concerns of smaller population groups when determining high-impact areas. It would also be useful to have an individual with expertise in quality improvement in fields other than health care to share the challenges faced and how they were overcome. The committee believes that the NAC Subcommittee for Measure Selection should have approximately 10 to 15 persons in order to encompass all of these areas of expertise. The emphasis in the skill set of the subcommittee is technical expertise; the NAC will balance this out with its broader stakeholder representation. The NAC Technical Advisory Subcommittee for Measure Selection will need staff and resources to help carry out its work in quantifying which areas of measurement offer the greatest quality improvement impact, considering value (health outcome for resource investment or net health benefit)8 and population and geographic variability. The committee believes that AHRQ’s current NHQR and NHDR staff would play an important role in identifying content areas where there are actionable quality problems.
However, the committee concludes that AHRQ would need to supplement its current report staff with other in-house technical experts, and/or seek assistance from entities such as the AHRQ-sponsored Evidence-Based Practice Centers or other outside contractors. Such additional experts could provide much of the detailed quantitative analyses to support the measure prioritization and selection process for review by the subcommittee. The Evidence-Based Practice Centers might be an attractive model because they could develop a core of expertise and then gear up and down using contracting mechanisms according to the review workload (AHRQ, 2008b). Even with this additional expertise available, the NAC Technical Advisory Subcommittee for Measure Selection should include individuals with sufficient expertise to evaluate technical materials in areas such as cost-effectiveness analysis, statistics, assessment of clinically preventable burden, and valuation from a bioethics as well as an economic perspective.

7 Children’s Health Insurance Program Reauthorization Act, Public Law 111-3, 111th Cong., 1st sess. (January 6, 2009).
8 Health outcome for resource investment and net health benefit reflect quantitative concepts and are aspects of the concept of value discussed in Chapter 3.

FIGURE 4-1 AHRQ, NAC, and subcommittee roles (organizational chart showing the Secretary of HHS, the AHRQ Director, the AHRQ National Advisory Council on Healthcare Research and Quality, and the Subcommittee on Children’s Healthcare Quality Measures for Medicaid and CHIP Programs). SOURCE: AHRQ, 2009a.

The NAC Technical Advisory Subcommittee for Measure Selection might want to use a variety of approaches in soliciting measures for the reports and in refining its selection criteria. Possible approaches include (1) issuing a public call for measures for inclusion/exclusion and areas needing measurement development or refinement, as well as suggestions for data support; (2) commissioning studies (e.g., comparison of different valuation techniques on the prioritization scheme, development of systematic reviews of presumed high-impact areas, valuation of disparities); (3) forming strategic partnerships with entities doing measurement development and endorsement applicable to the reports (e.g., NQF, the National Committee for Quality Assurance, the National Priorities Partnership, the American Medical Association’s Physician Consortium for Performance Improvement, other HHS agencies such as CMS) to reduce duplication of effort; and (4) working with the Centers for Disease Control and Prevention (CDC) on those areas of health care improvement closely linked to priority public health outcomes and goals, as well as the similar application of valuation techniques recommended for community-based prioritization in conjunction with Healthy People 2020 (see Box 4-3 later in this chapter).

Enhancing Transparency in the Selection Process

The committee believes that transparency in AHRQ’s process for selecting performance measures for the NHQR and NHDR is extremely important.
In 2008, an IOM report stressed that transparency is a key to building public trust in decisions by having “methods defined, consistently applied, [and] available for public review so that observers can readily link judgments, decisions or actions to the data on which they are based” (IOM, 2008, p. 12). Transparent processes for decision-making bodies have been described as:

• documenting decision-making by providing a public rationale;
• reviewing the effects of the prioritization (Downs and Larson, 2007; Sabik and Lie, 2008); and
• establishing and applying clear principles and criteria on which prioritization is based.

Each of these aspects of transparency is examined in the discussion that follows. The NAC and its subcommittees—which would include the proposed NAC Technical Advisory Subcommittee for Measure Selection—conduct their business in public under the Federal Advisory Committee Act.9 The fact that these bodies operate in public under this law is an attractive facet of their operation.

9 Federal Advisory Committee Act, Public Law 92-463, 92nd Cong., 2nd sess. (October 6, 1972).

Documenting Decision-Making by Providing a Public Rationale

Documentation of the rationale behind the NAC subcommittee prioritization decisions, the evidence supporting the decisions, and an understanding of the role that data or resource constraints play in the decisions should be transparent. Furthermore, that information should be readily available for public access and in a timely fashion (Aron and Pogach, 2009). Such documentation should include analyses and syntheses of data and evidence produced by staff or obtained through other means. The Future Directions committee is particularly interested in this level of documentation because of its potential value in stimulating creation of an agenda for measure and data source development (including testing additional questions on existing data collection surveys or inclusion of elements in electronic health records) when desirable measures or data are not yet available (Battista and Hodge, 1995; Gibson et al., 2004; Whitlock et al., 2010). Documentation would also support why certain measures might either no longer be included in the print version of reports or removed from tracking altogether.

Reviewing the Effects of Prioritization

Prioritization is not a static activity but an “iterative process that allows priority setting to evolve” (Sabik and Lie, 2008, p. 9). With respect to the 46 core measures used in the print versions of the NHQR and NHDR, the process for selecting performance measures recommended by this committee could result in extensive changes in the measure set; the process, however, will be an iterative one. The existing measures displayed in the reports or the State Snapshots would not necessarily all be replaced.
It would be logical for the NAC Technical Advisory Subcommittee for Measure Selection to begin its work by determining the relative prioritization within the existing core measure group, as currently there is no priority hierarchy within selected measures; all are given equal weight in assessing progress. It is not known to what extent the existing measures within the NHQR, NHDR, or Web-based State Snapshots are specifically adopted as action items in whole or part by various audiences. This makes it difficult to evaluate the impact of changing the current measures on aspects other than report production within AHRQ. The committee posits that making public the conversation about which measures will or will not have national or state data provided for them will enable AHRQ to begin to document in a more systematic fashion who uses the reports, how the data are used, and the potential impact of keeping or deleting measures.

PRINCIPLES AND CRITERIA FOR SELECTION

In order to establish a transparent process for creating a hierarchy among performance measures being considered by AHRQ, the articulation of principles and criteria is necessary.

Principles

Before outlining the steps in the measure selection process, the Future Directions committee defined two principles that would guide the design. The first guiding principle is the use of a quantitative approach, whenever feasible, for assessing the value of closing the gap between current health care practice and goal levels (i.e., an aspirational goal of 100 percent or another goal, such as one derived from the relevant benchmark).10 To date, AHRQ’s measure selection process has not focused on evaluating what it would take to close the performance gap, or the potential benefits that could accrue to the nation in doing so for the reported measures.
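The gap quantification this principle calls for reduces to simple arithmetic. The sketch below is purely illustrative (it is not part of AHRQ’s process); the 50 percent coverage figure is invented, while the 66.4 percent flu-shot benchmark echoes the benchmark example defined in this report.

```python
# Illustrative sketch only: quantify the gap between current performance and a
# goal level (an aspirational 100 percent, or a benchmark such as the best state rate).

def performance_gap(current_rate: float, goal_rate: float = 1.0) -> float:
    """Fraction of the eligible population not yet receiving goal-level care."""
    return max(goal_rate - current_rate, 0.0)

# A hypothetical state at 50 percent flu-shot coverage for diabetes patients:
gap_to_benchmark = performance_gap(0.50, 0.664)  # vs. the best-state benchmark
gap_to_ideal = performance_gap(0.50)             # vs. the aspirational 100 percent
print(round(gap_to_benchmark, 3), gap_to_ideal)  # 0.164 0.5
```

A prioritization process would weight such gaps by the value of closing them, which is the subject of the CPB and cost-effectiveness discussion later in this chapter.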
The committee’s second principle in prioritizing measures is taking specific note of significant, unwarranted variation in health care performance with regard to disparities across population groups, geographic areas, and other contextual factors such as types of providers or payment sources. Application of these principles can result in reducing the burden of reporting to those areas that are deemed most important (Romano, 2009). Upon applying the principles in the measure selection process, the following provide further guidance:

• Simply stated, measures should be prioritized and selected based on their potential for maximizing health care value and equity at the population level.
• Priority should be given to selecting measures that maximize health benefit, improve equity, and minimize costs within a context that is respectful of and responsive to patient needs and preferences.
• Measures that are principally relevant to a particular group, even if they have less significance to the U.S. population as a whole (e.g., quality measures for treatment of sickle cell anemia), should be considered in measure selection.
• The process, to the extent feasible, should be operationalized using formal quantitative methods and transparent decision-making.

Thus, the emphasis is on investing in measures of conditions with the most impact while considering the ethical principle of fairness. Siu and colleagues (1992) used such quantitative approaches to recommend measures for health plans in recognition that “limited resources [are] available for quality assessment and the policy consequences of better information on provider quality, priorities for assessment efforts should focus on those areas where better quality translated into improved health” (Siu et al., 1992).

Steps in the Process and Criteria

Figure 4-2 provides a schematic outline of the steps in the Future Directions committee’s proposed process for reviewing performance measurement areas—both for currently reported measures and new measures—for inclusion in the NHQR and NHDR.

10 The terms aspirational goal, benchmark, and target as used in this report are defined in Box 2-1 in Chapter 2. An aspirational goal is the ideal level of performance in a priority area (e.g., no patients are harmed by a preventable health care error; all diabetes patients receive a flu shot unless contraindicated). A benchmark is the quantifiable highest level of performance achieved so far (e.g., the benchmark among states would be set at 66.4 percent of diabetes patients receiving a flu shot because that represents the highest performance level of any state). A target is a quantifiable level of actual performance to be achieved relative to the goal, usually by a specific date (e.g., by January 1, 2015, 75 percent of diabetes patients will receive an annual influenza shot).
Inherent in relative ranking would be the identification of measures that could be dropped by AHRQ from tracking if they rank at a low level. Additionally, the process builds in specific steps for identification of measure and data source needs that should be formally captured for inclusion in a strategy for research and data acquisition for future national reporting. Previous IOM guidance regarding the selection of performance measures for the NHQR and NHDR gave greater prominence to the criterion of importance, noting that measures not meeting this criterion “would not qualify for the report regardless of the degree of feasibility or scientific soundness” (IOM, 2001, p. 83). NQF similarly stresses that every candidate measure for the NQF endorsement process “must be judged to be important to measure and report in order to be evaluated against the remaining criteria” (NQF, 2009a). To date, NQF has endorsed more than 500 measures. Although each of these measures may be useful for a specific quality improvement circumstance, there is a need to prioritize among the many possible measures for national reporting purposes. This committee recommends refining the pre-existing AHRQ-, IOM-, and NQF-recommended measure selection and endorsement criteria of importance to include consideration of recommended national priority areas, and an evaluation of the relative value of closing quality gaps, including consideration of equity (see Criteria A, B, C, D, E, and F).

Environmental Scan for Importance

Identifying which areas should be considered important to monitor for performance improvement is a first step and could be undertaken by AHRQ staff prior to the Technical Advisory Subcommittee meeting.
An environmental scan to identify those potential areas would include the types of factors that AHRQ has previously considered (see Appendix E), as well as the potential effects of changing population dynamics on overall national health status, the burden of disease, and appropriate health care utilization. Additionally, ideas for possible candidate measurement areas for review could come from staff review of the literature for presumed high-impact areas and from nominations of areas for consideration from sources internal and external to HHS, including the assessment

Selecting Measures with the Potential for the Greatest Health Impact

Quality-adjusted life years (QALYs) are the most widely used metric for quantifying the impact on health of health care interventions. QALYs can play a role in identifying areas where quality improvement interventions could have the greatest health impact. The use of QALYs as a value metric is rooted in the assumption that people value additional years of life spent in better health than they otherwise would have enjoyed without the application of some clinical intervention. QALYs have been derived for many clinical preventive services and for some commonly used diagnostic tests and therapeutic procedures. They have been identified as the best standardized measures of health effectiveness because of their “widespread use, flexibility, and relative simplicity” (IOM, 2006, p. 10). Life years can be estimated based on absolute risk reduction from clinical trials, and QALYs can be obtained directly from participants in clinical trials or estimated based on published quality-of-life data for various conditions. A similar construct, disability-adjusted life years (DALYs), is often used by the World Health Organization in international studies (Gold et al., 2002). When beneficial clinical interventions are applied to medically affected populations, the resulting health benefit (measured as the total QALYs saved based on the number of persons affected by the intervention) is referred to as clinically preventable burden (CPB) (Maciosek et al., 2009). CPB is the health burden that is prevented or averted by a clinical intervention; it represents the absolute risk reduction from the intervention that can then be generalized to the relevant population (e.g., the nation as a whole).
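At its core, the generalization step described above is a product of a per-person health gain and the size of the affected population. The sketch below is a toy illustration; the 0.05 QALY and 200,000-person figures are hypothetical, not estimates from the literature.

```python
# Toy illustration of CPB (all numbers hypothetical): QALYs saved per person
# treated, generalized to the entire affected population.

def clinically_preventable_burden(qalys_saved_per_person: float,
                                  affected_population: int) -> float:
    """Total QALYs that could be saved if the intervention reached everyone affected."""
    return qalys_saved_per_person * affected_population

# E.g., an intervention saving 0.05 QALY per person across 200,000 affected people:
print(round(clinically_preventable_burden(0.05, 200_000), 6))  # 10000.0
```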
Conceptually, it does not matter whether CPB results from improved use of a proven intervention (e.g., influenza vaccination, which is one of AHRQ’s effectiveness measures) or from reduction in harm to patients through improved care processes (e.g., reduction of adverse drug events, which is one of AHRQ’s safety measures). In either case, an improvement in health, measurable in QALYs saved, has been achieved. CPB is relevant to prioritizing quality measures based on its ability to quantify the health impact of a measure’s associated clinical intervention. Therefore, CPB provides a means for comparisons across different clinical interventions (e.g., mammography versus maintenance-phase medications for depression), facilitating prioritization of measures of those clinical interventions. Additionally, estimates of health impact can be used to compare measures either for the overall population or for subpopulations (in the context of assessing disparities).

Selecting Measures That Target the Most Effective Use of Health Care Resources

The high (and growing) cost of health care in the United States is pushing cost considerations to the forefront of the political agenda (Davis, 2008; Fisher et al., 2009). Cost-effectiveness analysis (CEA) is perhaps the most widely used method for considering cost in the context of health gain from medical care (Gold et al., 1996). In its most complete form, CEA “measures net cost per QALY saved [using a clinical intervention], for which net costs equal the cost of the intervention minus any downstream financial savings” (Maciosek et al., 2009, p. 350). CEA facilitates comparisons across interventions by providing a common metric for comparing costs across different interventions or activities, thus informing allocation decisions designed to maximize health (measured by QALYs) within confined resources (Gold et al., 1996; Neumann et al., 2008; Wong et al., 2009).
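The quoted definition of CEA translates directly into a ratio. The following minimal sketch uses entirely made-up dollar and QALY figures and is not drawn from any published analysis.

```python
# Minimal sketch of the CEA metric quoted above (all numbers hypothetical):
# net cost per QALY saved = (intervention cost - downstream savings) / QALYs gained.

def net_cost_per_qaly(intervention_cost: float,
                      downstream_savings: float,
                      qalys_gained: float) -> float:
    if qalys_gained <= 0:
        raise ValueError("QALYs gained must be positive")
    return (intervention_cost - downstream_savings) / qalys_gained

# A $500 intervention with $200 in downstream savings and 0.01 QALY gained per
# person costs $30,000 per QALY; a negative ratio would indicate cost-saving care.
print(round(net_cost_per_qaly(500.0, 200.0, 0.01), 2))  # 30000.0
```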
There have been calls for explicit consideration of CEA in the prioritization of quality measures and health care policy (Maciosek et al., 2009; Neumann et al., 2008; Siu et al., 1992; Wong et al., 2009; Woolf, 2009). These recommendations are supported by a burgeoning literature on the cost-effectiveness of several clinical preventive services and certain diagnostic testing and therapies (e.g., surgical and other procedures, devices, drugs, and behavioral interventions), including the establishment of a searchable registry for CEA (Center for the Evaluation of Value and Risk in Health, 2009; NIHR Centre for Reviews and Dissemination, 2009). Most of the preventive and diagnostic services or interventions for which CEA data may be available fall within AHRQ’s framework component of effectiveness measures (Bentley et al., 2008; Hurley et al., 2009); less is known about the cost-effectiveness of clinical interventions in the safety or timeliness components, but there are some examples (Barlow et al., 2007; Furuno et al., 2008; Rothberg et al., 2005; van Hulst et al., 2002). Data permitting, CEA could play a role in selecting and prioritizing quality measures for a number of framework components. The committee recognizes that there has been some resistance to using CEA for health care improvement.

One criticism relates to the potential for bias in the conduct of CEA. For example, CEAs conducted by industry (e.g., health plans, pharmaceutical companies) frequently provide quite favorable results (Bell et al., 2006). Too often, CEA data follow rather than precede release of an intervention or technology into practice, limiting their usefulness at the time of its implementation (Greenberg et al., 2004). Furthermore, few CEAs report actual costs of implementing the intervention into routine care (Neumann et al., 2008), but instead focus largely on the cost of the intervention itself. Finally, ethical questions have been raised in terms of the impact of CEA on different populations, such as the elderly or disabled. Strict application of CEA to interventions designed to improve quality of life among the dying might yield results suggesting that minimal additional QALYs might not outweigh the costs. These issues are potentially addressable (Neumann et al., 2008). For example, CEAs could employ standard and transparent methods, which may require some public financing so that they are not solely conducted by entities with a business interest in the result. Further, ethical considerations can be accounted for by incorporating balance and equity into policy decisions in conjunction with CEA, which is consistent with this committee’s broader definition of health care value (see Chapter 3). CEA represents one approach to formal, evidence-based comparisons of interventions that account for trade-offs in costs and health benefits. These analyses could help track an important aspect of health care value and target the selection of measures that promote optimal health outcomes (e.g., QALYs, mortality rates, life expectancy).
Prioritizing Measures with High Health Impact and Effective Resource Use

To identify measures with the greatest potential value, particularly related to clinical effectiveness measures, the committee examined two strategies that employ health impact analysis and cost-effectiveness analysis. Without endorsing any specific strategy or methodology, the committee believes that the discussion below provides examples of ways in which AHRQ could select high-value, prioritized measures for performance reporting.

An Approach with Separate and Combined Clinically Preventable Burden and Cost-Effectiveness Rankings

Measurement of health impact in terms of both CPB and cost-effectiveness (CE) can be used to determine which among a given list of preventive measures has the greatest potential for quality improvement. In one example of this approach, Maciosek and colleagues examined a list of measures based on health care services interventions recommended by the U.S. Preventive Services Task Force (USPSTF). (Detailed methods for these calculations and additional information on the results are published elsewhere [Maciosek et al., 2006a,b].) CE and CPB calculations were used as the criteria to assess the relative value of each service. CPB was defined as “the total QALYs that could be gained if the clinical preventive service was delivered at recommended intervals” to a designated cohort; that is, total QALYs were compared between 100 percent of patients being advised to use or consider the intervention, and no use at all. CE was defined as “the average net cost per QALY gained in typical practice by offering the clinical preventive service at recommended intervals to a U.S. birth cohort over the recommended age range” (Maciosek et al., 2006a, pp. 53-54) (i.e., net cost of the intervention divided by the QALYs saved).
Once calculations for health impact and CE were completed for each service, analysts ranked the calculations by scoring them on a scale of 1 to 5, with 5 being the best score (i.e., the highest estimates for health impact, and the lowest cost-effectiveness ratio for CE). This quintile scale was created to rank the calculated estimates of CPB and CE without overstating the precision of the individual estimates. An overall score was then derived by adding the CPB and CE scores together, conveying the services of greatest value within a given set. Table 4-1 depicts these individual and combined scores with the ultimate ranking of clinical preventive services. Although the calculations for CE in the study by Maciosek and colleagues effectively included CPB (as the denominator of the equation), presenting CE and CPB separately allows decision-makers to consider both criteria either simultaneously or in isolation. This separation of factors may be useful when a measure’s associated intervention ranks low in cost-effectiveness yet has a significantly high health impact, which decision-makers may value more and thus give the measurement area a higher priority. Measures and associated interventions that rank lower in a prioritization scheme should be assumed to retain value to some stakeholders or regions who may want to continue to invest in tracking or improvement activities in those areas. Although the Maciosek study was

TABLE 4-1 Ranking of Clinical Preventive Services for the U.S. Population

Clinical Preventive Service                                  CPB  CE  Total
Discuss daily aspirin use: men 40+, women 50+                 5    5   10
Childhood immunizations                                       5    5   10
Smoking cessation advice and help to quit: adults             5    5   10
Alcohol screening and brief counseling: adults                4    5    9
Colorectal cancer screening: adults 50+                       4    4    8
Hypertension screening and treatment: adults 18+              5    3    8
Influenza immunization: adults 50+                            4    4    8
Vision screening: adults 65+                                  3    5    8
Cervical cancer screening: women                              4    3    7
Cholesterol screening and treatment: men 35+, women 45+       5    2    7
Pneumococcal immunization: adults 65+                         3    4    7
Breast cancer screening: women 40+                            4    2    6
Chlamydia screening: sexually active women under 25           2    4    6
Discuss calcium supplementation: women                        3    3    6
Vision screening: preschool children                          2    4    6
Folic acid chemoprophylaxis: women of childbearing age        2    3    5
Obesity screening: adults                                     3    2    5
Depression screening: adults                                  3    1    4
Hearing screening: 65+                                        2    2    4
Injury-prevention counseling: parents of children 0-4         1    3    4
Osteoporosis screening: women 65+                             2    2    4
Cholesterol screening: men <35, women <45 at high risk        1    1    2
Diabetes screening: adults at risk                            1    1    2
Diet counseling: adults at risk                               1    1    2
Tetanus-diphtheria booster: adults                            1    1    2a

NOTE: The services shown in this table were services that had been recommended by the U.S. Preventive Services Task Force through December of 2004.
a Corrected from Maciosek et al., 2009. In the article, mistakenly listed as “1.”
SOURCE: ANNUAL REVIEW OF PUBLIC HEALTH by Maciosek. Copyright 2009 by ANNUAL REVIEWS, INC. Reproduced with permission of ANNUAL REVIEWS, INC. in the format Other book via Copyright Clearance Center.

specific to preventive services, the same methods can be applied to rank the value of other types of health care services (i.e., acute treatment, chronic condition management) as long as there is enough information to perform the calculations.
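The quintile-scoring scheme described above can be mimicked in a few lines. The sketch below is illustrative only: the CPB and cost-per-QALY inputs are invented, and a real exercise would work from published estimates and their uncertainty rather than this simple rank-based binning.

```python
# Hedged sketch of Maciosek-style quintile scoring (all inputs hypothetical).
# Each service's CPB and CE estimates are ranked and binned into quintile
# scores 1-5 (5 = best), and the two scores are summed into an overall score.

def quintile_scores(values, higher_is_better=True):
    """Map each value to a 1-5 score by its rank-based quintile within the list."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=not higher_is_better)
    scores = [0] * n  # order[0] is the worst-ranked index
    for rank, idx in enumerate(order):
        scores[idx] = rank * 5 // n + 1
    return scores

cpb = [100.0, 80.0, 60.0, 40.0, 20.0]          # hypothetical QALYs saved (thousands)
cost_per_qaly = [1000, 5000, 2000, 8000, 500]  # hypothetical net $/QALY (lower is better)

cpb_scores = quintile_scores(cpb)                                   # [5, 4, 3, 2, 1]
ce_scores = quintile_scores(cost_per_qaly, higher_is_better=False)  # [4, 2, 3, 1, 5]
totals = [a + b for a, b in zip(cpb_scores, ce_scores)]             # [9, 6, 6, 3, 6]
```

Presenting the two component scores alongside the total, as Table 4-1 does, preserves the ability to weigh health impact and cost-effectiveness separately.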
A Net Health Benefit Approach

Another approach to prioritizing measures is based on the concept of net health benefits (Stinnett and Mullahy, 1998). This approach is used to quantify the potential value of quality improvement for a given measure by estimating the incremental health benefit gained by a clinical standard of care net of its incremental costs: “the difference between the health benefit achieved by a program, and the amount of health gain that would be needed to justify the program’s cost” (Hauck et al., 2004, p. 85; Secretary’s Advisory Committee on National Health Promotion and Disease Prevention Objectives for 2020, 2008a). This approach assumes that measures are defined with reference to some standard of care, that the benefits of implementation are measurable in terms of QALYs (or a similar metric of health benefit) on the basis of clinical evidence or consensus, and that the standard of care pertains to clinical quality, patient safety, organizational characteristics, utilization, or aspects of patient-provider relationships. The logic is as follows: if the costs and health benefits of standard-concordant care are known, and the costs and health benefits of non-standard-concordant care are also known, then the net health benefit (NHB) of the standard (the measure) can be calculated—the result

being the population health benefits net of cost. As a result, different clinical interventions can be compared to see which are most productive. Tengs and Graham (1996) illustrate how spending could be directed to clinical interventions with the potential for the greatest return. They examined the costs and benefits of 185 interventions, finding that the United States spent about $21.4 billion on these lifesaving interventions, averting 56,700 premature deaths and, in doing so, saving 592,000 life years. However, a smaller amount of funds could have been better allocated to minimize premature deaths and maximize life years, saving an additional 595,000 life years. Although cost-effectiveness estimates (measured in QALYs) are used in this method, they are only a part of the total calculation. In addition to comparing the costs and effectiveness of a standard of care, the net health benefit for a standard of care takes into account society’s willingness to pay for an additional unit of health benefit (as measured by QALYs). Knowing the societal cost-effectiveness threshold allows for the calculation of opportunity costs for achieving the desired standard of care. Thus, a net health benefit calculation derives the actual costs and opportunity cost of accomplishing a standard of care if an intervention were fully implemented to maximize its benefit. This, in turn, allows one to calculate the expected population value of improving the performance rate of a measure for a given clinical intervention to 100 percent. In Appendix F, a commissioned paper by David Meltzer and Jeanette Chung provides an illustrative analysis of Pap smears and estimates that 405,999 life years would be gained if every 18-year-old female received triennial screening (while current actual rates of screening yield 293,351 life years). Thus, the value of quality improvement would be the difference between perfect and actual implementation: 112,648 life years lost to imperfect implementation.
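In symbols, the per-person net health benefit of meeting a standard of care is NHB = ΔQALYs − Δcost/λ, where λ is the societal willingness to pay per QALY. The sketch below is illustrative: the threshold, cost, and performance-rate figures are invented, and only the Pap smear life-year totals are taken from the Appendix F analysis quoted above.

```python
# Hedged sketch of the net health benefit (NHB) logic. All numbers are
# hypothetical except the Appendix F Pap smear life-year totals quoted in the text.

def net_health_benefit(qalys_gained: float, net_cost: float, wtp_per_qaly: float) -> float:
    """Per-person NHB: health gained minus the health the same money could buy elsewhere."""
    return qalys_gained - net_cost / wtp_per_qaly

def value_of_full_implementation(nhb_per_person: float, current_rate: float,
                                 eligible_population: int) -> float:
    """Population value of raising a measure's performance rate to 100 percent."""
    return nhb_per_person * (1.0 - current_rate) * eligible_population

# E.g., 0.02 QALY gained at $500 net cost, against a $50,000/QALY threshold:
per_person_nhb = net_health_benefit(0.02, 500.0, 50_000.0)  # 0.01 QALY

# The Appendix F Pap smear illustration expresses the same gap in life years:
perfect_life_years, actual_life_years = 405_999, 293_351
print(perfect_life_years - actual_life_years)  # 112648
```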
Meltzer and Chung’s paper explores the net health benefit methods and their theoretical applicability to 14 NHQR measures that span different framework components. The strategy can be used to estimate the potential value of improving performance on existing quality measures, which can then be used to prioritize measures for reporting. Meltzer and Chung examine the applicability of these techniques for process measures with an associated standard of care, composite process-of-care measures, and incidence rates of complications (e.g., foreign body left in during a procedure per 1,000 hospital discharges). While the technique is well suited to analyzing process measures, it is difficult to use for composite process measures or for most outcome measures because no specific treatment or intervention is defined. The issues with each of these measure types are discussed in more depth in their paper.

Limitations of These Strategies

While both of the approaches discussed above are useful for informing decision-makers of where to invest resources to improve health care, they have important limitations. First, these methods for prioritization do not include any equity or disparities considerations for specific priority population groups. It is conceivable, however, that CPB and CE estimates could be calculated for specific population groups if the necessary data were available; a few studies on the economic impact of disparities have recently been released (LaVeist et al., 2009; Waidmann, 2009). Second, the information necessary to compute CE and health impact calculations may not be readily available; it is rarely the case that analysts have all of the necessary information to make these estimates, and they must consequently make assumptions. These assumptions should be clearly identified, and sensitivity analyses should be used to examine the effect of assumptions on results.
In the absence of data from the peer-reviewed literature, the assumptions should be guided by expert opinion and the gray literature. A third limitation, and an important one given the multidimensional aspect of health care value, is that the above-discussed approaches for prioritization are not readily applicable to all measures given that the calculation rests on quantifiable standards of information (e.g., financial cost, QALYs). The approaches apply primarily to clinical effectiveness measures and, to some extent, to safety and efficiency measures when a health care service or intervention has been identified to improve health outcomes with known costs. Yet there are measures reported in the NHQR and NHDR—some access, timeliness, and patient-centeredness measures—for which underlying interventions or processes are not easily tied to monetary or life duration factors. For example, the health impact of patient perceptions of care that promotes informed patient decision-making or alleviates suffering at the end of life is not easily translated to QALYs. Measures without an easily quantifiable impact arguably represent important and desirable ends in themselves, apart from any demonstrable effect on health. For these measures,

alternative means are needed to weigh the relative impact of gaps or disparities. This might be achieved through formal assessment of the relative value, or ranking, of the health care processes captured in qualitative dimensions by consumers. Such rankings could facilitate prioritization if coupled with consideration of the gap or disparity in performance and the size of the population affected by the gap. Although this approach would not allow direct comparison with CPB or net health benefit, it would help facilitate prioritization among measures falling within a particular quality component of the framework. The framework components of care coordination and health systems infrastructure capabilities were not assessed using these strategies because measures for these components were not presented in the latest edition of the national healthcare reports. Chapter 3 referenced some studies that indicated potential cost-effectiveness of using care coordination and implementing HIT. However, the evidence base for such interventions on improving the quality of care would need to be further examined to evaluate the applicability of these prioritization strategies to them. Finally, the resources required to discover, collect, and collate the data needed for these prioritization approaches, along with the human capital to perform the computation and analysis, are substantial. Depending on the data available, a thorough search of the literature and calculations for a single measure will require a considerable amount of dedicated time. If the NAC Technical Advisory Subcommittee for Measure Selection and AHRQ were to use such prioritization approaches, which this committee strongly recommends, appropriate resources to support this effort would be required.
The Phase 1 report on Healthy People 2020 suggests that communities use similar techniques to prioritize their objectives and that support be given to communities in terms of technical support materials to make this possible. There would be synergy in AHRQ and CDC partnering to advance these more quantitative approaches to prioritization, as well as in partnerships with other public or private entities utilizing these techniques.

Tools for Assessing Equity

A high-value health care system, by definition, requires the provision of equitable, high-value care to all individuals; therefore, metrics that assess equity in health care delivery should be considered in the prioritization process for measure selection. Measures on which the nation as a whole is performing well (i.e., for which there is little or no gap between the national average and achieving recommended care for the entire applicable population) may show performance gaps when the data are stratified by population subgroups. Therefore, the goal of achieving value in health care must be balanced by considering the needs of population groups that differ in age, race, ethnicity, gender, disability, and socioeconomic status. Chosen quality measures should promote the core quality dimension of equity in health care. An inequity is a measurable, observable difference that can and should be closed (Carr-Hill, 2001; Whitehead and Dahlgren, 1991). For example, because the incidence of AIDS is more than 20 times higher among Black than White women, and two-thirds of new AIDS cases among women are in Black women (Kaiser Family Foundation, 2009), the CPB of interventions related to AIDS, such as use of highly active antiretroviral therapy, is much greater among Black women than among the population of all women. As is further discussed in Chapter 5, the identification of disparities is often hampered by sample sizes and a lack of systematic, standardized collection of sociodemographic data.
Yet large disparities that are statistically insignificant due to small sample sizes may still be indicative of problems with equity (Siegel et al., 2009). This section explores some of the established techniques and tools that allow for the identification of disparities. It is important to consider (1) whether the disparity is measured on a relative or absolute scale, (2) the reference point from which differences are measured, and (3) whether the disparity is weighted by population size or degree of inequity.

Relative and Absolute Difference

Absolute and relative measures of disparity can provide contradictory evidence regarding changes in a disparity over time. In the context of health care quality improvement, increasing relative but decreasing absolute inequality occurs when the rate of improvement is smaller for the group with the worst performance rate (Harper et al., 2010). In concert with one another, absolute and relative differences can provide a more comprehensive

picture of a disparity than either method alone. The committee does not recommend a single approach to measuring disparities and instead emphasizes that the method of measurement can determine the size and direction of a potential disparity. AHRQ presents information on disparities in terms of both relative and absolute differences in either adverse or favorable outcomes. In the Highlights section of the 2008 NHDR, AHRQ presents the three largest disparities in quality for different groups using relative differences (AHRQ, 2009c). The committee was not able to assess the validity of these rankings. A relative measure expresses the disparity as a ratio relative to the reference point or group, so that reference point becomes the unit of comparison. Absolute measures of disparity are simply the difference between a group rate and the reference group rate; most of the AHRQ graphs reflect absolute differences. See Table 4-2 below for a list of ways to measure absolute and relative health disparity.

The following example highlights how examining relative and absolute differences can lead to different conclusions, especially when comparing over time. In 2000, the rate of a specific disease was 8 percent in the African American population and 4 percent in the White population. In absolute terms, this was a 4-point difference, whereas in relative terms, the African American rate was twice the White rate. In 2010, the African American rate is 6 percent, and the White rate is 3 percent. Both groups have better rates, and the African American rate has improved more than the White rate. In absolute terms, the gap has shrunk from 4 points to 3 points. In relative terms, the African American rate is still double the White rate. In this case, the relative rate does not reflect that the situation is better in 2010 than it was in 2000.
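The worked example can be checked in a few lines (a sketch; this is not code from the reports):

```python
# Absolute vs. relative comparison of the chapter's hypothetical disease
# rates: 2000 (8% vs. 4%) and 2010 (6% vs. 3%).

def absolute_gap(rate_a, rate_b):
    return rate_a - rate_b   # percentage-point difference

def relative_gap(rate_a, rate_b):
    return rate_a / rate_b   # rate ratio

print(absolute_gap(8, 4), relative_gap(8, 4))  # 4 2.0  (year 2000)
print(absolute_gap(6, 3), relative_gap(6, 3))  # 3 2.0  (year 2010)
# The absolute gap narrowed from 4 to 3 points while the rate ratio stayed
# at 2.0 -- the two scales tell different stories about progress.
```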
A 2005 report released by the CDC advised that, to promote a more complete understanding of the “magnitude of disparities,” disparities should be measured in both absolute and relative terms, especially when making comparisons over time, geographic regions, or populations (Keppel et al., 2005). Additionally, Harper and colleagues have urged researchers against always using a single measure (e.g., a rate ratio), and instead advised researchers to “pay more attention to the normative choices inherent in measurement” (Harper et al., 2010, p. 22). When both absolute and relative differences cannot be presented (due to space constraints, for instance), major medical journals are trending toward presenting absolute differences (Braveman, 2006; Dombrowski et al., 2004;

TABLE 4-2 Measures of Absolute and Relative Health Disparity

Measures of Absolute Disparity
Rate Difference: Simple arithmetic difference between two groups (usually between the less-advantaged group and the more-advantaged group).
Between-Group Variance: The sum of squared deviations from a population average; the variance that would exist in the population if each individual had the average health of their social group.
Absolute Concentration Index: Measures the extent to which health or illness is concentrated among a particular group.
Slope Index of Inequality: Absolute difference in health status between the bottom and top of the social group distribution.

Measures of Relative Disparity
Rate Ratio: Measures the relative difference in the rates of the best and worst group.
Index of Disparity: Summarizes the difference between several group rates and a reference rate and expresses the summed differences as a proportion of the reference rate.
Relative Concentration Index: Measures the extent to which health or illness is concentrated among a particular group.
Relative Index of Inequality: Measures the proportionate rather than the absolute increase or decrease in health between the highest and lowest group.
Theil Index and Mean Log Deviation: Measures of disproportionality; summaries of the difference between the natural logarithm of shares of health and shares of population.

NOTE: Although this table is on measures of health disparities rather than health care disparities, the same concepts can be applied to measuring disparities in health care performance.
SOURCE: Harper and Lynch, 2007.
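A few of the Table 4-2 measures can be expressed directly. The group rates, population shares, and benchmark below are hypothetical, and the Index of Disparity follows one common formulation (each group weighted equally, compared against an external benchmark):

```python
# Hedged sketch of selected Table 4-2 disparity measures; the input
# numbers are illustrative, not drawn from the NHQR or NHDR.

def rate_difference(rate, reference):
    return rate - reference

def rate_ratio(rate, reference):
    return rate / reference

def between_group_variance(rates, shares):
    # Population-share-weighted squared deviations from the population average.
    mu = sum(r * s for r, s in zip(rates, shares))
    return sum(s * (r - mu) ** 2 for r, s in zip(rates, shares))

def index_of_disparity(rates, reference):
    # Mean absolute deviation from a reference rate, expressed as a
    # percentage of that rate; groups weighted equally (Pearcy-Keppel style).
    return 100 * sum(abs(r - reference) for r in rates) / (len(rates) * reference)

rates = [8.0, 6.0, 4.0]    # hypothetical group rates (%)
shares = [0.2, 0.3, 0.5]   # hypothetical population shares (sum to 1)
print(rate_difference(8.0, 4.0))                       # 4.0
print(rate_ratio(8.0, 4.0))                            # 2.0
print(round(between_group_variance(rates, shares), 2)) # 2.44
print(index_of_disparity(rates, reference=3.0))        # 100.0
```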

Regidor et al., 2009; Rosvall et al., 2008). The advantage of this approach is that it is more consistent with using population health burden as a metric for prioritizing within populations. When both measures cannot be presented, the committee suggests AHRQ might include absolute rates in graphs and tables and add a comment in the text about whether the relative disparity is changing.

Calculating Disparities Using Odds Ratios

By expressing disparities in terms of odds ratios, researchers can calculate and present the risk of one group relative to another (similar to a relative rate).12 AHRQ employs this method to calculate the “odds,” for example, of uninsurance for Black and Asian adults relative to White adults. This method allows AHRQ to easily convey that the odds of uninsurance are 0.9 times as high for Blacks and 1.1 times as high for Asians (AHRQ, 2009c). Odds ratios should be used with caution as they can exaggerate differences and may be misleading in terms of clinical significance. For any notion of causality, notations of the absolute difference should be readily available (that is, on the probability scale).

The Reference Population

As Nerenz and Fiscella have noted, the quality measures that matter to the overall population also matter to minority populations (Fiscella, 2007; Nerenz et al., 2006). Disparities may be assessed by stratifying quality data by various population groups. Indeed, AHRQ presents data on measures for priority populations in this way in the NHDR. This method also has the benefit of being able to use the same measures to assess performance levels for both disparities and quality among populations. However, additional measures of disparity may be relevant and necessary to fully document the extent or presence of inequities. Measuring disparities requires a comparison group.
The reference group or point can be the unweighted mean of all groups, the weighted mean of the total population, the most favorable rate among population groups, or an external deliberate standard such as a Healthy People 2010 target or benchmark. Although each of these reference points can be useful, the group with the most favorable rate is often chosen as the reference point in disparities studies because doing so assumes that every group in the population has the potential to achieve the health of the best-off group. (In Chapter 6, the committee suggests that in the NHDR, AHRQ use benchmarks based on the best-in-class performance rate, not just the highest population rate, which often is worse than the best-in-class performance rate.)

An Index of Health Care Disparities

Indices of disparities summarize average differences between groups and express the summation as a ratio of the reference rate (Harper et al., 2008). Most disparity indices measure statistically significant disparities across all populations for a given condition or disease (e.g., among all races in a given state), but do not always measure variance for a single discrete population group (Gakidou et al., 2000). Pearcy and Keppel’s Index of Disparity gives equal weight to each group, even when each group represents different proportions of the population (Pearcy and Keppel, 2002). This kind of unweighted measure of disparity means that an individual in a larger population group may receive less weight than an individual in a smaller population group. To be clinically relevant to providers, a disparity index needs to measure disparities in care among discrete subpopulations and needs to give greater weight to disparities that affect greater numbers of patients (Siegel et al., 2009). Doing so captures population impact.
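The weighting issue can be made concrete with a two-group sketch (hypothetical rates, shares, and benchmark): an equal-weight summary is dominated by a small group's large gap, while a population-weighted summary reflects how many people each gap touches.

```python
# Contrast of unweighted vs. population-weighted disparity summaries.
# All numbers are hypothetical illustrations.

def unweighted_mean_gap(rates, benchmark):
    # Each group counts once, regardless of size (Index of Disparity spirit).
    return sum(abs(r - benchmark) for r in rates) / len(rates)

def weighted_mean_gap(rates, shares, benchmark):
    # Gaps weighted by each group's share of the population.
    return sum(s * abs(r - benchmark) for r, s in zip(rates, shares))

rates = [10.0, 6.0]     # the small group has the larger gap...
shares = [0.05, 0.95]   # ...but is only 5 percent of the population
benchmark = 5.0         # best-in-class performance rate
print(unweighted_mean_gap(rates, benchmark))                # 3.0
print(round(weighted_mean_gap(rates, shares, benchmark), 2))  # 1.2
```

The unweighted summary (3.0) is driven by the small group's gap of 5 points; the weighted summary (1.2) is closer to the large group's gap of 1 point.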
Siegel and colleagues developed a disparities index that takes into account the quality of health care being provided to all patients, the size of the affected population, and changes over time (Siegel et al., 2009). Another

12 “Odds ratios are a common measure of the size of an effect and may be reported in case-control studies, cohort studies, or clinical trials. Increasingly, they are also used to report the findings from systematic reviews and meta-analyses. Odds ratios are hard to comprehend directly and are usually interpreted as being equivalent to the relative risk. Unfortunately, there is a recognized problem that odds ratios do not approximate well to the relative risk when the initial risk (that is, the prevalence of the outcome of interest) is high. Thus there is a danger that if odds ratios are interpreted as though they were relative risks then they may mislead” (Davies et al., 1998, p. 989).
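The footnote's caution can be quantified: given a baseline risk p0, the relative risk implied by an odds ratio follows from simple algebra. A sketch (not a method used in the reports):

```python
# Relative risk implied by an odds ratio at baseline risk p0.
# Derivation: if p1/(1-p1) = OR * p0/(1-p0), then
# RR = p1/p0 = OR / (1 - p0 + p0*OR).

def rr_from_or(odds_ratio, p0):
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

print(round(rr_from_or(2.0, 0.01), 2))  # 1.98 -- rare outcome: OR close to RR
print(round(rr_from_or(2.0, 0.40), 2))  # 1.43 -- common outcome: OR overstates RR
```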

benefit of using population-weighted measures is that they are able to account for changes in the distribution of the population that inevitably occur over time (Harper and Lynch, 2005). For the purposes of the national healthcare reports, measures of equity may need to consider more than just the number of individuals affected in the entire population. For instance, a very large gap in quality of care between one relatively small subpopulation and the overall population may have significant implications for quality. A report prepared for the National Cancer Institute on measuring cancer disparities adopted a population health perspective on disparities. This perspective means that the researchers were primarily concerned with the total population burden of disparities and thus considered both absolute differences between groups and the size of the population subgroups involved (Harper and Lynch, 2005).

Conclusion

The methods discussed above should be considered when analyzing data relevant to assessing disparities in performance among different populations and prioritizing measure selection. Measures that reveal an equity gap, even when those same measures are equivalent in assessments of value, should be considered for prioritization because they identify areas of the health care system where greater improvements in health care quality can be made.

SUMMARY

The Future Directions committee has recommended improving the process for selecting performance measures for the NHQR and NHDR to make the process more transparent and quantitative. It has also recommended establishing a Technical Advisory Subcommittee for Measure Selection to advise AHRQ through the NAC. Although there are limits to applying more quantitative techniques in valuing measurement areas, they should be used whenever feasible. Their use is common in prioritization practices for resource allocation.
National prioritization of measures can influence where resources are devoted to quality improvement. The potential impact of focusing quality improvement on closing the performance gaps of specific measure choices should be analyzed with care, particularly as the committee believes the national reports should be driving action rather than passively reporting on past trends.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2002. Preliminary measure set for the National Healthcare Quality Report. Federal Register 67(150):53801-53802.
———. 2003a. National Healthcare Quality Report, 2003. Rockville, MD: Agency for Healthcare Research and Quality.
———. 2003b. Preliminary measure set for home health in the National Healthcare Quality Report—request for comments. Federal Register 68(56):14240-14241.
———. 2004. National Healthcare Quality Report, 2004. Rockville, MD: Agency for Healthcare Research and Quality.
———. 2005. National Healthcare Disparities Report, 2004. Rockville, MD: Agency for Healthcare Research and Quality.
———. 2008a. Agency for Healthcare Research and Quality (AHRQ) National Advisory Council (NAC) meeting summary, April 4, 2008.
———. 2008b. Evidence-based practice centers overview. http://www.ahrq.gov/clinic/epc/ (accessed January 6, 2010).
———. 2009a. Introductory remarks and charge to the subcommittee: Slide . Agency for Healthcare Research and Quality. Presentation to the AHRQ NAC Subcommittee on Children’s Healthcare Quality Measures for Medicaid and CHIP Programs, July 22, 2009. Rockville, MD.
———. 2009b. National Advisory Council for Healthcare Research and Quality. http://www.ahrq.gov/about/council.htm (accessed August 27, 2009).
———. 2009c. National Healthcare Disparities Report, 2008. Rockville, MD: Agency for Healthcare Research and Quality.
———. 2010. National Advisory Council for Healthcare Research and Quality. http://www.ahrq.gov/about/council.htm (accessed February 23, 2010).
Aron, D., and L. Pogach. 2009.
Transparency standards for diabetes performance measures. Journal of the American Medical Association 301(2):210-212.
Barclay, L., and D. Lie. 2006. Most valuable clinical preventive services identified. http://cme.medscape.com/viewarticle/532983 (accessed May 13, 2010).

Barlow, G., D. Nathwani, F. Williams, S. Ogston, J. Winter, M. Jones, P. Slane, E. Myers, F. Sullivan, N. Stevens, R. Duffey, K. Lowden, and P. Davey. 2007. Reducing door-to-antibiotic time in community-acquired pneumonia: Controlled before-and-after evaluation and cost-effectiveness analysis. Thorax 62(1):67-74.
Battista, R. N., and M. J. Hodge. 1995. Setting priorities and selecting topics for clinical practice guidelines. Canadian Medical Association Journal 153(9):1233-1237.
Baumann, M. H., S. Z. Lewis, and D. Gutterman. 2007. ACCP evidence-based guideline development. Chest 132(3):1015-1024.
Bell, C. M., D. R. Urbach, J. G. Ray, A. Bayoumi, A. B. Rosen, D. Greenberg, and P. J. Neumann. 2006. Bias in published cost-effectiveness studies: Systematic review. British Medical Journal 332(7543):699-703.
Bentley, T. G. K., R. M. Effros, K. Palar, and E. B. Keeler. 2008. Waste in the U.S. health care system: A conceptual framework. Milbank Quarterly 86(4):629-659.
Berwick, D. M., B. James, and M. J. Coye. 2003. Connections between quality measurement and improvement. Medical Care 41(1 Suppl):I30-I38.
Bleichrodt, H., E. Diecidue, and J. Quiggin. 2004. Equity weights in the allocation of health care: The rank-dependent QALY model. Journal of Health Economics 23(1):157-171.
Bleichrodt, H., D. Crainich, and L. Eeckhoudt. 2008. Aversion to health inequalities and priority setting in health care. Journal of Health Economics 27(6):1594-1604.
Braveman, P. 2006. Health disparities and health equity: Concepts and measurement. Annual Review of Public Health 27:167-194.
Carr-Hill, R. 2001. Measurement issues concerning equity in health. London: Kings Fund.
Center for the Evaluation of Value and Risk in Health. 2009. Welcome to the CEA Registry. https://research.tufts-nemc.org/cear/Default.aspx (accessed September 25, 2009).
The Commonwealth Fund. 2002.
Roadmap to reduce disparities in the quality of care for minority patients identified: National Quality Forum experts recommend 0 steps be implemented. http://www.commonwealthfund.org/Content/News/News-Releases/2002/Jun/Roadmap-To-Reduce-Disparities-In-The-Quality-Of-Care-For-Minority-Patients-Identified.aspx (accessed February 22, 2010).
Davies, H. T. O., I. K. Crombie, and M. Tavakoli. 1998. When can odds ratios mislead? British Medical Journal 316(7136):989-991.
Davis, K. 2008. Slowing the growth of health care costs—learning from international experience. New England Journal of Medicine 359(17):1751-1755.
Dombrowski, J. C., J. C. Thomas, and J. S. Kaufman. 2004. A study in contrasts: Measures of racial disparity in rates of sexually transmitted disease. Sexually Transmitted Diseases 31(3):149-153.
Downs, T. J., and H. J. Larson. 2007. Achieving Millennium Development Goals for health: Building understanding, trust, and capacity to respond. Health Policy 83(2-3):144-161.
Fielding, J. E. 2009. Lessons learned in Healthy People 2010 & their application in Healthy People 2020. L.A. County Department of Public Health. Presentation at the Healthy Israel 2020 conference, April 27, 2009.
Fiscella, K. 2002. Using existing measures to monitor minority healthcare quality. In Improving healthcare quality for minority patients: Workshop proceedings, edited by the National Quality Forum. Washington, DC: National Forum for Health Care Measurement and Reporting. Pp. B1-B42.
———. 2007. Eliminating disparities in healthcare through quality improvement. In Eliminating healthcare disparities in America: Beyond the IOM report, edited by R. A. Williams. Totowa, NJ: Humana Press Inc. Pp. 141-178.
Fisher, E. S., J. P. Bynum, and J. S. Skinner. 2009. Slowing the growth of health care costs—lessons from regional variation. New England Journal of Medicine 360(9):849-852.
Furuno, J. P., M. L. Schweizer, J. C. McGregor, and E. N. Perencevich. 2008.
Economics of infection control surveillance technology: Cost-effective or just cost? American Journal of Infection Control 36(3 Suppl 1):S12-S17.
Gakidou, E. E., C. J. L. Murray, and J. Frenk. 2000. Defining and measuring health inequality: An approach based on the distribution of health expectancy. Bulletin of the World Health Organization 78(1):42-54.
Gibson, J. L., D. K. Martin, and P. A. Singer. 2004. Setting priorities in health care organizations: Criteria, processes, and parameters of success. BMC Health Services Research 4(25):1-8.
Gold, M., and R. Nyman. 2004. Evaluation of the development process of the National Healthcare Quality Report. Washington, DC: Mathematica Policy Research, Inc.
Gold, M. R., J. E. Siegel, L. B. Russell, and M. C. Weinstein. 1996. Cost-effectiveness in health and medicine. New York: Oxford University Press.
Gold, M. R., D. Stevenson, and D. G. Fryback. 2002. HALYS and QALYS and DALYS, oh my: Similarities and differences in summary measures of population health. Annual Review of Public Health 23(1):115-134.
Greenberg, D., A. B. Rosen, N. V. Olchanski, P. W. Stone, J. Nadai, and P. J. Neumann. 2004. Delays in publication of cost utility analyses conducted alongside clinical trials: Registry analysis. British Medical Journal 328(7455):1536-1537.
Guyatt, G., D. Gutterman, M. H. Baumann, D. Addrizzo-Harris, E. M. Hylek, B. Phillips, G. Raskob, S. Z. Lewis, and H. Schünemann. 2006. Grading strength of recommendations and quality of evidence in clinical guidelines: Report from an American College of Chest Physicians Task Force. Chest 129(1):174-181.
Harper, S., and J. Lynch. 2005. Methods for measuring cancer disparities: Using data relevant to Healthy People 2010 cancer-related objectives. Bethesda, MD: National Cancer Institute.
———. 2007. Selected comparisons of measures of health disparities: A review using databases relevant to Healthy People 2010 cancer-related objectives. Bethesda, MD: National Cancer Institute.

Harper, S., J. Lynch, S. C. Meersman, N. Breen, W. W. Davis, and M. E. Reichman. 2008. An overview of methods for monitoring social disparities in cancer with an example using trends in lung cancer incidence by area-socioeconomic position and race-ethnicity, 1992-2004. American Journal of Epidemiology 167(8):889-899.
Harper, S., N. B. King, S. C. Meersman, M. E. Reichman, N. Breen, and J. Lynch. 2010. Implicit value judgments in the measurement of health inequalities. Milbank Quarterly 88(1):4-29.
Hauck, K., P. C. Smith, and M. Goddard. 2004. The economics of priority setting for health care: A literature review. Washington, DC: The International Bank for Reconstruction and Development / The World Bank.
HHS (U.S. Department of Health and Human Services). 2009a. Feasibility, alternatives, and cost/benefit analysis guide: Appendix B: Glossary. http://www.acf.hhs.gov/programs/cb/systems/sacwis/cbaguide/appendixb.htm (accessed January 25, 2010).
———. 2009b. Leading health indicators...touch everyone [Healthy People 2010]. http://www.healthypeople.gov/lhi/touch_fact.htm (accessed November 23, 2009).
———. 2009c. Secretarial review and publication of the annual report to Congress submitted by the contracted consensus-based entity regarding performance measurement. Federal Register 74(174):46594-46603.
Hurley, E., I. McRae, I. Bigg, L. Stackhouse, A. M. Boxall, and P. Broadhead. 2009. The Australian health care system: The potential for efficiency gains. A review of the literature. Commonwealth of Australia: National Health and Hospitals Reform Commission.
IOM (Institute of Medicine). 2001. Envisioning the National Healthcare Quality Report. Washington, DC: National Academy Press.
———. 2003. Priority areas for national action: Transforming health care quality. Washington, DC: The National Academies Press.
———. 2006. Valuing health for regulatory cost-effectiveness analysis. Washington, DC: The National Academies Press.
———. 2008.
Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.
Kaiser Family Foundation. 2009. Fact sheet: Black Americans and HIV/AIDS. http://www.kff.org/hivaids/upload/6089-07.pdf (accessed December 15, 2009).
Keppel, K., E. Pamuk, J. Lynch, O. Carter-Pokras, I. Kim, V. Mays, J. Pearcy, V. Schoenbach, and J. S. Weissman. 2005. Methodological issues in measuring health disparities. Vital and Health Statistics 2(141):1-22.
Kilbourne, A. M., G. Switzer, K. Hyman, M. Crowley-Matoka, and M. J. Fine. 2006. Advancing health disparities research within the health care system: A conceptual framework. American Journal of Public Health 96(12):2113-2121.
Langley, G. J., K. M. Nolan, C. L. Norman, L. P. Provost, and T. W. Nolan. 1996. The improvement guide: A practical approach to enhancing organizational performance. New York: Jossey-Bass Business and Management Series.
LaVeist, T. A., D. J. Gaskin, and P. Richard. 2009. The economic burden of health inequalities in the United States. Washington, DC: Joint Center for Political and Economic Studies.
Lurie, N., M. Jung, and R. Lavizzo-Mourey. 2005. Disparities and quality improvement: Federal policy levers. Health Affairs 24(2):354-364.
Maciosek, M. V., A. B. Coffield, N. M. Edwards, T. J. Flottemesch, M. J. Goodman, and L. I. Solberg. 2006a. Priorities among effective clinical preventive services: Results of a systematic review and analysis. American Journal of Preventive Medicine 31(1):52-61.
Maciosek, M. V., N. M. Edwards, A. B. Coffield, T. J. Flottemesch, W. W. Nelson, M. J. Goodman, and L. I. Solberg. 2006b. Priorities among effective clinical preventive services: Methods. American Journal of Preventive Medicine 31(1):90-96.
Maciosek, M. V., A. B. Coffield, N. M. Edwards, T. J. Flottemesch, and L. I. Solberg. 2009. Prioritizing clinical preventive services: A review and framework with implications for community preventive services. Annual Review of Public Health 30(1):341-355.
McGlynn, E. A. 2003. Selecting common measures of quality and system performance. Medical Care 41(1 Suppl):I39-I47.
National Quality Measures Clearinghouse. 2009a. U.S. Department of Health & Human Services (HHS) measure inventory. http://www.qualitymeasures.ahrq.gov/hhs/hhs.index.aspx (accessed November 23, 2009).
———. 2009b. Welcome! http://www.qualitymeasures.ahrq.gov/ (accessed November 23, 2009).
NCVHS (National Committee on Vital and Health Statistics). 2002. National Committee on Vital and Health Statistics (NCVHS), Subcommittee on Populations—Working Group on Quality. http://www.ncvhs.hhs.gov/020725fr.htm (accessed November 23, 2009).
Nerenz, D. R. 2002. Quality of care measures of special significance to minority populations. In Improving healthcare quality for minority patients, edited by the National Quality Forum. Washington, DC: National Quality Forum. Pp. C1-C23.
Nerenz, D. R., K. A. Hunt, and J. J. Escarce. 2006. Health care organizations’ use of data on race/ethnicity to address disparities in health care. Health Services Research 41(4p1):1444-1450.
Neumann, P. J., J. A. Palmer, N. Daniels, K. Quigley, M. R. Gold, and S. Chao. 2008. A strategic plan for integrating cost-effectiveness analysis into the US healthcare system. American Journal of Managed Care 14(4):185-188.
NIHR (National Institute for Health Research) Centre for Reviews and Dissemination. 2009. CRD databases. http://www.crd.york.ac.uk/crdweb/ (accessed January 7, 2010).
NQF (National Quality Forum). 2009a. Measure evaluation criteria. http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008-08-28Final.pdf?n=4701 (accessed February 22, 2010).
———. 2009b. NQF-endorsed standards. http://www.qualityforum.org/Measures_List.aspx (accessed October 28, 2009).
Pearcy, J. N., and K. G. Keppel. 2002. A summary measure of health disparity. Public Health Reports 117(3):273-280.
Regidor, E. 2004.
Measures of health inequalities: Part 1. Journal of Epidemiology and Community Health 58(10):858-861.
Romano, P. S. 2009. Quality performance measurement in California: Findings and recommendations to the California Office of the Patient Advocate. Center for Healthcare Policy and Research, University of California, Davis. Presentation to the IOM Committee on Future Directions for the National Healthcare Quality and Disparities Reports, March 11, 2009. Newport Beach, CA. PowerPoint Presentation.

Rosvall, M., G. Engstrom, G. Berglund, and B. Hedblad. 2008. C-reactive protein, established risk factors and social inequalities in cardiovascular disease—the significance of absolute versus relative measures of disease. BMC Public Health 8(189):1-10.
Rothberg, M. B., I. Abraham, P. K. Lindenauer, and D. N. Rose. 2005. Improving nurse-to-patient staffing ratios as a cost-effective safety intervention. Medical Care 43(8):785-791.
Russell, L. B., J. E. Siegel, N. Daniels, M. R. Gold, B. R. Luce, and J. S. Mandelblatt. 1996. Cost-effectiveness analysis as a guide to resource allocation in health: Roles and limitations. In Cost-effectiveness in health and medicine, edited by J. E. Siegel, L. B. Russell, and M. C. Weinstein. New York: Oxford University Press. Pp. 3-24.
Sabik, L. M., and R. K. Lie. 2008. Principles versus procedures in making health care coverage decisions: Addressing inevitable conflicts. Theoretical Medicine & Bioethics 29(2):73-85.
Sarvela, P. D., and R. J. McDermott. 1993. Health education evaluation and measurement: A practitioner’s perspective. Dubuque, IA: William C. Brown Communications, Inc.
Secretary’s Advisory Committee on National Health Promotion and Disease Prevention Objectives for 2020. 2008a. Phase I report: Recommendations for the framework and format of Healthy People 2020. Appendix . Washington, DC: U.S. Department of Health and Human Services.
———. 2008b. Phase I report: Recommendations for the framework and format of Healthy People 2020. Appendix . Washington, DC: U.S. Department of Health and Human Services.
Siegel, B., D. Bear, E. Andres, and H. Mead. 2009. Measuring equity: An index of health care disparities. Quality Management in Health Care 18(2):84-90.
Siu, A. L., E. A. McGlynn, H. Morgenstern, M. H. Beers, D. M. Carlisle, E. B. Keeler, J. Beloff, K. Curtin, J. Leaning, B. C. Perry, H. P. Selker, W. Weiswasser, A. Wiesenthal, and R. H. Brook. 1992.
Choosing quality of care measures based on the expected impact of improved care on health. Health Services Research 27(5):619-650.
Stinnett, A. A., and J. Mullahy. 1998. Net health benefits: A new framework for the analysis of uncertainty in cost-effectiveness analysis. Medical Decision Making 18(2):S68-S80.
Stolk, E. A., S. J. Pickee, A. H. J. A. Ament, and J. J. V. Busschbach. 2005. Equity in health care prioritisation: An empirical inquiry into social value. Health Policy 74(3):343-355.
Tengs, T. O., and J. D. Graham. 1996. The opportunity cost of haphazard social investments in life-saving. In Risks, costs, and lives saved: Getting better results from regulation, edited by R. W. Hahn. New York: Oxford University Press. Pp. 167-182.
van Hulst, M., J. T. M. de Wolf, U. Staginnus, E. J. Ruitenberg, and M. J. Postma. 2002. Pharmaco-economics of blood transfusion safety: Review of the available evidence. Vox Sanguinis 83(2):146-155.
Waidmann, T. 2009. Estimating the cost of racial and ethnic health disparities. Washington, DC: The Urban Institute.
Whitehead, M., and G. Dahlgren. 1991. What can be done about inequalities in health? Lancet 338(8774):1059-1063.
Whitlock, E. P., S. A. Lopez, S. Chang, M. Helfand, M. Eder, and N. Floyd. 2010. AHRQ series paper 3: Identifying, selecting, and refining topics for comparative effectiveness systematic reviews: AHRQ and the effective health-care program. Journal of Clinical Epidemiology 63(5):491-501.
Wong, J. B., C. Mulrow, and H. C. Sox. 2009. Health policy and cost-effectiveness analysis: Yes we can. Yes we must. Annals of Internal Medicine 150(4):274-275.
Woolf, S. H. 2009. A closer look at the economic argument for disease prevention. Journal of the American Medical Association 301(5):536-538.