A high-quality cancer care delivery system should translate evidence into practice, measure quality, and improve the performance of clinicians. To arrive at a high-quality cancer care delivery system that does just that, clinicians need tools and initiatives that assist them with quickly incorporating new medical knowledge into routine care. Clinicians also need to be able to measure and assess progress in improving the delivery of cancer care, publicly report that information, and develop innovative strategies for further performance improvement.
In the figure illustrating the committee’s conceptual framework (see Figure S-2), knowledge translation and performance improvement are part of a cyclical process that measures the outcomes of patient-clinician interactions, implements innovative strategies to improve care, evaluates the impact of those interventions on the quality of care, and generates new hypotheses for investigation. Clinical practice guidelines (CPGs), quality metrics, and performance improvement initiatives are all tools supportive of that cyclical process. CPGs and performance improvement strategies enhance the translation of evidence into practice. Specifically, CPGs translate research results into clinical recommendations for clinicians, and performance improvement initiatives systematically bring about a change in the delivery of care that reflects the best available evidence. Quality metrics evaluate health care clinicians’ performance and practices by comparing actual clinical practices against recommended practices, and identifying areas that could be improved.
A high-quality cancer care delivery system’s focus on quality metrics and CPGs is consistent with the Institute of Medicine’s (IOM’s) 1999 report Ensuring Quality Cancer Care, which recommended improving clinicians’ use of systematically developed guidelines and increasing the measurement and monitoring of cancer care using a core set of quality measures (IOM and NRC, 1999). Despite those recommendations, the translation of research findings into practice in the current cancer care system has been slow and incomplete, and many challenges plague the system for measuring and assessing performance. CPGs, for example, are often developed by fragmented processes that lack transparency (IOM, 2011c). Serious limitations in the evidence base supporting CPGs can result in different guidelines being developed on the same topic with conflicting advice to clinicians. Performance improvement initiatives are generally modest, localized efforts, and because they are tailored to unique local circumstances, they are difficult to translate to the national level. Similarly, there are many challenges and pervasive gaps in existing measures that impede the development of cancer quality metrics.
The previous chapters discussed the importance of improving the scientific evidence base to guide the clinical decision making of patients and their health care clinicians, as well as the role of a learning health care information technology (IT) system for cancer in accomplishing this goal. This chapter discusses how to ensure that this evidence is translated into practice, that quality is measured, and that the system monitors and assesses its performance. The majority of the chapter focuses on cancer quality metrics. The committee commissioned a background paper on this topic and identified a great need for improvement in the metrics development process. The remainder of the chapter focuses on CPGs and performance improvement initiatives. The committee relied heavily on the IOM’s previous work on CPGs to derive the evidence base for the guideline portion of this chapter (IOM, 2008, 2011c). The committee makes one recommendation for improving cancer quality metrics.
Cancer quality measures provide objective descriptors of the consequences of care and transform the nebulous concept of “good medicine” into a measurable discipline. These measures serve a number of roles in assessing quality of care by providing a standardized and objective means of measurement. For example, quality assurance measures assess a clinician’s or an organization’s performance for purposes of compliance, accreditation, and payment. Performance improvement metrics, however, are designed to identify gaps in care with the objective of closing those gaps. Typically these measures are implemented in a collaborative, rather than a punitive, environment. They can drive improvements in care by informing patients and influencing clinician behavior and reimbursement. Appropriately selected quality measures may be used prospectively to influence decision making and care planning and to align the mutual interests of patients, caregivers, clinicians, and payers. Moreover, they can provide insights into practice variations between clinicians and document changes over time within a given practice setting.
1 This section of the chapter was adapted from a background paper by Tracy Spinks, MD Anderson Cancer Center and Consultant, IOM Committee on Improving the Quality of Cancer Care: Addressing the Challenges of an Aging Population (2012).
There are many unique considerations in measuring the quality of cancer care. As discussed in earlier chapters, the complexity of cancer care has exceeded that of many other common chronic conditions. Cancer comprises hundreds of different types of diseases and subtypes and includes multiple stages of disease (e.g., precancer, early-stage disease, metastatic disease). Cancer care often occurs in multiple phases—an acute phase, a chronic phase, and an end-of-life phase—requiring different treatments and approaches to care over time. The multiple treatment modalities and combination strategies during the acute treatment phase demand coordinated teams of professionals with multiple skill sets. Treatment during the chronic phase also requires coordination between various care teams. Additionally, patients and clinicians must make difficult treatment decisions due to the toxicity of many of the treatment options. Quality measures in cancer need to reflect and account for these complex characteristics of the disease.
The National Quality Forum (NQF), the Agency for Healthcare Research and Quality (AHRQ), the American Society of Clinical Oncology (ASCO), and the American College of Surgeons’ (ACoS’s) Commission on Cancer (CoC) have developed2 or endorsed3 a number of quality measures specific to or applicable to cancer for use in performance improvement and national mandatory reporting programs in the United States. These measures broadly fall into two categories: disease-specific measures (e.g., measures specific to breast cancer), and cross-cutting measures, which apply to a variety of cancers. Additionally, the Patient Protection and Affordable Care Act4 outlined six categories of measures for use in federal reporting of cancer care by the nation’s eleven cancer centers not paid under the Prospective Payment System (PPS)5: outcomes, structure, process, costs of care, efficiency, and patients’ perspectives on care. Existing measures are largely process oriented, although there are some measures of outcomes, structure, and patients’ perceptions of care. The activities of major organizations involved in quality metrics in cancer are summarized in Table 7-1.
2 An organization develops a quality measure by investing time and resources to create a new variable to measure.
3 An organization endorses a quality measure by publicly expressing support or approval for the measure.
4 Patient Protection and Affordable Care Act, Public Law 111-148, 111th Congress, 2nd Sess. (March 23, 2010).
5 The Prospective Payment System is used by Medicare to reimburse providers for services based on predetermined prices.
|Assessing Care of Vulnerable Elders (ACOVE)||ACOVE quality measures were developed by health services researchers at RAND Corporation in 2000 to assess care provided to vulnerable older adults (defined as those most likely to die or become severely disabled in the next 2 years). The measures reflect the complexity of measuring the quality of care for older adults, who often have multiple comorbidities and substantial variation in treatment preferences. They cover the broad range of health care issues that older adults experience, including primary care, chronic obstructive pulmonary disease, colorectal cancer, breast cancer, sleep disorders, and benign prostatic hypertrophy.|
|National Cancer Data Base (NCDB)||The Commission on Cancer (CoC) is a multidisciplinary consortium dedicated to increasing survival and improving quality of life in cancer patients through research, education, standard setting, and quality assessments. Currently, more than 1,500 cancer programs meet the criteria for CoC accreditation (ACoS, 2011d), which requires a review of the scope, organization, and activity of the cancer program and compliance with 36 specific standards (ACoS, 2011c). Since 1996, all CoC-accredited cancer programs have been required to submit data to the NCDB, a joint program of CoC and the American Cancer Society. The cases submitted to the NCDB represent approximately 70 percent of all newly diagnosed cancer cases in the United States and are summarized in various clinician-level reports to facilitate performance improvement, create benchmarks for comparative purposes, and identify trends in cancer care, such as survival and cancer incidence.|
|National Quality Forum (NQF)||The NQF was formed in 1999 in response to a specific recommendation of the President’s Advisory Commission to create a nonprofit, public-private partnership that would develop a national strategy for measuring and reporting on health care quality to advance national aims in health care. In 2009, the NQF was awarded a contract with the U.S. Department of Health and Human Services (HHS) to endorse health care quality measures for use in public reporting in the United States. To date, the NQF has endorsed more than 60 cancer-specific measures that were developed by the American Society of Clinical Oncology (ASCO), the American Medical Association’s (AMA’s) Physician Consortium for Performance Improvement, the American Society for Radiation Oncology, and the American Urological Association. These include more than 40 disease-specific measures that assess screening, diagnosis and staging, and initial cancer treatment (e.g., measures that assess concordance with treatment guidelines for breast cancer). The NQF has also endorsed broader cross-cutting measures that focus on end-of-life issues, such as symptom management and overutilization of care.|
|National Quality Measures Clearinghouse (NQMC)||The Agency for Healthcare Research and Quality established the NQMC in 2002 to serve as a Web-based repository of evidence-based health care quality measures and to promote widespread access to these measures among health care clinicians, health plans, purchasers, and other interested stakeholders. As of June 2013, the NQMC included 370 cancer-specific measures that assess screening, initial treatment, and end-of-life care. Of note, the NQMC includes many NQF-endorsed measures as well as cancer-specific measures that were developed outside of the United States, such as in Australia and the United Kingdom. The NQMC also includes a database of 95 cancer-specific measures currently used by the various agencies within HHS, including the Medicare Fee-For-Service Physician Feedback Program, the Meaningful Use Electronic Health Record Incentive Program, and the Hospital Outpatient Quality Reporting Program.|
|National Surgical Quality Improvement Program (NSQIP)||The Department of Veterans Affairs (VA) developed NSQIP in 1994 to monitor and improve the quality of surgical interventions in all VA medical centers. The American College of Surgeons expanded NSQIP in 2004 to serve as a private-sector quality improvement program for surgical care. The program is intended to assist hospitals in capturing and reporting 30-day morbidity and mortality outcomes for all major inpatient and outpatient surgical procedures. Examples of measures include surgical site infection, urinary tract infection, surgical outcomes in older adults, colorectal surgery outcomes, and lower-extremity bypass. The measures are captured by a site’s Surgical Clinical Reviewer, who reviews patients’ medical charts and, if necessary, may contact patients by letter or phone.|
|Physician Consortium for Performance Improvement (PCPI)||PCPI, a national, physician-led initiative convened by the AMA, has developed evidence-based health care quality measures for use in the clinical setting. The NQF has endorsed more than 20 cancer-specific measures developed by PCPI, including cross-cutting measures for pain and disease-specific measures for breast, prostate, and other cancers.|
|Quality Oncology Practice Initiative (QOPI)||ASCO began work on its QOPI Program in 2002 to fill the void in oncology quality measurement. ASCO made the QOPI Program available to its member physicians as a voluntary practice-based program in 2006. This program provides tools and resources to oncology practices for quality measurement, benchmarking, and performance improvement and currently has more than 800 registered member practices. ASCO also offers a 3-year certification through its QOPI Certification Program, which is available to outpatient medical or hematology oncology practices in the United States. QOPI certification is awarded to practices that meet data submission requirements, achieve minimum performance on a subset of QOPI measures, and comply with certification standards developed by ASCO and the Oncology Nursing Society. As of June 2013, there were 190 QOPI-certified oncology practices across the country.|
SOURCES: ACoS, 2011a,b,c,d, 2013; AHRQ, 2012b,c,d,e; AMA, 2012; ASCO, 2012b,c,e, 2013; Bilimoria et al., 2008; Jacobson et al., 2008; Kizer, 2000; McNiff, 2006; Menck et al., 1991; NQF, 2012b,d, 2013c; President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry, 1998; RAND, 2010.
Challenges in Cancer Quality Measurement
There is minimal empirical support that publicly reporting health care quality measures has triggered meaningful improvements in the effectiveness, safety, and patient-centeredness of care (Shekelle et al., 2008; Werner et al., 2009). At best, experts have noted “pockets of excellence on specific measures or in particular services at individual health care facilities” (Chassin and Loeb, 2011, p. 562). Because cancer care has largely been excluded from public reporting, it is unclear whether these findings will hold true for cancer care in the future; however, some studies examining the impact of quality reporting in cancer care have noted improvements in care.
Blayney and colleagues studied the impact of implementing the ASCO Quality Oncology Practice Initiative (QOPI) at the University of Michigan’s Comprehensive Cancer Center between 2006 and 2008. They found that physicians changed their behavior when provided with oncology-specific quality data, especially in the areas of treatment planning and management (Blayney et al., 2009). Between 2009 and 2011, Blayney and colleagues expanded their focus and evaluated the impact of implementing QOPI at multiple oncology practices. They concluded that physician participation in the voluntary reporting program increased when the costs of data collection were defrayed by Blue Cross Blue Shield of Michigan. At the same time, they found that providing physicians with access to the quality reports was insufficient to trigger measurable improvements in care across participating practices (Blayney et al., 2012). In a separate study, Wick and colleagues studied the impact of participation in the ACoS’s National Surgical Quality Improvement Program (NSQIP) on surgical site infection rates following colorectal surgery at the Johns Hopkins Hospital. They observed a 33.3 percent reduction in the surgical site infection rate during the 2-year period studied (July 2009 to July 2011) (Wick et al., 2012).
There is no federal program that requires clinicians to report data on core cancer measures. Existing programs are primarily voluntary and favor “measures of convenience,” which are easy to report but lack meaning for patients (Spinks et al., 2011, p. 669). These measures are generally clinician-oriented, reflect existing fragmentation in care, and lack a clear method for triggering improvements. Most measures focus on short-term outcomes in care. Thus, there are serious deficiencies in cancer quality measurement in the United States, including (1) pervasive gaps in existing cancer measures, (2) challenges intrinsic to the measure development process, (3) a lack of consumer engagement in measure development and reporting, and (4) the need for data to support meaningful, timely, and actionable performance measurement. This chapter discusses each of these issues below.
Gaps in Existing Cancer Measures
No current quality reporting program or set of measures adequately assesses cancer care in a comprehensive, patient-oriented way. A recent report by the NQF-convened Measure Applications Partnership (MAP), which provides input to the Secretary of Health and Human Services (HHS) on the selection of measures for use in federal reporting, noted that cancer care measures are largely disease specific, process focused, and measured at the clinician level. These measures support operational improvement, but they are limited in their ability to induce wide-scale improvements in care, and provide limited insight into overall health care quality (MAP and NQF, 2012). For example, process measures are useful for establishing minimum standards for delivery systems to achieve and are simple to validate. Unfortunately, they do not reliably predict outcomes, and they rarely account for patient preferences about what constitutes desirable care. Thus, it is important that process measures be supplemented by additional measures of outcome, structure, efficiency, cost, and patients’ perceptions of their care. Table 7-2 provides a summary of the benefits and drawbacks of the various types of measures used in cancer care.
All phases of the cancer care continuum—from prevention and early detection, to treatment, survivorship, and end-of-life care—need new measures. While NQF-endorsed measures and those included in the National Quality Measures Clearinghouse (NQMC) focus on screening and initial cancer treatment, few measures address post-treatment follow-up and the long-term consequences of care, such as survivorship care, disease recurrence, and secondary cancers. Assessments of end-of-life care, including overuse of therapeutic treatment at the end of life, are included in both measure sets, but could be expanded (AHRQ, 2012c; NQF, 2012d). The QOPI measure set primarily addresses treatment and includes a few measures related to prevention and diagnosis, as well as more than 25 measures evaluating end-of-life care (ASCO, 2012d). All of these measure sets, however, could better assess palliative care and hospice care referral patterns and the associated quality of life for cancer patients requiring these services. The MAP report emphasized survivorship care (by stage and cancer type), palliative care, and end-of-life care as priorities for enhancing quality measurement across the continuum of care (MAP and NQF, 2012).
Existing cancer measures also often fail to address all of the relevant dimensions of cancer care, such as access to care and care coordination, evaluation and management of psychosocial needs, patient and family engagement (especially shared decision making and honoring patient preferences), management of complex comorbidities, and advance care planning for cancer patients. There are a number of NQF-endorsed measures, as well as measures in the NQMC and QOPI, that focus on the short-term physical consequences of cancer and its treatment (AHRQ, 2012c; ASCO, 2012d; NQF, 2012d). In addition, Cancer Care Ontario recently conducted a performance improvement project that included developing measures to assess the integration and coordination of palliative care services in cancer care (Dudgeon et al., 2009). However, management of complex comorbidities, the functional, emotional, and social consequences of the disease, and other dimensions of high-quality care remain largely unaddressed by current measures (Bishop, 2013; Spinks et al., 2011).
|Structure||Measures the settings in which clinicians deliver health care, including material resources, human resources, and organizational structure (e.g., types of services available, qualifications of clinicians, and staffing hierarchies)||Identifies core infrastructure needed for high-quality care||Difficult to compare across settings of variable sizes and resources; implications for patients’ outcomes not always clear|
|Process||Measures the delivery of care in defined circumstances (e.g., screening the general population, psychosocial evaluations of all newly diagnosed patients, care planning before starting chemotherapy)||Encourages evidence-based care and is generally straightforward to measure||Need to consider patient choices that differ from standard of care and contraindications; implications for patients’ outcomes not always clear|
|Clinical Outcome||Measures personal health and functional status as a consequence of contact with the health care system (e.g., survival, success of treatment)||Allows assessment of ultimate endpoints of care||Need to risk adjust for comorbidities; difficult to compare across settings with variable populations|
|Patient-Reported Outcome||Measures patients’ perceived physical, mental, and social well-being based on information that comes directly from the patient (e.g., quality of life, time to return to normal activity, symptom burden)||Integrates the patient’s “voice” into the medical record||Some outcomes are outside the scope of clinical care (e.g., social well-being)|
|Patients’ Perspective on Care||Measures patients’ satisfaction with the health care they received||Gathers data on patients’ experience throughout the health care delivery cycle||Need to account for patients’ limitations in assessing technical aspects of care|
|Cost||Measures the resources required for the health care system to deliver care and the economic impact on patients, their families, and governmental and private payers||Allows parties to weigh the relative values of potential treatment options, when combined with outcome measures||Difficult to measure the true cost of care given the range of prices and expenses in medical care; costs vary according to perspective (patients, payer, society, etc.); need to distinguish between costs and charges|
|Efficiency||Measures the time, effort, or cost to produce a specific output in the health care system (e.g., time to initiate therapy after diagnosis, coordination of care)||Reflects important determinants of patients’ outcomes and satisfaction with care and is a major driver of cost||Need to correlate with outcome measures; need to account for patient characteristics and preferences|
|Cross-Cutting||Measures issues that cross cancer or disease types (e.g., patient safety, care coordination, equity, and patients’ perspective on care)||Aligns with measurement of other cancers or conditions and reflects true multidisciplinary nature of cancer care||Difficult to capture the unique characteristics of cancer|
|Disease-Specific||Measures issues within a specific cancer type (e.g., clinicians’ concordance with clinical practice guidelines for breast, prostate, and colon cancer)||Reflects diversity of cancer and tumor biology||Need to account for stage of disease at presentation and comorbidities|
NOTE: Quality measurement centers on three major elements: structure, process, and outcome (Donabedian, 1980). These elements have been expanded in recent years to include concepts of efficiency, cost, and patient-reported outcomes. The types of measures are interrelated and overlapping. For example, a measure can be both disease-specific and a process or outcome measure, or both a patient-reported outcome and a clinical outcome.
There are also gaps in measures that assess care planning and care coordination, which is particularly problematic because cancer care is rarely confined to one hospital or physician. Cancer patients tend to move between multiple care settings—primary care teams, cancer care teams, community and specialty hospitals, and potentially emergency centers, long-term care facilities, and hospice care (MAP and NQF, 2012). Existing cancer measures are limited by where a patient receives cancer care because many oncology practices and hospitals lack the infrastructure and sophistication to measure the quality of care they deliver. Moreover, NQF requires its endorsed measures to be validated in a specific disease or care setting, thus limiting the applicability of the measures to persons with multiple comorbidities or to those who traverse multiple care settings. In addition, the measurement of care is fragmented and rarely focused on the overall patient experience. Few measurement systems integrate a patient’s experience across care settings.
Quality metric development has also thus far failed to prioritize less common cancers. NQF has endorsed, and AHRQ has included in the NQMC, a number of disease-specific measures for more common cancers, such as breast and prostate cancers, as well as for less common cancers, such as pancreatic cancer and multiple myeloma. These measures are not evenly distributed across diseases, however, and there are few or no measures for other rare cancers, such as brain and ovarian cancers (AHRQ, 2012c; NQF, 2012d). QOPI, for example, includes disease-specific measures for breast, colorectal, lung, and gynecologic cancers, and non-Hodgkin lymphoma, but does not address prostate cancer or many rare cancers (ASCO, 2012d).
The IOM’s 1999 report on the quality of cancer care recommended that patients undergoing technical procedures be treated in high-volume facilities (IOM and NRC, 1999). A large body of evidence shows that patients undergoing high-risk surgeries at high-volume facilities have better health outcomes and short-term survival than patients treated in low-volume facilities (Birkmeyer et al., 2003; Finks et al., 2011; Finlayson et al., 2003; Ho et al., 2006). Even with their strong track record, however, high-volume facilities currently lack the capacity to treat all cancer patients who require highly skilled procedures (Finks et al., 2011; Spinks et al., 2012). Thus, it will be necessary to establish additional quality measures that identify high-quality, lower volume facilities and clinicians.
ACoS’s NSQIP, the American Board of Medical Specialties Maintenance of Certification Evaluation of Performance in Practice, the Joint Commission Ongoing Professional Practice Evaluation, and some payer-driven pay-for-performance initiatives are implementing programs for clinicians at low-volume facilities to transparently attain, verify, and maintain competence in highly technical procedures. These programs should be continually employed to help patients identify competent clinicians, regardless of the size of the program in which they practice (Spinks et al., 2012).
The challenges to developing meaningful and comprehensive quality measurements are amplified in older adults with cancer. Older adults have been underrepresented in quality measurement for cancer care for several reasons: their underrepresentation in clinical trials (see Chapter 5), conflicting recommendations and clinician beliefs regarding cancer screening and therapeutic treatment for this population, increased sensitivity to treatment-related toxicities, and multiple comorbidities (see discussion on older adults in Chapter 2). As a result, existing quality measures may not apply directly to older adults with cancer, and in some cases, existing quality measures may be clinically inappropriate for older adults with cancer. Process-based measures are traditionally developed based on guidelines for patients with a single disease (i.e., cancer), which do not address the complexities of caring for many older patients who have multiple, complex conditions and receive care across multiple settings over time.
Challenges Associated with the Measure Development Process
Many of the cancer measurement gaps stem from challenges associated with the measure development process. The NQF, AHRQ, and other organizations have adopted stringent criteria for evaluating health care quality measures, such as scientific acceptability, usability, importance, and feasibility. These criteria help ensure meaningful quality metrics that measure what they are intended to measure and help inform the decisions of patients, payers, and federal and state agencies. While this approach is generally well suited to process-based measures that evaluate the technical aspects of care (e.g., guideline adherence), it is less well suited to measures that assess the interpersonal aspects of care, outcomes, patients’ perspectives on care, and other non-process-oriented dimensions of quality. In addition, quality measures often do not account for the appropriateness of some process-of-care measures in special circumstances (e.g., advanced dementia, short life expectancy).
A lack of national coordination and oversight seriously compromises the measure development process. Many independent groups, capable of funding the testing and validation of their own measures, have developed discipline-specific quality metrics, which reflects the fragmentation in health care delivery. In its 2011 report For the Public’s Health: The Role of Measurement in Action and Accountability, the IOM noted that this process had produced an abundance of overlapping health care measure sets that vary in quality and application, confuse health care decision makers, and lead to further fragmentation in an already splintered field (IOM, 2011a). For example, the NQF has endorsed two measures related to hormonal therapy for hormone receptor positive breast cancer: NQF measure #0220—Adjuvant hormonal therapy; and NQF measure #0387—Oncology: Hormonal therapy for stage IC through IIIC, ER/PR positive breast cancer (NQF, 2012a,e). Both measures are based on the National Comprehensive Cancer Network’s (NCCN’s) CPGs for breast cancer patients, yet they have slight but meaningful differences (e.g., in patient population, care setting, and/or data source). This lack of coordination has contributed to the pervasive gaps in measures discussed above.
Efforts by organizations, such as the NQF, to create parsimonious families of quality measures have reduced measure fragmentation to a limited degree. These organizations have prioritized the development of measures that fill crucial gaps in cancer measurement and apply to certain diseases and dimensions of care. Because these organizations lack the authority to ensure that measure developers implement their recommendations, however, minimal progress has been made in filling persisting gaps.
These groups are also working to harmonize existing measures. In its 2010 publication Guidance for Measure Harmonization—A Consensus Report, the NQF provided specific guidance to measure developers and NQF project steering committees, outlining seven principles for measure harmonization as well as considerations for harmonizing overlapping and related measures (NQF, 2010). Additionally, when submitting measures to the NQF for potential endorsement, measure developers must attest that the measure has been harmonized with existing measures (NQF, 2012c).
Compared with the scientific evidence supporting measurement of the technical aspects of cancer care (Schneider et al., 2004), there is a major void in the body of evidence supporting measure development for other dimensions of care—most notably, access to care and care coordination; patient and family engagement (including shared decision making and honoring patient preferences); management of complex comorbidities; quality-of-life issues during and after treatment; reintegration into society (e.g., return to work); and the costs of care. In Chapter 5, the committee makes several recommendations for improving the breadth and depth of information collected in clinical research. If these recommendations are implemented, the scientific evidence available to inform measurement development should improve.
Many process-of-care measures assess adherence to disease- and stage-specific CPGs. Despite the ubiquity of these guidelines, wide variations in adherence have been observed for certain diseases, for selected clinicians, and within cancer programs (Foster et al., 2009; Romanus et al., 2009). The voluntary nature of guideline adherence drives some variation, while a patient’s prior cancer treatment, comorbidities, and preferences may also influence guideline adherence (Spinks et al., 2012). Respecting individual patient needs, values, and preferences is at the heart of patient centeredness and is the foundation for the shift toward patient-driven, personalized cancer care (see discussion in Chapter 3). Thus, measures of clinicians’ adherence to guidelines must account for patient preferences in assessing performance without penalizing clinicians for honoring patients’ preferences. These measures should address patients who opt for care that differs from recommendations for screening and treatment (Kahn et al., 2002).
Several process-of-care measures “credit” physicians for recommending guideline-based treatment to their cancer patients, even when the patient does not receive the treatment because of medical contraindications or patient preference (e.g., NQF measure #0220—Adjuvant hormonal therapy) (NQF, 2012a). This approach is appropriate in many instances, but measures should be transparent and should distinguish between concordant care and recommended care. These delineations can identify areas where disparities in access to care exist and can be used to understand the relationship between long-term outcomes and the use of evidence-based guidelines (NQF, 2012c).
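The measure logic described above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual NQF #0220 specification: the field names, classification rules, and patient records are invented for demonstration. The point is that a measure can “credit” a clinician for recommending guideline-based treatment while still reporting recommended-but-not-received cases as a separate category, preserving the distinction between concordant and recommended care.

```python
# Hypothetical sketch of a process-of-care measure that credits a
# recommendation of guideline-based treatment even when the patient
# does not receive it, while keeping the categories distinct for
# transparent reporting. Field names and rules are illustrative only.
from collections import Counter

def classify(patient):
    """Return the measure category for one eligible patient record."""
    if patient["treatment_received"]:
        return "concordant"                # recommended and received
    if patient["recommended"]:
        return "recommended_not_received"  # e.g., refusal or contraindication
    return "not_recommended"               # potential quality gap

# Invented patient records for demonstration.
patients = [
    {"recommended": True,  "treatment_received": True},
    {"recommended": True,  "treatment_received": False},  # patient declined
    {"recommended": False, "treatment_received": False},
]

counts = Counter(classify(p) for p in patients)

# Pooling the first two categories gives "credit" for the
# recommendation alone; reporting them separately preserves the
# concordant-versus-recommended distinction.
credited = counts["concordant"] + counts["recommended_not_received"]
print(f"Credited: {credited}/{len(patients)}")  # 2/3
print(dict(counts))
```

A real measure specification would also define eligibility windows, exclusion codes, and data sources; this sketch only shows why transparent category reporting matters.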
Clinician attribution can also challenge the development of quality measures. Health care quality measures should assess aspects of care that may be influenced by individual clinicians (IOM, 2001), specifically for the purposes of accountability and reimbursement. In the current health care delivery system, where patients often move between multiple care settings and multiple clinicians are influencing patient outcomes, attribution of health care outcomes has become daunting. The shift to an “episode of care” framework, where quality is assessed and costs are accumulated across clinicians for a specific condition or disease or a designated period of time, could make the assessment of clinician attribution even more complicated because it will be unclear which clinician is responsible for each health outcome (Krumholz et al., 2008).
Understandably, clinicians may be reluctant to be held accountable for the outcomes of care when multiple health care clinicians are engaged in its delivery. When resource use for any one patient is evaluated across multiple clinicians, these concerns may be amplified (Hussey and McGlynn, 2009). Thus, measure developers should adopt adequate precautions to ensure that measures are attributed to the individuals, groups,
or organizations responsible for the decisions, outcomes, and costs of care (Krumholz et al., 2008). Cancer care plans, as recommended in Chapter 3, should indicate who is responsible for each element of care provision, thereby making attribution easier. In assessing whether appropriate care was received, quality measures should account for the complications of treating asymptomatic disease, inappropriate or inadequate prior care, and patient preferences that differ from clinician recommendations (Kahn et al., 2002).
A number of risk-adjustment strategies, which account for factors that influence clinical outcomes (e.g., patient demographics, severity of illness, comorbid conditions), have been developed to support equitable comparisons across clinicians and to assess variations in patient outcomes. Because these models are not specific to cancer, however, they ignore some primary drivers of cancer outcomes: cancer type and stage, tumor markers, functional status and well-being, previous treatment, and patient adherence with treatment regimens (Kahn et al., 2002). Although a number of efforts (most recently by the University Health System Consortium) have been initiated to enhance existing risk-adjustment methodologies for more meaningful comparisons of cancer care (UHC, 2012), the utility of these models is limited by the availability, quality, and completeness of data to support risk adjustment.
Traditional risk-adjustment models rely on administrative claims data, which are widely available but fail to capture many important variables, such as functional status, patient adherence with treatment regimens, socioeconomic status, and education level. Thus, standardized definitions, data collection, and reporting methods should be adopted for these outcome drivers. Additionally, as risk-adjustment models are refined, risk adjustment should not mask disparities in care (Deutsch et al., 2012; NQF, 2012c; Weissman et al., 2011). Weissman and colleagues recommended stratifying outcomes by socioeconomic status and other demographic factors, where possible, rather than adjusting for these factors (Weissman et al., 2011).
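The stratify-rather-than-adjust recommendation can be made concrete with a small numeric sketch. All of the counts below are invented for illustration; the example simply shows how a single pooled rate can look unremarkable while stratified reporting surfaces a large gap between groups.

```python
# Hypothetical illustration (numbers invented): why stratifying
# outcomes by a demographic factor can reveal a disparity that a
# single pooled rate would mask.

def rate(events, n):
    """Observed outcome rate."""
    return events / n

# Invented 5-year survival counts for one hypothetical cancer center,
# split by a socioeconomic stratum.
strata = {
    "higher-SES patients": {"survivors": 180, "patients": 200},  # 90%
    "lower-SES patients":  {"survivors": 120, "patients": 200},  # 60%
}

# A single pooled rate hides the gap between the two groups.
pooled = rate(
    sum(s["survivors"] for s in strata.values()),
    sum(s["patients"] for s in strata.values()),
)
print(f"Pooled survival rate: {pooled:.0%}")  # 75% -- looks unremarkable

# Stratified reporting, as Weissman and colleagues recommend,
# surfaces the 30-point disparity directly.
for group, s in strata.items():
    print(f"{group}: {rate(s['survivors'], s['patients']):.0%}")
```

Adjusting for socioeconomic status in a model would similarly absorb the 30-point difference into a covariate, which is exactly the masking effect the stratified presentation avoids.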
A similar problem exists in comparing measures across care settings, especially measures of patients’ survival. Recently, a number of academic cancer centers began publishing their 3- and 5-year survival outcomes on the Internet, usually comparing their outcomes to national statistics or community-based data (Goldberg, 2011). Although survival outcomes data are critically important to patients, interpreting comparative survival outcomes data is complicated because of the great variability in cancer care delivery organizations’ patient populations and approaches to staging and labeling cancers (Berry, 2011). NQF’s MAP has had formal discussions about how to publicly report survival outcomes in a way
that allows meaningful comparisons (NQF, 2012f). However, considerable work needs to be done to achieve this goal.
Finally, small sample sizes can create problems for the measure development process. As part of that process, developers perform statistical testing to ensure that the measure results are statistically valid. Small sample sizes—which arise from measuring rare diseases, the increasing specificity of many measures, and measuring at the clinician level where an individual clinician may only see a small number of patients with a specific condition of interest (Higgins et al., 2011; MAP and NQF, 2012)—make it difficult to validate results. Cross-cutting measures that focus more broadly on patient safety, care coordination, and patients’ perspectives on care can help to overcome the limitations of small sample sizes (MAP and NQF, 2012). Use of these measures would create opportunities for assessing the quality of care across a much larger population, particularly for patients with rarer diseases that have not been addressed by existing disease-specific measures.
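The small-sample problem can be quantified with a standard confidence-interval calculation. The sketch below uses the Wilson score interval for a binomial proportion; the panel sizes and the 80 percent observed adherence rate are invented for illustration. The same observed rate yields a far wider interval for a clinician with few eligible patients, which is why measure results at the individual-clinician level are often not statistically distinguishable from chance.

```python
# Hypothetical illustration of the small-sample problem: the same
# observed adherence rate gives a much wider 95% confidence interval
# for a clinician with a small patient panel. Uses the standard
# Wilson score interval; all counts are invented.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Same 80% observed adherence rate, small panel vs. large panel.
for successes, n in ((8, 10), (320, 400)):
    lo, hi = wilson_interval(successes, n)
    print(f"n={n:>3}: observed 80%, 95% CI = ({lo:.2f}, {hi:.2f})")
```

With 10 patients the interval spans tens of percentage points, so the measure cannot reliably separate high from average performers; cross-cutting measures widen the denominator and shrink the interval.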
Lack of Consumer Engagement in Quality Measurement
Publicly reporting health care quality measures has been championed as a means of guiding patients to high-quality and efficient health care. The HHS National Strategy for Quality Improvement in Health Care (the National Quality Strategy) has identified public reporting as a policy lever for improving patients’ access to high-quality and affordable care in the United States (National Priorities Partnership, 2011). Furthermore, Hibbard and Sofaer proposed that consumer use of comparative performance reports might influence health care quality by enabling patients to seek out and obtain high-quality health care and by encouraging performance improvement among health care clinicians seeking to protect their reputations and maintain their market share (Hibbard and Sofaer, 2010).
This consumer-driven health care model assumes that patients, when provided with health care quality data, will seek care from high-quality and low-cost clinicians (Harris and Beeuwkes Buntin, 2008). Research suggests that patients have a strong interest in information on clinician quality (Harris and Beeuwkes Buntin, 2008), but rarely use health care quality data in choosing a clinician (Faber et al., 2009; Totten et al., 2012). For example, the Henry J. Kaiser Family Foundation, together with AHRQ, conducted a series of patient surveys in 2000, 2004, and 2006 to assess the national perception of health care quality, patients’ exposure to and use of health care quality information, and patients’ experience with poor care coordination and medical errors. In the 2006 study, only 36 percent of respondents reported viewing information on the quality of health plans, hospitals, and doctors within the prior year, and only 20
percent of respondents reported using this information to make health care decisions. Exposure to and usage of information on health plans was highest (29 percent and 12 percent, respectively) while exposure to and usage of information on physicians was lowest (12 percent and 7 percent, respectively) (KFF and AHRQ, 2006).
In 2010, AHRQ began publishing its “Best Practices in Public Reporting” series to guide public and private organizations in making public reports of health care quality data clearer, more meaningful, and more actionable for patients. The first report in this series—Best Practices in Public Reporting No. 1: How to Effectively Present Health Care Performance Data to Consumers—outlined several challenges to consumers’ use of health care quality data, including differing definitions of quality among patients and clinical experts, and consumer difficulty with understanding and interpreting quality measures. The report also noted that clinical quality measures are often not meaningful to patients and are frequently misinterpreted. For example, patients may not associate high rates of hospital readmissions with poor care or harm by clinicians. Additionally, patients may erroneously equate more efficient, lower cost care with poor care (Hibbard and Sofaer, 2010). This type of misinterpretation may be common across all segments of society, but is likely more concentrated among individuals with poor health literacy, a characteristic that is disproportionately high among older adults and individuals with limited education, poor English proficiency, lower socioeconomic status, or mental or physical disabilities (IOM, 2011b).
Although most publishers of health care quality data have adopted a philosophy that “if you build it, they will come,” there is a dearth of consumer engagement in developing these reports; fundamental differences in perceptions of quality and value of health care by patients, clinicians, health plans, and state and federal agencies are likely contributors. Research suggests that patients place a high value on clinicians who are responsive to their individual needs, access to and choice of clinicians and services, and treatments that maximize their quality of life and productivity. Clinicians evaluate care in terms of their ability to draw on their medical expertise to achieve optimal patient outcomes, while health plans and state and federal agencies tend to equate quality of care with efficiency, appropriate utilization of diagnostic and therapeutic technologies, and high patient satisfaction. While there are some commonalities among these divergent perspectives (e.g., none of these stakeholder groups is indifferent to patient harm), balancing their diverse perspectives continues to challenge quality measurement, especially in public reporting (IOM and NRC, 1999; McGlynn, 1997).
Additionally, consumer reactions to variations in health care costs and quality of care may vary considerably from consumer reactions to
corresponding changes in other sectors of the economy, which often reflect trade-offs between costs and quality of goods and services. Limited supply (e.g., one hospital in the geographic region), the absence of information, passive behavior on the part of patients, and insurance coverage, which often shields patients from fluctuations in health care costs, have been suggested as contributing factors. Without access to accurate and timely cost and quality information, patients may err in their assessments of quality of care, and health care costs will lack sensitivity to quality of care (Pauly, 2011; Usman, 2011).
To reach patients effectively, quality and cost data should be collected and reported with patient needs in mind. Measure developers and reporting agencies will need to work closely with patients and their caregivers to understand patients’ evolving informational needs and to determine when in the cancer care continuum it is appropriate to provide that information. Additionally, measure developers and reporting agencies should accommodate patient preferences regarding the format and delivery mechanism of this information so that it is understandable and useful for patients facing health care decisions. By bridging the gulf between patients and measure developers and reporting agencies, patient advocacy groups could play a key role in consumer-driven, patient-centered quality reporting.
Meaningful, Timely, and Actionable Performance Data
The widespread need for and general absence of meaningful, timely, and actionable performance data to support quality measurement and performance improvement is well documented (Anderson et al., 2012; IOM and NRC, 1999, 2000; MAP and NQF, 2012; Russell, 1998). Despite that recognition, during the past two decades there has been little advancement in data collection and reporting to support better performance data. More than ever, the health care system in the United States has proved itself capable of documenting its persistent deficiencies, but it has failed to produce actionable performance data to mobilize real and lasting change (Davies, 2001). This absence of progress stands in sharp contrast to the technological advances observed in cancer care delivery.
Electronic health records (EHRs) could improve the speed and ease of data collection and reporting. As described in Chapter 6, the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 has triggered substantial increases in EHR adoption among health care clinicians through a series of incentive payments and penalties. However, EHRs were not designed as quality measurement and reporting systems and they often lack interoperability—the ability for data systems to
exchange data to support health care delivery, decision making, and care coordination across multiple clinicians (Anderson et al., 2012). Moreover, patient-reported outcomes and other critical data elements are not routinely captured in EHRs or are not captured in a discrete and reportable format. Preliminary assessments of EHR-generated quality measures suggest that major work will be required to ensure the accuracy and validity of quality data obtained from EHRs (Parsons et al., 2012).
Manual chart abstraction and data entry remain a primary mechanism of data collection for quality measurement. In a 2012 hospital staffing survey published by The Advisory Board Company, respondents reported that a large proportion of quality data was obtained through manual abstraction: approximately 55 percent of respondents indicated that 80 percent to 100 percent of their quality data was obtained manually. In contrast, approximately 3 percent of respondents reported obtaining up to 25 percent of their quality data through manual abstraction. Survey respondents also noted a mean of 3.7 full-time employee equivalents responsible for data abstraction to support quality reporting, with a mean of 2.5 full-time employee equivalents dedicated to the Centers for Medicare & Medicaid Services’ (CMS’) inpatient and outpatient quality reporting programs (The Advisory Board Company, 2012). Staffing for these activities is costly, especially for smaller community hospitals, and it seems likely that these costs are passed on to patients and payers through increased charges.
Also problematic is the substantial delay that frequently occurs when manual data collection is required. For many cancer registries, including the NCI’s Surveillance, Epidemiology, and End Results program and the ACoS CoC’s National Cancer Data Base, several months may elapse between diagnosis and data submission, and the data are usually not available for review until many months to years later (ACoS, 2011d; NCI, 2012; Schneider et al., 2004). While these registries are rich national data sources on cancer incidence, treatment, and outcomes, delays limit their utility for real-time and actionable quality reporting. Likewise, retrospective outcomes studies, traditionally conducted on an ad hoc basis following treatment completion for a cohort of patients, require lengthy manual chart abstraction and data analysis. These studies, too, are limited in their ability to influence health care delivery because of their lengthy turnaround time.
A learning health care IT system for cancer, as recommended by the committee (see Chapter 6), would provide a structured data system that collects and reports data to support more real-time quality assessments and informed decision making by patients, their caregivers, clinicians, payers, and federal and state agencies. Such a system would capture patient-reported data, integrate this information with data in EHRs and
other sources, and support robust data analytics and real-time decision-making support.
Clinicians will also play a crucial role in advancing the quantity and quality of data collected for reporting purposes. Clinicians will need to reach agreement on many complex questions, such as
• Which data collection activities should be automated?
• Is prospective or retrospective data collection more appropriate for a given data collection activity?
• Which data elements must be collected by physicians, and which data elements may be collected in a more economical fashion by other members of the clinical staff without sacrificing the quality of the data?
Clinicians may need to sacrifice some degree of autonomy and personal preference to utilize and benefit from emerging technologies, such as structured dictation and clinical documentation. They may need to adopt standardized documentation styles and terminology to facilitate structured data collection and reporting, and to support data sharing with each other. Advances in natural language processing, however, could potentially reduce this need by allowing computers to analyze and capture the context of words and phrases within clinicians’ notes (Murff et al., 2011).
The transition from manual to automated data collection will require increased accuracy and specificity at the data collection point. EHRs and other IT systems cannot report accurately on patient characteristics or health care delivery that is not documented or is documented improperly. Recent research supports the intuitive notion that clinician workflow and documentation practice have a strong influence on EHR-based quality measures (Parsons et al., 2012). Thus, the quality and completeness of the data entered may constrain the utility, quality, and accuracy of automated reporting. Improved clinician workflow and documentation, together with IT advancements, could promote the availability of meaningful, timely, and actionable performance data for cancer quality measurement and reporting.
The Path Forward
The current independent efforts to develop cancer metrics have left patients, payers, clinicians, and state and federal agencies without an effective method to assess and improve the quality of cancer care delivery in America. Thus, to advance quality measurement in cancer care and improve the quality of cancer care, the committee identified the goal of
creating a national quality reporting program for cancer care as part of a learning health care system (see Chapter 6) (Recommendation 8).
The committee considered a number of stakeholders as potential leaders in accomplishing this goal. For example, several organizations have attempted to influence quality measurement for cancer care, including the IOM, RAND Corporation, NQF, AHRQ, and, most recently, two NQF-convened public-private partnerships (the MAP and the NPP) (NQF, 2013a,b). These organizations have expended substantial effort to expand this discipline, but they lack the authority to enforce their recommendations and the resources to fund the tremendous body of research that is needed. They also are not focused exclusively on cancer care. Additionally, professional organizations, including the ACoS and ASCO, have instituted voluntary reporting programs through which program participants have demonstrated improvements in cancer care. The work of these organizations reflects some collaboration, but their activities have been siloed to a large degree.
CMS, together with its parent agency HHS, has also attempted to influence quality measurement for cancer care through various mandatory reporting programs, including the Physician Quality Reporting System (CMS, 2012) and, most recently, a mandatory reporting program for the nation’s eleven cancer centers that are not paid under the prospective payment system (PPS) (Spinks et al., 2011). However, CMS has not provided strategic direction for cancer quality metrics. It has generally proposed an ever-growing list of process-oriented measures (or measures of short-term outcomes), which frequently are reported from administrative claims databases or patient sampling and are, therefore, relatively inexpensive to produce (Pronovost and Lilford, 2011). This approach fits the federal timetable under which CMS operates and its quest for provider accountability, but these timelines are too brief and CMS’ focus on the Medicare population is too narrow to implement an effective and influential national reporting program for cancer care.
In order to advance the development of a national quality reporting program for cancer, the committee recommends that HHS work with professional organizations to create and implement a formal long-term strategy for publicly reporting quality measures for cancer care that leverages existing efforts. The long-term strategy should focus on the needs of all individuals diagnosed with or at risk for developing cancer. The committee believes that clinicians, through their professional organizations, should be the primary actors because a clinician-led process will help ensure that the resulting reporting program is acceptable to practicing clinicians and reflects key quality issues in cancer care. Moreover, these organizations are already in the process of developing quality metrics for their members. The committee believes that HHS
should play a convening role in order to improve the coordination of the work of professional organizations. In the past, these organizations have collaborated on an ad hoc basis but more systematic collaboration would speed progress toward this goal.
A key component of developing a formal long-term strategy for quality measures for cancer will be prioritizing, funding, and directing the development of meaningful quality measures, with a focus on outcome measures, and with performance targets for use in publicly reporting the performance of institutions, practices, and individual clinicians. These measures should target gaps in cross-cutting, nontechnical measures as well as measures for specific types of cancers that have largely been excluded from previous measure development efforts. The measures should also incorporate the components of the committee’s conceptual framework at the level of institutions or oncology practices, including measuring the effectiveness of
• patient-clinician communication and shared decision making in supporting patients and caregivers in making informed medical decisions consistent with their needs, values, and preferences, as well as advance care planning, the provision of palliative care and psychosocial support across the continuum of care, and timely referral to hospice care at the end of life (see Chapter 3);
• team-based cancer care that prioritizes patient-centered care and coordination with a patient’s primary care/geriatrics care team and other care teams (Chapter 4);
• evidence-based cancer care that is concordant with clinical practice guidelines and consistent with patients’ needs, values, and preferences (Chapter 5); and
• efforts to improve the accessibility and affordability of cancer care (Chapter 8).
To be successful, stakeholders will need to make uncomfortable adjustments, such as adopting shared accountability across clinicians, increasing the transparency of traditionally proprietary cost data, and requiring patients to accept greater responsibility for their outcomes of care. While data availability will be an important consideration, it should not be the sole factor in measure selection. The committee’s goals of improving the breadth and depth of information collected in clinical research (see Chapter 5) will help fill in some of the knowledge gaps surrounding cancer care, such as management of complex comorbidities, quality-of-life issues during and after treatment, and the cost of care. A formal tool
could be developed to assist with prioritizing and selecting measures for development.
HHS should also work with professional organizations to implement a coordinated, transparent reporting infrastructure that meets the informational needs of all stakeholders, with an emphasis on reporting data that are meaningful and understandable to patients and can be used to guide their health care decisions. Achieving this recommendation will likely require the development of a learning health care IT system for cancer care, as discussed in Chapter 6. A learning health care IT system could facilitate the collection of reliable data in EHRs as part of clinicians’ day-to-day workflow. These data could then be aggregated to assess individual and organizational performance, and made publicly available to inform patients and other decision makers. The committee recognizes that implementation of this recommendation will present considerable challenges (e.g., technological, financial, and cultural). However, the need for a robust reporting infrastructure is great, given that independent efforts to develop cancer metrics have left patients, clinicians, payers, and the government without an effective mechanism to assess and improve the quality of cancer care delivery in the United States.
Clinical research leads to improvements in the quality of care only if these research results are translated into practice. Clinicians use CPGs to synthesize research findings into actionable steps for providing care. The IOM has defined CPGs as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of the evidence and an assessment of the benefits and harms of alternative care options” (IOM, 2011c, p. 4). CPGs are often used to inform the development of quality metrics and decision support tools in EHRs (see Chapter 6). Clinicians’ adherence to CPGs may be measured as part of an outcomes-based reimbursement system (see Chapter 8). The major organizations that develop CPGs in cancer are ASCO, the American Society for Radiation Oncology, and NCCN, as well as the U.S. Preventive Services Task Force, which establishes recommendations on cancer screening and prevention. The activities of these organizations are summarized in Table 7-3.
The translation of evidence into CPGs is not straightforward or consistent. As mentioned in Chapter 5, the evidence base supporting clinical decisions is often incomplete, with few or no studies addressing many questions that are important to patients and clinicians. There is also great variability in the quality of individual scientific studies and in the systematic reviews upon which CPGs should be based.

TABLE 7-3 Organizations That Develop CPGs in Cancer

American Society for Radiation Oncology (ASTRO): ASTRO is a professional organization that represents radiation oncologists, medical physicists, dosimetrists, radiation therapists, radiation oncology nurses and nurse practitioners, biologists, physician assistants, and practice administrators. It develops clinical practice guidelines (CPGs) for these radiation oncology clinicians.

American Society of Clinical Oncology (ASCO): ASCO was founded in 1964 as a nonprofit professional organization that represents clinicians from all of the oncology disciplines and subspecialties. It convenes expert panels to develop CPGs for methods of cancer treatment and care. Many of ASCO’s guidelines are developed in partnership with other specialty societies, such as the American Society of Hematology and the College of American Pathologists. The manual for generating these guidelines is updated regularly to reflect changes in methodology standards.

National Comprehensive Cancer Network (NCCN): NCCN is a coalition of 23 cancer centers. It develops CPGs that address preventive, diagnostic, treatment, and supportive services. The guidelines are developed and updated through informal consensus by expert panels composed of clinicians and oncology researchers from the 23 NCCN member institutions.

U.S. Preventive Services Task Force (USPSTF): The U.S. Public Health Service convened the USPSTF in 1984, and since 1998, it has been sponsored by the Agency for Healthcare Research and Quality. The USPSTF consists of a panel of private-sector experts, and its recommendations are regarded as the gold standard for clinical preventive services. It has produced recommendations on screening for bladder, breast, cervical, colorectal, lung, oral, ovarian, pancreatic, prostate, skin, testicular, and thyroid cancer, as well as some recommendations on cancer prevention.

SOURCES: ASCO, 2012a; ASTRO, 2013; IOM, 2008; NCCN, 2012; USPSTF, 2012.

In addition, the CPG development process is often fragmented, lacking in transparency, and plagued by potential conflicts of interest in the membership of the CPG panels that may bias the resulting product. In response to these criticisms, the IOM convened a committee to develop standards for trustworthy guidelines (IOM, 2011c). The recommendations of this committee are summarized in Box 7-1. In general, the guidelines committee concluded that to be trustworthy, CPGs should be based on a systematic review of the evidence; be developed by a knowledgeable and multidisciplinary panel; consider patient subgroups and patient preferences; be developed

BOX 7-1 Standards for Developing Trustworthy Clinical Practice Guidelines
1. Establishing Transparency
1.1 The processes by which a CPG is developed and funded should be detailed explicitly and publicly accessible.
2. Management of Conflict of Interest (COI)
2.1 Prior to selection of the guideline development group (GDG), individuals being considered for membership should declare all interests and activities that would potentially result in COI with development group activity by written disclosure to those convening the GDG:
• Disclosure should reflect all current and planned commercial (including services from which a clinician derives a substantial proportion of income), noncommercial, intellectual, institutional, and patient-public activities pertinent to the potential scope of the CPG.
2.2 Disclosure of COIs within GDG:
• All COI of each GDG member should be reported and discussed by the prospective development group prior to the onset of his or her work.
• Each panel member should explain how his or her COI could influence the CPG development process or specific recommendations.
• Members of the GDG should divest themselves of financial investments they or their family members have in, and not participate in marketing activities or advisory boards of, entities whose interests could be affected by CPG recommendations.
• Whenever possible, GDG members should not have COI.
• In some circumstances, a GDG may not be able to perform its work without members who have COI, such as relevant clinical specialists who receive a substantial portion of their incomes from services pertinent to the CPG.
• Members with COI should represent not more than a minority of the GDG.
• The chair or co-chairs should not be a person(s) with COI.
• Funders should have no role in CPG development.
3. GDG Composition
3.1 The GDG should be multidisciplinary and balanced, comprising a variety of methodological experts and clinicians, and populations expected to be affected by the CPG.
3.2 Patient and public involvement should be facilitated by including (at least at the time of clinical question formulation and draft CPG review) a current or former patient, and a patient advocate or patient/consumer organization representative in the GDG.
3.3 Strategies to increase effective participation of patient and consumer representatives, including training in appraisal of evidence, should be adopted by GDGs.
4. CPG—Systematic Review Intersection
4.1 CPG developers should use systematic reviews that meet standards set by the IOM’s Committee on Standards for Systematic Reviews of Comparative Effectiveness Research.
4.2 When systematic reviews are conducted specifically to inform particular guidelines, the GDG and systematic review team should interact regarding the scope, approach, and output of both processes.
5. Establishing Evidence Foundations and Rating Strength of Recommendations
5.1 For each recommendation, the following should be provided:
• An explanation of the reasoning underlying the recommendation, including
o A clear description of potential benefits and harms.
o A summary of relevant available evidence (and evidentiary gaps), description of the quality (including applicability), quantity (including completeness), and consistency of the aggregate available evidence.
o An explanation of the part played by values, opinion, theory, and clinical experience in deriving the recommendation.
• A rating of the level of confidence in (certainty regarding) the evidence underpinning the recommendation.
• A rating of the strength of the recommendation in light of the preceding bullets.
• A description and explanation of any differences of opinion regarding the recommendation.
6. Articulation of Recommendations
6.1 Recommendations should be articulated in a standardized form detailing, precisely, the recommended action, and under what circumstances it should be performed.
using a transparent process; provide ratings of both the quality of evidence and strength of recommendations; and be updated regularly.
Few CPGs in oncology meet the IOM’s standards for trustworthiness. Kung and colleagues (2012) reviewed the adherence of CPGs archived in the National Guidelines Clearinghouse to IOM standards. They found that the average CPG only satisfied 8 out of the 18 standards reviewed (44.4 percent) and fewer than half of the CPGs met more than 50 percent
6.2 Strong recommendations should be worded so that compliance with the recommendation(s) can be evaluated.
7. External Review
7.1 External reviewers should comprise a full spectrum of relevant stakeholders, including scientific and clinical experts, organizations (e.g., health care, specialty societies), agencies (e.g., federal government), patients, and representatives of the public.
7.2 The authorship of external reviews submitted by individuals and/or organizations should be kept confidential unless that protection has been waived by the reviewer(s).
7.3 The GDG should consider all external reviewers’ comments and keep a written record of the rationale for modifying or not modifying a CPG in response to reviewers’ comments.
7.4 A draft of the CPG at the external review stage or immediately following it (i.e., prior to the final draft) should be made available to the general public for comment. Reasonable notice of impending publication should be provided to interested public stakeholders.
8.1 The CPG publication date, date of pertinent systematic evidence review, and proposed date for future CPG review should be documented in the CPG.
8.2 Literature should be monitored regularly following CPG publication to identify the emergence of new, potentially relevant evidence and to evaluate the continued validity of the CPG.
8.3 CPGs should be updated when new evidence suggests the need for modification of clinically important recommendations. For example, a CPG should be updated if new evidence shows that a recommended intervention causes previously unknown substantial harm; that a new intervention is significantly superior to a previously recommended intervention from an efficacy or harms perspective; or that a recommendation can be applied to new populations.
SOURCE: IOM, 2011c.
of the IOM standards. Oncology CPGs were slightly above average, satisfying a median of 9.5 out of the 18 (52.8 percent) standards reviewed, with just over half meeting more than 50 percentof the standards.
In a separate study, Reames and colleagues (2013) scored CPGs and consensus statements published between 2005 and 2010 that addressed the screening, evaluation, or management of the four leading causes of cancer mortality in the United States (non-small-cell lung, breast, prostate, and colorectal cancers) on their consistency with the IOM’s standards. None of the 168 CPGs included in the study met all of the IOM’s standards; the average CPG met 2.8 of the 8 standards assessed. The CPGs were most compliant with the standards addressing transparency in the development process, articulation of the recommendations, and use of external review. They were least likely to comply with the standards requiring that CPGs be based on a systematic review of the evidence, involve patients and the public in the development process, or specify a process for making updates. In addition, Norris and colleagues (2012) found that most CPG developers have failed to adopt conflict of interest policies consistent with the IOM’s recommendations.
The committee acknowledges the considerable challenges to implementing the IOM’s standards for trustworthy CPGs. The standards are stringent and require major investments of time, funding, and human resources. Because of the importance of CPGs to improving the quality of cancer care and translating evidence into clinical practice, however, the committee endorses the IOM’s recommendations on producing trustworthy CPGs and encourages developers of CPGs in oncology to strive to meet these standards.
Quality measurement and CPGs are essential components of improving performance in health care. As discussed above, quality metrics provide insights into which aspects of health care require improvement and may be used to assess the success of performance improvement initiatives. They can also be used by individual clinicians to assess their performance and improve the care they provide (Blayney et al., 2009). CPGs are a type of performance improvement initiative that helps clinicians stay abreast of an ever-increasing evidence base and apply that information to their clinical practice. Although necessary, these activities, in the absence of other levers, are insufficient to drive meaningful improvements in health care (Berwick et al., 2003; Davies, 2001; IOM, 2011a).
To be successful, health care organizations must foster a culture of change through a variety of activities, such as those discussed in this report. Those activities include improving patient engagement, decision making, and communication (see Chapter 3); ensuring that personnel have sufficient training and appropriate licensure and certifications, and are empowered to contribute to performance improvement initiatives (see Chapter 4); investing in learning health care IT systems that collect data on quality of care, making those data transparent to the entire organization, and providing clinical decision support (see Chapter 6); and creating incentives that encourage clinicians and provider organizations to administer high-quality care rather than a high volume of care (e.g., patient-centered medical homes, care pathways, accountable care organizations) (see Chapter 8).
Performance improvement initiatives, which are conducted at the local level, have been described as “systematic, data-guided activities designed to bring about immediate, positive change in the delivery of health care in a particular setting,” as well as across settings (Baily, 2006, p. S5). These activities are interrelated with and overlap quality improvement and patient safety initiatives. Table 7-4 provides examples of performance improvement initiatives. Because these efforts are implemented in a single organization or health system, they can be undertaken immediately, without action at a national or system level, and can be tailored to the unique circumstances of the local environment. Experts have noted, however, that traditional approaches to performance improvement—clinician practice peer review, public reporting of quality measures, continuous performance improvement and total quality management, and regulatory and legislatively imposed reforms and penalties—lack the pace, breadth, magnitude, coordination, and sustainability to transform health care delivery (Chassin and Loeb, 2011; Davies, 2001).

TABLE 7-4 Examples of Performance Improvement Initiatives

| Initiative | Examples |
| --- | --- |
| Audit and Feedback | Clinician performance tracking and reviews, comparison with national/state quality report cards, publicly released performance data, and benchmark outcome data |
| Clinical Decision Support | Information technology that provides clinicians with access to evidence-based clinical practice guidelines |
| Clinician and Patient Education | Classes, parent and family education, pamphlets, and other media |
| Clinician Reminder Systems | Prompts in electronic health records |
| Facilitated Relay of Clinical Data to Clinicians | Patient data transmitted by telephone call or fax from outpatient specialty clinics to primary care clinicians |
| Financial Incentives | Performance-based bonuses and alternative reimbursement systems for clinicians, positive or negative financial incentives for patients, changes in professional licensure requirements |
| Organizational Changes | Continuous performance improvement programs, lean and Six Sigma approaches, shifting from paper-based to computer-based record keeping, long-distance case discussion between professional peers, etc. |
| Patient Reminder Systems | Telephone calls or postcards from clinicians to their patients |
| Patient Safety Initiatives | Checklists, safety incident reporting, close call reporting, and root-cause analysis |
| Promotion of Disease Self-Management | Workshops, materials such as blood pressure or glucose monitoring devices |

SOURCE: Adapted from AHRQ, 2012a.
Leadership is needed to create an institutional culture that values high-quality care, a key component of successful performance improvement initiatives. The aviation industry has long recognized the importance of embedding performance improvement initiatives in cultures that value inquiry and quality, and that have strong leaders dedicated to facilitating the necessary changes (Helmreich, 2000). Health care organizations have successfully applied this approach to performance improvement through efforts aimed at improving patient safety, such as by using checklists to reduce human error, and could apply them more broadly to improve quality in other areas of care (Gawande, 2009; Hudson, 2003; Longo et al., 2005; Pronovost et al., 2003).
In addition, health care organizations have rushed to adopt Six Sigma and “lean” systems approaches to reduce variation and waste in health care. These robust industrial performance improvement tools are most effective within organizations that have an embedded safety culture, senior leadership dedicated to organizational change, and clear mechanisms for identifying quality and safety issues and triggering performance improvement initiatives (Chassin and Loeb, 2011). Also important is leadership’s commitment to funding these activities, which often consume substantial organizational resources (Pryor et al., 2011). Without these organizational characteristics, it is unlikely that performance improvement initiatives will lead to improved patient outcomes and sustained improvements in care delivery.
A high-quality cancer care delivery system should translate evidence into clinical practice, measure quality, and improve clinician performance. This involves developing CPGs to assist clinicians in quickly incorporating new medical knowledge into routine care. Also critical are measuring and assessing a system’s progress in improving the delivery of cancer care, publicly reporting the information gathered, and developing innovative strategies to further facilitate performance improvement. In the
figure illustrating the committee’s conceptual framework (see Figure S-2), knowledge translation and performance improvement are part of a cyclical process that measures the outcomes of patient-clinician interactions and implements innovative strategies to improve the accessibility, affordability, and quality of care.
CPGs translate evidence into practice by synthesizing research findings into actionable steps clinicians can take when providing care. The development of CPGs is not straightforward or consistent because the evidence base supporting clinical decisions is often incomplete and includes studies and systematic reviews of variable quality. In addition, organizations that develop CPGs often use fragmented processes that lack transparency and are plagued by conflicts of interest. The committee endorses the standards in the 2011 IOM report Clinical Practice Guidelines We Can Trust to address these problems and produce trustworthy CPGs.
Performance improvement initiatives can also be used to translate evidence into practice. These tools have been described as “systematic, data-guided activities designed to bring about immediate, positive change in the delivery of health care in a particular setting” (Baily, 2006, p. S5), as well as across settings. They can improve the efficiency of cancer care, patient satisfaction, and health outcomes, and can reduce costs. These efforts are typically implemented in a single organization or health system; as a result, they often lack the pace, breadth, magnitude, coordination, and sustainability to transform health care delivery nationwide.
Cancer care quality measures provide a standardized and objective means for assessing the quality of cancer care delivered. Measuring performance has the potential to drive improvements in care, inform patients, and influence clinician behavior and reimbursement. There are currently serious deficiencies in cancer care quality measurement in the United States, including pervasive gaps in existing measures, challenges in the measure development process, lack of consumer engagement in measure development and reporting, and the need for data to support meaningful, timely, and actionable performance measurement. A number of groups representing clinicians who provide cancer care, including ASCO and ACoS, have instituted voluntary reporting programs, through which program participants have demonstrated improvements. HHS has also attempted to influence quality measurement for cancer care through various mandatory reporting programs.
Recommendation 8: Quality Measurement
Goal: Develop a national quality reporting program for cancer care as part of a learning health care system.
To accomplish this, the U.S. Department of Health and Human Services should work with professional societies to:
• Create and implement a formal long-term strategy for publicly reporting quality measures for cancer care that leverages existing efforts.
• Prioritize, fund, and direct the development of meaningful quality measures for cancer care with a focus on outcome measures and with performance targets for use in publicly reporting the performance of institutions, practices, and individual clinicians.
• Implement a coordinated, transparent reporting infrastructure that meets the needs of all stakeholders, including patients, and is integrated into a learning health care system.
ACoS (American College of Surgeons). 2011a. About ACS NSQIP. http://site.acsnsqip.org/ about (accessed April 25, 2013).
———. 2011b. About the CoC. http://www.facs.org/cancer/coc/cocar.html (accessed August 15, 2012).
———. 2011c. How are cancer programs accredited? http://www.facs.org/cancer/coc/howacc.html (accessed August 15, 2012).
———. 2011d. National Cancer Data Base. http://www.facs.org/cancer/ncdb/index.html (accessed August 15, 2012).
———. 2013. Measures. http://site.acsnsqip.org/program-specifics/program-options/measures-option (accessed June 28, 2013).
The Advisory Board Company. 2012. Clinical Advisory Board Member Survey results: Staffing models for supporting quality reporting. http://www.advisory.com/~/media/Advisorycom/Research/CAB/Resources/2012/2012%20Staffing%20Models%20Survey%20Results.pdf (accessed August 15, 2012).
AHRQ (Agency for Healthcare Research and Quality). 2012a. Closing the quality gap series: Quality improvement interventions to address health disparities. http://www.effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1242&ECem=120827 (accessed December 21, 2012).
———. 2012b. Measures sought for National Quality Measures Clearinghouse. http://www.ahrq.gov/qual/nqmcmeas.htm (accessed August 15, 2012).
———. 2012c. National Quality Measures Clearinghouse measures by topic. http://www.qualitymeasures.ahrq.gov/browse/by-topic.aspx (accessed August 15, 2012).
———. 2012d. National Quality Measures Clearinghouse: About. http://www.qualitymeasures.ahrq.gov/about/index.aspx (accessed August 15, 2012).
———. 2012e. National Quality Measures Clearinghouse, U.S. Department of Health and Human Services: Measure inventory. http://www.qualitymeasures.ahrq.gov/hhs-measureinventory/browse.aspx (accessed August 15, 2012).
AMA (American Medical Association). 2012. Resources. http://www.ama-assn.org/ama/pub/physician-resources/physician-consortium-performance-improvement.page (accessed August 15, 2012).
Anderson, K. M., C. A. Marsh, A. C. Flemming, H. Isenstein, and J. Reynolds. 2012. An environmental snapshot—Quality measurement enabled by health IT: Overview, possibilities, and challenges. http://healthit.ahrq.gov/sites/default/files/docs/page/NRCD1PTQ%20Final%20Draft%20Background%20Report%2007102012_508compliant.pdf (accessed August 15, 2012).
ASCO (American Society of Clinical Oncology). 2012a. Clinical practice guidelines. http://www.asco.org/ASCOv2/Practice+%26+Guidelines/Guidelines/Clinical+Practice+Guidelines (accessed December 20, 2012).
———. 2012b. Geographic distribution. http://qopi.asco.org/GeographicDistribution (accessed August 15, 2012).
———. 2012c. QOPI certified practices. http://qopi.asco.org/certifiedpractices (accessed October 1, 2012).
———. 2012d. QOPI summary of measures, fall 2012. http://qopi.asco.org/Documents/QOPIFall12MeasuresSummary_002.pdf (accessed August 15, 2012).
———. 2012e. Who can apply? http://qopi.asco.org/whocanapply (accessed August 15, 2012).
———. 2013. Certification. http://qopi.asco.org/certification.html (accessed June 26, 2012).
ASTRO (American Society for Radiation Oncology). 2013. Guidelines. https://www.astro.org/Clinical-Practice/Guidelines/Index.aspx (accessed March 27, 2013).
Baily, M. A., M. Bottrell, J. Lynn, and B. Jennings. 2006. The ethics of using QI methods to improve health care quality and safety. Hastings Center Report 36(4):S1-S40.
Berry, D. A. 2011. Comparing survival outcomes across centers: Biases galore. Cancer Letter 37(11):7-10.
Berwick, D. M., B. James, and M. J. Coye. 2003. Connections between quality measurement and improvement. Medical Care 41(1): I30-I38.
Bilimoria, K. Y., A. K. Stewart, D. P. Winchester, and C. Y. Ko. 2008. The National Cancer Database: A powerful initiative to improve cancer care in the United States. Annals of Surgical Oncology 15(3):683-690.
Birkmeyer, J. D., T. A. Stukel, A. E. Siewers, P. P. Goodney, D. E. Wennberg, and F. L. Lucas. 2003. Surgeon volume and operative mortality in the United States. New England Journal of Medicine 349(22):2117-2127.
Bishop, T. F. 2013. Pushing the outpatient quality envelope. Journal of the American Medical Association 1-2.
Blayney, D. W., K. McNiff, D. Hanauer, G. Miela, D. Markstrom, and M. Neuss. 2009. Implementation of the Quality Oncology Practice Initiative at a university comprehensive cancer center. Journal of Clinical Oncology 27(23):3802-3807.
Blayney, D. W., J. Severson, C. J. Martin, P. Kadlubek, T. Ruane, and K. Harrison. 2012. Michigan oncology practices showed varying adherence rates to practice guidelines, but quality interventions improved care. Health Affairs (Millwood) 31(4):718-728.
Chassin, M. R., and J. M. Loeb. 2011. The ongoing quality improvement journey: Next stop, high reliability. Health Affairs (Millwood) 30(4):559-568.
CMS (Centers for Medicare & Medicaid Services). 2012. Physician Quality Reporting System formerly known as the Physician Quality Reporting Initiative. http://www.cms.gov/PQRS (accessed August 15, 2012).
Davies, H. T. 2001. Exploring the pathology of quality failings: Measuring quality is not the problem—changing it is. Journal of Evaluation in Clinical Practice 7(2):243-251.
Deutsch, A., B. Gage, L. Smith, and C. Kelleher. 2012. Patient-reported outcomes in performance measurement commissioned paper on PRO-based performance measures for healthcare accountable entities draft #1, September 4, 2012. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=71824 (accessed September 15, 2012).
Donabedian, A. 1980. The definition of quality and approaches to its assessment. Explorations in Quality Assessment and Monitoring, Vol. 1. Ann Arbor, MI: Health Administration Press.
Dudgeon, D. J., C. Knott, C. Chapman, K. Coulson, E. Jeffery, S. Preston, M. Eichholz, J. P. Van Dijk, and A. Smith. 2009. Development, implementation, and process evaluation of a regional palliative care quality improvement project. Journal of Pain and Symptom Management 38(4):483-495.
Faber, M., M. Bosch, H. Wollersheim, S. Leatherman, and R. Grol. 2009. Public reporting in health care: How do consumers use quality-of-care information? A systematic review. Medical Care 47(1):1-8.
Finks, J. F., N. H. Osborne, and J. D. Birkmeyer. 2011. Trends in hospital volume and operative mortality for high-risk surgery. New England Journal of Medicine 364(22):2128-2137.
Finlayson, E. V., P. P. Goodney, and J. D. Birkmeyer. 2003. Hospital volume and operative mortality in cancer surgery: A national study. Archives of Surgery 138(7):721-725.
Foster, J. A., M. Abdolrasulnia, H. Doroodchi, J. McClure, and L. Casebeer. 2009. Practice patterns and guideline adherence of medical oncologists in managing patients with early breast cancer. Journal of the National Comprehensive Cancer Network 7(7):697-706.
Gawande, A. 2009. The checklist manifesto: How to get things right. New York: Metropolitan Books.
Goldberg, P. 2011. Fox Chase publishes its cancer survival data: The move is partly science, partly marketing. Cancer Letter 37(5):1-5.
Harris, K. M., and M. Beeuwkes Buntin. 2008. Choosing a health care provider. The Synthesis Project, Research Synthesis Report 14.
Helmreich, R. L. 2000. On error management: Lessons from aviation. British Medical Journal 320(7237):781-785.
Hibbard, J., and S. Sofaer. 2010. Best practices in public reporting no. 1: How to effectively present health care performance data to consumers. http://www.ahrq.gov/qual/pubrptguide1.pdf (accessed August 15, 2012).
Higgins, A., T. Zeddies, and S. D. Pearson. 2011. Measuring the performance of individual physicians by collecting data from multiple health plans: The results of a two-state test. Health Affairs (Millwood) 30(4):673-681.
Ho, V., M. J. Heslin, H. Yun, and L. Howard. 2006. Trends in hospital and surgeon volume and operative mortality for cancer surgery. Annals of Surgical Oncology 13(6):851-858.
Hudson, P. 2003. Applying the lessons of high risk industries to health care. Quality & Safety in Health Care 12(Suppl 1):i7-i12.
Hussey, P., and E. A. McGlynn. 2009. Why are there no efficiency measures in the National Quality Measures Clearinghouse? http://www.qualitymeasures.ahrq.gov/expert/expertcommentary.aspx?id=16459 (accessed August 15, 2012).
IOM (Institute of Medicine). 2001. Envisioning the national health care quality report. Edited by M. P. Hurtado, E. K. Swift, and J. M. Corrigan. Washington, DC: National Academy Press.
———. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.
———. 2011a. For the public’s health: The role of measurement in action and accountability. Washington, DC: The National Academies Press.
———. 2011b. Health literacy implications for health care reform: Workshop summary. Washington, DC: The National Academies Press.
———. 2011c. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press.
IOM and NRC (National Research Council). 1999. Ensuring quality cancer care. Washington, DC: National Academy Press.
———. 2000. Enhancing data systems to improve the quality of cancer care. Edited by M. Hewitt and J. V. Simone. Washington, DC: National Academy Press.
Jacobson, J. O., M. N. Neuss, K. K. McNiff, P. Kadlubek, L. R. Thacker, 2nd, F. Song, P. D. Eisenberg, and J. V. Simone. 2008. Improvement in oncology practice performance through voluntary participation in the Quality Oncology Practice Initiative. Journal of Clinical Oncology 26(11):1893-1898.
Kahn, K. L., J. L. Malin, J. Adams, and P. A. Ganz. 2002. Developing a reliable, valid, and feasible plan for quality-of-care measurement for cancer: How should we measure? Medical Care 40(6 Suppl): III73-III85.
KFF (Kaiser Family Foundation) and AHRQ. 2006. Update on consumers’ views on patient safety and quality information. www.kff.org/kaiserpolls/pomr092706pkg.cfm (accessed August 15, 2012).
Kizer, K. W. 2000. The National Quality Forum seeks to improve health care. Academic Medicine 75(4):320-321.
Krumholz, H. M., P. S. Keenan, J. E. Brush, Jr., V. J. Bufalino, M. E. Chernew, A. J. Epstein, P. A. Heidenreich, V. Ho, F. A. Masoudi, D. B. Matchar, S. L. Normand, J. S. Rumsfeld, J. D. Schuur, S. C. Smith, Jr., J. A. Spertus, and M. N. Walsh. 2008. Standards for measures used for public reporting of efficiency in health care: A scientific statement from the American Heart Association Interdisciplinary Council on Quality of Care and Outcomes Research and the American College of Cardiology Foundation. Journal of the American College of Cardiology 52(18):1518-1526.
Kung, J., R. R. Miller, and P. A. Mackowiak. 2012. Failure of clinical practice guidelines to meet Institute of Medicine standards: Two more decades of little, if any, progress. Archives of Internal Medicine 172(21):1628-1633.
Longo, D. R., J. E. Hewett, B. Ge, and S. Schubert. 2005. The long road to patient safety: A status report on patient safety systems. Journal of the American Medical Association 294(22):2858-2865.
MAP (Measure Applications Partnership) and NQF (National Quality Forum). 2012. Performance measurement coordination strategy for PPSs-exempt cancer hospitals. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=71217 (accessed August 15, 2012).
McGlynn, E. A. 1997. Six challenges in measuring the quality of health care. Health Affairs (Millwood) 16(3):7-21.
McNiff, K. 2006. The Quality Oncology Practice Initiative: Assessing and improving care within the medical oncology practice. Journal of Oncology Practice/American Society of Clinical Oncology 2(1):26-30.
Menck, H. R., L. Garfinkel, and G. D. Dodd. 1991. Preliminary report of the National Cancer Database. CA: A Cancer Journal for Clinicians 41(1):7-18.
Murff, H. J., F. FitzHenry, M. E. Matheny, N. Gentry, K. L. Kotter, K. Crimin, R. S. Dittus, A. K. Rosen, P. L. Elkin, S. H. Brown, and T. Speroff. 2011. Automated identification of postoperative complications within an electronic medical record using natural language processing. Journal of the American Medical Association 306(8):848-855.
National Priorities Partnership. 2011. Input to the Secretary of Health and Human Services on priorities for the National Quality Strategy. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=68238 (accessed August 15, 2012).
NCCN (National Comprehensive Cancer Network). 2012. NCCN guidelines & clinical resources. http://www.nccn.org/clinical.asp (accessed December 20, 2012).
NCI (National Cancer Institute). 2012. Surveillance, Epidemiology, and End Results: Overview of the SEER program. http://seer.cancer.gov/about/overview.html (accessed August 15, 2012).
Norris, S. L., H. K. Holmer, B. U. Burda, L. A. Ogden, and R. Fu. 2012. Conflict of interest policies for organizations producing a large number of clinical practice guidelines. PloS ONE 7(5):e37413.
NQF (National Quality Forum). 2010. Guidance for measure harmonization: A consensus report. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=62381 (accessed August 15, 2012).
———. 2012a. Adjuvant hormonal therapy. http://www.qualityforum.org/MeasureDetails.aspx?actid=0&SubmissionId=450 (accessed August 15, 2012).
———. 2012b. Funding. http://www.qualityforum.org/About_NQF/Funding.aspx (accessed August 15, 2012).
———. 2012c. National Quality Forum: Measure evaluation criteria, January 2011. http://www.qualityforum.org/Measuring_Performance/Submitting_Standards/Measure_Evaluation_Criteria.aspx (accessed August 15, 2012).
———. 2012d. NQF-endorsed standards. http://www.qualityforum.org/Measures_List.aspx (accessed August 15, 2012).
———. 2012e. Oncology: Hormonal therapy for stage I through III, ER/PR positive breast cancer. http://www.qualityforum.org/MeasureDetails.aspx?actid=0&SubmissionId=631 (accessed August 15, 2012).
———. 2012f. Performance measurement coordination strategy for PPS-exempt cancer hospitals. https://www.qualityforum.org/Publications/2012/06/Performance_Measurement_Coordination_Strategy_for_PPS-Exempt_Cancer_Hospitals.aspx (accessed August 8, 2013).
———. 2013a. Measure Applications Partnership. http://www.qualityforum.org/map (accessed August 7, 2013).
———. 2013b. National Priorities Partnership. http://www.qualityforum.org/Setting_Priorities/NPP/National_Priorities_Partnership.aspx (accessed August 7, 2013).
———. 2013c. NQF-Endorsed Standards. http://www.qualityforum.org/Measures_List.aspx (accessed June 28, 2013).
Parsons, A., C. McCullough, J. Wang, and S. Shih. 2012. Validity of electronic health record-derived quality measurement for performance monitoring. Journal of the American Medical Informatics Association 19(4):604-609.
Pauly, M. V. 2011. Analysis & commentary: The trade-off among quality, quantity, and cost: How to make it—if we must. Health Affairs (Millwood) 30(4):574-580.
President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry. 1998. Quality first: Better health care for all Americans, final report to the President of the United States. Washington, DC: United States G.P.O.
Pronovost, P. J., and R. Lilford. 2011. Analysis & commentary: A road map for improving the performance of performance measures. Health Affairs (Millwood) 30(4):569-573.
Pronovost, P. J., B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B. Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin. 2003. Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Quality & Safety in Health Care 12:405-410.
Pryor, D., A. Hendrich, R. J. Henkel, J. K. Beckmann, and A. R. Tersigni. 2011. The quality “journey” at Ascension Health: How we’ve prevented at least 1,500 avoidable deaths a year—and aim to do even better. Health Affairs (Millwood) 30(4):604-611.
RAND. 2010. About ACOVE. http://www.rand.org/health/projects/acove/about.html (accessed April 25, 2013).
Reames, B. N., R. W. Krell, S. N. Ponto, and S. L. Wong. 2013. A critical evaluation of oncology clinical practice guidelines. Journal of Clinical Oncology 31(20):2563-2568.
Romanus, D., M. R. Weiser, J. M. Skibber, A. Ter Veer, J. C. Niland, J. L. Wilson, A. Rajput, Y. N. Wong, A. B. Benson, S. Shibata, and D. Schrag. 2009. Concordance with NCCN colorectal cancer guidelines and ASCO/NCCN quality measures: An NCCN institutional analysis. Journal of the National Comprehensive Cancer Network 7(8):895-904.
Russell, E. 1998. The ethics of attribution: The case of health care outcome indicators. Social Science & Medicine 47(9):1161-1169.
Schneider, E. C., J. L. Malin, K. L. Kahn, E. J. Emanuel, and A. M. Epstein. 2004. Developing a system to assess the quality of cancer care: ASCO’s national initiative on cancer care quality. Journal of Clinical Oncology 22(15):2985-2991.
Shekelle, P. G., Y. W. Lim, S. Mattke, and C. Damberg. 2008. Does public release of performance results improve quality of care? A systematic review. London, UK: The Health Foundation.
Spinks, T. E., R. Walters, T. W. Feeley, H. W. Albright, V. S. Jordan, J. Bingham, and T. W. Burke. 2011. Improving cancer care through public reporting of meaningful quality measures. Health Affairs (Millwood) 30(4):664-672.
Spinks, T., H. W. Albright, T. W. Feeley, R. Walters, T. W. Burke, T. Aloia, E. Bruera, A. Buzdar, L. Foxhall, D. Hui, B. Summers, A. Rodriguez, R. Dubois, and K. I. Shine. 2012. Ensuring quality cancer care: A follow-up review of the Institute of Medicine’s 10 recommendations for improving the quality of cancer care in America. Cancer 118(10):2571-2582.
Totten, A. M., J. Wagner, A. Tiwari, C. O’Haire, J. Griffin, and M. Walker. 2012. Public reporting as a quality improvement strategy. Closing the quality gap: Revisiting the state of the science. http://www.effectivehealthcare.ahrq.gov/ehc/products/343/1198/Evidencereport208_CQG-PublicReporting_ExecutiveSummary_20120724.pdf (accessed August 15, 2012).
UHC (University HealthSystem Consortium). 2012. UHC expands and refines risk-adjusted models for pediatrics and oncology—updated models take children’s care into account, help simplify cancer patient diagnosis and treatment methods. https://www.uhc.edu/docs/45014734_Press_Release_RiskModel.pdf (accessed August 15, 2012).
Usman, O. 2011. We need more supply-side regulation. Health Affairs (Millwood) 30(8):1615; author reply 1615.
USPSTF (U.S. Preventive Services Task Force). 2012. USPSTF topic guide. http://www.uspreventiveservicestaskforce.org/uspstopics.htm#Ctopics (accessed December 20, 2012).
Weissman, J. S., J. R. Betancourt, A. R. Green, G. S. Meyer, A. Tan-McGrory, J. D. Nudel, J. A. Zeidman, and J. E. Carrillo. 2011. Commissioned paper: Healthcare disparities measurement. Boston, MA: Massachusetts General Hospital and Harvard Medical School. Sponsored by the National Quality Forum, grant funding from Robert Wood Johnson Foundation.
Werner, R. M., R. T. Konetzka, E. A. Stuart, E. C. Norton, D. Polsky, and J. Park. 2009. Impact of public reporting on quality of postacute care. Health Services Research 44(4):1169-1187.
Wick, E. C., D. B. Hobson, J. L. Bennett, R. Demski, L. Maragakis, S. L. Gearhart, J. Efron, S. M. Berenholtz, and M. A. Makary. 2012. Implementation of a surgical comprehensive unit-based safety program to reduce surgical site infections. Journal of the American College of Surgeons 215(2):193-200.