7

Translating Evidence into Practice, Measuring Quality, and Improving Performance

A high-quality cancer care delivery system should translate evidence into practice, measure quality, and improve the performance of clinicians. Achieving such a system requires tools and initiatives that help clinicians quickly incorporate new medical knowledge into routine care. Clinicians also need to be able to measure and assess progress in improving the delivery of cancer care, publicly report that information, and develop innovative strategies for further performance improvement.

In the figure illustrating the committee’s conceptual framework (see Figure S-2), knowledge translation and performance improvement are part of a cyclical process that measures the outcomes of patient-clinician interactions, implements innovative strategies to improve care, evaluates the impact of those interventions on the quality of care, and generates new hypotheses for investigation. Clinical practice guidelines (CPGs), quality metrics, and performance improvement initiatives are all tools supportive of that cyclical process. CPGs and performance improvement strategies enhance the translation of evidence into practice: CPGs translate research results into clinical recommendations for clinicians, and performance improvement initiatives systematically bring about changes in the delivery of care that reflect the best available evidence. Quality metrics evaluate clinicians’ performance by comparing actual clinical practice against recommended practice and identifying areas that could be improved.

A high-quality cancer care delivery system’s focus on quality metrics and CPGs is consistent with the Institute of Medicine’s (IOM’s) 1999 report Ensuring Quality Cancer Care, which recommended improving clinicians’ use of systematically developed guidelines and increasing the measurement and monitoring of cancer care using a core set of quality measures (IOM and NRC, 1999).

Despite those recommendations, the translation of research findings into practice in the current cancer care system has been slow and incomplete, and many challenges plague the system for measuring and assessing performance. CPGs, for example, are often developed by fragmented processes that lack transparency (IOM, 2011c). Serious limitations in the evidence base supporting CPGs can result in different guidelines being developed on the same topic with conflicting advice to clinicians. Performance improvement initiatives are generally modest, localized efforts, and because they are tailored to unique local circumstances, they are difficult to translate to the national level. Similarly, many challenges and pervasive gaps in existing measures impede the development of cancer quality metrics.

The previous chapters discussed the importance of improving the scientific evidence base to guide the clinical decision making of patients and their health care clinicians, as well as the role of a learning health care information technology (IT) system for cancer in accomplishing this goal. This chapter discusses how to ensure that this evidence is translated into practice, that quality is measured, and that the system monitors and assesses its performance. The majority of the chapter focuses on cancer quality metrics; the committee commissioned a background paper on this topic and identified a great need for improvement in the metrics development process. The remainder of the chapter focuses on CPGs and performance improvement initiatives. The committee relied heavily on the IOM’s previous work on CPGs to derive the evidence base for the guideline portion of this chapter (IOM, 2008, 2011c). The committee identifies one recommendation for improving cancer quality metrics.

Cancer Quality Metrics1

Cancer quality measures provide objective descriptors of the consequences of care and transform the nebulous concept of “good medicine” into a measurable discipline. These measures serve a number of roles in assessing quality of care by providing a standardized and objective means of measurement. For example, quality assurance measures assess a clinician’s or an organization’s performance for purposes of compliance, accreditation, and payment. Performance improvement metrics, by contrast, are designed to identify gaps in care with the objective of closing those gaps; typically these measures are implemented in a collaborative, rather than a punitive, environment. They can drive improvements in care by informing patients and influencing clinician behavior and reimbursement. Appropriately selected quality measures may be used prospectively to influence decision making and care planning and to align the mutual interests of patients, caregivers, clinicians, and payers. Moreover, they can provide insights into practice variations between clinicians and document changes over time within a given practice setting.

1 This section of the chapter was adapted from a background paper by Tracy Spinks, MD Anderson Cancer Center and Consultant, IOM Committee on Improving the Quality of Cancer Care: Addressing the Challenges of an Aging Population (2012).

There are many unique considerations in measuring the quality of cancer care. As discussed in earlier chapters, the complexity of cancer care has exceeded that of many other common chronic conditions. Cancer comprises hundreds of different types of diseases and subtypes and includes multiple stages of disease (e.g., precancer, early-stage disease, metastatic disease). Cancer care often occurs in multiple phases (an acute phase, a chronic phase, and an end-of-life phase), requiring different treatments and approaches to care over time. The multiple treatment modalities and combination strategies during the acute treatment phase demand coordinated teams of professionals with multiple skill sets. Treatment during the chronic phase also requires coordination between various care teams. Additionally, patients and clinicians must make difficult treatment decisions due to the toxicity of many of the treatment options. Quality measures in cancer need to reflect and account for these complex characteristics of the disease.

The National Quality Forum (NQF), the Agency for Healthcare Research and Quality (AHRQ), the American Society of Clinical Oncology (ASCO), and the American College of Surgeons’ (ACoS’s) Commission on Cancer (CoC) have developed2 or endorsed3 a number of quality measures specific to or applicable to cancer for use in performance improvement and national mandatory reporting programs in the United States. These measures broadly fall into two categories: disease-specific measures (e.g., measures specific to breast cancer) and cross-cutting measures, which apply to a variety of cancers. Additionally, the Patient Protection and Affordable Care Act4 outlined six categories of measures for use in federal reporting of cancer care by the nation’s eleven cancer centers not paid under the Prospective Payment System (PPS)5: outcomes, structure, process, costs of care, efficiency, and patients’ perspectives on care. Existing measures are largely process oriented, although there are some measures of outcomes, structure, and patients’ perceptions of care. The activities of major organizations involved in quality metrics in cancer are summarized in Table 7-1.

2 An organization develops a quality measure by investing time and resources to create a new variable to measure.
3 An organization endorses a quality measure by publicly expressing support or approval for the measure.
4 Patient Protection and Affordable Care Act, Public Law 111-148, 111th Congress, 2nd Sess. (March 23, 2010).
5 The Prospective Payment System is used by Medicare to reimburse providers for services based on predetermined prices.
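Mechanically, most process measures of this kind reduce to a ratio: a denominator of patients eligible under the measure specification and a numerator of those whose care matched the recommendation, with documented exclusions removed from the denominator. The sketch below illustrates that accounting for a hypothetical measure loosely patterned on NQF-endorsed hormonal-therapy measures for hormone-receptor-positive breast cancer; the record fields, eligibility rules, and cohort are invented for illustration and do not reproduce any published measure specification.

    from dataclasses import dataclass

    @dataclass
    class PatientRecord:
        """Minimal, hypothetical registry record for one breast cancer case."""
        stage: int                  # AJCC stage at diagnosis (1-4)
        hr_positive: bool           # hormone-receptor (ER/PR) status
        received_hormonal_tx: bool  # adjuvant hormonal therapy documented
        contraindicated: bool       # documented contraindication or refusal

    def concordance_rate(records: list[PatientRecord]) -> float:
        """Share of eligible patients whose care matched the recommendation.

        Eligibility loosely mirrors a measure such as "hormonal therapy for
        stage I-III ER/PR-positive breast cancer"; exclusions are removed
        from the denominator rather than counted as failures.
        """
        eligible = [r for r in records
                    if 1 <= r.stage <= 3 and r.hr_positive and not r.contraindicated]
        if not eligible:
            raise ValueError("no eligible patients in denominator")
        numerator = sum(r.received_hormonal_tx for r in eligible)
        return numerator / len(eligible)

    if __name__ == "__main__":
        cohort = [
            PatientRecord(stage=2, hr_positive=True,  received_hormonal_tx=True,  contraindicated=False),
            PatientRecord(stage=1, hr_positive=True,  received_hormonal_tx=False, contraindicated=False),
            PatientRecord(stage=3, hr_positive=True,  received_hormonal_tx=False, contraindicated=True),
            PatientRecord(stage=4, hr_positive=True,  received_hormonal_tx=True,  contraindicated=False),
            PatientRecord(stage=2, hr_positive=False, received_hormonal_tx=False, contraindicated=False),
        ]
        print(f"concordance: {concordance_rate(cohort):.0%}")  # 1 of 2 eligible -> 50%

The arithmetic is trivial; the substantive work in measure development lies in the specification itself, that is, defining eligibility, exclusions, and data sources precisely enough that the resulting ratio is comparable across practices.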

TABLE 7-1 Examples of Quality Metrics Projects Relevant to Cancer Care

Assessing Care of Vulnerable Elders (ACOVE): ACOVE quality measures were developed by health services researchers at RAND Corporation in 2000 to assess care provided to vulnerable older adults (defined as those most likely to die or become severely disabled in the next 2 years). The measures reflect the complexity of measuring the quality of care for older adults, who often have multiple comorbidities and substantial variation in treatment preferences. They cover the broad range of health care issues that older adults experience, including primary care, chronic obstructive pulmonary disease, colorectal cancer, breast cancer, sleep disorders, and benign prostatic hypertrophy.

National Cancer Data Base (NCDB): The Commission on Cancer (CoC) is a multidisciplinary consortium dedicated to increasing survival and improving quality of life in cancer patients through research, education, standard setting, and quality assessments. Currently, more than 1,500 cancer programs meet the criteria for CoC accreditation (ACoS, 2011d), which requires a review of the scope, organization, and activity of the cancer program and compliance with 36 specific standards (ACoS, 2011c). Since 1996, all CoC-accredited cancer programs have been required to submit data to the NCDB, a joint program of the CoC and the American Cancer Society. The cases submitted to the NCDB represent approximately 70 percent of all newly diagnosed cancer cases in the United States and are summarized in various clinician-level reports to facilitate performance improvement, create benchmarks for comparative purposes, and identify trends in cancer care, such as survival and cancer incidence.

National Quality Forum (NQF): The NQF was formed in 1999 in response to a specific recommendation of the President’s Advisory Commission to create a nonprofit, public-private partnership that would develop a national strategy for measuring and reporting on health care quality to advance national aims in health care. In 2009, the NQF was awarded a contract with the U.S. Department of Health and Human Services (HHS) to endorse health care quality measures for use in public reporting in the United States. To date, the NQF has endorsed more than 60 cancer-specific measures that were developed by the American Society of Clinical Oncology (ASCO), the American Medical Association’s (AMA’s) Physician Consortium for Performance Improvement, the American Society for Radiation Oncology, and the American Urological Association. These include more than 40 disease-specific measures that assess screening, diagnosis and staging, and initial cancer treatment (e.g., measures that assess concordance with treatment guidelines for breast cancer). The NQF has also endorsed broader cross-cutting measures that focus on end-of-life issues, such as symptom management and overutilization of care.

National Quality Measures Clearinghouse (NQMC): The Agency for Healthcare Research and Quality established the NQMC in 2002 to serve as a Web-based repository of evidence-based health care quality measures and to promote widespread access to these measures among health care clinicians, health plans, purchasers, and other interested stakeholders. As of June 2013, the NQMC included 370 cancer-specific measures that assess screening, initial treatment, and end-of-life care. Of note, the NQMC includes many NQF-endorsed measures as well as cancer-specific measures that were developed outside of the United States, such as in Australia and the United Kingdom. The NQMC also includes a database of 95 cancer-specific measures currently used by the various agencies within HHS, including the Medicare Fee-For-Service Physician Feedback Program, the Meaningful Use Electronic Health Record Incentive Program, and the Hospital Outpatient Quality Reporting Program.

National Surgical Quality Improvement Program (NSQIP): The Department of Veterans Affairs (VA) developed NSQIP in 1994 to monitor and improve the quality of surgical interventions in all VA medical centers. The American College of Surgeons expanded NSQIP in 2004 to serve as a private-sector quality improvement program for surgical care. The program is intended to assist hospitals in capturing and reporting 30-day morbidity and mortality outcomes for all major inpatient and outpatient surgical procedures. Examples of measures include surgical site infection, urinary tract infection, surgical outcomes in older adults, colorectal surgery outcomes, and lower-extremity bypass. The measures are captured by a site’s Surgical Clinical Reviewer, who reviews patients’ medical charts and, if necessary, may contact patients by letter or phone.

Physician Consortium for Performance Improvement (PCPI): PCPI, a national, physician-led initiative convened by the AMA, has developed evidence-based health care quality measures for use in the clinical setting. The NQF has endorsed more than 20 cancer-specific measures developed by PCPI, including cross-cutting measures for pain and disease-specific measures for breast, prostate, and other cancers.

Quality Oncology Practice Initiative (QOPI): ASCO began work on its QOPI Program in 2002 to fill the void in oncology quality measurement. ASCO made the QOPI Program available to its member physicians as a voluntary practice-based program in 2006. This program provides tools and resources to oncology practices for quality measurement, benchmarking, and performance improvement and currently has more than 800 registered member practices. ASCO also offers a 3-year certification through its QOPI Certification Program, which is available to outpatient medical or hematology oncology practices in the United States. QOPI certification is awarded to practices that meet data submission requirements, minimum performance on a subset of QOPI measures, and compliance with certification standards developed by ASCO and the Oncology Nursing Society. As of June 2013, there were 190 QOPI-certified oncology practices across the country.

SOURCES: ACoS, 2011a,b,c,d, 2013; AHRQ, 2012b,c,d,e; AMA, 2012; ASCO, 2012b,c,e, 2013; Bilimoria et al., 2008; Jacobson et al., 2008; Kizer, 2000; McNiff, 2006; Menck et al., 1991; NQF, 2012b,d, 2013c; President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry, 1998; RAND, 2010.

Challenges in Cancer Quality Measurement

There is minimal empirical support that publicly reporting health care quality measures has triggered meaningful improvements in the effectiveness, safety, and patient-centeredness of care (Shekelle et al., 2008; Werner et al., 2009). At best, experts have noted “pockets of excellence on specific measures or in particular services at individual health care facilities” (Chassin and Loeb, 2011, p. 562). Because cancer care has largely been excluded from public reporting, it is unclear whether these findings will hold true for cancer care in the future; however, some studies examining the impact of quality reporting in cancer care have noted improvements in care. Blayney and colleagues studied the impact of implementing the ASCO Quality Oncology Practice Initiative (QOPI) at the University of Michigan’s Comprehensive Cancer Center between 2006 and 2008. They found that physicians changed their behavior when provided with oncology-specific quality data, especially in the areas of treatment planning and management (Blayney et al., 2009). Between 2009 and 2011, Blayney and colleagues expanded their focus and evaluated the impact of implementing QOPI at multiple oncology practices. They concluded that physician participation in the voluntary reporting program increased when the costs of data collection were defrayed by Blue Cross Blue Shield of Michigan. At the same time, they found that providing physicians with access to the quality reports was insufficient to trigger measurable improvements in care across participating practices (Blayney et al., 2012). In a separate study, Wick and colleagues studied the impact of participation in the ACoS’s National Surgical Quality Improvement Program (NSQIP) on surgical site infection rates following colorectal surgery at the Johns Hopkins Hospital. They observed a 33.3 percent reduction in the surgical site infection rate during the 2-year period studied (July 2009 to July 2011) (Wick et al., 2012).

There is no federal program that requires clinicians to report data on core cancer measures. Existing programs are primarily voluntary and favor “measures of convenience,” which are easy to report but lack meaning for patients (Spinks et al., 2011, p. 669). These measures are generally clinician-oriented, reflect existing fragmentation in care, and lack a clear method for triggering improvements. Most measures focus on short-term outcomes in care. Thus, there are serious deficiencies in cancer quality measurement in the United States, including (1) pervasive gaps in existing cancer measures, (2) challenges intrinsic to the measure development process, (3) a lack of consumer engagement in measure development and reporting, and (4) the need for data to support meaningful, timely, and actionable performance measurement. This chapter discusses each of these issues below.
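The Wick result above is ordinary rate arithmetic: an event rate observed before and after the intervention, expressed as a relative reduction. A minimal sketch, with hypothetical counts chosen only to reproduce the shape of the calculation (the study’s actual denominators are not reproduced here):

    def relative_reduction(baseline_rate: float, followup_rate: float) -> float:
        """Relative change in an event rate, as a fraction of the baseline rate."""
        return (baseline_rate - followup_rate) / baseline_rate

    # Hypothetical counts: 27 infections in 100 colorectal cases before the
    # intervention, 18 in 100 after.
    before = 27 / 100
    after = 18 / 100
    print(f"relative reduction: {relative_reduction(before, after):.1%}")  # 33.3%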

Gaps in Existing Cancer Measures

No current quality reporting program or set of measures adequately assesses cancer care in a comprehensive, patient-oriented way. A recent report by the NQF-convened Measure Applications Partnership (MAP), which provides input to the Secretary of Health and Human Services (HHS) on the selection of measures for use in federal reporting, noted that cancer care measures are largely disease specific, process focused, and measured at the clinician level. These measures support operational improvement, but they are limited in their ability to induce wide-scale improvements in care, and they provide limited insight into overall health care quality (MAP and NQF, 2012). For example, process measures are useful for establishing minimum standards for delivery systems to achieve and are simple to validate. Unfortunately, they do not reliably predict outcomes, and they rarely account for patient preferences about what constitutes desirable care. Thus, it is important that process measures be supplemented by additional measures of outcome, structure, efficiency, cost, and patients’ perceptions of their care. Table 7-2 provides a summary of the benefits and drawbacks of the various types of measures used in cancer care.

All phases of the cancer care continuum, from prevention and early detection to treatment, survivorship, and end-of-life care, need new measures. While NQF-endorsed measures and those included in the National Quality Measures Clearinghouse (NQMC) focus on screening and initial cancer treatment, few measures address post-treatment follow-up and the long-term consequences of care, such as survivorship care, disease recurrence, and secondary cancers. Assessments of end-of-life care, including overuse of therapeutic treatment at the end of life, are included in both measure sets, but could be expanded (AHRQ, 2012c; NQF, 2012d). The QOPI measure set primarily addresses treatment and includes a few measures related to prevention and diagnosis, as well as more than 25 measures evaluating end-of-life care (ASCO, 2012d). All of these measure sets, however, could better assess palliative care and hospice care referral patterns and the associated quality of life for cancer patients requiring these services. The MAP report emphasized survivorship care (by stage and cancer type), palliative care, and end-of-life care as priorities for enhancing quality measurement across the continuum of care (MAP and NQF, 2012).

Existing cancer measures also often fail to address all of the relevant dimensions of cancer care, such as access to care and care coordination, evaluation and management of psychosocial needs, patient and family engagement (especially shared decision making and honoring patient preferences), management of complex comorbidities, and advance care planning for cancer patients.

TABLE 7-2 Types of Quality Metrics Used in Cancer Care

Structure
Description: Measures the settings in which clinicians deliver health care, including material resources, human resources, and organizational structure (e.g., types of services available, qualifications of clinicians, and staffing hierarchies)
Benefits: Identifies core infrastructure needed for high-quality care
Challenges: Difficult to compare across settings of variable sizes and resources; implications for patients’ outcomes not always clear

Process
Description: Measures the delivery of care in defined circumstances (e.g., screening the general population, psychosocial evaluations of all newly diagnosed patients, care planning before starting chemotherapy)
Benefits: Encourages evidence-based care and is generally straightforward to measure
Challenges: Need to consider patient choices that differ from the standard of care and contraindications; implications for patients’ outcomes not always clear

Clinical Outcome
Description: Measures personal health and functional status as a consequence of contact with the health care system (e.g., survival, success of treatment)
Benefits: Allows assessment of ultimate endpoints of care
Challenges: Need to risk adjust for comorbidities; difficult to compare across settings with variable populations

Patient-Reported Outcome
Description: Measures patients’ perceived physical, mental, and social well-being based on information that comes directly from the patient (e.g., quality of life, time to return to normal activity, symptom burden)
Benefits: Integrates the patient’s “voice” into the medical record
Challenges: Some outcomes are outside the scope of clinical care (e.g., social well-being)

Patients’ Perspective on Care
Description: Measures patients’ satisfaction with the health care they received throughout the health care delivery cycle
Benefits: Gathers data on patients’ experience of care
Challenges: Need to account for patients’ limitations in assessing technical aspects of care

Cost
Description: Measures the resources required for the health care system to deliver care and the economic impact on patients, their families, and governmental and private payers
Benefits: Allows parties to weigh the relative values of potential treatment options, when combined with outcome measures
Challenges: Difficult to measure the true cost of care given the range of prices and expenses in medical care; costs vary according to perspective (patients, payer, society, etc.); need to distinguish between costs and charges

Efficiency
Description: Measures the time, effort, or cost to produce a specific output in the health care system (e.g., time to initiate therapy after diagnosis, coordination of care)
Benefits: Reflects important determinants of patients’ outcomes and satisfaction with care and is a major driver of cost
Challenges: Need to correlate with outcome measures; need to account for patient characteristics and preferences

Cross-Cutting
Description: Measures issues that cross cancer or disease types (e.g., patient safety, care coordination, equity, and patients’ perspective on care)
Benefits: Aligns with measurement of other cancers or conditions and reflects the true multidisciplinary nature of cancer care
Challenges: Difficult to capture the unique characteristics of cancer care

Disease-Specific
Description: Measures issues within a specific cancer type (e.g., clinicians’ concordance with clinical practice guidelines for breast, prostate, and colon cancer)
Benefits: Reflects the diversity of cancer and tumor biology
Challenges: Need to account for stage of disease at presentation and comorbidities

NOTE: Quality measurement centers on three major elements: outcomes, processes, and structure (Donabedian, 1980). These elements have been expanded in recent years to include concepts of efficiency, cost, and patient-reported outcomes. The types of measures are interrelated and overlapping. For example, a measure can be disease specific and a process or outcome measure, or a patient-reported outcome and a clinical outcome.
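Table 7-2 identifies risk adjustment as the central challenge for clinical outcome measures: unadjusted mortality or complication rates penalize clinicians and institutions that treat sicker patients. One simple approach is indirect standardization: derive each patient’s expected event probability from reference rates for that patient’s risk stratum, then report the provider’s observed-to-expected (O/E) ratio. The sketch below uses invented strata and reference rates; programs such as NSQIP rely on far richer risk models, and small denominators make such ratios noisy, so this shows only the accounting.

    # Indirect standardization: observed-to-expected (O/E) event ratio.
    # Reference event rates per risk stratum (hypothetical values).
    REFERENCE_RATES = {
        "low_comorbidity": 0.02,
        "moderate_comorbidity": 0.06,
        "high_comorbidity": 0.15,
    }

    def oe_ratio(cases: list[tuple[str, bool]]) -> float:
        """cases: (risk_stratum, event_occurred) pairs for one provider.

        O/E < 1 suggests fewer events than the case mix predicts; > 1, more.
        """
        observed = sum(event for _, event in cases)
        expected = sum(REFERENCE_RATES[stratum] for stratum, _ in cases)
        return observed / expected

    provider_cases = [
        ("low_comorbidity", False), ("low_comorbidity", False),
        ("moderate_comorbidity", True), ("high_comorbidity", False),
        ("high_comorbidity", True),
    ]
    # observed = 2; expected = 0.02*2 + 0.06 + 0.15*2 = 0.40 -> O/E = 5.00
    print(f"O/E ratio: {oe_ratio(provider_cases):.2f}")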

There are a number of NQF-endorsed measures, as well as measures in the NQMC and QOPI, that focus on the short-term physical consequences of cancer and its treatment (AHRQ, 2012c; ASCO, 2012d; NQF, 2012d). In addition, Cancer Care Ontario conducted a recent performance improvement project that included developing measures to assess the integration and coordination of palliative care services in cancer care (Dudgeon et al., 2009). However, management of complex comorbidities and the functional, emotional, and social consequences of the disease are largely unaddressed by current measures (Bishop, 2013; Spinks et al., 2011).

There are also gaps in measures that assess care planning and care coordination, which is particularly problematic because cancer care is rarely confined to one hospital or physician. Cancer patients tend to move between multiple care settings: primary care teams, cancer care teams, community and specialty hospitals, and potentially emergency centers, long-term care facilities, and hospice care (MAP and NQF, 2012). Existing cancer measures are limited by where a patient receives cancer care because many oncology practices and hospitals lack the infrastructure and sophistication to measure the quality of care they deliver. Moreover, NQF requires its endorsed measures to be validated in a specific disease or care setting, thus limiting the applicability of the measures in persons with multiple comorbidities or who traverse multiple care settings. In addition, the measurement of care is fragmented and rarely focused on the overall patient experience. Few measurement systems integrate a patient’s experience across care settings.

Quality metric development has also thus far failed to prioritize less common cancers. Although NQF has endorsed, and AHRQ has included in the NQMC, a number of disease-specific measures, including measures for more common cancers, such as breast and prostate cancers, as well as measures for less common cancers, such as pancreatic cancer and multiple myeloma, these measures are not evenly distributed across the diseases. There are few or no measures for other rare cancers, such as brain and ovarian cancers (AHRQ, 2012c; NQF, 2012d). QOPI, for example, includes disease-specific measures for breast, colorectal, lung, and gynecologic cancers, and non-Hodgkin lymphoma, but does not address prostate cancer or many other rare cancers (ASCO, 2012d).

The IOM’s 1999 report on the quality of cancer care recommended that patients undergoing technical procedures be treated in high-volume facilities (IOM and NRC, 1999). A large body of evidence shows that patients undergoing high-risk surgeries at high-volume facilities have better health outcomes and short-term survival than patients treated in low-volume facilities (Birkmeyer et al., 2003; Finks et al., 2011; Finlayson et al., 2003; Ho et al., 2006). Even with their strong track record, however, high-volume facilities currently lack the capacity to treat all cancer patients who require highly skilled procedures (Finks et al., 2011; Spinks et al., 2012). Thus, it will be necessary to establish additional quality measures that identify high-quality, lower-volume facilities and clinicians. ACoS’s NSQIP, the American Board of Medical Specialties Maintenance of Certification…

…consistency with the IOM’s standards for CPGs published between 2005 and 2010. None of the 168 CPGs included in the study met all of the IOM’s standards; the average was 2.8 of the 8 standards assessed. The CPGs were most compliant with the standards addressing transparency in the development process, articulation of the recommendations, and use of external review. The CPGs were least likely to comply with the standards requiring that CPGs be based on a systematic review of the evidence, involve patients and the public in the development process, or specify a process for making updates. In addition, Norris and colleagues found that most CPG developers have failed to develop conflict of interest policies consistent with the IOM’s recommendations (Norris et al., 2012).

The committee acknowledges the considerable challenges to implementing the IOM’s standards for trustworthy CPGs. The standards are stringent and resource intensive, and they require major investments in time and human resources. Because of the importance of CPGs to improving the quality of cancer care and translating evidence into clinical practice, however, the committee endorses the IOM’s recommendations on producing trustworthy CPGs and encourages developers of CPGs in oncology to strive to meet these standards.

Performance Improvement Initiatives

Quality measurement and CPGs are essential components of improving performance in health care. As discussed above, quality metrics provide insights into which aspects of health care require improvement and may be used to assess the success of performance improvement initiatives. They can also be used by individual clinicians to assess their performance and improve the care they provide (Blayney et al., 2009). CPGs are a type of performance improvement initiative that help clinicians stay abreast of an ever-increasing evidence base and apply that information to their clinical practice. Although necessary, these activities, in the absence of other levers, are insufficient to drive meaningful improvements in health care (Berwick et al., 2003; Davies, 2001; IOM, 2011a).

To be successful, health care organizations must foster a culture of change through a variety of activities, such as those discussed in this report. Those activities include improving patient engagement, decision making, and communication (see Chapter 3); ensuring that personnel have sufficient training and appropriate licensure and certifications, and are empowered to contribute to performance improvement initiatives (see Chapter 4); investing in learning health care IT systems to collect data on quality of care, making these data transparent to the entire organization, and providing clinical decision support (see Chapter 6); and creating incentives that encourage clinicians and provider organizations to administer high-quality care rather than a high volume of care (e.g., patient-centered medical homes, care pathways, accountable care organizations) (see Chapter 8).

Performance improvement initiatives, which are conducted at the local level, have been described as “systematic, data-guided activities designed to bring about immediate, positive change in the delivery of health care in a particular setting,” as well as across settings (Baily, 2006, p. S5). These activities are interrelated and overlapping with quality improvement and patient safety initiatives. Table 7-4 provides examples of performance improvement initiatives.

TABLE 7-4 Examples of Performance Improvement Strategies

Audit and Feedback: Clinician performance tracking and reviews, comparison with national/state quality report cards, publicly released performance data, and benchmark outcome data

Clinical Decision Support: Information technology provides clinicians with access to evidence-based clinical practice guidelines

Clinician and Patient Education: Classes, parent and family education, pamphlets, and other media

Clinician Reminder Systems: Prompts in electronic health records

Facilitated Relay of Clinical Data to Clinicians: Patient data transmitted by telephone call or fax from outpatient specialty clinics to primary care clinicians

Financial Incentives: Performance-based bonuses and alternative reimbursement systems for clinicians, positive or negative financial incentives for patients, changes in professional licensure requirements

Organizational Changes: Continuous performance improvement programs, lean and Six Sigma approaches, shifting from paper-based to computer-based record keeping, long-distance case discussion between professional peers, etc.

Patient Reminder Systems: Telephone calls or postcards from clinicians to their patients

Patient Safety Initiatives: Checklists, safety incident reporting, close call reporting, and root-cause analysis

Promotion of Disease Self-Management: Workshops, materials such as blood pressure or glucose monitoring devices

SOURCE: Adapted from AHRQ, 2012a.
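The first strategy in Table 7-4, audit and feedback, amounts to comparing a clinician’s measured rates against benchmarks and returning the gaps. A minimal sketch, with measure names, rates, and thresholds invented for illustration:

    # Audit-and-feedback sketch: flag measures where a clinician's performance
    # falls below a benchmark. All names and numbers are illustrative only.
    BENCHMARKS = {
        "chemo_care_plan_documented": 0.90,
        "pain_assessed_by_2nd_visit": 0.85,
        "hospice_enrollment_gt_7_days": 0.60,
    }

    def feedback_report(rates: dict[str, float], benchmarks: dict[str, float]) -> list[str]:
        """Return human-readable lines for measures below benchmark or unreported."""
        lines = []
        for measure, target in sorted(benchmarks.items()):
            actual = rates.get(measure)
            if actual is None:
                lines.append(f"{measure}: no data reported")
            elif actual < target:
                lines.append(f"{measure}: {actual:.0%} vs benchmark {target:.0%} (gap)")
        return lines

    clinician_rates = {
        "chemo_care_plan_documented": 0.95,
        "pain_assessed_by_2nd_visit": 0.70,
    }
    print("\n".join(feedback_report(clinician_rates, BENCHMARKS)))

The design choice worth noting is that unreported measures are surfaced rather than silently skipped; as the text observes, voluntary programs favor measures of convenience, so an audit that ignores missing data understates the gaps.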

Because these efforts are implemented in a single organization or health system, they can be undertaken immediately, without action at a national or system level, and can be tailored to the unique circumstances of the local environment. Experts have noted, however, that traditional approaches to performance improvement (clinician practice peer review, public reporting of quality measures, continuous performance improvement and total quality management, and regulatory and legislatively imposed reforms and penalties) lack the pace, breadth, magnitude, coordination, and sustainability to transform health care delivery (Chassin and Loeb, 2011; Davies, 2001).

Leadership is needed to create an institutional culture that values high-quality care, a key component of successful performance improvement initiatives. The aviation industry has long recognized the importance of embedding performance improvement initiatives in cultures that value inquiry and quality, and that have strong leaders dedicated to facilitating the necessary changes (Helmreich, 2000). Health care organizations have successfully applied this approach through efforts aimed at improving patient safety, such as using checklists to reduce human error, and could apply it more broadly to improve quality in other areas of care (Gawande, 2009; Hudson, 2003; Longo et al., 2005; Pronovost et al., 2003).

In addition, health care organizations have rushed to adopt Six Sigma and “lean” systems approaches to reduce variation and waste in health care. These robust industrial performance improvement tools are most effective within organizations that have an embedded safety culture, senior leadership dedicated to organizational change, and clear mechanisms for identifying quality and safety issues and triggering performance improvement initiatives (Chassin and Loeb, 2011). Also important is leadership’s commitment to funding these activities, which often consume substantial organizational resources (Pryor et al., 2011). Without these organizational characteristics, it is unlikely that performance improvement initiatives will lead to improved patient outcomes and sustained improvements in care delivery.

Summary and Recommendations

A high-quality cancer care delivery system should translate evidence into clinical practice, measure quality, and improve clinician performance. This involves developing CPGs to assist clinicians in quickly incorporating new medical knowledge into routine care. Also critical are measuring and assessing a system’s progress in improving the delivery of cancer care, publicly reporting the information gathered, and developing innovative strategies to further facilitate performance improvement.

In the figure illustrating the committee’s conceptual framework (see Figure S-2), knowledge translation and performance improvement are part of a cyclical process that measures the outcomes of patient-clinician interactions and implements innovative strategies to improve the accessibility, affordability, and quality of care.

CPGs translate evidence into practice by synthesizing research findings into actionable steps clinicians can take when providing care. The development of CPGs is not straightforward or consistent because the evidence base supporting clinical decisions is often incomplete and includes studies and systematic reviews of variable quality. In addition, organizations that develop CPGs often use fragmented processes that lack transparency and are plagued by conflicts of interest. The committee endorses the standards in the 2011 IOM report Clinical Practice Guidelines We Can Trust to address these problems and produce trustworthy CPGs.

Performance improvement initiatives can also be used to translate evidence into practice. These tools have been described as “systematic, data-guided activities designed to bring about immediate, positive change in the delivery of health care in a particular setting” (Baily, 2006, p. S5), as well as across settings. They can improve the efficiency, patient satisfaction, and health outcomes of cancer care and reduce its costs. Because these efforts are typically implemented in a single organization or health system, however, they often lack the pace, breadth, magnitude, coordination, and sustainability to transform health care delivery nationwide.

Cancer care quality measures provide a standardized and objective means for assessing the quality of cancer care delivered. Measuring performance has the potential to drive improvements in care, inform patients, and influence clinician behavior and reimbursement. There are currently serious deficiencies in cancer care quality measurement in the United States, including pervasive gaps in existing measures, challenges in the measure development process, a lack of consumer engagement in measure development and reporting, and the need for data to support meaningful, timely, and actionable performance measurement. A number of groups representing clinicians who provide cancer care, including ASCO and ACoS, have instituted voluntary reporting programs, through which program participants have demonstrated improvements. HHS has also attempted to influence quality measurement for cancer care through various mandatory reporting programs.

Recommendation 8: Quality Measurement

Goal: Develop a national quality reporting program for cancer care as part of a learning health care system.

To accomplish this, the U.S. Department of Health and Human Services should work with professional societies to:

• Create and implement a formal long-term strategy for publicly reporting quality measures for cancer care that leverages existing efforts.

• Prioritize, fund, and direct the development of meaningful quality measures for cancer care with a focus on outcome measures and with performance targets for use in publicly reporting the performance of institutions, practices, and individual clinicians.

• Implement a coordinated, transparent reporting infrastructure that meets the needs of all stakeholders, including patients, and is integrated into a learning health care system.

References

ACoS (American College of Surgeons). 2011a. About ACS NSQIP. http://site.acsnsqip.org/about (accessed April 25, 2013).
———. 2011b. About the CoC. http://www.facs.org/cancer/coc/cocar.html (accessed August 15, 2012).
———. 2011c. How are cancer programs accredited? http://www.facs.org/cancer/coc/howacc.html (accessed August 15, 2012).
———. 2011d. National Cancer Data Base. http://www.facs.org/cancer/ncdb/index.html (accessed August 15, 2012).
———. 2013. Measures. http://site.acsnsqip.org/program-specifics/program-options/measures-option (accessed June 28, 2013).
The Advisory Board Company. 2012. Clinical Advisory Board member survey results: Staffing models for supporting quality reporting. http://www.advisory.com/~/media/Advisory-com/Research/CAB/Resources/2012/2012%20Staffing%20Models%20Survey%20Results.pdf (accessed August 15, 2012).
AHRQ (Agency for Healthcare Research and Quality). 2012a. Closing the quality gap series: Quality improvement interventions to address health disparities. http://www.effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1242&ECem=120827 (accessed December 21, 2012).
———. 2012b. Measures sought for National Quality Measures Clearinghouse. http://www.ahrq.gov/qual/nqmcmeas.htm (accessed August 15, 2012).
———. 2012c. National Quality Measures Clearinghouse measures by topic. http://www.qualitymeasures.ahrq.gov/browse/by-topic.aspx (accessed August 15, 2012).
———. 2012d. National Quality Measures Clearinghouse: About. http://www.qualitymeasures.ahrq.gov/about/index.aspx (accessed August 15, 2012).
———. 2012e. National Quality Measures Clearinghouse, U.S. Department of Health and Human Services: Measure inventory. http://www.qualitymeasures.ahrq.gov/hhs-measure-inventory/browse.aspx (accessed August 15, 2012).
AMA (American Medical Association). 2012. Resources. http://www.ama-assn.org/ama/pub/physician-resources/physician-consortium-performance-improvement.page (accessed August 15, 2012).

Anderson, K. M., C. A. Marsh, A. C. Flemming, H. Isenstein, and J. Reynolds. 2012. An environmental snapshot—Quality measurement enabled by health IT: Overview, possibilities, and challenges. http://healthit.ahrq.gov/sites/default/files/docs/page/NRCD1PTQ%20Final%20Draft%20Background%20Report%2007102012_508compliant.pdf (accessed August 15, 2012).
ASCO (American Society of Clinical Oncology). 2012a. Clinical practice guidelines. http://www.asco.org/ASCOv2/Practice+%26+Guidelines/Guidelines/Clinical+Practice+Guidelines (accessed December 20, 2012).
———. 2012b. Geographic distribution. http://qopi.asco.org/GeographicDistribution (accessed August 15, 2012).
———. 2012c. QOPI certified practices. http://qopi.asco.org/certifiedpractices (accessed October 1, 2012).
———. 2012d. QOPI summary of measures, fall 2012. http://qopi.asco.org/Documents/QOPIFall12MeasuresSummary_002.pdf (accessed August 15, 2012).
———. 2012e. Who can apply? http://qopi.asco.org/whocanapply (accessed August 15, 2012).
———. 2013. Certification. http://qopi.asco.org/certification.html (accessed June 26, 2012).
ASTRO (American Society for Radiation Oncology). 2013. Guidelines. https://www.astro.org/Clinical-Practice/Guidelines/Index.aspx (accessed March 27, 2013).
Baily, M. A., M. Bottrell, J. Lynn, and B. Jennings. 2006. The ethics of using QI methods to improve health care quality and safety. Hastings Center Report 36(4):S1-S40.
Berry, D. A. 2011. Comparing survival outcomes across centers: Biases galore. Cancer Letter 37(11):7-10.
Berwick, D. M., B. James, and M. J. Coye. 2003. Connections between quality measurement and improvement. Medical Care 41(1):I30-I38.
Bilimoria, K. Y., A. K. Stewart, D. P. Winchester, and C. Y. Ko. 2008. The National Cancer Data Base: A powerful initiative to improve cancer care in the United States. Annals of Surgical Oncology 15(3):683-690.
Birkmeyer, J. D., T. A. Stukel, A. E. Siewers, P. P. Goodney, D. E. Wennberg, and F. L. Lucas. 2003. Surgeon volume and operative mortality in the United States. New England Journal of Medicine 349(22):2117-2127.
Bishop, T. F. 2013. Pushing the outpatient quality envelope. Journal of the American Medical Association 1-2.
Blayney, D. W., K. McNiff, D. Hanauer, G. Miela, D. Markstrom, and M. Neuss. 2009. Implementation of the Quality Oncology Practice Initiative at a university comprehensive cancer center. Journal of Clinical Oncology 27(23):3802-3807.
Blayney, D. W., J. Severson, C. J. Martin, P. Kadlubek, T. Ruane, and K. Harrison. 2012. Michigan oncology practices showed varying adherence rates to practice guidelines, but quality interventions improved care. Health Affairs (Millwood) 31(4):718-728.
Chassin, M. R., and J. M. Loeb. 2011. The ongoing quality improvement journey: Next stop, high reliability. Health Affairs (Millwood) 30(4):559-568.
CMS (Centers for Medicare & Medicaid Services). 2012. Physician Quality Reporting System, formerly known as the Physician Quality Reporting Initiative. http://www.cms.gov/PQRS (accessed August 15, 2012).
Davies, H. T. 2001. Exploring the pathology of quality failings: Measuring quality is not the problem—changing it is. Journal of Evaluation in Clinical Practice 7(2):243-251.
Deutsch, A., B. Gage, L. Smith, and C. Kelleher. 2012. Patient-reported outcomes in performance measurement: Commissioned paper on PRO-based performance measures for healthcare accountable entities, draft #1, September 4, 2012. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=71824 (accessed September 15, 2012).

Donabedian, A. 1980. The definition of quality and approaches to its assessment. Vol. 1, Explorations in quality assessment and monitoring. Ann Arbor, MI: Health Administration Press.
Dudgeon, D. J., C. Knott, C. Chapman, K. Coulson, E. Jeffery, S. Preston, M. Eichholz, J. P. Van Dijk, and A. Smith. 2009. Development, implementation, and process evaluation of a regional palliative care quality improvement project. Journal of Pain and Symptom Management 38(4):483-495.
Faber, M., M. Bosch, H. Wollersheim, S. Leatherman, and R. Grol. 2009. Public reporting in health care: How do consumers use quality-of-care information? A systematic review. Medical Care 47(1):1-8.
Finks, J. F., N. H. Osborne, and J. D. Birkmeyer. 2011. Trends in hospital volume and operative mortality for high-risk surgery. New England Journal of Medicine 364(22):2128-2137.
Finlayson, E. V., P. P. Goodney, and J. D. Birkmeyer. 2003. Hospital volume and operative mortality in cancer surgery: A national study. Archives of Surgery 138(7):721-725.
Foster, J. A., M. Abdolrasulnia, H. Doroodchi, J. McClure, and L. Casebeer. 2009. Practice patterns and guideline adherence of medical oncologists in managing patients with early breast cancer. Journal of the National Comprehensive Cancer Network 7(7):697-706.
Gawande, A. 2009. The checklist manifesto: How to get things right. New York: Metropolitan Books.
Goldberg, P. 2011. Fox Chase publishes its cancer survival data: The move is partly science, partly marketing. Cancer Letter 37(5):1-5.
Harris, K. M., and M. Beeuwkes Buntin. 2008. Choosing a health care provider. The Synthesis Project, Research Synthesis Report 14.
Helmreich, R. L. 2000. On error management: Lessons from aviation. British Medical Journal 320(7237):781-785.
Hibbard, J., and S. Sofaer. 2010. Best practices in public reporting no. 1: How to effectively present health care performance data to consumers. http://www.ahrq.gov/qual/pubrptguide1.pdf (accessed August 15, 2012).
Higgins, A., T. Zeddies, and S. D. Pearson. 2011. Measuring the performance of individual physicians by collecting data from multiple health plans: The results of a two-state test. Health Affairs (Millwood) 30(4):673-681.
Ho, V., M. J. Heslin, H. Yun, and L. Howard. 2006. Trends in hospital and surgeon volume and operative mortality for cancer surgery. Annals of Surgical Oncology 13(6):851-858.
Hudson, P. 2003. Applying the lessons of high risk industries to health care. Quality & Safety in Health Care 12(Suppl 1):i7-i12.
Hussey, P., and E. A. McGlynn. 2009. Why are there no efficiency measures in the National Quality Measures Clearinghouse? http://www.qualitymeasures.ahrq.gov/expert/expert-commentary.aspx?id=16459 (accessed August 15, 2012).
IOM (Institute of Medicine). 2001. Envisioning the national health care quality report. Edited by M. P. Hurtado, E. K. Swift, and J. M. Corrigan. Washington, DC: National Academy Press.
———. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.
———. 2011a. For the public’s health: The role of measurement in action and accountability. Washington, DC: The National Academies Press.
———. 2011b. Health literacy implications for health care reform: Workshop summary. Washington, DC: The National Academies Press.
———. 2011c. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press.
IOM and NRC (National Research Council). 1999. Ensuring quality cancer care. Washington, DC: National Academy Press.

———. 2000. Enhancing data systems to improve the quality of cancer care. Edited by M. Hewitt and J. V. Simone. Washington, DC: National Academy Press.
Jacobson, J. O., M. N. Neuss, K. K. McNiff, P. Kadlubek, L. R. Thacker, 2nd, F. Song, P. D. Eisenberg, and J. V. Simone. 2008. Improvement in oncology practice performance through voluntary participation in the Quality Oncology Practice Initiative. Journal of Clinical Oncology 26(11):1893-1898.
Kahn, K. L., J. L. Malin, J. Adams, and P. A. Ganz. 2002. Developing a reliable, valid, and feasible plan for quality-of-care measurement for cancer: How should we measure? Medical Care 40(6 Suppl):III73-III85.
KFF (Kaiser Family Foundation) and AHRQ. 2006. Update on consumers’ views on patient safety and quality information. www.kff.org/kaiserpolls/pomr092706pkg.cfm (accessed August 15, 2012).
Kizer, K. W. 2000. The National Quality Forum seeks to improve health care. Academic Medicine 75(4):320-321.
Krumholz, H. M., P. S. Keenan, J. E. Brush, Jr., V. J. Bufalino, M. E. Chernew, A. J. Epstein, P. A. Heidenreich, V. Ho, F. A. Masoudi, D. B. Matchar, S. L. Normand, J. S. Rumsfeld, J. D. Schuur, S. C. Smith, Jr., J. A. Spertus, and M. N. Walsh. 2008. Standards for measures used for public reporting of efficiency in health care: A scientific statement from the American Heart Association Interdisciplinary Council on Quality of Care and Outcomes Research and the American College of Cardiology Foundation. Journal of the American College of Cardiology 52(18):1518-1526.
Kung, J., R. R. Miller, and P. A. Mackowiak. 2012. Failure of clinical practice guidelines to meet Institute of Medicine standards: Two more decades of little, if any, progress. Archives of Internal Medicine 172(21):1628-1633.
Longo, D. R., J. E. Hewett, B. Ge, and S. Schubert. 2005. The long road to patient safety: A status report on patient safety systems. Journal of the American Medical Association 294(22):2858-2865.
MAP (Measure Applications Partnership) and NQF (National Quality Forum). 2012. Performance measurement coordination strategy for PPS-exempt cancer hospitals. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=71217 (accessed August 15, 2012).
McGlynn, E. A. 1997. Six challenges in measuring the quality of health care. Health Affairs (Millwood) 16(3):7-21.
McNiff, K. 2006. The Quality Oncology Practice Initiative: Assessing and improving care within the medical oncology practice. Journal of Oncology Practice 2(1):26-30.
Menck, H. R., L. Garfinkel, and G. D. Dodd. 1991. Preliminary report of the National Cancer Data Base. CA: A Cancer Journal for Clinicians 41(1):7-18.
Murff, H. J., F. FitzHenry, M. E. Matheny, N. Gentry, K. L. Kotter, K. Crimin, R. S. Dittus, A. K. Rosen, P. L. Elkin, S. H. Brown, and T. Speroff. 2011. Automated identification of postoperative complications within an electronic medical record using natural language processing. Journal of the American Medical Association 306(8):848-855.
National Priorities Partnership. 2011. Input to the Secretary of Health and Human Services on priorities for the National Quality Strategy. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=68238 (accessed August 15, 2012).
NCCN (National Comprehensive Cancer Network). 2012. NCCN guidelines & clinical resources. http://www.nccn.org/clinical.asp (accessed December 20, 2012).
NCI (National Cancer Institute). 2012. Surveillance, Epidemiology, and End Results: Overview of the SEER program. http://seer.cancer.gov/about/overview.html (accessed August 15, 2012).

Norris, S. L., H. K. Holmer, B. U. Burda, L. A. Ogden, and R. Fu. 2012. Conflict of interest policies for organizations producing a large number of clinical practice guidelines. PLoS ONE 7(5):e37413.
NQF (National Quality Forum). 2010. Guidance for measure harmonization: A consensus report. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=62381 (accessed August 15, 2012).
———. 2012a. Adjuvant hormonal therapy. http://www.qualityforum.org/MeasureDetails.aspx?actid=0&SubmissionId=450 (accessed August 15, 2012).
———. 2012b. Funding. http://www.qualityforum.org/About_NQF/Funding.aspx (accessed August 15, 2012).
———. 2012c. National Quality Forum: Measure evaluation criteria, January 2011. http://www.qualityforum.org/Measuring_Performance/Submitting_Standards/Measure_Evaluation_Criteria.aspx (accessed August 15, 2012).
———. 2012d. NQF-endorsed standards. http://www.qualityforum.org/Measures_List.aspx (accessed August 15, 2012).
———. 2012e. Oncology: Hormonal therapy for stage I through III, ER/PR positive breast cancer. http://www.qualityforum.org/MeasureDetails.aspx?actid=0&SubmissionId=631 (accessed August 15, 2012).
———. 2012f. Performance measurement coordination strategy for PPS-exempt cancer hospitals. https://www.qualityforum.org/Publications/2012/06/Performance_Measurement_Coordination_Strategy_for_PPS-Exempt_Cancer_Hospitals.aspx (accessed August 8, 2013).
———. 2013a. Measure Applications Partnership. http://www.qualityforum.org/map (accessed August 7, 2013).
———. 2013b. National Priorities Partnership. http://www.qualityforum.org/Setting_Priorities/NPP/National_Priorities_Partnership.aspx (accessed August 7, 2013).
———. 2013c. NQF-endorsed standards. http://www.qualityforum.org/Measures_List.aspx (accessed June 28, 2013).
Parsons, A., C. McCullough, J. Wang, and S. Shih. 2012. Validity of electronic health record-derived quality measurement for performance monitoring. Journal of the American Medical Informatics Association 19(4):604-609.
Pauly, M. V. 2011. Analysis & commentary: The trade-off among quality, quantity, and cost: How to make it—if we must. Health Affairs (Millwood) 30(4):574-580.
President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry. 1998. Quality first: Better health care for all Americans. Final report to the President of the United States. Washington, DC: U.S. Government Printing Office.
Pronovost, P. J., and R. Lilford. 2011. Analysis & commentary: A road map for improving the performance of performance measures. Health Affairs (Millwood) 30(4):569-573.
Pronovost, P. J., B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B. Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin. 2003. Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Quality & Safety in Health Care 12:405-410.
Pryor, D., A. Hendrich, R. J. Henkel, J. K. Beckmann, and A. R. Tersigni. 2011. The quality “journey” at Ascension Health: How we’ve prevented at least 1,500 avoidable deaths a year—and aim to do even better. Health Affairs (Millwood) 30(4):604-611.
RAND. 2010. About ACOVE. http://www.rand.org/health/projects/acove/about.html (accessed April 25, 2013).
Reames, B. N., R. W. Krell, S. N. Ponto, and S. L. Wong. 2013. A critical evaluation of oncology clinical practice guidelines. Journal of Clinical Oncology 31(20):2563-2568.

Romanus, D., M. R. Weiser, J. M. Skibber, A. Ter Veer, J. C. Niland, J. L. Wilson, A. Rajput, Y. N. Wong, A. B. Benson, S. Shibata, and D. Schrag. 2009. Concordance with NCCN colorectal cancer guidelines and ASCO/NCCN quality measures: An NCCN institutional analysis. Journal of the National Comprehensive Cancer Network 7(8):895-904.
Russell, E. 1998. The ethics of attribution: The case of health care outcome indicators. Social Science & Medicine 47(9):1161-1169.
Schneider, E. C., J. L. Malin, K. L. Kahn, E. J. Emanuel, and A. M. Epstein. 2004. Developing a system to assess the quality of cancer care: ASCO’s national initiative on cancer care quality. Journal of Clinical Oncology 22(15):2985-2991.
Shekelle, P. G., Y. W. Lim, S. Mattke, and C. Damberg. 2008. Does public release of performance results improve quality of care? A systematic review. London, UK: The Health Foundation.
Spinks, T. E., R. Walters, T. W. Feeley, H. W. Albright, V. S. Jordan, J. Bingham, and T. W. Burke. 2011. Improving cancer care through public reporting of meaningful quality measures. Health Affairs (Millwood) 30(4):664-672.
Spinks, T., H. W. Albright, T. W. Feeley, R. Walters, T. W. Burke, T. Aloia, E. Bruera, A. Buzdar, L. Foxhall, D. Hui, B. Summers, A. Rodriguez, R. Dubois, and K. I. Shine. 2012. Ensuring quality cancer care: A follow-up review of the Institute of Medicine’s 10 recommendations for improving the quality of cancer care in America. Cancer 118(10):2571-2582.
Totten, A. M., J. Wagner, A. Tiwari, C. O’Haire, J. Griffin, and M. Walker. 2012. Public reporting as a quality improvement strategy. Closing the quality gap: Revisiting the state of the science. http://www.effectivehealthcare.ahrq.gov/ehc/products/343/1198/Evidencereport208_CQG-PublicReporting_ExecutiveSummary_20120724.pdf (accessed August 15, 2012).
UHC (University HealthSystem Consortium). 2012. UHC expands and refines risk-adjusted models for pediatrics and oncology—updated models take children’s care into account, help simplify cancer patient diagnosis and treatment methods. https://www.uhc.edu/docs/45014734_Press_Release_RiskModel.pdf (accessed August 15, 2012).
Usman, O. 2011. We need more supply-side regulation. Health Affairs (Millwood) 30(8):1615; author reply 1615.
USPSTF (U.S. Preventive Services Task Force). 2012. USPSTF topic guide. http://www.uspreventiveservicestaskforce.org/uspstopics.htm#Ctopics (accessed December 20, 2012).
Weissman, J. S., J. R. Betancourt, A. R. Green, G. S. Meyer, A. Tan-McGrory, J. D. Nudel, J. A. Zeidman, and J. E. Carrillo. 2011. Commissioned paper: Healthcare disparities measurement. Boston, MA: Massachusetts General Hospital and Harvard Medical School. Sponsored by the National Quality Forum, with grant funding from the Robert Wood Johnson Foundation.
Werner, R. M., R. T. Konetzka, E. A. Stuart, E. C. Norton, D. Polsky, and J. Park. 2009. Impact of public reporting on quality of postacute care. Health Services Research 44(4):1169-1187.
Wick, E. C., D. B. Hobson, J. L. Bennett, R. Demski, L. Maragakis, S. L. Gearhart, J. Efron, S. M. Berenholtz, and M. A. Makary. 2012. Implementation of a surgical comprehensive unit-based safety program to reduce surgical site infections. Journal of the American College of Surgeons 215(2):193-200.
