tiple institutions in a collaborative effort to build these clinical decision support rules.” He suggested creating and following best practices for clinical decision support systems so that these tools can be disseminated quickly across multiple institutions (IOM, 2015).
Khorasani added that Harvard Medical School has created a public repository of evidence for clinical decision support that is machine readable, transparently graded, and continuously updated.25 Categories of evidence in the repository include clinical decision rules, professional society guidelines, and local best practices. He said Harvard curates and grades the available evidence in order to promote collaboration to accelerate the development of evidence-based clinical decision support tools.
Laser asked how long it should take for new clinical practice guidelines to be embedded in clinical decision support systems. Khorasani responded that for guidelines relating to imaging, vendors of imaging clinical decision support systems will need to update their tools within 1 year, based on a provision in the Protecting Access to Medicare Act.26 This law stipulates that starting in January 2020, clinicians in ambulatory settings must use certified clinical decision support tools with appropriate use criteria to order advanced imaging tests. Brink noted that ACR Select is one of the certified clinical decision support tools, and is built on ACR’s Appropriateness Criteria.27
Several speakers discussed opportunities to promote high-quality pathology and radiology care using quality improvement strategies such as peer learning, feedback, and continuous education and assessment of pathologists and radiologists.
27 See https://www.acr.org/Clinical-Resources/Clinical-Decision-Support and https://www.acr.org/Clinical-Resources/ACR-Appropriateness-Criteria (accessed July 5, 2018).
Larson noted that in general, there is very limited supervision of clinician performance in clinical practice, and most often it is focused on removing individual “bad apple” clinicians, but he said that this “is not effective, nor is it the right approach to take.” Larson suggested that rather than focusing on poorly performing individual clinicians, a systems-level approach to improving performance should be used, in which whole practices are held accountable. He also suggested employing a blended model of accountability, in which specialists supervise, coach, and provide feedback to generalists. Such a model would provide the monitoring and feedback needed for improvement, he said. Stead added that feedback loops are key to quality improvement efforts, and Sause agreed: “Feedback and self-correction [are] absolutely critical to making [quality improvement] dynamic and patient-centric.”
Sause reported on Intermountain Healthcare’s quality improvement efforts, which rely on quality measurement using robust data systems and reporting this information back to clinicians. “We look for variation in how care is delivered, feed that back to the clinicians, and try and change their behavior,” Sause said. For example, data collected by Intermountain Healthcare found that breast biopsies performed toward the end of the week were much more likely to be estrogen receptor-negative. Sause said this led to the realization that these biopsies often were not analyzed until the following week, by which time they had deteriorated in quality. “This was a eureka moment for us that reflects how this is a dynamic, living process. You have to look and understand the data and use that understanding to change how you’re practicing medicine,” Sause stressed.
Sause discussed a number of challenges in implementing quality improvement efforts. He added that clinician leadership and engagement are critical to quality improvement efforts, but can be challenging, especially given the heavy workload clinicians face. “We can’t just layer on, but have to figure out how to take some things away to make this work,” Sause said. “We’re so focused on metrics that are imposed on us from payers and other organizations that we’re losing the clinicians because we’re not providing them real metrics to help them take care of their patients.” In addition, he said that although he is in charge of the oncology quality improvement program at Intermountain Healthcare, “most of what I have to do is be the bully pulpit.” He said quality improvement efforts can be strengthened if
program managers are given more authority and a centralized budget, and operate within a value-based reimbursement system.
Hofmann said patient-reported outcome measures (PROMs) should be included in quality improvement activities. “We need to find out what the patient’s goals are and see how we match up against that,” he stressed. However, Sause noted it can be challenging to elicit patient responses to survey questions. Spears said that a lack of response might be due to questions not being relevant, and recommended asking fewer, more pertinent questions. Hofmann suggested applying computerized adaptive testing methods in patient surveys in order to ensure questions are relevant to a patient and minimize the burden of completing such surveys.28 Spears also noted that patient handouts may not stress the importance of the questionnaire, and she encouraged clinicians to convey that such surveys have the same importance as diagnostic testing results. Baker added that “patients need to have confidence that somebody is actually going to look at the form and maybe even act on it.”
Fennessy said the Dana-Farber Cancer Institute and Brigham and Women’s Hospital try to ensure the quality of imaging interpretation through a program called Worth Another Look. Through this program, radiologists can share information about patients’ cases—marked as privileged communications—in order to promote discussion and peer learning, especially when imaging leads to conflicting or surprising findings. Fennessy added that this program encourages radiologists who work in different imaging modalities (e.g., CT, MRI, ultrasound) to review the same case and determine whether a different imaging modality might confirm initial results and interpretation. Fennessy noted that community radiologists who are part of the Dana-Farber Cancer Institute and Brigham and Women’s Hospital can also participate in the program.
Cohen said that second reviews were included in recommendations from the College of American Pathologists and the Association of Directors of Anatomic and Surgical Pathology to improve the accuracy of pathology reports (Nakhleh et al., 2015). These recommendations included the following:
28 See http://med.stanford.edu/researchit/infrastructure/choir.html (accessed August 14, 2018).
- Develop procedures for the review of selected cases to detect disagreements and potential interpretive errors;
- Perform case reviews in a timely manner;
- Document case review procedures relevant to the practice settings;
- Continuously monitor and document the results of case review; and
- Take steps to improve agreement of pathology case reviews, if needed.
However, Cohen noted that “having two pathologists read every pathology specimen is impractical.” He said there could be mandatory second review of all new cancer diagnoses, but this practice could also potentially lead to delays in diagnosis. Nayar added that pathology organizations, as well as many hospitals and institutions, have emphasized the importance of timely secondary reviews of malignancies to improve patient care. Consequently, many pathologists making a first-time diagnosis of cancer do have a second pathologist review the case. Difficult or unusual cases often require group consultations, she said.
Nayar added that when errors do occur, it is important to understand the contributing factors. She said pathology residents and fellows are encouraged to participate in root cause analyses to understand why errors occurred and how they can be prevented in the future.
Several speakers discussed efforts to move toward a more continuous process of learning and assessment of clinician knowledge, judgment, and skills through MOC programs, rather than relying on an initial board exam and recertification exams every 10 years. Brink noted the importance of more frequent learning and assessment, given the rapid accumulation of new knowledge in medicine: “When I took my boards a few decades ago, genomics was not an area of interest. Today, I would be grossly inadequate for sophisticated cancer imaging if I was certified exclusively by that credential,” Brink said. Wagner noted that “we do not know what happens to radiologists in the 10-year period after they take their test. We just know what they were like on that day.” He added that another shortcoming of 10-year recertification exams is that they do not provide detailed feedback to clinicians—a clinician is only informed whether he or she passed the test.
“Longitudinal assessment is changing training and how we assess
competence in our diplomates,” Nayar stressed. Since 2006, all primary and subspecialty certificates issued by the American Board of Pathology have required pathologists to participate in the MOC program (now called the Continuing Certification program), which measures six core competencies in a four-part framework (Johnson, 2014):
- Professionalism and professional standing, such as hospital privileges and an active medical license;
- Lifelong learning and self-assessment through participation in CME and adequate performance in self-assessment modules;
- Assessment of knowledge, judgment, and skills; and
- Improvement in medical practice.
Pathology diplomates report on parts 1, 2, and 4 of the MOC activities every 2 years, and for part 3, diplomates must pass an exam every 10 years, said Nayar. In 2017, the American Board of Pathology started a pilot program to assess whether longitudinal assessment through ABPath Cert-Link29 could replace the traditional 10-year recertification exam. Volunteers participating in the pilot are asked to answer 20 questions per quarter. They receive immediate feedback on whether they answered correctly or not, as well as the key learning point for each question, an explanation of why the correct answer is right and the others are not, and references for further reading. Volunteers receive information on how their scores compare to those of their peers, and are retested on key content they answered incorrectly in previous tests. Nayar added that the participants also provide feedback to the American Board of Pathology on the relevance of the questions to clinical practice and their specialties.
Wagner said that in January 2019, the American Board of Radiology will transition to a lifelong longitudinal assessment process. Participants will be provided two questions per week via email and will receive immediate feedback on their answers. He noted that radiologists can decline to answer certain questions that do not pertain to their practice, and will instead receive more relevant questions. “It increases the number of questions that are built around your practice environment,” he said. “We want radiologists to learn what they need to know to be relevant and useful to their patients, and we can’t ask them to be experts at everything,” he stressed, adding, “[The] new
29 See http://www.abpath.org/index.php/abpath-certlink/2017-09-15-13-21-49 (accessed July 12, 2018).