Chao, Samantha. "3 State of the Science of Quality Improvement Research." The State of Quality Improvement and Implementation Research: Expert Views, Workshop Summary. Washington, DC: The National Academies Press, 2007.
intrinsically require less testing or that the need for action trumps the need for evidence?
Does this answer depend on variations in context (e.g., across patients, clinical microsystems, health plans, regions)? Other contextual factors? Which aspects of context, if any, do you measure as part of quality improvement research?
Do you have suggestions for appropriately matching research approaches to research questions?
What additional research is needed to help policy makers/practitioners improve quality of care?
Some panelists submitted written responses to these questions; those responses are included in Appendix C.
EVIDENCE-BASED PRACTICE CENTER
Paul Heidenreich of both the Palo Alto Veterans Administration (VA) Hospital and Stanford Evidence-based Practice Center (EPC) presented the EPC’s approach to evaluating quality improvement research. The EPC, a collaborative effort between Stanford and University of California, San Francisco (UCSF), is one of 13 EPCs funded by AHRQ to provide evidence-based reports and disseminate the findings of those reports.1 The Stanford–UCSF EPC has authored a series of reports titled “Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies.” These reports attempt to provide guidance to those doing quality improvement and assess the effectiveness of various quality improvement strategies under specific circumstances. A secondary goal is to advance review methodology. The series has evaluated a variety of issues in health care, including diabetes and medication management. Each evaluation studies the effects of the same quality improvement strategies, such as provider reminders and techniques to promote self-management (Table 3-1). The studies are all conducted using one of three evaluation designs: randomized trials, concurrent trials, and interrupted time series.
The EPC employs a “strength of evidence” scale to rate studies on three factors: impact, study strength, and effect size. The difficulty of implementing an intervention is also considered, with a focus on cost barriers and complexity. The strength of evidence and diffi-
Topics for reports are typically requested by AHRQ, and the EPC develops a framework to address each topic. The EPC also identifies experts and stakeholders to involve in the report before proceeding.