Test effective and reliable processes for disseminating to the broader health care field the findings on practice guidelines, processes, and procedures that result from translational research activities.
Inform public policy; continually examine the overall impact of research findings on the purchasing, management, and delivery of care; and monitor fidelity to the findings of this report and the principles of the Quality Chasm report.
The committee believes the timely and efficient production of the evidence needed to address such a broad range of issues will require a research agenda that makes appropriate use of experimental, quasi-experimental, and observational approaches.
As discussed in Chapter 4, while well-designed randomized controlled trials are recognized as the gold standard for generating sound clinical evidence, the sheer number of possible pharmacological and nonpharmacological treatments for many M/SU illnesses makes relying solely on such trials to identify evidence-based care infeasible (Essock et al., 2003). Moreover, some features of mental health care make the use of such trials methodologically problematic (Tanenbaum, 2003). For these reasons, behavioral and social science research has often used quasi-experimental as well as qualitative research designs (National Academy of Sciences, undated); indeed, some assert that in certain clinical areas, quasi-experimental studies are often more useful in generating practical information about how to provide effective mental health interventions (Essock et al., 2003). Consistent with this point of view, the U.S. Preventive Services Task Force notes that a well-designed cohort study may be more compelling than a poorly designed or weakly powered randomized controlled trial (Harris et al., 2001). Observational studies also have been identified as a valid source of evidence useful in assessing the quality of care (West et al., 2002). However, others note the comparative weakness of these study designs in controlling for bias and other sources of error, and they exclude such studies from systematic reviews of evidence used to determine evidence-based practices. Many researchers and methodologists are already considering strategies for addressing these difficult issues (Wolff, 2000).
As this study was under way, the National Research Council had established a planning committee to oversee the development of a broad, multiyear effort—the Standards of Evidence–Strategic Planning Initiative—to identify critical issues affecting the quality and utility of research in the behavioral and social sciences and education (National Academy of Sci-