utilized: (1) studies other than randomized controlled trials, (2) administrative datasets that often exist electronically, and (3) patients and their ability to report changes in their symptoms and well-being (outcomes of care). Steps can be taken to make better use of each of these sources.

Studies Other Than Randomized Controlled Trials

While well-designed randomized controlled trials are recognized as the gold standard for generating sound clinical evidence, experts note that the sheer number of possible pharmacological and nonpharmacological treatments for many M/SU illnesses makes relying solely on such studies to identify evidence-based care infeasible (Essock et al., 2003). Others add that some features of mental health care make the use of randomized controlled trials methodologically problematic as well. For example, in studies of the effectiveness of psychotherapy, the therapist and the patient cannot be blinded to the intervention, delivery of a placebo psychotherapeutic intervention is difficult to conceptualize, and standardization of the intervention is problematic because therapists must respond to what happens in a psychotherapy session as it unfolds (Tanenbaum, 2003). For such reasons, the behavioral and social sciences have often used quasi-experimental as well as qualitative research designs (National Academy of Sciences, undated), practices that are sometimes a source of contention.

Some assert that quasi-experimental studies often are more useful than randomized controlled trials in generating practical information on how to provide effective mental health interventions in some clinical areas (Essock et al., 2003). Consistent with this assertion, the U.S. Preventive Services Task Force notes that a well-designed cohort study may be more compelling than a poorly designed or weakly powered randomized controlled trial (Harris et al., 2001). Observational studies also have been identified as a valid source of evidence useful in assessing certain aspects of quality of care (West et al., 2002). However, others note the comparative weakness of these study designs in controlling for bias and other sources of error, and therefore exclude them from systematic reviews of evidence used to determine evidence-based practices.

A discussion of variations in study design and their implications for systematic reviews of evidence is beyond the scope of this report; many researchers and methodologists are considering strategies for addressing these difficult issues (Wolff, 2000). As this study was under way, the National Research Council had established a planning committee to oversee the development of a broad, multiyear effort—the Standards of Evidence–Strategic Planning Initiative—to identify critical issues affecting the quality and utility of research in the behavioral and social sciences and education (National Academy of Sciences, undated). The committee believes such
