ment fared more poorly than those who received a full course of treatment. Such situations are not conclusive, but they do seem more informative than passive correlational studies that lack such exogenous shocks. Situations in which data collection is ongoing when such shocks occur are rare; we know of no examples involving cocaine treatment or modalities for heroin other than methadone maintenance.
Another line of relevant evidence comes from statistical comparisons of voluntary versus coerced treatment clients. The current consensus is that it does not matter—coerced clients fare no worse (and no better) than voluntary clients (see reviews by Anglin and Hser, 1990; Farabee et al., 1998; Lawental et al., 1996; Silverstein, 1997). Gostin (1991) argues that “the intuition that compulsory treatment will fail because drug dependent people must be self-motivated to benefit…simply is not borne out by the data.” For example, Silverstein (1997) found no significant outcome differences for court-mandated versus other clients at a semirural drug abuse treatment clinic. Lawental et al. (1996) found comparable improvements for both self-referred and employer-coerced private treatment clients.
These studies help to address concerns about regression and selection artifacts. However, they use quasi-experimental, “nonequivalent control group” designs, comparing coerced and noncoerced clients at the same site. Although most of these studies attempted statistical matching, there is no way of knowing whether the coerced and noncoerced groups were otherwise comparable; for all we know, the coerced clients could be individuals who would have benefited even more from treatment in the absence of coercion.
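To make concrete what “statistical matching” in such nonequivalent-control-group designs amounts to, the sketch below pairs each coerced client with the most similar noncoerced client on observed covariates and compares outcomes within pairs. This is a generic illustration, not the procedure used in any of the cited studies; all variable names and data values are invented.

```python
# Illustrative 1:1 nearest-neighbor matching on observed covariates.
# All data are hypothetical; this is not drawn from the cited studies.

def nearest_neighbor_match(coerced, noncoerced, covariates):
    """Greedy 1:1 nearest-neighbor matching on standardized covariates."""
    # Standardize each covariate across the pooled sample so that
    # distances are not dominated by whichever variable has the
    # largest raw scale.
    pooled = coerced + noncoerced
    scale = {}
    for c in covariates:
        vals = [p[c] for p in pooled]
        mean = sum(vals) / len(vals)
        sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
        scale[c] = sd

    def dist(a, b):
        return sum(((a[c] - b[c]) / scale[c]) ** 2 for c in covariates) ** 0.5

    available = list(noncoerced)
    pairs = []
    for client in coerced:
        match = min(available, key=lambda x: dist(client, x))
        available.remove(match)  # match without replacement
        pairs.append((client, match))
    return pairs

# Invented example: match on age and prior arrests; 'outcome' is
# months abstinent at follow-up.
coerced = [
    {"age": 29, "priors": 3, "outcome": 6},
    {"age": 41, "priors": 1, "outcome": 9},
]
noncoerced = [
    {"age": 30, "priors": 3, "outcome": 7},
    {"age": 55, "priors": 0, "outcome": 12},
    {"age": 40, "priors": 1, "outcome": 8},
]
pairs = nearest_neighbor_match(coerced, noncoerced, ["age", "priors"])
diffs = [c["outcome"] - n["outcome"] for c, n in pairs]
print(sum(diffs) / len(diffs))  # mean within-pair outcome difference
```

The limitation the text describes is visible here: matching can only balance the covariates the researcher observes and records (age, priors). Any unmeasured difference between coerced and noncoerced clients, such as motivation, survives the matching untouched.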
Finally, one could argue for the effectiveness of drug treatment by analogy to other behavior change interventions that have been more rigorously assessed. Other forms of psychotherapy have fared well under randomized, no-treatment control experiments. As discussed in Chapter 7, Lipsey and Wilson (1993) provided a comprehensive review of these literatures and an enormously ambitious “meta-meta-analysis” of 302 published meta-analyses of treatment interventions. These meta-analyses did not include cocaine or opiate treatment, but they did include arguably similar interventions such as cognitive therapy for depression, tobacco cessation, and weight control. Across the 302 meta-analyses, they reported an average effect size for behavior change interventions of about half a standard deviation; 90 percent were greater than or equal to 0.10, and 85 percent were greater than or equal to 0.20. For smoking cessation, the effect sizes ranged from 0.21 to 0.62 in magnitude; all were reliably above zero. But none of these interventions is perfectly analogous to treatment for psychoactive drug dependence. There are undoubtedly differences across domains in client characteristics, etiology, mechanisms of pathol-