tingent on their own drug use patterns. Participants in the contingent voucher condition showed greater reductions in cocaine use than those in the noncontingent voucher condition; importantly, the noncontingent vouchers significantly reduced attrition from the study. Thus, it appears that vouchers reduce dropout rates, but that contingent vouchers promote reductions in use that are not attributable solely to remaining in treatment. This study shares a weakness of the Higgins et al. study: a small sample size that limits the statistical power of the analyses.
It is useful to contrast these studies with some of the major American treatment outcome research initiatives of the past 30 years:
The Drug Abuse Reporting Program (DARP—see Simpson and Sells, 1982, 1990),
The Treatment Outcome Prospective Study (TOPS—see Hubbard et al., 1989), and
The Drug Abuse Treatment Outcome Study (DATOS—see Simpson and Curry, 1997).
DARP, TOPS, and DATOS were three large-scale, multisite, multi-investigator initiatives involving tens of thousands of clients, hundreds of clinicians, and a broad range of treatment modalities, therapeutic techniques, client characteristics, and drug abuse patterns. These were ambitious efforts that addressed multiple goals. One goal was descriptive: to characterize the universe of treatment clients, settings, and modalities in the United States. Another goal was inferential: to assess the effects of drug treatment on various client outcomes. Arguably, programs like the Treatment Episode Data Set (TEDS) and the National Drug and Alcohol Treatment Unit Survey (NDATUS) are better suited to the routine collection of aggregate descriptive statistics about trends in the national delivery of drug treatment services. For the second, inferential goal, in the committee's judgment, future research funds would be better spent on a large number of randomized clinical trials, with cross-site extensions and replications. Because they lacked randomized assignment to condition, DARP, TOPS, and DATOS could not provide rigorous evidence on the relative effectiveness or efficacy of particular drug-by-treatment combinations, nor a sound basis for estimating the absolute effect size, cost-effectiveness, or benefit-cost ratio of treatment. The committee recommends that priorities for the funding of treatment evaluation research be changed; large-scale, national treatment inventory studies should not be conducted at the expense of greater funding for randomized controlled clinical trials.