Biglan, et al., 2005). One of the 31 requirements for an intervention to be designated “effective” is evaluation under real-world conditions (Flay, Biglan, et al., 2005). The importance of this criterion was recently demonstrated by Hallfors, Pankratz, and Hartman (2007), who tested a drug abuse prevention intervention that had been designated as a model program by SAMHSA (under the old CSAP system)17 and as “research-based” by NIDA on the basis of efficacy trial data. In a large, multisite effectiveness trial, the main effects were either null or favored the control group over the experimental group. The authors argue that small efficacy trials provide insufficient evidence for the selection of interventions at the community level (see also Chapter 10).

Linking Research and Services

Identifying strategies for effective implementation of evidence-based programs is a clear future research priority (see Chapter 11). NIH has undertaken several efforts to facilitate this process from a research perspective. First, in response to a general lack of knowledge about how to disseminate and implement effective prevention programs, it is convening trans-NIH forums to prepare applicants for new grant programs on dissemination and implementation research, programs that explore the characteristics of communities, interventions, and system change that affect prevention outcomes in community settings. Ideally, researchers and community organizations will develop partnerships that move this next generation of research forward in a productive manner.

In addition, NIMH’s agenda for facilitating prevention programs is laid out in the Bridging Science and Service report (National Institute of Mental Health, 2006a) and follow-up reports. The agenda emphasizes linking NIMH’s implementation research with the ongoing funding of programs by other federal agencies responsible for service delivery, including SAMHSA and ED. It specifies that key research questions should focus on the mechanisms of successful implementation, particularly with ethnic minority populations.

Programs designed to fund services typically do not provide adequate funding for rigorous evaluations, and when programs are evaluated, the designs typically do not include random assignment. Although both the SSHS Program and the Strategic Prevention Framework encourage evaluation, no national evaluation information is currently available. SSHS has published a sample of data from local evaluations, with promising evidence of positive outcomes. These evaluations do not appear, however, in the published scientific literature.

17 The program is being re-reviewed by NREPP, but it is unclear whether this study will be included in the review.


