interventions state-, county-, or locality-wide are needed to establish an infrastructure for the delivery of preventive interventions across systems of care.
Recommendation 12-3: The U.S. Departments of Health and Human Services, Education, and Justice should fund states, counties, and local communities to implement and continuously improve evidence-based approaches to mental health promotion and prevention of MEB disorders in systems of care that work with young people and their families.
A dizzying array of technical assistance centers, online resources, publications, and guides is available. Prominent among them are efforts to identify effective programs. These efforts differ, however, particularly in the standards they apply, making it difficult to interpret an assigned rating or to gauge the expected results of a given program.
Recommendation 12-4: Federal and state agencies should prioritize the use of evidence-based programs and promote the rigorous evaluation of prevention and promotion programs in a variety of settings in order to increase the knowledge base of what works, for whom, and under what conditions. The definition of evidence-based should be determined by applying established scientific criteria.
In applying scientific criteria, the agencies should consider the following standards:
Evidence for the efficacy or effectiveness of prevention and promotion programs should be based on designs that provide significant confidence in the results. The highest level of confidence comes from multiple well-conducted randomized experimental trials; in most cases, inferences should be drawn from their combined results. Single trials that randomize individuals, places (e.g., schools), or time (e.g., wait-list or some time-series designs) can each contribute to this kind of strong evidence of intervention impact.
When evaluations with such experimental designs are not available, evidence for efficacy or effectiveness cannot be considered definitive, even if it is based on the next strongest designs, including those with at least one matched comparison group. Designs that lack any control group (e.g., pre-post comparisons) are weaker still.
Programs that have widespread community support as meeting community needs should be subject to experimental evaluations before being considered evidence-based.