THE SUBSTANCE ABUSE AND MENTAL HEALTH SERVICES ADMINISTRATION’S NATIONAL REGISTRY OF EVIDENCE-BASED PROGRAMS AND PRACTICES1
The National Registry of Evidence-based Programs and Practices (NREPP) provides publicly available electronic access to information on more than 350 substance abuse and mental health interventions. NREPP’s registry and review system was established to inform the public about evidence-based programs and practices that are available for implementation. The registry, run by the Substance Abuse and Mental Health Services Administration (SAMHSA), includes only interventions that undergo NREPP’s review process, which yields information on research quality and impacts on individual outcomes. However, it should be noted that some, but not all, of the evidence presented online has been reviewed. The registry is therefore not fully comprehensive, and NREPP does not recommend or endorse specific interventions. Instead, the registry is intended as a tool for use in designing an intervention to meet specific needs.
The information provided by NREPP includes a program profile that gives
- a description of the program, the population(s) served, and the program’s major components and goals;
- key study findings and ratings for outcomes (both positive and negative);
- a compilation of evaluations of the effectiveness of the program;
- dissemination and implementation information.
This information is updated over time.
Typically, interventions that are candidates for inclusion in the registry are submitted by developers or other interested parties or found through environmental scans, such as literature searches by staff, or through agency nominations. NREPP then screens the interventions to determine whether they are eligible for review. To be eligible, an intervention must meet three minimum requirements:
- The intervention’s research or evaluation must measure mental health or substance abuse outcomes, or behavioral health-related outcomes for individuals with, or at risk of, mental health issues or substance use problems.
- Evidence of outcomes must have been found in a minimum of one experimental or quasi-experimental study.
- Results of the study/studies must have been published in a professional publication such as a peer-reviewed journal or included in an eligible comprehensive evaluation report.
NREPP recently revised its review criteria and ratings. The new review process is intended to improve the quality of the reviews themselves as well as the information they yield. Programs that are eligible for review are rated as effective, promising, or ineffective. These new ratings are intended to make it easier for users to find evidence-based programs that can address their specific needs. From September 2015 through June 2019, NREPP will be re-reviewing all programs currently in the registry.
Previously, programs were given a rating for the quality of research for each outcome assessed, as well as for the program’s overall readiness for dissemination, on a scale of 0 to 4, with 4 being the highest rating. Higher scores indicated stronger, more persuasive evidence. Outcomes were rated individually because programs could aim to achieve more than one outcome (e.g., decreased substance use and improved parent-child relationships), and the evidence for each outcome could differ. Until the updated reviews have been completed, the results of the previous review process will be the only information available; a brief description of the criteria used to rate programs under that process is provided in Box B-1.
Now, new interventions that qualify for the registry undergo a review process that begins with information gathering and a literature search for
relevant evaluation studies and eligible outcomes that meet minimum criteria. Eligible outcomes presently include mental health, substance abuse, and wellness. Next, an expert review performed by two certified reviewers assesses the rigor of each study and its impact on outcomes. The outcomes are reviewed using an NREPP outcome rating instrument and are judged on the basis of four dimensions: rigor, effect size, program fidelity, and conceptual framework (see Box B-2).
After all eligible measures or effects have been rated, the scores for each outcome are calculated, an evidence class for each measure is determined, and an outcome rating is determined (see Figure B-1).
First, the evidence class is determined based on the evidence score (a combination of the rigor and fidelity dimensions) and the effect class (based on the confidence interval of the effect size). The evidence classes are as follows:
- Class A: highest-quality evidence with confidence interval completely within the favorable range
- Class B: sufficient evidence with confidence interval completely within the favorable range
- Class C: sufficient or highest-quality evidence with confidence interval spanning both the favorable and trivial ranges
- Class D: sufficient or highest-quality evidence with confidence interval completely within the trivial range
- Class E: sufficient or highest-quality evidence with confidence interval spanning both the harmful and trivial ranges
- Class F: sufficient or highest-quality evidence with confidence interval completely within the harmful range
- Class G: limitations in the study design preclude reporting further on the outcome
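The class assignments above amount to a lookup on two inputs: the quality of the evidence and where the effect-size confidence interval falls. A minimal sketch in Python, for illustration only (the function name and the string labels are paraphrases of the list above, not NREPP terminology, and NREPP’s underlying numeric thresholds are not given here):

```python
def evidence_class(quality: str, ci_range: str) -> str:
    """Assign an NREPP-style evidence class (illustrative sketch).

    quality:  "highest" or "sufficient"; anything else is treated as a
              study design limitation (Class G).
    ci_range: where the effect-size confidence interval falls:
              "favorable", "favorable+trivial", "trivial",
              "trivial+harmful", or "harmful".
    """
    if quality not in ("highest", "sufficient"):
        return "G"  # design limitations preclude reporting further
    if ci_range == "favorable":
        # Only Class A requires highest-quality evidence; Class B
        # accepts sufficient evidence with the same favorable interval.
        return "A" if quality == "highest" else "B"
    # Classes C-F accept either sufficient or highest-quality evidence
    # and differ only in where the confidence interval falls.
    return {
        "favorable+trivial": "C",
        "trivial": "D",
        "trivial+harmful": "E",
        "harmful": "F",
    }.get(ci_range, "G")
```

For example, highest-quality evidence with a fully favorable confidence interval yields Class A, while sufficient evidence whose interval spans the trivial and harmful ranges yields Class E.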
Next, the outcome rating is determined on the basis of the outcome scores and the conceptual framework. The outcome scores are calculated from the evidence classes of each component measure, and the rating of the conceptual framework is a determination of whether a program has clear goals, activities, and a theory of change. The possible outcome ratings are
- Effective: strong evidence of a favorable effect
- Promising: sufficient evidence of a favorable effect
- Ineffective: sufficient evidence of a negligible effect OR sufficient evidence of a possibly harmful effect
- Inconclusive: study design limitations or a lack of effect size information precludes reporting further on the effect
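The four rating definitions above can likewise be read as a decision rule over evidence strength and effect direction. A minimal sketch under that reading (the input labels are paraphrased from the definitions; the actual crosswalk from evidence classes to ratings is shown in Figure B-1):

```python
def outcome_rating(evidence: str, effect: str) -> str:
    """Map evidence strength and effect direction to an NREPP-style
    outcome rating, paraphrasing the definitions above (illustrative).

    evidence: "strong", "sufficient", or "insufficient"
              ("insufficient" stands in for study design limitations).
    effect:   "favorable", "negligible", "harmful", or "unknown"
              ("unknown" stands in for missing effect size information).
    """
    if evidence == "insufficient" or effect == "unknown":
        return "Inconclusive"  # limitations preclude reporting further
    if effect == "favorable":
        # Strong evidence of a favorable effect -> Effective;
        # sufficient evidence of a favorable effect -> Promising.
        return "Effective" if evidence == "strong" else "Promising"
    # Sufficient evidence of a negligible or possibly harmful effect.
    return "Ineffective"
```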
BLUEPRINTS FOR HEALTHY YOUTH DEVELOPMENT2
Blueprints for Healthy Youth Development is a registry of evidence-based programs for positive youth development created by the Center for the Study and Prevention of Violence at the University of Colorado Boulder. The registry is intended to be a source of information for decision makers investing in programs with a goal of promoting positive youth development. Positive youth development includes academic performance and success, health and well-being, and positive relationships. Programs can be family-, school-, or community-based and can have a variety of goals (e.g., violence prevention, reduced school delinquency, reduced substance use, improved mental and physical health, improved self-regulation, higher educational achievement). The registry is intended to provide users with information on evidence-based programs that meet high standards. Thus far, more than 1,300 programs have been reviewed, and fewer than 5 percent have been found to meet the review criteria. For programs that meet the criteria, users can find a program description as well as information on outcomes, target population, risk and protective factors, training and technical assistance, evaluation methodology, program costs, funding strategies, benefits and costs, and references.
Program reviews are conducted first by staff and then by an advisory board of seven youth development experts to determine whether a program meets the criteria of (1) evaluation quality, (2) intervention impact, (3) intervention specificity, and (4) dissemination readiness. (See Box B-3 for more detail.) Programs meeting these criteria have demonstrated at least some effectiveness in changing targeted behavior and developmental outcomes. These programs are then added to the registry with a rating of Model or Promising.
Model programs meet higher standards than those met by Promising programs and offer greater confidence in the program’s ability to modify behavior and developmental outcomes. These programs are suggested for use in large-scale implementation, such as at the national or state level. Model programs meet the four criteria above and two additional requirements:
- Evaluation Quality: Evidence is required from two high-quality randomized controlled trials (RCTs) or one RCT and one quasi-experimental evaluation.
- Intervention Impact: A minimum of one long-term follow-up (at least 12 months after the intervention has ended) on at least one outcome measure must indicate that results have been sustained following the intervention. Data on sustainability are required for both program and control groups. For interventions designed to extend over many years, evidence that effects have been sustained after several years of participation, even while participation is continuing, will be accepted as evidence of sustainability.
Programs rated as Model that also have been independently replicated in a high-quality manner are designated Model Plus.
Promising programs meet the four criteria elaborated in Box B-3 and are recommended for local community and system adoption. Promising programs do not have to meet the additional evaluation quality and intervention impact requirements for Model programs listed above.
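The Model/Promising/Model Plus distinctions described above can be summarized as a decision rule. A minimal sketch, for illustration only (the function name and boolean inputs are hypothetical simplifications; the actual Blueprints review rests on expert judgment by staff and the advisory board, not a mechanical check):

```python
def blueprints_rating(meets_core_criteria, n_rcts, n_quasi,
                      sustained_12mo, independently_replicated):
    """Illustrative decision rule for Blueprints ratings.

    meets_core_criteria: the four Box B-3 criteria (evaluation quality,
    intervention impact, intervention specificity, dissemination
    readiness). Returns None if a program does not qualify for listing.
    """
    if not meets_core_criteria:
        return None  # not added to the registry
    # Model additionally requires two high-quality RCTs, or one RCT plus
    # one quasi-experimental evaluation, and a sustained effect at a
    # long-term follow-up (at least 12 months post-intervention).
    model_evidence = n_rcts >= 2 or (n_rcts >= 1 and n_quasi >= 1)
    if model_evidence and sustained_12mo:
        # Model Plus adds a high-quality independent replication.
        return "Model Plus" if independently_replicated else "Model"
    return "Promising"
```

For example, a qualifying program with one RCT, one quasi-experimental evaluation, and a sustained 12-month follow-up would be rated Model; with only one RCT it would remain Promising.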
THE CALIFORNIA EVIDENCE-BASED CLEARINGHOUSE FOR CHILD WELFARE3
The California Evidence-Based Clearinghouse for Child Welfare (CEBC), funded by the California Department of Social Services’ Office of Child Abuse Prevention, is a database of child welfare-related programs intended to provide information and resources for child welfare professionals. The mission of the clearinghouse is to advance the effective implementation of evidence-based practices for children and families involved in the child welfare system. CEBC provides descriptions of and information on research evidence for specific programs, as well as implementation guidance. All programs are categorized by topic area; assignment to a topic area is based on clear definitions and requirements that programs must meet. Requirements also specify which program outcomes the research evidence must demonstrate for a program to be rated within each topic area.
The CEBC review process starts with the selection of topic areas, which is performed annually by an advisory committee. A list of possible programs to be included in each topic area is then generated based on information from topic experts and literature searches conducted by staff. Representatives of each of these programs are then contacted with a list of screening questions. If the program passes the screening, its representatives receive a questionnaire to complete, and a literature search is conducted for any relevant published peer-reviewed research literature. A program outline is then created, and study outcomes are summarized from the research literature. Outcomes of focus relate to child welfare and include safety, permanency, and child/family well-being. Program outlines that meet one of the five categories of the CEBC Scientific Rating Scale (see Figure B-2) are then sent to raters (usually the topic expert and two staff). The scale’s purpose is to assess each program based on the available research evidence.
Scientific Rating Scale
The Scientific Rating Scale is a 1 to 5 rating of the strength of the research evidence supporting a program. A rating of 1 represents a program with the strongest research evidence, while a 5 represents a concerning practice that appears to pose substantial risk to children and families. A rating of 2 indicates that the program is supported by research evidence, 3 indicates promising research evidence, and 4 indicates that the evidence fails to demonstrate an effect. Specific criteria for each rating are presented in Box B-4. Some programs currently lack strong enough research evidence to be rated on the Scientific Rating Scale and are classified as NR (Not Able to Be Rated). A rating of NR does not mean that a program is ineffective.
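The scale described above reduces to a simple lookup. A sketch for reference (the short labels paraphrase the descriptions here rather than CEBC’s official category names; Box B-4 gives the full criteria):

```python
# CEBC Scientific Rating Scale, as summarized above (full criteria in
# Box B-4). Labels paraphrase the text; official CEBC names may differ.
SCIENTIFIC_RATING_SCALE = {
    1: "strongest research evidence",
    2: "supported by research evidence",
    3: "promising research evidence",
    4: "evidence fails to demonstrate effect",
    5: "concerning practice (appears to pose substantial risk)",
    "NR": "not able to be rated (not a judgment of ineffectiveness)",
}

def describe_rating(rating):
    """Return the description for a CEBC rating, e.g. 3 or "NR"."""
    return SCIENTIFIC_RATING_SCALE[rating]
```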
Program ratings are evaluated on an ongoing basis as new research is published, and programs are rerated if necessary. Intermittent re-reviews are conducted to look for new published, peer-reviewed research on programs already rated. Program representatives also can submit new published, peer-reviewed studies to initiate the re-review process at any time.
Child Welfare System Relevance Levels
In addition to its assigned rating, each program included in the database is reviewed to determine how child outcomes are addressed in the program’s research evidence. The topic expert and staff review the target population and goals of the program to determine a Child Welfare System Relevance Level of high, medium, or low. Programs rated high are designed or commonly used to meet the needs of children and families receiving child welfare services. Those rated medium are designed or commonly used to serve children and families similar to child welfare populations and likely include current and former child welfare participants. Finally, programs rated low serve children and families with little or no apparent similarity to child welfare participants.