Prepublication Copy, Uncorrected Proofs

Chapter 7
Assessing the Evidence on Interventions

Part II of this report examines a wide range of evidence that has expanded understanding of the efficacy of many sorts of interventions designed to promote healthy mental, emotional, and behavioral (MEB) development and prevent disorder. In many areas, findings that have emerged since 2009 provide the basis for more productive directions and shifts in emphasis for policymakers and others who are tackling specific MEB-related challenges.

The evidence for some interventions is well established. Much of it has been synthesized in meta-analyses of randomized controlled trials and other strong study designs, and in many cases the research has tracked potential effects over long periods. Looking across Chapters 3 through 6, we note, for example, strong evidence for the efficacy of programs that support and promote effective parenting and family bonding, screen pregnant women and mothers for depression and offer effective treatments, and teach children from preschool through grade 12 social and emotional skills and mindfulness. Emerging evidence supports recommendations for integrating MEB health promotion and disorder prevention into primary health care. On the policy front, we have seen evidence that long-standing programs, such as Medicaid and the Earned Income Tax Credit, can have direct benefits for the MEB health of children and youth. For other interventions, such as parent training to reduce substance use disorders or prevent child abuse, whole-school approaches to establishing a positive school climate, or prenatal parenting education to reduce pregnant women's substance use, researchers are still exploring mechanisms for achieving desired outcomes and other questions. Existing work, however, points to significant potential benefits.
This body of evidence is foundational: the necessary basis for selecting programs for dissemination and implementation on a broad scale, which is the path to meaningful improvements in MEB health for U.S. children and youth. We close Part II with our reflections on the nature of the research on interventions relevant to MEB health and the broad implications of this body of work.

We begin this discussion by noting that it is common to speak of the value of "evidence-based" interventions without necessarily defining what "evidence" means in general or in specific contexts. The research reviewed in Part II on the primary points of access for improving MEB health (families and caregivers, schools, the health care system, and government policy) includes randomized controlled trials, quasi-experimental studies, and qualitative studies. We have described many individual studies to illustrate the sorts of interventions that are available and some of the ways researchers have tried to identify the components of effectiveness. Where large-scale studies with strong experimental designs or meta-analyses are available, we have highlighted them.

Evidence is often reported in terms of effect size, reflecting the degree to which a hoped-for outcome was achieved. The research we have reviewed consists primarily of efficacy studies (which examine whether an intervention can produce the intended result when
the study circumstances are tightly controlled) and a limited number of effectiveness studies (which examine whether an intervention can produce the intended result in more complex, real-world circumstances) (Gartlehner et al., 2006). In efficacy and effectiveness studies related to MEB outcomes, effect sizes vary but average approximately 0.3; a robust effect size would be 0.5 or above. Even when the outcome for a study group is statistically significant, then, many study participants do not show the expected benefit. Effect sizes are usually lower in effectiveness and dissemination studies than in efficacy studies, which suggests that in scaled interventions, expectations for effect sizes should be relatively modest, even when the intervention is faithfully implemented.

Obviously, effect size is an important consideration when selecting an intervention for implementation at a population level, but other factors are also important. One is that, for many reasons, people eligible to participate in an intervention often decline to do so. For example, refusal rates for home visiting can be as high as 50 percent, and those who agree to participate are often those who least need the intervention (Ammerman et al., 2015). Most reports of outcomes from efficacy and effectiveness studies do not include data on enrollment rates, although this situation may be changing. To the extent that rates of program dropout are reported, they, too, are high, as discussed in Chapter 8. Few data are available on outcomes when participation is limited. In practice, for an intervention with a favorable effect size of 0.5 in efficacy studies, participation and retention rates of 50 percent each, and an effectiveness effect size that is 75 percent of that found in efficacy studies, the calculation might show actual benefit for fewer than 10 percent of the families or children projected to benefit.
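This kind of back-of-envelope attenuation can be sketched numerically. The sketch below is illustrative only: it assumes the share of program completers who benefit beyond chance can be approximated from the effect size via the normal distribution (Cohen's U3 minus 0.5), which is one of several plausible conventions and not necessarily the calculation the literature itself uses.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

efficacy_d = 0.5            # favorable effect size found in efficacy studies
participation = 0.5         # half of eligible families enroll
retention = 0.5             # half of enrollees complete the program
effectiveness_ratio = 0.75  # real-world effect is 75% of the efficacy effect

# Effect size under real-world delivery (illustrative assumption: simple scaling).
real_world_d = efficacy_d * effectiveness_ratio  # 0.375

# Rough share of completers who do better than the typical untreated child,
# beyond the 50% expected by chance alone (Cohen's U3 minus 0.5).
share_benefiting = normal_cdf(real_world_d) - 0.5  # about 0.15

# Fraction of all eligible children who enroll, complete, and show a benefit.
reached_and_benefiting = participation * retention * share_benefiting
print(f"{reached_and_benefiting:.1%}")  # about 3.7%, well under 10 percent
```

Under these assumptions, only a few percent of all eligible children both complete the program and show a benefit beyond chance, consistent with the text's estimate of fewer than 10 percent.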
These considerations mean that it is also important to report and consider the number of eligible participants who enroll in a study, the number of enrollees who complete it, and what the benefit might be for partial participation, as well as other factors that affect the potential for successful implementation, such as program complexity, cost, and the infrastructure and workforce needed to create and sustain the program. If these factors were routinely reported and discussed in published studies, it would be easier to consider them carefully when planning scale-up, organizing quality improvement efforts to achieve the best outcomes, and assessing the population-level value of an intervention. For example, benefit-cost calculations have been based on outcomes of efficacy studies and have not routinely factored in these considerations; they therefore may overestimate potential benefits.

We also note that many randomized controlled trials of the sorts of interventions discussed in this report have a risk of bias because the study conditions are not blinded, or for other reasons (e.g., Allen et al., 2016; Yap et al., 2016). Blinding is not always feasible in the context of intervention trials, particularly those delivered in real-world settings. It is also worth noting that strictly controlled laboratory experiments often lack generalizability to real-world settings (external validity), whereas intervention trials conducted in unmodified settings that serve children and families (pragmatic trials) can provide valuable information to support implementation and dissemination. This discussion of methodological issues reveals not weaknesses in the interventions studied but limitations on the certainty provided by even the most rigorous research, given the complexity of factors that influence developmental outcomes for children and youth.
In short, it is important to look beyond the expectation that "evidence-based" programs, as defined in the past, can, if widely adopted, achieve the desired benefits for all or even a substantial proportion of children in need.
Without a doubt, research is needed to continue building on this foundation. But the problem that has kept a body of knowledge that was already strong 10 years ago from bringing meaningful improvement at the population level is, in the committee's judgment, not whether and how interventions work in an experimental setting, but how to effectively reach and benefit children and families at a population level, the subject of Part III.
REFERENCES

Allen, M.L., Garcia-Huidobro, D., Porta, C., Curran, D., Patel, R., Miller, J., and Borowsky, I. (2016). Effective parenting interventions to reduce youth substance use: A systematic review. Pediatrics, 138(2).
Ammerman, R.T., Altaye, M., Putnam, F.W., Teeters, A.R., Zou, Y., and Van Ginkel, J.B. (2015). Depression improvement and parenting in low-income mothers in home visiting. Archives of Women's Mental Health, 18(3), 555-563.
Gartlehner, G., Hansen, R.A., Nissman, D., Lohr, K., and Carey, T.S. (2006). Criteria for Distinguishing Effectiveness From Efficacy Trials in Systematic Reviews. Rockville, MD: Agency for Healthcare Research and Quality. Available: https://www.ncbi.nlm.nih.gov/books/nbk44029/.
Yap, M.B.H., Morgan, A.J., Cairns, K., Jorm, A.F., Hetrick, S.E., and Merry, S. (2016). Parents in prevention: A meta-analysis of randomized controlled trials of parenting interventions to prevent internalizing problems in children from birth to age 18. Clinical Psychology Review, 50, 138-158.