conducted in Hawaii yielded disappointing results, with as many negative as positive impacts on key family process outcomes (Duggan, McFarlane, et al., 2004). The evaluators offered several potential explanations: the program may have been poorly implemented, as half (51 percent) of the parents dropped out within the first year and participating families received fewer home visits than intended; the paraprofessional staff may not have had sufficient skills to identify high-risk situations and engage parents in reducing the risks associated with abusive parenting; and a funding-driven shift away from recognizing and addressing risks for abusive parenting toward an early intervention philosophy of parent-driven goal setting may have compromised the program's effectiveness.

A recent evaluation of an augmented HFA program, with a sharper focus on using cognitive appraisal theory to reduce risks for abuse and neglect, as well as better implementation practices, yielded considerably more favorable results compared with both the unenhanced HFA program and a control group that did not receive any home visiting services (Bugental, Ellerson, et al., 2002). These positive findings were particularly evident for medically vulnerable infants, such as those born prematurely or those with low Apgar scores (assessing physical condition after delivery) at birth. Although the study was small and thus in need of replication, this finding illustrates the current effort among the nation’s largest home visiting models to use evaluation findings to promote program improvement. The lessons learned from this study (i.e., the importance of engaging families, providing high-quality training and ongoing supervision of staff, and ensuring consistent and well-implemented service delivery) illustrate the value of program accountability as a strategy for continuous enhancement rather than as a vehicle for terminating potentially effective services that produce initially disappointing results. Similarly, lessons learned from the Healthy Families New York evaluation (see Chapter 6) reinforce the likely value of targeting first-time mothers with limited resources.

The implementation of quality early childhood programs at scale continues to be a vexing problem in the field. Few evaluations have linked implementation quality to the magnitude of impacts on children’s social-emotional or mental health outcomes. Research from Early Head Start’s 17-site evaluation has shown that sites with earlier and more complete implementation, as measured by the Early Head Start Program Performance Standards, had stronger positive impacts on child social-emotional and cognitive outcomes than those with later or less complete implementation (Love, Kisker, et al., 2002). Other research on quality in at-scale programs has found that Head Start classrooms in general are in the “good” (though not “excellent”) range of quality on the Early Childhood Environment Rating Scale (Administration for Children and Families, 2006). Research on

Copyright © National Academy of Sciences. All rights reserved.