for comprehensive treatment programs, but these analyses have rarely been conducted. Studying interactions between child or family features and treatment requires a sample large enough to provide sufficient statistical power to detect a difference. For example, in one study, children diagnosed as having autism or pervasive developmental disorder were randomly assigned to an intensive intervention program based on the UCLA Young Autism Project model or to a parent training model. Although children with pervasive developmental disorder appeared to score consistently higher than children with autism on some measures, there were no significant differences between groups (Smith et al., 2000). The authors attributed the failure to find significant differences to the small sample size (6–7 children in each subgroup in each experimental condition).

In another example, Harris and Handleman (2000) examined class placements of children with autism 4–6 years after they had left a comprehensive early intervention program. In an aptitude-by-treatment interaction type of analysis, they found that children who entered the program at an earlier age (mean = 46 months) and had relatively higher IQ scores at intake (mean = 78) were significantly more likely to be in regular class placements, whereas children with relatively lower IQ scores at intake who entered the program later (mean = 54 months) were more likely to be placed in special education classes. Even with a relatively small number of participants (28), the robustness of this finding provided information about the characteristics of the children most likely to benefit from the program.
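
The power issue can be made concrete with a short calculation. The sketch below is purely illustrative and is not based on data from either study: it assumes a conventionally "large" standardized effect (Cohen's d = 0.8) and uses the statsmodels power routines to show how unlikely a two-group comparison with 6–7 children per group is to detect even an effect of that size, and how many children per group would be needed to reach 80 percent power.

    # Illustrative power calculation; the effect size and alpha are assumptions,
    # not values taken from Smith et al. (2000) or Harris and Handleman (2000).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    for n_per_group in (6, 7, 20, 60):
        # effect_size is Cohen's d; 0.8 is conventionally considered "large"
        power = analysis.power(effect_size=0.8, nobs1=n_per_group,
                               ratio=1.0, alpha=0.05)
        print(f"n per group = {n_per_group:2d}  ->  power = {power:.2f}")

    # Sample size per group needed to reach 80% power for the same effect size
    needed = analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05, ratio=1.0)
    print(f"n per group for 80% power: {needed:.0f}")

Under these assumed values, groups of 6–7 children yield power well below 50 percent even for a large effect, which is consistent with the authors' interpretation that the nonsignificant results reflect limited sample size rather than the absence of a difference.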

Fidelity of Treatment

In addition to assessing outcome measures, it is important for researchers examining the effects of educational interventions to verify that the treatment was delivered as intended. Measurement of the delivery of an individual intervention practice or comprehensive intervention program has been called fidelity of treatment, treatment implementation, and procedural reliability (Billingsley et al., 1980; Hall and Loucks, 1977). Here we use the term treatment fidelity.

Treatment fidelity requires that researchers operationally define their intervention or the components of their comprehensive program well enough that they or others can assess the degree to which the procedures have been carried out. Such assessment can take different forms (e.g., direct observation using discrete behavioral categories, checklists). For example, staff of the LEAP preschool program (see Chapter 12) have developed a set of fidelity-of-treatment protocols that assess whether eight components of the program are being implemented: positive behavioral guidance, interactions with families, teaching strategies, interactions with children, classroom organization and planning, teaching communication
