ment programs. For individual intervention techniques, interventions addressing language and communication skills (see Goldstein, 1999) and problem behaviors (see Horner et al., 2000) are the most often replicated by different investigators.
Independent measurement or verification of treatment outcomes is another important issue. The potential for experimenter bias exists when outcome assessments are conducted by individuals who know the nature of the research study, the treatment groups to which children are assigned, and the phases of the study in which children are participating. In most group and single-subject design research, outcome data are collected by project staff, which may introduce a confounding effect. This effect can be countered by having blind or naive assessors collect pre- and posttreatment outcome data for group designs and daily performance data for single-subject designs. Also, for single-subject designs, assessment of the socially important outcomes of an intervention by individuals outside the project, called “social validity” assessment (Schwartz and Baer, 1991; Wolf, 1978), provides some control of potential bias by observers, raters, and testers.
In experimental group designs, the mean performance of children on outcome measures and the standard deviation are generally reported for each group. The standard deviation describes the variation of outcome scores around the mean. In group-design studies, children make different amounts of progress, with some scoring much higher and some much lower than the mean. Analysis of group means alone does not indicate which children benefited most or least from treatment.
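The limitation described above can be made concrete with a small numerical sketch. The gain scores below are invented for illustration; they are not data from any study cited in this chapter. The point is that a group mean and standard deviation summarize overall progress while hiding which individual children improved.

```python
# Hypothetical posttest gain scores for one treatment group.
# The data are illustrative only, not drawn from the studies cited in the text.
from statistics import mean, stdev

gains = [12, 11, 1, 0, 13, 2]   # individual children's gain scores

group_mean = mean(gains)        # average progress reported for the group
group_sd = stdev(gains)         # spread of individual scores around the mean

# The mean (6.5) summarizes the group, but the large standard deviation
# signals that some children gained far more than others -- information
# that an analysis of group means alone does not provide.
print(f"mean = {group_mean:.1f}, sd = {group_sd:.1f}")
```

Here three children gained 11 points or more while three gained almost nothing; the group mean of 6.5 describes none of them well, which is why the large standard deviation matters when interpreting group results.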
To learn which child characteristics are associated with performance, researchers analyze aptitude-by-treatment interactions (ATIs). For example, an examination of different language training curricula for preschool children with disabilities (not specifically autism) found no main effect for treatment; the two curricula appeared equally effective (Cole et al., 1991). When the investigators analyzed the treatment-by-aptitude interaction, however, they found that children who scored higher on pretest measures benefited more from a didactic language training approach, whereas children who scored lower at pretest benefited more from a responsive curriculum approach.
This type of aptitude-by-treatment-interaction analysis has the potential for providing valuable information about the characteristics of children with autistic spectrum disorders that are associated with outcomes
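The crossover pattern behind an ATI analysis can be sketched with invented numbers. The cell values below are hypothetical and chosen only to mimic the pattern reported for Cole et al. (1991): no overall difference between curricula, but opposite effects for high- and low-pretest children.

```python
# Illustrative sketch of an aptitude-by-treatment interaction (ATI).
# All gain scores are hypothetical, constructed to show a crossover:
# equal treatment means overall, but opposite effects by pretest aptitude.
from statistics import mean

# gain scores keyed by (aptitude, treatment) cell
cells = {
    ("high", "didactic"):   [10, 12, 11],
    ("high", "responsive"): [5, 6, 7],
    ("low", "didactic"):    [4, 6, 5],
    ("low", "responsive"):  [11, 9, 10],
}

def treatment_mean(treatment):
    """Mean gain for one treatment across both aptitude levels (the main effect)."""
    scores = [g for (apt, trt), gains in cells.items()
              if trt == treatment for g in gains]
    return mean(scores)

def cell_mean(aptitude, treatment):
    """Mean gain within a single aptitude-by-treatment cell."""
    return mean(cells[(aptitude, treatment)])

# Main effect: the two curricula look equally effective overall (8.0 vs 8.0)...
print(treatment_mean("didactic"), treatment_mean("responsive"))

# ...but the interaction reveals who benefits from which approach:
# high-pretest children do better with the didactic curriculum,
# low-pretest children do better with the responsive curriculum.
for apt in ("high", "low"):
    print(apt, cell_mean(apt, "didactic"), cell_mean(apt, "responsive"))
```

In this sketch the identical treatment means (8.0 each) would lead a main-effects-only analysis to conclude that curriculum choice does not matter, while the cell means show that each curriculum is clearly better for one subgroup of children.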