usually within the same geographic area, and the willingness of the programs and parents to participate. This situation does not often occur.
Another issue related to random assignment is the heterogeneity of the population of children with autistic spectrum disorders. Most treatment studies, because of the prevalence of autistic spectrum disorders and the expense and labor intensity of treatment, will have small sample sizes. Random assignment within a relatively small, heterogeneous sample does not ensure equivalent groups, so a researcher may match children on relevant characteristics (e.g., IQ score, age) and then select from the matched sets to randomly assign children to control and treatment groups. As noted above, such stratification of the sample of participants requires a thorough description of the participants as well as confidence that the variable(s) on which children are matched are of greatest significance.
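The matching-then-randomizing procedure described above can be sketched in a few lines of code. This is an illustrative sketch only, not a procedure from the text: the participant records, variable choices (age and IQ), and the pairing of adjacent children after sorting are all assumptions made for the example.

```python
import random

# Hypothetical participant records: (id, IQ score, age in years).
# Values are illustrative, not drawn from any real study.
children = [
    ("c1", 55, 4), ("c2", 58, 4), ("c3", 72, 5), ("c4", 70, 5),
    ("c5", 90, 6), ("c6", 88, 6), ("c7", 61, 3), ("c8", 63, 3),
]

def matched_random_assignment(children, seed=0):
    """Sort on the matching variables (here age, then IQ), pair adjacent
    children, and randomly assign one member of each pair to each group."""
    rng = random.Random(seed)
    ordered = sorted(children, key=lambda c: (c[2], c[1]))
    treatment, control = [], []
    for a, b in zip(ordered[::2], ordered[1::2]):
        first, second = rng.sample([a, b], 2)  # random within the matched pair
        treatment.append(first)
        control.append(second)
    return treatment, control

treatment, control = matched_random_assignment(children)
```

Because assignment is random only *within* each matched pair, the two groups are balanced on the matching variables by construction, which is the point of stratifying a small, heterogeneous sample.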
An issue related to the size and heterogeneity of groups in the randomized clinical trial approach is statistical power (Cohen, 1988). Groups have to be large enough to detect a significant difference in treatment outcomes when it occurs. The smaller the group, the larger the difference in treatment outcomes must be to show a statistically significant effect. Also, variability on pretest measures, as may occur with heterogeneous samples, sometimes obscures treatment differences if the sample size is not sufficiently large. Because the number of children with autistic spectrum disorders enrolled in particular treatment programs often is not large, sample size and within-group variability are challenges to the use of randomized clinical trial methodology for determining the effectiveness of educational interventions for those children.
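The trade-off between group size and detectable effect size can be made concrete with a standard normal-approximation power calculation for a two-group comparison. This sketch is not from the text; it assumes a two-sided test at alpha = .05 and uses Cohen's standardized effect size d.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(d, n_per_group):
    """Approximate power of a two-sample comparison of means
    (normal approximation, two-sided alpha = .05, so z_crit = 1.96).
    d is Cohen's standardized effect size."""
    z_crit = 1.96
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

# With a medium effect (d = 0.5), roughly 64 children per group yield
# about 80% power, while 10 per group yield only about 20%.
power_large = approx_power(0.5, 64)
power_small = approx_power(0.5, 10)
```

The second figure illustrates the problem described above: with the group sizes typical of autism treatment studies, only very large treatment effects have a reasonable chance of reaching statistical significance.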
In contrast to group experimental designs, single-subject design methodology uses a smaller number of subjects and establishes the causal relationship between treatment and outcomes by a series of intrasubject or intersubject replications of treatment effects (Kazdin, 1982). The two most frequently used methods are the withdrawal-of-treatment design and the multiple baseline design.
In the withdrawal-of-treatment design, a baseline level of performance (e.g., frequency of stereotypic behavior or social interactions) is established over a series of sessions, and a treatment is applied in a second phase of the study. When reliable changes in the outcome variable occur, the treatment is withdrawn in the third phase of the study, and concomitant changes in the outcome variable are examined. Often, the treatment is reinstated in a fourth phase of the study, with changes in the outcome variable expected. Changes in the outcome variable (e.g., in