A Revised Meta-Analysis of the Mental Practice Literature on Motor Skill Learning

Concomitant with the cognitive revolution in psychology has been the resurgence of research on mental practice. As a specific form of practice, mental practice has also been referred to as symbolic rehearsal (Sackett, 1935), imaginary practice (Perry, 1939), covert rehearsal (Corbin, 1967), implicit practice (Morrisett, 1956), mental rehearsal (Whiteley, 1962), conceptualizing practice (Egstrom, 1964), mental preparation (Weinberg, 1982), and visualization (Seiderman & Schneider, 1983). According to Richardson (1967a, p. 95), "mental practice refers to the symbolic rehearsal of a physical activity in the absence of any gross muscular movements." Such covert activity is commonly observed among musicians and athletes prior to their performances. For example, when a gymnast imagines going through the motions of performing a still ring routine, he is engaged in mental practice.

Since the 1930s there have been over 100 studies on mental practice. The specific research question addressed in these studies has been whether a given amount of mental practice prior to performing a motor skill will enhance one's subsequent motor performance. Unfortunately, definitive answers to this question have not been readily forthcoming. Although there are existing narrative (Corbin, 1972; Richardson, 1967a, 1967b; Weinberg, 1982) and meta-analytic (Feltz & Landers, 1983) reviews of the mental practice literature, the conclusions have been contradictory. There is a need, therefore, to conduct a comprehensive review of the mental practice literature using more sophisticated meta-analytic procedures and examining more study features than were used in previous studies (e.g., Feltz & Landers, 1983).

MENTAL PRACTICE PARADIGMS

Most experiments on skill acquisition have been variants on a research design which employs four groups of subjects randomly selected from a homogeneous parent population or equated on initial levels of performance. These groups have been (a) mental practice, (b) physical practice, (c) combined physical and mental practice, and (d) no physical or mental practice (i.e., control). Most studies compared the performances (pre-post) of subjects who had previous mental practice to a control group that had not received mental practice instructions. In the mental practice group, the time intervening between pretest and posttest was usually occupied in sitting or standing and rehearsing the skill in imagination for a set amount of time. The members of the no-practice group were simply instructed not to practice the skill physically or mentally during the interval. A more appropriate control has required members of the no-practice group to participate in the same number of practice sessions as the mental and physical practice groups, but with activity that is irrelevant to the task. Quite often these groups were also contrasted to a combined mental and physical practice group and a group receiving physical practice. A practice period was then instituted which varied considerably in the number of trials per session and in the total number and spacing of practice trials. In combined mental-physical practice groups, practice periods involved either alternating mental and physical practice trials, mentally practicing a number of trials followed by physical practice, or physically practicing a number of trials followed by mental practice.

Following this practice period, the subjects' skills were tested under standard conditions to determine whether their performance scores differed as a result of the practice condition administered.

The scope of the present meta-analytic review is considerably broader than in previous reviews. Whereas Feltz and Landers (1983) limited their review to comparisons between mental practice and no practice, all four groups are compared in the present review. The previous meta-analytic study included only studies that had pretest scores or a control group with which to be compared. By contrast, the present review included only single- or multiple-group studies having both pretest and posttest scores. The use of pre-post designs permitted a determination of a change-score effect size for each group examined in this set of mental practice studies.

PREVIOUS REVIEWS

Research studies examining the effects of mental practice on motor learning and skilled performance have been reviewed on a selective basis. The reviews by Richardson (1967a) and Corbin (1972) included from 22 to 56 studies and provided contradictory conclusions. Richardson (1967a) reviewed studies of three types: (a) those that focused on how mental practice could facilitate the initial acquisition of a perceptual motor skill, (b) those that focused on aiding the continued retention of a motor skill, and (c) those that focused on improving the immediate performance of a skill.

He concluded that in a majority of the studies reviewed, mental practice facilitated the acquisition of a motor skill. There were not enough studies to draw any conclusions regarding the effect of mental practice on retention or immediate performance of a task. Five years later, Corbin (1972), who reviewed many other factors that could affect mental practice, was much more cautious in his interpretation of the effects of mental practice on acquisition and retention of skilled motor behavior. In fact, he maintained that the studies were inconclusive and that a host of individual, task, and methodological factors used with mental practice produced different mental practice results.

In a 1982 review of "mental preparation," Weinberg reviewed 27 studies dealing with mental practice. Although Weinberg noted the equivocal nature of this literature, he maintained that the following consistencies were apparent: (a) physical practice is better than mental practice; and (b) mental practice combined and alternated with physical practice is more effective than either physical practice or mental practice alone. The latter conclusion is similar to Richardson's (1967a) cautious inference that the combined practice group is as good as or better than physical practice trials only. Another conclusion reached by Weinberg (1982) was that for mental practice to be effective, individuals had to achieve a minimum level of skill proficiency. However, in their meta-analysis, Feltz and Landers (1983) found no significant differences between the effect sizes determined for novice and experienced performers.

It is not surprising that, with all of the significant and nonsignificant findings in the numerous mental practice studies, it is exceedingly difficult in these narrative reviews (Corbin, 1972; Richardson, 1967a; Weinberg, 1982) to discern any clear patterns. The insights about directions for future research provided in the previous reviews by Richardson (1967a), Corbin (1972), and Weinberg (1982) were helpful. In those reviews, however, the conclusions about mental practice effects may have been distorted for one or more of the following reasons: (a) too few studies were included to accurately portray the overall empirical findings in the area; (b) only a subset of possible studies was included, leaving open the possibility that bias on the reviewers' part may have influenced them to include studies that supported their position while excluding those that may have contradicted their beliefs; (c) although the reviewers speculated about a range of variables that may influence the effectiveness of mental practice, the style used in these reviews was more narrative and rhetorical than technical and statistical, thus making it difficult to systematically identify those variables; and (d) the reviews ignored the issue of relationship strength, which may have allowed weak disconfirmation, or the equal weighting of conclusions based on few studies with conclusions based on several studies (see Cooper, 1979). In other words, the earlier reviewers had a smaller pool of studies, and at that time more sophisticated tools for research integration were not widely available. Thus, some of their conclusions may no longer be tenable.

Given the current confusion that may have resulted from the basic limitations of previous reviews, there is a need for a more comprehensive review of existing research, using a more powerful method of combining results than summary impression. The methodology recommended for such a purpose is meta-analysis, which examines the magnitude of differences between conditions as well as the probability of finding such differences.

AN OVERVIEW OF META-ANALYSIS TECHNIQUES

This section provides an overview of the concept and practice of meta-analysis, the quantitative synthesis of research findings. A brief introduction is followed by a discussion of Cooper's (1984) formulation of the process of integrative research reviewing. The effect size, as popularized by Glass (1976), is introduced next; this measure serves as the index of the effectiveness of mental practice training in our review. An overview of hypotheses tested by statistical methods designed specifically for analyzing effect-size data (e.g., Hedges & Olkin, 1985) concludes the section.

Introduction

"Meta-analysis" (Glass, 1976), or the analysis of analyses, is an approach to research reviewing that is based upon the quantitative synthesis of the results of related research studies. Although the idea of statistically combining measures of study outcomes is not new in the agricultural or physical sciences (e.g., Birge, 1932; Fisher, 1932), this approach was not often used to summarize research results in the social sciences until Glass (1976) proposed the idea of meta-analysis.

Glass described meta-analysis as "a rigorous alternative to the casual, narrative discussions of research studies which typify our attempts to make sense of the rapidly expanding research literature" (1976, p. 3). The book by Glass, McGaw, and Smith (1981) presents an overview of the process as it was first conceptualized. In Glass's view, the task of the meta-analyst is to explore the variation in the findings of studies in much the same way that one might analyze data in primary research. Questions of the effects of differences in study design or treatment implementation on study results are addressed empirically. Thus we avoid the practice of eliminating all but a few studies not believed to be deficient in design or analysis, and of basing the conclusions of the review on the remaining results.

Some critics (e.g., Eysenck, 1978; Slavin, 1984) have claimed that meta-analysis (as it is generally applied) is little more than the thoughtless application of statistical summaries to the results of studies of questionable quality. In fact, as is true for some published primary research, some published meta-analyses are flawed because of problems in data collection, data analysis, or other important aspects. However, when thoughtfully conducted, a meta-analysis can provide a more rigorous and objective alternative to the traditional narrative review. Additionally, the development of statistical analyses designed especially for effect sizes makes the thoughtful meta-analysis a necessity rather than an option.

The Integrative Review

Both Jackson (1980) and Cooper (1982, 1984) have conceived of the steps involved in an integrative research review as parallel to those familiar in the conduct of primary research. Cooper (1984) outlines and details five steps in a research review and the "functions, sources of variance, and potential threats to validity associated with each stage of the review process" (1984, p. 12). These five stages are outlined below.

Problem Formulation

At this first stage of the review, the researcher must outline the research questions for the review and the kinds of evidence that should be sought in order to address those questions. Here the reviewer deals with the conceptualization and operationalization of constructs, the specificity versus generality of the conclusions to be drawn, and the question of whether to conduct a review which tests hypotheses on the basis of "study-generated evidence" or a review which proposes hypotheses on the basis of "review-generated evidence." Study-generated evidence comprises information about effects examined within studies, such as treatment effects or the relationships of critical subject characteristics to treatment effects. Review-generated evidence concerns effects that cannot be, or usually are not, tested within single studies. For example, evidence about the relationship of features of research design or methodology to study results would be review-generated evidence.

Data Collection

At this stage of the review, the issue is the identification and collection of studies. Cooper details many literature-search procedures and discusses ways to evaluate their adequacy.

Data Evaluation

This stage of the review involves the accumulation of study results and the "coding" of study features which may later serve as explanations for patterns of study outcomes. During this step, the meta-analyst computes quantitative indices of study outcomes (representing treatment effects, degrees of relationship between variables, or other outcomes) which will later be analyzed. Also at this stage, the issues of subject and treatment characteristics and study quality become crucial. Features of the subjects (both experimental and control subjects), the treatments, and the context of the study may be related either purposely or accidentally to study outcomes. Some guidance about which features should be important will come from the problem formulation stage of the review. Important treatment features and subject characteristics that have theoretical importance must be noted for each study in order to examine plausible explanations for differences (or similarities) in study results.

Cooper describes two approaches for evaluating study quality: the "threats-to-validity" approach and the "methods-description" approach. The threats-to-validity approach involves determining whether each study in the review is subject to any of a number of threats to validity (such as those listed by Campbell and Stanley, 1963), and the methods-description approach involves describing the features of study design by coding the primary researchers' descriptions of the methodology of the studies.

Clearly, either approach has the weakness that different reviewers may choose to list different threats to validity or methodological features, but the methods-description approach has the advantages of requiring fewer judgments and being more detailed (because finer details of study methods are noted).

Data Analysis and Interpretation

At this stage the reviewer selects and applies procedures in order to draw inferences about the questions formulated at the first stage of the review. Different procedures are available for analyzing measures of effect magnitude, such as correlations and standardized mean differences, and for analyzing probability values from independent studies. Different inferences can be based on these two kinds of analyses.

Public Presentation of Results

Finally, the reviewer must prepare the results of the integrative review for public consumption. Here, issues of the amount of detail that should be reported about the conduct of the four previous stages are critical. Clearly the inclusion of every detail, regardless of its eventual importance to the findings of the review, is unwise. However, Cooper argues that the omission of details about the conduct of the review constitutes a primary threat to the validity of the review.

Summary

The clarification alone of the process of conducting an integrative review has done much to enable researchers to take a more rigorous and systematic approach to research reviewing.

Even so, in each review there will be special considerations, suggested by the nature of the research topic or the data available, that do not allow the conduct of such a review to be an automatic, thoughtless process.

Glass's Effect Size

For many years the quantitative summarization of measures of effect magnitude was not possible for much of the research in the social sciences. Glass's popularization of the effect size, or standardized mean difference, as a measure of treatment effect that could be compared across studies using nonidentical instruments or measures was the breakthrough that allowed the broad application of quantitative research synthesis techniques in the social and behavioral sciences. The effect size for a comparison between the experimental and control groups in a study is the standardized mean difference

$$g = \frac{\bar{Y}^E - \bar{Y}^C}{S},$$

where $\bar{Y}^E$ and $\bar{Y}^C$ are the experimental and control group means, respectively, and S is the pooled within-groups estimate of $\sigma$, the common population standard deviation of the scores. (Though Glass proposed using the control group standard deviation as S, Hedges (1981) noted that the pooled standard deviation is a more precise estimate of $\sigma$ when the assumption of equal population variances is satisfied.)
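To make the computation concrete, the following minimal Python sketch (ours, not part of the original analysis) computes g from raw posttest scores, using the pooled within-groups standard deviation that Hedges (1981) recommends; the two samples are hypothetical.

```python
import numpy as np

def glass_effect_size(experimental, control):
    """Standardized mean difference g = (mean_E - mean_C) / S,
    with S the pooled within-groups standard deviation."""
    e = np.asarray(experimental, dtype=float)
    c = np.asarray(control, dtype=float)
    n_e, n_c = len(e), len(c)
    # Pooled within-groups variance, df = n_e + n_c - 2
    pooled_var = ((n_e - 1) * e.var(ddof=1)
                  + (n_c - 1) * c.var(ddof=1)) / (n_e + n_c - 2)
    return (e.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical posttest scores for a mental practice experiment
print(round(glass_effect_size([12.1, 14.3, 11.8, 15.2, 13.6],
                              [10.4, 11.9, 10.1, 12.6, 11.2]), 2))
```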

[…]

control and mental-practice groups. Where some intervening physical practice has taken place, the relationship is weaker; the correlations for the physical and combined groups are less than one-third the size of the control and mental practice correlations.

We also computed some effect sizes by approximating the value of $S_g$ with the pooled within-groups mean square from a gain-score analysis of variance. Thus, with this method, we used the same standard deviation for all groups resulting from one article or study. Our formula for g was

$$g = \frac{(\bar{Y} - \bar{X})\sqrt{2(1-r)}}{\sqrt{MS_W}}.$$

Preliminary analyses indicated, however, that effect sizes computed using this approach were systematically larger than effect sizes from studies similar in other respects. This may have resulted from between-group differences in variation, or pretest versus posttest differences in variation, which could not be detected (because the necessary variances were not reported). Six studies with effect sizes computed via this method were eliminated from further statistical analysis.
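As an illustration only of the approximation just described (the function name and the input values are hypothetical, not drawn from the reviewed studies), a short sketch: $\sqrt{MS_W}$ from a published gain-score ANOVA estimates the standard deviation of the gain scores, and the factor $\sqrt{2(1-r)}$ re-expresses the mean gain in raw-score standard deviation units.

```python
import math

def gain_score_g(post_mean, pre_mean, r, ms_within):
    """Change-score effect size when only the pooled within-groups
    mean square (MS_W) of a gain-score ANOVA is reported.
    sqrt(MS_W) estimates the SD of the gains, S * sqrt(2 * (1 - r)),
    so multiplying the mean gain by sqrt(2 * (1 - r)) and dividing
    by sqrt(MS_W) yields the gain in raw-score SD units."""
    return (post_mean - pre_mean) * math.sqrt(2.0 * (1.0 - r)) / math.sqrt(ms_within)

# Hypothetical summary values read from a report
print(round(gain_score_g(post_mean=14.0, pre_mean=11.5, r=0.7, ms_within=9.0), 2))
# (14.0 - 11.5) * sqrt(0.6) / 3.0 is roughly 0.65
```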

Variance of the Effect Size

Hedges (1981) presented asymptotic distribution theory for Glass's estimate of effect size. The gain-score effect size has a similar distribution. The gain-score effect size is biased, but an unbiased estimate of the population value is computed as

$$d = c(n-1)\,g, \qquad c(m) = 1 - \frac{3}{4m-1},$$

and the variance of d is approximately

$$v = \frac{2(1-r)}{n} + \frac{d^2}{2(n-1)}.$$

Again, r is the estimated pre-post correlation and n is the sample size. The estimate d is asymptotically normal, with an expected value of $\delta$, the population difference-score effect size, and a variance given by v. Analyses of our difference-score effect sizes are based on those described in detail by Hedges (e.g., Hedges, 1982; Hedges & Olkin, 1985).
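The correction and variance are straightforward to compute from g, r, and n; a minimal sketch with hypothetical inputs:

```python
import math

def corrected_d_and_variance(g, r, n):
    """Bias-corrected change-score effect size d = c(n - 1) * g,
    with c(m) = 1 - 3 / (4m - 1), and its approximate variance
    v = 2(1 - r)/n + d^2 / (2(n - 1))."""
    c = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)
    d = c * g
    v = 2.0 * (1.0 - r) / n + d ** 2 / (2.0 * (n - 1))
    return d, v

d, v = corrected_d_and_variance(g=0.65, r=0.7, n=20)   # hypothetical inputs
print(round(d, 3), round(v, 4))                        # 0.624 0.0402
```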

Coding of Study Features

Numerous study characteristics were coded for the 55 studies in the final collection. Table 1 presents a list of the study features used in our analyses. These study features are the same as those used by Feltz and Landers (1983), with the exception of subject's sex and design characteristics, as well as the categories of open/closed skills. Subject's sex was not found to be important in moderating the effect of mental practice and was, therefore, not coded in our review. Because difference-score effect sizes were computed in our analysis, the design characteristics used by Feltz and Landers were not appropriate.

Types of Comparisons

Our primary comparison of interest was among the treatment groups, or different types of practice. It has been theorized that combined mental and physical practice is better than either physical practice or mental practice alone (Corbin, 1972). However, this comparison has not yet been made within a meta-analysis. In addition, as was done in the Feltz and Landers (1983) review, comparisons were made by task type, publication status, subject experience, and time of posttest. Comparisons that had not been made previously were between studies using different types of dependent measures and between studies using subjects with different levels of imagery ability.

The continuous predictor variables investigated were the number of practice sessions and the number of practice trials per session (or the length of each practice session in seconds). Some researchers have suggested that the greater the number of mental rehearsals, the greater the effect on performance (Sackett, 1935; Smyth, 1975), whereas others have suggested that there may be an optimal number of practice sessions and length of practice at which mental practice is most effective (Corbin, 1972; Twining, 1949). Feltz and Landers (1983) found no linear or curvilinear relationship between number of practice sessions and effect size; however, they did find curvilinear relationships between length of practice and effect size. Unfortunately, they were not able to determine, statistically, whether other variables (e.g., task type) moderated these relationships.

Rationale and Methodology for Outliers

Outliers were examined in the first step of the data analysis to identify unusual studies that could bias subsequent results. Confidence intervals were computed and plotted for each effect size. Unusual results were identified by examining the confidence interval plots for the separate treatment groups.
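A rough sketch of this screening step (with invented numbers): each effect size d with estimated variance v gets an approximate 95% confidence interval d ± 1.96√v, and an interval lying far from the others in its treatment group flags the study for re-reading.

```python
import math

def confidence_interval(d, v, z=1.96):
    """Approximate 95% confidence interval for one effect size."""
    half_width = z * math.sqrt(v)
    return d - half_width, d + half_width

# Hypothetical (d, v) pairs from one treatment group
for d, v in [(0.41, 0.04), (0.55, 0.05), (0.38, 0.03), (3.90, 0.30)]:
    low, high = confidence_interval(d, v)
    print(f"d = {d:5.2f}   95% CI [{low:5.2f}, {high:5.2f}]")
# The last interval sits far above the rest, so that sample would
# be flagged as a potential outlier.
```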

The studies so identified were then re-read to determine any unusual features. On the basis of this preliminary analysis, six studies that had effect sizes computed by approximating the value of $S_g$ with the pooled within-groups mean square were eliminated from further analysis. One study (Corbin, 1966) was eliminated because the pretest task was different from the posttest task. In addition, the Kelsey (1961) study was eliminated because it was the only study that measured muscular endurance; consequently, the physical practice sample in this study had extremely high effect sizes.

RESULTS

Overall Test of Homogeneity

Of the 55 studies for which effect sizes were computed, 48 were used in our meta-analysis. These 48 studies had examined change in motor skills for 223 separate samples. A summary of the characteristics of these studies is presented in Table 2. Included in this table is an indication of random assignment of subjects to groups, whether pretreatment group differences existed, and how effect sizes were computed.¹

We first tested the consistency of change in motor skill across the 223 samples. The overall homogeneity test value, $H_T$, was 788.32, which, as a chi-square variable with k - 1 = 221 degrees of freedom, is quite large (p < .001).

¹ The effect sizes for these studies can be obtained by writing to the first author.

All of the change-score effect sizes cannot be represented by one population parameter. This is not surprising, since the biased (uncorrected) effect sizes range from -0.38 to 13.91. The weighted average effect size across all studies is estimated to be 0.43 standard deviations, which differs from zero (p < .05). This value represents the average change effect from pretest to posttest across all types of practice treatments. The value is just slightly lower than the unweighted average effect size (0.48) reported by Feltz and Landers (1983), which was computed using the mental practice versus control means rather than difference-score effect sizes.
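For readers who wish to follow the computations, the following sketch implements the standard inverse-variance formulas (Hedges & Olkin, 1985) for the weighted average effect size and the homogeneity statistic; the five effect sizes are hypothetical, not values from our data set.

```python
import numpy as np
from scipy import stats

def weighted_mean_and_homogeneity(d, v):
    """Inverse-variance weighted mean effect size and homogeneity
    statistic H = sum of w_i * (d_i - d_bar)^2, with w_i = 1 / v_i.
    H is approximately chi-square with k - 1 df if all k samples
    share a single population effect size."""
    d = np.asarray(d, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)
    d_bar = np.sum(w * d) / np.sum(w)
    h = np.sum(w * (d - d_bar) ** 2)
    p_value = stats.chi2.sf(h, df=len(d) - 1)
    return d_bar, h, p_value

# Hypothetical change-score effect sizes and variances
d_bar, h, p = weighted_mean_and_homogeneity(
    d=[0.2, 0.5, 0.4, 0.9, 0.3], v=[0.05, 0.04, 0.06, 0.05, 0.04])
print(f"weighted mean = {d_bar:.2f}, H = {h:.2f}, p = {p:.3f}")
```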

Categorical Comparisons

We next grouped the effects according to treatment group, or type of practice. Table 3 shows the homogeneity statistics obtained for this categorical analysis and the overall homogeneity test (Hedges, 1982b). The overall test of within-groups homogeneity, $H_W$, is the sum of the homogeneity values for each subgroup. Its value, 668.69, is significant at the .001 level (df = 218). Thus, there is still considerable variation in the sizes of change over practice within the treatment groups. The results within the four treatment categories are also not homogeneous.

The test for differences among the mean effect sizes of the treatment groups is given by $H_B$, which is also a chi-square variable, with 3 degrees of freedom. We conclude that the four sets of pre-post differences have different population effect sizes, since $H_B$ = 119.63 is significant. Mean change differences for all of the treatment groups were significantly greater than zero, with physical practice showing the greatest change effects (0.79) and, as we would expect, the control groups showing the smallest change effects (0.22). The average weighted change-score effect size for the mental practice groups (0.47) is very close to the unweighted effect size reported by Feltz and Landers (1983). Contrary to what has been previously theorized in the literature (Corbin, 1972), combined mental and physical practice does not appear to be more effective than either mental or physical practice alone.
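The partition behind this analysis can be sketched as follows (hypothetical data; the group labels and values are ours): the total homogeneity splits into $H_W$, summed within groups, and $H_B$, computed from the weighted group means.

```python
import numpy as np

def partition_homogeneity(groups):
    """Split homogeneity into within-group (H_W) and between-group
    (H_B) parts. `groups` maps a label to (effect sizes, variances)."""
    h_w, group_means, group_weights = 0.0, [], []
    for d, v in groups.values():
        d = np.asarray(d, dtype=float)
        w = 1.0 / np.asarray(v, dtype=float)
        d_bar = np.sum(w * d) / np.sum(w)
        h_w += np.sum(w * (d - d_bar) ** 2)     # within-group variation
        group_means.append(d_bar)
        group_weights.append(np.sum(w))
    means = np.array(group_means)
    weights = np.array(group_weights)
    grand_mean = np.sum(weights * means) / np.sum(weights)
    h_b = np.sum(weights * (means - grand_mean) ** 2)  # between groups
    return h_w, h_b   # H_B ~ chi-square, df = number of groups - 1

groups = {                      # hypothetical effect sizes / variances
    "control":  ([0.1, 0.3, 0.2], [0.05, 0.04, 0.05]),
    "mental":   ([0.4, 0.6, 0.5], [0.05, 0.06, 0.04]),
    "physical": ([0.8, 0.7, 0.9], [0.04, 0.05, 0.05]),
}
h_w, h_b = partition_homogeneity(groups)
print(round(h_w, 2), round(h_b, 2))
```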

We next subdivided the different treatment groups according to task type, since this was the categorical variable that Feltz and Landers (1983) found to be most significant in differentiating effect sizes. The task-type categories were motor tasks, cognitive tasks, and strength tasks. The homogeneity statistics for task type divided by treatment group are shown in Table 4. An inspection of Table 4 indicates that most of the variation in effect sizes occurs with the motor tasks. The overall test of within-groups homogeneity is significant, $H_W$(df = 155) = 547.74, as are the tests within the four treatment categories.

Since grouping the studies by task type within the four treatment groups did not fully explain the variation in pre/post differences, we explored the use of another study feature, type of dependent measure, as a grouping variable for the motor tasks. The dependent-measure categories were accuracy, speed, form, distance, and time on target or in balance. The homogeneity statistics for measure type by treatment group are shown in Table 5. It appears that most of the variation in effect sizes for motor tasks comes from studies using measures of accuracy or time on target/in balance.

Analyses Using Continuous Predictors

In order to determine the influence of the number of practice sessions and the length of practice per session, we conducted separate regression analyses for each predictor variable within each of the four treatment groups. In each regression analysis, we tested for (a) the overall significance of the regression model using four polynomial predictors (linear, quadratic, cubic, and quartic), (b) the fit of the regression model (analogous to the $H_W$ homogeneity tests), and (c) Z tests for the significance of the individual predictors. Table 6 contains the summary statistics for these analyses.

For the number of practice sessions variable, the overall models were significant for the mental practice, physical practice, and combined practice groups, but the chi-squares for model fit were also significant, indicating a large amount of error in the models. For the length of practice per session variable, which was measured in terms of the number of practice trials, the overall models were significant for the control, mental practice, and physical practice groups, with the control group having the only nonsignificant chi-square for model fit. Although the control group regression analysis was significant and showed good fit, none of the individual polynomial predictors was significant using a Z test. This may be due to multicollinearity among the predictors.
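A minimal sketch of one such analysis, under our reading of the Hedges and Olkin (1985) approach and with invented data: effect sizes are regressed on polynomial terms of the predictor with weights 1/v, the model sum of squares is referred to a chi-square with df equal to the number of predictors, the residual sum of squares serves as the test of model fit, and each coefficient is tested with a Z statistic.

```python
import numpy as np

def weighted_poly_regression(x, d, v, degree=4):
    """Weighted regression of effect sizes d on polynomial terms of x.
    Returns the model chi-square (df = degree), the model-fit
    chi-square (df = k - degree - 1), and a Z test per coefficient."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)
    X = np.vander(x, degree + 1, increasing=True)    # 1, x, x^2, x^3, x^4
    xtwx = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(xtwx, X.T @ (w * d))
    z = beta / np.sqrt(np.diag(np.linalg.inv(xtwx)))  # coefficient Z tests
    h_fit = np.sum(w * (d - X @ beta) ** 2)           # residual: model fit
    d_bar = np.sum(w * d) / np.sum(w)
    h_model = np.sum(w * (d - d_bar) ** 2) - h_fit    # model chi-square
    return h_model, h_fit, z

# Hypothetical: effect size as a function of number of practice sessions
x = np.arange(1, 11)
d = [0.20, 0.35, 0.50, 0.55, 0.60, 0.58, 0.50, 0.45, 0.40, 0.30]
v = [0.05] * 10
h_model, h_fit, z = weighted_poly_regression(x, d, v)
print(f"model chi2 = {h_model:.2f} (df = 4), fit chi2 = {h_fit:.2f} (df = 5)")
```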

Thus, unlike Feltz and Landers (1983), who found a curvilinear relationship between length of practice and effect size, we found no linear or curvilinear relationships between the continuous variables measured and effect size.

Discussion

Comparing across all types of tasks and practice conditions used in the 48 studies reviewed, the results of the meta-analysis showed that the average difference in effect size from pretest to posttest was 0.43 standard deviations (p < .05). Likewise, the average effect size for mental practice was 0.47 (p < .05). The overall learning, as indicated by the magnitude of the difference in pretest to posttest effect sizes, is of similar magnitude to the overall mental practice effect size (0.48) reported by Feltz and Landers (1983). Regardless of whether the effect size was computed using mental practice versus control means (Feltz and Landers, 1983) or using change-score effect sizes, the resulting effect sizes represent approximately one-half a standard deviation. Considering the marked differences in the types of tasks, the ages and backgrounds of subjects, and the research designs and methodologies employed in the studies subjected to meta-analysis, it is clear that: (a) mental practice does facilitate learning, (b) these results are replicable, and (c) they have surprisingly good generality.

When the overall effect sizes were broken down to examine the moderating variables of task type and type of dependent measure, most of the variation was found in tasks that predominantly involved accuracy or tasks that were primarily "motor" in nature (versus cognitive and strength).

The failure to find variation for the strength and cognitive tasks, as well as for the speed, distance, time-on-target/in-balance, and form dependent measures, was most likely due to the insufficient number of samples in some practice conditions (N < 5).

Examination of the categorical comparisons of practice conditions for the motor and accuracy tasks showed that the learning associated with mental practice was twice as great as the minimal (but significant) learning demonstrated by the subjects in the no-practice (control) condition. Compared to physical practice, however, mental practice was 41-45% less effective. These results support the general finding in the literature that physical practice is a more effective learning strategy than mental practice (Weinberg, 1982). Although some learning was achieved by the control subjects, it was 71-73% less than that achieved through physical practice.

Of particular interest in the present meta-analytic review were the categorical comparisons for the combined practice condition. Previous reviewers (Richardson, 1967a; Weinberg, 1982) have maintained that a combination of mental and physical practice "is more effective than either physical practice or mental practice alone" (Weinberg, 1982, p. 203). Richardson (1967a) is much more cautious, suggesting only a trend for the motor performance of combined practice to be "as good or better than physical practice trials only" (p. 103). These conclusions were not supported by the findings of the meta-analysis.

Where the number of effect sizes was sufficient for legitimate statistical comparisons to be made,² the results showed that the effect sizes for combined practice were always less than those for physical practice. For the effect sizes summed across types of tasks, as well as the effect sizes for motor and accuracy tasks, combined practice was, respectively, 22%, 8%, and 27% less effective than the exclusive employment of physical practice.

It appears that, overall, there is a reduction in performance efficiency when physical practice is replaced by mental practice. However, there are times when such a loss may be acceptable or even desirable. For example, for some motor or accuracy tasks in which actual physical practice may be expensive, time-consuming, physically or mentally fatiguing, or potentially dangerous, the small decrements in performance resulting from combined practice may make it an effective teaching-learning strategy, since its effects are nearly as good as those of physical practice with only half the number of physical practice trials.

With only one exception (Oxendine, 1969), most of the combined practice consisted of a 50:50 ratio of physical practice to mental practice trials. In Oxendine's (1969) study, only one of the three tasks examined showed differences among the following ratios of physical practice to mental practice trials: 8:0, 6:2, 4:4, and 2:6. The 8:0 and 6:2 ratios had the greatest improvement in time-on-target scores, with means of 4.37 and 4.43, respectively.

² For task measures of time-on-target/in balance, combined practice actually had a larger difference-score effect size than either physical or mental practice. However, this finding is of questionable significance due to the relatively small number of samples and a much larger standard error of measurement.

With fewer physical practice trials, the scores were considerably lower (i.e., 3.98 for the 4:4 ratio and 2.94 for the 2:6 ratio). Although much more research is needed to confirm these findings, it appears that the conclusions of Richardson (1967a) and Weinberg (1982) may be valid, but only if the ratio of physical to mental practice trials is at least 75:25.