A Revised Meta-analysis of the Mental Practice Literature on Motor Skill Learning
Pages 3-36

From page 3...
... Since the 1930s there have been over 100 studies on mental practice. The specific research question addressed in these studies has been whether a given amount of mental practice prior to performing a motor skill will enhance one's subsequent motor performance.
From page 4...
... Practice periods varied considerably in the number of trials per session and in the total number and spacing of sessions. Mental practice was contrasted to a combined mental and physical practice group, in which combined practice periods involved either ...
From page 5...
... By contrast, the present review included only single- or multiple-group studies having pre- and posttest scores. The use of pre-post designs permitted a determination of a change-score effect size for each group examined in this set of mental practice studies.
From page 6...
... cautious inference that the combined practice group is as good as or better than physical practice trials only. Another conclusion reached by Weinberg (1982)
From page 7...
... although the reviewers speculated about a range of variables that may influence the effectiveness of mental practice, the style used in these reviews was more narrative and rhetorical than technical and statistical, thus making it difficult to systematically identify the variables; and (d) the reviews have ignored the issue of relationship strength, which may have allowed weak disconfirmation, or the equal weighting of conclusions based on few studies with conclusions based on several studies (see Cooper, 1979)
From page 8...
... formulation of the process of integrative research reviewing. The effect size, as popularized by Glass (1976)
From page 9...
... However, when thoughtfully conducted, a meta-analysis can provide a more rigorous and objective alternative to the traditional narrative review. Additionally, statistical analyses designed for this approach were not often available in the social sciences until the development of meta-analysis.
From page 10...
... Reviewgenerated evidence concerns effects that cannot be, or usually are not, tested within single studies. For example, evidence about the relationship to study results of features of research design or methodology would be review-generated evidence.
From page 11...
... Cooper describes two approaches for evaluating study quality, the "threats-to-validity" approach and the "methods-description" approach. The threats-to-validity approach involves determining whether each study in the review is subject to any of a number of threats to validity (such as those listed by Campbell
From page 12...
... Different procedures are available for analyzing measures of effect magnitude such as correlations and standardized mean differences, and for analyzing probability values from independent studies. Different inferences can be based on these two kinds of analyses.
From page 13...
... Glass's popularization of the effect size, or standardized mean difference, as a measure of treatment effect that could be compared across studies using nonidentical instruments or measures, was the breakthrough that allowed the broad application of quantitative research synthesis techniques in the social and behavioral sciences. The effect size for a comparison between the experimental and control groups in a study is the standardized mean difference (yE - yC) / S, where yE and yC are the experimental and control group means, respectively, and S is the pooled within-groups estimate of sigma, the common population standard deviation of the scores.
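The standardized mean difference described above can be sketched in a few lines; this is a minimal illustration of the definition, not the authors' computer code, and the function names are invented for the example.

```python
import math

def pooled_sd(s_e, n_e, s_c, n_c):
    """Pooled within-groups standard deviation S, the estimate of the
    common population standard deviation sigma."""
    return math.sqrt(((n_e - 1) * s_e**2 + (n_c - 1) * s_c**2)
                     / (n_e + n_c - 2))

def effect_size(mean_e, mean_c, s_e, n_e, s_c, n_c):
    """Standardized mean difference (yE - yC) / S."""
    return (mean_e - mean_c) / pooled_sd(s_e, n_e, s_c, n_c)
```

With equal group standard deviations of 2 and a mean difference of 2, for example, the effect size is 1.0: the experimental group mean sits one standard deviation above the control mean.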
From page 14...
... An effect size of 0.75 indicates that the treatment implemented raises the score of the average subject three-fourths of one standard deviation.

Statistical Analyses for Effect-size Data

Glass's Analyses

When Glass proposed using quantitative methods to summarize effect sizes, he argued that the effect sizes could be treated as "typical" data and analyzed using familiar procedures (e.g., ANOVA, regression)
From page 15...
... Analyses for Effect Sizes

Analyses based on sample effect sizes allow inferences about corresponding population parameters.
From page 16...
... Statistical analyses designed specifically for effect sizes not only avoid the statistical problems of traditional analysis methods, but also provide tests of the adequacy of proposed models for the effect sizes which are not available from traditional methods. Rather than detail the statistical theory for the effect-size analyses, which is presented clearly by Hedges (e.g., Hedges, 1982a,b; Hedges & Olkin, 1985)
From page 17...
... The goal of the alternative statistical analyses designed for effect sizes is to either "explain," estimate, or identify the sources of variability in study results. Tests for the significance of specific explanatory models are accompanied by tests for the adequacy of those models.
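The standard adequacy test in this framework is Hedges' homogeneity statistic Q, which is compared to a chi-square distribution with k - 1 degrees of freedom; a large Q signals that a single population effect size does not fit the set of studies. A minimal sketch, with invented function names:

```python
def homogeneity_q(d_values, variances):
    """Hedges' homogeneity statistic Q = sum_i w_i * (d_i - d_bar)^2,
    where w_i = 1 / Var(d_i) and d_bar is the weighted mean effect size.
    Compare Q to chi-square with k - 1 df; large Q means the
    common-effect-size model is inadequate."""
    weights = [1.0 / v for v in variances]
    d_bar = sum(w * d for w, d in zip(weights, d_values)) / sum(weights)
    return sum(w * (d - d_bar) ** 2 for w, d in zip(weights, d_values))
```

Two effect sizes of 0.4 and 0.6 with variances of 0.1 each, for instance, give Q = 0.2, far below the chi-square critical value, so homogeneity would not be rejected.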
From page 18...
... which posits different population effect sizes for qualitatively different sets of studies. Other analyses assume that "random-effects" or mixed models are more appropriate for describing effect-size outcomes.
From page 19...
... Second, our present study will improve upon the earlier review by Feltz and Landers (1983) by using modern statistical analyses for effect sizes.
From page 20...
... Furthermore, we will use the methods described by Hedges and Olkin for identifying outliers or unusual studies to pinpoint very large effect sizes. Thus, we will be able to select studies that show particularly strong mental-practice or combined mental and physical practice effects, which might serve to identify problem studies or exemplars for the design of mental-practice interventions.
From page 21...
... Each article was then read, effect-size measures were extracted where sufficient data were provided, and relevant study features were coded. This procedure produced 55 studies from which effect sizes could be obtained.
From page 22...
..., for l = 1, ..., n_ij; j = 1, ..., J_i; i = 1, ..., k.

The Difference-Score Effect Size

We define the difference-score effect size as the difference between the posttest and pretest means for a single sample, divided by the pretest standard deviation.
From page 23...
... The second reason is that the pretest standard deviations would not be influenced by the treatments. They should be roughly equivalent across groups within studies, assuming that subjects were randomly assigned to groups; thus large difference-score effects should not result from decreased variation in scores in groups where the treatment may have affected score variability. The sample change-score effect size estimates the corresponding population effect size.
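The difference-score effect size defined above, change standardized by the pretest standard deviation, can be sketched as follows (an illustration only; the function name is invented):

```python
def change_score_effect_size(pre_mean, post_mean, pre_sd):
    """Difference-score effect size: (posttest mean - pretest mean)
    divided by the pretest standard deviation, which the treatment
    cannot have influenced."""
    return (post_mean - pre_mean) / pre_sd
```

A group improving from a pretest mean of 50 to a posttest mean of 55, with a pretest standard deviation of 10, yields an effect size of 0.5.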
From page 24...
... Computation of Effect Sizes Most studies provided the pretest and posttest means and the pretest standard deviation needed to compute the effect size directly, as shown in equation 1. Effect sizes were computed for as many distinct control, mental practice, physical practice, or combined mental/physical practice groups as were examined.
From page 25...
... The values of r used for the four treatment groups were r = +.69 for control groups, r = +.64 for the mental practice groups, r = +.20 for the physical practice groups, and r = +.16 for the combined mental/physical groups.
From page 26...
... Our formula for d was

d = (Ybar - Xbar) * sqrt(2(1 - r)) / sqrt(MS_W)

Preliminary analyses indicated, however, that effect sizes computed using this approach were systematically larger than effect sizes from studies similar in other respects.
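This conversion can be sketched as follows, under the assumption (mine, from the surrounding definitions) that MS_W denotes the mean square of the difference scores, so that MS_W is approximately 2 * S^2 * (1 - r); multiplying by sqrt(2(1 - r)) then recovers the change divided by the raw-score standard deviation S:

```python
import math

def effect_size_from_gain(post_mean, pre_mean, ms_w, r):
    """Effect size recovered from gain-score statistics.
    ms_w: mean square (variance) of the difference scores,
          approximately 2 * S^2 * (1 - r);
    r:    pretest-posttest correlation.
    The sqrt(2 * (1 - r)) factor rescales the gain-score metric back
    to the raw-score standard deviation."""
    return (post_mean - pre_mean) * math.sqrt(2 * (1 - r)) / math.sqrt(ms_w)
```

As a check, with S = 10, r = .50, and a 5-point gain, MS_W = 2 * 100 * .50 = 100 and the function returns 0.5, matching the direct computation 5 / 10.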
From page 27...
... The estimate d is asymptotically normal, with expected value equal to the population difference-score effect size and a variance given by ... Analyses of our difference-score effect sizes are based on those described in detail by Hedges (e.g., Hedges, 1982; Hedges & Olkin, 1985).
From page 28...
... found no linear or curvilinear relationship between number of practice sessions and effect size; however, they did find curvilinear relationships between length of practice and effect size. Unfortunately, they were not able to determine, statistically, whether other variables (e.g., task type)
From page 29...
... study was eliminated because it was the only study that measured muscular endurance. Consequently, the physical practice sample in this study had extremely high effect sizes.
From page 30...
... This does not seem surprising since the biased, uncorrected effect sizes range from -0.38 to 13.91. The weighted average effect size for all studies is estimated to be 0.43 standard deviations, which differs from zero (p < .05)
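A weighted average of this kind is conventionally computed with inverse-variance weights, so that more precise studies count more, and tested against zero with a z statistic; a minimal sketch under that assumption (the function name is invented):

```python
import math

def weighted_mean_effect(d_values, variances):
    """Inverse-variance weighted average effect size and the z statistic
    for testing whether the average differs from zero.
    Each study's weight is 1 / Var(d_i); the standard error of the
    weighted mean is sqrt(1 / sum of weights)."""
    weights = [1.0 / v for v in variances]
    d_bar = sum(w * d for w, d in zip(weights, d_values)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d_bar, d_bar / se
```

For two effect sizes of 0.4 and 0.6 with equal variances of 0.1, the weighted mean is 0.5 with z of about 2.24, which exceeds the two-sided .05 critical value of 1.96.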
From page 31...
... The homogeneity statistics for task type divided by treatment group are shown in Table 4. An inspection of Table 4 indicates that most of the variation in effect sizes occurs with the motor tasks.
From page 32...
... Table 6 contains the summary statistics for these analyses. For the number-of-practice-sessions variable, the overall models were significant for the mental practice, physical practice, and combined practice groups, but the chi-squares for model fit were also significant, indicating a large amount of error in the models.
From page 33...
... they have surprisingly good generality. When the overall effect sizes were broken down to examine moderating variables of task type and type of dependent measure, most of the variation was found in tasks that predominantly involved accuracy or tasks that were primarily "motor" in nature
From page 34...
... is much more cautious, suggesting only a trend for the motor performance of combined practice to be "as good as or better than physical practice trials only" (p.
From page 35...
... study, only one of the three tasks examined showed differences among the following ratios of physical practice to mental practice trials: 8:0, 6:2, 4:4, and 2:6. The 8:0 and 6:2 ratios had the greatest improvement in time-on-target scores, with means of 4.37 and 4.43, respectively. For task measures of time-on-target/balance, combined practice actually had a larger difference-score effect size than either physical or mental practice.
From page 36...
... 3.98 for the 4:4 ratio and 2.94 for the 2:6 ratio. Although much more research is needed to confirm these findings, it appears that the conclusions of Richardson (1967a) and Weinberg (1982) may be valid, but only if the ratio of physical to mental practice trials is at least 75:25.

