

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




the comparability of effect sizes. Thus the question becomes, "Do the studies obtain effect sizes of similar nonzero magnitude?" rather than "Do the studies all obtain statistically significant results?" Defining replication in terms of similarity of effect sizes would obviate arguments over whether a study that obtained a p = .06 was or was not a successful replication (Nelson, Rosenthal, & Rosnow, in press; Rosenthal, in press).

Suggestions for Future Research

Expectancy Control Designs

Throughout this paper, we have offered our opinion on the extent to which interpersonal expectancy effects may be responsible for the results of studies on various human performance technologies. Our approach has been necessarily speculative, as very few of these studies directly addressed the possibility that expectancy effects might be an important cause of the results. We have pointed out factors that lead us to believe that expectancy effects may have been occurring in several cases, but we were not present at the time the studies were conducted, and we do not have videotapes of the sessions. All we can conclude on the basis of the information available to us is that expectancy effects could have happened; we do not know that they did.

However, we can make suggestions for designing future studies that would not only assess whether an expectancy effect was present but also would allow the direct comparison of the magnitude of expectancy effects versus the phenomenon of interest. This is accomplished through the use of an expectancy control design (Rosenthal, 1966; Rosenthal & Rosnow, 1984). In this design, experimenter expectancy becomes a second independent variable that is systematically varied along with the variable of theoretical interest. It is easiest to explain this design with a concrete example, and we will use as our

illustration a study by Burnham (1966). Burnham was interested in the effects of lesions on the learning performance of rats. Half of a sample of 23 rats were given brain lesions, and the rest of the rats underwent "sham" surgery. The unique aspect of the study was that experimenter expectancies were also manipulated by labelling the rats as lesioned or unlesioned, so that half the time the label inaccurately described the true state of the rat. In other words, half of the rats labelled "lesioned" were actually unlesioned, and half of the rats labelled "unlesioned" were actually lesioned. Results showed a main effect of lesioning such that unlesioned rats performed better on the maze task, an unsurprising result. More astonishing was the fact that the main effect for expectancy was just as large as the effect for lesioning, so that rats thought to be unlesioned performed better than rats thought to be lesioned. The primary advantage of the expectancy control design is that it allows the direct comparison of the independent effects of expectancy and treatment manipulation on the dependent measure.

Analogous expectancy control designs could easily be used in research on the human performance technologies described in this paper. For example, experiments in the area of neurolinguistic programming on predicate matching could easily adopt an expectancy control design. This would entail manipulating the counselors' expectations that they would be interacting with a client who was matched or unmatched with respect to their Preferred Representational System (PRS). Half of the time, however, the counselor's expectation would be the opposite of the actual state (matched or unmatched) of the subject. The effect of counselor expectancy could then be compared to the effect of clients actually being matched or unmatched with respect to PRS.
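The logic of such a 2 x 2 expectancy control design can be sketched in a short simulation. Everything here is illustrative rather than Burnham's actual data: the sample sizes, the size of the two effects, and the maze_score function are assumptions chosen only to show how labels are crossed with the true condition and how the two main effects are then compared.

```python
import random

random.seed(0)

# Hypothetical 2 x 2 expectancy control simulation. Each rat has a true
# lesion state and, independently, a label shown to the experimenter;
# labels are fully crossed with the truth, so half of them are inaccurate.
def maze_score(lesioned, labeled_lesioned):
    score = 50.0
    if lesioned:            # true effect of the lesion (assumed size)
        score -= 5.0
    if labeled_lesioned:    # expectancy effect, set equally large here
        score -= 5.0
    return score + random.gauss(0.0, 2.0)

cell_means = {}
for lesioned in (False, True):
    for labeled in (False, True):
        scores = [maze_score(lesioned, labeled) for _ in range(200)]
        cell_means[(lesioned, labeled)] = sum(scores) / len(scores)

# Main effect of each factor: difference between its two marginal means.
lesion_effect = ((cell_means[(False, False)] + cell_means[(False, True)])
                 - (cell_means[(True, False)] + cell_means[(True, True)])) / 2
expectancy_effect = ((cell_means[(False, False)] + cell_means[(True, False)])
                     - (cell_means[(False, True)] + cell_means[(True, True)])) / 2

print(f"lesion main effect:     {lesion_effect:.1f}")
print(f"expectancy main effect: {expectancy_effect:.1f}")
```

Because expectancy is manipulated as a factor in its own right, its main effect can be read off and compared directly with the treatment effect, which is the point of the design.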

Studies on mental practice could also adopt an expectancy control design. This could be done by giving subjects written instructions that are sealed in envelopes. The labels on the envelopes could be manipulated such that half of the time the experimenter thought the subject was using mental practice, but the instructions actually told the subject to do something else. Of course, in this and other expectancy control studies, care would have to be taken to make the cover stories to the experimenters and subjects plausible.

Biofeedback is ideally suited for expectancy control studies. Experimenters could be told that half of their subjects were receiving actual feedback on their physiological levels, and half of the subjects would be receiving random feedback. In reality, half of the subjects labelled as receiving random feedback would be receiving actual feedback, and vice versa.

It is harder to plan an expectancy control design for the SALT technique; the teacher of necessity must be aware of what teaching technique he or she is using, and it would be difficult to lead the teacher to believe that the technique was actually something else. A not wholly satisfactory alternative would be to have two teachers per classroom, one who uses the SALT techniques and another who takes over the classroom afterwards and administers the tests. This second teacher could then be given false labels about which classes had received SALT training. This design, however, would not be able to address expectancy effects taking place during the actual SALT training, yet that is when expectancy effects are probably most prevalent.

Controls for Expectancy Effects

The expectancy control design is the only way researchers can assess the extent to which expectancy effects are occurring in their studies. However,

many researchers do not want to know the magnitude of expectancy effects; instead, they simply want to include controls to ensure that expectancy effects do not occur. Some of the strategies a researcher can undertake to prevent expectancy effects are as follows (these strategies are elaborated in Rosenthal, 1976, and Rosenthal & Rosnow, 1984):

1. Keeping experimenters blind to the experimental condition of their subjects. We have stressed the importance of blind contact between experimenters and subjects over and over again throughout this paper. If experimenters do not know what treatment their subjects are receiving, they will be unable to communicate differential expectancies for the efficacy of those treatments. The necessity for keeping experimenters blind is fully recognized in the area of medical research; no pharmacological study is taken seriously unless it has followed elaborate double-blind procedures.

2. Increasing the number of experimenters. This strategy reduces the likelihood of expectancy effects in various ways. First, it tends to randomize expectancies; that is, experimenters may have different expectancies that will cancel out if there are enough experimenters. Second, it helps to maintain blind contact between experimenters and subjects; experimenters will be less likely to figure out what treatment a given subject is in if they do not interact with many subjects. Third, it decreases the learning of influence techniques; if an experimenter learns on an unconscious level, over time, how best to influence a subject's behavior, then expectancy effects will be minimized if the experimenter sees fewer subjects. Lastly, even beyond the issue of expectancy effects, increasing the number of experimenters increases the generality of the results. As mentioned in the SALT section, we can be more confident of a result if it was obtained by a larger number of people

than if only one experimenter produced it.

3. Minimizing experimenter-subject contact. This strategy, along with keeping experimenters blind, is one of the best ways of assuring that expectancy effects will not occur. Logically enough, the less contact an experimenter has with a subject, the less likely he or she will be to communicate expectancies to that subject. Experimenter-subject contact can be minimized by relying more on standardized or automated means of data collection. For example, instructions to subjects can be written out or tape recorded. As personal computers become increasingly popular, more and more researchers are programming computers to instruct subjects and record their responses. Some experiments consist only of greeting the subjects and seating them in front of a monitor; the computer does all the rest. However, the strategy of minimizing contact with the subject may be difficult to employ in some of the human performance technologies that rely heavily on interpersonal interaction, such as SALT and NLP. But even in the case of SALT, it would be possible to prepare videotapes of lessons, and analogous tapes could be similarly prepared for NLP studies. Such automation would make the experimental context more artificial, but if these studies were conducted in conjunction with the typical, more natural kind of studies, we could be more confident of the results.

4. Observing experimenter behavior. Another strategy is to have the principal investigator observe the experimenters as they conduct their sessions. This will not by itself eliminate expectancy effects, but it would help in identifying unprogrammed, differential experimenter behaviors. Experimenters would also probably make greater efforts to keep their behavior constant and standardized if they knew they were being observed.

5. Developing training procedures. If experimenters are given extensive training and practice in the running of experimental sessions, their behavior should be better standardized, which should reduce the risk of expectancy effects.

It should be self-evident that these strategies are, on the whole, uncomplicated and easy to implement. Moreover, many of them are rooted in basic principles of good experimental design. In our brief review of the literature on these human performance technologies, we found it unfortunate that many of the studies overlooked these basic design principles and consequently made sound causal inference virtually impossible. It is our hope that, in the future, studies in these areas can incorporate some of these suggestions and thus produce results of which we can be more confident.

Expectancies and the Enhancement of Human Performance

If expectancy effects may be responsible for some of the results reported in human performance technologies research, then why not use positive expectations themselves as a means of enhancing human performance? Indeed, several of the techniques we have discussed (e.g., SALT, biofeedback, and mental practice) incorporate positive expectations, explicitly or implicitly, as part of their procedures. For example, one distinct component of the SALT technique is the induction of positive expectancies in the students for a successful learning experience. Another example is biofeedback therapy, where an initial period is typically spent convincing the patient that the biofeedback equipment does indeed accurately reflect the patient's physiological states. Consequently, a valid question is whether incorporating the systematic induction of positive expectations into the technologies discussed here would result in increased human performance. The induction of expectancies could

take place on two levels: the intra-individual level, where people's expectations about themselves are changed, and the interpersonal level, where teachers' or leaders' expectations about other people are changed. It is the interpersonal level in which we are most interested in the present paper, for what we want to know is whether we can take advantage of expectancy effects by encouraging the explicit communication of positive expectancy on the part of therapists, teachers, leaders, and other authority figures.

With respect to this issue, the distinction between selection and training is helpful. Selection occurs when we identify those people who believe in what they are doing and are able to communicate their confidence to others. Every person can think back to his or her elementary school days and remember those teachers who were exceptionally warm and enthusiastic about education, as well as those teachers who seemed to regard teaching as a not too pleasant job. Such differences in behavior are probably due in part to the "natural style" of individuals as well as to past patterns of reinforcement; for example, teachers who accurately think that they are able to teach well probably think so because in the past they have taught well. Administrators of human performance programs would do well to pay explicit attention to the issues of personal style and how well a person communicates enthusiasm and positive expectations when selecting personnel for running their programs.

A second approach to incorporating positive expectations in human performance technologies involves the direct training of personnel. It is certainly feasible to identify behaviors associated with positive communications and to train teachers, supervisors, and other people in leadership positions to use those behaviors. The meta-analysis of the mediation of interpersonal expectancy effects (Harris & Rosenthal, 1985)

provides one such list of behaviors that, in most cases, could readily be fostered through training. A couple of programs aimed at providing such training in positive expectations have already been developed in the domain of education. We will now describe one of these programs, the Teacher Expectations and Student Achievement (TESA) in-service training model (Kerman & Martin, 1980), so as to give a better idea of how such programs are implemented.

The TESA training model concentrates on three categories of teacher behaviors, based on the four-factor theory: response opportunities (output), feedback, and personal regard (climate). Within these three broad categories, 15 specific teaching behaviors are addressed, including touch, praise, distance, higher-level questioning, and equal distribution of reinforcements. The workshops focus first on educating teachers about interpersonal expectancy effects and then on training them in each of the 15 skills. A recent evaluation of the TESA program (Penman, 1982) showed that teachers who received the TESA training exhibited significant increases in positive behaviors and decreases in negative behaviors toward low-achieving students.

Programs analogous to the TESA workshops could easily be developed for application to the human performance technologies of interest. In our opinion, however, the selection approach would probably be more effective in the long run than the training approach; human performance may be enhanced more by people who possess naturally high expectations than by trying to induce high expectations artificially. Both approaches, however, deserve further research attention.

From an applied perspective, there is the question of whether such training programs need be developed or whether we should simply continue with

the programs that have already been developed, such as SALT or biofeedback. After all, if a program works, in a pragmatic sense it does not matter what the causal agent is, be it expectations or the treatment as originally conceptualized. The decision of whether to pursue these programs depends in part on the cost of the program compared to the cost of using a program specifically designed to enhance expectations. It also depends on how well the expectancy effects generalize from the laboratory to applied contexts, a question that needs to be addressed empirically.

Conclusion

The quest for the enhancement of human performance has captured the imaginations of men and women for centuries. Much progress has been made as our approaches have become more scientific and theoretically based. But as the reviews in this paper have shown, much work remains to be done. In many of the areas covered here, we cannot at this point conclude with confidence that the treatment works, and we have pointed out in each section ways in which research designs could be improved for future studies. At the same time, however, enough data exist, in terms of anecdotal evidence and the studies conducted so far, to indicate that most of these domains are well worth further exploration. Continued research on these techniques would also help to specify those variables that are critical in enhancing performance, variables that could then be incorporated in other, more cost-effective training packages.

A final thought concerns the attitude of researchers and critics in these areas. When dealing with controversial areas such as the five covered in this paper, it is best to adopt a skeptical but open attitude. People's reactions to these areas vary across a long continuum, and we feel that reactions at both tails of this distribution are not helpful. Advances in our understanding