Training and Education
Joyce L. Shields, Joseph B. Cavallaro, Beverly M. Huey, Harold P. Van Cott
This chapter focuses on the relationship between job-oriented training and human factors research. The underlying premise is that human factors research can help solve some of the difficult problems confronted by the people who design, develop, and manage training programs and who deliver training services to trainees.
Education is included as a topic because there are strong links between general education and training for the workplace. Education prepares for job-oriented training and influences it. To a large extent, the educational attainment of prospective trainees determines what the training provider can do and how the training can or should be presented. In this sense, the level of educational attainment of the trainee is a parameter in the assembly of any given training program.
In job-oriented training, training and human factors are to some extent reciprocal. When a new system is being created, the goal is to design the system so that training requirements are as light as possible. If there are design deficiencies, the operators of the system must be trained to overcome or compensate for them. Thus, the absence of effective human factors inputs into the system design process generates technical problems for the people responsible for the training of operator and maintenance personnel.
Human-centered systems design and development also come into the process of providing the tools used for training. The complete ensemble of
such tools is itself a system and is, therefore, a reasonable domain for the kinds of systems research carried out by human factors scientists.
There are many ways in which human factors research has been, and continues to be, useful in addressing some of the problems now confronted by the people engaged in providing job-oriented training. In the rest of this chapter, we explore these problems and the current and potential contributions of human factors to their solution.
Changing Conditions of Work
The very nature of work is changing. Instead of the physical manipulations that formerly characterized most industrial jobs, more and more jobs are based on the manipulation of symbols—words and numbers. New technologies and work processes are shifting skill requirements in some types of jobs from manual and job-specific to generic, from concrete to abstract. For example, in many modern factories, the production process is so computerized that many workers no longer physically execute tasks but are now responsible mainly for monitoring automated processes. Whereas previously a machinist adjusted and maintained machinery by hands-on manipulation, today a technician works through a keyboard and CRT display and consequently has a more distant relationship to both process and product.
A direct consequence of computerization is that tangible work outcomes are less evident. There is some evidence suggesting that spontaneous learning and skill transfer do not work as well in highly computerized environments as in more traditional workplaces. For example, the difference in performance between the best and the average appears to be greater in computerized work than in traditional jobs. This finding leads to the conjecture that less incidental or informal learning takes place in work that involves symbol manipulation. This conjecture is supported by the fact that much about symbol manipulation is invisible, unlike the visible actions involved in object manipulation. The findings also suggest that John Seely Brown (1988) and others are correct when they call for new forms of training such as a "cognitive apprenticeship," which would somehow make the mental procedures and decision processes of exemplary performers "visible" so that others might learn from them.
Changing Workforce Demographics and Changing National Patterns
Similar concerns are expressed about the changing demographics of the workforce. In this era of vigorous global competition, rapid technological
change, and shifting economic conditions, trainers must be able to adjust to such changes. In Workforce 2000 (Johnston and Packer, 1987), the following "demographic facts" are identified as shaping the workforce of the future:
There will soon be a slowdown in the growth of the population and the workforce—the greatest since the 1930s.
The average age of the workforce will rise and the pool of young workers will decline.
More women will enter the workforce, although the rate of increase will be smaller than in the past.
Minorities will be a larger share of new labor force entrants.
Immigrants will represent the largest share of the increase in the population and the workforce since the First World War.
Thus, the "next decade will usher in a workforce unlike anything Corporate America has ever seen—teeming with women, minorities, and the elderly, the antithesis of the workforce present at many companies today" (Goldstein, 1988). Employers will soon have to utilize these nontraditional sources of labor supply in order to sustain output—even if equal opportunity regulations are abolished.
Human resource implications of demographic shifts are numerous. For example, Taylor (1989) lists several training and development issues that will arise from the "age wave" of older workers, including the need to recognize and emphasize the contributions that older employees can make at all levels of the organization. It will be incumbent on the employer organization to provide effective training and retraining programs for these older workers.
Because people will remain in the workforce longer, there will be an increasing need to retrain people who have already reached positions of some seniority and authority. Some of these people will be the victims of the buggywhip scenario—their jobs will just become obsolete. For others, the job will still be there to be done, but it will be done by a robot or a computer.
Older workers might need to be retrained through substantially different techniques than those used to train true novices. In some instances, a highly proficient worker may be required to unlearn some of the skills that gave status in the original job. Such skills may actually be counterproductive in the latest version of some jobs.
Discussing the greater numbers of women in the workforce, Johnston and Packer (1987) predict that the distinctions between males and females in wage rates and in terms of concentrations in particular jobs will decline in response to market pressures. They also predict that the proportion of "part-time, flexible, and stay-at-home jobs will increase, and [that] total
work hours per employee are likely to drop in response to the needs of women to integrate work and child rearing." On the plus side, training requirements may become more uniform if both men and women are doing the same work. On the negative side, costs could increase if more jobs come to be shared by two individuals, leaving more people to train.
The Office of Technology Assessment (1990) discusses the number of new workforce entrants who will be coming from minority groups and who are often disadvantaged because of inadequate basic educational opportunities. Furthermore, many young workforce entrants, regardless of their ethnic heritage, will confront a daunting challenge if the predicted shift to more highly skilled jobs within the manufacturing sector takes place (Personick, 1989). If, as seems likely, a similar shift is taking place in the service sector as it continues to expand, the country might find itself in the ironic condition of having large numbers of workers unemployed while there are labor shortages in relatively well-paying job categories.
At the national level, the growing service sector is expected to account for over 90 percent of the 18 million new jobs forecast for the next 10 years. Much of the growth is expected to occur in the health, education, business, and food services. In both the service and manufacturing sectors, systems analyst and computer programmer jobs will be among the fastest-growing occupations, and workers in other occupations will need to be increasingly computer literate. Improved office technology will continue to limit the growth of administrative support occupations, which will be among the slower-growing groups of occupations (Bureau of Labor Statistics, 1991).
Changing Organizational Arrangements
Numerous changes are also occurring in the nature and structure of work at the organizational level. Changes instituted by organizations include decentralizing and flattening organizational structures, establishing work teams, pushing decision-making responsibilities downward throughout the organization, and streamlining the management ranks through the elimination of middle- and lower-level managers.
Flattened organizational structures reduce the hierarchical flow of information and should accelerate decision making. Job classifications are broadening as work teams assume some or all of the responsibilities of the first-line supervisor such as inspection, quality control, production scheduling, work allocation, and coordination with other departments (Office of Technology Assessment, 1990).
Decentralizing the organizational structure and establishing work teams increase the worker's autonomy and responsibility. In a decentralized organizational structure, fewer layers of management result in managers' supervising greater numbers of workers. Managers are unable to manage at the
same level of detail as before, and some of the responsibility and authority previously held by middle managers is delegated to workers in lower levels of the organization (Kravitz, 1988).
The development of cross-functional work teams has also increased the responsibilities of workers. Employees no longer perform fragmented tasks, but have broad responsibility for knowing how to perform all aspects of a job, a situation that draws greater initiative from workers and increases their involvement in decision making (Bailey, 1990). Consequently, social skills will also become more important as organizational structures change and as more service jobs are created. Interpersonal skills, teamwork, and communication skills will become critical for every employee, from the chief executive to the line worker (Carnevale et al., 1988).
The nature of work is also likely to change owing to developments in areas such as telecommunications (e.g., teleconferencing, voice mail, picture phones, and facsimile machines) and personal computers (e.g., wide-area networks, modems, communications software, laptop and notebook computers). These technologies are making it possible for more and more workers to work out of the home. As more people work at home, a variety of management and organizational questions are raised, including how to structure teamwork, how to train, and how to measure performance.
HUMAN FACTORS CHALLENGES
It seems wise to have some sort of conceptual model to guide the process of laying out a research program at the strategic level. Human factors researchers are fortunate to have several such models from which to choose. One was generated in the 1980s when the Army developed the policies, procedures, and tools to integrate manpower, personnel, training, system safety, health hazard, and human factors engineering trade-offs more fully into the design of systems (Booher, 1990). This program, referred to as MANPRINT (Manpower and Personnel Integration), successfully refocused the design process on the performance of the total system in the hands of the user. The program has led to the design and redesign of weapon systems with greatly reduced training demands and enhanced performance capabilities. Lessons learned in this program have potential application to the acquisition and design of a wide range of complex systems.
Another approach was introduced by Rouse (1991), who provided a systematic framework for ensuring that the concerns, values, and perceptions of all participants in a design effort, as well as the ultimate users, are considered and balanced. This approach seeks to enhance human abilities,
overcome human limitations, and foster user acceptance throughout the design process. Rouse's method recommends asking the following questions in the design and evaluation of systems:
Viability—are the benefits of using a system sufficiently greater than its costs?
Acceptance—do organizations/individuals use the system?
Validation—does the system solve the problem?
Evaluation—does the system meet the requirements?
Demonstration—how do observers react to the system?
Verification—is the system put together as planned?
Testing—does the system run, compute, and so forth?
From a designer's perspective, the process begins with measures of effectiveness and then addresses understandability and, finally, compatibility. From a user's perspective, the evaluation begins with compatibility, and goes on to understandability and then effectiveness (top-down design vs. bottom-up evaluation).
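Rouse's seven questions form an ordered checklist: a designer works top-down from viability, while evaluation proceeds bottom-up from testing. The sketch below is a minimal, hypothetical encoding of that checklist (the function name and pass/fail answers are illustrative, not from Rouse's text):

```python
# The seven questions in bottom-up (evaluation) order; each entry pairs a
# short name with the question as listed above.
ROUSE_QUESTIONS = [
    ("testing", "Does the system run, compute, and so forth?"),
    ("verification", "Is the system put together as planned?"),
    ("demonstration", "How do observers react to the system?"),
    ("evaluation", "Does the system meet the requirements?"),
    ("validation", "Does the system solve the problem?"),
    ("acceptance", "Do organizations/individuals use the system?"),
    ("viability", "Are the benefits of the system sufficiently greater than its costs?"),
]

def first_open_question(answers):
    """Return the first question not yet answered affirmatively,
    working bottom-up, or None if every question passes."""
    for name, question in ROUSE_QUESTIONS:
        if not answers.get(name, False):
            return name, question
    return None

# Example: a system that runs, is built as planned, and demos well,
# but has not yet been shown to meet its requirements.
answers = {"testing": True, "verification": True, "demonstration": True}
print(first_open_question(answers))
# ('evaluation', 'Does the system meet the requirements?')
```

The point of the ordering is that a "yes" higher on the list is meaningless until the questions below it have been answered.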
Both of these macro-ergonomic approaches can be applied to the design and development of training systems. However, the field would benefit from additional competition. Researchers should be encouraged to develop additional concepts for this area of discourse.
Building Bridges Between Theory and Practice
A major research issue related to designing and implementing effective training is closing the gap between (1) theory and research and (2) applications. In an article in Human Factors, Cannon-Bowers et al. (1991) addressed the challenge of linking the theory and practice of training. This is particularly difficult because research and practitioner communities exist separately and are not well coordinated. The authors highlight linkages between training-related theory and training-related techniques in the areas of training analysis, design, and evaluation. This represents a good start in illustrating the possibilities; however, as the authors note, the current body of training research does not provide the practitioner with a full complement of training techniques that are fully grounded in logical principles. Further, there remains the question of how to enhance the acceptance and translation of research into practice.
Boff and Rouse (1987) emphasized the issue of ensuring an adequate supply of useful and usable data resources in formats that are specifically intelligible to design engineers. Although there is a substantial volume of research information that has potential value and relevance to systems design, Boff and Lincoln (1988) found that the data had low usability in terms
of accessibility, interpretability, and applicability. If these data were more usable, they could greatly enhance the effectiveness of new training systems; therefore, ways to exploit the existing investment in research findings need to be developed.
This theme is also present in a report by the Office of Technology Assessment (1990). The report indicates that research that could lead to improvement in the efficiency and quality of training often fails to be integrated into training practices. The report recommends that federal agencies (1) develop and disseminate information about best-practice approaches and technologies and (2) work with industry to develop and implement operating standards for training technologies and related software.
An example of such an approach in the human factors area is the Crew System Ergonomics Information Analysis Center (CSERIAC). The objective of CSERIAC is to acquire, analyze, and disseminate timely information on crew system ergonomics. CSERIAC's principal products and services include technical advice and assistance; customized bibliographies; written reviews, analyses, and assessments; workshops, conferences, symposia, and short courses; and special studies.
The challenge to the human factors community is to devise, evaluate, and perfect ways to transfer the methods, data, and principles of human-centered design into the procedures used by training system designers and managers. Human factors scientists should also ensure that feedback from applications experiences closes the loop between research and practice.
A central methodological problem is the lack of a comprehensive array of performance measures that are acceptable to both researchers and practitioners. The basic premise is that one cannot know whether a training system or training practice is better than an alternative unless one can (1) measure system performance reliably and accurately and (2) link such bottom-line measures with specific attributes of the training system.
One cause of the problem is that there are so many different tasks in the world of work, and each task has its own mix of desired outcomes. Consequently, research that is intended to provide general findings is often not convincing to the practitioner, who sees a distinction between the supposedly generic laboratory task and the task for which his or her students are being trained.
A closely related methodological problem is the lack of criteria for doing an overall evaluation of training programs. Therefore, comprehensive evaluations are rarely conducted, even though their importance is recognized. When evaluations are done, about half of all companies and the vast majority of large, successful corporations use crude measures such as
student ratings or inferences drawn from informal follow-up interviews. This may be due in part to the difficulty and expense of specifying better measures as well as to the concerns of trainers and educators that negative results may jeopardize future efforts (Goldstein, 1992). The human factors community can contribute to the resolution of these issues.
An example of such a contribution comes from the work of Kraiger et al. (1992), who presented a theoretically based model of training evaluation in an attempt to link evaluation to learning objectives and provide a more conceptual framework for evaluation. The authors conceive of learning as a multidimensional process of cognitive, behavioral, and affective change and believe it is important to move away from the influence of behaviorism in evaluating training. They argue that the emphasis on behaviorism has stifled the development of training as a scientific discipline. The framework they present leads to a large number of suggested measures and provides a stimulus for further research on the development and testing of measures that could lead to a better understanding of training effectiveness.
Toward a similar end, Goldstein (1992) points out that training can be characterized as attempting to achieve one of four types of validity: (1) training validity, (2) transfer validity, (3) intraorganizational validity, and (4) interorganizational validity. The first two validities are measured by the performance of the individual trainee at the time of training or on the job. Both training and transfer validity begin with needs assessment. But transfer validity is greatly affected by the organizational environment of the trainee on the job. Therefore, special consideration has to be given to the collection and interpretation of the results.
The last two types of validity refer to the effectiveness of the training program for a new group of employees in the same organization or in different organizations. In both cases, data should be collected from new populations of trainees to determine the effectiveness of the training program for different trainees and organizations. Validities can change because of (1) changes in tasks, people, or the organization; (2) the rigor of the data collection/evaluation; and (3) changes in the training program. Carefully planned and executed evaluation programs are clearly required to assess and demonstrate the value of training.
A significant development in evaluation has been the estimation of the dollar value, or utility, of training interventions (Clegg, 1987). Equations have been derived for a variety of situations, from estimates of return on investment to cost-benefit analyses using organization-level measures of performance (Wexley and Baldwin, 1986; Paquet et al., 1987). However, only a third of the large, successful companies evaluate bottom-line improvement to assess training or educational success.
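The cited authors derived equations for different situations; one widely used formulation (the Brogden-Cronbach-Gleser utility model) estimates the net dollar gain of training as the product of the duration of the effect, the number trained, the effect size in standard-deviation units, and the dollar value of one standard deviation of job performance, minus total cost. A minimal sketch with purely illustrative numbers:

```python
def training_utility(n_trained, years, effect_size, sd_dollars, cost_per_trainee):
    """Net dollar utility of a training program:
    Delta-U = T * N * d * SDy  -  N * C
    where T = years the benefit lasts, N = number trained,
    d = performance gain in SD units, SDy = dollar value of one SD
    of job performance, and C = cost per trainee."""
    benefit = years * n_trained * effect_size * sd_dollars
    cost = n_trained * cost_per_trainee
    return benefit - cost

# Illustrative only: 100 trainees, benefits lasting 2 years, a 0.4 SD
# performance gain, SDy of $10,000, and a cost of $1,500 per trainee.
print(training_utility(100, 2, 0.4, 10_000, 1_500))  # 650000.0
```

The hard empirical problems the text identifies — estimating the effect size and the dollar value of a standard deviation of performance — are exactly the inputs this equation takes for granted.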
It is clear that conducting evaluations of the relevance and effectiveness of education and training programs has been complex and expensive. The
questions remain: How can information be collected in an affordable manner to aid decision makers in determining the value of training? And how can organizations be encouraged to employ "bottom-line" methods for assessing their training programs?
Evaluation studies would be more logical and convincing if they were tied to a formal assessment of the real needs of the worker and the requirements of the job. A common complaint about existing training programs is that they are not responsive to these areas.
Tannenbaum and Yukl (1991) tend to confirm this allegation. They report that comprehensive needs analyses are rarely done in organizations. Therefore, training program content is not always linked to organizational strategy or job performance requirements.
When needs analysis is attempted, the predominant approach is likely to be to use the procedures outlined by McGehee and Thayer (1961). These link organization, task, and personal attributes in a basic stimulus-response type of logic structure. Given the major changes in workplace training in the last 30 years and the requirements of the modern workplace, the validity of this needs assessment framework is now problematic.
The starting point for specifying the requirements of a job is often a task analysis in which a human factors practitioner focuses on the technical and procedural portion of the task. Task analysis is a general-purpose method for describing how tasks are performed. Devised by Robert B. Miller in 1954, it was first used to specify the characteristics of the displays and controls needed to enable good operator performance. The method was later expanded to identify and document the skills, knowledge, and abilities (SKAs) that a person must have to perform a task and job effectively. These SKAs formed a database called qualitative and quantitative personnel requirements information, which was used to forecast manpower, human factors engineering, and training needs for new systems. Task analysis was subsequently adopted by educators and test construction specialists.
The methods of task analysis in general use today assume that the content of jobs can be defined by their stimulus and response components. This assumption is understandable considering that task analysis had its origin in behavioristic psychology. Although the stimulus-response approach may be acceptable for characterizing the psychomotor tasks that have historically been dominant components of most jobs, it cannot be used to identify the cognitive content of jobs. Our ability to analyze the growing cognitive elements in everyday tasks has not kept up with events in the real world. Consequently, we need methods to handle tasks that may involve more complex cognitive and social skills. Such forward-looking analysis procedures have been recommended (Tannenbaum and Yukl, 1991) but have not been tested.
One way to broaden the scope of task analysis is to view cognitive
functions as similar to computer programs. A computer cannot function without a protocol, a program that tells the computer how to perform a given application. Similarly, a protocol is needed to describe the overt and cognitive tasks that make up a particular job. Without such a protocol, no rational basis, other than tradition or intuition, exists for systematically defining and describing the content of a job training program or the procedures to be followed in performing job tasks.
This approach has yet to be fully implemented, and no satisfactory alternative exists. Although several attempts have been made to overcome the limitations of conventional task analysis (Glaser, 1992), no one has yet devised a method that is easily learned and acceptable to practitioners as well as researchers. To be most useful, such a method should allow the cognitive, sensory, and motor components of a task to be described as part of a comprehensive systems and functions analysis.
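One way such a protocol might be recorded is as a structured task description that keeps cognitive components alongside the traditional stimulus (sensory) and response (motor) elements. The record below is a hypothetical sketch — the field names and example content are illustrative, not an established task-analysis taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class TaskStep:
    cue: str                                        # sensory: what the worker perceives
    action: str                                     # motor: the overt response
    decisions: list = field(default_factory=list)   # cognitive: judgments made at this step
    knowledge: list = field(default_factory=list)   # SKAs the step draws on

@dataclass
class TaskProtocol:
    name: str
    steps: list

# Example drawn from the monitoring work described earlier in the chapter.
monitor = TaskProtocol(
    name="monitor automated production line",
    steps=[
        TaskStep(
            cue="throughput display drops below setpoint",
            action="call up the diagnostic screen",
            decisions=["is the drop a sensor fault or a real stoppage?"],
            knowledge=["normal throughput ranges", "diagnostic screen layout"],
        ),
    ],
)
print(len(monitor.steps))  # 1
```

A conventional stimulus-response task analysis would capture only the first two fields; the last two are precisely the "invisible" content that a cognitive protocol would make explicit.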
Specific Research Projects and Programs
Two major studies conducted recently document the increasing importance accorded to fundamental workplace skills. One comprehensive study was a joint two-year project of the American Society for Training and Development and the U.S. Department of Labor. Their report, Workplace Basics: The Essential Skills Employers Want (Carnevale et al., 1990), identified 16 basic skills within 7 broad skill groups:
the foundation: learning to learn;
competence: reading, writing, and computation;
communication: listening and oral communication;
adaptability: creative thinking and problem solving;
personal management: self-esteem, goal-setting/motivation, and personal/career development;
group effectiveness: interpersonal skills, negotiation, and teamwork; and
influence: organizational effectiveness and leadership.
Another widely cited study, by the Department of Labor (Secretary's Commission on Achieving Necessary Skills, 1991), proposed a set of 14 fundamental skills within 3 categories that it regards as the foundation for effective job performance:
basic skills—reading, writing, arithmetic/mathematics, listening, and speaking;
thinking skills—creative thinking, decision making, problem solving, knowing how to learn, and reasoning; and
personal qualities—individual responsibility, self-esteem and self-management, sociability, and integrity.
This study is part of a major effort by the U.S. Department of Labor to examine worker skill requirements and other issues related to work-based training and the transition from school to work. The challenge for the human factors research community is to understand how fundamental skills relate to the performance of technical tasks, how to design comprehensive training programs incorporating knowledge of worker skills, and how to evaluate these skills in the workplace.
Training individuals to perform effectively as teams is another area of increasing importance and requires greater research attention (Salas et al., 1992). Work teams are becoming increasingly common in American organizations as "total quality management" programs are introduced. The performance of these teams is critical to organizational success. Research is needed in order to develop a greater understanding of work teams and of when and how they are likely to be most effective. In addition to training, other interventions to improve team effectiveness include team building, job redesign, and group incentive programs. In designing training programs, team-building interventions, and other programs for complex systems, human factors researchers are in a unique position to assist in developing information on the relationship between team processes and team performance over time and on the relationship between task type, stage of team development, and team performance.
The design of educational and training programs to meet these new demands offers special challenges. Researchers who have focused on the more traditional areas of how to design, deliver, and evaluate training and educational programs will have to integrate research from a number of fields in order to give trainees the resources they need to develop the skills required by the changing work environment. Furthermore, technical training must increasingly be integrated with a broader set of skills. Bringing all this material together in the design and development of training systems is difficult. The typical human factors specialist may not be very well versed in such subjects as group dynamics, which are rarely taught in system design curricula. As indicated in a recent publication by the National Research Council (Van Cott and Huey, 1992), although human factors practitioners may learn much about training design and development, they receive little formal education in these "softer" areas. Therefore, the human factors research community faces the challenge of developing the research base to adequately educate and prepare practitioners to develop training systems for the workplace that meet the demands of improving group rather than individual performance.
Flexible, Individualized Training
Thomas Bailey (1990), working with the National Center for Research in Vocational Education, reports that the approximately half of U.S. youth who do not go on to college receive little help in making the transition from school to work. The challenge is how to design educational and training programs to create an active learning environment that can ease this transition. Research is demonstrating that individuals have different learning styles and that the effectiveness of alternative teaching and training methods depends on how well they match an individual's preferred learning style (Martel, 1991). The importance of individual differences in learning styles also has implications for education and training design. For example, Snow and Lohman (1984) found that students of low ability benefited more from high-structure/low-complexity programs and that for students of high ability the reverse was true. Much of this research has been conducted in grades K-12; more research is needed to determine the full parameters of aptitude-treatment interactions and also to determine how to train and educate students so they can continue to learn in natural settings (Goldstein, 1992). Goldstein believes that "the processes involved in designing instructional systems and the revolution first described by both Gagne and Brenner in the 1960s has only begun to be understood." Research is needed on methodologies to support the design process, so that training developers can effectively incorporate what is to be learned with who the learners are and what characteristics they possess.
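The Snow and Lohman finding is a classic aptitude-treatment interaction: the best program structure reverses as learner ability crosses some threshold. The toy sketch below illustrates the crossed assignment rule such a finding suggests; the scores and cutoff are entirely hypothetical, not Snow and Lohman's data:

```python
def recommended_structure(ability_score, cutoff=50):
    """Assign a learner to a program structure based on a
    (hypothetical) ability score, illustrating the crossover:
    low ability -> high structure, high ability -> low structure."""
    if ability_score < cutoff:
        return "high-structure/low-complexity"
    return "low-structure/high-complexity"

print(recommended_structure(35))  # high-structure/low-complexity
print(recommended_structure(80))  # low-structure/high-complexity
```

A training developer who ignores the interaction and picks one structure for everyone necessarily disadvantages one end of the ability distribution.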
Research in cognitive psychology has also highlighted shortcomings in traditional approaches to teaching. Cognitive psychology focuses on how learning occurs by addressing ways the human mind processes and uses information. It has shifted the emphasis in learning to the mental operations and attributes of the learner/trainee and has provided a new theoretical basis for training and instructional technology. Cognitive research suggests that teaching/training should emphasize the development of thinking and problem-solving skills rather than the learning of facts through expository lectures (Molnar, 1989). In addition, much current research is addressing the effect of context on training and the transfer of information obtained during training to the situation in which it must be used. (See Druckman and Bjork, 1994, for a full discussion of the effect of the training context on effective transfer to other situations and systems.) These lines of research have significant implications for the design of programs that enhance learning and retention of knowledge and skills, that increase the probability of transfer of learning from one situation to another, and that support lifelong learning as well as the roles of teachers and trainers.
Cognitive science research has also now embraced the mental models concept (Stevens and Gentner, 1983; Johnson-Laird, 1983), which focuses
on the ways in which humans understand systems. According to Rouse and Morris (1986), mental models allow a person to understand and to anticipate the behavior of a system, which implies that mental models must be capable of prediction. These models are domain-, context-, and situation-specific. "With respect to training, a number of theorists have hypothesized that training that fosters development of accurate mental models of a system will improve performance" (Cannon-Bowers and Salas, 1990:4). However, training in only general principles of system design and function is of limited utility (e.g., Kieras and Bovair, 1984; Morris and Rouse, 1985). To obtain optimal training, guidance is required in how to apply the knowledge in order to accomplish the task (Rouse and Morris, 1986).
Changing Roles of Teachers and Trainers
Just as new job-specific and managerial skills are required to meet today's demands, the roles of the teacher and trainer are changing. Rather than functioning strictly as lecturers, teachers and trainers will increasingly need to become facilitators. The major reason for this change is the exponential growth in information resources. Teachers can no longer be viewed as all-knowing sources of information. Because of the huge and expanding body of knowledge, teachers cannot possibly remain abreast of information in all areas (Molnar, 1989).
The focus of teachers and trainers is also changing. They must be able to access information on demand themselves and help others learn to do the same. Research is required on learning to learn and on how to teach people both how to determine what information is required for specific tasks and how to access it quickly. At the same time, research is needed on the design of job aids that help teachers and workers acquire information on demand.
Instructor aids form another area in which technological advances should be promoted and in which potential innovations should be evaluated from a human factors perspective. For example, some progress has been made over the past 15 years or so in the construction of software packages for desktop computers that aid instructors in composing lesson plans, exercises, simulation scenarios, and examinations. These systems appear to vary markedly in their utility. Consistently high quality in such aids for training program managers might be obtained if a properly authorized organization were to sponsor rigorous evaluations of new products before these products were released to the market.
A key human factors research issue concerning the use of simulation technology in training programs is fidelity. Technology is available to
make simulators realistic, comprehensive representations of a system; however, there is limited knowledge on the relationship of physical and functional fidelity and training effectiveness. In 1985, a National Research Council working group on simulation commented (Jones et al., 1985):
A comprehensive body of behavioral principles and methods relevant to simulation does not exist. Integrative assessments of the research and operating experience are almost completely lacking for important topics such as fidelity. … A result is that there is no history of simulation research and utilization technology in a readily accessible form that serves as a guide for the future.
That need was partially addressed by Hays and Singer (1989), who provided an extensive review of the literature on simulation fidelity. However, the need for a comprehensive body of data on the behavioral principles and methods for determining the effectiveness of fidelity persists.
Human factors specialists have asserted that physical fidelity and psychological fidelity may not be equivalent, but in the absence of empirical data to support this view, simulator manufacturers have increased simulation fidelity as fast as funding and developing technology have allowed. This practice will continue until studies test the assumption empirically.
There is a need for laboratory and field research to address the issue of how much and what kind of fidelity is required for cost-effective simulator training. This research will become increasingly important as simulation technology begins to provide some form of augmented reality that can be used to create training environments. The relationship between fidelity and training effectiveness remains unknown. Before it can be established, the attributes of psychological fidelity need to be defined, and a metric is needed with which psychological fidelity can be measured and compared with physical fidelity. All of these areas need attention from human factors research.
On a more practical level, studies have compared the cost-effectiveness of alternative training techniques and technologies (Orlansky and String, 1977). The results showed that simple, low-cost, part-task, and procedural trainers can be more effective for certain types of training than more costly full-scale, high-fidelity simulators. In the 1960s, task analysis and the systems approach to training—two methods for qualitatively defining training objectives and the skills, knowledge, and abilities required for a job so that a training course will be valid—were developed. In the 1970s, new research on team training was initiated.
In the 1980s, researchers built on that knowledge to develop crew resource management training; this technique is now used throughout the airline industry to train crews in team skills and has recently been extended to other systems as well. Also in the 1980s, human factors researchers,
faced with the high costs of conventional training devices and classroom instruction, devised the concept of "embedded training."
Embedded training, a training capability built into or added onto operational systems, presents an opportunity to practice tasks and skills, especially those that should be performed automatically under high-stress conditions. It uses the computer and display capabilities of operational equipment to provide opportunities for task training, practice, and performance feedback on the same equipment that is used in operations. This approach is better suited for refresher training than for training in new skills (Oberlin, 1988) and can provide high-quality, standardized, individualized training. Today, the technique is used to train Army tank gunners, the control room crews of commercial process control facilities, and people in many other occupations.
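The embedded-training approach can be illustrated with a minimal sketch. All names and numbers here are hypothetical, for illustration only: the point is that the same software that drives the operational displays also runs practice drills and scores the operator's responses on that equipment.

```python
# Sketch of the embedded-training idea: operational software gains a
# training mode that presents simulated targets on the same displays
# and controls used in operations, and records performance feedback.
# All class and field names are hypothetical.

class GunneryConsole:
    """Operational console that can also run embedded practice drills."""
    def __init__(self):
        self.training_mode = False
        self.log = []   # (target id, hit?) feedback recorded during drills

    def engage(self, target, aim):
        """Same engagement logic in operations and in training."""
        hit = abs(aim - target["bearing"]) <= target["tolerance"]
        if self.training_mode:
            # Feedback is given on the same equipment used in operations.
            self.log.append((target["id"], hit))
        return hit

    def run_drill(self, targets, aims):
        """Present simulated targets and report a practice score."""
        self.training_mode = True
        try:
            hits = sum(self.engage(t, a) for t, a in zip(targets, aims))
        finally:
            self.training_mode = False   # return console to operational use
        return hits / len(targets)

console = GunneryConsole()
targets = [{"id": "T1", "bearing": 90.0, "tolerance": 2.0},
           {"id": "T2", "bearing": 45.0, "tolerance": 2.0}]
score = console.run_drill(targets, aims=[89.5, 50.0])
print(score)   # fraction of simulated targets hit
```

Because the drill exercises the operational code path itself, the practiced skill transfers directly to the equipment in use, which is why the technique suits refresher training of well-defined procedures.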
Computer networking and telecommunications will also continue to open up new methods of team training. Simulation networking (SIMNET), the simulator network developed under the sponsorship of the Defense Advanced Research Projects Agency for training teams in tank warfare, uses workstations that simulate Army tank crew stations to train soldiers in different geographic locations in battle tactics (Thorpe, 1993). The workstations are interactive so that the responses of one tank or tank unit alter the engagement environment for all training participants, allowing trainees to practice a range of communication and decision-making tasks and roles in a realistic environment. The SIMNET concept appears to be generalizable to other education and training domains that have geographically dispersed workers.
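The core networked-simulation idea can be sketched in a few lines. This is a deliberate simplification with hypothetical names: SIMNET distributed state over long-haul networks, whereas the sketch uses a single shared in-memory object, but the essential property is the same: one crew's action changes the engagement environment every other crew perceives.

```python
# Minimal sketch of the networked-simulation idea behind SIMNET:
# every workstation shares one engagement state, so an action by one
# crew immediately alters what every other crew perceives.
# (Hypothetical names; the real system used a distributed protocol.)

class Engagement:
    """Shared battle state visible to all networked workstations."""
    def __init__(self):
        self.events = []     # ordered log of actions, visible to all crews
        self.positions = {}  # tank id -> (x, y)

    def broadcast(self, event):
        self.events.append(event)

class TankWorkstation:
    """One crew's simulator, possibly at a remote site."""
    def __init__(self, tank_id, engagement):
        self.tank_id = tank_id
        self.engagement = engagement
        engagement.positions[tank_id] = (0.0, 0.0)

    def move(self, x, y):
        self.engagement.positions[self.tank_id] = (x, y)
        self.engagement.broadcast((self.tank_id, "moved", (x, y)))

    def visible_events(self):
        # Each crew trains against the actions of all the others.
        return [e for e in self.engagement.events if e[0] != self.tank_id]

engagement = Engagement()
alpha = TankWorkstation("alpha", engagement)   # one geographic site
bravo = TankWorkstation("bravo", engagement)   # another site

alpha.move(3.0, 4.0)
print(bravo.visible_events())   # bravo's crew sees alpha's maneuver
```

The design choice worth noting is that there is no central scenario script: the training environment is produced by the participants' own interactions, which is what lets dispersed teams rehearse communication and decision making together.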
Virtual reality, sometimes called cyberspace or augmented reality, is the experience of perceiving and interacting through sensors and effectors in a computer-modeled and computer-generated environment. A person who experiences a virtual world is immersed in and surrounded by simulated sight, sound, touch, and feedback of force from simulated objects. Eventually, as the technology develops, the experience of virtual reality will become increasingly similar to the experience of the real world. This technology was recently addressed by the National Research Council's Committee on Virtual Reality Research and Development (Durlach and Mavor, 1995).
The proper use and exploitation of the potential of virtual reality is fundamentally a psychological rather than an engineering or computer science problem. Engineering and computer science provide the basic tools, but psychology and human factors will be essential in shaping and using these tools in productive ways.
Since virtual reality is still in its infancy, it is impossible to predict all of the research that will be needed to make proper use of it. However, some needs are apparent. Two of the most basic are (1) the need to study the transfer of training from a virtual to a real-world environment and (2) the
need to understand the psychological as well as the physical basis for transfer of training. The high costs of modeling virtual environments make research on transfer of training and fidelity especially important.
Other research on virtual reality in training will be required to better understand such questions as the following:
Do individual differences affect the ability to adapt to and use virtual reality?
What methods will be needed to identify the instructional elements that will be sufficient to meet stated learning objectives in a learning environment and at the same time exclude "virtual noise"?
Can virtual reality systems be used to train teams?
Does having been immersed in and having adapted to a virtual training environment have any negative effects on cognitive or psychomotor performance in the real world?
Training is sometimes regarded as an alternative to human factors and vice versa. The role of the human factors specialist has been to design systems that minimize the need for training. Success from either perspective is measured by improved performance, productivity, and efficiency of individuals and organizations. The human factors/systems perspective should not replace or usurp the perspective of the training specialist but can constructively supplement it.
Among the issues that could benefit from human factors research, the highest priorities should be given to the design and evaluation of complex training systems and the design and application of learning technologies. Human factors scientists can add to the understanding of these issues by analyzing existing data, synthesizing such data across multiple studies, and conducting new empirical studies. Examples of the type of research that is required include the development of systematic design principles for training systems, empirical tests of alternative delivery practices, development and tests of theories to support the understanding and evaluation of training effectiveness at levels ranging from the system as a whole to the individual learner, and development and validation of cost-effective means and technologies to provide training on demand.
The issue of performance measurement runs through every portion of the literature on training, from basic or scholarly articles to articles for the teachers in the trenches of vocational education (Fitz-enz, 1994). The problem of developing good measurement methods and evaluating these methods exists on at least two levels. The easier of the two levels is the measurement of performance gains in a specific task setting. In such a situation,
the job provides its own criteria. The main problem with these rather obvious criteria is that variance between individuals, and within the same person during the learning sequence, tends to be small unless the trainee is put under serious task stress. In other words, these obvious measures of performance are relatively insensitive and may fail to differentiate between alternative training procedures that are in fact very unlike one another.
The second level of measurement is at the far end of the outcome sequence—that is, indices of the net impact of a training program on system or organizational effectiveness. The obvious difficulty with such measures is that the connection back to particular manipulations of the training regimen is tenuous at best. Real-world systems tend to be too noisy; too many factors other than variations in training techniques change during the course of the training process. Furthermore, the usual cure for such a noisy research environment, multiple trials, is out of bounds in the real-world settings in which impact research must be conducted.
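The insensitivity problem can be made concrete with a small simulation using entirely synthetic numbers, offered for illustration only: when the variance contributed by individual differences is large relative to the true difference between two training procedures, an obvious performance measure yields only a small standardized effect.

```python
# Synthetic illustration of why obvious performance measures can be
# insensitive: two training procedures that differ in true mean effect
# are hard to tell apart when between-trainee variance is large.
# All numbers are invented for the sketch.
import random
import statistics

random.seed(42)   # make the illustration repeatable

def trainee_scores(mean, sd, n):
    """Simulated end-of-course scores for n trainees."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Procedure B is truly better by 2 points, but individual differences
# contribute a standard deviation of 10 points.
group_a = trainee_scores(mean=70.0, sd=10.0, n=30)
group_b = trainee_scores(mean=72.0, sd=10.0, n=30)

diff = statistics.mean(group_b) - statistics.mean(group_a)
pooled_sd = statistics.pstdev(group_a + group_b)
effect_size = diff / pooled_sd   # standardized difference (Cohen's d, roughly)

print(round(diff, 2), round(effect_size, 2))
```

With a true difference of 2 points against a spread of 10, the standardized effect is small, so a single observed difference of group means is dominated by sampling noise — the quantitative face of the criterion-sensitivity problem described above.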
A closely related problem that must be pursued at a basic level of research is how to specify skill requirements for specific jobs. One need only contemplate how useful it would be to be able to list a set of measurable skills for each occupational specialty in the military jobs system or for each occupational description in the list of occupations used by the U.S. Department of Labor. We already have most of the tools for this in the skill taxonomies developed by human factors specialists.
We should also be able to match trainees with training strategies. In spite of such programs as MANPRINT, there are still significant gaps in our ability to fit the training to the learner (Clark and Taylor, 1994). Individual variability in learning styles may be a relatively fragile construct, but individuals do differ in their capabilities and limitations. Research is badly needed on how to develop descriptive profiles that can in turn be used for prescriptive actions in the delivery of the training service to the learner.
Bailey, T. 1990 Changes in the Nature and Structure of Work: Implications for Skill Requirements and Skill Formation. Berkeley, Calif.: National Center for Research in Vocational Education.
Boff, K., and W.B. Rouse 1987 System Design: Behavioral Perspectives on Designers, Tools, and Organizations. New York: North-Holland.
Boff, K.R., and J.E. Lincoln, eds. 1988 Engineering Data Compendium. Wright-Patterson Air Force Base, Ohio: Armstrong Aerospace Medical Research Laboratory.
Booher, H., ed. 1990 MANPRINT: An Approach to Systems Integration. New York: Van Nostrand Reinhold.
Brown, J.S., A. Collins, and P. Duguid 1988 Situated Cognition and the Culture of Learning. Technical Report No. IRL88-0008. Palo Alto, Calif.: Institute for Research on Learning.
Bureau of Labor Statistics 1991 Occupational Outlook Quarterly Fall. Washington, D.C.: U.S. Department of Labor.
Cannon-Bowers, J.A., and E. Salas 1990 Cognitive Psychology and Team Training: Shared Mental Models in Complex Systems. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, Miami, Fla.
Cannon-Bowers, J.A., S. Tannenbaum, E. Salas, and S. Converse 1991 Toward an integration of training theory and technique. Human Factors 33(3):281-292.
Carnevale, A.P., L.J. Gainer, A.S. Meltzer, and S.L. Holland 1988 Skills employers want. Training and Development Journal 42(10/October):22-30.
Carnevale, A.P., L.J. Gainer, and A.S. Meltzer 1990 Workplace Basics: The Essential Skills Employers Want. American Society for Training and Development/the U.S. Department of Labor. San Francisco: Jossey-Bass.
Clark, R.C., and D. Taylor 1994 The causes and cures of learner overload. Training 31(7):40-43.
Clegg, W. 1987 Management training evaluation: an update. Training and Development Journal February:65-71.
Druckman, D., and R.A. Bjork, eds. 1994 Learning, Remembering, Believing: Enhancing Human Performance. Committee on Techniques for the Enhancement of Human Performance, National Research Council. Washington, D.C.: National Academy Press.
Durlach, N.I., and A.S. Mavor, eds. 1995 Virtual Reality: Scientific and Technological Challenges. Committee on Virtual Reality Research and Development, National Research Council. Washington, D.C.: National Academy Press.
Fitz-enz, J. 1994 Yes. … you can weigh training's value. Training 31(7):54-58.
Glaser, R. 1992 Learning, cognition, and education: then and now. Pp. 239-265 in H.L. Pick, P.W. Van den Broek, and D.C. Knill, eds., Cognition: Conceptual and Methodological Issues. Washington, D.C.: American Psychological Association.
Goldstein, I. 1988 Tomorrow's workforce today. Industry Week 41-43. 1992 Training in Organizations: Needs Assessment, Development, and Evaluation, 3rd ed. Pacific Grove, Calif.: Brooks/Cole.
Hays, R., and M. Singer 1989 Simulation Fidelity in Training System Design: Bridging the Gap Between Reality and Training. New York: Springer-Verlag.
Johnson-Laird, P.N. 1983 Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge, Mass.: Harvard University Press.
Johnston, W., and A. Packer 1987 Workforce 2000: Work and Workers for the 21st Century. Indianapolis, Ind.: Hudson Institute.
Jones, E., R. Hennessy, and S. Deutsch, eds. 1985 Human Factors Aspects of Simulation. Working Group on Simulation, Committee on Human Factors, National Research Council. Washington, D.C.: National Academy Press.
Kieras, D.E., and S. Bovair 1984 The role of a mental model in learning to control a device. Cognitive Science 8:255-273.
Kraiger, K., J. Ford, and E. Salas 1992 Integration of Cognitive, Behavioral, and Affective Theories of Learning Into New Methods of Training Evaluation. Unpublished paper. U.S. Naval Warfare Center, Orlando, Fla.
Kravitz, D. 1988 The Human Resources Revolution: Implementing Progressive Management Practices for Bottom Line Success. San Francisco: Jossey-Bass.
Martel, L. 1991 The Integrative Learning System: A Review of Theoretical Foundations. Hilton Head Island, S.C.: National Academy of Integrated Learning.
McGehee, W., and P.W. Thayer, eds. 1961 Training in Business and Industry. New York: Wiley.
Molnar, A. 1989 Information and communications technology: today and in the future. In Lifelong Engineering Education, Proceedings from a Symposium of the Royal Swedish Academy of Engineering Sciences, IVA Rapport 365, Stockholm, Sweden.
Morris, N.M., and W.B. Rouse 1985 The effects of type of knowledge upon human problem solving in a process control task. IEEE Transactions on Systems, Man, and Cybernetics SMC-15:698-707.
Oberlin, M. 1988 Some considerations in implementing embedded training. Human Factors Society Bulletin 31(4):1-4.
Office of Technology Assessment 1990 Worker Training: Competing in the New International Economy. Washington, D.C.: U.S. Government Printing Office.
Orlansky, J., and J. String 1977 Cost-Effectiveness of Flight Simulators for Military Training. Paper No. 1275, August 1977. Arlington, Va.: The Institute for Defense Analyses.
Paquet, B., E. Kasl, L. Weinstein, and W. Waite 1987 The bottom line: here's how one company proved the business impact of management training. Training and Development Journal May:27-33.
Personick, V. 1989 Industry output and employment: a slower trend for the nineties. Monthly Labor Review 11(November 2):24-41.
Rouse, W. 1991 Designing for Success: A Human Centered Approach to Designing Successful Products and Systems. New York: Wiley.
Rouse, W.B., and N.M. Morris 1986 On looking into the black box: prospects and limits in the search for mental models. Psychological Bulletin 100(3):349-363.
Salas, E., T. Dickinson, S. Converse, and S. Tannenbaum 1992 Toward an understanding of team performance and training. In R. Swezey and E. Salas, eds., Teams: Their Training and Performance. Norwood, N.J.: Ablex.
Secretary's Commission on Achieving Necessary Skills 1991 What Work Requires of Schools: A SCANS Report for America 2000. Washington, D.C.: U.S. Department of Labor.
Snow, R., and D. Lohman 1984 Toward a theory of cognitive aptitude for learning from instruction. Journal of Educational Psychology 76:347-376.
Stevens, A.L., and D. Gentner, eds. 1983 Mental Models. Hillsdale, N.J.: Erlbaum.
Tannenbaum, S., and G. Yukl 1991 Training and development in work organizations. Annual Review of Psychology 41:399-441.
Taylor, S. 1989 The aging of America. Training and Development Journal October:44-50.
Thorpe, J. 1993 Synthetic Environments Strategic Plan. Draft 3B. Defense Advanced Research Projects Agency, Alexandria, Va.
Van Cott, H.P., and B.M. Huey, eds. 1992 Human Factors Specialists' Education and Utilization: Results of a Survey. Committee on Human Factors, National Research Council. Washington, D.C.: National Academy Press.
Wexley, K.N., and T.T. Baldwin 1986 Management development. Journal of Management 12(2):277-294.