Flight to the Future: Human Factors in Air Traffic Control

12 Automation

Automation technology for air traffic control has steadily advanced in complexity and sophistication over time. Techniques for measurement and control, failure detection and diagnosis, display technology, weather prediction, data and voice communication, multitrajectory optimization, and expert systems have all steadily improved. These technological advances have made realistic the prospect of revolutionary changes in the quality of data and the aids available to the air traffic controller. The most ambitious of this new generation of automated tools will assist and could replace the controller's decision-making and planning activities. Although these technological developments have been impressive, there is also little doubt that automation is far from being able to do the whole job of air traffic control, especially to detect when the system itself is failing and what to do in the case of such failure. The technologies themselves are limited in their capabilities, in part because the underlying models of the decision-making processes are oversimplified. As we have noted, it is unlikely that technical components of any complex system can be developed in such a way as to ensure that the system, including both hardware and software components, will never fail. It is for this reason that the human is seen as an important element in the system: to monitor the automation, to act as supervisory controller over the subordinate subsystems, and to be able to step in when the automation fails. Humans are thought to be more flexible, adaptable, and creative than automation and thus better able to respond to changing or unpredictable circumstances. Given that no automation technology (or its human designer) can foresee all possibilities in a complex environment,
the human operator's experience and judgment will be needed to cope with such conditions. The implementation of automation in complex human-machine systems can follow a number of design philosophies. One that has received considerable recent interest in the human factors community is the concept of human-centered automation. As we mentioned earlier in this report, human-centered automation is defined as "automation designed to work cooperatively with human operators in pursuit of stated objectives" (Billings, 1991:7). This design approach is discussed in more detail toward the end of this chapter. Over a decade of human factors research on cockpit automation has shown that automation can have subtle and sometimes unanticipated effects on human performance (Wiener, 1988). In a recent report (Federal Aviation Administration, 1996a), the impact of cockpit automation on flight deck crew performance has been documented in some detail. Similar effects have been noted in other domains in which advanced automation has been introduced, including medical systems, ground transportation, process control, and maritime systems (Parasuraman and Mouloua, 1996). Understanding these effects is important for ensuring the successful implementation of new forms of automation, although not all such influences of automation on human performance will apply to the air traffic control environment. The nature of the relationships between controllers and ground control systems on one hand, and pilots and aircraft systems on the other, will also change in as yet unknown ways. For example, at one extreme, control of the flight path and maintenance of separation could be achieved by automated systems on the ground, data-linked to flight deck computers.
At the other extreme, as in some of the concepts involved in free flight, all responsibility for maintaining separation could rest with the pilot and on-board traffic display and collision avoidance systems (Planzer, 1995). Whether or not the most advanced automated tools are implemented, however, it is likely that the nature of the controller's tasks will change dramatically. At the same time, future air traffic control will require much greater levels of communication and integration between ground and airborne resources. In this chapter, we focus on four aspects of automation in air traffic control. We first describe the different forms and levels of automation that can be implemented in human-machine systems in general. Second, we describe the functional characteristics of several examples of air traffic control automation, covering past, current, and new systems slated for implementation in the immediate future. We do not discuss automation concepts that are still in the research and development stage, such as free flight, or the national route program, which will provide indications of air traffic control capabilities and requirements relevant for free flight considerations. Third, we discuss a variety of important human factors issues related to automation in general, with a view to drawing implications for air traffic control (see also Hopkin, 1995). Recent empirical investigations and human factors analyses of automation have been predicated on the view
that general principles of human operator interaction with automation apply across domains of application (Mouloua and Parasuraman, 1994; Parasuraman and Mouloua, 1996). Thus, many of the ways in which automation changes the nature of human work patterns may apply to air traffic control as well. At the same time, there may also be some characteristics of human interaction with automation that are specific to air traffic control. This caveat should be kept in mind because most of what has been learned in this area has come from studies of cockpit automation and, to a lesser extent, automation in manufacturing and process control. Finally, we discuss the attributes of human-centered automation as they apply to air traffic control.

FORMS OF AUTOMATION

The term automation has been so widely used as to have taken on a variety of meanings. Several authors have discussed the concept of automation and tried to define its essence (Billings, 1991, 1996; Edwards, 1977; Sheridan, 1980, 1992, 1996; Wiener and Curry, 1980). The American Heritage Dictionary (1976) definition is quite general and therefore not very illuminating: "automatic operation or control of a process, equipment, or a system." Other definitions of automation as applied to human-machine systems are quite diverse, ranging from, at one extreme, a tendency to consider any technology addition as automation, and, at the other extreme, to include only devices incorporating "intelligent" expert systems with some autonomous decision-making capability as automation.

Definition

A middle ground between the two extreme views of automation would be to define automation as: a device or system that accomplishes (partially or fully) a function that was previously carried out (partially or fully) by a human operator.
Because this definition emphasizes a change in the control of a function from a human to a machine (as opposed to the machine control of a function never before carried out by humans), what is considered automation will change over time with technological development and with human usage. Once a function is allocated to a machine in totality, then after a period of time the function will tend to be seen simply as a machine operation, not as automation. The reallocation of function is permanent. According to this reasoning, the electric starter motor of an automobile, which serves the function of turning over the engine, is no longer considered automation, although in the era when this function was carried out manually with a crank (or when both options existed), it would have been so characterized. Other examples of devices that do not meet this definition of automation are the automatic elevator and the fly-by-wire flight controls on many modern aircraft. By the same token, cruise controls in automobiles and autopilots in aircraft represent current automation. In air traffic control, electronic flight
strips represent a first step toward automation, whereas such decision-aiding automation as the final approach spacing tool (FAST) (Erzberger, 1992) represents higher levels of automation that could be implemented in the future. Similar examples can be found in other systems in aviation, process control, manufacturing, and other domains. From a control engineering perspective, automation can be categorized into (and historically automation has progressed through) several forms:

Open-loop mechanical or electronic control. This was the only automation at first, as epitomized by the elegant water pumping and clockworks of the Middle Ages: gravity or spring motors driving gears and cams to perform continuous or repetitive tasks. Positioning, forcing, and timing were dictated by the mechanism and whatever environmental disturbances (friction, wind, etc.) happened to be present. The automation of factories in the early parts of the Industrial Revolution was also of this form. In this form of automation, there is no self-correction by the task variables themselves. Much automation remains open loop, and precision mechanical parts or electronic timing circuits ensure sufficient constancy.

Classic linear feedback control. In this form of automation, the difference between a reference setting of the desired output and a measurement of the actual output is used to drive the system into conformance. The flyball governor on the steam engine was probably the first such device. The gun-aiming servomechanisms of World War II enlarged the scope of such automation tremendously. What engineers call conventional proportional-integral-derivative (PID) control also falls into this category.

Optimal control. In this type of control, a computer-based model of the controlled process is driven by the same control input as that used to control the actual process.
The model output is used to predict system state and thence to determine the next control input. The measured discrepancy between model and actual output is then used to refine the model. This "Kalman filtering" approach to estimating (observing) the system state determines the best control input under conditions of noisy state measurement and time delay, "best" being defined in terms of a specified trade-off between control error, resources used, and other key variables. Such control is inherently more complex than PID control but, when computer resources are available, it has been widely adopted.

Adaptive control. This is a catchall term for a variety of techniques in which the structure of the controller is changed depending on circumstances. This category includes the use of rule-based controllers (either "crisp" or "fuzzy" rules or some combination), neural nets, and many other nonlinear methods.

Levels of Automation

It is useful to think of automation as a continuum rather than as an all-or-nothing
concept.

TABLE 12.1 Levels of Automation
1. The computer offers no assistance; the human must do it all.
2. The computer offers a complete set of action alternatives, and
3. narrows the selection down to a few, or
4. suggests one, and
5. executes that suggestion if the human approves, or
6. allows the human a restricted time to veto before automatic execution, or
7. executes automatically, then necessarily informs the human, or
8. informs the human after execution only if he asks, or
9. informs the human after execution if it, the computer, decides to.
10. The computer decides everything and acts autonomously, ignoring the human.
Source: Sheridan (1987).

The notion of levels of automation has been discussed by a number of authors (Billings, 1991, 1996; Hopkin, 1995; McDaniel, 1988; National Research Council, 1982; Parasuraman et al., 1990; Sheridan, 1980; Wickens, 1992). At the extreme of total manual control, a particular function is continuously controlled by the human operator, with no machine control. At the other extreme of total automation, all aspects of the function (including its monitoring) are delegated to a machine, so that only the end product and not its operation is made available to the human operator. In between these two extremes lie different degrees of participation in the function by the human and by automation (Table 12.1). At the seventh level, for example, the automation carries out a function and is programmed to inform the operator to that effect, but the operator cannot influence the decision. McDaniel (1988) similarly described the level of monitored automation as one at which the automation carries out a series of operations autonomously that the human operator is able to monitor but cannot change or override.
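The graded scale in Table 12.1 can be made concrete with a short sketch. This is purely illustrative: the function, its names, and the coarse grouping of the ten levels into four behaviors are our assumptions, not part of Sheridan's formulation.

```python
# Hypothetical sketch of a decision aid whose behavior varies with the level
# of automation in Table 12.1. The grouping of levels into "advisory",
# "consent-based", "informing", and "silent" bands simplifies the scale.
def act(level, ranked_options, human_approves):
    """Return (action_taken, what_the_human_experiences) for a given level."""
    best = ranked_options[0]          # assume the aid has already ranked options
    if level <= 1:                    # level 1: no assistance
        return None, "human decides unaided"
    if level <= 4:                    # levels 2-4: advisory only
        return None, f"suggests {best!r}"
    if level == 5:                    # level 5: executes only if approved
        return (best if human_approves else None), f"proposed {best!r}"
    if level <= 7:                    # levels 6-7: executes, keeps human informed
        return best, f"executed {best!r}, human informed"
    return best, "executed silently"  # levels 8-10: effectively autonomous

print(act(4, ["vector A", "vector B"], human_approves=False))
print(act(7, ["vector A", "vector B"], human_approves=False))
```

The same request produces a recommendation at level 4 but an already-executed action at level 7, which is the distinction the surrounding text draws between advisory aids and monitored automation.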
Despite the relatively high level of automation autonomy, the human operator may still monitor the automation because of implications elsewhere for the system, should the automation change its state. What is the appropriate level of automation? There is no easy or single answer to this question. Choosing the appropriate level of automation can be relatively straightforward in some cases. For example, most people would probably prefer to have an alarm clock or a washing machine operate at a fairly high level of automation (level 7 or higher), and a baby-sitting robot set at a fairly low level of automation, or not at all (level 2 or 1). In most complex systems, however, the choice of level of automation may not be so simple. Furthermore, the level of automation may not be fixed but context dependent; for example, in dynamic systems such as aircraft, the pilot will select whatever level he or she considers appropriate for the circumstances of the maneuver. The concept of levels of automation is also useful in understanding distinctions in terminology with respect to automation. Some authors prefer to use the term automation to refer to functions that do not require, and often do not permit,
any direct participation or intervention in them by the human operator; they use the term computer assistance to refer to cases in which such human involvement is possible (Hopkin, 1995). According to this definition of automation, only technical or maintenance staff could intervene in the automated process. The human operator who applies the products of automation has no influence over the processes that lead to those products. For example, most previous applications of automation to air traffic control have affected simple, routine, continuous functions such as data gathering and storage, data compilation and correlation, data synthesis, and the retrieval and updating of information. These applications are universal and unselective. When some selectivity or adaptability in response to individual needs has been achieved or some controller intervention is permitted, automation then becomes computer assistance. Because the notion of multiple levels of automation includes both the concepts of automation and computer assistance, only the term automation is used in the remainder of this chapter. However, our use of this term does not imply, at this stage in our analysis, adoption of a general position on the appropriate level for the air traffic control system, whether full automation or computer assistance.

Authority and Autonomy

Higher levels of automation are associated with greater machine autonomy, with a corresponding decrease in the human operator's ability to control or influence the automation. The level of authority may also be characterized by the degree of emergency or risk involved. Figure 12.1 shows a two-dimensional characterization of where authority might reside (human versus computer) in this respect.
Sarter and Woods (1994a, 1995a, 1995b) have suggested that automation, particularly high-level automation of the type found in advanced cockpits, needs to be decomposed with respect to critical properties such as autonomy, authority, and observability. Although these terms can be defined in many instances, there are cases in which they are independent of one another and cases in which they are not. Autonomous automation, once engaged, carries out many operations with only early initiating input from the operator. The operations respond to inputs other than those provided by the operator (from sensors, other computer systems, etc.). As a result, autonomous automation may also be less "observable" than other forms of automation, though machine functions can be programmed to keep the human informed of machine activities. Automation authority refers to the power to carry out functions that cannot be overridden by the human operator. For example, the flight envelope protection function of the Airbus A320 cannot be overridden except by turning off certain flight control computers. The envelope protection systems in other aircraft (e.g., MD-11 and B-777), however, have "soft" limits that can be overridden by the flight crew.
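The difference between hard and soft envelope limits can be sketched in a few lines. This is a deliberately simplified illustration with invented numbers; it is not the actual flight control law of any aircraft.

```python
# Simplified illustration of hard versus soft envelope limits. The limit
# values and the single "crew_override" flag are assumptions for the example.
def commanded_bank(requested_deg, soft_limit=33.0, hard_limit=67.0,
                   crew_override=False):
    """Clamp a requested bank angle: the soft limit yields to a crew
    override, while the hard limit is enforced unconditionally."""
    limit = hard_limit if crew_override else soft_limit
    return max(-limit, min(limit, requested_deg))

print(commanded_bank(45.0))                      # held to the soft limit: 33.0
print(commanded_bank(45.0, crew_override=True))  # override honored: 45.0
print(commanded_bank(80.0, crew_override=True))  # hard limit still binds: 67.0
```

In this sketch the crew can reclaim authority up to, but never beyond, the hard limit, which mirrors the distinction the text draws between the two protection philosophies.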
FIGURE 12.1 Alternatives for ultimate authority as related to risk.

Autonomy and authority are also interdependent automation properties (Sarter and Woods, 1994a). Sarter and Woods (1995b) propose that the combination of these properties of high-level automation creates multiple "agents" in the workplace who must work together for effective system performance. Although the electronic copilot or pilot's associate (Chambers and Nagel, 1985; Rouse et al., 1990) is still a future concept for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agent-like behavior. Unfortunately, because the properties of the automation can create strong but silent partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter and Woods, 1995b). Future air traffic management systems may incorporate multiple versions of such agents, both human and machine, and both on the ground and in the air.

FUNCTIONAL CHARACTERISTICS

Despite the high-tech scenarios that are being contemplated for the 21st century (e.g., free flight), current air traffic control systems, while including some automation (e.g., automated handoffs, conflict warnings), remain largely manual. The influence on air traffic control operations of existing and proposed automation, both near-term and long-term, has been discussed extensively in recent years as the prospects for implementing various forms of advanced automation have become clear (Harwood, 1993; Hopkin, 1991, 1994, 1995; Hopkin and Wise, 1996; Vortac, 1993). In this section we describe the functional characteristics
of air traffic control automation, emphasizing currently implemented technologies or immediate near-term proposals for automation, rather than "blue-sky" concepts. Before describing specific examples, however, we discuss the rationale for implementing automation, both generally and as represented in the strategic planning of the FAA.

The Need for Automation

More people wish to use air transportation, and more planes are needed to get them to their destinations. All current forecasts foresee substantial future increases in air traffic and a continuing mix of aircraft types. The FAA must face and accommodate increasing demands for air traffic control services. Current systems were never intended to handle the quantities of air traffic now envisaged for the future. Many of them are already functioning at or near their planned maximum capacity for traffic handling much of the time. Present practices, procedures, and equipment are often not capable of adaptation to cope with a lot more traffic. In addition, traffic patterns, predictability, and laws of control may change. New systems therefore have to be designed not merely to function differently initially but also to evolve differently while they are in service. The apparent option of leaving air traffic control systems unchanged and expecting them to cope with the predicted large increases in traffic is therefore not a practical option at all. Air traffic control must change and evolve (Wise et al., 1991) to meet the foreseen changes in demand. The limited available airspace in regions of high traffic density constrains the kinds of solutions to the problem of increased air traffic. The only way to handle still more air traffic within regions that are already congested is to permit each aircraft to occupy less airspace. This means that aircraft will be closer together.
Their safety must not be compromised in any way, they must not be subjected to more delays, and each flight should be efficient in its timing, costs, and use of resources. To further these objectives, flight plans, navigational data, on-board sensors, prediction aids, and computations can provide information about the state of each flight, the flight objectives, its progress, and its relationships to other flights, and about any potential hazards such as adverse weather, terrain proximity, and airspace restrictions or boundaries. All this information, combined with high-quality information about the position of each aircraft on its route and about the route itself, could allow the minimum separation standards between aircraft to be reduced safely. Since changes in air traffic control tend to be evolutionary rather than revolutionary, current systems have to be designed so that they can evolve to integrate and make effective use of such developments in technology during the lifetime of the system. The appearance and functionality of some current systems have therefore been influenced, sometimes considerably, by the ways in which they are expected to be improved during their operational lifetime. Common current
examples are the replacement of paper flight progress strips by electronic ones and the advent of data links. Many of the remaining practical limitations on the availability of data for air traffic control purposes are expected eventually to disappear altogether. One apparent response to the problem of increased air traffic can be ruled out: that is simply to recruit more controllers and to make each controller or each small team of controllers responsible for the air traffic in a smaller region of airspace. Unfortunately, in many of the most congested airspace regions, this process has already been taken about as far as is practicable, because the smaller the region of airspace of each air traffic control sector, the greater the amount of communication, coordination, liaison, and handoff required in the air traffic control system itself and in the cockpit, whenever an aircraft flies out of one controller's jurisdiction and into another's. Eventually any gains from having smaller regions are more than canceled by the associated additional activities incurred by the further partitioning of the airspace. Automation has therefore been seen by some as the best alternative to the problem of increased traffic demand. Automation may include technologies for such functions as information display, communication, decision making, and cooperative problem solving. Free flight is an alternate remedy that has been recently proposed (RTCA, 1995; Planzer and Jenny, 1995). However, increased traffic is only one factor in the drive to automate the air traffic control system. Another factor is the increasing tendency of air traffic control providers to serve rather than to control the aviation community in its use of airspace resources. Automation is seen as one of the ways in which service providers can meet the needs of airspace customers, both now and in the future (Federal Aviation Administration, 1995).
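The diminishing-returns argument about smaller sectors can be illustrated with a toy calculation. All numbers here are assumptions chosen only to show the shape of the trade-off; they are not measurements from any facility.

```python
# Toy model of airspace partitioning (all numbers assumed). A flight needs a
# fixed amount of monitoring work regardless of how the route is partitioned,
# but each sector boundary it crosses adds handoff and coordination work for
# the two controllers involved.
def controller_minutes_per_flight(n_sectors, monitor_min=30.0, handoff_min=2.5):
    """Total controller-minutes the team spends on one flight crossing n sectors."""
    boundaries = n_sectors - 1
    return monitor_min + 2 * handoff_min * boundaries  # both sides work each handoff

for n in (1, 2, 4, 8, 16):
    total = controller_minutes_per_flight(n)
    print(f"{n:2d} sectors: {total:6.1f} team-min/flight, "
          f"{total / n:5.2f} min per controller")
```

With these assumed numbers, each halving of sector size drives per-controller time toward the fixed handoff cost rather than toward zero, while the team's total workload per flight keeps rising, which is the sense in which further partitioning eventually stops paying off.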
The advanced automation system (AAS) was a major program of technology enhancement that was initiated by the FAA in the 1980s. Despite its title, the program was largely concerned with the modernization of equipment and, although some parts of the program did deal with automation, the major thrust was not with automation per se. In 1994, following delays and other problems in meeting the goals of the program, the AAS was divided into smaller projects, each concerned with replacement of aging equipment with newer, more efficient, and powerful capabilities. In contrast to these efforts at improving the air traffic control infrastructure, other planning efforts have focused more specifically on automation. The FAA's plans for automation are contained in its Automation Strategic Plan (Federal Aviation Administration, 1994) and the Aviation System Capital Investment Plan (Federal Aviation Administration, 1996b). Two major goals for automation are identified: the improvement of system safety and an increase in system efficiency. In the context of safety, automation is proposed because of its potential to:
Reduce human error through better human-computer interfaces and improved data communications (e.g., datalink),
Improve surveillance (radar- and satellite-based),
Improve weather data,
Improve reliability of equipment, and
Prevent system overload.

Automation is proposed to improve system efficiency because of its potential to:

Reduce delays,
Accommodate user-preferred trajectories,
Provide fuel-efficient profiles,
Minimize maintenance costs, and
Improve workforce efficiency.

Air Traffic Control Automation Systems: General Characteristics

There is no easy way to classify or even to list comprehensively all of the various automation systems that have been deployed in air traffic control. In this section, we describe the major systems that have been fielded in the United States and discuss the principal functional characteristics of the automation to date, making reference to specific systems as far as possible. Although there is considerable diversity in the technical features, functionality, and operational experience with different automation systems, some generalizations are possible. On the whole, automation to date has been successful,1 in the sense that the new technologies that have been fielded over the past 30 years have been fairly well integrated into the existing air traffic control system and have generally been found useful by controllers. This is clearly a positive feature, and efforts should be made to ensure its continuity in the future as new systems are introduced. There has been a steady increase in the complexity and scope of automated systems that have been introduced into the air traffic control environment.
In terms of functionality, automation to date has largely been concerned with improving the quality of the information provided to the controller (e.g., automated data synthesis and information presentation) and with freeing the controller from simple but necessary routine activities (e.g., automated handoffs to other sectors) and less so with the automation of decision-making and planning functions.

1 With the exception of the AAS, which, as noted before, was not primarily an automation program.

Historically, following the development of automation of data processing,
automation systems that are currently under field testing or about to be deployed in the near future (e.g., the center-TRACON automation system, CTAS) will have a greater impact on the controller's complex cognitive activities, the role of which was discussed extensively in Chapter 5. Higher levels of automation, with possibly greater authority and autonomy, may be the next to follow. Some of these systems are briefly mentioned in this chapter, but a more detailed discussion is expected in Phase 2 of the panel's work. Although processing of flight data and radar data automatically distributes common data to team members, and although automated handoff supports pairs of controllers, automation to date, as well as most future projected systems, has been designed to support individual controllers rather than teams of controllers. Given the importance of team activities to system performance (discussed in Chapter 7), this fact will need to be taken into account in evaluating the impact of future automation. Different forces have led to the development and deployment of air traffic control automation systems. In some cases, technology has become available, or there has been technology transfer from other domains, as in the case of the global positioning system (GPS). Other contributing sources to the development of particular automation systems include controllers, users of air traffic control services, the FAA, and human factors research efforts. Finally, cockpit automation has generally been more pervasive than air traffic control automation. Some of these systems, such as the FMS and TCAS, have implications for air traffic control and hence must be considered in any discussion of air traffic control automation. Table 12.2 provides an overview of the automated systems that have been implemented in the air traffic control environment.
Automation technologies introduced over time are represented along the columns, with the rows representing the four major types of environments. The distinction among tower, TRACON, en route, and oceanic air traffic control is just one of many ways in which the system could be subdivided, but it is convenient for the purpose of identifying specific systems that have been implemented, from the initial automation of data-gathering functions to the more advanced decision-aiding technologies that are currently under production. Because some cockpit automation systems (such as TCAS) have significant implications for control of the airspace, relevant cockpit systems are also shown in the table.
lead to a loss of situation awareness compared with the case when the solution is generated manually. In an early study of decision aiding in air traffic control, Whitfield et al. (1980) reported such a loss of the mental picture in controllers, who tended to use the automated resolutions under conditions of high workload and time pressure.

Interactions Between Factors Affecting Use of Automation

We have discussed several factors that can influence a human operator's decision to use automation. Several other factors are presumably also important in influencing automation use. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, many operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. The studies by Lee and Moray (1992) and Riley (1994) also identified self-confidence in one's manual skills as an important moderator of the influence of trust in automation. If trust in automation is greater than self-confidence, automation will be engaged, but not if trust is lower than self-confidence. Riley (1996) suggested that this interaction could itself be moderated by other factors, such as the risk associated with the decision to use or not use automation. On the basis of his experimental results, he outlined a model of automation use based on a number of factors (Figure 12.3).

FIGURE 12.3 Factors influencing automation usage. Source: Riley (1994). Reprinted by permission.

Solid lines indicate the influences of factors on automation use decisions that are supported by experimental evidence; dashed lines indicate factors that may also influence automation use, but for which
empirical evidence is lacking. Studies in more realistic task environments are needed to validate the model shown in the figure, which provides only a general overview of the factors that potentially influence the use of automation.

HUMAN-CENTERED AUTOMATION

High workload, mistrust, overtrust (complacency), high cognitive overhead, impaired situation awareness: these represent some of the potential human performance costs of certain forms of automation. Factors such as trust, workload, and cognitive overhead also influence an operator's choice to use or not use automation to perform a particular task, when that choice is available. As noted previously, however, these performance costs are not inevitable consequences of automation; rather, they represent outcomes associated with poorly designed automation. Can such negative outcomes be eliminated while promoting more effective use of automation? Human-centered automation (Billings, 1991, 1996) has been proposed as a design approach that may accomplish these objectives. Human-centered automation is a philosophy that guides the design of automated systems in a way that both enhances system safety and efficiency and optimizes the contribution of human operators. In a general sense, it requires that the benefits of automation be preserved while the human performance costs described earlier in this chapter are minimized. However, although human-centered automation is currently a fashionable idea in aviation and other contexts, its precise meaning is not well or commonly understood. It evokes many associations, some good and some not so good. The many faces of human-centered automation need to be considered. At various times and in various contexts, it can mean:

- Allocating to the human the tasks best suited to the human and allocating to the automation the tasks best suited to it.
- Maintaining the human operator as the final authority over the automation, or keeping the human in command.
- Keeping the human operator in the decision and control loop.
- Keeping the human operator involved in the system.
- Keeping the human operator informed.
- Making the human operator's job easier, more enjoyable, or more satisfying through automation.
- Empowering or enhancing the human operator to the greatest extent possible through automation.
- Generating trust in the automation by the human operator.
- Giving the operator computer-based advice about everything he or she might want to know.
- Engineering the automation to reduce human error and keep response variability to the minimum.
- Casting the operator in the role of supervisor of subordinate automatic control system(s).
- Achieving the best combination of human and automatic control, best being defined by explicit system objectives.
- Making it easy to train operators to use automation effectively, minimizing training time and costs.
- Creating similarity and commonality in the various models and derivatives that may be operated by the same person.
- Allowing the human operator to monitor the automation.
- Making the automated systems predictable.
- Allowing the automated systems to monitor the human operator.
- Designing each element of the system to have knowledge of the other's intent.

These seemingly innocuous objectives can often be undesirable and/or in conflict with one another. Their problems and inconsistencies become apparent when the several meanings of human-centered automation are considered further.

Allocation of Tasks to Humans and to Automation

Appropriate allocation of tasks to humans and to automation is easy to say but not so easy to do. The Fitts (1951) approach to function allocation specifies which tasks are performed better by machines and which by humans. This approach, developed some 45 years ago, has not provided an effective procedure for task allocation. Furthermore, there is the question: if designers automate those tasks that machines are better at and also require the operator to monitor the automation and maintain situation awareness of all those variables, will there be a gain in system efficiency and safety?

Human In or Out of the Loop

Keeping the human operator in the decision and control loop can mean full manual control, or it can mean tolerance of human intervention into the automatic control.
Sometimes it may be best to get the operator out of the loop altogether, not letting him or her touch anything, including overriding or adjusting the automatic control. Early in the development of nuclear reactors, it was agreed that certain safety-related operations that must be performed in the case of loss of coolant must be fully automatic—the human operator was too slow and undependable in a stressful situation. The industry later evolved a standard practice that any safety-related action that must be performed within 10 minutes of a major event must be automatic, with the human operator observing, hands off.
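The 10-minute practice just described amounts to a simple time-based allocation rule. A minimal sketch (the function name, constant, and return labels below are our own illustrative choices, not taken from any actual standard text):

```python
from datetime import timedelta

# Illustrative encoding of the practice described above: any safety-related
# action that must be performed within 10 minutes of a major event is
# allocated to automation, with the human observing, hands off.
AUTOMATION_THRESHOLD = timedelta(minutes=10)

def allocate(action_deadline: timedelta) -> str:
    """Return who performs a safety-related action, given how soon after
    a major event it must be completed."""
    if action_deadline <= AUTOMATION_THRESHOLD:
        return "automatic"          # too fast for reliable human response
    return "human-supervised"       # slow enough for human decision

print(allocate(timedelta(minutes=2)))   # automatic
print(allocate(timedelta(hours=1)))     # human-supervised
```

The design choice is the one argued in the text: response-time demands, not task type alone, determine whether the human can usefully remain in the loop.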
The Human Operator as the Final Authority

Maintaining the human operator as the final authority over the automation is another meaning of human-centered automation. Many people may feel more comfortable knowing that, in the end, some human is in charge of a complex automated system. But is this belief justified? Humans are not known for their reliability; machines are. Although the human can always pull the plug or jam the machine if necessary, doing so may take more time than the process normally allows. In any case, the appropriateness of the human or the machine as final authority is likely to be context dependent. Sometimes it may be safest to require an extra confirmation action by the human (e.g., in response to an "are you sure?" query by the machine), to place covers or guards on certain switches that must first be removed (as on a fire alarm box), or to test human capability before the human is allowed to override an automatic system. Sometimes the specific action of two or more independent humans may be required before an automatic process is initiated (e.g., keys inserted and turned by two designated officers to enable the firing of a ballistic missile).

Job Satisfaction

Some researchers claim that human-centered automation involves making the human operator's job easier, more enjoyable, or more satisfying through friendly automation, although this is clearly not the most prevalent view of its major characteristic. The operator's job may be easiest when he or she is doing nothing (or doing it poorly). Designing for greater ease makes sense only if all other factors stay the same, including the operator's tendency to become bored or drowsy, a tendency that "easy" jobs can enhance. And what most satisfies the system operator may not be what most satisfies the management, the customer, or the public. Reducing the operator's mental workload, at least to a comfortable or acceptable level, is an admirable goal.
But the same semiautomatic system that produces a comfortable workload under normal conditions can be quite uncomfortable under abnormal conditions. As noted previously, automation intended to decrease operator workload can end up increasing workload at the most critical times (Bainbridge, 1983; Wiener, 1988).

Empowering the Human Operator

Empowering an operator who is misguided or lacking in certain capacities can be dangerous. Empowerment may be doubly problematic if, as Hopkin (1994) has suggested in the context of future air traffic control, operator incompetence is masked by the routine acceptance of automated solutions. The problem of empowerment was the theme of Norbert Wiener's (1964) National Book Award-winning book, God and Golem, Inc. Wiener's main theme was
that the computer does what it is programmed to do, whether or not that is what its programmer intended. Although stand-alone computers may not be dangerous in this respect, computers hooked up to hardware, especially rapidly responding hardware, can do significant damage before they can be stopped.

Generating Trust in Automation by the Human Operator

This view of human-centered automation can be broken down into several subgoals: making the automation more reliable and predictable, better able to adapt to a variety of circumstances, more familiar in the sense that its operation is understandable, and more open and communicative about what it is doing or about to do. These are all properties of a trustworthy friend or helper, and that is fine if the trust is deserved. A system must not give the impression that it is operating normally when it is not. In some cases, operators are taught not to trust the computer or the machine. As noted previously, there can be negative consequences of both mistrust and overtrust; trust must be appropriately calibrated to the system. Normally trust is built up over a period of time, yet a failure can come without warning, and an operator once burned will have difficulty regaining trust. Trust also requires some capability to fail safe or fail soft.

Keeping the Human Operator Informed

Humans can absorb and make use of only very limited quantities of information. It is well established that displaying all the information that might be useful means there is too much information to find what is needed when it is needed. The control panel at the Three Mile Island nuclear power plant and the Boeing 707 cockpit are early examples of this problem. Modern control rooms and cockpits actually display less information at any given time than before, but make it available for the asking.
But then a problem must be faced: how should the operator ask for it, and, in an emergency, is there a danger that the operator will forget how to ask, or will inadvertently request and act on the wrong information, believing it to characterize a different variable than the one actually requested? As discussed previously, this type of mode error has been observed in many highly automated cockpits and was directly responsible for at least one fatal airline accident. The computer can always be designed to second-guess the operator when it thinks it knows what the operator should be interested in (for example, to generate displays as a function of the system's current operating mode, to give unsolicited advice, or to bring up the appropriate screen in the event of a system failure, as in the MD-11 aircraft's automated systems displays). But for some reason not known to the computer at that point, the operator may really want to see some other information. Even given that old-fashioned display clutter has been cleared up in modern
systems, there remains the hazard of bombarding the operator with advice in other forms, what some pilots have referred to as "killing us with kindness."

Engineering Automation to Reduce Human Error

Human resourcefulness in the case of automation failure may require taking liberties that are normally seen as human error, for they may circumvent standard emergency procedures. Such procedures may be appropriate in most cases but inappropriate in some specific case that was not considered by the procedure writers. In any case, for the human to experiment and learn about the system, some tolerance for nonstandard behavior (variability, and what would normally be called inappropriate response or error) is necessary. For example, on several occasions during the Apollo lunar missions, ground controllers at Cape Canaveral had to "fool the computer" by giving it nonstandard instructions in order to cope with certain bugs and failures that were encountered. The human operator can exercise his or her marvelous capability to learn and adapt only if allowed some freedom to experiment. In the Darwinian sense, there must be some "requisite variety." Fooling the computer is commonplace in glass cockpits (Wiener, 1988).

Human Supervisory Control

Being a supervisor takes the operator out of the inner control loop, for short periods or even for significantly longer ones, depending on the level at which the supervisor chooses to operate. In a human organization, the boss may not know in any detail what the subordinate employees are doing, and the more layers of middle management there are, the less the supervisor may know.

Optimal Combination of Human and Automatic Control

Explicating system objectives in a quantitative way that allows for mathematical optimization is usually not possible, or at least is very difficult. This is especially true when there are multiple conflicting objectives.
It is often the case that planners of large systems seize on one or two easily quantifiable criteria and optimize those, while ignoring what may admittedly be more important criteria that are not easily quantifiable. Furthermore, when the optimal combination is defined as a flexible one, which may adaptively change over time (Hancock and Chignell, 1989), such flexibility can lead to inconsistency and, worse, ambiguity about who is in charge at a given point in time. Billings (1991, 1996) emphasizes the need for both the human and the automated agent to share knowledge of the other's operations and functioning, intent, and plans. We have provided several examples to illustrate that these principles are not
always upheld in current automated systems. For example, the work on FMS mode awareness indicates that the multiple agents in the cockpit do not always know each other's intent; studies on monitoring, overtrust, and silent automation failures show that human operators are not always able to monitor the automation effectively; and studies on automation surprises indicate that automated systems are sometimes unpredictable. Billings' view of human-centered automation (1991, 1996) provides some general guidelines for the design of future automated systems and sets some boundary conditions on the types and levels of automation that are appropriate (see Table 12.1). For example, human-centered automation would seem to rule out very high-level automation with complete autonomy for complex cognitive functions (e.g., level 9 or 10), on the grounds that this would subvert the principle of the human operator's being in command of basic decision-making processes. Accordingly, concepts in which the controller is removed from responsibility for maintaining separation between aircraft would seem to violate the principles of human-centered automation. By the same token, data-linking of information from the controller to the FMS can communicate the controller's intent regarding the flight path to be followed, but for the mutual-intent principle of human-centered automation to be met, pilot intent, particularly if it differs from that programmed into the FMS, must also be communicated back to the controller. Mutual knowledge of intent (air-to-air and air-to-ground) is also likely to be an important factor in the efficient implementation of such future automation concepts as free flight.
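Table 12.1 itself is not reproduced in this extract. The levels mentioned above (e.g., 9 or 10) are consistent with the widely cited Sheridan-Verplank scale of levels of automation; the sketch below paraphrases that scale in our own wording, not the chapter's table verbatim:

```python
# Paraphrase of the well-known Sheridan-Verplank 10-level scale of
# automation (our wording, offered as an illustrative stand-in for
# the chapter's Table 12.1, which is not reproduced here).
LEVELS_OF_AUTOMATION = {
    1: "Human does everything; computer offers no assistance",
    2: "Computer offers a complete set of action alternatives",
    3: "Computer narrows the selection down to a few alternatives",
    4: "Computer suggests one alternative",
    5: "Computer executes that suggestion if the human approves",
    6: "Computer allows the human limited time to veto before acting",
    7: "Computer acts automatically, then necessarily informs the human",
    8: "Computer acts and informs the human only if asked",
    9: "Computer acts and informs the human only if it decides to",
    10: "Computer decides and acts entirely autonomously",
}

def consistent_with_human_in_command(level: int) -> bool:
    """Per the argument above, complete autonomy for complex cognitive
    functions (levels 9-10) conflicts with the human-centered principle
    that the operator remains in command."""
    return level <= 8

print(consistent_with_human_in_command(5))   # True
print(consistent_with_human_in_command(10))  # False
```

The cutoff at level 8 is one reading of the text's boundary condition, not a definitive rule; where the boundary falls for a given function is exactly the kind of design question human-centered automation raises.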
The Architectural Framework of Supervisory Control

Human supervisory control may provide an appropriate architecture for human-centered automation in general (Sheridan, 1992) and for air traffic control in particular (see Figure 12.4). The human roles of planning what is to be done (particularly which actions are to be automatic), teaching (programming) the computer, monitoring the automatic action while looking for abnormalities, intervening when necessary, and learning from experience still seem appropriate for the human, although the computer is learning to help (or encroach, depending on one's viewpoint) even here. It is fashionable to assert that today's complex supervisory systems require more cognition than before and less motor skill. One might contend, however, that the cognitive skills have always been there, and that earlier it was simply easier to integrate them with the required manual skills, since the body had learned to do this naturally over thousands of years of evolution, with much of the communication going on internally and subconsciously. Now the operator must behave more like a mother, trying to think ahead and anticipate problems for the child (the computer, the automated system). The mother must communicate quite explicitly but let the child do the acting, meanwhile monitoring the child's behavior.
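The five supervisory roles just listed (planning, teaching, monitoring, intervening, learning) can be sketched as a control loop. The class, method names, and bodies below are our own schematic placeholders following Sheridan's framework, not an actual air traffic control implementation:

```python
# Schematic of the five human supervisory roles described above
# (after Sheridan, 1992). Bodies are illustrative placeholders only.
class SupervisoryController:
    def plan(self, objectives):
        """Decide what is to be done, including which actions to automate."""
        return {"strategy": "maintain separation", "objectives": objectives}

    def teach(self, automation, plan):
        """Program (instruct) the subordinate automation with the plan."""
        automation["program"] = plan

    def monitor(self, automation):
        """Watch the automatic action, looking for abnormalities."""
        return automation.get("abnormal", False)

    def intervene(self, automation):
        """Step in and take manual control when necessary."""
        automation["mode"] = "manual"

    def learn(self, outcome, experience):
        """Update the mental model from recorded events and reflection."""
        experience.append(outcome)

# One pass around the loop, with an abnormality forcing intervention:
supervisor = SupervisoryController()
automation = {"abnormal": True}
experience = []
plan = supervisor.plan(["safety", "throughput"])
supervisor.teach(automation, plan)
if supervisor.monitor(automation):      # abnormality detected
    supervisor.intervene(automation)
supervisor.learn("intervened", experience)
print(automation["mode"], experience)   # manual ['intervened']
```

The point of the sketch is the ordering: the supervisor acts on the automation only through teaching and intervention, and otherwise remains outside the inner control loop, as the text describes.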
FIGURE 12.4 Architecture for supervisory control.

Supervisory control has many reasonable manifestations, but there is currently no predictive model of supervisory control that is acceptable and robust. One can say that current TRACON controllers are doing tactical supervisory control through their elaborate, high-level display systems, and also in the sense that they are giving commands to the pilots, who in turn close the aircraft position control loop locally with their own respective aircraft. (Currently the human pilot serves as the lower-level task-interactive "computer" in the traditional supervisory paradigm.) Current tower controllers perform supervisory control in the same sense, especially under instrument flight rules, when GCA (ground-controlled approach) or AUTOLAND (automated landing) systems are used to partially automate the real-time control itself. The CTAS traffic management advisor would extend the controller's supervisory control into the strategic arena. The supervisory functions of the controller are clustered under the following categories, outlined in the figure.

Plan, which is performed off-line, in the form of training. It includes (a) understanding the physical system well enough to have a working mental model of the characteristics of different aircraft (required speeds, separations, etc.). It also includes (b) knowledge of objectives (relative importance, urgency, and good-bad evaluations of events). Coming to understand both of these is
augmented by (computer-based) training aids. In an ideal case in which these two functions are completely specified in mathematical form, a simultaneous solution determines optimal performance. In the real world, the supervisor must further take into account (c) the procedures and guidelines specified by higher authority (the FAA) in order to set strategy.

Monitor, the controller's afferent function, performed on-line. It includes (a) allocation of attention (what to look at and listen to in order to get the needed information), which is driven largely by the operator's mental model of expectations as well as by the current displays and voice communications. Next it includes (b) estimation of the process state (defined as the lateral and vertical positions of all aircraft under surveillance), which can be augmented by TCAS predictor lines and other visualization aids. A final step is to evaluate the state as estimated, to determine whether there is some abnormality that requires special attention, as further aided by TCAS alarms and similar advice from the computers.

Decide and Communicate, the on-line efferent function of the controller, which is broken down into steps. Step (a) involves deciding what actions to take, based on the operator's knowledge of where the errant aircraft is (or are) positioned and headed, what options are available, and the expected results of taking those options. In this case, the controller must be guided by FAA rules and procedures as well as by whatever aiding (CTAS, TCAS, etc.) exists. Step (b) involves communication in the normal case, which must be brief and in the proper format, and in the near future will be aided by datalink. Step (c) involves communication in the abnormal case, in which instructing the selected one or several aircraft takes priority over the other aircraft.
The loop is closed to ensure that proper actions are being taken, which is important in normal as well as abnormal situations.

Learn, a supervisory function that is (a) partially an on-line memory task and (b) partially a matter of later off-line reflection and study of recorded events.

In the future, supervisory control in air traffic control will probably move toward further automation, which means there will be more aids to support the tasks identified in Figure 12.4, and communication will take place more by datalink and less by voice. More of the functions now performed by the pilot will also be automated, or at least aided by computer.

Implementation Prospects

Although the concept of human-centered automation provides a general framework for the design of automated systems, as currently formulated it cannot provide specific details on particular automation components. It may not always be clear whether a particular automated subsystem meets a particular principle of human-centered automation and, if not, how it can be redesigned to do so.
Furthermore, some principles may require solutions to conflicting problems. For example, literal adherence to the principle of keeping the controller informed would lead to an information explosion and added workload. How much information should be provided to keep the controller sufficiently informed under normal as well as contingency conditions, and how it is displayed, are key issues in meeting this requirement. Other principles may also be difficult to achieve. For example, it is not clear how the high-level decision-making activities of the controller can be monitored covertly by the automation. Overt monitoring is possible, e.g., by query, but this can be cumbersome and aversive to the controller, who may not like having his or her actions continually questioned (although in some cases it can be helpful). Further research and conceptual analyses are needed to address these issues in the implementation of human-centered automation. The information gained from studies of human use of automation can also be added to the knowledge base of the automation: if an intelligent system can predict when the controller is likely to choose or not to choose a particular automated subsystem, then communication of intent can be facilitated.

CONCLUSIONS

Automation refers to devices or systems that execute functions that could otherwise be carried out by a human operator. Various levels of automation can be identified between the extremes of direct manual control and full automation, with higher levels associated with greater complexity, autonomy, and authority. A number of automation components have been introduced in air traffic control over the past decades in the areas of sensing, warning, prediction, and information exchange. These automated systems have provided a number of system benefits, and acceptance by controllers has generally been positive.
Several higher-level automated systems targeting decision-making and planning functions are being contemplated, both in the near term and in the long term. Advanced automation is to be introduced because of anticipated increases in traffic over the next decade, which threaten to outstrip the handling capabilities of the current system. It is hoped that automation will not only increase capacity, but also improve safety, increase efficiency, reduce personnel, operational, and maintenance costs, and reduce the workload levels of controllers. Achieving these outcomes will require consideration of human factors involved in operator interaction with automation. Past and most current automated systems have been designed using a technology-centered approach. These systems have led to numerous benefits, including more efficient performance, elimination of some error types, and reduced operator workload in some cases. At the same time, several costs have been noted, including increased workload, increased monitoring demands, reduced situation awareness, unbalanced trust (mistrust and overtrust), new error forms, the masking of incompetence, and loss of team cooperation. These and related
factors also influence the controller's choice to use or not to use automation, although the relative importance of these factors and their interactions are not fully understood. There is considerable interest in the human factors community in the design philosophy known as human-centered automation, although there is less agreement on its specific characteristics. Several meanings can be discerned; however, the objectives of each view can be undesirable and/or in conflict with one another. Effective human-centered automation requires that these inconsistencies be resolved. The specific means by which its principles are realized in design also remain to be fully articulated.