The Future of Air Traffic Control: Human Operators and Automation

1 Automation Issues in Air Traffic Management

The pressures for automation of the air traffic control system originate from three primary sources: the need for improved safety and efficiency (which may include flexibility, potential cost savings, and reductions in staffing); the availability of the technology; and the desire to support the controller.

Even given the current very low accident rate in commercial and private aviation, the need remains to strive for even greater safety: this is a clearly articulated implication of the "zero accident" philosophy of the Federal Aviation Administration (FAA) and of current research programs of the National Aeronautics and Space Administration (NASA). Naturally, solutions for improved air traffic safety need not be found only in automation; changing procedures, improving the training and selection of staff, and introducing technological modernization programs that do not involve automation per se may be alternative ways of approaching the goal. Yet increased automation is one viable approach in the array of possibilities, as reflected in the myriad systems described in Section II.

The need for improvement is perhaps more strongly driven by the desire to improve efficiency without sacrificing current levels of safety. Efficiency pressures are particularly strong from the commercial air carriers, which operate with very thin profit margins and for which relatively short delays can translate into very large financial losses. For them it is desirable to substantially increase the existing capacity of the airspace (including its runways) and to minimize disruptions that can be caused by poor weather, inadequate air traffic control equipment, and inefficient air routes. The forecast for increasing traffic demands
over the next several decades exacerbates these pressures. Of course, as with safety, so with efficiency: advanced air traffic control automation is not the only solution. In particular, the concept of free flight (RTCA,¹ 1995a, 1995b; Planzer and Jenny, 1995) is a solution that allocates greater responsibility for flight path choice and traffic separation to pilots (i.e., between human elements), rather than necessarily allocating more responsibility to automation. Automation is nevertheless viewed as a viable alternative for meeting the demands for increased efficiency. Furthermore, it should be noted that free flight does depend to some extent on advanced automation, and also that, from the controller's point of view, the perceived loss of authority, whether it is lost to pilots (via free flight) or to automation, may have equivalent human factors implications for the design of the controller's workstation.

It is, of course, the case that automation is made possible by the existence of technology. It is also true that, in some domains, automation is driven by the availability of technology; the thinking is, "the automated tools are developed, so they should be used." Developments in sensor technology and artificial intelligence have enabled computers to become better sensors and pattern recognizers, as well as better decision makers, optimizers, and problem solvers. The extent to which computer skills reach or exceed human capabilities in these endeavors is subject to debate and is certainly quite dependent on context. However, we reject the position that the availability of computer technology should be a reason for automation in and of itself. It should be considered only if such technology has the capability of supporting legitimate system or human operator needs. Automation has the capability both to compensate for human vulnerabilities and to better support and exploit human strengths.
In the Phase I report, we noted controller vulnerabilities (typical of the vulnerabilities of skilled operators in other systems) in the following areas:

• Monitoring for and detection of unexpected low-frequency events,
• Expectancy-driven perceptual processing,
• Extrapolation of complex four-dimensional trajectories, and
• Use of working memory either to carry out complex cognitive problem solving or to temporarily retain information.

In contrast to these vulnerabilities, when controllers are provided with accurate and enduring (i.e., visual rather than auditory) information, they can be very effective at solving problems, and if such problem solving demands creativity or access to knowledge from more distantly related domains, their problem-solving

¹ Prior to 1991, when its name was formally changed, the RTCA was known as the Radio Technical Commission for Aeronautics.
ability can clearly exceed that of automation. Furthermore, to the extent that accurate and enduring information is shared among multiple operators (i.e., other controllers, dispatchers, and pilots), their collaborative skills in problem solving and negotiation represent important human strengths to be preserved. In many respects, the automated capabilities of data storage, presentation, and communications can facilitate these strengths.

As we discuss further in the following pages, current system needs and the availability of some technology provide adequate justification to continue the development and implementation of some forms of air traffic control automation. But we strongly argue that this continuation should be driven by the philosophy of human-centered automation, which we characterize as follows:

• The choice of what to automate should be guided by the need to compensate for human vulnerabilities and to exploit human strengths.
• The development of the automated tools should proceed with the active involvement of both users and trained human factors practitioners.
• The evaluation of such tools should be carried out with human-in-the-loop simulation and careful experimental design.
• The introduction of these tools into the workplace should proceed gradually, with adequate attention given to user training, to facility differences, and to user requirements.
• The operational experience from initial introduction should be very carefully monitored, with mechanisms in place to respond rapidly to the lessons learned.

In this report, we provide examples of good and bad practices in the implementation of human-centered design.

LEVELS OF AUTOMATION

The term automation has been defined in a number of ways in the technical literature. It is defined by some as any introduction of computer technology where it did not exist before.
Other definitions restrict the term to computer systems that possess some degree of autonomy. In the Phase I report we defined automation as "a device or system that accomplishes (partially or fully) a function that was previously carried out (partially or fully) by a human operator." We retain that definition in this volume.

For some in the general public, the introduction of automation is synonymous with job elimination and worker displacement. In fact, in popular writing, this view leads to concerns that automation is something to be wary or even fearful of. While we acknowledge that automation can have negative, neutral, or even positive implications for job security and worker morale, these issues are not the focus of this report. Rather, we use this definition to introduce and evaluate the relationship between individual and system performance, on one hand, and the design of the kinds of automation that have been proposed to support air traffic controllers, pilots, and other human operators in the safe and efficient management of the national airspace, on the other.
In the Phase I report we noted that automation is not a single either-or entity. Rather, forms of automation can be considered to vary across a continuum of levels. The notion of levels of automation has been proposed by several authors (Billings, 1996a, 1996b; Parasuraman et al., 1990; Sheridan, 1980). In the Phase I report, we identified a 10-level scale that can be thought of as representing low to high levels of automation (Table 1.1). In this report we expand on that scale in three important directions: (1) differentiating the automation of decision and action selection from the automation of information acquisition; (2) specifying an upper bound on automation of decision and action selection in terms of task complexity and risk; and (3) identifying a third dimension, related to the automation of action implementation.

First, in our view, the original scale best represents the range of automation for decision and action selection. A parallel scale, to be described, can be applied to the automation of information acquisition. These scales reflect qualitative, relative levels of automation and are not intended to be dimensional, ordinal representations.

Acquisition of information can be considered a separate process from action selection. In both human and machine systems, there are (1) sensors that may vary in their sophistication and adaptability and (2) effectors (actuators) that have feedback control attached to do precise mechanical work according to plan. Eyes, radars, and information networks are examples of sensors, whereas hands and numerically controlled industrial robots are examples of effectors. We recognize that information acquisition and action selection can and do interact through feedback loops and iteration in both human and machine systems. Nevertheless, it is convenient to consider automation of information acquisition and action selection separately in human-machine systems.
Second, we suggest that specifications for the upper bounds on automation of decision and action selection are contingent on the level of task uncertainty. Finally, we propose a third scale, in this context dichotomous, related to the automation of action implementation and applicable at the lower levels of automation of decision and action selection. The overall structure of this model is shown in Figure 1.1, and the components of the model are described in more detail below.

TABLE 1.1 Scale of Levels of Automation of Decision and Action Selection

HIGH
10. The computer decides everything and acts autonomously, ignoring the human.
9. informs the human only if it, the computer, decides to,
8. informs the human only if asked, or
7. executes automatically, then necessarily informs the human, and
6. allows the human a restricted time to veto before automatic execution, or
5. executes that suggestion if the human approves, or
4. suggests one alternative, and
3. narrows the selection down to a few, or
2. The computer offers a complete set of decision/action alternatives, or
LOW
1. The computer offers no assistance: the human must take all decisions and actions.

[FIGURE 1.1 Three-scale model of levels of automation.]

Information Acquisition

Computer-based automation can apply to any or all of at least six relatively independent features involving operations performed on raw data:

• Filtering. Filtering involves selecting certain items of information for recommended operator viewing (e.g., a pair of aircraft inferred to be most relevant for conflict avoidance, or a set of aircraft within or about to enter a sector). Filtering may be accomplished by guiding the operator to view that information (e.g., highlighting relevant items while graying out less relevant or irrelevant items; Wickens and Yeh, 1996); total filtering may be accomplished by suppressing the display of irrelevant items. Automation devices may vary extensively in how broadly or narrowly they are tuned.

• Information Distribution. Higher levels of automation may flexibly provide more relevant information to specific users, filtering or suppressing the delivery of that same information to users for whom it is judged to be irrelevant.

• Transformations. Transformations involve operations in which the automation either integrates data (e.g., computing estimated time to contact on the basis of data on position, heading, and velocity from a pair of aircraft) or otherwise performs a mathematical or logical operation on the data (e.g., converting time-to-contact into a priority score). Higher levels of automation transform and integrate raw data into a format that is more compatible with user needs (Vicente and Rasmussen, 1992; Wickens and Carswell, 1995).

• Confidence Estimates. Confidence estimates may be applied at higher
levels of automation, when the automated system can express graded levels of certainty or uncertainty regarding the quality of the information it provides (e.g., confidence in the resolution and reliability of radar position estimates).

• Integrity Checks. Integrity checks ensure the reliability of sensors by connecting and comparing various sensor sources.

• User Request Enabling. User request enabling involves the automation's understanding of specific user requests for information to be displayed. If such requests can be understood only when expressed in a restricted syntax (e.g., a precisely ordered string of specific words or keystrokes), it is a lower level of automation. If requests can be understood in a less restricted syntax (e.g., natural language), it is a higher level of automation.

The level of automation in information acquisition and integration, represented on the left scale of Figure 1.1, can be characterized by the extent to which a system possesses high levels on each of the six features. A system with the highest level of automation would have high levels on all six.

Decision and Action Selection and Action Implementation

Higher levels of automation of decision and action selection leave progressively fewer degrees of freedom for humans to select from a wide variety of actions (Table 1.1 and the middle scale of Figure 1.1). At levels 2 to 4 on the scale, systems can be developed that allow the operator to execute the advised or recommended action manually (e.g., speaking a clearance) or via automation (e.g., relaying a suggested clearance via data link with a single computer input response). The manual option is not available at the higher levels of automation of decision and action selection. Hence, the dichotomous action implementation scale applies only to the lower levels of automation of decision and action selection.
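The transformation feature described above, integrating position and velocity data from a pair of aircraft into an estimated time to closest approach and then converting that estimate into a priority score, can be sketched as follows. This is an illustrative toy, not any fielded system's algorithm: the flat x/y coordinate frame, the function names, and the scoring formula are all assumptions introduced here.

```python
import math

def time_to_closest_approach(p1, v1, p2, v2):
    """Time at which two aircraft on straight tracks are closest.

    p1, p2: (x, y) positions in nautical miles; v1, v2: velocities in nm/s.
    Returns (t_star, miss_distance). t_star is clamped to >= 0, since only
    future encounters matter to a conflict-avoidance display.
    """
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    w2 = wx * wx + wy * wy
    if w2 == 0.0:                              # identical velocities: separation never changes
        return 0.0, math.hypot(rx, ry)
    t_star = max(0.0, -(rx * wx + ry * wy) / w2)
    miss = math.hypot(rx + wx * t_star, ry + wy * t_star)
    return t_star, miss

def priority_score(t_star, miss, sep_nm=5.0):
    """Map a predicted encounter to a rough urgency score in [0, 1].

    Encounters that never violate the (assumed) separation minimum score 0;
    closer and sooner violations score higher.
    """
    if miss >= sep_nm:
        return 0.0
    return sep_nm / (sep_nm + t_star) * (1.0 - miss / sep_nm)
```

A display layer could then use the score to drive the filtering feature as well, for instance highlighting only pairs whose score exceeds a threshold.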
Finally, we note that control actions can be taken in circumstances that have more or less uncertainty or risk in their consequences, as a result of more or less uncertainty in the environment. For example, the consequences of an automated decision to hand off an aircraft to another controller are easily predictable and of relatively low risk. In contrast, the consequences of an automation-transmitted clearance or instruction delivered to an aircraft are less certain; for example, the pilot may be unable to comply or may follow the instruction incorrectly. We draw the important distinction between lower-level decision actions in the former case (low uncertainty) and higher-level decision actions in the latter case (high uncertainty and risk). Tasks with higher levels of uncertainty should be constrained to lower levels of automation of decision and action selection.

The concluding chapter of the Phase I report examined the characteristics of automation in the current national airspace system. Several aspects of human
interaction with automation were examined, both generally and in the specific context of air traffic management. In this chapter, we discuss system reliability and recovery.

SYSTEM PERFORMANCE

System Reliability

Automation is rarely a human factors concern unless it fails or functions in an unintended manner that requires the human operator to become involved. Therefore, of utmost importance for understanding the human factors consequences of automation are the tools for predicting the reliability (inverse of failure rate) of automated systems. We consider below some of the strengths and limitations of reliability analysis (Adams, 1982; Dougherty, 1990).

Analysis Techniques

Reliability analysis and its closely related methodology, probabilistic risk assessment, have been used to determine the probability of major system failure for nuclear power plants, and similar applications may be forthcoming for air traffic control systems. Several popular techniques are used together. One is fault tree analysis (Kirwan and Ainsworth, 1992), wherein one works backward from the "top event," the failure of some high-level function, asking what major systems must have failed in order for this failure to occur. This is usually expressed in a fault tree, a graphical diagram of systems with ands and ors on the links connecting the second-level subsystems to the top-level system representation. For example, radar fails if any of the following fails: the radar antennas and drives, the computers that process the radar signals, the radar displays, or the air traffic controller's attention to the displays. This amounts to four nodes connected by or links to the node representing failure of the radar function. At the second level, for example, computer failure occurs if both the primary and the backup computers fail.
Each computer, in turn, can experience a software failure, a hardware failure, a power failure, or a failure because of operator error. In this way, one builds up a tree that branches downward from the top event according to the and-/or-gate logic of interacting machine and human elements. The analysis can be carried downward to any level of detail. By putting probabilities on the events, one can study their effects on the top event. As the above example suggests, system components depending on and-gate inputs are far more robust to failures (and hence more reliable) than those depending on or-gate inputs.

Another popular technique is event tree analysis (Kirwan and Ainsworth, 1992). Starting from some malfunction, the analyst considers what conditions
may lead to other possible (and probably more serious) malfunctions, and from the latter malfunction what conditions may produce further malfunctions. Again, probabilities may be assigned to study the relative effects on producing the most serious (downstream) malfunctions.

Such techniques can provide two sorts of outputs (there are others, such as cause-consequence diagrams, safety-state Markov diagrams, etc.; Idaho National Engineering Laboratory, 1997). On one hand, they may produce what appear to be "hard numbers" indicating the overall system reliability (e.g., .997). For reasons we describe below, such numbers must be treated with extreme caution. On the other hand, reliability analysis may allow one to infer the most critical functions of the human operator relative to the machinery. In one such study performed in the nuclear safety context, Hall et al. (1981) showed the insights that can be gained without knowing the probabilities of human error precisely. They simply assumed human error rates (for given machine error rates) and performed the probability analysis repeatedly with different multipliers on the human error rate; the computer, after all, can do this easily once the fault tree or event tree structure is programmed in. The authors were thereby able to discover the circumstances in which human error made a big difference and those in which it did not. Finally, it should be noted that the very process of carrying out reliability analysis can act as a sort of audit trail, ensuring that the consequences of various improbable but not impossible events are considered.
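The and-/or-gate arithmetic of the radar fault tree, together with a sensitivity sweep on the human error rate in the style of Hall et al., can be illustrated in a few lines. The tree structure and every probability below are invented for illustration only; they are drawn from no actual reliability study.

```python
def or_gate(*p_fail):
    """OR gate: the function fails if any input fails (inputs assumed independent)."""
    p_ok = 1.0
    for p in p_fail:
        p_ok *= 1.0 - p
    return 1.0 - p_ok

def and_gate(*p_fail):
    """AND gate: the function fails only if every redundant input fails."""
    p = 1.0
    for q in p_fail:
        p *= q
    return p

def p_radar_failure(human_multiplier=1.0):
    """Toy top event for the radar example in the text:
    antennas OR computers OR displays OR controller attention."""
    computers = and_gate(1e-3, 1e-3)                 # primary AND backup both fail
    attention = min(1.0, 1e-2 * human_multiplier)    # hypothetical human error rate
    return or_gate(1e-4, computers, 1e-4, attention)

# Hall et al. style sweep: rerun the tree with multipliers on the human rate
sensitivity = {k: p_radar_failure(k) for k in (0.1, 1.0, 10.0)}
```

Note how the redundant computers contribute almost nothing to the top event (an and-gate multiplies small probabilities together), while the single human component dominates it, which is exactly the kind of insight such a sweep is meant to surface.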
Although reliability analysis is a potentially valuable tool for understanding the sensitivity of system performance to human error (human "failure"), as we noted above, one must use great caution in trusting the absolute numbers that may be produced, for example, by using these numbers as targets for system design, as was done with the advanced automation system (AAS). There are at least four reasons for such caution, two of which we discuss briefly and two in greater depth.

In the first place, any such number (e.g., r = .997) is an estimate of a mean. What must be considered in addition is the variability around that mean, to determine best-case and worst-case situations. Variance estimates tend to be very large relative to the mean for probabilities that are very close to 0 or 1.0, and with large variance estimates (uncertainty of the mean), the mean value itself has less meaning.

A second problem with reliability analysis pertains to unforeseen events: things can fail in ways that the analysts have no way of predicting. For example, it is doubtful that any reliability analyst would have been able to project, in advance, the likelihood that a construction worker would sever the power supply to the New York TRACON with a backhoe loader, let alone have provided a reliable estimate of the probability of such an event's occurring.

The two further concerns related to the hard numbers of reliability analysis are the extreme difficulties of making reliability estimates of two critical components
in future air traffic control automation: the human and the software. Because of their importance, each is dealt with in some detail.

Human Reliability Analysis

Investigators in the nuclear power industry have proposed that engineering reliability analysis can be extended to incorporate the human component (Swain, 1990; Miller and Swain, 1987). If feasible, such extension would be extremely valuable in air traffic control, given the potential for two kinds of human error to contribute to the loss of system reliability: errors in actual operation (e.g., a communications misunderstanding, an overlooked altitude deviation) and errors in system set-up or maintenance.

Some researchers have pointed out the difficulty of applying human reliability analysis to derive hard numbers, as opposed to doing the sort of sensitivity analysis described above (Adams, 1982; Wreathall, 1990). The fundamental difficulties of this technique revolve around the estimation of the component reliabilities and their aggregation through traditional analysis techniques. For example, it is very hard to get meaningful estimates of human error rates, because human error is so context driven (e.g., by fatigue, stress, and expertise level) and because the sources of cognitive errors remain poorly understood. Although this work has progressed, massive data collection efforts will be necessary in the area of air traffic control to form even partially reliable estimates of these rates.

A second criticism concerns the general assumptions of independence that underlie the components in an event or fault tree. Events at levels above (in a fault tree) or below (in an event tree) are assumed to be independent, yet human operators show two sorts of dependencies that are difficult to predict or quantify (Adams, 1982). For one thing, there are possible dependencies between two human components.
For example, a developmental controller may be reluctant to call into question an error that he or she noticed being committed by a more senior, full-performance-level controller at the same console. For another, there are poorly understood dependencies between human and system reliabilities, related to trust calibration, which we discuss later in this chapter. For example, a controller may increase his or her own level of vigilance to compensate for an automated component that is known to be unreliable; alternatively, in the face of frustration with the system, a controller may become stressed or confused and show decreased reliability.

Software Reliability Analysis

Hardware reliability is generally a function of manufacturing failures or the wearing out of components. With sophisticated testing, it is possible to predict how reliable a piece of hardware will be according to measures such as mean time between failures. Measuring software reliability, however, is a much more difficult
problem. For the most part, software systems need to fail in real situations in order for bugs to be discovered, and many uses are generally required before a piece of software is considered reliable.

According to Parnas et al. (1990), failures in software are the result of unpredictable input sequences; predicting the failure rate rests on the probability of encountering an input sequence that will cause the system to fail. Trustworthiness is defined by the extent to which a catastrophic failure or error may occur; software is trusted to the extent that the probability of a serious flaw is low. Testing for trustworthiness is difficult because the number of states and possible input sequences is so large that the probability of an error's escaping attention is high. For example, the loss of the Airbus A330 in Toulouse in June 1994 (Dornheim, 1995) was attributed to autoflight logic behavior changing dramatically under unanticipated circumstances. In the altitude capture mode, the software creates a table of vertical speed versus time to achieve a smooth level-off. This is a fixed table based on the conditions at the time the mode is activated. In this case, because of the timing of events involving a simulated engine failure, the automation continued to operate as though full power from both engines was available. The result was a steep pitch-up and loss of airspeed; the aircraft went out of control and crashed.

A number of factors contribute to the difficulty of designing highly reliable software. First is complexity. Even with small software systems, it is common to find that a programmer requires a year of working with the program before he or she can be trusted to make improvements independently. Second is sensitivity to error. In manufacturing, hardware products are designed within certain acceptable tolerances for error; it is possible to have small errors with small consequences.
In software, however, tolerance is not a useful concept, because trivial clerical errors can have major consequences. Third, it is difficult to test software adequately. Because the mathematical functions implemented by software are not continuous, it is necessary to perform an extremely large number of tests. In continuous-function systems, testing is based on interpolation between two points: devices that function well on two close points are assumed to function well at all points in between. This assumption does not hold for software, and because of the large number of states it is not possible to do enough testing to ensure that the software is correct. If there is a good model of operating conditions, then software reliability can be predicted using mathematical models; generally, however, good models of operating conditions are not available until after the software is developed.

Some steps can be taken to reduce the probability of errors in software. Among them is conducting independent validation, using researchers and testing personnel who were not involved in development. Another is to ensure that the software is well documented and structured for review. Reviews should cover the following questions:
• Are the correct functions included?
• Is the software maintainable?
• For each module, are the algorithms and data structures consistent with the specified behavior?
• Is the code consistent with the algorithms and data structures?
• Are the tests adequate?

Yet another step is to develop professional standards for software engineers that include an agreed-upon set of skills and knowledge. Recently, the capability maturity model (CMM) for software has been proposed as a framework for encouraging effective software development. This model covers practices of planning, engineering, and managing software development and maintenance. It is intended to improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The model comprises five levels of achieving a mature software process. Organizations at the highest level can be characterized as continuously improving the range of their process capability and thereby improving the performance of their projects. Innovations that use the best software engineering practices are identified and transferred throughout the organization. In addition, these organizations use data on the effectiveness of software to perform cost-benefit analyses of new technologies as well as of proposed changes to the software development process.

Conclusion

Although the concerns described above collectively suggest extreme caution in trusting the mean numbers that emerge from a reliability analysis conducted on complex human-machine systems like air traffic control, we wish to reiterate the importance of such analyses in two contexts. First, merely carrying out the analysis can provide the designer with a better understanding of the relationships between components and can reveal sources of possible failures for which safeguards can be built.
Second, appropriate use of the tools can provide good sensitivity analyses of the importance (in some conditions) or unimportance (in others) of human failure.

System Failure and Recovery

Less-than-perfect reliability means that automation-related system failures can degrade system performance. Later in this chapter we consider the human performance issues associated with the response to such failures and to automation-related anomalies in general. Here we address the broader issue of failure recovery from a system-wide perspective. We first consider some of the generic properties of failure modes that affect system recovery and then provide the framework for a model of failure recovery, that is, the capability of the team of
to retain skill levels. An alternative possibility is to pursue design alternatives that do not rely on those skills that may be degraded, given a system failure.

Cognitive Skills Needed

Automation may affect system performance not only because controller skills may degrade, but also because new skills may be required, ones for which controllers may not be adequately trained. Do future automated air traffic management systems require different cognitive skills on the part of controllers for the maintenance of efficiency and safety? In the current system, the primary job of the controller is to ensure safe separation among the aircraft in his or her sector, as efficiently as possible. To accomplish this job, the controller uses weather reports; voice communication with pilots and controllers; flight strips describing the history and projected future of each flight; and a plan view (radar) display that provides data on the current altitude, speed, destination, and track of all aircraft in the sector. According to Ammerman et al. (1987), there are nine cognitive-perceptual ability categories needed by controllers in the current system: the higher-order intellectual factors of spatial, verbal, and numerical reasoning; the perceptual speed factors of coding and selective attention; short- and long-term memory; time sharing; and manual dexterity.

As proposed automation is introduced, it is anticipated that the job of the controller will shift from tactical control among pairs of aircraft in one sector to strategic control of the flow of aircraft across multiple sectors (Della Rocco et al., 1991). Current development and testing efforts suggest that the automation will perform such functions as identifying potential conflicts 20 minutes or more before they occur, automatically sequencing aircraft for arrival at airports, and providing electronic communication of data between the aircraft and the ground using data link.
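A strategic conflict probe of the kind just described, which projects flight plans ahead and flags pairs of aircraft predicted to lose separation within the look-ahead window, can be sketched roughly as follows. The straight-line projection, the one-minute sampling step, and the 5-nautical-mile and 20-minute parameters are simplifying assumptions for illustration, not the values or method of any operational probe.

```python
import itertools
import math

SEP_NM = 5.0       # assumed lateral separation minimum
HORIZON_MIN = 20   # look-ahead window, per the 20-minute figure in the text

def project(flight, t_min):
    """Straight-line projection of a flight's (x, y) position t_min minutes ahead."""
    x, y = flight["pos"]
    vx, vy = flight["vel_nm_per_min"]
    return x + vx * t_min, y + vy * t_min

def conflict_probe(flights, step_min=1):
    """Return (id_a, id_b, t) for the first predicted loss of separation per pair.

    Each flight is a dict with "id", "pos" (nm), and "vel_nm_per_min" keys.
    """
    alerts = []
    for a, b in itertools.combinations(flights, 2):
        for t in range(0, HORIZON_MIN + 1, step_min):
            ax, ay = project(a, t)
            bx, by = project(b, t)
            if math.hypot(ax - bx, ay - by) < SEP_NM:
                alerts.append((a["id"], b["id"], t))
                break  # report only the first violation for this pair
    return alerts
```

An alert minutes in advance of the predicted violation is what allows the controller to resolve the problem strategically, with a small route or speed adjustment, rather than tactically.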
Several displays may be involved, and much of the data will be presented in a graphic format (Eißfeldt, 1991). These prospective aids should make it possible for controllers in one sector to anticipate conflicts down the line and make adjustments, thus solving potential problems long before they occur. Essentially, as automation is introduced, it is expected that there will be less voice communication, fewer tactical problems needing the controller's attention, a shift from textual to graphic information, and an extended time frame for making decisions. However, it is also expected that, in severe weather conditions, emergency situations, or instances of automation failure, the controller will be able to take over and manually separate traffic.

Manning and Broach (1992) asked controllers who had reviewed operational requirements for future automation to assess the cognitive skills and abilities needed. These controllers agreed that coding, defined as the ability to translate and interpret data, would be extremely important as the controller becomes involved in strategic conflict resolution. Numerical reasoning was rated as less
relevant in future systems, because it was assumed that the displays would be graphic and the numerical computations would be accomplished by the equipment. Skills and abilities related to verbal and spatial reasoning and to selective attention received mixed ratings, although all agreed that some level of these skills and abilities would be needed, particularly when the controller would be asked to assume control from the automation.

The general conclusion from the work of Manning and Broach (1992), as well as from analyses of proposed automation in AERA 2 and AERA 3 (Reierson et al., 1990), is that controllers will continue to need the same cognitive skills and abilities as they do in today's system, but the relative importance of these skills and abilities will change as automation is introduced. The controller in a more highly automated system may need more cognitive skills and abilities. That is, there will be the requirement for more strategic planning, for understanding the automation and monitoring its performance, and for stepping in and assuming manual control as needed. An important concern, echoed throughout this volume, is the need to maintain skills and abilities in the critical manual (as opposed to supervisory) control functions that may be performed infrequently.

Dana Broach (personal communication, Federal Aviation Administration Civil Aeromedical Institute, 1997) has indicated that the Federal Aviation Administration is currently developing a methodology for more precisely defining the cognitive tasks and related skill and ability requirements as various pieces of automation are introduced. Once in place, this methodology should be central to identifying possible shifts in those requirements, both in establishing selection criteria and in designing training programs.
ADAPTIVE AUTOMATION

The human performance vulnerabilities that have been discussed thus far may be characteristic of fixed or static automation. For example, difficulties in situation awareness, monitoring, and maintenance of manual skills may arise because, with static automation, the human operator is excluded from exercising these functions for long periods of time. If an automated system always carries out a high-level function, the human operator will have little incentive to be aware of or monitor the inputs to the function and may consequently not be able to execute the function well manually if he or she is required to do so at some time in the future. Given these possibilities, it is worthwhile considering the performance characteristics of an alternative approach to automation: adaptive automation, in which the allocation of function between humans and computer systems is flexible rather than fixed.

Long-term fixed (or nonadaptive) automation will generally not be problematic for data-gathering and data-integration functions in air traffic management, because they support but do not replace the controller's decision making activities (Hopkin, 1995). Also, fixed automation is necessary, by definition, for
functions that cannot be carried out efficiently or in a timely manner by the human operator, as in certain nuclear power plant operations (Sheridan, 1992). Aside from these two cases, however, problems could arise if automation of controller decision making functions—what Hopkin (1995) calls computer assistance—is implemented in such a way that the computer always carries out decisions A and B, and the controller deals with all other decisions. Even this may not be problematic if computer decision making is 100 percent reliable, for then there is little reason for the controller to monitor the computer's inputs, be aware of the details of the traffic pattern that led to the decision, or even, following several years of experience with such a system, know how to carry out that decision manually. As noted in previous sections, however, software reliability for decision making and planning functions is not ensured, so that long-term, fixed automation of such functions could expose the system to human performance vulnerabilities.

Under adaptive automation, the division of labor between human operator and computer systems is flexible rather than fixed. Sometimes a given function may be executed by the human, at other times by automation, and at still others by both the human and the computer. Adaptive automation may involve either task allocation, in which case a given task is performed either by the human or the automation in its entirety, or partitioning, in which case the task is divided into subtasks, some of which are performed by the human and others by the automation. Task allocation or partitioning may be carried out by an intelligent system on the basis of a model of the operator and of the tasks that must be performed (Rouse, 1988). This defines adaptive automation or adaptive aiding.
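Allocation by an intelligent system on the basis of an operator model can be sketched as a simple closed loop. The sketch below is purely illustrative: the workload formula, task names, and thresholds are assumptions made for exposition, not features of any fielded or proposed air traffic system.

```python
# Hedged sketch of closed-loop adaptive task allocation driven by an
# operator-workload model (cf. Rouse, 1988).  The workload formula,
# task names, and thresholds are illustrative assumptions only.

def infer_workload(human_tasks, traffic_load):
    """Toy operator model: workload grows with tasks held and with traffic."""
    return 0.1 * traffic_load + 0.2 * len(human_tasks)

def adapt(human_tasks, automated_tasks, traffic_load, low=0.3, high=0.7):
    """One cycle of the closed loop: keep inferred workload within [low, high]."""
    workload = infer_workload(human_tasks, traffic_load)
    if workload > high and human_tasks:
        # Overload: hand one whole task to the automation.
        automated_tasks.append(human_tasks.pop())
    elif workload < low and automated_tasks:
        # Underload: return one task to the human, keeping the
        # operator in the loop and exercising manual skills.
        human_tasks.append(automated_tasks.pop())
    return human_tasks, automated_tasks

# Heavy traffic: one task is allocated to automation.
human, auto = adapt(["monitor sector", "sequence arrivals",
                     "resolve conflicts"], [], traffic_load=6)
```

A partitioning variant would move subtasks rather than whole tasks between the two lists; either way, the loop closes through the workload estimate, as in the schematic of Figure 1.5.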
For example, a workload inference algorithm could be used to allocate tasks to the human or to automation so as to keep operator workload within a narrow range (Hancock and Chignell, 1989; Wickens, 1992). Figure 1.5 provides a schematic of how this could be achieved within a closed-loop adaptive system (Wickens, 1992).

An alternative to having an intelligent system invoke changes in task allocation or partitioning is to leave this responsibility to the human operator. This approach defines adaptable automation (Billings and Woods, 1994; Hilburn, 1996). Except where noted, the more generic term adaptive is used here to refer to both cases. Nevertheless, there are significant and fundamental differences between adaptive (machine-directed) and adaptable (human-centered) systems in terms of such criteria as feasibility, ease of communication, and user acceptance. Billings and Woods (1994) have also argued that systems with adaptive automation may be more, not less, susceptible to human performance vulnerabilities if they are implemented in such a way that operators are unaware of the states and state changes of the adaptive system. They advocate adaptable automation, in which users can tailor the level and type of automation according to their current needs. Depending on the function that is automated and situation-specific factors (e.g., time pressure, risk), either adaptive or adaptable automation may be
appropriate. Provision of feedback about high-level states of the system at any point in time is a design principle that should be followed for both approaches to automation. These and other parameters of adaptable automation should be examined with respect to operational concepts of air traffic management.

FIGURE 1.5 Closed-loop adaptive system.

In theory, adaptive systems may be less vulnerable to some of the human performance problems associated with static automation (Hancock and Chignell, 1989; Parasuraman et al., 1990; Scerbo, 1996; Wickens, 1992; but see Billings and Woods, 1994). The research that has been done to date suggests that there may be both benefits and costs of adaptive automation.

Benefits have been reported with respect to one human performance vulnerability, monitoring. For example, a task may be automated for long periods of time with no human intervention. Under such conditions of static automation, operator detection of automation malfunctions can be inefficient if the human operator is engaged in other manual tasks (Molloy and Parasuraman, 1996; Parasuraman et al., 1993). The problem does not go away, and may even be exacerbated, with highly reliable automation (Parasuraman, Mouloua, Molloy, and Hilburn, 1996).

Given automation-induced monitoring inefficiency, how might it be ameliorated? One possibility is adaptive task allocation, or reallocating a formerly automated task to the human operator. Given that an in-the-loop monitor performs better than one who is out of the loop (Parasuraman et al., 1993; Wickens and Kessel, 1979; but see Liu et al., 1993), this should enhance monitoring performance. But this is clearly not an allocation strategy that can be pursued generally for all automated tasks and at all times, for it would lead to excessive manual workload, thus defeating one of the purposes of automation.
One potential solution is to allocate the automated task to the human operator for only a
brief period of time, before returning it once again to automation. The benefits of temporary allocation of a task to human control may persist for some time, even after the task is returned to automation control. This hypothesis was tested in a study by Parasuraman, Mouloua, and Molloy (1996). During a multiple-task flight simulation, a previously automated engine-status monitoring task was adaptively allocated to the operator for a 10-minute period in the middle of a session and then returned to automatic control (see Figure 1.6). Detection of engine malfunctions was better during the 10-minute block when the task was returned to human control from automation, consistent with previous reports of superior monitoring under conditions of active human control (Parasuraman et al., 1993; Wickens and Kessel, 1979). More important, however, detection performance under automation control was markedly better in the post-allocation phase than in the identical pre-allocation phase (see Figure 1.6). (In both phases the engine-status monitoring task was automated, but the post-allocation phase immediately followed one in which the task was performed manually.) The performance benefit (of about 66 percent) persisted for about 20 minutes after the engine-status monitoring task was returned to automation. The benefit of adaptive task allocation was attributed to the procedure's allowing human operators to update their memory of the engine-status monitoring task. A similar view was put forward by Lewandowsky and Nikolic (1995) on the basis of a connectionist (neural network) simulation of these monitoring performance data.

In addition to improved monitoring, benefits of adaptive automation for operator mental workload have also been reported in recent studies by Hilburn (1996).
This research is of particular interest because it examined the utility of adaptive automation in the specific context of air traffic control. Experienced controllers worked with an advanced simulation facility, NARSIM, coupled with the CTAS automation tool, specifically the descent advisor (DA). Controllers were required to perform the role of an executive controller in a southern sector of the Amsterdam airspace. A plan view display contained traffic with associated flight data blocks, a data link status panel, and the descent advisor timeline display from CTAS. Three levels of CTAS automated assistance could be provided: none (manual, traffic status only), conflict detection only, or conflict detection plus resolution advisory. Controller workload was assessed using physiological measures (eye scan entropy, heart rate variability) and a subjective measure (the NASA-TLX). Monitoring was assessed by recording controller reaction times in responding to occasional data link anomalies. A baseline study established that controller workload increased with traffic load but was reduced by each level of automation assistance compared with manual performance.

In a second study, Hilburn (1996) examined the effects of adaptive automation for two levels of CTAS aiding: manual control or resolution advisory. In two static automation conditions, the automation level remained constant throughout the simulation, irrespective of shifts in traffic load. In the adaptive condition,
FIGURE 1.6 (A) Time line for adaptive task allocation. (B) Effects on monitoring performance. Source: Parasuraman, Mouloua, and Molloy (1996, Vol. 38, No. 4). Copyright 1996 by the Human Factors and Ergonomics Society. All rights reserved. Reprinted by permission.
shifts between manual control and resolution advisory coincided with traffic pattern shifts, giving the appearance that the adaptation was triggered by the traffic increase or decrease. Compared with the static automation conditions, the adaptive condition was associated with workload benefits, particularly under high traffic load. There was also a trend for monitoring to be better in the adaptive condition than in the two static conditions, consistent with the previously described study by Parasuraman, Mouloua, and Molloy (1996).

Despite these performance benefits, adaptive systems may not be free of some costs. For example, if the adaptive logic on which the system is based is oversensitive to the eliciting criteria, then the system may oscillate between automated and manual control of a task at frequent intervals. There is evidence that performance costs can occur if the cycle time between automated and manual control of a task is very short, particularly if the operator has no control over function changes (Hilburn, Parasuraman, and Mouloua, 1995; Scallen et al., 1995).

The question of operator control leads to the issue raised by Billings and Woods (1994) on adaptive versus adaptable automation. Very little empirical work has been done on this issue. Hilburn et al. (1993) had individuals perform a multitask flight simulation with the ability to turn automation on or off whenever they chose (adaptable automation). The times at which automation was invoked or turned off were recorded and presented as the output of an intelligent adaptive system to another group of individuals in a yoked-control design modified from one used by Liu et al. (1993). Overall performance was superior for the adaptable automation group compared with the adaptive automation group, consistent with the arguments of Billings and Woods (1994), although automation benefited both groups.
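One common engineering guard against the oscillation problem just described is hysteresis: separate engage and disengage thresholds combined with a minimum dwell time in each mode. The sketch below is a hedged illustration only; the class name, thresholds, and dwell period are assumptions made for exposition, not parameters of any studied or fielded system.

```python
# Hedged sketch of an adaptive-logic guard against rapid mode oscillation
# (cf. the cycle-time costs reported by Hilburn, Parasuraman, and Mouloua,
# 1995, and Scallen et al., 1995).  All numbers are illustrative.

class AdaptiveLogic:
    """Switch between MANUAL and AUTO on traffic load, but refuse to switch
    again until a minimum dwell time has elapsed, so that the system cannot
    cycle between automated and manual control at frequent intervals."""

    def __init__(self, engage_above=10, disengage_below=6, min_dwell=300):
        self.engage_above = engage_above        # load that engages automation
        self.disengage_below = disengage_below  # lower threshold: hysteresis
        self.min_dwell = min_dwell              # seconds a mode must persist
        self.mode = "MANUAL"
        self.last_switch = 0.0

    def update(self, traffic_load, now):
        if now - self.last_switch < self.min_dwell:
            return self.mode                    # too soon: hold current mode
        if self.mode == "MANUAL" and traffic_load > self.engage_above:
            self.mode, self.last_switch = "AUTO", now
        elif self.mode == "AUTO" and traffic_load < self.disengage_below:
            self.mode, self.last_switch = "MANUAL", now
        return self.mode

logic = AdaptiveLogic()
logic.update(traffic_load=12, now=400.0)  # heavy traffic engages automation
logic.update(traffic_load=5, now=500.0)   # dwell guard holds AUTO despite lull
```

Because the engage threshold (10) sits above the disengage threshold (6), small fluctuations in traffic load around either value cannot by themselves flip the mode, and the dwell-time check bounds the cycle time between automated and manual control.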
In these and other studies on adaptive automation, function changes involved allocation of entire tasks to automation or to human control. As mentioned earlier, another possibility is to partition tasks—that is, to allocate subtasks. Partitioning may lead to performance costs if tasks are partitioned in a nonmodular way (Gluckman et al., 1993). Vortac and Manning (1994) also found performance costs of partitioning in an air traffic control context. They suggested that automation benefits will accrue only if entire behavioral modules are allocated to automation.

Finally, adaptive systems may not necessarily be immune to operator errors arising from misunderstanding or lack of awareness of the activities of the automation at a particular time (Sarter and Woods, 1995b). Given that adaptive systems will probably be granted higher levels of autonomy than current automation, automation-related surprises may occur, particularly if the system is slow to communicate intent to the human. Bubb-Lewis and Scerbo (1997) have considered ways in which human-computer communication can be enhanced in adaptive systems, but problems in coordination and communication remain potential concerns with such systems. It remains to be seen whether these potential
costs of adaptive automation will outweigh the performance benefits that have been reported to date.

DESIGN AND MANAGEMENT INFLUENCES

Another aspect of human performance in automated systems involves the impact of other human agents, not just those who have direct responsibility for operating the system, whether as individuals or in teams. In an earlier part of this chapter we mentioned the influence of these other individuals—e.g., those involved in design, test and certification, and maintenance—with respect to system failures. In this section we consider their influence with respect to the human operator's response to system failure.

Parasuraman and Riley (1997) discussed the system performance consequences of human usage of automation, both when automation works as designed and when failures, unexpected automation behavior, or other anomalies occur. An important feature of their analysis is that they consider the impact of human interaction with automation for all humans involved with the automation: not only human operators but also the designers of automation and the managers and supervisors who implement and enforce policies and procedures concerning operator use of automation. Parasuraman and Riley (1997) showed how automation can act as a surrogate for the designer or the manager. As a result, when automation has an adverse impact on system performance, this can occur not only because of the performance of the human operator, but also because of specific decisions made by the designers of automation and by managers. In some instances, such decisions can legitimately be called designer or management errors. Two examples taken from Parasuraman and Riley (1997) serve as illustrations.
In 1993 an Airbus A320 crashed in Warsaw, Poland, when the pilot was unable to activate thrust reversers and brakes after landing because of a failure in the weight-on-wheels sensor on the landing gear (Main Commission Aircraft Accident Investigation, 1994). This system was specifically designed to prevent pilots from inadvertently deploying the lift-defeating spoilers or operating the thrust reversers while still in the air. The protections were presumably put in place because of a lack of trust in the pilot not to do something unreasonable and potentially catastrophic. Lack of trust in the pilot is the complement of trust in the (human) designer of the weight-on-wheels automated system to anticipate all possible conditions. But if the weight-on-wheels sensor fails, as it did in Warsaw, the pilot is prevented from deploying braking devices precisely when they are needed. This represents an error of the designer.

The second example from Parasuraman and Riley (1997) concerns management practices or corporate policies regarding automation. In some cases these may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an
example of the human operator's not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use or its disengagement in certain conditions. This appears to have been the case in a recent accident involving a subway train near Washington, D.C. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to bring the train to a stop because of snow on the tracks. Just prior to this accident, the previous policy of allowing train operators intermittent use of manual speed control had been rescinded suddenly and without explanation. The accident investigation board determined that this management policy, together with the decision of the central train controller (who had to enforce the policy) to refuse the train operator's request to run the train manually because of the poor weather, was a major factor in the accident (National Transportation Safety Board, 1997b). Thus, automation can also act as a surrogate for the manager, just as it can for the system designer, and management policy errors regarding automation can likewise adversely affect human performance in automated systems.

TEAM PERFORMANCE AND COORDINATION

Chapter 7 of the Phase I report discussed team aspects of air traffic control and employed a broad definition of team that includes not only controllers and their supervisors, who are in face-to-face contact, but also pilots, who interact indirectly. The Phase I report also highlighted the importance of shared knowledge of evolving situations for system performance. As the air traffic system becomes more automated, information sharing and team coordination issues will continue to be critical and, under some circumstances, will assume greater importance.
Those needing to share information in air traffic management include not only controllers and pilots, but also traffic managers and dispatchers (Smith et al., 1996). Automation can facilitate information sharing (for example, by data link or a large CTAS display visible to all operators at a facility). However, automation can also impede shared awareness and the development of common mental models for several reasons. One is that interactions with automated devices through keyboard entries may be far less visible (or audible) to adjacent operators than interaction via more traditional media (such as voice communication or stick manipulations; see Segal, 1995). A second is that some automated systems may not provide comparable information to all participants in the system. The introduction of TCAS resolution advisories has occasionally resulted in a lack of controller awareness of an aircraft's intended maneuver (Jones, 1996; Mellone and Frank, 1993). A third is that automated systems may allow reconfiguration of system characteristics by remote operators in a way that is initially transparent to other affected operators (Sarter and Woods, 1997). An incorrect mental model of a developing situation and the status of automation
can prevent effective action by a single operator. Similarly, if team members hold different mental models of how an automated device operates and what it is doing, differing perceptions of the situation can thwart effective communication. To date there has been relatively little research in the area of shared situation awareness and mental models in the collaborative use of automation (Idaszak, 1989; Segal, 1995). Sarter (1997) argues that information requirements in the cockpit will increase if both flight crews and controllers are to have accurate, shared mental models. This has implications for the workload of flight crews, who are responsible for all aspects of a flight's management. The keyboard mode of communication associated with data link may prove inefficient for the team communication and conflict resolution that accompany decision making under time constraints, such as when aircraft near the final approach fix or in other time-critical situations (Sarter, 1997).

The ultimate goal of shared information and situation awareness is to allow users to make optimal decisions in the operating environment. If the expressed goals of a less constrained, more flexible air traffic management system are to be achieved, there will be more collaborative, shared decision making in the future than in the present system (Smith, Billings, Woods et al., 1997). Some research is currently in progress on the dynamics of distributed decision making in air traffic control (Hutton, 1997; Orasanu et al., in press). To achieve effective team decision making between widely separated individuals with different workloads and information displays, more research into the processes and media requirements (i.e., data link versus radio for negotiation) will be needed. Such research can benefit from knowledge of the area of collaborative technology (discussed in Chapter 2).
This research should provide guidelines for training to optimize distributed decision making and the resolution of decision conflicts. Training in decision making should take into account the characteristics of expert decision making in naturalistic settings (Hutton, 1997).

The problem of collaborative decision making between air traffic control and aircraft is exacerbated because the pilot members of decision making teams come from diverse, multicultural backgrounds. Hence, controllers will not be able to assume a common decision making orientation. Nor will many users of the air traffic system have had formal training in collaborative decision making. This will pose a serious challenge for training and will force consideration of the full range of air traffic system users, from many cultures with different approaches to decision making and communication, who may be required to collaborate with air traffic control to resolve incipient flight path conflicts.

Team issues will also become critical in situations in which the automated system's capabilities degrade. In the event of reduced capability, tasks normally accomplished by a single controller may require additional human support—a reversion to present-day team duty assignments. The Phase I report identified team communications factors in current air traffic control sector management. Jones (1996) further specified team issues associated with both operational incidents
and exemplary controller performance, issues amenable to training interventions. In a more automated air traffic control system, there is a strong possibility that the team skills associated with coordination among controllers may degrade if the system functions with high reliability. Both formal training and regular practice in the use of team skills within the facility will be needed to maintain a safety buffer.

In summary, the same team skills (and the formal training to gain, maintain, and reinforce them) that are needed in the current air traffic control system will be required under more automated systems. It should be noted that Eurocontrol, the parent agency for air traffic control in the European Community, is implementing a program of team training, called team resource management, that makes use of the experience gained from crew resource management programs for flight crews (Barbarino, 1997; Masson and Paries, 1997). In addition, the concept of shared decision making between aircraft and air traffic control will require further training in distributed decision making and conflict resolution.