For this workshop session, the panelists had been asked by the steering committee to address the following question:
What are the key knowledge gaps within these communities for understanding the interaction of the pilot/operator with GCS [ground-control system] automation technologies?
Kathy Abbott (Federal Aviation Administration [FAA]) offered a list of lessons learned. The first lesson is that automated systems have contributed significantly to improvements in safety, operational efficiency, and precise flight-path management, although some vulnerabilities exist (e.g., pilots sometimes relying too much on automated systems). The second lesson, she said, was a clarification of terminology: "automated systems, not automation: it's not a single thing; it's not a single system; it's not a single type of task that gets automated."
Third, Abbott said, lack of practice can degrade basic knowledge and skills, such as the motor and cognitive skills and knowledge needed for manual flight operations. Fourth, "levels of automation" is a useful concept for communicating ideas about automated systems, but it can be hard to operationalize; automation does not form a simple linear hierarchy. Fifth, she pointed out that the use of automated systems can reduce workload during normal operations, but it may add complexity and workload during demanding situations. A corollary to this lesson, she said, is that adding automated systems requires a pilot to monitor those systems. Sixth, she noted that sometimes the issue is complexity, not automation. Seventh, she urged caution in referring to automation as another crew member. Last, she observed that pilots and controllers do mitigate risk on a regular and ongoing basis.
In closing, Abbott listed many topics to consider: (1) Pilots are responsible for the safety of aircraft operation. (2) Clear roles and responsibilities need to be specified among all players. (3) Various levels of safety need to be considered. (4) A demonstration is not equal to the real world. (5) If people do not know how humans contribute to safety and effective operation, how can they automate to achieve those goals? (6) The issue of complexity needs to be taken into account. (7) Some assumptions may be better treated as research. (8) Regulatory matters, involving both operations and certification, need to be addressed.
With reference to the first issue of pilot responsibility, Abbott later in the workshop clarified that, although she had earlier indicated FAA is not directly responsible for the safety of a particular operation, FAA does have responsibility for safety oversight, which is not quite the same thing. During this clarification, she also referred to an earlier question regarding FAA guidance on newer complex and highly integrated systems for which all
Mica Endsley (steering committee member) next discussed human-automation integration research needs. She described research gaps in transparency, predictability, context and consistency, dependence, annoyance, operating at cross purposes, responsibility, and training. Each of these gaps is detailed in Box 6-1.
Endsley also provided a long list of Unmanned Aerial System (UAS) cockpit enabling technologies, such as basic user interfaces for information integration and display, compensation for the lack of auditory and tactile cues, and object recognition. The overall purpose of developing these types of technologies would be to achieve human-automation integration that is smooth, simple, and seamless.
Mary Cummings (steering committee member) agreed with the previous speakers, but she noted that a lot of research had already been done. Rather than more research and discussion about one operator controlling some number of UASs, she suggested that the biggest knowledge gap, as opposed to an operational gap, is a fundamental understanding of what people need to be taught to operate these systems. Furthermore, she said, this knowledge gap applies across the different classes of UASs, from large to medium to small (under 55 pounds), and the small will dominate for a long time.
1 Available: https://www.faa.gov/regulations_policies/advisory_circulars/index.cfm/go/document.information/documentID/22032 [February 2018].
2 Available: https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC_20-174.pdf [February 2018].
Cummings next discussed Air Force and FAA training for UAS pilots. The Air Force trains UAS pilots the way it trains pilots of manned aircraft, while the FAA certifies commercial UAS operators with a general aviation test that has nothing to do with UAS operations and tests only general piloting skills. She said she is not pleased with the FAA approach, noting that many incidents are occurring and causing havoc. It would therefore be very valuable, she said, for the community to step back and examine UAS training: why the current training is done, what training is needed, and what the point of training is. Cummings added that different skill sets are needed for different types of UAS operations.
John Hansman (steering committee member) concluded by picking up on the point that there are very different types of UAS operations, and human factors issues will depend on the UAS architectures and concepts of operation. For example: Is the mission of the UAS transport, surveillance, or crop dusting, and what are the communication bandwidths, sensors, and vehicle performance parameters? There could be value in defining a few reference architectures and concepts of operation for consideration. He also discussed a gap in the certification of autonomous systems, which today are glorified autopilots but in the future will be much more difficult to certify. Finally, regarding trust, Hansman said the real issue is not trust: it is appropriate use of the automation, its conditional reliability, and the consequences of its not fulfilling its obligations.