The panel for this session considered the questions of what can be automated, what should be automated, and what should not be automated. The steering committee had asked the panelists to consider the following question regarding Unmanned Aerial Systems (UASs) and the National Airspace System (NAS):
How can automation technologies be leveraged to support UAS integration into the NAS? One useful way to approach this question is to examine how manned aircraft interact with the NAS. Pilots perform a great deal of human activity, including verbal interactions as well as human-triggered actions (such as activating or deactivating systems). Analyzing these pilot-driven activities for their automation potential, and for the repercussions of automating them (both intended and unintended), yields an important set of attributes for consideration. Two obvious ways to proceed toward a systematic understanding of the challenges and potentials are to take advantage of the strengths of automated systems (such as the ability to “remember” long, complicated, process-driven events) and to identify the strengths of humans in or over the loop (such as recognizing subtle problems and conceiving novel responses to unforeseeable events).
Marcie Langelier (U.S. Navy) focused on the interactions within the community working together to develop standards and policies, speaking through the lens of her current program, the Triton MQ-4C (an unmanned aerial vehicle). The Triton’s planned operating parameters cause it to leave and return to controlled airspace, which raises substantial air traffic control interoperability challenges related to automation. It also creates the additional challenge of interacting with other aircraft that remain in controlled airspace at all times, and whose pilots and operators hold a mental model that every element in controlled airspace is continuously controlled. This mental model creates a safety concern at the boundaries, particularly when the Triton returns to controlled airspace at low altitude rather than descending through the ceiling altitude into controlled space.
She noted that automated collision avoidance, and the question of where that technology should reside, is an area the Navy is examining with great interest. It is particularly important for the implementation architecture to meet minimum operational performance standards and to allow enough time for humans in the loop, such as pilots of manned aircraft, to become aware of a potential collision, communicate, and maneuver to avoid it. An important part of this architecture is an “intervene and cancel override” capability: when a pilot or ground-based controller should be able to override autonomous decisions, how that override is sequenced, and how it is managed. A related issue is how to return control to the automated systems once normal operations have resumed. An important part of the architecture development process is understanding how things will work in unexpected scenarios. Because of this, use cases, vignettes, and analyses of what is not yet known are critical features of the ongoing efforts.
She pointed out that collision detection and avoidance is an obvious case for automation, but even in this use case there are complexities that need to be explored. This application needs a distinct and well-engineered interface that may take elements of control away from the human during the event, potentially even “graying out” information that the human might otherwise see. One example where the decision has been made that collision avoidance needs to be resident within the aerial vehicle is ACAS Xu, which the Navy plans to install in the Triton systems. The time pressures associated with closure rates, communications, and processing latencies mean that the avoidance capability must execute as fast as possible, which requires it to be both resident onboard and automated. Just as clearly, information about the maneuver, including any resultant deviation from the flight profile, must be made obvious to the operator/pilot as soon as possible after execution. The interactions between the humans and the automation continue to be studied and engineered in coordination with the larger community.
Humans have well-recognized weaknesses when dealing with detailed repetitive tasks, with documented reductions in effectiveness on such tasks over time. These tasks are well suited to automation, which can execute them repeatedly in precisely the same manner without any reduction in effectiveness, leaving humans free to focus their attention on more unique and complex issues.
Bill Kaliardos (Federal Aviation Administration [FAA]) addressed the topic from the perspective of his experience working and conducting research with the FAA. A wealth of knowledge has been built over the years in which manned aircraft have been regulated, approved, and allowed to operate. The differences posed by unmanned aircraft are substantial enough that it may be more realistic to consider the current state of the art to be more akin to experimental than production-line systems.1
Kaliardos noted that a great many of the existing UASs were not designed to operate within the regulatory environment that exists for manned aircraft. That is not to say that they are not well designed; rather, different assumptions and engineering solutions were developed, and these require a reset in understanding how to move forward, particularly as more such systems enter operations and in a wider variety of roles. What is needed is for policy makers to help define the future regulatory environment for UASs designed to operate in both controlled and uncontrolled environments.
A great deal of energy has already been expended on the human factors issues associated with UASs, Kaliardos said.2 Now there needs to be a pull from the non-human-factors community to provide a set of problems to address. In terms of solutions, there has been some talk about using a risk-based approach to certification, but this presents challenges: when one size doesn’t fit all, there is a lot of “wiggle room” for designs. While wiggle room can be a good thing, he noted, it can be extremely challenging when one needs to consider how everything works together now, how it will work together in the future, and how that future can be appropriately managed. In other words, standards and predictability are critical features that must be designed in. Without standards and predictability, automation at any level will be more prone to failure.
It is not hard to get an aircraft to go from point A to point B when nothing goes wrong, Kaliardos said. What is difficult is anticipating likely challenges, creating expectations for how those challenges should be handled, and engineering predictability into the overall architecture. Full automation is likely not feasible, at least in the foreseeable future, he said, simply because people do not have enough experience to fully automate circumstances that are unknown and unknowable. Gaining more experience, he said, will allow designers and others to understand how to define “good enough” for safety and predictability.
Rob Hughes (Northrop Grumman Corporation) addressed the issues from the perspective of industry and from his expertise as a pilot operating in controlled airspace. He pointed out that developing a flight plan can be daunting. A flight plan can include several hundred pages of details, which can overwhelm the system elements that must process the provided information. However, the detailed flight plan provides very accurate, reliable, and robust information for both onboard processes and ground-control station processes and personnel. That planning process is a critical feature for predictability and contingency planning, he said. The amount of data being passed between systems must also be considered, especially as more UASs are fielded.

1 Experimental airframes are allowed much more latitude with regard to regulation than production-line systems, which are subject to a variety of safety and reliability inspection regimes.
Hughes said that it has been illuminating for industry to consider the question of how much human interaction is necessary. The evolutionary process is nonlinear and complex, and the boundaries and scope of that complex interaction are not well defined. Designing for resiliency and robustness is key to understanding what needs to be done. This is a systems issue; perhaps systems-level certification is an idea that should be considered, he suggested. The command and control of these systems includes the operator, the communications links, and the automated elements. The safety and design assurance levels for the entire system are outstanding questions that are tough to answer, and it is possible that they should be considered separately from the perspective of operating limitations.3,4
The goals of the community will drive analytical responses to these issues. He noted that, if the solution does not need to be gold-plated and is desired sooner rather than later, there will be obvious shortfalls. He questioned the assumption that more autonomy equals higher levels of safety and reliability. At least currently, he stressed, there are limitations to autonomy and automation. He described such limits with illustrative questions: “What can automation accomplish? What can automation not accomplish? And probably more importantly, what should automation not accomplish?” Hughes concluded with a hanging question for the audience to consider: “How reliable or how robust is that automation to intentional disruption?”
3 Levels of safety have both qualitative and quantitative descriptors. A qualitative example is “extremely improbable”: so unlikely that it is not anticipated to occur during the entire operational life of an entire system or fleet. A quantitative example is a probability of occurrence per operational hour of less than 1 × 10⁻⁹. See https://www.faa.gov/regulations_policies/handbooks_manuals/aviation/risk_management/ss_handbook/media/Chap3_1200.pdf [February 2018].
4 A design assurance level is all of the planned and systematic actions used to substantiate, at an adequate level of confidence, that design errors have been identified and corrected such that the items (hardware, software) satisfy the applicable certification basis. See https://aviationglossary.com/design-assurance-level-dal/ [February 2018].
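The quantitative safety threshold cited in footnote 3 can be made concrete with a short calculation. The sketch below, in Python, shows how a per-operational-hour failure probability of 1 × 10⁻⁹ translates into expected events over a fleet’s total exposure; the fleet-hours figure is a hypothetical assumption chosen purely for illustration, not a value from the workshop.

```python
# Illustrative only: relating a per-hour failure probability to fleet exposure.
# The 1e-9 per-hour threshold is the quantitative example from the FAA System
# Safety Handbook cited above; fleet_hours is a hypothetical assumption.

p_per_hour = 1e-9    # "extremely improbable" quantitative threshold
fleet_hours = 1e7    # assumed total fleet operational hours (hypothetical)

# Expected number of events over the fleet's total exposure
expected_events = p_per_hour * fleet_hours

# Probability of at least one event, treating each hour as independent
p_at_least_one = 1 - (1 - p_per_hour) ** fleet_hours

print(f"expected events over fleet life: {expected_events:.6f}")
print(f"probability of at least one:     {p_at_least_one:.6f}")
```

Under these assumptions the fleet expects on the order of one event per hundred such fleet lifetimes, which is the sense in which such a failure is “not anticipated to occur during the entire operational life of an entire system or fleet.”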