Health Awareness in Systems of Multiple Autonomous Aerospace Vehicles
Boeing Research & Technology
Significant investments have been made in the development of off-line systems for monitoring and predicting the condition and capabilities of aerospace systems, usually for the purpose of reducing operational costs. A recent trend, however, has been to include these technologies online and use the information they provide for real-time autonomous or semi-autonomous decision making. Health-based adaptations are common in systems that control critical functions, such as redundant flight control systems, but as the scope of systems has expanded—for instance, to systems with multiple vehicles—new challenges and opportunities continue to arise.
Recent studies have explored health-based adaptations at all levels (subsystem, system, and systems-of-systems layers) of a heterogeneous, multi-vehicle system. This emphasis on health awareness has the potential to address two needs: (1) to improve safety, overall system performance, and reliability; and (2) to meet the expectations (situational awareness, override capability, and task or mission definition) of human operators, who are inevitably present.
One approach to evaluating complex, multi-vehicle systems is to use a subscale indoor flight-test facility where common real faults are manifested in different forms. This type of facility can handle a great many flight hours at low cost for a wide range of vehicle types and component technologies. The lessons learned from these tests and from the architecture developed to complete them are relevant for a large variety of aerospace systems.
This paper begins with a brief review of health awareness in aerospace vehicles and highlights of recent research. Key challenges are then discussed followed by a description of the integrated, experiment-based approach mentioned
above. The paper concludes with a summary of lessons learned and opportunities for further research.
“Health management” in the context of aerospace systems can be defined as the use of measured data and supporting diagnostic and prognostic algorithms to determine the condition and predict the capability of systems and subsystems. The “condition,” which provides insight into the current state of the system or subsystem, is primarily determined by diagnostic algorithms. As a notional example, determining condition might include measuring the voltage of a battery (diagnosis) and assessing its state (fully charged, partially charged, discharged). Determining “capability,” which requires more sophisticated prognostic algorithms, might involve estimating the amount of charge or remaining time at a selected load level (prognosis). A more advanced capability would be an estimate of the number of remaining charge/discharge cycles.
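The battery example above can be sketched in a few lines of code. The voltage thresholds and the linear discharge model below are illustrative assumptions chosen for clarity, not the algorithms of any fielded system.

```python
# Illustrative sketch of the battery example: hypothetical thresholds and a
# simple linear discharge model, not a fielded diagnostic/prognostic algorithm.

def diagnose_state(voltage_v):
    """Diagnosis: map a measured cell voltage to a coarse condition label."""
    if voltage_v >= 4.0:        # assumed full-charge threshold
        return "fully charged"
    elif voltage_v >= 3.6:      # assumed partial-charge band
        return "partially charged"
    else:
        return "discharged"

def prognose_remaining_time(charge_mah, load_ma):
    """Prognosis: estimate remaining run time (hours) at a selected load."""
    if load_ma <= 0:
        raise ValueError("load must be positive")
    return charge_mah / load_ma

print(diagnose_state(3.8))                 # partially charged
print(prognose_remaining_time(1000, 500))  # 2.0 hours at a 500 mA load
```

The split mirrors the definitions in the text: diagnosis reads the current state from measurements, while prognosis projects capability forward under an assumed usage profile.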
Diagnostic algorithms are now commonly used in commercial and military aircraft and are a basic tool for many maintenance services. These technologies, which minimize the time aircraft must be out of service for maintenance, have tremendous value. Analyses based on the extensive measurement suites of existing aircraft are typically used for binary decision making (e.g., continue in service or replace). Although in some cases the information is down-linked in near real time, analyses are generally performed off-line at regular intervals.
Online diagnostic algorithms have only been used in limited situations for critical applications; these include real-time sensor integrity algorithms for managing redundancy in multichannel, fly-by-wire, flight control systems. Despite the limited use of these algorithms, their successes to date have illustrated the potential for health-based algorithms and decision making.
Ongoing research is expanding the application of health-based diagnostic and “longer viewing” prognostic algorithms, which could significantly improve real-time decision making. Recent research has focused on how these technologies might be used in real time to augment decision making by autonomous systems and systems-of-systems. The research is divided into several categories: (1) sensors for providing raw data for algorithms; (2) diagnostic and prognostic algorithms for mining data and providing actionable condition and capability information; and (3) algorithms for using condition and capability data to make decisions. The resulting health-based adaptation can be made in various layers in a large-scale system or system-of-systems, ranging from subsystems (e.g., primary flight control or power management) to systems (e.g., individual vehicles) to systems-of-systems (e.g., multi-vehicle mission management).
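The three research categories can be viewed as a pipeline from raw data to an adaptation decision. The following minimal sketch illustrates that flow at the vehicle (system) layer; all names, thresholds, and the canned sensor sample are assumptions for illustration, not a fielded architecture.

```python
# Minimal sketch of the three categories as a pipeline: raw sensor data ->
# condition/capability estimates -> a health-based decision. All names and
# thresholds are illustrative assumptions.

def read_sensors():
    """Category 1: raw data (here, a canned sample)."""
    return {"battery_v": 3.7, "motor_current_a": 6.2}

def assess_health(raw):
    """Category 2: diagnostic/prognostic step distilling actionable state."""
    return {
        "battery_low": raw["battery_v"] < 3.6,        # assumed threshold
        "motor_stressed": raw["motor_current_a"] > 8.0,
    }

def decide(health):
    """Category 3: health-based adaptation at the vehicle layer."""
    if health["battery_low"]:
        return "return to base"
    if health["motor_stressed"]:
        return "reduce aggressiveness"
    return "continue mission"

print(decide(assess_health(read_sensors())))  # continue mission
```

The same pattern could be repeated at the subsystem or systems-of-systems layer, with the decision step consuming condition and capability estimates aggregated across vehicles.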
Researchers have focused on identifying and addressing several key challenges. These include system complexity, the development of a system architecture, and a suitable evaluation environment.
In the large-scale systems of interest, there are subtle interactions between subsystems, systems, prototype algorithms, and the external environment. These interactions can lead to emergent behavior that makes it difficult to understand the contributions of various algorithms to overall system performance.
For instance, consider the effect of a new algorithm for ensuring the safe separation of aerial vehicles. How does this algorithm perform in the context of a large air traffic network in the presence of faults in various components and communication links? And how might these same technology elements be applicable to alternate missions, such as search and rescue? A related issue is the development of suitable high-level system missions and associated metrics for the quantifiable evaluation of performance.
Development of an Adequate Architecture
A second challenge is to develop a system architecture that provides a framework for guiding and maturing technology components. Much of the development of existing algorithms is performed in isolation. Thus, even though it is based on excellent theoretical results, the consideration of peripheral effects in the complete system may be limited. An effective system architecture must be modular to allow the various technology elements to be implemented and evaluated alongside one another in a representative context.
The third challenge is to ensure that the evaluation environment has sufficient complexity, scope, and flexibility to address the first two challenges. Simulations have some potential, but hands-on experiments with real hardware are essential to maturing technologies and addressing the challenges.
Recent advances in motion-capture technology combined with continued developments in small-scale electronics can enable the rapid design and evaluation of flight vehicle concepts (Troy et al., 2007). These evaluations can be extended to the mission level with additional vehicles and associated software.
Boeing has been collaborating with other researchers since 2006 on the development of an indoor flight-test capability for the rapid evaluation of multi-vehicle flight control (Halaas et al., 2009; How et al., 2008; Saad et al., 2009). Several other researchers have also been developing multi-vehicle test environments, both outdoor (Hoffmann et al., 2004; Nelson et al., 2006) and indoor (Holland et al., 2005; Vladimerou et al., 2004).
Boeing has focused on indoor, autonomous flight capability where the burden of enabling flight is on the system rather than on the vehicles themselves. This arrangement makes it possible for novel concepts to be flown quickly with little or no modification. This also enables the rapid increase in the number of vehicles with minimal effort.
Boeing has also focused on improving the health and situational awareness of vehicles (Halaas et al., 2009). The expanded-state knowledge now includes information related to the power consumption and performance of various aspects of the vehicle. Automated behaviors are implemented to ensure safe, reliable flight with minimal oversight, and the dynamics of these behaviors are considered in mission software. The added information is important for maximizing individual and system performance.
EXPERIMENTAL ENVIRONMENT FOR INTEGRATED SYSTEMS
To address the challenges mentioned above, Boeing Research & Technology has integrated component technologies into an open architecture with simplified subsystems and systems with sufficient fidelity to explore critical, emergent issues. Representative, simple systems consisting of small, commercially available vehicles are modified to include health awareness. These systems are then combined under a modular architecture in an indoor flight environment that enables frequent integrated experiments under realistic fault conditions. Sufficient complexity is introduced to result in emergent behaviors and interactions between multiple vehicles, subsystems, the environment, and operators. This approach avoids the inherent biases of simulation-based design and evaluation and is open to “real-world” unknown unknowns that can influence overall system dynamics.
Vehicle Swarm Technology Laboratory
Boeing Research & Technology has been developing the Vehicle Swarm Technology Laboratory (VSTL), a facility that provides an environment for testing a variety of vehicles and technologies in a safe, indoor, controlled environment (Halaas et al., 2009; Saad et al., 2009). This type of facility not only can accommodate a significant increase in the number of flight-test hours available over traditional flight-test ranges, but can also decrease the time to first flight of a concept. The primary components of the VSTL include a position-reference system, vehicles and associated ground computers, and operator interface software. The architecture is modular and thus supports rapid integration of new elements and changes to existing elements.
The position-reference system consists of a motion-capture system that emits coordinated pulses of light reflected by markers placed on the vehicles within viewing range of the cameras. Through coordinated identification by multiple cameras, the position and attitude of the marked vehicles are calculated and broadcast on a common network. This position-reference system offers modular addition and removal of vehicles, short calibration time, and submillimeter and sub-degree accuracy.
The vehicles operated in the VSTL are modified, commercially available, remotely controlled helicopters, aircraft, and ground vehicles equipped with custom electronics in place of the usual onboard electronics. The custom electronics, which include a microprocessor loaded with common laboratory software, current sensors, voltage sensors, and a common laboratory communication system, enable communication with ground-control computers and add functionality. The ground computers execute the outer-loop control, guidance, and mission management functions.
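The current and voltage sensors described above feed the vehicles' expanded power-awareness. A minimal sketch of how a ground computer might reduce that telemetry is shown below; the sample format, units, and 1 Hz rate are assumptions for illustration, not the VSTL's actual message format.

```python
# Illustrative telemetry reduction on a ground computer: the (volts, amps)
# sample format and 1 Hz rate are assumptions, not the VSTL's actual format.

def power_w(voltage_v, current_a):
    """Instantaneous power from the vehicle's voltage and current sensors."""
    return voltage_v * current_a

def integrate_energy(samples, dt_s):
    """Accumulate energy (joules) from evenly spaced telemetry samples."""
    return sum(power_w(v, i) * dt_s for v, i in samples)

samples = [(11.1, 5.0), (11.0, 5.2), (10.9, 5.4)]  # (volts, amps) at 1 Hz
print(integrate_energy(samples, dt_s=1.0))  # total energy drawn, in joules
```

Running totals of this kind are one simple way power consumption can be tracked per vehicle and fed back into mission management.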
A key component developed as part of the VSTL is improved vehicle self-awareness. A number of automated safety and health-based behaviors have been implemented to support simple, reliable, safe access to flight testing. Several command and control applications provide an interface between the operator and the vehicles. The level of interaction includes remotely piloted, low-level task control and high-level mission management. The mission management application was used to explore opportunities associated with health-based adaptations and obtain some initial information.
Three missions were evaluated to determine the flexibility of the architecture and the indoor facility to test a variety of concepts rapidly. A specific metric was used for each mission to quantify performance. The first mission was non-collaborative and consisted of several vehicles repeatedly performing independent flight plans on conflicting trajectories. The metric was focused on evaluating flight safety and the performance of collision avoidance methodologies.
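One way a flight-safety metric like the first mission's could be computed is by checking minimum pairwise separation over a flight log. The sketch below is a hedged illustration: the position format and the 1 m separation threshold are assumptions, not the metric actually used.

```python
# Hedged sketch of a separation-based safety metric: the position format and
# the 1 m threshold are illustrative assumptions.
import math
from itertools import combinations

def min_separation(positions):
    """positions: per-vehicle (x, y, z) tuples at one time step."""
    return min(math.dist(a, b) for a, b in combinations(positions, 2))

def safety_violations(log, threshold_m=1.0):
    """Count time steps where any vehicle pair came closer than the threshold."""
    return sum(1 for frame in log if min_separation(frame) < threshold_m)

log = [
    [(0, 0, 1), (3, 0, 1)],    # 3 m apart: safe
    [(0, 0, 1), (0.5, 0, 1)],  # 0.5 m apart: violation
]
print(safety_violations(log))  # 1
```

Metrics of this form reward collision-avoidance logic that keeps separations above the threshold even on deliberately conflicting trajectories.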
The second mission was an abstracted, extended-duration coordinated surveillance mission. The mission metric was associated with the level of surveillance provided in the presence of faults.
The third mission, an exercise to test the full capability of the architecture, highlighted the ability of vehicles and architecture to support a diversity of possible tasks. The mission involved the assessment of a hazardous area using multimodal vehicles and tasking. In addition, there were multiple human operators at different command levels. Success was measured as the completion of the tasks included in the mission and robustness in the presence of faults.
LESSONS LEARNED AND OPPORTUNITIES FOR FUTURE RESEARCH
The three experiments resulted in a number of lessons learned. The approach of integrating the various elements into a modular architecture and performing a range of simplified missions was validated. Interactions among the various components exhibited complex behaviors, especially in the presence of faults, and peripheral effects of inserting new technologies or algorithms were revealed. Lower level functions, especially collision avoidance, need to be evaluated for a range of mission conditions. The role of operators, even in the essentially autonomous missions, was clear: in the presence of faults, they need sufficient situational awareness as well as the ability to intervene if necessary. (Although this capability existed, operators interacted with the system elements sometimes from the higher level command interface and sometimes from a lower level interface.) These and other lessons indicate that further research will be necessary in several areas.
Areas for Future Research
First, we need a more formal framework for evaluating technologies and analyzing experimental results. This research should address the following questions: What tools can be developed to guide decisions about which technologies to insert? What is the risk of disrupting other functions? Second, we will need more research on interactions between systems and human operators, who are inevitably present and, thus, play a role in overall mission success: Can the influence of human operators be included in evaluating the overall potential benefit of a proposed technology?
We are hopeful that these and other questions that have emerged can be addressed using the capability and architecture that is already in place.
Halaas, D.J., S.R. Bieniawski, P. Pigg, and J. Vian. 2009. Control and Management of an Indoor, Health Enabled, Heterogenous Fleet. Proceedings of the AIAA Infotech@Aerospace Conference and Exhibit and AIAA Unmanned … Unlimited Conference and Exhibit, AIAA-2009-2036, Seattle, Washington, 2009. Reston, Va.: AIAA.
Hoffmann, G., D.G. Rajnarayan, S.L. Waslander, D. Dostal, J.S. Jang, and C.J. Tomlin. 2004. The Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC). 23rd Digital Avionics System Conference, Salt Lake City, Utah, November 2004. New York: IEEE.
Holland, O., J. Woods, R. De Nardi, and A. Clark. 2005. Beyond swarm intelligence: the UltraSwarm. Pp. 217–224 in Proceedings of the 2005 IEEE Symposium, Pasadena, Calif., June 2005. New York: IEEE.
How, J., B. Bethke, A. Frank, D. Dale, and J. Vian. 2008. Real-time indoor autonomous vehicle test environment. IEEE Control Systems Magazine 28(2): 51–64.
Nelson, D.R., D.B. Barber, T.W. McLain, and R.W. Beard. 2006. Vector Field Path Following for Small Unmanned Air Vehicles. Pp. 5788–5794 in Proceedings of the 2006 American Control Conference, Minneapolis, Minn., June 14–16, 2006.
Saad, E., J. Vian, G. Clark, and S.R. Bieniawski. 2009. Vehicle Swarm Rapid Prototyping Testbed. Proceedings of the AIAA Infotech@Aerospace Conference and Exhibit and AIAA Unmanned … Unlimited Conference and Exhibit, AIAA-2009-1824, Seattle, Washington, 2009. Reston, Va.: AIAA.
Troy, J.T., C.A. Erignac, and P. Murray. 2007. Closed-loop Motion Capture Feedback Control of Small-scale Aerial Vehicles. AIAA Paper 2007-2905. Infotech@Aerospace 2007 Conference and Exhibit, Rohnert Park, Calif., May 7–10, 2007. Reston, Va.: American Institute of Aeronautics and Astronautics.
Vladimerou, V., A. Stubbs, J. Rubel, A. Fulford, J. Strick, and G. Dullerud. 2004. A Hovercraft Testbed for Decentralized and Cooperative Control. Pp. 5332–5337 in Proceedings of the 2004 American Control Conference, Boston, Mass., 2004.