Suggested Citation:"11 Conclusions." National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press. doi: 10.17226/26355.

11

Conclusions

The introduction of AI into advanced command-and-control systems brings with it many challenges, not least of which will be the development of reliable and robust technology capable of high levels of performance within the highly complex, variable, and changing scenarios common to peacekeeping and warfare operations. Further, cyber attacks and more subtle information attacks may offer new pathways for adversarial actions against AI systems. Nonetheless, the military is likely to begin adopting AI systems, at least for limited applications, in the foreseeable future.

In keeping with DOD directives and ethical considerations, and to ensure that AI systems operate in a manner that is safe and consistent with military objectives, people will need to direct and oversee the operations of AI systems. However, decades of research have shown that people often struggle to perform this role adequately, due to both cognitive limitations (e.g., poor vigilance in monitoring, inappropriate levels of trust) and inadequate design of the technology (e.g., insufficient transparency, system designs that create low engagement levels or bias human decision making). It is imperative that AI systems be designed to support the needs of the warfighters who will have the ultimate responsibility for successful mission execution.

As AI is developed to provide more intelligent behaviors, there will be an increased need for AI systems to function effectively as teammates with humans. Just as human-only teams have many advantages over people working alone (e.g., the ability to spread work to manage workload fluctuations, the provision of diverse skills, knowledge, and capabilities toward the completion of common goals, and the ability to compensate for deficiencies or challenges faced by individuals), human-AI teams can have similar benefits. When considering an AI system as a part of a team, rather than simply a tool capable of limited actions, the need for a framework for improving the design of AI systems to enhance the overall success of human-AI teams becomes apparent. A failure to consider the needs of the many airmen, soldiers, sailors, guardians, and marines who are responsible for successful military operations will result in AI technologies that ultimately fail to provide the necessary high levels of performance and may instead cause harm.

The design of AI systems for human-AI teams needs to incorporate several highly interrelated considerations. These include designing the AI system to support not only taskwork, but also teamwork. AI systems capable of communication, coordination, cooperation, social intelligence, and human-AI language exchange will be needed. In addition, there will be a need to support ongoing shared situation awareness (SA) between humans and AI teammates, including SA of the environment, SA of the broader system and context, SA of each other’s tasks, and SA of one’s own and each other’s performance or state. SA includes the need to maintain a representation and alignment of changing goals, functional assignments, tasks, plans, and actions across the distributed team.
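
As a purely illustrative sketch of what such a shared representation might look like, the following Python fragment defines a minimal data structure for tracking team goals, task assignments, and each member's self-reported state. All class, field, and status names are assumptions introduced here for illustration; they are not drawn from any fielded or recommended system.

    # Hypothetical sketch of a shared situation-awareness record for a human-AI team.
    # All names and fields are illustrative assumptions, not a fielded design.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class TaskAssignment:
        task_id: str
        assignee: str            # e.g., "human:pilot1" or "ai:planner"
        goal: str                # the team goal this task serves
        status: str = "pending"  # e.g., "pending", "active", "done", "blocked"

    @dataclass
    class SharedSituationModel:
        """Minimal shared picture: goals, who is doing what, and self-reported state."""
        goals: List[str] = field(default_factory=list)
        assignments: Dict[str, TaskAssignment] = field(default_factory=dict)
        member_state: Dict[str, str] = field(default_factory=dict)  # e.g., workload, confidence

        def reassign(self, task_id: str, new_assignee: str) -> None:
            # Record any change in functional assignment so human and AI teammates
            # consult the same, up-to-date allocation of work.
            self.assignments[task_id].assignee = new_assignee

Keeping one explicitly shared record of this kind is one way to make changing goals and assignments visible to all teammates; how such a record would be populated, validated, and communicated across a distributed team is precisely the kind of open research question this chapter highlights.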


Improved methods for supporting trust assessment are needed, methods that consider the situational factors indicating when it is appropriate to rely on an AI system (allowing attention to be shifted to other tasks) and when to rely on it less or to intervene. The effects of directability, transparency, and explainability on evolving trust and cooperation need to be further explored. Issues of both trust and distrust are important, as is an understanding of how trust evolves over time, how it can be repaired, and how it is affected by changing goals within the team. Further, a consideration of trust and SA within multi-echelon, distributed, and ad hoc networks of teams, potentially with multiple AI systems, presents new challenges that need to be addressed.

A key means of providing the needed SA and trust to support effective human-AI teaming is the development of AI systems with high levels of transparency and explainability, without information overload. Transparent, explainable AI systems are required to support decision making, often in time-critical situations. Potential challenges, such as bias or brittleness in AI system capabilities, multiple AI systems that each work differently, and AI systems that learn and change over time, accentuate the need for high levels of real-time transparency and explainability. Humans cannot work effectively with AI systems unless they can accurately understand and project those systems' behavior.

Training of the human-AI team as a unit will also become increasingly important. Training is essential for building accurate mental models of an AI system to support SA and trust, and for forming accurate expectations regarding teamwork behaviors. While training will need to include formal instruction, it will also increasingly need to rely on simulated, structured practice scenarios in which perturbations, edge cases, and novel events can be introduced. Opportunities for training and for observing AI system transparency prior to mission events (e.g., during planning, pre-mission briefings) or after mission events (e.g., during debriefing, after-action reviews) would also benefit from exploration, as these opportunities use periods of lower workload to provide high levels of relevant context. Considerable research is needed to determine effective methods for training human-AI teams. New research can leverage current knowledge on training in all-human teams, but also needs to extend beyond human-human team research to address the unique challenges associated with establishing appropriate expectations for human-AI interaction and trust.

Central human-AI interaction design decisions, such as the distribution of responsibilities within the team (i.e., the level of automation) and the ways in which those responsibilities shift over time (i.e., flexible autonomy), have significant impacts on human workload, SA, and the overall success of the team in both routine and novel situations. Methods for supporting the smooth functioning of the team, such as Playbook or goal-based interaction, provide potential opportunities, but more research is needed to predict how design decisions affect emergent behaviors, skill retention, training requirements, job satisfaction, and overall human-AI team resilience. Research is also needed to develop predictive models of human-AI performance. AI system responsivity and directability may also provide methods for improving levels of trust, via system interaction design.
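
As a rough, hypothetical illustration of the play-calling (Playbook-style) delegation mentioned above, the sketch below shows a human operator selecting a named "play" that expands into subtasks the AI system then carries out at a play-specific level of autonomy. The play names, autonomy labels, and function signatures are assumptions made solely for illustration.

    # Hypothetical sketch of Playbook-style (play-calling) delegation.
    # Play names, subtasks, and autonomy levels are illustrative assumptions.
    PLAYS = {
        "route_recon": {
            "subtasks": ["plan_route", "scan_sector", "report_contacts"],
            "autonomy": "execute_with_veto",   # AI acts; human may veto afterward
        },
        "overwatch": {
            "subtasks": ["hold_position", "track_targets"],
            "autonomy": "recommend_only",      # AI proposes; human approves each step
        },
    }

    def call_play(play_name, approve_step):
        """Expand a play into subtasks and run them at the play's autonomy level."""
        play = PLAYS[play_name]
        for subtask in play["subtasks"]:
            if play["autonomy"] == "recommend_only" and not approve_step(subtask):
                continue  # human declined this step; skip it
            print(f"AI executing subtask: {subtask}")

    # Example: the human calls a play and approves steps interactively.
    call_play("overwatch", approve_step=lambda s: s != "track_targets")

The design question raised above is how choices like these, which plays exist, how much the human must approve, and how responsibilities shift mid-mission, affect workload, SA, skill retention, and overall team resilience.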

Several significant challenges exist for successful AI system development. These include detecting and preventing information attacks and the systematic bias that can undermine AI system performance and negatively affect human decision making. AI systems with robust situation models and causal models will be needed for decision making. The challenges of maintaining SA in high-speed, on-the-loop operations are significant and will require new breakthroughs in information presentation and AI system capabilities.

AI may also be beneficial in directly supporting the performance of team operations, including detecting and mitigating human biases and customizing AI system behaviors to the needs of human teammates. It would also be worthwhile to explore the potential role of an AI system as a coordinator, orchestrator, or human-resource manager. Two-way communication between humans and AI systems also merits consideration, including the need for humans to provide explanations to the AI system and to transmit information on human goals or states to it.

Good human-systems integration processes and practices underpin the ability of the USAF to address these various research-and-development goals. To support the development of AI systems that work effectively as part of human-AI teams, improved methods will be needed for setting system requirements and for the analysis, design, and evaluation of human-AI team performance. These requirements are particularly important for systems developed via agile software-development processes, which need detailed safeguards for effectively incorporating human-systems integration best practices. An increased emphasis on interdisciplinary research-and-development teams is needed, along with research on workforce skillsets, tools, methodologies, and policies for new AI maintenance teams and cyber detection and response teams. Methods, processes, and systems for testing, evaluation, verification, and validation of AI systems across their lifespans are needed, particularly with respect to AI blind spots and edge cases, as well as managing the potential for software drift over time, supported by robust human-AI testbeds.
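
As one small, hedged example of what lifecycle monitoring for software drift might involve, the sketch below compares the distribution of a model's recent output confidences against a baseline captured during acceptance testing and raises a flag when the divergence exceeds a threshold. The function names, the divergence measure, and the threshold are assumptions chosen for illustration; operational test-and-evaluation pipelines would be far more extensive.

    # Hypothetical sketch of post-deployment drift monitoring for an AI component.
    # Compares recent output-confidence histograms to a baseline from acceptance testing.
    import numpy as np

    def confidence_histogram(values, bins=10):
        counts, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
        probs = counts / max(counts.sum(), 1)
        return probs + 1e-9  # avoid zeros before taking logarithms

    def kl_divergence(p, q):
        return float(np.sum(p * np.log(p / q)))

    def drift_alarm(baseline_conf, recent_conf, threshold=0.1):
        """Return True if recent model confidences have drifted from the baseline."""
        return kl_divergence(confidence_histogram(recent_conf),
                             confidence_histogram(baseline_conf)) > threshold

    # Example with synthetic stand-in data: baseline from verification testing,
    # recent confidences collected during operations.
    baseline = np.random.beta(8, 2, size=5000)
    recent = np.random.beta(5, 3, size=5000)
    print("drift detected:", drift_alarm(baseline, recent))

Detecting that behavior has shifted is only the first step; deciding whether the shift reflects legitimate learning, a changed environment, or degradation, and what the human-AI team should do about it, is among the research needs identified above.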

This report establishes a number of interrelated research objectives for meeting these needs. Table 11-1 provides a summary of these research objectives, aligned along near- (1–5 years), mid- (6–10 years) and far-term (11–15 years) objectives, with the most immediately accessible and foundational needs listed as near-term objectives and more advanced goals listed as mid- or far-term objectives. Because these objectives are all important for the development of AI systems that will work effectively with people in future military operations, it is not possible to fully prioritize them.

Taken together, this integrated set of research priorities will help to achieve significant advances in human-AI teaming competence. These priorities are fundamental prerequisites to the safe introduction of AI systems into critical situations such as multi-domain operations.

TABLE 11-1 Summary of Human-AI Teaming Research Objectives Aligned Along Near-, Mid-, and Far-Term Objectives



