References

Abrahamsson, P., Salo, O., Ronkainen, J., and Warsta, J. 2017. Agile software development methods: Review and analysis. arXiv:1709.08439.

Achille, L.B., Gladwell Schulze, K., and Schmidt-Nielsen, A. 1995. An analysis of communication and use of military terms in Navy team training. Military Psychology, 7(2), 95–107. doi: 10.1207/s15327876mp0702_4.

Ackerman, E., and Stavridis, J. 2021. 2034: A Novel of the Next World War. London: Penguin Press.

Adams, M.J., Tenney, Y.J., and Pew, R.W. 1995. Situation awareness and the cognitive management of complex systems. Human Factors, 37(1), 85–104.

Air Force Scientific Advisory Board. 2004. Human-System Integration in Air Force Weapon Development and Acquisition. Available: https://www.scientificadvisoryboard.af.mil/Studies/.

Ajzen, I., and Fishbein, M. 1980. Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, NJ: Prentice Hall.

Akhtar, N., and Mian, A. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410–14430.

Alcorn, M.A., Li, Q., Gong, Z., Wang, C., Mai, L., Ku, W.-S., and Nguyen, A. 2019. Strike (With) A Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4845–4854.

Alliger, G.M., Beard, R., Bennett Jr., W., Colegrove, C.M., and Garrity, M. 2007. Understanding Mission Essential Competencies as a Job Analytic Method. Pp. 603–624 in The Handbook of Work Analysis: Methods, Systems, Applications and Science of Work Measurement in Organizations (M.A. Wilson, W. Bennett Jr., S.G. Gibson, and G.M. Alliger, eds.). New York: Routledge Taylor & Francis Group.

Allspaw, J. 2012. Fault injection in production: Making the case for resilience testing. Queue, 10(8), 30–35. doi: 10.1145/2346916.2353017.

Allspaw, J. 2016. Human Factors and Ergonomics Practice in Web Engineering and Operations: Navigating a Critical Yet Opaque Sea of Automation. In Human Factors and Ergonomics in Practice: Improving System Performance and Human Well-Being in the Real World. London: CRC Press. doi: 10.1201/9781315587332-26.

Allspaw, J., and Hammond, P. 2009. 10+ deploys per day: Dev and ops cooperation at Flickr. Available: https://www.slideshare.net/jallspaw/10-deploys-per-day-dev-and-ops-cooperation-at-flickr/.

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P.N., Inkpen, K., Teevan, J., Kikin-Gil, R., and Horvitz, E. 2019. Guidelines for Human-AI interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, pp. 1–13. doi: 10.1145/3290605.3300233.

Ananny, M., and Crawford, K. 2016. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 1–17. doi: 10.1177/1461444816676645.

Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. 2020. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.

Bagheri, N., and Jamieson, G.A. 2004. The Impact of Context-Related Reliability on Automation Failure Detection and Scanning Behaviour. 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), vol. 1, 212–217.

Bailey, N.R., Scerbo, M.W., Freeman, F.G., Mikulka, P.J., and Scott, L.A. 2003. A brain-based adaptive automation system and situation awareness: The role of complacency potential. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 47(9), 1048–1052.

Bainbridge, L. 1983. Ironies of automation. Automatica, 19, 775–779.

Banbury, S., Selcon, S., Endsley, M., Gorton, T., and Tatlock, K. 1998. Being certain about uncertainty: How the representation of system reliability affects pilot decision making. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 42(1), 36–39.

Barnes, C.M., and Van Dyne, L. 2009. I’m tired: Differential effects of physical and emotional fatigue on workload management strategies. Human Relations, 62(1), 59–92.

Bass, E.J., Baumgart, L.A., and Shepley, K.K. 2013. The effect of information analysis automation display content on human judgment performance in noisy environments. Journal of Cognitive Engineering and Decision Making, 7(1), 49–65.

Bean, N.H., Rice, S.C., and Keller, M.D. 2011. The effect of gestalt psychology on the system-wide trust strategy in automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1417–1421.

Beck, H.P., Dzindolet, M.T., and Pierce, L.G. 2007. Automation usage decisions: Controlling intent and appraisal errors in a target detection task. Human Factors, 49(3), 429–437.

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R., Mellor, S., Schwaber, K., Sutherland, J., and Thomas, D. 2001. Manifesto for Agile Software Development. Available: http://AgileManifesto.org.

Beede, E., Baylor, E., Hersch, F., Iurchenko, A., Wilcox, L., Ruamviboonsuk, P., and Vardoulakis, L. 2020. A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, April 25–30, Honolulu, HI, pp. 1–12.

Behutiye, W.N., Rodríguez, P., Oivo, M., and Tosun, A. 2017. Analyzing the concept of technical debt in the context of agile software development: A systematic literature review. Information and Software Technology, 82, 139–158.

Behymer, K., Rothwell, C., Ruff, H., Patzek, M., Calhoun, G., Draper, M., Douglass, S., and Lange, D. 2017. Initial Evaluation of the Intelligent Multi-UxV Planner with Adaptive Collaborative/Control Technologies (IMPACT). Beavercreek, OH: Infoscitex Corp.

Bennett, W., Alliger, G.M., Colegrove, C.M., Garrity, M.J., and Beard, R.M. 2017. Mission Essential Competencies: A Novel Approach to Proficiency-Based Live, Virtual, and Constructive Readiness Training and Assessment. Pp. 47–62 in Fundamental Issues in Defense Training and Simulation. Boca Raton, FL: CRC Press.

Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., Ghosh, J., Puri, R., Moura, J., and Eckersley, P. 2020. Explainable Machine Learning in Deployment. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 648–657.

Bhatti, S., Demir, M., Cooke, N.J., and Johnson, C.J. 2021. Assessing Communication and Trust in an AI Teammate in a Dynamic Task Environment. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), pp. 1–6, doi: 10.1109/ICHMS53169.2021.9582626.

Bisantz, A., and Roth, E.M. 2008. Analysis of Cognitive Work. Pp. 1–43 in Reviews of Human Factors and Ergonomics, vol. 3 (D.A. Boehm-Davis, ed.). Santa Monica, CA: Human Factors and Ergonomics Society.

Blaha, L.M., Bos, N., Fallon, C.K., Gonzalez, C., and Gutzwiller, R.S. 2019. Opportunities and challenges for human-machine teaming in cybersecurity operations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 442–446.

Blasch, E., Sung, J., and Nguyen, T. 2020. Multisource AI Scorecard Table for System Evaluation. AAAI FSS-20: Artificial Intelligence in Government and Public Sector, Washington, DC.

Boardman, M., and Butcher, F. 2019. An Exploration of Maintaining Human Control in AI Enabled Systems and the Challenges of Achieving It. Brussels: North Atlantic Treaty Organization Science and Technology Organization.

Boehm-Davis, D.A., Durso, F.T., and Lee, J.D. 2015. APA Handbook of Human Systems Integration. Washington, DC: American Psychological Association.

Bolstad, C.A., and Endsley, M.R. 1999. Shared mental models and shared displays: An empirical evaluation of team performance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 43(3), 213–217.

Bolstad, C.A., and Endsley, M.R. 2000. The effect of task load and shared displays on team situation awareness. 14th Triennial Congress of the International Ergonomics Association and Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 44(1), 189–192.

Bolstad, C.A., Riley, J.M., Jones, D.G., and Endsley, M.R. 2002. Using goal directed task analysis with Army brigade officer teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46(3), 472–476.

Bonney, L., Davis-Sramek, B., and Cadotte, E.R. 2016. “Thinking” about business markets: A cognitive assessment of market awareness. Journal of Business Research, 69(8), 2641–2648.

Boodraj, M. 2020. Managing technical debt in agile software development projects. Dissertation, Georgia State University. Available: https://scholarworks.gsu.edu/cis_diss/77.

Boyce, M.W., Chen, J.Y., Selkowitz, A.R., and Lakhmani, S.G. 2015. Effects of Agent Transparency on Operator Trust. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, Portland, Oregon, pp. 179–180.

Bradshaw, J.M., Hoffman, R.R., Woods, D.D., and Johnson, M. 2013. The seven deadly myths of autonomous systems. IEEE Intelligent Systems, 28(3), 54–61.

Brandon, D.P., and Hollingshead, A.B. 2004. Transactive memory systems in organizations: Matching tasks, expertise, and people. Organization Science, 15(6), 633–644.

Brandt, S.L., Lachter, J., Russell, R., and Shively, R.J. 2017. A Human-Autonomy Teaming Approach for a Flight-Following Task. Pp. 12–22 in Advances in Neuroergonomics and Cognitive Engineering, AHFE 2017. Advances in Intelligent Systems and Computing, vol. 586. Springer, Cham.

Brown, P., and Levinson, S.C. 1987. Politeness: Some Universals in Language Usage, vol. 4. Cambridge: Cambridge University Press.

Bryson, J.J., and Theodorou, A. 2019. How Society Can Maintain Human-Centric Artificial Intelligence. Pp. 305–323 in Human-Centered Digitalization and Services, vol. 19. Singapore: Springer. doi: 10.1007/978-981-13-7725-9_16.

Buchanan, B., and Shortliffe, E. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Boston, MA: Addison-Wesley.

Buchler, N., Rajivan, P., Marusich, L.R., Lightner, L., and Gonzalez, C. 2018. Sociometrics and observational assessment of teaming and leadership in a cyber security defense competition. Computers and Security, 73, 114–136. doi: 10.1016/j.cose.2017.10.013.

Burdick, M.D., and Shively, R.J. 2000. Evaluation of a Computational Model of Situational Awareness. Proceedings of the Joint IEA 14th Triennial Congress and Human Factors and Ergonomics Society 44th Annual Meeting, 44(1), 109–112.

Burkart, N., and Huber, M.F. 2021. A survey on the explainability of supervised machine learning. Journal of Artificial Intelligence Research, 70, 245–317.

Burke, J.L., Murphy, R.R., Coovert, M.D., and Riddle, D.L. 2004. Moonlight in Miami: Field study of human-robot interaction in the context of an urban search and rescue disaster response training exercise. Human–Computer Interaction, 19(1–2), 85–116.

Burns, C.M., Bryant, D.J., and Chalmers, B.A. 2005. Boundary, purpose, and values in work-domain models: Models of naval command and control. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 35(5), 603–616.

Bussone, A., Stumpf, S., and O’Sullivan, D. 2015. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems. 2015 International Conference on Healthcare Informatics, pp. 160–169.

Caldwell, B. 2005. Multi-team dynamics and distributed expertise in mission operations. Aviation, Space, and Environmental Medicine, 76, 145–153.

Caldwell, B.S., and Onken, J.D. 2011. Modeling and Analyzing Distributed Autonomy for Spaceflight Teams. 41st International Conference on Environmental Systems, Portland, OR. doi: 10.2514/6.2011-5135.

Caldwell, B.S., and Wang, E. 2009. Delays and user performance in human-computer-network interaction tasks. Human Factors, 51(6), 813–830.

Caldwell, B.S., Palmer III, R.C., and Cuevas, H.M. 2008. Information alignment and task coordination in organizations: An ‘information clutch’ metaphor. Information Systems Management, 25(1), 33–44.

Calhoun, G. 2021. Adaptable (not adaptive) automation: The forefront of human–automation teaming. Human Factors. doi: 10.1177/00187208211037457.

Cannon-Bowers, J.A., Salas, E., and Converse, S. 1993. Shared Mental Models in Expert Team Decision Making. Pp. 221–246 in Current Issues in Individual and Group Decision Making (J. Castellan Jr., ed.). Hillsdale, NJ: Lawrence Erlbaum.

Canonico, L.B., Flathmann, C., and McNeese, N. 2019. Collectively intelligent teams: Integrating team cognition, collective intelligence, and AI for future teaming. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 1466–1470.

Carroll, M., Shah, R., Ho, M.K., Griffiths, T., Seshia, S., Abbeel, P., and Dragan, A. 2019. On the utility of learning about humans for human-AI coordination. Advances in Neural Information Processing Systems, 32, 5174–5185.

Carvalho, D.V., Pereira, E.M., and Cardoso, J.S. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832.

Case, N. 2018. How to become a centaur. Journal of Design and Science. doi: 10.21428/61b2215c.

Casner, S.M., Geven, R.W., Recker, M.P., and Schooler, J.W. 2014. The retention of manual flying skills in the automated cockpit. Human Factors, 56(8), 1506–1516.

Chakraborti, T., Kambhampati, S., Scheutz, M., and Zhang, Y. 2017a. AI challenges in human-robot cognitive teaming. arXiv:1707.04775.

Chakraborti, T., Sreedharan, S., Zhang, Y., and Kambhampati, S. 2017b. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. Pp. 156–163 in Proceedings of the International Joint Conference on Artificial Intelligence, New York, NY: IEEE.

Chandler, S. 2020. How Explainable AI Is Helping Algorithms Avoid Bias. Forbes, February 18. Available: https://www.forbes.com/sites/simonchandler/2020/02/18/how-explainable-ai-is-helping-algorithms-avoid-bias/#4c16d79e5ed3.

Chella, A., Pipitone, A., Morin, A., and Racy, F. 2020. Developing self-awareness in robots via inner speech. Frontiers in Robotics and AI, 7, 16.

Chen, J.Y.C., and Barnes, M.J. 2015. Agent Transparency for Human-Agent Teaming Effectiveness. 2015 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1381–1385.

Chen, J.Y.C., Lakhmani, S.G., Stowers, K., Selkowitz, A.R., Wright, J.L., and Barnes, M. 2018. Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. doi: 10.1080/1463922X.2017.1315750.

Chen, J.Y.C., Procci, K., Boyce, M., Wright, J., Garcia, A., and Barnes, M. 2014a. Situation Awareness-Based Agent Transparency. Aberdeen Proving Ground, MD: Army Research Laboratory. Available: https://apps.dtic.mil/sti/pdfs/ADA600351.pdf.

Chen, T.B., Campbell, D., Gonzalez, F., and Coppin, G. 2014b. The Effect of Autonomy Transparency in Human-Robot Interactions: A Preliminary Study on Operator Cognitive Workload and Situation Awareness in Multiple Heterogeneous UAV Management. Proceedings of the Australasian Conference on Robotics and Automation, December 2–4, Melbourne, Australia.

Childers, T.L., Houston, M.J., and Heckler, S.E. 1985. Measurement of individual differences in visual versus verbal information processing. Journal of Consumer Research, 12(2), 125–134.

Chiou, E.K., and Lee, J.D. 2021. Trusting automation: Designing for responsivity and resilience. Human Factors: The Journal of the Human Factors and Ergonomics Society, online April 27. doi: 10.1177/00187208211009995.

Clark, H.H., and Schaefer, E.F. 1989. Contributing to discourse. Cognitive Science, 13, 259–294.

Cockburn, A. 2002. Agile Software Development. Boston, MA: Addison-Wesley.

Cook, B. 2021. The future of artificial intelligence in ISR Operations. Air and Space Power Journal (Summer), 41–55. Available: https://www.airuniversity.af.edu/Portals/10/ASPJ/journals/Volume-35_Special_Issue/F-Cook.pdf.

Cooke, N.J. 2018. 5 ways to help robots work together with people. The Conversation. Available: https://theconversation.com/5-ways-to-help-robots-work-together-with-people-101419.

Cooke, N.J., and Gorman, J.C. 2009. Interaction-based measures of cognitive systems. Journal of Cognitive Engineering and Decision Making, 3(1), 27–46. (Special Section on Integrating Cognitive Engineering in the Systems Engineering Process: Opportunities, Challenges and Emerging Approaches.)

Cooke, N.J., Gorman, J.C., Duran, J.L., and Taylor, A.R. 2007. Team cognition in experienced command-and-control teams. Journal of Experimental Psychology: Applied, 13(3), 146.

Cooke, N.J., Gorman, J.C., Myers, C.W., and Duran, J.L. 2013. Interactive team cognition. Cognitive Science, 37(2), 255–285.

Cooke, N.J., Kiekel, P.A., and Helm, E.E. 2001. Measuring team knowledge during skill acquisition of a complex task. International Journal of Cognitive Ergonomics, 5(3), 297–315.

Coolen, E., Draaisma, J., and Loeffen, J. 2019. Measuring situation awareness and team effectiveness in pediatric acute care by using the situation global assessment technique. European Journal of Pediatrics, 178(6), 837–850.

Coovert, M.D., Miller, E.E.P., and Bennett, W., Jr. 2017. Assessing trust and effectiveness in virtual teams: Latent growth curve and latent change score models. Social Sciences, 6(3), 87.

Copeland, B.J. 2021. Artificial Intelligence. Available: https://www.britannica.com/technology/artificial-intelligence.

Cowie, R., Cox, C., Martin, J.-C., Batliner, A., Heylen, D.K.J., and Karpouzis, K. 2011. Issues in Data Labelling. Pp. 213–241 in Emotion-Oriented Systems: The Humaine Handbook (P. Petta, C. Pelachaud, and R. Cowie, eds.). Berlin, Heidelberg: Springer-Verlag.

Crandall, B., Klein, G., and Hoffman, R.R. 2006. Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. Cambridge, MA: MIT Press.

Cranford, E.A., Gonzalez, C., Aggarwal, P., Cooney, S., Tambe, M., and Lebiere, C. 2020. Toward personalized deceptive signaling for cyber defense using cognitive models. Topics in Cognitive Science, 12(3), 992–1011.

Cranford, E.A., Gonzalez, C., Aggarwal, P., Tambe, M., Cooney, S., and Lebiere, C. 2021. Towards a cognitive theory of cyber deception. Cognitive Science, 45(7), e13013.

Crozier, M.S., Ting, H.Y., Boone, D.C., O’Regan, N.B., Bandrauk, N., Furey, A., Squires, C., Hapgood, J., and Hogan, M.P. 2015. Use of human patient simulation and validation of the Team Situation Awareness Global Assessment Technique (TSAGAT): A multidisciplinary team assessment tool in trauma education. Journal of Surgical Education, 72(1), 156–163.

CRS (Congressional Research Service). 2020. Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems. Washington, DC: Congressional Research Service. Available: https://sgp.fas.org/crs/natsec/IF11150.pdf.

Cuevas, H.M., Fiore, S.M., Caldwell, B.S., and Strater, L. 2007. Augmenting team cognition in human-automation teams performing in complex operational environments. Aviation, Space, and Environmental Medicine, 78(5, Supp. Section II), B63–70.

Cummings, M.L. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 3rd Intelligent Systems Conference, Chicago, IL.

Cummings, M.L. 2019. Lethal autonomous weapons: Meaningful human control or meaningful human certification? IEEE Technology and Society Magazine, 38(4), 20–26.

Cummings, M.L. 2021. Rethinking the maturity of artificial intelligence in safety-critical settings. AI Magazine, 42(1), 6–15.

Cummings, M.L., and Guerlain, S. 2007. Developing operator capacity estimates for supervisory control of autonomous vehicles. Human Factors, 49(1), 1–15.

Cummings, M.L., and Li, S. 2021a. Sources of subjectivity in machine learning models. ACM Journal of Data and Information Quality, 13(2), 1–9.

Cummings, M.L., and Li, S. 2021b. Subjectivity in the creation of machine learning models. Journal of Data and Information Quality, 13(2), 1–19. doi: 10.1145/3418034.

Cummings, M.L., How, J.P., Whitten, A., and Toupet, O. 2011. The impact of human–automation collaboration in decentralized multiple unmanned vehicle control. Proceedings of the IEEE, 100(3), 660–671.

Cummings, M.L., Li, S., and Zhu, H. 2022. Modeling operator self-assessment in human-autonomy teaming settings. International Journal of Human-Computer Studies, 157. doi: 10/gngrwc.

Dadashi, N., Stedmon, A.W., and Pridmore, T.P. 2013. Semi-automated CCTV surveillance: The effects of system confidence, system accuracy and task complexity on operator vigilance, reliance and workload. Applied Ergonomics, 44(5), 730–738.

Daugherty, P.R., and Wilson, H.J. 2018. Human + Machine: Reimagining Work in the Age of AI. Cambridge, MA: Harvard Business Press.

Dautenhahn, K. 2007. A Paradigm Shift in Artificial Intelligence: Why Social Intelligence Matters in the Design and Development of Robots with Human-Like Intelligence. Pp. 288–302 in 50 Years of Artificial Intelligence. Berlin, Heidelberg: Springer-Verlag.

Davis, E., and Marcus, G. 2016. The scope and limits of simulation in automated reasoning. Artificial Intelligence, 233, 60–72. doi: 10.1016/j.artint.2015.12.003.

Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

de Visser, E., and Parasuraman, R. 2011. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2), 209–231.

de Visser, E.J., Pak, R., and Shaw, T.H. 2018. From ‘automation’ to ‘autonomy’: The importance of trust repair in human–machine interaction. Ergonomics, 61(10), 1409–1427.

de Weck, O.L., Roos, D., Magee, C.L., and Vest, C.M. 2011. Life-Cycle Properties of Engineering Systems: The Ilities. Pp. 65–96 in Engineering Systems: Meeting Human Needs in a Complex Technological World. Cambridge, MA: MIT Press.

DeChurch, L.A., and Mesmer-Magnus, J.R. 2010. The cognitive underpinnings of effective teamwork: A meta-analysis. Journal of Applied Psychology, 95(1), 32.

Defense Innovation Board. 2019. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. Washington, DC: Defense Innovation Board. Available: https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB_AI_PRINCIPLES_SUPPORTING_DOCUMENT.PDF.pdf.

Defense Science Board. 2012. The Role of Autonomy in DoD Systems. Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. Available: https://irp.fas.org/agency/dod/dsb/autonomy.pdf.

Dekker, S., and Woods, D.D. 2002. MABA-MABA or abracadabra? Progress on human-automation coordination. Cognition, Technology and Work, 4, 240–244.

Delise, L.A., Allen Gorman, C., Brooks, A.M., Rentsch, J.R., and Steele-Johnson, D. 2010. The effects of team training on team outcomes: A meta-analysis. Performance Improvement Quarterly, 22(4), 53–80.

Demir, M., Likens, A.D., Cooke, N.J., Amazeen, P.G., and McNeese, N.J. 2018. Team coordination and effectiveness in human-autonomy teaming. IEEE Transactions on Human-Machine Systems, 1–10. doi: 10.1109/THMS.2018.2877482.

Demir, M., McNeese, N.J., and Cooke, N.J. 2016. Team Communication Behaviors of the Human-Automation Teaming. 2016 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, San Diego, CA, pp. 28–34. doi: 10.1109/COGSIMA.2016.7497782.

Demir, M., McNeese, N.J., Gorman, J.C., Cooke, N.J., Myers, C.W., and Grimm, D.A. 2021. Exploration of teammate trust and interaction dynamics in human-autonomy teaming. IEEE Transactions on Human-Machine Systems, 51(6), 696–705.

Dierdorff, E.C., Fisher, D.M., and Rubin, R.S. 2019. The power of percipience: Consequences of self-awareness in teams on team-level functioning and performance. Journal of Management, 45(7), 2891–2919.

Dignum, V. 2019. AI is multidisciplinary. AI Matters, 5(4), 18–21. doi: 10.1145/3375637.3375644.

Dimoka, A. 2010. What does the brain tell us about trust and distrust? Evidence from a functional neuroimaging study. MIS Quarterly, 34(2), 373. doi: 10.2307/20721433.

DOD (Department of Defense). 2012. DoD Directive 3000.09: Autonomy in Weapon Systems. Washington, DC: Department of Defense.

DOD. 2020. DOD Instruction 5000.02: Operation of the Adaptive Acquisition Framework. Washington, DC: Department of Defense. Available: https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/500002p.pdf.

Domeyer, J., Dinparastdjadid, A., Lee, J.D., Douglas, G., Alsaid, A., and Price, M. 2019. Proxemics and kinesics in automated vehicle–pedestrian communication: Representing ethnographic observations. Journal of the Transportation Research Board, 2673(10), 70–81.

Dorneich, M.C., Passinger, B., Hamblin, C., Keinrath, C., Vašek, J., Whitlow, S.D., and Beekhuyzen, M. 2017. Evaluation of the display of cognitive state feedback to drive adaptive task sharing. Frontiers in Neuroscience, 11, 144.

Draper, M., Calhoun, G., Hansen, M., Douglass, S., Spriggs, S., Patzek, M., Rowe, A., Ruff, H., Behymer, K., Howard, M., Bearden, G., and Frost, E. 2017. Intelligent Multi-Unmanned Vehicle Planner with Adaptive Collaborative/Control Technologies (IMPACT). 19th International Symposium on Aviation Psychology, p. 226.

Driskell, T., Salas, E., and Driskell, J.E. 2018. Teams in extreme environments: Alterations in team development and teamwork. Human Resource Management Review, 28(4), 434–449.

Dybå, T., and Dingsøyr, T. 2008. Empirical studies of agile software development: A systematic review. Information and Software Technology, 50(9–10), 833–859.

Dzindolet, M.T., Pierce, L., Peterson, S., Purcell, L., and Beck, H. 2002. The influence of feedback on automation use, misuse, and disuse. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46(1), 551–555.

Ebert, C., Gallardo, G., Hernantes, J., and Serrano, N. 2016. DevOps. IEEE Software, 33, 94–100.

Eiband, M., Buschek, D., Kremer, A., and Hussmann, H. 2019. The Impact of Placebic Explanations on Trust in Intelligent Systems. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–6.

Einhorn, H.J., and Hogarth, R.M. 1981. Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology, 32(1), 53–88.

Elix, B., and Naikar, N. 2021. Designing for adaptation in workers’ individual behaviors and collective structures with cognitive work analysis: Case study of the diagram of work organization possibilities. Human Factors, 63(2), 274–295.

Elm, W.C., Gualtieri, J.W., McKenna, B.P., Tittle, J.S., Peffer, J.E., Szymczak, S.S., and Grossman, J.B. 2008. Integrating cognitive systems engineering throughout the systems engineering process. Journal of Cognitive Engineering and Decision Making, 2(3), 249–273.

Endsley, M.R. 1988. Design and evaluation for situation awareness enhancement. Proceedings of the Human Factors Society Annual Meeting, 32(2), 97–101.

Endsley, M.R. 1990. Predictive utility of an objective measure of situation awareness. Proceedings of the Human Factors Society Annual Meeting, 34(1), 41–45.

Endsley, M.R. 1993. A survey of situation awareness requirements in air-to-air combat fighters. International Journal of Aviation Psychology, 3(2), 157–168.

Endsley, M.R. 1995a. Measurement of situation awareness in dynamic systems. Human Factors, 37(1), 65–84.

Endsley, M.R. 1995b. Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.

Endsley, M.R. 1996. Automation and Situation Awareness. Pp. 163–181 in Automation and Human Performance: Theory and Applications (R. Parasuraman and M. Mouloua, eds.), Mahwah, NJ: Lawrence Erlbaum.

Endsley, M.R. 2008. Situation Awareness: A Key Cognitive Factor in Effectiveness of Battle Command. Pp. 95–119 in The Battle of Cognition: The Future of Information-Rich Warfare and the Mind of the Commander (A. Kott, ed.), Westport, CT: Praeger.

Endsley, M.R. 2015. Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4–32.

Endsley, M.R. 2017. From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27.

Endsley, M.R. 2018a. Combating information attacks in the age of the internet: New challenges for cognitive engineering. Human Factors, 60(8), 1081–1094.

Endsley, M.R. 2018b. Level of automation forms a key aspect of autonomy design. Special Issue on Advancing Models of Human-Automation Interaction, Journal of Cognitive Engineering and Decision Making, 12, 29–34.

Endsley, M.R. 2019. Human factors and aviation safety: Testimony to the United States House of Representatives Hearing on Boeing 737-Max8 crashes. Available: https://transportation.house.gov/imo/media/doc/Endsley%20Testimony.pdf.

Endsley, M.R. 2020a. Human-Automation Interaction and the Challenge of Maintaining Situation Awareness in Future Autonomous Vehicles. Pp. 151–168 in Automation and Human Performance: Theory and Applications, 2nd ed. (M. Mouloua and P. Hancock, eds.). Boca Raton, FL: CRC Press.

Endsley, M.R. 2020b. The divergence of objective and subjective situation awareness: A meta-analysis. Journal of Cognitive Engineering and Decision Making, 14(1), 34–53.

Endsley, M.R. 2021a. A systematic review and meta-analysis of direct objective measures of situation awareness: A comparison of SAGAT and SPAM. Human Factors, 63(1), 124–150.

Endsley, M.R. 2021b. Situation Awareness in Teams: Models and Measures. Pp. 1–28 in Handbook of Distributed Team Cognition: Contemporary Research Models, Methodologies, and Measures in Distributed Team Cognition (M. McNeese, E. Salas, and M. Endsley, eds.). Boca Raton, FL: CRC Press.

Endsley, M.R., and Jones, D.G. 2012. Designing for Situation Awareness: An Approach to Human-Centered Design, 2nd ed. London: Taylor and Francis.

Endsley, M.R., and Jones, W.M. 2001. A Model of Inter- and Intrateam Situation Awareness: Implications for Design, Training and Measurement. Pp. 46–67 in New Trends in Cooperative Activities: Understanding System Dynamics in Complex Environments (M. McNeese, E. Salas, and M. Endsley, eds.). Santa Monica, CA: Human Factors and Ergonomics Society.

Endsley, M.R., and Kaber, D.B. 1999. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42, 462–492.

Endsley, M.R., and Kiris, E.O. 1994. Information presentation for expert systems in future fighter aircraft. International Journal of Aviation Psychology, 4(4), 333–348.

Endsley, M.R., and Kiris, E.O. 1995. The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394.

Endsley, M.R., Bolte, B., and Jones, D.G. 2003. Designing for Situation Awareness: An Approach to Human-Centered Design. London: Taylor and Francis.

Endsley, M.R., English, T.M., and Sundararajan, M. 1997. The modeling of expertise: The use of situation models for knowledge engineering. International Journal of Cognitive Ergonomics, 1(2), 119–136.

Endsley, M.R., Jones, D.G., Hannen, M., and Dunlap, K.L. 2008. A Case Study in Systems-Of-Systems Engineering: Cognitive Engineering in the Army’s Future Combat Systems. Marietta, GA: SA Technologies.

Entin, E.E., and Entin, E.B. 2001. Measures for Evaluation of Team Process and Performance in Experiments and Exercises. Proceedings of the 6th International Command and Control Research and Technology Symposium. Washington, DC: Command and Control Research Program.

Entin, E.E., and Serfaty, D. 1999. Adaptive team coordination. Human Factors, 41(2), 312–325. doi: 10.1518/001872099779591196.

Epley, N., Waytz, A., and Cacioppo, J.T. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.

Erev, I., Ert, E., Roth, A.E., Haruvy, E., Herzog, S.M., Hau, R., Hertwig, R., Stewart, T., West, R., and Lebiere, C. 2010. A choice prediction competition: Choices from experience and from description. Journal of Behavioral Decision Making, 23(1), 15–47.

Evans, D.C., and Fendley, M. 2017. A multi-measure approach for connecting cognitive workload and automation. International Journal of Human-Computer Studies, 97, 182–189.

Evenson, S., Muller, M., and Roth, E.M. 2008. Capturing the context of use to inform system design. Journal of Cognitive Engineering and Decision Making, 2(3), 181–203.

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., and Song, D. 2017. Robust physical-world attacks on deep learning models. arXiv:1707.08945.

Federal Aviation Administration Human Factors Team. 1996. The Interfaces Between Flightcrews and Modern Flight Deck Systems. Available: http://www.tc.faa.gov/its/worldpac/techrpt/hffaces.pdf.

Feigh, K.M., and Pritchett, A.R. 2014. Requirements for effective function allocation: A critical review. Journal of Cognitive Engineering and Decision Making, 8(1), 23–32.

Felzmann, H., Fosch-Villaronga, E., Lutz, C., and Tamò-Larrieux, A. 2020. Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333–3361.

Ferguson-Walter, K., Fugate, S., and Wang, C. 2020. Introduction to the Minitrack on Cyber Deception for Defense. In Proceedings of the 53rd Hawaii International Conference on System Sciences, pp. 1823–1824.

Fern, L., and Shively, R.J. 2009. A Comparison of Varying Levels of Automation on the Supervisory Control of Multiple UASs. Proceedings of AUVSI’s Unmanned Systems North America 2009, Washington, DC, pp. 10–13.

Fernández-Loría, C., Provost, F., and Han, X. 2020. Explaining data-driven decisions made by AI systems: The counterfactual approach. arXiv:2001.07417.

Ferrer, X., van Nuenen, T., Such, J.M., Coté, M., and Criado, N. 2021. Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72–80.

Fishbein, M., and Ajzen, I. 1975. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Boston, MA: Addison-Wesley.

Fitts, P.M. 1951. Human Engineering for an Effective Air-Navigation and Traffic-Control System. Washington, DC: National Research Council. Available: https://apps.dtic.mil/sti/pdfs/ADB815893.pdf.

Flathmann, C., Schelble, B.G., Zhang, R., and McNeese, N.J. 2021. Modeling and Guiding the Creation of Ethical Human-AI Teams. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 469–479.

Flournoy, M.A., Haines, A., and Chefitz, G. 2020. Building Trust through Testing. Available: https://cset.georgetown.edu/wp-content/uploads/Building-Trust-Through-Testing.pdf.

Forbus, K.D. 2016. Software social organisms: Implications for measuring AI progress. AI Magazine, 37(1), 85–90.

Friesen, D., Borst, C., Pavel, M.D., Masarati, P., and Mulder, M. 2021. Design and Evaluation of a Constraint-Based Helicopter Display to Support Safe Path Planning. Nitros Safety Workshop, April 9–11.

Gallina, P., Bellotto, N., and Di Luca, M. 2015. Progressive Co-Adaptation in Human-Machine Interaction. 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), pp. 362–368.

Ganaie, M.Y., and Mudasir, H. 2015. A study of social intelligence & academic achievement of college students of District Srinagar, J&K, India. Journal of American Science, 11(3), 23–27.

Gao, J., and Lee, J.D. 2006. Effect of shared information on trust and reliance in a demand forecasting task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(3), 215–219.

Gao, J., Lee, J.D., and Zhang, Y. 2006. A dynamic model of interaction between reliance on automation and cooperation in multi-operator multi-automation situations. International Journal of Industrial Ergonomics, 36(5), 511–526.

Gardner, A.K., Kosemund, M., and Martinez, J. 2017. Examining the feasibility and predictive validity of the SAGAT tool to assess situation awareness among medical trainees. Simulation in Healthcare, 12(1), 17–21.

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé III, H., and Crawford, K. 2018. Datasheets for Datasets. 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden.

Gianfrancesco, M.A., Tamang, S., Yazdany, J., and Schmajuk, G. 2018. Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178(11), 1544–1547.

Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., and Kagal, L. 2018. Explaining Explanations: An Overview of Interpretability of Machine Learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, pp. 80–89.

Gonzalez, C. 2017. Decision Making: A Cognitive Science Perspective. Pp. 249–263 in The Oxford Handbook of Cognitive Science (S.E.F. Chipman, ed.). Oxford University Press. doi: 10.1093/oxfordhb/9780199842193.013.6.

Gonzalez C., and Dutt, V. 2011. Instance-based learning: Integrating sampling and repeated decisions from experience. Psychological Review, 118(4), 523–551.

Gonzalez, C., Aggarwal, P., Cranford, E., and Lebiere, C. 2020. Design of Dynamic and Personalized Deception: A Research Framework and New Insights. Proceedings of the 53rd Hawaii International Conference on System Sciences HICSS 2020, January 7–10, pp. 1825–1834. doi: 10.24251/HICSS.2020.226.

Gonzalez, C., Ben-Asher, N., Martin, J., and Dutt, V. 2015. A cognitive model of dynamic cooperation with varied interdependency information. Cognitive Science, 39, 457–495.

Gonzalez, C., Ben-Asher, N., Oltramari, A., and Lebiere, C. 2014. Cognition and Technology. Pp. 93–117 in Cyber Defense and Situational Awareness, vol. 62 (A. Kott, C. Wang, and R. Erbacher, eds.). Switzerland: Springer International Publishing. doi: 10.1007/978-3-319-11391-3.

Goodfellow, I.J., Shlens, J., and Szegedy, C. 2015. Explaining and harnessing adversarial examples. arXiv:1412.6572.

Goodwin, G.F., Blacksmith, N., and Coats, M.R. 2018. The science of teams in the military: Contributions from over 60 years of research. American Psychologist, 73(4), 322.

Gorman, J.C., Cooke, N.J., and Amazeen, P.G. 2010. Training adaptive teams. Human Factors, 52(2), 295–307.

Gorman, J.C., Cooke, N.J., and Winner, J.L. 2006. Measuring team situation awareness in decentralized command and control environments. Ergonomics, 49(12–13), 1312–1325.

Gorman, J.C., Demir, M., Cooke, N.J., and Grimm, D.A. 2019. Evaluating sociotechnical dynamics in a simulated remotely piloted aircraft system: A layered dynamics approach. Ergonomics, 62(5), 1–44. doi: 10.1080/00140139.2018.1557750.

Gorman, J.C., Grimm, D.A., Stevens, R.H., Galloway, T., Willemsen-Dunlap, A.M., and Halpin, D.J. 2020. Measuring real-time team cognition during team training. Human Factors: The Journal of the Human Factors and Ergonomics Society, 62(5), 825–860.

Gottman, J., Swanson, C., and Swanson, K. 2002. A general systems theory of marriage: Nonlinear difference equation modeling of marital interaction. Personality and Social Psychology Review, 6(4), 326–340.

Graham, J., Schneider, M., Bauer, A., Bessiere, K., and Gonzalez, C. 2004. Shared mental models in military command and control organizations: Effect of social network distance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 509–512.

Grand, J.A., Braun, M.T., Kuljanin, G., Kozlowski, S.W., and Chao, G.T. 2016. The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams. Journal of Applied Psychology, 101(10), 1353.

Gray, W. 2002. Simulated task environments: The role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cognitive Science Quarterly, 2, 205–227.

Groom, V., and Nass, C. 2007. Can robots be teammates?: Benchmarks in human–robot teams. Interaction Studies, 8(3), 483–500.

Groover, M. 2020. Automation, Production Systems, and Computer-Integrated Manufacturing, 5th ed. New York, NY: Pearson.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. 2018. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1–42.

Gunderson, E.K.E. 1973. Psychological Studies in Antarctica: A Review. Pp. 352–361 in Polar Human Biology: The Proceedings of the SCAR/IUPS/IUBS Symposium on Human Biology and Medicine in the Antarctic. London: Butterworth-Heinemann.

Gutzwiller, R., Ferguson-Walter, K., Fugate, S., and Rogers, A. 2018. “Oh, look, a butterfly!” A framework for distracting attackers to improve cyber defense. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 272–276. doi: 10.1177/1541931218621063.

Hagendorff, T. 2020. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.

Hallaq, B., Somer, T., Osula, A.M., Ngo, K., and Mitchener-Nissen, T. 2017. Artificial Intelligence Within the Military Domain and Cyber Warfare. European Conference on Cyber Warfare and Security (ECCWS), pp. 153–157.

Hancock, P.A., Billings, D.R., Schaefer, K.E., Chen, J.Y.C., de Visser, E.J., and Parasuraman, R. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517–527.

Hancock, P., and Verwey, W.B. 1997. Fatigue, workload and adaptive driver systems. Accident Analysis and Prevention, 29(4), 495–506.

Harding, S.M., Rajivan, P., Bertenthal, B.I., and Gonzalez, C. 2018. Human Decisions on Targeted and Non-Targeted Adversarial Samples. 40th Annual Meeting of the Cognitive Science Society (CogSci 2018), July 25–28, Madison, WI.

Harris, W.C., Hancock, P.A., Arthur, E.J., and Caird, J.K. 1995. Performance, workload, and fatigue changes associated with automation. The International Journal of Aviation Psychology, 5(2), 169–185.

Harrison McKnight, D., and Chervany, N.L. 2001. Trust and Distrust Definitions: One Bite at a Time. Pp. 27–54 in Trust in Cyber-societies, vol. 2246 (R. Falcone, M. Singh, and Y.-H. Tan, eds.). Berlin, Heidelberg: Springer. doi: 10.1007/3-540-45547-7_3.

Hayes, C., and Miller, C.A. 2010. Human-Computer Etiquette. New York: Auerbach Publications.

Hilburn, B. 2017. Dynamic Decision Aiding: The Impact of Adaptive Automation on Mental Workload. Pp. 193–200 in Engineering Psychology and Cognitive Ergonomics. Abingdon: Routledge.

Hilburn, B., Jorna, P.G., Bryne, E.A., and Parasuraman, R. 1997. The Effect of Adaptive Air Traffic Control Decision Aiding on Controller Mental Workload. Pp. 84–91 in Human Automation Interaction: Research and Practice (M. Mouloua and J.M. Koonce, eds.). Mahwah, NJ: LEA.

Hill, S.C. 2021. Joint all-domain operations: The key to decision dominance and overmatch. Breaking Defense. Available: https://breakingdefense.com/2021/05/joint-all-domain-operations-the-key-to-decision-dominance-and-overmatch/.

Hinski, S. 2017. Training the code team leader as a forcing function to improve overall team performance during simulated code blue events. PhD thesis, Human Systems Engineering, Arizona State University.

Ho, N., Sadler, G.G., Hoffmann, L.C., Zemlicka, K., Lyons, J., Fergueson, W., Richardson, C., Cacanindin, A., Cals, S., and Wilkins, M. 2017. A longitudinal field study of auto-GCAS acceptance and trust: First-year results and implications. Journal of Cognitive Engineering and Decision Making, 11(3), 239–251.

Hoff, K.A., and Bashir, M. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434.

Hoffman, R.R., and Hancock, P.A. 2017. Measuring resilience. Human Factors, 59, 564–581.

Hoffman, R.R., and Woods, D.D. 2011. Beyond Simon’s slice: Five fundamental trade-offs that bound the performance of macrocognitive work systems. IEEE Intelligent Systems, 26(6), 67–71.

Hoffman, R.R., Mueller, S.T., Klein, G., and Litman, J. 2018. Metrics for explainable AI: Challenges and prospects. arXiv:1812.04608.

Hohman, F., Kahng, M., Pienta, R., and Chau, D. 2019. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics, 25(8), 2674–2693.

Howard, A., and Borenstein, J. 2018. The ugly truth about ourselves and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics, 24(5), 1521–1536.

Huang, L., Cooke, N., Johnson, C., Lematta, G., Bhatti, S., Barnes, M., and Holder, E. 2020. Human-Autonomy Teaming: Interaction Metrics and Models for Next Generation Combat Vehicle Concepts. Technical Report for ARL Grant W911NF1820271.

Huang, L., Cooke, N.J., Gutzwiller, R.S., Berman, S., Chiou, E.K., Demir, M., and Zhang, W. 2021. Distributed Dynamic Team Trust in Human, Artificial Intelligence, and Robot Teaming. Pp. 301–319 in Trust in Human-Robot Interaction. Academic Press. doi: 10.1016/B978-0-12-819472-0.00013-7.

Human Factors and Ergonomics Society. 2021. Human Readiness Level Scale in the System Development Process (ANSI/HFES 400-2021). Washington, DC: Human Factors and Ergonomics Society.

Hutchins, E. 1990. The Technology of Team Navigation. Pp. 191–220 in Intellectual Teamwork: Social and Technological Foundations of Cooperative Work (J. Galegher, R.E. Kraut, and C. Egido, eds.). Hillsdale, NJ: Lawrence Erlbaum Associates.

IJtsma, M., Ma, L.M., Pritchett, A.R., and Feigh, K.M. 2019. Computational methodology for the allocation of work and interaction in human-robot teams. Journal of Cognitive Engineering and Decision Making, 13(4), 221–241. doi: 10.1177/1555343419869484.

Jian, J.-Y., Bisantz, A.M., and Drury, C.G. 2000. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.

Johnson, M., and Bradshaw, J.M. 2021. The Role of Interdependence in Trust. Pp. 379–403 in Trust in Human-Robot Interaction. Elsevier, Inc. doi: 10.1016/B978-0-12-819472-0.00016-2.

Johnson, M., and Vera, A. 2019. No AI is an island: The case for teaming intelligence. AI Magazine, 40(1), 16–28.

Johnson, M., Bradshaw, J.M., and Feltovich, P.J. 2017. Tomorrow’s human–machine design tools: From levels of automation to interdependencies. Journal of Cognitive Engineering and Decision Making, 12(1), 77–82.

Johnson, M., Bradshaw, J.M., Feltovich, P.J., Jonker, C.M., van Riemsdijk, M.B., and Sierhuis, M. 2014. Coactive design: Designing support for interdependence in joint activity. Journal of Human Robot Interaction, 3(1), 43–69.

Johnson, M., Vignati, M., and Duran, D. 2020. Understanding Human-Machine Teaming Through Interdependence Analysis. Pp. 209–233 in Contemporary Research. Boca Raton, FL: CRC Press.

Jones, R.E.T., Connors, E.S., Mossey, M.E., Hyatt, J.R., Hansen, N.J., and Endsley, M.R. 2011. Using fuzzy cognitive mapping techniques to model situation awareness for army infantry platoon leaders. Computational and Mathematical Organization Theory, 17(3), 272–295.

Juvina, I., Lebiere, C., and Gonzalez, C. 2015. Modeling trust dynamics in strategic interaction. Journal of Applied Research in Memory and Cognition, 4(3), 197–211.

Kaber, D.B. 2018. Issues in human-automation interaction modeling: Presumptive aspects of frameworks of types and levels of automation. Special Issue on Advancing Models of Human-Automation Interaction, Journal of Cognitive Engineering and Decision Making, 12, 7–24.

Kaber, D.B., and Endsley, M.R. 1997. Level of Automation and Adaptive Automation Effects on Performance in a Dynamic Control Task. Proceedings of the 13th Triennial Congress of the International Ergonomics Association, pp. 202–204. Helsinki: Finnish Institute of Occupational Health.

Kaber, D.B., and Endsley, M.R. 2004. The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5(2), 113–153.

Kaber, D.B., and Riley, J. 1999. Adaptive automation of a dynamic control task based on secondary task workload measurement. International Journal of Cognitive Ergonomics, 3(3), 169–187.

Kahneman, D., and Tversky, A. 1979. Intuitive prediction: Biases and corrective procedures. Studies in Management Sciences, 12, 313–327.

Kahneman, D., Slovic, P., and Tversky, A. 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Kaplan, A.D., Kessler, T.T., Brill, J.C., and Hancock, P.A. 2021. Trust in artificial intelligence: Meta-analytic findings. Human Factors, online May 28. doi: 10.1177/00187208211013988.

Kelley, H.H., and Thibaut, J. 1978. Interpersonal Relations: A Theory of Interdependence. New York: Wiley.

Kibbe, M., and McDowell, E.D. 1995. Operator Decision Making: Information on Demand. Pp. 43–48 in Human Factors in Aviation Operations, vol. 3 (R. Fuller, N. Johnston, and N. McDonald, eds.). Aldershot, UK: Avebury.

Kirlik, A. 1993. Modeling strategic behavior in human-automation interaction: Why an “aid” can (and should) go unused. Human Factors, 35(2), 221–242.

Klein, G.A. 1993. A Recognition Primed Decision (RPD) Model of Rapid Decision Making. Pp. 138–147 in Decision Making in Action: Models and Methods (G.A. Klein, J. Orasanu, R. Calderwood, and C.E. Zsambok, eds.). Norwood, NJ: Ablex.

Klein, G., Feltovich, P.J., and Woods, D.D. 2005. Common Ground and Coordination in Joint Activity. Pp. 139–184 in Organizational Simulation (W.B. Rouse and K.R. Boff, eds.). New York: John Wiley and Sons, Inc.

Knight, W. 2017. The Dark Secret at the Heart of AI. Cambridge, MA: MIT Technology Review.

Kokar, M.M., and Endsley, M.R. 2012. Situation awareness and cognitive modeling. IEEE Intelligent Systems, 27(3), 91–96.

Konaev, M., Chahal, H., Fedasiuk, R., Huang, T., and Rahkovsky, I. 2020. US Military Investments in Autonomy and AI: Costs, Benefits, and Strategic Effects. Washington, DC: Center for Security and Emerging Technology. doi: 10.51593/20190044.

Koschmann, T., and LeBaron, C.D. 2003. Reconsidering Common Ground: Examining Clark’s Contribution Theory in the OR. Proceedings of the Eighth European Conference on Computer-Supported Cooperative Work, Helsinki, Finland, pp. 81–98.

Kott, A. 2008. Battle of Cognition: The Future Information-Rich Warfare and the Mind of the Commander. Westport, CT: Praeger Security International.

Kramer, A.F. 2020. Physiological Metrics of Mental Workload: A Review of Recent Progress. Pp. 279–328 in Multiple-Task Performance. London: CRC Press. doi: 10.1201/9781003069447.

Kruchten, P. 2016. Refining the definition of technical debt. Available: https://philippe.kruchten.com/2016/04/22/refining-the-definition-of-technical-debt/.

Kuang, C. 2017. Can AI be taught to explain itself? New York Times Magazine, November 21. Available: https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html.

Kunze, A., Summerskill, S.J., Marshall, R., and Filtness, A.J. 2019. Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics, 62(3), 345–360.

Layton, C., Smith, P.J., and McCoy, C.E. 1994. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation. Human Factors, 36(1), 94–119.

Leavitt, H.J. 1951. Some effects of certain communication patterns on group performance. The Journal of Abnormal and Social Psychology, 46(1), 38.

Lee, J.D. 2001. Emerging challenges in cognitive ergonomics: Managing swarms of self-organizing agent-based automation. Theoretical Issues in Ergonomics Science, 2(3), 238–250.

Lee, J.D. 2018. Perspectives on automotive automation and autonomy. Journal of Cognitive Engineering and Decision Making, 12(1), 53–57.

Lee, J.D., and Moray, N. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35, 1243–1270. doi: 10.1080/00140139208967392.

Lee, J.D., and Moray, N. 1994. Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40, 153–184.

Lee, J.D., and See, K.A. 2004. Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.

Lee, N.T., Resnick, P., and Barton, G. 2019. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Available: https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

Lee, N., Kim, J., Kim, E., and Kwon, O. 2017. The influence of politeness behavior on user compliance with social robots in a healthcare service setting. International Journal of Social Robotics, 9, 727–743. doi: 10.1007/s12369-017-0420-0.

Levin, T. 2021. Tesla’s full self-driving tech keeps getting fooled by the moon, billboards, and Burger King signs. Available: https://www.businessinsider.com/tesla-fsd-full-self-driving-traffic-light-fooled-moon-video-2021-7.

Lewicki, R.J., and Brinsfield, C. 2017. Trust repair. Annual Review of Organizational Psychology and Organizational Behavior, 4, 287–313.

Lewis, P.R., Chandra, A., Parsons, S., Robinson, E., Glette, K., Bahsoon, R., Torresen, J., and Yao, X. 2011. A Survey of Self-Awareness and Its Application in Computing Systems. 2011 Fifth IEEE Conference on Self-Adaptive and Self-Organizing Systems Workshops, pp. 102–107.

Lieberman, H. 2001. Your Wish Is My Command: Programming by Example. San Francisco, CA: Morgan Kaufmann Publishers.

Lin, R., and Kraus, S. 2010. Can automated agents proficiently negotiate with humans? Communications of the ACM, 53(1), 78–88.

Lipshitz, R. 1987. Decision Making in the Real World: Developing Descriptions and Prescriptions from Decision Maker’s Retrospective Accounts. Boston, MA: Boston University Center for Applied Sciences.

Lipton, Z.C. 2017. The doctor just won’t accept that! arXiv:1711.08037.

Littman, M.L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M.C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., and Walsh, T. 2021. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford, CA: Stanford University. Available: http://ai100.stanford.edu/2021-report.

Long, S.K., Sato, T., Millner, N., Loranger, R., Mirabelli, J., Xu, V., and Yamani, Y. 2020. Empirically and theoretically driven scales on automation trust: A multi-level confirmatory factor analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 64(1), 1829–1832. doi: 10.1177/1071181320641440.

Lyons, J.B. 2013. Being Transparent About Transparency: A Model for Human-Robot Interaction. 2013 AAAI Spring Symposium Series, pp. 48–53.

Lyons, J.B., Ho, N.T., Van Abel, A.L., Hoffmann, L.C., Sadler, G.G., Fergueson, W.E., Grigsby, M.W., and Wilkins, M. 2017. Comparing trust in auto-GCAS between experienced and novice air force pilots. Ergonomics in Design, 25(4), 4–9.

Lyons, J.B., Sycara, K., Lewis, M., and Capiola, A. 2021. Human-autonomy teaming: Definitions, debates, and directions. Frontiers in Psychology, 12, 589585.

Madhavan, P., and Wiegmann, D.A. 2007. Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277–301. doi: 10/d4sv4f.

Malone, T.W. 2018. How human-computer ‘superminds’ are redefining the future of work. MIT Sloan Management Review, 59(4), 34–41.

Malone, T.W., and Crowston, K. 1994. The interdisciplinary study of coordination. ACM Computing Surveys, 26(1), 87–119.

Malone, T.W., and Crowston, K. 2001. The Interdisciplinary Study of Coordination. Pp. 7–50 in Coordination Theory and Collaboration Technology (G.M. Olson, T.W. Malone, and J.B. Smith, eds.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Marks, M.A., DeChurch, L.A., Mathieu, J.E., Panzer, F.J., and Alonso, A. 2005. Teamwork in multiteam systems. Journal of Applied Psychology, 90(5), 964.

Marks, M.A., Mathieu, J.E., and Zaccaro, S.J. 2001. A temporally based framework and taxonomy of team processes. Academy of Management Review, 26(3), 356–376.

Marks, M.A., Sabella, M.J., Burke, C.S., and Zaccaro, S.J. 2002. The impact of cross-training on team effectiveness. Journal of Applied Psychology, 87(1), 3.

Marlow, S.L., Lacerenza, C.N., Reyes, D., and Salas, E. 2017. The Science and Practice of Simulation-Based Training in Organizations. Pp. 256–277 in The Cambridge Handbook of Workplace Training and Employee Development (K.G. Brown, ed.). New York, NY: Cambridge University Press.

Marriott, D., Ferguson-Walter, K., Fugate, S., and Carvalho, M. 2021. Proceedings of the 1st International Workshop on Adaptive Cyber Defense. arXiv:2108.08476.

Mathieu, J.E., Heffner, T.S., Goodwin, G.F., Salas, E., and Cannon-Bowers, J.A. 2000. The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85(2), 273.

McClumpha, A., and James, M. 1994. Understanding Automated Aircraft. Pp. 183–190 in Human Performance in Automated Systems: Current Research and Trends (M. Mouloua and R. Parasuraman, eds.). Hillsdale, NJ: Erlbaum.

McCroskey, J.C., and Young, T.J. 1979. The use and abuse of factor analysis in communication research. Human Communication Research, 5(4), 375–382. doi: 10.1111/j.1468-2958.1979.tb00651.x.

McDermott, P., Dominguez, C., Kasdaglis, N., Ryan, M., Trahan, I., and Nelson, A. 2018. Human-Machine Teaming Systems Engineering Guide. Bedford, MA: MITRE.

McDermott, P., Walker, K., Dominguez, C., Nelson, A., and Kasdaglis, N. 2017. Quenching the Thirst for Human-Machine Teaming Guidance: Helping Military Systems Acquisition Leverage Cognitive Engineering Research. Proceedings of the 13th International Conference on Naturalistic Decision Making 2017, Bath, UK, pp. 236–240.

McGrath, J.E. 1984. Groups: Interaction and Performance. Englewood Cliffs, NJ: Prentice Hall.

McGrath, J.E. 1990. Time Matters in Groups. Pp. 23–61 in Intellectual Teamwork: Social and Technological Foundations of Cooperative Work (J. Galegher, R.E. Kraut, and C. Egido, eds.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

McGuirl, J.M., and Sarter, N.B. 2006. Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(4), 656–665.

McNeese, N.J., Demir, M., Cooke, N.J., and Myers, C. 2018. Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60, 262–273. doi: 10.1177/0018720817743223.

McNeese, N.J., Demir, M., Chiou, E.K., and Cooke, N.J. 2021a. Trust and team performance in human–autonomy teaming. International Journal of Electronic Commerce, 25(1), 51–72.

McNeese, N., Schelble, B., Barberis Canonico, L., and Demir, M. 2021b. Who/what is my teammate? Team composition considerations in human-AI teaming. IEEE Transactions on Human-Machine Systems, 51(4).

Mercado, J.E., Rupp, M.A., Chen, J.Y.C., Barnes, M.J., Barber, D., and Procci, K. 2016. Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58, 401–415. doi: 10.1177/0018720815621206.

Merritt, S.M., Lee, D., Unnerstall, J.L., and Huber, K. 2015. Are well-calibrated users effective users? Associations between calibration of trust and performance on an automation-aided task. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(1), 34–47.

Metzger, U., and Parasuraman, R. 2005. Automation in future air traffic management: Effects of decision aid reliability on controller performance and mental workload. Human Factors, 47(1), 35–49.

Miller, C.A. 2000. From the Microsoft Paperclip to the Rotorcraft Pilot’s Associate: Lessons Learned from Fielding Adaptive Automation Systems. Human Performance, Situation Awareness and Automation: User-Centered Design for the New Millennium Conference, Savannah, GA.

Miller, C.A. 2014. Delegation and Transparency: Coordinating Interactions so Information Exchange Is No Surprise. International Conference on Virtual, Augmented and Mixed Reality, pp. 191–202.

Miller, C.A. 2018. The risks of discretization: What is lost in (even good) levels-of-automation schemes. Journal of Cognitive Engineering and Decision Making, 12(1), 74–76.

Miller, C.A. 2021. Trust, Transparency, Explanation, and Planning: Why We Need a Lifecycle Perspective on Human-Automation Interaction. Pp. 233–257 in Trust in Human-Robot Interaction. Academic Press.

Miller, C.A., and Parasuraman, R. 2007. Designing for flexible interaction between humans and automation: Delegation interfaces for supervisory control. Human Factors, 49(1), 57–75.

Miller, C.A., Funk, H., Goldman, R., Meisner, J., and Wu, P. 2005. Implications of Adaptive vs. Adaptable UIs on Decision Making: Why “Automated Adaptiveness” Is Not Always the Right Answer. Proceedings of the 1st International Conference on Augmented Cognition. Las Vegas, Nevada.

Miller, C.A., Shaw, T., Emfield, A., Hamell, J., de Visser, E., Parasuraman, R., and Musliner, D. 2011. Delegating to automation: Performance, complacency and bias effects under non-optimal conditions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 95–99.

Miller, J.G., and Miller, J.L. 1991. Introduction: The nature of living systems. Behavioral Science, 36, 157–163.

Miller, T. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. doi: 10/gfwcxw.

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., and Gebru, T. 2019. Model Cards for Model Reporting. FAT* ‘19: Proceedings of the Conference on Fairness, Accountability, and Transparency. Atlanta, Georgia, pp. 220–229.

MITRE. 2014. Systems Engineering Guide. McLean, VA: The MITRE Corporation.

Mohammed, S., Ferzandi, L., and Hamilton, K. 2010. Metaphor no more: A 15-year review of the team mental model construct. Journal of Management, 36(4), 876–910.

Molnar, C. 2020. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Available: https://christophm.github.io/interpretable-ml-book/.

Montague, E., Xu, J., and Chiou, E. 2014. Shared experiences of technology and trust: An experimental study of physiological compliance between active and passive users in technology-mediated collaborative encounters. IEEE Transactions on Human-Machine Systems, 44(5), 614–624. doi: 10/f6hs5c.

Montréal Responsible AI Declaration Steering Committee. 2018. The Montreal Declaration for the Responsible Development of Artificial Intelligence. Montréal, Canada: Université de Montréal.

Moon, Y., and Nass, C. 1996. How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23(6), 651–674.

Moore, R.A., Schermerhorn, J.H., Oonk, H.M., and Morrison, J.G. 2003. Understanding and Improving Knowledge Transactions in Command and Control. San Diego, CA: Pacific Science and Engineering Group Inc.

Moray, N. 1986. Monitoring Behavior and Supervisory Control. Pp. 40/41–40/51 in Handbook of Perception and Human Performance, vol. 2 (K.R. Boff, L. Kaufman, and J.P. Thomas, eds.). New York: John Wiley and Sons.

Moray, N., and Inagaki, T. 2000. Attention and complacency. Theoretical Issues in Ergonomics Science, 1(4), 354–365.

Moreland, R.L. 2010. Are dyads really groups? Small Group Research, 41(2), 251–267.

Morgan Jr., B.B., Salas, E., and Glickman, A.S. 1993. An analysis of team evolution and maturation. Journal of General Psychology, 120, 277–291.

Morris, J.X., Lifland, E., Yoo, J.Y., Grigsby, J., Jin, D., and Qi, Y. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. arXiv:2005.05909.

Murphy, R.R. 2021. Role of Autonomy in DoD Systems and HADR. Human-AI Teaming Through Warfighter-Centered Designs Workshop, July 28, National Academies of Sciences, Engineering, and Medicine.

Myers, C.W., Ball, J.T., Cooke, N.J., Freiman, M.D., Caisse, M., Rodgers, S.M., Demir, M., and McNeese, N.J. 2018. Autonomous intelligent agents for team training: Making the case for synthetic teammates. IEEE Intelligent Systems. doi: 10.1109/MIS.2018.2886670.

Nadeem, A. 2021. Human-centered approach to static-analysis-driven developer tools: The future depends on good HCI. Queue, 19(4), 68–95.

NASEM (National Academies of Sciences, Engineering, and Medicine). 2018. Multi-Domain Command and Control: Proceedings of a Workshop–in Brief. Washington, DC: The National Academies Press. doi: 10.17226/25316.

NASEM. 2021a. Adapting to Shorter Time Cycles in the United States Air Force: Proceedings of a Workshop Series. Washington, DC: The National Academies Press. doi: 10.17226/26148.

NASEM. 2021b. Energizing Data-Driven Operations at the Tactical Edge: Challenges and Concerns. Washington, DC: The National Academies Press. doi: 10.17226/26183.

Nass, C., and Moon, Y. 2000. Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. doi: 10/cqzrs6.

National Security Commission on Artificial Intelligence. 2021. Final Report. Available: https://www.nscai.gov/wp-content/uploads/2021/03/Final_Report_Executive_Summary.pdf.

Nayyar, M., Zoloty, Z., McFarland, C., and Wagner, A.R. 2020. Exploring the Effect of Explanations During Robot-Guided Emergency Evacuation. International Conference on Social Robotics, Golden, Colorado, pp. 13–22.

Nelson, B., Biggio, B., and Laskov, P. 2011. Understanding the Risk Factors of Learning in Adversarial Environments. Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence. Association for Computing Machinery, New York, NY, pp. 87–92.

Neville, K., Rosso, H., and Pires, B. 2021. A Systems-Resilience Approach to Technology Transition in High-Consequence Work Systems. Proceedings of the Naturalistic Decision Making and Resilience Engineering Symposium, Toulouse, France.

Northcutt, C.G., Athalye, A., and Mueller, J. 2021. Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. arXiv:2103.14749.

Nourani, M., Kabir, S., Mohseni, S., and Ragan, E.D. 2019. The effects of meaningful and meaningless explanations on trust and perceived system accuracy in intelligent systems. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 97–105.

NRC (National Research Council). 1993. Workload Transition: Implications for Individual and Team Performance. Washington, DC: The National Academies Press.

NRC. 2007. Human-System Integration in the System Development Process: A New Look. Washington, DC: The National Academies Press.

NRC. 2015. Enhancing the Effectiveness of Team Science. Washington, DC: The National Academies Press.

NTSB (National Transportation Safety Board). 2020. Collision Between a Sport Utility Vehicle Operating With Partial Driving Automation and a Crash Attenuator, Mountain View, CA, March 23, 2018. (NTSB/HAR-20/01). Washington, DC: National Transportation Safety Board. Available: https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR2001.pdf.

O’Neill, P.H. 2020. Hackers Can Trick a Tesla into Accelerating by 50 Miles Per Hour. Available: https://www.technologyreview.com/2020/02/19/868188/hackers-can-trick-a-tesla-into-accelerating-by-50-miles-per-hour/.

O’Neill, T., McNeese, N., Barron, A., and Schelble, B. 2020. Human–autonomy teaming: A review and analysis of the empirical literature. Human Factors. doi: 10.1177/0018720820960865.

Oduor, K.F., and Wiebe, E.N. 2008. The effects of automated decision algorithm modality and transparency on reported trust and task performance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52(4), 302–306.

Olson, G.M., and Olson, J.S. 2001. Technology Support for Collaborative Workgroups. Pp. 559–584 in Coordination Theory and Collaboration Technology (G.M. Olson, T.W. Malone, and J.B. Smith, eds.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Olson, W.A., and Sarter, N.B. 1999. Supporting informed consent in human machine collaboration: The role of conflict type, time pressure, and display design. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 43(9), 189–193.

Onnasch, L., Wickens, C.D., Li, H., and Manzey, D. 2014. Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3), 476–488.

Oppermann, R. 1994. Adaptive User Support: Ergonomic Design of Manually and Automatically Adaptable Software. Boca Raton, FL: CRC Press.

Oser, R.L., McCallum, G., Salas, E., and Morgan Jr., B.B. 1989. Toward a Definition of Teamwork: An Analysis of Critical Team Behaviors. Orlando, FL: Naval Training Systems Center. Available: https://apps.dtic.mil/sti/pdfs/ADA212454.pdf.

Owen, H., Mugford, B., Follows, V., and Pullmer, J. 2006. Comparison of three simulation-based training methods for management of medical emergencies. Resuscitation, 71, 204–211.

Panganiban, A.R., Matthews, G., and Long, M.D. 2020. Transparency in autonomous teammates: Intention to support as teaming information. Journal of Cognitive Engineering and Decision Making, 14(2), 174–190.

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. 2016. The limitations of deep learning in adversarial settings. arXiv:1511.07528.

Parasuraman, R., and Manzey, D. 2010. Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.

Parasuraman, R., and Riley, V. 1997. Humans and automation: Use, misuse, disuse and abuse. Human Factors, 39(2), 230–253.

Parasuraman, R., and Wickens, C.D. 2008. Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520.

Parasuraman, R., Molloy, R., and Singh, I.L. 1993. Performance consequences of automation-induced complacency. International Journal of Aviation Psychology, 3(1), 1–23.

Parasuraman, R., Sheridan, T.B., and Wickens, C.D. 2000. A model of types and levels of human interaction with automation. IEEE Transactions on Systems, Man and Cybernetics, 30(3), 286–297.

Parasuraman, R., Galster, S., Squire, P., Furukawa, H., and Miller, C. 2005. A flexible delegation-type interface enhances system performance in human supervision of multiple robots: Empirical studies with RoboFlag. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 35(4), 481–493.

Parush, A., Hazan, M., and Shtekelmacher, D. 2017. Individuals perform better in teams but are not more aware – performance and situational awareness in teams and individuals. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 1173–1177.

Paul, C., and Matthews, M. 2016. The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It. Santa Monica, CA: RAND Corporation.

Pearl, J., and Mackenzie, D. 2018. The Book of Why: The New Science of Cause and Effect. New York: Basic Books.

Premack, D., and Woodruff, G. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.

Prince, C., Ellis, E., Brannick, M.T., and Salas, E. 2007. Measurement of team situation awareness in low experience level aviators. The International Journal of Aviation Psychology, 17(1), 41–57.

Prinzel, L.J., Freeman, F.G., Scerbo, M.W., Mikulka, P.J., and Pope, A.T. 2003. Effects of a psychophysiological system for adaptive automation on performance, workload, and the event-related potential P300 component. Human Factors, 45(4), 601–613.

Rabinowitz, N.C., Perbet, F., Song, H.F., Zhang, C., and Botvinick, M. 2018. Machine Theory of Mind. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80.

Radloff, R., and Helmreich, R. 1968. Groups Under Stress: Psychological Research in SEALAB II. New York: Appleton-Century-Crofts.

Rajivan, P., and Gonzalez, C. 2018. Creative persuasion: A study on adversarial behaviors and strategies in phishing attacks. Frontiers in Psychology, 9, 135.

Ramakrishnan, R., Zhang, C., and Shah, J. 2017. Perturbation training for human-robot teams. Journal of Artificial Intelligence Research, 59, 495–541.

Ramchurn, S.D., Stein, S., and Jennings, N.R. 2021. Trustworthy human-AI partnerships. iScience, 24(8), 102891. doi: 10.1016/j.isci.2021.102891.

Rasmussen, J. 1983. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), 257–266. doi: 10.1109/TSMC.1983.6313160.

Reggia, J.A. 2013. The rise of machine consciousness: Studying consciousness with computational models. Neural Networks, 44, 112–131.

Reichenbach, J., Onnasch, L., and Manzey, D. 2010. Misuse of automation: The impact of system experience on complacency and automation bias in interaction with automated aids. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54(4), 374–378.

Rensink, R., O’Regan, K., and Clark, J. 1997. To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8, 368–373.

Reyes, D., Dinh, J., and Salas, E. 2019. What makes a good team leader? The Journal of Character & Leadership Development, 6(1), 88–100.

Riegelsberger, J., Sasse, M.A., and McCarthy, J.D. 2005. The mechanics of trust: A framework for research and design. International Journal of Human-Computer Studies, 62(3), 381–422.

Riley, V. 1994. A Theory of Operator Reliance on Automation. Pp. 8–14 in Human Performance in Automated Systems: Current Research and Trends (M. Mouloua and R. Parasuraman, eds.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Riley, J.M., Endsley, M.R., Bolstad, C.A., and Cuevas, H.M. 2006. Collaborative planning and situation awareness in Army command and control. Ergonomics, Special Issue: Command and Control, 49(12-13), 1139–1153.

Roethlisberger, F.J., and Dickson, W.J. 1934. Management and the Worker: Technical Vs. Social Organization in An Industrial Plant. Boston, MA: Harvard University Graduate School of Business.

Rosenman, E.D., Dixon, A.J., Webb, J.M., Brolliar, S., Golden, S.J., Jones, K.A., Shah, S., Grand, J.A., Kozlowski, S.W.J., Chao, G.T., and Fernandez, R. 2018. A simulation-based approach to measuring team situational awareness in emergency medicine: A multicenter, observational study. Academic Emergency Medicine, 25(2), 196–204.

Roth, E.M., and Pritchett, A.R. 2018. Preface to the special issue on advancing models of human-automation interaction. Journal of Cognitive Engineering and Decision Making, 12, 3–6.

Roth, E.M., DePass, B., Harter, J., Scott, R., and Wampler, J. 2018. Beyond levels of automation: Developing more detailed guidance for human automation interaction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 150–154.

Roth, E.M., DePass, B., Scott, R., Truxler, R., Smith, S., and Wampler, J. 2017. Designing Collaborative Planning Systems: Putting Joint Cognitive Systems Principles to Practice. Pp. 247–268 in Cognitive Systems Engineering: The Future for a Changing World (P.J. Smith and R.R. Hoffman, eds.). Boca Raton: Taylor and Francis, CRC Press.

Roth, E.M., Sushereba, C., Militello, L.G., Diiulio, J., and Ernst, K. 2019. Function allocation considerations in the era of human autonomy teaming. Journal of Cognitive Engineering and Decision Making, 13(4), 199–220.

Rouse, W.B. 1988. Adaptive aiding for human/computer control. Human Factors, 30(4), 431–438.

Rouse, W.B., Cannon-Bowers, J.A., and Salas, E. 1992. The role of mental models in team performance in complex systems. IEEE Transactions on Systems, Man, and Cybernetics, 22, 1296–1308.

Rouse, W.B., and Morris, N.M. 1985. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349–363.

Rudin, C. 2019. Stop explaining black box machine learning models for high stakes decision making and use interpretable models instead. Nature Machine Intelligence, 1, 206–215.

SAE International. 2013. Requirements for Models of Situation Awareness (ARD 50050). Warrendale, PA: SAE International. Available: https://infostore.saiglobal.com/en-us/standards/sae-ard-50050-1997-1018481_saig_sae_sae_2370694/.

SAE International. 2019. SAE6906 Standard Practice for Human Systems Integration. Warrendale, PA: SAE International. Available: https://www.sae.org/standards/content/sae6906/.

Salas, E., Bowers, C.A., and Cannon-Bowers, J.A. 1995. Military team research: 10 years of progress. Military Psychology, 7(2), 55–75.

Salas, E., Cooke, N.J., and Rosen, M.A. 2008. On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50(3), 540–547.

Salas, E., Diaz Granados, D., Klein, C., Burke, C.S., Stagl, K.C., Goodwin, G.F., and Halpin, S.M. 2008. Does team training improve team performance? A meta-analysis. Human Factors, 50(6), 903–933.

Salas, E., Dickinson, T.L., Converse, S.A., and Tannenbaum, S.I. 1992. Toward An Understanding of Team Performance and Training. Pp. 3–29 in Teams: Their Training and Performance (R.W. Swezey and E. Salas, eds.). Norwood, NJ: Ablex.

Salas, E., Sims, D.E., and Burke, C.S. 2005. Is there a “big five” in teamwork? Small Group Research, 36(5), 555–599.

Salerno, J.J., Hinman, M.L., and Boulware, D.M. 2005. A situation awareness model applied to multiple domains. Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, 5813. Orlando, FL.

Salles, A., Evers, K., and Farisco, M. 2020. Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95.

Salmon, P.M., Stanton, N.A., Walker, G.H., Jenkins, D., Ladva, D., Rafferty, L., and Young, M. 2009. Measuring situation awareness in complex systems: Comparison of measures study. International Journal of Industrial Ergonomics, 39(3), 490–500.

Samimi, A., Mohammadian, A., and Kawamura, K. 2010. An Online Freight Shipment Survey in US: Lessons Learnt and a Non-Response Bias Analysis. 89th Annual Transportation Research Board Meeting, January 11–15, Washington, DC.

Sandberg, B. 2021. Artificial Social Intelligence for Successful Teams (ASIST). Presentation to the Committee on Human-System Integration Research Topics for the 711th Human Performance Wing of the Air Force Research Laboratory. July 29.

Sanders, T.L., Wixon, T., Schafer, K.E., Chen, J.Y., and Hancock, P.A. 2014. The Influence of Modality and Transparency on Trust in Human-Robot Interaction. 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 156–159.

Sanneman, L., and Shah, J.A. 2020. A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI. In Explainable, Transparent Autonomous Agents and Multi-Agent Systems. EXTRAAMAS 2020, Lecture Notes in Computer Science, vol. 12175 (D. Calvaresi, A. Najjar, M. Winikoff, and K. Främling, eds.). Springer, Cham. doi: 10.1007/978-3-030-51924-7_6.

Sarter, N.B., and Schroeder, B. 2001. Supporting decision making and action selection under time pressure and uncertainty: The case of in-flight icing. Human Factors, 43(4), 573–583.

Sarter, N.B., and Woods, D.D. 1994. “How in the World Did I Ever Get into That Mode”: Mode Error and Awareness in Supervisory Control. Pp. 111–124 in Situational Awareness in Complex Systems (R.D. Gilson, D.J. Garland, and J.M. Koonce, eds.). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Sarter, N.B., and Woods, D.D. 1995. “How in the World Did I Ever Get Into That Mode”: Mode Error and Awareness in Supervisory Control. Human Factors, 37(1), 5–19.

Scerbo, M.W. 1996. Theoretical Perspectives on Adaptive Automation. Pp. 37–63 in Automation and Human Performance: Theory and Application (R. Parasuraman and M. Mouloua, eds.). Mahwah, NJ: Lawrence Erlbaum.

Schaefer, K.E., Chen, J.Y.C., Szalma, J.L., and Hancock, P.A. 2016. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400.

Schmidt, E., Work, B., Catz, S., Chien, S., Clyburn, M., Darby, C., Ford, K., Griffiths, J-M., Horvitz, E., Jassy, A., Louie, G., Mark, W., Matheny, J., McFarland, K., and Moore, A. 2021. Final Report: National Security Commission on Artificial Intelligence. Washington, DC: National Security Commission on Artificial Intelligence. Available: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

Schmitt, F., Roth, G., Barber, D., Chen, J., and Schulte, A. 2018. Experimental Validation of Pilot Situation Awareness Enhancement Through Transparency Design of a Scalable Mixed-Initiative Mission Planner. Intelligent Human Systems Integration, Proceedings of the 1st International Conference on Intelligent Human Systems Integration (IHSI 2018): Integrating People and Intelligent Systems, Dubai, United Arab Emirates, 209–215.

Schraagen, J.M., Barnhoorn, J.S., van Schendel, J., and van Vught, W. 2021. Supporting teamwork in hybrid multi-team systems. Theoretical Issues in Ergonomics Science, 1–22. doi: 10.1080/1463922X.2021.1936277.

Schroeder, N.L., Chiou, E.K., and Craig, S.D. 2021. Trust influences perceptions of virtual humans, but not necessarily learning. Computers and Education, 160, 104039.

Schünemann, B., Keller, J., Rakoczy, H., Behne, T., and Bräuer, J. 2021. Dogs distinguish human intentional and unintentional action. Scientific Reports, 11, 14967. doi: 10.1038/s41598-021-94374-3.

Sebok, A., and Wickens, C.D. 2017. Implementing lumberjacks and black swans into model-based tools to support human–automation interaction. Human Factors, 59(2), 189–203.

Sebok, A., Walters, B., and Plott, C. 2017. Integrating human-centered design and the agile development process for safety and mission critical system development. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 1086–1090.

Seckler, M., Heinz, S., Forde, S., Tuch, A.N., and Opwis, K. 2015. Trust and distrust on the web: User experiences and website characteristics. Computers in Human Behavior, 45, 39–50. doi: 10.1016/j.chb.2014.11.064.

Selcon, S.J. 1990. Decision Support in the Cockpit: Probably a Good Thing? Human Factors Society 34th Annual Meeting, Santa Monica, CA.

Selkowitz, A.R., Lakhmani, S.G., and Chen, J.Y.C. 2017. Using agent transparency to support situation awareness of the autonomous squad member. Cognitive Systems Research, 46, 13–25.

Selkowitz, A.R., Lakhmani, S.G., Larios, C.N., and Chen, J.Y.C. 2016. Agent Transparency and the Autonomous Squad Member. Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA.

Seong, Y., and Bisantz, A.M. 2008. The impact of cognitive feedback on judgment performance and trust with decision aids. International Journal of Industrial Ergonomics, 38(7–8), 608–625.

Seppelt, B.D., and Lee, J.D. 2007. Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies, 65(3), 192–205.

Sharif, M., Bhagavatula, S., Bauer, L., and Reiter, M.K. 2016. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria.

Sheridan, T.B. 1986. Human Supervisory Control of Robot Systems. 1986 IEEE International Conference on Robotics and Automation, pp. 808–812. doi: 10.1109/ROBOT.1986.1087506.

Sheridan, T.B. 1988. Task Allocation and Supervisory Control. Pp. 159–173 in Handbook of Human-Computer Interaction. Netherlands: North Holland.

Sheridan, T.B. 1992. Telerobotics, Automation, and Human Supervisory Control. Cambridge, MA: MIT Press.

Sheridan, T.B. 2011. Adaptive automation, level of automation, allocation authority, supervisory control, and adaptive control: Distinctions and modes of adaptation. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(4), 662–667.

Sheridan, T.B. 2019. Individual differences in attributes of trust in automation: Measurement and application to system design. Frontiers in Psychology, 10, 1117. doi: 10/gf4xk9.

Sheridan, T.B., and Johannsen, G. 1976. Monitoring Behavior and Supervisory Control. Boston, MA: Springer.

Sheridan, T.B., and Parasuraman, R. 2005. Human-automation interaction. Reviews of Human Factors and Ergonomics, 1(1), 89–129.

Sheridan, T.B., and Verplank, W.L. 1978. Human and Computer Control of Undersea Teleoperators (No. NR196-152). Arlington, VA: Office of Naval Research. Available: https://apps.dtic.mil/sti/pdfs/ADA057655.pdf.

Shively, R.J., Lachter, J., Brandt, S.L., Matessa, M., Battiste, V., and Johnson, W.W. 2017. Why Human-Autonomy Teaming? International Conference on Applied Human Factors and Ergonomics, Springer, Cham, pp. 3–11.

Shneiderman, B. 2020. Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109–124.

Shneiderman, B. 2021. 19th Note: Human-Centered AI Google Group. In Human-Centered AI (September 12, 2021 ed.). Available: https://groups.google.com/g/human-centered-ai/c/syqiC1juHO.c.

Siau, K., and Wang, W. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.

Simon, H.A. 1955. A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.

Simon, H.A. 1957. Models of Man: Social and Rational. New York: John Wiley & Sons.

Simpkiss, B. 2009. Human Systems Integration Requirements Pocket Guide. Falls Church, VA: Air Force Human Systems Integration Office.

Simpson, J.A. 2007. Psychological foundations of trust. Current Directions in Psychological Science, 16(5), 264–268.

Singer, S.J., Kellogg, K.C., Galper, A.B., and Viola, D. 2021. Enhancing the value to users of machine learning-based clinical decision support tools: A framework for iterative, collaborative development and implementation. Health Care Management Review, doi: 10.1097/hmr.0000000000000324.

Singh, K., Aggarwal, P., Rajivan, P., and Gonzalez, C. 2019. Training to detect phishing emails: Effects of the frequency of experienced emails. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 453–457.

Slack, D., Hilgard, S., Jia, E., Singh, S., and Lakkaraju, H. 2020. Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. AIES ’20 (Association for Computing Machinery), pp. 180–186. doi: 10.1145/3375627.3375830.

Smallman, H.S., and St. John, M.F. 2003. CHEX (Change History EXplicit): New HCI concepts for change awareness. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 47(3), 528–532.

Smith, P.J. 2017. Making Brittle Technologies Useful. Pp. 181–208 in Cognitive Systems Engineering. Boca Raton, FL: CRC Press.

Smith, P.J. 2018. Conceptual frameworks to guide design. Special Issue on Advancing Models of Human-Automation Interaction, Journal of Cognitive Engineering and Decision Making, 12(1), 50–52.

Sottilare, R.A., Shawn Burke, C., Salas, E., Sinatra, A.M., Johnston, J.H., and Gilbert, S.B. 2017. Designing adaptive instruction for teams: A meta-analysis. International Journal of Artificial Intelligence in Education, 28(2), 225–264. doi: 10.1007/s40593-017-0146-z.

Sreedharan, S., Chakraborti, T., and Kambhampati, S. 2021. Foundations of explanations as model reconciliation. Artificial Intelligence, 301, 103558.

Stanners, M., and French, H.T. 2005. An Empirical Study of the Relationship Between Situation Awareness and Decision Making. (DSTO-TR-1687). Edinburgh, South Australia: Defence Science and Technology Organisation, Land Operations Division.

Stanton, N.A., Baber, C., and Harris, D. 2008. Modeling Command and Control: Event Analysis of Systemic Teamwork. Hampshire, England and Burlington, VT: Ashgate Publishing Limited.

Stein, G.J. 1996. Information attack: Information warfare in 2025. 2025 White Papers: Power and Influence, 3, 14–28.

Stevens, R., Galloway, T., and Lamb, C. 2014. Submarine navigation team resilience: Linking EEG and behavioral models. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58(1), 245–249.

Stowers, K., Kasdaglis, N., Rupp, M., Chen, J., Barber, D., and Barnes, M. 2017. Insights Into Human-Agent Teaming: Intelligent Agent Transparency and Uncertainty. Pp. 149–160 in Advances in Human Factors in Robots and Unmanned Systems: Proceedings of the AHFE 2016 International Conference on Human Factors in Robots and Unmanned Systems, Walt Disney World, Florida.

Strater, L.D., Cuevas, H.M., Connors, E.S., Ungvarsky, D.M., and Endsley, M.R. 2008. Situation awareness and collaborative tool usage in ad hoc command and control teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52(4), 468–472.

Su, J., Vargas, D.V., and Sakurai, K. 2019. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828–841.

Sulistyawati, K., Wickens, C.D., and Chui, Y.P. 2011. Prediction in situation awareness: Confidence bias and underlying cognitive abilities. International Journal of Aviation Psychology, 21(2), 153–174.

Sundstrom, E., DeMeuse, K.P., and Futrell, D. 1990. Work teams: Applications and effectiveness. American Psychologist, 45(2), 120–133.

Swezey, R.W., and Salas, E. 1992. Teams: Their Training and Performance. Norwood, NJ: Ablex Publishing.

Takayama, L. 2009. Making Sense of Agentic Objects and Teleoperation: In-The-Moment and Reflective Perspectives. Human-Robot Interaction Conference, San Diego, CA, pp. 239–240.

Taleb, N.N. 2012. Antifragile: Things That Gain from Disorder. New York: Random House.

Tambe, M. 2011. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge: Cambridge University Press.

Tannenbaum, S.I., and Cerasoli, C.P. 2013. Do team and individual debriefs enhance performance? A meta-analysis. Human Factors, 55(1), 231–245.

Taylor, R.M., and Reising, J. 1995. The Human-Electronic Crew: Can We Trust the Team? Proceedings of the 3rd International Workshop on Human-Computer Teamwork, Cambridge, United Kingdom. Available: https://apps.dtic.mil/sti/pdfs/ADA340601.pdf.

Topcu, U., Bliss, N., Cooke, N., Cummings, M., Llorens, A., Shrobe, H., and Zuck, L. 2020. Assured autonomy: Path toward living with autonomous systems we can trust. arXiv:2010.14443.

Toulmin, S.E. 1958. The Uses of Argument. Cambridge: Cambridge University Press.

Trapsilawati, F., Wickens, C., Chen, C.-H., and Qu, X. 2017. Transparency and Conflict Resolution Automation Reliability in Air Traffic Control. 19th International Symposium on Aviation Psychology, Dayton, Ohio, pp. 419–424.

Tsifetakis, E., and Kontogiannis, T. 2019. Evaluating non-technical skills and mission essential competencies of pilots in military aviation environments. Ergonomics, 62(2), 204–218.

Turban, E., and Frenzel, L.E. 1992. Expert Systems and Applied Artificial Intelligence. Macmillan Publishing Company.

Turk, W. 2006. Writing requirements for engineers [good requirement writing]. Engineering Management Journal, 16(3), 20–23.

Tversky, A., and Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. doi: 10.1126/science.185.4157.1124.

Tversky, A., and Kahneman, D. 1987. Rational Choice and the Framing of Decisions. In Rational Choice: The Contrast Between Economics and Psychology (R.M. Hogarth and M.W. Reder, eds.). Chicago: University of Chicago Press.

USAF (U.S. Air Force). 2013. Air Force Research Laboratory Autonomy Science and Technology Strategy. Wright-Patterson Air Force Base, OH: Air Force Research Laboratory. December 2, 2013. Available: https://web.archive.org/web/20170125102447/http://www.defenseinnovationmarketplace.mil/resources/AFRL_Autonomy_Strategy_DistroA.PDF.

USAF. 2015. Autonomous Horizons: The Way Forward. Washington, DC: Office of the U.S. Air Force Chief Scientist.

USAF. 2020. Department of the Air Force Role in Joint All Domain Operations (JADO). Air Force Doctrine Publication (AFDP) 3-99. Maxwell Air Force Base, AL. Available: https://www.doctrine.af.mil/Doctrine-Publications/AFDP-3-99DAF-Role-in-Jt-All-Domain-Ops-JADO/.

van Dongen, K., and van Maanen, P. 2005. Designing for Dynamic Task Allocation. Proceedings of the Seventh International Naturalistic Decision Making Conference (NDM7). Amsterdam, Netherlands.

Vered, M., Howe, P., Miller, T., Sonenberg, L., and Velloso, E. 2020. Demand-driven transparency for monitoring intelligent agents. IEEE Transactions on Human-Machine Systems, 50(3), 264–275.

Vicente, K.J. 1999. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Boca Raton, FL: CRC Press.

Volpe, C.E., Cannon-Bowers, J.A., Salas, E., and Spector, P.E. 1996. The impact of cross-training on team functioning: An empirical investigation. Human Factors, 38, 87–100.

Volz, K., Yang, E., Dudley, R., Lynch, E., Dropps, M., and Dorneich, M.C. 2016. An evaluation of cognitive skill degradation in information automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 191–195.

Walker, G.H., Stanton, N.A., Salmon, P.M., and Jenkins, D.P. 2009. Command and Control: The Sociotechnical Perspective. London: CRC Press. doi: 10.1201/9781315572765.

Wang, L., Jamieson, G.A., and Hollands, J.G. 2009. Trust and reliance on an automated combat identification system. Human Factors, 51(3), 281–291.

Wang, N., Pynadath, D.V., and Hill, S.G. 2016. Trust Calibration Within a Human-Robot Team: Comparing Automatically Generated Explanations. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 109–116.

Wang, N., Pynadath, D.V., Rovira, E., Barnes, M.J., and Hill, S.G. 2018. Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams. 13th International Conference, Persuasive 2018, Waterloo, ON, Canada, pp. 56–69. Springer, Cham.

Warm, J.S., Dember, W.N., and Hancock, P.A. 1996. Vigilance and Workload in Automated Systems. Pp. 183–200 in Automation and Human Performance: Theory and Applications. Boca Raton, FL: CRC Press.

Weaver, S.J., Salas, E., Lyons, R., Lazzara, E.H., Rosen, M.A., Diazgranados, D., and King, H. 2010. Simulation-based team training at the sharp end: A qualitative study of simulation-based team training design, implementation, and evaluation in healthcare. Journal of Emergencies, Trauma and Shock, 3(4), 369.

West, S.M., Whittaker, M., and Crawford, K. 2019. Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute. Available: https://ainowinstitute.org/discriminatingsystems.pdf.

Wickens, C.D. 1995. The Tradeoff of Design for Routine and Unexpected Performance: Implications of Situation Awareness. Pp. 57–64 in Experimental Analysis and Measurement of Situation Awareness (D.J. Garland and M.R. Endsley, eds.). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.

Wickens, C.D. 2008. Situation awareness: Review of Mica Endsley’s 1995 articles on situation awareness theory and measurement. Human Factors, 50(3), 397–403.

Wickens, C.D. 2009. The Psychology of Aviation Surprise: An 8 Year Update Regarding the Noticing of Black Swans. 2009 International Symposium on Aviation Psychology, Dayton, OH, pp. 1–6.

Wickens, C.D. 2015. Situation awareness: Its applications value and its fuzzy dichotomies. Journal of Cognitive Engineering and Decision Making, 9(1), 90–94.

Wickens, C.D. 2018. Automation stages and levels, 20 years after. Journal of Cognitive Engineering and Decision Making, 12(1), 35–41.

Wickens, C.D., Helton, W.S., Hollands, J.G., and Banbury, S. 2022. Engineering Psychology and Human Performance, 5th ed. New York: Routledge.

Widmer, G., and Kubat, M. 1996. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23, 69–101.

Wiener, E.L., Kanki, B.G., and Helmreich, R.L. 1993. Cockpit Resource Management. San Diego, CA: Academic Press.

Wiener, E.L. 1985. Cockpit Automation: In Need of a Philosophy. 1985 Behavioral Engineering Conference, Warrendale, PA.

Wiener, E.L., and Curry, R.E. 1980. Flight deck automation: Promises and problems. Ergonomics, 23(10), 995–1011.

Williams, K.D. 2010. Dyads can be groups (and often are). Small Group Research, 41(2), 268–274. doi: 10/d6msv6.

Wilson, G.F., and Russell, C.A. 2007. Performance enhancement in an uninhabited air vehicle task using psychophysiologically determined adaptive aiding. Human Factors, 49(6), 1005–1018.

Wimmer, H., and Perner, J. 1983. Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13(1), 103–128.

Wojton, H., Vickers, B., Carter, K., Sparrow, D., Wilkins, L., and Fealing, C. 2021. DATAWorks 2021: Characterizing Human-Machine Teaming Metrics for Test and Evaluation. IDA Document NS D-21563. Alexandria, VA: Institute for Defense Analyses.

Woods, D.D. 2015. Four concepts for resilience and the implications for the future of resilience engineering. Reliability Engineering and System Safety, 141, 5–9.

Woods, D.D. 2016. The risks of autonomy: Doyle’s catch. Journal of Cognitive Engineering and Decision Making, 10(2), 131–133.

Woods, D.D. 2017. STELLA: Report from the SNAFUcatchers Workshop on Coping with Complexity. Columbus, OH: The Ohio State University. Available: https://snafucatchers.github.io/.

Woods, D.D., and Hollnagel, E. 2006. Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: CRC Press (Taylor and Francis).

Wright, J.L., Chen, J.Y., Barnes, M.J., and Hancock, P.A. 2016. Agent reasoning transparency’s effect on operator workload. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(3), 401–415.

Wynne, K.T., and Lyons, J.B. 2018. An integrative model of autonomous agent teammate-likeness. Theoretical Issues in Ergonomics Science, 19(3), 353–374.

Yadav, A., Patel, A., and Shah, M. 2021. A comprehensive review on resolving ambiguities in natural language processing. AI Open, 2, 85–92.

Yang, X.J., Schemanske, C., and Searle, C. 2021. Toward quantifying trust dynamics: How people adjust their trust after moment-to-moment interaction with automation. Human Factors. doi: 10.1177/00187208211034716.

Yeh, M., and Wickens, C. 2001. Display signaling in augmented reality: Effects of cue reliability and image realism on attention allocation and trust calibration. Human Factors, 43(3), 355–365.

Yeh, M., Wickens, C.D., and Seagull, F.J. 1999. Target cueing in visual search: The effects of conformality and display location on the allocation of visual attention. Human Factors, 41(4), 524–542.

Young, J.P., Fanjoy, R.O., and Suckow, M.W. 2006. Impact of glass cockpit experience on manual flight skills. Journal of Aviation/Aerospace Education and Research, 15(2), 27–32.

Young, L.R.A. 1969. On adaptive manual control. Ergonomics, 12(4), 635–657.

Young, M.S., Brookhuis, K.A., Wickens, C.D., and Hancock, P.A. 2015. State of science: Mental workload in ergonomics. Ergonomics, 58(1), 1–17.

Zaccaro, S.J., and DeChurch, L.A. 2011. Leadership Forms and Functions in Multiteam Systems. Pp. 265–300 in Multiteam Systems. New York: Routledge.

Zaccaro, S.J., Marks, M.A., and DeChurch, L.A. 2012. Multiteam Systems: An Organization Form for Complex, Dynamic Environments. New York, NY: Routledge Taylor and Francis Group.

Zacharias, G., Miao, A., Illgen, C., Yara, J., and Siouris, G. 1996. SAMPLE: Situation Awareness Model for Pilot-in-the-Loop Evaluation. First Annual Conference on Situation Awareness in the Tactical Air Environment, Patuxent River, MD: Naval Air Warfare Center.

Zhang, R., McNeese, N.J., Freeman, G., and Musick, G. 2021. “An ideal human” expectations of AI teammates in human-AI teaming. ACM on Human-Computer Interaction, 4(CSCW3), 1–25.

Zhang, T., Yang, J., Liang, N., Pitts, B.J., Prakah-Asante, K.O., Curry, R., Duerstock, B.S., Wachs, J.P., and Yu, D. 2020. Physiological measurements of situation awareness: A systematic review. Human Factors: The Journal of the Human Factors and Ergonomics Society, online November 26. doi: 10.1177/0018720820969071.
