7 Integrating the Social and Behavioral Sciences (SBS) into the Design of a Human–Machine Ecosystem
Pages 189-252

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 189...
... . Future technology could support the design of a human–machine ecosystem for intelligence analysis: an ecosystem composed of human analysts and autonomous AI agents, supported by other technologies that could work in true collaboration.
From page 190...
... Continuous research in each domain will form components of a larger research program that supports the development of an operational human–machine ecosystem to support intelligence analysis. We note that the set of research topics that could potentially advance the development of a human–machine ecosystem is vast.
From page 191...
... However, not all analytic work can be automated or turned over to AI. The human analyst will still play a critical role in information processing and decision making, exercising the complex capacity to make judgments in the face of the high levels of uncertainty and risk associated with intelligence analysis.
From page 192...
... A variety of sensors (shown surrounding the ecosystem) provide information to the human and AI agents for a number of different purposes, from monitoring and analyzing data pertinent to intelligence analysis to collecting and processing data from interactions between analysts and AI to improve performance.
From page 193...
... Autonomous systems suitable for intelligence analysis may not be available now, but they are coming. We use the term "semiautonomous agents" for such machines to stress the essential involvement of humans in critical decisions.
From page 194...
... RESEARCH DOMAINS

If AI technology becomes powerful and autonomous enough to support an ecosystem for intelligence analysis, that system will be useful only to the extent that human analysts benefit from and are able to take advantage of the assistance it offers. SBS research is essential to ensuring that developers of such an ecosystem for the IC understand the strengths and limitations of human agents.
From page 195...
... While it is often difficult to identify specific cargo, data analyses by tools within the human–machine ecosystem might discern that the supplier of the cargo in question is the leading exporter of those particular explosives and triggering devices. Semiautonomous agents would pass that information to appropriate analysts, and analysts following terrorist group Z might then suspect that those explosive and triggering devices are among the weapons shipments being tracked by the weapons proliferation analysts.
From page 196...
Analyst working alone:
• ... information
• Analyst updates own routines when necessary
• Analyst employs own strategy to filter the found information and focus attention on that of most relevance
• Analyst reads and digests selected information, discussing material with available colleagues as relevant

Analyst working in a human–machine ecosystem (HME):
• ... and using a recommender system, proactively identifies relevant information and presents it to analyst
• Analyst interacts with HME to rate relevance and value of identified information, allowing HME to continuously improve its recommendations
• HME finds connection between analyst's search for information and that of another analyst's investigation and sends alerts to the two analysts to talk to each other
• Analyst may initiate new information searches with this new connection
• HME assembles relevant information into a graphical display (or other form of sensory presentation) for analyst's review
• Analyst interacts with HME to rate relevance and value of graphical display (and HME continuously improves how it displays connected information to analyst)
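The pairing above, in which the HME surfaces candidate information and the analyst's relevance ratings feed back into what gets surfaced next, is essentially a recommender loop. The sketch below is a minimal, hypothetical illustration of that loop in Python, assuming a simple keyword-weighting scheme; the class, document names, and update rule are ours for illustration and are not drawn from the report.

```python
from collections import defaultdict

class RelevanceRecommender:
    """Hypothetical sketch of an HME-style recommender that re-ranks documents
    using analyst relevance ratings (illustrative; not from the report)."""

    def __init__(self, learning_rate=0.1):
        self.term_weights = defaultdict(float)  # learned importance of each keyword
        self.learning_rate = learning_rate

    def score(self, document_terms):
        # Higher score = more likely to be surfaced to the analyst.
        return sum(self.term_weights[t] for t in document_terms)

    def recommend(self, documents, top_k=3):
        # documents: {doc_id: set of keywords}
        ranked = sorted(documents, key=lambda d: self.score(documents[d]), reverse=True)
        return ranked[:top_k]

    def record_rating(self, document_terms, rating):
        # rating in [-1, 1]: the analyst's judgment of relevance and value.
        for term in document_terms:
            self.term_weights[term] += self.learning_rate * rating

# The analyst rates what was surfaced; later rankings shift accordingly.
hme = RelevanceRecommender()
docs = {
    "cable_17": {"explosives", "port", "shipment"},
    "report_04": {"election", "protest"},
    "intercept_09": {"shipment", "front_company"},
}
hme.record_rating(docs["cable_17"], rating=1.0)    # found valuable
hme.record_rating(docs["report_04"], rating=-0.5)  # found off-topic
print(hme.recommend(docs))  # shipment-related items now rank first
```

In a real system the scoring model would be far richer (text embeddings, analyst context, provenance), but the feedback pathway, rating in and re-ranking out, is the part the table emphasizes.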
From page 197...
... Analyze Assembled Information

Analyst working alone:
• Analyst considers importance and accuracy of reviewed information
• Analyst uses expertise to recognize patterns and connections between new information and previous information or knowledge
• Many potential questions could be asked regarding significance of reviewed information, but analyst may be limited to considering only a few
• Analyst catalogues important information in own record or a database shared by analytic team or organization

Analyst working in the HME:
• Together, analyst and HME create hypotheses of outcomes regarding assembled information
• Analyst, drawing on own expertise, interacts with HME to rate hypotheses considered
• HME can interact concurrently with other relevant analysts to gather feedback on hypotheses
• HME mines all available data for supporting and conflicting evidence for top-rated hypotheses

Communicate Intelligence and Analysis to Others

Analyst working alone:
• Analyst may informally share an important insight with colleagues, update a shared document, or prepare a formal intelligence report
• Analyst will coordinate and collaborate with relevant colleagues to prepare the formal report
• Analysts will source report, assembling information used in the analysis such that it can be understood by analytic colleagues and policy makers
• Analyst will revise and defend report in response to edits

Analyst working in the HME:
• HME identifies other analysts working on related issues, proactively shares important insights, and makes connections
• When a formal report is needed, analyst selects a working hypothesis, and HME automatically prepares draft report for review by analyst
From page 198...
Analyst working alone:
• ... clients to assess current intelligence needs
• Analyst may work with analytic methodologist to explore new tools for discerning links between different sets of information
• Analyst may explore new models and theories on issues of interest and update own framework for gathering and filtering information

Analyst working in the HME:
• HME identifies information that does not appear to fit into any existing models -- i.e., flagging for analyst to consider whether information is irrelevant or a new model is needed

NOTE: Descriptions of analysts' activities are based on discussion in Chapter 4.
From page 199...
... Human Capacities While human agents bring sophisticated and contextualized reasoning to the process of analysis and inference, they are also limited in significant ways in their ability to process information. Understanding those limits, particularly in the context of intelligence analysis, is an important step in designing the technologies to augment them.
From page 200...
... In this figure, the presence of a single purple ...
FIGURE 7-2  Attention and finding a target. NOTE: This figure illustrates that some visual search tasks are easier than others.
From page 201...
... Meanwhile, researchers pursuing the design of a human–machine ecosystem can use what is already known in developing technologies, systems, or processes that may augment these human capacities.
From page 202...
... Many agents would likely be operating in the work environment, with other agents asynchronously providing new information to be assessed and possibly interrupting the task of another agent. A promising research avenue is to seek ways to better characterize how and when the products of semiautonomous agents and supporting technologies can best be conveyed to human analysts.
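One way to make the "how and when to convey" question concrete is to treat it as an interruption-management policy: an alert from a semiautonomous agent is delivered immediately only if its priority outweighs the estimated cost of interrupting the analyst's current task, and is otherwise queued for a natural break point. The sketch below is a hypothetical illustration of such a policy; the workload measure, threshold, and alert examples are assumptions made for the example, not findings from the studies discussed in this chapter.

```python
import heapq

def deliver_or_defer(alerts, analyst_workload, threshold=1.0):
    """Hypothetical interruption policy: deliver an alert now only if its priority
    exceeds the estimated cost of interrupting the analyst; otherwise queue it.

    alerts: list of (priority, message) with priority in [0, 1]
    analyst_workload: 0.0 (idle) to 1.0 (saturated), however it might be estimated
    """
    interruption_cost = threshold * analyst_workload
    delivered, deferred = [], []
    for priority, message in alerts:
        if priority > interruption_cost:
            delivered.append(message)
        else:
            # Min-heap on negative priority so the most urgent deferred alert
            # pops first at the analyst's next natural break point.
            heapq.heappush(deferred, (-priority, message))
    return delivered, deferred

delivered, deferred = deliver_or_defer(
    [(0.9, "possible match on tracked shipment"), (0.2, "routine source update")],
    analyst_workload=0.6,
)
print(delivered)                 # high-priority alert interrupts now
print([m for _, m in deferred])  # routine update waits
```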
From page 203...
... , task switching, and interruption have also been studied in human factors research, often using tasks related to flight deck or other complex operations scenarios. Such studies have consisted of observing how individual human agents (e.g., a pilot)
From page 204...
... to best coordinate information sharing among multiple human agents, semiautonomous agents, and supporting technologies while best accommodating the needs of the human analyst; and
• research on the costs and benefits of ongoing monitoring of analytic work within a human–machine ecosystem, including issues of privacy and the need for control of the environment by the human analysts.
From page 205...
... Knowledge-based tasks are actions aided by mental models developed over time through repeated experience. Expertise-based tasks are actions predicated on previous knowledge-based tasks and dependent on significant experience in the presence of uncertainty.
From page 206...
... For intelligence analysis in the age of data overload, human analysts are likely to need assistance from machines in a number of ways. Examples include (1)
From page 207...
... . For all of these applications, implementation requires making choices about the specific nature of the human–machine interactions involved, and the same will be true for applications of AI to intelligence analysis.
From page 208...
... Many of the targets that intelligence analysts try to detect are rare events. It would be desirable, for example, to detect the warning signs of a terrorist's intentions or of a coup d'état.
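Because the base rate of such events is tiny, even a very accurate detector will produce mostly false alarms, one reason the human analyst's judgment remains essential. The short calculation below works through the arithmetic with assumed numbers (a 1-in-10,000 prevalence and a detector with 99 percent sensitivity and a 1 percent false-positive rate); the figures are purely illustrative.

```python
# Bayes' rule for a rare event: what fraction of alarms are real?
prevalence = 1e-4           # assumed: 1 case in 10,000 is a true signal
sensitivity = 0.99          # detector flags 99% of true signals
false_positive_rate = 0.01  # detector wrongly flags 1% of non-signals

p_alarm = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_event_given_alarm = sensitivity * prevalence / p_alarm
print(f"P(real event | alarm) = {p_event_given_alarm:.2%}")
# Roughly 0.98%: about 99 of every 100 alarms would be false,
# despite the detector being "99% accurate" on both kinds of cases.
```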
From page 209...
... . In intelligence analysis, the capacity to consider other information that may be vital to the analysis is essential.
From page 210...
... . Further research is needed to determine whether such prompts could work for the tasks of intelligence analysis and whether interactions with machines can be optimized to ensure that additional information critical to the problem at hand is reviewed and shared among analysts as appropriate.
From page 211...
... Trust. Lack of trust on the part of human agents will limit the potential of a human–machine ecosystem.
From page 212...
... This issue has been termed "explainable AI" and is being examined by a number of research programs.6 In intelligence analysis or other types of decision making, it is not enough just to flag a connection or an anomaly; it would be useful if the machines could explain how they reached their findings. The challenge for SBS research, then, is to understand how humans can make the best use of imperfect information from AI agents and supporting
6 See more information on Defense Advanced Research Projects Agency's AI program at https://www.darpa.mil/program/explainable-artificial-intelligence [January 2019].
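A minimal way to see what "explaining how a finding was reached" can mean in practice is a scorer that reports per-feature contributions alongside its output. The sketch below uses a simple linear score so the explanation is exact by construction; the feature names and weights are invented for illustration and have no connection to any DARPA program or IC system.

```python
def score_with_explanation(features, weights):
    """Return an anomaly score plus the per-feature contributions that produced it.
    With a linear scorer the contributions sum exactly to the score, so the
    explanation is faithful by construction (illustrative only)."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Invented feature names and weights, purely for illustration.
weights = {"unusual_route": 2.0, "known_supplier": 1.5, "off_hours_transfer": 0.8}
features = {"unusual_route": 1, "known_supplier": 1, "off_hours_transfer": 0}

score, why = score_with_explanation(features, weights)
print(score)  # 3.5
print(why)    # shows which factors drove the flag, so the analyst can weigh each one
```

Real explainable-AI methods target far more complex models, where producing explanations that are both faithful and usable is much harder; that gap is part of what makes this a research challenge.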
From page 213...
... Research Directions

Examine the most effective ways in which AI agents can bring information to the attention of human analysts that is both useful and trustworthy. A number of researchable questions pertain to how AI agents can bring information to the attention of human analysts in a trustworthy manner.
From page 214...
... As the volume of data potentially relevant both to research and to situations of interest to intelligence analysts increases, so do the challenges of how best to analyze and display these data for human interpretation and comprehension. Effectively conveying information to the human analyst may help support understanding of, and therefore trust in, the outputs of AI agents.
From page 215...
... In practice, intelligence analysts will likely focus on the problems to be solved, not what can be accomplished with a specific visualization tool. Thus the most useful design focus will be on making the interactions with data tools natural, obvious, and transparent, permitting the analyst to move easily between different visualization applications (Shapiro, 2010; Steele and Iliinsky, 2010)
From page 216...
... Several areas seem promising:
• evaluation of the effectiveness of specific visual cues for depicting properties beyond the simple measures of mean, variance, and first-order patterns, including the effective visualization of time-varying properties and more complex patterns and relations;
• behavioral analysis and the development of new theories of knowledge discovery describing how analysts use visualization successfully to develop and test hypotheses and, equally important, how visualization can support collaboration among multiple analysts;
• research to identify systems that can be used by human analysts or AI agents to query datasets for special-purpose visualizations with which to test hypotheses or look for specific patterns, with an emphasis on the naturalness and transparency of the query process; and
• research to improve an AI agent's prediction of the data and/or analyses most likely to be needed by a human analyst, including the ability of an AI agent to predictively sample, preprocess, and/or precompute appropriate data visualizations.

A combination of research in the vision sciences, the behavioral sciences, and human factors has the potential to advance understanding of how people extract meaning from a data visualization, resulting in more effective techniques and design principles.
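As one small example of the first bullet above, visualizing a time-varying property together with its variability, the sketch below uses matplotlib to plot a synthetic signal's estimated trend with a shaded one-standard-deviation band. The data, labels, and design choices are ours; it is meant only to ground the research questions in something concrete.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic time-varying signal, e.g., daily volume of reporting on a topic.
rng = np.random.default_rng(0)
t = np.arange(60)
trend = 10 + 2 * np.sin(t / 8)        # underlying trend
spread = 1.5 + 0.5 * np.cos(t / 10)   # time-varying variability
observed = trend + rng.normal(0, spread)

fig, ax = plt.subplots()
ax.plot(t, observed, lw=0.8, alpha=0.6, label="observed")
ax.plot(t, trend, lw=2, label="estimated trend")
ax.fill_between(t, trend - spread, trend + spread, alpha=0.3, label="±1 SD band")
ax.set_xlabel("day")
ax.set_ylabel("report volume (synthetic)")
ax.legend()
plt.show()
```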
From page 217...
... . The centrality of forecasting to intelligence analysis was recently highlighted by an Intelligence Advanced Research Projects Activity (IARPA)
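IARPA's geopolitical forecasting tournaments, which this passage appears to reference, scored probability judgments with measures such as the Brier score and found that aggregating many forecasters' judgments can improve accuracy. The sketch below illustrates both ideas on made-up numbers; it is not the tournaments' actual scoring pipeline, and the simple mean used here stands in for the more sophisticated weighting those efforts employed.

```python
import numpy as np

def brier_score(probability, outcome):
    """Brier score for a binary event: lower is better, 0 is perfect."""
    return (probability - outcome) ** 2

# Hypothetical question resolved "yes" (outcome = 1); five forecasters' probabilities.
outcome = 1
forecasts = np.array([0.55, 0.70, 0.40, 0.80, 0.65])

individual = brier_score(forecasts, outcome)
aggregate = forecasts.mean()  # the simplest possible aggregation
print(individual.round(3))                              # per-forecaster scores
print(round(float(brier_score(aggregate, outcome)), 3))
# Aggregate score ~0.144 vs. mean individual score ~0.163 on these made-up numbers;
# real tournaments used weighted aggregation that this sketch does not reproduce.
```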
From page 218...
... Incorporating human behavior into forecasting models. Research on human behavior, whether cognitive or social, has become increasingly relevant to forecasting.
From page 219...
... ; (2) by measuring key inputs based on associated aspects of human behavior (e.g., disease contagion or measures of social unrest harvested from social media)
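One simple way to "measure key inputs based on associated aspects of human behavior" is to include a behaviorally derived index, such as a social-unrest signal harvested from social media, as an additional regressor in an otherwise conventional forecasting model. The sketch below fits such a model by ordinary least squares on synthetic data; the variable names, coefficients, and scenario are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_weeks = 104
unrest_index = rng.gamma(2.0, 1.0, n_weeks)   # synthetic social-media unrest signal
baseline = 5.0 + 0.1 * np.arange(n_weeks)     # slow secular trend
incidents = baseline + 1.8 * unrest_index + rng.normal(0, 1.0, n_weeks)

# Design matrix: intercept, time trend, and the behaviorally derived signal.
X = np.column_stack([np.ones(n_weeks), np.arange(n_weeks), unrest_index])
coef, *_ = np.linalg.lstsq(X, incidents, rcond=None)
print(coef.round(2))  # recovers roughly [5.0, 0.1, 1.8] on this synthetic data

# Forecast next week under an assumed unrest reading of 3.0 (hypothetical scenario).
next_week = np.array([1.0, n_weeks, 3.0])
print(round(float(next_week @ coef), 1))
```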
From page 220...
... . The IC has a growing program in open-source intelligence making use of such nontraditional data sources (Williams and Blum, 2018)
From page 221...
... Further, much work is needed to optimize approaches to forecasting for large-scale data mining or tracking of unformed artifacts of social media indicators. Other questions include whether and how human analysts may provide complementary and synergistic analyses or offer solutions to the challenges of machine learning and AI approaches.
From page 222...
... . Recent advances have been made in developing tools for monitoring the physiological state of human agents and enhancing the interactions between humans and machines.
From page 223...
... . A human operator using a
17 Other physiological parameters that can be monitored include heart rate variability; cardiovascular performance derived from impedance cardiography; sympathetic and parasympathetic activity indexed by pupillometry; localized cerebral blood flow revealed by near-infrared spectroscopy; and electrodermal, electroencephalographic, electromyographic, and neuroendocrine responses.
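Several of the parameters listed in that footnote reduce to straightforward computations on sensor streams. Heart rate variability, for example, is commonly summarized by RMSSD, the root mean square of successive differences between inter-beat intervals; the sketch below computes it from made-up data. This is a generic signal-processing illustration, not a monitoring design proposed in the report, and any operational use would require validation well beyond it.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between inter-beat (RR) intervals,
    a common time-domain summary of heart rate variability."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Made-up inter-beat intervals (milliseconds), as a wearable sensor might report them.
rr = [812, 798, 824, 840, 795, 810, 836, 802]
print(f"RMSSD: {rmssd(rr):.1f} ms")
# A sustained drop in RMSSD over a work session is one (noisy) indicator of rising
# stress or workload; interpreting it operationally is exactly the open question here.
```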
From page 224...
... Current applications of this technology include devices that monitor the attention an individual devotes to a task and those that detect a selection that currently would be indicated by a mouse click. Emerging applications interface with virtual reality to convey feedback of reactions to the system through thoughts.
From page 225...
... . The researchers later developed a collaboration tool designed to encourage the analysts on the team to share their unique information, which increased overall team performance (Rajivan and Cooke, 2018)
From page 226...
... . Research has also yielded practical guidance on how best to assemble human teams, how to train and lead teams, and how outside influences such as stress affect teamwork (Cannon-Bowers and Salas, 1998; Contractor, 2013)
From page 227...
... Once on the code blue team, these leaders were taught to request information that did not come in a timely manner. Results later indicated that the trained leaders helped train the other team members, thereby improving team performance relative to that of teams with untrained leaders.
From page 228...
... . Current findings from applied research and human factors analyses -- albeit based on contexts other than intelligence analysis, such as aircraft navigation or production facilities -- consistently reveal the need to retain one or more active human agents at the stage at which a system comes to a decision or chooses an action.
From page 229...
... . Whether the same potential for error seen in these other operational environments applies to the more fluid environment of intelligence analysis, or in the same way, remains to be assessed (Holzer and Moses, 2015)
From page 230...
... , ensuring that operational solutions, like the human–machine ecosystem, address both the needs and the capabilities and limitations of human analysts. This kind of integration cannot be added on after a human–machine
23 There is growing recognition in the research community that ecosystems can be characterized as multidimensional social networks of nodes representing both humans and nonhuman elements, such as AI agents.
From page 231...
... Notably, many other industries and government agencies have required the use of this approach, often after a disaster occurs that can be partially attributed to poor integration of human behavior within a sociotechnical system (e.g., Three Mile Island, the Piper Alpha, the Challenger explosion)
From page 232...
... CONCLUSION 7-1: To develop a human–machine ecosystem that functions effectively for intelligence analysis, it will be necessary to integrate findings from social and behavioral sciences research into the design and development of artificial intelligence and other technologies involved. A research program for this purpose would extend theory and findings from current research on human–machine interactions to new types of interactions involving multiple agents in a complex teaming environment.
From page 233...
... The most successful testbeds have been designed to address specific problems, and multiple testbeds might therefore be an option for carrying out all the research necessary to support the development and use of a human–machine ecosystem for intelligence analysis. For the types of human–machine interactions that need to be investigated, such a testbed could be virtual so that researchers from multiple institutions could be involved.
From page 234...
... Ethical Considerations For some, talk of machines and collaboration with AI agents can be somewhat chilling. Reasonable concerns include the prospect that, in restricted environments, there will be more opportunities for inadvertent disclosures of confidential information; that biases inherent in algorithms will negatively affect decision making; that machine-generated output may increase false positives and subsequent false alarms; and that too much trust may be placed in machines to find the emergent patterns and signals, perhaps usurping what should be functions of human analysts or occupying them with new oversight and management tasks that compete with their analytic work.
From page 235...
... These important ethical issues need to be considered during the research and design phase, before a human–machine ecosystem is ready for implementation (see the discussion of standards for such research phases in Box 7-5)
From page 236...
... . Semiautonomous agents in an IC context will need to operate according to the ethical requirements of their roles and the environment.
From page 237...
... Answering these questions will require understanding of the norms and values of the analytical workplace and the moral benefits and limits of a human–machine ecosystem. Including semiautonomous AI agents as team members in intelligence analysis raises further issues of accountability and control.
From page 238...
... The development of appropriate limitations on the use of surveillance devices in the context of intelligence analysis will require further study. Answering questions about when and how sensor systems will be used will require careful ethical accounting to determine whether and when infringements on privacy are justified and on what grounds (Ajunwa et al., 2017)
From page 239...
... Artificial Intelligence Law, 25(3)
From page 240...
... . Data visualization with multidimensional scaling.
From page 241...
... . Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization.
From page 242...
... IEEE Transactions on Visualization and Computer Graphics, 20(12)
From page 243...
... . The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
From page 244...
... . Visual cues: Practical data visualization.
From page 245...
... . Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd ed.)
From page 246...
... . Artificial Intelligence and Machine Learning to Accelerate Translational Research: Proceedings of a Workshop -- in Brief.
From page 247...
... . An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence.
From page 248...
... . Artificial Intelligence, A Modern Approach (3rd ed.)
From page 249...
... . Four types of ensemble coding in data visualizations.
From page 250...
... . Discrete task switching in overload: A meta-analyses and a model.
From page 251...
... . Varying target prevalence reveals two dissociable decision criteria in visual search.

