The study’s sponsors came to the National Academies with a problem: The volume of inputs for complex decision making, and the availability of computing assistance for that process, have outpaced our ability to exploit it all efficiently and effectively. How, they asked, might humans and computers team up to turn data into reliable (and, when necessary, speedy) decisions?
While the study’s sponsors did not want to give details about the specific kinds of decisions they are targeting, nor to confine the committee’s thinking to particular types of decisions, one can imagine that military planners today are faced with enormous amounts of information that might provide some value for critical decisions. Consider, for example, the process for deciding how to approach a destination in a hostile environment. The decision-makers may have access to tremendous amounts of heterogeneous information, though not all of it available predictably. Some information is collected over the longer term, such as knowledge about roads and bridges, inferred social networks, patterns of individuals and organizations, social schedules (market days and hours, religious services, regular meetings), the attitudes of nearby populations, and general environmental conditions. Along with that, near-term information is gathered about particular threats, weather and wind conditions, influxes or outflows of population (e.g., for special events). Aerial images, emergency calls, media reports, information traffic over social media, and “data exhaust”1 also provide valuable information. Each of these sources has its own uncertainties, and the quality and variability of the sources may be interdependent: for instance, the attitudes of surrounding populations can change depending on what information is communicated day by day, and by the positioning of troops. Decision making for some processes, such as a multi-day approach to a new location, might unfold through a number of smaller-scale decisions; whereas decision-making in some other cases, such as whether or not to order a drone strike, is a single yes-no decision with high consequences.
Other military decision making can be even more complex. Consider planning for a major deployment. The multi-month needs of the force must be anticipated, supply chains established, logistics planned, and so on. These demands have existed for centuries, and since at least World War II a strong foundation of analytical tools has been developed. But with today’s information technology, the amount of information that can be assembled and the number of options that can be examined have grown tremendously. As an operation unfolds, it will build on reports and forecasts about weather, tide, wind, and storm conditions; movements of others on transportation routes; aerial images and other sensed data; human-generated reports from the field; media and intelligence reports; information traffic over social media; data exhaust; and so on. This broad range of information strains the capabilities of the tools and of the planners to
1 Data exhaust refers to information that systems collect in the course of their work, as opposed to information that a user explicitly views or incorporates. It includes such data as time/date stamps, GPS coordinates, records of past actions by the user, and so on. A well-known example is the use of misspellings gathered during previous searches to improve the front-end interpreter used by search engines.
make good use of it all. At the same time, the relative ease with which inputs can be assembled has opened the door to compressing the decision making timeline, which further challenges the human planners.
Decision making of similar complexity occurs in many other contexts. Teams that manage the response to major disasters, such as damage from Hurricane Sandy in the United States or from the Fukushima Daiichi nuclear accident in Japan, must incorporate a very broad range of information from multiple heterogeneous sources—e.g., satellite images, local reports, scientific projections, and various communication flows—and quickly generate or update plans on multiple scales, ranging from immediate actions to staging of resources. The response to the 2013 Boston Marathon bombing is another case, one that may be more akin to some military decision making. In addition to multiple sources of partial information, the decision makers who managed the immediate response had to incorporate preliminary forensic evidence, crowdsourced inputs (of untested value), and other inputs, all under a great deal of time pressure. The decisions constituted a family of choices, such as where to deploy police, which areas to consider high risk for citizens and police, which search areas to prioritize, which information to follow or expand upon, and how to conduct the search.
In all of these cases, the decision making is a team effort, with many experts evaluating information and using their analysis and judgment to create portions of the overall decision or plan. Overlain on that is a process by which team members challenge one another and jointly merge their individual insights to create a bigger picture. Ultimate decisions are made by team leaders based on this funneling of information and analysis. It is difficult for humans to make good decisions in such complex situations. It must be remembered that the ultimate goal is to make good decisions: merely finding a way to analyze and incorporate all the data is not valuable unless it also leads more reliably to good decisions.
Computational support in the form of large-scale data collection and analysis, visualization, etc. has been readily incorporated into some human decision making processes. For example, computation is in the control processes for all manner of processing plants (chemical processing, nuclear power generation and petroleum refining), infrastructure (electric grid and telecommunications), manufacturing (chip fabrication and large scale baking plants), assembly (electronics and automotive robotic assembly), transactions (credit card and banking) and the military (management of theater operations).
The rest of this chapter focuses on the three components of such complex decision-making: the essence of decision making; exploiting the vast amounts of data that have become available as the basis for complex decision making; and the nature of collaboration that is possible between humans and machines in the process of making complex decisions. The committee chose not to address whether, and if so how, autonomous systems might someday replace humans as decision makers in complex situations.
Decision making is integral to the human experience. Our ability to consider the implications of future actions, ponder cause and effect, and leverage our exquisite executive function capabilities sets us apart from the rest of the animal kingdom. Yet our decision making
prowess is far from perfect, and some might argue that it is getting worse as we face a wave of decisions that we are ill equipped to deal with using human cognition alone.
As decision makers ponder a web of interconnections among people and automated systems, they wonder which connections matter; how disruptive a single choice will be to the overall system; how to weigh uncertainty or the potential for misunderstanding, misinterpretation, or someone else’s deception; and how to analyze decision making in circumstances where there is no “right” answer. In addition, some decisions must be made against (near) real-time constraints. A decision not to act (or a failure to decide at all) is itself a decision in some dynamic, rapidly changing environments. In addition to reaching a decision about a course of action, one also must make decisions about which information to consider out of the vast amounts that are available, weighing the cost of obtaining the information against its potential value. Such decisions about process can affect the quality of the ultimate decision, and they may be challenging in many of the same ways because of the multitude of options available.
The term “decision making” is itself a simplification; it refers to a process of evaluating information and reaching an actionable policy or strategy. Decision making tends to be context dependent; it often requires understanding of not only the observed or experienced situation but also of the relevant history and background.
Early theories of decision making focused upon serial processing models, in which sensory processes fed perception and the several memory systems under study (working, or short-term, memory and long-term memory) in a relatively straightforward pipeline. Decision making itself was presumed to be a logical deduction from the information provided, and these models were often described in the language of traditional information technology and processing. Thus, the widely cited OODA-loop model originated by John Boyd has a series of primarily sequential stages: Observation, Orientation (analysis), Decision, and Action (with feedback from the stages of decision and action, as well as from the environment, back to the orientation stage). Although a useful paradigm for training—it was designed for situations requiring a rapid response, such as decision making by fighter pilots, so it is necessarily simplified—it is a coarse model for studying the frontiers of decision making because it oversimplifies the underlying processes. As a result, although widely referred to in operational situations and for training, it is not a common framework in the research community.
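The sequential character of the OODA model can be made concrete with a short sketch. Everything here is illustrative: the stage functions are hypothetical placeholders standing in for real observation, analysis, decision, and action processes.

```python
# Illustrative sketch of the sequential OODA-loop model (Observe, Orient,
# Decide, Act) with feedback from action back into the next observation.
# All stage functions are hypothetical placeholders, not a real system.

def observe(environment, feedback):
    """Gather raw observations, colored by feedback from earlier cycles."""
    return {"raw": environment, "feedback": feedback}

def orient(observations, prior_orientation):
    """Analyze observations against prior context (the 'analysis' stage)."""
    return {"picture": observations["raw"], "context": prior_orientation}

def decide(orientation):
    """Select an action from the analyzed picture."""
    return "act" if orientation["picture"] else "wait"

def act(decision):
    """Execute the decision and report what happened."""
    return {"executed": decision}

def ooda_cycle(environment, cycles=3):
    feedback, orientation = None, None
    history = []
    for _ in range(cycles):
        obs = observe(environment, feedback)
        orientation = orient(obs, orientation)
        decision = decide(orientation)
        outcome = act(decision)
        feedback = outcome  # decision and action feed back to observation
        history.append(decision)
    return history

print(ooda_cycle({"threat": "detected"}))  # → ['act', 'act', 'act']
```

Even this toy version exhibits the limitation discussed below: the stages run strictly in sequence, with feedback only between cycles, whereas complex decision making interleaves and branches among them.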
Recently the military literature has addressed the inadequacy of the OODA loop for complex situations in which the decision maker does not have access to a model of the underlying mechanisms linking actions to outcomes (Benson and Rotkoff, 2011). For example, a “red team” approach, in which a team of experienced personnel is explicitly charged with undermining the ability of “blue” decision makers, can be a valuable method for exploring a broader range of scenarios, including those that a decision-making team might deem very low probability. A red team could identify ways in which observations might be made misleading, decisions anticipated, and actions countered, thus undercutting the applicability of an OODA-loop description of the decision-making process.
In addition, the implied sequential nature of the OODA loop—even if there is feedback between stages and perhaps multiple trips through the loop—does not fit well with real, complex decision making. In responding to a natural disaster, for example, decision making is extremely interactive, which is not modeled well with an OODA framework. And in situations for which a large amount of potentially useful information is available, it may be desirable to perform multiple versions of the observation stage—assembling several different pictures of reality—and/or to carry out multiple versions of the orientation (analysis) stage. Both of these exercises
create alternatives to be assessed, possibly by following through to develop a family of potential decisions which are then evaluated before a final decision is reached.
A speaker at one of the study’s early meetings suggested that human-machine-network decision making might be improved by adjusting each stage of the OODA loop to make better use of the human-machine-network “team,” such as by identifying good ways of apportioning cognitive load across those three components for each of the stages. The committee believes that the OODA-loop construct is not well matched to complex decision making with large volumes of information; while the four stages are part of any decision-making process, they can be combined in multiple ways. Consequently, it developed the following finding:
Finding 1. A common representation of the decision-making process, used to train fighter pilots in rapid decision-making for air combat, calls for sequential steps to observe, update beliefs, choose an action, and take the action (the so-called OODA loop). While those steps are inherent to any careful decision making, for complex decisions the OODA loop framework does not readily reflect feedback loops between the steps and branching to consider multiple choices of action, both of which are common. The study of decision making in complex situations, and the design of automated decision support systems, requires an understanding of those complexities. Thus the OODA-loop framework may not be sufficient in these contexts.
Early decision-making models tended to assume that decision making occurred at the conscious level of processing. A more nuanced view was presented by the theorist J. Rasmussen (1983), who divided the decision-making process of skilled operators into three categories: skill-, rule-, and knowledge-based procedures. Skill-based behavior refers to those capabilities that are sensory-motor and developed after a period of training, such as riding a bike. Rule-based behavior refers to those that are based on learned rules or procedures, such as following a recipe. In this taxonomy, knowledge-based processing is the highest level of cognitive control because it includes the challenge of solving novel problems (Cummings, 2014).
Today’s decision-making theories assume that much more complex cognitive processing is occurring, much of it subconscious and involving neural networks that interact as a dynamical system, with considerable iteration, feedback, and continual adjustment of parameters. For example, recent evidence about human thought implies that decisions by experts are often reached subconsciously, with reason and logic coming afterward to justify the decision (Mercier and Sperber, 2011).
Recognition-primed decision making (Klein, 2008) involves rapid pattern matching to the situation, one of the powerful properties of fast, subconscious systems. Klein’s work has been of particular value in guiding decision making in complex situations—real situations, not the simplified, artificial settings studied by laboratory-based researchers.
Heuristics and rule-following present a mix of behavior at the conscious and subconscious levels of processing. At the subconscious level, researchers have identified numerous heuristics that people use to simplify and speed up decision making—effectively, pattern matching to situations previously experienced. This decision making is often referred to as fast and frugal (Gigerenzer and Goldstein, 1996). The quality of the decisions is determined by whether appropriate information is examined—driven not only by what is available, but also by the adequacy of the decision maker’s implicit model and handling of biases—and by the
history of prior experiences (see Mercier and Sperber, 2011; Gigerenzer, 2008; Gigerenzer and Todd, 1999).
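One concrete example of a fast-and-frugal heuristic is Gigerenzer and Goldstein’s “take-the-best”: when comparing two options, examine cues in descending order of validity and decide on the first cue that discriminates between them. A minimal sketch (the cues, their ordering, and the city data below are invented purely for illustration):

```python
def take_the_best(option_a, option_b, cues):
    """Decide between two options with the take-the-best heuristic:
    examine cues in descending order of validity and stop at the first
    cue that discriminates between the options.

    option_a, option_b: dicts mapping cue name -> 1 (present) or 0 (absent).
    cues: cue names ordered from most to least valid.
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:  # first discriminating cue decides; later cues ignored
            return "a" if a > b else "b"
    return "guess"

# Hypothetical task: infer which of two cities is larger.
cues = ["has_airport", "has_university", "has_pro_team"]  # invented validity order
city_a = {"has_airport": 1, "has_university": 1}
city_b = {"has_airport": 1, "has_university": 0, "has_pro_team": 1}
print(take_the_best(city_a, city_b, cues))  # → a  (decided by 'has_university')
```

The heuristic is “frugal” because it typically inspects only a fraction of the available cues, yet in many environments its accuracy approaches that of full-information models.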
The past decade has seen significant progress in developing technologies and methods that support human sense-making and decision-making processes in complex domains. Understanding the dynamics of a complex system or organization can help one foresee the side effects of a decision or anticipate events before they occur. Many studies have been undertaken on measuring and supporting situation awareness, especially for individual decision makers, but there are still major gaps in our understanding of how to design and evaluate technologies and methods to provide effective cognitive support for individual and team sense making (Klein et al., 2006a, b; Moore and Hoffman, 2011) and decision making (e.g., Schmorrow et al., 2012).
Traditional models of human decision making focus entirely upon mental processing—all the action takes place in the brain—but another important trend in our understanding of human behavior is to understand the role of embodiment—that the human body exists in the world, interacting with it in ways that enhance our ability to function. Norman described it as a melding of knowledge in the head and knowledge in the world, because when accomplishing some task, the environment provides much of the information required as well as providing constraints, guides, and suggested courses of action (Norman, 1988, 2013). The research field called “embodied cognition” has expanded this notion, incorporating not only the environment but also the way that the entire body is used to enhance decision making (Todd and Gigerenzer, 2007; Kirsh, 2013; Dourish, 2001).
Information systems can be designed to support the human decision maker in tasks or subtasks that are domain or situation specific. However, the quality of support afforded will necessarily depend on the skill and foresight of the software’s creators. Designers of today’s analytic support systems have begun to build them so that they interact in a more naturalistic way with humans. More and more systems are able to respond intelligently to queries in natural language (e.g., Apple’s “Siri” and Google’s “Google Now”) and, as speech understanding progresses, this usage is expected to increase both in coverage and in power.
Indeed, in many of today’s activities, decision making is no longer an exclusively human endeavor. In both virtual and real ways, technology has vastly extended people’s range of movement, speed, and access to massive amounts of data. Consequently, the scope of complex decisions that human beings are capable of making has greatly expanded—for example, Google’s technology helped to quickly map the impact of the 2010 earthquake in Haiti and then helped to develop a person-finder tool. At the same time, some of these technologies have also complicated the decision-making process. For example, social networking was responsible for many false claims just after the Boston Marathon bombing in April 2013, and subsequently, throughout the hunt for the perpetrators.2
In addition to meeting the challenge of supporting its intended user, systems that incorporate data analysis can encounter situations in which hostile entities intend to deceive the decision maker. When such strategic actors are present, they might play a meta-level role in determining what we are able to observe. These potential vulnerabilities must be considered when designing and using information systems as decision aids. For example, our actions, including further information gathering, can inform adversaries about our current state of knowledge.
2 See, for example, “Social Media’s Rush to Judgment in the Boston Bombings,” http://www.npr.org/blogs/alltechconsidered/2013/04/23/178556269/Social-Medias-Rush-To-Judgment-In-The-Boston-Bombings. Last accessed March 19, 2014.
A decision-making context and process can be characterized along dimensions such as the following:
- Whether a single decision is to be made, a sequential cascade of decisions made in discrete or continuous fashion, or some complex construct of multiple related decisions, possibly made by numerous people in concert or individually;
- The pace of decisions: real time, seconds, minutes, days, years, or decades;
- Any clear trigger that characterizes the point at which a decision is forced; this might be the observation of a provocative action, the impending loss of a desirable alternative for action, or the closing of a narrow window in which an intervention can be effective;
- The degree of confidence that must be reached in order to justify a decision; if there is a trigger, a low weight of evidence might suffice, but in other cases the inevitable uncertainties in information and models must be characterized and factored in;
- The number of decision makers, their relation to one another, their diversity, and the responsibility and authority of each;
- Resources available to the decision makers (limited, adequate, rich);
- Scalability: whether the relevant, available data to be processed during the decision-making process are large in volume and possibly dynamic;
- Cultural differences;
- Rules of engagement that enable or constrain options; and
- Quality and availability of relevant data:
  - Stale or inadequate data;
  - Data from different sources: people, computers, and sensors that may be of different kinds and varying performance;
  - Complexity of structure and formats (audio, video, image, electromagnetic, handwritten);
  - Data location: collocated or geographically distributed across a network; and
  - Levels of certainty associated with each data source.
The list above characterizes many aspects of the human decision-making process, and each dimension will influence an information system that is designed to support decision making. Any or all of these dimensions might be considered when developing such an information system.
Many other factors have crucial effects on decision making, such as emotions, social context, relationships, organizational structures, authority systems, and so forth. And the way individuals work in networks can have strong impacts. Ignoring these factors can lead to failures of team decision making, and an understanding of them must inform the design and incorporation of technologies. Currently, tools that assist with team coordination are making great advances. For example, Facebook is able to learn how its subscribers interact with one another and make decisions about the site’s features and interfaces based on that information. Analogous technology that uses big data to understand human networks and interactions is also affecting other important decisions, such as where to distribute malaria nets in Africa, where to send emergency teams in a disaster, how to advertise a political candidate, and how to induce people to contribute to charity. Culture should also be considered, because it affects team members’ attitudes and unspoken assumptions, such as how they feel about privacy, trust, sharing, and so on. There is a good deal of emerging research on this topic.
We live in an era of “big data.”3 Today, big data are everywhere, and datasets are growing in size, noise, and complexity—experiments, observations, simulations, images, video, text, networks: Science and business are generating terabytes of data and more, and the scale of social media data can extend into the exascale range. More and more, these data are considered potential sources of knowledge, requiring increasingly sophisticated analysis techniques to uncover their relational and semantic underpinnings. Indeed, it could be argued that much of the drive toward big data has been bottom up: Let’s collect more data, then analyze it, and hopefully derive knowledge (which at times may be only correlations rather than causal knowledge).
Arguably, we currently stand at the beginning of a decades-long trend toward increasingly evidence-based, data-informed decision making across all walks of life. This trend is powered by the confluence of several technical and societal trends that are projected to accelerate over the coming years: the exploding volume and variety of data, the accelerating use of the Internet to share these data and to support team decision making, and the widespread adoption of personal mobile devices that give individuals nearly continuous opportunities to communicate, to collect data about themselves and their surroundings, and to access online computer assistance.
Analyses of massive datasets have already led to breakthroughs in fields as diverse as genomics, astronomy, health care, urban planning, and marketing.4 For example, credit card companies now make better decisions about which credit card transactions are likely to involve fraud by scrutinizing millions of historical credit card transactions to automatically discover the subtle marks that distinguish fraudulent from acceptable charges. Local governments use historical and real-time data feeds to improve decisions about traffic control and about where and when to allocate foot police to keep the peace. Individuals now use mobile devices to capture continuous data about the number of steps they take every day, their weight, and other personal health data in an effort to understand and improve their own health. We are also beginning to witness new ways in which groups of networked individuals can work together to make better decisions: Social network sites invite visitors to play games that design new proteins (Foldit) or use their differing expertise to answer one another’s questions (e.g., Yahoo! Answers, Quora).5
Big data poses tremendous opportunities—the promise of having much improved understanding of the many elements of relevance to our questions and choices—but also great challenges, because creating that “understanding” requires much more than simply finding the information. The process of inferring true knowledge from it is non-trivial. The sheer volume of the data requires computing just to prepare and filter the data for human interpretation. But that may not be enough, because the filtered output can still be enormous, and current capabilities
3 The term “big data” is an umbrella term that refers not only to the vast amounts of data that computers now make available but also to “a transformation in how society processes information,” what Kenneth Neil Cukier and Viktor Mayer-Schoenberger call the “datafication [of] many aspects of the world that have never been quantified before.” Foreign Affairs, May–June 2013. http://www.foreignaffairs.com/articles/139104/kenneth-neil-cukier-and-viktor-mayer-schoenberger/therise-of-big-data. Last accessed March 24, 2014.
4 See, for example, The Fourth Paradigm (2009).
cannot filter out all noise, such as errors and spurious patterns. Humans excel at some of these steps: for example, a typical Internet search can yield thousands or millions of “hits”, some very much related to the query, and some very far afield. The fact that appropriate results are often at the top of the list is an amazing accomplishment, but it is still necessary for a user—an analyst—to assess the top N hits to determine which are most promising. Humans are remarkably good and fast at this, thus exceeding the capabilities of computers, although even then, humans can be fooled by erroneous information, superficial associations, manipulation of search engines, and other artifacts of the data or the algorithms that filter it. And for many cases, it is not feasible to simply dump search results onto an analyst’s screen because there may be too many relevant results for a human to check. Even if feasible, the timeliness of decision making will then be limited by the speed of a human analyst.
Finding 2. Increasingly, the data used to support computer-assisted decisions are drawn from heterogeneous sources (e.g., unstructured text, images, simulation outputs). Current techniques for filtering and aggregating these disparate data types into a well-characterized input for decision making are limited, which in turn limits the quality of the decisions.
Thus, the response to big data appears to be “big computing.” Computers are indisputably better than humans at keeping track of myriad details and at filtering and organizing massive amounts of data.6 Algorithms put needed information at our fingertips nearly instantaneously. Yet it is still often the case today that the human has to adapt to the machine, rather than the other way around. It is important to understand and quantify the distinct capabilities of the human and the information system so that both can function optimally.
It is also critical to recognize that exploiting large bodies of data is not necessarily better than traditional approaches. Smaller amounts of data, including data drawn via a process of sampling from large stores or streams of data, may provide the most important inputs to decision making.
As discussed in detail in the 2013 National Research Council report Frontiers in Massive Data Analysis, there are still substantial challenges for massive data. These range from the more “familiar” domains of storage, indexing, and querying to “the ambitious goal of inference” (italics in the original) needed for decision making, which the report defines as
. . . the problem of turning data into knowledge, where knowledge often is expressed in terms of entities that are not present in the data per se but are
6 That said, human vision and cognition still far exceed machine-based vision and cognition in many areas. A 2008 report from the National Academies (Emerging Cognitive Neuroscience and Related Technologies) observed “The global scientific computing community is approaching an era in which high-end computing will, in principle, be sufficient in capacity and computational power to model the human brain. However, there does not yet exist either an adequate and detailed understanding of how such modeling can be done, nor a complete model of how the brain interacts with complex regulatory and monitoring systems throughout the body. These and other difficulties make it highly unlikely that in the next two decades anyone could build a neurophysiologically plausible model of the whole brain and its array of specialized and general-purpose higher cognitive functions.”
present in models that one uses to interpret the data. Statistical rigor is necessary to justify the inferential leap from data to knowledge, and many difficulties arise in attempting to bring statistical principles to bear on massive data.
(National Research Council, 2013)
Among these hurdles are sampling bias, provenance, and control of error rates. All statistical methods rely on assumptions about how the data were gathered and sampled; however, massive datasets are often constructed from many subcollections of data, each of which was amassed using a different sampling scheme for a different purpose. The analyst may have little control or insight into this collection. Further, the “data” may not be the original observations, but may be the product of previous inferential procedures, and, without care, subsequent analyses can amplify noise.
Finally, the temptation with massive data is to multiply the number of hypotheses explored, and this can lead to substantial issues with “false discovery.”
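A standard statistical safeguard against this multiplicity problem is a false-discovery-rate correction such as the Benjamini–Hochberg procedure, which bounds the expected fraction of rejected hypotheses that are false positives. A minimal sketch (the p-values below are invented for illustration):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg procedure: given p-values from m hypothesis
    tests, return the (sorted) indices of hypotheses rejected while
    controlling the false discovery rate at level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject the hypotheses holding the k_max smallest p-values.
    return sorted(order[:k_max])

# Ten tests: a few small p-values amid noise (illustrative numbers).
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(pvals))  # → [0, 1]
```

Note that with a naive per-test threshold of 0.05, five of these ten hypotheses would be “discoveries”; the correction retains only the two that survive the rate control.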
Finding 3. While improved information availability can improve the quality of decision making, more information alone is not sufficient. This is particularly evident in complex scenarios where the goals of different team members are not completely aligned and delays make it difficult to attribute effects to actions.
Although existing statistical tools can address these issues, much work remains to be done in developing and applying them to massive data. In particular, there is still a gap in the middleware that would enable statistical tools to interact with distributed computing systems. The Committee on the Analysis of Massive Data (2013) identified several key research areas:
- Data representation, including characterizations of the raw data and transformations that are often applied to data, particularly transformations that attempt to reduce the representational complexity of the data;
- Computational complexity issues and how the understanding of such issues supports characterization of the computational resources needed and of trade-offs among resources;
- Statistical model building in the massive data setting, including data cleansing and validation;
- Sampling, both as part of the data-gathering process and as a key methodology for data reduction; and
- Methods for including people in the data-analysis loop.7
Much of the current report focuses on the final research area listed above.
7 See Frontiers in Massive Data Analysis (2013), 4-5.
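Sampling as a data-reduction method (the fourth research area above) can often be carried out in a single pass over a stream of unknown length; reservoir sampling (Vitter’s Algorithm R) is a standard technique for doing so. A minimal sketch:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of unknown
    length in a single pass (Algorithm R). Every item in a stream of n
    items ends up in the sample with equal probability k/n."""
    rng = rng or random.Random()
    reservoir = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randrange(n)     # uniform draw in [0, n)
            if j < k:
                reservoir[j] = item  # replace an entry with probability k/n
    return reservoir

# Sample 5 items from a "stream" of a million records without storing it.
sample = reservoir_sample(range(1_000_000), 5, rng=random.Random(42))
print(sample)
```

Because memory use depends only on k, not on the stream length, the same code applies whether the source is a large store being scanned once or a live data feed.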
In the early days of computing, machines were slow, they lacked today’s software power, and they communicated poorly with humans. Such computers could be programmed to perform well-understood, limited, and often repetitious tasks. They could display almost real-time radar returns, hold and represent text for an author, and list inventory. In these situations, the automation executes minor, even incidental, tasks that support human decision making. The results provided are useful, but the computers are not centrally involved in determining how the decision process is orchestrated over time. Humans have tended to delegate discrete tasks to computation, such as searching for information in a database, mining large volumes of data, depicting information in visual form that is more amenable to human understanding, and monitoring some behavior (such as streams of credit card transactions or surveillance camera recordings).
Advances in computing capabilities over recent decades now make it reasonable to consider how to integrate automation fully into complex decision-making systems. This progress has enabled humans and computers to assemble into networks of geographically dispersed members. As computing devices have gained the ability to interpret information intelligently and to act over long periods with diminished human supervision, their capacity to act as teammates rather than mere tools has grown. Although the distinction between tool and teammate is not a sharp one, the difference in experience when working with a device that is a teammate rather than simply a tool is powerful. For instance, consider two systems that might help a person write a paper. The tool-system lets the person easily reach Google and search for citations. The teammate-system goes further and provides some functions akin to what a colleague could bring to the partnership. For example, based on a few “hints” (authors, keywords) provided by the user, the teammate-system operates autonomously and in parallel with the user, performing additional searches to find candidate papers and citations. The system may use machine learning to generate further inferences about the user’s unstated intent for the automated searching, perhaps from a history of the user’s own searches or from a textual analysis of the user’s past writing; or it may generate such information from previous examinations of the user’s colleagues. When the user is ready, the teammate-system presents the results of its searches, and perhaps some analyses; once the user has selected (in this case) references to include in the paper being drafted, the system does all the formatting required so that the text is ready to drop into place.9
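The teammate-system workflow described above (hints in, parallel autonomous searches, deduplicated candidates out, formatted citations on request) can be sketched in miniature. This is not any actual system’s implementation; the paper index, `search` function, and scoring-free matching below are all invented placeholders standing in for a real search service.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local "paper index" standing in for a real search service.
PAPER_INDEX = [
    {"title": "Collaborative Interface Agents", "author": "Doe", "year": 2002,
     "keywords": {"collaboration", "agents", "interfaces"}},
    {"title": "Joint Activity in Mixed Teams", "author": "Roe", "year": 2009,
     "keywords": {"teamwork", "agents", "autonomy"}},
    {"title": "Analyzing Massive Data", "author": "Poe", "year": 2013,
     "keywords": {"statistics", "sampling", "data"}},
]

def search(hint):
    """Return papers whose keyword sets contain a single user-supplied hint."""
    return [p for p in PAPER_INDEX if hint in p["keywords"]]

def gather_candidates(hints):
    """Run one search per hint in parallel, as the teammate-system works
    alongside the user, then merge the results without duplicates."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, hints)
    seen, merged = set(), []
    for papers in result_lists:
        for p in papers:
            if p["title"] not in seen:
                seen.add(p["title"])
                merged.append(p)
    return merged

def format_citation(paper):
    """The final 'drop into place' step: render a candidate as a reference."""
    return f'{paper["author"]} ({paper["year"]}). {paper["title"]}.'

# The user supplies two hints; the system searches on its own from there.
candidates = gather_candidates(["agents", "teamwork"])
citations = [format_citation(p) for p in candidates]
```

A real teammate-system would replace the keyword match with learned relevance inferences; the skeleton of hint fan-out, merge, and formatting would remain the same.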
Categorizing an automated element as a tool or a teammate is not in itself of great importance, except as a recognition that the relationship between humans and computers is changing. The capabilities described in the last paragraph may look primitive before many years have elapsed. The pertinent challenge is to determine strategies for improving human-machine systems that engage in complex decision making or, stated another way, to determine how to structure decision making in the face of enormous amounts of information and computers with very strong, but specialized, capabilities.
8 This phrase refers to Jeffrey M. Bradshaw et al., “From Tools to Teammates: Joint Activity in Human-Agent-Robot Teams” (2009).
9 See, for example, Tamara Babaian, Barbara J. Grosz, and Stuart M. Shieber, “A Writer’s Collaborative Assistant,” Proceedings of the Intelligent User Interfaces Conference (San Francisco, CA), ACM Press, 2002. Available at http://dash.harvard.edu/handle/1/2252600. Last accessed April 8, 2014.
Cognitive scientists have examined a wide range of decision-making activities involving mixed teams of people and machines (e.g., Hollnagel and Woods, 2005). Computer scientists in the fields of artificial intelligence and multiagent systems have formalized collaborative behavior, developed specifications for system design, built computer “agent” systems with teamwork capabilities, and developed “collaborative interface systems” (e.g., Grant et al., 2005; Gal et al., 2010). This work has taken a perspective very different from the older descriptions of human-computer systems, in which the various skills and weaknesses of people were compiled in an attempt to determine how best to partition task components between person and machine (e.g., Fitts, 1951; Parasuraman et al., 2000). It has led to a variety of frameworks for describing possible relationships between humans and automation in carrying out complex tasks (e.g., Miller, 2012).
New capabilities in automation and ubiquitous connectivity are making it increasingly feasible, and feasible in novel ways, to connect humans with a larger and broader set of automation types, including vehicles and other assets. As Miller (2012) notes, “control” can now exist along a spectrum of multiple operators, perhaps at multiple levels of an organization, and it can be shared in various ways among them.
In the future, each human or machine participant might:
- Proffer information, observations, or suggestions to team members that advance some aspect of the shared objectives;
- Proffer critiques of the team’s problem-solving strategies;
- Possess “self-awareness” when approaching overload and recruit help in such a situation;
- Monitor teammates’ problem-solving process and execution, and then anticipate the information needs of others; give and accept feedback; identify gaps in approach; and cover for another’s execution failure;
- Explain how a result was reached; and
- Adjust activities over time to account for the changing needs of the team and its members, adapting as the decision scenario unfolds.
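One way to make the capability list above concrete is to read it as an interface that any team participant, human proxy or machine agent, would implement. The sketch below is purely illustrative; the class names, the logging behavior, and the `capacity` workload threshold are all assumptions, not part of any system described in this report.

```python
from abc import ABC, abstractmethod

class TeamParticipant(ABC):
    """Hypothetical interface mirroring the capability list above."""

    @abstractmethod
    def proffer(self, item):
        """Offer information, observations, or suggestions to the team."""

    @abstractmethod
    def critique(self, strategy):
        """Offer a critique of a problem-solving strategy."""

    @abstractmethod
    def is_overloaded(self):
        """Report 'self-awareness' of approaching overload."""

    @abstractmethod
    def explain(self, result):
        """Explain how a result was reached."""

    @abstractmethod
    def adjust(self, scenario):
        """Adapt activities as the decision scenario unfolds."""

class LoggingAgent(TeamParticipant):
    """Minimal concrete participant that records every team interaction."""

    def __init__(self, capacity=100):
        self.capacity = capacity  # assumed workload limit, purely illustrative
        self.log = []

    def proffer(self, item):
        self.log.append(("proffer", item))

    def critique(self, strategy):
        self.log.append(("critique", strategy))

    def is_overloaded(self):
        # "Self-awareness": overload here is just a count of logged work.
        return len(self.log) >= self.capacity

    def explain(self, result):
        # Explanation is a trace over the interaction log.
        return f"{result!r} derived from {len(self.log)} logged interactions"

    def adjust(self, scenario):
        self.log.append(("adjust", scenario))

agent = LoggingAgent(capacity=2)
agent.proffer("new aerial imagery")
```

The point of the abstract base class is only that each bullet becomes a testable obligation; a real agent would back these methods with perception, planning, and communication machinery rather than a list.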
This view of human-machine decision making as a collaboration echoes the 2012 Defense Science Board report The Role of Autonomy in DoD Systems:
The Task Force reviewed many of the DoD-funded studies on “levels of autonomy” and concluded that they are not particularly helpful to the autonomy design process. These studies attempt to aid the development process by defining taxonomies and grouping functions needed for generalized scenarios. They are counter-productive because they focus too much attention on the computer rather than on the collaboration between the computer and its operator/supervisor to achieve the desired capabilities and effects. Further, these taxonomies imply that there are discrete levels of intelligence for autonomous systems, and that classes of vehicle systems can be designed to operate at a specific level for the entire mission.10
10 Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems, Washington, DC, 2012, p. 3.
Viewing the system as a team offers a framework for exploring avenues toward a more effective overall process, which in turn will dictate the design of the automated components as well as how the participants interact.
The technological advances outlined above have enriched, or promise to enrich, the relationship between humans and automation, as well as the quality of the decisions they produce. To exploit this situation fully, engineers can draw on a growing number of design techniques for building and structuring human-machine decision-making teams. The committee analyzed multiple aspects of the human-machine relationship. Members discussed opportunities to achieve better decision-making processes, as well as problems that arise when a design does not sufficiently honor the strengths and weaknesses of the two types of participants, as will be discussed in the following chapters.