Based on the components discussed in Chapters 3 and 4, it is becoming feasible to assemble human-computer teams that are suitable for some decision making. This chapter provides some background on the following research areas that underlie human-machine collaboration for decision making: sensing, software agent systems, neuroscience, and human computation. A good deal of innovation is taking place in these areas, but fundamental questions remain before the pieces can be assembled into reliable decision-making systems.
When considering humans or animals, sensing often refers to the processes by which stimuli from outside or inside the body are received and felt, such as through the faculties of hearing, sight, smell, touch, taste, and equilibrium. Thus, sensing is a person’s critical mechanism for data acquisition. The act of sensing as pure data acquisition—for example, translation of data from the world to the computer—has advanced significantly in recent years. Whereas data acquisition used to be the bottleneck for the data-to-decisions pipeline, that is no longer the case for many disciplines. It is because of these advances in data acquisition that we can now work on improving the entire data-to-decisions process.
Consider the volumes of images and video, a critical source of data for decision making by both humans and machines, which are now readily available. Users upload about 300 million images a day to Facebook,1 with this number increasing to more than a billion a day during some special occasions.2 Photo-sharing and messaging services such as Flickr, Picasa, and WhatsApp add to this incredible source of imagery for decision making. Technological progress has made digital cameras so cheap (and advanced) that they are in the pockets of hundreds of millions of people, something unheard of just 10 years ago. Similarly, the Department of Defense acquires significant amounts of images and videos in daily operations, with reports indicating terabytes of data being generated in Iraq in a single day. Both Google and Facebook possess sufficient data for reliable object recognition, even face identification, at levels that rival human performance.3 These are relatively simple tasks, but they offer clear examples of the value that large quantities of data bring to important applications. Similar examples can be found in fields such as medical diagnostics and other disciplines with very well-defined tasks and performance goals, although that degree of definition is not always possible in data-to-decision scenarios.
Analysis is clearly lagging behind sensing and data acquisition, even in cases where access to enormous amounts of data has improved (as in constrained object recognition, discussed below). Still, progress has been made in recent years in the automatic analysis of vast amounts of sensed visual information, such as the analysis of consumer photographs, and in
automatic language translation, to name two popular capabilities used on the Internet. It is precisely the exploitation of large amounts of sensed data, more than the analysis of particular instances, that has been driving the interpretation of visual information. While it is not yet clear whether this is the only way to obtain state-of-the-art performance (and it probably is not), access to large amounts of data has been found to be critical for learning visual features important for image and object recognition. Such automatic analysis is the only way to deal with massive amounts of data, and the only way to infer hidden information and singularities relevant for decision making. Significant recent examples in this area include, again, the automatic annotation and labeling (object detection) of millions of images by the team of G. Hinton4 (University of Toronto and Google), based on deep learning, and the development of technologies for assisting the visually impaired, such as the OrCam.5 The outstanding performance of such technologies clearly depends on the ability to observe large amounts of data to learn the needed patterns. Such automatic analysis of image information is critical for decision making within systems such as those in an automobile that can automatically detect a pedestrian ahead and direct the car to stop.6 The system developed by OrCam is a clear additional example of humans and sensing machines collaborating to make a decision (Do I cross the street? Do I sit at this table?), with the human first pointing the camera in an “interesting” direction, the machine (video camera or sensing device, and algorithms for automatic interpretation) providing information, and finally the human using that information for decision making.
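The claim that more data yields better recognition can be illustrated with a toy sketch (none of this comes from the systems cited above; the two "object classes" are synthetic 2-D feature clusters, and the nearest-centroid classifier is a deliberately simple stand-in for a learned recognizer):

```python
import math
import random

random.seed(42)

def sample(label, n):
    # Two synthetic "object classes" as noisy 2-D feature clusters.
    cx, cy = (0.0, 0.0) if label == 0 else (1.0, 1.0)
    return [((random.gauss(cx, 0.8), random.gauss(cy, 0.8)), label) for _ in range(n)]

def train_centroids(data):
    # Nearest-centroid "recognizer": average the features seen for each class.
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in data:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {l: (s[0] / s[2], s[1] / s[2]) for l, s in sums.items()}

def accuracy(centroids, test_set):
    def predict(point):
        return min(centroids, key=lambda l: math.dist(point, centroids[l]))
    return sum(predict(p) == label for p, label in test_set) / len(test_set)

test_set = sample(0, 500) + sample(1, 500)
small = train_centroids(sample(0, 5) + sample(1, 5))        # 10 training examples
large = train_centroids(sample(0, 2000) + sample(1, 2000))  # 4,000 training examples
acc_small, acc_large = accuracy(small, test_set), accuracy(large, test_set)
```

With only a handful of examples the estimated class centers are noisy, so accuracy fluctuates; with thousands of examples the centers converge and accuracy approaches the best this simple model can do, mirroring (in miniature) why large image collections matter for recognition.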
In addition to sensing the visual world, data to decisions also depends on other modalities such as audio and text. Audio sensing and analysis have also advanced significantly in recent years, as witnessed by the automatic recognition of voice when users call service centers, something once again unheard of 10 years ago but now used with high reliability by multiple call centers. One of the most interesting advances in the area of “sensed text” is automatic translation. While this is based on large amounts of data as well, it has critical components of grammatical structure, an area still underdeveloped in the analysis of visual data, where no comparable “grammar” of pictures yet exists.
A critical new challenge resulting from the advance of sensing technology is its integration. This means not only understanding how to merge and combine different sensing disciplines but also understanding when and how one can replace or augment the other, in particular, when one is significantly cheaper or easier to deploy. For example, are functional MRIs necessary to understand brain activity and states of the human decision-making process, or can the same information be inferred (at least the information critical for understanding decision making) from simpler data acquisition devices such as eye tracking?
Sensing keeps improving, but it still faces numerous challenges, such as continuing to reduce the size and the energy consumption of sensing devices. Biologically inspired sensors constitute a very exciting area of research with significant advances almost daily. Our success in data acquisition has spawned a new challenge: developing the capability to eliminate the incredible amounts of uninformative data we acquire. We have clearly transitioned into the phase
4 See, for example, Krizhevsky, A., I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, MIT Press: Cambridge, MA, 2012.
where the challenge for many disciplines is in the automatic analysis and interpretation of the sensed world, and not so much on sensing it.
Although there does not seem to be a universally accepted definition of a software agent, the community of agent research and practice would generally agree that a software agent is a computational system that (a) is situated in an environment; (b) is goal directed; (c) is capable of flexible, autonomous action; and (d) learns from its experience.7 Element (a) means that the agent is able to receive input from the real or the informational world and performs actions that could change its environment in some ways. “Autonomy” does not mean that the agent has complete ability to reason and act on its own, but that certain “autonomous capabilities” may minimize the need for human supervision for particular tasks and task contexts;8 and that the agent is able to control its internal computational state and actions. “Flexibility” means that the agent should be responsive (adaptive) to perceived changes in the environment. Additionally, the agent is proactive; that is, the agent can exhibit goal-directed behavior, make predictions about future environment states, respond appropriately to the predictions, and take appropriate initiatives. The agent should be able to learn from its experience, and thus improve its performance. Finally, the agent should be social; that is, able to interact with other artificial agents or humans in the course of performing its own problem solving or in order to assist others. In a multiagent system, each agent has incomplete information about the environment, other agents, and their attitudes and problem-solving abilities; there is no overall system control (each agent controls its problem solving locally); data is typically decentralized; and computation is asynchronous. Even though distributed computation offers robustness, in that there is no single point of system failure, multiagent systems face a multitude of challenges, including
- How to formulate the distributed problem, allocate tasks to various agents, and synthesize the results;
- How to initiate agent interactions, including when and what agents should communicate;
- How to ensure coherence in the distributed problem solving and avoid harmful interactions and effects;
- How to enable agents to reason about the state of their overall coordinated process;
- How to manage allocation of limited resources;
- How to allow agents to form and maintain a model of the other agents’ problem solving so as to coordinate more effectively;
- How to reconcile conflicting local viewpoints, intentions, information, and results;
- How to manage distributed problem solving in the face of failures, and changing environmental and social dynamics (e.g., agents unpredictably leave and join the agent society); and
7 This definition is adapted from Jennings, Sycara, and Wooldridge, 1998.
8 See, for example, Bradshaw, J.M., Hoffman, R.R., Johnson, M., and Woods, D.D. The Seven Deadly Myths of “Autonomous Systems.” IEEE Intelligent Systems, May/June 2013, 28(3):54-61.
- How to manage asynchrony of communication and computation and determine effective trade-offs between communication and local computation, especially in the face of the increasing amount of data available to the multiagent system.
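The four defining properties of an agent can be made concrete with a minimal sketch (this toy loop is illustrative only, not drawn from any cited system; the environment, actions, and learning rule are all invented for the example):

```python
import random

class Agent:
    """Minimal sketch of the four agent properties defined above:
    (a) situated: receives percepts from an environment and acts on it;
    (b) goal directed: chooses actions that move it toward a goal;
    (c) autonomous/flexible: selects actions itself and keeps exploring
        so that it can adapt to perceived changes in the environment;
    (d) learning: adjusts its action preferences from experience."""

    def __init__(self, goal):
        self.goal = goal
        self.value = {}  # learned estimate of each action's usefulness

    def act(self, percept, actions):
        # (b), (c): prefer the action currently believed best, with
        # occasional exploration so behavior stays flexible.
        if random.random() < 0.1:
            return random.choice(actions)
        return max(actions, key=lambda a: self.value.get(a, 0.0))

    def learn(self, action, reward):
        # (d): simple incremental update toward the observed reward.
        old = self.value.get(action, 0.0)
        self.value[action] = old + 0.5 * (reward - old)

# (a) A toy environment: the agent must learn that "right" moves it
# toward the goal position; reward is the negative distance to the goal.
agent = Agent(goal=3)
position = 0
for step in range(50):
    action = agent.act(position, ["left", "right"])
    position += 1 if action == "right" else -1
    agent.learn(action, reward=-abs(agent.goal - position))
```

A multiagent system would run many such sense-decide-act-learn loops concurrently, which is exactly where the coordination challenges listed above arise.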
Research to date has identified and modeled a variety of multiagent coordination regimes. They range from teamwork,9 where the agents work towards common goals, to adversarial interactions, where the agents would like to maximize their own payoffs even at the expense of other agents. Using different coordination regimes, a multitude of applications of single agents and multiagent systems have been developed in such diverse areas as manufacturing, electronic commerce, transportation, telecommunications, air traffic control, military and civilian crisis response, health management, games and entertainment, and information management. This latter application domain is most relevant to the data-to-decisions context, because it develops agents to manage the user’s information overload problems10 that arise from the vast volume of information available from a myriad of information-gathering systems.
Most of the applications alluded to above have involved software agents that collaborate without human interaction, or where the human interaction with the agent(s) is very simple and stylized. However, it is likely that with the increased sophistication of agent technology and network pervasiveness, agent support for decision making will (a) move beyond today’s state, in which the agent involvement is relatively stylized and of short duration (e.g., buying a travel ticket) to more complex and longer-duration situations (e.g., assistance while driving) and (b) transition from assistance offered to a single decision maker to assistance offered to human-networked decision-making teams. While a large body of research has been conducted for agent decision support of single decision makers, there is comparatively little work on agent assistance for networked human decision-making teams.
Researchers desire to make agents an integral part of teams (Christoffersen and Woods, 2004), but this desire has not yet been fully realized. Researchers must identify how to best incorporate agents into human teams and what roles they should assume. The three primary roles that agents play when interacting with humans are as follows (Sycara and Lewis, 2004):
1. Agents supporting individual team members in completion of their own tasks. These agents often function as personal assistant agents and are assigned to specific team members. Two situations exist: either each human is supported by a single agent proxy in which agent proxies interact with other agents to accomplish the human’s tasks, or each human is supported by a team of agents that work to accomplish the single human’s directives. Often there are no other humans involved in the task, and
9 Teamwork activities may include negotiation and auctions, where the agents interact in order to resolve conflicts (as in resource and task allocation), and formation of coalitions, where agents form alliances for more effective problem solving.
10 Information overload problems include information gathering and selection (where the sheer amount of information present prevents the decision maker from finding the particular information he or she requires); information filtering of the enormous amounts of information that a decision maker is faced with; and information reconciliation and fusion.
the only “teamwork” involved is between the software agents. Examples of these types of agent systems include agents assisting humans in allocating disaster rescue resources and multi-robot control systems in which teams of robots perform tasks under the guidance of a human operator. Task-specific agents utilized by multiple team members also belong in this category.
2. Agents supporting the team as a whole. The performance of teams, especially in tightly coupled tasks, is believed to be highly dependent on the following interpersonal skills: information exchange, communication, supporting behavior, and team initiative and leadership. Therefore, agents supporting the team as a whole, rather than focusing on task-completion activities of individual human team members, directly facilitate teamwork by aiding communication, coordination among human agents, and focus of attention. In certain applications, this has been shown to be more effective than having the agents directly aid in task completion (Sycara and Lewis, 2004). Aiding teamwork also requires less domain knowledge than aiding tasks, thus suggesting that teamwork aids might be reusable across domains. The experimental results summarized in Sycara and Lewis (2004) indicate that aiding human teamwork rather than individual team members might be the most effective aiding strategy for agents in support of human teams.
3. Agents assuming the role of an equal team member. These agents are expected to function as “virtual humans” within the organization, capable of the same reasoning and tasks as their human teammates (Traum et al., 2003). This is the hardest role for a software agent to assume, since it is difficult to create a software agent that is as effective as a human at both task performance and teamwork skills. Instead of merely assisting human team members, the software agents can assume equal roles in the team, sometimes replacing missing human team members. It can be challenging to develop software agents whose competency is comparable to that of human performers unless the task is relatively simple. Agents often fulfill this role in training simulation applications, acting as team members or tutors for the human trainees.11
Creating shared understanding between human and agent teammates is a sizable challenge facing developers of mixed-initiative collaborative human-agent systems. The limiting factor in most human-agent interactions is the user’s ability and willingness to spend time communicating with the agent in a manner that both humans and agents understand, rather than the agent’s computational power and bandwidth (Sycara and Lewis, 2004). The problem of shared understanding—whether the agents reduce uncertainty through communication, inference, or a mixture of the two—has been formulated (Horvitz, 1999) as a process of managing uncertainties: (1) managing the uncertainties that agents may have about the user’s goals and focus of attention, and (2) managing the uncertainties that users have about agent plans and status. Also, protecting users from unauthorized agent interactions is a concern in any application of agent technology.
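The decision-theoretic flavor of this uncertainty management can be sketched as follows. In the spirit of Horvitz (1999), the agent compares the expected utility of acting autonomously on its best guess of the user's goal against asking a clarifying question or doing nothing; all the utility values below are illustrative assumptions, not figures from the literature:

```python
def choose_intervention(p_goal, u_act_right, u_act_wrong, u_dialog, u_noop=0.0):
    """Given the agent's probability p_goal that it has correctly inferred
    the user's goal, compare the expected utility of (1) acting on that
    guess, (2) asking the user a clarifying question, and (3) doing
    nothing, and return the best option. Utility values are illustrative."""
    eu_act = p_goal * u_act_right + (1 - p_goal) * u_act_wrong
    options = {"act": eu_act, "ask": u_dialog, "noop": u_noop}
    return max(options, key=options.get), options

# When the agent is confident, autonomous action dominates; when it is
# unsure and a wrong action is costly, asking the user is safer.
decision_confident, _ = choose_intervention(0.95, u_act_right=10, u_act_wrong=-20, u_dialog=4)
decision_unsure, _ = choose_intervention(0.40, u_act_right=10, u_act_wrong=-20, u_dialog=4)
```

The same comparison captures why the user's communication cost matters: raising the cost of dialogue (lowering `u_dialog`) pushes the agent toward acting or staying quiet rather than interrupting.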
11 See, for example, Rickel and Johnson, 2003.
Arguably, one of the areas that may hold the most promise for integrated teams of humans and machines truly acting as teammates is neuroscience. Neuroscience, from the broadest perspective, is the study of how the brain (and in particular the human brain) processes information, executes cognition, and then takes actions based on that information. Across the animal kingdom, the human brain, with its dense and folded cortex, is seen as the pinnacle of cognitive evolution. Our ability to experience, reason, remember, and make decisions based on cause and effect is key to our dominance and success as a species. Although consciousness and decision making come naturally and easily to us in our daily lives, the precise biological mechanisms of cognition are not well understood. A great deal of outstanding fundamental research over the last three decades has given us a glimpse into the anatomical regions and cortical networks that underlie cognition. Recent advances in neuroscience and brain-sensing technology such as fMRI (functional magnetic resonance imaging) have allowed us to see inside the brain when we make decisions. For example, different labs are investigating brain activity, via fMRI, during decision making,12 providing a window (and a potential signal input) into what is still one of the most complicated decision systems available: the human mind.
The signal can also be used to actually reconstruct and read the sensed world, in other words, to reverse-engineer the brain and understand the sensed image by looking at the (fMRI) signal.13 However, it is just within the last decade that we have begun to make strides in understanding the functioning of the human brain in more applied settings that deal with practical decision making and tasks of military and intelligence relevance.14 Researchers are beginning to identify brain activities that are associated with decision making.15 These measures are typically made in real time and noninvasively with electroencephalography (EEG). Some of these signals are subconscious and can occur before a person realizes that he or she has made a decision.16 Other physiological activities, too, are coupled with decision making. Eye-tracking
14 See Kruse, A.A.: Operational neuroscience: neurophysiological measures in applied environments. Aviat Space Environ Med. 78(5), 4–191 (2007).
15 See Macdonald, J., Mathan, S.P., and Yeung, N. (2011). Trial-by-trial variations in subjective attentional state are reflected in ongoing prestimulus EEG alpha oscillations. Frontiers in Psychology, 2 (82); Mathan, S., Erdogmus, D., Huang, C., Pavel, M., Ververs, P., Carciofini, J., Dorneich, M., and Whitlow, S. 2008. Rapid image analysis using neural signals. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05-10, 2008). CHI '08. ACM, New York, NY, 3309-3314.
16 See Sajda, P., Pohlmeyer, E., Wang, J., Parra, L. C., Christoforou, C.,Dmochowski, J., et al. (2010). In a Blink of an Eye and a Switch of a Transistor: Cortically Coupled Computer Vision. Proceedings of the IEEE,98(3), 462-478. doi:10.1109/JPROC.2009.2038406; Pohlmeyer, E. A., Wang, J., Jangraw, D. C., Lou, B., Chang, S.-F., and Sajda, P. (2011). Closing the loop in cortically-coupled computer vision: A brain-computer interface for searching image databases. Journal of Neural Engineering, 8(3), 036025. doi:10.1088/1741-2560/8/3/036025.
and pupillometry technologies are also very helpful in sensing our attention and are helping us understand how we scan (sense) the world before making decisions.17
In the work of Sajda et al. (2010), the real-time brain signatures are not only used to help the analyst perform the image triage task more quickly, but they are also used to train the assistive computer vision system. Over time, with the brain-in-the-loop and computational system working as a team, the overall performance of the system will improve. Humans will be less overwhelmed with data, and the computational system will have learned from the ultimate expert. While these results have been seen most clearly in specific task domains (largely due to funding availability), we expect that as neurophysiological monitoring becomes more ubiquitous the implications will be much wider. Eventually, real-time neurophysiological responses for decision making could inform the computer in a way that enables the human to make better future decisions—perhaps even about physical manifestations of subconscious knowledge or levels of confidence. Similarly, another way to use this real-time information would be to feed the individual’s state back to the user. In neurofeedback paradigms, self-awareness can enhance a person’s ability to best manage his or her state, which may eventually include states related to optimal decision making. This approach has been demonstrated with physiological data in the Quantified Self community, and before long it will also extend to cognition. Likewise, although still early in development, real-time neural measures of team cognition are emerging.18 Research has demonstrated that optimally performing teams, on complex tasks like submarine navigation, can be detected from their collective brain signatures alone. Perhaps one day, the computer will join the networked teams, both collaborating on a task and sensing/optimizing the performance of its human teammates.
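The triage idea can be sketched in a few lines. This is an illustration in the spirit of cortically coupled computer vision, not the published Sajda et al. method: each rapidly presented image receives a decoded neural "interest" score, which is blended with a computer-vision score to reorder the image database so likely targets surface first. The scores and the blend weight `alpha` are invented for the example:

```python
def rerank_by_interest(images, eeg_score, vision_score, alpha=0.5):
    """Blend a decoded EEG 'interest' score with a computer-vision score
    for each image and return the images sorted so that the most likely
    targets come first. alpha controls how much weight the neural signal
    gets; both score dictionaries map image id -> score in [0, 1]."""
    combined = {im: alpha * eeg_score[im] + (1 - alpha) * vision_score[im]
                for im in images}
    return sorted(images, key=lambda im: combined[im], reverse=True)

images = ["img1", "img2", "img3"]
eeg = {"img1": 0.2, "img2": 0.9, "img3": 0.4}  # decoded neural interest
cv  = {"img1": 0.3, "img2": 0.6, "img3": 0.8}  # classifier confidence
ranking = rerank_by_interest(images, eeg, cv)
```

Closing the loop, as described above, would mean additionally using the high-interest images (and their neural scores) as new training labels for the vision component.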
In addition to purely decision-making states, research is now revealing brain states that may account for bias formation and influenceability.19 Recent research has shown that persuasive messaging about how others feel about painful stimuli can actually influence an individual’s physical perception of how painful that stimulus is (personal communication, DARPA Narrative Networks Program). Being able to detect, or at least guard against, bias and influence may be another role that neuroscience can play in this space.
The interactions described have been explicitly non-invasive, using only passively
17 See Qian, M., Aguilar, M., Zachery, K.N., Privitera, C., Klein, S., Carney, T., and Nolte, L.W. Decision-level fusion of EEG and pupil features for single-trial visual detection analysis. IEEE Transactions on Biomedical Engineering, 2009, 56(7):1929-37. DOI:10.1109/TBME.2009.2016670; Marshall, S.P. (2007). Identifying cognitive state from eye metrics. Aviation, Space, and Environmental Medicine, 78(5), 165-175; Marshall, S.P. (2007). Measures of attention and cognitive effort in tactical decision making. In M. Cook, J. Noyes, and V. Masakowski (Eds.), Decision Making in Complex Environments (pp. 321-332). Aldershot, Hampshire, UK: Ashgate Publishing.
18 See Stevens, R., Galloway, T., Wang, P., Berka, C., Tan, V., Wohlgemuth, T., Lamb, J., and Buckles, R. (2012). Modeling the neurodynamic complexity of submarine navigation teams. Computational and Mathematical Organization Theory, August 2012; DOI 10.1007/s10588-012-9135-9; Kovacs, A., Tognoli, E., Afergan, D., Coyne, J., Gibson, G., Stripling, J., and Kelso, J.A.S. Brain Dynamics of Coordinated Teams. In: Human Computer Interaction International, Orlando, FL. Springer, Heidelberg (2011).
19 See Falk, E.B., O’Donnell, M.B., and Lieberman, M.D. (2012). Getting the word out: Neural correlates of enthusiastic message propagation. Frontiers in Human Neuroscience, 6:313; Falk, E.B., Berkman, E.T., Mann, T., Harrison, B., and Lieberman, M.D. (2010). Predicting persuasion-induced behavior change from the brain. Journal of Neuroscience, 30, 8421-8424.
recorded signals on the surface of the scalp or functional imaging with magnets. The signals are measured in response to the human completing a task or engaging in a specific mental exercise. In contrast to passive recording, brain-computer interfaces (BCIs) provide a direct avenue of communication between the human brain and an external device. So far, much of the work in this area has focused on medical applications—for example, restoration of missing sensory capabilities. Enormous progress has been made in this area for prosthetics and motor control for locked-in patients; however, there has been little utility to date for direct BCI in everyday task settings. This is largely because, for most able-bodied individuals, motor actions are much faster and more precise than those translated through BCI mechanisms. BCI approaches, particularly those that do not require implantation of electrodes on the surface of the brain, require extensive training and are highly specific to the individual. In principle, it might be possible to exploit progress in this field for decision making. For example, Chapter 3 mentioned the possibility of using machines to help humans consider new alternatives and to reduce errors. In that context, the text was referring to devices that would prod people from outside their bodies. It is conceivable, however, that computer algorithms might one day enhance the quality of human deliberation through direct interaction with the brain. Similarly, they might extend human memory or provoke “out-of-the-box” thinking. Already there is a robust body of transcranial direct current stimulation (tDCS) research in cognition and neuroscience.20 In tDCS, a simple device is used to inject a weak electrical current into the brain through the scalp. This method has not yet entered mainstream paradigms for performance enhancement.
However, as these devices become readily accessible to consumers (www.foc.us), we may see practical applications emerge before the research community catches up.
Seamless computer-brain connections might not only supply “extra intelligence” to humans, as fantasized about in the paragraph above. This technology might also allow people to control machines with their thoughts.
Both investment in and further study of these applied questions in neuroscience will certainly inform the future of decision making, particularly as envisioned in this study. As neurophysiological and physiological monitoring becomes more commonplace in the work environment, researchers and engineers will leverage these inputs for increased performance across the entire decision-making system. Whether that means harnessing the natural talents of the human brain, aiding the decision maker through feedback on workload or bias, or eventually participating fully in an integrated team, the network of humans and computation will play a key role in these future systems.
20 See, for example, Manuel, A.L., David, A.W., Bikson, M., and Schnider, A. Frontal tDCS modulates orbitofrontal reality filtering. Neuroscience 2014; 264:21-27; Berker, A.O., Bikson, M., and Bestmann, S. Predicting the behavioural impact of transcranial direct current stimulation: issues and limitations. Frontiers in Human Neuroscience 2013; Bullard, L.M., Browning, E.S., Clark, V.P., Coffman, B.A., Garcia, C.M., Jung, R.E., van der Merwe, A.J., Paulson, K.M., Vakhtin, A.A., Wootton, C.L., and Weisend, M.P. Transcranial direct current stimulation's effect on novice versus experienced learning. Exp Brain Res. 2011 Aug;213(1):9-14. doi: 10.1007/s00221-011-2764-2. Epub 2011 June 26.
Although computers are of course well known for their computational powers, humans have some unique strengths in this arena. Human computation, such as crowdsourcing, is “a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms” (Law and von Ahn, 2011).
The Association for the Advancement of Artificial Intelligence has recognized this potential: It held its first conference on the topic in November 2013 (humancomputation.com/2013/). The organization invited submissions on “efforts and developments on principles, experiments, and implementations of systems that rely on programmatic access to human intellect to perform some aspect of computation, or where human perception, knowledge, reasoning, or physical activity and coordination contributes to the operation of larger computational systems, applications, and services.”
Quinn and Bederson (2011) make a distinction between human computation and crowdsourcing. They consider human computation to refer to replacing computers with humans, and “crowdsourcing” to mean “replacing traditional human workers with members of the public.” However, many other researchers use the terms interchangeably, as we do here.
A recent report from the National Research Council21 discussed the use of human computation, or crowdsourcing, for data acquisition, noting that “This has already been shown to be a powerful mechanism for tasks as varied as monitoring road traffic, identifying and locating distributed phenomena, and discovering emerging trends and events.” It points to tasks such as “deep language understanding and certain kinds of pattern recognition and outlier detection” that can be performed better by people than by machines, and notes a number of emerging opportunities to harness that capability. It goes on to make a distinction between crowdsourcing that leverages human activity, such as by tracking the way humans search for information on the Web or navigate a challenge, and that which leverages human intelligence, such as by enlisting multiple humans to work in parallel to label images or otherwise contribute to content and analyses.
That same report identified several types of crowdsourced systems that apply to data analysis:
• User-generated content sites. Wikipedia is a prominent example of a user-generated content site where people create, modify, and update pages of information about a huge range of topics. More specialized sites exist for reviews and recommendations of movies, restaurants, products, and so on. In addition to creating basic content, in many of these systems users are also able to edit and curate the data, resulting in collections of data that can be useful in many analytics tasks.
• Task platforms. Much of the interest around crowdsourcing has been focused on an emerging set of systems known as microtask platforms. A microtask platform creates a marketplace in which requesters offer tasks and workers accept and perform the tasks. Microtasks usually do not require any special training and typically
21 For more information see, National Research Council, Frontiers in Massive Data Analysis, National Academies Press, Washington, DC., 2013, pp. 137-138.
take no longer than 1 minute to complete, although they can take longer. Typical microtasks include labeling images, cleaning and verifying data, locating missing information, and performing subjective or context-based comparisons. One of the leading platforms at present is Amazon Mechanical Turk (AMT). In AMT, workers from anywhere in the world can participate, and there are thought to be hundreds of thousands of people who perform jobs on the system.
Other task-oriented platforms have been developed or proposed to do more sophisticated work. For example, specialized platforms have been developed to crowdsource creative work such as designing logos (e.g., 99designs) or writing code (e.g., TopCoder). In addition, some groups have developed programming languages to encode more sophisticated multistep tasks, such as Turkit (Little et al., 2010), or market-based mechanisms for organizing larger tasks (Shahaf and Horvitz, 2010). These types of platforms can be used to get human participation on a range of analytics tasks, from simple disambiguation to more sophisticated iterative processing.
• Crowdsourced query processing. Recently, a number of research efforts have investigated the integration of crowdsourcing with query processing as performed by relational database systems. Traditional database systems are limited in their ability to tolerate inconsistent or missing information, which has restricted the domains in which they can be applied largely to those with structured, fairly clean information. Crowdsourcing based on application programming interfaces (APIs) provides an opportunity to engage humans to help with those tasks that are not sufficiently handled by database systems today. CrowdDB (Franklin et al., 2011) and Qurk (Marcus et al., 2011) are examples of such experimental systems.
• Question-answering systems. Question-answering systems are another type of system for enlisting human intelligence. Many different kinds of human-powered or human-assisted sites have been developed. These include general knowledge sites where humans help answer questions (e.g., ChaCha), general expertise-based sites, where people with expertise in particular topics answer questions on those topics (e.g., Quora), and specialized sites focused on a particular topic (e.g., StackOverflow for questions related to computer programming).
• Massively multiplayer online games. Another type of crowdsourcing site uses gamification to encourage people to contribute to solving a problem. Such games can be useful for simulating complex social systems, predicting events (e.g., prediction markets), or solving specific types of problems. One successful example of the latter type of system is the FoldIt site [http://fold.it], where people compete to most accurately predict the way that certain proteins will fold. FoldIt has been competitive with, and in some cases has even beaten, the best algorithms for protein folding, even though many of the people participating are not experts.
• Specialized platforms. Some crowdsourcing systems have been developed and deployed to solve specialized types of problems. One example is Ushahidi [http://ushahidi.com], which provides geographic-based information and visualizations for crisis response and other applications. Another such system is Galaxy Zoo [http://www.galaxyzoo.org], which enables people to help identify interesting objects in astronomical images. Galaxy Zoo learns the skill sets of its participants over time and uses this knowledge to route particular images to the people who are most likely to accurately detect the phenomena in those images.
• Collaborative analysis. This class of systems consists of the crowdsourcing platforms that are perhaps the most directly related to data analytics at present. Such systems enable groups of people to share and discuss data and visualizations in order to detect and understand trends and anomalies. Such systems typically include a social component in which participants can directly engage each other. Examples of such systems include ManyEyes, Swivel, and Sense.us.22
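A step common to many of the systems above is aggregating redundant human answers into a single result: the same microtask is given to several workers and the consensus answer is kept, optionally weighting each worker by an estimated accuracy (in the spirit of Galaxy Zoo's learned participant skills). The sketch below is illustrative only; the workers, labels, and reliability figures are invented:

```python
import math
from collections import defaultdict

def weighted_vote(answers, reliability=None):
    """Combine redundant crowd answers for one item. `answers` maps worker
    id -> label. With no reliability estimates this is a plain majority
    vote; otherwise each worker votes with a log-odds weight derived from
    an estimated accuracy (a simple naive-Bayes-style combination)."""
    scores = defaultdict(float)
    for worker, label in answers.items():
        if reliability is None:
            weight = 1.0
        else:
            p = reliability[worker]
            weight = math.log(p / (1.0 - p))
        scores[label] += weight
    return max(scores, key=scores.get)

answers = {"w1": "cat", "w2": "dog", "w3": "dog"}

# Plain majority vote: "dog" wins 2 to 1.
plain = weighted_vote(answers)

# A highly reliable dissenter can overturn two marginal workers.
weighted = weighted_vote(answers, reliability={"w1": 0.95, "w2": 0.6, "w3": 0.6})
```

Real platforms layer more machinery on top of this (per-topic skill estimates, routing of items to the most reliable workers, payment and spam controls), but the redundancy-plus-aggregation pattern is the computational core.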
Other useful overviews of this topic include Kamar, E., et al., Combining Human and Machine Intelligence in Large-scale Crowdsourcing, AAMAS 2012 (http://research.microsoft.com/pubs/162286/galaxyZoo.pdf), and Shahaf, D., et al., Generalized Task Markets for Human and Machine Computation, AAAI 2010 (http://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/viewFile/1951/2132). While the area of human computation is still quite new, the wide range of innovation currently emerging seems likely to someday produce results that can be applied to complex decision making.
22 Extended quote taken from pp. 139-141 of Frontiers in Massive Data Analysis, National Research Council, National Academies Press, Washington, DC, 2013.