The third session of the workshop turned to the ways artificial intelligence (AI), big data, and other emerging technologies may be influencing the nature of narratives and the ways in which they are formed, as well as the impact of social media on the development and dissemination of narratives. Moderator Doug Randall, Protagonist, opened the discussion with a brief look at some of the ways technology is affecting daily life. The average American, he noted, spends 12 hours per day consuming electronic media of some sort, including 2 hours spent on computers, 3 on mobile devices, and 4 watching television. “Those are a lot of narratives getting thrown at us every day,” he commented, adding that these interactions produce an “overwhelming amount of data,” as well as an opportunity to understand developing narratives. In his view, rapidly increasing rates of both financial investment and research in developing technologies, including AI, machine learning, natural language processing, and social media monitoring, reflect the major impact these technologies are having.
A computer scientist by training, Michael Young, University of Utah, described aspects of his work in building computational models that can be used analytically. He began by explaining that his objective in building computational models of narrative is to produce not only textual narratives but also cinematic narratives for 3D virtual worlds and interactive narratives for virtual environments and computer games. He noted that his work draws both on collaborations with cognitive psychologists who have studied how people develop mental models and on techniques originally developed for modeling and programming robots.
Young’s first step in generating a narrative model is to form representations using large amounts of event data. He then creates a structure for the story and the underlying discourse to convey the story through text or cinematic means. It is this first step, he noted, that is most relevant to the Intelligence Community (IC), as the representations used to produce the narrative could help an analyst make sense of the data. He added that the storylines are based on what he termed primary or atomic building blocks—events or actions that propel a story forward—and the algorithm he develops to model the story sequences the blocks and searches for all possible threads that might connect them.
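The sequencing step Young describes can be pictured as a search over causally consistent orderings of event blocks. The following is a minimal, illustrative sketch only; the event names and the precondition/effect representation are assumptions made for this example, not Young's actual model:

```python
# Illustrative sketch of sequencing "atomic building blocks" into story
# threads. The events, their preconditions, and their effects are invented
# for illustration; they are not Young's actual representation.
EVENTS = {
    "steal_map": {"pre": set(),        "eff": {"has_map"}},
    "find_cave": {"pre": {"has_map"},  "eff": {"at_cave"}},
    "take_gold": {"pre": {"at_cave"},  "eff": {"has_gold"}},
}

def threads(state, goal, remaining, path=()):
    """Enumerate every ordering of events that is causally consistent
    (each event's preconditions hold when it fires) and reaches the goal."""
    if goal <= state:
        yield path
        return
    for name in sorted(remaining):
        ev = EVENTS[name]
        if ev["pre"] <= state:
            yield from threads(state | ev["eff"], goal,
                               remaining - {name}, path + (name,))

for seq in threads(set(), {"has_gold"}, set(EVENTS)):
    print(seq)  # ('steal_map', 'find_cave', 'take_gold')
```

Here only one thread connects the blocks; with a richer event set, the same search would surface every possible thread, which is the sense-making output Young suggests could be useful to an analyst.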
Young explained that when building narrative models, he is guided by research from cognitive psychologists on mental models (conceptual representations used to make sense of the world). Mental models, he continued, tend to remain constant across media and across various interactions, so that as situations are created and evolve within the telling of a story, causation, intentionality, and actions can be grounded in these mental models.
Catherine Tejeda, Parenthetic, offered brief remarks on her research on the question of how human–computer interactions influence the flow of narrative. She is particularly interested in the methods used by marketers to measure attitudes and behaviors, which are in turn used to determine what information is directed to individuals and groups online and how. One way marketers measure attitudes and behaviors, she explained, is by observing the choices a person makes online. Such actions as shopping online, reading news articles, and posting on social media can reveal a person’s attitudes and behaviors, she added. Just as nations are branded by the image they choose to portray, she suggested, online actions determine how a person is perceived by those observing their behavior.
Questions raised in the discussion session focused first on the context for understanding narratives. Referring back to earlier discussion of the point that narratives exist and are understood within the scope of narrative landscapes, a participant asked whether there is a similar landscape that must be considered when building a model and what this means for the creation of models. Young responded that this is actually a critical perspective in computational research and that a debate exists as to whether story structure is separate from the discourse component. For example, he elaborated, if an engineer creates a story, some believe that it should then move to another department so that someone else can work on the discourse. Young believes that the communicative goals of the story can best be achieved if the story and discourse are created concurrently.
Another question was whether the link between expressed narrative (purposefully influencing a narrative to affect public perception) and behavior is a consideration in the computational modeling of narrative. Young responded that, unlike synthetic narratives (narratives automatically fabricated by a software program), the narratives generated from existing data must be created by the actions of a participant (e.g., a game player or social media user). However, he continued, while participants do influence events, they do not act with the intention of influencing the narrative. In fact, he explained, participants are usually operating with limited knowledge of the environment in which they are acting.
Participants also raised questions related to AI. Randall wondered what advances in this technology might mean for people’s ability to analyze, understand, and shape narratives in the future. Tejeda predicted that the use of automated conversation, which is based on machine learning, is likely to increase in the future. Although it is used primarily as a way to initiate chats with online customer service agents, she observed, it has recently become popular with health firms, medical practices, and large health insurance agencies. Young pointed out that while machine learning has experienced great successes in recent years, it is only one form of AI. He suggested that the combination of increased access to data, improved processing power, and new algorithms has allowed researchers to extract data structures that have helped solve critical engineering problems. However, he asserted, while machine learning will most likely continue to advance such scientific breakthroughs as self-driving cars, it will not answer such important questions as how to understand the cognitive processes around narrative.
Making reference to a science fiction–based article that describes a machine so advanced that it sought to unite the world by releasing emotionally compelling stories on a global scale, one participant wondered whether the panelists believed something like this could ever happen. Young suggested that the machine itself would not have the power to do such a thing. An argument could be made in terms of the number of people that received the narrative, but the power would come from the narrative rather than the machine. A participant suggested that, while the infrastructure for such an event exists, the real issue when thinking about the needs of the IC would be how best to time the release (e.g., a gradual release over a period of 5 years versus an immediate release for a more disruptive effect).
Another participant suggested that persuasion typically occurs at the individual level through one-on-one conversation. Scientists, he continued, are beginning to develop machines to initiate conversation for the benefit of humans. He suggested that it may not be long before robots can be used to create and shape narratives so compelling that humans will become engaged in back-and-forth conversation with the robots. He was curious to know whether the panelists had encountered this type of human–machine interaction in their work. Young replied that, although he does not work with robots, he does see this type of interaction in computer games. He illustrated the point by explaining that when people begin playing a computer game, they enter a virtual world that requires them to make choices that determine how a story will unfold. If a player unintentionally makes a choice that derails the storyline, the computer will respond by creating a new pathway so that the player can continue to move through a new narrative arc without ever knowing a derailment occurred. Young noted that this technology is also used to create training programs, such as those used to build social skills in soldiers. The interactive narrative generation allows the computer to personalize the training to the individual. Sara Cobb, George Mason University, added that, just as new structure patterns could be created in Young's example, it may be possible to design bots that can create structure patterns to encourage positive outcomes for real-world issues, such as conflict deescalation, health, positive emotion, and prosocial values.
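One way to picture the re-planning Young describes is a planner that is simply re-run from the player's new state whenever an action invalidates the intended storyline. This is a toy sketch under invented event names and state facts, not the mechanism of any actual game system:

```python
from collections import deque

# Toy illustration of narrative re-planning; the events and state facts
# are invented for this example, not taken from an actual game system.
EVENTS = {
    "ask_guard": {"pre": {"guard_alive"},    "eff": {"knows_password"}},
    "bribe_spy": {"pre": set(),              "eff": {"knows_password"}},
    "open_gate": {"pre": {"knows_password"}, "eff": {"inside_castle"}},
}

def plan(state, goal):
    """Breadth-first search for the shortest event sequence reaching the goal."""
    frontier = deque([(frozenset(state), ())])
    seen = {frozenset(state)}
    while frontier:
        st, path = frontier.popleft()
        if goal <= st:
            return path
        for name, ev in EVENTS.items():
            if ev["pre"] <= st:
                nxt = frozenset(st | ev["eff"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + (name,)))
    return None  # no sequence of events can reach the goal

state = {"guard_alive"}
print(plan(state, {"inside_castle"}))  # ('ask_guard', 'open_gate')

# The player unexpectedly kills the guard, derailing the planned arc;
# the system re-plans from the new state and the story continues.
state.discard("guard_alive")
print(plan(state, {"inside_castle"}))  # ('bribe_spy', 'open_gate')
```

The player never sees the derailment: the second call quietly substitutes a new pathway to the same narrative goal, which is the behavior Young describes in games and soldier-training applications.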
Referring to Tejeda’s presentation, a participant asked how influence is related to such technologies as machine learning and AI. Returning to the subject of measuring attitude and behavior, Tejeda responded that technology helps researchers move beyond traditional survey work into more observational techniques. For example, she said, in addition to “clicks” and “likes,” a researcher might analyze online comments to provide context for those choices. Young pointed out that technology could also be used to influence the way stories are perceived and understood. For example, he explained, a computational model might influence participatory thinking by generating a storyline that is presented in a way that builds suspense. It is also possible, he suggested, that a storyline could be presented to an analyst in a way that would influence that person to highlight certain inferences.
Thinking in terms of the spread of online messages, one participant asked whether most people are more concerned with the number of times something is shared (e.g., Kim Kardashian shares content with her online followers) or how it is shared (e.g., Kim Kardashian shares content with her followers and states that it changed her life). Tejeda responded that there is usually more concern as to how content is shared. Randall followed up by noting that the extent to which people engage with the content and how it is referenced are also important considerations.
Another measure of interest in narrative research, Randall suggested, is engagement. He argued that measuring both the metadata and the substance of communications can help researchers know whether people are accepting and connecting with the narrative. Young observed that computer scientists tend to be more concerned with comprehension than engagement. They also want to know, he continued, how the structures used to increase comprehension contribute to engagement or to proxies for engagement, such as suspense. “What is the cognitive machinery around suspense?” he asked rhetorically. “Is that knowable or can we begin to view the cognitive processes like engagement as a result of the way that we design the narrative itself?” Tejeda commented that reach is also important when measuring engagement—in other words, not just who engaged with the content but who saw it.
A participant noted that, if combined, new technologies such as IBM’s Watson (a cognitive computing system) and Cambridge Analytica (a data mining and analysis program capable of public profiling) could have the power to exert influence on a scale that would be both marvelous and terrifying. The participant added that because such technological advances are making it easier to manipulate narrative, knowing how narratives are created, manipulated, and disseminated could help protect people from toxic narratives. Tejeda commented that while the possibility of companies using technology to influence the choices made by an individual is disturbing, she is not sure that a mass influence campaign would be effective. Returning to the idea that individual narratives are understood and shaped by the narrative landscapes in which they fall, she suggested that attempts to influence on such a large scale may fail.
Thinking about the power and scope of narrative, Matsumoto asserted that learning how a narrative might be limited or stopped is also essential to understanding influence. Furthermore, he asked, “What is the source that is going to stop it? Is that source available right now so that it limits the everyday influence of narrative?” Randall suggested that while technology is advancing and researchers are working to collect large amounts of data, the field is simply not there yet. Johnson made the point that not all narratives are believable. Tejeda agreed and added that it is not always easy to trust that a source is providing accurate information. Young suggested that the way a text is written may play a role in its believability, with some narratives perhaps being easier to follow than others. Furthermore, he noted, as had been mentioned in previous panels, the narrative may not line up with a person’s internal narratives. Remarking that work on credibility indicators has been done in other domains, Matsumoto wondered whether the panelists were aware of similar research done to index and measure believability indicators. Young acknowledged that researchers in computational modeling are beginning to look at these questions, but they are not far enough along to model them.
In the social sciences, Roberto Franzosi, Emory University, explained, researchers use theory to develop models that can then be used to make predictions. However, he continued, there has been a recent trend among computer scientists to suggest that big data has made theory obsolete. Young agreed that this is a problem. While the data-driven approach may work well for solving engineering problems, he observed, very few scientific questions are answered this way. For example, he said, if one is studying the interactions among invariant properties in an external model (e.g., cognitive processes), an internal model (e.g., the structural processes within the properties of a narrative), and an artifact (the interface or discourse) to see how they might change when manipulated, theory is needed to answer the questions that arise.