The first workshop panel focused on using networks to understand the functioning of individuals in the context of the social and physical worlds. Matthew Brashears, University of South Carolina, a moderator of this panel, raised several points about networks to start the session. First, he suggested that physical proximity can be an ambiguous signal; that is, physical closeness does not necessarily indicate that individuals have a significant interaction or relationship. Second, he observed that electronic devices designed to serve as sensors, tracking locations and other types of data, can also lead to interactions and modify individuals’ behaviors. For example, people may use these technologies for “impression management,” using them strategically to achieve particular objectives and even introducing a degree of deception in the information that is captured.
Brashears also referenced the large amount of data collected through physical sensors and online interactions that can be used for network studies. He suggested that researchers think not only about data they can collect but also data they can scrape. The challenge today, he elaborated, is not finding data; rather, the challenge is identifying the theoretical and methodological approaches best suited to sorting through the data to find what is most useful for insights on network thinking.
Brashears emphasized that what is considered to be a network, as well as how that network is measured, depends fundamentally on the context in which research on the network is conducted. He cited three different types of context: geographic, political, and cultural. Relationships, he said, often depend on proximity with respect to these contexts. That is, connections are made when one person comes in contact with another, physically
or virtually, and stronger connections result when the two share cultural knowledge or political ideas. One challenge today, according to Brashears, is to understand the echo chambers created by social media algorithms that intentionally connect people with similar interests, and how they are changing the ways in which social networks are formed and used to achieve individual and social goals.
Regina Joseph, New York University, described her research on supersynthesizers, defined as individuals with cognitive and analytic skills that go beyond those of superforecasters.1 Superforecasters, she explained, are individuals identified as “most accurate in determining future geopolitical and economic outcomes.” She argued that supersynthesizers are needed as members of analytic teams within the Intelligence Community (IC) to tackle national security challenges presented by the changing nature of social networks and the ways in which the public retrieves, consumes, and distributes information. In the past, she suggested, the influence of information could be represented by a “one-to-many broadcast model,” with information being distributed through radio, television, and print, whereas today, the addition of social media and global networks has resulted in a “many-to-many broadcast model.” This new model presents challenges, she said, because of its speed, ease of access, and vulnerability to entrenching confirmation biases (the echo chamber effect noted by Brashears).
Joseph continued by asserting that society is at a point where technology for documenting truth is about to be superseded by technology for fabricating deception. She elaborated that society and the security community have become accustomed to a level of protection afforded by technologies that record images and sound (e.g., security cameras). Increasingly, however, that protection is being questioned as people develop and use technologies and software programs to create fake images, to mimic sounds of people speaking, and to simulate videos (i.e., generating audio, recording faces, and mapping them together to create the illusion of a captured speech or interaction on video that never actually occurred). Through this growing ability to fabricate digital information, Joseph argued, societies across the globe face greater difficulty in distinguishing between reality and false narratives. They are subject to more intrusions and influences in the public space via social networks, whether it be propaganda, disinformation, or legitimate attempts to sway public opinion.
Joseph considered this new information environment through the lens of Aldous Huxley, who wrote of an ultimate revolution whereby “control takes place through willing participation on the part of the individuals.” She used the example of Facebook, where vast numbers of individuals have knowingly given up their privacy and some security in exchange for the social and informational transactions enabled by the site. These transactions, she suggested, can have adverse impacts if manipulated for emotional effect. From a Huxleyan viewpoint, she asserted, the addictive behavior of active social media users constitutes a form of servitude; devotion to such platforms and the enormous amount of personal data now owned by social media companies underscore the potential for effective control of the public.

1 For more information on superforecasting, see Tetlock, P.E., and Gardner, D. (2016). Superforecasting: The Art and Science of Prediction. New York: Broadway Books.
Joseph expressed concern about the shrinking diversity of information being accessed by members of the public, as well as the increasing risks of disinformation. Such influence, she suggested, may become a threat to national security, warranting consideration of counterinfluences. Identifying appropriate counterinfluences, she continued, requires early detection, which in this new information environment will depend on intelligence analysts’ ability to “harness the advantages of open-source information and social networks” and on the development of supports to improve analytic teams’ situational awareness and their ability to detect signals in noise.
Joseph suggested that a new type of expertise is needed. She pointed out that the nation’s education system privileges vertical specialization and creates a talent pool of specialists with very particular forms of knowledge. Drawing on the McKinsey concept, she identified these specialists as “T-shaped” individuals—those who have in-depth knowledge in one or two areas and skills complementary to other areas.2 She argued that such individuals are a good start to meeting the needs for expertise, but that the needed expertise would better come from “comb-shaped” individuals—those “who have multiple areas of niched expertise that are connected with long and broad deep generalist knowledge.” She explained that this “comb-shaped” concept emerged from the study of superforecasters that identified individuals with superior general knowledge, better resistance to biases, and multiple areas of expertise.
In closing, Joseph identified three areas for future research: (1) research to identify individuals as supersynthesizers, (2) research to develop training that can hone skills for synthesizing information, and (3) research and development to produce hybrid cognitive-assist platforms and technologies to aid humans in their analytic work. She pointed to some avenues of inquiry in her own research that are identifying and targeting these needs: anticipatory intelligence; cognitive prosthetics; and human–machine innovation, including neurocognitive approaches and analytical quantification using sophisticated crowdsourcing.

2 For more information, see, for example, https://www.psychologytoday.com/blog/career-transitions/201204/career-success-starts-t [February 2018].
Leslie DeChurch, Northwestern University, provided an overview of the research literature on teams and teaming and how teams can be understood from a social network perspective. A primary conclusion from her research, she said, is that “teams assemble from networks to form networks of teams whose success can be predicted by looking at the networks that exist within and between teams.” She went on to define several terms salient to her presentation.
First, “teams in organizations” refers to the traditional view of a team as a predefined, relatively small group of people with a clear boundary and shared goal. Second, “organizing in teams” denotes a dynamic group of many more people with a fluid boundary and a shared purpose. Finally, “teaming” is defined as “purposive, collaborative interaction among a set of individuals.” In the rest of her presentation, DeChurch identified literature that has examined how to assemble, manage, detect, and/or disrupt teaming.
According to DeChurch, two types of literature in the behavioral sciences—in the areas of team assembly and team composition—address how teams assemble from networks. In the team assembly area, she said, researchers examine what social forces drive people to want to work together. Findings from this research, she continued, indicate that self-forming teams avoid diversity, leading to networks that are often homophilous and face challenges with bringing in newcomers; they avoid becoming too large; and they tend to assemble with previous collaborators. In the team composition area, researchers examine the best team composition for optimal performance. DeChurch observed that findings in this area point to the benefits of diversity—a diverse range of expertise, as well as a balance between newcomers and incumbents. She noted further that what is known about how to design good teams is at odds with what is known about how teams naturally assemble.
DeChurch presented some findings from a recent study of factors affecting the selection of teammates among environmental scientists and social psychologists from two different universities. Participants were given a complex problem to address, along with a recommender system that algorithmically identified potential collaborators. DeChurch reported that these scientists were most likely to team up with prior collaborators. Those not choosing prior collaborators were more likely to team up with a “recommended teammate” (someone in the top 10 from the recommender system) as opposed to a random selection. DeChurch pointed out that the algorithmic teammate recommendations significantly improved the chances of teaming up for those who had not previously collaborated.
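The logic of such a recommender system can be illustrated with a minimal sketch. The names, collaboration data, and Jaccard-overlap scoring below are illustrative assumptions, not the algorithm used in the study DeChurch described:

```python
# Hypothetical teammate recommender: candidates who have never worked
# with a person are ranked by how much their prior collaboration
# networks overlap (Jaccard similarity), so matches can be made
# through shared collaborators.

# Prior collaborations: person -> set of past collaborators (invented).
collaborations = {
    "ana": {"ben", "cho", "dev"},
    "ben": {"ana", "cho"},
    "cho": {"ana", "ben", "eve"},
    "dev": {"ana", "eve"},
    "eve": {"cho", "dev"},
}

def jaccard(a, b):
    """Overlap of two collaborator sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def recommend(person, k=3):
    """Rank non-collaborators of `person` by shared prior collaborators."""
    own = collaborations[person]
    candidates = [
        (jaccard(own, others), name)
        for name, others in collaborations.items()
        if name != person and name not in own
    ]
    candidates.sort(reverse=True)
    return [name for score, name in candidates[:k] if score > 0]

print(recommend("ana"))  # eve shares two prior collaborators with ana
```

A production system would of course weight many more signals (expertise, availability, past performance); the point here is only that a simple network measure can surface viable partners outside a person's existing ties.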
DeChurch went on to point to relevant literature in the areas of multiteam systems, team interaction networks, and leadership networks (how teaming is managed). She defined a multiteam system as a network comprising two or more teams that pursue both proximal team subgoals and distal system-level goals. Research in this area, she noted, has shown that leadership that fosters connections within and between teams is an important element of effective multiteam systems. A concept that has emerged in this research, she added, is that of group social capital, defined as the set of informal ties connecting teams to other groups and providing access to people in other agencies or organizations.
DeChurch pointed out that research in the area of team interaction networks has expanded on a long history of literature, largely in psychology, focused on the properties of well-functioning groups and teams. One of the findings from this work, she said, is “that small groups suffer in their ability to share information…so decision biases and heuristics that individuals hold when they process information only get further magnified when people [stay confined to a small group].” She noted that information sharing and decision making improve among teams with decentralized information-sharing networks.
Research in the area of leadership networks, DeChurch observed, has examined both formal and informal leadership. The findings from this work indicate that teams with formal leaders who are more central (i.e., have many connections outside the team) are more effective than teams with less central leaders. DeChurch added that the work on teams with informal leadership (i.e., teams with few if any prescribed leaders) has found that such teams still need individuals to assume the role of providing motivation and direction for the group, and that those informal leaders with “dense influence ties” improve the effectiveness of their teams. She suggested that the reason outside or multiteam connections are important is that they improve information sharing, which in turn can bring more facts and ideas to teams for consideration, as well as increase understanding of and appreciation for a team’s progress and goals.
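The centrality idea DeChurch cites can be sketched in a few lines. The team and its influence ties below are invented for illustration; degree centrality is only one of several centrality measures used in this literature:

```python
# Minimal sketch: in an influence network, the member tied to the most
# others (highest degree centrality) is a candidate informal leader.

# Undirected influence ties within a hypothetical five-person team.
ties = [
    ("ana", "ben"), ("ana", "cho"), ("ana", "dev"),
    ("ben", "cho"), ("dev", "eve"),
]

def degree_centrality(edges):
    """Fraction of other members each member is directly tied to."""
    members = {m for edge in edges for m in edge}
    degree = {m: 0 for m in members}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(members)
    return {m: d / (n - 1) for m, d in degree.items()}

centrality = degree_centrality(ties)
leader = max(centrality, key=centrality.get)
print(leader, centrality[leader])  # ana is tied to 3 of the 4 others
```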
DeChurch drew attention to one study that examined how prompts or interventions affect information sharing among online teams. The study found that several interventions changed how teams interacted by increasing the amount of information sharing beyond that resulting from simply instructing team members to collaborate and share unique information. DeChurch explained that the three interventions were developed from the psychological literature on how groups process information: (1) the task was framed as having a demonstrably correct solution; (2) a cooperative norm among the team was created; and (3) participants were instructed
to use a structured discussion process. She believes this study was useful because it illustrated the possibility of intervening with teams to improve (or perhaps disrupt) their collaboration.
DeChurch offered some applications of the literature on organizing in teams to the work of intelligence analysts. She referenced a report that reviewed the prewar intelligence reports on Iraq’s weapons programs and intelligence oversight after the terrorist attacks of September 11, 2001,3 which identified shortcomings in teamwork. She cited from that report issues of “groupthink dynamics” that led to assumptions and failure to use institutional mechanisms designed to challenge those assumptions, as well as issues of bureaucratic structure and complex policies that impeded information sharing. These events, she reported, led to calls for reorganizing analytic teams and gaining new knowledge that would help the IC with teaming, enabling analysts to “adaptively configure and reconfigure themselves” as necessary. She pointed to four themes in the literature that are important to the IC community: How can agile analyst teams be assembled? How can these teams be managed? How do analyst teams detect adversarial teams? Finally, how can adversarial teams be disrupted?
In closing, DeChurch presented her thoughts on priorities for future research. First, she suggested the development of “support mechanisms that will enable effective team assembly practices and team self-regulatory processes.” Second, she offered several research ideas that could provide the foundation for developing these mechanisms: (1) identify the network structures that optimize the performance of analyst teams; (2) validate network interventions; and (3) develop technologies that aid the work of analyst teams.
Zachary Neal, Michigan State University, provided an overview of research on urban networks, drawing on events at a recent symposium in Belgium sponsored by the Urban Studies Foundation. He described the symposium as bringing together 40 scholars in both the social and natural sciences from around the world to discuss a future research agenda at the intersection of urban studies and network science. He commented that the field of urban network research has grown in size and scope in the last decade. Participants at the symposium represented several different disciplines and areas of expertise: those in physics and mathematics specialize in big data and methodological sophistication; those in geography specialize in understanding and modeling spatial data; those in sociology specialize in understanding human behavior and the formation of groups; and those in urban planning specialize in the practical effects of infrastructure and urban policy.

3 Rosenbach, E., and Peritz, A.J. (2009). Confrontation or Collaboration? Congress and the Intelligence Community. Boston, MA: Harvard Kennedy School, Belfer Center for Science and International Affairs.
Neal offered a quote from the work of Michael Batty to express the importance of research in urban networks: “To understand place, we must understand flows, and to understand flows, we must understand networks.” Neal explained that places are where events happen, and that to forecast events, one must have a better understanding of networks to make sense of the flow of ideas and behaviors that affect events.
Neal helped the audience understand the nature of an urban network by identifying some of its features. The entities or nodes in an urban network can be, for example, people, road intersections, or cities. The connections or ties between the nodes can be financial, human, or commodity flows. The scale of an urban network can be within a city or among multiple cities. Neal pointed out that the research on urban networks has been characterized by an overreliance on node-level analysis as opposed to analysis at other levels (e.g., the dyadic, or pairs of nodes, level and the network level). However, he added, the field has made use of multiple layers in its analyses; given information from geographic information systems, for example, street networks can be overlaid on social networks.
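Neal's distinction among levels of analysis can be made concrete with a small sketch. The intercity flow data below are invented, and the three summaries are generic examples of each level, not measures from any particular study:

```python
# The same urban network data summarized at three levels of analysis:
# node level (how connected is each city), dyadic level (is a given
# pair of cities tied), and network level (overall density).

flows = {  # (origin, destination) -> commodity flow volume (invented)
    ("A", "B"): 120, ("B", "A"): 90,
    ("A", "C"): 40,  ("C", "B"): 75,
}
cities = {c for pair in flows for c in pair}

# Node level: total flow passing through each city.
throughput = {c: 0 for c in cities}
for (o, d), v in flows.items():
    throughput[o] += v
    throughput[d] += v

# Dyadic level: is a given ordered pair of cities connected at all?
def tied(o, d):
    return (o, d) in flows

# Network level: density of the directed network
# (observed ties over all possible ordered pairs).
n = len(cities)
density = len(flows) / (n * (n - 1))

print(throughput["A"], tied("C", "A"), round(density, 2))
```

The overreliance Neal describes amounts to stopping at the first of these summaries; dyadic and whole-network measures use the same data but answer different questions.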
According to Neal, the symposium had made clear that the field of urban networks is conducting research relevant to the IC. This includes (1) economic work to identify global and regional economic patterns and understand how and why those patterns are changing over time; (2) transportation research to ensure accessibility from one place to another, but also to understand the robustness of systems under failure or attack; (3) investigations of online social networks to track (via geotagged posts) the spatial diffusion of illnesses or ideas; and (4) research on communities that examines where people live and travel and how infrastructure affects community cohesion, segregation, and polarization.
Given both the range of questions that arise in urban network research and the number of relevant disciplines, Neal sees a need for researchers to consider more carefully when a network science approach is appropriate for a particular research question to address some of the issues he has noted in the literature. One such issue is that a network is often presented as a naturally occurring thing, when it is in fact a simplification used for conceptual and analytic clarity. Another issue, Neal asserted, is that paths must be meaningful if a matrix of data is to be treated as a network; he argued that correlation matrices and origin–destination matrices are not networks. He suggested the need for minimum reporting standards for publication for those in the field of network science, and urban networks in particular. His ideas included having researchers define both the nodes
and edges in their work and demonstrate how they were measured, as well as the validity of those measurements. He also suggested that researchers be required to provide a theoretical justification for why particular nodes and edges together form a network that matters.
Neal called attention to one of the major challenges in the field—whether available network methods work for spatially embedded networks. Many current methods were developed from nonspatial graph theory, he observed, adding that the field has discovered that “conventional community detection algorithms tend to work fairly poorly if the network is spatially embedded.” By poorly, he said he meant that the algorithms almost always find the already-obvious proximate clusters of nodes and provide no deeper information. In addition, he said, conventional statistical null models fail to account for the spatial embedding present in most urban network contexts. Furthermore, he said, “Conventional summary statistics . . . used in the network literature like clustering coefficients and path lengths often don’t take into account how those features might be different in spatially embedded networks.” He urged researchers to start developing new theoretical and methodological tools.
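Why spatial embedding breaks a nonspatial null model can be shown with a toy example. The grid coordinates and the distance-based tie rule below are illustrative assumptions chosen to make the effect stark:

```python
# In a spatially embedded network, ties form mostly between nearby
# nodes, so a nonspatial null model (one uniform tie probability for
# every pair) misstates what is "surprising": nearly all close pairs
# are tied and virtually no distant pairs are.
import itertools

coords = {i: (i % 5, i // 5) for i in range(25)}  # nodes on a 5x5 grid

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

# Spatial tie rule: connect every pair closer than 1.5 grid units.
edges = [(a, b) for a, b in itertools.combinations(coords, 2)
         if dist(a, b) < 1.5]
edge_set = set(edges)

pairs = list(itertools.combinations(coords, 2))
p_uniform = len(edges) / len(pairs)   # nonspatial null: one global rate

# Observed tie rate by distance band shows the decay the null ignores.
near = [p for p in pairs if dist(*p) < 1.5]
far = [p for p in pairs if dist(*p) >= 1.5]
p_near = sum(1 for p in near if p in edge_set) / len(near)
p_far = sum(1 for p in far if p in edge_set) / len(far)
print(p_uniform, p_near, p_far)  # near pairs always tied, far pairs never
```

Against the uniform rate, every near tie looks like excess clustering; a spatial null that conditions on distance would treat those same ties as expected, which is the kind of tool Neal argues the field still needs.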
Guido Cervone, Pennsylvania State University, provided an overview of what is known as “cyberscience” and then discussed findings from his own research on citizen science4 efforts following a nuclear emergency in Japan. He pointed out that science has progressed for centuries largely through a combination of observation, theory development, and experimentation. But in the last 30 years, he said, something fundamentally different has been added to the mix—data- or computation-enabled research—to which the label “cyberscience,” or in his field “geoinformatics,” has recently been given. In Cervone’s opinion, this approach “has been transformative” with respect to the way research is undertaken.
Cervone observed that there exist many different types of data and a number of large datasets. He explained that the largest datasets include data from commercial transactions, videos and audio, remote sensing, numerical modeling,5 and geographic submissions volunteered through social media or online citizen science projects. He then asserted that data are not particularly useful by themselves, and described how data, information, and knowledge differ: data are a collection of symbols and other recorded material, information is interpreted data, and knowledge is patterns in data used to solve problems. He suggested that the way to get from data to knowledge, particularly with large amounts of data, is by using automated algorithms. These algorithms, he explained, are generally created outside of such fields as geography; they emanate from mathematics, statistics, computer science, and other fields that study databases, machine learning, and artificial intelligence. He expressed his view that in the future, the content and computational fields could work together more closely to develop algorithms needed to solve difficult problems.

4 “Citizen science” refers to science projects that use members of the general public to collect and/or analyze data.

5 Cervone pointed out, “numerical models generate orders of magnitude of new data from their initial datasets.”
Cervone then turned to his research on the reliability of a citizen science contribution to a dataset. He showed satellite images and video of the 2011 nuclear crisis that occurred in Fukushima, Japan, after an earthquake triggered a tsunami that caused a massive failure in the cooling system at a nuclear power plant. He reported that about a month after the event, the U.S. Department of Energy and its Japanese counterpart began collecting remote sensing data to examine the extent of radiation contamination from the accident. These data were collected over a 2-month period and would serve as the official measurements of the radiation. In addition to this government effort, however, there also were unofficial measurements. Cervone explained that in the immediate aftermath of the accident, the University of Tokyo initiated a citizen science project called SAFECAST, which distributed about 8,000 instruments6 to people living in the Fukushima area so they could collect crowdsourced radiation data. These people became a network of data contributors, and Cervone reported that to date, 70 million of these observations have been uploaded to the SAFECAST server.
Cervone then raised the question of whether data collected by citizens could be useful for national security applications. He reported that by itself, the SAFECAST collection of data tends to overpredict the extent of radiation. Moreover, he explained that the coverage of the citizen data varies widely, with some locations having many observations and others having few (given the mobility of people with instruments). Nevertheless, his research group has been able to make several interpolations over time to compensate for this variability in the data, and Cervone reported that with this calibration, the distribution in the unofficial data matches well with the official data. He concluded by asserting that citizen collection of data is useful, particularly because it provides continuous measurements beyond what has been collected by official efforts. He warned, however, that spatial and temporal patterns in the collected data may affect what can be known from the data. He also cautioned that this particular dataset is currently publicly accessible but may not be in the future.
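The kind of processing Cervone describes can be sketched in miniature. The readings below are invented, inverse-distance weighting stands in for whatever interpolation his group actually used, and the single scale factor is a deliberately simple stand-in for their calibration against official measurements:

```python
# Hedged sketch: unevenly located crowdsourced readings are
# interpolated onto a point of interest (inverse-distance weighting),
# and a scale factor fitted from co-located official readings corrects
# a systematic overprediction. All numbers are illustrative.

# Crowdsourced readings: (x, y, measured value).
citizen = [(0.0, 0.0, 1.2), (1.0, 0.0, 1.6), (0.0, 1.0, 2.0)]

def idw(x, y, readings, power=2):
    """Inverse-distance-weighted estimate of the field at (x, y)."""
    num = den = 0.0
    for xi, yi, v in readings:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # exact hit on an observation
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Calibration: official readings at two of the same locations suggest
# the citizen data run high; fit one multiplicative correction.
official = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.3)]
scale = sum(v for *_, v in official) / sum(
    idw(x, y, citizen) for x, y, _ in official)

estimate = scale * idw(0.5, 0.5, citizen)
print(round(estimate, 3))
```

Even this toy version shows the caveat Cervone raises: the estimate at an unsampled point inherits both the spatial pattern of where volunteers happened to be and whatever bias the calibration did not remove.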
6 These instruments were an open-source bGeigie Nano device that uses the LND7317 radiation sensor, which can detect alpha, beta, and gamma radiation.