Much of the way we understand social interactions comes from observable behaviors among organisms with brains. These organisms demonstrate actions that are interpretable in a human context. Many concepts have arisen from such observations, including but by no means limited to altruism, kin recognition, eavesdropping, status badges, cheating, and veils of ignorance. The most general theories for the evolution of social behavior, including kin selection and mutualism, apply to these observations. But is it necessary that the actors have any form of cognition?
The purpose of this challenge is not to get caught in the mire of deciding what is and is not cognitive, or when a brain actually functions in one way or another, but to turn to organisms, parts of organisms, and artificial life (both robots and programs like Avida) where brains are not present at all, and to explore there the power of brain-based social understanding. Four areas come to mind: (1) the major transitions of life, (2) within-genome interactions, (3) microbial interactions, and (4) robots and artificial life.
The major transitions are the progression in complexity of life formulated by Maynard Smith and Szathmáry, including genes into chromosomes, mitochondria and host cells into eukaryotes, single cells into multicellular organisms, and solitary insects into eusocial colonies. Cooperation and control of conflict characterize these transitions, with some formed of related individuals (fraternal transitions) and some formed of different individuals, typically of different species (egalitarian transitions), to use terms chosen by Queller.
Within genome interactions include genomic imprinting (the differential expression in an offspring of maternally and paternally derived genes), the parliament of the genes, and selfish genetic elements and their control.
The current view is that microbes are nearly always social, living in a mix of relatives, non-relatives, predators, prey, and mutualists. Exactly who does what to whom is a challenging and important question, for microbes inhabit every nook of our bodies and can make us sick or keep us well. Do the models for cooperation, conflict, and control work for them? Are we missing something because microbes are so different? What about viruses?
The interplay among robots or artificial life units is another area for exploration. Under artificial life systems like Avida, different units with complementary functions may fuse to work more effectively. Robots may behave differently collectively, with the sum of actions becoming something different than the local decisions they arise from.
For each, the challenge is to ask whether we have fully mined brain-based concepts for potential applicability, and conversely whether we have been blinded to possible insights by beginning with brain-based ideas. It is important to keep in mind that actions that cognitive organisms display may not necessarily be the result of cognition. Insight into which actions this is true for might be found by examining behavior of noncognitive organisms under similar circumstances.
Generally speaking, the word robot may refer to both physical robots and virtual software agents. Among experts in the field there is no universal agreement on which machines or devices count as robots, but there is a general appreciation that robots can perform tasks including moving around, operating a mechanical limb, sensing and manipulating their environment, and exhibiting intelligent behavior, particularly behavior that resembles that of intelligent beings such as humans or other animals. There is consensus that "robot" signifies an apparatus that can be programmed to perform a variety of physical tasks or actions.
Robots are perceived to differ from actual living beings in two distinct ways: cognition and biological features. Regardless of whether their appearance is humanlike, robots need programming to function properly. The advent of modern feedback control systems, along with advances in the computational power of miniaturized electronics, has made modern robots far more artificially intelligent, allowing them to perform tasks based on their own sensing of their surroundings and potentially on perceived outcomes. This may be done either by an individual robot or by a group (or swarm) of robots. In the swarm scenario, the particle swarm optimization technique can be used to control the robots' movements: each robot relies on its own sensing experience of the environment (self-awareness) and on the collective knowledge of the environment shared across the swarm (swarm knowledge) to guide its movement toward a desired direction or outcome. Human-like traits such as jealousy, sympathy, and the drive to be number one could also be programmed into robots if doing so would help them perform the desired tasks more effectively. Artificial intelligence is a rapidly growing field, and modern electronics and miniaturized mechanical actuators have allowed designers to make robots remarkably powerful and self-supporting.
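The particle swarm idea mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not any specific robot controller: the quadratic objective, particle counts, and coefficient values are all invented for the example. Each "robot" blends its own best experience (self-awareness) with the swarm's best known position (swarm knowledge).

```python
import random

def pso(objective, dim=2, n_particles=12, iters=60,
        w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal particle swarm optimization over a dim-dimensional space."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's own best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # pull toward own best
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # pull toward swarm best
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:              # update individual memory
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:             # update collective memory
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy target: steer the swarm toward the point (1, 2).
random.seed(0)
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2)
```

The design point worth noting is that no particle knows the global objective landscape; each acts on two local memories, yet the swarm as a whole converges, which is the sense in which collective movement emerges from individual rules.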
What concepts from the social sciences have not had an impact yet on understanding cooperation in microbes, across major transitions, or within genomes? What might we learn from closer attention to social science theory?
How is conflict controlled in microbes, across major transitions, within genomes, or among units of artificial life?
What mechanisms substitute for cognition and brains in microbes, across major transitions, within genomes, or among robots or units of artificial life?
What characterizes those social science concepts, such as the veil of ignorance, that are clearly important in microbes, across major transitions, within genomes, or among robots or artificial life units?
Inclusive fitness theory and mutualism theory are clearly powerful in explaining cooperation in microbes, across major transitions, or within genomes, so what can we add to that from social science theory?
What are we missing from the gap between these two social disciplines that closer discussion might reveal?
Will robots ever become fully human?
Are some robots already more capable than humans at tasks that require multitasking and fast computation?
Could robots perform tasks beyond what is programmed in them?
Cook KS, Cheshire C, Rice ER, and Nakagawa S. Social exchange theory. In Handbook of Social Psychology, pp 61-68. Springer: New York, 2013.
Crespi BJ. The evolution of social behavior in microorganisms. Trends in Ecology & Evolution 2001;16(4):178-183. [Abstract available.]
Fehr E and Fischbacher U. The nature of human altruism. Nature 2003;425:785-791.
Griffin AS and West SA. Kin selection: Fact and fiction. Trends in Ecology & Evolution 2002;17(1):15-21.
Haig D. Transfers and transitions: Parent-offspring conflict, genomic imprinting, and the evolution of human life history. Proceedings of the National Academy of Sciences of the United States of America 2009;107:1731-1735.
Queller D and Strassmann J. The veil of ignorance can favour biological cooperation. Biology Letters 2013;9(6):20130792.
Queller DC. Cooperators since life began. Book review of: The Major Transitions in Evolution, by Maynard Smith J and Szathmáry E. Quarterly Review of Biology 72:184-188.
Queller DC. Expanded social fitness and Hamilton’s rule for kin, kith, and kind. Proceedings of the National Academy of Sciences of the United States of America 2011;108, Suppl. 2:10792-10799.
Werren JH. Selfish genetic elements, genetic conflict, and evolutionary innovation. Proceedings of the National Academy of Sciences of the United States of America 2011;108:10863-10870.
West SA, Diggle SP, Buckling A, Gardner A, and Griffin AS. The social lives of microbes. Annual Review of Ecology, Evolution, and Systematics 2007;38:53-77.
Wilson RK. The contribution of behavioral economics to political science. Annual Review of Political Science 2011;14:201-223.
IDR TEAM MEMBERS
Lydia M. Contreras, University of Texas-Austin
William W. Driscoll, The University of Minnesota
Justin Gallivan, Defense Advanced Research Projects Agency
David Hughes, The Pennsylvania State University
Joel D. Levine, University of Toronto Mississauga
Siobhan O’Brien, University of Exeter
Theodore P. Pavlic, Arizona State University
Aristides Requicha, University of Southern California
Sarah Schwartz, Massachusetts Institute of Technology
Guy Theraulaz, Centre National de la Recherche Scientifique
IDR TEAM SUMMARY— GROUP 1
Sarah Schwartz, NAKFI Science Writing Scholar Massachusetts Institute of Technology
IDR Team 1 was asked to use our understanding of cooperation in cognitive organisms to understand cooperation in organisms and entities that do not have brains.
Our scientific conception of social interaction is largely based on studies of organisms with brains. Humans and animals engage in a range of complex collective behaviors, from altruism to cheating to competition. But brainless entities also exhibit intricate social interactions. Bacteria use chemical signals to coordinate unified actions; a selfish gene can promote its own transmission at the cost of other genes and its whole organism; and robots can be programmed to work together, completing tasks as a swarm. IDR Team 1 set out to identify underlying properties of interaction that transcend cognitive context, comparing and contrasting the mechanisms at play in diverse social systems. The team focused on how best to consider social engagements across fields, with the goal of facilitating interdisciplinary dialogue and applying biotic concepts of sociality to robotics.
Coordination, not Cooperation
IDR Team 1 determined that “coordination,” and not “cooperation,” should be used to discuss, analyze, and compare collective behavior across scales. The team defined coordination as local, nonrandom interactions involving an exchange of information and a response. Cooperation, however, implied a higher-level, perhaps evolutionarily defined engagement, which delivered some “greater good” to an entity or species. A key difference between the two terms, the IDR team said, was that coordination is a proximate process, concerned with “how” a behavior occurs, while cooperation represents an ultimate process, focused on “why” a behavior occurs.
The IDR team emphasized coordination over cooperation for several reasons. First, the team said, it can be challenging to determine whether cooperation actually exists in a group. Relationships that seem cooperative or mutualistic may just represent some co-evolved dependency; the team used the example of mitochondria, a power-generating component of animal cells. It was believed that these organelles were once independent cells,
engulfed by larger cells to the benefit of both. But recent research suggests that mitochondria may actually have once been parasites that simply evolved over time into an integral part of animal cells. It is also possible that, while bacteria might appear to cooperate when sharing a public good, this is not an "intentional" relationship. A bacterium may secrete some compound when its internal concentration grows too high, and other microorganisms can then use the substance. The release of the compound is programmed at the individual level; it simply happens to benefit others. Even in cases where an action benefits all agents involved, "cooperation" may overstate the actions of each entity, the IDR team posited. The team considered the example of a species of toxic algae: the cells appear to cooperate as they band together, dominating marine environments. As more cells accumulate, they can kill larger and larger prey. But, the team argued, each cell is simply acting in its own best interest, individually seeking higher concentrations of food. The sum of individual coordination creates an emergent group behavior.
The IDR team agreed that coordination might represent an intermediate between solitary and collective behaviors, a “stepping stone” to cooperation at larger scales. Because defining cooperation can be challenging, the team felt coordination was the most parsimonious way to study interactions. Whether or not cooperation exists on an ultimate scale, any engagement requires coordination, and interactions can be studied and described in this way.
The team discovered that describing cooperation, conversely, is a challenge, as it has different connotations in different fields. For example, in molecular biology, enzyme subunits are called cooperative when the binding state of one encourages a conformational change in its neighbor. This, an individual response to an environmental change, was not cooperation by the team's definition, but the technical term still applied. So "coordination," the team argued, also provides a stronger common language to compare social mechanisms across disciplines.
This language can extend beyond the realm of biology and into that of technology, providing a way to compare biotic systems with robotic systems. As in groups of social insects, the team said, each robot in a swarm executes one individual part of an emerging group behavior. Considering the coordinative behavior of robots allows a focus on the building blocks of any larger collective swarm action, the team agreed, which could provide the greatest insight when designing new social engagements.
Still, IDR Team 1 agreed that some social behavior can be analyzed at the level of cooperation. The team used a metaphor: if an animal’s tissues and organs represent cooperation, its genes represent coordination. It isn’t always necessary to consider genetic code while studying an animal. But to compare disparate interactions or explore unfamiliar systems, coordination is the most useful method of study.
Mechanisms and Constraints
After defining and selecting coordination as an ideal method of analysis, IDR Team 1 examined its common mechanisms. Any coordinated social activity, the team concluded, is governed by a set of rules, which vary depending upon the system in question. Robots must follow the precise direction of their programming. But ants also have a “programming” of sorts, operating under what can be called “rules of thumb.” For example, if her nest members are slow to accept her collected food, a foraging ant will delay her return to the foraging path. Even nanoparticles follow a set of rules that tells them how to drift through different concentrations of chemicals.
In addition to these rules, the team described four classes of systems that make coordination of activities possible. These were "recipes," or set patterns of action; "templates," or external information that individuals use to adjust their behavior; "stigmergy," an indirect communication through modification of the environment; and "self-organization," in which complex patterns arise from individual actions. Different contributions of each class are possible in different coordinated systems. IDR Team 1 noted that these mechanisms determine how interactions structure a society. As one team member summarized: "To understand things socially means to understand how processes are coordinated. Beyond that, the mechanisms of coordination place constraints on how coordinated systems evolve." Constraints are essential, the team agreed, in understanding biological societies and designing robotic ones.
The team also emphasized the importance of environmental factors in shaping coordinated behavior. The IDR team expanded its discussion of stigmergy, a method of storing cues in the environment that indirectly and anonymously affects other agents. The activity of a termite building a tunnel, for example, is directed by the previous construction of other colony members. The team also discussed other environmental constraints on coordinated behavior, noting that epigenetic changes can affect how a
genome is expressed, the age and lifespan of organisms affect their social interactions, and the physical terrain that a robot must navigate can affect its performance.
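The stigmergy mechanism discussed above can be illustrated with a toy simulation: an agent deposits marks in a shared grid, and its later moves are biased toward cells that earlier activity has already marked, so previous "construction" directs subsequent work. The grid size, deposit amount, and evaporation rate here are invented for illustration, not drawn from any real termite model.

```python
import random

def stigmergy_walk(steps=200, size=10, deposit=1.0, evaporation=0.05, seed=0):
    """Minimal stigmergy: an agent marks a shared grid, and moves are
    weighted toward cells carrying stronger marks."""
    random.seed(seed)
    grid = [[0.0] * size for _ in range(size)]   # the shared environment
    x = y = size // 2
    for _ in range(steps):
        grid[y][x] += deposit                    # modify the environment
        moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx or dy)
                 and 0 <= x + dx < size and 0 <= y + dy < size]
        # Weight each candidate move by the mark already present there
        # (plus a small base rate), so past activity guides future activity.
        weights = [grid[y + dy][x + dx] + 0.1 for dx, dy in moves]
        dx, dy = random.choices(moves, weights=weights)[0]
        x, y = x + dx, y + dy
        for row in grid:                         # marks fade over time
            for j in range(size):
                row[j] *= 1.0 - evaporation
    return grid

trail = stigmergy_walk()
```

The communication here is indirect and anonymous, as the team described: the agent never addresses another agent, and the same loop works unchanged for many agents sharing one grid.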
The team said that constraints could cause conflict in a biological system, and conversely, biological conflict could constrain certain systems (for example, the development of cells is determined by genetic conflict in chromosomes). The team agreed that conflict drives evolution and development. Genetic conflict can lead to coordination at a species level; for example, opposing maternal and paternal imprinting of growth-related genes in lions can balance out, ultimately producing healthy cubs. The team felt that it would be important to incorporate a biological model of constraint and conflict when designing social robots. Selecting the ways robots interact might define the limitations of the technological system as a whole, so engineers should choose carefully.
Bridging the Gap: Biology and Technology
The team asked whether elements of biological systems can be used to design robotic swarms. Natural systems “make cool things happen” under extensive constraints; the team wondered if it is possible to “import that insight” from biology into robotics, which has constraints such as limited battery life. The team suggested that stigmergy could be programmed into robots, if the concept could be understood and inferred from social insects. The team also discussed whether robots should be programmed to cheat—for example, if a robot was rewarded for saving energy while performing a group task, it might slack at its job at the expense of its neighbors. Finally, the IDR team wondered if evolution, a powerful selective force on biological communities, could be implemented in robotic swarms—was it important, the team asked, to “understand evolution from an engineering standpoint?” It could be beneficial to program “natural selection” into robotic swarms. Manufacturing differences could produce heterogeneity in robotic swarms, and if certain “weak” robots could be “pruned out” or assigned a different task, it might benefit the group at large.
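One hedged sketch of the "natural selection" idea raised above: give each robot in a swarm a heterogeneous efficiency parameter (standing in for manufacturing differences), prune out the weakest half each generation, and replace them with noisy copies of the survivors. The fitness measure, population size, and noise level are all illustrative assumptions, not a proposal from the team.

```python
import random

def evolve_swarm(generations=30, swarm_size=20, seed=1):
    """Truncation selection on a robot swarm's task-efficiency parameters."""
    random.seed(seed)
    # Heterogeneous initial swarm: one efficiency value in [0, 1] per robot.
    swarm = [random.uniform(0.0, 1.0) for _ in range(swarm_size)]
    for _ in range(generations):
        swarm.sort(reverse=True)                 # rank robots by efficiency
        survivors = swarm[: swarm_size // 2]     # prune out the weak half
        # Replace pruned robots with copies of survivors plus
        # "manufacturing noise," clamped to the valid range.
        offspring = [max(0.0, min(1.0, s + random.gauss(0, 0.05)))
                     for s in survivors]
        swarm = survivors + offspring
    return sum(swarm) / len(swarm)               # mean swarm efficiency

mean_eff = evolve_swarm()
```

Even this crude loop shows the trade-off the team raised: pruning raises mean performance but erodes the heterogeneity that selection needs, which is one reason reassigning "weak" robots to other tasks may beat discarding them.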
Whether comparing technology and biology or different biological systems, IDR Team 1 agreed that it is essential to shift focus from cooperation to coordination. The team believed this would allow more efficient,
accurate, and broadly applicable analysis of social interactions, the mechanisms by which they develop and change, and the constraints that shape such processes. Ultimately, the IDR team concluded this would advance our understanding of collective behavior as a whole, as well as our ability to construct and optimize coordinated systems in the technology of the future.