A Beautiful Math: John Nash, Game Theory, and the Modern Quest for a Code of Nature

4 Smith’s Strategies
Evolution, altruism, and cooperation

The stunning variety of life forms that surround us, as well as the beliefs, practices, techniques, and behavioral forms that constitute human culture, are the product of evolutionary dynamics.
—Herbert Gintis, Game Theory Evolving

To understand human sociality we have much to learn from primates, birds, termites, and even dung beetles and pond scum.
—Herbert Gintis, Game Theory Evolving

In the winter of 1979, Cambridge University biologist David Harper decided it would be fun to feed the ducks. A flock of 33 mallards inhabited the university’s botanical garden, hanging out at a particular pond where they foraged for food. Daily foraging is important for ducks, as they must maintain a minimum weight for low-stress flying. Unlike landlubber animals that can gorge themselves in the fall and live off their fat in the winter, ducks have to be prepared for takeoff at any time. They therefore ought to be good at finding food fast, in order to maintain an eat-as-you-go lifestyle. Harper wanted to find out just how clever the ducks could be
at maximizing their food intake. So he cut up some white bread into precisely weighed pieces and enlisted some friends to toss the pieces onto the pond. The ducks, naturally, were delighted with this experiment, so they all rapidly paddled into position. But then Harper’s helpers began tossing the bread onto two separated patches of the pond. At one spot, the bread tosser dispensed one piece of bread every five seconds. The second was slower, tossing out the bread balls just once every 10 seconds. Now, the burning scientific question was, if you’re a duck, what do you do? Do you swim to the spot in front of the fast tosser or the slow tosser? It’s not an easy question. When I ask people what they would do, I inevitably get a mix of answers (and some keep changing their mind as they think about it longer). Perhaps (if you were a duck) your first thought would be to go for the guy throwing the bread the fastest. But all the other ducks might have the same idea. You’d get more bread for yourself if you switched to the other guy, right? But you’re probably not the only duck who would realize that. So the choice of the optimum strategy isn’t immediately obvious, even for people. To get the answer you have to calculate a Nash equilibrium. After all, foraging for food is a lot like a game. In this case, the chunks of bread are the payoff. You want to get as much as you can. So do all the other ducks. As these were university ducks, they were no doubt aware that there is a Nash equilibrium point, an arrangement that gets every duck the most food possible when all the other ducks are also pursuing a maximum food-getting strategy. Knowing (or observing) the rate of tosses, you can calculate the equilibrium point using Nash’s math.
In this case the calculation is pretty simple: The ducks all get their best possible deal if one-third of them stand in front of the slow tosser and the other two-thirds stand in front of the fast tosser. And guess what? It took the ducks about a minute to figure that out. They split into two groups almost precisely the size that game theory predicted. Ducks know how to play game theory!
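The arithmetic behind that split can be sketched in a few lines. At the equilibrium (what foraging theorists call an ideal free distribution), no duck can do better by switching patches, which happens when per-duck intake is equal at both spots. The function below is my own illustration, not anything from Harper's paper:

```python
# Sketch of the equilibrium calculation for Harper's duck experiment.
# The function name is a hypothetical illustration, not from the study.

def equilibrium_split(rate_fast, rate_slow, n_ducks):
    """Split the flock so per-duck intake is equal at both patches
    (the Nash condition: rate_fast / n_fast == rate_slow / n_slow)."""
    n_fast = n_ducks * rate_fast / (rate_fast + rate_slow)
    return n_fast, n_ducks - n_fast

# One piece per 5 seconds vs. one per 10 seconds, with Harper's 33 ducks:
fast, slow = equilibrium_split(1 / 5, 1 / 10, 33)
print(round(fast), round(slow))  # 22 11
```

Twenty-two ducks at the fast patch and eleven at the slow one: exactly the two-thirds/one-third split the flock found on its own.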
When the experimenters complicated things—by throwing bread chunks of different sizes—the ducks needed to consider both the rate of tossing and the amount of bread per toss. Even then, the ducks eventually sorted themselves into the group sizes that Nash equilibrium required, although it took a little longer.1 Now you have to admit, that’s a little strange. Game theory was designed to describe how “rational” humans would maximize their utility. And now it turns out you don’t need to be rational, or even human.2 The duck experiment shows, I think, that there’s more to game theory than meets the eye. Game theory is not just a clever way to figure out how to play poker. Game theory captures something about how the world works. At least the biological world. And it was in fact the realization that game theory describes biology that gave it its first major scientific successes. Game theory, it turns out, captures many features of biological evolution. Many experts believe that it explains the mystery of human cooperation, how civilization itself could emerge from individuals observing the laws of the jungle. And it even seems to help explain the origin of language, including why people like to gossip.

LIFE AND MATH

I learned about evolution and game theory by visiting the Institute for Advanced Study in Princeton, home of von Neumann during game theory’s infancy. Long recognized as one of the world’s premier centers for math and physics, the institute had been slow to acknowledge the ascent of biology in the hierarchy of scientific disciplines. By the late 1990s, though, the institute had decided to plunge into the 21st century a little early by initiating a program in theoretical biology.
Just as the newborn institute had reached across the Atlantic to bring von Neumann, Einstein, and others to America, it recruited a director for its biology program from Europe—Martin Nowak, an Austrian working at the University of Oxford in England. Nowak was an accomplished mathematical biologist who had mixed biochemistry with math during his student years at the University of Vienna, where he earned his doctorate in 1988. He soon moved on to Oxford, where he eventually became head of the mathematical biology program. I visited him in Princeton in the fall of 1998 to inquire about the institute’s plans for mixing math with the science of life. Nowak described a diverse research program, touching on everything from the immune system—deciphering the math behind fighting the AIDS virus, for instance—to inferring the origins of human language. Underlying much of his work was a common theme that at the time I really didn’t appreciate: the pervasive relevance of game theory. It makes sense, of course. In biology almost everything involves interaction. The sexes interact to reproduce, obviously. There are the fierce interactions of immune system cells battling viruses, or toxic molecules tangling with DNA to cause cancer. And humans, of course, always interact—cooperatively or contentiously, or just by talking to each other. Evolutionary processes shape the way that such interactions occur and what their outcomes will be. And that’s a key point: Evolution is not just about the origin of new species from common ancestors. Evolution is about virtually everything in biology— the physiology of individuals, the diversity of appearances within groups, the distribution of species in an ecosystem, and the behavior of individuals in response to other individuals or groups interacting with other groups. Evolution underlies all the biological action, and underlying evolution’s power is the mathematics of game theory. “Game theory has been very successfully used in evolution,” Nowak told me.
“An overwhelming number of problems in evolution are of a game-theoretic nature.”3 In particular, game theory helps explain the evolution of social behavior in the animal kingdom (humans included), solving a perplexing mystery in the original formulation of Darwinism: Why do animals cooperate? You’d think that the struggle to survive would put a premium on selfishness. Yet cooperation is common in the biological world, from symbiotic relationships between parasites and their hosts to out-and-out altruism that people often exhibit toward total strangers. Human civilization could never have developed as it has without such widespread cooperation; finding the Code of Nature describing human social behavior will not be possible without understanding how that cooperation evolved. And the key clues to that understanding are coming from game theory.

GAMES OF LIFE

In the 1960s, even before most economists took game theory seriously, several biologists noticed that it might prove useful in explaining aspects of evolution. But the man who really put evolutionary game theory on the scientific map was the British biologist John Maynard Smith. He was “an approachable man with unruly white hair and thick glasses,” one of his obituaries noted, “remembered by colleagues and friends as a charismatic speaker but deadly debater, a lover of nature and an avid gardener, and a man who enjoyed nothing better than discussing scientific ideas with young researchers over a glass of beer in a pub.”4 Unfortunately I never had a chance to have a beer with him. He died in 2004. Maynard Smith was born in 1920. As a child, he enjoyed collecting beetles and bird-watching, foreshadowing his future biological interests. At Eton College he was immersed in mathematics and then specialized in engineering at Cambridge University. During World War II he did engineering research on airplane stability, but after the war he returned to biology, studying zoology under the famed J. B. S. Haldane at University College London. In the early 1970s, Maynard Smith received a paper to review that had been submitted to the journal Nature by an American researcher named George Price.
Price had attempted to explain why animals competing for resources did not always fight as ferociously as they might have, a puzzling observation if natural selection really implied that they should fight to the death if only the fittest survive. Price’s paper was too long for Nature, but the issue remained in the back of Maynard Smith’s mind. A year later, while
visiting the theoretical biology department at the University of Chicago, he studied game theory and began to explore the ways in which evolution is like a game.5 Eventually, Maynard Smith showed that game theory could illuminate how organisms adopt different strategies to survive the slings and arrows of ecological fortune and produce offspring to carry the battle on to future generations. Evolution is a game that all life plays. All animals participate; so do plants, so do bacteria. You don’t need to attribute any rationality or reasoning power to the organisms—their strategy is simply the sum of their properties and propensities. Is it a better strategy to be a short tree or a tall tree? To be a super speedy quadruped or a slower but smarter biped? Animals don’t choose their strategies so much as they are their strategies. This is a curious observation, I think. If every animal (plant, bug) is a different strategy, then why are there so many different forms of life out there, why so many different strategies for surviving? Why isn’t there one best strategy? Why doesn’t one outperform all the others, making it the sole survivor, the winner of the ultimate fitness sweepstakes? Darwin, of course, had dealt with that issue, explaining how different kinds of survival advantages could be exploited by natural selection to diversify life into a smorgasbord of species (like the specialization of workers in Adam Smith’s pin factory). Maynard Smith, though, took the Darwinian explanation to greater depths, using game theory to demonstrate with mathematical rigor why evolution is not a winner-takes-all game. In doing so, Maynard Smith perceived the need to modify classical game theory in two ways: substituting the evolutionary ideas of “fitness” for utility and “natural selection” for rationality.
In economic game theory, he noted, “utility” is somewhat artificial; it’s a notion that attempts “to place on a single linear scale a set of qualitatively distinct outcomes” such as a thousand dollars, “losing one’s girl friend, losing one’s life.” In biology, though, “fitness, or expected number of offspring, may be difficult to measure, but it is unambiguous. There is only one correct way of combining different components—for example, chances of survival and of reproduction.”6 And “rationality” as a strategy for human game players exhibits two “snags,” Maynard Smith noted: “It is hard to decide what is rational, and in any case people do not behave rationally.” Consequently, he asserted, “the effect of these changes is to make game theory more readily applicable in biology than in the human sciences.”7 To illustrate his insight, he invented a clever but simple animal-fighting game. Known as the hawk-dove game, it showed why one single strategy would not produce a stable population. Imagine such a world, a “bird planet” populated solely by birds. These birds are capable of behaving either like hawks (aggressive, always ready to fight over food) or doves (always peaceful and passive). Now suppose these birds all decide that being hawkish is the best survival strategy. Whenever two of them encounter some food, they fight over it—the winner eats, the loser nurses his wounds, starves, and maybe even dies. But even the winner may suffer some injuries, incurring a cost that diminishes its benefits from getting the food. Now suppose one of these hawkish birds decides that all this fighting is … well, for the birds. He starts behaving like a dove. Upon encountering some food, he eats only if no other bird is around. If one of those hawks shows up, the “dove” flies away. The dove might miss a few meals, but at least he’s not losing his feathers in fights. Furthermore, suppose a few other birds try the dove approach. When they meet each other, they share the food. While the hawks are chewing each other up, the doves are chewing on dinner. Consequently, Maynard Smith noted, an all-hawk population is not an “evolutionarily stable strategy.” An all-hawk society is susceptible to invasion by doves. On the other hand, it is equally true that an all-dove society is not stable, either.
The first hawk who comes along will eat pretty well, because all the other birds will fly away at the sight of him. Only when more hawks begin to appear will there be any danger of dying in a fight. So the question is, what is the best strategy? Hawk or dove?
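A quick calculation shows how the answer comes out. Using the payoff numbers the chapter introduces a bit later (two hawks cost each other 2 in a fight, a hawk takes 2 from a dove who gets 0, and two doves share for 1 apiece), the stable mix is the one at which hawks and doves earn the same expected payoff. The sketch below, with variable names of my own choosing, solves that condition exactly:

```python
from fractions import Fraction  # exact arithmetic, no rounding noise

# Payoffs from the chapter's matrix: (my move, opponent's move) -> my score.
payoff = {
    ("hawk", "hawk"): Fraction(-2),
    ("hawk", "dove"): Fraction(2),
    ("dove", "hawk"): Fraction(0),
    ("dove", "dove"): Fraction(1),
}

def expected(move, p_hawk):
    """Expected payoff of `move` in a population with hawk fraction p_hawk."""
    return payoff[(move, "hawk")] * p_hawk + payoff[(move, "dove")] * (1 - p_hawk)

# At the mixed equilibrium, hawk and dove do equally well:
#   (-2)p + 2(1 - p) = 0*p + 1(1 - p)   =>   p = 1/3.
a = payoff[("hawk", "hawk")] - payoff[("dove", "hawk")]
b = payoff[("hawk", "dove")] - payoff[("dove", "dove")]
p_hawk = b / (b - a)
print(p_hawk)  # 1/3
```

One-third hawks, two-thirds doves: at any other mix, the rarer type earns more per encounter and its share grows.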
It turns out that the best strategy for surviving depends on how many hawks there are in the population. If hawks are rare, a hawkish strategy is best because most of the opponents will be doves and will run from a fight. If hawks are plentiful, though, they will get into many costly fights—yielding an advantage for dovish behavior. So a society should evolve to include a mix of hawks and doves. The higher the cost of fighting, the fewer the number of hawks. Maynard Smith showed how game theory described this situation perfectly, with an evolutionarily stable strategy being the biological counterpart of a Nash equilibrium. While an evolutionarily stable strategy is analogous to a Nash equilibrium, it is not always precisely equivalent. In many sorts of games there can be more than one Nash equilibrium, and some of them may not be evolutionarily stable strategies. An ecosystem composed of various species with a fixed set of behavioral strategies could be at a Nash equilibrium without being immune to invasion by a mutant capable of introducing a new strategy into the competition. Such an ecosystem would not be evolutionarily stable.8 But the birds are unlikely to appreciate that distinction. In any case, the birds have to choose to play hawk or dove just as the ducks had to decide which bread tosser to favor. The best mix—the evolutionarily stable strategy—will be a split population, some percentage doves, some hawks. Exactly what those percentages are depends on the precise costs of fighting compared to the food you miss by fleeing. Here’s one game matrix showing a possible weighting of the costs:
                   Bird 2: Hawk     Bird 2: Dove
    Bird 1: Hawk     –2, –2            2, 0
    Bird 1: Dove      0, 2             1, 1

If two hawks meet, both are losers (getting “scores” of –2) because they beat each other up. If Bird 1 is a hawk and Bird 2 a dove, the dove flies away and gets 0, the hawk gets all the food (2). But if two doves meet, they share the food and both get 1 point. (Or you could say that one dove defers to the other half the time, the 1 point each signifying a 50-50 chance of either bird getting the food.) If you calculate it out, you find that the best mix of strategies (for these values of the costs) is that two-thirds should be doves and one-third hawks.9 (Keep in mind that, mathematically, you could have a mix of hawks and doves, or just birds that play mixed strategies. In other words, if you’re a bird in this scenario, your best bet is to behave like a hawk one-third of the time and behave like a dove two-thirds of the time.)10 Obviously this is a rather simplified view of biology. Hawks and doves are not the only possible behavioral strategies, even for birds. But you can see the basic idea, and you should also be able to see how game theory could describe situations with added complexity. Suppose, for instance, “spectator birds” watched as other birds battled. In fact, like human boxing or football fans, some birds do like to watch the gladiators of their group slug it out in a good fight (as do certain fishes). And that desire to view violence may offer a clue to why societies provide so much violence to view. Spectating may be wired into animal genes by evolutionary history, and maybe game theory has something to do with it. At first glance, spectating offers one obvious survival advantage—you’re less likely to get killed watching than fighting. But you don’t have to be a spectator to avoid the danger of a fight. You can simply get as far away from any fighting as you can. So why watch? The answer emerges naturally from game theory.
You may find yourself in an unavoidable fight someday, in which case it would be a good idea to know your opponent’s record. Face it: You can’t always run from a fight. The wimps who retreat from every encounter don’t really enhance their chance of survival, for they will lose out in the competition for food, mates, and other essential resources. On the other hand, looking for a fight at every opportunity is not so smart, either—the battle may
exact a greater cost than the benefit of acquiring the resource. You would expect clever birds to realize that they might have to fight someday, so they better scout their potential opponents by observing them in battle. The observers (or “eavesdroppers” in biolingo) could choose to be either a hawk or a dove when it’s their turn to fight—depending on what they’ve observed about their adversary. Rufus Johnstone, of the University of Cambridge, extended the math of the hawk-dove game in just this manner to evaluate the eavesdropper factor. In this game, the eavesdropper knows whether its opponent has won or lost its previous fight. An eavesdropper encountering a loser will act hawkish, but if encountering a winner the eavesdropper will adopt a dove strategy and forgo the chance to win the resource. “An individual that is victorious in one round is more likely to win in the next, because its opponent is less likely to mount an escalated challenge,” Johnstone concluded.11 Since eavesdroppers have the advantage of knowing when to run, avoiding fights with dangerous foes, you might guess that eavesdropping would reduce the amount of violent conflict in a society. Alas, the math shows otherwise. Adding eavesdroppers to the hawk-dove game raises the rate of “escalated” fighting— occasions where both combatants take the hawk approach. Why? Because of the presence of spectators! If nobody is watching, it is not so bad to be a dove. But in the jungle, reputation is everything. With spectators around, acting like a dove guarantees that you’ll face an aggressive opponent in your next fight. Whereas if everybody sees that you’re a ferocious hawk, your next opponent may head for the hills at the sight of you. So the presence of spectators encourages violence, and watching violence today offers an advantage for the spectators who may be fighters tomorrow.
In other words, the benefit to an individual of eavesdropping—helping that individual avoid high-risk conflict—drives a tendency toward a higher level of high-risk conflict in the society as a whole. But don’t forget that adding spectators is just one of many
complications that could be considered in the still very simplified hawk-dove game. Fights depend on more than just aggressiveness. Size and skill come into play as well. And one study noted that a bird’s self-assessment of its own fighting skills can also influence the fight-or-flight decision. If the birds know their own skill levels accurately, overall fighting might be diminished. (You can think of this as the Clint Eastwood version of the hawk-dove game: A bird has got to know its limitations.)12 In any case, policy makers who would feel justified in advocating wars based on game theory should pause and realize that real life is more complicated than biologists’ mathematical games. Humans, after all, have supposedly advanced to a civilized state where the law of the jungle doesn’t call all the shots. And in fact, game theory can help show how that civilized state came about. Game theory describes how the circumstances can arise that make cooperation and communication a stable strategy for the members of a species. Without game theory, cooperative human social behavior is hard to understand.

EVOLVING ON A LANDSCAPE

Game theory can help illuminate how different strategies fare in the battle to survive. Even more important, game theory helps to show how the best strategies might differ as circumstances change. After all, a set of behavioral propensities that’s successful in the jungle might not be such a hot idea in the Antarctic. When evolutionists talk about circumstances changing, typically they’ll be referring to something like the climate, or the trauma of a recent asteroid impact. But the changing strategies of the organisms themselves can be just as important. And that’s why game theory is essential for understanding evolution. Remember the basic concept of a Nash equilibrium—it’s when everybody is doing the best they can do, given what everybody else is doing.
In other words, the best survival strategy depends on who else is around and how they are behaving. If your survival hinges on the actions of others, you’re in a game whether you like it or not.
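One standard way to make that idea concrete is replicator dynamics, in which a strategy's share of the population grows whenever it earns more than the population average. This sketch (my own illustration, reusing the hawk-dove payoffs from earlier in the chapter) shows the hawk fraction settling at one-third whether hawks start out rare or common:

```python
# Replicator-dynamics sketch for the hawk-dove game, with the chapter's
# payoffs: hawk vs. hawk -2, hawk vs. dove 2, dove vs. hawk 0, dove vs. dove 1.

def step(p, dt=0.01):
    """Advance the hawk fraction p by one small time step: a strategy's
    share grows in proportion to how far it out-earns the average."""
    f_hawk = -2 * p + 2 * (1 - p)        # expected payoff to a hawk
    f_dove = 1 - p                       # expected payoff to a dove
    f_avg = p * f_hawk + (1 - p) * f_dove
    return p + dt * p * (f_hawk - f_avg)

for start in (0.05, 0.9):
    p = start
    for _ in range(20000):
        p = step(p)
    print(round(p, 3))  # settles near 0.333 from either starting point
```

The population, not any individual bird, does the "calculating": selection pushes the mix toward the evolutionarily stable point from any direction.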
Using the language of evolution, success in the survival game equates to “fitness.” The fittest survive and procreate. Obviously some individuals score better in this game than others. Biologists like to describe such differences in fitness in geographic terms, using the metaphor of a landscape. Using this metaphor, you can think of fitness—or the goal of a game—as getting a good vantage point, living on the peak of a mountain with a good view of your surroundings. For convenience you can describe your fitness just by specifying your latitude and longitude on the landscape map. Some latitude–longitude positions will put you on high ground; some will leave you in a chasm. In other words, some positions are more fit than others. It’s just another way of saying that some combinations of features and behaviors improve your chance to survive and reproduce. Real biological fitness is analogous to the better vantage point of a mountain peak. In a fitness landscape (just like a real landscape) there can, of course, be more than one peak—more than one combination of properties with a high likelihood for having viable offspring. (In the simple landscape of the all-bird island, you’d have a dove peak and a hawk peak.) In a landscape with many fitness peaks, some would be “higher” than others (meaning your odds of reproducing are more favorable), but still many peaks would be good enough for a species to survive. On a real landscape, your vantage point can be disturbed by many kinds of events. A natural disaster—a hurricane like Katrina, say, or an earthquake and tsunami—can literally reshape the landscape, and a latitude and longitude that previously gave you a great view may now be a muddy rut. Similarly in evolution, a change in the fitness landscape can leave a once successful species in a survival valley. Something like this seems to be what happened to the dinosaurs.
You don’t need an asteroid impact to change the biological fitness landscape, though. Simply suppose that some new species moves into the neighborhood. What used to be a good strategy— say, swimming in the lake, away from waterphobic predators— might not be so smart if crocodiles move in. So as evolution
proceeds, the fitness landscape changes. Your best evolutionary strategy, in other words, depends on who else is evolving along with you. No species is a Robinson Crusoe alone on an island. And when what you should do depends on what others are doing, game theory is the name of the game. Recognizing this ever-shifting evolution landscape is the key to explaining how cooperative behavior comes about. In particular, it helps to explain the vastly more elaborate cooperation exhibited by humans compared with other animals.

KIN AND COOPERATION

It’s not that nonhuman animals never cooperate. Look at ants, for instance. But such social insect societies can easily be explained by evolution’s basis in genetic inheritance. The ants in an ant colony are all closely related. By cooperating they enhance the prospect that their shared genes will be passed along to future colonies. Similar reasoning should explain some human cooperation—that between relatives. As Maynard Smith’s teacher J. B. S. Haldane once remarked, it would make sense to dive into a river to save two drowning siblings or eight drowning cousins. (On average, you share one-half of a sibling’s genes, one-eighth of a cousin’s.) But human cooperation is not limited to planning family reunion picnics. Somehow, humans evolved to cooperate with strangers. When I visited Martin Nowak, he emphasized that such nonkin cooperation was one of the defining differences between humans and the rest of the planet’s species. The other was language. “I think humans are really distinct from animals in two different ways,” he said. “One is that they have a language which allows us to talk about everything. No other animal species has evolved such a system of unlimited communication.
Animals can talk about a lot of things and signal about a lot of things to each other, but it seems that they are limited to a certain finite number of things that they can actually tell each other.” Humans, though, have a “combinatorial” language, a mix-and-match system of sounds that can describe any number of circumstances, even those never previously encountered. “There must have been a transition in evolution,” Nowak said, that allowed humans to develop this “infinite” communication system. Such a flexible language system no doubt helped humans evolve their other distinction—widespread cooperation. “Humans are the only species that have solved the problem of large-scale cooperation between nonrelated individuals,” Nowak pointed out. “That cooperation is interesting because evolution is based on competition, and if you want survival of the fittest, this competition makes it difficult to explain cooperation.”13 Charles Darwin himself noted this “altruism” problem. Behaving altruistically—helping someone else out, at a cost to you with no benefit in return—does seem to be a rather foolish strategy in the struggle to survive. But humans (many of them, at least) possess a compelling instinct to be helpful. There must have been some survival advantage to being a nice guy, no matter what Leo Durocher might have thought. (He was the baseball manager of the mid-20th century who was famous for saying “Nice guys finish last.”) One early guess was that altruism works to the altruist’s advantage in some way, like mutual backscratching. If you help out your neighbor, maybe someday your neighbor will return the favor. (This is the notion of “reciprocal altruism.”) But that explanation doesn’t take you very far. It only works if you will encounter the recipient of your help again in the future. Yet people often help others whom they will probably never see again. Maybe you can still get an advantage from being nice in an indirect way. Suppose you help out a stranger whom you never see again, but that stranger—overwhelmed by your kindness—becomes a traveling Good Samaritan, rendering aid to all sorts of disadvantaged souls.
Someday maybe one of the Samaritan’s beneficiaries will encounter you and help you out, thanks to the lesson learned from the Samaritan you initially inspired. Such “indirect reciprocity,” Nowak told me, had been mentioned long ago by the biologist Richard Alexander but was generally dismissed by evolutionary biologists. And on the face of it, it
sounds a little far-fetched. Nowak, though, had explored the idea of indirect reciprocity in detail with the mathematician Karl Sigmund in Vienna. They had recently published a paper showing how indirect reciprocity might actually work, using the mathematics of game theory (in the form of the Prisoner’s Dilemma) to make the point. The secret to altruism, Nowak suggested, is the power of reputation. “By helping someone we can increase our reputation,” he said, “and to have a higher reputation in the group increases the chance that someone will help you.” The importance of reputation explains why human language became important—so people could gossip. Gossip spreads reputation, making altruistic behavior based on reputation more likely. “It’s interesting how much time humans spend talking about other people, as though they were constantly evaluating the reputations of other people,” Nowak said. “Language helped the evolution of cooperation and vice versa. A cooperative population makes language more important…. With indirect reciprocity you can either observe the person, you can look at how he behaves, or more efficiently you can just talk to people…. Language is essential for this.”14 Reputation breeds cooperation because it permits players in the game of life to better predict the actions of others. In the Prisoner’s Dilemma game, for instance, both players come out ahead if they cooperate. But if you suspect your opponent won’t cooperate, you’re better off defecting. In a one-shot game against an unknown opponent, the smart play is to defect. If, however, your opponent has a well-known reputation as a cooperator, it’s a better idea to cooperate also, so both of you are better off. In situations where the game is played repeatedly, cooperation offers the added benefit of enhancing your reputation.
TIT FOR TAT

Gossip about reputations may not be enough to create a cooperative society, though. Working out the math to prove that indirect reciprocity can infuse a large society with altruistic behavior turned
up some problems. Nowak and Sigmund’s model of indirect reciprocity was criticized by several other experts, who pointed out that it was unlikely to work except in very small groups.

When I next encountered Nowak, in 2004 at a complexity conference in Boston, his story had grown more elaborate. In his talk, Nowak recounted the role of the Prisoner’s Dilemma game in analyzing evolutionary cooperation. The essential backdrop was a famous game theory tournament held in 1980, organized by the political scientist Robert Axelrod at the University of Michigan. Axelrod conceived the brilliant idea of testing the skill of game theoreticians themselves in a Prisoner’s Dilemma contest. He invited game theory experts to submit a strategy for playing Prisoner’s Dilemma (in the form of a computer program) and then let the programs battle it out in a round-robin competition. Each program played repeated games against each of the other programs to determine which strategy would be the most “fit” in the Darwinian sense.

Of the 14 strategies submitted, the winner was the simplest: an imitative approach called tit for tat, submitted by the game theorist Anatol Rapoport.15 In a tit-for-tat strategy, a player begins by cooperating in the first round of the game. After that, the player does whatever its opponent did in the preceding round. If the other player cooperates, the tit-for-tat player does also. Whenever the opponent defects, though, the tit-for-tat player defects on the next play and continues to defect until the opponent cooperates again. In any given series of games against a particular opponent, tit for tat is likely to lose. But over a large number of rounds against many different opposing strategies, tit for tat outperforms the others on average. Or at least it did in Axelrod’s tournament.
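The rule just described can be sketched in a few lines (the payoff numbers are the conventional illustrative values, not Axelrod’s actual scoring table):

```python
# Conventional Prisoner's Dilemma payoffs (illustrative):
# (my move, opponent's move) -> (my payoff, opponent's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Iterate the game, returning cumulative scores (a, b)."""
    ha, hb = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(ha, hb)
        b = strat_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a)
        hb.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

# Tit for tat never outscores a defector head-to-head (it loses the
# first round, then matches defection), yet two tit-for-tat players
# prosper together.
print(play(tit_for_tat, always_defect))  # (9, 14)
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
```

This is the pattern behind its tournament success: tit for tat loses narrowly to exploiters but racks up high mutual scores with every cooperative partner.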
Once tit for tat emerged as the winner, it seemed possible that even better strategies might be developed. So Axelrod held a second tournament, this time attracting 62 entries. Of those contestants, only one entered tit for tat: Rapoport, and he won again. You can see how playing tit for tat enhances opportunities for
cooperation in a society. A reputation as a tit-for-tat player will induce opponents to cooperate with you, knowing that if they do, you will. And if they don’t, you won’t.

Alas, the story gets even more complicated. Just because tit for tat won Axelrod’s tournament, that doesn’t mean it’s the best strategy in the real world. For one thing, it rarely won in head-to-head competition against any other strategy; it just did the best overall (because strategies that defeated tit for tat often lost badly against other strategies). In his talk at the conference, Nowak explored some of the nuances of the tit-for-tat strategy in a broader context.

At first glance, tit for tat’s success seems to defy the Nash equilibrium implication that everyone’s best strategy is to always defect. The mathematics of evolutionary game theory, based on analyzing an infinitely large population, seems to confirm that expectation. However, Nowak pointed out, for a more realistic finite population, you can show that a tit-for-tat strategy, under certain circumstances, can successfully invade the all-defect population.

But if you keep calculating what would happen as the game continues, it gets still more complicated. Tit for tat is an unforgiving strategy: if your opponent meant to cooperate but accidentally defected, you would then start defecting, and cooperation would diminish. If you work out what would happen in such a game, the tit-for-tat strategy becomes less successful than a modified strategy called “generous tit for tat,” which would then take over the population. “Generous tit for tat is a strategy that starts with cooperation, and I cooperate whenever you cooperate, but sometimes I will cooperate even when you defect,” Nowak explained.
“This allows me to correct for mistakes—if it’s an accidental mistake, you can correct for it.”16 As the games go on, the situation gets even more surprising, Nowak said. The generous tit-for-tat approach gets replaced by a strategy of full-scale cooperation! “Because if everybody plays generous tit for tat, or tit for tat, then nobody deliberately tries to defect; everybody is a cooperator.” Oh Happy Days.
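The mistake-correction point can be seen in a small deterministic sketch. Note the simplification: Nowak’s generous tit for tat forgives with some probability, while the version below forgives every defection, purely to make the contrast visible:

```python
def run(forgive, slips=(3,), rounds=8):
    """Two tit-for-tat players; player A accidentally defects in the
    rounds listed in `slips`. With forgive=True a player cooperates even
    after seeing a defection (generous tit for tat, made deterministic
    here for clarity)."""
    a_hist, b_hist = [], []
    for t in range(rounds):
        a = "C" if not b_hist or b_hist[-1] == "C" or forgive else "D"
        b = "C" if not a_hist or a_hist[-1] == "C" or forgive else "D"
        if t in slips:
            a = "D"  # A's accidental mistake
        a_hist.append(a)
        b_hist.append(b)
    return "".join(a_hist), "".join(b_hist)

# Plain tit for tat: one slip triggers an endless echo of retaliation.
print(run(forgive=False))  # ('CCCDCDCD', 'CCCCDCDC')
# Generous tit for tat: the slip is absorbed and cooperation resumes.
print(run(forgive=True))   # ('CCCDCCCC', 'CCCCCCCC')
```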
Except that “always cooperate” is not a stable strategy. As soon as everybody cooperates, an always-defect strategy can invade, just like a hawk among the doves, and clean up. So you start with all defect, go to tit for tat, then generous tit for tat, then all cooperate, then back to all defect. “And this,” said Nowak, “is the theory of war and peace in human history.”17

GAMES AND PUNISHMENT

Nevertheless, humans do cooperate. If indirect reciprocity isn’t responsible for that cooperation, what is? Lately, one popular view has been that cooperation thrives because it is enforced by the threat of punishment. And game theory shows how that can work.

Among the advocates of this view are the economists Samuel Bowles and Herbert Gintis and the anthropologist Robert Boyd. They call this idea “strong reciprocity.” A strong reciprocator rewards cooperators but punishes defectors. In this case, a more complicated game illustrates the interaction. Rather than playing the Prisoner’s Dilemma game (a series of one-on-one encounters), strong reciprocity researchers conduct experiments with various versions of public goods games. These are just the sorts of games, described in Chapter 3, that show how different individuals adopt different strategies: some are selfish, some are cooperators, some are reciprocators.

In a typical public goods game, players are given “points” at the outset (redeemable for real money later). In each round, players may contribute some of their points to a community fund and keep the rest. Then each player receives a fraction of the community fund. A greedy player will donate nothing, assuring a maximum personal payoff, although the group as a whole is then worse off. Altruistic players will share some of their points to increase the payoff to the whole group.
Reciprocators base their contributions on what others are contributing, thereby punishing the “free riders” who donate little but reap the benefits of the group (though in doing so they punish the rest of the group, including themselves). As we’ve seen, humankind comprises all three sorts
of players. Further studies suggest why the human race might have evolved to include punishers. In one such test of a public goods game,18 most players began by contributing an average of half their points. After several rounds, though, contributions dropped off. In one test, nearly three-fourths of the players donated nothing by round 10. It appeared to the researchers that people became angry at others who donated too little at the beginning, and retaliated by lowering their own donations, punishing everybody. That is to say, more of the players became reciprocators.

But in another version of the game, a researcher announced each player’s contribution after every round and solicited comments from the rest of the group. When low-amount donors were ridiculed, the cheapskates coughed up more generous contributions in later rounds. When nobody criticized the low donors, later contributions dropped. Shame, apparently, induced improved behavior.

Other experiments consistently show that noncooperators risk punishment. So it may be that in the evolutionary past, groups containing punishers, and thus more incentive for cooperation, outsurvived groups that did not practice punishment. The tendency to punish may therefore have become ingrained in surviving human populations, even though punishers act at a cost to themselves. (“Ingrained” might not mean just in the genes, though; many experts believe that culture transmits the punishment attitude down through the generations.)

Of course, it’s not so obvious what form that punishment might have taken back in the human evolutionary past. Bowles and Gintis have suggested that the punishment might have consisted of ostracism, making the cost to the punisher relatively low while still inflicting a significant cost on the noncooperator.
They show how game theory interactions would naturally lead societies to develop with some proportion of all three types: noncooperators (free riders), cooperators, and punishers (reciprocators), just as other computer simulations have shown. The human race plays a mixed strategy.
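The payoff structure of the public goods game described above can be sketched as follows. The endowment of 20 points and the multiplier of 1.5 are illustrative assumptions, not figures from the experiments cited:

```python
def public_goods_payoffs(contributions, endowment=20, multiplier=1.5):
    """Each player keeps (endowment - contribution); the pooled
    contributions are multiplied and split equally among all players."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Three full contributors and one free rider: the free rider earns the
# most (42.5 vs 22.5), but if all four contributed everything, each
# would earn 30. Donating nothing is individually best, collectively worst.
print(public_goods_payoffs([20, 20, 20, 0]))
```

Because the multiplier (1.5) is smaller than the group size (4), each point contributed returns less than a point to its donor, so free riding dominates even though full cooperation maximizes the group total.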
Still, experts argue about these issues. I came across one paper showing that, in fact, altruism could evolve solely through benefits to the altruistic individual, not necessarily to the group, based on simulations of yet another popular game. Known as the ultimatum game, it is widely used today in another realm of game theory research, the “behavioral game theory” explored by scientists like Colin Camerer. Behavioral game theorists believe that getting to the roots of human social behavior, understanding the Code of Nature, ultimately requires knowing what makes individuals tick. In other words, you need to get inside people’s heads. And the popular way of doing that has spawned a controversial hybrid discipline uniting game theory, economics, psychology, and neuroscience: neuroeconomics.