A Beautiful Math: John Nash, Game Theory, and the Modern Quest for a Code of Nature
3
Nash’s Equilibrium
Game theory’s foundation
Nash’s theory of noncooperative games should now be recognized as one of the outstanding intellectual advances of the twentieth century … comparable to that of the discovery of the DNA double helix in the biological sciences.
—economist Roger Myerson
As letters of recommendation go, it was not very elaborate, just a single sentence: “This man is a genius.”
That was how Carnegie Tech professor R. L. Duffin described John Nash to the faculty at Princeton University, where Nash entered as a 20-year-old graduate student in 1948. Within two years, Duffin’s assessment had been verified. Nash’s “beautiful mind” had by then launched an intellectual revolution that eventually propelled game theory from the fad du jour to the foundation of the social sciences.
Shortly before Nash’s arrival at Princeton, von Neumann and Morgenstern had opened a whole new continent for mathematical exploration with the groundbreaking book Theory of Games and Economic Behavior. It was the Louisiana Purchase of economics. Nash played the role of Lewis and Clark.
As it turned out, Nash spent more time in the wilderness than Lewis and Clark did, as mental illness robbed the rationality of the
man whose math captured rationality’s essence. But before his prolonged departure, Nash successfully steered game theory toward the mathematical equivalent of manifest destiny. Though not warmly welcomed at first, Nash’s approach to game theory eventually captured a major share of the economic-theory market, leading to his Nobel Prize for economics in 1994. By then game theory had also conquered evolutionary biology and invaded political science, psychology, and sociology. Since Nash’s Nobel, game theory has infiltrated anthropology and neuroscience, and even physics. There is no doubt that game theory’s wide application throughout the intellectual world was made possible by Nash’s math.
“Nash carried social science into a new world where a unified analytical structure can be found for studying all situations of conflict and cooperation,” writes University of Chicago economist Roger Myerson. “The theory of noncooperative games that Nash founded has developed into a practical calculus of incentives that can help us to better understand the problems of conflict and cooperation in virtually any social, political, or economic institution.”1
So it’s not too outrageous to suggest that in a very real way, Nash’s math provides the foundation for a modern-day Code of Nature. But of course it’s not as simple as that. Since its inception, game theory has had a complicated and controversial history. Today it is worshiped by some but still ridiculed by others. Some experimenters claim that their results refute game theory; others say the experiments expand game theory and refine it. In any event, game theory has assumed such a prominent role in so many realms of science that it can no longer intelligently be ignored, as it often was in its early days.
IGNORED AT BIRTH
When von Neumann and Morgenstern introduced game theory as the math for economics, it made quite a splash. But most economists remained dry. In the mid-1960s, the economics guru Paul Samuelson praised the von Neumann–Morgenstern book’s insight
and impact—in other fields. “The book has accomplished everything except what it started out to do—namely, revolutionize economic theory,” Samuelson wrote.2
It’s not that economists hadn’t heard about it. In the years following its publication, Theory of Games and Economic Behavior was widely reviewed in social science and economics journals. In the American Economic Review, for example, Leonid Hurwicz admired the book’s “audacity of vision” and “depth of thought.”3 “The potentialities of von Neumann’s and Morgenstern’s new approach seem tremendous and may, one hopes, lead to revamping, and enriching in realism, a good deal of economic theory,” Hurwicz wrote. “But to a large extent they are only potentialities: results are still largely a matter of future developments.”4 A more enthusiastic assessment appeared in a mathematics journal, where a reviewer wrote that “posterity may regard this book as one of the major scientific achievements of the first half of the twentieth century.”5
The world at large also soon learned about game theory. In 1946, the von Neumann–Morgenstern book rated a front page story in the New York Times; three years later a major piece appeared in Fortune magazine.
It was also clearly appreciated from the beginning that game theory promised applications outside economics—that (as von Neumann and Morgenstern had themselves emphasized) it contained elements of the long-sought theory of human behavior generally. “The techniques applied by the authors in tackling economic problems are of sufficient generality to be valid in political science, sociology, or even military strategy,” Hurwicz pointed out in his review.6 And Herbert Simon, a Nobel laureate-to-be, made similar observations in the American Journal of Sociology. “The student of the Theory of Games … will come away from the volume with a wealth of ideas for application … of the theory into a fundamental tool of analysis for the social sciences.”7
Yet it was also clear from the outset that the original theory of games was severely limited. Von Neumann had mastered two-person zero-sum games, but introducing multiple players led to
problems. Game theory worked just fine if Robinson Crusoe was playing games with Friday, but the math for Gilligan’s Island wasn’t as rigorous.
Von Neumann’s approach to multiple-player games was to assume that coalitions would form. If Gilligan, the Skipper, and Mary Ann teamed up against the Professor, the Howells, and Ginger, you could go back to the simple two-person game rules. Many players might be involved, but if they formed two teams, the teams could take the place of individual players in the mathematical analysis.
But as later commentators noted, von Neumann had led himself into an inconsistency, threatening his theory’s internal integrity. A key part of two-person zero-sum games was choosing a strategy that was the best you could do against a smart opponent. Your best bet was to play your optimal (probably mixed) strategy no matter what anybody else did. But if coalitions formed among players in many-person games, as von Neumann believed they would, that meant your strategy would in fact depend on coordinating it with at least some of the other players. In any event, game theory describing many players interacting in non-zero-sum situations—that is, game theory applicable to real life—needed something more than the original Theory of Games had to offer. And that’s what John Nash provided.
BEAUTIFUL MATH
The book A Beautiful Mind offers limited insight into Nash’s math, particularly in regard to all the many areas of science where that math has lately become prominent.8 But the book reveals a lot about Nash’s personal troubles. Sylvia Nasar’s portrait of Nash is not very flattering, though. He is depicted as immature, self-centered, arrogant, uncaring, and oblivious. But brilliant.
Nash was born in West Virginia, in the coal-mining town of Bluefield, in 1928. While showing some interest in math in high school (he even took some advanced courses at a local college), he planned to become an electrical engineer, like his father. But by the time he enrolled at Carnegie Tech (the Carnegie Institute of Technology) in Pittsburgh, his choice for major had become chemical engineering. He soon switched to chemistry, but that didn’t last, either. Finding no joy in manipulating laboratory apparatus, Nash turned to math, where he excelled.
He first mixed math with economics while taking an undergraduate course at Carnegie Tech in international economics. In that class Nash conceived the idea for a paper on what came to be called the “bargaining problem.” As later observers noted, it was a paper obviously written by a teenager—not because it was intellectually naive, but because the bargaining he considered was over things like balls, bats, and pocket knives. Nevertheless the mathematical principles involved were clearly relevant to more sophisticated economic situations.
When Nash arrived at Princeton in 1948, it had already become game theory’s world capital. Von Neumann was at the Institute for Advanced Study, just a mile from the university, and Morgenstern was in the Princeton economics department. And at the university math department, a cadre of young game theory enthusiasts had begun exploring the new von Neumann– Morgenstern continent in earnest. Nash himself attended a game theory seminar led by Albert W. Tucker (but also explored game theory’s implications on his own).
Shortly after his arrival, Nash realized that his undergraduate ideas about the “bargaining problem” represented a major new game theory insight. He prepared a paper for publication (with assistance from von Neumann and Morgenstern, who “gave helpful advice as to the presentation”).
“Bargaining” represents a different form of game theory in which the players share some common concerns. Unlike the two-person zero-sum game, in which the loser loses what the winner wins, a bargaining game offers possible benefits to both sides. In this “cooperative” game theory, the goal is for all players to do the best they can, but not necessarily at the expense of the others. In a good bargain, both sides gain. A typical real-life bargaining situation would be negotiations between a corporation and a labor union.
In his bargaining paper, Nash discussed the situation when there is more than one way for the players to achieve a mutual benefit. The problem is to find which way maximizes the benefit (or utility) for both sides—given that both players are rational (and know how to quantify their desires), are equally skilled bargainers, and are equally knowledgeable about each other’s desires.
When bargaining over a possible exchange of resources (in Nash’s example, things like a book, ball, pen, knife, bat, and hat), the two players might assess the values of the objects differently. (To the athletic minded, a bat might seem more valuable than a book, while the more intellectually oriented bargainer might rank the book more valuable than the bat.) Nash showed how to consider such valuations and compute each player’s gain in utility for various exchanges, providing a mathematical map for finding the location of the optimal bargain—the one giving the best deal for both (in terms of maximizing the increase in their respective utilities).9
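Nash’s solution, it turns out, picks the deal that maximizes the product of the two players’ utility gains. The short Python sketch below illustrates the idea by brute force; the valuations, the starting endowments, and every number in it are invented for illustration, not taken from Nash’s paper.

```python
from itertools import product

# Hypothetical valuations (in utility units); the objects echo Nash's examples,
# but the numbers and starting endowments are invented for illustration.
alice_val = {"book": 4, "hat": 2, "pen": 1, "bat": 1, "ball": 1, "knife": 2}
bob_val   = {"book": 1, "hat": 1, "pen": 1, "bat": 4, "ball": 2, "knife": 6}

items = list(alice_val)
alice_start = {"bat", "ball", "knife"}   # Alice begins holding Bob's favorites

def utility(values, holding):
    return sum(values[item] for item in holding)

u_alice0 = utility(alice_val, alice_start)
u_bob0 = utility(bob_val, set(items) - alice_start)

best, best_deal = -1.0, None
# Try every way of dividing the six objects between the two players.
for assignment in product([0, 1], repeat=len(items)):
    a_hold = {i for i, who in zip(items, assignment) if who == 0}
    b_hold = set(items) - a_hold
    gain_a = utility(alice_val, a_hold) - u_alice0   # Alice's utility gain
    gain_b = utility(bob_val, b_hold) - u_bob0       # Bob's utility gain
    # Nash's criterion: among deals both would accept, maximize the
    # product of the two gains.
    if gain_a >= 0 and gain_b >= 0 and gain_a * gain_b > best:
        best, best_deal = gain_a * gain_b, (sorted(a_hold), sorted(b_hold))

print(best_deal, best)
```

With these made-up numbers the best bargain is not the full swap: Alice keeps the ball along with the book, hat, and pen, because that split maximizes the product of the gains (4 × 7 = 28).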
SEEKING EQUILIBRIUM
Nash’s bargaining problem paper would in itself have established him as one of game theory’s leading pioneers. But it was another paper, soon to become his doctoral dissertation, that established Nash as the theory’s prophet. It was the paper introducing the “Nash equilibrium,” eventually to become game theory’s most prominent pillar.
The idea of equilibrium is, of course, immensely important to many realms of science. Equilibrium means things are in balance, or stable. And stability turns out to be an essential idea for understanding many natural processes. Biological systems, chemical and physical systems, even social systems all seek stability. So identifying how stability is reached is often the key to predicting the future. If a situation is unstable—as many often are—you can predict the future course of events by figuring out what needs to happen to achieve stability. Understanding stability is a way of knowing where things are going.
The simplest example is a rock balanced atop a sharply peaked hill. It’s not a very stable situation, and you can predict the future pretty confidently: That rock is going to roll down the hill, reaching an equilibrium point in the valley. Another common example of equilibrium shows up when you try to dissolve too much sugar in a glass of iced tea. A pile of sugar will settle at the bottom of the glass. When the solution reaches equilibrium, molecules will continue to dissolve out of the pile, but at the same rate as other sugar molecules drop out of the tea and join the pile. The tea is then in a stable situation, maintaining a constant sweetness.
It’s the same principle, just a little more complicated, in a chemical reaction, where stability means achieving a state of “chemical equilibrium,” in which the amounts of the reacting chemicals and their products remain constant. In a typical reaction, two different chemical substances interact to produce a new, third substance. But it’s often not the case that both original substances will entirely disappear, leaving only the new one. At first, amounts of the reacting substances will diminish as the quantity of the product grows. But eventually you may reach a point where the amount of each substance doesn’t change. The reaction continues—but as the first two substances react to make the third, some of the third decomposes to replenish supplies of the first two. In other words, the action continues, but the big picture doesn’t change.
That’s chemical equilibrium, and it is described mathematically by what chemists call the law of mass action. Nash had just this sort of physical equilibrium in mind when he was contemplating stability in game theory. In his dissertation he refers to “the ‘mass-action’ interpretation of equilibrium,” noting that such an equilibrium is approached in a game as players “accumulate empirical information” about the payoffs of their strategies.10
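The mass-action picture is easy to simulate. The toy sketch below, with invented rate constants and starting concentrations, integrates a reaction A + B ⇌ C and shows the forward and backward rates coming into balance.

```python
# Toy simulation of a reaction A + B <-> C relaxing to chemical equilibrium.
# Rate constants and starting concentrations are invented for illustration.
kf, kb = 1.0, 0.5            # forward and backward rate constants
a, b, c = 1.0, 1.0, 0.0      # concentrations of A, B, and C
dt = 0.001                   # time step for a crude Euler integration

for _ in range(100_000):
    net = kf * a * b - kb * c    # law of mass action: net forward rate
    a -= net * dt
    b -= net * dt
    c += net * dt

# At equilibrium the forward and backward rates balance, so the
# concentrations stop changing even though the reaction continues.
print(round(kf * a * b, 4), round(kb * c, 4))   # prints: 0.25 0.25
```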
When equilibrium is reached in a chemical reaction, the quantities of the chemicals no longer change; when equilibrium is reached in a game, nobody has any incentive to change strategies—so the choice of strategies should remain constant (the game situation is, in other words, stable). All the players should be satisfied with the strategy they’ve adopted, in the sense that no other strategy would do better (as long as nobody else changes strategies, either). Similarly, in social situations, stability means that everybody is pretty much content with the status quo. It may not be that you like things the way they are, but changing them will only make things worse. There’s no impetus for change, so like a rock in a valley, the situation is at an equilibrium point.
In a two-person zero-sum game, you can determine the equilibrium point using von Neumann’s minimax solution. Whether using a pure strategy or a mixed strategy, neither player has anything to gain by deviating from the optimum strategy that game theory prescribes. But von Neumann did not prove that similarly stable solutions could be found when you moved from the Robinson Crusoe–Friday economy to the Gilligan’s Island economy or Manhattan Island economy. And as you’ll recall, von Neumann thought the way to analyze large economies (or games) was by considering coalitions among the players.
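For a game as small as matching pennies—the standard two-person zero-sum example, in which the row player wins a penny if the two coins match and loses one if they don’t—the minimax solution can be found by brute force: scan the row player’s possible mixtures and keep the one with the best guaranteed payoff. This is a sketch, not anything from the chapter’s sources.

```python
# Matching pennies: the row player wins 1 if the two coins match, loses 1 if
# they don't. Scan the row player's mixed strategies and keep the one whose
# worst-case (guaranteed) payoff is largest -- the minimax solution.
best_p, best_value = None, float("-inf")
for i in range(101):
    p = i / 100                          # probability of playing heads
    vs_heads = p * 1 + (1 - p) * -1      # expected payoff if column shows heads
    vs_tails = p * -1 + (1 - p) * 1      # expected payoff if column shows tails
    guaranteed = min(vs_heads, vs_tails) # a smart opponent forces the worst case
    if guaranteed > best_value:
        best_p, best_value = p, guaranteed

print(best_p, best_value)   # prints: 0.5 0.0
```

Only the 50–50 mix guarantees the game’s value of zero; any lean toward heads or tails hands a smart opponent an edge, which is why neither player gains by deviating.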
Nash, however, took a different approach—deviating from the “party line” in game theory, as he described it decades later. Suppose there are no coalitions, no cooperation among the players. Every player wants the best deal he or she can get. Is there always a set of strategies that makes the game stable, giving each player the best possible personal payoff (assuming everybody chooses the best available strategy)? Nash answered yes. Borrowing a clever piece of mathematical trickery known as a “fixed-point theorem,” he proved that every game of many players (as long as you didn’t have an infinite number of players) had an equilibrium point.
Nash derived his proof in different ways, using either of two fixed-point theorems—one by Luitzen Brouwer, the other by Shizuo Kakutani. A detailed explanation of fixed-point theorems requires some dense mathematics, but the essential idea can be illustrated rather simply. Take two identical sheets of paper, crumple one up, and place it on top of the other. Somewhere in the crumpled sheet will be a point lying directly above the corresponding point on the uncrumpled sheet. That’s the fixed point. If you don’t believe it, take a map of the United States and place it
on the floor—any floor within the United States. (The map represents the crumpled up piece of paper.) No matter where you place the map, there will be one point that is directly above the corresponding actual location in the United States. Applying the same principle to the players in a game, Nash showed that there was always at least one “stable” point where competing players’ strategies would be at equilibrium.
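The map-on-the-floor demonstration can even be run numerically if you treat the map as a contraction—a scaled-down, shifted copy of the room. Brouwer’s theorem needs no shrinking, but a contraction gives the simplest runnable picture; the scale factor and the map’s position below are made up.

```python
# The map-on-the-floor demonstration, run numerically. The map is a
# scaled-down, shifted copy of the floor; iterating the scaling homes in on
# the one point of the map lying exactly above the spot it represents.
scale = 0.001                 # a 1:1000 map (assumed)
offset = (3.0, 4.0)           # where the map sits on the floor (assumed)

def to_map(x, y):
    """Where a point of the floor appears on the map lying on that floor."""
    return offset[0] + scale * x, offset[1] + scale * y

x, y = 0.0, 0.0
for _ in range(20):           # each pass shrinks the error by the scale factor
    x, y = to_map(x, y)

# Closed form for the fixed point of x -> offset + scale * x:
fx, fy = offset[0] / (1 - scale), offset[1] / (1 - scale)
print(abs(x - fx) < 1e-9 and abs(y - fy) < 1e-9)   # prints: True
```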
“An equilibrium point,” he wrote in his Ph.D. thesis, “is … such that each player’s mixed strategy maximizes his payoff if the strategies of the others are held fixed.”11 In other words, if you’re playing such a game, there is at least one combination of strategies such that if you change yours (and nobody else changes theirs) you’ll do worse. To put it more colloquially, says economist Robert Weber, you could say that “the Nash equilibrium tells us what we might expect to see in a world where no one does anything wrong.”12 Or as Samuel Bowles described it to me, the Nash equilibrium “is a situation in which everybody is doing the best they can, given what everybody else is doing.”13
Von Neumann was dismissive of Nash’s result, as it did turn game theory in a different direction. But eventually many others recognized its brilliance and usefulness. “The concept of the Nash equilibrium is probably the single most fundamental concept in game theory,” declared Bowles. “It’s absolutely fundamental.”14
GAME THEORY GROWS UP
Nash published his equilibrium idea quickly. A brief (two-page) version appeared in 1950 in the Proceedings of the National Academy of Sciences. Titled “Equilibrium Points in n-Person Games,” the paper established concisely (although not particularly clearly for nonmathematicians) the existence of “solutions” to many-player games (a solution being a set of strategies such that no single player could expect to do any better by unilaterally trying a different strategy). He expanded the original paper into his Ph.D. thesis, and a longer version was published in 1951 in Annals of Mathematics, titled “Non-cooperative Games.”
Von Neumann and Morgenstern, Nash politely noted in his paper, had produced a “very fruitful” theory of two-person zero-sum games. Their theory of many-player games, however, was restricted to games that Nash termed “cooperative,” in the sense that it analyzed the interactions among coalitions of players. “Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed that each participant acts independently, without collaboration or communication with any of the others.”15 In other words, Nash devised an “every man for himself” version of many-player games—which is why he called it “noncooperative” game theory. When you think about it, that approach pretty much sums up many social situations. In a dog-eat-dog world, the Nash equilibrium describes how every dog can have its best possible day. “The distinction between non-cooperative and cooperative games that Nash made is decisive to this day,” wrote game theorist Harold Kuhn.16
To me, the really key point about the Nash equilibrium is that it cements the analogy between game theory math and the laws of physics—game theory describing social systems, the laws of physics describing natural systems. In the natural world, everything seeks stability, which means seeking a state of minimum energy. The rock rolls downhill because a rock at the top of a hill has a high potential energy; it gives that energy away by rolling downhill. It’s because of the law of gravity. In a chemical reaction, all the atoms involved are seeking a stable arrangement, possessing a minimum amount of energy. It’s because of the laws of thermodynamics.
And just as in a chemical reaction all the atoms are simultaneously seeking a state with minimum energy, in an economy all the people are seeking to maximize their utility. A chemical reaction reaches an equilibrium enforced by the laws of thermodynamics; an economy should reach a Nash equilibrium dictated by game theory.17
Real life isn’t quite that simple, of course. There are usually complicating factors. A bulldozer can push the rock back up the hill; you can add chemicals to spark new chemistry in a batch of
molecules. When people are involved, all sorts of new sources of unpredictability complicate the game theory playing field. (Imagine how much trickier chemistry would become if molecules could think.18)
Nevertheless, Nash’s notion of equilibrium captures a critical feature of the social world. Using Nash’s math, you can figure out how people could reach stability in a social situation by comparing that situation to an appropriate game. So if you want to apply game theory to real life, you need to devise a game that captures the essential features of the real-life situation you’re interested in. And life, if you haven’t noticed, poses all sorts of different circumstances to cope with.
Consequently game theorists have invented more games than you can buy at Toys R Us. Peruse the game theory literature, and you’ll find the matching pennies game, the game of chicken, public goods games, and the battle of the sexes. There’s the stag hunt game, the ultimatum game, and the “so long sucker” game. And hundreds of others. But by far the most famous of all such games is a diabolical scenario known as the Prisoner’s Dilemma.
TO RAT OR NOT TO RAT
As in all my books, a key point has once again been anticipated by Edgar Allan Poe. In “The Mystery of Marie Rogêt,” Poe described a murder believed by Detective Dupin to have been committed by a gang. Dupin’s strategy is to offer immunity to the first member of the gang to come forward and confess. “Each one of a gang, so placed, is not so much … anxious for escape, as fearful of betrayal,” Poe’s detective reasoned. “He betrays eagerly and early that he may not himself be betrayed.”19 It’s too bad that Poe (who was in fact a trained mathematician) had not thought to work out the math of betrayal—he might have invented game theory a century earlier.
As it happened, the Prisoner’s Dilemma in game theory was first described by Nash’s Princeton professor, Albert W. Tucker, in 1950. At that time, Tucker was visiting Stanford and had mentioned his game theory interests. He was asked unexpectedly to present a seminar, so he quickly conjured up the scenario of two criminals captured by the police and separately interrogated.20
You know the story. The police have enough evidence to convict two criminal conspirators on a lesser offense, but need one or the other to rat out his accomplice to make an armed robbery charge stick. So if both keep mum, both will get a year in prison. But whoever agrees to testify goes free. If only one squeals, the partner gets five years. If both sing like a canary, then both get three years (a two-year reduction for copping a plea).
Years in prison for Bob, Alice:

                   Alice keeps mum    Alice testifies
  Bob keeps mum         1, 1              5, 0
  Bob testifies         0, 5              3, 3
If you look at this game matrix, you can easily see where the Nash equilibrium is. There’s only one combination of choices where neither player has any incentive to change strategies—they both rat each other out. Think about it. Let’s say our game theory experts Alice and Bob have decided to turn to crime, but the police catch them. The police shine a light in Bob’s face and spell out the terms of the game. He has to decide right away. He ponders what Alice might do. If Alice rats him out—a distinct possibility, knowing Alice—his best choice is to rat her out, too, thereby getting only three years instead of five. But suppose Alice keeps mum. Then Bob’s best choice is still to rat her out, as he’ll then get off free. No matter which strategy Alice chooses, Bob’s best choice is betrayal, just as Poe’s detective had intuited. And Alice, obviously, must reason the same way about Bob. The only stable outcome is for both to agree to testify, ratting out their accomplice.
Ironically, and the reason it’s called a dilemma, they would both be better off overall if they both kept quiet. But they are interrogated separately, with no communication between them permitted. So the best strategy for each individual produces a result that is not the best result for the team. If they both keep mum (that is, they cooperate with each other), they spend a total of two years in prison (one each). If one rats out the other (technical term: defects), but the other keeps mum, they serve a total of five years (all by the silent partner). But when they rat each other out, they serve a total of six years—a worse overall outcome than any of the other pairs of strategies. The Nash equilibrium—the stable pair of choices dictated by self-interest—produces a poorer payoff for the group. From the standpoint of game theory and Nash’s math, the choice is clear. If everybody’s incentive is to get the best individual deal, the proper choice is to defect.
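The deviation-checking logic of the last few paragraphs is mechanical enough to automate. Here is a short sketch that tests all four strategy pairs against the sentence lengths from the story above, keeping only the pairs where neither prisoner can cut his own sentence by switching alone.

```python
from itertools import product

# Sentences (years for Bob, years for Alice) from the story: both mum, one
# year each; a lone rat goes free while the partner gets five; both rat,
# three years each.
MUM, RAT = 0, 1
years = {
    (MUM, MUM): (1, 1),
    (MUM, RAT): (5, 0),
    (RAT, MUM): (0, 5),
    (RAT, RAT): (3, 3),
}

def is_nash(bob, alice):
    """Stable iff neither player can cut his own sentence by switching alone."""
    bob_now, alice_now = years[(bob, alice)]
    bob_best = min(years[(b, alice)][0] for b in (MUM, RAT))
    alice_best = min(years[(bob, a)][1] for a in (MUM, RAT))
    return bob_now == bob_best and alice_now == alice_best

equilibria = [pair for pair in product((MUM, RAT), repeat=2) if is_nash(*pair)]
print(equilibria)   # prints: [(1, 1)] -- mutual betrayal is the only equilibrium
```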
In real life, of course, you never know what will happen, because the crooks may have additional considerations (such as the prospect of sleeping with the fishes if they rat out the wrong guy). Consequently the Nash equilibrium calculation does not always predict how people will really behave. Sometimes people temper their choices with considerations of fairness, and sometimes they act out of spite. In Prisoner’s Dilemma situations, some people actually do choose to cooperate. But that doesn’t detract from the importance of the Nash equilibrium, as economists Charles Holt and Alvin Roth point out. “The Nash equilibrium is useful not just when it is itself an accurate predictor of how people will behave in a game but also when it is not,” they write, “because then it identifies a situation in which there is a tension between individual incentives and other motivations.” So if people cooperate (at least at first) in a Prisoner’s Dilemma situation, Nash’s math tells us that such cooperation, “because it is not an equilibrium, is going to be unstable in ways that can make cooperation difficult to maintain.”21
Though it is a simplified representation of real life, the Prisoner’s Dilemma game does capture the essence of many social
interactions. But obviously you cannot easily assess any social situation by calculating the Nash equilibrium. Real-life games often involve many players and complicated payoff rules. While Nash showed that there is always at least one equilibrium point, it’s another matter to figure out what that point is. (And often there is more than one Nash equilibrium point, which makes things really messy.) Remember, each player’s “strategy” will typically be a mixed strategy, drawn from maybe dozens or hundreds or thousands (or more) of pure “specific” strategies. In most games with many players, calculating all the probabilities for all the combinations of all those choices exceeds the computational capacity of Intel, Microsoft, IBM, and Apple put together.
THE PUBLIC GOOD
It’s not hopeless, though. Consider another favorite game to illustrate “defection”—the public goods game. The idea is that some members in a community reap the benefits of membership without paying their dues. It’s like watching public television but never calling in to make a pledge during the fund drives. At first glance, the defector wins this game—getting the benefit of enjoying Morse and Poirot without paying a price. But wait a minute. If everybody defected, there would be no benefit for anybody. The free riders would become hapless hitchhikers.
Similarly, suppose your neighborhood association decided to collect donations to create a park. You’d enjoy the park, but if you reason that enough others in the neighborhood will contribute enough money to build it, you might decline to contribute. If everybody reasons the same way, though, there will be no park. But suppose that defecting (declining to pay) and cooperating (contributing your fair share) are not the only possible strategies. You can imagine a third strategy, called reciprocating. If you are a reciprocator, you pay only if you know that a certain number of the other players have decided to pay. Computer simulations of this kind of game suggest that a mix of these strategies among the players can reach a Nash equilibrium.
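A minimal version of such a simulation might look like the sketch below. The group size, multiplier, threshold, and round count are all invented; it is meant only to show the kind of dynamics described, not to reproduce any published model.

```python
# Toy repeated public-goods game with three fixed strategy types. Everything
# here -- group size, multiplier, threshold, round count -- is invented; it is
# a sketch of the kind of simulation described, not the published one.
MULTIPLIER, THRESHOLD, ROUNDS = 1.8, 2, 50
types = ["cooperator", "defector", "reciprocator", "reciprocator"]

paid_last = [True] * len(types)       # assume an optimistic first impression
payoffs = [0.0] * len(types)

for _ in range(ROUNDS):
    pays = []
    for i, kind in enumerate(types):
        if kind == "cooperator":          # always chips in
            pays.append(True)
        elif kind == "defector":          # never chips in
            pays.append(False)
        else:                             # reciprocator: pay only if enough
            others = sum(paid_last) - paid_last[i]   # others paid last round
            pays.append(others >= THRESHOLD)
    pot = MULTIPLIER * sum(pays)          # contributions are multiplied...
    share = pot / len(types)              # ...and shared by all, payers or not
    for i, paid in enumerate(pays):
        payoffs[i] += share - (1 if paid else 0)
    paid_last = pays

print([round(x, 1) for x in payoffs])   # prints: [17.5, 67.5, 17.5, 17.5]
```

With these numbers the contributions persist—the reciprocators keep seeing enough other payers—but the lone defector still pockets the most, which is exactly the free-rider tension described above.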
Experiments with real people show the same thing. One study, reported in 2005, tested college students on a contrived version of the public goods game. Four players were each given tokens (representing money) and told they could contribute as many as they liked into a “public pot,” keeping the rest in their personal account. The experimenter then doubled the number of tokens in the pot. One player at a time was told how much had been contributed to the pot and then given a chance to change his or her contribution. When the game ended (after a random number of rounds), all the tokens were then evenly divided up among all the players.
How would you play? Since, in the end, all four players split the pot equally, the people who put in the least to begin with end up with the most tokens—their share of the pot plus the money they held back in their personal account. Of course, if nobody put any in to begin with, nobody reaped the benefit of the experimenter’s largesse, kind of like a local government forgoing federal matching funds for a highway project. So it would seem to be a good strategy to donate something to the pot. But if you want to get a better payoff than anyone else, you should put in less than the others. Maybe one token. On the other hand, everybody in the group will get more if you put more in the pot to begin with. (That way, you might not get more than everybody else, but you’ll get more than you otherwise would.)
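The arithmetic behind that reasoning is easy to check. A quick sketch, assuming a 10-token endowment per player (the study's actual figures may differ):

```python
def final_payoffs(contributions, endowment=10):
    """Token payoffs for one pass of the pot game described above:
    the experimenter doubles the pot, the pot is split evenly, and
    each player keeps whatever she didn't contribute. A 10-token
    endowment is assumed purely for illustration."""
    pot = 2 * sum(contributions)
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# A lone near-free-rider (one token) comes out ahead of the group...
print(final_payoffs([10, 10, 10, 1]))   # [15.5, 15.5, 15.5, 24.5]
# ...but everyone beats that 15.5 if all contribute fully.
print(final_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
```

Holding back beats the table, but a table of full contributors beats a table with a holdout, which is exactly the tension the paragraph describes.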
When groups of four played this game repeatedly, a pattern of behavior emerged. Players fell into three readily identifiable groups: cooperators, defectors (or “free riders”), and reciprocators. Since all the players learned at some point how much had been contributed, they could adjust their behavior accordingly. Some players remained stingy (defectors), some continued to contribute generously (cooperators), and others contributed more if others in the group had donated significantly (reciprocators).
Over time, the members of each group earned equal amounts of money, suggesting that something like a Nash equilibrium had been achieved—they all won as much as they could, given the strategy of the others. In other words, in this kind of game, the human race plays a mixed strategy—about 13 percent cooperators,
20 percent defectors (free riders), and 63 percent reciprocators in this particular experiment. “Our results support the view that our human subject population is in a stable … equilibrium of types,” wrote the researchers, Robert Kurzban and Daniel Houser.22 Knowing about the Nash equilibrium helps make sense of results like these.
GAME THEORY TODAY
Together with his paper on the bargaining problem (which treats cooperative game situations), Nash’s work on equilibria in many-player games greatly expanded game theory’s scope beyond von Neumann and Morgenstern’s book, providing the foundation for much of the work in game theory going on today. There’s more to game theory than the Nash equilibrium, of course, but it is still at the heart of current endeavors to apply game theory to society broadly.
Over the years, game theorists have developed math for games where coalitions do form, where information is incomplete, and where players are less than perfectly rational. Models of all these situations, plus many others, can be built using game theory's mathematical tools. It would take a whole book (actually, several books) to describe all of those subsequent developments, and many such books have been written. You don't need to know all the details of that history, but it is worth knowing that game theory is a deep and complicated subject, full of highly technical and nuanced contributions of substantial mathematical sophistication.
Even today game theory remains very much a work in progress. Many deep questions about it do not seem to have been given compelling answers. In fact, if you peruse the various accounts of game theory, you are likely to come away confused. Its practitioners do not all agree on how to interpret some aspects of game theory, and they certainly disagree about how to advertise it.
Some presentations seem to suggest that game theory should predict human behavior—what choices people will make in games
(or in economics or other realms of life). Others insist that game theory does not predict, but prescribes—it tells you what you ought to do (if you want to win the game), not what any player would actually do in a game. Or some experts will say that game theory predicts what a “rational” person will do, acknowledging that there’s no accounting for how irrational some people (even those playing high-stakes games) can be. Of course, if you ask such experts to define “rational,” they’re likely to say that it means behaving in the way that game theory predicts.
To me, it seems obvious that basic game theory does not always successfully predict what people will do, since most people are about as rational as pi. Neither is it obvious that game theory offers a foolproof way to determine what is the rational thing to do. There may always be additional considerations in making a “rational” choice that have not been included in game theory’s mathematical framework.
Game theory does predict outcomes for different strategies in different situations, though. In principle you could use game theory to analyze lots of ordinary games, like checkers, as well as many problems in the real world where the concept of game is much broader. It can range from trying to beat another car into a parking place to global thermonuclear war. The idea is that when faced with deciding what to do in some strategic interaction, the math can tell you which move is most likely to be successful. So if you know what you want to achieve, game theory can help you—if your circumstances lend themselves to game theory representation.
The question is, are there ever any such circumstances? Early euphoria about game theory’s potential to illuminate social issues soon dissipated, as a famous game theory text noted in 1957. “Initially there was a naive band-wagon feeling that game theory solved innumerable problems of sociology and economics, or that, at the least, it made their solution a practical matter of a few years’ work. This has not turned out to be the case.”23
Such an early pessimistic assessment isn’t so surprising. There’s always a lack of patience in the scientific world; many people want new ideas to pay off quickly, even when more rational observers
realize that decades of difficult work may be needed for a theory to reach maturity. But even six decades after the von Neumann–Morgenstern book appeared, you could find some rather negative assessments of game theory's relevance to real life.
In an afterword to the 60th-anniversary edition of Theory of Games, Ariel Rubinstein acknowledged that game theory had successfully entrenched itself in economic science. “Game theory has moved from the fringe of economics into its mainstream,” he wrote. “The distinction between economic theorist and game theorist has virtually disappeared.”24 But he was not impressed with claims that game theory was really good for much else, not even games. “Game theory is not a box of magic tricks that can help us play games more successfully. There are very few insights from game theory that would improve one’s game of chess or poker,” Rubinstein wrote.25
He scoffed at theorists who believed game theory could actually predict behavior, or even improve performance in real-life strategic interactions. “I have never been persuaded that there is a solid foundation for this belief,” he wrote. “The fact that the academics have a vested interest in it makes it even less credible.” Game theory in Rubinstein’s view is much like logic—form without substance, a guide for comparing contingencies but not a handbook for action. “Game theory does not tell us which action is preferable or predict what other people will do…. The challenges facing the world today are far too complex to be captured by any matrix game.”26
OK—maybe this book should end here. But no. I think Rubinstein has a point, but also that he is taking a very narrow view. In fact, I think his attitude neglects an important fact about the nature of science.
Scientists make models. Models capture the essence of some aspect of something, hopefully the aspect of interest for some particular use or another. Game theory is all about making models of human interactions. Of course game theory does not capture all the nuances of human behavior—no model does. No map of Los Angeles shows every building, every crack in every sidewalk, or
every pothole—if it showed all that, it wouldn’t be a map of Los Angeles, it would be Los Angeles. Nevertheless, a map that leaves out all those things can still help you get where you want to go (although in L.A. you might get there slowly).
Naturally, game theory introduces simplifications—it is, after all, a model of real-life situations, not real life itself. In that respect it is just like all other science, providing simplified models of reality that are accurate enough to draw useful conclusions about that reality. You don’t have to worry about the chemical composition of the moon and sun when predicting eclipses, only their masses and motions. It’s like predicting the weather. The atmosphere is a physical system, but Isaac Newton was no meteorologist. Eighteenth-century scholars did not throw away Newton’s Principia because it couldn’t predict thunderstorms. But after a few centuries, physics did get to the point where it could offer reasonably decent weather forecasts. Just because game theory cannot predict human behavior infallibly today doesn’t mean that its insights are worthless.
In his book Behavioral Game Theory, Colin Camerer addresses these issues with exceptional insight and eloquence. It is true, he notes, that many experiments produce results that seem—at first— to disconfirm game theory’s predictions. But it’s clearly a mistake to think that therefore there is something wrong with game theory’s math. “If people don’t play the way theory says, their behavior does not prove the mathematics wrong, any more than finding that cashiers sometimes give the wrong change disproves arithmetic,” Camerer points out.27 Besides, game theory (in its original form) is based on players’ behaving rationally and selfishly. If actual real-life behavior departs from game theory’s forecast, perhaps there’s just something wrong with the concepts of rationality and selfishness. In that case, incorporating better knowledge of human psychology (especially in social situations) into game theory’s equations can dramatically improve predictions of human behavior and help explain why that behavior is sometimes surprising. That is exactly the sort of thing that Camerer’s specialty, behavioral game theory, is intended to do. “The goal is not to ‘disprove’ game theory … but to improve it,” Camerer writes.28
As it turns out, game theory is widely used today in scientific efforts to understand all sorts of things. While Nash’s 1994 Nobel Prize recognized the math establishing game theory’s foundations, the 2005 economics Nobel trumpeted the achievements of two pioneers of game theory’s many important applications. Economist Thomas Schelling, of the University of Maryland, understood in the 1950s that game theory offered a mathematical language suitable for unifying the social sciences, a vision he articulated in his 1960 book The Strategy of Conflict. “Schelling’s work prompted new developments in game theory and accelerated its use and application throughout the social sciences,” the Royal Swedish Academy of Sciences remarked on awarding the prize.29
Schelling paid particular attention to game-theoretic analysis of international relations, specifically (not surprising for the time) focusing on the risks of armed conflict. In gamelike conflict situations with more than one Nash equilibrium, Schelling showed how to determine which of the equilibrium possibilities was most plausible. And he identified various counterintuitive conclusions about conflict strategy that game theory revealed. An advancing general burning bridges behind him would seem to be limiting his army’s options, for example. But the signal sent to the enemy—that the oncoming army had no way to retreat—would likely diminish the opposition’s willingness to fight. Similar reasoning transferred to the economic realm, where a company might decide to build a big, expensive production plant, even if it meant a higher cost of making its product, if by flaunting such a major commitment it scared competitors out of the market.
Schelling’s insights also extended to games where all the players desire a common (coordinated) outcome more than any particular outcome—in other words, when it is better for everybody to be on the same page, regardless of what the page is. A simple example would be a team of people desiring to eat dinner at the same restaurant. It doesn’t matter what restaurant (as long as the food is not too spicy); the goal is for everyone to be together. When everybody can communicate with each other, coordination is rarely a problem (or at least it shouldn’t be), but in many such
situations communication is restricted. Schelling shed considerable light on the game-theoretic issues involved in reaching coordinated solutions to such social problems. Some of Schelling’s later work applied game theory to the rapid change in some neighborhoods from a mixture of races to being largely segregated, and to limits on individual control over behavior—why people do so many things they really don’t want to do, like smoke or drink too much, while not doing things they really want to, like exercising.
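Coordination games like the restaurant example, and Schelling's multiple-equilibrium scenarios generally, can be checked by brute force: try every outcome and keep the ones where neither player gains by deviating alone. A minimal sketch, with purely illustrative payoff numbers:

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Exhaustively find pure-strategy Nash equilibria of a
    two-player game. payoffs[(a, b)] = (payoff to 1, payoff to 2).
    An outcome is an equilibrium if neither player can do better
    by unilaterally switching strategies."""
    acts1 = sorted({a for a, _ in payoffs})
    acts2 = sorted({b for _, b in payoffs})
    equilibria = []
    for a, b in product(acts1, acts2):
        u1, u2 = payoffs[(a, b)]
        if (all(payoffs[(x, b)][0] <= u1 for x in acts1) and
                all(payoffs[(a, y)][1] <= u2 for y in acts2)):
            equilibria.append((a, b))
    return equilibria

# Two diners just want to end up at the same restaurant; which one
# doesn't matter. Any matched outcome pays 1 to each, mismatch pays 0.
restaurants = ['Thai', 'Diner']
game = {(a, b): ((1, 1) if a == b else (0, 0))
        for a in restaurants for b in restaurants}
print(pure_nash_equilibria(game))  # both matched outcomes qualify
```

The search finds two equilibria, one per restaurant, which is exactly Schelling's puzzle: game theory alone can't say which one the players will coordinate on.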
2005’s other economics Nobel winner, Robert Aumann, has long been a leading force in expanding the scope of game theory to many disciplines, from biology to mathematics. A German-born Israeli at the Hebrew University of Jerusalem, Aumann took a special interest in long-term cooperative behavior, a topic of particular relevance to the social sciences (after all, long-term cooperation is the defining feature of civilization itself). In particular, Aumann analyzed the Prisoner’s Dilemma game from the perspective of infinitely repeated play, rather than the one-shot deal in which both players’ best move is to rat the other out. Over the long run, Aumann showed, cooperative behavior can be sustained even by players who still have their own self-interest at heart.
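That observation, that repetition can sustain cooperation among self-interested players, can be illustrated with a toy simulation. Tit-for-tat is used here only as a convenient conditional strategy (it is not Aumann's construction), and the payoff values are standard textbook numbers:

```python
def iterated_pd(strategy1, strategy2, rounds=100):
    """Average per-round payoffs in a repeated Prisoner's Dilemma.

    A strategy is a function mapping the opponent's previous move
    ('C', 'D', or None on the first round) to a move. The payoff
    values (5, 3, 1, 0) are the standard textbook ones, used here
    purely for illustration.
    """
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    last1 = last2 = None
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = strategy1(last2), strategy2(last1)
        p1, p2 = payoff[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        last1, last2 = m1, m2
    return total1 / rounds, total2 / rounds

tit_for_tat = lambda prev: 'C' if prev is None else prev
always_defect = lambda prev: 'D'

print(iterated_pd(tit_for_tat, tit_for_tat))      # (3.0, 3.0)
print(iterated_pd(always_defect, always_defect))  # (1.0, 1.0)
```

Two tit-for-tat players average the cooperative payoff of 3 per round, well above the 1 per round that mutual defection, the one-shot equilibrium, delivers.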
Aumann’s “repeated games” approach had wide application, both in cases where it led to cooperation and where it didn’t. By showing how game theory’s rules could facilitate cooperation, he also identified the circumstances where cooperation was less likely—when many players are involved, for instance, or when communication is limited or time is short. Game theory helps to show why certain common forms of collective behavior materialize under such circumstances. “The repeated-games approach clarifies the raison d’être of many institutions, ranging from merchant guilds and organized crime to wage negotiations and international trade agreements,” the Swedish academy pointed out.
While Nobel Prizes shine the media spotlight on specific achievements of game theory, they tell only a small portion of the whole story. Game theory’s uses have expanded to multiple arenas in recent years. Economics is full of applications, from guiding negotiations between labor unions and management to auctioning
licenses for exploiting the electromagnetic spectrum. Game theory is helpful in matching medical residents to hospitals, in understanding the spread of disease, and in determining how best to vaccinate against various diseases—even to explain the incentives (or lack thereof) for hospitals to invest in fighting bacterial resistance to antibiotics. Game theory is valuable for understanding terrorist organizations and forecasting terrorist strategies. For analyzing voting behaviors, for understanding consciousness and artificial intelligence, for solving problems in ecology, for comprehending cancer. You can call on game theory to explain why the numbers of male and female births are roughly equal, why people get stingier as they get older, and why people like to gossip about other people.
Gossip, in fact, turns out to be a crucial outcome of game theory in action, for it’s at the heart of understanding human social behavior, the Code of Nature that made it possible for civilization to establish itself out of the selfish struggles to survive in the jungle. For it is in biology that game theory has demonstrated its power most dramatically, in explaining otherwise mysterious outcomes of Darwinian evolution. After all, people may not always play game theory the way you’d expect, but animals do, where the Code of Nature really is the law of the jungle.