2
Von Neumann’s Games
Game theory’s origins

Games combining chance and skill give the best representation of human life, particularly of military affairs and of the practice of medicine which necessarily depend partly on skill and partly on chance…. It would be desirable to have a complete study made of games, treated mathematically.

—Gottfried Wilhelm von Leibniz (quoted by Oskar Morgenstern, Dictionary of the History of Ideas)

It’s no mystery why economics is called the dismal science.

With most sciences, experts make pretty accurate predictions. Mix two known chemicals, and a chemist can tell you ahead of time what you’ll get. Ask an astronomer when the next solar eclipse will be, and you’ll get the date, time, and best viewing locations, even if the eclipse won’t occur for decades.

But mix people with money, and you generally get madness. And no economist really has any idea when you’ll see the next total eclipse of the stock market. Yet many economists continue to believe that they will someday practice a sounder science. In fact, some would insist that they are already practicing a sounder science—by viewing the economy as basically just one gigantic game.



At first glance, building economic science on the mathematical theory of games seems about as sensible as forecasting real-estate trends by playing Monopoly. But in the past half century, and particularly the past two decades, game theory has established itself as the precise mathematical tool that economists had long lacked. Game theory provides precision to the once fuzzy economic notion about how consumers compare their preferences (a measure labeled by the deceptively simple term utility). Even more important, game theory shows how to determine the strategies necessary to achieve the maximum possible utility—that is, to acquire the highest payoff—the presumed goal of every rational participant in the dogfights of economic life.

Yet while people have played games for millennia, and have engaged in economic exchange for probably just as long, nobody had ever made the connection explicit—mathematically—until the 20th century. This merger of games with economics—the mathematical mapping of the real world of choices and money onto the contrived realm of poker and chess—has revolutionized the use of math to quantify human behavior. And most of the credit for game theory’s invention goes to one of the 20th century’s most brilliant thinkers, the magical mathman John von Neumann.

LACK OF FOCUS

If any one person of the previous century personified the word polymath, it was von Neumann. I’m really sorry he died so young. Had von Neumann lived to a reasonably old age—say, 80 or so—I might have had the chance to hear him talk, or maybe even interview him. And that would have given me a chance to observe his remarkable genius for myself. Sadly, he died at the age of 53. But he lived long enough to leave a legendary legacy in several disciplines. His contributions to physics, mathematics, computer science, and economics rank him as one of the all-time intellectual giants of each field. Imagine what he could have accomplished if he’d learned to focus himself!

Of course, he accomplished plenty anyway. Von Neumann produced the standard mathematical formulation of quantum mechanics, for instance. He didn’t exactly invent the modern digital computer, but he improved it and pioneered its use for scientific research. And, apparently just for kicks, he revolutionized economics.

Born in 1903 in Hungary, von Neumann was given the name Janos but went by the nickname Jancsi. He was the son of a banker (who had paid for the right to use the honorific title von). As a child, Jancsi dazzled adults with his mental powers, telling jokes in Greek and memorizing the numbers in phone books. Later he enrolled in the University of Budapest as a math major, but didn’t bother to attend the classes—at the same time, he was majoring in chemistry at the University of Berlin. He traveled back to Budapest for exams, aced them, and continued his chemical education, first at Berlin and then later at the University of Zurich.

I’ve recounted some of von Neumann’s adult intellectual escapades before (in my book The Bit and the Pendulum), such as the time when he was called in as a consultant to determine whether the Rand Corporation needed a new computer to solve a difficult problem. Rand didn’t need a new computer, von Neumann declared, after solving the problem in his head.

In her biography of John Nash, Sylvia Nasar relates another telling von Neumann anecdote, about a famous trick-question math problem. Two cyclists start out 20 miles apart, heading for each other at 10 miles an hour. Meanwhile a fly flies back and forth between the bicycles at 15 miles an hour. How far has the fly flown by the time the bicycles meet? You can solve it by adding up the fly’s many shorter and shorter paths between bikes (this would be known in mathematical terms as summing the infinite series). If you detect the trick, though, you can solve the problem in an instant—it will take the bikes an hour to meet, so the fly obviously will have flown 15 miles. When jokesters posed this question to von Neumann, sure enough, he answered within a second or two. Oh, you knew the trick, they moaned. “What trick?” said von Neumann. “All I did was sum the infinite series.”
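
Here, as an aside of my own (not from the book), is a short Python sketch of what "summing the infinite series" amounts to: it adds up the fly's successive legs and compares the total with the one-hour shortcut. The speeds and distances are the ones in the anecdote.

    # Two bikes start 20 miles apart, each moving at 10 mph; the fly shuttles at 15 mph.
    gap = 20.0          # miles currently separating the bicycles
    bike_speed = 10.0   # mph, each bicycle
    fly_speed = 15.0    # mph

    total = 0.0
    for _ in range(60):                      # 60 legs is far more than needed to converge
        t = gap / (fly_speed + bike_speed)   # time until the fly meets the oncoming bike
        total += fly_speed * t               # distance flown on this leg
        gap -= 2 * bike_speed * t            # both bikes kept closing during that time

    print(total)                                    # ~15.0 miles, by summing the series
    print(fly_speed * (20.0 / (2 * bike_speed)))    # 15.0 miles, by the trick: the bikes meet in one hour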

Before von Neumann first came to America in 1930, he had established himself in Europe as an exceptionally brilliant mathematician, contributing major insights into such topics as logic and set theory, and he lectured at the University of Berlin. But he was not exactly a bookworm. He enjoyed Berlin’s cabaret-style nightlife, and more important for science, he enjoyed poker. He turned his talent for both math and cards into a new paradigm for economics—and in so doing devised mathematical tools that someday may reveal deep similarities underlying his many diverse scientific interests. More than that, he showed how to apply rigorous methods to social questions, not unlike Asimov’s Hari Seldon. “Von Neumann was a brilliant mathematician whose contributions to other sciences stem from his belief that impartial rules could be found behind human interaction,” writes one commentator. “Accordingly, his work proved crucial in converting mathematics into a key tool to social theory.”1

UTILITY AND STRATEGY

By most accounts, the invention of modern game theory came in a technical paper published by von Neumann in 1928. But the roots of game theory reach much deeper. After all, games are as old as humankind, and from time to time intelligent thinkers had considered how such games could most effectively be played. As a branch of mathematics, though, game theory did not appear in its modern form until the 20th century, with the merger of two rather simple ideas. The first is utility—a measure of what you want; the second is strategy—how to get what you want.

Utility is basically a measure of value, or preference. It’s an idea with a long and complex history, enmeshed in the philosophical doctrine known as utilitarianism. One of the more famous expositors of the idea was Jeremy Bentham, the British social philosopher and legal scholar. Utility, Bentham wrote in 1780, is “that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness … or … to prevent the happening of mischief, pain, evil, or unhappiness.”2

So to Bentham, utility was roughly identical to happiness or pleasure—in “maximizing their utility,” individual people would seek to increase pleasure and diminish pain. For society as a whole, maximum utility meant “the greatest happiness of the greatest number.”3 Bentham’s utilitarianism incorporated some of the philosophical views of David Hume, friend to Adam Smith. And one of Bentham’s influential followers was the British economist David Ricardo, who incorporated the idea of utility into his economic philosophy.

In economics, utility’s usefulness depends on expressing it quantitatively. Happiness isn’t easily quantifiable, for example, but (as Bentham noted) the means to happiness can also be regarded as a measure of utility. Wealth, for example, provides a means of enhancing happiness, and wealth is easier to measure. So in economics, the usual approach is to measure self-interest in terms of money. It’s a convenient medium of exchange for comparing the value of different things. But in most walks of life (except perhaps publishing), money isn’t everything. So you need a general definition that makes it possible to express utility in a useful mathematical form.

One mathematical approach to quantifying utility came along long before Bentham, in a famous 1738 result from Daniel Bernoulli, the Swiss mathematician (one of many famous Bernoullis of that era). In solving a mathematical paradox about gambling posed by his cousin Nicholas, Daniel realized that utility does not simply equate to quantity. The utility of a certain amount of money, for instance, depends on how much money you already have. A million-dollar lottery prize has less utility for Bill Gates than it would for, say, me. Daniel Bernoulli proposed a method for calculating the reduction in utility as the amount of money increased.4
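
Bernoulli's specific proposal in that 1738 paper was that the utility of wealth grows logarithmically, so each additional dollar adds a little less than the one before it. A small Python sketch (my illustration, not the book's; the dollar figures are invented) shows how the same million-dollar prize adds far less utility to a large fortune than to a modest one.

    import math

    def bernoulli_utility(wealth):
        # Bernoulli's suggestion: utility grows with the logarithm of wealth.
        return math.log(wealth)

    prize = 1_000_000
    for current_wealth in (50_000, 50_000_000_000):   # a modest saver vs. a billionaire
        gain = bernoulli_utility(current_wealth + prize) - bernoulli_utility(current_wealth)
        print(current_wealth, round(gain, 5))
    # The utility gain is about 3.04 for the saver but only about 0.00002 for the billionaire.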

Obviously the idea of utility—what you want to maximize—can sometimes get pretty complicated. But in many ordinary situations, utility is no mystery. If you’re playing basketball, you want to score the most points. In chess, you want to checkmate your opponent’s king. In poker, you want to win the pot. Often your problem is not defining utility, but choosing a good strategy to maximize it. Game theory is all about figuring out which strategy is best.5

The first substantial mathematical attempt to solve that part of the problem seems to have been taken by an Englishman named James Waldegrave in 1713. Waldegrave was analyzing a two-person card game called “le Her,” and he described a way to find the best strategy, using what today is known as the “minimax” (or sometimes “minmax”) approach. Nobody paid much attention to Waldegrave, though, so his work didn’t affect later developments of game theory. Other mathematicians also occasionally dabbled in what is now recognized to be game theory math, but there was no one coherent approach or clear chain of intellectual influence.

Only in the 20th century did really serious work begin on devising the mathematical principles behind games of strategy. First was Ernst Zermelo, a German mathematician, whose 1913 paper examining the game of chess is sometimes cited as the beginning of real game theory mathematics. He chose chess merely as an illustration of the more general idea of a two-person game of strategy where the players choose all the moves with no contribution from chance. And that is an important distinction, by the way. Poker involves strategy, but also includes the luck of the draw. If you get a bum hand, you’re likely to lose no matter how clever your strategy. In chess, on the other hand, all the moves are chosen by the players—there’s no shuffling of cards, tossing of dice, flipping coins, or spinning the wheel of fortune. Zermelo limited himself to games of pure strategy, games without the complications of random factors.

Zermelo’s paper on chess apparently confused some of its readers, as many secondary reports of his results are vague and contradictory.6 But it seems he tried to show that if the White player managed to create an advantageous arrangement of pieces—a “winning configuration”—it would then be possible to end the game within fewer moves than the number of possible chessboard arrangements. (Having an “advantageous arrangement” means achieving a situation from which White would be sure to win—assuming no dumb moves—no matter what Black does.)

Using principles of set theory (one of von Neumann’s mathematical specialties, by the way), Zermelo proved that proposition. His original proof required some later tweaking by other mathematicians and Zermelo himself. But the main lesson from it all was not so important for strategy in chess as it was to show that math could be used to analyze important features of any such game of strategy.

As it turns out, chess was a good choice because it is a perfect example of a particularly important type of game of strategy, known as a two-person zero-sum game. It’s called “zero-sum” because whatever one player wins, the other loses. The interests of the two competitors are diametrically opposed. (Chess is also a game where the players have “perfect information.” That means the game situation and all the decisions of all the players are known at all times—like playing poker with all the cards always dealt face up.)

Zermelo did not address the question of exactly what the best strategy is to play in chess, or even whether there actually is a surefire best strategy. The first move in that direction came from the brilliant French mathematician Émile Borel. In the early 1920s, Borel showed that there is a demonstrable best strategy in two-person zero-sum games—in some special cases. He doubted that it would be possible to prove the existence of a certain best strategy for such games in general.

But that’s exactly what von Neumann did. In two-person zero-sum games, he determined, there is always a way to find the best strategy possible, the strategy that will maximize your winnings (or minimize your losses) to whatever extent is possible by the rules of the game and your opponent’s choices. That’s the modern minimax7 theorem, which von Neumann first presented in December 1926 to the Göttingen Mathematical Society and then developed fully in his 1928 paper called “Zur Theorie der Gesellschaftsspiele” (Theory of Parlor Games), laying the foundation for von Neumann’s economics revolution.8
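
In modern notation (a gloss of mine, not the book's), the minimax theorem says that for any finite two-person zero-sum game with payoff matrix A, when both players are allowed mixed strategies x and y (probability distributions over their pure strategies), the best guaranteed outcomes for the two sides coincide:

    \max_{x} \min_{y} \; x^{\top} A \, y \;=\; \min_{y} \max_{x} \; x^{\top} A \, y

The common value is called the value of the game, and the x and y that achieve it are the optimal, generally mixed, strategies discussed later in this chapter.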

GAMES INVADE ECONOMICS

In his 1928 paper, von Neumann did not attempt to do economics9—it was strictly math, proving a theorem about strategic games. Only years later did he merge game theory with economics, with the assistance of an economist named Oskar Morgenstern.

Morgenstern, born in Germany in 1902, taught economics at the University of Vienna from 1929 to 1938. In a book published in 1928, the same year as von Neumann’s minimax paper, Morgenstern discussed problems of economic forecasting. A particular point he addressed was the “influence of predictions on predicted events.” This, Morgenstern knew, was a problem peculiar to the social sciences, including economics. When a chemist predicts how molecules will react in a test tube, the molecules are oblivious. They do what they do the same way whether a chemist correctly predicts it or not. But in the social sciences, people display much more independence than molecules do. In particular, if people know what you’re predicting they will do, they might do something else just to annoy you. More realistically, some people might learn of a prediction and try to turn that foreknowledge to their advantage, upsetting the conditions that led to the prediction and so throwing random factors into the outcome. (By the way, in the Foundation Trilogy, that’s why Seldon’s Plan had to be so secret. It wouldn’t work if anybody knew what it was.)

Anyway, Morgenstern illustrated the problem with a scenario from The Adventures of Sherlock Holmes. In the story “The Final Problem,” Holmes was attempting to elude Professor Moriarty while traveling from London to Paris. It wasn’t obvious that Holmes could simply outthink Moriarty. Moriarty might anticipate what Holmes was thinking. But then Holmes could anticipate Moriarty’s anticipation, and so on: I think that he thinks that I think that he thinks, ad infinitum, or at least ad nauseam.10 Consequently, Morgenstern concluded, the situation called for strategy. He returned to the Holmes–Moriarty issue in a 1935 paper exploring the paradoxes of perfect future knowledge.

At that time, after a lecture on these issues, a mathematician named Eduard Čech approached Morgenstern and told him about similar ideas in von Neumann’s 1928 paper on parlor games. Morgenstern was entranced, and he awaited an opportunity to meet von Neumann and discuss the relevance of the 1928 paper to Morgenstern’s views on economics. The chance came in 1938, when Morgenstern accepted a three-year appointment to lecture at Princeton University. (Von Neumann had by then taken up his position at the nearby Institute for Advanced Study.) “The principal reason for my wanting to go to Princeton,” Morgenstern said, “was the possibility that I might become acquainted with von Neumann.”11

As Morgenstern told the story, he soon revived von Neumann’s interest in game theory and began writing a paper to show its relevance to economics. As von Neumann critiqued early drafts, the paper grew longer, with von Neumann eventually joining Morgenstern as a coauthor. By this time—it was now 1940—the paper had grown substantially, and it kept growing, ultimately into a book published by the Princeton University Press in 1944. (Subsequent historical study suggests, though, that von Neumann had previously written most of the book without Morgenstern’s help.12)

Theory of Games and Economic Behavior instantly became the game theory bible. In the eyes of game theory believers, it was to economics what Newton’s Principia was to physics. It was a sort of newtonizing of Adam Smith, providing mathematical rigor to describe how individual interactions affect a collective economy. “We hope to establish,” wrote von Neumann and Morgenstern, “that the typical problems of economic behavior become strictly identical with the mathematical notions of suitable games of strategy.” It will become apparent, they asserted, that “this theory of games of strategy is the proper instrument with which to develop a theory of economic behavior.”13

The authors then developed the theory throughout more than 600 pages, dense with equations and diagrams. But the opening sections are remarkably readable, laying out the authors’ goals and intentions in a kind of extended preamble designed to persuade skeptical economists that their field needed an overhaul.

While noting that many economists had already been using mathematics, von Neumann and Morgenstern declared that “its use has not been highly successful,” especially when compared to other sciences such as physics. Throughout its early pages, the book draws on physics as the model for how math can make murky knowledge precise and practical—in contrast to economics, where the basic ideas had been expressed so fuzzily that past efforts to use math had been doomed. “Economic problems … are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are,” the authors wrote.14 What economics needed was a theory that made precise and meaningful measurements possible, and game theory filled the bill.

Von Neumann and Morgenstern were careful to emphasize, though, that their theory was just a first step. “There exists at present no universal system of economic theory,” they wrote, and if such a theory were ever to be developed, “it will very probably not be during our lifetime.”15 But game theory could provide the foundation for such a theory, by focusing on the simplest of economic interactions as a guide to developing general principles that would someday be able to solve more complicated problems. Just as modern physics began when Galileo studied the rather simple problem of falling bodies, economics could benefit from a similar understanding of simple economic behavior. “The great progress in every science came when, in the study of problems which were modest as compared with ultimate aims, methods were developed that could be extended further and further,” von Neumann and Morgenstern declared.16

And so it made sense to focus on the simplest aspect of economics—the economic interaction of individual buyers and sellers. While economic science as a whole involves the entire complicated system of producing and pricing goods, and earning and spending money, at the root of it all is the choicemaking of the individuals participating in the economy.

ROBINSON CRUSOE MEETS GILLIGAN

Back in the days when von Neumann and Morgenstern were working all this out, standard economic textbooks extolled a simple economic model of their own, called the “Robinson Crusoe” economy. Stranded on a desert island, Crusoe was an economy unto himself. He made choices about how to use the resources available to him to maximize his utility, coping only with the circumstances established by nature. Samuel Bowles, an economist at the University of Massachusetts, explained to me that textbooks viewed economics as just the activities of many individual Robinson Crusoes. Where Crusoe interacted with nature, consumers in a big-time economy interacted with prices. And that was the standard “neoclassical” view of economic theory. “That’s what everybody taught,” Bowles said. “But there was something odd about it.” It seemed to be a theory of social interactions based on someone who had interacted only nonsocially, that is, with nature, not with other people.

“Game theory adopts a different framework,” Bowles said. “I’m in a situation in which my well-being depends on what somebody else does, and your well-being depends on what I do—therefore we are going to think strategically.”17

And that’s exactly the point that von Neumann and Morgenstern stressed back in 1944. The Robinson Crusoe economy is fundamentally different, conceptually, from a Gilligan’s Island economy. It’s not just the complication of social influences from other people affecting your choices about the prices of goods and services. The results of your choices—and thus your ability to achieve your desired utility—are inevitably intertwined with the choices of the others. “If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well,” von Neumann and Morgenstern declared.18 Mathematically, that meant that no longer could you simply compute a single simple maximum utility for Robinson Crusoe. Your calculations had to accommodate a mixture of competing

on the strategic aspects of how to achieve what you want, without worrying about the complications involved in defining what you want.

However, there remained an important aspect of utility that von Neumann and Morgenstern had to address. Was it even possible, in the first place, to define utility in a numerical way, to make it susceptible to a mathematical theory? (Bernoulli had proposed a way to calculate utility, but he had not tried to prove that the concept could be a basis for making rational choices in a consistent way.) Money (which obviously is numerical) could really be a good stand-in for the more complex concept of utility only if utility can really be represented by a numerical concept. And so they had to show that it was possible to define utility in a mathematically rigorous way. That meant identifying axioms from which the notion of utility could be deduced and measured quantitatively.

As it turned out, utility could be quantified in a way not unlike the approach physicists used to construct a scientifically rigorous definition of temperature. After all, primitive notions of utility and temperature are similar. Utility, or preference, can be thought of as just a rank ordering. If you prefer A to B, and B to C, you surely prefer A to C. But it is not so obvious that you can ascribe a number to how much you prefer A to B, or B to C. It was once much the same with heat—you could say that something felt warmer or cooler than something else, but not necessarily how much, certainly not in a precise way—before the development of the theory of heat. But nowadays the absolute temperature scale, based on the laws of thermodynamics, gives temperature an exact quantitative meaning. And von Neumann and Morgenstern showed how you could similarly convert rank orderings into numerically precise measures of utility.

You can get the essence of the method from playing a modified version of Let’s Make a Deal. (For the youngsters among you, that was a famous TV game show, in which host Monty Hall offered contestants a chance to trade their prizes for possibly more valuable prizes, at the risk of getting a clunker.)

Suppose Monty offers you three choices: a BMW convertible, a top-of-the-line big-screen plasma TV, or a used tricycle. Let’s say you want the BMW most of all, and that you’d prefer the TV to the tricycle. So it’s a simple matter to rank the relative utility of the three products. But here comes the deal. Your choice is to get either the plasma TV, OR a 50-50 chance of getting the BMW. That is, the TV is behind Door Number 1, and the BMW is behind either Door Number 2 or Door Number 3. The other door conceals the tricycle.

Now you really have to think. If you choose Door Number 1—the plasma TV—you must value it at more than 50 percent as much as the BMW. But suppose the game is more complicated, with more doors, and the odds change to a 60 percent chance of the BMW, or 70 percent. At some point you will be likely to opt for the chance to get the BMW, and at that point, you could conclude that the utilities are numerically equal—you value the plasma TV at, say, 75 percent as much as the BMW (plus 25 percent of the tricycle, to be technically precise). Consequently, to give utility a numerical value, you just have to arbitrarily assign some number to one choice, and then you can compare other choices to that one using the probabilistic version of Let’s Make a Deal.
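
A tiny Python sketch (my own illustration, not the book's) shows how that indifference point turns a mere ranking into a number. Fix the utilities of the best and worst prizes by convention, and the probability at which you become indifferent is the utility of the middle prize.

    # Von Neumann-Morgenstern style utility from an indifference probability.
    u_bmw, u_tricycle = 1.0, 0.0   # arbitrary anchors for the best and worst prizes

    # Suppose you are indifferent between the TV for sure and a gamble that gives
    # the BMW with probability p and the tricycle with probability 1 - p.
    p = 0.75
    u_tv = p * u_bmw + (1 - p) * u_tricycle
    print(u_tv)   # 0.75: on this scale the TV is worth 75 percent of the BMW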

So far so good. But there remains the problem of operating in a social economy where your personal utility is not the only issue—you have to anticipate the choices of others. And in a small-scale Gilligan’s Island economy, pure strategic choices can be subverted by things like coalitions among some of the players. Again, the theory of heat offers hope. Temperature is a measure of how fast molecules are moving. In principle, it’s not too hard to describe the velocity of a single molecule, just as you could easily calculate Robinson Crusoe’s utility. But you’d have a hard time with Gilligan’s Island, just as it becomes virtually impossible to keep track of all the speeds of a relatively small number of interacting molecules. But if you have a trillion trillion molecules or so, the interactions tend to average out, and using the theory of heat you can make precise predictions about temperature. (The math behind this is, of course, statistical mechanics, which will become even more central to the game theory story in later chapters.)

As von Neumann and Morgenstern pointed out, “very great numbers are often easier to handle than those of medium size.”21 That was exactly the point made by Asimov’s psychohistorians: Even though you can’t track each individual molecule, you can predict the aggregate behavior of vast numbers, precisely what taking the temperature of a gas is all about. You can measure a value related to the average velocity of all the molecules, which reflects the way the individual molecules interact. Why not do the same for people? It worked for Hari Seldon. And it might work for a sufficiently large economy. “When the number of participants becomes really great,” von Neumann and Morgenstern wrote, “some hope emerges that the influence of every particular participant will become negligible.”22

With the basis for utility established at the outset, von Neumann and Morgenstern could proceed simply by taking money to be utility’s measure. The bulk of their book was then devoted to the issue of finding the best strategy to make the most money.

At this point, it’s important to clarify what they meant by the concept of strategy. A strategy in game theory is a very specific course of action, not a general approach to the game. It’s not like tennis, for instance, where your strategy might be “play aggressively” or “play safe shots.” A game theory strategy is a defined set of choices to make for every possible circumstance that might arise. In tennis, your strategy might be to “never rush the net when your opponent serves; serve and volley whenever you are even or ahead in a game; always stay back when behind in a game.” And you’d have other rules for all the other situations.

There’s one additional essential point about strategy in game theory—the distinction between “pure” strategies and “mixed” strategies. In tennis, you might rush the net after every serve (a pure strategy) or you might rush the net after one out of every three serves, staying back at the baseline two times out of three (a mixed strategy). Mixed strategies often turn out to be essential for making game theory work.
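
One natural way to represent the two notions in code, in a sketch of my own (not from the book): a pure strategy is a complete rule covering every circumstance, and a mixed strategy is a probability distribution over pure options that you sample from each time you play.

    import random

    # A pure strategy: one definite choice for every circumstance (a fragment of the
    # tennis example above).
    pure_strategy = {
        "opponent serving": "stay back",
        "even or ahead in game": "serve and volley",
        "behind in game": "stay back",
    }

    # A mixed strategy: rush the net after one serve in three, stay back otherwise.
    mixed_strategy = {"rush the net": 1 / 3, "stay back at the baseline": 2 / 3}

    def play(mixed):
        # Pick one pure option with the prescribed probabilities.
        options, weights = zip(*mixed.items())
        return random.choices(options, weights=weights)[0]

    print(play(mixed_strategy))   # e.g. "stay back at the baseline"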

In any event, the question isn’t whether there is always a good general strategy, but whether there is always an optimum set of rules for strategic behavior that covers all eventualities. And in fact, there is—for two-person zero-sum games. You can find the best strategy using the minimax theorem that von Neumann published in 1928. His proof of that theorem was notoriously complicated. But its essence can be boiled down into something fairly easy to remember: When playing poker, sometimes you need to bluff.

MASTERING MINIMAX

The secret behind the minimax approach in two-person zero-sum games is the need to remember that whatever one player wins, the other loses (the definition of zero sum). So your strategy should seek to maximize your winnings, which would have the effect of minimizing your opponent’s winnings. And of course your opponent wants to do the same. Depending on the game, you may be able to play as well as possible and still not win anything, of course. The rules and stakes may be such that whoever plays first will always win, for instance, and if you go second, you’re screwed. Still, it is likely that some strategies will lose more than others, so you would attempt to minimize your opponent’s gains (and your losses). The question is, what strategy do you choose to do so? And should you stick with that strategy every time you play?

It turns out that in some games, you may indeed find one pure strategy that will maximize your winnings (or minimize your losses) no matter what the other player does. Obviously, then, you would play that strategy, and if the game is repeated, you would play the same strategy every time. But sometimes, depending on the rules of the game, your wisest choice will depend on what your opponent does, and you might not know what that choice will be. That’s where game theory gets interesting.

Let’s look at an easy example first. Say that Bob owes Alice $10. Bob proposes a game whereby if he wins, his debt will be reduced. (In the real world, Alice will tell Bob to take a hike and fork over the $10.) But for purposes of illustrating game theory, she might agree.

Bob suggests these rules: He and Alice will meet at the library. If he gets there first, he pays Alice $4; if she gets there first, he pays her $6. If they both arrive at the same time, he pays $5. (As I said, Alice would probably tell him to shove it.) Now, let’s say they live together, or at least live next door to each other. They both have two possible strategies for getting to the library—walking or taking the bus. (They are too poor to own a car, which is why Bob is haggling over the $10.) And they both know that the bus will always beat walking. So this game is trivial—both will take the bus, both will arrive at the same time, and Bob will pay Alice $5.23

And here’s how game theory shows what strategy to choose: a “payoff matrix.” In a zero-sum game, the numbers in a payoff matrix designate how much the person on the left (in this case, Alice) wins (since it’s zero sum, the numbers also tell how much the player on top, Bob, loses). If a number is negative, that means the player on top wins that much (negative numbers signaling a loss for Alice). In non-zero-sum games, each matrix cell will include two numbers, one for each player (or more if there are more players, which makes it very hard to show the matrix for multiplayer games). For Bob’s proposed game, the payoff matrix looks like this:

                         Bob walks    Bob rides bus
    Alice walks             $5            $4
    Alice rides bus         $6            $5

Obviously, Alice must choose the bus strategy because it always does as well as or better than walking, no matter what Bob does.

And Bob will choose the bus also, because it minimizes his losses, no matter what Alice does. Walking can do no better and might be worse. Of course, you didn’t need game theory to figure this out. So let’s look at another example, from real-world warfare, a favorite of game theory textbooks.

In World War II, General George Kenney knew that the Japanese would be sending a convoy of supply ships to New Guinea. The Allies naturally wanted to bomb the hell out of the convoy. But the convoy would be taking one of two possible routes—one to the north of New Britain, one to the south. Either route would take three days, so in principle the Allies could get in three days’ worth of bombing time against the convoy. But the weather could interfere. Forecasters said the northern route would be rainy one of the days, limiting the bombing time to a maximum of two days. The southern route would be clear, providing visibility for three days of bombing. General Kenney had to decide whether to send his reconnaissance planes north or south. If he sent them south and the convoy went north, he would lose a day of bombing time (of only two bombing days available). If the recon planes went north, the bombers would still have time to get two bombing days in if the convoy went south. So the “payoff” matrix looks like this, with the numbers giving the Allies’ “winnings” in days of bombing:

                              Convoy goes north    Convoy goes south
    Recon planes sent north          2                     2
    Recon planes sent south          1                     3
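
As a quick computational sketch of my own (using the matrix above, with payoffs measured in bombing days for the Allies), the worst-case reasoning can be spelled out in a few lines: the Allies pick the row whose minimum is largest, and the Japanese pick the column whose maximum is smallest.

    # Rows: Allied reconnaissance sent north or south.
    # Columns: Japanese convoy sails north or south.
    payoff = {
        ("recon north", "convoy north"): 2, ("recon north", "convoy south"): 2,
        ("recon south", "convoy north"): 1, ("recon south", "convoy south"): 3,
    }
    rows = ["recon north", "recon south"]
    cols = ["convoy north", "convoy south"]

    # Allies: maximize the worst case (maximin).
    best_row = max(rows, key=lambda r: min(payoff[(r, c)] for c in cols))
    # Japanese: minimize their worst case (minimax).
    best_col = min(cols, key=lambda c: max(payoff[(r, c)] for r in rows))

    print(best_row, best_col)   # recon north, convoy north: two days of bombing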

If you just look at this game matrix from the Allies’ point of view, you might not see instantly what the obvious strategy is. But from the Japanese side, you can easily see that going north is the only move that makes sense. If the convoy took the southern route, it was guaranteed to get bombed for two days and maybe even three. By going north, it would get a maximum of two days (and maybe only one), as good as or better than any of the possibilities going south. General Kenney could therefore confidently conclude that the Japanese would go north, so the only logical Allied strategy would be to send the reconnaissance planes north as well. (The Japanese did in fact take the northern route and suffered heavy losses from the Allied bombers.)

Proper strategies are not, of course, always so obvious. Let’s revisit Alice and Bob and see what happened after Alice refused to play Bob’s stupid game. Knowing that she was unlikely ever to get her whole $10 back, she proposed another game that would cause Bob to scratch his head about what strategy to play. In Alice’s version of the game, they go to the library every weekday for a month. If they both ride the bus, Bob pays Alice $3. If they both walk, Bob pays Alice $4. If Bob rides the bus and Alice walks, arriving second, Bob pays $5. If Bob walks and Alice rides the bus, arriving first, Bob pays $6. If you are puzzled, don’t worry. This game puzzles Bob, too. Here’s the matrix:

                         Bob walks    Bob rides bus
    Alice walks             $4            $5
    Alice rides bus         $6            $3

Bob realizes there is no simple strategy for playing this game. If he rides the bus, he might get off paying only $3. But Alice, realizing that, will probably walk, meaning Bob would have to pay her $5.

So Bob might decide to walk, hoping to pay only $4. But then Alice might figure that out and ride the bus, so Bob would have to pay her $6. Neither Bob nor Alice can be sure of what the other will do, so there is no obvious “best” strategy.

Remember, however, that Alice required the game to be played repeatedly, say for a total of 20 times. Nothing in the rules says you have to play the same strategy every day. (If you did, that would be a pure strategy—one that never varied.) To the contrary, Alice realizes that she should play a mixed strategy—some days walking, some days riding the bus. She wants to keep Bob guessing. Of course, Bob wants to keep Alice guessing too. So he will take a mixed strategy approach also. And that was the essence of von Neumann’s ingenious insight. In a two-person zero-sum game, you can always find a best strategy—it’s just that in many cases the best strategy is a mixed strategy.

In this particular example, it’s easy to calculate the best mixed strategies for Alice and Bob. Remember, a mixed strategy is a mix of pure strategies, each to be chosen a specific percentage of the time (or in other words, with a specific probability).24 So Bob wants to compute the ratio of the percentages for choosing “walk” versus “bus,” using a recipe from an old game theory book that he found in the library.25 Following the book’s advice, he compares the payoffs for each choice when Alice walks (the first row of the matrix) to the values when Alice takes the bus (the second row of the matrix), subtracting the payoffs in the second row from those in the first. (The answers are -2 and 2, but the minus sign is irrelevant.) Those two numbers determine the best ratio for Bob’s two strategies—2:2, or 50-50. (Note, however, that it is the number in the second column that determines the proportion for the first strategy, and the number in the first column that determines the proportion for the second strategy. It just so happened that in this case the numbers are equal.) For Alice, on the other hand, subtracting the second column from the first column gives -1 and 3 (or 1 and 3, ignoring the minus sign). So she should play her first strategy (walking) three times as often as her second strategy (riding the bus), since here it is the number from the second row that sets the proportion for the first strategy, and vice versa.26
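
The same recipe is easy to spell out in code. This sketch (mine, not from the book's cited recipe) applies the "oddments" idea to the 2-by-2 matrix above and also reports the value of the game, the average amount Bob can expect to hand over per trip if both sides play their best mixes.

    # Payoffs to Alice, from the matrix above.
    # Rows: Alice walks, Alice rides the bus. Columns: Bob walks, Bob rides the bus.
    a, b = 4, 5   # Alice walks
    c, d = 6, 3   # Alice rides the bus

    # Oddments: each strategy's weight comes from the difference in the *other* row or column.
    alice_walk, alice_bus = abs(c - d), abs(a - b)   # 3 : 1
    bob_walk, bob_bus = abs(b - d), abs(a - c)       # 2 : 2

    alice_p = alice_walk / (alice_walk + alice_bus)  # probability Alice walks
    bob_p = bob_walk / (bob_walk + bob_bus)          # probability Bob walks

    # Expected payment from Bob to Alice when both play their optimal mixes.
    value = (alice_p * bob_p * a + alice_p * (1 - bob_p) * b
             + (1 - alice_p) * bob_p * c + (1 - alice_p) * (1 - bob_p) * d)

    print(alice_p, bob_p, value)   # 0.75, 0.5, 4.5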

Consequently, Alice should ride the bus one time in four and walk three-fourths of the time. Bob should ride the bus half the time and walk half the time. Both should decide which strategy to choose by using some suitable random-choice device. Bob could just flip a coin; Alice might use a random number table, or a game spinner with three-fourths of the pie allocated to walking and one-fourth to the bus.27 If either Alice or Bob always walked (or took the bus), the other would be able to play a more profitable strategy. So you have to keep your opponent guessing.

And that’s why game theory boils down to the need to bluff while playing poker. If you always raise when dealt a good hand but never when dealt a poor hand, your opponents will be able to figure out what kind of a hand you hold. Real poker is too complicated for an easy game theory analysis. But consider a simple two-player version of poker, where Bob and Alice are each dealt a single card, and black always beats red.28 Before the cards are dealt, each player antes $5, so there is $10 in the pot. Alice then plays first, and she may either pass or bet an additional $3. If she passes, both players turn over their cards, and whoever holds a black card wins the pot. (If both have black or both have red, they split the pot.) If Alice wagers the additional $3, Bob can then either match the $3 and call (making a total of $16 in the pot) or fold. If he folds, Alice takes the $13 in the pot; if he calls, they turn over their cards to see who wins the $16.

You’d think, at first, that if Alice had a red card she’d simply pass and hope that Bob also had red. But if she bets, Bob might think she must have black. If he has red, he might fold—and Alice will win with a red card. Bluffing sometimes pays off. On the other hand, Bob knows that Alice might be bluffing (since she is not a Vulcan), and so he may go ahead and call. The question is, how often should Alice bluff, and how often should Bob call her (possible) bluff? Maybe von Neumann could have figured that out in his head, but I think most people would need game theory.

A matrix for this game would show that both players can choose from four strategies. Alice can always pass, always bet, pass with red and bet with black, or bet with red and pass with black. Bob can always fold, always call, fold with red and call with black, or fold with black and call with red. If you calculate the payoffs, you will see that Alice should bet three-fifths of the time no matter what card she has; the other two-fifths of the time she should bet only if she has black. Bob, on the other hand, should call Alice’s bet two-fifths of the time no matter what card he has; three-fifths of the time he should fold if he has red and call if he has black.29 (By the way, another thing game theory can show you is that this game is stacked in favor of Alice, if she always goes first. Playing the mixed strategies dictated by the game matrix assures her an average of 30 cents per hand.)

The notion of a mixed strategy, using some random method to choose from among the various pure strategies, is the essence of von Neumann’s proof of the minimax theorem. By choosing the correct mixed strategy, you can guarantee the best possible outcome you can get—if your opponent plays as well as possible. If your opponent doesn’t know game theory, you might do even better.
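
Calculating those payoffs is tedious by hand but easy to automate. The sketch below (my own, assuming each player's card is independently black or red with equal probability) evaluates the expected payoff to Alice for any pair of rules and then checks that the mixed strategies just described deliver her an average of about 30 cents per hand.

    from itertools import product

    CARDS = ["black", "red"]

    def showdown(alice_card, bob_card, stake):
        # Payoff to Alice when the cards are compared, with `stake` dollars at risk per player.
        if alice_card == bob_card:
            return 0
        return stake if alice_card == "black" else -stake

    def payoff(alice_rule, bob_rule):
        # alice_rule maps her card to "pass" or "bet"; bob_rule maps his card to "fold" or "call".
        total = 0.0
        for a_card, b_card in product(CARDS, CARDS):      # four equally likely deals
            if alice_rule[a_card] == "pass":
                result = showdown(a_card, b_card, 5)      # showdown for the $10 in antes
            elif bob_rule[b_card] == "fold":
                result = 5                                # Bob forfeits his $5 ante
            else:
                result = showdown(a_card, b_card, 8)      # called: $16 pot, $8 each at risk
            total += result / 4
        return total

    # The mixes described above: Alice always bets 3/5 of the time, otherwise bets only
    # with black; Bob always calls 2/5 of the time, otherwise calls only with black.
    alice_mix = [(0.6, {"black": "bet", "red": "bet"}),
                 (0.4, {"black": "bet", "red": "pass"})]
    bob_mix = [(0.4, {"black": "call", "red": "call"}),
               (0.6, {"black": "call", "red": "fold"})]

    value = sum(pa * pb * payoff(ra, rb) for pa, ra in alice_mix for pb, rb in bob_mix)
    print(round(value, 2))   # 0.3, i.e., about 30 cents per hand in Alice's favor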

BEYOND GAMES

Game theory was not supposed to be just about playing poker or chess, or even just about economics. It was about making strategic decisions—whether in the economy or in any other realm of real life. Whenever people compete or interact in pursuit of some goal, game theory describes the outcomes to be expected by the use of different strategies. If you know what outcome you want, game theory dictates the proper strategy for achieving it. If you believe that people interacting with other people are all trying to find the best possible strategy for achieving their desires, it makes sense that game theory might potentially be relevant to the modern idea of a Code of Nature, the guide to human behavior.

In their book, von Neumann and Morgenstern did not speak of a “Code of Nature,” but did allude to game theory as a description of “order of society” or “standard of behavior” in a social organization. And they emphasized how a “theory of social phenomena” would require a different sort of math from that commonly used in physics—such as the math of game theory. “The mathematical theory of games of strategy,” they wrote, “gains definitely in plausibility by the correspondence which exists between its concepts and those of social organizations.”30

In its original form, though, game theory was rather limited as a tool for coping with real-world strategic problems. You can find examples of two-person zero-sum games in real life, but they are typically either so simple that you don’t need game theory to tell you what to do, or so complicated that game theory can’t incorporate all the considerations. Of course, expecting the book that introduces a new field to solve all of that field’s problems would be a little unrealistic. So it’s no surprise that in applying game theory to situations more complicated than the two-person zero-sum game, von Neumann and Morgenstern were not entirely successful. But it wasn’t long before game theory’s power was substantially enhanced, thanks to the beautiful math of John Forbes Nash.