unchanged, to generate an always-changing set of similar system states (Pahl-Wostl, 1995). (Seasonal patterns, for example, are obvious and easy to discern.) It follows that destruction or erosion of long-term constraining variables, such as habitat, trophic structure, and behavioral factors like a learned migration pattern, would be expected to change the set of possible system states so that it includes states unlike those experienced previously and, consequently, to reduce the ability to perceive patterns and learn.11

In his book, Emergence, Holland (1998) describes the learning process a computer12 (and presumably humans) must go through to learn the game of checkers. He describes checkers as a very simple example of a complex adaptive system. Checkers has a limited number of pieces subject to a very few rules of movement, and its slow variables (the rules of the game, the size of the board, the kinds of pieces) are comfortably constant. Yet checkers is very difficult to predict and yields an immense number of possible board states. After only the first few moves of a game, it is unlikely that even an experienced player will encounter board configurations identical to those he’s seen before. The state of the “system”—the configuration of the board—is nearly always novel, but patterns of configurations more or less similar to those experienced previously are likely. The train of causation in the system is not stable, varying with each configuration of the board. Feedback about one’s interventions in the system is rarely clear. A “good” move can only be interpreted as such after the game has ended; it is entirely possible that a “double jump” might have led to the loss of a game or that a “poor” move might have set up a winning sequence. Looking ahead to try to predict the outcome of one among a set of alternative moves is an exercise that can yield only an ambiguous answer. So how do we learn to play checkers? Or in our case, how do we learn about the impact of human actions in the ecosystem?

As mentioned earlier, the fundamental basis for learning and prediction in this kind of environment is the recognition of patterns. Because of the multiplicity and novelty of board configurations, and especially because of the adaptive behavior of one’s opponent, outcomes from any given decision cannot be expected to be the mean of outcomes of past similar situations. That adaptive behavior introduces a strong tendency toward surprise and unintended results, especially for a player relying on a naïve statistical strategy.

Holland (1998) describes a number of measures that help the player assess and evaluate the current configuration of the board (for example, simple measures such as “pieces ahead,” “kings ahead,” and “net penetration beyond center line”). The same set of measures can be used to assess the likely outcome of alternative moves the player faces. In other words, the player can think through the possible board configurations—two, three, or more moves ahead—that might arise from each alternative move. Conservative and generally more successful assessments assume the other player knows at least as much about the game as the player making the assessment; a rough sketch of such an assessment is given below. A kind of worst-case precautionary principle
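The following is a minimal illustrative sketch, in Python, of this kind of assessment. It is not Holland’s program: the board encoding, the weights on each measure, and the move generator moves_for are hypothetical placeholders. The sketch scores a board with the simple measures named above and then looks a few moves ahead, crediting the player only with the outcome that results if the opponent always chooses the reply that is worst for the player.

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Piece:
    owner: int       # +1 = the assessing player, -1 = the opponent (hypothetical encoding)
    is_king: bool
    row: int         # 0..7; rows 4..7 lie beyond the center line for the assessing player

Board = List[Piece]  # a board state is simply the pieces currently on it

def evaluate(board: Board) -> float:
    """Combine the simple measures into a single score (weights are arbitrary)."""
    pieces_ahead = sum(p.owner for p in board)
    kings_ahead = sum(p.owner for p in board if p.is_king)
    penetration = (sum(1 for p in board if p.owner == 1 and p.row >= 4)
                   - sum(1 for p in board if p.owner == -1 and p.row <= 3))
    return 1.0 * pieces_ahead + 2.0 * kings_ahead + 0.5 * penetration

def assess(board: Board,
           moves_for: Callable[[Board, int], List[Board]],
           depth: int,
           player: int = 1) -> float:
    """Look `depth` moves ahead; moves_for(board, player) is a placeholder move generator."""
    successors = moves_for(board, player)
    if depth == 0 or not successors:
        return evaluate(board)
    scores = [assess(next_board, moves_for, depth - 1, -player)
              for next_board in successors]
    # On the player's own turn, take the best reachable score; on the opponent's
    # turn, assume the opponent knows the game at least as well and count only
    # the score of the reply that is worst for the player.
    return max(scores) if player == 1 else min(scores)

The last line is where the conservative assumption enters: on the opponent’s turns the player credits himself only with the least favorable reply, the game-playing analogue of the worst-case stance described in the text.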


