Scientific Uncertainty, Complex Systems, and the Design of Common-Pool Institutions
This paper addresses the question of how we cope with scientific uncertainty in exploited, complex natural systems such as marine fisheries. Ocean ecosystems are complex and have been very difficult to manage, as evidenced by the collapses of many large-scale fisheries (Boreman et al., 1999; Ludwig et al., 1993; National Research Council, 1999). A large part of the problem arises from scientific uncertainty and from how we understand the nature of that uncertainty. The difficulty of the scientific problem in a complex, quickly changing, and highly adaptive environment such as the ocean should not be underestimated. It has created pervasive uncertainty that has been magnified by the strategic behavior of the various human interests who play in the game of fisheries management.
This paper argues that scientific uncertainty in complex systems creates a more difficult conservation problem than necessary because (1) we have built into our governing institutions a very particular and inappropriate scientific conception of the ocean that assumes much more control over natural processes than we might hope to have (i.e., we assume we are dealing with an analog of simple physical systems), and (2) the individual incentives that result from this fiction, even in the best of circumstances, are not aligned with social goals of sustainability. As a result, I believe we have slowed significantly the process of learning about the ocean, defined scientific uncertainty and precautionary acts in a way that may turn out to be highly risky, and created dysfunctional management institutions. This chapter suggests we are more likely to find ways to align individual incentives with ecosystem sustainability if we begin to view these systems as complex adaptive systems. This perspective alters especially our sense of the extent and kind of control we might exercise in these systems and, as a result, has strong implications for the kinds of individual rights and collective governance structures that might work.

I would like to thank the many people who have commented on various drafts of this chapter. Spencer Apollonio, Jefferson White, Gisli Palsson, Teresa Johnson, Deirdre Gilbert, Yong Chen, Robin Alden, Ted Ames, Elinor Ostrom, William Brennan, Jennifer Brewer, and Carolyn Skinder have all made helpful comments and often have caused me to rethink and rework many of the ideas in the chapter.
AN EXAMPLE FROM THE NEW ENGLAND FISHERIES
When ocean fisheries management began after World War II, practical scientific and political concerns dictated a large-scale, single-species approach to management. International fisheries management institutions were given very large geographical jurisdictions, few resources, and little real governance authority. Yet they were asked to develop regimes for the conservation of ocean resources. The scientific problem these institutions and the scientists working for them confronted was extraordinarily difficult, especially given the problems and costs of observation and the relatively undeveloped state of ecological theory at that time.
Consider how one might have started, at that time, to conceptualize a complex system that can be perceived only in the most indirect, costly, and occasional way. The fisheries scientists of that time chose a reductionist approach that emphasized sophisticated mathematical modeling of individual populations. It was consistent with scientific understanding of natural systems, with their (hoped-for) ability to measure and quantify, and with the authority given to the agencies for which they were working.1 In particular, the conception was to concentrate on area- and species-specific populations (stocks) located within broadly identified fishing areas or ecosystems. The International Commission for the Northwest Atlantic Fisheries (ICNAF), for example, broke its enormous jurisdiction into numerous smaller, but still very large, statistical areas that were thought to correspond with major ecological or fishing areas, such as Georges Bank, the Gulf of Maine, the Grand Banks of Newfoundland, the Scotia Shelf, and so on. Its scientific efforts concentrated almost exclusively on the commercial species of interest to the parties of ICNAF (Halliday and Pinhorn, 1990).
From both a scientific and institutional perspective, it is difficult to argue that these early approaches were “wrong,” given the constraints and the complexity of ocean ecosystems. Nevertheless, a scientific pattern was established—a kind of intellectual path dependency that persists today.2
With the advent of extended national fisheries jurisdiction in 1977, both the United States and Canada adopted with almost no changes the single-species scientific perspective and scale of application that had developed under ICNAF.3 In both countries, initial fisheries management plans were simply a continuation of a course that had been set by ICNAF. Even today the United States and Canada use the same statistical areas and definitions that were established in the early 1950s. Except for refinements in statistical procedures, longer data series, attention to some new species, and much more complete recording of fishing mortality, essentially the same methodology—certainly the same fundamental theory—is still used to assess the status of each stock and reach recommendations about acceptable levels of catch.
The most significant inheritance from the international era, however, was and is the scientific approach that simplifies the reality of complex ocean systems by treating each individual species as if it were an independent or isolated entity. The core of single-species theory is the belief that the future size of individual stocks is strongly related to spawning stock biomass, which, in turn, is strongly determined by how much fishing occurs. The relationship between fishing and spawning stock size is clear and easy to measure. But the theorized relationship between the spawning stock and subsequent recruitment is generally unknown and has been demonstrated for only a few stocks, and then only at very low population sizes (Hall, 1988; Myers et al., 1995).4 In spite of the absence of confirming evidence, fisheries scientists are firmly convinced that the sustainability of each population depends on the maintenance of an adequate spawning stock biomass.
Consequently, in the day-to-day management of fisheries, there is no attempt to predict recruitment. It is simply hoped, or assumed, that recruitment will proceed at a rate that is close to the average for some recent time period—one or two decades. Fisheries scientists advise managers about desirable catch rates, or amounts, in terms of what they estimate will produce the best yield from the year classes already in the water while maintaining a reasonable level of spawning stock biomass. There is an implicit but strong assumption that ecological interactions are minimal and not disturbed in any fundamental way by simultaneously fishing all or many species at moderate or even high rates. In addition, there are very difficult measurement and estimation problems. Errors of measurement on the order of 30 to 50 percent are common (Hilborn and Walters, 1992; Walters, 1998). As William Fox, science director of the National Marine Fisheries Service (NMFS), puts it, “there’s a bit of experience involved, not something that can be repeated by another scientist. It’s not really science; it’s like an artist doing it—so a large part of your scientific advice comes from art” (Appell, 2001). Most fisheries scientists are reasonably well aware of the shortcomings of the theory and uncertainties regarding measurements and estimates of population size.
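The mechanics of this procedure can be made concrete with a short simulation. The sketch below pairs a hypothetical Ricker-type stock-recruitment curve (standing in for the theorized but largely unverified relationship) with catch advice set from a biomass estimate carrying survey error of the magnitude cited above. All numbers are illustrative, not estimates for any real stock.

```python
import math
import random
import statistics

random.seed(0)

def ricker_recruitment(ssb, alpha=2.0, beta=1e-4):
    """Theorized Ricker stock-recruitment curve: recruitment rises with
    spawning stock biomass (SSB) but falls away at high stock sizes.
    Parameters are purely illustrative, not fit to any real stock."""
    return alpha * ssb * math.exp(-beta * ssb)

# In day-to-day practice, managers do not predict recruitment from such
# a curve; they assume recruitment near a recent average and set catch
# from an *estimate* of current biomass, an estimate with large error.
true_biomass = 50_000.0      # tonnes (hypothetical stock)
target_rate = 0.2            # aim to harvest 20% of estimated biomass

# Survey error on the order of 30-50 percent (lognormal, sigma = 0.4)
estimates = [true_biomass * random.lognormvariate(0.0, 0.4)
             for _ in range(10_000)]

# The harvest rate actually imposed on the *true* stock varies widely:
realized = sorted(target_rate * est / true_biomass for est in estimates)
print(f"median realized rate: {statistics.median(realized):.2f}")
print(f"5th-95th percentile:  {realized[500]:.2f} to {realized[9500]:.2f}")
```

The point of the sketch is that even a nominally moderate target harvest rate becomes a wide distribution of actual harvest rates once estimation error of this size enters the advice.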
THE RESPONSE TO UNCERTAINTY
When these uncertainties became apparent in the early years of extended jurisdiction, they were met by a few interested parties in the fishing industry with honest expressions of skepticism and, more commonly, with gaming strategies that reflected the interests and circumstances of various individuals and groups. The nonstrategic industry response came in the form of a rather inarticulate skepticism about the underlying theory concerning the relationship between the spawning stock and subsequent recruitment and about how best to conserve, or sustain, the resource (Smith, 1990). I do not believe this argument was ever recognized by government scientists, simply because it was not contained within a formally stated doctrine (or perhaps it was the “paradigmatic” talking past one another, or incomprehension, that Kuhn, 1962, discusses). Nevertheless, this argument was inextricably bound up with the industry’s highly critical and strategic response to scientists’ uncertainty about estimates of (changes in) stock sizes. These estimates are especially important to industry because they are the basis for short-term policy setting regarding allowable catches and other rules restraining fishing.
Furthermore, because the New England industry at that time was essentially an open-access industry, it had the usual tendency toward a strongly myopic perspective. Industry arguments tended to be supported by a large amount of anecdotal evidence. Almost without exception this evidence was marshaled to show economic hardship and to argue against biological estimates of scarcity and, of course, against the need for reduced fishing effort. Given the patchy nature of the resource and fishermen’s finely honed skills at locating those patches, statements about localized abundance did not impress NMFS scientists, who were doing their best to carry out surveys based on stratified random sampling of the resource. Economic hardship arguments were simply interpreted as exaggerated claims that reflected the expected zero-profit state of the industry given open access.
However, members of the management council,5 who were nearly all nonscientists, were influenced by both the biological and economic hardship arguments. They shared the values of those users or, at least, gave them credence and, as a result, did tend to discount or modify scientific advice in the direction of higher harvests or fewer restrictions. The results of council deliberations were almost always less restrictive, or at least different, regulations than those recommended by NMFS scientists. From the perspective of NMFS scientists, it was as if the council, when given a confidence limit around a recommended catch level, would always choose the higher end of that range rather than the average or an even more conservative level. According to those scientists, the council lacked the political will to act in a way that would conserve the stocks (Rosenberg et al., 1993).
NMFS and the environmental community became very frustrated at the council’s unwillingness to act (or, at least, to act in the way they wanted).6 They viewed the council’s response to this uncertainty as a sure way to gradually, if not quickly, erode the stocks. NMFS officials, in either explicit or tacit agreement (with one another), appear to have decided that the relatively democratic processes of the council could not be relied on to achieve the greater good of conservation. Especially problematic was the council’s perceived tendency to sacrifice biological restraint in order to solve politically important economic problems.
NMFS mounted a campaign to require the use of only quantitative data in council decision making, began to provide only point estimates of stock size and changes, and did its best to separate biological decisions from what were called allocative decisions (e.g., NOAA, 1986 [also known as the Calio report]; 1989 [602 guidelines and overfishing definitions]; Sustainable Fisheries Act [Public Law No. 104-297, 110 Stat. 355, 1996]). At the same time, the regulatory process increasingly became the object of court complaints in which NMFS was forced to defend its decisions (really its decisions to accept the advice of the councils). These challenges frequently questioned NMFS science (that is, estimates of changes in population size, not the basic theory) and were most easily met in court by thorough quantification of the basis for the decision. As a result, a strong bias seemed to enter into the choice of regulatory tools. Rules that were easily quantified were strongly preferred. Rules that were more difficult to quantify or that could not be analyzed easily within the context of the standard set of management models were not. For example, industry often proposed spawning area closures. Just as often NMFS opposed these suggestions with statements that no benefit could be shown or that “it doesn’t matter when you kill the fish.”
In short, every effort was made to insulate the regulatory process from the problems posed by scientific uncertainty. The preferred approach of NMFS and a number of environmental groups was to give experts (i.e., NMFS) control over biological objectives and the councils control over who got what—the allocation problem (NOAA, 1986). They hoped that through this approach, biological objectives would not be sacrificed even though it would leave the public (i.e., the councils) to engage in a dogfight over who got what.
This response to the political problems raised by scientific uncertainty is not uncommon; one has to assume that this policy approach was adopted in a good-faith attempt to promote the conservation and sustainability of our fisheries. After all, even if it was realized that current theory was inadequate, it was still the only theory—the only guidance—available, and given the perceived threats to the stocks and a perceived need to act, avoidance of a discussion of scientific uncertainty might have seemed justified.
However, given the inability to verify the core relationship in the theory, this kind of approach to the uncertainty problem carries unusual risks. Precautionary management steps taken on the assumption that the single-species “spawning stock/recruitment” line of causation is the operative long-term determinant of sustainability may turn out to be highly risky if other ecological factors (e.g., habitat, spatial distributions of local stocks, population behavior, trophic hierarchy, and so on, which tend to be ignored in the single-species scientific agenda) are determinative of species abundance. Under these circumstances, the usual prescription of single-species management—to fish moderately—still could lead to overfishing through the piece-by-piece loss of local stock spawning groups (Ames, 1998; Hutchings, 1996; Rose et al., 2000; Stephenson, 1998; Wroblewski, 1998; Wilson et al., 1999), through the destruction of essential habitat (Watling and Norse, 1996), through a gradual reduction in average trophic level (Pauly et al., 1998), and/or through the reduction or destruction of other ecological factors important to sustainability. In short, restraints appropriate to a single-species approach might simply perpetuate the problem. Taking uncertainty out of the public discussion may deprive us of the only defense we have against the even greater and more catastrophic uncertainty arising from an incomplete or incorrect understanding of the system. Removing uncertainty from the public discussion can be expected to retard our ability to learn, risk the credibility of science and the governance process on unproven theory, and, most of all, diminish our long-term ability to conserve the resource (Rosa, 1998a).
The New England experience has been repeated in one form or another all around the globe. It is a problem that afflicts the advisory processes of the New England Council, but it has been just as difficult for the consultative processes of Canada and other countries (e.g., Finlayson, 1994). The problem this history raises is whether a democratic process or any collective process that gives serious weight to user input is capable of dealing with environmental uncertainty in a way that conserves resources. Or is it the case that the strategic response to uncertainty of the various individuals and groups and the resulting difficulty of building trust effectively forecloses successful negotiation of agreements concerning mutual restraint?
The argument of this chapter is that we can probably deal with uncertainty in an open democratic fashion, but that we have to be clear about the kind of uncertainty we face and the design of the institutions we build for dealing with that uncertainty. We can create institutions nicely tailored to a particular scientific theory and preconception of the nature of the uncertainty (we believe) we face, or we can design institutions on an alternative basis, one that assumes as little as possible about the nature of causal relationships and emphasizes the role of collective learning and institutional evolution. The appropriateness of one or the other approach would appear to depend on the state of our scientific knowledge or, alternatively, our ability to test and validate. The next sections of the chapter turn to a brief discussion of the view of uncertainty in a normal, reductionist scientific environment and how one’s view of uncertainty changes in the context of a complex adaptive system.
CONVENTIONAL VIEW OF UNCERTAINTY
As Pahl-Wostl (1995:196) writes, “Judged from a traditional point of view, uncertainty and the lack of predictive capabilities equal ignorance. Such thinking still pervades most scientific practice. It determines how knowledge is valued, what type of knowledge is required for decision making. It has shaped both scientific and political institutions. Such a view is inadequate to deal with the complexity of the environmental problems facing us today.” Generally we think of three types of uncertainty in the study of natural systems (Walters, 1986:162). There is the uncertainty that arises from exogenous disturbances—noise. There is uncertainty about the values of system parameters, and, finally, there is uncertainty about system structure—sometimes called model uncertainty. A quantitative measure of the first two kinds of uncertainty, according to the American Heritage Dictionary, is simply “the estimated amount or percentage by which an observed or calculated value may differ from the true value.” Implicit in this definition is the assumption that we know or believe we know the basic cause-and-effect relationships—the system structure—in the fishery or whatever system we are studying.
In these circumstances, what stands for good science is the ability to detect relationships in what might otherwise appear to be noise and/or to narrow the uncertainty about our knowledge of the values of the system’s parameters. The smallest confidence interval around parameter estimates is generally believed to mark the best science. It is through a continuing scientific process that we reduce or resolve parametric uncertainty. Model uncertainty is also best addressed through a scientific process, but in this case one that consists of the discovery of causal relationships. Once that discovery occurs, the problem of uncertainty melds almost indistinguishably into the statistical process associated with parametric uncertainty.7
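The distinction between parametric and model (structural) uncertainty can be illustrated with a small simulation: an analyst fits an assumed linear model to data generated by a saturating process. More data steadily narrows the parametric uncertainty, but the structural error, the wrong model form, never shrinks. The process, parameters, and sample sizes here are invented for the illustration.

```python
import random
import statistics

random.seed(1)

# True data-generating process (unknown to the analyst): a saturating curve.
def true_process(x):
    return 10.0 * x / (1.0 + 0.5 * x)

def fit_slope(n):
    """Fit the analyst's assumed model y = b*x by least squares.
    The model *structure* is wrong; only noise and b are acknowledged."""
    xs = [random.uniform(0.0, 5.0) for _ in range(n)]
    ys = [true_process(x) + random.gauss(0.0, 1.0) for x in xs]  # noise
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Parametric uncertainty (the spread of the estimate) shrinks with data...
small = [fit_slope(20) for _ in range(200)]
large = [fit_slope(2000) for _ in range(200)]
print(f"sd of estimate, n=20:   {statistics.stdev(small):.3f}")
print(f"sd of estimate, n=2000: {statistics.stdev(large):.3f}")

# ...but model (structural) uncertainty does not: the prediction at x = 5
# stays biased no matter how much data we collect.
b = statistics.mean(large)
print(f"predicted y at x=5: {b * 5:.1f}   true y at x=5: {true_process(5):.1f}")
```

The confidence interval around b can be made arbitrarily small while every prediction from the model remains wrong, which is precisely why narrow intervals alone are a poor proxy for good science.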
From the social point of view, uncertainty is not a desirable state of affairs but it is not especially problematic when science is in a position to learn rapidly. Repeated, consistently good predictions tend to validate the theory and to create trust and a willingness to invest in still more precise knowledge. Eventually, issues that previously might have been subject to strategic, self-interested argument (e.g., whether my steel or yours is better for use in a bridge) instead can be referred to experts for a disinterested (or public interested) decision. Normal peer review for quality control is generally a sufficient safeguard. In these circumstances, relatively insular expert-driven institutions operating under an umbrella of legislative objectives and standards are efficient and consistent with public interest. These are the kind of arrangements we generally make for building, bridge, auto, and pharmaceutical safety, among other things.8
The history of technological advance over the past 200 years illustrates the power of this method. But unlike civil engineering and the many other fields that have flourished using a reductionist approach, the sciences dealing with complex natural and human systems such as marine fisheries have not been able to develop a track record that generates broad social trust. Walters was (at least in 1986:162-163) very pessimistic about our ability to deal with these kinds of systems: “I doubt that there can, in principle, be any consensus about how to plan for the inevitable structural uncertainties that haunt us, any more than we can expect all human beings to agree on matters of risk taking in general.”
UNCERTAINTY IN COMPLEX ADAPTIVE SYSTEMS
The growth of understanding of complex adaptive systems in the past two decades suggests we may be dealing with ecological and human systems whose structure and dynamic behavior bear little resemblance to the equilibrium, single-species environment characterized by conventional resource theory. If we conceptualize fishery systems from the complex systems perspective, we are likely to approach the uncertainty (and the institutional design) problem in a way very different from the conventional.
In a Newtonian world, the stability of cause-and-effect relationships makes it possible to pursue reductionist science. This stability makes the observation and measurement of system relationships reliable and, more importantly, allows us to accumulate useful knowledge and to intervene in the system with predictable outcomes at whatever scale we find appropriate to our needs. As mentioned earlier, there is no doubt that many parts of our world fit this paradigm well. What is problematic about complex systems in this regard is their pervasive nonlinear causal relationships (Holling, 1987). At any time a large number of factors may influence the outcome of a particular event, each one to a greater or lesser extent; at another time, the strength of those same causative factors on the same event may be very different. The result is a decline in predictability and often a shift in the scale or dimension of predictability (e.g., Levin, 1992; Costanza and Maxwell, 1994; Pahl-Wostl, 1995; Ulanowicz, 1997).
This happens simply because the relative intensity of causal relations in the system changes from time to time. Extreme examples are the regime shifts such as have occurred in response to fishing and/or environmental changes in many places around the world (e.g., Dickie and Valdivia, 1981 [Peru]; Boreman et al., 1999 [Grand Banks and Georges Bank]). Under these circumstances similar species may be present, but in such radically altered proportions that predictions based on extrapolations of past relationships would be far off the mark. Certainly, if one were in a position to compare the entirety of the two systems (before and after the shift) as if they were stable systems, one probably would find strong dissimilarities in the intensity and relative importance of the interactions among components.
Examples less extreme than regime shifts take place as the normal course of events in complex systems. Components in the system are continually adapting and evolving (not simply changing magnitude) in response to developments within the system itself (e.g., fishermen’s response to a change in regulations, changes in the species distribution, or the driving forces in an economic system). Not only are we faced with ignorance about the strength of any particular causative relationship because of the pervasive nonlinearities of the system, but we can no longer be sure that a particular causative agent still enters the equation. These characteristics of complex adaptive systems clearly limit our ability to extrapolate on the basis of past system states and, consequently, the feasibility of prediction as usually defined from a reductionist scientific perspective (Pahl-Wostl, 1995). Recognition of the instability of the parameters of complex adaptive systems expands our understanding of the possible scope of our ignorance (Ulanowicz, 1997).
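The loss of predictability that nonlinearity produces can be seen in even a one-equation system. The logistic map below is a textbook toy, not a fisheries model; it shows both sensitivity to tiny measurement differences and the way a single parameter governs whether long-run prediction is possible at all.

```python
# The logistic map: a one-line nonlinear system in which tiny differences
# in state, or a small shift in one parameter, determine whether long-run
# prediction is possible.

def trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000)   # the "true" state
b = trajectory(0.400001)   # a measurement differing in the 6th decimal
print(f"gap after 5 steps: {abs(a[5] - b[5]):.6f}")
gap = max(abs(a[t] - b[t]) for t in range(25, 41))
print(f"largest gap, steps 25-40: {gap:.3f}")

# At r = 2.8 the same equation settles to a stable point: the strength of
# the nonlinearity, not the form of the equation, sets the horizon of
# prediction.
c = trajectory(0.4, r=2.8, steps=200)
print(f"stable regime settles near {c[-1]:.4f}")
```

In the chaotic regime the two trajectories are indistinguishable for a few steps and then diverge completely; in the stable regime the same equation is predictable indefinitely. Real ecosystems are vastly more complicated, but the qualitative lesson carries over.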
Nevertheless, there is perceptible order in these systems. This order can be understood and that understanding allows for the formation of a vision (a fuzzy prediction) of the future. Over time the order is exhibited in what many authors refer to as dynamic, or characteristic, patterns (Pahl-Wostl, 1995; Levin, 1999). I would describe this order as recurring similar patterns, never quite the same, sometimes startlingly novel because of the changing and adapting elements of the system, but also usually distinguishable from patterns in other systems (Holling, 1987).
Recognition of the patterns of change in a particular complex system can lead to an understanding of that system. That is, we can view patterns as historical events and understand the mechanisms that led to a particular outcome. But this understanding may provide us with the ability to predict in only the most qualitative ways—especially when we get beyond the immediate (inertial) term.
This characteristic of complex systems raises fundamental and difficult questions: How can we cope with or successfully intervene in ways that sustain the resources of these systems over the long run if we cannot predict the long-term consequences of our own actions? More importantly, how can we hope to make collective decisions in these circumstances? Won’t honesty about our lack of knowledge lead to a situation in which groups or individuals can honestly question and oppose restraint because it is costly in the short run and its benefits in the long run are unproven? In short, if we are in a world of complex systems, does the absence of predictability mean that we have no rational basis for making conservation decisions?
LEARNING IN COMPLEX ADAPTIVE SYSTEMS
In complex ocean systems, learning the appropriate kind and extent of restraint required for sustainability is definitely a more difficult problem than one might be led to believe from a single-species theoretical perspective. Conventional resource management theory and practice is founded on the presumption that it is possible to simplify and predict fisheries systems at the scale of individual stocks using the same methods that have been applied so successfully to physical systems. If managers could predict in this way, even with wide confidence limits, they would be in a position to manipulate outcomes in the system. They would be able to create meaningful property rights and enter into implicit, or explicit, contracts with fishers (e.g., “If you harvest only x amount today, then in the following year[s] there will be y amount [plus or minus] available to harvest”). These contracts would tend to be enforceable because individual incentives would be aligned with social goals and, as a result, would tend to lead to sustainable resources (Scott, 1992). Unfortunately, this kind of straightforward quid pro quo, top-down, contractual methodology is likely to be effective only when we can quickly learn, predict, and control outcomes.
The lack of predictive ability in complex systems clearly impairs this kind of straightforward contractual methodology; nevertheless, because these systems can be understood in some sense, the basic economic idea of a valuable return to restraint remains viable. The key to understanding the appropriate kinds of restraint lies in the recognition of patterns.
Imagine a world of many possible system states that change from one to another in recognizable, but generally novel, patterns, each with different causative relationships. The system’s propensity for one or another state, then, depends on the probability that a particular set of causative relationships with a particular set of values will appear at any point in time (Ulanowicz, 1997). In circumstances that are close in time or space, one might expect similarity of system states simply because of inertia. As time accumulates (or separating distance becomes greater), there is more scope for change in the circumstances of the system and less predictability. This does not mean the system in a particular place continues to diverge forever from its earlier state; it simply means that the set of possible system states changes.
Part of the reason for recurring patterns may be found in the differing response times (i.e., fast and slow) of the variables in the system (e.g., Simon, 1969; Allen and Starr, 1982; O’Neill et al., 1986; Holling, 1987).9 For example, the highly fecund fish of the ocean can change their numbers dramatically over the course of a single spawning cycle. Other organisms in the system—sponges, corals—may exhibit changes of similar magnitude, but only over a much longer period of time. Generally, aspects of the system that are slow to grow or develop or evolve—population age structures that include older animals, physical structures such as corals, tube worm colonies, learned and genetic behavioral aspects of populations such as migration routes and spawning sites—can be expected to constrain the faster elements in the system.10 Put differently, the timing and flows of energy among the population components of the system are constrained by the attributes or structure of the slow or relatively constant components of the system.
If the values of these slow, longer term variables change, the set of possible system configurations changes as well; if the longer term variables remain relatively constant or nearly so, the short term is characterized by recurring configurations derived from a limited set of system states. Thus, one would expect a system in which long-term variables such as habitat and abiotic factors remained unchanged to generate an always-changing set of similar system states (Pahl-Wostl, 1995). (Seasonal patterns, for example, are an obvious and easy pattern to discern.) It follows that destruction or erosion of long-term constraining variables, such as habitat, trophic structure, and behavioral factors such as a learned migration pattern, would be expected to change the set of possible system states so that it includes states unlike those experienced previously and, consequently, reduces the ability to perceive patterns and learn.11
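A toy simulation may help fix the idea of fast and slow variables. In the sketch below, a fast “fish” variable tracks a slow “habitat” variable that sets its carrying capacity; the model, its parameters, and its time scales are all invented for illustration, not calibrated to any system.

```python
import random

random.seed(2)

def simulate(years, habitat=100.0, habitat_loss=0.0):
    """Toy two-speed system (illustrative only): a fast fish population
    tracks a slow 'habitat' variable that sets its carrying capacity.
    Habitat erodes at habitat_loss per year and recovers at 1% per year."""
    fish = 0.5 * habitat
    states = []
    for _ in range(years):
        # Fast variable: can change a lot in a single spawning cycle.
        noise = random.uniform(0.8, 1.2)
        fish = noise * (fish + fish * (1.0 - fish / habitat))
        # Slow variable: near-constant unless actively eroded.
        habitat = min(100.0, habitat * (1.0 - habitat_loss) * 1.01)
        states.append(fish)
    return states

intact = simulate(200)                      # slow variable held steady
eroded = simulate(200, habitat_loss=0.03)   # slow variable eroded

# With the slow variable constant, the fast one fluctuates within a
# recurring band; erode it and the set of reachable states itself shifts.
print(f"intact, last 50 yrs: {min(intact[-50:]):.0f} to {max(intact[-50:]):.0f}")
print(f"eroded, last 50 yrs: {min(eroded[-50:]):.0f} to {max(eroded[-50:]):.0f}")
```

With the habitat variable intact, the population varies year to year but stays within a recognizable band of states; with the habitat slowly eroded, the band itself drifts to states never observed before, which is exactly the loss of learnable pattern described above.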
In his book Emergence, Holland (1998) describes the learning process a computer12 (and presumably humans) must go through to learn the game of checkers. He describes checkers as a very simple example of a complex adaptive system. Checkers has a limited number of pieces subject to a very few rules of movement, and its slow variables (the rules of the game, the size of the board, the kinds of pieces) are comfortably constant. Yet checkers is very difficult to predict and yields an immense number of possible board states. After only the first few moves of a game, it is unlikely that even an experienced player will encounter board configurations identical to those he has seen before. The state of the “system”—the configuration of the board—is nearly always novel, but patterns of configurations more or less similar to those experienced previously are likely. The train of causation in the system is not stable, varying with each configuration of the board. Feedback about one’s interventions in the system is rarely clear. A “good” move can be interpreted as such only after the game has ended; it is entirely possible that a “double jump” might have led to the loss of a game or that a “poor” move might have set up a winning sequence. Looking ahead to try to predict the outcome of one among a set of alternative moves is an exercise that can yield only an ambiguous answer. So how do we learn to play checkers? Or in our case, how do we learn about the impact of human actions in the ecosystem?
As mentioned earlier, the fundamental basis for learning and prediction in this kind of environment is the recognition of patterns. Because of the multiplicity and novelty of board configurations, and especially because of the adaptive behavior of one’s opponent, outcomes from any given decision cannot be expected to be the mean of outcomes of past similar situations. The adaptive behavior of the player’s opponent introduces a strong tendency for surprise and unintended results, especially for a player with a naïve statistical strategy.
Holland (1998) describes a number of measures that help the player assess and evaluate the current configuration of the board (for example, simple measures such as “pieces ahead,” “kings ahead,” and “net penetration beyond center line”). The same set of measures can be used to assess the likely outcome of alternative moves the player faces. In other words, the player can think through the possible board configurations—two, three, or more moves ahead—that might arise from each alternative move. Conservative and generally more successful assessments assume the other player knows at least as much about the game as the player making the assessment. A kind of worst-case precautionary principle applies. Some alternatives lead to clearly undesirable outcomes, others to outcomes that might be tolerable, and still others to outcomes that might improve the player’s position in the game. These assessments constitute a set of alternative visions of the future and are the basis of the player’s choice of moves. Decisions are in no way perfect and are especially dependent on the player’s experience, but their imperfection is far less debilitating than, say, that facing a player who uses a statistical approach (and who is aware of his opponent’s guile).
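Holland’s evaluation-and-lookahead procedure can be sketched in code. The following is a minimal illustration, not Holland’s actual formulation: the feature weights, the toy Board class, and the miniature game tree are all invented for the example, and the worst-case assumption appears as a minimax rule that credits the opponent with playing at least as well as we do.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Board:
    """A stand-in for a checkers position: just the simple measures Holland
    mentions, plus an explicit toy game tree of successor positions."""
    my_pieces: int = 12
    opp_pieces: int = 12
    my_kings: int = 0
    opp_kings: int = 0
    net_penetration: int = 0
    children: List["Board"] = field(default_factory=list)

def evaluate(b: Board) -> float:
    """Score a configuration using a few simple measures (weights invented)."""
    return (1.0 * (b.my_pieces - b.opp_pieces)        # "pieces ahead"
            + 1.5 * (b.my_kings - b.opp_kings)        # "kings ahead"
            + 0.2 * b.net_penetration)                # "net penetration"

def lookahead(b: Board, depth: int, maximizing: bool = True) -> float:
    """Assess a position by thinking `depth` moves ahead, assuming the
    opponent knows at least as much about the game as we do (minimax)."""
    if depth == 0 or not b.children:
        return evaluate(b)
    scores = [lookahead(child, depth - 1, not maximizing) for child in b.children]
    return max(scores) if maximizing else min(scores)

def choose_move(b: Board, depth: int = 3) -> Board:
    """Pick the successor position whose worst-case future looks best."""
    return max(b.children, key=lambda m: lookahead(m, depth - 1, maximizing=False))
```

The sketch captures the ambiguity of feedback described above: a tempting “double jump” that scores well on the static measures can be rejected once the opponent’s best reply is taken into account.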
From Checkers to Ecosystems
Holland’s checkers game is very interesting and illuminating in its description of how one learns and especially how one develops a vision of the future when causal relations are not stable. But the premise that learning can take place in this way appears to be based on circumstances that—in the case of checkers—are relatively tractable. In particular, learning checkers appears to be eased by the existence of a limited number of system states or board configurations, the ability to construct relatively clear criteria for assessment of possible futures, and the player’s ability to acquire experience with the system (including opponents) quickly and at low cost.
Checkers, unlike an ecosystem, does not contain variables of differing time steps, which means that even though the number of board configurations can be very large, that set does not change. If the size of the board could change, or the rules governing the movement of pieces could mutate, checkers might become an extraordinarily difficult game to play well. One could learn to play well under one set of circumstances, and then a mutation of the rules governing movement of pieces or board size might erase or invalidate much of what had been learned to that point. Both the number of possible system states and the number of observations required to recognize patterns typical of the game (in both its new and old states together) would increase greatly. And learning would slow down.
In an ocean ecosystem, if one considers all possible population levels and parameter states, one would appear unlikely ever to observe identical configurations of the system. On the other hand, the possibility of observing similar, recognizable configurations if the long time step variables in the system are stable (e.g., climate, habitat, particular behavioral patterns) seems much higher. That is, if habitat and other relatively stable, long time step variables of the system remain in place over time, one might expect the system to have a strong propensity to settle into a set of configurations or patterns similar to those that have been observed in the recent past. For example, even though population numbers may be highly variable, the identity of the Gulf of Maine ecosystem is apparent to a fisherman or scientist who has worked there his whole life. Like the checkers player, fishermen learn to recognize system patterns and have some sort of vision of the future, including a hard-to-prove sense of what effects humans have on the system.
Equally difficult, compared with checkers, is the establishment of social goals. In an ecosystem neither the ultimate nor the proximate goal is clear. Both depend on the structure of rights and the process of governance. A typical open-access regime contains private rights that nearly always generate individual incentives for short-run, profit-maximizing objectives that have little to do with conservation. Other rights regimes are capable of setting more rational long-run goals but face formidable problems about how to achieve those goals. The simple assumption that resource property rights (of almost any sort) will lead to a collective interest in conservation is not obvious in a complex system, as is argued later in this chapter. In other words, unlike checkers, the goal of the game emerges from the rights structure, or rules, used to play the game. This makes the process of deciding what kinds of restraint are appropriate even more difficult.
Whatever management or governance regime is established, it is likely to arrive at a very imprecise vision of the future and of the ways human activity shapes the system. This limited vision of the future is not scientific in the usual use of the word, but it is far more valuable than a sense that the future is totally unpredictable and not subject to influence. For the individual fisherman, its special value lies in the fact that it limits the set of system states that might reasonably be expected to occur in the future (Palsson, 2000). Consequently, his current actions are not immobilized by a sense that the number of outcomes is huge and every outcome equally possible. For example, if one expects certain seasonal patterns, even though they may be strong or weak, late or early, and if one expects certain species to be present even though their abundance may be great or little, the limited set of possible futures represented by these expectations makes preparation for the eventualities of the future possible (Acheson, 1988; Wilson, 1990). If it were not possible to narrow the set of conceivable futures, current action would lack any rational basis unless it were totally myopic and reactive. This limited individual vision of the future is important because it leads to a sense of what kinds of collective restraint are required.
In short, a limited set of familiar system patterns permits the formation of individual visions of the future. This vision is the basis for forward-looking adaptive behavior (Palsson, 2000). It is the rational foundation for investment in both physical and human capital and, importantly, is the basis for restraint with regard to current harvests or harvest activity. These individual visions of the future are the ultimate basis for the construction of a social objective. Consequently, from this perspective, maintenance of familiar system patterns (i.e., of the conditions necessary for “normal” system configurations) becomes the principal objective of management. Maintaining the “old” structures in the system (subject to Holling’s caveat) becomes the principal means to achieve that objective.
This is a very different view of the basis for restraint than that contained in conventional resource theory. Theories based on a presumption of full (or stochastic) knowledge of causal relationships almost invariably emphasize quantitative prescriptions involving the fast variables in the system (e.g., quotas for the
amount of fish caught, number of boats allowed to harvest, and so on) and ignore or assume constant the slow variables in the system. On the other hand, an approach that emphasizes “familiar patterns” suggests a focus on policies designed to maintain those aspects of particular populations and other system components that are long term in nature (e.g., the age structure of populations, learned behavior for migrations, and spawning sites that might be destroyed by loss of local components of metapopulations, habitat, and so on).
The argument is that preservation of the long time step variables—the factors that determine the short-term configurations of the system—is where the emphasis on restraints should be placed because that is where feedback and predictability, such as they are, are available to us. This implies relatively constant rules changed only infrequently. It does not suggest a feverish chasing after the fast variables in the system in an attempt to fine-tune. The other side of this same coin is the fisherman’s sense that if current conditions in the system were different—that is, if the structure of long-term variables were different—the expected set of system states also would be unfamiliar and larger and would make learning about the system and economical adaptation to future system states much more difficult. This would reduce the rational basis for restraint and make it much more difficult to achieve a scientific understanding of the system.
In summary, this perspective from complex systems theory leaves us with a sense that we have a very modest, very short-term capability for prediction at the species level, and an even more modest ability to control outcomes at that level in complex systems. We clearly influence the system, but the specificity of outcomes (especially in terms of short time step variables such as recruitment changes in population size) resulting from our actions is likely to escape us. Nevertheless, we can develop imperfect visions of the future of the system, visions that put boundaries on the probable configurations of the system. There may be certain configurations of some elements of the system—the long time step variables—which, if we take steps to protect them, can be expected to lead to strong propensities toward “typical” system states and patterns (i.e., states that we can learn to recognize through experience). These “typical” system states and patterns may be no “better” or “worse” than other alternatives in some intrinsic sense, but they have the advantage of being known and familiar. They allow us to learn and to form a vision of the future in spite of the great uncertainties in the system. This knowledge gives us the ability to adapt and provides the foundation for rational investment in the resource.
COLLECTIVE LEARNING IN COMPLEX ADAPTIVE SYSTEMS
The complexity of these systems—their size, spatial distribution, multiple scales, large number of components, continuous change, and other factors—creates circumstances in which no one individual or group could hope to adequately
address the learning problem. The problem is a collective problem and, as such, is dependent on social organization and process. By collective learning, I mean simply the way we (collectively) accumulate observations of a phenomenon such as patterns in the ocean, the way we interpret and articulate those observations (convert them to knowledge), and the way we remember that knowledge. From a resource management perspective, the problem in a common-pool, complex system is learning enough to develop a convincing rationale for individual and collective restraint. This is as much a social problem as a scientific problem.13 In fact, it is the difficulty of the collective learning in a complex environment that weaves the social and scientific problems into an inseparable matrix.
The social side of the problem has two closely related facets that are pertinent to the problem of collective learning. The first has to do with the institutions—especially the processes and the rights structures—that give rise to a rationale for stewardship and an incentive to learn. The second facet of the learning problem is the organization, or architecture, of those institutions. This second aspect is related most closely to an institutional attribute that Ostrom (1990) calls congruence. Overall this aspect of the problem has received much less attention than the others. Yet, given the complexity and associated uncertainty of these systems, it is critical to the social ability to efficiently acquire, analyze, and respond to changes in the system.
Nearly always, the literature on common-pool institutions assumes relatively complete (if stochastic) biological knowledge operating in a Newtonian world. This is most obvious in the economics literature, but it is also a pervasive assumption (even if unstated) in the other social science writings in this area. I don’t think the fundamental outlook of either economics or the other social sciences is challenged by a complex systems approach, but the particular kinds of solutions—the institutions and so on—suggested vary dramatically. This is much more true for economics than for the other social sciences because economists tend to employ, and translate into policy, prescriptions derived from analytical models that emphasize optimizing or maximizing behavior on the basis of full, or nearly full, knowledge. Much of the literature on co-management tends to view at least the human environment as complex, and for that reason alone has tended away from the neat analytical conclusions of economists (e.g., Ostrom, 1990, 1997; Pinkerton, 1989; McCay, this volume:Chapter 11). I suggest that a complex systems approach provides a strong theoretical basis that is consistent with most of the important conclusions about the structure of rights and institutional organization contained in the co-management literature.
To conceptualize this problem, I’ll turn to the ideas about the organization of complex systems originally put forward by Simon (1962, 1996).14 These ideas have been adopted by many others working in complex systems (e.g., O’Neill et
al., 1986; Pattee, 1973) and are implicit in most of the aggregation schema used in economics and ecology.15 They provide a fruitful conceptual foundation for addressing the collective learning problem or, what is nearly equivalent, the problem of organization of management institutions.
Simon proposed a compellingly simple generalization about the organization of complex systems—one that makes few assumptions about causal relationships: these systems are organized hierarchically and partitioned into nearly decomposable (or independent) subsystems. The key element in Simon’s scheme is the nearly decomposable subsystem. He defines the boundaries of such subsystems in terms of rates of interaction—within each subsystem, rates of interaction are high; between subsystems, they are lower. In terms of the previous discussion, each subsystem might contain fast and slow variables (reflecting a hierarchy of process), but there is also a tendency for larger scale subsystems in the hierarchy to react more slowly than smaller scale subsystems.
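Simon’s criterion—high interaction rates within subsystems, low rates between—lends itself to a simple computational reading. The sketch below is illustrative rather than Simon’s own formalism: given a hypothetical matrix of pairwise interaction rates among components, it ignores links weaker than a chosen threshold and groups what remains into connected components, the “nearly decomposable” subsystems. The rates and the threshold are invented for the example.

```python
def decompose(rates, threshold):
    """Group components into subsystems whose members interact at rates at
    or above `threshold`; weaker links are treated as the low
    between-subsystem interactions and ignored."""
    n = len(rates)
    seen, subsystems = set(), []
    for start in range(n):
        if start in seen:
            continue
        # Flood-fill over strong links only.
        stack, block = [start], set()
        while stack:
            i = stack.pop()
            if i in block:
                continue
            block.add(i)
            stack.extend(j for j in range(n)
                         if j not in block and rates[i][j] >= threshold)
        seen |= block
        subsystems.append(sorted(block))
    return subsystems

# Hypothetical interaction rates among four components: 0-1 and 2-3
# interact strongly; the cross links are an order of magnitude weaker.
rates = [[0, 9, 1, 0],
         [9, 0, 0, 1],
         [1, 0, 0, 8],
         [0, 1, 8, 0]]
```

With a threshold set between the strong and weak rates, the system decomposes into two subsystems; lower the threshold enough and the weak links reconnect everything into a single unit—which is precisely the sense in which the decomposition is only “near.”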
In both natural and artificial systems, near decomposability creates a tendency for efficient use of information, robustness, and resilience (Simon, 1962). A complex computer program, for example, is intractable if organized as a seamless, tightly integrated whole. Even if one were able to construct a seamless program, any small change thereafter would be extraordinarily difficult to implement and any unanticipated bug would be nearly impossible to chase down if everything were connected to everything else. As a practical matter it is possible to conceive, construct, and debug a complex program only if that program is organized in a series of loosely connected, usually nested, nearly decomposable subroutines within which groups of highly interactive variables are brought together. This hierarchical structure with nearly independent components is not only a necessary conceptual tool for the program’s creation, but also a functional aspect that affects its operating resiliency or stability.
All large natural and artificial systems face difficult problems of coordination, nearly all of which are solved by finding ways to maintain the advantages of decomposability (Low et al., in press). Business organizations have to be broken up into divisions, each with considerable autonomy, if they are to operate with even a modicum of efficiency. Here also the organizational problem is to group together activities with strong interactions and to tie them to the rest of the organization only when those activities impact or need to be coordinated with the activities of the rest of the firm. By doing this, the firm is able to assign particular decision-making responsibilities to that part of the firm with the most pertinent knowledge.16 This allows the firm to better monitor for accountability and reward on the basis of contribution to firm goals. Avoiding disharmonious incentives, such as might arise when responsibility is unclear and accountability difficult, is a major problem for firms because it has the potential to seriously attenuate intrafirm coordination and the achievement of firmwide goals (e.g., Hurwicz, 1972; Williamson, 1986; Demsetz, 1993; Rosen, 1993).
The federalist political system under which the United States is organized
also creates large numbers of relatively independent local authorities such as towns, counties, states, and the national government, all neatly arranged in a spatially nested hierarchy. The U.S. Constitution, similar constitutions at the state level, and law govern interactions within this well-defined hierarchy. But they also govern many other specialized units and agreements whose purpose is to address interactions whose patterns of occurrence do not conform to the “normal” nested hierarchy. For example, states are part of the federal union but also members of various associations or agreements among states organized for particular purposes, such as the Atlantic States Marine Fisheries Commission. In all these systems the connections between units are generally loose but, on the whole, lead to coordinated activities (Ostrom, 1991).
An important benefit of this form of organization is that the scale of operation of each component of the organization can always be chosen (for efficiency or other reasons) so that it matches the scale of the activity in question, that is, the scale at which the impacts from an activity generate consequences (costs and benefits). So local activities are assigned to local authority, regional to regional, and so on. The other side of that same coin is that the governance of activities at a local scale that might generate costs for neighbors can always be shifted to a higher scale, where wider than local impacts can be handled. When activities do not interact along the neat lines of a spatially nested hierarchy, arrangements can be made for ad hoc components tailored to the structure of that particular problem. In economic terms this kind of polycentric organization is equivalent to the internalization of spatially related externalities, or if one prefers, to the minimization of the transaction costs necessary to resolve spatially relevant externalities.
These ideas apply to natural as well as social systems (O’Neill et al., 1986; Pattee, 1973; Walker, 1992, 1995). Simon and other authors describe ecosystems, the human body, and living organisms in general in terms of nearly decomposable subsystems. From this perspective a straightforward (i.e., simple hierarchical) view of an ocean ecosystem translates into a world of spatially discrete but not completely independent subsystems connected horizontally and aggregated into larger, nested subsystems.
This is consistent with the modern treatment of scale and space in ecology (e.g., MacArthur and Wilson, 1967; O’Neill et al., 1986; Levin, 1992, 1999; Hanski and Gilpin, 1997) and, as I’ll describe, with our ability to organize a collective learning process. It is not consistent with the species-centered approach of conventional management. One of the principal reasons for suggesting this alternative conceptual approach is that the conventional approach does not lend itself to a practical way to manage ecosystems. In other words, when the complexity of the ocean is simplified by looking at individual species, we may blind ourselves to much of the feedback in the system. Just as important, an ecosystem conception based on a species-centered approach only makes sense if one could conceive of “modeling” all the biotic and abiotic interactions in the system. The massive impracticality of such an undertaking leaves one with, at best, ad hoc
adjustments to the conventional approach (see, for example, National Research Council, 1999).
One might think of patches (or nearly decomposable subsystems) of biological activity at a fairly small scale, which are replicated in a similar, but generally novel way, at other locations at the same scale. Patches might be expected to arise because of heterogeneity in the environment. Bottom and coastal topography, currents, wind, and a host of other factors create areas of upwelling, windrows, eddies, and a variety of other features that tend to concentrate biological activity. Patches are separated in space by areas in which the density of organisms and the rate of interactions are relatively low. The flows (e.g., drift and migration) between these patches or subsystems define the phenomena peculiar to the next, more aggregate layer in the hierarchy (Levin, 1999).
In other words, an aggregation, or clustering, of subsystems defines a larger scale subsystem. Changes in the composition of the organisms and other biological activity in subsystems (or patches), as well as the differences between subsystems at the same and different scales, constitute the information that is read and interpreted as patterns. Furthermore, from a process-oriented perspective (i.e., observing the nonspecies specific energy flows), rates of interaction may vary considerably over the course of an annual cycle (e.g., photosynthesis), with certain functions such as herbivory and predation occurring at high rates only at those times of the year when migratory species find the local availability of nutrients or prey sufficient to be present (O’Neill et al., 1986).
The high rates of interaction within subsystems are important because they encompass a large part of the feedback about human and other perturbations (Levin, 1999; Levins, 1992). If we are ever to understand the patterns in the system and the kinds of restraint that are appropriate, we have to be able to capture this feedback. If each subsystem were completely independent of other subsystems, it would contain all possible feedback relevant to its own dynamics, even though that feedback might be a very ambiguous or difficult-to-decipher reflection of the patterns generated by the subsystem itself. But, because subsystems are connected to other subsystems, some feedback escapes from the local system (due to migration and drift). This “lost” feedback is potentially subject to capture at the next highest scale in the system, where it emerges as a separate aggregate phenomenon. For that capture to be meaningful, however, there must be some sort of cross-scale network that can acquire information about and make sense of the aggregate phenomenon. Consequently, to the extent that learning about the proximate results of human interventions is possible at all, it requires capturing the feedback at the various subsystem levels and between levels.
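The way feedback “escapes” a local subsystem and reappears as an aggregate phenomenon can be illustrated with a toy redistribution model. All numbers and the migration rule below are invented for the example: organisms emigrate from a perturbed, unusually dense patch into its neighbors, so each patch viewed alone cannot distinguish locally generated change from immigration, while the total across patches captures what any single patch loses.

```python
def step(patches, emigration=0.2):
    """One time step: each patch keeps most of its organisms and spreads
    an emigrating fraction evenly over the other patches."""
    n = len(patches)
    out = [p * emigration for p in patches]
    return [patches[i] - out[i]
            + sum(out[j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

patches = [100.0, 40.0, 40.0, 40.0]   # patch 0 starts unusually dense
history = [patches]
for _ in range(10):
    patches = step(patches)
    history.append(patches)
```

Each neighbor of the dense patch sees its own counts rise for reasons it cannot observe locally; only the cross-scale view, which finds the total unchanged, reveals that the system is redistributing organisms rather than producing them.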
This strongly suggests the efficacy of a multiscale institution whose organization and activities parallel the organization and activities of the natural system. The fundamental rationale for this parallelism rests on the assumed nature of feedback within the natural system and the need within the social system to organize in a way that increases the likelihood of acquiring the information necessary for learning. The presumption is that when the “receptors” in the social system are aligned with feedback in the natural system, information costs are reduced, the possibilities for learning and adaptation are increased, and, of course, the ability to cope with uncertainty is strongly enhanced.
Resource mobility is usually one of the reasons cited for centralized approaches to ecosystem management. Nevertheless, mobility is one of the reasons the (simple) nested, or (multiple nested) polycentric, form of organization is important, especially from the collective learning perspective. Centralized approaches, as tend to be employed with single-species theory, obscure through aggregation and averaging a large part of the spatial and temporal behavior—the patterns—of the system. But the spatial and temporal incidence of events at a broad scale and their correlation with events in local subsystems are a large part of what we recognize as patterns. Even at the local scale aggregation probably obscures the source of many changes in the system (Holling, 1987; Levin, 1992). Current single-species attempts to manage over the range of the stock and to assess the status of the resource principally on the basis of aggregate measures of a very small part of the system—individual populations—essentially mask a large part of the local-aggregate patterns (of both populations and processes) one would expect to be relevant to an understanding of the system.
The problem of learning to recognize patterns is very much a problem of capturing system behavior and changes at a multitude of scales and locations. For scientific purposes it is often sufficient to isolate a particular scale of interest, holding everything higher in the hierarchy constant and treating the variations in lower level subsystems as noise around an average (Ahl and Allen, 1996; O’Neill et al., 1986; Simon, 1996). Resource management, however, does not have the luxury of attending to a single scale. To make the observations and conduct the analysis for management requires an information network spanning units at the same scale and reaching into units at higher and lower scales—a nested hierarchical structure or most probably a polycentric structure. Such a network is necessary to learn from local experience about local and nearby phenomena and is equally important for learning about the spatial and temporal attributes of phenomena that emerge at a larger than local scale. For example, the full extent of a migration pattern may only be observable at a particular large scale, but understanding its direction and timing (including especially exceptions to the general pattern) is often a function of more local phenomena, such as the availability of food. Similarly, understanding of local phenomena is clearly enhanced by knowledge of larger scale events. Thus, changes in aggregate phenomena are better understood when combined with knowledge of the smaller scale factors from which they emerge, and smaller scale events are better understood in the context of the larger scale factors that contain them (Berkes, this volume:Chapter 9; O’Neill et al., 1986; Rosa, 1998b; Young, this volume:Chapter 8).
This kind of organization has other important implications for the collective
learning problem. Refer to the short list of learning difficulties presented earlier and consider a multiscale natural environment with many similar but not identical subsystems and a parallel human organization. First, we are likely to find that the slow rate at which we can gain experience with complex systems can be greatly accelerated if we can pool and compare observations of subsystems. Experiences in similar, proximate subsystems can be aggregated into a relevant collective experience applicable to the scale of those subsystems. This is probably as close as we can come to controlled experiments in these systems (Walters, 1986), but it is possible to learn a lot this way. Furthermore, we can usually accelerate the systemwide adoption of new rules or procedures by first adopting and tailoring them at the most applicable local level (i.e., one that encompasses all the costs and benefits of the change). In a heterogeneous environment, attempts to accomplish the same end might be completely stymied by the need to satisfy all local conditions simultaneously.17
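The learning advantage of pooling observations across similar subsystems can be illustrated with a simple shrinkage estimator. The localities, the data, and the blending weight below are all invented for the example: when many similar localities observe the same underlying pattern with noise, blending each locality’s own mean with the pooled mean across localities yields estimates that are, on average, closer to the shared pattern than any locality’s raw mean alone.

```python
import random
from statistics import mean

def shrink_estimates(local_samples, weight=0.5):
    """Blend each locality's own mean with the pooled mean across all
    localities; `weight` sets how much each locality trusts itself."""
    local_means = [mean(s) for s in local_samples]
    pooled = mean(local_means)
    return [weight * m + (1 - weight) * pooled for m in local_means]

random.seed(0)
true_value = 10.0   # the shared underlying pattern
# Twenty similar localities, each with only five noisy observations.
localities = [[random.gauss(true_value, 3.0) for _ in range(5)]
              for _ in range(20)]

raw = [mean(s) for s in localities]          # each locality on its own
shrunk = shrink_estimates(localities)        # each locality after pooling

raw_error = mean(abs(m - true_value) for m in raw)
shrunk_error = mean(abs(m - true_value) for m in shrunk)
```

The sketch also suggests the limit of pooling noted in the text: the blend is only appropriate to the extent that the localities really are similar, which is why local knowledge of the dissimilarities remains essential.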
An informative example is the way in which municipalities within a state, or states within the nation, actively compare and contrast one another’s experiences in various realms. Experiments, new methods of doing something, the response to a natural or economic disturbance—whatever happens in one jurisdiction can be followed, modified, and applied in other jurisdictions. These information flows often do not occur within a simple nested hierarchy. Cities and states and, for that matter, all kinds of similar governing units tend to maintain collective organizations for the purpose of articulating their collective experience and developing new ways to operate. This information is then disseminated among members of the organization through publications, model legislation, personal discussions, conferences, and a variety of other networking activities (Levitt and March, 1995).
Importantly, the value of such information to a particular locality can only be assessed by someone (or a group) with a reasonably detailed knowledge, especially including a history, of changes in local factors. Model legislation, for example, is just that; it is usually constructed on the basis of collective (averaged) experience of a small group of early adopters, but with the expectation that it will be modified by localities so that it better fits local circumstances. Localities, in effect, introduce the lower scale “noise” necessary to tailor model legislation to the peculiarities of local circumstances. The same local peculiarities mean the collective value of numerous local experiments (i.e., their aggregate effect) can only be assessed with relatively particularized knowledge of local and aggregate circumstances.
The greater the number of relevant parties that can be brought into the deliberation, the greater the likelihood that common patterns can be identified with confidence even though the circumstances around each locality’s experience may differ. Small numbers, of course, always leave open the strong possibility that unknown factors special to a locality may be responsible for a particular result and, thereby, the value of the collective knowledge that can be acquired from that experience might be diminished. Nevertheless, in circumstances where there are
many similar, redundant local units, a locality can learn from the experiences of a small number of similar units to the extent that it understands the dissimilarities between itself and those units. It can compensate for perceived differences between itself and the others and can adapt its behavior (or an experiment) in ways that are thought to assure a better result (Dietz and Stern, 1998; Low et al., in press). Knowledge of local conditions can penetrate the ignorance embedded in averages.
Another way of looking at the learning advantages conferred by decentralization is in terms of the ability to avoid possibly persistent maladaptive policies. In the conventional view of the scientific process, little thought is given to this problem because it is assumed that the ability to validate theory or policies will select out those that are maladaptive. But the ambiguity of evidence in complex systems seriously attenuates these selection pressures. As a result, as Gell-Mann (1994:296-305) points out, there is a tendency to substitute external criteria, ones that do not necessarily reflect the adaptive value of the policy. For example, in the absence of clear evidence one way or another, criteria appear that might select for policies that tend to reinforce the power of particular individuals or groups or an agency or a religious or scientific dogma. One could probably trace all sorts of organizational ills, from continuing ineffective policies to serious corruption, to this basic problem. Perhaps the only reasonable institutional response to this problem is to maintain independent (nearly decomposable) local governing units. Their ability to probe different policies and to remain skeptical without great cost is one of the few ways to constrain persistent maladaptive policies or, viewed more positively, to assure the continuing evolution of the institution.
Organization, Rights, and Incentives
Individual incentives are generally the most important factor in the transformation of organizational structure into outcomes (Williamson, 1986; Pfeffer, 1995). Incentives are important for rule compliance and stewardship, as is usually emphasized, but in the context of complex systems they have a particularly strong bearing on the collective learning problem and the feasibility of developing restraining rules. An organization capable of undertaking the kind of learning problem inherent in ecosystem management is, nevertheless, likely to be highly impaired if the individuals who comprise the organization do not have incentives consistent with the goals of the organization. The state, for example, can always (at great cost) use the threat of force to produce compliance with its rules; but there is little it can do, short of providing self-interested incentives, to produce forthcoming engagement in the processes of collective learning and rule development.
The formation of incentives depends on property or quasi-property rights (as is usually emphasized) and also on a set of circumstances that creates feedback and the ability to respond to that feedback (Hurwicz, 1972; Libecap, 1995). If the resource rights possessed by a decision maker provide no way to obtain feedback
or means of control or influence over the resource, even if the interests of the rights holder are coincident with the long-term maintenance of the productivity of the resource, action producing that end is not likely to be forthcoming simply because the appropriate action cannot be identified. Because feedback in single-species theory is assumed to be straightforward and somewhat obvious (at least to social scientists), this problematic aspect of incentive formation, it seems, is usually assumed away without much thought.
From the perspective of a complex, multiscale system, however, the ability to detect, understand, and act on feedback in a way that reinforces a species-specific right is clearly a major problem. If, as argued here, we are likely to have only modest ability to control outcomes in these kinds of systems, then the best we may be able to do is assure the existence of the conditions necessary to produce familiar patterns in the system, that is, the maintenance and protection of the slow-growing structures in the system. In addition, the convenient analytical fiction of a single neurophysiological system operating in a simple environment is also seriously misleading. Observations of multiple factors must be made at multiple scales and locations; the resulting information must be transferred to some sort of deliberative/analytical forum and then transformed into a decision to take action or impose restraint. This difficult process by itself is likely to impair the ability to adapt successfully.
From the perspective of individual incentives, this impaired and modest control argues strongly against species-specific rights. Such rights would provide no incentive to protect common resources, such as habitat, necessary for the sustainability of more than one species. Neither would any incentive exist to acquire or provide information that might contribute to the identification of system patterns, also a common resource. And most importantly, species-specific rights would create strong incentives against the creation of rules that might be eminently sensible from a system perspective but of negative or no value to someone holding species-specific rights, such as restraints on harvesting that protect habitat relevant to someone else’s resource.
Consequently, broad rights are much more likely to generate an expectation of a beneficial return to restraint because they conform to the modest level of control that we can exercise. We may find it very difficult to predict species-specific outcomes in these systems, but it is fairly easy to be confident about broad outcomes. For example, temporarily or permanently closing an area to fishing is likely to lead to greater standing biomass in that area, but the quantitative composition of that biomass by particular species may be impossible to predict. A person with narrowly specified rights may or may not benefit from such a policy, depending on the particular outcome. A person with broad rights and the ability to adapt is almost certain to benefit regardless of the particularities and for that reason is much more likely to be predisposed to agree to restraining measures of this sort.
For example, in 1994 a large part of Georges Bank was closed for the purpose of restoring cod stocks. Surveys in 1999 revealed little recovery of cod but a bonanza of scallop growth (Murawski et al., 2000). Holders of rights to fish cod might be very skeptical of additional proposals to close areas because they realize they might never or only occasionally be on the winning end. Holders of rights to other fisheries, those that benefited in this instance, on reflection might come to the same conclusion because cod or dogfish or something entirely unexpected might bloom the next time. If, on the other hand, these fishermen had held rights that allowed them to exploit the systemwide benefits of closures (i.e., multiple-species rights), they would have had strong economic incentives to accept closures, and the experience of the closure on Georges Bank would have reinforced that incentive even more.
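The incentive logic of this example can be sketched in a toy simulation. The species list, the equal bloom probabilities, and the win/lose accounting below are all hypothetical, chosen only to mirror the Georges Bank story, not drawn from the chapter or from data:

```python
import random

random.seed(0)
SPECIES = ["cod", "scallop", "dogfish"]  # hypothetical set of possible winners

def simulate_closures(n_trials=10_000):
    """Each closure raises standing biomass, but which species blooms is
    unpredictable ex ante. Compare a cod-only rights holder with a holder
    of broad (multiple-species) rights."""
    narrow_wins = broad_wins = 0
    for _ in range(n_trials):
        bloom = random.choice(SPECIES)  # outcome unpredictable in advance
        if bloom == "cod":              # narrow holder benefits only here
            narrow_wins += 1
        broad_wins += 1                 # broad holder harvests whatever blooms
    return narrow_wins / n_trials, broad_wins / n_trials

narrow_rate, broad_rate = simulate_closures()
print(f"narrow rights benefit in {narrow_rate:.0%} of closures")
print(f"broad rights benefit in {broad_rate:.0%} of closures")
```

Under these assumptions the narrow-rights holder gains from only about a third of closures, while the broad-rights holder gains from every one, which is the asymmetry that makes the latter far more willing to agree to restraint.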
Fortunately, rights do not have to be predicated on particular cause-and-effect relationships. Agricultural land rights, for example, are not based upon any particular biological relationship; the rights are valuable because they allow the owner to employ any of a large number of known biological relationships. As climate, market, and known biological relationships change, the owner of agricultural land rights is free to adapt. What is important about the right is that a large class of phenomena about which we do and do not have knowledge of causal relationships and whose impacts are contained within the boundaries of the property are placed under the owner’s potential control. There is, as a result, a strong incentive to learn and adapt in a way that is consistent with profitability (and presumably the social interest in conservation). Even lacking the kind of biological control that is possible with agricultural rights, broad rights in ocean ecosystems allow the owner to adapt to changes in the market and the environment. So long as the variation in the environment conforms with the kinds of patterns expected in the system, individuals can make the preparations (investments) necessary for successful adaptation.18 This capability, combined with information flowing from the collective network, generates the individual’s expectation that restraint is likely to be beneficial.
The problem, then, is that there is very little species-specific control, especially over the period in which abundance is dependent on recruitment. But this is the period that is relevant to sustainability. Broad rights that correspond with the dimension at which there is at least limited control—all species at the (sub)system level, or perhaps, the functional group—align individual incentives with the need to learn about and maintain ecosystem function. However, individual rights are not the only key to learning.
Individuals clearly learn from the experience of others and construct their view of the ecosystem through a complex interplay of their own and others’ experiences at their “own” scale, and how that fits into the aggregate picture that is conveyed to them by individuals or by organizations operating at a larger scale (Michael, 1995; Parson and Clark, 1995). The organizational problem is to place
individuals inside a network that is capable of generating appropriate feedback. For all the reasons discussed to this point, hierarchical and, by necessity, representative governance structures are most likely to be able to convey to individuals the collective experience at all scales in the system—that is, most likely to provide feedback about system patterns.
These same governance organizations also provide the mechanisms for attaching meaning to observations, for deliberation, and for taking ameliorative action. These capabilities are essential to the understanding of the ecosystem. They are capabilities that can be partitioned into their nearly decomposable tasks, but cannot be isolated from the system as a whole. In other words, given the mobility of resources in the system, the rights and the incentives of a person operating at a low level in the hierarchy are dependent on information generated at the same and at higher levels. Patterns at all scales and the efficacy of rules also at all scales are of interest to the individual.
In short, in a complex system, the creation of individual incentives that might lead to collective restraint involves the identification of system patterns, the formation of a broad, not narrowly specified, vision of the future, and the ability to adapt to that future. Given all the difficulties of learning discussed thus far, a rights system that relies only on individual learning is likely to be untenable. The collective learning process has to be an enterprise whose organization parallels the structure of feedback in the system. The tight local coupling on the ecosystem side that Levin (1999) refers to has to be captured by tight local coupling on the social side. There have to be broad, relatively stable networks that link multiple localities, and a process of collective deliberation that converts the experiences of those localities into meaningful restraint, or into a process that can lead to meaningful restraint (Dietz, 1994; Dietz and Stern, 1998).
The individual’s perception of the environment and the formation of his incentives are intimately dependent on this governance process. Inclusion within a stable network of discussion, being a part of the experience and analysis of a broad array of individuals, learning the likely response of others to changes in rules, and having a vote or substantial role in the decision process all contribute to the alignment of individual incentives. However, if this process is not organized so that it can capture feedback about the effect of human interventions, the incentives and the actual behavior of individuals and groups are not likely to lead to conservation. Externalities will persist.
On the other hand, to the extent that individual incentives can be aligned with the social goal of conservation (or sustainability), the state is relieved, by and large, of the need to rely on its police powers and threats of force in order to ensure individual behavior. Administrative and enforcement costs are reduced and the scope of feasible rules is expanded.19 Most important, however, is the change in the kinds of information strategies individuals (and groups) find it in their interest to pursue. In the typical top-down administrative approach to management, individuals (or groups) rarely find it in their interests to be forthcoming with information. All sorts of exaggerations, games, lies, dissembling, and other behaviors are encouraged because there is generally only a limited and costly ability for others to verify such (mis-)information and generally no penalty—and often a reward—for its introduction into the public process.
This kind of behavior always will be difficult to constrain in a complex environment; however, when management organization and resource rights are designed with the problem of learning in mind and actually lead to “tight local coupling” in the form of social networks, problems of information verification can be reduced and the costs of dissembling increased. Individual and collective learning can be encouraged. This increases the feasibility of conducting a constructive “analytical deliberation,” arriving at a shared vision of the future and aligning individual incentives.
This kind of institutional arrangement, which I believe is principally consistent with decentralized, democratic governance, does not resolve scientific uncertainty, but it does create a constructive environment in which the collective pursuit of useful knowledge can take place. This may appear to be a woefully complicated process, but it is nothing more than what we accomplish in our everyday governance. Society and the economy are extremely complex, multiscale, rapidly changing systems in which we have learned to govern ourselves.
Conclusions

Finding ways to effectively restrain human activity in complex ecosystems has been very difficult. A large part of the problem arises from scientific uncertainty, which is often used as a pretext for not making hard political decisions for conservation. This chapter suggests we have wrongly characterized our knowledge of the natural environment and, consequently, have viewed the uncertainty and learning problem as if it were a typical engineering problem. As a result, we have created institutions and administrative procedures ill adapted to a solution of the conservation problem.
Usually we assume we are dealing with a classical Newtonian system in which cause-and-effect relationships are stable, or at least can be treated as if they were. In systems that truly conform to this assumption, the normal procedures of science can lead to understanding and reliable prediction. From the social point of view, repeated successful prediction generates trust even when there may be a lack of understanding among affected nonscientists. It also creates the circumstances for effective accountability and provides the rationale for reliance on expert-staffed institutions for the resolution of science-related problems.
Complex adaptive systems do not lend themselves to long-term prediction consistent with the needs of sustainability because of their changing, complex, and usually nonlinear causal relationships. We may be able to understand the structure and dynamics of these systems without being able to predict anything
but broad patterns, or propensities, to use Ulanowicz’ (1997) terminology. This is a fundamentally different and important characteristic when compared with Newtonian systems; it raises two closely related social problems: (1) How do we collectively learn what kinds of restraint will work when the time-honored reductionist process of “predict → test → learn → revise → and predict again” by which we hone our understanding cannot be followed; and (2) in this kind of environment, what kinds of institutions are necessary to best facilitate learning, accountability, and incentive alignment?
Holland (1998) suggests that learning in this kind of environment is based on the identification of recurring system patterns. The game of checkers that he uses as an example of pattern learning is a relatively simple example of a complex adaptive system. It presents a limited and stable set of possible system states and patterns; the criteria for successful intervention in the system are fairly clear, and the time and resource costs of learning are relatively low.
When this same learning problem is applied to ecosystems, especially those in which humans play an active or dominant role, such as fisheries, the complexity and extent of the environment transforms the learning problem. Patterns in this kind of system, I suggest, are best understood in terms of the differing time steps of variables in the system. The relative stability of slower changing variables, such as habitat, constrains and limits the range of patterns that appear in the more quickly changing aspects of the system, such as the size of populations. It may be possible to ameliorate, or minimize, the learning problem through policies meant to affect the range of patterns we encounter. However, we will always be faced with a multiscale system in which observation is costly, analysis is difficult, and prediction about specific results of our intervention in the environment is not possible. This is not the kind of environment in which it is easy to build an atmosphere of credibility and trust. For all these reasons, learning in this kind of environment is very much a collective enterprise that has to be mediated by institutions. The design of those institutions is important.
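The slow-variable argument can be illustrated with a minimal two-timescale model. Everything in it is assumed for illustration (the drift rates, growth rate, and noise levels are arbitrary): habitat quality drifts slowly, while population fluctuates quickly around a carrying capacity set by habitat, so the slow variable bounds the range of patterns the fast one can display even though no particular trajectory is predictable.

```python
import random

random.seed(1)

def run(years=200):
    habitat = 1.0  # slow variable: drifts about 1% per step
    pop = 0.5      # fast variable: noisy logistic growth
    trajectory = []
    for _ in range(years):
        habitat = max(0.2, habitat + random.uniform(-0.01, 0.01))
        growth = 0.8 * pop * (1 - pop / habitat)  # capacity tracks habitat
        pop = max(0.01, pop + growth + random.gauss(0, 0.05))
        trajectory.append((habitat, pop))
    return trajectory

traj = run()
# The population wanders unpredictably, but stays inside the band of
# patterns that the slowly changing habitat permits.
print(f"population range: {min(p for _, p in traj):.2f} "
      f"to {max(p for _, p in traj):.2f}")
```

Policies that protect the slow variable (habitat) do not make the fast variable predictable; they keep its fluctuations within a familiar, exploitable range.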
An institution’s success in minimizing the cost and difficulty of observation and analysis depends principally on its ability to capture the feedback in the system it governs. To do this well, the organization of institutions must take on a hierarchical structure that reflects the patchy, multiscale hierarchical structure of the natural system. At each level in the hierarchy, institutions must be “positioned” so that their boundaries correspond as much as possible in terms of scale and location to the boundaries of strong interactions in the biological system. There must be connections (information flows) between locations at the same scale and between higher and lower scales as in the ecosystem.
The purpose of this parallelism is to align the “receptors” of the institution as much as possible with the spatial patterns of feedback in the system. In a situation with a crazy quilt of social boundaries that bear no resemblance to ecological boundaries, it might be possible to disaggregate and reaggregate observations in a way that made ecological sense, if analysis and observation were costless. How-
ever, noncongruent boundaries are much more likely to simply compound, or even confound, the learning process. A parallel structure, on the other hand, minimizes observational and analytical problems and, if across-scale and between-scale connections exist, provides for a flow of information that can be used to generate an understanding of processes at various scales and locations.
A very important—the dominant—aspect of the collective learning problem is the need to extend the process of learning down to the individual level. Individual incentives—and, importantly, the willingness to enter into restraining agreements—have to be based on a perception of a beneficial connection between restrained current actions and future states of the natural system. In a complex system, in which it is difficult to predict the future state of system components (e.g., species abundance), this would appear difficult to achieve. Nevertheless, so long as individuals are in a position to adapt to changes in system states, the connection between current and (expected) future states does not have to be mechanically precise. It is sufficient that the resulting (expected) future state(s) are positioned within the set of patterns that characterize the typical system and that individuals are in a position, technologically and legally, to adapt to those new states when they appear (i.e., not tied to the fate of particular species). Under these circumstances the probability of a positive economic outcome for the individual is very high and, as a result, so also is the rationality of entering into restraining agreements.
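The rationality of restraint under adaptability reduces to a simple expected-value comparison. The three states, their probabilities, and the payoffs below are invented solely for illustration:

```python
# Three equally likely post-restraint system states, each favoring a
# different species (hypothetical numbers throughout).
states = {"cod_bloom": 1 / 3, "scallop_bloom": 1 / 3, "dogfish_bloom": 1 / 3}

payoff_tied = {"cod_bloom": 90, "scallop_bloom": 0, "dogfish_bloom": 0}
payoff_adaptable = {"cod_bloom": 90, "scallop_bloom": 70, "dogfish_bloom": 60}

ev_tied = sum(p * payoff_tied[s] for s, p in states.items())
ev_adaptable = sum(p * payoff_adaptable[s] for s, p in states.items())

print(round(ev_tied, 1), round(ev_adaptable, 1))  # prints 30.0 73.3
```

An agent tied to one species faces a lottery on every restraining rule; an agent positioned, technologically and legally, to adapt captures value in every state, so agreeing to restraint is rational for the latter even when no specific outcome can be predicted.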
Acheson, J.M. 1988 Patterns of gear changes in the Maine fishing industry. Maritime Anthropological Studies 1:49-65.
Ahl, V., and T.F.H. Allen 1996 Hierarchy Theory: A Vision, Vocabulary, and Epistemology. New York: Columbia University Press.
Allen, T.F.H., and T.B. Starr 1982 Hierarchy. Chicago: University of Chicago Press.
Ames, E. 1998 Cod and haddock spawning grounds in the Gulf of Maine. In The Implications of Localized Fishery Stocks, I. Hunt von Herbing, I. Kornfield, M. Tupper, and J. Wilson, eds. New York: Natural Resource, Agriculture, and Engineering Service.
Appell, D. 2001 The New Uncertainty Principle: For complex environmental issues, science learns to take a backseat to political precaution. Scientific American 284:18-19.
Boreman, J., B.S. Nakashima, J.A. Wilson, and R.L. Kendall, eds. 1999 Northwest Atlantic Groundfish: Perspectives on a Fishery Collapse. Bethesda, MD: American Fisheries Society.
Brodziak, J.K.T., W.J. Overholtz, and P.J. Rago 2001 Does spawning stock affect recruitment of New England groundfish? Canadian Journal of Fishery and Aquatic Sciences 58: 306-318.
Costanza, R., and T. Maxwell 1994 Resolution and predictability: an approach to the scaling problem. Landscape Ecology 9: 47-57.
Demsetz, H. 1993 The theory of the firm revisited. In The Nature of the Firm: Origins, Evolution, and Development, O.E. Williamson and S.G. Winter, eds. New York: Oxford University Press.
Dickie, L.M., and J.E. Valdivia G. 1981 Investigacion cooperativa de la anchoveta y su ecosistema (ICANE) between Peru and Canada: A summary report. Boletin Instituto del Mar del Peru Vol. Extraordinario: XIII-XXIII.
Dietz, T. 1994 What should we do? Human ecology and collective decision making. Human Ecology Review 1:301-309.
Dietz, T., and P.C. Stern 1998 Science, values and biodiversity. Bioscience 48(6):441-444.
Finlayson, A.C. 1994 Fishing for Truth: A Sociological Analysis of Northern Cod Stock Assessments from 1977 to 1990. St. John’s, Nfld.: Institute of Social and Economic Research, Memorial University of Newfoundland.
Fogarty, M. 1995 Chaos, complexity, and community management of fisheries: an appraisal. Marine Policy. 19:437-444.
Gell-Mann, M. 1994 The Quark and the Jaguar. New York: W.H. Freeman and Company.
Gunderson, L.H., C.S. Holling, and S.S. Light 1995 Barriers and Bridges to the Renewal of Ecosystems and Institutions. New York: Columbia University Press.
Hall, C.A.S. 1988 An assessment of several of the historically most influential theoretical models used in ecology and of the data provided in their support. Ecological Modelling 43:5-31.
Halliday, R.G., and A.T. Pinhorn 1990 The delimitation of fishing areas in the northwest Atlantic. Journal of Northwest Atlantic Fishery Science 10:1-51.
Hanski, I.A., and M.E. Gilpin 1997 Metapopulation Biology: Ecology, Genetics, and Evolution. San Diego: Academic Press.
Hilborn, R., and D. Gunderson 1996 Chaos and paradigms for fisheries management. Marine Policy 20:87-89.
Hilborn, R., and C.J. Walters 1992 Quantitative Fisheries Stock Assessment: Choice, Dynamics, and Uncertainty. New York: Chapman and Hall.
Holland, J. 1998 Emergence. Cambridge, Eng.: Perseus Books.
Holling, C.S. 1973 Resilience and stability of ecological systems. Annual Review of Ecology and Systematics 4:1-23.
1987 Simplifying the complex: The paradigms of ecological function and structure. European Journal of Operational Research 30:139-146.
Hurwicz, L. 1972 On informationally decentralized systems. Pp. 297-336 in Decision and Organization, C.B. McGuire and R. Radner, eds. Amsterdam: North-Holland Publishing Company.
Hutchings, J.A. 1996 Spatial and temporal variation in the density of northern cod and a review of hypotheses for the stock’s collapse. Canadian Journal of Fishery and Aquatic Sciences 53:943-962.
Kuhn, T.S. 1962 The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Levin, S. 1992 The problem of pattern and scale in ecology. Ecology 73:1943-1967.
1999 Fragile Dominion. Cambridge, Eng.: Perseus Books.
Levins, R. 1992 Evolutionary Ecology Looks at Environmentalism. Unpublished paper delivered at the Symposium on Science, Reason and Modern Democracy, Michigan State University, East Lansing, MI, May 1.
Levitt, B. and J.G. March. 1995 Chester I. Barnard and the Intelligence of Learning. Pp. 11-37 in Organization Theory: From Chester Barnard to the Present and Beyond. O. Williamson, ed. New York: Oxford Univ. Press.
Libecap, G. 1995 The conditions for successful collective action. In Local Commons and Global Interdependence: Heterogeneity and Cooperation in Two Domains, R. Keohane and E. Ostrom, eds. London: Sage.
Low, B., E. Ostrom, C. Simon, and J. Wilson in press Redundancy and diversity: Do they influence optimal management? In Navigating Nature’s Dynamics: Building Resilience for Adaptive Capacity in Social Ecological Systems, F. Berkes, J. Colding, and C. Folke, eds. Cambridge, Eng.: Cambridge University Press.
Ludwig, D., R. Hilborn, and C.J. Walters 1993 Uncertainty, resource exploitation, and conservation: Lessons from history. Science 260:17, 36.
MacArthur, R.H., and E.O. Wilson 1967 The Theory of Island Biogeography. Princeton: Princeton University Press.
Michael, D.N. 1995 Barriers and bridges to learning in a turbulent human ecology. Pp. 461-485 in Barriers and Bridges to the Renewal of Ecosystems and Institutions, L.H. Gunderson, C.S. Holling, and S.S. Light, eds. New York: Columbia University Press.
Myers, R.A., N.J. Barrowman, J.A. Hutchings, and A.A. Rosenberg 1995 Population dynamics of exploited fish stocks at low population levels. Science 269: 1106-1108.
Murawski, S.A., R. Brown, and L. Hendrickson 2000 Large-scale closed areas as a fishery-management tool in temperate marine systems: The Georges Bank experience. Bulletin of Marine Science 66:775-798.
National Oceanic and Atmospheric Administration (NOAA) 1986 Fishery Management Study. Washington, DC: U.S. Department of Commerce.
1989 50 CFR Part 602, Guidelines for the Preparation of Fishery Management Plans Under the FCMA. Washington, DC: U.S. Department of Commerce.
National Research Council 1999 Sustaining Marine Fisheries. Committee on Ecosystem Management for Sustainable Marine Fisheries. Ocean Studies Board. Commission on Geosciences, Environment, and Resources. Washington, DC: National Academy Press.
O’Neill, R.V., D.L. DeAngelis, J.B. Waide, and T.F.H. Allen 1986 A Hierarchical Concept of Ecosystems. Princeton: Princeton University Press.
Ostrom, E. 1990 Governing the Commons: The Evolution of Institutions for Collective Action. New York: Cambridge University Press.
1997 A Behavioral Approach to the Rational Choice Theory of Collective Action. Presidential Address to the American Political Science Association annual meetings, August 28-31.
Ostrom, V. 1991 Polycentricity: The structural basis of self-governing systems. In The Meaning of American Federalism: Constituting a Self-Governing Society, V. Ostrom, ed. San Francisco: ICS Press.
Pahl-Wostl, C. 1995 The Dynamic Nature of Ecosystems: Chaos and Order Entwined. Chichester, Eng.: John Wiley & Sons.
Palsson, G. 2000 “Finding one’s sea legs”: Learning, the process of enskilment, and integrating fishers and their knowledge into fisheries and science and management. In Finding Our Sea Legs: Linking Fishery People and Their Knowledge with Science and Management, B. Neis and L. Felt, eds. St. John’s, Newfoundland: Institute for Social and Economic Research Press.
Parson, E.A., and W.C. Clark 1995 Sustainable development as social learning: Theoretical perspectives and practical challenges for the design of a research program. In Barriers and Bridges to the Renewal of Ecosystems and Institutions, Gunderson, L.H., C.S. Holling, and S.S. Light, eds. New York: Columbia University Press.
Pattee, H.H. 1973 Hierarchy Theory: The Challenge of Complex Systems. New York: George Braziller.
Pauly, D., V. Christensen, J. Dalsgaard, R. Froese, and F. Torres, Jr. 1998 Fishing down marine food webs. Science 279:860-863.
Pfeffer, J. 1995 Incentives in organizations: The importance of social relations. Pp. 72-97 in Organization Theory: From Chester Barnard to the Present and Beyond, O.E. Williamson, ed. New York: Oxford University Press.
Pinkerton, E. 1989 Co-operative Management of Local Fisheries: New Directions for Improved Management and Community Development. Vancouver: University of British Columbia Press.
Rosa, E.A. 1998a Metatheoretical foundations for post-normal risk. Journal of Risk Research 1:15-44.
1998b Comments on commentary by Ravetz and Funtowicz: ‘Old-fashioned hypertext’. Journal of Risk Research 1:111-115.
Rose, G.A., B. DeYoung, D.W. Kulka, S.V. Goddard, and G.L. Fletcher 2000 Distribution shifts and overfishing the northern cod (Gadus morhua): A view from the ocean. Canadian Journal of Fishery and Aquatic Sciences 57:644-664.
Rosen, S. 1993 Transactions costs and internal labor markets. In The Nature of the Firm: Origins, Evolution, and Development, O.E. Williamson and S.G. Winter, eds. New York: Oxford University Press.
Rosenberg, A.A., M.J. Fogarty, M.P. Sissenwine, J.R. Beddington, and J.G. Shepherd 1993 Achieving sustainable use of renewable resources. Science 262:828-829.
Samuel, A.L. 1959 Some studies in machine learning using the game of checkers. In Computers and Thought, E.A. Feigenbaum and J. Feldman, eds. New York: McGraw-Hill.
Scott, A. 1992 Obstacles to fishery self government. Marine Resource Economics 8(3):187-199.
Simon, H. 1962 The architecture of complexity. Proceedings of the American Philosophical Society 106:467-482.
1996 The Sciences of the Artificial. 3rd ed. Cambridge, MA: MIT Press.
Smith, M.E. 1990 Chaos in fisheries management. Maritime Anthropological Studies 3(2):1-13.
Stephenson, R.L. 1998 Consideration of localized stocks in management. A case statement and a case study. In The Implications of Localized Fishery Stocks, I. Hunt von Herbing, I. Kornfield, M. Tupper, and J. Wilson, eds. New York: Natural Resource, Agriculture, and Engineering Service.
Ulanowicz, R. 1997 Ecology, the Ascendent Perspective. New York: Columbia University Press.
Waldrop, M.M. 1992 Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon and Schuster.
Walker, B.H. 1992 Biodiversity and ecological redundancy. Conservation Biology 6:18-23.
1995 Conserving biological diversity through ecosystem resilience. Conservation Biology. 9:747-752.
Walters, C.J. 1986 Adaptive Management of Renewable Resources. New York: McGraw-Hill.
1998 Evaluation of quota management policies for developing fisheries. Canadian Journal of Fishery and Aquatic Sciences 55:2691-2705.
Watling, L., and E. Norse 1998 Disturbance of the seabed by mobile fishing gear: A comparison to forest clear-cutting. Conservation Biology 12:1180-1197.
Williamson, O.E. 1986 The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. New York: Free Press.
1995 Organization Theory: From Chester Barnard to the Present and Beyond. New York: Oxford University Press.
Wilson, J.A. 1990 Fishing for knowledge. Land Economics 66:12-29.
Wilson, J. A., B. Low, R. Costanza, and E. Ostrom 1999 Scale misperceptions and the spatial dynamics of a social-ecological system. Ecological Economics 31(2) (November): 243-257.
Wilson, J.A., J. French, P. Kleban, S.R. McKay, and R. Townsend 1991 Chaotic dynamics in a multiple species fishery: A model of community predation. Ecological Modelling 58:303-322.
Wilson, J.A., J.M. Acheson, M. Metcalfe, and P. Kleban 1994 Chaos, complexity and community management of fisheries. Marine Policy 18:291-305.
Wroblewski, J.S. 1998 Substocks of northern cod and localized fisheries in Trinity Bay, East Newfoundland and in Gilbert Bay, Southern Labrador. In The Implications of Localized Fishery Stocks, I. Hunt von Herbing, I. Kornfield, M. Tupper, and J. Wilson, eds. New York: Natural Resource, Agriculture, and Engineering Service.