

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Appendix B

Markov Matrices of Landscape Change

Developments in geographic information systems and remote sensing image analysis have made it possible to calculate changes in land cover classes during selected time intervals. The data are assembled in matrices, often known as change matrices, and the analyses are called landscape-change analyses (Vogelmann 1988, Vogelmann and Rock 1989, Lozano-Garcia and Hoffer 1985). Such analyses are valuable because they tell us what has happened over a region during some time interval in the past. However, by their nature they are retrospective. In contrast, policy must be based on predictive analyses of landscape trajectories given the current rates of change from one land cover type to the others.

The theory of Markov chains provides the mathematical basis for at least a first approximation of the consequences of current trends in land cover distributions. This theory encompasses a large body of literature, most recently reviewed by Baker (1989) and Pastor et al. (1992). A Markov chain consists of a vector x_t of the distribution of land covers at time t and a matrix A(τ) of transition probabilities of change from each land cover class to the others during a time period τ:

x_{t+τ} = A(τ) x_t

To parameterize a Markov chain of landscape dynamics, a map of the landscape at time t is subdivided into pixels, each of which is individually assigned to one of m classes.
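The projection x_{t+τ} = A(τ) x_t can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy; the three-class matrix and land cover proportions below are hypothetical, not taken from the text.

```python
import numpy as np

# Hypothetical 3-class example (say, forest, agriculture, urban).
# Column j holds the probabilities of moving FROM class j TO each class,
# so every column sums to 1 and the projection is x_{t+tau} = A @ x_t.
A = np.array([
    [0.90, 0.05, 0.00],   # -> forest
    [0.08, 0.90, 0.02],   # -> agriculture
    [0.02, 0.05, 0.98],   # -> urban
])

x = np.array([0.60, 0.30, 0.10])   # current land cover proportions

# Project the land cover distribution forward five time steps.
for _ in range(5):
    x = A @ x

print(x)          # projected proportions
print(x.sum())    # still sums to 1
```

The column-stochastic convention (columns sum to 1) is used here because the appendix later relies on it when discussing eigenvalues of irreducible matrices.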

Classes can be assigned to each pixel taxonomically (that is, the pixel is occupied by a particular species; Horn 1975, Lippe et al. 1985), through the use of multivariate cluster or principal components analyses (Van Hulst 1979, Usher 1981), or through remotely sensed data such as air photo analyses (Johnston and Naiman 1990, Pastor et al. 1992) or satellite imagery (Hall et al. 1991). To obtain transition probabilities, a second map is then prepared for time t + τ. The two maps are overlaid atop one another and the number of pixels that changed during τ units of time from one land cover to another is then enumerated. The maximum likelihood estimates of the probabilities of change from one land cover to another during time interval τ are:

p_{i,j,τ} = n_{i,j} / Σ_{j=1}^{m} n_{i,j}

where p_{i,j,τ} is the transition probability from land cover i to land cover j in time interval τ, and n_{i,j} is the number of such transitions across all pixels of the landscape of m land cover classes.

When the time interval between the two maps is something other than the desired time step of the model (i.e., annual or decadal), as frequently happens when using a historic set of air photos, the probabilities of change can be normalized to the desired time step (Pastor et al. 1992) as follows:

p_{i,j} = 1 - e^{ln(1 - p_{i,j,τ}) / τ}    when i ≠ j

p_{i,i} = 1 - Σ_{j≠i} p_{i,j}    when i = j

where τ is expressed as some fraction of the desired time scale. For example, if transition probabilities are calculated from data layers taken 13 years apart and the user wishes transition probabilities to be expressed in decadal increments, then τ = 1.3 in the equation above.

We are now in a position to use the matrix of transition probabilities to guide policy. Suppose a particular policy is formulated to move the landscape from the current land cover vector to some desired future state. The policy is implemented for, say, ten years.
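The counting and time-step normalization steps can be sketched as follows. This assumes NumPy; the function names, the tiny 2×2 maps, and the τ value are hypothetical illustrations of the formulas above, not code from the source.

```python
import numpy as np

def transition_matrix(map_t0, map_t1, m):
    """Maximum likelihood transition probabilities from two co-registered
    class maps. Entry [j, i] estimates Pr(class i -> class j), so columns
    correspond to the 'from' class and sum to 1 (assumes every class
    occurs at least once in map_t0)."""
    n = np.zeros((m, m))
    for i, j in zip(map_t0.ravel(), map_t1.ravel()):
        n[j, i] += 1                      # count one i -> j transition
    return n / n.sum(axis=0, keepdims=True)

def normalize_time_step(A, tau):
    """Rescale probabilities estimated over tau units of the desired time
    step: p = 1 - exp(ln(1 - p_tau) / tau) off the diagonal, with the
    diagonal reset so each column still sums to 1 (Pastor et al. 1992)."""
    with np.errstate(divide="ignore"):    # ln(0) for entries equal to 1
        P = 1.0 - np.exp(np.log(1.0 - A) / tau)
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=0))
    return P

# Hypothetical 4-pixel maps with classes 0 and 1, taken 1.3 steps apart.
m0 = np.array([[0, 0], [1, 1]])
m1 = np.array([[0, 1], [1, 1]])
A = transition_matrix(m0, m1, 2)
P = normalize_time_step(A, tau=1.3)
print(A)   # columns sum to 1
print(P)   # per-time-step probabilities
```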
A new map of land cover distribution is made from the monitoring data after 10 years, and a matrix of transition probabilities is calculated as above. The question is: Is the

landscape headed toward the desired distribution of land cover classes and, if so, how long will it take to get there under the new policy?

Two properties of the matrix, known as the eigenvalues and eigenvectors, are useful for answering these questions. These satisfy the equation:

A μ = λ μ

where A is the matrix of transition probabilities, μ is an eigenvector, and λ is an eigenvalue (a scalar). Usually a number of eigenvalues and associated eigenvectors satisfy this equation; these are easily calculated with current software packages.

If all of the transition probabilities are greater than zero, then any one land cover class can be reached from any other. The matrix is then said to be irreducible (Caswell 1989, Pastor et al. 1992). Because all columns in an irreducible Markov matrix sum to 1, the dominant (largest) eigenvalue equals 1. The eigenvector of the distribution of land cover classes associated with the dominant eigenvalue then represents the steady state condition of the landscape. When the land cover vector is in this condition, the total input to each land cover class by transition from all others equals the total output from that land cover class to all others.

If the matrix is not irreducible (i.e., some transition probabilities equal zero), then the dominant eigenvector is still the steady state distribution of land cover classes if the dominant eigenvalue of the entire matrix equals that of the largest irreducible submatrix (i.e., a submatrix of non-zero transition probabilities among a subset of land cover classes).

The dominant eigenvector is therefore where the landscape will end up if the current policy is pursued. One can then ask, Is this the desired future condition of the landscape? If not, then policies need to be adjusted.
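The steady state distribution can be obtained from a standard eigendecomposition. A minimal sketch assuming NumPy follows; the three-class matrix is hypothetical.

```python
import numpy as np

# Hypothetical column-stochastic, irreducible transition matrix.
A = np.array([
    [0.90, 0.05, 0.00],
    [0.08, 0.90, 0.02],
    [0.02, 0.05, 0.98],
])

eigvals, eigvecs = np.linalg.eig(A)

# The dominant eigenvalue of an irreducible column-stochastic matrix is 1;
# its eigenvector, rescaled to sum to 1, is the steady state distribution.
k = np.argmax(eigvals.real)
steady = eigvecs[:, k].real
steady = steady / steady.sum()

print(eigvals.real)   # the largest is 1 (up to rounding)
print(steady)         # long-run land cover proportions
```

Note that `np.linalg.eig` returns eigenvectors of arbitrary sign and scale, which is why the dominant eigenvector is renormalized to sum to 1 before it is interpreted as a distribution.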
Various alternatives can be determined by "experimenting" with the transition probabilities of the current Markov matrix to see if they yield a new matrix with a dominant eigenvector that matches the desired future conditions.

If the dominant eigenvector does represent the desired future condition of the landscape, then one may ask, How long will it take to get there? To determine this, one must calculate the ratio of the dominant eigenvalue to the absolute value of the second largest eigenvalue. This ratio is known as the damping ratio (Usher 1981, Caswell 1989). The greater this ratio, the faster the approach to steady state.

The approach is exponentially asymptotic, and its rate, r, at any given time, t, is

r = k e^{-t ln(ρ)}

where ρ is the damping ratio and k is a constant (Caswell 1989). Because the approach to steady state is asymptotic, it is more convenient to calculate the time for some proportion of convergence to steady state, say 95% convergence. This time, t_x, is given by

t_x = ln(x) / ln(ρ)

The percentage of convergence to steady state equals 100 - (100/x). For example, the time required for 95% convergence to steady state is obtained by solving the equation above with x = 20 (i.e., 100 - (100/20) = 95).

One can now ask not only whether the desired policy is moving the landscape toward the desired future condition, but also whether it is moving it at an acceptable rate. Again, various alternatives to move the landscape faster (or slower) can be determined by "experimentally" changing certain transition probabilities to correspond to alternative policies.

Markov chains lend themselves to hierarchical classification systems. Suppose at the highest level of a classification system there are four land cover classes (say, forests, wetlands, agricultural lands, and urban lands). A simple example of a transition matrix among these land cover classes is given in Table 1a. Most, if not all, of the transition probabilities at such an aggregated level are greater than 0, although they may be very small. That is, usually all transitions occur.

TABLE 1a  Transition Matrix Among Four Land Cover Classes

            From:
          1    2    3    4
To:  1    X    X    X    X
     2    X    X    X    X
     3    X    X    X    X
     4    X    X    X    X
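The damping ratio and the time to a given degree of convergence can be computed directly from the eigenvalues. This sketch assumes NumPy and reuses the same hypothetical three-class matrix as above.

```python
import numpy as np

# Hypothetical column-stochastic transition matrix.
A = np.array([
    [0.90, 0.05, 0.00],
    [0.08, 0.90, 0.02],
    [0.02, 0.05, 0.98],
])

eigvals = np.linalg.eigvals(A)
mags = np.abs(eigvals)
order = np.argsort(mags)[::-1]        # sort by magnitude, descending
lam1 = mags[order[0]]                 # dominant eigenvalue (should be 1)
lam2 = mags[order[1]]                 # second largest, in absolute value

rho = lam1 / lam2                     # damping ratio (Usher 1981, Caswell 1989)

x = 20                                # 100 - 100/20 = 95% convergence
t95 = np.log(x) / np.log(rho)         # t_x = ln(x) / ln(rho)

print(rho)                            # > 1; larger means faster convergence
print(t95)                            # time steps to 95% convergence
```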

Now suppose that at the next lower level of the classification, class 1 has 3 subclasses, class 2 has 2 subclasses, class 3 has 3 subclasses, and class 4 has 2 subclasses. A new transition matrix can be calculated for this level of the hierarchy (Table 1b). At this level, it often happens that many transitions do not occur; the transition probability from one land cover class to another is often zero. Such a matrix is known as a "sparse" matrix, and it may pose problems for calculations of eigenvectors and eigenvalues unless certain conditions are met (see Caswell 1989 for discussion of this).

However, some interesting properties often emerge. One is that there may be a few land cover classes with many positive transition probabilities through them. In Table 1b, these are land cover classes 1a, 2a, 3b, and 4b. These are particular land cover subclasses through which transitions between the higher level classes commonly take place. It is particularly important to be able to identify and protect these land cover classes. They are analogous to the concept of "keystone species" in community ecology because they control the dynamics of the landscape. In keeping with this analogy, they may be termed "keystone land cover types." Should they be lost because of some land use practice, then transitions between the higher level classes may not happen. These higher level categories may then become decoupled from one another. This decoupling could then preclude the implementation of certain policies that seek to move the landscape into various desired future conditions: it may no longer be possible to achieve the desired future condition because the key land cover class that allows the required transitions may no longer be in existence.

TABLE 1b  Transition Matrix (as Above) but with Subclasses Added

[Table body not reliably recoverable from the scan. Rows and columns are the ten subclasses 1a, 1b, 1c, 2a, 2b, 3a, 3b, 3c, 4a, and 4b, with an X marking each non-zero transition probability. The matrix is sparse, and many of the positive transitions pass through subclasses 1a, 2a, 3b, and 4b.]
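Candidate keystone land cover types can be flagged mechanically by counting, for each class, how many positive off-diagonal transitions pass into and out of it. The sketch below assumes NumPy; the sparse incidence pattern is a made-up illustration, not the actual Table 1b.

```python
import numpy as np

labels = ["1a", "1b", "1c", "2a", "2b", "3a", "3b", "3c", "4a", "4b"]
m = len(labels)

# Hypothetical incidence matrix: B[j, i] = 1 if the transition i -> j has
# positive probability. Every class can persist; the four "hub" subclasses
# exchange with everything (an illustrative pattern only).
B = np.eye(m, dtype=int)
hubs = [labels.index(c) for c in ("1a", "2a", "3b", "4b")]
for h in hubs:
    B[h, :] = 1        # many classes transition INTO each hub
    B[:, h] = 1        # each hub transitions into many classes

# Score each class by the number of positive off-diagonal transitions
# through it (in-degree plus out-degree, self-transitions excluded).
score = B.sum(axis=0) + B.sum(axis=1) - 2 * np.diag(B)
ranked = sorted(zip(score, labels), reverse=True)
print(ranked[:4])      # highest-scoring candidate "keystone" types
```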

It is obvious that Markov chains are first-order linear models of changes in land cover classes. They are first order because the changes involve no time delays longer than a single time step, and they are linear because the amount of land transferred from one class to another during a time step is simply a proportion of the area of each land cover type. However, landscape dynamics are almost certainly nonlinear and often involve time delays. Time delays can be incorporated into Markov chains by extending them to second order or higher, but the mathematics becomes more complicated. Nonetheless, the theory of higher-order Markov chains (including time delays) and some preliminary applications to species and landscape dynamics have been established (Baker 1989, Acevedo et al. 1995, Kenkel 1993). The application of higher-order Markovian models to the behavior of indicators of landscape change would greatly benefit from additional research.
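One standard way to handle a single-step time delay, not spelled out in the text, is to enlarge the state space: a second-order chain over m classes becomes a first-order chain over the m² composite states (current class, previous class). A minimal sketch assuming NumPy, with hypothetical two-class probabilities:

```python
import numpy as np
from itertools import product

m = 2   # two hypothetical land cover classes

# p2[k, j, i] = Pr(next class = k | current class = j, previous class = i).
p2 = np.zeros((m, m, m))
p2[:, 0, 0] = [0.9, 0.1]
p2[:, 0, 1] = [0.6, 0.4]   # recent arrivals to class 0 revert more often
p2[:, 1, 0] = [0.3, 0.7]
p2[:, 1, 1] = [0.1, 0.9]

# Build the equivalent first-order matrix over composite states (j, i):
# (j, i) -> (k, j) with probability p2[k, j, i]; all other moves are 0.
states = list(product(range(m), repeat=2))
A = np.zeros((m * m, m * m))
for s, (j, i) in enumerate(states):
    for t, (k, jj) in enumerate(states):
        if jj == j:
            A[t, s] = p2[k, j, i]

print(A.sum(axis=0))   # columns sum to 1: a valid first-order Markov matrix
```

The eigenvalue machinery described above then applies unchanged to the enlarged matrix, at the cost of an m-fold increase in state count per additional step of memory.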