
Ecological Indicators for the Nation (2000)

Appendix B
Markov Matrices of Landscape Change

Developments in geographic information systems and remote sensing image analysis have made it possible to calculate changes in land cover classes during selected time intervals. The data are assembled in matrices, often known as change matrices, and the analyses are called landscape-change analyses (Vogelmann 1988, Vogelmann and Rock 1989, Lozano-Garcia and Hoffer 1985). Such analyses are valuable because they tell us what has happened over a region during some time interval in the past. However, by their nature they are retrospective. In contrast, policy must be based on predictive analyses of landscape trajectories given the current rates of change from one land cover type to the others. The theory of Markov chains provides the mathematical basis for at least a first approximation of the consequences of current trends in land cover distributions. This theory encompasses a large body of literature, most recently reviewed by Baker (1989) and Pastor et al. (1992).

A Markov chain consists of a vector $\mathbf{x}_t$ of the distribution of land covers at time $t$ and a matrix $A(\tau)$ of transition probabilities of change from each land cover class to the others during a time period $\tau$:

$$\mathbf{x}_{t+\tau} = A(\tau)\,\mathbf{x}_t$$
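As a small numerical illustration of this projection, the sketch below (in Python with numpy; the class names and numbers are invented for illustration, not taken from the report) advances a hypothetical three-class land cover vector by one time period. It uses the convention adopted later in this appendix, in which each column of A(τ) sums to 1, so that the entry in row i, column j is the probability that land in class j at time t is in class i at time t + τ.

```python
import numpy as np

# Hypothetical land cover proportions at time t (forest, agriculture, urban).
x_t = np.array([0.60, 0.30, 0.10])

# Hypothetical column-stochastic transition matrix A(tau): each column sums to 1,
# and A[i, j] is the probability that land in class j at time t is in class i
# at time t + tau.
A = np.array([
    [0.95, 0.02, 0.00],   # to forest
    [0.04, 0.93, 0.01],   # to agriculture
    [0.01, 0.05, 0.99],   # to urban
])

# One projection step: x_{t+tau} = A(tau) x_t.
x_next = A @ x_t
print(x_next)        # approximately [0.576, 0.304, 0.120]
print(x_next.sum())  # ~1.0; proportions are conserved because A is column-stochastic
```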

To parameterize a Markov chain of landscape dynamics, a map of the landscape at time $t$ is subdivided into pixels, which are assigned individually into one of $m$ classes. Classes can be assigned to each pixel taxonomically (that is, the pixel is occupied by a particular species; Horn 1975, Lippe et al. 1985), through the use of multivariate cluster or principal components analyses (Van Hulst 1979, Usher 1981), or through remotely sensed data such as air photo analyses (Johnston and Naiman 1990, Pastor et al. 1992) or satellite imagery (Hall et al. 1991). To obtain transition probabilities, a second map is then prepared for time $t + \tau$. The two maps are overlaid atop one another, and the number of pixels that changed during $\tau$ units of time from one land cover to another is then enumerated. The maximum likelihood estimates of the probabilities of change from one land cover to another during time interval $\tau$ are:

$$p_{i,j,\tau} = \frac{n_{i,j}}{\sum_{j=1}^{m} n_{i,j}}$$

where $p_{i,j,\tau}$ is the transition probability from land cover $i$ to land cover $j$ in time interval $\tau$, and $n_{i,j}$ is the number of such transitions across all pixels of the landscape of $m$ land cover classes.

When the time interval of the model (e.g., annual or decadal) is something other than the time interval between the two maps (as frequently happens when using a historic set of air photos), the probabilities of change can be normalized to the desired time step (Pastor et al. 1992) as follows:

$$p_{i,j} = 1 - e^{\ln(1 - p_{i,j,\tau})/\tau} \quad \text{when } i \neq j$$

$$p_{i,i} = 1 - \sum_{j \neq i} p_{i,j} \quad \text{when } i = j$$

where $\tau$ is expressed as some fraction of the desired time scale. For example, if transition probabilities are calculated from data layers taken 13 years apart and the user wishes transition probabilities to be expressed in decadal increments, then $\tau = 1.3$ in the equations above.

We are now in a position to use the matrix of transition probabilities to guide policy. Suppose a particular policy is formulated to move the landscape from the current land cover vector to some desired future state. The policy is implemented for, say, ten years. A new map of land cover distribution is made from the monitoring data after 10 years, and a matrix of transition probabilities is calculated as above.
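A minimal sketch of this parameterization step, under the assumption that the two classified maps are available as integer-coded numpy arrays of identical shape, is given below. The function names (transition_probabilities, rescale_time_step) and the example maps are hypothetical. The first function tabulates the counts n_{i,j} and row-normalizes them into the maximum-likelihood probabilities p_{i,j,τ}; the second applies the normalization above, rescaling the off-diagonal probabilities to the desired time step and resetting the diagonal so each row sums to 1.

```python
import numpy as np

def transition_probabilities(map_t, map_t_tau, m):
    """Maximum-likelihood from-to transition probabilities between two
    classified maps.  Both maps are integer arrays (classes coded 0..m-1)
    of identical shape.  Returns an m x m matrix P whose rows sum to 1, with
    P[i, j] = probability that a pixel in class i at time t is in class j
    at time t + tau."""
    counts = np.zeros((m, m))
    # n_{i,j}: number of pixels that moved from class i to class j.
    np.add.at(counts, (map_t.ravel(), map_t_tau.ravel()), 1)
    row_totals = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_totals == 0, 1, row_totals)

def rescale_time_step(P, tau):
    """Re-express transition probabilities on the desired time step.  tau is
    the observed interval as a fraction of the desired one (e.g. tau = 1.3
    for maps 13 years apart and a decadal model step).  Off-diagonal entries
    follow p = 1 - exp(ln(1 - p_obs) / tau); diagonal entries are then reset
    so that each row sums to 1."""
    with np.errstate(divide="ignore"):
        Q = 1.0 - np.exp(np.log(1.0 - P) / tau)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, 1.0 - Q.sum(axis=1))
    return Q

# Tiny, purely illustrative example with two hypothetical 3-class maps.
map_1985 = np.array([[0, 0, 1], [0, 2, 1], [2, 2, 1]])
map_1998 = np.array([[0, 1, 1], [0, 2, 2], [2, 2, 1]])
P = transition_probabilities(map_1985, map_1998, m=3)
P_decadal = rescale_time_step(P, tau=1.3)   # 13-year observations -> decadal step
```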

The question is: Is the landscape headed toward the desired distribution of land cover classes and, if so, how long will it take to get there under the new policy?

Two properties of the matrix, known as the eigenvalues and eigenvectors, are useful for answering these questions. These satisfy the equation

$$A\mathbf{u} = \lambda\mathbf{u}$$

where $A$ is the matrix of transition probabilities, $\mathbf{u}$ is an eigenvector, and $\lambda$ is an eigenvalue (a scalar). Usually a number of eigenvalues and associated eigenvectors satisfy this equation; these are easily calculated with current software packages.

If all of the transition probabilities are greater than zero, then any one land cover class can be reached from any other. The matrix is then said to be irreducible (Caswell 1989, Pastor et al. 1992). Because all columns in an irreducible Markov matrix sum to 1 (the matrix $A$ is arranged so that the entry in row $i$, column $j$ is the probability of a transition from class $j$ to class $i$), the dominant (largest) eigenvalue equals 1. The eigenvector of the distribution of land cover classes associated with the dominant eigenvalue then represents the steady-state condition of the landscape. When the land cover vector is in this condition, all the inputs to a land cover class by transition from the others equal all the outputs from that land cover class to all others.

If the matrix is not irreducible (i.e., some transition probabilities equal zero), then the dominant eigenvector is still the steady-state distribution of land cover classes if the dominant eigenvalue of the entire matrix equals that of the largest irreducible submatrix (i.e., a submatrix of non-zero transition probabilities among a subset of land cover classes).

The dominant eigenvector is therefore where the landscape will end up if the current policy is pursued. One can then ask, Is this the desired future condition of the landscape? If not, then policies need to be adjusted. Various alternatives can be determined by "experimenting" with the transition probabilities of the current Markov matrix to see if they yield a new matrix with a dominant eigenvector that matches the desired future conditions.

If the dominant eigenvector does represent the desired future condition of the landscape, then one may ask, How long will it take to get there? To determine this, one must calculate the ratio of the dominant eigenvalue to the absolute value of the second-largest eigenvalue. This ratio is known as the damping ratio (Usher 1981, Caswell 1989). The greater this ratio, the faster the approach to steady state.
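A sketch of this eigenanalysis with numpy is shown below. It assumes A is column-stochastic, as described above; if the transition probabilities were estimated as a from-to matrix P (rows summing to 1), A is its transpose. The eigenvector associated with the dominant eigenvalue is rescaled so its entries sum to 1, giving the steady-state distribution of land cover classes. The matrix shown is the hypothetical one from the earlier projection example.

```python
import numpy as np

def steady_state(A):
    """Steady-state land cover distribution of a column-stochastic transition
    matrix A (columns sum to 1).  Returns the eigenvector associated with the
    dominant eigenvalue (which equals 1 for an irreducible matrix), rescaled
    so that its entries sum to 1."""
    eigenvalues, eigenvectors = np.linalg.eig(A)
    dominant = np.argmax(np.abs(eigenvalues))
    v = np.real(eigenvectors[:, dominant])
    return v / v.sum()

# Hypothetical matrix from the earlier projection example.
# If P was estimated as a from-to (row-stochastic) matrix, use A = P.T here.
A = np.array([
    [0.95, 0.02, 0.00],
    [0.04, 0.93, 0.01],
    [0.01, 0.05, 0.99],
])
print(steady_state(A))   # where the landscape ends up under the current policy
```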

The approach is exponentially asymptotic, and its rate $r$ at any given time $t$ is

$$r = k\,e^{-t \ln \rho}$$

where $\rho$ is the damping ratio and $k$ is a constant (Caswell 1989). Because the approach to steady state is asymptotic, it is more convenient to calculate the time for some proportion of convergence to steady state, say 95% convergence. This time, $t_x$, is given by

$$t_x = \frac{\ln(x)}{\ln(\rho)}$$

The percentage of convergence to steady state equals $100 - (100/x)$. For example, the time required for 95% convergence to steady state is equivalent to the solution of the equation above for $x = 20$ (i.e., $100 - (100/20) = 95$).

One can now ask not only whether the current policy is moving the landscape toward the desired future condition, but also whether it is moving it at an acceptable rate. Again, various alternatives to move the landscape faster (or slower) can be determined by "experimentally" changing certain transition probabilities to correspond to alternative policies.
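The sketch below turns these formulas into code: the damping ratio ρ is the dominant eigenvalue divided by the modulus of the second-largest eigenvalue, and the time to 95% convergence is t_x = ln(x)/ln(ρ) with x = 20. The matrix (and therefore the time unit) is the hypothetical one used in the earlier examples.

```python
import numpy as np

def damping_ratio(A):
    """Ratio of the dominant eigenvalue to the modulus of the second-largest
    eigenvalue of the transition matrix A (Usher 1981, Caswell 1989)."""
    magnitudes = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return magnitudes[0] / magnitudes[1]

def time_to_convergence(A, x=20.0):
    """Number of model time steps until the landscape is within 100 - 100/x
    percent of the steady state (x = 20 gives 95% convergence):
    t_x = ln(x) / ln(rho)."""
    return np.log(x) / np.log(damping_ratio(A))

A = np.array([
    [0.95, 0.02, 0.00],
    [0.04, 0.93, 0.01],
    [0.01, 0.05, 0.99],
])
rho = damping_ratio(A)
print(rho, time_to_convergence(A))   # larger rho means a faster approach to steady state
```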

Markov chains lend themselves to hierarchical classification systems. Suppose at the highest level of a classification system there are four land cover classes (say, forests, wetlands, agricultural lands, and urban lands). A simple example of a transition matrix among these land cover classes is given in Table 1a. Most, if not all, of the transition probabilities at such an aggregated level are greater than 0, although they may be very small. That is, usually all transitions occur.

TABLE 1a Matrix Among Four Land Cover Classes (X indicates a non-zero transition probability)

          From:
   To:    1    2    3    4
    1     X    X    X    X
    2     X    X    X    X
    3     X    X    X    X
    4     X    X    X    X

Now suppose that at the next lower level of the classification, class 1 has 3 subclasses, class 2 has 2 subclasses, class 3 has 3 subclasses, and class 4 has 2 subclasses. A new transition matrix can be calculated for this level of the hierarchy (Table 1b). At this level, it often happens that many transitions do not occur: the transition probability from one land cover class to another is often zero. Such a matrix is known as a "sparse" matrix, and it may pose problems for calculations of eigenvectors and eigenvalues unless certain conditions are met (see Caswell 1989 for discussion of this).

TABLE 1b Transition Matrix (as Above) but with Subclasses Added. The rows (To:) and columns (From:) are the subclasses 1a, 1b, 1c, 2a, 2b, 3a, 3b, 3c, 4a, and 4b; X marks a non-zero transition probability, and many cells are zero.

However, some interesting properties often emerge. One is that there may be a few land cover classes with many positive transition probabilities through them. In Table 1b, these are land cover classes 1a, 2a, 3b, and 4b. These are particular land cover subclasses through which transitions between the higher-level classes commonly take place. It is particularly important to be able to identify and protect these land cover classes. They are analogous to the concept of "keystone species" in community ecology because they control the dynamics of the landscape. In keeping with this analogy, they may be termed "keystone land cover types." Should they be lost because of some land use practice, then transitions between the higher-level classes may not happen. These higher-level categories may then become decoupled from one another. This decoupling could then preclude the implementation of certain policies that seek to move the landscape into various desired future conditions: it may no longer be possible to achieve the desired future condition because the key land cover class that allows the required transitions may no longer be in existence.
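One simple way to flag candidate keystone subclasses in such a sparse matrix is to count, for each class, how many distinct non-zero transitions pass through it, into it or out of it, ignoring self-transitions. The sketch below does this for an arbitrary from-to probability matrix; the counting rule and the min_links threshold are only one plausible formalization of the idea in the text, not a method prescribed by the report.

```python
import numpy as np

def candidate_keystone_classes(P, labels, min_links=4):
    """Rank land cover classes by the number of distinct non-zero transitions
    passing through them (P is a from-to matrix: rows = source class,
    columns = destination class; self-transitions are ignored).  Classes with
    at least min_links such links are returned as candidate "keystone" land
    cover types in the sense used in this appendix."""
    off_diag = P.copy()
    np.fill_diagonal(off_diag, 0.0)
    # Links out of a class (its row) plus links into it (its column).
    links = (off_diag > 0).sum(axis=1) + (off_diag > 0).sum(axis=0)
    ranked = sorted(zip(labels, links.tolist()), key=lambda pair: -pair[1])
    return [(name, n) for name, n in ranked if n >= min_links]
```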

Markov chains are, of course, first-order linear models of changes in land cover classes. They are first order because the changes involve no time delays longer than a single time step, and they are linear because the amount of land transferred from one class to another during a time step is simply a proportion of the area of each land cover type. However, landscape dynamics are almost certainly nonlinear and often involve time delays. Time delays can be incorporated into Markov chains by extending them to second order or higher, but the mathematics becomes more complicated. Nonetheless, the theory of higher-order Markov chains (including time delays) and some preliminary applications to species and landscape dynamics have been established (Baker 1989, Acevedo et al. 1995, Kenkel 1993). The application of higher-order Markovian models to the behavior of indicators of landscape change would greatly benefit from additional research.
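As a sketch of how a one-step time delay can be folded into the same linear machinery, the example below re-codes the state of each pixel as the ordered pair (class at the previous time step, class at the current step) and tabulates a second-order transition matrix over these m² composite states from three successive maps. This pair-state construction is a standard way of building a higher-order chain, but the function and map names are hypothetical and the example is only illustrative.

```python
import numpy as np

def second_order_matrix(map_0, map_1, map_2, m):
    """Second-order (one-step time delay) Markov matrix built from three
    successive classified maps.  Maps are integer arrays (classes 0..m-1) of
    identical shape.  The state of a pixel is the ordered pair (class at the
    previous step, class at the current step), coded as a single integer;
    the result is an (m*m) x (m*m) from-to matrix whose rows sum to 1
    (rows for unobserved composite states are left as zeros)."""
    prev_state = m * map_0.ravel() + map_1.ravel()   # composite state at time t
    next_state = m * map_1.ravel() + map_2.ravel()   # composite state at time t + tau
    counts = np.zeros((m * m, m * m))
    np.add.at(counts, (prev_state, next_state), 1)
    totals = counts.sum(axis=1, keepdims=True)
    return counts / np.where(totals == 0, 1, totals)
```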
