4 Modeling to Support the TMDL Process

This chapter addresses the planning step (Figure 1-1) that occurs once a waterbody is formally listed as impaired. The main activity required during the planning step is an assessment of the relative contribution of different stressors (sources of pollution) to the impairment. For example, during this step Total Maximum Daily Loads (TMDLs) are calculated for the chemical pollutant (if there is one) causing the impairment, and the maximum pollutant loads consistent with achieving the water quality standard are estimated. Pollutant load limits alone may not secure the designated use, however, if other sources of pollution are present. Changes in the hydrologic regime (such as in the pattern and timing of flow) or changes in the biological community (such as in the control of alien taxa or riparian zone condition) may be needed to attain the designated use, as discussed in Chapter 2. As hydrologic, biological, chemical, or physical conditions change, the estimation of the TMDL can change.

Because they represent our scientific understanding of how stressors relate to appropriate designated uses, models play a central role in the TMDL program. Models are the means of making predictions—not only about the TMDL required to achieve water quality standards, but also about the effectiveness of different actions to limit pollutant sources and modify other stressors to reach attainment of a designated use. This chapter discusses the necessity for, and limitations of, models and other predictive approaches in the TMDL process. Thus, it directly addresses the committee's charge of evaluating the TMDL program's information needs and the methods used to obtain information.
MODEL SELECTION CRITERIA

Mathematical models can be characterized as empirical (also known as statistical) or mechanistic (process-oriented), but most useful models have elements of both types. An empirical model uses a statistical fit to data to identify relationships between stressor and response variables. A mechanistic model is a mathematical characterization of the scientific understanding of the critical biogeochemical processes in the natural system; the only data input is in the selection of model parameters and initial and boundary conditions. Box 4-1 presents a simple explanation of the difference between the two types of models.

Water quality models for TMDL development are typically classified as either watershed (pollutant load) models or waterbody (pollutant response) models. A watershed model is used to predict the pollutant load to a waterbody as a function of land use and pollutant discharge; a waterbody model is used to predict pollutant concentrations and other responses in the waterbody as a function of the pollutant load. Thus, the waterbody model is necessary for determining the TMDL that meets the water quality standard, and a watershed model is necessary for allocating the TMDL among sources. Some comprehensive modeling frameworks [e.g., BASINS (EPA, 2001) and Eutromod (Reckhow et al., 1992)] include both, but most water quality models are of one or the other type. Except where noted, the comments in this chapter reflect both watershed and waterbody models; examples presented may address one or the other model type as needed to illustrate concepts.

Although prediction typically is made with a mathematical model, there are certainly situations in which expert judgment can and should be employed. Furthermore, although in many cases a complex mathematical model can be developed, the model best suited for the situation may be relatively simple, as noted in examples described later in the chapter.
Indeed, reliance on professional judgment and simpler modeling will be acceptable in many cases, and is compatible with the adaptive approach to TMDLs described in Chapter 5. Highly detailed models are expensive to develop and apply and may be time consuming to execute. Much of the concern over costs of TMDLs appears to be based on the assumption that detailed modeling techniques will be required for most TMDLs. In the quest to efficiently allocate TMDL resources, states should recognize that simpler analyses can often support informed decision-making and that complex modeling studies should be pursued only if warranted by the complexity of the analytical problem. More complex modeling will not necessarily assure that uncertainty is reduced, and in fact can compound problems of uncertain predictions. As discussed below, accounting for uncertainty and representing watershed processes are two of the possible criteria that need to be considered when selecting an analytical model for TMDL development.

TMDLs, which are typically evaluated through predictive modeling, lead to decisions concerning controls on pollutant sources or other stressors. Thus, models used in TMDL analysis provide “decision support.”

BOX 4-1 Mechanistic vs. Statistical Models

Suppose a teacher is conducting a lesson on measurements and sets out to measure and record the height and weight of each student. Unfortunately, the scale breaks after the first several children have been weighed. In order to proceed with the lesson (though on a somewhat different tack), a mechanistically inclined teacher might decide to use textbook data on the density of the human body, together with a variety of length measurements of each child (e.g., waist, leg, and arm dimensions), to estimate body volumes as the sum of the volumes of body parts. The teacher may then obtain the weights of the students as the product of density and volume. A statistically inclined teacher, on the other hand, might simply use the data obtained for the first several children in a regression model of weight on height that could then be used to predict the weights of the other students based on their height.

The accuracy and utility of each of these two approaches depend on both the details of the input data and the calculation procedures. If the mechanistic teacher has good information on tissue densities, for example, and has the time to make many length measurements, the results may be quite good. Conversely, the statistical approach may yield quite acceptable results at a fraction of the mechanistic effort if enough children had been weighed before the scale broke, and if those children were approximately representative of the whole class in terms of body build. Moreover, the regression model comes with error statistics for its predictions and parameters. Although the same statistical approach would work with other groups of students, additional weight measurements would be required for model calibration.

Thus, the benefits of the statistical approach are that it is less costly and its reliability is known, but its use is dependent on data collected for the variable of interest (weight, in this case) under the circumstances of interest. The mechanistic approach has wider application and a clear rationality (the total equals the sum of the parts), but it requires more time and effort, and, unless some data are collected for the variable of interest under similar circumstances, its error characteristics are unknown.

Of course, in practice, mechanistic and statistical modelers often make considerable use of each other's techniques. In the classroom analogy, for example, it would make sense for the statistically inclined teacher to make more detailed measurements of the weighed students' dimensions and develop a multivariate regression model of weight as a function of torso volume, leg volume, etc., rather than height alone. The more complex model could be applied to a wider range of body builds. Moreover, the regression coefficients would represent the estimated densities of different parts of the body. These could be compared with the textbook values of body density as a test of the rationality of the model. Conversely, the mechanistic teacher might use body density data from the textbook to adjust the height–weight regression equations for use with different age and ethnic groups. This would eliminate the need for collecting additional weight data for these groups.

It is also worth distinguishing a third type of model, termed stochastic, that is widely used in engineering applications and that may have a useful role in TMDL modeling. The objective of stochastic modeling is to simulate the statistical behavior of a system by imposing random variability on one or more terms in the model. Such models are usually fundamentally mechanistic, but avoid mechanistic description of complex processes by using simpler randomized terms. Stochastic models generally require a large number of measurements of certain variables (e.g., inputs, state variables) in order to correctly characterize their random behavior. As an example, consider a mechanistic model of river water quality that includes randomly generated streamflow and pollutant loads.
If the randomly generated inputs are realistic (both individually and in relation to each other), then the output may provide a very useful description of the variability to expect in the water quality of the river.

Box 4-2 lists desirable model selection/evaluation criteria in consideration of the decision support role of models in the TMDL process. The list is intended to characterize an ideal model. Given the limitations of existing models, it should not be viewed as a required checklist for attributes that all present-day TMDL models must have.

BOX 4-2 Model Selection Criteria

A predictive model should be broadly defined to include both mathematical expressions and expert scientific judgment. A predictive model useful for TMDL decision support ideally should have the following characteristics:

1. The model focuses on the water quality standard. The model is designed to quantitatively link management options to meaningful response variables. This means that it is desirable to define the TMDL endpoints (e.g., pollutant sources and standard violation parameter) and incorporate the entire “chain” from stressors to response into the modeling analysis. This also means that the spatial/temporal scales of the problem and the model should be compatible.

2. The model is consistent with scientific theory. The model does not err in process characterization. Note that this is different from the often-stated goal that the model correctly represents processes, which, for terrestrial and aquatic ecosystems, cannot be achieved.

3. Model prediction uncertainty is reported. Given the reality of prediction errors, it makes sense to explicitly acknowledge the prediction uncertainty for various management options. This provides decision-makers with an understanding of the risks of options, and allows them to factor this understanding into their decisions. To do this, prediction error estimates are required.

4. The model is appropriate to the complexity of the situation. Simple water quality problems can be addressed with simple models. Complex water quality problems may or may not require the use of complex models (as discussed later in this chapter and in Chapter 5).

5. The model is consistent with the amount of data available. Models requiring large amounts of monitoring data should not be used in situations where such data are unavailable.

6. The model results are credible to stakeholders. Given the increasing role of stakeholders in the TMDL process, it may be necessary for modelers to provide more than a cursory explanation of the predictive model.

7. Cost for annual model support is an acceptable long-term expense. Given growth and change, water quality management will not end with the initial TMDL determination. The cost of maintaining and updating the model must be tolerable over the long term.

8. The model is flexible enough to allow updates and improvements. Research can be expected to improve scientific understanding, leading to refinements in models.

EPA has supported water quality model development for many years and, along with the U.S. Geological Survey (USGS), the U.S. Army Corps of Engineers, and the U.S. Department of Agriculture, is responsible for most models currently being applied for TMDL development. Agency-wide, EPA has funded model development and technology transfer activities for a wide range of models. The greatest concentration of this effort has been at the Center for Exposure Assessment Modeling (CEAM). In contrast to the broad perspective found within EPA as a whole, CEAM has demonstrated a clear preference for mechanistic models, as evidenced by its adoption of the BASINS modeling system (EPA, 2001) as the primary TMDL modeling framework.

Models developed at the CEAM and incorporated into BASINS place high priority on correctly describing key processes, which is related to but different from model selection criterion #2 (see Box 4-2). It is important to recognize that placing priority on ultimate process description often will come at the expense of the other model selection criteria. For one thing, an emphasis on process description tends to favor complex mechanistic models over simpler mechanistic or empirical models and may result in analyses that are more costly than is necessary for effective decision-making. In addition, physical, chemical, and biological processes in terrestrial and aquatic environments are far too complex to be conceptually understood or fully represented in even the most complicated models. For the TMDL program, the primary purpose of modeling should be to support decision-making. Our inability to completely describe all relevant processes can be accounted for by quantifying the uncertainty in the model predictions.

UNCERTAINTY ANALYSIS IN WATER QUALITY MODELS

The TMDL program currently accounts for the uncertainty embedded in the modeling exercise by applying a margin of safety (MOS).
As discussed in Chapter 1, the TMDL can be represented by the following equation:

TMDL = ∑WLA + ∑LA + MOS

This states that the TMDL is the sum of the present and near-future loads of pollutants from point sources and from nonpoint and background sources to receiving waterbodies, plus an adequate margin of safety (MOS) needed to attain water quality standards. One possible metric for the point source waste load allocation (∑WLA) and the nonpoint source load allocation (∑LA) is mass per unit time, where time is expressed in days. However, other units of time may actually be more appropriate. For example, it may be better to use a season as the time unit when the TMDL is calculated for lakes and reservoirs, or a year when contaminated sediments are the main stressor.

EPA (1999) gives additional ways in which a TMDL can be expressed:

1. the required reduction, as a percentage of the current pollution load, needed to attain and maintain water quality standards;

2. the required reduction of pollutant load to attain and maintain riparian, biological, channel, or morphological measures so that water quality standards are attained and maintained; or

3. the pollutant load, or reduction of pollutant load, that results from modifying a characteristic of a waterbody (e.g., riparian, biological, channel, geomorphologic, or chemical characteristics) so that water quality standards are attained and maintained.

The MOS is sometimes a controversial component of the TMDL equation because it is meant to protect against potential water quality standard violations, but does so at the expense of possibly unnecessary pollution controls. Because of the natural variability in water quality parameters and the limits of predictability, a small MOS may result in nonattainment of the water quality goal; however, a large MOS may be inefficient and costly. The MOS should account for uncertainties in the data that were used for water quality assessment and for the variability of background (natural) water quality contributions. It should also reflect the reliability of the models used for estimating load capacity. Under current practice, the MOS is typically an arbitrarily selected numeric safety factor. In other cases, a numeric value is not stated; rather, conservative choices are made about the models used and the effectiveness of best management practices.
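The arithmetic of the TMDL equation can be sketched with purely hypothetical numbers. The explicit 10 percent MOS and the 60/40 split between point and nonpoint sources below are assumed for illustration only; as this section argues, a real MOS should come from an uncertainty analysis rather than from a convention like this:

```python
# Hypothetical single-pollutant example of TMDL = sum(WLA) + sum(LA) + MOS.
# All numbers are invented; units are kg/day.

loading_capacity = 100.0            # largest load that still meets the standard
mos = 0.10 * loading_capacity       # assumed explicit 10% margin of safety

allocatable = loading_capacity - mos
wla = 0.60 * allocatable            # point-source wasteload allocations
la = 0.40 * allocatable             # nonpoint and background load allocations

tmdl = wla + la + mos               # recovers the full loading capacity
print(wla, la, mos, tmdl)
```

Note that every unit of MOS comes directly out of the load that could otherwise be allocated to sources, which is why an arbitrarily large MOS translates into unnecessary control costs.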
Consistent with our concerns, NRC (2000) notes that since parameters involved in the TMDL determination are probabilistic and the MOS is a measure of uncertainty, the MOS should be determined through a formal uncertainty and error propagation analysis. There is also a compelling practical reason for explicit and thorough quantification of uncertainty in the TMDL via the MOS—reduction of the MOS can potentially lead to a significant reduction in TMDL implementation cost. On this basis alone, EPA should place a high priority on estimating TMDL forecast uncertainty and on selecting and developing TMDL models with minimal forecast error.
Model prediction error can be assessed in two ways. First, Monte Carlo simulation can be used to estimate the effect of model parameter error, model equation error, and initial/boundary condition error on prediction error. This process is data-intensive and may be computationally unwieldy for large models. A second and simpler alternative is to compare predictions with observations, although the correct interpretation of this analysis is not as straightforward as it may seem. If a model is “overfitted” to calibration data and the test or “verification” data are not substantially different from the calibration data, the prediction–observation comparison will underestimate the prediction error. The best way to avoid this is to obtain independent verification data substantiated with a statistical comparison between calibration data and verification data.

To date, we are aware of no thorough error propagation studies with the mechanistic models favored by EPA (by thorough, we mean that all errors and error covariance terms are estimated and are plausible for the application). Further, the track record associated with even limited uncertainty analyses is not encouraging for water quality models in general. Among empirical models, only the relatively simple steady-state nutrient input–output models have undergone reasonably thorough error analyses. For example, Reckhow and Chapra (1979) and Reckhow et al. (1992) report prediction errors of approximately 30 percent to 40 percent for cross-system models that predict average growing season total phosphorus or total nitrogen concentration based on measured annual loading. Prediction errors are likely to be higher for applications based on estimated or predicted loading. Prediction error will be higher still when these simple models are linked to statistical models to predict chlorophyll a, Secchi disk transparency, or an integrative measure of biological endpoints.
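A minimal sketch of the Monte Carlo approach is shown below, using a deliberately toy steady-state decay model in place of a real TMDL model; the model form, parameter means, and standard deviations are all assumed for illustration:

```python
import math
import random
import statistics

# Propagate parameter error through a toy model of downstream concentration,
# C = C0 * exp(-k * t), with uncertain decay rate k and travel time t.
random.seed(1)

def predict(c0, k, t):
    return c0 * math.exp(-k * t)

draws = []
for _ in range(10_000):
    k = max(random.gauss(0.5, 0.1), 0.0)   # decay rate, 1/day (hypothetical)
    t = max(random.gauss(2.0, 0.3), 0.0)   # travel time, days (hypothetical)
    draws.append(predict(10.0, k, t))      # upstream C0 = 10 mg/L (assumed)

mean = statistics.mean(draws)
cv = statistics.stdev(draws) / mean        # prediction coefficient of variation
print(f"mean prediction {mean:.2f} mg/L, CV {cv:.0%}")
```

This sketch samples k and t independently; a fuller analysis would sample the parameters jointly with their covariances, which, as the Lake Ontario study discussed below shows, can change the estimated prediction error substantially.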
Most error analyses conducted on mechanistic water quality models have also focused on eutrophication, so relatively little is known of prediction error for toxic pollutants, microorganisms, or other important stressors. In one of the few relatively thorough error propagation studies, Di Toro and van Straten (1979) and van Straten (1983) used maximum likelihood to determine point estimates and covariances for parameters in a seasonal phytoplankton model for Lake Ontario. Of particular note, they found that prediction error decreased substantially when parameter covariances were included in error propagation, underscoring the importance of including covariance terms in error analyses. This result occurred because, while individual parameters might be highly uncertain, specific pairs of parameters (e.g., the half saturation constant and the maximum growth rate in the Michaelis–Menten model) may vary in a predictable way (expressed through covariance) and thus may be collectively less uncertain. Di Toro and van Straten found the prediction coefficient of variation to range from 8 percent (for nitrate-N) to 390 percent (for ammonia-N), with half of the values falling between 44 percent and 91 percent. Zooplankton prediction errors tended to be much higher. Beck (1987) found that the error levels cited in these studies are typical of those reported elsewhere. There is evidence to suggest that the current models of water quality, in particular, the larger models, are capable of generating predictions to which little confidence can be attached (Beck, 1987).

The need for understanding the prediction uncertainty of chosen models is not new. Indeed, recent TMDL modeling and assessment guidance from EPA often mentions the importance of formal uncertainty analysis in determining the MOS (EPA, 1999). However, EPA has consistently failed to either recommend predictive models that are amenable to thorough uncertainty analysis or provide adequate technical guidance for reliable estimation of prediction error.

Conclusions and Recommendations

1. EPA needs to provide guidance on model application so that thorough uncertainty analyses will become a standard component of TMDL studies. Prediction uncertainty should be estimated in a rigorous way, and models should be evaluated and selected considering the prediction error need. The limited error analysis conducted within the QUAL2E-UNCAS model (Brown and Barnwell, 1987) was a start, but there has been little progress at EPA in the intervening 14 years.

2. The TMDL program currently accounts for the uncertainty embedded in the modeling exercise by applying a margin of safety (MOS); EPA should end the practice of arbitrary selection of the MOS and instead require uncertainty analysis as the basis for MOS determination.
Because reduction of the MOS can potentially lead to a significant reduction in TMDL implementation cost, EPA should place a high priority on selecting and developing TMDL models with minimal forecast error.

3. Given the computational difficulties with error propagation for large models, EPA should selectively target some postimplementation TMDL compliance monitoring for verification data collection to assess model prediction error. TMDL model choice is currently hampered by the fact that relatively few models have undergone thorough uncertainty analysis. Postimplementation monitoring at selected sites can yield valuable data sets to assess the ability of models to reliably forecast response. Large or complex models that pose an overwhelming computational burden for Monte Carlo simulation are particularly good candidates for this assessment.

MODELS FOR BIOTIC RESPONSE: A CRITICAL GAP

The development of models that link stressors (such as chemical pollutants, changes in land use, or hydrologic alterations) to biological responses is a significant challenge to the use of biocriteria and for the TMDL program. There are currently no protocols for identifying the stressor reductions necessary to achieve certain biocriteria. A December 2000 EPA document (EPA, 2000) on relating stressors to biological condition suggests how to use professional judgment to determine these relationships, but it offers no other approaches. As discussed below, informed judgment can be effectively used in simple TMDL circumstances, but in more complex systems, empirical or mechanistic models may be required.

There have been some developments in modeling biological responses as a function of chemical water quality. One approach attempts to describe the aquatic ecosystem with a mechanistic model that includes the full sequence of processes linking biological conditions to pollutant sources; this typically results in a relatively complex model and depends heavily on scientific knowledge of the processes. The alternative is to build a simpler empirical model of a single biological criterion as a function of biological, chemical, and physical stressors. Both approaches have been pursued in research dating back at least 30 years, and there has been some progress on both fronts. One promising recent approach is to combine elements of each of these methods.
For example, Box 4-3 describes a probability network model that has both mechanistic and empirical elements with meaningful biological endpoints.

BOX 4-3 Neuse Estuary TMDL Modeling

The Neuse Estuary is listed for chlorophyll a violations (exceedances of 40 µg/L), and nitrogen is the pollutant for which a TMDL is developed. Two distinct estuarine models have been developed to guide the TMDL process; one is a two-dimensional process model (CE-QUAL-W2), and the other is a probability (Bayes) network model (Borsuk, 2001) depicted in Figure 1. This probability network model has several appealing features that are compatible with the modeling framework proposed here:

• The probabilities in the model are an expression of uncertainty.

• The conditional probabilities characterizing the relationships described in Figure 1 reflect a combination of simple mechanisms, statistical (regression) fitting, and expert judgment.

• Some of the model endpoints—estimated using judgmental probability elicitation, which is a rigorous, established process for quantifying scientific knowledge (Morgan and Henrion, 1990)—such as “shellfish survival” and “number of fish kills,” characterize biological responses that are more directly meaningful to stakeholders and can easily be related to designated use.

• The Neuse Bayes network is a waterbody model; it is being linked to the USGS SPARROW watershed model for allocation of the TMDL.

Advances in mechanistic modeling of aquatic ecosystems have occurred primarily in the form of greater process (especially trophic) detail and complexity, as well as in dynamic simulation of the system (Chapra, 1996). Still, mechanistic ecosystem models have not advanced to the point of being able to predict community structure or biotic integrity. Moreover, the high level of complexity that has been achieved with this approach has made it difficult to use statistically rigorous calibration methods and to conduct comprehensive error analyses (Di Toro and van Straten, 1983; Beck, 1987).

The empirical approach depends on a statistical equation in which the biocriterion is estimated as a function of a stressor variable. Success with this empirical approach has been primarily limited to models of relatively simple biological metrics such as chlorophyll a (Peters, 1991; Reckhow et al., 1992). For reasons that are not entirely clear, empirical models of higher-level biological variables, such as indices of biotic integrity, have not been widely used.
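The kind of simple empirical model described above can be sketched in a few lines. Everything here is synthetic (the slope, noise level, and nutrient range are invented), but it shows why such regressions come with transparent error statistics:

```python
import math
import random

# Synthetic illustration of the empirical approach: fit a least-squares
# regression of a simple biological metric (chlorophyll a) on a stressor
# (total phosphorus).  The data are simulated; no real waterbody is shown.
random.seed(7)
tp = [random.uniform(10.0, 100.0) for _ in range(30)]   # total P, ug/L
chl = [0.4 * x + random.gauss(0.0, 5.0) for x in tp]    # chlorophyll a, ug/L

n = len(tp)
mx = sum(tp) / n
my = sum(chl) / n
slope = sum((x - mx) * (y - my) for x, y in zip(tp, chl)) / \
        sum((x - mx) ** 2 for x in tp)
intercept = my - slope * mx

# The residual standard error is the kind of transparent prediction-error
# estimate that the chapter credits to empirical models.
resid = [y - (intercept + slope * x) for x, y in zip(tp, chl)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2))
print(f"chl = {intercept:.1f} + {slope:.2f} * TP, residual SE {se:.1f} ug/L")
```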
FIGURE 1 Probability network model for the Neuse Estuary TMDL development.

Regressions of biotic condition on chemical water quality measures are potentially of great value in TMDL development because of their simplicity and transparent error characteristics. Two accuracy issues, however, need to be considered. First is the obvious question of whether the level of statistical correlation between biotic metrics and pollutant concentrations is strong enough that prediction errors will be acceptable to regulators and stakeholders. A second and more difficult issue is that of gaining assurance of a cause–effect relationship between chemical predictors and biotic metrics. The construction of empirical models of biotic condition would benefit greatly from (1) observational data that show the effects of changes in chemical concentrations over a time period when other factors have remained relatively constant and (2) inclusion of as many factors that are relevant to biotic condition as possible. The latter, of course, increases the requirement for observational data. Despite these limitations, in the near term, empirical models may more easily fill the need for biological response models than would mechanistic models.

Conclusions and Recommendations

1. EPA should promote the development of models that can more effectively link environmental stressors (and control actions) to biological responses. Both mechanistic and empirical models should be explored, although empirical models are more likely to fill short-term needs. Such models are needed to promote the wider use of biocriteria at the state level, which is desirable because biocriteria are a better indicator of designated uses than are chemical criteria.

ADDITIONAL MODEL SELECTION ISSUES

Data Required

The use of complex mechanistic models in the TMDL program is warranted if it helps promote the understanding of complex systems, as long as uncertainties in the results are reported and incorporated into decision-making. However, there may be a tendency to use complex mechanistic models to conduct water quality assessments in situations with little useful water quality data and/or involving major remediation expenditures or legal actions. In these situations, there is usually a common belief that the expected realism in the model can compensate for a lack of data, and the complexity of the model gives the impression of credibility. However, given that uncertainty in models is likely to be exacerbated by a lack of data, the recommended strategy is to begin with a simple modeling study and iteratively expand the analysis as needs and new information dictate.

For example, a simple analysis using models like those described by EPA (Mills et al., 1985) as screening procedures could be run quickly at low cost to begin to understand the issues. This understanding might suggest (perhaps through sensitivity analysis) that data should be collected on current land use, or that a limited monitoring program is warranted. Following acquisition of that information/data, a revised (perhaps more detailed) model could be developed. This might result in the TMDL (to be further evaluated using adaptive implementation as described in Chapter 5), or it might lead to further data collection and refinement of the model.
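The screening-and-sensitivity step can be sketched as follows. The model form (a simple dilution-plus-first-order-decay expression) and every baseline number are hypothetical; the point is only that ranking the inputs by their effect on the prediction shows where data collection would most reduce uncertainty:

```python
import math

# One-at-a-time sensitivity sketch for a hypothetical screening model of
# downstream concentration, C = (W / Q) * exp(-k * t).  The model form and
# all baseline numbers are invented for illustration.

def conc(W, Q, k, t):
    # W: load (g/day), Q: flow (m3/day), k: decay rate (1/day), t: days
    return (W / Q) * math.exp(-k * t)

base = {"W": 5_000.0, "Q": 2_000.0, "k": 0.3, "t": 2.0}
c0 = conc(**base)

sens = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10                  # perturb one input by +10%
    sens[name] = (conc(**bumped) - c0) / c0 * 100.0
    print(f"{name}: {sens[name]:+.1f}% change in predicted concentration")
```

In this toy case the load W dominates, so effluent monitoring would be the first data-collection priority; with different baseline values the ranking, and hence the monitoring design, would change.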
This strategy for data-poor situations makes efficient use of resources and targets the effort toward information and models that will reduce the uncertainty as the analysis proceeds.

The data required for TMDL model development will be a function of the water quality criterion and its location and of the analytical procedures used to relate the stressors to the criterion. Data needs may include hydrology (streamflow, precipitation), ambient water quality measures, and land use and elevation in a watershed (see Box 4-4 for more information).
BOX 4-4 Data Requirements for TMDL Modeling of Pollutants

The data and information required for TMDL modeling must reflect the parameters that affect attainability of water quality standards. Many of the models used today have extremely large data requirements, a fact that must be addressed prior to TMDL development so that adequate data collection can occur.

Flow Data. Critical to the process of calibrating and verifying models are flow data, from sources and various locations in the receiving water. Flow data are generally high in quality if gathered as part of unidirectional stream surveys, but become less reliable in areas subject to tidal effects. The USGS is generally considered to be the most reliable source for long-term, high-quality data sets. Tidal records are available, historically and for predictive purposes, for many coastal waters in the United States from the National Oceanic and Atmospheric Administration. Some states have maintained long-term gages in coastal waters, but these are usually few in number.

Ambient Water Quality Data. A number of federal agencies, state agencies, regional organizations, and research groups collect surface water quality data. Many of these data are retrievable over the Internet, particularly data from the USGS and EPA. Although there is no universal repository for all surface water quality data, the STORET database is the most comprehensive. Because methods of collection and analysis may vary, there is a need for QA/QC of these data.

Land Use Data. All states should have access to a series of land use records and projections. For ease of use, the land use data sets should be made available as Geographic Information System (GIS) coverages. EPA has provided default coverages as a component of its BASINS model. For TMDL purposes, land use data are required for the time period over which water quality data are available in order to calibrate and validate models. Projected land use data are needed for predicting future scenarios. The overall quality of these land use data will vary, often as a function of the level of ground-truthing that was done or the accuracy of the predictions for future land use changes.

Point Source Data. Model inputs may include measured values of pollutant loading from point sources (e.g., based on information reported on NPDES Discharge Monitoring Reports submitted by permitted facilities). Other possible data sources include results from periodic compliance inspections and wasteload allocation studies, or data collected as part of field surveys done in support of the TMDL. Such data are generally available and reliable.
OCR for page 82
Page 82 Nonpoint Source Data. Data on pollutant loadings from nonpoint sources are much less available and reliable than data from point sources. This is partly because during high-flow, high-rainfall events, monitoring is only infrequently conducted. For nonpoint sources, Event Mean Concentrations (EMCs) are needed to estimate the loadings that are delivered from each significant land use in a basin. EMCs are useful tools in providing estimated nonpoint source loads. Given the wide range of actual loads that may be associated with nonpoint sources, these estimates frequently represent the best science available. Atmospheric Deposition. Data on pollutant loadings from atmospheric deposition have been compiled by the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) using a nationwide network of precipitation-monitoring sites to generate reliable estimates of loads for many parameters. However, unlike watersheds, airsheds vary in size, depending upon the pollutant of concern and its specific forms and chemistry. Assessing the atmospheric contribution to any one basin is complicated by variations attributable to factors such as seasonal shifts in prevailing winds and distance from contributing sources. Thus, it is currently difficult to differentiate impacts from local sources vs. remote sources. For example, although significant work has been done in the northeastern United States to link sources of nitrous oxides with the areas subject to impact, similar studies elsewhere are not routinely available. Data for parameters other than those covered by NADP sites, as well as data on basin-specific wet and dry atmospheric deposition rates, are also scant. Legacy/Upstream Sources. For many impaired waters, states will need to identify and estimate loads attributed to legacy sources (e.g., PCBs, DDT, or the phosphorus-laden lake sediments) and upstream sources (those entering a waterbody segment upstream of the watershed currently being studied). 
The availability and reliability of such data vary widely across the nation. Best Management Practices. TMDL development will in many cases require estimates of the treatment efficiency for a best management practice (BMP). Such data are generally not available, except for a small number of well-studied stormwater BMPs and a limited number of pollutants (see NRC, 2000). To account for these deficiencies, states might use best professional judgment to estimate the percent reduction, taking into account treatment provided by similar BMPs and stakeholder input. EPA has recently provided funding for a national database designed to help states track the effectiveness of BMPs as they are developed and evaluated. Databases of BMP effectiveness are currently available at ASCE (1999) and Winer (2000).
OCR for page 83
Page 83 mation). TMDL development will also likely require data on point/ non-point sources and pollutant loads, atmospheric deposition, the effectiveness of current best management practices, and legacy/upstream pollutant sources. Because the amount of available data varies with site, there is no absolute minimum data requirement that can be universally set for TMDL development. Data availability is one source of uncertainty in the development of models for decision support. Although there are other sources of uncertainty as well, models should be selected (simple vs. complex) in part based on the data available to support their use. Simple vs. Complex Models The model selection criteria concerning cost, flexibility, adaptability, and ease of understanding ( Box 4-2) all tend to favor simple models, although they may fail to adequately satisfy the first criterion. There are many situations, however, when an exceedingly simple model is all that is needed for TMDL development, particularly when combined with adaptive implementation (to be discussed in Chapter 5). For example, it is not uncommon in many states for farm fields to straddle small streams, with cows being allowed to freely graze in and around the stream. If a downstream water quality standard is violated, a simple mental model linking the cows to the violation, and subsequent actions in which the first step might be to limit cow access to the riparian corridor, may ultimately be sufficient for addressing the impairment. This example is certainly not intended to suggest that all TMDLs will be simple, but it does suggest the value of simple analyses and iterative implementation. Box 4-5 presents a relatively simple modeling exercise (based on a statistical rather than mechanistic model) that was used successfully to develop a TMDL for clean sediment. 
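A simple analysis of this kind often starts from the event mean concentration approach described in Box 4-4, in which each land use's load is the product of its runoff volume and a characteristic concentration. The sketch below is only illustrative: the land-use areas, EMC values, and single regional runoff depth are hypothetical, and a real screening analysis would use land-use-specific runoff coefficients.

```python
# Illustrative EMC-based nonpoint load screening estimate.
# All input values are hypothetical, not drawn from the report.
RUNOFF_DEPTH_M = 0.9 * 0.30  # assumed 0.9 m annual precipitation x 30% runoff;
                             # real analyses vary this by land use.

# (area in hectares, total-phosphorus EMC in mg/L) for each land use
land_uses = {
    "row_crop": (1200, 0.45),
    "pasture":  (800,  0.30),
    "suburban": (400,  0.26),
    "forest":   (2600, 0.11),
}

def annual_load_kg(area_ha: float, emc_mg_per_l: float, runoff_m: float) -> float:
    """Load = runoff volume x event mean concentration."""
    volume_l = area_ha * 10_000 * runoff_m * 1000  # ha -> m2, then m3 -> L
    return volume_l * emc_mg_per_l / 1e6           # mg -> kg

loads = {lu: annual_load_kg(a, c, RUNOFF_DEPTH_M) for lu, (a, c) in land_uses.items()}
total = sum(loads.values())
for lu, kg in sorted(loads.items(), key=lambda kv: -kv[1]):
    print(f"{lu:10s} {kg:8.0f} kg/yr  ({100 * kg / total:.0f}%)")
```

Even this crude accounting ranks the candidate sources and shows where load reductions would matter most, which is often all that is needed to start an adaptive implementation cycle.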
With regard to mechanistic models, there is no intrinsic reason to choose the particular scales that have become the basis for representing processes in the majority of mechanistic water quality models. As an alternative, Borsuk et al. (2001) have shown that it is possible to specify relatively simple mechanistic descriptions of key processes in aquatic ecosystems; limiting the dimension of the parameter space in this way allows the parameters to be estimated from the available data using least squares or Bayesian methods. The SPARROW model (Smith et al., 1997) is another, more statistically based, alternative that includes terms and functions reflecting processes. These efforts suggest that a fruitful research direction for the TMDL program is the development of models that are based on
BOX 4-5 Use of a Simple Empirical Model: Suspended Sediment Rating Curve for Deep Creek, MT

One relatively simple form of model that has been used successfully in many TMDL applications is a statistical regression of a water quality indicator on one or more predictor variables. The indicator may be either the pollutant named in the TMDL or a related metric used to determine impairment but not directly involved in the TMDL analysis. Such a model was used to develop a TMDL for suspended sediment in Deep Creek, MT (see Endicott and McMahon, 1996). The designated use of that waterbody is to support a cold water fishery and its associated biota, especially to provide high-quality spawning areas for rainbow and brown trout from a nearby reservoir. The reservoir and the river provide a blue-ribbon trout fishery.

Analyzing the effects of suspended sediment on salmonids is complicated by the fact that sediment concentrations in western trout streams increase dramatically with streamflow in healthy as well as sediment-impaired streams, but are lower at any given flow in healthy streams than in impaired ones. Suspended sediment concentrations at all stages of the hydrograph are biologically important. To develop a sediment TMDL at this site, modelers compared the relationship of sediment concentration to streamflow (known as the "sediment rating curve") at the impaired site with the corresponding rating curve for an unimpaired reference site. Rating curves were developed by regressing sediment concentration on streamflow. In the case of Deep Creek, the sediment-flow relationship is approximately linear with a slope of 0.51 mg l−1 per ft3 sec−1. Based on rating curves for reference streams of similar size in the area (Endicott and McMahon, 1996), an appropriate slope would be 0.26 mg l−1 per ft3 sec−1. Thus, the goal of TMDL implementation is to lower the Deep Creek slope by about half.

According to the approved TMDL management plan, certain channel modifications and a combination of riparian and grazing BMPs are expected to reduce the slope of the sediment rating curve and restore the health of the trout fishery. Whether the control measures have reduced the rating curve slope to the target level can be determined in the future by a hypothesis test on the slope parameter of the revised regression of concentration on flow. The Type I and Type II error rates for this decision-making method relate directly to the statistical confidence limits on the estimated slope parameter and are controllable through the quantity of monitoring data collected after the control measures are in place.

Several aspects of this modeling approach make it well suited to the TMDL problem. The analysis was simple to carry out and relatively easy for stakeholders to understand. Despite its simplicity, the model focuses on a critical aspect of the Deep Creek ecosystem: suspended sediment concentrations over the entire hydrograph. Future decision-making on the success of the management plan can be based on an objective test with known error rates that are controllable through monitoring.
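The Box 4-5 procedure, fitting the rating curve by ordinary least squares and testing the fitted slope against the reference-based target, can be sketched as follows. The flow and concentration values here are synthetic stand-ins generated for illustration, not the Deep Creek measurements.

```python
# Sketch of the Box 4-5 approach: fit a sediment rating curve by OLS
# and form a t statistic for H0: slope == reference target.
import math
import random

random.seed(1)
TARGET_SLOPE = 0.26  # reference-stream slope, mg/L per cfs (Box 4-5)

# synthetic paired observations: streamflow (cfs), suspended sediment (mg/L)
flows = [random.uniform(20, 300) for _ in range(40)]
concs = [0.51 * q + random.gauss(0, 8) for q in flows]  # impaired-like slope

def fit_slope(x, y):
    """OLS slope, its standard error, and residual degrees of freedom."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx                      # rating curve slope
    a = ybar - b * xbar                # intercept
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(sse / (n - 2) / sxx)
    return b, se_b, n - 2

slope, se, df = fit_slope(flows, concs)
t = (slope - TARGET_SLOPE) / se  # large t => slope still exceeds the target
print(f"fitted slope {slope:.2f} mg/L per cfs, t = {t:.1f} on {df} df")
```

With real monitoring data, the t statistic would be compared with a critical value from the t distribution at the chosen significance level. As the box notes, collecting more post-implementation data shrinks the slope's standard error and, with it, the Type I and Type II error rates of the decision.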
process understanding yet are fitted using statistical methods to the observational data.

Pilot Watersheds

Another approach that can consolidate modeling efforts and develop TMDLs more efficiently is the pilot watershed concept.¹ Many TMDLs involve small- to medium-sized watersheds with a dominant nonpoint source pollution problem (e.g., watersheds in the Corn Belt region, watersheds draining forested areas, or suburban watersheds). Watersheds located in the same ecoregion may have similar water quality problems and solutions. Thus, a detailed modeling study of one or two benchmark watersheds can provide problem identification and solutions, and these findings could potentially be extrapolated to less investigated but similar watersheds.

¹ In various forms, "pilot watersheds" have for years been the basis for understanding land use impacts on water quality. The concept is implicit in the acceptance and use of export coefficients for pollutant load assessment. A prominent example is the series of PLUARG (Pollution from Land Use Activities Reference Group) studies to determine the total loads of pollutants to the Great Lakes. The group used several pilot watersheds on each side of the border and extrapolated the detailed monitoring and modeling results to the entire Great Lakes basin.

Conclusions and Recommendations

If accompanied by uncertainty analysis, many existing models can be used to develop TMDLs in an adaptive implementation framework. Adaptive implementation, discussed in detail in Chapter 5, allows both for model development over time and for the use of currently available data and methods. It provides a level of assurance that the TMDL will ultimately be successful even with high initial forecast uncertainty.

1. EPA should not advocate detailed mechanistic models for TMDL development in data-poor situations. Either simpler, possibly judgmental, models should be used or, preferably, data needs should be anticipated so that these situations are avoided. The strategy of compensating for limited data with increasingly detailed models needs rigorous verification before it can be endorsed and implemented. Starting with simple analyses and iteratively expanding data collection and modeling as the need arises is the best approach.

2. EPA needs to provide guidance for determining the level of modeling detail appropriate to the needs of the wide range of TMDLs to be performed. The focus on detailed mechanistic models has resulted in complex, costly, time-consuming modeling exercises for single TMDLs, potentially taking resources away from hundreds of other required TMDLs. Given the variety of watershed and water quality models available, and the range of relevant model selection criteria, EPA should expand its focus beyond mechanistic process models to include simpler models. This will support the use of adaptive implementation.

3. EPA should support research on the development of simpler mechanistic models that can be fully parameterized from the available data. This would lead to models that meet several of the model selection criteria presented in Box 4-2, such as consistency with theory, assessment of uncertainty, and consistency with available data.

4. To use scarce resources more efficiently, EPA should approve the use of pilot watersheds for TMDL modeling. Rather than detailed models being prepared for every impaired waterbody, pilot TMDLs could be prepared in detail for a benchmark watershed (e.g., a typical suburban or agricultural watershed), and the results could be extrapolated to similar watersheds in the same ecoregion. The notion of extending modeling results to similar areas, which underlies the present-day use of export coefficients, is reasonable if applied within a framework of adaptive implementation. Such a framework, coupled with the rapid application of specific controls and approaches in a number of watersheds, can reveal where techniques do or do not work and can allow for appropriate modifications.

REFERENCES

ASCE. 1999.
National Stormwater Best Management Practices (BMP) Database, Version 1.0. Prepared by the Urban Water Resources Research Council of ASCE, Wright Water Engineers, Inc., Urban Drainage and Flood Control District, and URS Greiner Woodward Clyde, in cooperation with EPA Office of Water, Washington, DC. User's Guide and CD.

Beck, M. B. 1987. Water quality modeling: a review of the analysis of uncertainty. Water Resources Research 23:1393-1442.

Beven, K. J. 1996. A discussion of distributed hydrological modeling. In: M. B. Abbott and J. C. Refsgaard (Eds.), Distributed Hydrological Modelling. Dordrecht, Netherlands: Kluwer Academic Publishers, pp. 255-278.

Borsuk, M. E. 2001. A Probability (Bayes) Network Model for the Neuse Estuary. Unpublished Ph.D. dissertation. Duke University.

Borsuk, M. E., C. A. Stow, D. Higdon, and K. H. Reckhow. 2001. A Bayesian hierarchical model to predict benthic oxygen demand from organic matter loading in estuaries and coastal zones. Ecological Modelling (in press).

Brown, L. C., and T. O. Barnwell, Jr. 1987. The Enhanced Stream Water Quality Models QUAL2E and QUAL2E-UNCAS: Documentation and User Manual. EPA-600/3-87/007. Athens, GA: EPA Environmental Research Laboratory.

Chapra, S. C. 1996. Surface Water Quality Modeling. New York: McGraw-Hill. 844 pp.

Di Toro, D. M., and G. van Straten. 1979. Uncertainty in the Parameters and Predictions of Phytoplankton Models. Working Paper WP-79-27. Laxenburg, Austria: International Institute for Applied Systems Analysis.

Endicott, C. L., and T. E. McMahon. 1996. Development of a TMDL to Reduce Nonpoint Source Sediment Pollution to Deep Creek, Montana. Report to the Montana Department of Environmental Quality, Helena, MT. Bozeman, MT: Montana State University.

Environmental Protection Agency (EPA). 1994. Water Quality Standards Handbook, Second Edition. EPA-823-B-94-005a. Washington, DC: EPA Office of Water.

EPA. 1999. Draft Guidance for Water Quality-Based Decisions: The TMDL Process (Second Edition). Washington, DC: EPA Office of Water.

EPA. 2000. Stressor Identification Guidance Document. EPA-822-B-00-025. Washington, DC: EPA Office of Water and Office of Research and Development.

EPA. 2001. BASINS Version 3.0 User's Manual. EPA-823-B-01-001. Washington, DC: EPA Office of Water and Office of Science and Technology. 337 pp.

Mills, W. B., D. B. Porcella, M. J. Ungs, S. A. Gherini, K. V. Summers, L. Mok, G. L. Rupp, G. L. Bowie, and D. A. Haith. 1985. Water Quality Assessment: A Screening Procedure for Toxic and Conventional Pollutants in Surface and Ground Water, Parts I and II. EPA/600/6-85/002a,b.

Morgan, M. G., and M. Henrion. 1990. Uncertainty. New York: Cambridge University Press. 332 pp.

National Research Council (NRC). 2000. Watershed Management for Potable Water Supply: Assessing the New York City Strategy. Washington, DC: National Academy Press.

Peters, R. H. 1991. A Critique for Ecology. Cambridge: Cambridge University Press. 366 pp.

Reckhow, K. H., and S. C. Chapra. 1979. Error analysis for a phosphorus retention model. Water Resources Research 15:1643-1646.

Reckhow, K. H., S. C. Coffey, M. H. Henning, K. Smith, and R. Banting. 1992. Eutromod: Technical Guidance and Spreadsheet Models for Nutrient Loading and Lake Eutrophication. Durham, NC: Duke University School of the Environment.

Smith, R. A., G. E. Schwarz, and R. B. Alexander. 1997. Regional interpretation of water-quality monitoring data. Water Resources Research 33(12):2781-2798.

Spear, R., and G. M. Hornberger. 1980. Eutrophication in Peel Inlet. II. Identification of critical uncertainties via generalized sensitivity analysis. Water Research 14:43-49.

Ulanowicz, R. E. 1997. Ecology, the Ascendant Perspective. New York: Columbia University Press. 201 pp.

van Straten, G. 1983. Maximum likelihood estimation of parameters and uncertainty in phytoplankton models. In: M. B. Beck and G. van Straten (Eds.), Uncertainty and Forecasting of Water Quality. Berlin: Springer-Verlag.

Winer, R. 2000. National Pollutant Removal Performance Database for Stormwater Treatment Practices, Second Edition. Ellicott City, MD: Center for Watershed Protection. Prepared for EPA Office of Science and Technology, in association with Tetra Tech, Fairfax, VA.