4
Designing and Implementing Monitoring Programs

The technical design of monitoring programs refers to the process of deciding what to measure; how, where, and when to take the measurements; and how to analyze and interpret the resulting data. Proper analysis and interpretation of monitoring data result in information that helps scientists and managers decide whether regulatory, environmental quality, and human health objectives are being met. As emphasized in Chapter 2, when monitoring data have been converted to information in this manner, they generally provide better support for specific management actions.

This chapter presents comprehensive guidance for developing the technical design of monitoring programs and describes a procedure for ensuring that the information produced meets the needs of managers and decision makers. It is intended to guide those who implement monitoring programs toward better program design and improved dissemination of the information gained from monitoring.

An appropriate technical design is critical to the success of monitoring programs because it provides the means for ensuring that data collection, analysis, and interpretation address management needs and objectives. To ensure that monitoring systems will produce information that is useful to decision makers, monitoring programs that address public concerns must be developed using a comprehensive methodology such as the one described here. The committee emphasizes the following overall conclusion related to designing and implementing monitoring programs: Failure to commit adequate resources of time, funding, and expertise to up-front program design and to the synthesis, interpretation, and reporting of information will result in failure of the entire program. Without this commitment, effort and money will be spent to collect data and produce information that may be useless.

A CONCEPTUAL APPROACH TO DESIGNING MONITORING PROGRAMS

Technical design can be challenging. Variability in nature creates "noise" that often obscures the "signal" of human-induced impacts. Multiple human activities occurring within the same area or time span can interact to create complex cumulative effects. Further, choices must be made among the wide array of scientific tools that could be used and the many environmental parameters that could be measured. For example, monitoring to measure degradation in fish communities could focus on the number of species in the community, community trophic structure, the incidence of abnormalities, or many other parameters.

The committee found no shortage of good advice concerning the technical design of monitoring programs. Such useful works as Holling (1978), Green (1979), Beanlands and Duinker (1983), Fritz, Rago, and Murarka (1980), NRC (1986), Wolfe (1988), Isom (1986), Rosenberg et al. (1981), Perry, Schaeffer, and Herricks (1987), and O'Connor and Flemer (1987) provide a rich resource of ideas, strategies, and technical methods. However, a major problem revealed in the case studies is a failure to apply the appropriate design tools consistently to fulfill clearly stated monitoring objectives. The case studies and the experience of committee members indicate that too little attention is directed at deciding what measurements are required to address the priority issues defined by the public and decision makers. Such priorities provide the context for selection and application of technical design strategies.

The comprehensive methodology presented here is drawn largely from the references cited above. The goal of this synthesis is to provide a methodology for formulating clear monitoring objectives at the outset; for designing statistically sound, cost-effective sampling programs consistent with those objectives; and for synthesizing, interpreting, and reporting monitoring data.

The following sections present a design methodology that is an expansion of the central elements of the conceptual framework shown in Figure 4.1. It provides a logical and scientifically based means of linking technical decisions about monitoring design to the information needs of the decision-making process. The methodology is generic and therefore applies to most monitoring situations.

FIGURE 4.1 The elements of designing and implementing a monitoring program. (Flowchart: step 1, define expectations and goals; step 2, define study strategy; step 3, conduct exploratory studies if needed; step 4, develop sampling design, with the check "Can change be detected?" looping back to refine objectives, reframe questions, and rethink the monitoring approach; step 5, implement study; step 6, produce information, with the check "Is information adequate?"; step 7, disseminate information and make decisions.)

General Versus Specific Design Methodologies

A generic monitoring design methodology must be applicable to the various requirements of each monitoring category considered in this report: compliance, trends, and hypothesis testing. All three categories encompass a broad variety of questions about resources in many different habitats. In addition, resources, the processes that affect them, and human activities vary on diverse spatial and temporal scales. Too specific a methodology (i.e., one that specifies the exact models, parameters, sampling plans, and analyses) would be applicable only to a narrow range of situations. Conversely, a methodology that is too general will not be useful to practitioners.

The committee resolved the conflict between the needs for specificity and for generality by developing a conceptual methodology that provides guidance in producing effective technical designs for most situations. The methodology does not furnish answers to all design problems. Instead, it identifies which problems are most important and describes how they can be solved. For example, it leads practitioners through steps that convert monitoring objectives into testable questions. It provides guidance in dealing with sources of variability and uncertainty and shows how feedback mechanisms help refine questions and objectives. It demonstrates methods for linking the collection and analysis of monitoring data to the information needs of the public and decision makers. Examples are used to demonstrate how elements of the methodology would be applied to specific situations. Some steps in the methodology are more relevant to some kinds of monitoring than others.

Despite its guidance, the methodology cannot replace local or specific scientific expertise. In fact, its successful application depends on the knowledge and skill of local experts. In this respect, it reflects the decision-making approach adopted by the U.S. Army Corps of Engineers (COE) for disposal of dredged material (Peddicord et al. in press; Cullinane et al. 1986).

A Methodology for Monitoring Design

Figure 4.1 shows the main elements of the conceptual methodology, each of which is discussed in detail in subsequent sections. The methodology is based on two principles: monitoring designs must reflect cause-effect relationships while accounting for variability and uncertainty, and specific design decisions (e.g., the number of stations and replicates to be collected) can be made only after objectives and related information needs are clearly established. A lack of clarity in purpose and expectations invariably results in failure to formulate a meaningful monitoring strategy (Green 1979).

Working upward from the bottom of Figure 4.1 helps in understanding the relationships among the steps in the methodology. Information can be disseminated to decision makers (step 7) only after it has been produced (step 6). Information is produced when the results of a carefully implemented study that includes adequate data analysis and interpretation have been summarized and evaluated (step 5). For a study to be implemented successfully (step 5), it must be designed (step 4) to develop answers to important questions effectively (step 2). The focused questions that serve as the basis of a monitoring program rely on clear management objectives (step 1).
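The step dependencies and feedback loops just described can be summarized in a short sketch. The following Python fragment is a minimal illustration, not part of the report's methodology; the step names are taken from Figure 4.1, while the check functions and control flow are invented for the purpose of the example.

```python
STEPS = {
    1: "Define expectations and goals",
    2: "Define study strategy",
    3: "Conduct exploratory studies if needed",
    4: "Develop sampling design",
    5: "Implement study",
    6: "Produce information",
    7: "Disseminate information and make decisions",
}

# Feedback loops named in the text: step 4 returns to step 2 when the design
# cannot detect the change of interest; steps 6 and 7 return to step 1 when
# the information produced is inadequate for decision makers.
FEEDBACK = {4: 2, 6: 1, 7: 1}

def run_program(checks, max_passes=20):
    """Walk the steps in order. `checks` maps a step number to a function
    returning True when that step's product is adequate; an inadequate
    product sends the program back through its feedback loop."""
    step, passes = 1, 0
    while step <= 7 and passes < max_passes:
        passes += 1
        print(f"Step {step}: {STEPS[step]}")
        adequate = checks.get(step, lambda: True)()
        step = FEEDBACK[step] if (not adequate and step in FEEDBACK) else step + 1

# With no failing checks, the program runs straight through steps 1-7:
run_program({})
```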

Finally, preliminary studies (step 3) are often necessary to refine questions and technical aspects of the monitoring design.

Figure 4.1 also shows three important feedback points. The first, between steps 4 and 2, provides a means of reframing the study's underlying questions in light of real-world scientific, logistical, and financial constraints. As an example of such feedback, the Minerals Management Service (MMS) of the Department of the Interior evaluated historical data (Bernstein and Smith 1984) to help establish the objectives and design of a large-scale sampling program off California. This evaluation found that natural variability made it extremely costly to detect changes in individual species, a finding that helped focus the sampling program on less variable and more sensitive parameters. The other feedback points in Figure 4.1 (encompassing steps 6, 7, and 1) allow program designers to review and modify monitoring objectives in light of actual monitoring information about the effectiveness of specific management actions and technological advances that occur during the study.

These and other feedback points at more detailed levels of the methodology permit information that results from monitoring to be used to refine the sampling design. Throughout the more detailed description of the methodology that follows, feedback loops emphasize the point that information developed at one stage must be used to refine previous stages in an iterative process. For example, as scientific understanding and predictive ability increase, feedback mechanisms can be used to redirect resources toward unanswered questions and away from issues that have already been addressed adequately. When such feedbacks are not used, monitoring loses its effectiveness for controlling and understanding human impacts on the environment. For example, electric utilities in Southern California continue to monitor for detrimental effects of thermal discharges from coastal power plants, even though nearly 20 years of monitoring have documented the limited consequences and spatial extent of thermal effects.

STEP 1: DEFINE EXPECTATIONS AND GOALS

As outlined in Chapter 2, the ultimate goal of monitoring is to produce information that is useful in making management decisions. Therefore, two-way communication between the scientists responsible for designing monitoring programs and the users of monitoring information is essential. These interactions give decision makers and managers an understanding of the limitations of monitoring and at the same time provide the technical experts who design monitoring programs with an understanding of what questions should be answered. Step 1 of the methodology (see Figure 4.2) is designed to ensure that this communication takes place in a structured context. Such communication is important because anticipated population growth and continued development of the coastal zone will increase the demand for monitoring information to support environmental decision making (EPA 1987; Champ, Conti, and Park 1989).

FIGURE 4.2 Step 1: define expectations and goals of monitoring. (Flowchart: identify public concerns and expectations; identify relevant laws, regulations, and permits; focus scientific understanding; establish environmental and human health objectives.)

If monitoring programs are to meet these demands, their objectives must integrate public concerns and expectations with the legal and regulatory framework, using scientific understanding to identify the relevant questions to be addressed.

BOX 4.1
A TECHNICAL DESIGN THAT MEETS MANAGEMENT NEEDS

DAMOS, the Disposal Area Monitoring System, collects only those data that can be shown, beforehand, to be useful in making management decisions or resolving technical problems (Fredette et al. in press). The DAMOS program clarifies and updates its definition of information needs through its technical advisory committee of independent scientists and through periodic public symposia. Although DAMOS has been criticized for not addressing larger-scale issues, such as the added stress of dredged material disposal on regional oxygen depletion, it has successfully addressed most important questions related to dredged material disposal. Most important, monitoring is fully integrated into the decision-making process, with active and ongoing interaction between those responsible for monitoring and those responsible for making decisions.

Just as the creation of useful information depends on clear monitoring objectives, these objectives depend on unambiguous statements about what constitutes useful management information (Cowell 1978). As Bernstein and Zalinski (1986) point out when talking about useful information, one must answer the questions "Information about what?" and "Useful to whom, and in what way, specifically?" Stating clear monitoring objectives involves answering these questions as precisely and unambiguously as possible.

The three case studies identified many instances in which the development of clear objectives helped translate monitoring data into information that supported management actions. An outstanding example is the DAMOS (Disposal Area Monitoring System) program carried out by the COE New England District to guide decisions about the disposal of dredged material (Fredette et al. in press; Engler and Mathis 1989). (See Box 4.1.) In another instance, "tiered" monitoring (Fredette et al. in press; Zeller and Wastler 1986), exemplified by the monitoring plan for the 106-mile dumpsite off the East Coast (Werme et al. 1988), is structured to yield information that can answer a hierarchy of questions. Monitoring within the site concentrates on specific questions about the dispersal of disposed material. A finding that material has spread beyond the site boundary triggers a management action: more comprehensive monitoring to answer a higher tier of questions about environmental effects.
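The escalation logic of a tiered program can be made concrete in a few lines of code. The following Python sketch is purely illustrative and is not drawn from the 106-mile site plan; the station encoding, function names, and trigger condition are invented assumptions.

```python
# Tier 1 asks a narrow question; a trigger condition escalates to a broader
# tier 2 study of environmental effects, as described in the text.

def tier1_dispersal_check(stations):
    """Tier 1 question: has disposed material been found beyond the site
    boundary? `stations` maps station id -> (inside_site, material_detected)."""
    return any(detected and not inside
               for inside, detected in stations.values())

def next_monitoring_action(stations):
    if tier1_dispersal_check(stations):
        # Trigger: material spread beyond the boundary -> management action
        return "escalate: begin tier 2 monitoring of environmental effects"
    return "continue routine tier 1 dispersal monitoring"

observations = {
    "S1": (True, True),    # inside site, material present (expected)
    "S2": (False, False),  # outside site, nothing detected
    "S3": (False, True),   # outside site, material detected -> trigger
}
print(next_monitoring_action(observations))
```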

The Southern California Bight case study highlighted real-world impediments to developing clearly stated monitoring objectives. In the bight, multiple point and nonpoint sources of contaminants are in close proximity, and effects on a variety of important marine resources overlap. Marine resources in the bight are also affected by regionwide natural disturbances (e.g., El Niños, storms, and population blooms of organisms) that complicate the assessment of changes from human sources. It is much more difficult to document such cumulative effects than it is to measure those from single isolated sources or events. In addition, natural variation of resources and contaminants in the bight frequently occurs on spatial and temporal scales that confound the results of monitoring programs. The limited scientific understanding of how all these processes interact makes it difficult to find clear answers to many of the questions asked by decision makers and the public. All such impediments must be identified and considered when developing objectives for monitoring programs because they affect whether it is possible to fill the information needs identified in the definition of objectives.

Many approaches to defining issues and establishing monitoring objectives (see Figure 4.2) within the constraints imposed by the scientific knowledge base and resources (availability of time, money, and personnel) are possible (e.g., Adamus and Clough 1978; Capuzzo and Kester 1987; Gilliland and Risser 1977; Walker and Norton 1982; Wiersma et al. 1984; Cairns, Dickson, and Maki 1978). Results of one approach (Clark 1986) that the Southern California Bight case study found especially useful are summarized in Figure 4.3. This cumulative assessment approach presents a synoptic picture of natural and human sources of disturbance and their effects on natural resources. Conducting this kind of analysis requires making decisions about which resources are valued and/or vulnerable. It also requires synthesizing available scientific information about how they are affected. A particularly useful aspect of this approach is the identification of multiple and cumulative impacts. Further, it includes information about the limits of scientific certainty associated with potential impacts. This procedure provided a framework for synthesizing available scientific information on the Southern California Bight in a way that could be used by scientists, environmental decision makers, and the public to begin establishing realistic monitoring objectives.

Even though the analysis underlying Figure 4.3 was qualitative and was based on incomplete understanding, it helped participants in the Southern California Bight case study identify potential effects not addressed by ongoing monitoring programs. Figure 4.3 was especially valuable as a tool for synthesizing the available information into a conceptual model of system interactions. This model thus provides an effective starting point for developing monitoring objectives, including the selection of specific resources, impacts, and changes that should be monitored.

FIGURE 4.3 Impacts on the marine environment of the Southern California Bight. (Matrix: rows list sources of perturbation, both natural (storms, El Niños, upwelling, basin flushing, mass sediment flows, blooms/invasions, diseases, ecological interactions) and human (power plants, wastewater outfalls, dredging, river flow and stormwater runoff, commercial fishing, sport fishing, marine commerce and boating, habitat loss and modification, oil spills, oil seeps, atmospheric input); columns list affected components of the marine environment. Each cell shows the potential influence, ranging from controlling, major, moderate, or some to blank for no impact, with "?" indicating some evidence for impact but further study needed, together with an assessment reliability of high, moderate, or low.) Note: Individual matrix cells illustrate the presumed relative impact of each source on each component, along with the associated scientific certainty. Columns represent cumulative impacts on individual components; rows represent the effects of individual perturbations on all components. This figure was used to summarize and investigate ways of identifying and ranking impacts in the Southern California Bight. SOURCE: Adapted from Clark 1986.
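One way to see how such a matrix supports both the column view (cumulative impacts on a single resource) and the row view (all effects of a single perturbation) is to represent it as a simple data structure. The Python sketch below is a hypothetical illustration; the cell ratings shown are placeholders, not values from Clark (1986).

```python
# Each cell pairs a presumed influence with an assessment reliability, as in
# the Figure 4.3 legend. Entries here are invented for illustration.

INFLUENCE = ["none", "some", "moderate", "major", "controlling"]
RELIABILITY = ["low", "moderate", "high"]

# (source of perturbation, resource) -> (influence, reliability)
matrix = {
    ("wastewater outfalls", "benthic infauna"): ("major", "high"),
    ("storms", "kelp beds"): ("controlling", "moderate"),
    ("sport fishing", "kelp beds"): ("some", "low"),
}

def cumulative_impacts(resource):
    """Column view: all sources bearing on one resource."""
    return {src: cell for (src, res), cell in matrix.items() if res == resource}

def effects_of(source):
    """Row view: one perturbation's effects across all resources."""
    return {res: cell for (src, res), cell in matrix.items() if src == source}

print(cumulative_impacts("kelp beds"))
```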

STEP 2: DEFINE STUDY STRATEGY

Figure 4.4 shows the elements of defining a monitoring strategy and developing specific questions to be answered. These questions guide subsequent steps in the technical design process. Step 2 begins with the general monitoring objectives developed in step 1 and ends with explicit questions to be answered that are the basis for developing a sampling design. The goal of this step is to narrow the focus of monitoring from the vast number of questions and parameters that could be examined to those that will produce the specific information needed. Step 2 is essential because, without clearly stated testable questions, monitoring is often a haphazard collection of data. As Green (1979) emphasizes, "Your results will be as coherent and as comprehensible as your initial conception of the problem." Similarly, in writing about monitoring to detect power plant impacts, Fritz, Rago, and Murarka (1980) stated: "This failure [to formulate clear-cut questions] may account for the relatively inconclusive results produced in environmental assessments."

There are no simple guidelines for producing specific questions to be answered. Whatever method is used, it must be pursued with the determination to continue until specific potential impacts on specific resources in specific locations at specific times are identified (e.g., Bain et al. 1986). To be useful, testable questions need not be complex. DAMOS managers were concerned about whether hurricanes would erode dredged material disposal mounds and contribute to the transport and dispersal of contaminants contained in the dredged material (SAIC 1986). Their concern led to the question "Within the detection limits of seabed profiling technology, are disposal mounds in Long Island Sound smaller after a hurricane than they were before the hurricane?" In contrast, the monitoring conducted around oil platforms in the Gulf of Mexico was not based on specific questions designed to meet specific information needs, lacked any operational definition of impact, implicitly assumed that impacts would be easily distinguishable from natural variation, and failed to use an appropriate sampling design. (See Box 4.2.)

In their study of impact assessment methods, Beanlands and Duinker (1983) provide a particularly good example of the difference between useful and nebulous questions. The original nebulous question "What would be the impacts of a proposed dam on the fish resources of the river?" failed to help focus the sampling design because it did not ask "What impacts and which fish resources are of concern?" Beanlands and Duinker explain how this original question was refined to provide the specific information needed to make a decision. The refined question was: "What percentage of the Arctic char spawning habitat would be lost given a 0.5 meter reduction in the water level of the river during the month of September?"
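Questions phrased at this level of specificity map directly onto measurable comparisons. As an illustration, the Python sketch below renders the DAMOS hurricane question from the preceding paragraphs in testable form; the mound heights and the detection limit are invented numbers, not data from SAIC (1986).

```python
# "Within the detection limits of seabed profiling technology, are disposal
# mounds smaller after a hurricane than they were before?"

DETECTION_LIMIT_M = 0.2   # assumed resolution of the profiling method

# Hypothetical pre- and post-hurricane heights (m) at matched mounds
before = [4.1, 3.8, 5.0, 4.4]
after = [4.0, 3.9, 4.9, 4.4]

losses = [b - a for b, a in zip(before, after)]
mean_loss = sum(losses) / len(losses)

# The question is answerable only relative to the measurement's resolution:
if mean_loss > DETECTION_LIMIT_M:
    print(f"Detectable erosion: mean height loss {mean_loss:.2f} m")
else:
    print(f"No change detectable: mean loss {mean_loss:.2f} m is within "
          f"the {DETECTION_LIMIT_M} m detection limit")
```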

FIGURE 4.4 Step 2: Define study strategy. (Flowchart: identify resources at risk; develop conceptual model; check whether appropriate resources have been selected, adjusting resources if not; determine appropriate boundaries; check whether the selected boundaries are adequate; predict responses and/or changes; check whether the predictions are reasonable, refining the model if not; develop testable questions.)

As shown in Figure 4.4, several steps are involved in progressing from general monitoring objectives (step 1 and Figure 4.3) to specific questions to be answered (step 2 and Figure 4.4). They include: identifying

without knowledge about at least the relative magnitudes of the various sources of variability.

Selecting Variables to Measure

Most monitoring programs do not have the resources to monitor all variables of concern. The limited resources available must then be focused on the system attributes that are of the greatest concern and provide the most information about system status or changes in status. Thus actual sampling may not focus directly on the resources at risk but on surrogate variables. Surrogate variables include resources of intrinsic importance (e.g., economically important species, endangered species), early warning indicators (e.g., variables that respond rapidly to the stress of concern), sensitive indicators (e.g., variables that have a high degree of specificity to stress), process indicators (e.g., variables that provide insight into the effects of stress on complex system interactions), and variables with high information redundancy (i.e., those that are generally representative of the behavior of a number of important parameters).

The rationale for monitoring surrogate variables is that they might provide clearer or simpler information than the resources themselves would. This rationale does not always hold (Wolfe and O'Connor 1986; O'Connor and Demling 1986; Bryan and Gibbs 1987), and specific criteria need to be applied to the selection of surrogate variables on a case-by-case basis. For example, diversity indices are often used to provide summary information about impacts on communities containing many species. However, much important information can be discarded in the calculation of these indices (May 1985). In addition, changes in diversity can be ambiguous, particularly when the study assemblage is exposed to more than one source of disturbance (NRC 1986). Criteria that should be used to select surrogate variables include sensitivity to the stress of concern, reliability and specificity of responses, ease and economy of measurements, and relevance of the indicator to specific concerns (NRC 1986).

Two important issues are involved in the choice of variables to monitor. The first relates to the depth of knowledge about a particular system (e.g., specificity and reliability of responses) and the second to the statistical efficiency of sampling alternative variables (e.g., the signal-to-noise ratio). A prime consideration for any monitored variable is that it should be tied directly to the specific questions to be answered and the resources at risk. In other words, changes in the status of the selected variable must unambiguously reflect changes in the resources at risk. How tightly the two can be tied together depends largely on the depth of knowledge about the system and process being monitored. In well-understood systems, it will be clear which variables to measure and how to draw conclusions about the state of resources from them. For example, understanding the processes

leading to oxygen depletion and eutrophication has focused modeling and monitoring on nutrient levels (Hydroscience 1974; HydroQual 1986). When a system is less well understood, it may not be apparent which variables will indicate meaningful changes in resources. Then the conceptual model should be used to determine whether a particular variable can be linked to the specific questions to be answered with cause-effect statements. When crucial gaps in scientific understanding occur, research or modeling may be initiated to help determine what measurements should be made. In addition, the available information should be used to make an informed decision about what to monitor now. The kelp bed example described earlier (see Box 4.3) shows how research and modeling provided data that improved the conceptual model. This improved understanding was then used to focus monitoring on quantifying the response of kelp recruitment to power-plant-induced changes in near-bottom irradiance.

A second major consideration in selecting monitored variables is their statistical distributions and characteristics (e.g., signal-to-noise ratio). Monitored variables should provide the most accurate and precise estimates for the smallest required sampling effort, thus maximizing the information returned per unit of sampling effort expended. Variables with high variability or unknown distributions (see Box 4.8) impair the ability to draw conclusions from monitoring data. Such variables are not appropriate for routine monitoring programs.

BOX 4.8
VARIABILITY AFFECTS SELECTION OF VARIABLES

Dischargers in the Southern California Bight monitor the levels of contaminants in the tissue of fish collected around wastewater outfalls. But two potentially large and poorly understood sources of variability make it difficult to interpret these data. First, different species of fish are sampled at different outfalls (NRC in press); in other words, different variables (i.e., different species) are being sampled. Second, sampling is conducted at different times around the same outfall, yet contaminant levels in fish vary seasonally as a function of reproductive status (Cross et al. 1986). These two sources of variability may interact because of differences in the timing of reproductive cycles and in tissue chemistry among species, resulting in data that provide ambiguous information about the impacts of discharges on contaminant levels in fish or about the risk of contaminant discharge to the people who eat the fish.
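The signal-to-noise criterion lends itself to a simple screening calculation. The Python sketch below is a hypothetical illustration of ranking candidate variables by signal-to-noise ratio; the variable names, expected responses, and pilot values are all invented.

```python
# "Signal" is the expected response to the stress of concern; "noise" is the
# natural variability (standard deviation) estimated from pilot or
# historical data.

from statistics import stdev

pilot_data = {
    # candidate variable -> (expected response, replicate baseline values)
    "species_richness": (4.0, [32, 41, 28, 45, 36]),
    "diversity_index": (0.2, [2.8, 3.1, 2.9, 3.0, 2.7]),
    "contaminant_in_tissue": (1.5, [2.0, 2.4, 1.9, 2.2, 2.1]),
}

def signal_to_noise(expected_response, baseline):
    return expected_response / stdev(baseline)

ranked = sorted(pilot_data.items(),
                key=lambda kv: signal_to_noise(*kv[1]), reverse=True)
for name, (resp, base) in ranked:
    print(f"{name}: S/N = {signal_to_noise(resp, base):.2f}")

# Variables with the highest S/N return the most information per sample;
# those with high noise or unknown distributions are poor choices for
# routine monitoring.
```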

The Sampling Design and Its Statistical Basis

The sampling design is the central element in step 4 of the design methodology. (See Figure 4.6.) It provides the logical structure of the study (Cochran 1977; Fisher 1954) because it specifically defines how questions will be evaluated and how variation associated with different sources (e.g., spatial and temporal as well as human-induced variation) will be measured. For example, the kelp bed study (see Box 4.3) was structured around comparisons of the characteristics of kelp beds located in the thermal plume against unaffected kelp beds located in reference areas far removed from the thermal plume. Several reference kelp beds were sampled to estimate the natural variability among them. This structure defined the type of comparisons that would be used to detect impacts. In addition, the design called for sampling for several years before and several years after the power plant began operating, to provide a background of natural temporal variability against which to measure changes in conditions once power plant operations began.

In many monitoring and assessment programs, it is not possible to collect preoperational data or to establish baseline conditions before an impact has occurred. Statistical comparisons in such cases are limited to comparing distributions among locations of concern to distributions at sites that are assumed to be appropriate reference areas (Green 1979). Selection of appropriate reference areas is always problematic. It is a particularly difficult problem in estuaries, where a natural salinity gradient that may vary in location from year to year generally requires broad regional sampling and the application of estimation techniques to assess conditions that may occur at any particular location (Holland, Shaughnessy, and Hiegel 1986).

A poorly thought out sampling design usually results in the testing of inappropriate questions, incomplete evaluation of questions, inability to separate change due to natural processes from change due to multiple activities, relatively low ability to detect change (low statistical power), and poor use of resources due to oversampling (e.g., Gore, Thomas, and Watson 1979; Hurlbert 1984; Stewart-Oaten and Murdoch 1986; Green 1979; Thomas 1978; Bernstein and Zalinski 1983; Taft and Shea 1983; Trautmann, McCulloch, and Oglesby 1982; Skalski and McKenzie 1982; Millard and Lettenmaier 1986). A well-planned sampling design, however, provides a logical basis for evaluating questions and a clear definition of a meaningful level of change, proper matching of variables with questions, quantification and partitioning of background variability, and proper assignment of sampling units among conditions or treatments.

Once a sampling design has been developed, it becomes the basis for a statistical model, which is a formal mathematical statement of the specific questions to be tested.
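The kelp bed design just described is an instance of what is often called a before-after/control-impact (BACI) structure. The following Python sketch is a minimal, invented illustration of the comparison such a design supports; a real analysis would fit a formal statistical model rather than difference raw means.

```python
# Conditions at the impact site are compared against several reference
# sites, before and after operations begin. Data are invented.

from statistics import mean

# site -> {"before": [...], "after": [...]} (e.g., kelp density per transect)
surveys = {
    "impact": {"before": [52, 48, 55], "after": [31, 28, 35]},
    "ref_1": {"before": [50, 47, 53], "after": [49, 51, 46]},
    "ref_2": {"before": [44, 49, 46], "after": [47, 43, 48]},
}

def baci_effect(surveys, impact="impact"):
    """Change at the impact site minus the mean change at reference sites;
    the reference sites estimate natural temporal variability."""
    change = {s: mean(d["after"]) - mean(d["before"])
              for s, d in surveys.items()}
    ref_changes = [c for s, c in change.items() if s != impact]
    return change[impact] - mean(ref_changes)

print(f"Estimated change beyond natural variation: {baci_effect(surveys):.1f}")
```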

By structuring how questions will be asked and by formally describing and partitioning sources of variability, the statistical model furnishes an objective method for allocating sampling or measurement resources. Two statistical tools that aid in the fine-tuning and refinement of the sampling design are optimization and power analyses. When sampling resources are limited, optimization techniques help decide how to make the trade-offs needed to control for several sources of variability (e.g., Gunnerson 1966). Power analysis is a procedure for determining the level of change a given sampling design will detect (Cohen 1988; Trautmann, McCulloch, and Oglesby 1982). These analyses can be conducted before samples are taken, after part of the samples have been collected, or after the program has ended. This knowledge can be invaluable in determining, before a program is initiated, whether the resources available for monitoring are likely to produce useful information. If power analyses show that meaningful levels of change cannot be detected with the available resources, then the monitoring program can be redirected before these resources are wasted on trying to answer unanswerable questions. Power analyses also provide scientists and decision makers with an estimate of the level of uncertainty, and thus the degree of confidence they should place in a given analysis result, at the conclusion of a program.
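A prospective power calculation can be sketched in a few lines. The Python fragment below uses a standard normal approximation for a two-sample comparison to show how the detectable level of change shrinks as sample size grows; the standard deviation and sample sizes are assumed values for illustration only.

```python
from statistics import NormalDist

def detectable_change(sd, n_per_group, alpha=0.05, power=0.80):
    """Smallest true difference detectable with the stated error rates,
    for a two-sided two-sample z-test with n samples per group."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # type I error criterion
    z_beta = z.inv_cdf(power)            # type II error criterion
    se = sd * (2 / n_per_group) ** 0.5   # standard error of the difference
    return (z_alpha + z_beta) * se

sd = 12.0  # assumed background standard deviation from pilot data
for n in (5, 10, 20, 50):
    print(f"n = {n:>2} per group: "
          f"detectable change = {detectable_change(sd, n):.1f}")

# If the detectable change exceeds any meaningful level of change, the
# design should be revised before resources are spent collecting the data.
```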

QUALITY ASSURANCE: AN IMPORTANT ELEMENT OF MONITORING PROGRAM DESIGN AND IMPLEMENTATION

A quality assurance program is a system of activities undertaken to ensure that the type, amount, and quality of the data collected are adequate to meet study objectives; it is a critical element of all monitoring programs (Taylor 1985; EPA 1979; EPA 1984a). Quality assurance consists of two separate but interrelated activities: quality control and quality assessment (Taylor 1985).

Quality control includes activities to ensure that the data collected are of adequate quality given the study objectives and the specific hypotheses to be tested (steps 1-4). Quality control activities frequently undertaken within monitoring programs include standardized sample collection and processing protocols and requirements for technician training (EPA 1984b). The goals of quality control procedures are to ensure that sampling, processing, and analysis techniques are applied consistently and correctly; the number of lost, damaged, and uncollected samples is minimized; the integrity of the data record is maintained and documented from sample collection to entry into the data record; the data are comparable with similar data collected elsewhere; and study results can be reproduced.

Quality assessment activities are implemented to quantify the effectiveness of the quality control procedures. They ensure that measurement error is estimated and accounted for and that bias associated with the monitoring program can be identified and, if practical, eliminated. Quality assessment consists of both internal and external checks, including repetitive measurements, internal test samples, interchange of technicians and equipment, use of independent methods to verify findings, exchange of samples among laboratories, use of standard reference materials, and audits (Taylor 1985; EPA 1980, 1984c).

To be effective, quality assurance must begin with the planning of the monitoring program, so that the level of uncertainty associated with obtaining the required information can be balanced against the cost of obtaining the data (EPA 1984b). The activities in steps 1-5 for defining what to measure and how, where, and when to take measurements are all part of the quality assurance process. Quality assurance must continue to be an integral component of monitoring systems from implementation through information dissemination. The activities for converting the data into useful information (steps 6-7) and the feedback loops shown in Figure 4.1 must also be taken into account in designing the quality assurance program. These later activities provide mechanisms for using quality assessment information to modify and improve monitoring.

The need for quality assurance programs increases with the complexity of the measurement program and the number of organizations involved (Taylor 1978, 1985). Experience shows that chemical monitoring programs that involve a number of laboratories measuring concentrations of chemical substances are particularly subject to quality assurance problems (Taylor 1985). For example, during the early stages of the Chesapeake Bay Monitoring Program, nutrient data were collected and analyzed by three regional laboratories, all using different protocols for processing samples. As a result, the data were not comparable and could not be used to depict nutrient distributions accurately (Martin Marietta Environmental Systems 1987). As is often the case, because of the haste to initiate the collection program, the laboratories' methods and equipment were not evaluated (Taylor 1985).

Another important quality assurance issue associated with monitoring systems is maintaining the integrity of large data sets (Packard, Guggenheim, and Bernstein 1989). Two general data management problems must usually be resolved: (1) correction or removal of erroneous individual values and (2) inconsistencies that damage the integrity of the data base. Many erroneous individual values can be identified, validated, and corrected using range checks, filtering algorithms, and comparison to lists of valid values. Entering data twice using different data entry operators and then checking for nonmatches is a particularly effective method for identifying and correcting keypunch errors. Subtle errors that affect the integrity of multiple data entries are much more difficult to identify and correct. For example, errors that affect the relationships among data entries are particularly difficult to identify and correct, especially in large regional monitoring data bases.
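These individual-value checks are straightforward to mechanize. The Python sketch below is a minimal illustration of range checks, a lookup table of valid codes, and double-entry comparison; all field names, codes, and limits are invented.

```python
VALID_SPECIES = {"ARCTIC_CHAR", "WINTER_FLOUNDER", "BLUE_MUSSEL"}
RANGES = {"salinity_ppt": (0.0, 40.0), "depth_m": (0.0, 500.0)}

def check_record(rec):
    """Return a list of problems found in one data record."""
    problems = []
    if rec["species"] not in VALID_SPECIES:      # lookup-table check
        problems.append(f"unknown species code: {rec['species']}")
    for field, (lo, hi) in RANGES.items():       # range checks
        if not lo <= rec[field] <= hi:
            problems.append(f"{field} out of range: {rec[field]}")
    return problems

def double_entry_mismatches(entry_a, entry_b):
    """Compare two independent keyings of the same data sheet."""
    return [k for k in entry_a if entry_a[k] != entry_b[k]]

rec1 = {"species": "ARCTIC_CHAR", "salinity_ppt": 31.5, "depth_m": 42.0}
rec2 = {"species": "ARCTIC_CHAR", "salinity_ppt": 3.15, "depth_m": 42.0}
print(check_record(rec1))                   # -> []
print(double_entry_mismatches(rec1, rec2))  # -> ['salinity_ppt']
```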

Although some data base management systems protect against such errors, others require rigorous cross-checking during data entry to identify and correct them. Experience shows that the most effective way to avoid corruption of a data base is to select a data management system that protects against internal inconsistencies and to design the data entry process to minimize the occurrence of errors (Packard, Guggenheim, and Bernstein 1989). Data entry screens should be simple, and they should mimic the layout of the raw data sheets. Typographical errors can be minimized by having users select from a list of valid values (using lookup tables) rather than type in the actual values.

Quality assurance activities ensure that the goals and objectives of the monitoring program are achieved and that the resulting data are adequate for use in making the anticipated decisions. The final and perhaps most important component of quality assurance for a monitoring system is the external review process. Expert reviews should be conducted before samples are taken, at various logical interim phases during a program, and following the analysis and interpretation of the data.

STEP 6: CONVERT DATA INTO USEFUL INFORMATION

The raw data collected in a monitoring program frequently do not directly address public concerns or the information needs of decision makers. Data are individual facts; information is data that have been processed, synthesized, and organized for a specific purpose. Drucker (1988) described the difference between data and information: "Information is data endowed with relevance and purpose. Converting data into information thus requires knowledge." A useful monitoring program provides knowledge or, more specifically, mechanisms to ensure that knowledge is used to convert the data collected into information.

For example, measurements of contaminant concentrations in the water or sediments near a discharge are not, in and of themselves, useful information. Contaminant concentration data must be analyzed and mapped to describe patterns and trends. They must then be combined with additional data (e.g., background levels, transport processes, and flux rates) to define exposure. Ultimately, to assess environmental impacts, exposure information must be combined with the results of pollutant transport studies and effects research (e.g., bioassay experiments) to assess the risks to and consequences for receptors and processes. Conversion of monitoring data into information therefore involves a range of activities, including data management, statistical analysis, predictive modeling, and fate and effects research. Each of these activities is discussed below.
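The contaminant example can be followed through one step of that chain in code. The Python sketch below is an invented illustration of turning raw concentrations into a summary a decision maker can read; the stations, values, and units are hypothetical.

```python
from statistics import mean

# station -> replicate sediment contaminant concentrations (mg/kg)
raw_data = {
    "outfall_100m": [8.2, 9.1, 7.8],
    "outfall_1km": [3.0, 2.7, 3.4],
    "reference": [1.1, 0.9, 1.3],
}
BACKGROUND = mean(raw_data["reference"])

def summarize(raw):
    """Data -> information: mean level and enrichment over background."""
    return {st: {"mean": mean(v), "x_background": mean(v) / BACKGROUND}
            for st, v in raw.items()}

for station, info in summarize(raw_data).items():
    print(f"{station}: {info['mean']:.1f} mg/kg "
          f"({info['x_background']:.1f}x background)")

# Assessing actual risk would further combine these exposure estimates with
# transport and effects research, as the text notes.
```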

Data Management

The major function of data management activities is to provide easy access to the collected data and related information (e.g., historical trends data, research data, model outputs, and data summaries). Because of the amount and complexity of the data collected by most monitoring programs and the variety of reports and analyses produced, a computer-assisted data management system is usually essential. To define and select the appropriate data management system, managers should first determine the volume of data, the long-term uses of the data, existing data management capabilities, the number and backgrounds of and relationships among users of the data, the major types of analyses to be conducted, and quality assurance/quality control and reporting requirements. This information ensures a system with the required capacity and degree of access.

Monitoring data can be stored in a central location, or they can be accessed through a distributed data management system. In either case, monitoring data and relevant model results should be included in both raw and summarized form to eliminate costly reanalysis. In addition, information on study characteristics, information on the institution responsible for data collection and storage, and a brief description of sampling methods, data format, quality control procedures, and how to access the data should be readily available for each data set.

Data management activities are as important to the success of monitoring programs as the collection of data. Therefore they should be funded as a continuing core program element, and reports that summarize the types, volume, and quality of the data accessible through the system should be prepared and distributed to potential users frequently. Unfortunately, monitoring data are frequently not incorporated into a data management system until most data collection is complete. At this point in many programs, there may not be enough time or money to create an adequate system. This situation lessens the utility of monitoring data to scientists within and outside the program.
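The per-data-set documentation described above can be captured in a simple record structure. The following Python sketch is a hypothetical example of such a catalog entry; the class name, fields, and sample values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DataSetRecord:
    """Documentation kept alongside each data set, per the text: study
    characteristics, responsible institution, methods, format, QA
    procedures, and access instructions."""
    title: str
    institution: str        # responsible for collection and storage
    sampling_methods: str   # brief description
    data_format: str
    qa_procedures: str
    access: str             # how to obtain raw and summarized data
    keywords: list = field(default_factory=list)

catalog = [
    DataSetRecord(
        title="Benthic infauna surveys, 1986-1989",
        institution="Example Regional Monitoring Authority",
        sampling_methods="0.1 m2 grab, 1.0 mm sieve, quarterly",
        data_format="CSV; one row per station-date-taxon",
        qa_procedures="duplicate sorts on 10% of samples; taxonomic audits",
        access="request via data manager; raw and summarized forms",
        keywords=["benthos", "sediment", "abundance"],
    ),
]
print(catalog[0].title)
```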

Data Analysis and Modeling

The goals of analysis activities are to summarize and simplify the collected data, test for change and differences, generate hypotheses, determine the consequences of observations, and evaluate the uncertainty associated with conclusions drawn from the data. Analysis programs should be developed prior to data collection. This development should include both statistical testing and modeling to ensure that the analysis approach is appropriate to the sampling design and the sampling methods.

Successful analysis programs cut across institutional and media boundaries; partition spatial and temporal variations into their major sources (natural and human induced); are based on an understanding of the linkages among physical, chemical, and biological attributes; use standard, verified modeling approaches, statistical packages, and analysis/data management packages; state and determine the consequences of the assumptions inherent in the sampling design and analysis approach; evaluate the sensitivity of analyses to those assumptions; and summarize analysis results using easily understood graphs, maps, and tables.

Statistical analysis helps characterize the data, determine the uncertainty associated with measurements, classify the data into appropriate spatial and temporal strata, and test for spatial and temporal change. Generally, many statistical tests are appropriate for any particular situation. Selection of the most appropriate test depends on the data characteristics and the specific question being asked. Numerous publications are available to help scientists and nonscientists identify the most appropriate test, conduct the test, and interpret the results (e.g., Green 1979).

As discussed earlier, forecasting the responses of complex marine systems to human activities and reliably assessing their status and trends are difficult problems. Simulation models, which are based on essential system attributes, are an assessment tool that can describe environmental complexities while allowing those complexities to be incorporated in forecasts of the consequences of environmental change.

Research is a basic element in the development of predictive models and the interpretation and synthesis of monitoring data and model outputs. It is the major process for establishing cause-effect relationships. Correlations and relationships identified during the analysis of monitoring data (e.g., Cairns, Dickson, and Maki 1978; Smith, Bernstein, and Cimberg 1988; Holland, Shaughnessy, and Hiegel 1986) can be an important source of ideas for future experiments and measurements. The Southern California Bight case study found that monitoring programs had benefited greatly from their close association with ongoing research programs designed to understand the fate of discharged wastes and assess sublethal effects. The Southern California experience also shows that the results from separately managed and funded research programs can be transferred effectively.

Resource allocations for analysis activities are frequently not commensurate with those for data collection. For example, the Chesapeake Bay case study found that far too little attention and resources were directed at data analysis and synthesis relative to the investment made to collect the data. Data should not be collected unless a commitment is made at the outset that support for analysis activities will be commensurate with that for data collection.

One way to address this problem is to use a phased analysis approach.

In such an approach, the data collected early in the monitoring program are used to develop and refine routine analysis methods, classify the data into spatial and temporal components, determine the adequacy of the sampling design and methods, define current status and its relationship to historical conditions, and develop a preliminary understanding of the links between components and processes. Interdisciplinary analyses can follow later in the program.

STEP 7: DISSEMINATE RESULTS

The results of monitoring programs, especially regional programs, should be disseminated to a range of audiences and at several technical levels. Monitoring programs that produce only technical reports summarizing data and scientific findings are not likely to show the public or decision makers that they provide information essential to better environmental protection or management decisions. In fact, management information is produced only when it is delivered to managers and decision makers in a usable, accessible form. Many monitoring programs, especially status and trends studies, extend over years. Interim results of these studies should be disseminated regularly, allowing users to determine whether the type and volume of data that they need are being obtained. If the needed information is not being obtained, midcourse adjustments can then be made. A phased analysis and reporting approach similar to that used by the Maryland Department of the Environment (see Box 4.9) keeps target audiences informed about what the information being collected means, what data remain to be collected, what analyses remain to be completed, and why additional data collection and analyses are needed.

REALISTIC EXPECTATIONS

While acknowledging the importance of monitoring information, one must not overstate its utility. The marine environment is complex and variable, and it is often difficult to identify and measure clearly the impacts of human origin. These factors, coupled with the limitations of scientific knowledge, emphasize the need for realistic expectations. Management of the environment, and the monitoring programs that are a part of that management, must therefore consider the risks and uncertainties inherent in most actions. Monitoring is limited in its ability to quantify changes and to identify their causes. These limitations must be forthrightly stated, understood, and incorporated in the decision-making process.

The reality of imperfect knowledge about marine systems means that monitoring should be used as an opportunity to increase and refine our knowledge of them.

BOX 4.9
DISSEMINATION OF INFORMATION IN THE CHESAPEAKE BAY PROGRAM

The Maryland Department of the Environment (MDE) Chesapeake Bay Water Quality Monitoring Program was designed to assess water quality conditions in the Maryland Chesapeake Bay and to determine the effectiveness of actions and policies to improve and protect water quality. The program disseminates its results to the public, scientists, and decision makers. The reports described here are an example of what monitoring programs should produce.

Level I Reports
Level I reports, prepared semiannually, summarize the status of data collection activities; they include displays of spatial, seasonal, and long-term trends, analyses of results, and tabular data summaries. One of the two reports also summarizes analyses. They are distributed to all appropriate agencies and organizations.

Level II Reports
Level II reports, prepared every two years, reach the same audience as Level I reports, but they are more interpretive. Level II reports evaluate relationships among study elements, place the data in an ecological and regional perspective, and quantify the effects of the major processes affecting water quality.

Level III Reports
These reports are prepared periodically for politicians, high-level decision makers, and the public. They provide an overall assessment of the status of Chesapeake Bay and the changes that have occurred over defined periods. Their objectives are to identify the factors influencing environmental conditions, evaluate restoration actions, and identify management actions and policies that would improve conditions.

Executive Summaries
Executive summaries, prepared annually, are short documents prepared for each major program element. They list the data being collected; describe how, when, and where collections are made; list the name, telephone number, organization, and address of the responsible principal investigators; describe how to obtain data summaries and/or raw data; highlight major findings, conclusions, and recommendations; and describe future plans.

Additional Documents
Periodically, MDE prepares and disseminates field and laboratory manuals, data management reports, and the findings of special studies conducted to evaluate sampling and processing methods.

Data and information derived from monitoring programs should be used to check, validate, and refine the assumptions, models, and understandings on which the monitoring was based. This iterative feedback increases predictive ability, reduces uncertainty, and ultimately reduces the monitoring effort needed. As discussed in Chapter 2, risk-free decision making is not achievable, and monitoring must be viewed as a way of reducing uncertainty, not of eliminating it.

Although not a necessary ingredient of every monitoring program, research on natural variability and its causes, ecosystem function, the transport and fate of materials, and the biological effects of contaminants and habitat alterations is critical to the evolution of the knowledge that makes monitoring more effective. At the least, regional trends monitoring should be accompanied by an ongoing research program designed to contribute to the interpretation of monitoring results. If it is not, the accumulation of data will outstrip the maximum use of these data or, worse, will lead to erroneous conclusions.

In most monitoring efforts, the need to hold study methods constant for the sake of continuity must be balanced against the need to adapt methods to reflect technological advances. This dilemma cannot be resolved in any arbitrary fashion, and it must be carefully and periodically addressed in each monitoring program. Such adaptation includes not only the collection of additional data and the application of new sampling techniques, but also dropping obsolete measurements, reducing monitoring efforts for well-understood processes, and restructuring the entire program when fundamental assumptions are found to be flawed. As knowledge improves and new problems come to light, the resources available for monitoring must be shifted appropriately. Thus a crucial part of technical design is knowing when to stop or reduce the monitoring effort devoted to a particular problem.