Appendix D
Forecasting for Environmental Decision Making: Research Priorities

William Ascher

INTRODUCTION

This brief survey is intended to identify where research on forecasting for environmental decision making is most promising, rather than to assert best practices in the choice of methods and process. Nevertheless, some premises and dimensions should be clarified in order to support the recommendations of this analysis.

Premises

An assessment of the research needs for environmental forecasting should rest on three basic premises. First, decision needs (both short term to address today’s policy challenges and long term to improve the scientific capacity to address future policy challenges) should drive the selection of forecasting foci, methodologies, and assessments. Therefore, it is important to set the research objectives for improving environmental forecasting according to the needs of formulating environmental policies and decisions. This does not mean that forecasting in order to improve science is less important than forecasting to meet immediate policy needs. It does mean that considerations of the short- and long-term usefulness of forecasting should drive the research agenda. This depends on aspects of the forecasts per se (reliability, credibility, completeness, and relevance to the policies and specific decisions) and on how the forecasting exercise interacts with the other facets of the decision-making process. Research on how to make forecasts more useful is as important as improving the accuracy of the forecasting methods.



Copyright © National Academy of Sciences. All rights reserved.





Second, given the importance of utility, conveying the magnitude and nature of uncertainty is crucial. It is a central concern of research on the communication of scientific information. In addition, determining the magnitude and nature of uncertainty is an essential research task, as is understanding how uncertainty affects the decision process.

Third, forecasting is essential regardless of the approach to environmental and resource management. Even if decision makers engage in what they regard as adaptive management,[1] forecasting is still required in the selection of optimal strategies. If feedback through monitoring and evaluation calls for policy changes, decision makers must still project the likely outcomes of the available alternatives; without this analysis, adaptation is just as likely to result in a deterioration of outcomes. If adaptive management resorts to policy experiments in the vein of Carl Walters, it is still essential to predict whether the outcomes pose unacceptable risks that would outweigh the benefits of learning through experimentation.
Preview of Needs

Sound environmental decision making requires forecasts that:

- are more comprehensive in terms of input considerations, outcomes, and effects
- are sensitive to threshold effects (nonlinearities)
- are better linked to valuation of outcomes and effects, so that they can assist policy makers and the public to understand the magnitude of the costs, risks, and opportunities
- provide a strong sense of how people are affected
- are perceived as credible[2] if credibility is deserved
- convey the degree (and nature) of their uncertainty, such that hedging strategies can be developed

For the forecasting effort (as distinct from the substantive forecast content) to make the most effective contribution to the decision process, it should:

- engage decision makers in the process, so that they can ensure the relevance of the choice of what is forecasted and gain confidence in the process
- focus decision makers' attention on emerging problems and opportunities
- provide adequate participation for stakeholders (although what is adequate depends on the specific property rights regime, legal mandates, and other contextual factors)
- involve sufficiently balanced sponsorship (in terms of funding, analytic effort, and review) to bolster its reliability and perceived credibility

How can research address these needs in the short and medium term (i.e., within the next five years or so)? One task is to inventory and assess the approaches and methodologies that already exist and give prominence to the sound alternatives. Because the forecasting task is frequently just one component of analysis, only a limited number of analysts specialize sufficiently in forecasting methodology to have the incentives or time to stay abreast of the forecasting literature, which is dispersed across many journals on forecasting, risk assessment, and sectoral specialties (such as energy or land use), as well as limited-circulation reports and books. "Best practice" inventories are therefore very important, as long as analysts keep in mind that different questions may require different practices.

The other task is to develop new approaches where we are reasonably confident that existing approaches are inadequate and there is reason to believe that progress can be made. Yet some areas where one might think improvements can be made may be dead ends because of the intrinsic limitations of the forecasting task.
To think through where assessment and research are most needed, it is useful to distinguish 11 aspects of the forecast and the forecasting effort:

- the units of analysis (e.g., disaggregated trends versus aggregate trends; impacts on particular groups versus national impacts)
- the methodological approach of the forecasting effort (e.g., econometric models, systems dynamics models, scenario writing, extrapolation)
- the perceived appropriateness of the methods
- the transparency of assumptions and methods
- the theoretical content that drives the projections within the chosen methodology (e.g., the relationship between industrial expansion and pollution within an econometric or systems dynamics model)
- the modes of expressing forecasts and the uncertainties of these forecasts
- sponsorship
- integration of the forecasting task with other decision processes (decision-maker involvement; stakeholder involvement)
- the reputation of the forecasters
- the breadth of the expertise of those involved in the forecasting effort
- the potential for the forecasting effort to contribute to the identification of additional policy options

APPROACHES TO ENHANCING THE ACCURACY AND RELIABILITY OF FORECASTS FOR ENVIRONMENTAL DECISION MAKING

Comprehensiveness of the Initial Mapping of Forecast-Relevant Factors

The accuracy and reliability of projections depend on sequencing and balancing comprehensiveness and selectivity.[3] The first challenge is to ensure that a sufficient range of potential influences is taken into account in a preliminary assessment, so that relevant factors are not ignored. For example, technological progress inputs were often missing from long-term environmental models (a highly criticized shortcoming of the Club of Rome models). This broad initial mapping does not mean that all of the factors considered will warrant the same degree of analytic attention or inclusion in the models that ultimately drive the subsequent analysis. The selectivity that follows the initial mapping must reflect both the finite nature of analytic resources and, less obviously, the match between the methods and the understanding of the system. Highly complex models that include poorly understood factors run the dual risks of imparting greater error and making the assessment of uncertainty even more difficult.

The challenge of making forecasts more comprehensive in terms of the range of trends has been taken up in many forms: systems dynamics models, integrated scenario writing, and integrated assessment models. The question, then, is which of these approaches can combine multiple trends reliably without becoming black boxes of such complexity in their operations and outputs that forecast users cannot grasp the dynamics of the interactions and therefore cannot assess the model's coherence, reliability, or sensitivity to variants in the assumptions. A compilation and assessment of these approaches would be a very useful research project.
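The mapping-then-selecting logic described above can be sketched as a simple one-at-a-time screening exercise. Everything here is illustrative: the factor names, the toy outcome function, and its weights are assumptions, not estimates drawn from any model discussed in this appendix.

```python
import random

random.seed(1)

# Hypothetical toy model of an environmental outcome driven by four
# candidate factors; the weights are illustrative, not empirical.
def outcome(industry, population, technology, climate_var):
    return 2.0 * industry + 1.2 * population - 1.5 * technology + 0.1 * climate_var

FACTORS = ["industry", "population", "technology", "climate_var"]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def screen_factors(n=5000):
    """One-at-a-time screening: vary each factor alone over its plausible
    range and measure how much the outcome varies in response."""
    scores = {}
    for i, name in enumerate(FACTORS):
        outputs = []
        for _ in range(n):
            vals = [1.0, 1.0, 1.0, 1.0]         # hold the others at nominal values
            vals[i] = random.uniform(0.5, 1.5)  # perturb this factor only
            outputs.append(outcome(*vals))
        scores[name] = variance(outputs)
    return scores

scores = screen_factors()
# Keep only factors contributing more than 5% of the largest contribution:
# broad mapping first, selectivity second.
selected = [f for f in FACTORS if scores[f] > 0.05 * max(scores.values())]
print(selected)
```

The point of the sketch is the two-stage discipline: all candidate influences enter the preliminary assessment, and only those that materially move the outcome survive into the working model.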
An especially important but often overlooked aspect of effects is the adjustment cost associated with shifting to new technologies to mitigate environmental problems and resource scarcities. It is easy to invoke new technologies as the answers to environmental risks, for example, replacing hydrocarbons to reduce pollution levels. Yet the costs of new infrastructure as well as of direct equipment are often underestimated, as is the time needed to make these adjustments. It would be highly worthwhile to assess the existing methodologies for estimating mitigation and transition costs.

While many "integrated assessment models" try to provide policy makers with both the identification of trends that policy must address and the implications of specific policy choices, a common weakness of such models is that they omit modeling of the myriad policy responses themselves, which influence the trend patterns that must be addressed in later periods. The early Club of Rome models presumed no responses to greatly increased pollution, resource depletion, and so on. Some efforts, particularly from economics, try to finesse the representation of policy response by using aggregate-level optimization models. For example, the policy response to higher energy prices might be modeled by assuming that adjustments will minimize risk-adjusted energy costs, implying that transitions from one energy source to another follow a highly rational logic. The limitations of this approach lie in the strong possibility that real systems do not adjust automatically and immediately (because of rigidities in shifting policies and the uncertainties that policy makers face) and in the fact that policy makers' preferences reflect institutional and personal interests that cannot be captured by assuming system-level optimization. It is worth assessing whether policy response models can be developed through historical or theoretical analyses of the conditions and tipping points of policy response, through scenario-writing techniques, or through simulations in which individuals are asked to play out various policy-making roles.

Forecasting Nonlinear Trends

The capacity to model long-term effects is especially challenging, because the cumulative impacts of gradual changes are often subject to threshold effects that are difficult to model and to time. Significant effort has been put into determining how threshold effects can be represented mathematically, but this does not yield insights into what levels actually trigger nonlinear changes. Thresholds occur when there are changes in the interactions between drivers and affected aspects of the ecosystem (e.g., pollutant concentrations impinge on the chemical processes of life forms; depletion of particular resources makes them price-uncompetitive for large-scale use, as in the case of certain timber and fish species).
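A minimal sketch of why threshold timing is so sensitive to assumptions: the starting level, growth rate, and candidate threshold values below are all hypothetical, but sweeping the assumed threshold shows how widely the projected year of crossing can vary.

```python
def years_to_threshold(initial, growth_rate, threshold):
    """Years until a pollutant concentration growing at a constant
    proportional rate crosses an assumed ecological threshold
    (hypothetical dynamics, for illustration only)."""
    level, year = initial, 0
    while level < threshold:
        level *= (1.0 + growth_rate)
        year += 1
    return year

# Sensitivity analysis: sweep the assumed threshold over a plausible range
# and report the spread of projected crossing times.
assumed_thresholds = [120.0, 150.0, 200.0, 300.0]
crossings = {t: years_to_threshold(100.0, 0.02, t) for t in assumed_thresholds}
print(crossings)
```

Even in this trivially simple system, halving or doubling the assumed threshold shifts the projected regime change by decades, which is exactly the kind of spread a sensitivity analysis should make visible to decision makers.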
The forecasting approaches would have to be able to merge the trends in the drivers with knowledge of their biophysical and economic impacts. Considerable uncertainty will remain. Beyond this, it may be that the best we can do is sensitivity analysis to determine how much uncertainty is implied by the range of reasonable assumptions about biophysical and economic threshold levels.

Forecasting Rare Events

As with forecasting nonlinear trends, the forecasting of rare events has been addressed in terms of mathematical representation, but the challenge of developing reliable methods for assessing the potential impact of the entire set of low-probability events has hardly been addressed (Cleaves, 1994). By their very nature, the exhaustiveness of any list of "surprises" can rarely be assured. Even when identified, low-probability events are difficult to characterize in terms of magnitude (for example, a war would certainly affect the environment in various ways, but what scope of war should be posited?). Research on developing methods for taking low-probability events into account is worthwhile because of the importance of the task if accomplished, but one should not be overly confident about the chances of success.

Improving Reliability Through Model Testing and Evaluation

The biggest obstacle to weeding out poor environmental forecasting techniques and models is the long time horizon required of most environmental forecasting efforts. The question, then, is whether other approaches to gauging reliability can be helpful. How effective are:

- tests of short-term forecasts of models developed to do long-term forecasting?
- assessments of the track record of particular approaches in predicting outcomes that have already occurred?
- backcasting (i.e., evaluating the capacity of the method to "predict" the historical pattern)?
- comparisons of different models of apparently equivalent levels of expertise, in order to show how much uncertainty exists within "the state of the art"?

Each of these approaches has some obvious limitations, and although each can help to identify efforts that have already shown indications of deficiency, "passing" these tests does not ensure reliability. Specifically, the short-term accuracy of simple extrapolations or growth curves, as well as of more complex models, cannot speak to the possibility that unanticipated changes in patterns and parameters will emerge subsequently. The value of historical assessments of particular methods is limited, first, by the fact that some methods work particularly well in one period but poorly in another and, second, by the fact that the success of earlier versions of an approach may not predict future success, as methods are subjected to what their developers consider to be continual improvement.
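The logic of such tests, and their central weakness, can be illustrated with a toy backcast: a trend fitted to an early historical window "predicts" the later window well only if the underlying pattern has not shifted. The series below, and the shift built into it, are invented for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical historical series: roughly linear early on, then a
# steeper regime after 1990 (an "unanticipated change in parameters").
years = list(range(1980, 2000))
emissions = [50 + 1.0 * (y - 1980) for y in years[:10]] + \
            [60 + 2.5 * (y - 1990) for y in years[10:]]

# Backcast test: calibrate on 1980-1989, then "predict" 1990-1999.
slope, intercept = fit_linear(years[:10], emissions[:10])
predicted = [slope * y + intercept for y in years[10:]]
errors = [abs(p - a) for p, a in zip(predicted, emissions[10:])]
print(max(errors))  # the growing error reveals the shift in trend
```

The model fits its calibration window perfectly yet misses the later decade badly, which is the sense in which "passing" a historical test cannot guarantee reliability against future changes in patterns and parameters.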
In addition, many environmental forecasts are conditional projections, insofar as they specify policy responses (or the lack thereof) as premises rather than predictions, and these policy response conditions rarely hold precisely.[4] The value of backcasting is compromised by the fact that parameters are typically chosen on the basis of past patterns, leaving open the possibility that future parameters may change greatly. Finally, although discrepancies among forecasts by different state-of-the-art approaches reflect uncertainty, the lack of discrepancies does not necessarily reflect reliability and certainty. Nevertheless, it is worthwhile to assess whether combinations of these approaches can succeed in screening out problematic approaches and evaluating degrees of uncertainty.

APPROACHES TO ENHANCING THE USEFULNESS OF FORECASTING FOR ENVIRONMENTAL DECISION MAKING

The usefulness of forecasts for environmental decision making has five aspects worth analyzing: (1) the capacity to link biophysical trends to the general socioeconomic outcomes and effects that policy makers must address; (2) the capacity to identify specific impacts on particularly relevant groups (e.g., children, the elderly, and other vulnerable populations); (3) the perceived credibility of the forecasts, so that they are more likely to be influential, when appropriate, in policy choice; (4) the appropriate identification and expression of uncertainty; and (5) the integration of the forecasting effort with the other facets of the decision-making process.

Biophysical Trends and Socioeconomic Outcomes and Effects: Forecasting and Valuation

The usefulness of forecasts depends on an additional facet of comprehensiveness: projecting outcomes and effects in addition to drivers. The decision process works by selecting options on the basis of projected outcomes and longer-term effects and then valuing these effects, taking uncertainty and risks into account. Environmental forecasting tends to focus on the physical trends, with only very tentative and often crude methodologies for linking the physical trends to the socioeconomic ones. The decision process cannot digest drivers by themselves. For example, a forecast of a 3°C mean temperature increase, or of a doubled SO2 concentration, is not useful for decision making without knowing the physical impacts and the socioeconomic consequences of those impacts.
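The driver-to-outcome-to-valuation chain can be sketched in a few lines. Every function and parameter here is an assumption made for illustration: the warming rate, the 5 percent per degree loss rate, and the crop value are placeholders, not estimates.

```python
# Hypothetical chain: temperature driver -> crop-yield impact -> economic cost.

def warming_by(year, base_year=2000, rate=0.03):
    """Assumed mean warming (deg C) relative to base_year, at an
    assumed linear rate per year."""
    return rate * (year - base_year)

def yield_loss_fraction(delta_t):
    """Assumed impact function: 5% crop loss per degree of warming,
    capped at total loss."""
    return min(1.0, 0.05 * delta_t)

def economic_cost(year, crop_value=1e9):
    """Translate the biophysical trend into a dollar figure that a
    decision process can weigh (crop_value is a placeholder)."""
    return yield_loss_fraction(warming_by(year)) * crop_value

for year in (2010, 2030, 2050):
    print(year, economic_cost(year))
```

The temperature trajectory alone is a driver; only the last step produces something a decision process can value against the costs of policy options, which is why the linkage, however crude, cannot be skipped.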
To improve this state of affairs, forecasting efforts, whether through formal modeling (e.g., integrated assessment models; see van Asselt, 2000) or organized judgmental techniques such as the Delphi method, have to project outcomes (e.g., disease incidence, population movements, crop changes) and the economic costs or benefits of these trends. A strong assessment of integrated assessment models was completed nearly a decade ago (Weyant et al., 1996); another should be undertaken to determine whether they are bringing enough policy-relevant outcomes and effects into the analysis. The linkage between biophysical forecasting and socioeconomic outcomes and effects depends on combining forecasting and valuation, each a big challenge in itself. Valuation often requires a level of detail that the forecasts lack, sometimes for the good reason that the range of plausible outcomes does not permit such specificity. Long-term economic effects are particularly difficult to forecast, because markets, tastes, and technology may change in highly unpredictable ways.[5] An inventory and assessment of studies that integrate forecasting and valuation would be useful for gauging how well current approaches work and for identifying the obstacles to improvement.

Improving the Sense of Impacts on People: Case-Wise Forecasting

By forecasting the impacts of environmental outcomes and effects on particular classes or types of individuals, rather than simply projecting aggregate trends, the significance of these impacts for policy making can be better assessed, and policy options can be better targeted. For example, the impact of increased industrial concentration on nonmobile retirees who are susceptible to emphysema is likely to be more policy relevant than trends in average exposure. Thus case-wise projections can often provide more policy-relevant insights than forecasts of aggregate trends (Brunner and Kathlene, 1989). Case-wise analysis entails defining a cluster of cases that represent a policy-relevant category and finding prototypes or "specimens." Yet the development of case-wise forecasting methodologies that are sufficiently comprehensive and credible remains incomplete.

Assessing Magnitudes and Types of Uncertainty

The degree of confidence in the accuracy of forecasts plays a pivotal role in determining the degree to which hedging strategies are required. Waiting to eliminate all uncertainty is absurd, although it is often used as a political ploy for inaction. Rather, the decision process needs to recognize and factor in uncertainty. This raises four questions.

Do the expressions of uncertainty get heard accurately? For many types of environmentally related forecasts, the degree and nature of uncertainty are not communicated. For example, Tarko and Songchitruska (2003:2) point out that the U.S.
Highway Capacity Manual and comparable manuals of other countries "return point estimates that in most cases are mean values … none of the existing capacity manuals handle uncertainty in their procedures."

Can protocols, or at least expectations, be established that require forecasters to report on their uncertainty in constructive ways? Placing such requirements in manuals is one approach, but it may also be possible to establish protocols that would be considered best practice, if not compulsory, such that a forecaster who does not express uncertainty constructively would risk a reputational loss. Some progress has been made in this regard through the Moss and Schneider (2000) recommendations for consistent reporting of uncertainty by authors of the Intergovernmental Panel on Climate Change.[6] Similarly, Weiss (2002) has proposed a code of ethics for presenting analysis with clear indications of what is believed to be fact, "mainstream" opinion, minority opinion, and so on, based on legal distinctions among categories of evidence.

Do the different sources of uncertainty get recognized as different? This is important both for credibility and for determining how to reduce or cope with uncertainty. The important point here is that understanding the nature of uncertainty in environmental forecasts has the double role of helping to make environmental decisions now and helping to refine the forecasting techniques themselves. Impressive claims have been made about the potential for partitioning uncertainty into various categories and allocating research resources on this basis (e.g., by the Senior Seismic Hazard Analysis Committee of the U.S. Nuclear Regulatory Commission [Budnitz et al., 1997]). Others have questioned the categories of uncertainty (National Research Council, 1997) and expressed some skepticism about the feasibility of this approach. Nonetheless, it would be worthwhile to apply these uncertainty-parsing techniques to a wide range of environmental forecasting efforts to determine how promising further elaborations of the techniques might be. It would also be worthwhile to develop a taxonomy of uncertainty that is more comprehensive and more practical in terms of current decisions and directions for improving the methodologies. The distinction between so-called epistemic uncertainty (incomplete knowledge about a phenomenon) and aleatory uncertainty (intrinsic randomness) has been shown to be inadequate: the classification of uncertainty as aleatory depends as much on the model employed as on the state of knowledge of the phenomenon. For example, earthquake prediction errors that arise because the models do not attempt to incorporate the details of specific faults would be regarded as due to aleatory uncertainty; the errors of models that do try to model particular faults would be considered epistemic uncertainty.[7]

Insofar as further work on environmental forecasting techniques is useful, can we identify the sources of uncertainty in order to focus efforts on improved theory, better information about the state of nature, better parameter estimation, better exogenous time series, and so on? In short, fine distinctions among types of uncertainty, such as the distinction between uncertainty about the laws of nature and uncertainty about the state of nature, can help orient the most useful research for reducing uncertainty. For example, gaining greater accuracy in projecting certain fish populations may require costly investment in monitoring ocean temperatures and existing fish stocks; but if the models are weak in representing how fish stocks behave, it may be more cost-effective to improve the models than the monitoring.
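The idea of partitioning uncertainty can be sketched with a toy Monte Carlo exercise: total forecast variance is split into a component from an uncertain model parameter (epistemic, here a trend slope drawn from an assumed range) and a residual component from intrinsic variability (aleatory). The model and all numbers are hypothetical, and note that the split itself depends on which terms the model chooses to represent, echoing the caveat above.

```python
import random

random.seed(7)

def forecast(slope, noise_sd):
    """Toy forecast for one future period: trend term plus random shock."""
    return slope * 10 + random.gauss(0.0, noise_sd)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

N = 20000
# Epistemic: we are unsure of the trend parameter (slope drawn from a range).
# Aleatory: intrinsic period-to-period variability (the gauss term).
draws = [forecast(random.uniform(0.8, 1.2), 2.0) for _ in range(N)]
total_var = variance(draws)

# Freeze the parameter at its central value: the remaining variance is
# the "aleatory" share; the difference is attributed to the parameter.
aleatory = variance([forecast(1.0, 2.0) for _ in range(N)])
epistemic = total_var - aleatory
print(round(epistemic, 1), round(aleatory, 1))
```

A large epistemic share would argue for research to pin down the parameter; a large aleatory share would argue for hedging strategies instead, which is the practical payoff of making the partition at all.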

Over the past 30 years, forecasting has made remarkable progress in abandoning the compulsion to project only "the most likely" trend. Quite sophisticated approaches to conducting and displaying sensitivity analyses have been developed (see, for example, Prinn et al., 1999). Yet significant challenges remain in identifying the sources of uncertainty and expressing its magnitude in ways that are most useful to policy making, especially in light of the huge complexity that these latest efforts introduce. Pluralistic collaborations, in which the forecasts generated by multiple models or experts are displayed without forcing a consensus projection, have emerged during the past few years (Rotmans and van Asselt, 2001). The purpose of pluralistic collaborations is not to force or even promote convergence. Nor, as mentioned above, do similar outcomes imply that the forecasts are more reliable, given the common state of affairs that environmental forecasters share outlooks, methods, and data. The appropriate purpose of pluralistic collaborations is to identify where assumptions are contestable and to demonstrate the implications of different assumptions. Whether this demonstration comes through to stakeholders and decision makers, helping them hedge against uncertainty, remains to be evaluated.

Assessing the Reactions to Uncertainty

In light of the inevitability of some degree of uncertainty in environmental forecasting, it is important to understand how analysts, stakeholders, and decision makers react to uncertainty. For example, a common reaction by the public to the perception of uncertainty is to become more skeptical of scientific input; the expression of uncertainty violates the public's "view of science as a simple logical process producing unequivocal answers" (Collins and Bodmer, 1986:98).
A common reaction of analysts engaged in ecosystem valuation is to downplay the less certain aspects of value, frequently leaving the more straightforwardly measurable and monetizable aspects to dominate the valuation and often neglecting such benefits as aesthetics and existence values. In some circumstances, uncertainty is seized upon by status-quo-favoring politicians to paralyze policy action. Through a better understanding of the reactions to uncertainty, better processes for coping with uncertainty can be formulated.

Forecast Credibility

A forecast that is not perceived as credible is of little use, no matter how accurate and enlightening it may be. How can environmental forecasts be conveyed so that the honest expression of uncertainty does not undermine credibility? Without such assurances, forecasters will continue to be tempted to understate uncertainty so as to avoid losing credibility. It is known that perceived usefulness (which presumes credibility) in the eyes of policy makers often depends on whether the forecast (or other expert input) corresponds to the potential user's preconceived beliefs, whether the decision maker is involved in the analytical effort, and whether the analysis provides results that are compatible with the policy questions that must be decided (Weiss, 1977). In terms of perceived credibility per se, systematic research on its correlates is in its infancy, although it seems reasonable to hypothesize that credibility depends on (1) the track record of prior forecasts associated with the same forecasting group;[8] (2) the credentials of the forecasters, both personally and in terms of the prestige of their institutional affiliations; (3) the perception of impartiality, based in part on the neutrality or balance of the sponsorship of their studies;[9] (4) the transparency and plausibility of assumptions and methods;[10] (5) the plausibility of the forecasts;[11] (6) the perception of honesty in expressing uncertainty; and (7) the involvement of decision makers and stakeholders in the forecasting process.[12] User surveys would be useful for determining the impact of such strategies as forecasting multiple scenarios, expressing forecasts in probabilistic terms, forecasting ranges rather than point estimates, cofunding of forecasting efforts by opposing groups, and so on.

Integrating the Forecasting Effort into the Overall Decision-Making Process

Although it has become accepted wisdom that stakeholder and decision-maker participation in forecasting efforts often provides impressive benefits, the optimal means of securing such participation remain understudied. It is likely that the best ways to involve stakeholders and decision makers will vary considerably across different contexts.
Yet it would still be worthwhile to gather and analyze more cases, in the vein of the Cash et al. (2002) study, to determine both the core lessons and the variations on success or failure.

Forecasts as Decision Aids

Despite disappointments in the progress of simulations as analytic tools, their potential for use as heuristic models remains. The development of simulations as part of the toolkit of policy dialogue is not rocket science, but it may be a very important step in bringing environmental consequences to the attention of policy makers. Many prior simulation models designed for direct interaction with policy makers were black boxes without particularly reliable or credible outputs. A new generation of decision simulation tools is emerging, but these still face the problem that their increasing complexity can obscure the most robust dynamics.[13] These decision tools require that both the assumptions and the uncertainties be made explicit. One of the great virtues of doing multiple computer runs, whether with one or more models, is that the uncertainties become obvious as the variations in inputs or model specifications produce different outcomes. An inventory and assessment of current model-based decision-aid ensembles would be useful, and support for refining the most promising of them would be a good investment.

SUMMARY OF ASSESSMENT AND RESEARCH PRIORITIES

- Compile and assess approaches to balancing comprehensiveness and selectivity, specifically:
  - approaches for mapping potentially relevant dynamics and then selecting among them for further analysis
  - methods for coping with multiple low-probability events
- Assess approaches designed to combine biophysical and socioeconomic effects, specifically:
  - methods for estimating mitigation and transition costs
  - methods for linking biophysical trends to valuation
  - methods for anticipating policy responses and linking these responses to subsequent trends
  - integrated assessment models
- Develop sensitivity analysis approaches to gauge the uncertainty levels of applications of nonlinear trend forecasting.
- Develop more general approaches to case-wise analysis of environmental change impacts on specific groups, and explore how to present case-wise results to enhance reliability and perceived credibility.
- Develop and promote protocols for the systematic expression of uncertainty in forecast reports.
- Assess whether the presentation of different results ("pluralistic collaborations") helps to convey uncertainty appropriately and to clarify the implications of different assumptions.
- Assess the reactions to uncertainty on the part of the public, stakeholders, and decision makers; develop processes that reduce unconstructive reactions to uncertainty.
- Refine the identification of correlates of perceived credibility (e.g., specific modes of decision-maker participation; specific ways of expressing uncertainty).
- Develop more refined frameworks to characterize types of uncertainty; assess whether identifying different types of uncertainty can help to make uncertainty reduction more efficient.
- Assess forecast evaluation techniques (short-term validation, track record of prior forecasts, backcasting, comparisons of alternative forecasts).

Assess existing methods and develop more refined approaches for stakeholder and decision-maker participation in environmental forecasting efforts.

Assess decision-aid forecasting models.

NOTES

1. The variation in adaptive management approaches is reflected in the contrast between Kai Lee's conception of adaptation as reevaluating optimal policy in reaction to feedback (Lee, 1993) and Carl Walters's conception of adaptation through ambitious experimentation, often nonoptimal, to learn enough about the system to formulate better policies (Walters, 1986).

2. "Credibility" has two senses: whether information is perceived as warranting acceptance as reliable, and whether information intrinsically deserves to be perceived as such. To avoid confusion, this analysis will refer to the former as "perceived credibility" and to the latter as "reliability."

3. Lasswell (1971:86-88) lists comprehensiveness and selectivity, along with openness, dependability, and creativity, as the criteria for evaluating the intelligence function.

4. For example, a global warming forecast may be designed to project the consequences of failure to enact the Kyoto Protocol. If the protocol is implemented more fully than this premise envisions, then the discrepancies between the predictions and the actual results are not "forecast errors" in the same sense as the discrepancies between an absolute forecast and actual outcomes. For the implications of conditional forecasting for the difficulty of assessing forecast accuracy, see Ascher (1989).

5. An interesting example can be found in Kenny et al. (2000), which tries to assess the economic impact of higher temperatures in New Zealand on the production of kiwifruit and corn, as well as the incursion of an invasive grass into dairy pasture. The economic consequences of higher temperatures on the kiwifruit crop are complicated not only by the uncertainties of temperature increases, but also by whether technologies such as chemicals to inhibit premature budding and flowering of kiwifruit will progress, and whether kiwifruit exports will remain as profitable over the long run given changes in tastes and potential competition from other countries.

6. The summary volume of Climate Change 2001: Impacts, Adaptation, and Vulnerability (McCarthy, Canziani, Leary, Dokken, and White, 2002) notes that among Intergovernmental Panel on Climate Change reports "there was no consistent use of terms to characterize levels of confidence in particular outcomes or common methods for aggregating many individual judgments into a single collective assessment. Recognition of this shortcoming … led to preparation of a guidance paper on uncertainties … for use by all … Working Groups and which has been widely reviewed and debated" (McCarthy et al., 2002:128).

7. The 1997 National Research Council panel that reviewed the Nuclear Regulatory Commission's Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts noted that "epistemic uncertainty would be much greater if, in the assessment of seismic hazard at an eastern U.S. site, instead of representing random seismicity through homogeneous Poisson sources one used a model with an uncertain number of faults, each with an uncertain location, orientation, extent, state of stress, distribution of asperities, and so forth. As little is known about such faults, the total uncertainty about future seismicity and the calculated mean hazard curves would be about the same, irrespective of which model is used. However, the amount of epistemic uncertainty would be markedly different; it would be much greater for the more detailed, fault-based model. Consequently, the fractile hazard curves that represent epistemic uncertainty would also differ greatly … [U]nless one accepts

that all uncertainty is fundamentally epistemic, the classification of … uncertainty as aleatory or epistemic is ambiguous" (National Research Council, 1997:32-33) (bold text in original).

8. Agrawala and Broad (2001:465) point to this factor in their assessment of climate change forecasts.

9. Busenberg (1999) found this in his assessment of scientific input on environmental risks such as oil spills.

10. For example, Gibbons (1999) notes the problems that nontransparency has caused in the credibility of predictions of the impact of genetically altered organisms.

11. This is closely related to the correspondence of the prediction with preconceived expectations (Weiss, 1977), but the plausibility of previously unexpected predictions can be reinforced by explanation.

12. Weiss (1977) found this to be crucial to the impact of expert input in general in her study of U.S. federal executives' acceptance of technical input. Andrews (2002) found similar patterns in exploring the perceived legitimacy of scientific input by such organizations as the U.S. Office of Technology Assessment. Cash et al. (2002) found the interaction across the science-nonscience boundary to be important across a broad range of environmental issues.

13. An assessment of the usefulness of stakeholder group exposure to seven global climate change models concluded that "computer models were successful at conveying to participants the temporal and spatial scale of climate change, the complexity of the system and the uncertainties in our understanding of it. However, most participants felt that … most models were not sufficiently user-friendly and transparent for being accessed in an [Integrated Assessment] focus group" (Dahinden, Querol, Jäger, and Nilsson, 2000:253). Welp (2001:538) reaches the same conclusion.

REFERENCES

Agrawala, S., and K. Broad
2001 Integrating climate forecasts and societal decision making: Challenges to emergent boundary organizations. Science, Technology, and Human Values 26(4):454-477.

Andrews, C.J.
2002 Humble Analysis: The Practice of Joint Fact-Finding. Westport, CT: Praeger.

Ascher, W.
1989 Beyond accuracy: Progress and appraisal in long-range political-economic forecasting. International Journal of Forecasting 5(4):469-484.

Brunner, R.D., and L. Kathlene
1989 Data utilization through case-wise analysis: Some key interactions. Knowledge in Society 2:16-38.

Budnitz, R.J., G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris
1997 Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and the Use of Experts. Senior Seismic Hazard Analysis Committee. Washington, DC: U.S. Nuclear Regulatory Commission.

Busenberg, G.
1999 Collaborative and adversarial analysis in environmental policy. Policy Sciences 32(1):1-11.

Cash, D.W., W. Clark, F. Alcock, N. Dickson, N. Eckley, and J. Jäger
2002 Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. (Working Paper RWP02-046.) Cambridge, MA: John F. Kennedy School of Government Faculty Research, Harvard University.

Cleaves, D.A.
1994 Assessing Uncertainty in Expert Judgments About Natural Resources. (U.S. Forest Service General Technical Report SO-1.) New Orleans, LA: U.S. Forest Service.

Collins, P.M.D., and W.F. Bodmer
1986 The public understanding of science. Studies in Science Education 13:98.

Dahinden, U., C. Querol, J. Jäger, and M. Nilsson
2000 Exploring the use of computer models in participatory integrated assessment—Experiences and recommendations for further steps. Integrated Assessment 1:253-266.

Gibbons, M.
1999 Science's new social contract with society. Nature 402:C81-C84.

Kenny, G.J., R.A. Warrick, B.D. Campbell, G.C. Sims, M. Camilleri, P.D. Jamieson, N.D. Mitchell, H.G. McPherson, and M.J. Salinger
2000 Investigating climate change impacts and thresholds: An application of the CLIMPACTS integrated assessment model for New Zealand agriculture. Climatic Change 46:91-113.

Lasswell, H.D.
1971 A Pre-View of Policy Sciences. New York: Elsevier.

Lee, K.
1993 Compass and Gyroscope: Integrating Science and Politics for the Environment. Washington, DC: Island Press.

McCarthy, J.J., O.F. Canziani, N.A. Leary, D.J. Dokken, and K.S. White, eds.
2002 Climate Change 2001: Impacts, Adaptation, and Vulnerability. Cambridge, England: Cambridge University Press.

Moss, R.H., and S.H. Schneider
2000 Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. Pp. 33-51 in Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC, R. Pachauri, T. Taniguchi, and K. Tanaka, eds. Geneva, Switzerland: World Meteorological Organization.

National Research Council
1997 Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. Panel on Seismic Hazard Evaluation, Committee on Seismology, Board on Earth Sciences and Resources, Commission on Geosciences, Environment, and Resources. Washington, DC: National Academy Press.

Prinn, R., H. Jacoby, A. Sokolov, C. Wang, X. Xiao, Z. Yang, R. Eckhaus, P. Stone, D. Ellerman, J. Melillo, J. FitzMaurice, D. Kicklighter, G. Holian, and Y. Liu
1999 Integrated global system model for climate policy assessment: Feedbacks and sensitivity studies. Climatic Change 41:469-546.

Rotmans, J., and M. van Asselt
2001 Uncertainty management in integrated assessment modeling: Towards a pluralistic approach. Environmental Monitoring and Assessment 69:101-130.

Tarko, A.P., and P. Songchitruksa
2003 Reporting Uncertainty in the Highway Capacity Manual—Survey Results. Paper presented at the Transportation Research Board Annual Meeting, January 13, Washington, DC.

van Asselt, M.
2000 Perspectives on Uncertainty and Risk. Dordrecht, The Netherlands: Kluwer.

Walters, C.
1986 Adaptive Management of Renewable Resources. Caldwell, NJ: Blackburn Press.

Weiss, C.
2002 Scientific uncertainty in advising and advocacy. Technology in Society 24:375-386.

Weiss, C.H., ed.
1977 Using Social Research in Public Policy Making. Lexington, MA: D.C. Heath and Company.

Welp, M.
2001 The use of decision support tools in participatory river basin management. Physics and Chemistry of the Earth, Part B: Hydrology, Oceans & Atmosphere 26(7-8):535-539.

Weyant, J., O. Davidson, H. Dowlatabadi, J. Edmonds, M. Grubb, E.A. Parson, R. Richels, J. Rotmans, P.R. Shukla, R.S.J. Tol, W. Cline, and S. Fankhauser
1996 Integrated assessment of climate change: An overview and comparison of approaches and results. Pp. 367-396 in Climate Change 1995: Economic and Social Dimensions of Climate Change: Scientific-Technical Analysis, Intergovernmental Panel on Climate Change (IPCC). Cambridge, England: Cambridge University Press.