Public agencies involved in environmental, water, and energy resource management have a variety of modeling and information management tools that use climate information. For example, USACE, Bureau of Reclamation, Departments of Energy and Agriculture, Environmental Protection Agency, Forest Service, and Geological Survey all have freely available models that have climate as a primary driver. For the most part, these models have been developed as simulation tools with the intention of managing long-term climate risk. Consequently, their management applications relate to infrastructure sizing and design for mitigating long-term risk, planning, regulatory and operation rule evaluation and formulation, and assessment of the impacts of specific practices on environmental attributes. Some of these models have explicit probabilistic inputs and outputs, whereas others are simulation models whose inputs and outputs could be treated as probabilistic. Most have very limited, if any, consideration of probabilistic hydrometeorological forecasts, given that their legacy dates back to the 1950s or 1960s in some cases. However, in what represents a fundamental shift in management thinking and an opportunity for stronger links between forecast producers and manager-users of forecasts, the USACE, the main policy-establishing agency for water resources management through large reservoir facilities in the United States, will now consider reservoir operations based on forecasts.17
Almost all of these agencies recognize the need for characterizing and managing uncertainty as part of their mission. However, often because of legal strictures and sometimes because of inertia, consideration is limited to long-term risk because it matches a regulatory purpose. Nonetheless, most environmental risk management problems are inherently multiscale, both temporally and spatially. While most of these agencies have operational responsibilities to ensure long-term performance, they are also responsible for responding to events or operational exigencies that result from the residual dynamic risk. If NWS seeks to enhance applications of its probabilistic products within the public sector, an important first goal would be launching joint initiatives to consider a comprehensive approach to environmental risk management driven by probabilistic hydrometeorological products and also by changing landscape and social settings. A second point of NWS engagement with the other federal agencies could be to participate with them in addressing one or two high-profile environmental or agricultural projects where probabilistic seasonal forecasts could have a significant impact. This would provide a concrete example of multiagency proactive efforts to bring science forward to address emerging problems. It would also engage the academic and other communities interested in tackling complex decision problems through innovation in the decision sciences. In addition, NWS would learn more about which probabilistic products to provide.
In general, as NWS moves forward in its interaction with and support of users of sophisticated decision-support systems, it will need to be cognizant that the use of new forecasts in old decision-support systems tailored for deterministic forecasts may actually degrade system performance (e.g., Yao and Georgakakos, 2001), unless the underlying decision rules are also updated to account for the uncertainty information.
This section provides general guidance on how to identify and characterize user needs. It builds on material from the preceding two sections that describes how decision makers interpret and use uncertain information. The complexity of this task—with a large number of interacting factors influencing the effectiveness of different communication formats and their use in forecast-related decisions—puts any precise specification of user needs far beyond the ability of a single committee. (The private sector, for example, spends millions of dollars each year on customer research.) Instead, the committee recommends a process by which NWS can develop an effective system of provider-user interactions that will lead to identification of user needs and the design and testing of effective probabilistic forecast formats.
As mentioned previously, NHC collected user data about its cone-of-uncertainty format for hurricane track forecasts in the aftermath of Hurricane Charley. It requested public comments on the original graphic and two new alternatives on its Web site and asked respondents to vote for their preferred graphic from among the three options. This was not a representative survey of the general population. Because it was conducted online, participation was strongly biased toward those with Internet access and, perhaps more importantly, a preexisting interest in NHC and its Web site. The call for comments was advertised by issuing a Public Information Statement to the media, emergency managers, and the private sector, and by posting on the Tropical Prediction Center Web site. Thus, the survey was based entirely on individuals self-motivated to take it, which almost certainly produced a highly skewed sample. In addition, no demographic information was collected, making it impossible to assess the representativeness of the sample even on basic demographic characteristics. These are problems that could have been easily avoided had NWS consulted expertise in survey design and sampling.