water shortage. Though such case studies give some basis for estimating the value of climate forecasts, they do not separate climate forecast-related behavior from behavior that may be determined by other factors.
User surveys ask representative samples of respondents to value climate forecasts (Easterling, 1986; McNew et al., 1991). Hence, they are really studies of the perceived value of such forecasts (Stewart, 1997). Stewart argues that user surveys are reliable instruments for gauging subjective forecast value.
Several investigators have relied on interviews and closely related protocol analysis to learn how valuable climate forecasts are to decision makers (e.g., Changnon, 1992; Sonka et al., 1992). Stewart describes these techniques as the characterization of forecast users' decision-making protocols through extensive interviews. For example, Glantz (1977) interviewed a wide range of decision makers in Sahelian Africa to determine what they said they would have done differently had a perfectly accurate forecast of the recently experienced 1973 drought been available to them. He learned that, given the lack of effective response strategies, most Sahelian decision makers were skeptical that even a perfect forecast would have led them to act differently. Like most of the other descriptive techniques reviewed above, interviews and protocol analysis lack an experimental design that allows causal relations to be identified unambiguously.
Decision experiments take a gaming approach to eliciting information about the value of forecasts to decision makers. Actual decision makers participate in the experiments: they are presented with detailed forecast scenarios and asked to explain what their actions and reasoning would be under each scenario. A regression model is then developed to "predict" participants' hypothetical behavior with respect to forecast use. Sonka et al. (1988) used decision experiments to model the behavior of two managers responsible for production planning at a major seed corn manufacturing company. The main limitation of decision experiments is that behavior in actual situations may differ systematically from behavior in the simulation.
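The regression step in such a decision experiment can be sketched in miniature. The fragment below is an illustration only, not the Sonka et al. (1988) model: the scenario attributes (forecast skill, lead time), the decision rule generating the responses, and all coefficients are invented for the sake of the example. It fits a simple linear-probability model of whether a participant says they would act on a forecast, given the scenario attributes they were shown.

```python
# Hypothetical decision-experiment data: scenario attributes and stated
# forecast-use decisions. All values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                   # number of hypothetical scenarios
skill = rng.uniform(0.5, 1.0, n)         # assumed forecast accuracy shown to participant
lead_time = rng.uniform(1, 6, n)         # assumed lead time (months) shown to participant

# Invented response rule: participants say they would use the forecast more
# often when skill is high and lead time is long (plus noise).
would_use = (0.8 * skill + 0.05 * lead_time
             + rng.normal(0, 0.1, n) > 0.8).astype(float)

# Linear-probability regression of stated behavior on scenario attributes.
X = np.column_stack([np.ones(n), skill, lead_time])
beta, *_ = np.linalg.lstsq(X, would_use, rcond=None)

# Fitted propensity to use the forecast under each scenario.
predicted = X @ beta
```

The fitted model then "predicts" how the participant would respond to scenarios not actually presented, which is exactly where the paragraph's caveat bites: the regression captures stated behavior in the game, not behavior in a real decision.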
Easterling and Mendelsohn (in press) used a Ricardian econometric approach to estimate cross-sectional relationships among climate, agricultural land values, and revenues in the United States. Assuming that these relationships are conditioned by cropping systems that are strongly, though not perfectly, adapted to their local average climatic resources (including variability and the frequencies of extreme events), the econometric model provides a baseline from which to quantify imperfect adaptation to widespread climate events marked by extreme departures from historical averages. Easterling and Mendelsohn argue that the revenue differences between the baseline and drought conditions, net of input substitutions