include increased knowledge-based confidence among those audiences in their ability to organize effective programs. Research may also lead those involved in network-building activities, including those responsible for SARP-sponsored workshops and pilot projects, to organize these activities in new and more effective ways. It may likewise lead to changes in the way workshops and pilot projects are evaluated and in the specifications written to request proposals for such projects.

An important ultimate impact of research would be more effective integration of climate information by decision makers. However, such mission-related impact metrics are unlikely to show discernible progress in the short term, for the reasons discussed above. On a longer timescale, it will be even more difficult, if not impossible, to separate the impacts of research efforts from those of program implementation.

CONCLUSION

Textbook program evaluations can be very valuable. However, given SARP's small size, the expectation that desired outcomes will take at least several years to achieve, the multiple types and levels of decisions that climate information could influence, the variety of relevant decision makers, and the multiplicity of programmatic approaches for shaping decision support systems, such an evaluation approach is not appropriate for SARP. Instead, we recommend a monitoring approach.

A monitoring approach aims to record and analyze trends in metrics appropriate to each type of SARP activity (pilot projects, workshops, and use-inspired research). Drawing on earlier work, we have identified several possible metrics for each type of activity. A regular monitoring scheme should track multiple metrics: some that record processes within SARP and some that capture outputs and outcomes. Data should be recorded at regular intervals, perhaps annually. Whenever possible, monitoring should rely on existing data sources and on data that can be collected reliably without substantial time and resources, so as to limit the burden of monitoring this small program. Representatives of the target audiences themselves should be asked to help decide which data collection efforts and surveys would be most useful for monitoring SARP performance.

We recognize that because the program is small and its context is rapidly changing, any form of evaluation will be challenging. Nevertheless, it is important for SARP to be able to learn from experience. It is therefore worthwhile to conduct careful comparative research on the results of major SARP initiatives and to seek to understand how outputs and outcomes are affected by program inputs, the characteristics of the decision arenas, and other factors.


