
Analysis of Global Change Assessments: Lessons Learned (2007)

Chapter: 5 Advice for Effective Assessments

Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.

5
Advice for Effective Assessments

Certain strengths and weaknesses, common to several assessments analyzed by the committee, illuminate critical features of effective assessments. For example, a well-defined mandate and consistent support from those requesting the assessment contributed importantly to the effectiveness of the Arctic Climate Impact Assessment (ACIA) and the Stratospheric Ozone Assessments, while the process outcome of the Global Biodiversity Assessment (GBA) was impaired by the lack of a clear mandate from the target audience. Several assessments benefited significantly from well-articulated, multifaceted, and extensive communication strategies. The Ozone Assessments were especially effective in providing relevant information for decision-making processes, whereas the ACIA was particularly outstanding in the scope of its communications outreach. Other components of effective assessments included superior leadership, extensive and well-designed stakeholder engagement, and a transparent and effective science-policy interface. Perhaps the most common weakness of past assessments has been a discrepancy between the scope of the mandate and the funding provided for the assessment effort.

Drawing both on the analysis in this and the preceding chapter and on the relevant literature, the committee identified 11 essential elements of effective assessments:

  1. A clear strategic framing of the assessment process, including a well-articulated mandate, realistic goals consistent with the needs of decision makers, and a detailed implementation plan.

  2. Adequate funding that is both commensurate with the mandate and effectively managed to ensure an efficient assessment process.

  3. A balance between the benefits of a particular assessment and the opportunity costs (e.g., commitments of time and effort) to the scientific community.

  4. A timeline consistent with assessment objectives, the state of the underlying knowledge base, the resources available, and the needs of decision makers.

  5. Engagement and commitment of interested and affected parties, with a transparent science-policy interface and effective communication throughout the process.

  6. Strong leadership and an organizational structure in which responsibilities are well articulated.

  7. Careful design of interdisciplinary efforts to ensure integration, with specific reference to the assessment’s purpose, users’ needs, and available resources.

  8. Realistic and credible treatment of uncertainties.

  9. An independent review process monitored by a balanced panel of review editors.

  10. Maximizing the benefits of the assessment by developing tools to support use of assessment results in decision making at differing geographic scales and decision levels.

  11. Use of a nested assessment approach, when appropriate, using analysis of large-scale trends and identification of priority issues as the context for focused, smaller-scale impacts and response assessments at the regional or local level.

The committee concludes that attention to these elements, many of which have been identified in the previous literature, increases the probability that an assessment will be credible, legitimate, and salient, and therefore will effectively inform both decision makers and other target audiences. In the following findings and recommendations, the committee provides general guidance for incorporating these elements into future assessments.

FRAMING THE ASSESSMENT

Establishing a Clear Mandate

Whether domestic or international in scope, an assessment will benefit from an authorizing environment that ensures it has a clear mandate and the resources necessary to respond to the task. As described in the case of the National Assessment of Climate Change Impacts (NACCI), the process was greatly facilitated by the fact that it was mandated by the U.S. Global
Change Research Act (GCRA) of 1990. This requirement helped develop support for the assessment from representatives of the agencies participating in the U.S. Global Change Research Program (GCRP). Similarly, a clear mandate and a strong authorizing environment were provided when a formal proposal was adopted by the foreign ministers of the eight Arctic countries to establish the ACIA. In this case, the mandate by the foreign ministers of the Arctic countries guaranteed that the assessment had a target audience that was not only receptive but also interested in the outcome (Corell 2006). Lack of a mandate can be considered the most significant pitfall of the GBA (Watson 2006). Although the GBA was established by the United Nations Environment Programme to provide a scientific basis for the Convention on Biological Diversity (CBD), it was never formally authorized by the CBD and as a result encountered a number of barriers that might have otherwise been avoided.

These examples illustrate how an assessment with a clear mandate from decision makers is more likely to be viewed as legitimate and salient, particularly if the mandate specifies the degree to which the assessment should identify and evaluate policy or other response options. In fact, maintaining legitimacy requires an assessment to respect the boundary between science and policy: it should provide policy-relevant information without exceeding its mandate by offering policy-prescriptive recommendations unless clearly asked to do so.

Specifying Goals and Objectives

The state of the relevant scientific knowledge and the decision-making context that an assessment seeks to inform affect the kinds of decisions to which it is relevant, which in turn should help define the appropriate goals and objectives of an assessment. For example, the goal may be to establish the state of knowledge, to indicate the latest understanding of impacts, or to provide response options. For the last of these, the goal might be to provide information on the effects of alternative response strategies on relevant impact categories. Different goals will have profound implications for the decisions undertaken during the course of an assessment. Confusion on the part of participants regarding the goals of the assessment can severely limit its effectiveness. Assessment participants must come to a mutual understanding of what they are being asked to do in response to the charge.

As part of the mandate, goals and objectives need to be well articulated, including the kinds of decisions that the assessment should inform, how the assessment will be implemented, and how progress toward goals will be measured (NRC 2005). In particular, in the scoping phase of the assessment, organizers need to identify the target audience and the decisions it is intended to inform as well as the types of information needed to make those decisions. The ACIA provides an excellent example of an assessment with clear goals and objectives that were set by the sponsors. These goals appeared to remain intact throughout the assessment process.

Since external conditions may evolve, goals and objectives may have to be adjusted during an assessment to remain salient to the target audience. The ozone assessments provide an exemplary case of an adaptive assessment process, including explicit changes in goals and objectives made by the authorizing body in response to changes in the state of the knowledge. When changes in approach are required, they should involve a deliberate and transparent decision on the part of the authorizers and leadership of the assessment to ensure legitimacy. To ensure credibility and legitimacy, the rationale for such changes has to be well documented.

An Appropriate Framework

Because design choices made during the initiation of the assessment tend to be structural and difficult to change once the process is under way, they have to be considered carefully. An appropriate framework, which may vary depending on the type of assessment, is required to answer the mandate effectively. Ideally, the framework is specified in advance to guide the process. Problems often arise when a discrepancy exists between the questions to be addressed and the framework and approach used.

The ACIA took great care in preparing such a framework by conducting meetings during a two-year planning phase prior to initiating the assessment process. It drafted a proposal specifying the approach, which was vetted by the Arctic Council and the participants. The vetted proposal provided clear guidelines and guaranteed ownership by the relevant target audience and stakeholders.

The ozone assessments and the Intergovernmental Panel on Climate Change (IPCC) provide good examples of choosing an appropriate framework. Both of these studies include effective response assessments in addition to process and impact assessments. The focus on identifying specific technologies to reduce environmental impacts in particular applications or industrial processes seems to have especially benefited the ozone assessments, given the impact the Technology and Economic Assessment Panel (TEAP) had on technology choices, and hence decision making, in the private sector. The effort to engage individuals from the private sector with the technical expertise necessary to distinguish technical feasibility from economic feasibility has been particularly noteworthy. Therefore, the committee considers the TEAP a particularly effective model for a technology assessment, a special case of response assessment.

Although the NACCI was designed primarily to be an impact assessment, it required the development of scenarios linking atmospheric concentrations of greenhouse gases to their impact on key climate variables. Outputs from general circulation models were developed based on scenarios of human and natural emissions to provide the inputs for sectoral and regional analyses. In fact, most assessments of the impacts of global change require incorporation of basic science (process assessments). Since the NACCI was not limited to incorporating references to preexisting literature in assessing impacts, it was able to use the latest model run outputs. However, because of a late start, it was not able to incorporate a phased approach in the selection of specific climate model scenarios. As a result, those responsible for the impact assessment were lacking some of the necessary scientific information to guide their deliberations. In addition to timing issues, the NACCI was also criticized because the models were not generated by U.S. scientists. This example illustrates how the choice of information to be included may affect the credibility of the process, but more importantly, it stresses the importance of project management and timeliness.

The NACCI also illustrates how important the choice of scope and geographic scale is for many other design issues of the assessment. The scope and scale of the assessment should be chosen to match the scope and scale of the decisions it is intended to inform. In the case of climate change, there are a multitude of decisions at various scales spanning numerous public- and private-sector decision makers that could potentially benefit from an assessment. The scope and scale of the information produced (among other considerations) will determine whether or not the assessment outcomes are salient. The choice might also require a phased approach—for example, starting with a more global or national assessment, followed by regional-scale assessments. In fact, the NACCI process was intended as an iterative process, where the national assessment would inform the regional-scale assessment and vice versa. Clearly, such decisions have to be addressed during the inception of the assessment as part of the general framework.

The framework is best specified in a guidance document. This guidance document needs to clearly articulate the mandate and specify the decision the assessment should inform, which will guide the type of assessment (process, impact, response) required. Other issues that the framework should specify include the degree of integration necessary, scope, timing, target audience, leadership, communication strategy, funding, and measures of success. In addition, the respective roles of those requesting and those funding the assessment in scoping the assessment should be clarified in the original guidance document in order to avoid major discrepancies between the assessment’s mandate, expected results, and available funding. Establishing these guidelines in advance and putting them through a vetting process will greatly enhance salience, credibility, and legitimacy.

It is also important to specify how information is judged as credible enough to be included in the assessment. Standards must be established for the inclusion of information. In the case of process assessments, this may simply require a decision on whether only peer-reviewed information should be included or whether original research, not yet peer reviewed, can be considered. In the case of impact assessments, the development of standards becomes much more difficult because of the need to include stakeholder information, such as indigenous knowledge, and the need to make value judgments to some degree. Further, integrated assessments will result in new combinations of multiple types of preexisting knowledge. Although alternative interpretations of the knowledge base are inevitable and must be reflected in the report, standards must be developed to determine what belongs in the knowledge base and how to reflect uncertainty. To ensure the greatest credibility, the knowledge base must be evidence based and not value based, particularly in impact assessments.


Recommendation: The leadership of and those requesting assessments should develop a guidance document that provides a clear strategic framework, including a well-articulated mandate and a detailed implementation plan realistically linked to budgetary requirements. The guidance document should specify the decisions the assessment intends to inform; the assessment’s scope, timing, priorities, target audiences, leadership, communication strategy, funding, and the degree of interdisciplinary integration needed; and measures of success.

Strategic Plan for CCSP’s Assessment Activities

Although the U.S. Climate Change Science Program (CCSP) has a mandate under GCRA to conduct assessments, the program lacks a long-term strategic framework for meeting this mandate. Prior to undertaking future assessments, CCSP will need to clearly express long-term goals for the program as well as specific goals for each assessment, specifying decisions the program intends to inform. A strategic plan comprising overall goals, mandate, and implementation strategy for CCSP assessment activities would enhance the salience, credibility, and legitimacy of future assessments—especially if the plan is accepted at high levels of government as well as within the science agencies and the scientific community. Such an overarching long-term strategic plan for CCSP assessment activities would foster programmatic and funding continuity that could adapt to evolving circumstances and to changes in political administration.


Recommendation: The CCSP should develop a broad strategic plan for its assessment activities that focuses not only on a specific short-term objective, such as preparing the next report or assessment product, but also on longer-term objectives that are in the national interest.


ADEQUATE FUNDING

A common problem of several past assessments has been the discrepancy between the mandate and the funding. For example, the NACCI was funded within preexisting agency budgets, and funds were not available for all regions and sectors in a timely way. This resulted in limited support for some teams and delayed funding for others. Overall, it led to unevenness in the team reports and exacerbated difficulties in creating a coherent and consistently high-quality synthesis of all regions and sectors. The budget for the GBA was much lower than for other global assessments of its scope. This had a deleterious effect on the number of meetings that could be held, the number of reports that could be prepared and published, and the size of the support staff. Because funding is required for broad and representative participation of experts and stakeholders; for administrative support to facilitate the compilation, writing, and review of the product; and for extensive communication, dissemination, and outreach efforts, insufficient funds can jeopardize critical aspects of an assessment. This discrepancy between mandate and funding can stem from the fact that those who mandate the study do not actually fund it. As an example, the ACIA had a very clear mandate from the Arctic Council and the International Arctic Science Committee; however, the funding came from government agencies in the eight Arctic nations. This became a problem from the beginning because the agencies tasked with providing funds were not all fully supportive of the activity.

When broad expert participation is required, the legitimacy of the assessment might be questioned if insufficient funds are available to support individuals from stakeholder communities who would not otherwise be able to participate. Lack of sufficient funding can have a particularly negative effect on the legitimacy of a global assessment if it limits or prevents developing-country involvement.

A well-designed communication strategy requires significant administrative support throughout the process, particularly at the end for dissemination. Therefore, funds need to be reserved for the final phase, when intensive outreach and dissemination efforts are required. Because inadequately funded assessments may have to shortchange critically important process steps, leading to a loss of credibility, legitimacy, and/or salience, organizers must be especially strategic about proceeding with the assessment if funding sources are not secured from the outset. Accordingly, organizers need to address the following questions before initiating the process: Does it make sense to begin with insufficient funding in the hope that additional resources will become available? If so, what are the implications from a process perspective relative to the goals and objectives? Will it compromise the credibility of the assessment? Should the mandate and scope be adjusted to the available funds?


Although the presence of a clear mandate can provide the impetus for fund raising, few assessments had as clear a mandate at their inception as the NACCI, yet a funding shortfall turned out to be a significant problem. An additional consideration relative to funding is the need for programmatic continuity, especially in light of the potential for major changes in direction that result from changes in administration. Since global change research necessarily requires a long-term perspective, abrupt changes in focus and scope can lead to losses in salience, credibility, and legitimacy.

When preparing a framework to be vetted and approved by the sponsors and participants, it is desirable to have the framework contain sufficient detail to link key activities to resource requirements. Large budget items should be anticipated and agreed upon by the key players, with realistic estimates of costs. Failure to anticipate the full range of funding needs can lead to underfunding and an uncertain outcome. It is difficult to raise funds during an ongoing assessment.


Recommendation: Resources made available to conduct an assessment should be commensurate with the mandate. Therefore, the guidance document for the assessment should clarify the roles of those requesting and funding the assessment in scoping its mandate. The budgeting of resources should focus on ensuring the success of the highest-priority components of the assessment, including aspects that have been shortchanged in the past, such as supporting broad stakeholder participation, communication activities, and dissemination.

ASSESSMENT BENEFITS, OPPORTUNITY COSTS, AND EFFICIENCY CONSIDERATIONS

Assessments, as well as the activities tailored to support them, have mobilized a large number of scientists over recent decades. This effort has affected the national and international research agenda and has engaged research institutions and universities in new types of activities. In certain cases, the need to participate in assessments has facilitated the development of new research disciplines or has brought together different scientific communities that had never cooperated in the past.

The Climatic Impact Assessment Program (CIAP) organized in the early 1970s by the U.S. Department of Transportation provides a striking example that illustrates how the need to address a specific and urgent environmental question has contributed to the development of a new research field. CIAP brought together specialists in dynamic meteorology, radiative transfer, and atmospheric chemistry and led to the formation of a new research community that specializes in questions of the middle atmosphere. This community played a decisive role a few years later when the question of ozone depletion by industrially manufactured chlorofluorocarbons became an important issue. In many other cases, assessments have brought together experts from different disciplines, teaching them to work together and resulting in subsequent interdisciplinary research projects. In addition, assessments can articulate the progress, limitations, and opportunities associated with climate research and promote new directions of research by better defining critical scientific questions and research needs.

Although striving for credibility, legitimacy, and salience is likely to result in an effective assessment, the process might pose a daunting workload and might seem burdensome and inefficient. This issue is exacerbated by the growing number of assessments in which U.S. scientists are involved and the fact that many ongoing assessments are growing in magnitude (Mitchell et al. 2006). The number of experts who participate and the volume of the output are sometimes used as implicit indicators of the credibility and seriousness of the assessment activity (Reid 2006). However, if these indicators are taken to their limit, the assessment process may have diminishing returns and may no longer be efficient. It is important to balance the human and financial costs associated with any assessment against the value and impact of the assessment outcome. More importantly, one has to consider the opportunity cost associated with the human resources being devoted to assessments instead of advancing the scientific understanding of the issue itself.

The assessments examined by the committee have provided significant and tangible benefits, but they also offer valuable perspective on the human resources the scientific community invests in activities other than research. To illustrate the growing demand that assessments place on human resources, consider, for example, the following observations about U.S. assessment activities:

  1. The number of global change assessment activities is increasing, including some 21 activities planned or under way by the CCSP, in addition to U.S. leadership of or participation in many other activities such as the IPCC.

  2. The scale of assessments is growing significantly. The IPCC is a good case in point. Consider just the metric of the size of the reports. In 1990, 1995, and 2001, the scientific assessment portions of the IPCC were 364, 570, and 881 pages, respectively. The impacts and vulnerabilities sections in 1990, 1995, and 2001 were 242, 447, and 1,032 pages, respectively. The response strategies topics in 1990, 1995, and 2001 were 270, 447, and 750 pages, respectively. An argument could be made that these increases in volume reflect the wealth of investment and accomplishment associated with new research, but it is also evident that the reports increasingly tend to be comprehensive documents rather than statements of progress or identification of issues. In 2001, the synthesis report for the IPCC alone was 396 pages long. This increase in volume often does not stem from an increase in mandate but from the difficulty assessment participants have in limiting the scope and the amount of information to include. The IPCC is not the only assessment generating considerable volumes of information. For example, the first Millennium Ecosystem Assessment (MA) produced 81 chapters totaling more than 3,000 pages.

  3. Many major assessments include a schedule for repeating the process at regular intervals. For example, the act of Congress that created the GCRP also requires national assessments at four-year intervals; the first comprehensive national assessment on climate change was completed over a four-year span, and the current U.S. national assessment effort is now producing 21 synthesis and assessment reports by the CCSP. The IPCC is scheduled at five- to six-year intervals; however, the next IPCC assessment starts immediately upon completion of the last, and it has become essentially a continuous process. There is a perception that the rate of scientific progress is slower than the rate at which the assessments are being conducted. The ozone assessments under the Montreal Protocol require assessments of science, impacts, technology, and economics every four years but are a notable exception, in that they were designed to include updates on the science only and appear to be producing shorter reports over time.

  4. Assessments involve hundreds of climate and climate-related scientists. The tasks of lead authors for the IPCC chapters of various working groups require a significant time commitment. Similarly, the MA involved 1,360 experts from 95 countries. The investment in time includes not just the number of authors, but also the number of reviewers. Consider, for example, that the MA processed 20,745 review comments from 2,516 experts.

  5. There are more than 200 international treaties, most of which require periodic assessments (Mitchell et al. 2006). In 2003 alone, more than 12 such assessments were under way, each engaging hundreds to thousands of scientists.

The considerable time investment also raises questions about the potential for the following unintended and undesirable consequences:

  1. The level of review and the willingness of reviewers to evaluate assessment products may decline with volume and repetition. As a case in point, in the NACCI, the 5-page description of the climate basis included in the overview chapter received 18 pages of expert review (from federal agencies and solicited expert scientists), while the 60-page foundation chapter, which served as its basis, received only 3 pages of expert review. During the public comment period, the overview received 10 times the volume of comments as the foundation section on climate. Two conclusions are possible from this analysis. First, the community perceives that the synthesis or overview report element will have more impact and therefore is more important to analyze and comment upon. Second, reviewers are taxed by the volume and, therefore, few are willing to review an entire assessment report. Unfortunately, the level and/or quality of review may decline if the task is too onerous.

  2. The growing magnitude of the assessment process may begin to change the participation by scientists from different communities. The magnitude of the effort can influence whether the best expertise can be engaged in the process, either because the “best” experts are often already quite busy or because they may have experienced some burnout in earlier activities and be unwilling to serve on additional assessments. Participation may be particularly challenging for young university scientists because of their combined teaching and research responsibilities. Since participation in comprehensive assessments cannot be budgeted as sponsored research or teaching, it becomes an unfunded mandate for university researchers. Consequently, the job descriptions (nongovernmental organization [NGO], university, government) and career levels of the scientists who participate may change with the growing magnitude and repetition of assessments.

  3. The motivation for participation in any assessment process changes if the process becomes too time consuming. If the assessment is perceived to be of considerable political importance or contentious, then the process may motivate participation from the tails of the distribution of scientific opinion or perspective to either help ensure or help prevent a particular outcome.

  4. The impact of assessment products will also change with the volume of reports simply because stakeholder comprehension and willingness to read lengthy reports will decline. The most important outcomes of an assessment may also be blurred by the sheer volume of the discussion.

One conclusion from this analysis is that the human and financial resources required to create a credible assessment should be examined at the start of the process and actions incorporated to ensure that the use of the resources is effective and efficient. Given the important contribution of assessments to policy making and to society in general and the growing number of international treaties and national mandates, considerations of efficiency become increasingly important to minimize the opportunity cost to the research community. It may also be time to consider alternative modes of participation, changes to the assessment process, or both, particularly for assessments that are scheduled to recur at a given interval. Possibilities include the following:

  1. Define specific requirements to add more focus to ongoing assessments so that they are limited to addressing significant new advances (e.g., IPCC) as opposed to being comprehensive. In this manner, the opportunity costs for researchers can be limited. Such an objective might be enabled, for example, by enforcing page limits. This would change the scientific discussion by forcing participants to debate and consider only the most significant results and limitations, rather than trying to be comprehensive in literature searches, citations, and discussions. A shorter text would promote a higher-quality review by a broader spectrum of scientists. The impact of the report could be greater, especially if it focuses clearly on the most critical aspects of the research outcomes. In addition, the focus on brevity and impact would better enable the best and most involved scientists to participate and perhaps permit greater scientific participation in a larger number of assessments.

  2. Consider having fellowships for young scientists (as done by the MA) or specific opportunities for funding participation by U.S. scientists. The funding could be provided based on peer review, with the objective of identifying and supporting those best able to produce a credible and representative report.

  3. For major assessment activities, consider nested approaches that phase the contributions of different elements of the community. For example, the process assessment could be undertaken first, followed by an impact assessment, a response assessment, or both. Similarly, process assessments could be undertaken at the global or national scale first and used to provide information for process and impact assessments at a smaller, more regional or local scale. In this manner, U.S. assessments could focus on more regional scales, acknowledging and building on international efforts rather than conducting a redundant effort.

  4. Budget adequate time in the implementation plan for products to be developed. For assessments that are conducted at regular intervals, periodically reevaluate the appropriate interval by considering the rate at which new scientific information becomes available and the rate at which the policy context changes and thus requires new questions to be assessed. Depending on the balance between the rate of evolution of the available science and that of the decision-making context, consider producing focused, fast-tracked assessments that emphasize the latest improvements in understanding of an issue required by the evolving policy context.

Acknowledging previous assessment efforts as a starting point is particularly relevant to climate change assessments such as the IPCC and U.S. climate change assessments because U.S.-funded research and scientists already play a major role in supporting the IPCC efforts. Therefore, it seems appropriate that such efforts form the basis of U.S. assessments.


Recommendation: Care is required to make sure that the burden on the scientific community is proportional to the aggregate public benefits provided by an assessment. Alternative modes of participation or changes to the assessment process—such as limiting material included in regularly scheduled assessments or running “nested” or phased multiscale assessments—should be considered. As appropriate, U.S. assessments should acknowledge the work of the international community and avoid redundant efforts.

TIMING AND FREQUENCY

A frequent criticism of assessments is that the information is not delivered at the timescale required for the decision-making process. It is critical that a realistic time line be laid out at the design stage with regard to the products of the assessment. Once the schedule is set, expectations need to be managed and met if the credibility of the process is to be maintained. This requires a delicate balancing of the needs of the decision-making community with the knowledge and resources available.

For example, a major criticism of the NACCI was that the assessment effort was late in responding to its congressional mandate. This resulted in the near-simultaneous development of climate scenarios, team guidance, regional and sector team efforts, and synthesis. Perhaps even more problematic was the fact that a change in administration coincided with the release of the report. With this change, the original salience of the report was lost and major legitimacy issues were raised.

The German Enquete Kommission produces assessments that tend to meet the time requirements of decision making by including policy makers and scientists in the ongoing process. Therefore, policy makers benefit from the latest information at the time it becomes available. Indeed the stated rationale for composing investigation committees with both policy makers and scientists or practitioners is that scientific findings can be integrated much more rapidly and comprehensively into parliamentary deliberations. At risk, however, is credibility because scientific discussion within the committee involves individuals from different political parties who may or may not have a scientific background. This can make reaching agreement problematic and may require political compromise.

Assessments such as the IPCC are conducted periodically, thereby offering an opportunity to provide a summary of the state of knowledge at regular intervals. Although this ensures a steady updating of information as mentioned above, it tends to be resource intensive and has led some to question whether such assessments should take place at fixed intervals or instead be driven by the rate of change in the underlying knowledge base. Because of the efficiency issues described in the previous section, the rate at which new information becomes available has to be balanced carefully with the urgency of the decision-making process when deciding on the frequency and scope of assessments.

Consequently, a realistic time line is essential to accomplishing the goals and objectives of an assessment. However, because assessments often have to meet deadlines driven by the mandate or the decision processes they hope to serve, they have sometimes been developed without adequate care given to matching the time line to a realistic estimate of the amount of work required.


Recommendation: The time line must be consistent with the goals and objectives, the underlying knowledge base, the resources available, and the needs of the decision-making process that the assessment is intended to inform.

IDENTIFYING, ENGAGING, AND RESPONDING TO STAKEHOLDERS

Stakeholders, defined here as interested and affected parties, include several specific categories that are distinguished in this report because of the need to strategically engage diverse groups. The target audience is a subset of stakeholders composed mainly of those making the decisions the assessment intends to inform, sometimes also referred to as the “users” of assessments. It includes intermediaries such as NGOs, professional organizations, and other “science translators” (e.g., a congressional staff person). Those who request and fund the assessment are also a specific subset of the target audience. Another important group of stakeholders is the experts participating in, producing, or leading the assessment. Lastly, a large subset of stakeholders consists of those potentially affected by the policies resulting from the use of assessments, many of whom may not have been part of the process.

The assessment community has recognized the importance of broad engagement of stakeholders in order to ensure salience and legitimacy. In this section, the committee discusses issues related to addressing the needs of specific target audiences, establishing appropriate boundaries at the science-policy interface, engaging stakeholders beyond the target audience, building the capacity of stakeholders to engage in assessments, and developing a comprehensive, multifaceted communication strategy. Meeting these objectives may require significant resources and may thus need to be balanced with efficiency considerations. However, the importance of stakeholder engagement to the overall success of an assessment implies that budgetary provisions, especially for communication, should reflect this reality.


Defining and Responding to the Target Audience

Defining and responding to the needs of the target audience is a critical component of an effective assessment process, requiring a continual dialogue between scientists and the target audience. Involvement of the target audience will also promote legitimacy and ownership of the process. As such, the intended audience needs to be identified in advance, along with its information needs and the level of specificity required for that information to be useful. Such dialogue often provides surprising insights about the science itself as well as helping to focus research questions. The Enquete Kommission offers such an example, in which the target audience participates fully in the process and is engaged in a constant dialogue. In such a process, the science topics can be modified based on user demand to continuously ensure salience, but the conclusions drawn from the science should not be changed in response to user demands, because doing so would undermine credibility.

Because many assessments have diverse stakeholders, it may not be possible to deliver relevant information to all potential audiences. A deliberate process therefore needs to be used to identify and engage the most important and appropriate audiences. For the ACIA, the target audience (the Arctic Council and the tribal councils) was well defined and organized from the beginning, was heavily involved in the initial framing phase, and was also involved in the review process. The appropriate decision-making scale also matters: the consideration of impacts at an aggregate level may be useful for those who are responsible for negotiating climate treaties at the domestic or international level, but it will be of little value to those responsible for managing a water resource basin, improving the resilience of an electric power system, or making local land-use decisions. Information must be tailored to an appropriate decision-making scale to be useful, and the limitations of downscaling information to the local scale need to be well articulated. The approach of the MA might set a useful example of how decisions at multiple scales can be informed: the MA provides information at the global scale but has nested within it assessments at the regional scale and, hence, targets multiple audiences at the same time.

The target audience for an assessment may also comprise intermediaries, such as media, NGOs, professional organizations, business associations, or other “science translators,” such as policy advisers and congressional staff members. In many sectors, these intermediaries are consultants or specialists within an industry who focus on translating the assessment information into products that are designed to support particular kinds of decisions. They are commonly the most sophisticated users of assessment products and are therefore a critical target audience.

Managing expectations is important to the success of an assessment. It is critical that the target audience knows exactly what the assessment is intended to be used for and, just as importantly, what it is not intended to be used for. For example, an assessment may focus on regional and sectoral impacts and opportunities for adaptation, but, by choice, exclude issues related to mitigation options and response strategies. Given the resources, stakeholder demands, and political environment, this may be a perfectly rational design. However, it will limit the audience for an assessment. Hence, expectations regarding the context of the assessment must be managed from the outset.

Depending on the type of assessment, audiences may include governments, the private sector, civil society or NGOs, and the scientific community. Responding to the needs of this broad spectrum of target audiences is costly in terms of human and financial resources. Most scientists are not well equipped to design and manage interdisciplinary science-policy discussions; expert facilitators may be required to bridge this knowledge gap successfully. Finding individuals skilled at handling this interface can be difficult. Financial support for this activity has been limited in the past, and it has been difficult to maintain a continuous dialogue with the appropriate target audiences.

Providing decision makers with the information they need when they need it is a laudable goal for any assessment. At the same time, given that decision making is very likely to take place at a different pace than the scientific process, assessments are prone to the criticism of not providing information at the level of detail requested by policy makers on the timescale they desire. Salience can be lost by providing information too slowly to meet the needs of an evolving policy process; at the same time, credibility can be lost by providing results that are considered premature by the scientific community. Therefore, the timing of information should play a crucial role in the design of an assessment; however, decision makers need to be realistic in their expectations of when information will become available and with what degree of certainty.

Because policy making is a dynamic process with many opportunities to learn and make midcourse corrections, a continuous dialogue with the target audience could allow the assessment process to adjust to the changing needs of decision makers, as long as it remains consistent with the overall mandate and true to the scientific evidence.


Recommendation: The intended audience for an assessment should be identified in advance, along with its information needs and the level of specificity required for assessment products to be most salient and useful. In most cases, the target audience should be engaged in formulating questions to be addressed throughout the process, in order to ensure that assessments are responsive to changing information needs. Both human and financial resources should be adequate for communicating assessment products to relevant audiences.

Boundaries at the Science-Policy Interface

Defining an appropriate interface between an assessment process and the policy makers who request and pay for it is a critical challenge in assessment design. While the involvement of decision makers in local, state, or federal government is crucial to ensure the salience of the information provided, boundaries may be required to ensure the credibility and legitimacy of the process. In particular, those providing the funding and authorization for the assessment should not be in a position to influence the scientific conclusions. The ACIA offers an effective model: policy makers and scientists collaborated in the development of the executive summary of the report, but scientists were given authority and veto power over the scientific content. In contrast, in the IPCC process, the political oversight and negotiations before the release of the Summary for Policy Makers have led scientists to question the credibility and legitimacy of this particular part of the review process. It would be preferable if the process allowed scientists to retain ultimate editorial authority over scientific conclusions, as long as a neutral and properly managed review process is in place to ensure that review comments are addressed appropriately. Because the NACCI had such a clear and strong mandate from one administration, it became vulnerable to criticism by the subsequent administration that it was a politically motivated process lacking legitimacy. If more explicit and well-defined boundaries had been in place at the science-policy interface from its inception, this perception of illegitimacy might have been minimized.

How and where to establish the boundary between the assessment producers and those who requested the assessment depends in part on the specific political environment in which the assessment is produced. In the case of the Enquete Kommission, boundaries at the science-policy interface are minimal, resulting in an institutionalized collaboration between policy makers and scientists delivering the information to the decision-making process in the most timely and most effective manner. Such a process might not achieve the same level of credibility and legitimacy for an assessment conducted in the United States, because the cultural assumption in the U.S. science community is that credibility stems from a scientific enterprise fully independent of policy issues and that government review of science can result in credibility problems. Independent of where along the spectrum the science-policy interface falls, each community must maintain its self-identity and protect its sources of legitimacy and credibility. Boundaries are therefore commonly negotiated, articulated, and maintained by assessment participants (Farrell et al. 2006).

Ideally, neutral facilitators can monitor the boundaries. For example, the policy review of the document could be refereed by a team of review monitors composed of experts from both the policy and science communities who were not involved in preparing the assessment, with the team balanced overall in opinions and biases. The review monitors would ensure that scientists are responsive to requests to amend the policy options or recommendations and that the government review does not attempt to alter the scientific conclusions.


Recommendation: The assessment’s leadership and those requesting the assessment should establish a transparent and deliberate interface between participants and those who request or sponsor the assessment. Clear guidelines and boundaries should ensure both salience to those requesting the assessment and legitimacy, especially with respect to the perceived influence that requesters might have over the scientific conclusions drawn.

Science-Policy Interface for CCSP Assessments

CCSP’s assessment activities have raised credibility and legitimacy issues with some stakeholders, particularly in the science community, because of the way the boundary between science and policy was designed. For example, each assessment product is reviewed by the government and requires approval by high-level government officials, raising the question of whether the users of the assessments control not only the questions being asked but, at least potentially, also the scientific conclusions. This concern is addressed to some extent because CCSP posts the report in both the pre- and postreview versions to allow tracking of the changes. Nonetheless, there remains skepticism about the degree to which government influence may affect scientific outcomes, not only through funding but also through review of final products. Perceptions about the degree of government influence can diminish the value of an assessment in the eyes of many stakeholders. Such perceptions may be difficult to overcome, making it especially important to establish guidelines that will stand the test of time.


Recommendation: CCSP needs to further develop and better communicate a government review process that is considered legitimate and credible by all relevant stakeholders.


Stakeholder Engagement in the Process: Balancing Credibility and Legitimacy

Despite general understanding that broad stakeholder engagement can contribute importantly to a successful assessment, how to identify appropriate stakeholders and engage them effectively is not self-evident. Participation of broad audiences throughout the assessment process may increase legitimacy and salience, but it could also weaken the credibility of the process. In addition, the involvement of too many stakeholders could make the assessment process inefficient and too costly. The appropriate balance between broad stakeholder engagement to achieve legitimacy and salience and the need to achieve efficient and credible outcomes will depend on the specific context of each assessment; it will require careful consideration early in the assessment design process.

Despite the tension between various interest groups, it is the experience of committee members that, in many kinds of assessments, more benefits and impact come from engaging stakeholders in the process than from communicating the final product. The ACIA process is a successful model for stakeholder engagement, characterized by transparency, inclusiveness, and broad participation by the various stakeholders, including both governments and affected indigenous peoples. The ACIA process benefited from the fact that most stakeholders were already organized; hence, trusted representatives from indigenous peoples’ organizations were able to participate and speak on behalf of their organizations. This simplified and improved communication with the relevant stakeholders significantly.

A clear and transparent approach to soliciting and selecting stakeholder participation or input needs to be designed during the framing process and included in the guidance document. Since participants are also stakeholders in the process, each participant will bring to the assessment biases and potential conflicts of interest. Requiring all participants to openly state their biases can help ensure that the composition of the committee includes an overall balance of opinion and biases. The legitimacy of any assessment process would be enhanced by a transparent and deliberate approach to ensuring a balance in the opinions of its participants.

Stakeholder engagement builds trust between individuals and between categories of users; results in broader understanding of multiple perspectives; and builds a shared knowledge base that may be useful in other applications. However, it must be recognized that there is a direct relationship between the number of individuals and organizations that engage in a process, either as stakeholders or as participants, and the degree of difficulty in arriving at a consensus. Larger and more inclusive assessments incur high “transaction costs” because of the need to bring all participants to a common level of understanding of the science as well as of the goals and objectives of the assessment. Nevertheless, the NACCI provides an example of successful stakeholder engagement (Morgan et al. 2005). The NACCI was characterized by a concerted effort to include a wide range of stakeholders throughout the entire process, as appropriate for key issues within regions and sectors.

A host of critical questions arises regarding who participates in assessments and who the recognized stakeholders are: What disciplines and perspectives should be represented? What sectors, ethnic groups, interest groups, or international entities and governments need to be represented? Who should select the representatives? What criteria are used for the selection? All of these questions have to be addressed carefully and deliberately, ideally in the guidance document, while acknowledging that it is generally better to err on the side of inclusiveness.

Different categories of assessments have inherently different types of stakeholders. In the case of process assessments, stakeholders include the relevant social and natural science experts and agency representatives, particularly when the assessment is expected to inform decisions regarding future research priorities. Although the committee has noted in earlier chapters that the model for producing process assessments is well established, involving hundreds of experts drawn from a variety of disciplines, decisions on how to balance the disciplines are not always as well considered. Because understanding the impacts of global change is extremely complex, it requires the involvement of a multitude of disciplines, and inadvertently excluding any key expert group can lead to a loss in scientific credibility and potentially legitimacy. However, balance is also a critical feature. An additional challenge results from the fact that in most instances, scientists receive no direct financial compensation for their involvement, which may exclude certain experts from participation due to lack of support. Such funding issues may limit some categories of potential participants, presenting a challenge to balanced stakeholder participation; the equity implications of funding need to be considered when planning the stakeholder engagement process.

In the case of impact and response assessments, participation should include the involvement of governments, the private sector, and civil society or NGOs in addition to the scientific community. However, their roles may differ both within and across assessment activities and may depend on which phases of the policy process the assessment intends to inform. Especially if response assessments strive to provide policy options, relevant policy makers must be involved at least in the review process.

In integrated assessments, balancing the participants by disciplines (e.g., natural and social scientists), sectorally, and geographically is an important design consideration. In the international context, more care needs to be given to engaging a broad spectrum of experts, particularly in balancing experts representing developed and developing countries and economies in transition. Extra effort may be required to identify the best talent in developing countries that have historically not been fully engaged because of economic issues or the structure of the community of experts. In both international and national contexts, equity issues need to be considered. Experts can be drawn from both the scientific community and a variety of stakeholder groups, but they should be chosen based on their expertise in areas relevant to the assessment and their ability to participate objectively and constructively in the process. The selection process must be open and transparent, with well-articulated criteria for selection.


Recommendation: A strategy for identifying and engaging appropriate stakeholders should be included in the assessment design to balance the advantages of broad participation with efficiency and credibility of the process.

Capacity Building

Capacity building to develop a common language and technical understanding among stakeholders can greatly enhance the effectiveness of assessments. Not all stakeholders will be familiar with the science or the policy context of a particular assessment, thereby limiting their ability to engage in the process. Decision makers may not be conversant in the relevant science. Scientists and other expert participants may need assistance in communicating effectively with experts from other disciplines and with other stakeholders. Meaningful engagement with the public may also require a degree of capacity building and iterative learning between the “experts” and the public to arrive at a shared set of facts and a focus on issues that are of clear importance to the stakeholders (NRC 1996; Farrell et al. 2001). It is imperative that the engagement be viewed as a “two-way” communication, since the “experts” often have much to learn about impacts, vulnerability, and perceptions as well as data sources and local knowledge of systems (NRC 1996; Jacobs 2002).

Some assessments have involved successful capacity-building activities. In some regions and sectors, the NACCI succeeded in bringing new stakeholders into the global change arena and providing them with sufficient information to truly engage in the process. The ACIA also included effective capacity building, benefiting from the insights and methods developed by the Global Environmental Assessment project. Important lessons incorporated in the ACIA process include the realization that the assessment process itself was part of the outcome. Similarly, the stratospheric ozone assessments continued to improve on their communication and outreach products, resulting in very sophisticated reports and graphics in later assessments. These examples illustrate the importance of evaluating assessment processes and building on prior experiences. A systematic effort is required to improve assessments in the future by drawing lessons from past experience and by developing assessment methodologies and tools. Capacity building should therefore include research support for improving assessment methodologies as well as ensuring that the assessment leadership and participants are familiar with the most recent assessment methodologies and tools.

Investments in capacity building can have payoffs in multiple areas, including (1) expanding the informed audience for the assessment, (2) contributing to future assessment effectiveness, (3) expanding the ability of decision makers to act on scientific information, (4) equipping participants with new knowledge of assessment methodology and tools, and (5) building a scientific community that is more sensitive to the needs and concerns of the broader society. In some cases the value of the assessment process, which may involve considerable time commitments on the part of participants, might not be immediately apparent. Thus, additional effort may be required to communicate the benefits and to structure the questions and process such that they are relevant to the participants the assessment aims to engage.

Private-sector participation has been noted as a serious deficiency in multiple U.S. and international assessments. Because engaging business interests has historically been very challenging, special considerations are required to successfully engage private-sector participants. The success of the TEAP has clearly demonstrated the great benefit of designing a process that engages the private sector. Developing a strategy to encourage its participation requires consideration of its decision-making context and business needs. Face-to-face meetings are expensive in terms of time and money, and the connection to either short- or long-term benefits needs to be clear. The global change community needs to be strategic about constructive and creative ways of engaging the private sector. This might be accomplished by conducting workshops for particular sectors, focused on their concerns, such as identifying economic risks and opportunities and “news you can use,” and through web-based communication techniques.


Recommendation: Capacity-building efforts for diverse stakeholders and assessment participants from various disciplines should be undertaken by CCSP in order to develop a common language and a mutual understanding of the science and the decision-making context. This capacity building may be required to ensure that the most salient questions are being addressed and to meaningfully engage diverse stakeholders in assessment activities.

Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.

Communication Strategy

A communication strategy is fundamental to the success of an assessment: without effective communication, the scholarly effort is diminished. Ideally, communication is a two-way process of education. Only if an assessment’s scientific findings are effectively communicated, understood, and accepted by targeted audiences can they optimally inform policies and decisions to address the environmental challenges analyzed in the assessment. Furthermore, the target audience must be able to communicate its information needs to the experts conducting the assessment to guarantee that the relevant questions are being addressed. Communication must, therefore, be regarded as a process, not merely an appendage to a report-writing exercise.

The Enquete Kommission provides an example in which two-way dialogue is guaranteed by involving scientists and politicians in the process continuously, thereby also increasing the likelihood of timely delivery of information. However, the Enquete Kommission’s direct involvement of politicians in the process is unlike all other assessments evaluated by the committee. The ACIA exemplifies a more typical approach, in which the politicians providing the mandate are not directly engaged. The ACIA is universally recognized for having a well-articulated communication strategy to support the policy-making process. Two important factors contributed to its success: (1) the communication strategy was planned from the outset, maintained throughout the process, and carried beyond the report production phase to include dissemination activities targeting a broad range of audiences; and (2) the intended target audience was identified in advance, and tailored communications were produced. The IPCC and the MA are making very extensive use of the Internet: reports are readily available and easily downloaded, include excellent color graphics, and are available in multiple languages.

The characteristic complexity of the science and the range of scientific uncertainties add to the communication challenge. There is often an inherent conflict between scientists’ penchant for exactitude and the effective presentation of an environmental assessment to a nontechnical audience (Johnson and Slovic 1995; Johnson 2003). This challenge must be addressed through conscious efforts to simplify language, tables, and scenarios to make them more understandable and to illustrate difficult concepts creatively, particularly when designing dissemination products aimed at the general public. The report should avoid academic jargon and be crafted for easy accessibility. Extreme care must be exercised, however, that any simplified text prepared for differing audiences does not conflict with the more scientific presentations designed for the assessment’s original sponsor.

The ozone assessments have grown increasingly sophisticated over time in their communication approaches and in simplifying their message. From the 1985 assessment, which did not even include an executive summary, subsequent assessments evolved to produce reports with carefully prepared summaries, viewgraphs, talking points, and nontechnical publications (e.g., Common Questions About Ozone) that summarize current knowledge in commonsense terms and (implicitly) address any current attempts to mislead or obscure the consensus.

The executive summary is one of the most crucial elements for the successful impact of an assessment on policy and decision making. It should be concise, value-free, and clear about assumptions and uncertainties, and it should be crafted and reviewed with attention to clarity, substance, relevance, absence of jargon, and the differing needs of policy and decision makers, recognizing that they are generally not specialists.


Recommendation: Assessments should have a comprehensive, multifaceted communication strategy from the start, encompassing an analysis of the potential audiences, ranging from those requesting the assessment to the general public; use multiple modes of engaging them; focus on the decisions the assessment intends to inform (e.g., policy decisions, legislation, technological innovation, standards, international treaties); and include appropriate dissemination activities.

LEADERSHIP AND ORGANIZATIONAL STRUCTURE

The management structure tends to vary from assessment to assessment. The NACCI established the National Assessment Synthesis Team (NAST), made up of experts from industry, academia, government laboratories, and nongovernmental organizations. The NAST, with its three co-chairmen, had substantial authority to guide the process, and their guidance was critical to the successes of the NACCI. The ACIA established a management structure consisting of various steering committees and local secretariats. The ACIA also benefited from leadership with a substantial understanding of previous assessment processes and the ability to incorporate lessons learned from them. Major decisions of the IPCC are taken by the plenary of government representatives, which elects the 30 chairs and vice-chairs who make up the IPCC Bureau. Each working group is supported by a technical support unit, and the overall Bureau is supported by a secretariat. The leadership in the ozone assessment was particularly effective due to both familiarity with the scientific issue and the political awareness necessary to communicate the scientific findings effectively (NRC 2005).

The need for a strong leadership structure is self-evident and was a common thread running through the committee’s discussions with those responsible for conducting assessments. A general principle is that the decision-making structure within the assessment needs to be well articulated from the outset. However, identifying the appropriate leadership is challenging, and the committee concurs with the findings of the National Research Council (NRC 2005) on the characteristics of good leadership:

[Good leaders] are committed to progress and are capable of articulating a vision, entraining strong participants, promoting partnerships, recognizing and enabling progress, and creating institutional and programmatic flexibility. Good leaders facilitate and encourage the success of others. They are vested with authority by their peers and institutions, through title, an ability to control resources, or other recognized mechanisms. Without leadership, programmatic resources and research efforts cannot be directed and then redirected to take advantage of new scientific, technological, or political opportunities. (p. 48)

The choice of leadership structures and individuals may not be straightforward, but it is crucial to the success of the endeavor, with significant implications for how effectively the assessment is conducted and how well it is received by the target audience and other stakeholders. Effective assessment leaders respond easily to a changing political environment, provide a transparent and legitimate rationale for such a response, and deliver consistent messages to participants. In the best of circumstances, individuals with appropriate scientific credentials will naturally emerge who enjoy the confidence of both the political and the scientific communities, have experience in conducting successful assessments, and are willing to undertake the present one. Since the leaders commonly function as spokespeople for the process, decisions regarding leadership must consider implications for perceptions of objectivity, credibility, and legitimacy.


Recommendation: The leadership and organizational structure of the assessment should be made clear, and the responsibilities of individuals and organizations well articulated.

INTEGRATED ASSESSMENTS

Degree and Nature of Integration

Although multiple definitions of integrated assessments are used by the community, the committee considers such assessments to result from a process that integrates social, biological, and physical sciences and engineering and allows interdisciplinary synthesis and analysis. Some integrated assessments are integrated after the fact, like the IPCC, and some are interdisciplinary and integrated from the beginning, such as the MA. Others, following a more restrictive definition of integrated assessment, comprise a model that explicitly links the dynamics of social, biological, and physical systems. All types allow understanding of complex interlinked phenomena and their implications, as well as building a stronger fabric for decision making. However, well-integrated assessments are difficult to undertake, and managing them is far more difficult than is generally recognized.

The IPCC is one of the few examples of an attempt at integrated assessment; however, it still develops core science findings in separate teams from the impact and response assessments. The requirement that the IPCC draw only on the peer-reviewed literature has increased its scientific credibility, but at the price of making the integration effort slower and less flexible. In synthesizing information from the literature it is difficult to tell whether the individual pieces are based on a set of common assumptions. To some extent the IPCC can address this problem by reviewing the results of integrated assessments conducted by individual research teams. In addition, difficulties in integrating across the three working groups also stem from the distinct paradigms each is working with. Another issue with integrated assessments is the difficulty of developing a common language between different disciplines, particularly between social and natural scientists. As mentioned before, such difficulties can be overcome if resources are devoted to capacity building and development of a common language between the various disciplines. These problems raise concerns about whether integrated assessments of global change can be conducted effectively at the international level.

Because a fully integrated assessment is much more complex and difficult to achieve than assessment of a single issue, there has to be a clear reason why this approach is undertaken. The effectiveness of the early stratospheric ozone assessments in informing decisions demonstrates that not all assessments need to be fully integrated, although there are many benefits to working toward an integrated approach, including greatly enhanced potential to be policy relevant. An alternative approach is “nesting” specific assessments in a broader matrix to provide for a clear focus and illustrative examples at the smaller scale while allowing for generalized lessons in the larger frame. For example, an integrated assessment model may be much more easily developed around a specific decision-making process at a regional scale, such as water resource management. Consider the challenge of providing a credible assessment of the impact of climate change on a specific watershed in which the objective is to assess the availability of future water resources. The nature of climate projections, including the factors that control the spatial and temporal character of precipitation and evaporation, is essential but insufficient to define future water availability. For example, land-use and land-cover change will also have substantial impact on runoff and evaporation rates, and changes in the character of human waste streams (both air- and waterborne) will substantially influence water quality. Our ability to examine future water resource availability requires an integrated assessment because the impacts and decisions that influence water are place-based, but the drivers of these impacts are also drawn from a much larger scale (e.g., climate change). Such an integrated assessment can best be completed with a “matrix” or “nested” approach, in which the large-scale drivers (e.g., scenarios or projections for future climate change) become one element of the assessment process that can serve as a foundation for a series of other regional or sector assessments. At a regional scale, the vast amount of place-based information, including the additional drivers (e.g., land-use change), can be incorporated into the analysis to provide a more comprehensive treatment of potential changes in water quality and quantity. Such an approach might include the use of regionally based mesoscale models that better address the spatial character of the watershed, detailed watershed models, regional observing and information systems, and projections of population growth and the evolution of human systems.

Such a model can be used to illustrate both impacts and response options, and lessons from that model may be applied to other scales or decision-making processes. The degree and nature of the integration of the assessment represent a design decision that should be made with specific reference to the user’s needs and the purpose of the assessment. This approach is one way to ensure that broad-scale assessments can continue to be developed, while at the same time enhancing their relevance in the individual applications where many resource decisions are made.


Recommendation: The degree and nature of interdisciplinary integration of assessments should be chosen with specific reference to the users and purpose of the assessment and the resources needed to do integrated assessments well. Because fully integrated assessments are more readily done at a specific local decision-making scale, attempts should be made to nest them within a global assessment, which may not need to be fully integrated.

Importance of Integrated Assessments for CCSP

The assessment activities mandated by the 1990 GCRA necessitate some degree of integration in that the act requires reporting on the state of scientific understanding of global change, the effects of global change on a range of sectors, and future trends. It is not clear whether the 21 Synthesis and Assessment Products currently being conducted by CCSP to address this mandate will meet the needs of policy makers in the same way that a more integrated approach might. In particular, integrated assessments have the potential to better link understanding of global change phenomena with their impacts, thereby providing better information for decision making. At the same time, the committee recognizes that integrated assessments are challenging. Incorporating a more integrated approach into some of CCSP’s assessment activities could provide an important opportunity to learn more about how to conduct more effective integrated assessments, while also producing integrated, societally relevant outcomes.


Recommendation: The CCSP should invest in experimental applications of integrated assessments, with a specific focus on advising future applications of truly integrated, ongoing, interdisciplinary assessments in the United States.

TREATMENT OF UNCERTAINTY

One of the most difficult tasks in an assessment is the expression of uncertainty. In the case of climate change, uncertainties remain as part of the evolving understanding of various aspects of the greenhouse effect, its likely consequences, and the efficacy of various countermeasures. Some of these uncertainties will not be resolved for decades, if then. An effective characterization of uncertainty in assessments requires determining what kinds of uncertainty information would be useful for decision makers as well as developing quantitative or qualitative measures of uncertainty. While there is evidence that decision makers have an aversion to ambiguity, uncertainty is unavoidable in many decision-making contexts. Once decision makers understand that they are operating in an uncertain environment, they typically prefer that the conclusions of an assessment be accompanied by a description of the level and source of relevant uncertainties. The manner in which uncertainties are acknowledged and characterized will affect both the salience and the credibility of the assessment.

Ways of addressing uncertainty include clearly identifying the uncertainties; characterizing their source and magnitude; providing expert judgments of the level of confidence; and testing the sensitivity of conclusions to these uncertainties through the development of plausible future scenarios. For example, the IPCC has attempted to deal with uncertainty by using words to indicate judgment estimates of confidence (e.g., “virtually certain” denotes a greater than 99 percent chance that a result is true, “very likely” denotes a 90-99 percent chance, etc.). Alternatively, there are formal methods for eliciting expert judgments (Morgan and Henrion 1990), but these can be quite time consuming and, therefore, can only be applied selectively.
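The IPCC-style likelihood vocabulary amounts to a simple lookup from probability ranges to calibrated phrases. The sketch below illustrates that mapping; only the two ranges quoted in the text are grounded in the source, and the function name and handling of boundary values are assumptions rather than IPCC practice.

```python
# Illustrative sketch: mapping probabilities to IPCC-style calibrated
# likelihood language. The full IPCC scale has more terms; only the two
# ranges quoted in the text are included here.

# term -> (lower bound, upper bound), probabilities as fractions
LIKELIHOOD_SCALE = {
    "virtually certain": (0.99, 1.00),  # greater than 99 percent chance
    "very likely":       (0.90, 0.99),  # 90-99 percent chance
}

def likelihood_term(p):
    """Return the calibrated phrase for probability p, or None if p
    falls outside the ranges defined above (a hypothetical helper)."""
    for term, (lo, hi) in LIKELIHOOD_SCALE.items():
        if lo <= p <= hi:
            return term
    return None

print(likelihood_term(0.995))  # virtually certain
print(likelihood_term(0.93))   # very likely
```

A fuller scale would also need terms for lower probabilities, and a real assessment would document how boundary values and one-sided statements (e.g., "greater than 99 percent") are treated.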


Recommendation: Uncertainties should be well articulated in global change assessments to the extent they are understood, and the sources of the uncertainty should be described. There should be a deliberate effort to clarify the importance of alternative assumptions and to illustrate the impacts of uncertainties.


A CREDIBLE AND INDEPENDENT REVIEW PROCESS

Assessments differ from more standard scientific publications, and therefore the typical science peer-review process needs modification. Assessments build on prior knowledge; identify recent advances and research needs; attempt to reach consensus on scientific debates; and in some cases, provide response options, including policy options. Because assessments may include policy-relevant information and even some value judgments in the case of impact assessments, assessment reviews need to be conducted to achieve salience, legitimacy, and credibility. In contrast, typical science peer review focuses solely on scientific credibility. Therefore, the review process should be consistent with the goals of the assessment and the type of assessment. For process assessments that focus only on the scientific understanding of the process, an expert peer review may suffice. However, assessments providing decision support or policy options, such as impact and response assessments, may require a broader review, involving stakeholders and decision makers, in particular.

Many assessments that the committee examined have well-established review mechanisms. For the IPCC, the review process includes a peer review, followed by an expert and government review, and finally a review by the governments who are party to the United Nations Framework Convention on Climate Change. The report and the various summaries are then subject to acceptance by the governments, which meet in plenary for this approval process. This government review process has raised some issues regarding credibility due to the potential for government interference with the scientific conclusions. In contrast, the ACIA provides a model of how a government review can be conducted successfully without the resulting perception that governments influenced the scientific consensus. The government review process for the ACIA involved scientists, who had the ultimate authority over the scientific conclusions, and government representatives, who were given the editorial authority over policy options. In the example of the TEAP, no external review process was undertaken because all the key players were already involved and the assessment contained proprietary information.

Because a well-designed review process has the potential to greatly enhance broad stakeholder ownership and the quality of the outcome, it is essential to the credibility, salience, and legitimacy of the assessment. As previously mentioned, an assessment review is distinct from a peer review in that it cannot be undertaken solely from the perspective of scientific credibility, but must also focus on issues of salience and legitimacy. The committee found that an effective approach is a staged review, such as employed in the ACIA, beginning with the scientific community, with subsequent involvement of governments and other relevant stakeholders. To ensure that legitimacy and credibility can be enhanced simultaneously, an approach should be designed that addresses criticisms regarding government reviews that attempt to alter the report’s scientific conclusions inappropriately. This can be addressed by providing clear and transparent guidelines giving experts the ultimate editorial authority over the scientific conclusions in response to government reviews and comments. In addition, neutral review editors from a broad range of disciplines could function as referees to ensure that comments are responded to appropriately and that well-defined guidelines are followed to avoid the perception of government reviews altering scientific conclusions.


Recommendation: An assessment review process should enhance salience and legitimacy in addition to credibility, by engaging interested and affected parties in the review process in addition to the expert community. The design of the review process should be adapted depending on whether it is a process, impact, or response assessment. The use of a well-balanced panel of review editors from a broad range of backgrounds should be considered to ensure that the review comments are responded to appropriately. In addition, a transparent mechanism for a legitimate and credible government review needs to be designed.

DEVELOPING DECISION-SUPPORT APPLICATIONS

Decision-support tools include a wide range of tools and models that link analyses, environmental and social data, and information about decisions and outcomes. They help decision makers understand the sensitivity of relevant systems, assess vulnerability, identify management alternatives, characterize uncertainties, and plan for implementation (Chen et al. 2004; Pyke and Pulwarty 2006). For example, regional tools were developed during the NACCI that allow web-based access to assessment data to assist in making agricultural crop decisions. In its strategic plan (CCSP 2003), the CCSP identified the need for increased efforts to develop decision-support applications, a new emphasis that was lauded in the NRC review of the plan (NRC 2004).

Adaptation to global change in general, and climate change in particular, requires that the institutional context of decisions be recognized in the development of decision-support tools as well as adaptation and mitigation activities. Assessments should be designed to be policy relevant but not policy prescriptive. For example, a response assessment may provide policy options and analysis describing possible policy outcomes but it should not prescribe which response to choose. There are many ways to ensure that decision-support efforts are properly focused and effective, but it will not be possible to support every type of decision at every scale. When selecting specific case studies to be nested within the broader assessment activity, CCSP needs to be strategic about the kinds of decisions to support and the scale at which such support is most urgently needed. It is also important that sufficient resources be dedicated to supporting the development of decision-support tools, which is a relatively new area of emphasis for CCSP (NRC 2004). The critical issue in decision support is to provide useful, policy-neutral information targeted for use in particular sectors and for specific applications.

For assessments intended to inform national- and international-level decisions about how to effectively manage the climate change risk, the information is being applied to issues that are apt to be highly politically charged. Thus, CCSP needs to be thoughtful about how it supports development of decision-support tools so that the information resulting from such tools is credible. The area of cost-benefit analysis is particularly challenging; for example, the IPCC has struggled with whether to conduct such analyses as part of the assessment or instead to synthesize existing analyses conducted by others. Some options would be for CCSP to (1) support the development of the needed decision-support tools; (2) encourage the appropriate science or modeling community to focus their efforts on the needs of policy makers, and then synthesize the results in a manner that will be useful to the policy-making community; or (3) commission the development and application of the requisite decision-support tools as part of the assessment process itself. An example of the third option is CCSP Synthesis and Assessment Product 2.1, in which existing tools are being used to develop new emission scenarios, analyze their impact on the energy system, and assess the costs to the economy.


Recommendation: CCSP should foster and support the development of knowledge systems that effectively build connections between those who generate scientific information and the decision makers who are most likely to benefit from access to the knowledge that is generated. One approach is to support the development of decision-support tools and applications at various scales of decision making that can be used in the context of assessments. In doing so the CCSP should identify decision-making processes of high priority or broad application that address key regional or sectoral vulnerabilities, and then evaluate the decision-support needs in those applications. New analytical and predictive tools can then be devised that have direct benefits in specific assessment applications.

EMPLOYING A NESTED MATRIX APPROACH

Adaptive approaches are needed to continually integrate advances in knowledge into the policy context. Although it would be ideal to address each sector and region at the local, regional, and national scales while assessing impacts and responses, it is unlikely that sufficient resources will be available to do this comprehensively on an ongoing basis. One way to address the resource issues associated with assessment is to build a broad conceptual framework or matrix linked to smaller-scale illustrative examples. For example, an assessment could be conducted at a national level, accompanied by selected localized case studies of impacts on specific sectors or implications for specific local decision making. The work on the broad themes and trends can be an ongoing effort, while individual, integrated, local, or sectoral assessments can be nested strategically in the broader research agenda. This will help develop an ongoing assessment program that has more coherence over time.

An example of the application of the nested matrix approach is using global climate models to identify likely future changes in temperature and precipitation at the national and regional level that may result from climate change. By connecting such outputs to hydrologic models, it is possible to identify a range of likely impacts on runoff for specific watersheds and evaluate potential vulnerabilities for regions and sectors. Based on that information, specific regions or sectors that are identified as areas of high vulnerability can be selected for a more focused integrated assessment that includes the demographic and institutional context as well as physical parameters.
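The selection step in this nested approach (screening large-scale model outputs to choose watersheds for focused, place-based assessment) can be sketched as follows. All watershed names, numbers, and the vulnerability threshold below are hypothetical; the sketch illustrates only the selection logic, not any actual model output.

```python
# Illustrative sketch of the nested-matrix selection step: screen
# large-scale (hypothetical) runoff projections to choose
# high-vulnerability watersheds for focused integrated assessment.

# hypothetical screening output: watershed -> projected change in
# mean annual runoff (percent) from nationally scaled model runs
projected_runoff_change = {
    "watershed_A": -22.0,
    "watershed_B": -4.5,
    "watershed_C": -15.0,
}

# assumed cutoff: declines at or beyond this mark count as high vulnerability
VULNERABILITY_THRESHOLD = -10.0

def select_for_focused_assessment(changes, threshold=VULNERABILITY_THRESHOLD):
    """Return watersheds whose projected runoff decline reaches the
    threshold; these become the nested, regional-scale case studies."""
    return sorted(w for w, delta in changes.items() if delta <= threshold)

print(select_for_focused_assessment(projected_runoff_change))
# ['watershed_A', 'watershed_C']
```

In practice the screening criterion would combine several indicators (runoff, demand, demographics) rather than a single threshold, but the nesting logic is the same: the broad matrix supplies the drivers, and the selected cases receive the detailed, place-based treatment.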


Recommendation: CCSP should consider implementing a nested matrix concept in developing subsequent assessments.

×
Page 116
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 117
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 118
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 119
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 120
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 121
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 122
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 123
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 124
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 125
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 126
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 127
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 128
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 129
Suggested Citation:"5 Advice for Effective Assessments." National Research Council. 2007. Analysis of Global Change Assessments: Lessons Learned. Washington, DC: The National Academies Press. doi: 10.17226/11868.
×
Page 130
Next: References »
Global change assessments inform decision makers about the scientific underpinnings of a range of environmental issues, such as climate change, stratospheric ozone depletion, and loss of biodiversity. Dozens of assessments have been conducted to date by various U.S. and international groups, many of them influencing public policies, technology development, and research directions. This report analyzes the strengths and weaknesses of eight past assessments to inform future efforts. Common elements of effective assessments include strong leadership, extensive engagement with interested and affected parties, a transparent science-policy interface, and well-defined communication strategies. The report identifies 11 essential elements of effective assessments and recommends that future assessments include decision-support tools that make use of information at the regional and local level, where decisions are made.