Those leading and participating in an assessment must operate within constraints imposed by prior decisions defining an assessment’s scope, mandate, and organizational setting. Even so, assessment participants still have the opportunity to decide many aspects of its process, content, and presentation. Within an assessment’s previously defined mandate, participants choose what specific subject areas to include or emphasize, what sources of information to include, what methods or tools to use in integrating information, and what (if any) specific policy-relevant questions to answer. They may decide who participates in the assessment, how they are chosen, how they organize their collective work, how they make decisions (particularly in the case of disagreements), and how to identify and involve stakeholders. They choose how to present results, including the content and strength of conclusions, as well as whether to make interpretive judgments that go beyond the present literature, to employ “if-then” statements that link alternative choices to potential outcomes, or to include explicit recommendations for action. They may decide whether the assessment undergoes public or governmental review in addition to scientific peer review. They also decide the scale, form, and manner of dissemination of reports or other outputs.
Many of these design choices are linked with an assessment’s success in achieving credibility, legitimacy, and saliency, although the relationships are complex and depend on the assessment’s context. For example, broadening stakeholder participation in an assessment can increase its legitimacy but poses risks to its credibility to the extent that the added participants are perceived as lacking expert standing, thereby diminishing the assessment’s reliance on scientific expertise. In Chapter 3, the committee discusses in greater detail how these mostly internal design choices can be approached to balance all three attributes in achieving an effective assessment.