good design for a submersible or an adequate method for measuring salinity. In some instances, determining a workable standard for measuring success ahead of time—that is, before the learning activities among participants take place—can be nearly impossible. The agenda that arises, say, in a family visit to a museum may include unanticipated episodes of identity reinforcement, the telling of stories, remindings of personal histories, rehearsals of new forms of expression, and other nuanced processes—all of which support learning yet evade translation into many existing models of assessment.
The type of shared agency that allows for collaborative establishment of goals and standards for success can extend to multiple aspects of informal learning activities. Participants in summer camps, science centers, family activities, hobby groups, and similar settings are generally encouraged to take full advantage of the social resources available to achieve their learning goals. The team designing a submersible in camp or a playgroup engineering a backyard fort can be thought of as having implicit permission to draw on the skills, knowledge, and strengths of those present, as well as any additional resources available, to accomplish their goals. “Doing well” in informal settings often means acting in concert with others. Such norms are generally at odds with the sequestered nature of the isolated performances characteristic of school. Research indicates that these sequestered assessments lead to systematic undermeasurement of learning precisely because they do not allow participants to draw on the material and human resources in their environment, even though making use of such resources is a hallmark of competent, adaptive behavior (Schwartz, Bransford, and Sears, 2005).
Despite the difficulties of assessing outcomes, researchers have managed to do important and valuable work. In notable ways, this work parallels the “authentic assessment” approaches taken by some school-based researchers, employing various types of performances, portfolios, and embedded assessments (National Research Council, 2000, 2001). Many of these approaches rely on qualitative interpretations of evidence, in part because researchers are still at the stage of exploring features of the phenomena rather than quantitatively testing hypotheses (National Research Council, 2002). Yet, as a body of work, assessment of learning in informal settings draws on the full breadth of educational and social scientific methods, using questionnaires, structured and semistructured interviews, focus groups, participant observation, journaling, think-aloud techniques, visual documentation, and video and audio recordings to gather data.
Taken as a whole, existing studies provide a significant body of evidence for science learning in informal environments as defined by the six strands of science learning described in this report.
A range of outcomes are used to characterize what participants learn about science in informal environments. These outcomes—usually described