of learning outcomes. Second, assessments should fit with the kinds of participant experiences that make informal learning environments attractive and engaging; any assessment activities undertaken in these settings should not undermine the very features that make for effective learning. Third, the assessments must be valid; that is, they should measure what they purport to measure (construct validity) and align with the opportunities for learning that are present in the environment (often referred to as ecological validity). In short, assessment measures should capture as much as possible of the breadth of learning that a reasonable audience could experience, should align with the nature of the learning experience, and should represent in some faithful way the learning that actually occurs. Doing so is not easy.

Keeping these three criteria in mind, let’s consider how information was collected in some of the examples included in this book. WolfQuest, the computer game discussed in Chapter 1, used an online survey to collect data about the project. The survey asked questions related to the learning goals, as the first criterion suggests. Because this was an online experience, it was fitting that an online assessment instrument was used, and, given the nature of the activity, the assessment did not interfere with the experience. Finally, the assessment was valid: the questions asked on the survey aligned with the project’s goals.

Through the WolfQuest online survey, evaluators asked participants what they knew about wolves before playing the game and what they learned as a result of it. This pre-post strategy is often used to document changes in learning that result from an informal experience. However, the evaluators for WolfQuest went further. They conducted content analyses of discussions among WolfQuest players and learned a great deal about the social dimension of WolfQuest and the ways in which playing the game encouraged cooperative learning and even extended the experience into other parts of a player’s life. The evaluators also analyzed other forms of “embedded” data—data that are generated through natural engagement with the game (or, by extension, an exhibit, program, interpretive walk, etc.) and can be used to infer outcomes—to examine how using knowledge about wolf behavior and ecology helped players advance in the game. Using embedded data ensures that data collection does not interfere with the experience itself, thus fulfilling two of the three criteria mentioned above: alignment with the experience and ecological validity.
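To make the notion of embedded data more concrete, consider a minimal sketch in Python of how such data might be analyzed. The log records, player identifiers, and the use of hunt success as a proxy for applying knowledge of wolf behavior are all hypothetical illustrations, not details of the actual WolfQuest evaluation; the point is only that records generated by ordinary play can be summarized without interrupting the experience.

    # A minimal, hypothetical sketch of analyzing "embedded" gameplay data.
    # Each log record notes a player, a session number, and whether a hunt
    # succeeded. Rising success rates across sessions can serve as one
    # proxy for players applying knowledge of wolf behavior and ecology.

    from collections import defaultdict

    # Hypothetical log records: (player_id, session_number, hunt_succeeded)
    log = [
        ("p1", 1, False), ("p1", 1, True),
        ("p1", 2, True),  ("p1", 2, True),
        ("p2", 1, False), ("p2", 2, False), ("p2", 2, True),
    ]

    # Group hunt outcomes by player and session.
    outcomes = defaultdict(list)
    for player, session, succeeded in log:
        outcomes[(player, session)].append(succeeded)

    # Compute and report per-session success rates for each player.
    for (player, session), results in sorted(outcomes.items()):
        rate = sum(results) / len(results)
        print(f"{player} session {session}: {rate:.0%} hunt success")

An upward trend in a player’s success rate is, of course, only one signal; as with any embedded measure, such inferences are strongest when checked against other evidence, such as the survey responses and discussion analyses described above.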

There is increasing interest among practitioners, researchers, and evaluators in documenting long-term learning from informal experiences. When evaluators are interested in finding out whether information was retained over time, they typically contact participants by phone or e-mail 1 to 4 months


