ing time). Although tracking studies have been conducted for nearly a century (Robinson, 1928; Melton, 1935), Serrell’s (1998) meta-analysis served to standardize some of the methods and definitions, including a “stop” (planting the feet and attending to an exhibit for at least 2-3 seconds), a “sweep rate” (the speed with which visitors move through a region of exhibits), and the “percentage of diligent visitors” (the percentage of visitors who stop at more than half of the elements). It also suggests benchmarks of success for various exhibit formats (dioramas, interactives, etc.).
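For illustration, these metrics can be computed directly from coded observation records. The following is a minimal sketch, not part of Serrell’s (1998) method: the record format, function names, and the per-visitor sweep-rate calculation are assumptions made for the example.

```python
# Hypothetical sketch of Serrell-style tracking-and-timing metrics.
# The data layout and function names are illustrative assumptions.

STOP_THRESHOLD_SECONDS = 2.0  # minimum attended time counted as a "stop"

def summarize_visitor(dwell_times, n_elements, gallery_sq_ft, total_minutes):
    """dwell_times: seconds the visitor attended to each element approached."""
    stops = sum(1 for t in dwell_times if t >= STOP_THRESHOLD_SECONDS)
    return {
        "stops": stops,
        # "Diligent" visitors stop at more than half of the exhibit elements.
        "diligent": stops > n_elements / 2,
        # Sweep rate: how quickly the visitor moves through the gallery space.
        "sweep_rate_sqft_per_min": gallery_sq_ft / total_minutes,
    }

def percentage_diligent(visitor_summaries):
    """Percentage of tracked visitors who qualified as diligent."""
    diligent = sum(1 for v in visitor_summaries if v["diligent"])
    return 100.0 * diligent / len(visitor_summaries)
```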

Some researchers have modified the traditional “timing and tracking” approach, creating an unobtrusive structured observation based on holistic measures. These measures recognize that although the amount of time spent in an exhibition is a good quantitative indicator of visitors’ use of a gallery space or exhibit element, it often poorly reflects the quality of their experience with an exhibition. Therefore, to complement quantitative measures, researchers have developed a ranking scale with which they can assess the quality of interactions that visitors have in various sections of an exhibition or at specific exhibit components (Leinhardt and Knutson, 2004). The scale takes time into account to some degree but does not rely on it alone.
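One way such a scale could be operationalized is sketched below. The levels, labels, and weighting are hypothetical illustrations for the example, not Leinhardt and Knutson’s (2004) actual instrument.

```python
# Hypothetical engagement rubric; the levels and weighting are assumptions
# for illustration, not Leinhardt and Knutson's (2004) published scale.
from dataclasses import dataclass

ENGAGEMENT_LEVELS = {
    1: "glance",          # looks at the component without stopping
    2: "stop and attend", # stops and reads or watches briefly
    3: "interact",        # manipulates the component or discusses it
    4: "sustained use",   # extended engagement, conversation, return visits
}

@dataclass
class ComponentObservation:
    component_id: str
    dwell_seconds: float
    engagement_level: int  # 1-4, coded by a trained observer

    def quality_weighted_time(self) -> float:
        """Dwell time alone understates quality; weight it by the coded level."""
        return self.dwell_seconds * self.engagement_level
```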

Participants’ submissions to websites, through comment cards, and even via visitor guest books provide evidence that learners are willing and able to participate in a dialogue with the institution or people who generated the learning resource. Feedback mechanisms have become well established in museums and are increasingly displayed openly rather than collected through a comment box or other means for staff to review privately. These methods have been supported by the development of technological systems for automatically caching and displaying a select number of visitor responses, as well as by wiki models of distributed editing. For example, the Association of Science-Technology Centers hosts ExhibitFiles, a community site for designers and developers to share their work; the Liberty Science Center has created Exhibit Commons, a website that invites people to submit contributions for display in the museum; and the Tech Museum of Innovation is using Second Life as an open source platform for exhibit design, with plans to replicate some of the best exhibits in its real-world museum.
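At its core, a system for automatically caching and displaying a limited number of visitor responses can be quite simple. The sketch below is a hypothetical illustration (the class name, size limit, and moderation hook are assumptions), not a description of any of the systems named above.

```python
# Hypothetical sketch of a bounded visitor-response display; the names and
# moderation hook are assumptions, not any named museum's actual system.
from collections import deque

class CommentDisplay:
    def __init__(self, max_displayed=20, moderate=None):
        # Oldest responses drop off automatically once the limit is reached.
        self._comments = deque(maxlen=max_displayed)
        self._moderate = moderate or (lambda text: True)

    def submit(self, visitor_name, text):
        """Cache a submission if it passes moderation, making it publicly visible."""
        if self._moderate(text):
            self._comments.append((visitor_name, text))

    def displayed(self):
        """Responses currently shown on the gallery display, newest last."""
        return list(self._comments)
```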

These means of collecting data may be useful for research as well as for institutional and practical reasons, so it is important to be clear about when they can appropriately be construed as evidence within a science learning framework. Showing up is important, and the scale of reach of informal learning institutions speaks to their capacity, but making claims about participation in science is not the same as making claims about how many people passed through a particular setting.

Issues of accessibility are important when assessing participation rates in informal environments. Participation may be reduced because activities or environments are inaccessible to some learners, physically or intellectually. Reich, Chin, and Kunz (2006) and Moussouri (2007) suggest ways to


